GitHub project: example-spark-scala-read-and-write-from-hdfs
Common part
sbt Dependencies
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.1" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.6.1" % "provided"
libraryDependencies += "com.databricks" %% "spark-csv" % "1.3.0"
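For reference, a minimal build.sbt around these dependencies could look like the sketch below; the project name, version, and Scala version are assumptions (Spark 1.6.1 is published for Scala 2.10 and 2.11).

// Hypothetical minimal build.sbt (name, version, and scalaVersion are assumptions)
name := "example-spark-scala-read-and-write-from-hdfs"
version := "1.0"
scalaVersion := "2.10.6" // Spark 1.6.1 is built for Scala 2.10/2.11
// ... the three libraryDependencies lines shown above go here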
assembly Dependency
// In build.sbt
import sbt.Keys._
assemblyOption in assembly := (assemblyOption in assembly).value.copy(includeScala = false)
// In project/assembly.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.1")
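Running sbt assembly then produces a fat jar that excludes both the Scala library (includeScala = false) and the provided Spark dependencies, since spark-submit supplies them at runtime. A hypothetical invocation, where the main class and jar name are assumptions:

sbt assembly
spark-submit --class com.example.Main target/scala-2.10/example-spark-scala-read-and-write-from-hdfs-assembly-1.0.jar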
HDFS URI
An HDFS URI has the following form: hdfs://namenodedns:port/user/hdfs/folder/file.csv
The default NameNode port is 8020.
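The write and read examples below build their paths from an hdfs_master prefix. A minimal definition, with a placeholder NameNode host:

// Placeholder NameNode host; replace with your own
val hdfs_master = "hdfs://namenodedns:8020/"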
Init SparkContext and SQLContext
// Imports needed for the initialization below
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val conf = new SparkConf().setAppName("example-spark-scala-read-and-write-from-hdfs")

// Creation of SparkContext and SQLContext
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
How to write a file to HDFS with Spark Scala?
Code example
// Defining a HelloWorld case class
case class HelloWorld(message: String)

// ====== Creating a dataframe with 1 partition
import sqlContext.implicits._ // needed for .toDF()
val df = Seq(HelloWorld("helloworld")).toDF().coalesce(1)

// ======= Writing files
// Writing Dataframe as parquet file
df.write.format("parquet").mode("overwrite").save(hdfs_master + "user/hdfs/wiki/testwiki")
// Writing Dataframe as csv file
df.write.format("com.databricks.spark.csv").mode("overwrite").save(hdfs_master + "user/hdfs/wiki/testwiki.csv")
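Note that Spark writes each output as a directory of part files, so testwiki.csv above is a directory rather than a single file; the coalesce(1) ensures it contains a single part file. The result can be checked with the HDFS CLI:

hdfs dfs -ls /user/hdfs/wiki/testwiki.csv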
How to read a file from HDFS with Spark Scala?
Code example
// ======= Reading files
// Reading parquet files into a Spark Dataframe
val df_parquet = sqlContext.read.parquet(hdfs_master + "user/hdfs/wiki/testwiki")

// Reading csv files into a Spark Dataframe
val df_csv = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("inferSchema", "true")
  .load(hdfs_master + "user/hdfs/wiki/testwiki.csv")
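A quick way to verify the loaded data is the standard DataFrame API, for example:

// Sanity check: print the schema and the first rows
df_parquet.show()
df_csv.printSchema()
df_csv.show(5)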