Hi, I have defined a custom schema as shown below (note that StructType takes a sequence of fields, hence the Array(...)):

    val customSchema = StructType(Array(
      StructField("year", IntegerType, true),
      StructField("make", StringType, true),
      StructField("model", StringType, true),
      StructField("comment", StringType, true),
      StructField("blank", StringType, true)))

Is there any way I can read the schema from a file instead of defining it in the Spark job itself? I am using spark-csv to read my data file:

    val df = sqlContext.read
      .format("com.databricks.spark.csv")
      .option("header", "true") // Use first line of all files as header
      .schema(customSchema)
      .load("cars.csv")

    val selectedData = df.select("year", "model")

    selectedData.write
      .format("com.databricks.spark.csv")
      .option("header", "true")
      .save("newcars.csv")