Yeah, this is exactly what I have been worried about with the recent changes
(discussed in https://issues.apache.org/jira/browse/SPARK-24924).
See https://github.com/apache/spark/pull/17916. This should be fine in
later Spark versions.
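For anyone hitting this on an affected release in the meantime, the error usually means both the external Databricks CSV package and Spark's built-in CSV source are on the classpath; since Spark 2.0 the built-in source handles CSV, so one workaround is to drop the external dependency from the build. A minimal sbt sketch (the coordinates and versions below are illustrative, not from this thread):

```scala
// build.sbt sketch: keep only Spark's built-in CSV source on the classpath.
// Remove the old external CSV package, which is redundant on Spark 2.x --
// i.e. delete a line like:
//
//   libraryDependencies += "com.databricks" %% "spark-csv" % "1.5.0"
//
// and keep only the core Spark SQL dependency:
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.3.1" % "provided"
```

With only one CSV source registered, the plain `.csv("path")` call should resolve without needing a fully qualified class name.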

FYI, +Wenchen and Dongjoon.
I want to add Thomas Graves and Gengliang Wang too but can't find their
email addresses.

On Fri, Aug 31, 2018 at 11:52 AM, Srabasti Banerjee
<srabast...@ymail.com.invalid> wrote:

> Hi,
>
> I am trying to run the code below to read a file as a DataFrame on a
> stream (for Spark Streaming). It was developed in the Eclipse IDE with
> schemas defined appropriately, and I get the error below when running
> the thin jar on the server. I have tried suggestions found online for
> similar "spark.read.option.schema.csv" errors, with no success.
>
> I am thinking this could be a bug, as the changes might not have been
> made for the readStream option. Has anybody encountered a similar issue
> with Spark Streaming?
>
> Looking forward to hearing your response(s)!
>
> Thanks
> Srabasti Banerjee
>
> Error:
> Exception in thread "main" java.lang.RuntimeException: Multiple sources
> found for csv (com.databricks.spark.csv.DefaultSource15,
> org.apache.spark.sql.execution.datasources.csv.CSVFileFormat), please
> specify the fully qualified class name.
>
> Code:
> val csvdf = spark.readStream.option("sep",
> ",").schema(userSchema).csv("server_path") // does not resolve error
> val csvdf = spark.readStream.option("sep",
> ",").schema(userSchema).format("com.databricks.spark.csv").csv("server_path")
> // does not resolve error
> val csvdf = spark.readStream.option("sep",
> ",").schema(userSchema).format("org.apache.spark.sql.execution.datasources.csv").csv("server_path")
> // does not resolve error
> val csvdf = spark.readStream.option("sep",
> ",").schema(userSchema).format("org.apache.spark.sql.execution.datasources.csv.CSVFileFormat").csv("server_path")
> // does not resolve error
> val csvdf = spark.readStream.option("sep",
> ",").schema(userSchema).format("com.databricks.spark.csv.DefaultSource15").csv("server_path")
> // does not resolve error
>