I'm using Spark 2.0.0 to do SQL analysis over Parquet files. Whenever I call
`read().parquet("path")` or `write().parquet("path")` in Java (I followed the
example Java file in the Spark source code exactly), I get:

*Exception in thread "main" java.lang.RuntimeException: Multiple sources
found for parquet
(org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat,
org.apache.spark.sql.execution.datasources.parquet.DefaultSource), please
specify the fully qualified class name.*
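
For reference, this is roughly the code I'm running (the app name and paths
are just placeholders for my real ones):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ParquetRepro {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("ParquetRepro")
        .getOrCreate();

    // Read an existing Parquet file -- this is where the exception is thrown
    Dataset<Row> df = spark.read().parquet("/tmp/input.parquet");

    // Writing fails the same way
    df.write().parquet("/tmp/output.parquet");

    spark.stop();
  }
}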

Any idea why?

Thanks.

Best,
Jelly
