[ https://issues.apache.org/jira/browse/SPARK-19919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Wenchen Fan resolved SPARK-19919.
---------------------------------
    Resolution: Fixed
 Fix Version/s: 2.2.0

Issue resolved by pull request 17256
[https://github.com/apache/spark/pull/17256]

> Defer input path validation into DataSource in CSV datasource
> -------------------------------------------------------------
>
>                 Key: SPARK-19919
>                 URL: https://issues.apache.org/jira/browse/SPARK-19919
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Hyukjin Kwon
>            Priority: Trivial
>             Fix For: 2.2.0
>
> Currently, when the other datasources fail to infer the schema, they return
> {{None}}, which is then validated in {{DataSource}} as below:
> {code}
> scala> spark.read.json("emptydir")
> org.apache.spark.sql.AnalysisException: Unable to infer schema for JSON. It
> must be specified manually.;
> {code}
> {code}
> scala> spark.read.orc("emptydir")
> org.apache.spark.sql.AnalysisException: Unable to infer schema for ORC. It
> must be specified manually.;
> {code}
> {code}
> scala> spark.read.parquet("emptydir")
> org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet.
> It must be specified manually.;
> {code}
> However, CSV performs this check inside the datasource implementation and
> throws a different exception message, as below:
> {code}
> scala> spark.read.csv("emptydir")
> java.lang.IllegalArgumentException: requirement failed: Cannot infer schema
> from an empty set of files
> {code}
> We could remove this duplicated check and validate it in one place, in the
> same way and with the same message.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
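
The consolidation the issue describes can be sketched outside Spark. This is a minimal, hedged illustration of the pattern only: the names below ({{SimpleFormat}}, {{CsvLikeFormat}}, {{Resolver.resolveSchema}}, and the local {{AnalysisException}}) are hypothetical stand-ins, not Spark's actual classes. The point is that each format reports an uninferable schema as {{None}} and a single resolver owns the check and the error message:

```scala
// Sketch (not Spark code): schema inference returns Option[_]; the one
// shared validation point lives in the resolver, mirroring how
// DataSource raises AnalysisException for JSON/ORC/Parquet above.
case class AnalysisException(message: String) extends Exception(message)

trait SimpleFormat {
  def shortName: String
  // Returns None when the schema cannot be inferred (e.g. an empty dir),
  // instead of throwing a format-specific exception.
  def inferSchema(files: Seq[String]): Option[String]
}

object CsvLikeFormat extends SimpleFormat {
  val shortName = "CSV"
  // No per-format require(...) here: an empty file set is just None.
  def inferSchema(files: Seq[String]): Option[String] =
    if (files.isEmpty) None else Some("value STRING")
}

object Resolver {
  // The single, shared check: every format produces the same message.
  def resolveSchema(format: SimpleFormat, files: Seq[String]): String =
    format.inferSchema(files).getOrElse {
      throw AnalysisException(
        s"Unable to infer schema for ${format.shortName}. " +
          "It must be specified manually.;")
    }
}
```

With this shape, {{Resolver.resolveSchema(CsvLikeFormat, Seq.empty)}} fails with the same "Unable to infer schema for CSV" wording as the other formats, rather than CSV's previous {{IllegalArgumentException}}.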