[ https://issues.apache.org/jira/browse/SPARK-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277979#comment-15277979 ]
Hyukjin Kwon commented on SPARK-15245:
--------------------------------------

Sorry for commenting again and again, but I think this JIRA might not need to be closed (my PR was closed, though), because there is a somewhat hidden {{basePath}} option for reading partitioned tables with datasources, [here|https://github.com/apache/spark/blob/f7b7ef41662d7d02fc4f834f3c6c4ee8802e949c/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileCatalog.scala#L170-L173], and the {{stream()}} API overwrites it, [here|https://github.com/apache/spark/blob/f7b7ef41662d7d02fc4f834f3c6c4ee8802e949c/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala#L183]. So I feel the message should be corrected in any case.

> stream API throws an exception with an incorrect message when the path is not
> a directory
> -----------------------------------------------------------------------------------------
>
>                 Key: SPARK-15245
>                 URL: https://issues.apache.org/jira/browse/SPARK-15245
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Hyukjin Kwon
>            Priority: Trivial
>
> {code}
> val path = "tmp.csv" // This is not a directory
> val cars = spark.read
>   .format("csv")
>   .stream(path)
>   .write
>   .option("checkpointLocation", "streaming.metadata")
>   .startStream("tmp")
> {code}
> This throws an exception as below.
> {code}
> java.lang.IllegalArgumentException: Option 'basePath' must be a directory
>   at org.apache.spark.sql.execution.datasources.PartitioningAwareFileCatalog.basePaths(PartitioningAwareFileCatalog.scala:180)
>   at org.apache.spark.sql.execution.datasources.PartitioningAwareFileCatalog.inferPartitioning(PartitioningAwareFileCatalog.scala:117)
>   at org.apache.spark.sql.execution.datasources.ListingFileCatalog.partitionSpec(ListingFileCatalog.scala:54)
>   at org.apache.spark.sql.execution.datasources.PartitioningAwareFileCatalog.allFiles(PartitioningAwareFileCatalog.scala:65)
>   at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:350)
>   at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$dataFrameBuilder$1(DataSource.scala:197)
>   at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$createSource$1.apply(DataSource.scala:201)
>   at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$createSource$1.apply(DataSource.scala:201)
>   at org.apache.spark.sql.execution.streaming.FileStreamSource.getBatch(FileStreamSource.scala:101)
>   at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$5.apply(StreamExecution.scala:313)
>   at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$5.apply(StreamExecution.scala:310)
>   at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
> {code}
> It seems {{path}} is set as {{basePath}} in {{DataSource}}. It would be great if this produced a clearer message.
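For reference, here is a minimal sketch of how the {{basePath}} option is normally used in a batch read of a partitioned directory; the directory layout and paths below are hypothetical and only illustrate the option that {{stream()}} ends up overwriting with the input path.

{code}
// Hypothetical partitioned layout:
//   /data/cars/year=2015/part-00000.csv
//   /data/cars/year=2016/part-00000.csv
// basePath tells partition discovery where the table root is, so `year`
// is still inferred as a partition column even when only a subdirectory
// is loaded.
val df = spark.read
  .format("csv")
  .option("basePath", "/data/cars")
  .load("/data/cars/year=2016")
{code}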