[ 
https://issues.apache.org/jira/browse/SPARK-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277697#comment-15277697
 ] 

Sean Owen commented on SPARK-15245:
-----------------------------------

The message seems correct. I agree the path could be checked earlier, though 
it's not clear whether that is worth the extra code and extra calls, since this 
is a rare failure mode that is already handled quickly and correctly. It 
doesn't affect normal usage.
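To illustrate what an earlier check might look like, here is a minimal sketch. The helper name `validateStreamPath` is hypothetical and not part of the actual Spark codebase; a real fix would live inside {{DataSource}} where the streaming path is resolved.

```scala
import java.nio.file.{Files, Paths}

// Hypothetical early validation (not the actual Spark API): fail fast with a
// message that names the stream source path, rather than letting the failure
// surface later as the misleading "Option 'basePath' must be a directory".
def validateStreamPath(path: String): Unit = {
  val p = Paths.get(path)
  require(Files.isDirectory(p),
    s"The path '$path' given to the file stream source must be a directory, not a file")
}
```

The trade-off discussed above is exactly this: the check adds a filesystem call per source just to improve a message on a rare error path.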

> stream API throws an exception with an incorrect message when the path is not 
> a directory
> -----------------------------------------------------------------------------------------
>
>                 Key: SPARK-15245
>                 URL: https://issues.apache.org/jira/browse/SPARK-15245
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Hyukjin Kwon
>            Priority: Trivial
>
> {code}
> val path = "tmp.csv" // This is not a directory
> val cars = spark.read
>   .format("csv")
>   .stream(path)
>   .write
>   .option("checkpointLocation", "streaming.metadata")
>   .startStream("tmp")
> {code}
> This throws an exception as below.
> {code}
> java.lang.IllegalArgumentException: Option 'basePath' must be a directory
>       at 
> org.apache.spark.sql.execution.datasources.PartitioningAwareFileCatalog.basePaths(PartitioningAwareFileCatalog.scala:180)
>       at 
> org.apache.spark.sql.execution.datasources.PartitioningAwareFileCatalog.inferPartitioning(PartitioningAwareFileCatalog.scala:117)
>       at 
> org.apache.spark.sql.execution.datasources.ListingFileCatalog.partitionSpec(ListingFileCatalog.scala:54)
>       at 
> org.apache.spark.sql.execution.datasources.PartitioningAwareFileCatalog.allFiles(PartitioningAwareFileCatalog.scala:65)
>       at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:350)
>       at 
> org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$dataFrameBuilder$1(DataSource.scala:197)
>       at 
> org.apache.spark.sql.execution.datasources.DataSource$$anonfun$createSource$1.apply(DataSource.scala:201)
>       at 
> org.apache.spark.sql.execution.datasources.DataSource$$anonfun$createSource$1.apply(DataSource.scala:201)
>       at 
> org.apache.spark.sql.execution.streaming.FileStreamSource.getBatch(FileStreamSource.scala:101)
>       at 
> org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$5.apply(StreamExecution.scala:313)
>       at 
> org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$5.apply(StreamExecution.scala:310)
>       at 
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
> {code}
> It seems {{path}} is set to {{basePath}} in {{DataSource}}. It would be 
> great if this produced a better error message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
