[ https://issues.apache.org/jira/browse/SPARK-16408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
zenglinxi updated SPARK-16408:
------------------------------
    Description: 
When using Spark SQL to execute SQL like:
{noformat}
add file hdfs://xxx/user/test;
{noformat}
if the HDFS path (hdfs://xxx/user/test) is a directory, we get an exception like:
{noformat}
org.apache.spark.SparkException: Added file hdfs://xxx/user/test is a directory and recursive is not turned on.
    at org.apache.spark.SparkContext.addFile(SparkContext.scala:1372)
    at org.apache.spark.SparkContext.addFile(SparkContext.scala:1340)
    at org.apache.spark.sql.hive.execution.AddFile.run(commands.scala:117)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
{noformat}

  was:
When using Spark SQL to execute SQL like:
{quote}
add file hdfs://xxx/user/test;
{quote}
if the HDFS path (hdfs://xxx/user/test) is a directory, we get an exception like:
{quote}
org.apache.spark.SparkException: Added file hdfs://xxx/user/test is a directory and recursive is not turned on.
    at org.apache.spark.SparkContext.addFile(SparkContext.scala:1372)
    at org.apache.spark.SparkContext.addFile(SparkContext.scala:1340)
    at org.apache.spark.sql.hive.execution.AddFile.run(commands.scala:117)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
{quote}


> SparkSQL Added file get Exception: is a directory and recursive is not turned on
> ---------------------------------------------------------------------------------
>
>                 Key: SPARK-16408
>                 URL: https://issues.apache.org/jira/browse/SPARK-16408
>             Project: Spark
>          Issue Type: Task
>          Components: SQL
>    Affects Versions: 1.6.2
>            Reporter: zenglinxi
>
> When using Spark SQL to execute SQL like:
> {noformat}
> add file hdfs://xxx/user/test;
> {noformat}
> if the HDFS path (hdfs://xxx/user/test) is a directory, we get an exception like:
> {noformat}
> org.apache.spark.SparkException: Added file hdfs://xxx/user/test is a directory and recursive is not turned on.
>     at org.apache.spark.SparkContext.addFile(SparkContext.scala:1372)
>     at org.apache.spark.SparkContext.addFile(SparkContext.scala:1340)
>     at org.apache.spark.sql.hive.execution.AddFile.run(commands.scala:117)
>     at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
>     at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
>     at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
> {noformat}
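For context, a minimal Scala sketch of the API involved (an assumption based on the stack trace, not the reporter's code): the one-argument SparkContext.addFile appears to delegate to the overload that takes a recursive flag with recursive = false, which is why adding a directory fails. Calling the overload directly with recursive = true is one possible workaround when distributing a directory programmatically; the HDFS path below is the placeholder path from the report.
{noformat}
import org.apache.spark.{SparkConf, SparkContext}

object AddDirectorySketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("AddDirectorySketch")
    val sc = new SparkContext(conf)

    // Equivalent to what the SQL "add file" command does: recursive defaults to
    // false, so this would throw SparkException for a directory.
    // sc.addFile("hdfs://xxx/user/test")

    // Explicitly turn recursion on so the directory's contents are distributed.
    sc.addFile("hdfs://xxx/user/test", recursive = true)

    sc.stop()
  }
}
{noformat}
This only sketches the SparkContext API; the SQL-level "add file" command in 1.6.2 does not appear to expose the recursive flag, which is the gap this issue describes.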