[ https://issues.apache.org/jira/browse/SPARK-20370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993405#comment-15993405 ]
Steve Loughran commented on SPARK-20370:
----------------------------------------

Is this the bit under the PR tagged "!! HACK ALERT !!", by any chance? If so, it seems to have gone in as a Hive metastore workaround. I wonder whether there is, or could be, a solution in Hive-land.

> create external table on read only location fails
> -------------------------------------------------
>
>                 Key: SPARK-20370
>                 URL: https://issues.apache.org/jira/browse/SPARK-20370
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0, 2.1.0
>        Environment: EMR 5.4.0, hadoop 2.7.3, spark 2.1.0
>            Reporter: Gaurav Shah
>           Priority: Minor
>
> Creating an external table via the following fails:
>
>   sqlContext.createExternalTable(
>     "table_name",
>     "org.apache.spark.sql.parquet",
>     inputSchema,
>     Map(
>       "path" -> "s3a://bucket-name/folder",
>       "mergeSchema" -> "false"
>     )
>   )
>
> In the following commit, Spark tries to check whether it has write access to the given location; that check fails on a read-only location, and so the table metadata creation fails:
> https://github.com/apache/spark/pull/13270/files
> The same table-creation script works even when the cluster has only read access in Spark 1.6, but fails in Spark 2.0.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
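For illustration only: the failure mode described above comes down to a write-access probe run against a location that only needs to be readable. Below is a minimal, self-contained JVM sketch (plain Java, not the actual Spark or Hive code; the `canWrite` helper and the paths are hypothetical) of the create-then-delete probe pattern, showing why it rejects a location that cannot be written to even when reads would succeed:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WriteProbe {
    // Illustrative probe: test write access by creating and then deleting
    // a marker file. A check of this shape fails on any location the
    // process cannot write to, regardless of read access.
    static boolean canWrite(Path dir) {
        Path marker = dir.resolve(".write_probe");
        try {
            Files.createFile(marker);
            Files.delete(marker);
            return true;
        } catch (IOException e) {
            // Unwritable or inaccessible location: the probe fails, and a
            // caller that gates table creation on it aborts here, even
            // though an external table only needs read access to its data.
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        Path writable = Files.createTempDirectory("probe");
        System.out.println(canWrite(writable));

        // Hypothetical unwritable location (the path does not exist here,
        // standing in for a read-only s3a:// bucket).
        Path unwritable = Paths.get("/no/such/bucket/folder");
        System.out.println(canWrite(unwritable));
    }
}
```

Under this sketch's assumptions, a read-only path fails the probe and table creation is rejected, which matches the reported regression from Spark 1.6 to 2.0.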