Github user squito commented on the issue:
https://github.com/apache/spark/pull/19343
OK, closing this and the JIRA
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19343
Thanks! Maybe we can close it now and revisit it when we have a better way
to resolve the file system specific issues?
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/19343
I don't see much point in putting this in the docs ... it seems too
fine-grained a detail to be useful there. I just don't see users who
encounter this exception going to look at that spot in the docs.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19343
@squito Thank you!
Instead of changing the source code, could we just update the documentation at
https://spark.apache.org/docs/2.2.0/sql-programming-guide.html#hive-tables ?
This might be
Github user squito commented on the issue:
https://github.com/apache/spark/pull/19343
whoops, sorry I wrote [CORE] out of habit!
> Spark SQL might not be deployed on an HDFS system. Conceptually, this
HDFS-specific code should not be part of our HiveExternalCatalog.