This sounds like a problem that was fixed in Spark 1.3.1.

https://issues.apache.org/jira/browse/SPARK-6351
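
If upgrading right away isn't an option, one thing that might be worth trying (just a guess on my part, not verified against an S3-backed table) is forcing Spark SQL to read the table through the Hive SerDe instead of its native Parquet path, since the trace below points at ParquetRelation2's path qualification. In the spark-sql shell that would be "SET spark.sql.hive.convertMetastoreParquet=false;"; programmatically it would look roughly like this:

// assumes an existing SparkContext `sc`, e.g. in the spark-shell
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)
// fall back to the Hive SerDe rather than Spark's ParquetRelation2
hiveContext.sql("SET spark.sql.hive.convertMetastoreParquet=false")
hiveContext.sql("select count(*) from api_search where pdate='2015-05-08'").collect().foreach(println)

Upgrading to 1.3.1 is still the cleaner fix if that JIRA really is the cause.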

On Mon, Jun 1, 2015 at 5:44 PM, Akhil Das <ak...@sigmoidanalytics.com>
wrote:

> This thread
> <http://stackoverflow.com/questions/24048729/how-to-read-input-from-s3-in-a-spark-streaming-ec2-cluster-application>
> covers various methods of accessing S3 from Spark; it might help you.
>
> Thanks
> Best Regards
>
> On Sun, May 24, 2015 at 8:03 AM, ogoh <oke...@gmail.com> wrote:
>
>>
>> Hello,
>> I am using Spark 1.3 on AWS.
>> SparkSQL can't recognize a Hive external table on S3.
>> The following is the error message.
>> I appreciate any help.
>> Thanks,
>> Okehee
>> ------
>> 15/05/24 01:02:18 ERROR thriftserver.SparkSQLDriver: Failed in [select count(*) from api_search where pdate='2015-05-08']
>> java.lang.IllegalArgumentException: Wrong FS: s3://test-emr/datawarehouse/api_s3_perf/api_search/pdate=2015-05-08/phour=00, expected: hdfs://10.128.193.211:9000
>>         at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:647)
>>         at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:467)
>>         at org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache$$anonfun$6.apply(newParquet.scala:252)
>>         at org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache$$anonfun$6.apply(newParquet.scala:251)
>>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/SparkSQL-can-t-read-S3-path-for-hive-external-table-tp23002.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>>
>
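
For what it's worth, one of the usual ways to point Spark at S3 (and I believe the Stack Overflow thread linked above covers it among other options) is to set the credentials on the Hadoop configuration before reading. A rough sketch, with placeholder keys and bucket:

// placeholders -- substitute real credentials and a real s3n:// path
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY")

val lines = sc.textFile("s3n://your-bucket/path/to/data")
println(lines.count())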
