Re: Shark/Spark running on EC2 can read from S3 bucket but cannot write to it - "Wrong FS"
I am running Spark 0.9.1 and Shark 0.9.1. Sorry I didn't include that.

On Thu, Jul 31, 2014 at 9:50 AM, William Cox wrote:

> *The Shark-specific group appears to be in moderation pause, so I'm asking here.*
>
> I'm running Shark/Spark on EC2. I am using Shark to query data from an S3 bucket and then write the results back to an S3 bucket. The data is read fine, but when I write I get an error:
>
>     14/07/31 16:42:30 INFO scheduler.TaskSetManager: Loss was due to
>     java.lang.IllegalArgumentException: Wrong FS:
>     s3n://id:key@shadoop/tmp/hive-root/hive_2014-07-31_16-39-29_825_6436105804053790400/_tmp.-ext-1,
>     expected: hdfs://ecmachine.compute-1.amazonaws.com:9000 [duplicate 3]
>
> Is there some setting that I can change to allow it to write to an S3 filesystem? I've tried all sorts of different queries to write to S3. This particular one was:
>
>     INSERT OVERWRITE DIRECTORY 's3n://id:key@shadoop/bucket' SELECT * FROM table;
>
> Thanks for your help!
>
> -William
Shark/Spark running on EC2 can read from S3 bucket but cannot write to it - "Wrong FS"
*The Shark-specific group appears to be in moderation pause, so I'm asking here.*

I'm running Shark/Spark on EC2. I am using Shark to query data from an S3 bucket and then write the results back to an S3 bucket. The data is read fine, but when I write I get an error:

    14/07/31 16:42:30 INFO scheduler.TaskSetManager: Loss was due to
    java.lang.IllegalArgumentException: Wrong FS:
    s3n://id:key@shadoop/tmp/hive-root/hive_2014-07-31_16-39-29_825_6436105804053790400/_tmp.-ext-1,
    expected: hdfs://ecmachine.compute-1.amazonaws.com:9000 [duplicate 3]

Is there some setting that I can change to allow it to write to an S3 filesystem? I've tried all sorts of different queries to write to S3. This particular one was:

    INSERT OVERWRITE DIRECTORY 's3n://id:key@shadoop/bucket' SELECT * FROM table;

Thanks for your help!

-William
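For anyone hitting the same error: the trace shows a temporary output path (_tmp.-ext-1) on s3n being validated against the cluster's default filesystem (hdfs://...), which is what raises the "Wrong FS" IllegalArgumentException. One workaround that sidesteps the cross-filesystem move is to write the results to HDFS first and then copy them out to S3. A minimal sketch, assuming /tmp/shark-out is a free HDFS path (that path is hypothetical; the s3n URI and id:key credentials are the placeholders from the query above):

    -- In Shark: write the query results to a directory on HDFS,
    -- the filesystem the error message says it expects
    INSERT OVERWRITE DIRECTORY '/tmp/shark-out' SELECT * FROM table;

Then, from a shell on the master:

    # Copy the HDFS output into the S3 bucket with Hadoop's distcp
    hadoop distcp /tmp/shark-out 's3n://id:key@shadoop/bucket'

Whether a pure-Hive fix (for instance, pointing hive.exec.scratchdir at the same s3n filesystem) works on the Hive version bundled with Shark 0.9.1 is an open question; the two-step copy avoids the cross-filesystem rename entirely.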