S3A uses Amazon's own libraries; it's tested against Frankfurt too.

You have to view S3A support in Hadoop 2.6 as a beta release: it works, with some
issues. Hadoop 2.7.0+ has it all working now, though you are left with the task of
getting the hadoop-aws and Amazon SDK JARs onto your classpath via the --jars
option, as they aren't in the spark-assembly JAR.
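As a rough sketch, the submit line would look something like this (the JAR paths and version numbers here are assumptions; use whichever hadoop-aws and aws-java-sdk JARs ship with your Hadoop build):

```shell
# Hypothetical paths/versions -- substitute the JARs from your Hadoop distribution
spark-submit \
  --jars /path/to/hadoop-aws-2.7.0.jar,/path/to/aws-java-sdk-1.7.4.jar \
  --class com.example.MyApp \
  my-app.jar
```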


On 1 Jul 2015, at 04:46, Aaron Davidson 
<ilike...@gmail.com<mailto:ilike...@gmail.com>> wrote:

Should be able to use s3a (on new Hadoop versions); I believe it will use v4
signing, or at least has a setting for it.
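For what it's worth, a minimal S3A setup for a v4-only region such as Frankfurt would point the filesystem at the region-specific endpoint (the endpoint value below is an assumption for eu-central-1; check it against your region):

```scala
val hadoopConf = sc.hadoopConfiguration
hadoopConf.set("fs.s3a.access.key", "key")
hadoopConf.set("fs.s3a.secret.key", "secret")
// v4-only regions reject the default endpoint; name the region's one explicitly
hadoopConf.set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")
```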

On Tue, Jun 30, 2015 at 8:31 PM, Exie 
<tfind...@prodevelop.com.au<mailto:tfind...@prodevelop.com.au>> wrote:
Not sure if this helps, but the options I set are slightly different:

// Hand the s3n connector its credentials via the Hadoop configuration
val hadoopConf = sc.hadoopConfiguration
hadoopConf.set("fs.s3n.awsAccessKeyId", "key")
hadoopConf.set("fs.s3n.awsSecretAccessKey", "secret")

Try setting them to s3n as opposed to just s3
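Once the keys are set, the read needs to use the matching scheme (bucket and path here are placeholders):

```scala
// s3n:// must match the fs.s3n.* properties set above
val rdd = sc.textFile("s3n://my-bucket/path/to/data.txt")
```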

Good luck!



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/s3-bucket-access-read-file-tp23536p23560.html
Sent from the Apache Spark User List mailing list archive at 
Nabble.com<http://Nabble.com>.



