Hmm, do I always need to have that in my driver program? Why can't I set it
somewhere such that the Spark cluster realizes that it needs to use S3?
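One way, as a sketch: Spark forwards any property prefixed with spark.hadoop. into the Hadoop configuration, so the same settings from the snippet below could live in conf/spark-defaults.conf on the cluster instead of in the driver code (key names here mirror that snippet; the placeholder values are yours to fill in):

```
# conf/spark-defaults.conf -- sketch; Spark copies spark.hadoop.* into the Hadoop conf
spark.hadoop.fs.s3.impl                  org.apache.hadoop.fs.s3native.NativeS3FileSystem
spark.hadoop.fs.s3.awsAccessKeyId        <your-access-key>
spark.hadoop.fs.s3.awsSecretAccessKey    <your-secret-key>
```

With these in place, sc.textFile("s3://...") should pick up the credentials without any hadoopConf.set calls in the application.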

On Fri, Aug 26, 2016 5:13 AM, Devi P.V devip2...@gmail.com wrote:
The following piece of code works for me to read data from S3 using Spark.

val conf = new SparkConf().setAppName("Simple Application").setMaster("local[*]")
val sc = new SparkContext(conf)
val hadoopConf = sc.hadoopConfiguration
hadoopConf.set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
hadoopConf.set("fs.s3.awsAccessKeyId", AccessKey)
hadoopConf.set("fs.s3.awsSecretAccessKey", SecretKey)
val jobInput = sc.textFile("s3://path to bucket")

Thanks


On Fri, Aug 26, 2016 at 5:16 PM, kant kodali <kanth...@gmail.com> wrote:
Hi guys,
Are there any instructions on how to setup spark with S3 on AWS?
Thanks!