Hmm, do I always need to have that in my driver program? Why can't I set it
somewhere so that the Spark cluster realizes it needs to use S3?
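For reference, the kind of cluster-wide setting I have in mind (a sketch, assuming the spark.hadoop.* prefix in conf/spark-defaults.conf, which Spark copies into the Hadoop configuration; the key values here are placeholders):

# conf/spark-defaults.conf -- set once per cluster, picked up by every application
spark.hadoop.fs.s3.impl                  org.apache.hadoop.fs.s3native.NativeS3FileSystem
spark.hadoop.fs.s3.awsAccessKeyId        YOUR_ACCESS_KEY
spark.hadoop.fs.s3.awsSecretAccessKey    YOUR_SECRET_KEY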
On Fri, Aug 26, 2016 5:13 AM, Devi P.V devip2...@gmail.com wrote:
The following piece of code works for me to read data from S3 using Spark.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("Simple Application").setMaster("local[*]")
val sc = new SparkContext(conf)
val hadoopConf = sc.hadoopConfiguration
// Map s3:// URIs to the Hadoop native S3 filesystem
hadoopConf.set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
Hi guys,
Are there any instructions on how to set up Spark with S3 on AWS?
Thanks!