The s3a options are covered in the hadoop-aws documentation (https://hadoop.apache.org/docs/stable/hadoop-aws/index.html); you can set them in your spark context by prefixing them with spark.hadoop.

you can also set the env vars AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY; SparkEnv will pick these up and set the relevant spark context keys for you
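A minimal Scala sketch of the spark.hadoop route, assuming a 2016-era SparkConf/SparkContext setup; the fs.s3a.* option names are from the hadoop-aws docs, while the credentials, bucket, and path here are hypothetical placeholders:

import org.apache.spark.{SparkConf, SparkContext}

val key = "YOUR_ACCESS_KEY"    // placeholder credentials
val seckey = "YOUR_SECRET_KEY" // placeholder credentials

// any fs.s3a.* option from the hadoop-aws docs can be passed through by
// prefixing it with "spark.hadoop."; Spark copies these into the Hadoop
// Configuration that the S3A filesystem client reads
val conf = new SparkConf()
  .setAppName("s3a-read")
  .setMaster("local[*]") // local run for illustration
  .set("spark.hadoop.fs.s3a.access.key", key)
  .set("spark.hadoop.fs.s3a.secret.key", seckey)

val sc = new SparkContext(conf)
val rdd = sc.textFile("s3a://mybucket/some/path.txt") // hypothetical bucket/path
println(rdd.count())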
On 3 May 2016, at 01:53, Zhang, Jingyu <jingyu.zh...@news.com.au> wrote:
Hi All,

I am using Eclipse with Maven for developing Spark applications. I got an error reading from S3 in Scala, but it works fine in Java when I run them in the same project in Eclipse. The Scala/Java code and the error follow:

Scala
val uri = URI.create("s3a://" + key + ":" + seckey + "@" + "graphcluster
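For comparison, a runnable sketch of the inline-credentials form this snippet uses; the bucket and object key are hypothetical, since the original line is truncated at "graphcluster", and a secret containing characters such as "/" or "+" would need URL-encoding before being embedded in a URI:

import java.net.URI
import org.apache.spark.{SparkConf, SparkContext}

val key = "YOUR_ACCESS_KEY"    // placeholder credentials
val seckey = "YOUR_SECRET_KEY" // placeholder credentials

// hypothetical bucket and object key; embedding credentials in the URI
// works with s3a but exposes them in logs and the Spark UI, so the
// spark.hadoop.fs.s3a.* route above is generally safer
val uri = URI.create("s3a://" + key + ":" + seckey + "@" + "graphcluster/data/input.txt")

val sc = new SparkContext(new SparkConf().setAppName("s3a-uri-read").setMaster("local[*]"))
println(sc.textFile(uri.toString).count())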