Hi,

It seems to me that checkpointing is not persisting the SparkContext's
Hadoop configuration correctly. Could this be a possibility?
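
A workaround that might be worth trying (just an untested sketch; the
checkpoint path, app name, and the AWS_* environment variables below are
placeholders of mine, not from the failing job): carry the keys as
"spark.hadoop.*" entries on the SparkConf, which Spark copies into the
Hadoop configuration, and also pass a Hadoop Configuration with the same
keys to StreamingContext.getOrCreate so the checkpoint data itself can be
read back from S3:

import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Placeholder path; point this at the real checkpoint directory.
val checkpointDir = "s3n://bucketfoo/checkpoints"

def createContext(): StreamingContext = {
  // "spark.hadoop.*" entries are copied by Spark into the Hadoop
  // configuration, so the keys travel with the SparkConf instead of
  // depending on the hadoopConfiguration surviving the checkpoint.
  val conf = new SparkConf()
    .setAppName("RecoverableApp") // placeholder name
    .set("spark.hadoop.fs.s3n.awsAccessKeyId", sys.env("AWS_ACCESS_KEY_ID"))
    .set("spark.hadoop.fs.s3n.awsSecretAccessKey", sys.env("AWS_SECRET_ACCESS_KEY"))
  val ssc = new StreamingContext(conf, Seconds(60))
  ssc.checkpoint(checkpointDir)
  // ... define the file stream over s3n://bucketfoo/filefoo here ...
  ssc
}

// A Hadoop Configuration with the keys is passed to getOrCreate so the
// checkpoint files themselves can be read from S3 on recovery.
val hadoopConf = new Configuration()
hadoopConf.set("fs.s3n.awsAccessKeyId", sys.env("AWS_ACCESS_KEY_ID"))
hadoopConf.set("fs.s3n.awsSecretAccessKey", sys.env("AWS_SECRET_ACCESS_KEY"))

val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _, hadoopConf)
ssc.start()
ssc.awaitTermination()

If that still fails on restore, putting the keys in core-site.xml on every
node should sidestep the checkpointed configuration entirely.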



Thanks,
Natu

On Mon, Jun 13, 2016 at 11:57 AM, Natu Lauchande <nlaucha...@gmail.com>
wrote:

> Hi,
>
> I am testing disaster recovery with Spark Streaming and am having some
> issues when trying to restore an input file from S3:
>
> 2016-06-13 11:42:52,420 [main] INFO
> org.apache.spark.streaming.dstream.FileInputDStream$FileInputDStreamCheckpointData
> - Restoring files for time 1465810860000 ms -
> [s3n://bucketfoo/filefoo/908966c353654a21bc7b2733d65b7c19_availability_1463900408888.csv]
> Exception in thread "main" java.lang.IllegalArgumentException: AWS Access
> Key ID and Secret Access Key must be specified as the username or password
> (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or
> fs.s3n.awsSecretAccessKey properties (respectively).
>
>
> I am basically following the pattern in
> https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/streaming/RecoverableNetworkWordCount.scala
> and have added the environment variables at stream creation.
>
> Can anyone on the list help me figure out why I am getting this error?
>
> Thanks,
> Natu
>
