Thanks, Akhil. I had high hopes for #2, but tried all and no luck.
I was looking at the source and found something interesting. The Stack
Trace (below) directs me to FileInputDStream.scala (line 141). This is
version 1.1.1, btw. Line 141 is inside:

private def fs: FileSystem = {
  if (fs_ == null) fs_ = directoryPath.getFileSystem(new Configuration())
  fs_
}

So it builds a fresh new Configuration() instead of using the one on the
SparkContext, which would explain why the keys I set there are ignored.
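To illustrate why that matters (a minimal sketch of my own, not from the thread): two Hadoop Configuration instances don't share programmatically set values, so credentials set on the SparkContext's Hadoop conf are invisible to a freshly constructed one. The fs.s3n.* property name below assumes the s3n:// scheme:

```scala
import org.apache.hadoop.conf.Configuration

object FreshConfDemo extends App {
  // What the application sets on its own conf (s3n property name assumed):
  val appConf = new Configuration()
  appConf.set("fs.s3n.awsAccessKeyId", "my-access-key")

  // What FileInputDStream builds internally in 1.1.1:
  val freshConf = new Configuration()

  // The fresh conf loads only the default resources, so the key is absent:
  println(freshConf.get("fs.s3n.awsAccessKeyId")) // prints null
}
```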
Try the following:
1. Set the access key and secret key in the sparkContext:
ssc.sparkContext.hadoopConfiguration.set(AWS_ACCESS_KEY_ID,yourAccessKey)
ssc.sparkContext.hadoopConfiguration.set(AWS_SECRET_ACCESS_KEY,yourSecretKey)
2. Set the access key and secret key in the environment before starting
your application, e.g. export AWS_ACCESS_KEY_ID=... and export
AWS_SECRET_ACCESS_KEY=...
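Spelled out for S3, suggestion 1 might look like the following (my own sketch: the fs.s3n.* property names, the local master, and the bucket path are assumptions, not from the thread; the keys are read from the environment):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object S3StreamSketch extends App {
  val sparkConf = new SparkConf().setAppName("s3-file-stream").setMaster("local[2]")
  val ssc = new StreamingContext(sparkConf, Seconds(30))

  // Suggestion 1: push the credentials into the Hadoop configuration
  // carried by the SparkContext (property names assume the s3n:// scheme):
  ssc.sparkContext.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", sys.env("AWS_ACCESS_KEY_ID"))
  ssc.sparkContext.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", sys.env("AWS_SECRET_ACCESS_KEY"))

  // Hypothetical bucket and prefix, for illustration only:
  val lines = ssc.textFileStream("s3n://my-bucket/incoming/")
  lines.print()

  ssc.start()
  ssc.awaitTermination()
}
```

Per the rest of the thread, this only takes effect on the file stream in 1.2+, since 1.1.1 ignored the configured conf.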
Subject: hadoopConfiguration for StreamingContext
To: ak...@sigmoidanalytics.com
CC: u...@spark.incubator.apache.org
Looks like the latest version 1.2.1 actually does use the configured hadoop
conf. I tested it out and that does resolve my problem.
thanks,
marc
On Tue, Feb 10, 2015 at 10:57 AM, Marc Limotte mslimo...@gmail.com wrote: