Hi Marcelo,

Thanks for the advice. I understand that we could set the configuration
before creating the SparkContext. My question is why
SparkContext.hadoopConfiguration.set("key", "value") doesn't seem to
propagate to all subsequent SQLContext jobs. Note that, as I mentioned, I
can load the parquet file but I cannot perform a count on it because of
the AmazonClientException. This suggests the credentials are picked up
while the parquet file is being loaded but not while it is being
processed. How can this happen?
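
For reference, this is roughly the pattern I am using (the app name,
bucket, and path are placeholders; AWSAccessKeyId and AWSSecretKey hold
my credentials):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("s3a-test"))

// credentials set only after the SparkContext already exists
sc.hadoopConfiguration.set("fs.s3a.access.key", AWSAccessKeyId)
sc.hadoopConfiguration.set("fs.s3a.secret.key", AWSSecretKey)

val sqlContext = new SQLContext(sc)

// loading the parquet file succeeds...
val df = sqlContext.read.parquet("s3a://my-bucket/path/to/table")

// ...but the action fails with AmazonClientException
df.count()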

Best Regards,

Jerry


On Tue, Oct 27, 2015 at 2:05 PM, Marcelo Vanzin <van...@cloudera.com> wrote:

> On Tue, Oct 27, 2015 at 10:43 AM, Jerry Lam <chiling...@gmail.com> wrote:
> > Has anyone experienced issues setting Hadoop configurations after the
> > SparkContext is initialized? I'm using Spark 1.5.1.
> >
> > I'm trying to use s3a, which requires the access and secret keys to be
> > set in the Hadoop configuration. I tried to set the properties in the
> > Hadoop configuration from the SparkContext:
> >
> > sc.hadoopConfiguration.set("fs.s3a.access.key", AWSAccessKeyId)
> > sc.hadoopConfiguration.set("fs.s3a.secret.key", AWSSecretKey)
>
> Try setting "spark.hadoop.fs.s3a.access.key" and
> "spark.hadoop.fs.s3a.secret.key" in your SparkConf before creating the
> SparkContext.
>
> --
> Marcelo
>
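
If I understand the suggestion correctly, the pre-SparkContext setup would
look roughly like this (just a sketch, I have not verified it yet; the app
name, bucket, and path are placeholders):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// credentials go into SparkConf *before* the SparkContext is created;
// the spark.hadoop.* prefix marks them for the Hadoop configuration
val conf = new SparkConf()
  .setAppName("s3a-test")
  .set("spark.hadoop.fs.s3a.access.key", AWSAccessKeyId)
  .set("spark.hadoop.fs.s3a.secret.key", AWSSecretKey)

val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)

val df = sqlContext.read.parquet("s3a://my-bucket/path/to/table")
df.count()

If I understand correctly, spark.hadoop.* entries are copied into the
Hadoop configuration when the SparkContext is constructed, so the keys
should be visible to the s3a filesystem on the executors as well.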
