To: Jerry Lam; Marcelo Vanzin
Cc: user@spark.apache.org
Subject: RE: [Spark-SQL]: Unable to propagate hadoop configuration after
SparkContext is initialized
After a quick glance, this seems to be a bug in Spark SQL. Do you mind creating a
JIRA for it? Then I can start working on a fix.
Thanks,
Hao
On Tue, Oct 27, 2015 at 10:43 AM, Jerry Lam wrote:
> Anyone experiences issues in setting hadoop configurations after
> SparkContext is initialized? I'm using Spark 1.5.1.
>
> I'm trying to use s3a which requires access and secret key set into hadoop
> configuration. I tried
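For reference, the before-initialization route uses Spark's `spark.hadoop.*` prefix, which is copied into the Hadoop configuration when the SparkContext is created. A sketch, assuming the standard hadoop-aws property names, with placeholder values:

```
# spark-defaults.conf (or the equivalent SparkConf.set calls)
spark.hadoop.fs.s3a.access.key   <your-access-key>
spark.hadoop.fs.s3a.secret.key   <your-secret-key>
```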
If setting the values in SparkConf works, there's probably some bug in
the SQL code; e.g. creating a new Configuration object instead of
using the one in SparkContext. But I'm not really familiar with that
code.
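The bug pattern Marcelo suspects can be illustrated without Spark at all: a component that *copies* the configuration when it is constructed will never see keys set on the original afterwards. A minimal sketch in plain Python (class names are hypothetical, not Spark's):

```python
class SnapshottingContext:
    """Buggy pattern: snapshots the config at construction time,
    analogous to creating a new Configuration object from the old one."""
    def __init__(self, conf):
        self._conf = dict(conf)   # defensive copy: later updates are invisible

    def get(self, key):
        return self._conf.get(key)


class SharingContext:
    """Correct pattern: keeps a reference to the shared configuration."""
    def __init__(self, conf):
        self._conf = conf         # shared reference: later updates are visible

    def get(self, key):
        return self._conf.get(key)


hadoop_conf = {}                  # stand-in for sc.hadoopConfiguration
snapshotting = SnapshottingContext(hadoop_conf)
sharing = SharingContext(hadoop_conf)

# Set a key *after* both contexts exist, like
# sc.hadoopConfiguration.set("fs.s3a.access.key", ...):
hadoop_conf["fs.s3a.access.key"] = "my-access-key"

print(snapshotting.get("fs.s3a.access.key"))  # None -- the update was lost
print(sharing.get("fs.s3a.access.key"))       # my-access-key
```

If the SQL code path takes a snapshot like the first class, any `hadoopConfiguration.set` made after initialization would be silently dropped, which matches the symptom Jerry reports.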
On Tue, Oct 27, 2015 at 11:22 AM, Jerry Lam wrote:
Hi Marcelo,
Thanks for the advice. I understand that we could set the configurations
before creating the SparkContext. My question is that
SparkContext.hadoopConfiguration.set("key", "value") doesn't seem to
propagate to all subsequent SQLContext jobs. Note that, as I mentioned, I can
load the parquet file but
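For concreteness, the failing sequence being described looks roughly like this (pseudocode using Spark-1.5-era API names; the bucket path and key placeholders are hypothetical):

```
conf = new SparkConf()                              // no s3a keys set here
sc = new SparkContext(conf)
sc.hadoopConfiguration.set("fs.s3a.access.key", ACCESS_KEY)
sc.hadoopConfiguration.set("fs.s3a.secret.key", SECRET_KEY)
sqlContext = new SQLContext(sc)
df = sqlContext.read.parquet("s3a://bucket/path")   // the read itself may succeed,
                                                    // but subsequent SQLContext jobs
                                                    // do not see the keys
```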
Hi Marcelo,
I tried setting the properties before instantiating spark context via
SparkConf. It works fine.
Originally, the code I have reads hadoop configurations from hdfs-site.xml,
which works perfectly fine as well.
Therefore, can I conclude that sparkContext.hadoopConfiguration.set("key",
"value") does not propagate to subsequent SQLContext jobs?