[ 
https://issues.apache.org/jira/browse/SPARK-13979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390831#comment-15390831
 ] 

Luciano Resende commented on SPARK-13979:
-----------------------------------------

This scenario usually happens when you pass specific configurations via 
hadoopConfiguration when creating the Spark context:

    val conf = new SparkConf().setAppName("test").setMaster("local")
    val sc = new SparkContext(conf)
    sc.hadoopConfiguration.set("key1", "value1")

Then, across the Spark code, sometimes we seem to do the right thing, as in 
SessionState.scala:

def newHadoopConf(): Configuration = {
    // start from a copy of the SparkContext's hadoopConfiguration, so
    // programmatically set entries are preserved
    val hadoopConf = new Configuration(sparkSession.sparkContext.hadoopConfiguration)
    // overlay the session's SQL conf entries
    conf.getAllConfs.foreach { case (k, v) => if (v ne null) hadoopConf.set(k, v) }
    hadoopConf
}
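
To make the contrast concrete, here is a minimal standalone sketch (assuming 
only hadoop-common on the classpath; the key/value strings are placeholders) 
showing that Hadoop's Configuration copy constructor, which newHadoopConf uses, 
carries programmatic settings forward:

    import org.apache.hadoop.conf.Configuration

    // "base" stands in for sc.hadoopConfiguration with a programmatic setting
    val base = new Configuration()
    base.set("key1", "value1")
    // same pattern as newHadoopConf(): copy-construct from the existing conf
    val copy = new Configuration(base)
    assert(copy.get("key1") == "value1")   // the programmatic setting survives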

But in other places we seem to ignore the provided HadoopConfiguration, as in 
DataSourceStrategy.scala, where we call SparkHadoopUtil.get.conf and its 
implementation creates an empty Hadoop configuration:

  def newConfiguration(conf: SparkConf): Configuration = {
    // starts from an empty Configuration, ignoring sc.hadoopConfiguration
    val hadoopConf = new Configuration()
    appendS3AndSparkHadoopConfigurations(conf, hadoopConf)
    hadoopConf
  }

Note that, in this case, S3 might still work when AWS_ACCESS_KEY_ID and 
AWS_SECRET_ACCESS_KEY are available as environment variables, as 
appendS3AndSparkHadoopConfigurations populates the S3 credential properties 
from those values. But if you are providing other configurations 
programmatically on sc.hadoopConfiguration when creating the Spark context, 
those settings are lost on this code path.
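
In the meantime, one possible workaround (a sketch, assuming the 
"spark.hadoop.*" prefix handling that appendS3AndSparkHadoopConfigurations / 
newConfiguration performs; the keys and values below are placeholders) is to 
route Hadoop settings through SparkConf instead of setting them on 
sc.hadoopConfiguration, since prefixed entries are copied into every 
Configuration that SparkHadoopUtil builds:

    import org.apache.spark.{SparkConf, SparkContext}

    // "spark.hadoop."-prefixed SparkConf entries are copied (with the prefix
    // stripped) into the Hadoop Configuration built by SparkHadoopUtil, so
    // they are not lost on the code path shown above.
    val conf = new SparkConf()
      .setAppName("test")
      .setMaster("local")
      .set("spark.hadoop.fs.s3n.awsAccessKeyId", "SOME_KEY")
      .set("spark.hadoop.fs.s3n.awsSecretAccessKey", "SOME_SECRET")

    val sc = new SparkContext(conf)
    // Both sc.hadoopConfiguration and SparkHadoopUtil.get.newConfiguration(conf)
    // now contain fs.s3n.awsAccessKeyId / fs.s3n.awsSecretAccessKey.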

I am investigating further to see if I can find a more centralized place to 
work around/fix this so that any programmatically provided Hadoop 
configuration is always honored.



> Killed executor is respawned without AWS keys in standalone spark cluster
> -------------------------------------------------------------------------
>
>                 Key: SPARK-13979
>                 URL: https://issues.apache.org/jira/browse/SPARK-13979
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.5.2
>         Environment: I'm using Spark 1.5.2 with Hadoop 2.7 and running 
> experiments on a simple standalone cluster:
> 1 master
> 2 workers
> All Ubuntu 14.04 with Java 8/Scala 2.10
>            Reporter: Allen George
>
> I'm having a problem where respawning a failed executor during a job that 
> reads/writes parquet on S3 causes subsequent tasks to fail because of missing 
> AWS keys.
> h4. Setup:
> I'm using Spark 1.5.2 with Hadoop 2.7 and running experiments on a simple 
> standalone cluster:
> 1 master
> 2 workers
> My application is co-located on the master machine, while the two workers are 
> on two other machines (one worker per machine). All machines are running in 
> EC2. I've configured my setup so that my application executes its task on two 
> executors (one executor per worker).
> h4. Application:
> My application reads and writes parquet files on S3. I set the AWS keys on 
> the SparkContext by doing:
> val sc = new SparkContext()
> val hadoopConf = sc.hadoopConfiguration
> hadoopConf.set("fs.s3n.awsAccessKeyId", "SOME_KEY")
> hadoopConf.set("fs.s3n.awsSecretAccessKey", "SOME_SECRET")
> At this point I'm done, and I go ahead and use "sc".
> h4. Issue:
> I can read and write parquet files without a problem with this setup. *BUT* 
> if an executor dies during a job and is respawned by a worker, tasks fail 
> with the following error:
> "Caused by: java.lang.IllegalArgumentException: AWS Access Key ID and Secret 
> Access Key must be specified as the username or password (respectively) of a 
> s3n URL, or by setting the {{fs.s3n.awsAccessKeyId}} or 
> {{fs.s3n.awsSecretAccessKey}} properties (respectively)."
> h4. Basic analysis
> I think I've traced this down to the following:
> SparkHadoopUtil is initialized with an empty {{SparkConf}}. Later, classes 
> like {{DataSourceStrategy}} simply call {{SparkHadoopUtil.get.conf}} and 
> access the (now invalid; missing various properties) {{HadoopConfiguration}} 
> that's built from this empty {{SparkConf}} object. It's unclear to me why 
> this is done, and it seems that the code as written would cause broken 
> results anytime callers use {{SparkHadoopUtil.get.conf}} directly.


