[ https://issues.apache.org/jira/browse/SPARK-24271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16490499#comment-16490499 ]
Jami Malikzade commented on SPARK-24271:
----------------------------------------

[~ste...@apache.org] Thank you

> sc.hadoopConfigurations can not be overwritten in the same spark context
> ------------------------------------------------------------------------
>
>                 Key: SPARK-24271
>                 URL: https://issues.apache.org/jira/browse/SPARK-24271
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell
>    Affects Versions: 2.3.0
>            Reporter: Jami Malikzade
>            Priority: Major
>
> If, for example, we pass the following configs to the Spark context:
> sc.hadoopConfiguration.set("fs.s3a.access.key", "correctAK")
> sc.hadoopConfiguration.set("fs.s3a.secret.key", "correctSK")
> sc.hadoopConfiguration.set("fs.s3a.endpoint", "objectstorage:8773")
> sc.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
> sc.hadoopConfiguration.set("fs.s3a.connection.ssl.enabled", "false")
> we are later able to read from the bucket, so the behavior is as expected.
> However, if in the same SparkContext I change the credentials to wrong ones and try to read from the bucket, it still works; and vice versa, if the credentials were wrong, changing them to working ones does not help.
> sc.hadoopConfiguration.set("fs.s3a.access.key", "wrongAK")
> sc.hadoopConfiguration.set("fs.s3a.secret.key", "wrongSK")
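
Note (not part of the original report): the behavior described is consistent with Hadoop's FileSystem cache, which reuses the S3A client created on first access together with the credentials it was built with, so later changes to fs.s3a.* settings are not picked up. The sketch below shows one way this is commonly worked around; the bucket, path, and credential values are placeholders, and the cache-related settings shown are standard Hadoop options rather than anything specific to this ticket.

// Hedged sketch for the spark-shell: make new fs.s3a.* values take effect
// by preventing reuse of the cached S3A FileSystem instance.

// Option 1: disable the S3A FileSystem cache so each access re-reads the configuration.
sc.hadoopConfiguration.set("fs.s3a.impl.disable.cache", "true")

// Option 2: close all cached FileSystem instances so the next access rebuilds them.
org.apache.hadoop.fs.FileSystem.closeAll()

// Apply the new credentials and retry the read (placeholder values throughout).
sc.hadoopConfiguration.set("fs.s3a.access.key", "newAK")
sc.hadoopConfiguration.set("fs.s3a.secret.key", "newSK")
spark.read.text("s3a://my-bucket/some/path").show()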