[ https://issues.apache.org/jira/browse/SPARK-18017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15598339#comment-15598339 ]

Yuehua Zhang commented on SPARK-18017:
--------------------------------------

Thanks for your input! I only want to change the parameter for one job, so I
can't edit the Hadoop config file. As for the other option, if I add it through
the spark-submit command I get "Warning: Ignoring non-spark config property:
fs.s3n.block.size=524288000".
The reason I think this is related to the Spark upgrade is that this setting
worked well on Spark 1.6 but stopped working after we upgraded to Spark 2.0.
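
For reference, a minimal sketch of what the job does (the bucket path is
made up; the block size value is the real one we use):

    import org.apache.spark.sql.SparkSession

    object BlockSizeTest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("s3n-block-size-test")
          .getOrCreate()

        // Set the Hadoop parameter per job, as we did on 1.6. On 2.0 this
        // no longer appears to affect how the CSV input is split.
        spark.sparkContext.hadoopConfiguration
          .set("fs.s3n.block.size", "524288000") // 500 MB splits

        val df = spark.read
          .option("header", "true")
          .csv("s3n://my-bucket/data/*.csv") // hypothetical path

        // On 1.6 this tracked totalSize / blockSize; on 2.0 it does not.
        println(df.rdd.getNumPartitions)

        spark.stop()
      }
    }

(As an aside, the warning above seems expected: spark-submit only forwards
properties whose keys start with "spark.". Prefixing the key, i.e.
--conf spark.hadoop.fs.s3n.block.size=524288000, should get it copied into
the Hadoop configuration, assuming the spark.hadoop.* passthrough behaves
the same on 2.0.)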

> Changing Hadoop parameter through 
> sparkSession.sparkContext.hadoopConfiguration doesn't work
> --------------------------------------------------------------------------------------------
>
>                 Key: SPARK-18017
>                 URL: https://issues.apache.org/jira/browse/SPARK-18017
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 2.0.0
>         Environment: Scala version 2.11.8; Java 1.8.0_91; 
> com.databricks:spark-csv_2.11:1.2.0
>            Reporter: Yuehua Zhang
>
> My Spark job tries to read CSV files on S3. I need to control the number of 
> partitions created, so I set the Hadoop parameter fs.s3n.block.size. However, 
> it stopped working after we upgraded Spark from 1.6.1 to 2.0.0. Not sure if it 
> is related to https://issues.apache.org/jira/browse/SPARK-15991. 


