[ https://issues.apache.org/jira/browse/SPARK-3560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14136637#comment-14136637 ]

Andrew Or commented on SPARK-3560:
----------------------------------

[~sandyr] So is the fix simply to not set `spark.jars` for yarn-cluster mode?
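
A minimal sketch of what that guard might look like; the `Args` case class and `sysPropsFor` helper are made-up names for illustration, not the actual SparkSubmit internals:

```scala
object JarsPropagationSketch {
  // Hypothetical stand-in for spark-submit's parsed arguments.
  case class Args(master: String, deployMode: String, jars: Option[String])

  // Decide whether the --jars list should also be propagated as spark.jars.
  def sysPropsFor(args: Args): Map[String, String] = {
    val isYarnCluster =
      args.master.startsWith("yarn") && args.deployMode == "cluster"
    args.jars match {
      // Standalone cluster deploy mode still relies on spark.jars so that
      // executors fetch the jars themselves.
      case Some(jars) if !isYarnCluster => Map("spark.jars" -> jars)
      // yarn-cluster: the jars already go through the YARN distributed cache,
      // so skip spark.jars and avoid the second, fetch-based distribution.
      case _ => Map.empty
    }
  }
}
```

The intent is just that yarn-cluster mode relies solely on the YARN distributed cache for --jars, while standalone cluster mode keeps the fetch-based path.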

> In yarn-cluster mode, jars are distributed through multiple mechanisms.
> -----------------------------------------------------------------------
>
>                 Key: SPARK-3560
>                 URL: https://issues.apache.org/jira/browse/SPARK-3560
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.1.0
>            Reporter: Sandy Ryza
>            Priority: Critical
>
> In yarn-cluster mode, jars given to spark-submit's --jars argument should be 
> distributed to executors through the distributed cache, not through fetching.
> Currently, Spark tries to distribute the jars both ways, which can cause 
> executor errors related to trying to overwrite symlinks without write 
> permissions.
> It looks like this was introduced by SPARK-2260, which sets spark.jars in 
> yarn-cluster mode. Setting spark.jars is necessary for standalone cluster 
> deploy mode, but harmful for yarn-cluster deploy mode.
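
For context, the kind of submission this affects looks roughly like the following; the class name, jar names, and paths are placeholders:

```
spark-submit \
  --master yarn-cluster \
  --class com.example.MyApp \
  --jars /path/to/extra-lib1.jar,/path/to/extra-lib2.jar \
  my-app.jar
```

With yarn-cluster as the master, the --jars entries should reach executors only through the YARN distributed cache; the extra spark.jars-driven fetching on top of that is what leads to the symlink-overwrite errors described above.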



