[ https://issues.apache.org/jira/browse/SPARK-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-14908:
------------------------------------

    Assignee: Apache Spark

> Provide support for HDFS-located resources in "spark.executor.extraClassPath" 
> on YARN
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-14908
>                 URL: https://issues.apache.org/jira/browse/SPARK-14908
>             Project: Spark
>          Issue Type: Improvement
>          Components: YARN
>            Reporter: Dubkov Mikhail
>            Assignee: Apache Spark
>            Priority: Minor
>
> On our project we use a custom implementation of Spark's Serializer, and we 
> found that the serializer class is loaded when the executor launches 
> (SparkEnv.create()). So we were forced to use "spark.executor.extraClassPath", 
> and the custom serializer class loads fine for now. However, this is not good 
> for the deployment process, because "spark.executor.extraClassPath" currently 
> does not support HDFS-based resources, which means we have to deploy the 
> artifact containing the serializer to every Hadoop node. We would like to 
> simplify this deployment process.
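> For illustration, below is a minimal sketch of the configuration involved 
> (the class name and jar paths are placeholders, not our real artifacts):
>
>     import org.apache.spark.SparkConf
>
>     val conf = new SparkConf()
>       .setAppName("custom-serializer-app")
>       // The serializer is instantiated while the executor builds its
>       // SparkEnv, so the class must already be on the executor classpath.
>       .set("spark.serializer", "com.example.MyCustomSerializer")
>       // Works today, but the jar has to be pre-deployed at this local path
>       // on every YARN NodeManager host:
>       .set("spark.executor.extraClassPath", "/opt/libs/my-serializer.jar")
>       // What this ticket asks for: pointing at a jar in HDFS instead, so
>       // that no per-node deployment is needed, e.g.
>       // .set("spark.executor.extraClassPath", "hdfs:///libs/my-serializer.jar")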
> We have made changes for this purpose, and they work for us now. The changes 
> are relevant only to Hadoop/YARN deployments.
> We did not find any workaround that avoids the extra classpath definition for 
> a custom serializer implementation; please let us know if we missed something.
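> For reference, the kind of serializer class we mean looks roughly like the 
> following (a simplified placeholder, not our actual implementation):
>
>     import com.esotericsoftware.kryo.Kryo
>     import org.apache.spark.SparkConf
>     import org.apache.spark.serializer.KryoSerializer
>
>     // Extends Spark's KryoSerializer just to pre-register application
>     // classes. The point is that this class must be loadable when the
>     // executor builds its SparkEnv, which, as far as we can tell, happens
>     // before application jars are added to the executor's class loaders.
>     class MyCustomSerializer(conf: SparkConf) extends KryoSerializer(conf) {
>       override def newKryo(): Kryo = {
>         val kryo = super.newKryo()
>         kryo.register(classOf[Array[Byte]])  // register app-specific classes here
>         kryo
>       }
>     }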
> I will create a pull request against the master branch; could you please look 
> into the changes and come back with feedback?
> We need these changes in the master branch to simplify our future upgrades, 
> and I hope this improvement can be helpful for other Spark users as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
