[ https://issues.apache.org/jira/browse/SPARK-650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15580135#comment-15580135 ]

Sean Owen commented on SPARK-650:
---------------------------------

But why do you need to do it before you have an RDD? You can easily make this 
a library function, or just some static init that happens on demand whenever a 
certain class is loaded. The nice thing about that is that it's transparent, 
just like any singleton / static init in the JVM.
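The static-init pattern described above can be sketched in plain Java. The names here (ExecutorSetup, ensureInitialized) are illustrative, not Spark API; the relevant point is that on a cluster each executor is a separate JVM, so a static initializer runs at most once per executor, lazily, the first time any task touches the class:

```java
// Illustrative sketch (not Spark API): a class whose static initializer
// performs one-time, per-JVM setup. Since each Spark executor is its own
// JVM, the static block runs at most once per executor, on first use.
public class ExecutorSetup {
    private static int initCount = 0;

    static {
        // One-time setup, e.g. configuring a hypothetical reporting library.
        initCount++;
    }

    // Tasks call this (or anything else on the class) to trigger class
    // loading and thus the static initializer.
    public static void ensureInitialized() { }

    public static int initCount() { return initCount; }

    public static void main(String[] args) {
        // Simulate several tasks landing on the same executor JVM:
        for (int i = 0; i < 3; i++) {
            ExecutorSetup.ensureInitialized();
        }
        System.out.println(ExecutorSetup.initCount()); // prints 1: init ran once
    }
}
```

Because class loading is idempotent and thread-safe in the JVM, concurrent tasks on the same executor need no extra synchronization around this kind of setup.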

If you really want, you can make an empty RDD, repartition it, and use it as a 
dummy, but that only serves to run some initialization early that would 
happen transparently anyway.
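The dummy-RDD trick can be sketched as follows. This is a sketch under stated assumptions, not a tested snippet: it needs a running cluster, MyReportingLib.init is a hypothetical idempotent initializer, and the partition count of 100 is an arbitrary value intended to exceed the executor count.

```java
// Hedged sketch: force eager initialization by running a no-op job over
// an empty RDD with many partitions. foreachPartition invokes its function
// once per partition even when the partition is empty.
// Note: Spark makes no guarantee that every executor receives a partition.
JavaSparkContext sc = /* existing context */;
sc.emptyRDD()
  .repartition(100)  // spread partitions across the executors
  .foreachPartition(it -> MyReportingLib.init());  // hypothetical initializer
```

As the comment notes, this merely front-loads work that lazy static init would do transparently on first real use.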

> Add a "setup hook" API for running initialization code on each executor
> -----------------------------------------------------------------------
>
>                 Key: SPARK-650
>                 URL: https://issues.apache.org/jira/browse/SPARK-650
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>            Reporter: Matei Zaharia
>            Priority: Minor
>
> Would be useful to configure things like reporting libraries


