[ https://issues.apache.org/jira/browse/SPARK-12414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16054529#comment-16054529 ]

Ritesh Tijoriwala commented on SPARK-12414:
-------------------------------------------

I have a similar situation. I have several classes that I would like to 
instantiate and use on executors, e.g. DB connections, Elasticsearch 
clients, etc. I don't want to write instantiation code inside Spark functions 
and rely on "statics". There was a neat trick suggested here - 
https://issues.apache.org/jira/browse/SPARK-650 - but it seems this will no 
longer work as of 2.0.0 as a consequence of this ticket. 

Could anybody from the Spark community recommend how to do per-job 
initialization on each Spark executor before any task execution begins?
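
To make the question concrete, the only approach I have working today is the 
usual lazily initialized singleton, i.e. exactly the "statics" style I am 
trying to avoid. A minimal sketch of that pattern (EsClient, the endpoint, and 
the object names below are just placeholders, not a real client):

import org.apache.spark.{SparkConf, SparkContext}

// Toy stand-in so the sketch compiles; in reality this would be an
// Elasticsearch or DB client.
class EsClient(url: String) {
  def index(doc: String): Unit = println(s"indexing '$doc' against $url")
}

// The "statics" workaround: a lazy val on a singleton object is created
// once per executor JVM, the first time a task on that executor touches it.
object ExecutorClients {
  lazy val esClient: EsClient = new EsClient("http://localhost:9200") // placeholder endpoint
}

object LazySingletonExample {
  def main(args: Array[String]): Unit = {
    // local[2] only so the sketch runs standalone; normally set via spark-submit
    val conf = new SparkConf().setAppName("lazy-singleton").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val docs = sc.parallelize(Seq("a", "b", "c"), 3)
    // Only the closure is serialized; ExecutorClients is resolved lazily
    // on each executor rather than shipped from the driver.
    docs.foreachPartition(_.foreach(doc => ExecutorClients.esClient.index(doc)))
    sc.stop()
  }
}

The drawback is that every function needing a client has to go through such an 
object, and there is no hook to set the clients up eagerly (or tear them down) 
before the first task runs, which is really what I'm asking for.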

> Remove closure serializer
> -------------------------
>
>                 Key: SPARK-12414
>                 URL: https://issues.apache.org/jira/browse/SPARK-12414
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>    Affects Versions: 1.0.0
>            Reporter: Andrew Or
>            Assignee: Sean Owen
>             Fix For: 2.0.0
>
>
> There is a config `spark.closure.serializer` that accepts exactly one value: 
> the Java serializer. This is because there are currently bugs in the Kryo 
> serializer that make it not a viable candidate. This was uncovered by an 
> unsuccessful attempt to make it work: SPARK-7708.
> My high level point is that the Java serializer has worked well for at least 
> 6 Spark versions now, and it is an incredibly complicated task to get other 
> serializers (not just Kryo) to work with Spark's closures. IMO the effort is 
> not worth it and we should just remove this documentation and all the code 
> associated with it.



