Currently, an executor always runs in its own JVM, so it should be
possible to use some static initialization to e.g. launch a
sub-process and set up a bridge with which to communicate.
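
For illustration, here is a minimal sketch of that pattern, assuming a
hypothetical line-oriented worker at /path/to/worker-binary (Scala
object initialization runs once per JVM, i.e. once per executor):

import java.io.{BufferedReader, InputStreamReader, PrintWriter}

// One external worker per executor JVM. The lazy vals are initialized
// on first use and then shared by every task on that executor. The
// worker path and the line-based protocol are placeholders.
object ExternalWorker {
  lazy val process: Process =
    new ProcessBuilder("/path/to/worker-binary").start()

  private lazy val out = new PrintWriter(process.getOutputStream, true)
  private lazy val in =
    new BufferedReader(new InputStreamReader(process.getInputStream))

  // Simple synchronous request/response bridge over stdin/stdout.
  def call(request: String): String = synchronized {
    out.println(request)
    in.readLine()
  }
}

A task would then just call it, e.g. rdd.map(x => ExternalWorker.call(x)),
and the first task on each executor triggers the spawn.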

This would be a fairly advanced use case, however.

- Patrick



On Thu, May 29, 2014 at 8:39 PM, ansriniv <ansri...@gmail.com> wrote:
> Hi Matei,
>
> Thanks for the reply.
>
> I would like to avoid having to spawn these external processes every time
> a task is processed, in order to reduce task latency. I'd like them to be
> pre-spawned as much as possible - tying them to the lifecycle of the
> corresponding threadpool thread would simplify management for me.
>
> Also, during processing, some back-and-forth communication is required
> between the Spark executor thread and its associated external process.
>
> For these two reasons, pipe() wouldn't meet my requirement.
>
> Is there any hook in the ThreadPoolExecutor created by the Spark Executor
> to plug in my own ThreadFactory? (A sketch of what such a factory could
> look like follows this message.)
>
> Thanks
> Anand
>
>
>
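
Whatever hooks Spark itself exposes aside, a ThreadFactory of the kind
described could look like this minimal sketch (independent of Spark's
internal task pool; the worker path and the CompanionProcess holder
are hypothetical):

import java.util.concurrent.{Executors, ExecutorService, ThreadFactory}

// Holds the companion process for the current thread so task code can
// reach it. The holder and the worker path are hypothetical.
object CompanionProcess {
  val current = new ThreadLocal[Process]
}

class ProcessBoundThreadFactory extends ThreadFactory {
  override def newThread(r: Runnable): Thread =
    new Thread(new Runnable {
      override def run(): Unit = {
        // Spawn the external process when the pool thread starts...
        val worker = new ProcessBuilder("/path/to/worker-binary").start()
        CompanionProcess.current.set(worker)
        try r.run()              // the pool's worker loop (runs many tasks)
        finally worker.destroy() // ...and tear it down when the thread exits
      }
    })
}

// Illustrative usage with a plain pool; Spark creates its executor
// task pool internally, so this is a sketch of the concept only.
val pool: ExecutorService =
  Executors.newFixedThreadPool(4, new ProcessBoundThreadFactory)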
