Hi all,

I'm currently setting up a development environment for Spark.

We are planning to use a commercial, hostname-dongled third-party library in
our Spark jobs.

The questions that arise now are:
a) Is it possible (perhaps at job level) to tell the Spark executor which
hostname it should report to the application code that runs on the executor?

Maybe this is something that can be achieved with a specific
executor implementation (Mesos, YARN)?

I'm thinking of a job-specific option similar to Docker's "-h" command-line
option.
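
To illustrate what I mean, here is a minimal sketch in Scala of the kind of
job-level override I'm hoping for. It assumes that Spark's SPARK_LOCAL_HOSTNAME
environment variable (the name Spark advertises for a node), propagated to the
executors via SparkConf.setExecutorEnv, would also be the name the dongled
library sees -- which is exactly the part I'm unsure about. The hostname and the
library call below are placeholders.

import org.apache.spark.{SparkConf, SparkContext}

object HostnameOverrideSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("hostname-override-sketch")
      // propagate the override into every executor JVM's environment;
      // whether the dongled library reads this instead of the OS hostname is an open question
      .setExecutorEnv("SPARK_LOCAL_HOSTNAME", "licensed-host.example.com") // placeholder hostname

    val sc = new SparkContext(conf)

    sc.parallelize(1 to 10).foreach { _ =>
      // the hostname-dongled third-party library would be invoked here inside each task,
      // e.g. ThirdPartyLib.process(...)  <- placeholder, not a real API
    }

    sc.stop()
  }
}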

And to extend this question:
b) Is there an executor that allows running jobs inside Docker containers?
Maybe that would enable me to set the hostname at job level.
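
For completeness, this is roughly the configuration I imagine for b), assuming
Spark on Mesos with the Docker containerizer enabled, so that executors are
launched from a custom image whose hostname might then be controllable. The
image name and master URL are placeholders, and whether the container hostname
can really be fixed per job is the open question.

import org.apache.spark.SparkConf

// assumes a Mesos deployment with the Docker containerizer; the
// spark.mesos.executor.docker.image property selects the image executors run in
val conf = new SparkConf()
  .setAppName("dockerized-executors-sketch")
  .setMaster("mesos://zk://mesos-master:2181/mesos")                           // placeholder master URL
  .set("spark.mesos.executor.docker.image", "mycompany/spark-executor:latest") // placeholder image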


