This message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/How-do-I-get-the-executor-ID-from-running-Java-code-tp19092p25258.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
The Spark UI lists a number of executor IDs on the cluster. I would like
to access both the executor ID and the task/attempt ID from code inside a
function running on a slave machine.
Currently my motivation is to examine parallelism and locality, but in
Hadoop these IDs also aid in allowing code to write per-task output
without file-name collisions.
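One way to get at this information (a sketch, not from the original thread): inside any function that runs on an executor, `TaskContext.get()` returns the context of the currently running task, exposing `partitionId()`, `attemptNumber()` (Spark 1.4+), and `taskAttemptId()`; the executor ID itself is available from `SparkEnv.get().executorId()`, which is marked as a developer API and may change between releases. The class name `ExecutorIdDemo` and the use of `local[2]` below are illustrative choices, not anything from the thread.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.SparkEnv;
import org.apache.spark.TaskContext;
import org.apache.spark.api.java.JavaSparkContext;

public class ExecutorIdDemo {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("ExecutorIdDemo")
                .setMaster("local[2]"); // illustrative; on a cluster, set by spark-submit
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            List<String> tagged = sc.parallelize(Arrays.asList("a", "b", "c", "d"), 2)
                    .map(value -> {
                        // TaskContext.get() is non-null only inside a running task;
                        // on the driver it returns null.
                        TaskContext tc = TaskContext.get();
                        // DeveloperApi: identifies the executor running this task
                        // (reported as "driver" when running in local mode).
                        String executorId = SparkEnv.get().executorId();
                        return String.format(
                                "executor=%s partition=%d attempt=%d taskAttemptId=%d value=%s",
                                executorId,
                                tc.partitionId(),     // index of this task's partition
                                tc.attemptNumber(),   // 0 for the first attempt of a task
                                tc.taskAttemptId(),   // unique ID across all task attempts
                                value);
                    })
                    .collect();
            tagged.forEach(System.out::println);
        }
    }
}
```

Logging these values from inside the closure is often enough to observe how partitions are distributed across executors, which addresses the parallelism/locality question above.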