I dug a bit more and the executor ID is a number, so it seems there is no
possible workaround.
Looking at the code of CoarseGrainedSchedulerBackend.scala:
https://github.com/apache/spark/blob/6324eb7b5b0ae005cb2e913e36b1508bd6f1b9b8/core/src/main/scala/org/apache/spark/scheduler/cluster/Coa
When I said scheduler, I meant the executor backend.
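To make the problem concrete, here is a rough sketch of why sequential numeric executor IDs collide when two backends in the same application end up sharing one registry. This is a hypothetical simplification for illustration only, not the actual Spark code; the names `ExecutorRegistry`, `Backend`, and `nextExecutorId` are invented:

```scala
import scala.collection.mutable

// Hypothetical shared registry keyed by executor ID (illustrative, not Spark's).
object ExecutorRegistry {
  private val executors = mutable.Map.empty[String, String]

  // Returns false when an executor with the same ID is already registered:
  // this is the collision seen when two contexts share one registry.
  def register(execId: String, owner: String): Boolean =
    if (executors.contains(execId)) false
    else { executors(execId) = owner; true }
}

// Each backend hands out sequential numeric IDs starting from 0.
class Backend(name: String) {
  private var nextExecutorId = 0
  def launchExecutor(): Boolean = {
    val id = nextExecutorId.toString // "0", "1", "2", ...
    nextExecutorId += 1
    ExecutorRegistry.register(id, name)
  }
}

object CollisionDemo {
  def run(): (Boolean, Boolean) = {
    val a = new Backend("streaming-job-A")
    val b = new Backend("streaming-job-B")
    // Both backends independently generate "0", so the second registration fails.
    (a.launchExecutor(), b.launchExecutor())
  }
}
```

Because both backends count from zero independently, the second one to register an executor hits an ID that is already taken.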
2014-09-16 13:26 GMT+01:00 Luis Ángel Vicente Sánchez <
langel.gro...@gmail.com>:
It seems that, as I have a single Scala application, the scheduler is the
same and there is a collision between the executors of both Spark contexts.
Is there a way to change how the executor ID is generated (maybe a UUID
instead of a sequential number)?
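The UUID idea could be sketched like this. It is illustrative only: Spark does not expose a hook for customizing executor ID generation, and the `UuidExecutorIds` object below is invented for the example:

```scala
import java.util.UUID

// Sketch of the suggestion: derive executor IDs from a random UUID instead
// of a per-backend counter, so two contexts in the same JVM can never hand
// out the same ID. (Hypothetical -- not an actual Spark extension point.)
object UuidExecutorIds {
  def nextId(): String = UUID.randomUUID().toString
}
```

Unlike a sequential counter, `UUID.randomUUID()` gives IDs that are unique across independent generators, so two schedulers would not clash even without coordination.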
2014-09-16 13:07 GMT+01:00 Luis Ángel Vicente
I have a standalone Spark cluster and, from within the same Scala
application, I'm creating two different SparkContexts to run two different
Spark Streaming jobs, as the SparkConf is different for each of them.
I'm getting this error that I don't really understand:
14/09/16 11:51:35 ERROR OneForOneStra