Hi,

I'm seeing some strange behavior when creating a standalone Spark container using
Docker. I'm not sure why, but by default it assigns all 4 cores to the first job
that is submitted, and all the other jobs then sit in a WAITING state. Please
suggest if there is a setting to change this.

I tried --executor-cores 1, but it had no effect.
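
For reference, this is roughly how the job is being submitted (the master URL,
class name, and jar path below are just placeholders, not my actual values):

  spark-submit \
    --master spark://spark-master:7077 \
    --executor-cores 1 \
    --class com.example.MyJob \
    /path/to/my-job.jar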
