Hi Keith, we are running into the same issue here with Spark standalone
1.2.1. I was wondering if you have found a solution or workaround.
Maybe I should put this another way. If Spark has two jobs, A and B, both
of which consume the entire allocated memory pool, is it expected that
Spark can launch B before the executor processes tied to A have completely
terminated?
On Thu, Oct 9, 2014 at 6:57 PM, Keith Simmons ke...@pulse.io
Hi Folks,
We have a Spark job that is occasionally running out of memory and hanging
(I believe in GC). This is its own issue we're debugging, but in the
meantime, there's another unfortunate side effect. When the job is killed
(most often because of GC errors), each worker attempts to kill
Actually, it looks like even when the job shuts down cleanly, there can be
a few minutes of overlap between the time the next job launches and the
first job actually terminates its process. Here are some relevant lines
from my log:
14/10/09 20:49:20 INFO Worker: Asked to kill executor
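For anyone hitting the same thing: one quick way to see whether the two jobs' executor JVMs coexist is to pull the kill and launch events out of the worker log and compare timestamps. A minimal sketch below; the sample log lines are hypothetical (the exact message wording can vary between Spark versions), and the point is just that a launch event appearing before the previous app's executor has actually exited means both heaps are live on the machine at once:

```shell
# Write a hypothetical worker-log excerpt so the example is self-contained.
cat > worker.log <<'EOF'
14/10/09 20:49:20 INFO Worker: Asked to kill executor app-20141009-0001/3
14/10/09 20:52:45 INFO Worker: Launching executor app-20141009-0002/0
EOF

# Pull out kill/launch events in order. If a "Launching executor" line for
# the next app appears before the prior executor process is gone, the two
# jobs' memory footprints overlap during that window.
grep -E 'Asked to kill executor|Launching executor' worker.log
```

In this hypothetical excerpt the second job's executor launches about three minutes after the kill request, which matches the few-minutes overlap described above if the first executor's JVM is still shutting down (e.g. stuck in GC) during that window.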