This is something of a wild guess, but I find that when executors start
disappearing for no obvious reason, it is usually because the YARN
NodeManagers have decided that the containers are using too much memory and
have terminated the executors.
Unfortunately, to see evidence of this, one usually has to dig through the
YARN NodeManager logs on the affected hosts.
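As a rough illustration (assuming the Spark 1.x default rule, where the
overhead is roughly max(0.10 * executor memory, 384 MB) unless
spark.yarn.executor.memoryOverhead is set explicitly), the limit YARN
enforces is the executor heap plus that overhead, so any off-heap usage
beyond the overhead gets the container killed:

    // Hypothetical sketch of the container sizing rule assumed above.
    // The 0.10 factor and 384 MB floor reflect Spark 1.x defaults; check
    // your version's docs before relying on these exact numbers.
    def containerSizeMb(executorMemoryMb: Int, memoryOverheadMb: Option[Int]): Int = {
      val overhead = memoryOverheadMb.getOrElse(math.max((executorMemoryMb * 0.10).toInt, 384))
      executorMemoryMb + overhead
    }

    // e.g. --executor-memory 8g with the default overhead asks YARN for about 9 GB;
    // anything the process uses beyond that (off-heap buffers, netty, JNI) risks a kill.
    println(containerSizeMb(8192, None))        // 9011
    println(containerSizeMb(8192, Some(3500)))  // 11692 with an explicit overhead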
What version of Spark are you using? Have you set any shuffle configs?
On Wed, Aug 19, 2015 at 11:46 AM, unk1102 umesh.ka...@gmail.com wrote:
I have one Spark job which seems to run fine, but after an hour or so the
executors start getting lost because of a timeout, something like the
following
Moving this back onto user@
Regarding GC, can you look in the web UI and see whether the GC time
metric dominates the amount of time spent on each task (or at least the
tasks that aren't completing)?
Also, have you tried bumping your spark.yarn.executor.memoryOverhead? YARN
may be killing your executors for exceeding their memory limits.
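For reference, a minimal sketch of one way to raise that setting when building
the job's SparkConf (the app name and value here are only placeholders; the
same config can equally be passed as --conf on the spark-submit command line):

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: reserve more per-executor memory outside the JVM heap (value in MB).
    // 3500 is just the figure mentioned later in this thread, not a recommendation.
    val conf = new SparkConf()
      .setAppName("long-running-job")  // hypothetical app name
      .set("spark.yarn.executor.memoryOverhead", "3500")
    val sc = new SparkContext(conf)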
GC wouldn't necessarily result in errors - it could just be slowing down
your job and causing the executor JVMs to stall. If you click on a stage
in the UI, you should end up on a page with all the metrics concerning the
tasks that ran in that stage. GC Time is one of these task metrics.
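If the UI numbers are ambiguous, one common cross-check (a sketch under my own
assumptions, not something prescribed earlier in this thread) is to enable JVM
GC logging on the executors so long pauses show up directly in the executor
output; the JVM flags below are standard HotSpot options, not Spark-specific:

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: verbose GC logging for each executor JVM.
    val conf = new SparkConf()
      .setAppName("gc-diagnosis")  // hypothetical app name
      .set("spark.executor.extraJavaOptions",
           "-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps")
    val sc = new SparkContext(conf)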
-Sandy
Hi, where do I see GC time in the UI? I have set spark.yarn.executor.memoryOverhead
to 3500, which I believe should be good enough. So you mean only GC could
be the reason behind the timeout? I checked the YARN logs and did not see any GC
error there. Please guide. Thanks much.
On Thu, Aug 20, 2015 at 8:14 PM,