This is really weird and I’m surprised no one has found this issue yet.

I’ve spent about an hour or more trying to debug this :-(

My Spark install is ignoring ALL my memory settings, and of course my job
is running out of memory.
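
For context, here's roughly what I have in conf/spark-env.sh (the values below are just illustrative of what I'm setting; the variable names are the ones documented in conf/spark-env.sh.template):

    # conf/spark-env.sh (illustrative values)
    export SPARK_WORKER_MEMORY=8g      # total memory the worker can offer to executors
    export SPARK_EXECUTOR_MEMORY=4g    # what I expected each executor JVM to get
    export SPARK_DAEMON_MEMORY=1g      # heap for the master/worker daemons themselves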

The default is 512MB, so pretty darn small.

The worker and master start up and both use 512MB.

This alone is very weird, and IMO the documentation is poor, because it says:

 "SPARK_WORKER_MEMORY, to set how much total memory workers have to give
executors (e.g. 1000m, 2g)”

… so if it's the memory being given to executors, AKA the memory executors run
with, then you'd think the setting would be called SPARK_EXECUTOR_MEMORY…

… and the worker daemon itself actually uses SPARK_DAEMON_MEMORY.

But actually I'm right: it IS SPARK_EXECUTOR_MEMORY… according to
bin/spark-class.
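
The relevant chunk of bin/spark-class is roughly this shape (paraphrasing from memory, not a verbatim copy):

    case "$1" in
      # master and worker daemons use SPARK_DAEMON_MEMORY
      'org.apache.spark.deploy.master.Master' | 'org.apache.spark.deploy.worker.Worker')
        OUR_JAVA_MEM=${SPARK_DAEMON_MEMORY:-$DEFAULT_MEM}
        ;;
      # executors are supposed to use SPARK_EXECUTOR_MEMORY
      'org.apache.spark.executor.CoarseGrainedExecutorBackend')
        OUR_JAVA_MEM=${SPARK_EXECUTOR_MEMORY:-$DEFAULT_MEM}
        ;;
    esac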

… but that's not actually being used :-(

That setting is just flat-out being ignored and the executors just use 512MB,
so all my jobs fail.

… and I added an 'echo' to the spark-class script so I could trace what the
daemons are actually being launched with, but nothing is logged for the
coarse-grained executor, so spark-class doesn't even seem to be called for it.
I guess the executor is just inheriting the JVM opts from its parent and Java
is launching the process directly?
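
(The echo I mean was just something like this, dropped in right before spark-class execs java; the exact variable names vary by version:)

    # debug line near the end of bin/spark-class (variable names approximate)
    echo "spark-class launching: $RUNNER -cp $CLASSPATH $JAVA_OPTS $*" >&2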

This is a nightmare :(
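
If the answer is that executor memory is supposed to be requested per application rather than in spark-env.sh, I'm guessing that means something like the following (just a guess on my part; the master URL, class, and jar names are placeholders):

    ./bin/spark-submit --master spark://master:7077 \
      --executor-memory 4g \
      --class com.example.MyJob myjob.jar

    # ... or the equivalent entry in conf/spark-defaults.conf:
    # spark.executor.memory   4g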

-- 

Founder/CEO Spinn3r.com
Location: San Francisco, CA
blog: http://burtonator.wordpress.com
… or check out my Google+ profile
<https://plus.google.com/102718274791889610666/posts>
