I've set up a YARN (Hadoop 2.4.1) cluster with Spark 1.0.1, and I've
been seeing intermittent out-of-memory errors
(java.lang.OutOfMemoryError: unable to create new native thread) when
increasing the number of executors for a simple job (wordcount).
The general format of my submission looks like this:
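(The class name, jar path, and resource values below are placeholders
rather than my exact settings.)

    # illustrative spark-submit for a YARN cluster; all values are placeholders
    ./bin/spark-submit \
      --class org.example.WordCount \
      --master yarn-cluster \
      --num-executors 8 \
      --executor-memory 2g \
      --executor-cores 1 \
      /path/to/wordcount.jar hdfs:///path/to/input

The failures appear as I increase --num-executors.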
Hi Calvin,
When you say "until all the memory in the cluster is allocated and the job
gets killed", do you know what's going on? Spark apps should never be
killed for requesting or using too many resources. Is there any associated
error message?
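If nothing useful shows up in the driver output, the aggregated YARN
container logs are usually the best place to look, assuming log
aggregation is enabled on the cluster. Something along these lines (the
application ID is just a placeholder; use the one shown in the
ResourceManager UI):

    # placeholder application ID; substitute the real one from the ResourceManager UI
    yarn logs -applicationId application_1400000000000_0001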
Unfortunately, there are currently no tools for tweaking the