Additional notes:
I did not find anything wrong with the number of threads (ps -u USER -L |
wc -l): around 780 on the master and 400 on the executors. I am running on
100 r3.2xlarge instances.
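
As far as I understand, this error means the OS refused to create another
native thread, so nproc is not the only limit that can matter; the virtual
memory reserved per thread stack and the mapping limits can too. A rough
sketch of the checks (assuming Linux, and assuming the 10m stack size
mentioned below is still in effect, so each thread reserves ~10MB):

    # count threads for the Spark user (same check as above)
    ps -u USER -L | wc -l

    # per-user process/thread limit and address-space limit
    ulimit -u
    ulimit -v

    # system-wide limits on threads and memory mappings
    cat /proc/sys/kernel/threads-max
    cat /proc/sys/vm/max_map_count

    # back-of-the-envelope: with -XX:ThreadStackSize=10m, ~780 threads
    # reserve roughly 780 * 10MB ~= 7.8GB of virtual memory for stacks alone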

On Tue, Mar 24, 2015 at 12:38 PM, Thomas Gerber <thomas.ger...@radius.com>
wrote:

> Hello,
>
> I am seeing various crashes in spark on large jobs which all share a
> similar exception:
>
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
>
> I increased nproc (i.e. ulimit -u) 10-fold, but it doesn't help.
>
> Does anyone know how to avoid those kinds of errors?
>
> Noteworthy: I added -XX:ThreadStackSize=10m to both the driver and executor
> extra Java options, which might have amplified the problem.
>
> Thanks for your help,
> Thomas
>
