I doubt you're hitting the limit on the number of threads you can
spawn; more likely, as you say, you're running out of memory that the
JVM process is allowed to allocate, since your threads are grabbing
stacks 10x bigger than usual. The thread stacks alone come to >4GB.
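
Back-of-the-envelope, with my own illustrative numbers: at
-XX:ThreadStackSize=10m, ~450 threads reserve ~4.5GB of stack outside
the heap, where the default ~1m would need only ~450MB. If you want to
see how many threads an executor JVM actually has, something like this
should work (<pid> is the executor's process id):

  jstack <pid> | grep -c 'java.lang.Thread.State'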

I suppose you can't avoid upping the stack size so much?
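
If you can, dialing it back down (or dropping the override entirely)
would be the simplest fix. A sketch, with an arbitrary middle-ground
value:

  spark.driver.extraJavaOptions    -XX:ThreadStackSize=2m
  spark.executor.extraJavaOptions  -XX:ThreadStackSize=2m

(-Xss2m is equivalent; the 64-bit JVM default is around 1m.)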

If you really can't, then I think you need to run more, smaller
executors instead.
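
For example (illustrative numbers only, tune to your cluster), instead
of a handful of large executors per node, something like:

  spark-submit --executor-cores 2 --executor-memory 4g ...

(plus --num-executors on YARN) means fewer concurrent tasks, and
therefore fewer threads, per JVM.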

On Tue, Mar 24, 2015 at 7:38 PM, Thomas Gerber <thomas.ger...@radius.com> wrote:
> Hello,
>
> I am seeing various crashes in spark on large jobs which all share a similar
> exception:
>
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
>
> I increased nproc (i.e. ulimit -u) 10 fold, but it doesn't help.
>
> Does anyone know how to avoid those kinds of errors?
>
> Noteworthy: I added -XX:ThreadStackSize=10m on both driver and executor
> extra java options, which might have amplified the problem.
>
> Thanks for your help,
> Thomas
