Hello,

I am seeing various crashes in spark on large jobs which all share a
similar exception:

java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)

I increased nproc (i.e. ulimit -u) 10 fold, but it doesn't help.

Does anyone know how to avoid those kinds of errors?

Noteworthy: I added -XX:ThreadStackSize=10m to both the driver and executor
extra Java options, which might have amplified the problem.
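
In case it helps with diagnosis: "unable to create new native thread" means
the OS refused to create a thread, so besides ulimit -u there are other
limits worth checking, and a large per-thread stack size shrinks the native
memory left for new threads. A minimal Linux diagnostic sketch (standard
procfs paths, no Spark process required; the arithmetic at the end is just
an illustration of the 10 MB stack setting):

```shell
# Limits that govern native thread creation on Linux
ulimit -u                          # per-user process/thread limit (nproc)
cat /proc/sys/kernel/threads-max   # system-wide thread limit
cat /proc/sys/vm/max_map_count     # thread stacks consume memory mappings

# Rough native-memory budget: with a 10 MB stack per thread, the stacks
# alone (outside the JVM heap) for 1000 threads reserve about:
echo "$((1000 * 10)) MB"           # prints "10000 MB"
```

If any of these limits is the bottleneck, raising kernel.threads-max and
vm.max_map_count (via sysctl) or lowering the stack size back toward the
default may help more than raising nproc alone.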

Thanks for your help,
Thomas
