Another thing I forgot to mention: this happens only after the job has been
running for several hours (say 4 to 5 hours). I am not sure why it is creating
so many threads. Is there any way to control them?
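
In case it is useful, this is roughly the kind of snippet I am pasting into
spark-shell on the driver to see which thread pools are actually growing (just
a diagnostic sketch using the standard Thread/JavaConverters APIs; the
name-prefix grouping is my own heuristic, not anything Spark provides):

    import scala.collection.JavaConverters._

    // Summarize live threads in this JVM by name prefix (pool name without
    // the trailing "-<number>") so the fastest-growing pools stand out.
    def dumpThreadSummary(): Unit = {
      val threads = Thread.getAllStackTraces.keySet.asScala
      val byPool = threads
        .groupBy(t => t.getName.replaceAll("-\\d+$", ""))
        .mapValues(_.size)
        .toSeq
        .sortBy(-_._2)
      println(s"total live threads: ${threads.size}")
      byPool.foreach { case (prefix, count) => println(f"$count%6d  $prefix") }
    }

If one prefix dominates right before the OutOfMemoryError shows up, that is
presumably the pool worth looking at.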

On Fri, Oct 28, 2016 at 12:47 PM, kant kodali <kanth...@gmail.com> wrote:

>  "dag-scheduler-event-loop" java.lang.OutOfMemoryError: unable to create
> new native thread
>         at java.lang.Thread.start0(Native Method)
>         at java.lang.Thread.start(Thread.java:714)
>         at scala.concurrent.forkjoin.ForkJoinPool.tryAddWorker(ForkJoin
> Pool.java:1672)
>         at scala.concurrent.forkjoin.ForkJoinPool.signalWork(ForkJoinPo
> ol.java:1966)
>         at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.push(ForkJo
> inPool.java:1072)
>         at scala.concurrent.forkjoin.ForkJoinTask.fork(ForkJoinTask.
> java:654)
>         at scala.collection.parallel.ForkJoinTasks$WrappedTask$
>
> This is the error produced by the Spark driver program, which runs in
> client mode by default. Some people say to just increase the heap size by
> passing the --driver-memory 3g flag, but the message "unable to create new
> native thread" really means the JVM asked the OS for a new thread and the
> OS could not allocate any more. The number of threads a JVM can create by
> requesting them from the OS is platform dependent, but it is typically
> around 32K threads on a 64-bit JVM. So I am wondering why Spark is even
> creating so many threads, and how do I control this number?
>
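
For reference, this is the kind of check I am running on the driver host to
see the Linux-side caps (assuming a Linux box; the /proc paths below are
Linux-specific, and the real ceiling also depends on the per-user `ulimit -u`
limit and on memory available for thread stacks at the configured -Xss):

    import scala.io.Source
    import scala.util.Try

    // Print the Linux kernel knobs that bound how many native threads can exist.
    def firstLine(path: String): String =
      Try(Source.fromFile(path).getLines().next().trim).getOrElse("n/a")

    println(s"kernel.threads-max = ${firstLine("/proc/sys/kernel/threads-max")}")
    println(s"kernel.pid_max     = ${firstLine("/proc/sys/kernel/pid_max")}")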
