Re: Spark Executor Lost issue

2016-09-28 Thread Sushrut Ikhar
Can you add more details: are you using RDDs/Datasets/SQL? Are you doing group by / joins? Is your input splittable? Btw, you can pass the config the same way you are passing memoryOverhead, e.g. --conf spark.default.parallelism=1000, or through the SparkContext in code. Regards, Sushrut Ikhar
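Both ways of passing the setting that Sushrut describes can be sketched as below; the value 1000 matches his example, while the class name is a placeholder, not from the thread:

```shell
# Submit-time, alongside the memoryOverhead flag already being used:
spark-submit \
  --conf spark.yarn.executor.memoryOverhead=600 \
  --conf spark.default.parallelism=1000 \
  --class com.example.MyJob \
  /path/to/spark-job.jar
```

In code, the equivalent is setting `spark.default.parallelism` on the SparkConf before constructing the SparkContext.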

Re: Spark Executor Lost issue

2016-09-28 Thread Aditya
Hi All, Any updates on this? On Wednesday 28 September 2016 12:22 PM, Sushrut Ikhar wrote: Try increasing the parallelism by repartitioning, and you may also increase spark.default.parallelism. You can also try decreasing the number of cores per executor. Basically, this happens when the executor
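The two conf-side knobs in this advice can be combined at submit time as below (the numbers are illustrative placeholders, not recommendations from the thread); the repartitioning itself is a code-side call such as `rdd.repartition(n)`:

```shell
spark-submit \
  --conf spark.default.parallelism=1000 \
  --executor-cores 2 \
  /path/to/spark-job.jar
```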

Re: Spark Executor Lost issue

2016-09-28 Thread Aditya
Thanks Sushrut for the reply. Currently I have not defined the spark.default.parallelism property. Can you let me know how much I should set it to? Regards, Aditya Calangutkar. On Wednesday 28 September 2016 12:22 PM, Sushrut Ikhar wrote: Try with increasing the parallelism by repartitioning

Spark Executor Lost issue

2016-09-28 Thread Aditya
I have a spark job which runs fine for small data, but when the data increases it gives an executor lost error. My executor and driver memory are set at their highest point. I have also tried increasing --conf spark.yarn.executor.memoryOverhead=600 but am still not able to fix the problem. Is there any other
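For reference, on YARN the overhead defaults to roughly 7-10% of executor memory (the fraction is version-dependent) with a 384 MB floor, so 600 MB is on the small side for large executors. A sketch of a bigger overhead, with illustrative sizes:

```shell
spark-submit \
  --executor-memory 20G \
  --conf spark.yarn.executor.memoryOverhead=2048 \
  /path/to/spark-job.jar
```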

Re: Spark executor lost because of GC overhead limit exceeded even though using 20 executors using 25GB each

2015-08-18 Thread Ted Yu
) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213) -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-executor-lost-because-of-GC-overhead-limit-exceeded-even-though-using-20-executors-using-25GB-h-tp24322.html Sent from

Spark executor lost because of GC overhead limit exceeded even though using 20 executors using 25GB each

2015-08-18 Thread unk1102
) at org.apache.spark.scheduler.Task.run(Task.scala:70) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
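When diagnosing a GC overhead limit exceeded error, it usually helps to turn on GC logging before raising memory further; a minimal sketch (the JVM flags are standard HotSpot options, the sizes illustrative):

```shell
spark-submit \
  --conf "spark.executor.extraJavaOptions=-verbose:gc -XX:+PrintGCDetails" \
  --executor-memory 25G \
  /path/to/spark-job.jar
```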

Re: Spark executor lost because of time out even after setting quite long time out value 1000 seconds

2015-08-17 Thread Akhil Das
=-XX:MaxPermSize=512M --driver-java-options -XX:MaxPermSize=512m --driver-memory 4g --master yarn-client --executor-memory 25G --executor-cores 8 --num-executors 5 --jars /path/to/spark-job.jar

Spark executor lost because of time out even after setting quite long time out value 1000 seconds

2015-08-16 Thread unk1102
=-XX:MaxPermSize=512M --driver-java-options -XX:MaxPermSize=512m --driver-memory 4g --master yarn-client --executor-memory 25G --executor-cores 8 --num-executors 5 --jars /path/to/spark-job.jar
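The timeout in the subject line maps onto Spark's umbrella network timeout, with the heartbeat interval kept well below it. A sketch, with the 1000 s value taken from the subject (older releases take a bare number of seconds rather than the suffixed form):

```shell
spark-submit \
  --conf spark.network.timeout=1000s \
  --conf spark.executor.heartbeatInterval=60s \
  /path/to/spark-job.jar
```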

Re: Spark executor lost

2014-12-04 Thread Akhil Das
Sent: Wednesday, December 03, 2014 06:30 PM Eastern Standard Time To: user@spark.apache.org Subject: Spark executor lost. We are using Spark job server to submit spark jobs (our spark version is 0.91). After running the spark job server for a while, we often see the following errors (executor lost) in the spark job server log

Spark executor lost

2014-12-03 Thread S. Zhou
We are using Spark job server to submit spark jobs (our spark version is 0.91). After running the spark job server for a while, we often see the following errors (executor lost) in the spark job server log. As a consequence, the spark driver (allocated inside spark job server) gradually loses

RE: Spark executor lost

2014-12-03 Thread Ganelin, Ilya
with Good (www.good.com) -----Original Message----- From: S. Zhou [myx...@yahoo.com.INVALID] Sent: Wednesday, December 03, 2014 06:30 PM Eastern Standard Time To: user@spark.apache.org Subject: Spark executor lost. We are using Spark job server to submit spark jobs

Re: Spark executor lost

2014-12-03 Thread Ted Yu
...@yahoo.com.INVALID] Sent: Wednesday, December 03, 2014 06:30 PM Eastern Standard Time To: user@spark.apache.org Subject: Spark executor lost. We are using Spark job server to submit spark jobs (our spark version is 0.91). After running the spark job server for a while, we often see