Re: Spark Executor Lost issue

2016-09-28 Thread Sushrut Ikhar
Can you add more details, like: are you using RDDs/Datasets/SQL? Are you doing group by/joins? Is your input splittable? Btw, you can pass the config the same way you are passing memoryOverhead, e.g. --conf spark.default.parallelism=1000, or through the SparkContext in code. Regards, Sushrut Ikhar
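
A minimal sketch of the in-code alternative Sushrut mentions, assuming the Spark 1.x/2.x Scala API (the app name is hypothetical and the value 1000 is taken from the example above, not a tuned recommendation):

    import org.apache.spark.{SparkConf, SparkContext}

    // equivalent to passing --conf spark.default.parallelism=1000 on spark-submit;
    // must be set before the SparkContext is created
    val conf = new SparkConf()
      .setAppName("my-app")                        // hypothetical app name
      .set("spark.default.parallelism", "1000")    // value from the example above
    val sc = new SparkContext(conf)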

Re: Spark Executor Lost issue

2016-09-28 Thread Aditya
Hi All, Any updates on this? On Wednesday 28 September 2016 12:22 PM, Sushrut Ikhar wrote: Try increasing the parallelism by repartitioning; you may also increase spark.default.parallelism. You can also try decreasing the number of executor cores. Basically, this happens when the executor
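
A sketch of the repartitioning suggestion (the input path is hypothetical and the target count of 1000 is illustrative, matching the earlier example):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("repartition-sketch"))
    // increase the partition count before the wide/shuffle stages;
    // repartition() itself triggers a full shuffle, so do it once, early in the job
    val input = sc.textFile("hdfs:///path/to/input")   // hypothetical path
    val repartitioned = input.repartition(1000)        // illustrative target count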

Re: Spark Executor Lost issue

2016-09-28 Thread Aditya
Thanks Sushrut for the reply. Currently I have not defined the spark.default.parallelism property. Can you let me know how much I should set it to? Regards, Aditya Calangutkar On Wednesday 28 September 2016 12:22 PM, Sushrut Ikhar wrote: Try with increasing the parallelism by repartitioning
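
The thread does not answer Aditya's question directly; the usual rule of thumb from the Spark tuning guide is 2-3 tasks per CPU core in the cluster. A sketch of that arithmetic (the executor counts here are hypothetical, not from the thread):

    // hypothetical cluster: 20 executors x 5 cores each = 100 cores total
    val totalCores  = 20 * 5
    val parallelism = totalCores * 3   // 2-3 tasks per core is the common heuristic
    // i.e. roughly --conf spark.default.parallelism=300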

Re: Spark executor lost because of GC overhead limit exceeded even though using 20 executors using 25GB each

2015-08-18 Thread Ted Yu
Do you mind providing a bit more information? Release of Spark, code snippet of your app, version of Java. Thanks On Tue, Aug 18, 2015 at 8:57 AM, unk1102 umesh.ka...@gmail.com wrote: Hi, this GC overhead limit error is driving me crazy. I have 20 executors using 25 GB each. I don't understand
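
None of the details Ted asked for appear in the thread. For context, a hedged sketch of the settings commonly examined for "GC overhead limit exceeded" in that era of Spark (values are illustrative, not tuned for this job):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.executor.memory", "25g")         // the 25 GB from the subject line
      .set("spark.executor.extraJavaOptions",
        "-XX:+UseG1GC -XX:+PrintGCDetails")        // try G1 and log GC activity
      .set("spark.storage.memoryFraction", "0.4")  // pre-1.6 knob: shrink the cache,
                                                   // leaving more heap for execution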

Re: Spark executor lost because of time out even after setting quite long time out value 1000 seconds

2015-08-17 Thread Akhil Das
It could be stuck on a GC pause. Can you check a bit more in the executor logs and see what's going on? Also, from the driver UI you would get to know at which stage it is getting stuck, etc. Thanks Best Regards On Sun, Aug 16, 2015 at 11:45 PM, unk1102 umesh.ka...@gmail.com wrote: Hi I have
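
A sketch of the timeout knobs the original poster was likely adjusting (the 1000-second figure comes from the subject line; property names per the Spark 1.x configuration docs):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.network.timeout", "1000s")           // the value from the subject line
      .set("spark.executor.heartbeatInterval", "60s")  // must stay well below the
                                                       // network timeout, or executors
                                                       // are marked lost during long GC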

Re: Spark executor lost

2014-12-04 Thread Akhil Das
It says connection refused; just make sure the network is configured properly (open the ports between the master and the worker nodes). If the ports are configured correctly, then I assume the process is getting killed for some reason and hence the connection is refused. Thanks Best Regards On Fri, Dec 5,
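
For reference, a sketch of how the normally random Spark ports can be pinned so firewall rules between the nodes can allow them explicitly (the port numbers are illustrative):

    import org.apache.spark.SparkConf

    // pin the normally random ports so they can be opened in the firewall
    val conf = new SparkConf()
      .set("spark.driver.port", "40000")         // driver RPC endpoint (random by default)
      .set("spark.blockManager.port", "40001")   // block transfer service (random by default)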

RE: Spark executor lost

2014-12-03 Thread Ganelin, Ilya
You want to look further up the stack (there are almost certainly other errors before this happens), and those other errors may give you a better idea of what is going on. Also, if you are running on YARN you can run yarn logs -applicationId yourAppId to get the logs from the data nodes. Sent

Re: Spark executor lost

2014-12-03 Thread Ted Yu
bq. to get the logs from the data nodes Minor correction: the logs are collected from the machines where the node managers run. Cheers On Wed, Dec 3, 2014 at 3:39 PM, Ganelin, Ilya ilya.gane...@capitalone.com wrote: You want to look further up the stack (there are almost certainly other errors