Hi all! I am getting the following error in the interactive spark-shell [0.8.1]:

org.apache.spark.SparkException: Job aborted: Task 0.0:0 failed more than
0 times; aborting job java.lang.OutOfMemoryError: GC overhead limit exceeded

But I had set the following in spark-env.sh and ...
To be clear on what your configuration will do:
- SPARK_DAEMON_MEMORY=8g will give your standalone master and worker
schedulers a lot of memory. It does not affect the amount of memory
actually given to executors or to your driver, however, so you probably
don't need to set it (see the sketch just below this list).
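For instance, here is a minimal sketch of conf/spark-env.sh for a Spark
0.8.x standalone cluster. The sizes are only illustrative, not taken from
this thread; the point is which variable controls which JVM:

    # conf/spark-env.sh (Spark 0.8.x standalone; sizes illustrative)
    # Heap for the Master/Worker daemon processes themselves --
    # NOT for executors, so a small value is usually fine.
    export SPARK_DAEMON_MEMORY=1g
    # Total memory this Worker may hand out to executors on the node.
    export SPARK_WORKER_MEMORY=16g
    # Per-application executor/driver heap (equivalently, set the
    # spark.executor.memory Java property).
    export SPARK_MEM=8g
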
PS: you have a typo in DEAMON - it's DAEMON. Thanks, Latin.
On Mar 24, 2014 7:25 AM, Sai Prasanna ansaiprasa...@gmail.com wrote:
> Hi all! I am getting the following error in the interactive spark-shell
> [0.8.1]:
> org.apache.spark.SparkException: Job aborted: Task 0.0:0 failed more
> than 0 times; ...
1. Not sure on this; I don't believe we change the defaults from Java.
2. SPARK_JAVA_OPTS can be used to set various Java properties (other
than the heap size itself); an example follows at the end of this list.
3. If you want 8 GB executors then, yes, only two can run on each
16 GB node. (In fact, you should also keep a ...
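
As an illustration of point 2, a hedged sketch of how SPARK_JAVA_OPTS is
typically used in conf/spark-env.sh; the particular flags and the
spark.local.dir path are only examples, not values from this thread:

    # conf/spark-env.sh
    # JVM flags and -D Spark properties for the forked JVMs.
    # Note: the heap size itself is NOT set here; use SPARK_MEM /
    # spark.executor.memory for that.
    export SPARK_JAVA_OPTS="-Dspark.local.dir=/mnt/spark -verbose:gc -XX:+PrintGCDetails"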
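
On point 3, the arithmetic is simply floor(node RAM / executor size):
floor(16 GB / 8 GB) = 2 executors per node, and in practice somewhat less
than the full 16 GB is usable once the OS and the worker daemon take
their share.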