Thanks Aaron and Sean...

Setting SPARK_MEM finally worked. But I have a few small doubts.
1) What are the default values allocated for the JVM and for the heap space
used by the garbage collector?

2) Usually we set 1/3 of total memory for the heap. So what should the
practice be for Spark processes? Where and how should we set them,
and what default value does Spark assume?
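
For instance, is something along these lines in conf/spark-env.sh the right
place? (The values and comments below are just my guesses about what each
setting controls, not documented defaults, so please correct me.)

export SPARK_MEM=8g                  # per-application memory, i.e. the executor JVM heap
export SPARK_WORKER_MEMORY=10g       # total memory a worker can hand out to executors on the node
export SPARK_DAEMON_MEMORY=1g        # memory for the standalone master/worker daemons themselves
export SPARK_JAVA_OPTS="-XX:+UseConcMarkSweepGC"   # extra JVM options, e.g. the GC to use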

3) Moreover, if we set SPARK_MEM to, say, 8g and a node has 16g of RAM, can
at most two executors run on that node of the cluster?
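(My rough arithmetic, assuming nothing else on the node is reserved: 16g of RAM
divided by 8g per executor = 2 executors, with no headroom left for the OS or
the worker daemon itself; please correct me if that reasoning is off.)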


Thanks Again !!




On Mon, Mar 24, 2014 at 2:13 PM, Sean Owen <so...@cloudera.com> wrote:

> PS you have a typo in "DEAMON" - it's DAEMON. Thanks Latin.
> On Mar 24, 2014 7:25 AM, "Sai Prasanna" <ansaiprasa...@gmail.com> wrote:
>
>> Hi All !! I am getting the following error in interactive spark-shell
>> [0.8.1]
>>
>>
>>  *org.apache.spark.SparkException: Job aborted: Task 0.0:0 failed more
>> than 0 times; aborting job java.lang.OutOfMemoryError: GC overhead limit
>> exceeded*
>>
>>
>> But I had set the following in spark-env.sh and hadoop-env.sh:
>>
>> export SPARK_DEAMON_MEMORY=8g
>> export SPARK_WORKER_MEMORY=8g
>> export SPARK_DEAMON_JAVA_OPTS="-Xms8g -Xmx8g"
>> export SPARK_JAVA_OPTS="-Xms8g -Xmx8g"
>>
>>
>> export HADOOP_HEAPSIZE=4000
>>
>> Any suggestions ??
>>
>> --
>> *Sai Prasanna. AN*
>> *II M.Tech (CS), SSSIHL*
>>
>>
>>


-- 
*Sai Prasanna. AN*
*II M.Tech (CS), SSSIHL*


*Entire water in the ocean can never sink a ship, Unless it gets inside. All
the pressures of life can never hurt you, Unless you let them in.*
