Hi Akhil,


Thanks a lot!


After setting export _JAVA_OPTIONS="-Xmx5g", the OutOfMemory exception disappeared. But this confuses me: the --driver-memory option apparently doesn't work for spark-submit to YARN (I haven't checked other cluster managers). Is it a bug?
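
As a workaround, the setting can also go into conf/spark-defaults.conf, which spark-submit reads before the driver JVM is launched, so it should take effect in yarn-client mode as well:

spark.driver.memory 5g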


Regards,


Shuai


From: Akhil Das [mailto:ak...@sigmoidanalytics.com] 
Sent: Wednesday, April 01, 2015 2:40 AM
To: Shuai Zheng
Cc: user@spark.apache.org
Subject: Re: --driver-memory parameter doesn't work for spark-submit on yarn?


Once you submit the job, do a ps aux | grep spark-submit and see how much heap space is allocated to the process (the -Xmx param). If you are seeing a lower value, you could try increasing it yourself with:

export _JAVA_OPTIONS="-Xmx5g"
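
To confirm that the new value actually took effect, the JDK's jps tool works as well (assuming jps is on your PATH):

jps -lvm | grep SparkSubmit

This prints the JVM arguments of the running SparkSubmit process, including the effective -Xmx.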

Thanks

Best Regards


On Wed, Apr 1, 2015 at 1:57 AM, Shuai Zheng <szheng.c...@gmail.com> wrote:

Hi All,


Below is my shell script:

/home/hadoop/spark/bin/spark-submit --driver-memory=5G --executor-memory=40G 
--master yarn-client --class com.***.FinancialEngineExecutor 
/home/hadoop/lib/my.jar s3://bucket/vriscBatchConf.properties 


My driver loads some resources and then broadcasts them to all executors.
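
The driver-side pattern is essentially the following (a simplified sketch with placeholder class and path names, not my actual code):

import java.io.*;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.broadcast.Broadcast;

public class LoadAndBroadcast {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        JavaSparkContext sc = new JavaSparkContext(new SparkConf());
        // The ~600MB object is fully materialized on the driver heap here,
        // which is why the driver JVM needs enough -Xmx headroom.
        Object resource;
        try (ObjectInputStream in = new ObjectInputStream(
                new BufferedInputStream(new FileInputStream("/tmp/resource.ser")))) {
            resource = in.readObject();
        }
        // Hand the deserialized object to all executors.
        Broadcast<Object> bc = sc.broadcast(resource);
        System.out.println("Broadcast id: " + bc.id());
    }
}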


Those resources are only 600MB in serialized form, but I always get an out-of-memory exception; it looks like the right amount of memory isn't being allocated to my driver.

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

        at java.lang.reflect.Array.newArray(Native Method)
        at java.lang.reflect.Array.newInstance(Array.java:70)
        at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1670)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1344)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
        at com.***.executor.support.S3FileUtils.loadCache(S3FileUtils.java:68)

Am I doing anything wrong here?

And no matter what value I set for --driver-memory (from 512M to 20G), it always fails with the same error on the same line (the line that tries to load the 600MB Java serialization file). So it looks like the script doesn't pass the right memory setting to the driver in my case?
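
One more data point: my script passes the value as --driver-memory=5G, while the documented form is space-separated, e.g.

/home/hadoop/spark/bin/spark-submit --driver-memory 5G --executor-memory 40G ...

If the launcher script only recognizes the space-separated form, that could explain why the value is silently ignored, but that is just a guess on my part.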


Regards,


Shuai
