Did you confirm through the Spark UI how much memory is getting allocated to 
your application on each worker?
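
You can also double-check what the running application actually picked up, e.g. 
from the spark-shell (just a quick sketch; "sc" is the shell's SparkContext):

  // Print the memory settings the running SparkContext sees.
  println(sc.getConf.get("spark.executor.memory", "not set"))
  println(sc.getConf.get("spark.driver.memory", "not set"))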

Mohammed

From: Vijayasarathy Kannan [mailto:kvi...@vt.edu]
Sent: Monday, May 4, 2015 3:36 PM
To: Andrew Ash
Cc: user@spark.apache.org
Subject: Re: Spark JVM default memory

I am trying to read in a 4GB file. I tried setting both "spark.driver.memory" 
and "spark.executor.memory" to large values (say 16GB), but I still get a "GC 
overhead limit exceeded" error. Any idea what I am missing?
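
For concreteness, one common way to set those is on the spark-submit command 
line (as I understand it, spark.driver.memory in particular has to be supplied 
at launch time rather than from inside the application). The master URL, class, 
and jar below are just placeholders, not my actual job:

  ./bin/spark-submit \
    --master spark://master-host:7077 \
    --conf spark.driver.memory=16g \
    --conf spark.executor.memory=16g \
    --class com.example.MyApp \
    myapp.jar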

On Mon, May 4, 2015 at 5:30 PM, Andrew Ash 
<and...@andrewash.com> wrote:
It's unlikely you need to increase the amount of memory on your master node 
since it does simple bookkeeping.  The majority of the memory pressure across a 
cluster is on executor nodes.

See the conf/spark-env.sh file for configuring heap sizes, and this section in 
the docs for more information on how to make these changes: 
http://spark.apache.org/docs/latest/configuration.html
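
For the standalone daemons specifically, the relevant knobs in conf/spark-env.sh 
look something like this (the sizes below are only examples):

  # Heap for the master and worker daemon JVMs themselves (the 512MB default)
  export SPARK_DAEMON_MEMORY=2g
  # Total memory a worker machine is allowed to hand out to executors
  export SPARK_WORKER_MEMORY=16g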

On Mon, May 4, 2015 at 2:24 PM, Vijayasarathy Kannan 
<kvi...@vt.edu> wrote:
Starting the master with "/sbin/start-master.sh" creates a JVM with only 512MB 
of memory. How do I change this default amount of memory?

Thanks,
Vijay

