Are you launching this using our EC2 scripts? Or have you set up a cluster by 
hand?

Matei

On Jun 12, 2014, at 2:32 PM, Aliaksei Litouka <aliaksei.lito...@gmail.com> 
wrote:

> spark-env.sh doesn't seem to contain any settings related to memory size :( I 
> will keep searching for a solution and will post it here if I find one :)
> Thank you anyway.
> 
> 
> On Wed, Jun 11, 2014 at 12:19 AM, Matei Zaharia <matei.zaha...@gmail.com> 
> wrote:
> It might be that conf/spark-env.sh on EC2 is configured to set the executor 
> memory to 512 MB, and is overriding the application's settings. Take a look 
> in there and delete that line if possible.
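> 
> For reference, the offending line would look something like the sketch below 
> (the exact variable name depends on the Spark version; SPARK_EXECUTOR_MEMORY 
> is one such setting, and 512m is just the value you'd expect given the 
> symptom):
> 
>   # conf/spark-env.sh -- a hard-coded value like this may override
>   # whatever memory the application or spark-submit asks for
>   export SPARK_EXECUTOR_MEMORY=512m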
> 
> Matei
> 
> On Jun 10, 2014, at 2:38 PM, Aliaksei Litouka <aliaksei.lito...@gmail.com> 
> wrote:
> 
> > I am testing my application on an EC2 cluster of m3.medium machines. By 
> > default, only 512 MB of memory on each machine is used. I want to increase 
> > this amount, and I'm trying to do it by passing the --executor-memory 2G 
> > option to the spark-submit script, but it doesn't seem to work - each 
> > machine still uses only 512 MB instead of 2 GB. What am I doing wrong? How 
> > do I increase the amount of memory?
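> > 
> > For reference, my invocation looks roughly like this (the class name, 
> > master URL, and jar name here are placeholders, not my actual ones):
> > 
> >   ./bin/spark-submit \
> >     --class com.example.MyApp \
> >     --master spark://<master-host>:7077 \
> >     --executor-memory 2G \
> >     my-app.jar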
> 
> 
