I am testing my application on an EC2 cluster of m3.medium machines. By
default, only 512 MB of memory is used on each machine. I want to increase
this amount, and I'm trying to do so by passing the --executor-memory 2G
option to the spark-submit script, but it doesn't seem to work: each machine
still uses only 512 MB instead of 2 GB. What am I doing wrong, and how do I
increase the amount of memory?
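For reference, the invocation I'm using looks roughly like this (the master URL and jar name below are placeholders, not the actual values):

```shell
# Placeholder master URL and application jar; the relevant part is the
# --executor-memory flag, which does not appear to take effect.
spark-submit \
  --master spark://ec2-master-host:7077 \
  --executor-memory 2G \
  my-app.jar
```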
