Aaron,
spark.executor.memory is set to 2454m in my spark-defaults.conf, which is a
reasonable value for the EC2 instances I use (m3.medium machines). However,
it doesn't help: each executor still uses only 512 MB of memory. To figure
out why, I examined the spark-submit and spark-class scripts.
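In case it helps someone reproduce this, the check itself is just a couple of
greps (paths assume the standard Spark layout under the install directory;
adjust as needed):

    # look for a hard-coded 512m default in the launch scripts
    grep -n "512" bin/spark-class bin/spark-submit

    # see which memory settings the config files actually carry
    grep -n "memory" conf/spark-defaults.conf conf/spark-env.sh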
spark-env.sh doesn't seem to contain any settings related to memory size :(
I will continue searching for a solution and will post it here if I find one :)
Thank you anyway.
On Wed, Jun 11, 2014 at 12:19 AM, Matei Zaharia matei.zaha...@gmail.com
wrote:
It might be that conf/spark-env.sh on EC2 is configured to set it to 512, and
is overriding the application’s settings.
Are you launching this using our EC2 scripts? Or have you set up a cluster by
hand?
Matei
On Jun 12, 2014, at 2:32 PM, Aliaksei Litouka aliaksei.lito...@gmail.com
wrote:
spark-env.sh doesn't seem to contain any settings related to memory size :( I
will continue searching for a solution and will post it here if I find one :)
The scripts for Spark 1.0 actually specify this property in
/root/spark/conf/spark-defaults.conf.
I didn't know that it would override the --executor-memory flag, though;
that's pretty odd.
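For anyone else hitting this, the line those scripts write would look roughly
like the following (the value shown is the one from this thread; only the path
comes from the EC2 scripts):

    # /root/spark/conf/spark-defaults.conf
    spark.executor.memory   2454m

and the flag it seems to win over is the one you'd normally pass on the
command line (the class name and jar below are placeholders):

    ./bin/spark-submit --class com.example.MyApp --executor-memory 2g my-app.jar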
On Thu, Jun 12, 2014 at 6:02 PM, Aliaksei Litouka
aliaksei.lito...@gmail.com wrote:
Yes, I am.
It might be that conf/spark-env.sh on EC2 is configured to set it to 512, and
is overriding the application’s settings. Take a look in there and delete that
line if possible.
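If it is set there, the line to look for would be something like this
(SPARK_EXECUTOR_MEMORY is one of the variables spark-env.sh supports; whether
the EC2 image uses that exact variable is an assumption on my part):

    # conf/spark-env.sh
    # removing or commenting this out lets spark-defaults.conf / --executor-memory take effect
    export SPARK_EXECUTOR_MEMORY=512m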
Matei
On Jun 10, 2014, at 2:38 PM, Aliaksei Litouka aliaksei.lito...@gmail.com
wrote:
I am testing my application in