Spark ignores SPARK_WORKER_MEMORY?

2016-01-13 Thread Barak Yaish
Hello, although I'm setting SPARK_WORKER_MEMORY in spark-env.sh, it looks like this setting is ignored. I can't find any indication in the scripts under bin/sbin that -Xms/-Xmx are set. If I ps the worker pid, it looks like memory is set to 1G: [hadoop@sl-env1-hadoop1 spark-1.5.2-bin-hadoop2.6]$ ps
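
The 1G seen on the worker pid is most likely the worker daemon's own heap, which is governed by SPARK_DAEMON_MEMORY (default 1g), not by SPARK_WORKER_MEMORY. SPARK_WORKER_MEMORY only caps the total memory the worker may hand out to executors; each executor's heap is requested per application via spark.executor.memory (default 1g). A minimal sketch, assuming a standalone deployment on the default master port; the values and jar name are illustrative:

  # spark-env.sh
  export SPARK_DAEMON_MEMORY=2g    # heap of the worker daemon itself (what ps shows)
  export SPARK_WORKER_MEMORY=8g    # total memory the worker may grant to executors

  # Executor heap is set per application, not by the worker scripts:
  ./bin/spark-submit --master spark://10.52.39.92:7077 \
    --executor-memory 4g \
    my-app.jar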

Lost tasks due to OutOfMemoryError (GC overhead limit exceeded)

2016-01-12 Thread Barak Yaish
Hello, I have a 5-node cluster which hosts both HDFS datanodes and Spark workers. Each node has 8 CPUs and 16G of memory. The Spark version is 1.5.2, and spark-env.sh is as follows:

  export SPARK_MASTER_IP=10.52.39.92
  export SPARK_WORKER_INSTANCES=4
  export SPARK_WORKER_CORES=8
  export SPARK_WORKER_MEMORY=4g
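
This configuration oversubscribes each node: 4 worker instances advertising 8 cores each offers 32 cores on an 8-CPU machine, and 4 x 4g claims all 16G, leaving nothing for the datanode, the worker daemons, or the OS page cache, which is a common trigger for "GC overhead limit exceeded". A more conservative sketch, assuming HDFS and the OS need a few GB to themselves; the exact split is illustrative:

  # spark-env.sh - leave headroom for HDFS and the OS
  export SPARK_MASTER_IP=10.52.39.92
  export SPARK_WORKER_INSTANCES=2    # 2 workers per node instead of 4
  export SPARK_WORKER_CORES=3        # 2 x 3 = 6 of the 8 CPUs for Spark
  export SPARK_WORKER_MEMORY=5g      # 2 x 5g = 10G, keeping ~6G free per node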

Spark 1.5.1 standalone cluster - wrong Akka remoting config?

2015-10-08 Thread Barak Yaish
Taking my first steps with Spark, I'm facing problems submitting jobs to the cluster from application code. Digging through the logs, I noticed periodic WARN messages in the master log: 15/10/08 13:00:00 WARN remote.ReliableDeliverySupervisor: Association with remote system
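
With the Akka-based remoting used through Spark 1.5, this warning usually means the driver advertised a host or port that the master cannot reach back, e.g. when submitting from a multi-homed or firewalled machine. A minimal sketch of pinning the driver's advertised address, assuming a standalone master on the default port 7077; the driver IP and port are illustrative:

  # Make the driver advertise an address and port the cluster can connect back to:
  ./bin/spark-submit --master spark://10.52.39.92:7077 \
    --conf spark.driver.host=10.52.39.100 \
    --conf spark.driver.port=51000 \
    my-app.jar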