Hello,
Although I'm setting SPARK_WORKER_MEMORY in spark-env.sh, it looks like this
setting is ignored. I can't find any indication in the scripts under
bin/sbin that -Xms/-Xmx are set.
If I ps the worker pid, it looks like the memory is set to 1G:
[hadoop@sl-env1-hadoop1 spark-1.5.2-bin-hadoop2.6]$ ps
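One way to see which heap flags the worker JVM actually received is to pull the -Xms/-Xmx arguments out of the full command line that ps reports. A minimal sketch; the command line below is a stand-in for real `ps -fp <worker pid>` output, not output from this cluster:

```shell
# Stand-in for a worker's command line as `ps -fp <pid>` might report it
cmdline="java -cp /opt/spark/conf -Xms1g -Xmx1g org.apache.spark.deploy.worker.Worker"

# Extract the heap flags to confirm what the JVM was actually started with
echo "$cmdline" | grep -o -- '-Xm[sx][^ ]*'
```

If the worker was launched with the defaults rather than SPARK_WORKER_MEMORY, this is where the 1G would show up.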
Hello,
I have a 5-node cluster which hosts both HDFS datanodes and Spark workers.
Each node has 8 CPUs and 16 GB of memory. The Spark version is 1.5.2, and
spark-env.sh is as follows:
export SPARK_MASTER_IP=10.52.39.92
export SPARK_WORKER_INSTANCES=4
export SPARK_WORKER_CORES=8
export SPARK_WORKER_MEMORY=4g
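As a side note, it may be worth sanity-checking what these settings add up to per node. A quick back-of-the-envelope calculation, using the values above together with the node sizes mentioned earlier:

```shell
# Per-node totals implied by the spark-env.sh above (8-CPU / 16 GB nodes)
instances=4        # SPARK_WORKER_INSTANCES
cores_per_worker=8 # SPARK_WORKER_CORES
mem_per_worker=4   # SPARK_WORKER_MEMORY, in GB

echo "cores offered per node:  $((instances * cores_per_worker))"  # 32 cores on an 8-CPU node
echo "memory offered per node: $((instances * mem_per_worker))g"   # 16g, the node's entire RAM
```

With these numbers the workers alone could claim all 16 GB of a node, leaving nothing for the datanode or the OS, which is worth keeping in mind independently of whether the setting is being picked up.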
Taking my first steps with Spark, I'm facing problems submitting jobs to the
cluster from the application code. Digging through the logs, I noticed some
periodic WARN messages in the master log:
15/10/08 13:00:00 WARN remote.ReliableDeliverySupervisor: Association with
remote system