Hi,
I am facing the same problem. My spark-env.sh has the entries below, yet I
see the YARN containers with only 1 GB each, and YARN spawns only two workers.
SPARK_EXECUTOR_CORES=1
SPARK_EXECUTOR_MEMORY=3G
SPARK_EXECUTOR_INSTANCES=5
Please let me know if you were able to resolve this issue.
Thank you
--
Did you `export` the environment variables? Also, are you running in client
mode or cluster mode? If it still doesn't work, you can try setting these
through the spark-submit command-line flags --num-executors,
--executor-cores, and --executor-memory.
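
For example, a minimal sketch assuming yarn-client mode (the jar name, class
name, and resource values are placeholders):

  # In spark-env.sh, the variables must be exported to reach the launcher:
  export SPARK_EXECUTOR_INSTANCES=5
  export SPARK_EXECUTOR_MEMORY=3G
  export SPARK_EXECUTOR_CORES=1

  # Or pass the equivalent flags directly to spark-submit:
  spark-submit --master yarn-client \
    --num-executors 5 \
    --executor-memory 3G \
    --executor-cores 1 \
    --class com.example.MyApp myapp.jar

The command-line flags take precedence over the environment variables, so
they are a quick way to rule out a problem in spark-env.sh.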
2014-10-23 19:25 GMT-07:00 firemonk9:
Hi Sophia, did you ever resolve this?
A common cause of a job not being allocated resources is that the
ResourceManager (RM) cannot communicate with the workers.
This itself has many possible causes. Do you have a full stack trace from
the logs?
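
One quick way to check, assuming the YARN CLI is available on the client
(the application ID below is a placeholder, and `yarn logs` requires log
aggregation to be enabled):

  # List the NodeManagers the RM knows about, including unhealthy/lost ones
  yarn node -list -all

  # Pull the aggregated container logs for a finished application
  yarn logs -applicationId application_1400000000000_0001

If `yarn node -list -all` shows nodes in a LOST or UNHEALTHY state, the RM
is not hearing from those NodeManagers.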
Andrew
2014-06-13 0:46 GMT-07:00 Sophia sln-1...@163.com:
With yarn-client mode, I submit a job from the client to YARN. My
spark-env.sh file contains:
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
SPARK_EXECUTOR_INSTANCES=4
SPARK_EXECUTOR_CORES=1
SPARK_EXECUTOR_MEMORY=1G
SPARK_DRIVER_MEMORY=2G