Hi All,

I am new to Spark and have installed a Spark cluster on top of my existing Hadoop cluster. I want to process data stored in HDFS through Spark.

When I run my code from Eclipse, it repeatedly prints the following warning:

scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources.

I have made the following changes to the spark-env.sh file:
export SPARK_WORKER_INSTANCES=1
export HADOOP_CONF_DIR=/root/Documents/hadoop/etc/hadoop
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_EXECUTOR_MEMORY=1g
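
For reference, the driver I run from Eclipse is set up roughly like the sketch below (written here in Scala; the master URL, HDFS path and memory/core values are placeholders for my actual settings):

import org.apache.spark.{SparkConf, SparkContext}

object HdfsTest {
  def main(args: Array[String]): Unit = {
    // Placeholder master URL and resource settings for the standalone cluster
    val conf = new SparkConf()
      .setAppName("HdfsTest")
      .setMaster("spark://master-host:7077")   // standalone master URL
      .set("spark.executor.memory", "512m")    // should fit within SPARK_WORKER_MEMORY
      .set("spark.cores.max", "2")             // should not exceed the cores the workers offer
    val sc = new SparkContext(conf)

    // Read a file from HDFS and do a simple count
    val lines = sc.textFile("hdfs://namenode-host:9000/path/to/input.txt")
    println("Line count: " + lines.count())

    sc.stop()
  }
}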

I am running the Spark standalone cluster. The cluster UI shows all workers registered with their allocated resources, but the job still does not run.
What other configuration changes are needed?

Thanks
Madhvi Gupta
