Hi,

I did as you told, but now it is giving the following error:
ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.

On the UI it shows that the master is working.

Thanks
Madhvi
On Monday 20 April 2015 12:28 PM, Akhil Das wrote:
In Eclipse, when you create your SparkContext, set the master URI to the one shown in the web UI's top-left corner, e.g. spark://someIPorHost:7077, and it should be fine.
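
For example, a minimal sketch (the host and app name below are placeholders; paste the exact URL from your master's web UI):

    import org.apache.spark.{SparkConf, SparkContext}

    // "spark://someIPorHost:7077" is a placeholder; use the URL shown
    // at the top left of the master web UI verbatim.
    val conf = new SparkConf()
      .setAppName("MyApp")
      .setMaster("spark://someIPorHost:7077")
    val sc = new SparkContext(conf)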

Thanks
Best Regards

On Mon, Apr 20, 2015 at 12:22 PM, madhvi <madhvi.gu...@orkash.com> wrote:

    Hi All,

    I am new to Spark and have installed a Spark cluster on a system that
    already runs a Hadoop cluster. I want to process data stored in HDFS
    through Spark.
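
    Roughly, the code I am trying to run looks like the following sketch
    (the master URL, namenode host, and path are placeholders for my
    actual setup):

        import org.apache.spark.{SparkConf, SparkContext}

        object HdfsTest {
          def main(args: Array[String]): Unit = {
            // Placeholders: the master URL and HDFS path point at my cluster.
            val conf = new SparkConf()
              .setAppName("HdfsTest")
              .setMaster("spark://master-host:7077")
            val sc = new SparkContext(conf)

            // Read a text file from HDFS and count its lines.
            val lines = sc.textFile("hdfs://namenode:9000/path/to/input")
            println("Line count: " + lines.count())
            sc.stop()
          }
        }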

    When I run my code in Eclipse it gives the following warning
    repeatedly:
    scheduler.TaskSchedulerImpl: Initial job has not accepted any
    resources; check your cluster UI to ensure that workers are
    registered and have sufficient resources.

    I have made the following changes to the spark-env.sh file:
    export SPARK_WORKER_INSTANCES=1
    export HADOOP_CONF_DIR=/root/Documents/hadoop/etc/hadoop
    export SPARK_WORKER_MEMORY=1g
    export SPARK_WORKER_CORES=2
    export SPARK_EXECUTOR_MEMORY=1g
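
    (Note: as far as I understand, spark-env.sh changes only take effect
    after restarting the cluster, e.g. with the standard scripts, where
    SPARK_HOME is the Spark installation directory:

        $SPARK_HOME/sbin/stop-all.sh   # stop the master and all workers
        $SPARK_HOME/sbin/start-all.sh  # start them with the new settings

    I did this before re-running the job.)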

    I am running the Spark standalone cluster. The cluster UI shows all
    workers with allocated resources, but it is still not working. What
    other configurations need to be changed?

    Thanks
    Madhvi Gupta

    ---------------------------------------------------------------------
    To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
    For additional commands, e-mail: user-h...@spark.apache.org


