On Tuesday 21 April 2015 12:12 PM, Akhil Das wrote:
Your Spark master should be spark://swetha:7077 :)

Thanks
Best Regards

On Mon, Apr 20, 2015 at 2:44 PM, madhvi <madhvi.gu...@orkash.com> wrote:

    PFA screenshot of my cluster UI

    Thanks
    On Monday 20 April 2015 02:27 PM, Akhil Das wrote:
    Are you seeing your task being submitted in the UI, under
    completed or running tasks? How many resources are you allocating
    for your job? Can you share a screenshot of your cluster UI and
    the code snippet that you are trying to run?

    Thanks
    Best Regards

    On Mon, Apr 20, 2015 at 12:37 PM, madhvi <madhvi.gu...@orkash.com> wrote:

        Hi,

        I did as you suggested, but now it is giving the following error:
        ERROR TaskSchedulerImpl: Exiting due to error from cluster
        scheduler: All masters are unresponsive! Giving up.

        On the UI it is showing that the master is working.

        Thanks
        Madhvi

        On Monday 20 April 2015 12:28 PM, Akhil Das wrote:
        In Eclipse, when you create your SparkContext, set the master
        URI to the one shown in the top-left corner of the web UI, e.g.
        spark://someIPorHost:7077, and it should be fine.
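
        For example, a minimal sketch (the app name below is a placeholder):

        import org.apache.spark.{SparkConf, SparkContext}

        // Point the driver at the standalone master shown in the web UI header.
        val conf = new SparkConf()
          .setAppName("MyApp") // placeholder app name
          .setMaster("spark://someIPorHost:7077")
        val sc = new SparkContext(conf)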

        Thanks
        Best Regards

        On Mon, Apr 20, 2015 at 12:22 PM, madhvi <madhvi.gu...@orkash.com> wrote:

            Hi All,

            I am new to Spark and have installed a Spark cluster on a
            system that already runs a Hadoop cluster. I want to process
            data stored in HDFS through Spark.
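
            The code is along the lines of this minimal sketch (the app
            name, master host, namenode address, and file path below are
            placeholders):

            import org.apache.spark.{SparkConf, SparkContext}

            val conf = new SparkConf()
              .setAppName("HdfsLineCount") // placeholder
              .setMaster("spark://<master-host>:7077")
            val sc = new SparkContext(conf)

            // Read a text file from HDFS and count its lines.
            val lines = sc.textFile("hdfs://<namenode>:8020/user/data/input.txt")
            println(lines.count())
            sc.stop()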

            When I run the code in Eclipse, it repeatedly gives the
            following warning:
            scheduler.TaskSchedulerImpl: Initial job has not accepted
            any resources; check your cluster UI to ensure that workers
            are registered and have sufficient resources.

            I have made the following changes to the spark-env.sh file:
            export SPARK_WORKER_INSTANCES=1
            export HADOOP_CONF_DIR=/root/Documents/hadoop/etc/hadoop
            export SPARK_WORKER_MEMORY=1g
            export SPARK_WORKER_CORES=2
            export SPARK_EXECUTOR_MEMORY=1g

            I am running the Spark standalone cluster. The cluster UI
            shows all workers with their allocated resources, but it is
            still not working. What other configurations need to be
            changed?

            Thanks
            Madhvi Gupta

            


Thanks Akhil,

It worked fine after replacing the IP with the hostname, building a jar of the code, and running it with spark-submit.
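
The submit command was along these lines (the class name and jar path are placeholders):

spark-submit \
  --class com.example.MyApp \
  --master spark://swetha:7077 \
  target/myapp.jar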

Madhvi
