No, I am not getting any task on the UI I am running. Also, I have set instances=1, but the UI is showing 2 workers. I am running the Java word count code exactly as it is, except that the text file is in HDFS. Following is the part of my code where I make the connection:

SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
sparkConf.setMaster("spark://192.168.0.119:7077");
JavaSparkContext ctx = new JavaSparkContext(sparkConf);

Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://192.168.0.119:9000");
FileSystem dfs = FileSystem.get(conf);

JavaRDD<String> lines = ctx.textFile(dfs.getWorkingDirectory() + "/spark.txt", 1);
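
For comparison, here is a minimal self-contained sketch of the same read that passes the full hdfs:// URI straight to textFile, so no separate Hadoop Configuration or FileSystem object is needed (the /user/madhvi path below is only a placeholder, not my actual path):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class JavaWordCountRead {
    public static void main(String[] args) {
        // Point the driver at the standalone master shown on the web UI.
        SparkConf sparkConf = new SparkConf()
                .setAppName("JavaWordCount")
                .setMaster("spark://192.168.0.119:7077");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);

        // Read the file through its full HDFS URI; /user/madhvi is a placeholder path.
        JavaRDD<String> lines = ctx.textFile("hdfs://192.168.0.119:9000/user/madhvi/spark.txt", 1);
        System.out.println("line count: " + lines.count());

        ctx.stop();
    }
}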

Thanks
On Monday 20 April 2015 02:27 PM, Akhil Das wrote:
Are you seeing your task being submitted in the UI, under completed or running tasks? How many resources are you allocating to your job? Can you share a screenshot of your cluster UI and the code snippet that you are trying to run?

Thanks
Best Regards

On Mon, Apr 20, 2015 at 12:37 PM, madhvi <madhvi.gu...@orkash.com> wrote:

    Hi,

    I did the same as you told me, but now it is giving the following error:
    ERROR TaskSchedulerImpl: Exiting due to error from cluster
    scheduler: All masters are unresponsive! Giving up.

    On the UI it shows that the master is running.

    Thanks
    Madhvi

    On Monday 20 April 2015 12:28 PM, Akhil Das wrote:
    In Eclipse, when you create your SparkContext, set the master URI
    as shown in the top-left corner of the web UI, e.g.
    spark://someIPorHost:7077, and it should be fine.
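
    For instance, a minimal sketch (spark://someIPorHost:7077 below is a
    placeholder; copy the exact URI from the top of your own master UI):

        SparkConf conf = new SparkConf()
                .setAppName("JavaWordCount")
                // placeholder URI; use the one shown on your master's web UI
                .setMaster("spark://someIPorHost:7077");
        JavaSparkContext ctx = new JavaSparkContext(conf);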

    Thanks
    Best Regards

    On Mon, Apr 20, 2015 at 12:22 PM, madhvi <madhvi.gu...@orkash.com> wrote:

        Hi All,

        I am new to Spark and have installed a Spark cluster on the
        same system as my Hadoop cluster. I want to process data stored
        in HDFS through Spark.

        When I run the code from Eclipse, it repeatedly gives the
        following warning:
        scheduler.TaskSchedulerImpl: Initial job has not accepted any
        resources; check your cluster UI to ensure that workers are
        registered and have sufficient resources.

        I have made the following changes in the spark-env.sh file:
        export SPARK_WORKER_INSTANCES=1
        export HADOOP_CONF_DIR=/root/Documents/hadoop/etc/hadoop
        export SPARK_WORKER_MEMORY=1g
        export SPARK_WORKER_CORES=2
        export SPARK_EXECUTOR_MEMORY=1g
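
        For reference, a hedged sketch of how executor memory and cores
        can also be requested per application from the driver side
        (spark.executor.memory and spark.cores.max are standard Spark
        properties; the values below are examples only):

        SparkConf sparkConf = new SparkConf()
                .setAppName("JavaWordCount")
                // master URI as shown on the cluster web UI
                .setMaster("spark://192.168.0.119:7077")
                // example values only; adjust to what the workers actually offer
                .set("spark.executor.memory", "512m")
                .set("spark.cores.max", "2");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);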

        I am running a Spark standalone cluster. The cluster UI shows
        all workers with allocated resources, but it still does not
        work. What other configurations need to be changed?

        Thanks
        Madhvi Gupta

        ---------------------------------------------------------------------
        To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
        For additional commands, e-mail: user-h...@spark.apache.org




