I think this is already fixed, if your problem is exactly the same as the one
described in this JIRA (https://issues.apache.org/jira/browse/SPARK-14423).
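
If upgrading to a release with that fix is not an option, my guess from your
arguments is that examples-new.jar is being shipped to the staging directory
twice (once as the application jar and once via --jars), so the two uploads end
up with different timestamps and YARN's "changed on src filesystem" check
fails. A possible workaround, untested and assuming that is the cause, is to
pass the HDFS copy as the application jar and drop the --jars and
extraClassPath entries, since the application jar is already placed on the
driver and executor classpaths:

  # sketch only -- adjust paths/queue to your environment
  spark-submit \
    --master yarn-cluster \
    --name Spark-FileCopy \
    --class my.example.SparkFileCopy \
    --properties-file spark-defaults.conf \
    --queue saleyq \
    --executor-memory 1G \
    --driver-memory 1G \
    --conf spark.john.snow.is.back=true \
    --verbose \
    hdfs://myclusternn.com:8020/tmp/saley/examples/examples-new.jar \
    hdfs://myclusternn.com:8020/tmp/saley/examples/input-data/text/data.txt \
    hdfs://myclusternn.com:8020/tmp/saley/examples/output-data/spark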

Thanks
Jerry

On Wed, May 18, 2016 at 2:46 AM, satish saley <satishsale...@gmail.com>
wrote:

> Hello,
> I am executing a simple job in yarn-cluster mode with the following arguments:
>
> --master
> yarn-cluster
> --name
> Spark-FileCopy
> --class
> my.example.SparkFileCopy
> --properties-file
> spark-defaults.conf
> --queue
> saleyq
> --executor-memory
> 1G
> --driver-memory
> 1G
> --conf
> spark.john.snow.is.back=true
> --jars
> hdfs://myclusternn.com:8020/tmp/saley/examples/examples-new.jar
> --conf
> spark.executor.extraClassPath=examples-new.jar
> --conf
> spark.driver.extraClassPath=examples-new.jar
> --verbose
> examples-new.jar
> hdfs://myclusternn.com:8020/tmp/saley/examples/input-data/text/data.txt
> hdfs://myclusternn.com:8020/tmp/saley/examples/output-data/spark
>
>
> I am facing the following error:
>
> java.io.IOException: Resource
> hdfs://myclusternn.com/user/saley/.sparkStaging/application_5181/examples-new.jar
> changed on src filesystem (expected 1463440119942, was 1463440119989)
>
> I see a JIRA for this:
> https://issues.apache.org/jira/browse/SPARK-1921
>
> Is this yet to be fixed, or has it been fixed as part of another JIRA and
> just needs some additional configuration?
>
