Hi,

        I can run Spark trunk code on top of YARN 2.0.5-alpha with:

SPARK_JAR=./core/target/spark-core-assembly-0.8.0-SNAPSHOT.jar ./run spark.deploy.yarn.Client \
  --jar examples/target/scala-2.9.3/spark-examples_2.9.3-0.8.0-SNAPSHOT.jar \
  --class spark.examples.SparkPi \
  --args yarn-standalone \
  --num-workers 3 \
  --worker-memory 2g \
  --worker-cores 2


However, if I use make-distribution.sh to build a release package and run that 
package on the cluster, it fails to start. I did copy the examples jar into the 
jars/ dir.
The other modes (standalone/mesos/local) run fine with the release package.

The error encountered is:

Exception in thread "main" java.io.IOException: No FileSystem for scheme: hdfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2265)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2272)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:86)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2311)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2293)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:317)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:163)
        at spark.deploy.yarn.Client.prepareLocalResources(Client.scala:117)
        at spark.deploy.yarn.Client.run(Client.scala:59)
        at spark.deploy.yarn.Client$.main(Client.scala:318)
        at spark.deploy.yarn.Client.main(Client.scala)


Google results point to Hadoop's core-default.xml not being included in the fat 
jar, but I checked and it is included.
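For reference, a commonly suggested workaround for "No FileSystem for scheme: hdfs" on Hadoop 2.x is to pin the HDFS implementation class explicitly in core-site.xml. The idea (an assumption in my case, not verified) is that the assembly step can drop or overwrite the META-INF/services/org.apache.hadoop.fs.FileSystem entry that Hadoop 2.x uses to discover filesystems, and setting the property bypasses that lookup:

```xml
<!-- core-site.xml: explicitly register the HDFS implementation.
     Commonly suggested workaround; assumes the fat jar lost the
     META-INF/services/org.apache.hadoop.fs.FileSystem service file
     that ServiceLoader-based discovery relies on. -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
```

I have not confirmed whether this applies to the make-distribution.sh package, though.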
Any idea on this issue? Thanks!


Best Regards,
Raymond Liu
