[ 
https://issues.apache.org/jira/browse/SPARK-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363227#comment-14363227
 ] 

Jesper Lundgren edited comment on SPARK-6355 at 3/16/15 2:14 PM:
-----------------------------------------------------------------

[~srowen] Thank you for your reply. 
I submit with: spark-submit --class class.Main local:/application.jar
https://spark.apache.org/docs/1.2.1/submitting-applications.html, under 
"Advanced Dependency Management", mentions that local:/ can be used when a jar 
has already been distributed to each node, instead of uploading it via the 
built-in file server. Maybe I am misunderstanding, but I believe it is meant to 
work for the main application jar as well as for the --jars option.
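Concretely, the two submission variants look roughly like this (the master URL is a placeholder, and cluster deploy mode is my assumption, inferred from the DriverRunner stack trace in the issue description):

```shell
# Default submission: application.jar is shipped to the cluster through
# Spark's built-in file server (spark://master:7077 is a placeholder).
spark-submit \
  --master spark://master:7077 \
  --deploy-mode cluster \
  --class class.Main \
  /path/to/application.jar

# What I am trying instead, per "Advanced Dependency Management":
# local:/ should mean the jar is already present at this path on every
# node, so no file-server upload is needed. This is the variant that fails.
spark-submit \
  --master spark://master:7077 \
  --deploy-mode cluster \
  --class class.Main \
  local:/application.jar
```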

I am running a standalone cluster with ZooKeeper HA, and on occasion 
applications have crashed on restart because the Spark file server was 
unavailable to distribute the jar to the worker nodes (I can't reliably 
reproduce this yet). I intended to use local:/ as a workaround, but it seems 
this option does not work in standalone cluster mode.



> Spark standalone cluster does not support local:/ url for jar file
> ------------------------------------------------------------------
>
>                 Key: SPARK-6355
>                 URL: https://issues.apache.org/jira/browse/SPARK-6355
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.3.0, 1.2.1
>            Reporter: Jesper Lundgren
>
> Submitting a new spark application to a standalone cluster with local:/path 
> will result in an exception.
> Driver successfully submitted as driver-20150316171157-0004
> ... waiting before polling master for driver state
> ... polling master for driver state
> State of driver-20150316171157-0004 is ERROR
> Exception from cluster was: java.io.IOException: No FileSystem for scheme: local
> java.io.IOException: No FileSystem for scheme: local
>       at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
>       at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
>       at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
>       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>       at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>       at org.apache.spark.deploy.worker.DriverRunner.org$apache$spark$deploy$worker$DriverRunner$$downloadUserJar(DriverRunner.scala:141)
>       at org.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:75)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
