I guess I was adding some bad jars. I deleted all the jars, copied
them over again, and now it works.
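
For anyone else who hits this: the trace below dies in
prepareLocalResources, which as far as I can tell walks the
comma-separated --jars/--files lists and uploads each entry, so a single
empty or corrupt entry is enough to kill the submit. Here is a rough
sketch of the kind of pre-flight check that would have caught my bad
jars (the object name and the jar list are placeholders for my own
setup, not anything Spark provides):

    import java.util.jar.JarFile

    // Hypothetical pre-flight check, not part of Spark: verify that every
    // entry in a comma-separated jar list is non-empty and opens cleanly.
    object CheckJars {
      def main(args: Array[String]): Unit = {
        // same comma-separated form as spark-submit's --jars value
        val jars = "/user/local/etc/lib/my-spark-streaming-scala.jar"
        // split with limit -1 so empty entries (e.g. a stray comma) survive
        jars.split(",", -1).foreach { entry =>
          require(entry.trim.nonEmpty, "empty entry in the jar list")
          new JarFile(entry.trim).close() // throws ZipException if the jar is corrupt
          println("OK: " + entry.trim)
        }
      }
    }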

2015-01-08 14:15 GMT+01:00 Guillermo Ortiz <konstt2...@gmail.com>:
> When I try to execute my task with Spark, it starts copying the jars it
> needs to HDFS and then fails, and I don't know exactly why. I have
> checked HDFS and the files are there, so that part seems to work.
> I changed the log level to debug, but there's nothing else that helps.
> What else does Spark need to copy that could be an empty string?
>
> Spark assembly has been built with Hive, including Datanucleus jars on classpath
> 15/01/08 14:06:32 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 15/01/08 14:06:32 INFO RMProxy: Connecting to ResourceManager at vmlbyarnl01.lvtc.gsnet.corp/180.133.240.174:8050
> 15/01/08 14:06:33 INFO Client: Got cluster metric info from ResourceManager, number of NodeManagers: 3
> 15/01/08 14:06:33 INFO Client: Max mem capabililty of a single resource in this cluster 97280
> 15/01/08 14:06:33 INFO Client: Preparing Local resources
> 15/01/08 14:06:34 WARN BlockReaderLocal: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 15/01/08 14:06:34 INFO Client: Uploading file:/home/spark-1.1.1-bin-hadoop2.4/lib/spark-assembly-1.1.1-hadoop2.4.0.jar to hdfs://vmlbnanodl01.lvtc.gsnet.corp:8020/user/hdfs/.sparkStaging/application_1417607109980_0017/spark-assembly-1.1.1-hadoop2.4.0.jar
> 15/01/08 14:06:42 INFO Client: Uploading file:/user/local/etc/lib/my-spark-streaming-scala.jar to hdfs://vmlbnanodl01.lvtc.gsnet.corp:8020/user/hdfs/.sparkStaging/application_1417607109980_0017/my-spark-streaming-scala.jar
> Exception in thread "main" java.lang.IllegalArgumentException: Can not create a Path from an empty string
>         at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
>         at org.apache.hadoop.fs.Path.<init>(Path.java:135)
>         at org.apache.hadoop.fs.Path.<init>(Path.java:94)
>         at org.apache.spark.deploy.yarn.ClientBase$class.copyRemoteFile(ClientBase.scala:159)
>         at org.apache.spark.deploy.yarn.Client.copyRemoteFile(Client.scala:37)
>         at org.apache.spark.deploy.yarn.ClientBase$$anonfun$prepareLocalResources$5$$anonfun$apply$2.apply(ClientBase.scala:236)
>         at org.apache.spark.deploy.yarn.ClientBase$$anonfun$prepareLocalResources$5$$anonfun$apply$2.apply(ClientBase.scala:231)
>         at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>         at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
>         at org.apache.spark.deploy.yarn.ClientBase$$anonfun$prepareLocalResources$5.apply(ClientBase.scala:231)
>         at org.apache.spark.deploy.yarn.ClientBase$$anonfun$prepareLocalResources$5.apply(ClientBase.scala:229)
>         at scala.collection.immutable.List.foreach(List.scala:318)
>         at org.apache.spark.deploy.yarn.ClientBase$class.prepareLocalResources(ClientBase.scala:229)
>         at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:37)
>         at org.apache.spark.deploy.yarn.Client.runApp(Client.scala:74)
>         at org.apache.spark.deploy.yarn.Client.run(Client.scala:96)
>         at org.apache.spark.deploy.yarn.Client$.main(Client.scala:176)
>         at org.apache.spark.deploy.yarn.Client.main(Client.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:329)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
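
For what it's worth, the message itself just comes from Hadoop's Path
constructor (Path.checkPathArg in the trace above), which rejects an
empty string outright. A minimal standalone repro, assuming
hadoop-common is on the classpath:

    import org.apache.hadoop.fs.Path

    object EmptyPathRepro {
      def main(args: Array[String]): Unit = {
        // Fails exactly like the submit above:
        // java.lang.IllegalArgumentException: Can not create a Path from an empty string
        new Path("")
      }
    }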
