Thanks! I will try that. I guess what I am most confused about is why the executors try to retrieve the jars directly using the info I provided to add jars to my Spark context. I mean, that's bound to fail, no? I could be on a different machine (so my file:// URI isn't going to work for them), or I could have the jars in a directory that is only readable by me.
How come the jars are not just shipped to YARN as part of the job submittal? I am worried I am supposed to put the jars in a "central" location and YARN is going to fetch them from there, leading to jars in yet another place, such as on HDFS, which I find pretty messy.

On Thu, Jun 19, 2014 at 2:54 PM, Marcelo Vanzin <van...@cloudera.com> wrote:
> Coincidentally, I just ran into the same exception. What's probably
> happening is that you're specifying some jar file in your job as an
> absolute local path (e.g. just
> "/home/koert/test-assembly-0.1-SNAPSHOT.jar"), but your Hadoop config
> has the default FS set to HDFS.
>
> So your driver does not know that it should tell executors to download
> that file from the driver.
>
> If you specify the jar with the "file:" scheme that should solve the
> problem.
>
> On Thu, Jun 19, 2014 at 10:22 AM, Koert Kuipers <ko...@tresata.com> wrote:
> > i am trying to understand how yarn-client mode works. i am not using
> > Application application_1403117970283_0014 failed 2 times due to AM
> > Container for appattempt_1403117970283_0014_000002 exited with exitCode:
> > -1000 due to: File file:/home/koert/test-assembly-0.1-SNAPSHOT.jar does not
> > exist
> > .Failing this attempt.. Failing the application.
>
> --
> Marcelo
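For reference, Marcelo's suggested fix can be applied at submit time. A minimal sketch, assuming yarn-client mode and using the jar path from the error above (the `--class` name is hypothetical; substitute your application's main class):

```shell
# The explicit "file:" scheme marks the jar as local to the submitting
# (driver) machine, so Spark knows to distribute it to the executors
# itself rather than expecting the path to resolve on the default FS
# (which here is HDFS, per the Hadoop config).
spark-submit \
  --master yarn-client \
  --class com.example.MyApp \
  file:///home/koert/test-assembly-0.1-SNAPSHOT.jar
```

The same applies to any extra jars passed via `--jars` or added to the SparkContext: without a scheme, a bare absolute path is resolved against the default filesystem, which is why the AM container fails with "File ... does not exist".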