Re: yarn SPARK_CLASSPATH

2014-01-14 Thread Tom Graves
The right way to set up yarn/hadoop is tricky, as it's really very dependent upon 
how you use it. 
 
Since HBase is a hadoop service, you might just add it to your hadoop config 
yarn.application.classpath and have it on the classpath for all 
users/applications of that grid.  That way you are treating it like how it 
picks up the HDFS jars.  Not sure if you have control over that, though?  The 
risk of doing this is that there may be dependency conflicts or versioning issues.  
If you have other applications like MapReduce that use HBase then it might make 
sense to do this.
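
For example, a rough sketch of what that yarn-site.xml entry could look like (the 
HBase paths below are just placeholders for wherever HBase lives on your grid, 
appended after whatever hadoop/yarn entries you already have):

<property>
  <name>yarn.application.classpath</name>
  <value>...your existing hadoop/yarn entries...,/usr/lib/hbase/*,/usr/lib/hbase/lib/*,/etc/hbase/conf</value>
</property>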

The other option is to modify the spark on yarn code to have it add it to the 
classpath for you, either by adding whatever is in the SPARK_CLASSPATH or by 
using a separate variable.  Even though we wouldn't use it, I can see it being 
useful for some installations. 

Tom



On Monday, January 13, 2014 5:58 PM, Eric K lekimb...@gmail.com wrote:
 
Thanks for that extra insight about yarn.  I am new to the whole yarn 
ecosystem, so I've been having trouble figuring out the right way to do some 
things.  Sounds like even though the jars are already installed as part of our 
cluster on all the nodes, I should just go ahead and add them with the --files 
method to simplify things and avoid having them added for all applications.

Thanks




On Mon, Jan 13, 2014 at 3:01 PM, Tom Graves tgraves...@yahoo.com wrote:

I'm assuming you actually installed the jar on all the yarn cluster nodes then?


In general this isn't a good idea on yarn, as most users don't have permissions 
to install things on the nodes themselves.  The idea is that Yarn provides a 
certain set of jars, which really should be just the yarn/hadoop framework; it 
adds those to your classpath, and the user provides everything else application 
specific when they submit their application, and those get distributed with the 
app and added to the classpath.   If you are worried about it being downloaded 
every time, you can use the public distributed cache on yarn as a way to 
distribute it and share it.  It will only be removed from that node's 
distributed cache if other applications need that space.


That said, what yarn adds to the classpath is configurable via the hadoop 
configuration file yarn-site.xml, config name: yarn.application.classpath.  So 
you can change the config to add it, but it will be added for all types of 
applications. 


You can use the --files and --archives options in yarn-standalone mode to use 
the distributed cache.  To make it public, make sure permissions on the file 
are set appropriately.


Tom



On Monday, January 13, 2014 3:49 PM, Eric Kimbrel lekimb...@gmail.com wrote:
 
Is there any extra trick required to use jars on the SPARK_CLASSPATH when 
running spark on yarn?

I have several jars added to the SPARK_CLASSPATH in spark-env.sh.  When my job 
runs I print the SPARK_CLASSPATH, so I can see that the jars were added to the 
environment that the app master is running in; however, even though the jars are 
on the class path, I continue to get class not found errors.

I have also tried setting SPARK_CLASSPATH via SPARK_YARN_USER_ENV.



Re: yarn SPARK_CLASSPATH

2014-01-13 Thread Tom Graves
I'm assuming you actually installed the jar on all the yarn cluster nodes then?

In general this isn't a good idea on yarn, as most users don't have permissions 
to install things on the nodes themselves.  The idea is that Yarn provides a 
certain set of jars, which really should be just the yarn/hadoop framework; it 
adds those to your classpath, and the user provides everything else application 
specific when they submit their application, and those get distributed with the 
app and added to the classpath.   If you are worried about it being downloaded 
every time, you can use the public distributed cache on yarn as a way to 
distribute it and share it.  It will only be removed from that node's 
distributed cache if other applications need that space.

That said, what yarn adds to the classpath is configurable via the hadoop 
configuration file yarn-site.xml, config name: yarn.application.classpath.  So 
you can change the config to add it, but it will be added for all types of 
applications. 

You can use the --files and --archives options in yarn-standalone mode to use 
the distributed cache.  To make it public, make sure permissions on the file 
are set appropriately.
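
For example, a rough sketch using the same Client invocation style shown 
elsewhere in these threads (the app jar, class, and file names below are just 
placeholders):

SPARK_JAR=<path-to-spark-assembly-jar> ./spark-class org.apache.spark.deploy.yarn.Client \
  --jar your-app.jar --class your.MainClass --args yarn-standalone \
  --num-workers 2 --master-memory 2g --worker-memory 2g --worker-cores 1 \
  --files /path/to/extra-config.xml --archives /path/to/extra-libs.tgz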

Tom



On Monday, January 13, 2014 3:49 PM, Eric Kimbrel lekimb...@gmail.com wrote:
 
Is there any extra trick required to use jars on the SPARK_CLASSPATH when 
running spark on yarn?

I have several jars added to the SPARK_CLASSPATH in spark-env.sh.  When my job 
runs I print the SPARK_CLASSPATH, so I can see that the jars were added to the 
environment that the app master is running in; however, even though the jars are 
on the class path, I continue to get class not found errors.

I have also tried setting SPARK_CLASSPATH via SPARK_YARN_USER_ENV.

Re: Run Spark on Yarn Remotely

2013-12-16 Thread Tom Graves
The hadoop conf dir is what controls which YARN cluster it goes to, so it's a 
matter of putting in the correct configs for the cluster you want it to go to. 

You have to execute org.apache.spark.deploy.yarn.Client or your application 
will not run on yarn in standalone mode.   The client is what has the logic to 
submit it to yarn and start it under yarn.   Your application code just gets 
started in a thread under the YARN application master. 
If you export SPARK_PRINT_LAUNCH_COMMAND=1 when you run the spark-class command, 
you can see the java command it executes.  
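
For example (the app jar and class below are placeholders):

export SPARK_PRINT_LAUNCH_COMMAND=1
SPARK_JAR=<path-to-spark-assembly-jar> ./spark-class org.apache.spark.deploy.yarn.Client \
  --jar your-app.jar --class your.MainClass --args yarn-standalone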

Note that the spark on yarn standalone (yarn-standalone) model is more of a 
batch mode where you are expected to submit your pre-defined application, it 
runs for a certain (relatively short) period, and then it exits.  It's not 
really for long lived things, interactive querying, or the shark server model 
where you submit multiple things to the same spark context.  In the 0.8.1 
release there is a client mode for yarn that will let you run spark shell and 
may fit your use case better.   
https://github.com/apache/incubator-spark/blob/branch-0.8/docs/running-on-yarn.md
 - look at the yarn-client mode.
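
Roughly, from that doc, running the shell against yarn looks like this (the jar 
paths are placeholders for your own build and app; check the doc for the exact 
env vars for your version):

SPARK_JAR=<path-to-spark-assembly-jar> \
SPARK_YARN_APP_JAR=<path-to-your-app-jar> \
MASTER=yarn-client ./spark-shell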

Tom




On Monday, December 16, 2013 10:02 AM, Karavany, Ido ido.karav...@intel.com 
wrote:
 
 
 Hi All,
 
We’ve started deploying spark on Hadoop 2 and Yarn. Our previous 
configuration (still not a production cluster) was Spark on Mesos.
 
We’re running a java application (which runs from a tomcat server). The 
application builds a singleton java spark context when it is first launched, and 
then all users’ requests are executed using this same spark context.
 
With Mesos, creating the context involved a few simple operations and was 
possible via the java application.
 
I successfully executed the Spark on Yarn example and even my own example 
(although I was unable to find the output logs).
I noticed that it is being done using org.apache.spark.deploy.yarn.Client but I 
have no example of how it can be done.
 
Successful command:
 
SPARK_JAR=/app/spark-0.8.0-incubating/assembly/target/scala-2.9.3/spark-assembly-0.8.0-incubating-hadoop2.0.4-Intel.jar \
  ./spark-class org.apache.spark.deploy.yarn.Client \
  --jar /app/iot/test/test3-0.0.1-SNAPSHOT.jar \
  --class test3.yarntest \
  --args yarn-standalone \
  --num-workers 3 \
  --master-memory 4g \
  --worker-memory 2g \
  --worker-cores
 
 
When I try to emulate the previous method we used and simply execute my test 
jar, the execution hangs.
 
Our main goal is to be able to execute a spark context on yarn from java code 
(and not a shell script) and create a singleton spark context.
In addition, the application should be executed on a remote YARN server and not 
on a local one.
 
Can you please advise?
 
Thanks,
Ido
 
 
 
 
 
Problematic Command:
 
/usr/java/latest/bin/java -cp 
/usr/lib/hbase/hbase-0.94.7-Intel.jar:/usr/lib/hadoop/hadoop-auth-2.0.4-Intel.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/app/spark-0.8.0-incubating/conf:/app/spark-0.8.0-incubating/assembly/target/scala-2.9.3/spark-assembly-0.8.0-incubating-hadoop2.0.4-Intel.jar:/etc/hadoop/conf:/etc/hbase/conf:/etc/hadoop/conf:/app/iot/test/test3-0.0.1-SNAPSHOT.jar
-Djava.library.path=/usr/lib/hadoop/lib/native -Xms512m -Xmx512m test3.yarntest
 
Spark Context code piece:
 
JavaSparkContext sc = new JavaSparkContext(
    "yarn-standalone",
    "SPARK YARN TEST"
);
 
 
Log:
 
13/12/12 17:30:36 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
13/12/12 17:30:36 INFO spark.SparkEnv: Registering BlockManagerMaster
13/12/12 17:30:36 INFO storage.MemoryStore: MemoryStore started with capacity 323.9 MB.
13/12/12 17:30:36 INFO storage.DiskStore: Created local directory at /tmp/spark-local-20131212173036-09c0
13/12/12 17:30:36 INFO network.ConnectionManager: Bound socket to port 39426 with id = ConnectionManagerId(ip-172-31-43-121.eu-west-1.compute.internal,39426)
13/12/12 17:30:36 INFO storage.BlockManagerMaster: Trying to register BlockManager
13/12/12 17:30:36 INFO storage.BlockManagerMaster: Registered BlockManager
13/12/12 17:30:37 INFO server.Server: jetty-7.x.y-SNAPSHOT
13/12/12 17:30:37 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:43438
13/12/12 17:30:37 INFO broadcast.HttpBroadcast: Broadcast server started at http://172.31.43.121:43438
13/12/12 17:30:37 INFO spark.SparkEnv: Registering MapOutputTracker
13/12/12 17:30:37 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-b48abc5a-53c6-4af1-9c3c-725e1cd7fbb9
13/12/12 17:30:37 INFO server.Server: jetty-7.x.y-SNAPSHOT
13/12/12 17:30:37 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:60476
13/12/12 17:30:37 INFO server.Server: jetty-7.x.y-SNAPSHOT
13/12/12 17:30:37 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/storage/rdd,null}
13/12/12 17:30:37 INFO handler.ContextHandler: started 

Re: App master failed to find application jar in the master branch on YARN

2013-11-19 Thread Tom Graves
The property is deprecated but will still work. Either one is fine.

Launching the job from the namenode is fine . 

I brought up a cluster with 2.0.5-alpha and built the latest spark master 
branch, and it runs fine for me. It looks like the 2.0.5-alpha namenode won't 
even start with a defaultFs of file:///.  Please make sure your namenode is 
actually up and running and you are pointing to it, because you can run some 
jobs successfully without it (on a single node cluster) but not on a multinode 
cluster.  Here is the error I get when I run without a namenode up, and it looks 
very similar to your error message:

        appDiagnostics: Application application_1384876319080_0001 failed 1 
times due to AM Container for appattempt_1384876319080_0001_01 exited with  
exitCode: -1000 due to: java.io.FileNotFoundException: File 
file:/home/tgravescs/spark-master/assembly/target/scala-2.9.3/spark-assembly-0.9.0-incubating-SNAPSHOT-hadoop2.0.5-alpha.jar
 does not exist


When you changed the default fs config did you restart the cluster?


Can you try just running the examples jar:

SPARK_JAR=assembly/target/scala-2.9.3/spark-assembly-0.9.0-incubating-SNAPSHOT-hadoop2.0.5-alpha.jar \
  ./spark-class org.apache.spark.deploy.yarn.Client \
  --jar examples/target/scala-2.9.3/spark-examples-assembly-0.9.0-incubating-SNAPSHOT.jar \
  --class org.apache.spark.examples.SparkPi \
  --args yarn-standalone \
  --num-workers 2 --master-memory 2g --worker-memory 2g --worker-cores 1

On the client side you should see messages like this:
13/11/19 15:41:30 INFO yarn.Client: Uploading file:/home/tgravescs/spark-master/examples/target/scala-2.9.3/spark-examples-assembly-0.9.0-incubating-SNAPSHOT.jar to hdfs://namenode.host.com:9000/user/tgravescs/.sparkStaging/application_1384874528558_0003/spark-examples-assembly-0.9.0-incubating-SNAPSHOT.jar
13/11/19 15:41:31 INFO yarn.Client: Uploading file:/home/tgravescs/spark-master/assembly/target/scala-2.9.3/spark-assembly-0.9.0-incubating-SNAPSHOT-hadoop2.0.5-alpha.jar to hdfs://namenode.host.com:9000/user/tgravescs/.sparkStaging/application_1384874528558_0003/spark-assembly-0.9.0-incubating-SNAPSHOT-hadoop2.0.5-alpha.jar

Tom



On Tuesday, November 19, 2013 5:35 AM, guojc guoj...@gmail.com wrote:
 
Hi Tom,
   Thank you for your response.  I have double checked that I uploaded both 
jars to the same folder on hdfs. I think the <name>fs.default.name</name> you 
pointed out is the old deprecated name for the fs.defaultFS config, according to 
http://hadoop.apache.org/docs/r2.0.2-alpha/hadoop-project-dist/hadoop-common/DeprecatedProperties.html
 .  Anyway, we have tried both fs.default.name and fs.defaultFS set to the hdfs 
namenode, and the situation remained the same. We have also removed the 
SPARK_HOME env variable on the worker nodes.  An additional piece of information 
that might be related is that job submission is done on the same machine as the 
hdfs namenode, but I'm not sure whether this could cause the problem.

Thanks,
Jiacheng Guo



On Tue, Nov 19, 2013 at 11:50 AM, Tom Graves tgraves...@yahoo.com wrote:

Sorry for the delay. What is the default filesystem on your HDFS setup?  It 
looks like it's set to file: rather than hdfs://.  That is the only reason I can 
think of that it's listing the directory as 
file:/home/work/.sparkStaging/application_1384588058297_0056.  It's basically 
just copying it locally rather than uploading to hdfs, and it's just trying to 
use the local 
file:/home/work/guojiacheng/spark/assembly/target/scala-2.9.3/spark-assembly-0.9.0-incubating-SNAPSHOT-hadoop2.0.5-alpha.jar.
  It generally would create that in hdfs so it is accessible on all the nodes.  
Is your /home/work NFS mounted on all the nodes?    


You can find the default fs by looking at the Hadoop config files.  Generally it 
is in core-site.xml, specified by <name>fs.default.name</name>.
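
For example, the entry typically looks something like this (the host and port 
here are placeholders):

<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.host.com:9000</value>
</property>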


It's pretty odd that it's erroring with file:// when you specified hdfs://.
When you tried hdfs://, did you upload both the spark jar and your client jar 
(SparkAUC-assembly-0.1.jar)?  If not, try that, and make sure to put hdfs:// on 
them when you export SPARK_JAR and specify the --jar option.  
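
Something along these lines (the paths and main class below are placeholders):

export SPARK_JAR=hdfs://namenode.host.com:9000/user/you/spark-assembly-0.9.0-incubating-SNAPSHOT-hadoop2.0.5-alpha.jar
./spark-class org.apache.spark.deploy.yarn.Client \
  --jar hdfs://namenode.host.com:9000/user/you/SparkAUC-assembly-0.1.jar \
  --class your.MainClass --args yarn-standalone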



I'll try to reproduce the error tomorrow to see if a bug was introduced when I 
added the feature to run spark from HDFS.


Tom



On Monday, November 18, 2013 11:13 AM, guojc guoj...@gmail.com wrote:
 
Hi Tom,
   I'm on Hadoop 2.0.5.  I can launch applications built against the spark 0.8 
release normally. However, when I switch to the git master branch version, with 
my application built against it, I get the jar not found exception, and the same 
happens with the example application. I have tried both the file:// protocol and 
the hdfs:// protocol, with the jar in the local file system and hdfs 
respectively, and even tried the jar list parameter when creating a new spark 
context.  The exception is slightly different for the hdfs protocol and the 
local file path. My application launch command is   


 
SPARK_JAR=/home/work/guojiacheng/spark/assembly/target/scala-2.9.3/spark-assembly-0.9.0-incubating-SNAPSHOT-hadoop2.0.5-alpha.jar

Re: App master failed to find application jar in the master branch on YARN

2013-11-18 Thread Tom Graves
Hey Jiacheng Guo,

do you have the SPARK_EXAMPLES_JAR env variable set?  If you do, you have to add 
the --addJars parameter to the yarn client and point it to the spark examples 
jar.  Or just unset the SPARK_EXAMPLES_JAR env variable.

You should only have to set SPARK_JAR env variable.  
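
In other words, something like one of these (the app jar and class below are 
placeholders):

unset SPARK_EXAMPLES_JAR

or

SPARK_JAR=<path-to-spark-assembly-jar> ./spark-class org.apache.spark.deploy.yarn.Client \
  --jar your-app.jar --class your.MainClass --args yarn-standalone \
  --addJars $SPARK_EXAMPLES_JAR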

If that isn't the issue, let me know the build command you used, your hadoop 
version, and your defaultFs for hadoop.

Tom



On Saturday, November 16, 2013 2:32 AM, guojc guoj...@gmail.com wrote:
 
hi,
   After reading about the exciting progress in consolidating shuffle, I'm eager 
to try out the latest master branch. However, upon launching the example 
application, the job failed with a prompt that the app master failed to find the 
target jar. appDiagnostics: Application application_1384588058297_0017 failed 1 
times due to AM Container for appattempt_1384588058297_0017_01 exited with  
exitCode: -1000 due to: java.io.FileNotFoundException: File 
file:/${my_work_dir}/spark/examples/target/scala-2.9.3/spark-examples-assembly-0.9.0-incubating-SNAPSHOT.jar
 does not exist.

  Is there any change on how to launch a yarn job now?

Best Regards,
Jiacheng Guo

Re: SPARK + YARN the general case

2013-11-15 Thread Tom Graves
Hey Bill,

Currently Spark on Yarn only supports batch mode, where you submit your job 
via the yarn Client.   Note that this will hook the spark UI up to the Yarn 
ResourceManager web UI.  Is there something more you were looking for than just 
finding the spark web ui for various jobs?

There is a pull request (101) to get spark shell working with YARN.

Tom



On Thursday, November 14, 2013 10:57 AM, Bill Sparks jspa...@cray.com wrote:
 
Sorry for the following question, but I just need a little clarity on 
expectations of Spark using YARN. 

Is it possible to use the spark-shell with YARN? Or is the only way to submit 
a Spark job to YARN to write a Java application and submit it via the 
yarn.Client application? 

Also, is there a way of running the Spark master so that it can communicate with 
YARN so I can use the web UI for job tracking?

Thanks,
  Bill

Re: SPARK + YARN the general case

2013-11-15 Thread Tom Graves
Hey Phillip,

I haven't actually tried spark streaming on YARN at this point, so I can't say 
for sure, but as you say, just from what I've read I don't see anything that 
would prevent it from working. 

Since I don't know of anyone that has tried it, I wouldn't be surprised if at 
least something small needs to be fixed to support it, though.

Tom



On Friday, November 15, 2013 9:40 AM, Philip Ogren philip.og...@oracle.com 
wrote:
 
Tom,

Can you just clarify that when you say "Spark on Yarn only supports
batch mode" you are not excluding Spark Streaming from working
with Yarn?  A quick scan of the Spark Streaming documentation makes
no mention of Yarn, but I thought that this should be possible.  

Thanks,
Philip



On 11/15/2013 7:15 AM, Tom Graves wrote:

Hey Bill,


Currently Spark on Yarn only supports batch mode, where you submit your job 
via the yarn Client.   Note that this will hook the spark UI up to the Yarn 
ResourceManager web UI.  Is there something more you were looking for than 
just finding the spark web ui for various jobs?


There is a pull request (101) to get spark shell working with YARN.


Tom



On Thursday, November 14, 2013 10:57 AM, Bill Sparks jspa...@cray.com wrote:
 
Sorry for the following question, but I just need a little clarity on 
expectations of Spark using YARN. 


Is it possible to use the spark-shell with YARN? Or is the only way to submit 
a Spark job to YARN to write a Java application and submit it via the 
yarn.Client application? 


Also, is there a way of running the Spark master so that it can communicate 
with YARN so I can use the web UI for job tracking?


Thanks,
  Bill



Re: SPARK + YARN the general case

2013-11-15 Thread Tom Graves
Yes that is correct.  It has a static set of nodes currently.  We want to make 
that more dynamic in the future also.

Tom



On Friday, November 15, 2013 2:16 PM, Michael (Bach) Bui free...@adatao.com 
wrote:
 
Tom, more on Shark-type applications on Yarn.
In the current implementation, for the duration of a SparkContext execution, 
Yarn will give an unchanged set of nodes to the SparkContext, is that right?
If that is the case, IMO, it may not be the best architecture for Shark, 
because users may load data from nodes that are not in the given set of nodes. 
Am I right? 






On Nov 15, 2013, at 12:51 PM, Tom Graves tgraves...@yahoo.com wrote:

Shark is not currently supported on yarn. There are 2 ways this could be done 
that come to mind. One would be to run shark as the application itself that 
gets started on the application master in the current yarn-standalone mode; the 
other is to use the yarn-client introduced in the spark-shell pull request.  I 
saw some changes that went into Shark to support running it along with the 
yarn-client pull request (101), but I haven't had time to actually try these 
yet. 


Tom



On Friday, November 15, 2013 10:45 AM, Michael (Bach) Bui free...@adatao.com 
wrote:
 
Hi Tom,


I have another question on SoY. It seems like the current implementation will 
not support interactive types of applications like Shark, right?
Thanks.



On Nov 15, 2013, at 8:15 AM, Tom Graves tgraves...@yahoo.com wrote:

Hey Bill,


Currently Spark on Yarn only supports batch mode, where you submit your 
job via the yarn Client.   Note that this will hook the spark UI up to the 
Yarn ResourceManager web UI.  Is there something more you were looking for 
than just finding the spark web ui for various jobs?


There is a pull request (101) to get spark shell working with YARN.


Tom



On Thursday, November 14, 2013 10:57 AM, Bill Sparks jspa...@cray.com wrote:
 
Sorry for the following question, but I just need a little clarity on 
expectations of Spark using YARN. 


Is it possible to use the spark-shell with YARN? Or is the only way to 
submit a Spark job to YARN to write a Java application and submit it via 
the yarn.Client application? 


Also, is there a way of running the Spark master so that it can communicate 
with YARN so I can use the web UI for job tracking?


Thanks,
  Bill






Re: Spark (trunk/yarn) on CDH4.3.0.2 - YARN

2013-09-09 Thread Tom Graves
You use yarn-standalone as the MASTER url, so replace spark://a.b.c:7077 with 
yarn-standalone.

The important notes section of the yarn doc mentions it: 
https://github.com/mesos/spark/blob/master/docs/running-on-yarn.md
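
For example, taking the constructor call from your snippet below and only 
swapping the master URL (a sketch; the other arguments are left as in your 
code):

    val sc = new SparkContext("yarn-standalone", "indexXformation", "", Seq());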

Tom


 From: Vipul Pandey vipan...@gmail.com
To: user@spark.incubator.apache.org 
Sent: Monday, September 9, 2013 12:32 PM
Subject: Re: Spark (trunk/yarn) on CDH4.3.0.2 - YARN
 


Thanks for the tip - I'm building off of master and against CDH4.3.0 now 
(my cluster is CDH4.3.0.2) - the apache hadoop version is hadoop 2.0.0
http://www.cloudera.com/content/cloudera-content/cloudera-docs/PkgVer/3.25.2013/CDH-Version-and-Packaging-Information/cdhvd_topic_3_1.html

After following the instructions on the doc below, here's what I found : 

- SPARK_HADOOP_VERSION=2.0.0-cdh4.3.0 SPARK_YARN=true ./sbt/sbt assembly
This results in "module not found: 
org.apache.hadoop#hadoop-client;2.0.0-mr2-cdh4.3.0.2", with the below as one of 
the warning messages:
[warn]  Cloudera Repository: tried
[warn]   
http://repository.cloudera.com/artifactory/cloudera-repos/org/apache/hadoop/hadoop-client/2.0.0-mr2-cdh4.3.0.2/hadoop-client-2.0.0-mr2-cdh4.3.0.2.pom

I realized that they have made their repository secure now, so http won't work. 
Changing it to https in SparkBuild.scala helps. Someone may want to make that 
change and check it in.
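
For reference, the Cloudera resolver entry in SparkBuild.scala would then look 
roughly like this (my best guess at the exact line, so treat it as a sketch):

  "Cloudera Repository" at "https://repository.cloudera.com/artifactory/cloudera-repos/"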

Also, executing the assembly command above does not generate the example jars 
as mentioned in the directions. I had to run sbt package to get that jar and 
rerun the assembly. 

I was able to run the example just fine. 


Now, the next question: how should I initialize my SparkContext for Yarn?
This is what I had with the standalone mode - 
    val sc = new SparkContext("spark://a.b.c:7077", "indexXformation", "", Seq());
Do I do something here, or will the client pick up the yarn configurations from 
the hadoop config?

Vipul




On Fri, Sep 6, 2013 at 4:30 PM, Tom Graves tgraves...@yahoo.com wrote:

Which spark branch are you building off of?  
If using the master branch, follow the directions here: 
https://github.com/mesos/spark/blob/master/docs/running-on-yarn.md


Make sure to set your Hadoop version to CDH.


I'm not sure what the CDH versions map to in regular apache Hadoop, but if it's 
newer than apache hadoop 2.0.5-alpha then they changed the yarn APIs, so it 
won't work without changes to the app master.

Tom

On Sep 6, 2013, at 5:37 PM, Vipul Pandey vipan...@gmail.com wrote:


I'm unable to successfully run the SparkPi example in my YARN cluster. 
I did whatever has been specified here (didn't change anything anywhere) : 
http://spark.incubator.apache.org/docs/0.7.0/running-on-yarn.html
and added HADOOP_CONF_DIR as well. (btw, on sbt/sbt assembly - the jar file 
it generates is spark-core-assembly-0.6.0.jar)


I get the following exception in my container : 


Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
    at org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl.unwrapAndThrowException(YarnRemoteExceptionPBImpl.java:135)
    at org.apache.hadoop.yarn.api.impl.pb.client.AMRMProtocolPBClientImpl.registerApplicationMaster(AMRMProtocolPBClientImpl.java:103)
    at spark.deploy.yarn.ApplicationMaster.registerApplicationMaster(ApplicationMaster.scala:123)
    at spark.deploy.yarn.ApplicationMaster.spark$deploy$yarn$ApplicationMaster$$runImpl(ApplicationMaster.scala:52)
    at spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:42)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
    at spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:40)
    at spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:340)
    at spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
Caused by: com.google.protobuf.ServiceException: java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status; Host Details : local host is: rd17d01ls-vm0109.rd.geo.apple.com/17.134.172.65; destination host is: rd17d01ls-vm0110.rd.geo.apple.com:8030;
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:212)
    at $Proxy7.registerApplicationMaster(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.AMRMProtocolPBClientImpl.registerApplicationMaster(AMRMProtocolPBClientImpl.java:100)
    ... 9 more
Caused by: java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status; Host Details : local host is: rd17d01ls-vm0109.rd.geo.apple.com/17.134.172.65; destination host is: rd17d01ls-vm0110.rd.geo.apple.com:8030;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:761)
    at org.apache.hadoop.ipc.Client.call(Client.java:1239