Is this the full stack trace?
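The exception from ClientBase only says that the YARN application finished with a failed status; the actual cause is usually in the application's container logs. Assuming log aggregation is enabled on the cluster (and using a placeholder application ID), something like this should show the real driver exception:

```shell
# List recently failed applications to find the ID
# (the ID below is a placeholder, not from this thread).
yarn application -list -appStates FAILED

# Fetch the aggregated container logs; in yarn-cluster mode the
# driver runs in the ApplicationMaster container, so its stack
# trace ends up here rather than in the spark-submit output.
yarn logs -applicationId application_1424242424242_0001
```

If log aggregation is disabled, the same logs can be found under the NodeManager's local log directory or through the ResourceManager web UI.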

On Wed, Feb 18, 2015 at 2:39 AM, sachin Singh <sachin.sha...@gmail.com>
wrote:

> Hi,
> I want to run my Spark job on a Hadoop YARN cluster in cluster mode,
> using the command below:
> spark-submit --master yarn-cluster --driver-memory 1g --executor-memory 1g
> --executor-cores 1 --class com.dc.analysis.jobs.AggregationJob
> sparkanalitic.jar param1 param2 param3
> I am getting the error below. Is the command correct? Any suggestions on
> what is going wrong would be appreciated. Thanks in advance.
>
> Exception in thread "main" org.apache.spark.SparkException: Application
> finished with failed status
>         at
> org.apache.spark.deploy.yarn.ClientBase$class.run(ClientBase.scala:509)
>         at org.apache.spark.deploy.yarn.Client.run(Client.scala:35)
>         at org.apache.spark.deploy.yarn.Client$.main(Client.scala:139)
>         at org.apache.spark.deploy.yarn.Client.main(Client.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at
> org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/issue-Running-Spark-Job-on-Yarn-Cluster-tp21697.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>


-- 
*Harshvardhan Chauhan*  |  Software Engineer
*GumGum* <http://www.gumgum.com/>  |  *Ads that stick*
310-260-9666  |  ha...@gumgum.com
