Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

2016-06-22 Thread ????????
I downloaded the official release version, I did not build it myself; and when I don't configure that parameter, there is no error. ------ Original message ------ From: "Jeff Zhang"; Date: 2016-06-22 (Wed) 2:09; To: "Yash Sharma"; Subject:

Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

2016-06-22 Thread Saisai Shao
spark.yarn.jar (default: none) The location of the Spark jar file, in case overriding the default location is desired. By default, Spark on YARN will use a Spark jar installed locally, but the Spark jar can also be in a world-readable location on HDFS. This allows YARN to cache it on nodes so that it doesn't need to be distributed each time an application runs.
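
A minimal sketch of that setup, assuming the Spark 1.6.1 / Hadoop 2.6 binary release and the HDFS paths used elsewhere in this thread (the jar name and paths are placeholders to adapt):

    # Upload the Spark assembly jar (the one under lib/, which contains
    # org.apache.spark.deploy.yarn.ExecutorLauncher) to a world-readable HDFS location.
    hdfs dfs -mkdir -p hdfs://master:9000/user/shihj/spark_lib
    hdfs dfs -put lib/spark-assembly-1.6.1-hadoop2.6.0.jar \
        hdfs://master:9000/user/shihj/spark_lib/

    # Point spark.yarn.jar at it in conf/spark-defaults.conf.
    echo "spark.yarn.jar hdfs://master:9000/user/shihj/spark_lib/spark-assembly-1.6.1-hadoop2.6.0.jar" \
        >> conf/spark-defaults.conf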

Re: FullOuterJoin on Spark

2016-06-22 Thread Nirav Patel
Can your domain list fit in the memory of one executor? If so, you can use a broadcast join. You can always narrow the problem down to an inner join and derive the rest from the original set if memory is an issue there (a sketch of that idea follows below). If you are just concerned about shuffle memory, then to reduce the amount of shuffle you can do the following: 1)
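
A minimal sketch of the broadcast idea, runnable from the Spark shell; the domain values and RDDs here are toy stand-ins, not from the thread:

    spark-shell <<'EOF'
    // Small side: ship it to every executor once as a broadcast variable.
    val domainSet = sc.broadcast(Set("a.com", "b.com"))
    // Big side: a (domain, payload) pair RDD; a toy stand-in here.
    val bigRdd = sc.parallelize(Seq(("a.com", 1), ("c.com", 2)))
    // Map-side split, no shuffle of the big side: "matched" plays the role of the
    // inner join's left half, "unmatched" is the rest derived from the original set.
    val matched   = bigRdd.filter { case (d, _) => domainSet.value.contains(d) }
    val unmatched = bigRdd.filter { case (d, _) => !domainSet.value.contains(d) }
    println(s"matched=${matched.count}, unmatched=${unmatched.count}")
    EOF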

Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

2016-06-22 Thread ????????
shihj@master:/usr/local/spark/spark-1.6.1-bin-hadoop2.6$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --driver-memory 512m --num-executors 2 --executor-memory 512m --executor-cores 2 /user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar 10 Warning:

Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

2016-06-22 Thread Jeff Zhang
Make sure you built Spark with -Pyarn, and check whether you have the class ExecutorLauncher in your Spark assembly jar (a quick check is sketched below). On Wed, Jun 22, 2016 at 2:04 PM, Yash Sharma wrote: > How about supplying the jar directly in spark submit - > > ./bin/spark-submit \ >> --class
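
A quick way to run that check, assuming the assembly jar that ships under lib/ in the 1.6.1 binary release (the jar path is a placeholder for wherever your assembly lives):

    # List the assembly's contents and look for the YARN launcher class.
    jar tf lib/spark-assembly-1.6.1-hadoop2.6.0.jar | grep ExecutorLauncher
    # The output should include something like:
    #   org/apache/spark/deploy/yarn/ExecutorLauncher.class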

Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

2016-06-22 Thread Yash Sharma
How about supplying the jar directly in spark submit - ./bin/spark-submit \ > --class org.apache.spark.examples.SparkPi \ > --master yarn-client \ > --driver-memory 512m \ > --num-executors 2 \ > --executor-memory 512m \ > --executor-cores 2 \ >
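
The quoted command is cut off; presumably it ends with the application jar path, as in the original post in this thread. A readable version under that assumption:

    ./bin/spark-submit \
      --class org.apache.spark.examples.SparkPi \
      --master yarn-client \
      --driver-memory 512m \
      --num-executors 2 \
      --executor-memory 512m \
      --executor-cores 2 \
      /user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar 10
    # Jar path and argument taken from the original post in this thread.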

Re: spark job automatically killed without rhyme or reason

2016-06-22 Thread Nirav Patel
Spark is a memory hogger, and suicidal if you have a job processing a bigger dataset. However, Databricks claims that Spark > 1.6 has optimizations related to memory footprint as well as processing. These are only available if you use DataFrame or Dataset. If you are using RDDs you have to do a lot of
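
A minimal sketch of moving from an RDD to a DataFrame so those optimizations can apply, runnable from the Spark 1.6 shell; the case class and values are illustrative only:

    spark-shell <<'EOF'
    // Convert an RDD of case-class rows to a DataFrame, so Spark's
    // Tungsten/Catalyst layer can manage memory instead of raw JVM objects.
    import sqlContext.implicits._
    case class Event(domain: String, n: Long)
    val df = sc.parallelize(Seq(Event("a.com", 1), Event("b.com", 2))).toDF()
    df.groupBy("domain").count().show()
    EOF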

Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

2016-06-22 Thread ????????
I configured this parameter in spark-defaults.conf: spark.yarn.jar hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar then ran ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --driver-memory 512m --num-executors 2 --executor-memory 512m
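
Presumably the root cause: spark.yarn.jar above points at the examples jar, while ExecutorLauncher lives in the Spark assembly jar, as the replies in this thread suggest. A corrected line, assuming the assembly jar from the 1.6.1 binary release has been uploaded to the same HDFS directory:

    # spark-defaults.conf: point spark.yarn.jar at the assembly jar, not the examples jar.
    spark.yarn.jar hdfs://master:9000/user/shihj/spark_lib/spark-assembly-1.6.1-hadoop2.6.0.jar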
