You can also set the SPARK_PRINT_LAUNCH_COMMAND environment variable before
running a spark-submit command to display the java command that will be
launched, e.g.:

export SPARK_PRINT_LAUNCH_COMMAND=1
/opt/spark/bin/spark-submit --master yarn --deploy-mode cluster \
  --class kelkoo.SparkAppTemplate \
  --jars hdfs://prod-cluster/user/preaudc/jars/apps/joda-convert-1.6.jar,hdfs://prod-cluster/user/preaudc/jars/apps/joda-time-2.3.jar,hdfs://prod-cluster/user/preaudc/jars/apps/logReader-1.0.22.jar \
  --driver-memory 512M --driver-library-path /opt/hadoop/lib/native \
  --driver-class-path /usr/share/java/mysql-connector-java.jar \
  --executor-memory 1G --executor-cores 1 --queue spark-batch --num-executors 2 \
  hdfs://prod-cluster/user/preaudc/jars/apps/logProcessing-1.0.10.jar \
  --log_dir /user/kookel/logs --country fr a b c
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Spark Command: /usr/lib/jvm/java-openjdk/bin/java -cp :/usr/share/java/mysql-connector-java.jar:/opt/spark/conf:/opt/spark/lib/spark-assembly-hadoop.jar:/opt/spark/lib/datanucleus-api-jdo-3.2.1.jar:/opt/spark/lib/datanucleus-core-3.2.2.jar:/opt/spark/lib/datanucleus-rdbms-3.2.1.jar:/etc/hadoop:/etc/hadoop -XX:MaxPermSize=128m -Djava.library.path=/opt/hadoop/lib/native -Xms512M -Xmx512M org.apache.spark.deploy.SparkSubmit --master yarn --deploy-mode cluster --class kelkoo.SparkAppTemplate --jars hdfs://prod-cluster/user/preaudc/jars/apps/joda-convert-1.6.jar,hdfs://prod-cluster/user/preaudc/jars/apps/joda-time-2.3.jar,hdfs://prod-cluster/user/preaudc/jars/apps/logReader-1.0.22.jar --driver-memory 512M --driver-library-path /opt/hadoop/lib/native --driver-class-path /usr/share/java/mysql-connector-java.jar --executor-memory 1G --executor-cores 1 --queue spark-batch --num-executors 2 hdfs://prod-cluster/user/preaudc/jars/apps/logProcessing-1.0.10.jar --log_dir /user/kookel/logs --country fr a b c
========================================
(...)
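
The launcher script only checks that the variable is set, so you can also
enable it inline for a single run without exporting it into your shell
session. A sketch (the class name and jar path below are just illustrative
placeholders, not a real application):

```shell
# Enable the launch-command printout for this one invocation only:
SPARK_PRINT_LAUNCH_COMMAND=1 /opt/spark/bin/spark-submit \
  --master yarn --deploy-mode cluster \
  --class com.example.MyApp \
  hdfs://prod-cluster/path/to/myapp.jar
```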



Christophe.

On 10/02/2015 07:26, Akhil Das wrote:
Yes like this:

/usr/lib/jvm/java-7-openjdk-i386/bin/java -cp \
  ::/home/akhld/mobi/localcluster/spark-1/conf:/home/akhld/mobi/localcluster/spark-1/lib/spark-assembly-1.1.0-hadoop1.0.4.jar:/home/akhld/mobi/localcluster/spark-1/lib/datanucleus-core-3.2.2.jar:/home/akhld/mobi/localcluster/spark-1/lib/datanucleus-rdbms-3.2.1.jar:/home/akhld/mobi/localcluster/spark-1/lib/datanucleus-api-jdo-3.2.1.jar \
  -XX:MaxPermSize=128m -Xms512m -Xmx512m \
  org.apache.spark.deploy.SparkSubmit --class org.apache.spark.repl.Main spark-shell

It launches spark-shell.
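
The same pattern should work for a user application, not just spark-shell:
org.apache.spark.deploy.SparkSubmit stays the main class on the java command
line, and your own class, jar, and arguments are passed as SparkSubmit
arguments. A sketch (classpath entries, class name, and jar path are
illustrative and depend on your install):

```shell
java -cp "$SPARK_HOME/conf:$SPARK_HOME/lib/spark-assembly-1.1.0-hadoop1.0.4.jar" \
  -Xms512m -Xmx512m \
  org.apache.spark.deploy.SparkSubmit \
  --master local[2] \
  --class com.example.MyApp \
  /path/to/myapp.jar arg1 arg2
```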


Thanks
Best Regards

On Tue, Feb 10, 2015 at 11:36 AM, Hafiz Mujadid
<hafizmujadi...@gmail.com> wrote:
hi experts!

Is there any way to run a Spark application using the java -cp command?


thanks



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/running-spark-project-using-java-cp-command-tp21567.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org




________________________________
Kelkoo SAS
Société par Actions Simplifiée
Au capital de € 4.168.964,30
Siège social : 158 Ter Rue du Temple 75003 Paris
425 093 069 RCS Paris

This message and its attachments are confidential and intended solely for
their addressees. If you are not the intended recipient of this message,
please delete it and notify the sender.
