Hi,
I have run my Spark job with spark-submit on my local machine and on a
cluster.
Now I want to try using HDFS: put the data (a text file) on HDFS, read it
from there, run the jar, and finally write the output back to HDFS.
I got this error after running the job:
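Concretely, the workflow I have in mind looks roughly like the commands below (the class name, jar name, and paths are just placeholders for my actual ones):

```shell
# Put the input text file on HDFS
hdfs dfs -mkdir -p /user/soheila/input
hdfs dfs -put data.txt /user/soheila/input/

# Run the job, reading from and writing to HDFS
# (MyApp and myapp.jar stand in for my real class and jar)
spark-submit \
  --class MyApp \
  --master spark://taurusi5551.taurus.hrsk.tu-dresden.de:7077 \
  myapp.jar \
  hdfs:///user/soheila/input/data.txt \
  hdfs:///user/soheila/output

# Inspect the result
hdfs dfs -cat /user/soheila/output/part-*
```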

failed to launch org.apache.spark.deploy.master.Master:
log is following:
Spark Command: /scratch/p_corpus/tools/jdk1.8.0_112/bin/java -cp
$/home/user-folder/cluster-conf-1369394/spark/:/scratch/p_corpus/tools/spark-2.0.1-bin-hadoop2.6/jars/*:/home/user-folder/cluster-conf-1369394/hadoop/:/home/user-folder/cluster-conf-1369394/hadoop/
-Xmx1g org.apache.spark.deploy.master.Master --host
taurusi5551.taurus.hrsk.tu-dresden.de --port 7077 --webui-port
8080 /home/user-folder/cluster-conf-1369394/spark
========================================
17/01/12 14:49:32 INFO master.Master: Started daemon with process name: 8524@taurusi5551
17/01/12 14:49:32 INFO util.SignalUtils: Registered signal handler for TERM
17/01/12 14:49:32 INFO util.SignalUtils: Registered signal handler for HUP
17/01/12 14:49:32 INFO util.SignalUtils: Registered signal handler for INT
Usage: Master [options]

Options:
  -i HOST, --ip HOST     Hostname to listen on (deprecated, please use --host or -h)
  -h HOST, --host HOST   Hostname to listen on
  -p PORT, --port PORT   Port to listen on (default: 7077)
  --webui-port PORT      Port for web UI (default: 8080)
  --properties-file FILE Path to a custom Spark properties file.
                         Default is conf/spark-defaults.conf.

Any help would be really appreciated.

Best,
Soheila
