Hi,
I am successfully running a Python app via PyCharm in local mode:
setMaster("local[*]")
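For context, the working setup is essentially the following (the app name is my guess from the script filename; the master string is exactly as above):

  from pyspark import SparkConf, SparkContext

  # runs fine from PyCharm in local mode
  conf = SparkConf().setAppName("PysparkPandas").setMaster("local[*]")
  sc = SparkContext(conf=conf)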

When I switch to SparkConf().setMaster("yarn-client")

and run via

spark-submit PysparkPandas.py


I run into this issue:
Error from python worker:
  /cube/PY/Python27/bin/python: No module named pyspark
PYTHONPATH was:

/tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/filecache/18/spark-assembly-1.4.1-hadoop2.6.0.jar
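As far as I understand, pyspark in 1.4.1 is meant to be imported directly out of the assembly jar, so the PYTHONPATH above is not obviously wrong. Here is a rough check I could run on the worker node with that Python (jar path copied from the log above; this is just my sketch) to see whether the jar is readable at all:

  import zipimport

  jar = ("/tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/filecache/18/"
         "spark-assembly-1.4.1-hadoop2.6.0.jar")

  try:
      importer = zipimport.zipimporter(jar)   # fails if the zip itself is unreadable
      print importer.find_module("pyspark")   # None would mean pyspark is not visible in the jar
  except zipimport.ZipImportError as e:
      print "cannot read jar:", e

If zipimporter cannot even open the jar, that would point at the jar's packaging rather than my configuration.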

I am running Java:
hadoop@pluto:~/pySpark$ /opt/java/jdk/bin/java -version
java version "1.8.0_31"
Java(TM) SE Runtime Environment (build 1.8.0_31-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.31-b07, mixed mode)

Should I try the same thing with Java 6/7?

Is this a packaging issue, or do I have something wrong in my configuration?

Regards,

-- 
Aleksandar Kacanski
