Hello,
I am trying to run a Spark job (which runs fine on the master node of the
cluster) on an HDFS Hadoop cluster using YARN. When I run the job, which
contains an rdd.saveAsTextFile() call, I get the following error:
*SystemError: unknown opcode*
The entire stack trace has been appended to this message.
This is most likely caused by running different versions of Python on the
driver and the slaves. Spark 1.4 will double-check for this mismatch (it
will be released soon).
Also note that the environment variable is PYSPARK_PYTHON, not SPARK_PYTHON.
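As a sketch of the fix (the interpreter path and script name below are placeholders, not from the original thread): point both the driver and the executors at the same Python interpreter, one that is installed at the same path on every node.

```shell
# Use PYSPARK_PYTHON (not SPARK_PYTHON) to pick the interpreter PySpark
# workers run. /usr/bin/python2.7 is a placeholder; substitute the
# interpreter available on all cluster nodes.
export PYSPARK_PYTHON=/usr/bin/python2.7

# The same setting can be passed per job on YARN via spark-submit
# configuration (my_job.py is a placeholder script name):
spark-submit \
  --master yarn \
  --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=/usr/bin/python2.7 \
  my_job.py
```

If the driver and executor Python versions differ, the bytecode shipped with serialized functions can be invalid on the workers, which is what produces the "unknown opcode" SystemError.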
On Tue, May 26, 2015 at 11:21 AM, Nikhil Muralidhar nmural...@gmail.com wrote: