I have a Spark app that is composed of multiple Python files.

When I launch Spark using: 

    ../hadoop/spark-install/bin/spark-submit main.py --py-files /home/poiuytrez/naive.py,/home/poiuytrez/processing.py,/home/poiuytrez/settings.py --master spark://spark-m:7077
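
As far as I understand, --py-files is supposed to do roughly the same thing as calling SparkContext.addPyFile for each file on the driver, something like the sketch below (the master URL and file paths are the real ones from my setup; the rest is just illustrative):

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("spark://spark-m:7077")
    sc = SparkContext(conf=conf)

    # ship each dependency file to the executors so it ends up on their sys.path
    for path in ["/home/poiuytrez/naive.py",
                 "/home/poiuytrez/processing.py",
                 "/home/poiuytrez/settings.py"]:
        sc.addPyFile(path)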

I am getting an error:

    15/03/13 15:54:24 INFO TaskSetManager: Lost task 6.3 in stage 413.0 (TID 5817) on executor spark-w-3.c.databerries.internal: org.apache.spark.api.python.PythonException (Traceback (most recent call last):
      File "/home/hadoop/spark-install/python/pyspark/worker.py", line 90, in main
        command = pickleSer._read_with_length(infile)
      File "/home/hadoop/spark-install/python/pyspark/serializers.py", line 151, in _read_with_length
        return self.loads(obj)
      File "/home/hadoop/spark-install/python/pyspark/serializers.py", line 396, in loads
        return cPickle.loads(obj)
    ImportError: No module named naive

It is strange because I do not serialize anything myself, and naive.py is also available
on every machine at the same path.
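
In case it helps, main.py only uses the modules along these lines (a simplified sketch; naive.classify and the input path are placeholders for the real names):

    from pyspark import SparkContext
    import naive  # local module shipped via --py-files

    sc = SparkContext()
    data = sc.textFile("hdfs:///some/input")               # placeholder input
    # the lambda below is what PySpark pickles and sends to the executors;
    # unpickling it on a worker requires "import naive" to succeed there
    results = data.map(lambda line: naive.classify(line))  # naive.classify is a placeholder
    print(results.count())

So my (possibly wrong) understanding is that the ImportError happens when a worker unpickles that function, even though I never call pickle explicitly.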

Any insight into what could be going on? The issue does not happen on my
laptop.

PS: I am using Spark 1.2.0.


