Dat Tran created SPARK-12753:
--------------------------------

             Summary: Import error during unit test while calling a function 
from reduceByKey()
                 Key: SPARK-12753
                 URL: https://issues.apache.org/jira/browse/SPARK-12753
             Project: Spark
          Issue Type: Question
          Components: PySpark
    Affects Versions: 1.6.0
         Environment: El Capitan, Single cluster Hadoop, Python 3, Spark 1.6, 
Anaconda 
            Reporter: Dat Tran
            Priority: Trivial


The current directory structure for my test script is as follows:
project/
  script/
    __init__.py
    map.py
  test/
    __init__.py
    test_map.py

I have attached the map.py and test_map.py files to this issue.
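
For reference, a minimal sketch of what map_add in map.py presumably looked 
like before the change described below (reconstructed from this description; 
the attached map.py is authoritative, and add might just as well be a helper 
defined in map.py rather than operator.add):

from operator import add

def map_add(df):
    # presumed original: pair each row by key and sum the values per key
    result = df.map(lambda x: (x.key, x.value)).reduceByKey(add)
    return result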

When I run nosetests from the test directory, the test fails with a 
"no module named 'script'" error. 
However, when I modify map_add in map.py so that the call to add inside 
reduceByKey is replaced with a lambda, like this:

def map_add(df):
    result = df.map(lambda x: (x.key, x.value)).reduceByKey(lambda x, y: x + y)
    return result

the test passes.
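
Since the error names the script package, test_map.py presumably reaches 
map_add through an import along these lines (sketched from the description 
above; the attached test_map.py is authoritative):

# presumed import at the top of test_map.py; resolving it requires the
# directory that contains script/ (the project root) to be importable
from script.map import map_add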

Also, when I run the original test_map.py from the project directory, the test 
passes. 

I cannot figure out why the test fails to find the script module when it is 
run from within the test directory.

I have also attached the error log file. Any help would be much appreciated.
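
For what it is worth, putting the project root on sys.path at the top of 
test_map.py would presumably work around the import, but I would rather 
understand why it is needed. A sketch of that workaround, assuming the 
layout above:

import os
import sys

# assumes test_map.py sits in project/test/ as in the layout above
project_root = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
if project_root not in sys.path:
    sys.path.insert(0, project_root)

from script.map import map_add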



