[ https://issues.apache.org/jira/browse/SPARK-18644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710507#comment-15710507 ]

Bryan Cutler commented on SPARK-18644:
--------------------------------------

Yeah, [~vanzin] is right, it's a Python thing. See the stack trace below: the
{{inspect}} module imports {{tokenize}}, and Python resolves that to your local file first.
{noformat}
Traceback (most recent call last):
  File "repo/spark/tokenize.py", line 1, in <module>
    from pyspark import SparkContext
  File "repo/spark/python/lib/pyspark.zip/pyspark/__init__.py", line 44, in <module>
  File "repo/spark/python/lib/pyspark.zip/pyspark/context.py", line 33, in <module>
  File "repo/spark/python/lib/pyspark.zip/pyspark/java_gateway.py", line 31, in <module>
  File "repo/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 18, in <module>
  File "/usr/lib/python2.7/pydoc.py", line 56, in <module>
    import sys, imp, os, re, types, inspect, __builtin__, pkgutil, warnings
  File "/usr/lib/python2.7/inspect.py", line 39, in <module>
    import tokenize
  File "repo/spark/tokenize.py", line 1, in <module>
    from pyspark import SparkContext
ImportError: cannot import name SparkContext
{noformat}
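
The shadowing can be reproduced without Spark at all; here is a minimal sketch (the temporary directory just stands in for the submitted script's own directory, which Python prepends to {{sys.path}}):

```python
import os
import sys
import tempfile

# Create a directory containing a file named after a stdlib module.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "tokenize.py"), "w") as f:
    f.write("SHADOWED = True\n")

# Python puts the script's directory at the front of sys.path when a script
# is run; simulate that by prepending our directory and dropping any cached
# stdlib import of tokenize.
sys.path.insert(0, workdir)
sys.modules.pop("tokenize", None)

import tokenize

# The import now resolves to the local file, not the stdlib copy.
print(tokenize.__file__)
print(getattr(tokenize, "SHADOWED", False))
```

Anything in the stdlib that does {{import tokenize}} internally (here, {{inspect}} via {{pydoc}}) gets the user's file instead, which is why only the filename matters.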


> spark-submit fails to run python scripts with specific names
> ------------------------------------------------------------
>
>                 Key: SPARK-18644
>                 URL: https://issues.apache.org/jira/browse/SPARK-18644
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, Spark Submit
>    Affects Versions: 2.0.2
>         Environment: Ubuntu 16.04
>            Reporter: Jussi Jousimo
>            Priority: Minor
>
> I'm trying to run a simple Python script named tokenize.py with spark-submit. 
> The script only imports SparkContext:
> from pyspark import SparkContext
> And I run it with:
> spark-submit --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.0.2 tokenize.py
> However, the script fails with:
> ImportError: cannot import name SparkContext
> I have set all the necessary environment variables, etc. Strangely, the 
> filename seems to be causing this error. If I rename the file to, e.g., tokenizer.py 
> and run it again, it runs fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
