[ https://issues.apache.org/jira/browse/SPARK-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14396040#comment-14396040 ]

RoCm commented on SPARK-6699:
-----------------------------

I don't think this problem is caused by a missing numpy module.
I didn't think PySpark ever had a numpy dependency, but correct me if I'm wrong.

To confirm this was not caused by a missing numpy, I pointed my PYTHONPATH at a
different Python installation (which does have numpy).

Running pyspark with the new Python interpreter gives me the same error.

From the traceback, the following statement in java_gateway.py is causing this:
    proc = Popen(command, stdin=PIPE, env=env)
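
In case it helps anyone debug this, here is a rough sketch of how that call
could be instrumented to show exactly what command Popen receives. The wrapper
function and the print statements are my own additions, not Spark code:

    from __future__ import print_function
    import os
    from subprocess import Popen, PIPE

    # Hypothetical drop-in replacement for the Popen call at
    # pyspark/java_gateway.py line 65, to log what is being launched.
    def launch_with_diagnostics(command, env):
        print("Launching command:", command)
        print("command[0] exists on disk:", os.path.exists(command[0]))
        try:
            return Popen(command, stdin=PIPE, env=env)
        except OSError as e:  # WindowsError is a subclass of OSError
            print("Popen failed:", e)
            raise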


C:\Users\roXYZ\.babun\cygwin\home\roXYZ\spark-1.3.0-bin-hadoop2.4\bin>pyspark
Running python with PYTHONPATH=C:\Users\roXYZ\.babun\cygwin\home\roXYZ\spark-1.3.0-bin-hadoop2.4\bin\..\python\lib\py4j-0.8.2.1-src.zip;C:\Users\roXYZ\.babun\cygwin\home\roXYZ\spark-1.3.0-bin-hadoop2.4\bin\..\python;
Python 2.7.8 |Anaconda 2.1.0 (64-bit)| (default, Jul  2 2014, 15:12:11) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://binstar.org
Traceback (most recent call last):
  File "C:\Users\roXYZ\.babun\cygwin\home\roXYZ\spark-1.3.0-bin-hadoop2.4\bin\..\python\pyspark\shell.py", line 50, in <module>
    sc = SparkContext(appName="PySparkShell", pyFiles=add_files)
  File "C:\Users\roXYZ\.babun\cygwin\home\roXYZ\spark-1.3.0-bin-hadoop2.4\python\pyspark\context.py", line 108, in __init__
    SparkContext._ensure_initialized(self, gateway=gateway)
  File "C:\Users\roXYZ\.babun\cygwin\home\roXYZ\spark-1.3.0-bin-hadoop2.4\python\pyspark\context.py", line 222, in _ensure_initialized
    SparkContext._gateway = gateway or launch_gateway()
  File "C:\Users\roXYZ\.babun\cygwin\home\roXYZ\spark-1.3.0-bin-hadoop2.4\python\pyspark\java_gateway.py", line 65, in launch_gateway
    proc = Popen(command, stdin=PIPE, env=env)
  File "C:\Users\roXYZ\Anaconda\lib\subprocess.py", line 710, in __init__
    errread, errwrite)
  File "C:\Users\roXYZ\Anaconda\lib\subprocess.py", line 958, in _execute_child
    startupinfo)
WindowsError: [Error 5] Access is denied
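
To show this is generic Windows Popen behavior rather than anything
numpy-related, here is a standalone sketch (no Spark involved) that tries to
execute an extensionless shell script the way bin/pyspark launches its submit
script. My understanding is that Windows refuses to run such a file directly
and Popen raises an OSError, though I have not verified which error code every
configuration reports:

    from __future__ import print_function
    import os
    import tempfile
    from subprocess import Popen, PIPE

    # Write an extensionless script, similar in shape to the
    # spark-submit launcher script shipped in the Spark bin directory.
    path = os.path.join(tempfile.gettempdir(), "fake-spark-submit")
    with open(path, "w") as f:
        f.write("#!/bin/sh\necho hello\n")

    try:
        proc = Popen([path], stdin=PIPE)
        proc.communicate()
    except OSError as e:
        # On Windows, CreateProcess cannot execute a shell script
        # directly; expect something like [Error 5] Access is denied
        # (or possibly error 193, depending on the file).
        print("Popen failed:", e)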


> PySpark Access Denied error in Windows seen only in ver 1.3
> ------------------------------------------------------------
>
>                 Key: SPARK-6699
>                 URL: https://issues.apache.org/jira/browse/SPARK-6699
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.3.0
>         Environment: Windows 8.1 x64
> Windows 7 SP1 x64
>            Reporter: RoCm
>
> I downloaded version 1.3 and tried to run pyspark.
> I hit this error and am unable to proceed (versions 1.2 and 1.1 work fine).
> Pasting the error log below:
> C:\Users\roXYZ\.babun\cygwin\home\roXYZ\spark-1.3.0-bin-hadoop2.4\bin>pyspark
> Running python with PYTHONPATH=C:\Users\roXYZ\.babun\cygwin\home\roXYZ\spark-1.3.0-bin-hadoop2.4\bin\..\python\lib\py4j-0.8.2.1-src.zip;C:\Users\roXYZ\.babun\cygwin\home\roXYZ\spark-1.3.0-bin-hadoop2.4\bin\..\python;
> Python 2.7.8 (default, Jun 30 2014, 16:03:49) [MSC v.1500 32 bit (Intel)] on win32
> Type "help", "copyright", "credits" or "license" for more information.
> No module named numpy
> Traceback (most recent call last):
>   File "C:\Users\roXYZ\.babun\cygwin\home\roXYZ\spark-1.3.0-bin-hadoop2.4\bin\..\python\pyspark\shell.py", line 50, in <module>
>     sc = SparkContext(appName="PySparkShell", pyFiles=add_files)
>   File "C:\Users\roXYZ\.babun\cygwin\home\roXYZ\spark-1.3.0-bin-hadoop2.4\python\pyspark\context.py", line 108, in __init__
>     SparkContext._ensure_initialized(self, gateway=gateway)
>   File "C:\Users\roXYZ\.babun\cygwin\home\roXYZ\spark-1.3.0-bin-hadoop2.4\python\pyspark\context.py", line 222, in _ensure_initialized
>     SparkContext._gateway = gateway or launch_gateway()
>   File "C:\Users\roXYZ\.babun\cygwin\home\roXYZ\spark-1.3.0-bin-hadoop2.4\python\pyspark\java_gateway.py", line 65, in launch_gateway
>     proc = Popen(command, stdin=PIPE, env=env)
>   File "C:\Python27\lib\subprocess.py", line 710, in __init__
>     errread, errwrite)
>   File "C:\Python27\lib\subprocess.py", line 958, in _execute_child
>     startupinfo)
> WindowsError: [Error 5] Access is denied
