Hi!

I downloaded and extracted Spark to a local folder under Windows 7 and have
successfully played with it in the pyspark interactive shell.
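
For example, an interactive session roughly like this works fine (illustrative
commands, not my exact session):

C:\spark-1.2.1-bin-hadoop2.4\bin>pyspark.cmd
>>> sc.parallelize(range(100)).sum()
4950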

BUT

When I try to use spark-submit (for example: spark-submit pi.py) I get:

C:\spark-1.2.1-bin-hadoop2.4\bin>spark-submit.cmd pi.py
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/02/26 18:21:37 INFO SecurityManager: Changing view acls to: sergun
15/02/26 18:21:37 INFO SecurityManager: Changing modify acls to: sergun
15/02/26 18:21:37 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(sergun); users with modify permissions: Set(user)
15/02/26 18:21:38 INFO Slf4jLogger: Slf4jLogger started
15/02/26 18:21:38 INFO Remoting: Starting remoting
15/02/26 18:21:39 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@mypc:56640]
15/02/26 18:21:39 INFO Utils: Successfully started service 'sparkDriver' on port 56640.
15/02/26 18:21:39 INFO SparkEnv: Registering MapOutputTracker
15/02/26 18:21:39 INFO SparkEnv: Registering BlockManagerMaster
15/02/26 18:21:39 INFO DiskBlockManager: Created local directory at
C:\Users\sergun\AppData\Local\Temp\spark-adddeb0b-d6c8-4720-92e3-05255d46ea66\spark-c65cd4
06-28a4-486d-a1ad-92e4814df6fa
15/02/26 18:21:39 INFO MemoryStore: MemoryStore started with capacity 265.0
MB
15/02/26 18:21:40 WARN NativeCodeLoader: Unable to load native-hadoop
library fo
r your platform... using builtin-java classes where applicable
15/02/26 18:21:40 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable C:\\bin\winutils.exe in the Hadoop binaries.
        at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:318)
        at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:333)
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:326)
        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
        at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:93)
        at org.apache.hadoop.security.Groups.<init>(Groups.java:77)
        at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:255)
        at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:283)
        at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:44)
        at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:214)
        at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
        at org.apache.spark.util.Utils$.getSparkOrYarnConfig(Utils.scala:1873)
        at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:105)
        at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:180)
        at org.apache.spark.SparkEnv$.create(SparkEnv.scala:308)
        at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:159)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:240)
        at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
        at java.lang.reflect.Constructor.newInstance(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:214)
        at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
        at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Unknown Source)
15/02/26 18:21:41 INFO HttpFileServer: HTTP File server directory is C:\Users\sergun\AppData\Local\Temp\spark-79f2a924-4fff-432c-abc8-ac9c6c4ee0c7\spark-1f295e28-f0db-4daf-b877-2a47990b6e88
15/02/26 18:21:41 INFO HttpServer: Starting HTTP Server
15/02/26 18:21:41 INFO Utils: Successfully started service 'HTTP file server' on port 56641.
15/02/26 18:21:41 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/02/26 18:21:41 INFO SparkUI: Started SparkUI at http://mypc:4040
15/02/26 18:21:42 INFO Utils: Copying C:\spark-1.2.1-bin-hadoop2.4\bin\pi.py to C:\Users\sergun\AppData\Local\Temp\spark-76a21028-ccce-4308-9e70-09c3cfa76477\spark-56b32155-2779-4345-9597-2bfa6a87a51d\pi.py
Traceback (most recent call last):
  File "C:/spark-1.2.1-bin-hadoop2.4/bin/pi.py", line 29, in <module>
    sc = SparkContext(appName="PythonPi")
  File "C:\spark-1.2.1-bin-hadoop2.4\python\pyspark\context.py", line 105,
in __
init__
    conf, jsc)
  File "C:\spark-1.2.1-bin-hadoop2.4\python\pyspark\context.py", line 153,
in _d
o_init
    self._jsc = jsc or self._initialize_context(self._conf._jconf)
  File "C:\spark-1.2.1-bin-hadoop2.4\python\pyspark\context.py", line 202,
in _i
nitialize_context
    return self._jvm.JavaSparkContext(jconf)
  File
"C:\spark-1.2.1-bin-hadoop2.4\python\lib\py4j-0.8.2.1-src.zip\py4j\java_g
ateway.py", line 701, in __call__
  File
"C:\spark-1.2.1-bin-hadoop2.4\python\lib\py4j-0.8.2.1-src.zip\py4j\protoc
ol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling
None.org.apache.spa
rk.api.java.JavaSparkContext.
: java.lang.NullPointerException
        at java.lang.ProcessBuilder.start(Unknown Source)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:873)
        at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:853)
        at org.apache.spark.util.Utils$.fetchFile(Utils.scala:445)
        at org.apache.spark.SparkContext.addFile(SparkContext.scala:1004)
        at org.apache.spark.SparkContext$$anonfun$12.apply(SparkContext.scala:288)
        at org.apache.spark.SparkContext$$anonfun$12.apply(SparkContext.scala:288)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:288)
        at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
        at java.lang.reflect.Constructor.newInstance(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:214)
        at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
        at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Unknown Source)

What am I doing wrong?

Should I run some scripts before spark-submit.cmd?
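
My guess (and it is only a guess) from the ERROR line above is that Spark's
Hadoop layer wants a winutils.exe helper that my machine does not have. Would
downloading winutils.exe separately and pointing HADOOP_HOME at a folder that
contains bin\winutils.exe before submitting be the right fix? Roughly something
like this, where C:\hadoop is just a made-up example path:

C:\>rem C:\hadoop is a hypothetical folder that contains bin\winutils.exe
C:\>set HADOOP_HOME=C:\hadoop
C:\>set PATH=%HADOOP_HOME%\bin;%PATH%
C:\>C:\spark-1.2.1-bin-hadoop2.4\bin\spark-submit.cmd pi.py

Or is there an official setup script in the distribution that takes care of this?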

Regards,
Sergey.


