I solved this problem by following this article:
http://qnalist.com/questions/4994960/run-spark-unit-test-on-windows-7

1) Download a compiled winutils.exe from
http://social.msdn.microsoft.com/Forums/windowsazure/en-US/28a57efb-082b-424b-8d9e-731b1fe135de/please-read-if-experiencing-job-failures?forum=hdinsight
2) Put the file into d:\winutil\bin
3) Add this to my test: System.setProperty("hadoop.home.dir", "d:\\winutil\\") (the sketch below shows where this goes)
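
For reference, here is a minimal sketch of that test setup (the WindowsSparkTest name and the local[2] master are just illustrative, not from the article):

import org.apache.spark.{SparkConf, SparkContext}

object WindowsSparkTest {
  def main(args: Array[String]): Unit = {
    // Set this before the first SparkContext (and any Hadoop class) is used,
    // otherwise org.apache.hadoop.util.Shell initializes without it.
    System.setProperty("hadoop.home.dir", "d:\\winutil\\")

    val conf = new SparkConf().setAppName("windows-test").setMaster("local[2]")
    val sc = new SparkContext(conf)
    try {
      // Trivial job just to confirm the local setup works.
      println(sc.parallelize(1 to 10).count())
    } finally {
      sc.stop()
    }
  }
}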

That solved my original problem, but then I got a new error:

java.lang.NullPointerException
        at java.lang.ProcessBuilder.start(Unknown Source)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:873)
        at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:853)
        at org.apache.spark.util.Utils$.fetchFile(Utils.scala:411)
        at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$6.apply(Executor.scala:350)
        at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$6.apply(Executor.scala:347)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
        at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:347)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)

Very frustrating. Has anybody successfully gotten Spark running on Windows?


Regards,

Ningjun Wang
Consulting Software Engineer
LexisNexis
121 Chanlon Road
New Providence, NJ 07974-1541

-----Original Message-----
From: Marcelo Vanzin [mailto:van...@cloudera.com] 
Sent: Wednesday, January 28, 2015 5:15 PM
To: Wang, Ningjun (LNG-NPV)
Cc: user@spark.apache.org
Subject: Re: Spark on Windows 2008 R2 server does not work

https://issues.apache.org/jira/browse/SPARK-2356

Take a look through the comments; there are some workarounds listed there.
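
One simple pre-flight check (purely illustrative, not one of the JIRA workarounds): fail fast with a readable message if winutils.exe is not where hadoop.home.dir or HADOOP_HOME points, instead of hitting Hadoop's NullPointerException later:

import java.io.File

object WinutilsCheck {
  def main(args: Array[String]): Unit = {
    // Resolve the Hadoop home the same way you configure it for the test.
    val home = sys.props.get("hadoop.home.dir")
      .orElse(sys.env.get("HADOOP_HOME"))
      .getOrElse(sys.error("Neither hadoop.home.dir nor HADOOP_HOME is set"))
    val winutils = new File(new File(home, "bin"), "winutils.exe")
    require(winutils.isFile, s"winutils.exe not found at $winutils")
    println(s"Found winutils at $winutils")
  }
}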

On Wed, Jan 28, 2015 at 1:40 PM, Wang, Ningjun (LNG-NPV) 
<ningjun.w...@lexisnexis.com> wrote:
> Has anybody successfully installed and run spark-1.2.0 on Windows 2008
> R2 or Windows 7? How do you get it to work?
>
>
>
> Regards,
>
>
>
> Ningjun Wang
>
> Consulting Software Engineer
>
> LexisNexis
>
> 121 Chanlon Road
>
> New Providence, NJ 07974-1541
>
>
>
> From: Wang, Ningjun (LNG-NPV) [mailto:ningjun.w...@lexisnexis.com]
> Sent: Tuesday, January 27, 2015 10:28 PM
> To: user@spark.apache.org
> Subject: Spark on Windows 2008 R2 server does not work
>
>
>
> I downloaded and installed the spark-1.2.0-bin-hadoop2.4.tgz pre-built
> version on Windows 2008 R2 server. When I submit a job using
> spark-submit, I get the following error:
>
>
>
> WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform ... using builtin-java classes where applicable
>
> ERROR org.apache.hadoop.util.Shell: Failed to locate the winutils binary
> in the hadoop binary path
>
> java.io.IOException: Could not locate executable null\bin\winutils.exe in
> the Hadoop binaries.
>
>         at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:318)
>         at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:333)
>         at org.apache.hadoop.util.Shell.<clinit>(Shell.java:326)
>         at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
>         at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:93)
>         at org.apache.hadoop.security.Groups.<init>(Groups.java:77)
>         at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
>         at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:255)
>
>
>
>
>
> Please advise. Thanks.
>
>
>
>
>
> Ningjun
>
>
>
>



--
Marcelo

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
