Re: failed to run spark sample on windows

2015-09-30 Thread Renyi Xiong
thanks a lot, it works now after I set %HADOOP_HOME%

On Tue, Sep 29, 2015 at 1:22 PM, saurfang  wrote:

> See
> http://stackoverflow.com/questions/26516865/is-it-possible-to-run-hadoop-jobs-like-the-wordcount-sample-in-the-local-mode,
> https://issues.apache.org/jira/browse/SPARK-6961 and finally
> https://issues.apache.org/jira/browse/HADOOP-10775. The easy solution is to
> download a Windows Hadoop distribution and point %HADOOP_HOME% to that
> location so winutils.exe can be picked up.


Re: failed to run spark sample on windows

2015-09-29 Thread Renyi Xiong
not sure, so I downloaded release 1.4.1 again (with the "Hadoop 2.6 and later"
option) from http://spark.apache.org/downloads.html, assuming the versions are
consistent, and ran the following on Windows 10:

c:\spark-1.4.1-bin-hadoop2.6>bin\run-example HdfsTest 

but still got a similar exception, shown below. (I heard there's a permission
configuration for HDFS; if so, how do I set that up?)

15/09/29 13:03:26 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.NullPointerException
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:873)
    at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:853)
    at org.apache.spark.util.Utils$.fetchFile(Utils.scala:465)
    at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:398)
    at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:390)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
    at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:390)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:193)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
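
(For context on the trace: as far as I can tell, the NullPointerException at
ProcessBuilder.start happens because Hadoop's Shell tries to exec winutils.exe
but never resolved its path, so the command it builds contains null. A rough
Python analogue of that failure mode -- not Hadoop's actual code; run_chmod
and its message are illustrative:)

```python
import subprocess

def run_chmod(winutils_path, mode, target):
    """Mimic what Hadoop's Shell does for FileUtil.chmod on Windows:
    shell out to winutils.exe. If winutils was never located, the
    executable path is None and the call fails before any process starts."""
    if winutils_path is None:
        # Analogous to the NullPointerException from ProcessBuilder.start
        raise TypeError("winutils path is None -- is HADOOP_HOME set?")
    return subprocess.run([winutils_path, "chmod", mode, target], check=True)
```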

On Mon, Sep 28, 2015 at 4:39 PM, Ted Yu  wrote:

> What version of Hadoop are you using?
>
> Is that version consistent with the one that was used to build Spark
> 1.4.0?
>
> Cheers
>
> On Mon, Sep 28, 2015 at 4:36 PM, Renyi Xiong 
> wrote:
>
>> I tried to run the HdfsTest sample on Windows with spark-1.4.0:
>>
>> bin\run-example org.apache.spark.examples.HdfsTest 
>>
>> but got the exception below; anybody have any idea what went wrong here?
>>
>> 15/09/28 16:33:56.565 ERROR SparkContext: Error initializing SparkContext.
>> java.lang.NullPointerException
>>     at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
>>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
>>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>>     at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>>     at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
>>     at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
>>     at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:633)
>>     at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
>>     at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:130)
>>     at org.apache.spark.SparkContext.<init>(SparkContext.scala:515)
>>     at org.apache.spark.examples.HdfsTest$.main(HdfsTest.scala:32)
>>     at org.apache.spark.examples.HdfsTest.main(HdfsTest.scala)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
>>     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
>>     at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
>>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
>>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>
>
>


Re: failed to run spark sample on windows

2015-09-29 Thread saurfang
See
http://stackoverflow.com/questions/26516865/is-it-possible-to-run-hadoop-jobs-like-the-wordcount-sample-in-the-local-mode,
https://issues.apache.org/jira/browse/SPARK-6961 and finally
https://issues.apache.org/jira/browse/HADOOP-10775. The easy solution is to
download a Windows Hadoop distribution and point %HADOOP_HOME% to that
location so winutils.exe can be picked up.
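
Concretely, the setup amounts to something like this before launching Spark
(a sketch; C:\hadoop-2.6.0 is an illustrative path, not a required location):

```python
import os

# Point HADOOP_HOME at a local Windows Hadoop distribution whose bin\
# directory contains winutils.exe. The default path here is illustrative.
hadoop_home = os.environ.get("HADOOP_HOME", r"C:\hadoop-2.6.0")
os.environ["HADOOP_HOME"] = hadoop_home
# Putting %HADOOP_HOME%\bin on PATH also helps other tools find winutils.exe.
os.environ["PATH"] = os.path.join(hadoop_home, "bin") + os.pathsep + os.environ.get("PATH", "")
```

After that, bin\run-example should be able to pick up winutils.exe through
%HADOOP_HOME%\bin.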



--
View this message in context: 
http://apache-spark-developers-list.1001551.n3.nabble.com/failed-to-run-spark-sample-on-windows-tp14393p14407.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: failed to run spark sample on windows

2015-09-28 Thread Ted Yu
What version of Hadoop are you using?

Is that version consistent with the one that was used to build Spark 1.4.0?

Cheers

On Mon, Sep 28, 2015 at 4:36 PM, Renyi Xiong  wrote:

> I tried to run the HdfsTest sample on Windows with spark-1.4.0:
>
> bin\run-example org.apache.spark.examples.HdfsTest 
>
> but got the exception below; anybody have any idea what went wrong here?
>
> 15/09/28 16:33:56.565 ERROR SparkContext: Error initializing SparkContext.
> java.lang.NullPointerException
>     at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
>     at org.apache.hadoop.util.Shell.run(Shell.java:418)
>     at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
>     at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
>     at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
>     at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:633)
>     at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
>     at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:130)
>     at org.apache.spark.SparkContext.<init>(SparkContext.scala:515)
>     at org.apache.spark.examples.HdfsTest$.main(HdfsTest.scala:32)
>     at org.apache.spark.examples.HdfsTest.main(HdfsTest.scala)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
>     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
>     at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>