> On 19 May 2015, at 03:08, Justin Pihony <[email protected]> wrote:
>
>
> 15/05/18 22:03:14 INFO Executor: Fetching
> http://192.168.56.1:49752/jars/twitter4j-media-support-3.0.3.jar with
> timestamp 1432000973058
> 15/05/18 22:03:14 INFO Utils: Fetching
> http://192.168.56.1:49752/jars/twitter4j-media-support-3.0.3.jar to
> C:\Users\Justin\AppData\Local\Temp\spark-4a37d3e9-34a2-40d4-b09b-6399931f527d\userFiles-65ee748e-4721-4e16-9fe6-65933651fec1\fetchFileTemp8970201232303518432.tmp
> 15/05/18 22:03:14 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
> java.lang.NullPointerException
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
> at org.apache.hadoop.util.Shell.run(Shell.java:455)
> at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
> at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:873)
> at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:853)
> at org.apache.spark.util.Utils$.fetchFile(Utils.scala:443)
> at
You're going to need to set up Hadoop on your system enough for it to execute the
chmod operation via winutils.exe.
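
To see why this surfaces as an NPE deep in ProcessBuilder.start(): on Windows,
FileUtil.chmod shells out to winutils rather than running chmod directly, roughly
the equivalent of (illustrative file path, not the real temp name):

  %HADOOP_HOME%\bin\winutils.exe chmod 755 C:\path\to\fetched-file.jar

When HADOOP_HOME isn't set, Hadoop can't resolve the winutils.exe path, the command
array it hands to ProcessBuilder contains a null, and you get exactly the
NullPointerException above.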
One tactic: grab the Hortonworks Windows version and install it (including setting
up HADOOP_HOME). You don't need to run any of the Hadoop services; you just
need the binaries in the right place.
The other:
1. Grab the copy of the relevant binaries which I've stuck up online:
https://github.com/steveloughran/clusterconfigs/tree/master/clusters/morzine/hadoop_home/bin
2. Install them to some directory hadoop\bin.
3. Set the env variable HADOOP_HOME to the hadoop dir (not the bin one).
4. Set PATH=%PATH%;%HADOOP_HOME%\bin (see the example after this list).
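
For example, in a cmd session (a sketch, assuming you unpacked the binaries under
C:\hadoop\bin; adjust to wherever you actually put them):

  set HADOOP_HOME=C:\hadoop
  set PATH=%PATH%;%HADOOP_HOME%\bin

  rem sanity check: run the same chmod operation the stack trace is failing on
  %HADOOP_HOME%\bin\winutils.exe chmod 755 %TEMP%

Note that set only affects the current console; use setx (or the System Properties
dialog) if you want the variables to stick across sessions.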