It is Hadoop 2.4.0 with Spark 1.3.0.
I found that the problem only happens if there are multiple nodes. If the cluster
has only one node, it works fine.
For example, if the cluster has a spark-master on machine A and a spark-worker
on machine B, this problem happens. If both spark-master and spark-worker are
on the same machine, it works fine.
I used a Spark standalone cluster on Windows 2008. I kept getting the
following error when trying to save an RDD to a Windows shared folder:
rdd.saveAsObjectFile("file:///T:/lab4-win02/IndexRoot01/tobacco-07/myrdd.obj")
15/05/22 16:49:05 ERROR Executor: Exception in task 0.0 in stage 12.0 (TID
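For reference, a minimal sketch of the same kind of save, runnable without a cluster by using a `local[2]` master and a temp directory (the object name `SaveDemo` and the paths are illustrative, not from the thread). Note that `saveAsObjectFile` takes the path as a quoted string URI, and that with a `file://` URI on a real multi-node cluster, every worker must be able to read and write that location, not just the driver:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import java.nio.file.Files

object SaveDemo {
  // Saves a small RDD as an object file and returns the local directory used.
  def run(): String = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("save-demo"))
    try {
      // Fresh directory so saveAsObjectFile does not hit "path already exists".
      val dir = Files.createTempDirectory("myrdd").toString.replace('\\', '/')
      val out = dir + "/myrdd.obj"
      // The path is a quoted string; with file:// on a multi-node cluster,
      // this location must be reachable from every worker.
      sc.parallelize(Seq(1, 2, 3))
        .saveAsObjectFile("file:///" + out.stripPrefix("/"))
      out
    } finally sc.stop()
  }
}
```

On a cluster, a shared path that only the driver machine can see will fail on the executors, which is consistent with the single-node case working and the multi-node case failing.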
The stack trace is related to HDFS.
Can you tell us which Hadoop release you are using?
Is this a secure cluster?
Thanks
On Fri, May 22, 2015 at 1:55 PM, Wang, Ningjun (LNG-NPV)
ningjun.w...@lexisnexis.com wrote: