All - this issue showed up when I was tearing down a Spark context and
creating a new one. Often, I was then unable to write to HDFS due to this
error. I subsequently switched to a different implementation where, instead
of tearing down and re-initializing the Spark context, I'd instead submit a
new job to a single long-lived context.
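
Roughly, the long-lived-context approach looks like the sketch below; the
object name, paths, and the placeholder transformation are illustrative,
not my actual code:

import org.apache.spark.{SparkConf, SparkContext}

object LongLivedContext {
  // One SparkContext for the life of the driver process; jobs are
  // submitted as actions against it instead of recreating the context.
  lazy val sc: SparkContext =
    new SparkContext(new SparkConf().setAppName("long-lived-app"))

  // A "job" here is just another action on the shared context.
  def runJob(inputPath: String, outputPath: String): Unit =
    sc.textFile(inputPath)
      .map(_.toUpperCase)        // placeholder transformation
      .saveAsTextFile(outputPath)
}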
Hey,
Did you find a solution for this issue? We are seeing similar errors in our
DataNode logs. Appreciate any help.
2015-05-15 10:51:43,615 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
NttUpgradeDN1:50010:DataXceiver error processing WRITE_BLOCK operation
src:
I am seeing this on Hadoop 2.4.0.
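
One thing we are planning to try on our side is raising the DataNode
transfer-thread limit, since DataXceiver errors under heavy concurrent
writes are sometimes mitigated by it. This is just a guess for our case,
not a confirmed fix, and the value below is arbitrary:

<!-- hdfs-site.xml on each DataNode (DataNode restart required) -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8192</value>  <!-- default is 4096 -->
</property>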
Thanks for your suggestions, I will try those and let you know if they
help!
On Sat, May 16, 2015 at 1:57 AM, Steve Loughran ste...@hortonworks.com
wrote:
What version of Hadoop are you seeing this on?
On 15 May 2015, at 20:03, Puneet Kapoor wrote:
Hi all, as the last stage of execution, I am writing out a dataset to disk.
Before I do this, I force the DAG to resolve so that this is the only job
left in the pipeline. The dataset in question is not especially large (a few
gigabytes). During this step, however, HDFS will inevitably crash. I will
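
For reference, the force-then-write pattern I am describing looks roughly
like this; the paths and the transformation are placeholders, and
persist-plus-count is just one common way to force materialization, not
necessarily what my job does verbatim:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

val sc = new SparkContext(new SparkConf().setAppName("force-then-write"))

// Build the pipeline, then force it with an action so the final
// save is the only job still talking to HDFS.
val dataset = sc.textFile("hdfs:///path/to/input")  // placeholder path
  .map(_.split(",")(0))                             // placeholder transform

val resolved = dataset.persist(StorageLevel.MEMORY_AND_DISK)
resolved.count()  // action that resolves the DAG

resolved.saveAsTextFile("hdfs:///path/to/output")   // placeholder path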