What is the Hadoop version?

You could check the log on a datanode around that time and post any suspicious errors. For example, you can trace a particular block through the client and datanode logs.

Most likely it is not a NameNode issue, but you can check the NameNode log as well.

Raghu.
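
As a rough illustration of what "block locations" refers to below: on releases where FileSystem.getFileBlockLocations(FileStatus, long, long) is available, a small client along these lines can ask the namenode which datanodes hold each block of a file, which can help when matching a block against a specific datanode's log. This is only a sketch (the class name is made up), not something tested against the version in question:

    import java.util.Arrays;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical helper: prints which datanodes hold each block of a file.
    public class BlockLocationDump {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();     // picks up hadoop-site.xml
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path(args[0]);                // file to inspect
        FileStatus status = fs.getFileStatus(file);
        // Ask the namenode for the datanodes holding each block of the file.
        BlockLocation[] blocks =
            fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation b : blocks) {
          System.out.println("offset=" + b.getOffset()
              + " length=" + b.getLength()
              + " hosts=" + Arrays.toString(b.getHosts()));
        }
        fs.close();
      }
    }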

Xavier Stevens wrote:
Does anyone have an expected or experienced write speed to HDFS outside
of Map/Reduce?  Any recommendations on properties to tweak in
hadoop-site.xml?
Currently I have a multi-threaded writer where each thread is writing to
a different file.  But after a while I get this:
java.io.IOException: Could not get block locations. Aborting...
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2081)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1300(DFSClient.java:1702)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1818)
Which is perhaps indicating that the namenode is overwhelmed?

Thanks,
-Xavier
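
For context, a writer like the one described above might look roughly like the following minimal sketch, which uses the standard FileSystem API and opens one output stream per thread. The class name, thread count, paths, and payload size are made up; the real code presumably differs:

    import java.io.OutputStream;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical stand-in for the writer described above: N threads, one HDFS file each.
    public class MultiThreadedHdfsWriter {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();        // picks up hadoop-site.xml
        final FileSystem fs = FileSystem.get(conf);
        final byte[] payload = new byte[64 * 1024];      // dummy 64 KB buffer

        int numThreads = 4;                              // made-up thread count
        ExecutorService pool = Executors.newFixedThreadPool(numThreads);
        for (int i = 0; i < numThreads; i++) {
          // Each thread writes to its own file (paths are made up).
          final Path out = new Path("/tmp/writer-test/part-" + i);
          pool.submit(new Runnable() {
            public void run() {
              try {
                OutputStream os = fs.create(out);        // one open stream per thread
                for (int n = 0; n < 1000; n++) {
                  os.write(payload);
                }
                os.close();
              } catch (Exception e) {
                e.printStackTrace();
              }
            }
          });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);        // wait for all writers to finish
        fs.close();
      }
    }

Each thread has its own file and its own open stream, matching the setup described above, so no DFSOutputStream is shared between threads.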

