Jean-Daniel,

Property dfs.datanode.socket.write.timeout is not set in hadoop-site.xml.
It does not appear in hadoop-default.xml either.
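If I understand your earlier suggestion correctly, the workaround would be to add the block below to hadoop-site.xml on the data nodes (copied from your mail; I assume a value of 0 disables the write timeout entirely):

<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>0</value>
</property>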
Do you know why the data node sockets timed out? The host does not look overloaded.

Thank you for your cooperation,
M.

On Mon, Jan 26, 2009 at 3:33 PM, Jean-Daniel Cryans <jdcry...@apache.org> wrote:

> Michael,
>
> You don't see anything in your region server logs? Mmm, we usually get
> those if we don't set the following in the hadoop-site.xml file:
>
> <property>
>   <name>dfs.datanode.socket.write.timeout</name>
>   <value>0</value>
> </property>
>
> See if it stops the exception. In any case, until Hadoop 0.18.3 and
> Hadoop 0.19.1 are out, you should probably still use that config to be safe.
>
> J-D
>
> On Mon, Jan 26, 2009 at 7:31 AM, Michael Dagaev
> <michael.dag...@gmail.com> wrote:
>
>> Hi, all
>>
>> I found a lot of SocketTimeoutExceptions in the data node logs (see
>> below), but I did not find any errors in the region server logs.
>>
>> Does this exception indicate a real problem, or can we just ignore it?
>>
>> I read both Jean-Adrien's mail and HBASE-24, but I understood neither
>> the root cause of the problem nor a solution for it.
>>
>> Did anybody run into this problem and solve it?
>>
>> Thank you for your cooperation,
>> M.
>>
>> P.S. The SocketTimeoutException:
>>
>> ERROR org.apache.hadoop.dfs.DataNode:
>> DatanodeRegistration(10.254.55.239:50010,
>> storageID=DS-1287311144-10.254.55.239-50010-1232442318823,
>> infoPort=50075, ipcPort=50020):DataXceiver:
>> java.net.SocketTimeoutException: 480000 millis timeout while waiting
>> for channel to be ready for write. ch :
>> java.nio.channels.SocketChannel[connected local=/<data node host>:50010
>> remote=/<data node host>:55417]
>>         at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:185)
>>         at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
>>         at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
>>         at org.apache.hadoop.dfs.DataNode$BlockSender.sendChunks(DataNode.java:1917)
>>         at org.apache.hadoop.dfs.DataNode$BlockSender.sendBlock(DataNode.java:2011)
>>         at org.apache.hadoop.dfs.DataNode$DataXceiver.readBlock(DataNode.java:1140)
>>         at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:1068)
>>         at java.lang.Thread.run(Thread.java:619)