[ https://issues.apache.org/jira/browse/HDFS-3555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13399802#comment-13399802 ]

Hadoop QA commented on HDFS-3555:
---------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12533151/hdfs-3555-2.txt
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    -1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2689//console

This message is automatically generated.
                
> idle client socket triggers DN ERROR log (should be INFO or DEBUG)
> ------------------------------------------------------------------
>
>                 Key: HDFS-3555
>                 URL: https://issues.apache.org/jira/browse/HDFS-3555
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.20.2
>         Environment: Red Hat Enterprise Linux Server release 6.2 (Santiago)
>            Reporter: Jeff Lord
>            Assignee: Andy Isaacson
>         Attachments: hdfs-3555-2.txt, hdfs-3555.patch
>
>
> Datanode service is logging java.net.SocketTimeoutException at ERROR level.
> This message indicates that the datanode cannot send data to the client
> because the client has stopped reading. It is not really a cause for alarm
> and should be logged at INFO level.
> 2012-06-18 17:47:13 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode DatanodeRegistration(x.x.x.x:50010, storageID=DS-196671195-10.10.120.67-50010-1334328338972, infoPort=50075, ipcPort=50020):DataXceiver
> java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.10.120.67:50010 remote=/10.10.120.67:59282]
> at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
> at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
> at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:397)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:493)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:267)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:163)
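
The attached patch is not shown in this message, but the change the description asks for amounts to a log-level downgrade: catch the idle-client SocketTimeoutException on the datanode's send path and log it at INFO (or DEBUG) instead of ERROR, while keeping other I/O failures at ERROR. Below is a minimal sketch of that pattern using commons-logging, which Hadoop 0.20 uses; the class, interface, and method names are placeholders for illustration and are not the actual hdfs-3555 change.

import java.io.IOException;
import java.net.SocketTimeoutException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

/**
 * Illustrative sketch only. The real fix would live in the datanode's
 * DataXceiver/BlockSender path; the names below are placeholders and do
 * not reflect the actual hdfs-3555 patch.
 */
public class IdleClientLoggingSketch {

  private static final Log LOG =
      LogFactory.getLog(IdleClientLoggingSketch.class);

  /** Hypothetical wrapper around a block-send operation. */
  public void sendBlockToClient(BlockSendOperation op, String clientAddr) {
    try {
      op.run();
    } catch (SocketTimeoutException ste) {
      // The client stopped reading and the write timed out. Nothing is
      // wrong on the datanode side, so log at INFO rather than ERROR.
      LOG.info("Timed out sending block to " + clientAddr
          + " (client appears idle): " + ste.getMessage());
    } catch (IOException ioe) {
      // Other I/O failures are still unexpected and stay at ERROR.
      LOG.error("Error sending block to " + clientAddr, ioe);
    }
  }

  /** Placeholder for the actual block-transfer work. */
  public interface BlockSendOperation {
    void run() throws IOException;
  }
}

Because SocketTimeoutException is a subclass of IOException, the more specific catch clause must come first; that ordering is what separates the idle-client case from genuine transfer errors.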

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
