[ 
https://issues.apache.org/jira/browse/HDFS-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13940763#comment-13940763
 ] 

Brandon Li commented on HDFS-6123:
----------------------------------

[~szetszwo], in the following change the exception is still logged in both 
branches; the difference is the severity level (and that only the trace branch 
includes the full stack trace).

Is my understanding correct: the exception is normal, so we give it the lowest 
severity possible while at the same time making sure it still appears in the 
log?
 
{noformat}
-          LOG.info("exception: ", e);
+        if (LOG.isTraceEnabled()) {
+          LOG.trace("Failed to send data:", e);
+        } else {
+          LOG.info("Failed to send data: " + e);
+        }
{noformat}
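
For context, here is a minimal, self-contained sketch of that pattern using the 
commons-logging API (the class and method names are invented for illustration 
and are not the actual BlockSender code):

{noformat}
import java.io.IOException;
import java.net.SocketTimeoutException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Hypothetical, standalone illustration of the logging pattern above; the
// class and method names are invented and are not the actual BlockSender code.
public class SendLoggingExample {
  private static final Log LOG = LogFactory.getLog(SendLoggingExample.class);

  void sendPacket() {
    try {
      doSend();
    } catch (IOException e) {
      if (LOG.isTraceEnabled()) {
        // Trace level: keep the full stack trace for debugging.
        LOG.trace("Failed to send data:", e);
      } else {
        // Default level: log only the one-line summary (e.toString()), so an
        // expected timeout does not fill the log with a stack trace.
        LOG.info("Failed to send data: " + e);
      }
    }
  }

  private void doSend() throws IOException {
    // Placeholder for the actual packet write; a slow or dead reader would
    // surface here as a SocketTimeoutException.
    throw new SocketTimeoutException(
        "480000 millis timeout while waiting for channel to be ready for write");
  }
}
{noformat}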

> Improve datanode error messages
> -------------------------------
>
>                 Key: HDFS-6123
>                 URL: https://issues.apache.org/jira/browse/HDFS-6123
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Tsz Wo Nicholas Sze
>            Priority: Minor
>         Attachments: 6123_20140318.patch
>
>
> [~yeshavora] found two cases where unnecessary exception stack traces appear 
> in the datanode log:
> - SocketTimeoutException
> {noformat}
> 2014-03-07 03:30:44,567 INFO datanode.DataNode 
> (BlockSender.java:sendPacket(563)) - exception:
> java.net.SocketTimeoutException: 480000 millis timeout while waiting for 
> channel to be ready for write. ch : java.nio.channels.SocketChannel[connected 
> local=/xx.xx.xx.xx:1019 remote=/xx.xx.xx.xx:37997]
>     at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
>     ...
> {noformat}
> - ReplicaAlreadyExistsException
> {noformat}
> 2014-03-07 03:02:39,334 ERROR datanode.DataNode (DataXceiver.java:run(234)) - 
> xx.xx.xx.xx:1019:DataXceiver error processing WRITE_BLOCK operation src: 
> /xx.xx.xx.xx:32959 dest: /xx.xx.xx.xx:1019
> org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block 
> BP-1409640778-xx.xx.xx.xx-1394150965191:blk_1073742158_1334 already exists in 
> state TEMPORARY and thus cannot be created.
>     at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:874)
>     ...
> {noformat}
> Both cases are normal.  They are not bugs.
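
For the second case from the description, a minimal sketch of how an expected 
ReplicaAlreadyExistsException could be reported without an ERROR-level stack 
trace (hypothetical class and method names; this is not the attached 
6123_20140318.patch):

{noformat}
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Hypothetical illustration only; the names below are invented and do not
// reflect the real DataXceiver code or the attached patch.
public class WriteBlockLoggingExample {
  private static final Log LOG = LogFactory.getLog(WriteBlockLoggingExample.class);

  /** Stand-in for the exception type shown in the log above. */
  static class ReplicaAlreadyExistsException extends IOException {
    ReplicaAlreadyExistsException(String msg) { super(msg); }
  }

  void writeBlock() {
    try {
      createTemporaryReplica();
    } catch (ReplicaAlreadyExistsException e) {
      // Expected condition: report the one-line message at INFO instead of
      // an ERROR with a full stack trace.
      LOG.info("Block is already being written, ignoring request: " + e.getMessage());
    } catch (IOException e) {
      // Unexpected failures keep the stack trace at ERROR.
      LOG.error("error processing WRITE_BLOCK operation", e);
    }
  }

  private void createTemporaryReplica() throws IOException {
    // Placeholder for the dataset call that detects the duplicate replica.
    throw new ReplicaAlreadyExistsException(
        "Block blk_1073742158_1334 already exists in state TEMPORARY");
  }
}
{noformat}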



