[ 
https://issues.apache.org/jira/browse/HDFS-17273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17794528#comment-17794528
 ] 

ASF GitHub Bot commented on HDFS-17273:
---------------------------------------

hfutatzhanghb commented on code in PR #6321:
URL: https://github.com/apache/hadoop/pull/6321#discussion_r1419950588


##########
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java:
##########
@@ -934,14 +936,14 @@ void waitForAckedSeqno(long seqno) throws IOException {
             }
             try {
               dataQueue.wait(1000); // when we receive an ack, we notify on
-              long duration = Time.monotonicNow() - begin;
-              if (duration > writeTimeout) {
+              long duration = Time.monotonicNowNanos() - begin;

Review Comment:
   > Got it, thanks for the comment, @hfutatzhanghb.
   > 
   > ```
   > if (TimeUnit.NANOSECONDS.toMillis(duration) > writeTimeout) {
   >     LOG.error("No ack received, took {}ms (threshold={}ms). "
   >         + "File being written: {}, block: {}, "
   >         + "Write pipeline datanodes: {}.",
   >         TimeUnit.NANOSECONDS.toMillis(duration), writeTimeout, src, block, nodes);
   >     throw new InterruptedIOException("No ack received after " +
   >       TimeUnit.NANOSECONDS.toSeconds(duration) + "s and a timeout of " +
   >         TimeUnit.MILLISECONDS.toSeconds(writeTimeout) + "s");
   > }
   > ```
   > 
   > How about this?
   
   @hfutatzhanghb Sir, thanks a lot for your review, and you are welcome. I did
   not use nanoseconds as the unit here because I think printing error logs in
   nanoseconds is not very intuitive, so I kept it the way it was. As this PR's
   title says, its goal is better debugging. Of course, it could be
   nanoseconds; let's hear other people's thoughts. Thanks again.
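   
   For reference, here is a minimal standalone sketch of the unit mismatch
   being discussed, assuming `duration` is now computed in nanoseconds while
   the `writeTimeout` threshold stays in milliseconds. `System.nanoTime()`
   stands in for Hadoop's `Time.monotonicNowNanos()`, and the class and
   variable names are illustrative only, not the PR's final code:
   
   ```java
   import java.util.concurrent.TimeUnit;
   
   public class AckTimeoutSketch {
     public static void main(String[] args) throws InterruptedException {
       // System.nanoTime() stands in for Time.monotonicNowNanos().
       long begin = System.nanoTime();
       long writeTimeout = 50; // threshold configured in milliseconds
   
       Thread.sleep(100); // pretend we waited on dataQueue for an ack
   
       long duration = System.nanoTime() - begin; // now in nanoseconds
   
       // Comparing raw nanoseconds against a millisecond threshold would
       // almost always fire, so convert before comparing and logging:
       if (TimeUnit.NANOSECONDS.toMillis(duration) > writeTimeout) {
         System.err.printf("No ack received, took %dms (threshold=%dms)%n",
             TimeUnit.NANOSECONDS.toMillis(duration), writeTimeout);
       }
     }
   }
   ```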





> Change the way of computing some local variables duration for better debugging
> ------------------------------------------------------------------------------
>
>                 Key: HDFS-17273
>                 URL: https://issues.apache.org/jira/browse/HDFS-17273
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Minor
>              Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
