mccormickt12 commented on PR #6446:
URL: https://github.com/apache/hadoop/pull/6446#issuecomment-1897870432

   > > Ok so it seems we didn't remove any exception logging right? We are just additionally keeping track of them and logging them again at the end?
   >
   > correct. store all exceptions in a map and then print them if we fail the request at the end.
   
   It's both, right? You keep each stack trace as a WARN:
   
   ```
   String msg = String.format("Failed to read block %s for file %s from datanode %s. "
           + "Exception is %s. Retry with the current or next available datanode.",
       getCurrentBlock().getBlockName(), src, currentNode.getXferAddr(), e);
   DFSClient.LOG.warn(msg);
   ```
   
   AND you add it to the map and print them later, right?
   
   ```
   exceptionMap.get(datanode).add(e);
   ```
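
   For reference, a minimal sketch of the aggregate-then-report pattern as I understand it from this PR (ReadExceptionTracker and the method names here are mine for illustration, not the PR's actual code):

   ```
   import java.io.IOException;
   import java.util.ArrayList;
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;

   import org.apache.hadoop.hdfs.DFSClient;
   import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

   // Illustrative only; the names below are not the PR's actual code.
   class ReadExceptionTracker {
     private final Map<DatanodeInfo, List<IOException>> exceptionMap = new HashMap<>();

     void record(DatanodeInfo datanode, IOException e) {
       // Keep every per-datanode failure so the full history is available later.
       exceptionMap.computeIfAbsent(datanode, k -> new ArrayList<>()).add(e);
     }

     void logAllOnFinalFailure() {
       // Dump the accumulated stack traces only once the read has actually failed.
       for (Map.Entry<DatanodeInfo, List<IOException>> entry : exceptionMap.entrySet()) {
         for (IOException e : entry.getValue()) {
           DFSClient.LOG.error("Read failed; earlier exception from datanode {}:",
               entry.getKey(), e);
         }
       }
     }
   }
   ```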
   
   
   So this part of the PR description, "The existence of exception stacktrace in the log has caused multiple hadoop users at Linkedin to consider this WARN message as the RC/fatal error for their jobs. We would like to improve the log message and avoid sending the stacktrace to dfsClient.LOG when a read succeeds.", doesn't seem to actually be happening: we are still printing the WARN each time, and these stack traces will still get printed regardless of whether the call fails. Perhaps you meant to change this to INFO?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.


