[ https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17788243#comment-17788243 ]

ASF GitHub Bot commented on HADOOP-18872:
-----------------------------------------

anujmodi2021 opened a new pull request, #6284:
URL: https://github.com/apache/hadoop/pull/6284

   Jira Ticket: https://issues.apache.org/jira/browse/HADOOP-18872
   Trunk PR: https://github.com/apache/hadoop/pull/6019
   Cherry-picked commit: https://github.com/apache/hadoop/commit/000a39ba2d2131ac158e23b35eae8c1329681bff
   
   Description: 
   A bug was identified where the retry count in the client correlation id was 
wrongly reported for sub-sequential and parallel operations triggered by a 
single file system call. This was due to reusing the same tracing context for 
all such calls.
   We create a new tracing context as soon as an HDFS call comes in, and then 
keep passing that same tracing context (TC) through all the client calls.
   
   For instance, when we get a createFile call, we first perform metadata 
operations. If those metadata operations succeed only after a few retries, the 
tracing context carries that retry count. When the actual create call is then 
made, the same retry count is used to construct the headers 
(clientCorrelationId). Although the create operation never failed, we still 
see the retry count from the previous request.
   
   The fix is to use a new tracing context object for each network call made. 
All the sub-sequential and parallel operations will share the same primary 
request id to correlate them, yet each will track its own retry count.
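   The fix above can be sketched roughly as follows. This is a simplified, 
hypothetical illustration, not the actual ABFS TracingContext API: the class 
name, fields, and header format are stand-ins. The key idea is a copy 
constructor that preserves the shared primary request id but resets the 
per-call retry count, so one call's retries never leak into the next call's 
headers.

```java
import java.util.UUID;

// Simplified stand-in for a tracing context; field and method names are
// illustrative, not the real ABFS classes.
class TracingContext {
    final String primaryRequestId; // shared by all calls of one FS operation
    int retryCount;                // per-network-call, starts at zero

    TracingContext(String primaryRequestId) {
        this.primaryRequestId = primaryRequestId;
        this.retryCount = 0;
    }

    // Copy constructor: same primary request id for correlation, but the
    // retry count resets, giving each network call its own retry tracing.
    TracingContext(TracingContext other) {
        this.primaryRequestId = other.primaryRequestId;
        this.retryCount = 0;
    }

    String buildHeader() {
        return primaryRequestId + ":" + retryCount;
    }
}

public class TracingDemo {
    public static void main(String[] args) {
        TracingContext op = new TracingContext(UUID.randomUUID().toString());

        // A metadata call that retried twice before succeeding.
        TracingContext metadataCall = new TracingContext(op);
        metadataCall.retryCount = 2;

        // The subsequent create call gets a fresh copy, not the shared
        // object, so its header reports a retry count of zero.
        TracingContext createCall = new TracingContext(op);

        System.out.println(metadataCall.buildHeader());
        System.out.println(createCall.buildHeader());
    }
}
```

   With the old behavior (one shared mutable context), the create call's 
header would have reported the metadata call's retry count of 2 even though 
create itself never retried.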




> ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-18872
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18872
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: build
>    Affects Versions: 3.3.6
>            Reporter: Anmol Asrani
>            Assignee: Anuj Modi
>            Priority: Major
>              Labels: Bug, pull-request-available
>             Fix For: 3.4.0
>
>
> There was a bug identified where the retry count in the client correlation 
> id was wrongly reported for sub-sequential and parallel operations triggered 
> by a single file system call. This was due to reusing the same tracing 
> context for all such calls.
> We create a new tracing context as soon as an HDFS call comes in, and then 
> keep passing that same tracing context (TC) through all the client calls.
> For instance, when we get a createFile call, we first perform metadata 
> operations. If those metadata operations succeed only after a few retries, 
> the tracing context carries that retry count. When the actual call for 
> create is made, the same retry count is used to construct the headers 
> (clientCorrelationId). Although the create operation never failed, we still 
> see the retry count from the previous request.
> The fix is to use a new tracing context object for each network call made. 
> All the sub-sequential and parallel operations will share the same primary 
> request id to correlate them, yet each will track its own retry count.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
