ZhangHB created HDFS-16880:
------------------------------

             Summary: Modify the invokeSingleXXX interfaces to pass the actual file src to the NameNode for debug info
                 Key: HDFS-16880
                 URL: https://issues.apache.org/jira/browse/HDFS-16880
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: rbf
    Affects Versions: 3.3.4
            Reporter: ZhangHB


We found lots of INFO-level logs like the ones below:
{quote}2022-12-30 15:31:04,169 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: / is closed by DFSClient_attempt_1671783180362_213003_m_000077_0_1102875551_1
2022-12-30 15:31:04,186 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: / is closed by DFSClient_NONMAPREDUCE_1198313144_27480
{quote}
The real path of the completeFile call is lost. This is actually caused by:

 
*org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient#invokeSingle(java.lang.String, org.apache.hadoop.hdfs.server.federation.router.RemoteMethod)*

This method instantiates a RemoteLocationContext object:

*RemoteLocationContext loc = new RemoteLocation(nsId, "/", "/");*

and then executes: *Object[] params = method.getParams(loc);*

The problem is right here: because we always use new RemoteParam(), context.getDest() always returns "/". That's why we saw so many incorrect logs.
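
For illustration, here is a simplified, self-contained sketch (mock classes, not the real RouterRpcClient / RemoteMethod / RemoteParam / RemoteLocation implementations) of the mechanism: the RemoteParam placeholder is substituted with the destination of whatever RemoteLocationContext is passed in, and invokeSingle(nsId, method) always supplies a location whose dest is "/".

{code:java}
// Mock sketch only: stand-ins for the RBF classes, to show why the
// resolved src is always "/" when the location is built as ("/", "/").
public class RemoteParamSketch {

  /** Stand-in for RemoteLocationContext/RemoteLocation. */
  static class Location {
    private final String nsId;
    private final String dest;
    Location(String nsId, String dest) { this.nsId = nsId; this.dest = dest; }
    String getNamespace() { return nsId; }
    String getDest() { return dest; }
  }

  /** Stand-in for RemoteParam: resolves to the location's destination. */
  static class RemoteParam {
    Object resolve(Location context) { return context.getDest(); }
  }

  /** Stand-in for RemoteMethod#getParams(context). */
  static Object[] getParams(Location context, Object... params) {
    Object[] resolved = new Object[params.length];
    for (int i = 0; i < params.length; i++) {
      resolved[i] = (params[i] instanceof RemoteParam)
          ? ((RemoteParam) params[i]).resolve(context)
          : params[i];
    }
    return resolved;
  }

  public static void main(String[] args) {
    // invokeSingle(nsId, method) builds a location whose dest is "/" ...
    Location loc = new Location("ns0", "/");
    // ... so the src argument forwarded to the NameNode becomes "/".
    Object[] params = getParams(loc, new RemoteParam(), "clientName");
    System.out.println(params[0]);  // prints "/"
  }
}
{code}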

 

After diving into the invokeSingleXXX source code, I classified the following RPCs into those that need the actual src and those that do not.

 

*RPCs that need the src path:*

addBlock, abandonBlock, getAdditionalDatanode, complete

*RPCs that do not need the src path:*

updateBlockForPipeline, reportBadBlocks, getBlocks, updatePipeline, invokeAtAvailableNs (invoked by: getServerDefaults, getBlockKeys, getTransactionID, getMostRecentCheckpointTxId, versionRequest, getStoragePolicies)

 

After the changes, the src can be passed to the NN correctly.
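
For illustration, a minimal sketch of one possible shape of the change (placeholder names and types, not the actual RouterRpcClient API or the actual patch): callers that know the file path pass it in, so the location is built from the real src instead of the hard-coded "/".

{code:java}
// Hypothetical sketch: placeholder class, not the real RBF RemoteLocation.
public class InvokeSingleSketch {

  /** Placeholder for RemoteLocationContext/RemoteLocation. */
  static class RemoteLocation {
    final String nsId;
    final String src;
    final String dest;
    RemoteLocation(String nsId, String src, String dest) {
      this.nsId = nsId; this.src = src; this.dest = dest;
    }
  }

  /** Current behaviour: the destination is always "/". */
  static RemoteLocation locationFor(String nsId) {
    return new RemoteLocation(nsId, "/", "/");
  }

  /** Proposed variant: the caller supplies the actual src (e.g. for
   *  addBlock, abandonBlock, getAdditionalDatanode, complete), so the
   *  NameNode logs show the real path instead of "/". */
  static RemoteLocation locationFor(String nsId, String src) {
    return new RemoteLocation(nsId, src, src);
  }
}
{code}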

 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
