[ https://issues.apache.org/jira/browse/HDFS-16748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17598789#comment-17598789 ]

ASF GitHub Bot commented on HDFS-16748:
---------------------------------------

Hexiaoqiao commented on PR #4813:
URL: https://github.com/apache/hadoop/pull/4813#issuecomment-1233881312

   @ZanderXu Thanks for involving me here. IIUC, this improvement will help the 
Router forward renewLease to only one (or a subset of) Namenodes, right? If so, it 
makes sense to me. Only one nit comment: the title 'by namespace id and iNodeId 
via RBF' does not seem to fully match the updates. I would like to hear some other 
folks' comments.




> DFSClient should uniquely identify writing files by namespace id and iNodeId
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-16748
>                 URL: https://issues.apache.org/jira/browse/HDFS-16748
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: ZanderXu
>            Assignee: ZanderXu
>            Priority: Critical
>              Labels: pull-request-available
>
> DFSClient should distinguish the files being written by namespaceId and iNodeId, 
> because files being written may belong to different namespaces while having the 
> same iNodeId. The related code is as follows:
> {code:java}
> public void putFileBeingWritten(final long inodeId,
>       final DFSOutputStream out) {
>     synchronized(filesBeingWritten) {
>       filesBeingWritten.put(inodeId, out);
>       // update the last lease renewal time only when there was no
>       // writes. once there is one write stream open, the lease renewer
>       // thread keeps it updated well within anyone's expiration time.
>       if (lastLeaseRenewal == 0) {
>         updateLastLeaseRenewal();
>       }
>     }
>   }
> {code}
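
As the snippet above shows, {{filesBeingWritten}} is keyed by the bare inodeId, so
streams from two namespaces can collide. A minimal, self-contained sketch (not the
actual patch; the key class and field names here are hypothetical) of keying the
map by a (namespaceId, inodeId) pair instead:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical composite key: a (namespaceId, inodeId) pair uniquely
// identifies a file being written across federated namespaces.
public class FileBeingWrittenKey {
  private final String namespaceId;
  private final long inodeId;

  public FileBeingWrittenKey(String namespaceId, long inodeId) {
    this.namespaceId = namespaceId;
    this.inodeId = inodeId;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof FileBeingWrittenKey)) return false;
    FileBeingWrittenKey k = (FileBeingWrittenKey) o;
    return inodeId == k.inodeId && namespaceId.equals(k.namespaceId);
  }

  @Override
  public int hashCode() {
    return Objects.hash(namespaceId, inodeId);
  }

  public static void main(String[] args) {
    Map<FileBeingWrittenKey, String> filesBeingWritten = new HashMap<>();
    // The same inodeId under two different namespaces no longer collides:
    filesBeingWritten.put(new FileBeingWrittenKey("ns1", 16386L), "stream-ns1");
    filesBeingWritten.put(new FileBeingWrittenKey("ns2", 16386L), "stream-ns2");
    System.out.println(filesBeingWritten.size()); // prints 2
  }
}
```

With the original long-keyed map, the second put above would have silently
replaced the first stream's entry.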



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
