[ https://issues.apache.org/jira/browse/HDFS-14405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16821676#comment-16821676 ]

He Xiaoqiao commented on HDFS-14405:
------------------------------------

Thanks [~crh] for the ping. IIUC, in ZKDelegationTokenSecretManager,
#storeToken (for getDelegationToken) and #renewToken (for token renewal) both
cache the new DT locally and persist it to the backend ZooKeeper before
returning. Of course, #renewToken first checks whether the token currently
exists; the check path is: local cache first, and if it is not there, the
backend ZooKeeper.
#cancelToken likewise updates both the local cache and the backend ZooKeeper.
#getTokenInfo (for token verification) queries the cache first, and only
requests the token from ZooKeeper if it is not cached.
So I don't see why the same client could not renew a token it just created,
even if the two requests hit different routers.
I do think there could be a performance issue, though: with hundreds of
thousands or millions of tokens, syncing tokens to or querying them from the
routers will be slow. FYI. Please correct me if I am wrong.
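To make the lookup order concrete, here is a minimal, hypothetical sketch of
that cache-then-ZooKeeper flow. It is not the actual Hadoop
ZKDelegationTokenSecretManager code; TokenInfo and ZKBackend are stand-in
names, and the point is only that every read falls back to ZooKeeper on a
local-cache miss, which is why a token stored through one router should still
resolve on another (at the cost of the extra ZooKeeper round trip mentioned
above).

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: illustrates the lookup/update order described in this comment,
// not the real ZKDelegationTokenSecretManager implementation.
public class CachedTokenStoreSketch {

  // Hypothetical view of the ZooKeeper-backed token store.
  interface ZKBackend {
    TokenInfo fetch(String ident);               // read the token's znode
    void persist(String ident, TokenInfo info);  // create/update the znode
    void delete(String ident);                   // remove the znode
  }

  // Hypothetical token metadata (renew date only, for brevity).
  static final class TokenInfo {
    final long renewDate;
    TokenInfo(long renewDate) { this.renewDate = renewDate; }
  }

  private final Map<String, TokenInfo> localCache = new ConcurrentHashMap<>();
  private final ZKBackend zk;

  CachedTokenStoreSketch(ZKBackend zk) { this.zk = zk; }

  // storeToken path: cache locally, then persist to ZooKeeper.
  void storeToken(String ident, TokenInfo info) {
    localCache.put(ident, info);
    zk.persist(ident, info);
  }

  // renewToken path: resolve the token (cache, then ZooKeeper), then update both.
  TokenInfo renewToken(String ident, long newRenewDate) {
    TokenInfo current = getTokenInfo(ident);
    if (current == null) {
      throw new IllegalStateException("Renewal requested for unknown token " + ident);
    }
    TokenInfo renewed = new TokenInfo(newRenewDate);
    localCache.put(ident, renewed);
    zk.persist(ident, renewed);
    return renewed;
  }

  // cancelToken path: drop from the local cache and from ZooKeeper.
  void cancelToken(String ident) {
    localCache.remove(ident);
    zk.delete(ident);
  }

  // getTokenInfo path: local cache first, ZooKeeper only on a miss.
  TokenInfo getTokenInfo(String ident) {
    TokenInfo info = localCache.get(ident);
    if (info == null) {
      info = zk.fetch(ident);        // potentially slow with millions of tokens
      if (info != null) {
        localCache.put(ident, info); // populate the cache for later lookups
      }
    }
    return info;
  }
}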

> RBF: Client should be able to renew DT immediately after it fetched the DT
> --------------------------------------------------------------------------
>
>                 Key: HDFS-14405
>                 URL: https://issues.apache.org/jira/browse/HDFS-14405
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Fengnan Li
>            Assignee: Fengnan Li
>            Priority: Minor
>
> By the current design, once a DT is generated it needs to be synced to the 
> other routers as well as backed up in the state store, so there is a time 
> gap before the other routers know that this token exists.
> Ideally, the same client should be able to renew the token it just created 
> through fetchdt even when the two calls hit two distinct routers.


