[ 
https://issues.apache.org/jira/browse/HDFS-17148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17752897#comment-17752897
 ] 

ASF GitHub Bot commented on HDFS-17148:
---------------------------------------

simbadzina commented on code in PR #5936:
URL: https://github.com/apache/hadoop/pull/5936#discussion_r1290501755


##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/SQLDelegationTokenSecretManager.java:
##########
@@ -153,6 +163,39 @@ public synchronized TokenIdent cancelToken(Token<TokenIdent> token,
     return super.cancelToken(token, canceller);
   }
 
+  /**
+   * Obtain a list of tokens that will be considered for cleanup, based on the last
+   * time the token was updated in SQL. This list may include tokens that are not
+   * expired and should not be deleted (e.g. if the token was last renewed using a
+   * higher renewal interval).
+   * The number of results is limited to reduce performance impact. Some level of
+   * contention is expected when multiple routers run cleanup simultaneously.
+   * @return Map of tokens that have not been updated in SQL after the token renewal
+   *         period.
+   */
+  @Override
+  protected Map<TokenIdent, DelegationTokenInformation> getTokensForCleanup() {
+    Map<TokenIdent, DelegationTokenInformation> tokens = new HashMap<>();
+    try {
+      // Query SQL for tokens that haven't been updated after
+      // the last token renewal period.
+      long maxModifiedTime = Time.now() - getTokenRenewInterval();
+      Map<byte[], byte[]> tokenInfoBytesList = selectTokenInfos(maxModifiedTime,
+          this.maxTokenCleanupResults);
+
+      LOG.info("Found {} tokens for cleanup", tokenInfoBytesList.size());
+      for (Map.Entry<byte[], byte[]> tokenInfoBytes : tokenInfoBytesList.entrySet()) {
+        TokenIdent tokenIdent = createTokenIdent(tokenInfoBytes.getKey());
+        DelegationTokenInformation tokenInfo = createTokenInfo(tokenInfoBytes.getValue());
+        tokens.put(tokenIdent, tokenInfo);
+      }
+    } catch (IOException | SQLException e) {
+      LOG.error("Failed to get all tokens in SQL secret manager", e);

Review Comment:
   This is not `all tokens` but a subset, filtered by maxModifiedTime.





> RBF: SQLDelegationTokenSecretManager must cleanup expired tokens in SQL
> -----------------------------------------------------------------------
>
>                 Key: HDFS-17148
>                 URL: https://issues.apache.org/jira/browse/HDFS-17148
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: rbf
>            Reporter: Hector Sandoval Chaverri
>            Priority: Major
>              Labels: pull-request-available
>
> The SQLDelegationTokenSecretManager fetches tokens from SQL and stores them
> temporarily in a memory cache with a short TTL. The ExpiredTokenRemover in
> AbstractDelegationTokenSecretManager runs periodically to clean up any expired
> tokens from the cache, but by then most tokens have already been evicted per
> the TTL configuration. This leads to many expired tokens accumulating in the
> SQL database that should be cleaned up.
> The SQLDelegationTokenSecretManager should find expired tokens in SQL instead
> of in the memory cache when running the periodic cleanup.
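The query-based cleanup described above can be sketched as follows. This is a minimal, self-contained illustration, not the actual SQLDelegationTokenSecretManager code: the in-memory `table` map and the `selectTokenInfos` stand-in are assumptions that simulate the SQL query by last-modified timestamp, with a result cap like the `maxTokenCleanupResults` limit in the patch.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TokenCleanupSketch {

  // Hypothetical stand-in for the SQL query: select tokens whose
  // last-modified timestamp is older than maxModifiedTime, capped at
  // maxResults rows to limit performance impact.
  static Map<String, Long> selectTokenInfos(
      Map<String, Long> table, long maxModifiedTime, int maxResults) {
    Map<String, Long> result = new LinkedHashMap<>();
    for (Map.Entry<String, Long> e : table.entrySet()) {
      if (result.size() >= maxResults) {
        break; // cap the candidate set, as the patch does
      }
      if (e.getValue() < maxModifiedTime) {
        result.put(e.getKey(), e.getValue());
      }
    }
    return result;
  }

  public static void main(String[] args) {
    long now = 1_000_000L;
    long renewInterval = 100_000L; // stands in for getTokenRenewInterval()
    long maxModifiedTime = now - renewInterval;

    // Simulated SQL token table: token name -> last-modified millis.
    Map<String, Long> table = new LinkedHashMap<>();
    table.put("stale-1", now - 200_000L); // not renewed within the window
    table.put("fresh-1", now - 50_000L);  // recently renewed, must be skipped
    table.put("stale-2", now - 150_000L);

    Map<String, Long> candidates = selectTokenInfos(table, maxModifiedTime, 10);
    System.out.println(candidates.keySet()); // [stale-1, stale-2]
  }
}
```

Note that, as the javadoc in the patch warns, the returned set is only a candidate list: a token last renewed with a longer renewal interval may appear here without actually being expired, so the caller must still verify expiration before deleting.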



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
