[jira] [Work logged] (HDFS-16518) KeyProviderCache close cached KeyProvider with Hadoop ShutdownHookManager

2022-03-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16518?focusedWorklogId=750499&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-750499
 ]

ASF GitHub Bot logged work on HDFS-16518:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 30/Mar/22 21:00
Start Date: 30/Mar/22 21:00
Worklog Time Spent: 10m 
  Work Description: omalley commented on pull request #4100:
URL: https://github.com/apache/hadoop/pull/4100#issuecomment-1083620684


   Sorry, I meant to also close & comment on the jira. Committing it and 
referencing this PR means that I approved it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 750499)
Time Spent: 2h  (was: 1h 50m)

> KeyProviderCache close cached KeyProvider with Hadoop ShutdownHookManager
> --------------------------------------------------------------------------
>
> Key: HDFS-16518
> URL: https://issues.apache.org/jira/browse/HDFS-16518
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.10.0
>Reporter: Lei Yang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> KeyProvider implements the Closeable interface, but some custom KeyProvider 
> implementations also need an explicit close from KeyProviderCache. One 
> example is a custom KeyProvider used by DFSClient to read encrypted files on 
> HDFS. Currently, KeyProviderCache closes a KeyProvider only when its cache 
> entry expires or is invalidated, and in some cases that never happens; this 
> appears to be a property of the Guava cache.
> This patch uses Hadoop's JVM ShutdownHookManager to clean up the cache 
> entries globally, closing each KeyProvider through the cache's removal hook 
> right after the filesystem instances are closed, in a deterministic way.
> {code:java}
> class KeyProviderCache {
>   ...
>   public KeyProviderCache(long expiryMs) {
>     cache = CacheBuilder.newBuilder()
>         .expireAfterAccess(expiryMs, TimeUnit.MILLISECONDS)
>         .removalListener(new RemovalListener<URI, KeyProvider>() {
>           @Override
>           public void onRemoval(
>               @Nonnull RemovalNotification<URI, KeyProvider> notification) {
>             // This listener is the only place a cached KeyProvider is
>             // closed, and it only runs on expiry or invalidation.
>             try {
>               assert notification.getValue() != null;
>               notification.getValue().close();
>             } catch (Throwable e) {
>               LOG.error("Error closing KeyProvider with uri ["
>                   + notification.getKey() + "]", e);
>             }
>           }
>         })
>         .build();
>   }
> }
> {code}
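> A minimal sketch of the proposed hook registration, assuming the removal 
> listener above stays in place. The SHUTDOWN_HOOK_PRIORITY value and the 
> invalidateCache() name are illustrative, not necessarily what the committed 
> patch uses:
> {code:java}
> import java.net.URI;
> import java.util.concurrent.TimeUnit;
> 
> import com.google.common.cache.Cache;
> import com.google.common.cache.CacheBuilder;
> import org.apache.hadoop.crypto.key.KeyProvider;
> import org.apache.hadoop.util.ShutdownHookManager;
> 
> class KeyProviderCache {
>   // Illustrative priority; the hook only needs to run at JVM shutdown,
>   // after client filesystems have been closed.
>   private static final int SHUTDOWN_HOOK_PRIORITY = 1;
> 
>   private final Cache<URI, KeyProvider> cache;
> 
>   public KeyProviderCache(long expiryMs) {
>     cache = CacheBuilder.newBuilder()
>         .expireAfterAccess(expiryMs, TimeUnit.MILLISECONDS)
>         .build(); // removal listener elided; see the snippet above
>     // Registered once per cache: at JVM shutdown, drain every entry so
>     // the removal listener closes every cached KeyProvider.
>     ShutdownHookManager.get().addShutdownHook(
>         this::invalidateCache, SHUTDOWN_HOOK_PRIORITY);
>   }
> 
>   // Invalidating all entries fires the removal listener, which closes
>   // each KeyProvider deterministically.
>   public void invalidateCache() {
>     cache.invalidateAll();
>   }
> }
> {code}
> Because the hook hangs off the one global cache, no per-client bookkeeping 
> is needed and the close logic stays out of DFSClient entirely.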
> We could have added a KeyProviderCache#close method and had each DFSClient 
> call it at the end of DFSClient#close, but that exposes another problem: the 
> cache is global, so closing it from one client could tear down KeyProviders 
> still in use by other DFSClient instances (see the sketch below).
> 
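> For contrast, a hypothetical sketch of that rejected alternative; 
> DfsClientSketch and KEY_PROVIDER_CACHE are simplified stand-ins for the real 
> classes, not actual Hadoop names:
> {code:java}
> import java.io.Closeable;
> import java.io.IOException;
> 
> class DfsClientSketch implements Closeable {
>   // One cache shared by every client in the JVM (10-minute expiry).
>   private static final KeyProviderCache KEY_PROVIDER_CACHE =
>       new KeyProviderCache(10 * 60 * 1000);
> 
>   @Override
>   public void close() throws IOException {
>     // ... release streams, leases, and RPC proxies ...
>     // Unsafe: the cache is global, so invalidating it here also closes
>     // KeyProviders that other, still-open clients may be using.
>     KEY_PROVIDER_CACHE.invalidateCache();
>   }
> }
> {code}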



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16518) KeyProviderCache close cached KeyProvider with Hadoop ShutdownHookManager

2022-03-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16518?focusedWorklogId=749335&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-749335
 ]

ASF GitHub Bot logged work on HDFS-16518:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 29/Mar/22 13:41
Start Date: 29/Mar/22 13:41
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #4100:
URL: https://github.com/apache/hadoop/pull/4100#issuecomment-1081888202


   Hi @omalley - it was committed to trunk, branch-3.3, and branch-2.10 without 
your approval on either the JIRA or the GitHub PR. Do you have any reason to 
approve the change?




Issue Time Tracking
---

Worklog Id: (was: 749335)
Time Spent: 1h 50m  (was: 1h 40m)







[jira] [Work logged] (HDFS-16518) KeyProviderCache close cached KeyProvider with Hadoop ShutdownHookManager

2022-03-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16518?focusedWorklogId=748879&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-748879
 ]

ASF GitHub Bot logged work on HDFS-16518:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 28/Mar/22 20:16
Start Date: 28/Mar/22 20:16
Worklog Time Spent: 10m 
  Work Description: omalley closed pull request #4100:
URL: https://github.com/apache/hadoop/pull/4100


   




Issue Time Tracking
---

Worklog Id: (was: 748879)
Time Spent: 1h 40m  (was: 1.5h)



