[ https://issues.apache.org/jira/browse/HDFS-5636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852261#comment-13852261 ]

Andrew Wang commented on HDFS-5636:
-----------------------------------

I actually thought about this a bit more, and in terms of user experience, I 
propose the following:

* If an expiry is specified in the CDInfo, check it against the pool's max 
expiration and reject it if it exceeds the max.
* If an expiry is not specified, use the pool's max expiration. By default, the 
max expiration is never/infinite.
* If a pool is modified to a lower max expiration, set the expiration of every 
directive in the pool to min(oldExpiry, newMaxExpiration).
* If a directive is moved to a new pool and a new expiration is not 
specified, set its expiration to min(oldExpiration, maxExpirationOfNewPool).

Unfortunately, the modify behavior will be a bit warty when listCachePools is 
combined with modifyPool, since the CachePoolInfos you get back will have the 
expiry explicitly set (so behavior #4 won't kick in). It'll be fine in 
CacheAdmin though, which is probably good enough, since setting up pools most 
likely happens through that.

> Enforce a max TTL per cache pool
> --------------------------------
>
>                 Key: HDFS-5636
>                 URL: https://issues.apache.org/jira/browse/HDFS-5636
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: caching, namenode
>    Affects Versions: 3.0.0
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>
> It'd be nice for administrators to be able to specify a maximum TTL for 
> directives in a cache pool. This forces all directives to eventually age out.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
