[ https://issues.apache.org/jira/browse/HDFS-5636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13854417#comment-13854417 ]

Colin Patrick McCabe commented on HDFS-5636:
--------------------------------------------

I think forcing users to do 
{{directiveBuilder.setExpiry(Expiry.newRelative(CachePoolInfo.RELATIVE_EXPIRY_NEVER))}}
 to set "no expiration time" is kind of gross.  Why does the user need to know 
how we're representing "never" in the protobuf?  What about having 
clearMaxRelativeExpiryMs, etc. functions in CacheDirectiveInfo#Builder and 
CachePoolInfo#Builder so people don't have to wrestle with our internal 
constants?  Maybe all it has to do is set the field to null.
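
For illustration only, a minimal sketch of what such a convenience method could look like on CachePoolInfo#Builder -- the field and method names here are my assumptions, not the actual patch:

{code:java}
// Rough sketch, not the real API: a builder helper that hides "never expires".
public static class Builder {
  private String poolName;
  private Long maxRelativeExpiryMs;   // null means "field not set"

  public Builder(String poolName) {
    this.poolName = poolName;
  }

  /**
   * Clear any maximum relative expiry on the pool, i.e. "no limit".
   * Callers never have to touch RELATIVE_EXPIRY_NEVER or know how
   * "never" is encoded in the protobuf.
   */
  public Builder clearMaxRelativeExpiryMs() {
    this.maxRelativeExpiryMs = null;
    return this;
  }

  public CachePoolInfo build() {
    CachePoolInfo info = new CachePoolInfo(poolName);
    if (maxRelativeExpiryMs != null) {
      info.setMaxRelativeExpiryMs(maxRelativeExpiryMs);
    }
    return info;
  }
}
{code}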

I also feel like MAX_RELATIVE_EXPIRY_MS should be a constant in Expiration and 
enforced there.  You fixed the overflow for your code path, but let's fix it for 
every code path, rather than potentially letting users overflow us with wacky 
values.
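
Something along these lines is what I have in mind -- the constant's value and the constructor shape are assumptions on my part, just to show the check living in one place:

{code:java}
// Illustrative sketch; names and values in the actual patch may differ.
public static class Expiration {
  /** Cap relative expiries well below Long.MAX_VALUE so that
      "now + relativeMs" can never overflow a long. */
  public static final long MAX_RELATIVE_EXPIRY_MS = Long.MAX_VALUE / 4;

  /** Every caller goes through this factory, so the bound is enforced once. */
  public static Expiration newRelative(final long ms) {
    if (ms < 0 || ms > MAX_RELATIVE_EXPIRY_MS) {
      throw new IllegalArgumentException(
          "Invalid relative expiration time: " + ms + " ms");
    }
    return new Expiration(ms, true);
  }

  private final long ms;
  private final boolean isRelative;

  private Expiration(long ms, boolean isRelative) {
    this.ms = ms;
    this.isRelative = isRelative;
  }
}
{code}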

The change to TestFsDatasetCache#BLOCK_SIZE seems unrelated to the other changes; 
I would prefer to do that separately.

> Enforce a max TTL per cache pool
> --------------------------------
>
>                 Key: HDFS-5636
>                 URL: https://issues.apache.org/jira/browse/HDFS-5636
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: caching, namenode
>    Affects Versions: 3.0.0
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>         Attachments: hdfs-5636-1.patch, hdfs-5636-2.patch
>
>
> It'd be nice for administrators to be able to specify a maximum TTL for 
> directives in a cache pool. This forces all directives to eventually age out.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
