[ https://issues.apache.org/jira/browse/HDFS-5636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859509#comment-13859509 ]
Hudson commented on HDFS-5636:
------------------------------

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1655 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1655/])
Add updated editsStored files missing from initial HDFS-5636 commit. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1554293)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml

> Enforce a max TTL per cache pool
> --------------------------------
>
>                 Key: HDFS-5636
>                 URL: https://issues.apache.org/jira/browse/HDFS-5636
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: caching, namenode
>    Affects Versions: 3.0.0
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>             Fix For: 3.0.0
>
>         Attachments: hdfs-5636-1.patch, hdfs-5636-2.patch, hdfs-5636-3.patch
>
>
> It'd be nice for administrators to be able to specify a maximum TTL for
> directives in a cache pool. This forces all directives to eventually age out.

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
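The issue's core idea, a pool-level cap that bounds the TTL of every cache directive so all directives eventually expire, can be sketched as a simple clamping rule. This is an illustrative sketch only, not the HDFS-5636 patch itself: the class and method names below (`CachePoolTtl`, `clampTtl`) are hypothetical, and the convention that a non-positive max means "no pool limit" is an assumption for the example.

```java
// Hypothetical illustration of a per-pool max-TTL cap (names are not from the patch).
public class CachePoolTtl {

  /**
   * Clamp a directive's requested relative TTL to the pool's maximum.
   *
   * @param requestedTtlMs TTL requested for the cache directive, in ms
   * @param poolMaxTtlMs   pool-wide maximum TTL in ms; <= 0 means "no limit"
   *                       (assumed convention for this sketch)
   * @return the effective TTL the directive would receive
   */
  public static long clampTtl(long requestedTtlMs, long poolMaxTtlMs) {
    if (poolMaxTtlMs <= 0) {
      return requestedTtlMs; // pool imposes no cap
    }
    // Enforce the cap: a directive can never outlive the pool's max TTL,
    // so every directive in the pool eventually ages out.
    return Math.min(requestedTtlMs, poolMaxTtlMs);
  }
}
```

In released Hadoop versions the pool-level limit is surfaced administratively (for example, a max-TTL option on `hdfs cacheadmin -addPool` per the Centralized Cache Management documentation); an enforcement check along these lines would run when a directive is added to or modified in a pool.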