[ https://issues.apache.org/jira/browse/MAPREDUCE-2494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13501860#comment-13501860 ]

caofangkun commented on MAPREDUCE-2494:
---------------------------------------

Hi Robert Joseph Evans:
LinkedHashMap has two ordering modes: insertion-order and access-order.
For this issue, why not use access-order? It behaves much more like an LRU.
That is:
  private static LinkedHashMap<String, CacheStatus> cachedArchives =
      new LinkedHashMap<String, CacheStatus>(16, 0.75f, true);
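
For illustration, a minimal, self-contained sketch of what the access-order flag does (plain String values stand in for CacheStatus, which is not shown here):

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class AccessOrderDemo {
        public static void main(String[] args) {
            // The third constructor argument (true) selects access-order:
            // every get() or put() moves the touched entry to the tail,
            // so iteration runs from least- to most-recently used.
            Map<String, String> cache =
                    new LinkedHashMap<String, String>(16, 0.75f, true);
            cache.put("a", "archiveA");
            cache.put("b", "archiveB");
            cache.put("c", "archiveC");

            cache.get("a"); // "a" becomes the most recently used entry

            // Prints {b=archiveB, c=archiveC, a=archiveA}: "b" is now
            // the LRU entry and the natural first candidate to delete.
            System.out.println(cache);
        }
    }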

                
> Make the distributed cache delete entries using LRU priority
> ------------------------------------------------------------
>
>                 Key: MAPREDUCE-2494
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2494
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: distributed-cache
>    Affects Versions: 0.20.205.0, 0.21.0
>            Reporter: Robert Joseph Evans
>            Assignee: Robert Joseph Evans
>             Fix For: 0.20.205.0, 0.23.0
>
>         Attachments: MAPREDUCE-2494-20.20X-V1.patch, 
> MAPREDUCE-2494-20.20X-V3.patch, MAPREDUCE-2494-V1.patch, 
> MAPREDUCE-2494-V2.patch
>
>
> Currently the distributed cache waits until a cache directory is above a 
> preconfigured threshold, at which point it deletes all entries that are not 
> currently being used.  It seems like we would get far fewer cache misses if 
> we kept some of them around even when they are not being used.  We should 
> add a configurable percentage goal for how much of the cache should remain 
> clear when not in use, and select objects to delete based on how recently 
> they were used, and possibly also on how large they are and how difficult 
> they are to download again.
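
As a rough, hypothetical sketch of that idea (none of these names come from the attached patches): combine an access-ordered map with a configurable goal fraction, and evict LRU-first only until usage falls back below the goal:

    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class EvictionSketch {
        // path -> size in bytes; access-order makes iteration LRU-first
        private final Map<String, Long> sizes =
                new LinkedHashMap<String, Long>(16, 0.75f, true);
        private long usedBytes = 0;
        private final long thresholdBytes;
        private final double goalFraction; // e.g. 0.8 = shrink to 80% of threshold

        EvictionSketch(long thresholdBytes, double goalFraction) {
            this.thresholdBytes = thresholdBytes;
            this.goalFraction = goalFraction;
        }

        void add(String path, long sizeBytes) {
            sizes.put(path, sizeBytes);
            usedBytes += sizeBytes;
            if (usedBytes > thresholdBytes) evictToGoal();
        }

        void touch(String path) {
            sizes.get(path); // get() refreshes the entry's recency
        }

        private void evictToGoal() {
            long goal = (long) (thresholdBytes * goalFraction);
            Iterator<Map.Entry<String, Long>> it = sizes.entrySet().iterator();
            while (usedBytes > goal && it.hasNext()) {
                Map.Entry<String, Long> e = it.next();
                it.remove();               // drop the least-recently-used entry
                usedBytes -= e.getValue();
                // A real implementation would also delete e.getKey() from
                // disk, and could weight the choice by size or re-download
                // cost as suggested above.
            }
        }

        public static void main(String[] args) {
            EvictionSketch c = new EvictionSketch(100, 0.8);
            c.add("/cache/a", 40);
            c.add("/cache/b", 40);
            c.touch("/cache/a");   // "a" becomes most recently used
            c.add("/cache/c", 40); // usage hits 120 > 100; "b" (LRU) is evicted
            System.out.println(c.sizes); // {/cache/a=40, /cache/c=40}
        }
    }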
