[ 
https://issues.apache.org/jira/browse/LUCENE-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12779646#action_12779646
 ] 

Uwe Schindler edited comment on LUCENE-2075 at 11/18/09 8:39 PM:
-----------------------------------------------------------------

Patch that works around the limitation in javac with typed arrays (javac does 
not allow generic array creation - the problem is that the heap in PQ is a 
generic array, but implemented as Object[]).

I fixed the PQueue by returning a List<CacheEntry<K,V>> from values() and also 
made the private maxSize in the PriorityQueue protected. So it does not need to 
implement its own insertWithOverflow. As this class moves to Lucene Core, we 
should not make such bad hacks. 
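To illustrate the javac limitation mentioned above, here is a minimal sketch (not the actual Lucene PriorityQueue code) of why a generic heap must be backed by Object[] with unchecked casts, and how exposing a typed List keeps callers away from the raw array; the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a generic heap must use Object[] because
// "new T[maxSize]" does not compile (generic array creation).
class SimplePQ<T> {
    private final Object[] heap;  // backed by Object[], not T[]
    private int size;

    SimplePQ(int maxSize) {
        heap = new Object[maxSize + 1];  // 1-based heap layout
    }

    void add(T element) {
        heap[++size] = element;  // a real PQ would up-heap here
    }

    @SuppressWarnings("unchecked")
    T top() {
        return (T) heap[1];  // the unchecked cast is unavoidable
    }

    // Returning a typed List avoids exposing the Object[] heap.
    @SuppressWarnings("unchecked")
    List<T> values() {
        List<T> result = new ArrayList<>(size);
        for (int i = 1; i <= size; i++) {
            result.add((T) heap[i]);
        }
        return result;
    }
}
```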

We need a good testcase for the whole cache class. It was hard for me to find a 
good test that hits the PQueue at all (it's only used in special cases). Hard 
stuff :(

> Share the Term -> TermInfo cache across threads
> -----------------------------------------------
>
>                 Key: LUCENE-2075
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2075
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Index
>            Reporter: Michael McCandless
>            Priority: Minor
>             Fix For: 3.1
>
>         Attachments: ConcurrentLRUCache.java, LUCENE-2075.patch, 
> LUCENE-2075.patch, LUCENE-2075.patch, LUCENE-2075.patch
>
>
> Right now each thread creates its own (thread private) SimpleLRUCache,
> holding up to 1024 terms.
> This is rather wasteful, since if there are a high number of threads
> that come through Lucene, you're multiplying the RAM usage.  You're
> also cutting way back on the likelihood of a cache hit (except the known
> multiple times we look up a term within-query, which uses one thread).
> In NRT search we open new SegmentReaders (on tiny segments) often
> which each thread must then spend CPU/RAM creating & populating.
> Now that we are on 1.5 we can use java.util.concurrent.*, eg
> ConcurrentHashMap.  One simple approach could be a double-barrel LRU
> cache, using 2 maps (primary, secondary).  You check the cache by
> first checking primary; if that's a miss, you check secondary and if
> you get a hit you promote it to primary.  Once primary is full you
> clear secondary and swap them.
> Or... any other suggested approach?
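The double-barrel idea described above could be sketched roughly as follows. This is a hypothetical illustration (not the attached ConcurrentLRUCache.java and not Lucene's final implementation); the class name and the simple synchronized swap are assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a double-barrel LRU cache: two maps,
// promote secondary hits to primary, swap when primary fills up.
class DoubleBarrelLRUCache<K, V> {
    private final int maxSize;
    private volatile Map<K, V> primary = new ConcurrentHashMap<>();
    private volatile Map<K, V> secondary = new ConcurrentHashMap<>();

    DoubleBarrelLRUCache(int maxSize) {
        this.maxSize = maxSize;
    }

    V get(K key) {
        V v = primary.get(key);
        if (v == null) {
            v = secondary.get(key);
            if (v != null) {
                primary.put(key, v);  // promote the hit to primary
                maybeSwap();
            }
        }
        return v;
    }

    void put(K key, V value) {
        primary.put(key, value);
        maybeSwap();
    }

    // Once primary is full: clear secondary and swap the two barrels.
    private synchronized void maybeSwap() {
        if (primary.size() >= maxSize) {
            Map<K, V> tmp = secondary;
            tmp.clear();
            secondary = primary;
            primary = tmp;
        }
    }
}
```

Entries that are never re-read fall out after two swaps, which gives LRU-like behavior without any per-access locking on the read path.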

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org
