[ https://issues.apache.org/jira/browse/LUCENE-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978391#action_12978391 ]

Michael McCandless commented on LUCENE-2474:
--------------------------------------------

bq. "close event" doesn't really describe the behavior since an event is not 
generated on every close of every reader as one might expect.

Right, it's really more like a "segment is unloaded" event.  Ie a single 
segment can have many cloned/reopened SegmentReaders, all sharing the same 
"core" (= same cache entry eg in FieldCache)... when this event occurs it means 
all SegmentReaders for a given segment have been closed.  But, then we also 
need to generate this for top-level readers, since [horribly] such readers are 
still allowed into eg the FieldCache.
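
To make that concrete, here's a rough sketch (against the current 3.x reader 
APIs; the cache itself is made up for illustration) of why such an event is 
per-core rather than per-close: every SegmentReader over the same core reports 
the same getFieldCacheKey(), so an entry cached under that key is shared across 
clones/reopens and only becomes garbage once the last of them is closed, which 
is exactly when the "segment is unloaded" event should fire:

{code:java}
import java.util.Map;
import java.util.WeakHashMap;

import org.apache.lucene.index.IndexReader;

// Illustration only: the map and the cached int[] are placeholders, not the patch.
public class CoreKeyedCache {
  private final Map<Object, int[]> coreCache = new WeakHashMap<Object, int[]>();

  public void warm(IndexReader reader) {
    IndexReader[] subs = reader.getSequentialSubReaders();
    if (subs == null) {                      // atomic reader, e.g. a SegmentReader
      subs = new IndexReader[] { reader };
    }
    for (IndexReader sub : subs) {
      Object key = sub.getFieldCacheKey();   // shared by every clone/reopen of this core
      if (!coreCache.containsKey(key)) {
        coreCache.put(key, new int[sub.maxDoc()]);  // stand-in for an expensive per-segment value
      }
    }
  }
}
{code}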

bq. This implementation is problematic since higher level readers don't 
propagate the event listeners to subreaders... i.e. I need to walk the tree 
myself and add a listener to every reader, and on a reopen() I would need 
to walk the tree again and add listeners only to the new readers that have a 
new coreKey.

I think we should just fix that, ie so your listener is propagated to the subs 
and to reopened readers (and their subs and their reopens, etc.).
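
For reference, the bookkeeping that has to happen by hand today looks roughly 
like the sketch below: walk the reader tree after every reopen() and register 
only on readers whose core key hasn't been seen before.  The listener type and 
the registration method name here are my reading of the patch, so treat both 
as assumptions.  If we propagate listeners to subs and to reopened readers 
automatically, all of this goes away:

{code:java}
import java.util.HashSet;
import java.util.Set;

import org.apache.lucene.index.IndexReader;

// Illustration of the manual bookkeeping described above; CacheEvictionListener
// and addCacheEvictionListener are assumed from the patch, not existing Lucene API.
public class ListenerRegistrar {
  private final Set<Object> seenCores = new HashSet<Object>();

  public void register(IndexReader reader, CacheEvictionListener listener) {
    IndexReader[] subs = reader.getSequentialSubReaders();
    if (subs == null) {
      if (seenCores.add(reader.getFieldCacheKey())) {   // only cores we haven't seen yet
        reader.addCacheEvictionListener(listener);      // assumed patch API
      }
    } else {
      for (IndexReader sub : subs) {
        register(sub, listener);                        // recurse into the tree
      }
    }
  }
}
{code}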

{quote}
We've talked before about putting caches directly on the readers - that still 
seems like the most straightforward approach?
{quote}

This would be great, but I'm not sure I'd call it straightforward :)  I think a 
separate baby step (ie this proposed approach) is fine for today?

bq. We really need one cache that doesn't care about deletions, and one cache 
that does.

And maybe norms, since they too can change in cloned SegmentReaders that 
otherwise share the same core.  Or, maybe we make the "core" a first class 
object, and you interact with it to cache things that don't care about changes 
to deletions/norms.  Or, the core could just be the first SegmentReader to be 
opened on this segment.  Or something.
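
One way to picture the "two caches" split, as a sketch only (the class and 
method names are invented): values that don't care about deletions/norms get 
keyed on the shared core key, while values that do get keyed on the reader 
instance itself, which is typically a new object after a reopen that changed 
deletions:

{code:java}
import java.util.Map;
import java.util.WeakHashMap;

import org.apache.lucene.index.IndexReader;

// Illustration only: class and method names are made up.
public class TwoLevelCache<V> {
  // keyed on the shared core key: survives clones/reopens of the same segment
  private final Map<Object, V> perCore = new WeakHashMap<Object, V>();
  // keyed on the reader instance: a reopen that changed deletions/norms gives a new key
  private final Map<IndexReader, V> perReader = new WeakHashMap<IndexReader, V>();

  public V getCoreValue(IndexReader r)           { return perCore.get(r.getFieldCacheKey()); }
  public void putCoreValue(IndexReader r, V v)   { perCore.put(r.getFieldCacheKey(), v); }

  public V getReaderValue(IndexReader r)         { return perReader.get(r); }
  public void putReaderValue(IndexReader r, V v) { perReader.put(r, v); }
}
{code}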

I think Earwin has also worked out some sort of caching model w/ 
IndexReaders... Earwin, how do you handle timely eviction?

> Allow to plug in a Cache Eviction Listener to IndexReader to eagerly clean 
> custom caches that use the IndexReader (getFieldCacheKey)
> ------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: LUCENE-2474
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2474
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>            Reporter: Shay Banon
>         Attachments: LUCENE-2474.patch
>
>
> Allow to plug in a Cache Eviction Listener to IndexReader to eagerly clean 
> custom caches that use the IndexReader (getFieldCacheKey).
> A spin-off of: https://issues.apache.org/jira/browse/LUCENE-2468. Basically, it 
> makes a lot of sense to cache things based on IndexReader#getFieldCacheKey; 
> even Lucene itself uses it, for example with CachingWrapperFilter. 
> FieldCache benefits from being called explicitly to purge its cache when 
> possible (which is tricky to know from the "outside", especially when using 
> NRT - reader attack of the clones).
> The provided patch allows plugging in a CacheEvictionListener, which will be 
> called when the cache should be purged for an IndexReader.
