Yeah, yeah, you are def right...if you have field caches larger than
your RAM, you can def spill over to disk. I just wonder whether you'll
get acceptable performance if you are actually using all of those
field caches and have to go to disk a lot. It would be awesome to
know how that works though...I was very interested in it, but have not
had the time to put together enough data and whatnot for some good
testing. It kind of fell off my priorities...
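
To make the spill-to-disk idea concrete, here is a rough sketch (my own illustrative code, not anything from the patch): a cache that keeps entries in RAM up to a cap and serializes the overflow to temp files, so a get that misses RAM pays a disk read. Class and method names are made up; a real setup would lean on something like EHCache's overflow-to-disk instead.

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

// Illustrative only: keep up to maxInMemory entries in RAM,
// spill the rest to disk via Java serialization.
public class SpillingCache {
    private final int maxInMemory;
    private final Map<String, Serializable> memory = new LinkedHashMap<>();
    private final Path spillDir;

    public SpillingCache(int maxInMemory) throws IOException {
        this.maxInMemory = maxInMemory;
        this.spillDir = Files.createTempDirectory("spill-cache");
    }

    public void put(String key, Serializable value) throws IOException {
        if (memory.size() < maxInMemory) {
            memory.put(key, value);
        } else {
            // over the RAM cap: write the value to a file instead
            try (ObjectOutputStream out = new ObjectOutputStream(
                    Files.newOutputStream(spillDir.resolve(key)))) {
                out.writeObject(value);
            }
        }
    }

    public Serializable get(String key) throws IOException, ClassNotFoundException {
        Serializable v = memory.get(key);
        if (v != null) return v;
        Path f = spillDir.resolve(key);
        if (!Files.exists(f)) return null;
        // RAM miss: pay the disk read -- the performance question above
        try (ObjectInputStream in = new ObjectInputStream(Files.newInputStream(f))) {
            return (Serializable) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        SpillingCache c = new SpillingCache(1);
        c.put("a", "in-ram");
        c.put("b", "on-disk");   // second entry spills to a file
        System.out.println(c.get("a")); // prints in-ram
        System.out.println(c.get("b")); // prints on-disk
    }
}
```

Whether that disk hit is tolerable depends entirely on how often the spilled fields are actually touched, which is exactly what would need benchmarking.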

I think your 2-readers question is interesting and I will certainly
think about it. Right now, though, each IndexReader instance holds its
own cache. I'll have to dig back into the code and see about possibly
keying on the Directory or something.
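
Roughly what I mean by keying on the Directory (a back-of-the-envelope sketch, not the patch's actual Cache API): cache entries keyed on (directory, field) instead of living inside each reader, so two readers over the same directory would see the same entry. All names here are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: entries keyed on (directory identity, field name)
// rather than held per-IndexReader, so readers over the same directory
// share cached values. Not Lucene's real FieldCache API.
public class SharedFieldCache {
    static final class Key {
        final Object directory;
        final String field;
        Key(Object directory, String field) {
            this.directory = directory;
            this.field = field;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof Key)) return false;
            Key k = (Key) o;
            // identity comparison on the directory, like an open Directory instance
            return directory == k.directory && field.equals(k.field);
        }
        @Override public int hashCode() {
            return System.identityHashCode(directory) * 31 + field.hashCode();
        }
    }

    private final Map<Key, Object> cache = new ConcurrentHashMap<>();

    public Object get(Object directory, String field) {
        return cache.get(new Key(directory, field));
    }

    public void put(Object directory, String field, Object value) {
        cache.put(new Key(directory, field), value);
    }

    public static void main(String[] args) {
        SharedFieldCache c = new SharedFieldCache();
        Object dir = new Object(); // stands in for a Lucene Directory
        c.put(dir, "price", new int[] {1, 2, 3});
        // a "second reader" over the same directory finds the same entry
        System.out.println(c.get(dir, "price") != null); // prints true
    }
}
```

The catch, of course, is invalidation: a warming reader sees newer segments than the old one, so a naive directory-level key would serve stale values without some per-segment bookkeeping.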

Then again, Karl's latest issue may make the 2-readers approach lose
some of its advantage: https://issues.apache.org/jira/browse/LUCENE-1265
so it may not be wise to go out of the way to support that use case.

Also, keep in mind that this code may not end up in the results of this
issue at all. I basically just put it out there to demonstrate the kind
of advantage you can get in reopen speed with a large field cache. Hoss
did a great job on the API though, so whoever actually hammers this out
may stick with a lot of it.

Who knows...if you report back with some numbers, maybe you'll
influence how things go <g>.

- Mark

On Thu, 2008-04-17 at 10:25 -0700, Britske wrote:
> The obstacle I'm seeing is that I have a lot of fields which use sorting.
> Sooner or later this will give an OutOfMemoryError since the field cache
> grows too large. Am I correct in assuming that implementing, for instance,
> an EHCache with flush-to-disk would solve this issue? (With a tradeoff in
> performance, of course.)
> 
> Moreover, when warming readers with the patch, and thus having 2 readers
> open at the same time (I am using Solr searchers, btw, but I guess these
> use the same underlying Lucene code; I'll have to check), can these 2
> readers share the same field cache and thus eliminate the doubled memory
> requirement while warming?
> 
> Thanks.
> 
> 
> markrmiller wrote:
> > 
> > It does not specifically incorporate caching to disk, but what it does
> > do is easily allow you to provide a new Cache implementation. The
> > default implementation is just a simple in-memory Map, but it's trivial
> > to provide a new implementation using something like EHCache to back
> > the Cache implementation.
> > 
> > I don't know if caching to disk will really be that much of a benefit,
> > so if you play around with it, I would love to hear your results.
> > 
> > The big benefit is that if you are reopening Readers with field caches,
> > it can be waaay faster.
> > 
> > 
> > - Mark
> > 
> > On Thu, 2008-04-17 at 05:14 -0700, Britske wrote:
> >> I've seen some recent activity on LUCENE-831 ("Complete overhaul of
> >> FieldCache API") and read that it must be able to cleanly patch to
> >> trunk (I haven't tried yet).
> >> 
> >> What I'd like to know from the people involved is whether this patch
> >> incorporates offloading the field cache to disk, or whether this
> >> hasn't been taken into account yet. As far as I can follow it, this
> >> was one of the initial intentions.
> >> 
> >> Thanks,
> >> Britske
> > 
> > 
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: [EMAIL PROTECTED]
> > For additional commands, e-mail: [EMAIL PROTECTED]
> > 
> > 
> > 
> 
