@Erick : As you said, each use case is different. We actually autowarm our
caches to 80% and we see a 99% hit ratio on the filter cache. For the query
cache, hit ratios are around 25%, but given that a cache hit saves us about
10x, we strive to increase the hit ratio.
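
To put the 10x figure in perspective, here is a back-of-the-envelope sketch
in plain Java. The 0.1 cost fraction and the hit ratios are just the rough
numbers quoted above, not measurements from any particular setup:

    // Back-of-the-envelope: average query cost relative to an uncached query,
    // assuming a cache hit costs ~1/10th of a miss (the rough 10x saving above).
    public class CacheHitSavings {
        static double avgCost(double hitRatio, double hitCostFraction) {
            return hitRatio * hitCostFraction + (1.0 - hitRatio) * 1.0;
        }

        public static void main(String[] args) {
            // At a 25% hit ratio, average cost is ~0.775 of the uncached cost.
            System.out.println(avgCost(0.25, 0.1));
            // Raising the hit ratio to 50% drops it to ~0.55, which is why we
            // keep pushing the query cache hit ratio up.
            System.out.println(avgCost(0.50, 0.1));
        }
    }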

@Yang : You can't do a direct copy of values. Cached values are tied to
Lucene's internal document ids, and those can change during an index update:
documents get deleted, segments get merged, or new segments get created.
Solr's caches refer to global doc ids, which are even more prone to change
(because of index merges).
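
That is also why autowarming re-executes cached queries instead of copying
results: only the keys survive a searcher change. A rough sketch of the shape
of it, in plain Java with hypothetical names (this is not Solr's actual
warming code; the work is capped by an autowarm count, which is what Erick's
numbers below refer to):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.function.Function;

    // Sketch of key-based autowarming (hypothetical names, not Solr's real API):
    // take the most recent *keys* from the old cache and recompute their values
    // against the new searcher, up to autowarmCount entries. Values are never
    // copied, because they hold doc ids that are only valid for the old index view.
    class AutowarmSketch {
        static <K, V> void autowarm(LinkedHashMap<K, V> oldCache,
                                    Map<K, V> newCache,
                                    int autowarmCount,
                                    Function<K, V> reexecuteOnNewSearcher) {
            int warmed = 0;
            for (K key : oldCache.keySet()) {          // iteration order stands in
                if (warmed++ >= autowarmCount) break;  // for the "top N" entries
                newCache.put(key, reexecuteOnNewSearcher.apply(key));
            }
        }
    }

The smaller the autowarm count, the less re-execution a commit triggers,
which is exactly the trade-off discussed below.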



On 28 July 2014 21:32, Erick Erickson <erickerick...@gmail.com> wrote:

> bq: autowarmcount=1024...
>
> That's the point, this is quite a high number in my
> experience.
>
> I've rarely seen numbers above 128 show much of
> any improvement. I've seen a large number of
> installations use much smaller autowarm numbers,
> as in the 16-32 range and be quite content.
>
> I _really_ recommend you try to use much smaller
> numbers, then _measure_ whether the first few
> queries after a commit show unacceptable
> response times before trying to make things
> "better". This really feels like premature
> optimization.
>
> Of course you know your problem space better than
> I do, it's just that I've spent too much of my
> professional life fixing the wrong "problem"; I've
> become something of a "measure first" curmudgeon.
>
> FWIW,
> Erick
>
>
> On Sun, Jul 27, 2014 at 10:48 PM, YouPeng Yang <yypvsxf19870...@gmail.com>
> wrote:
>
> > Hi Erick
> >
> > We run the DIH job from the DB and commit frequently. It takes a long
> > time to autowarm the filterCaches after a commit or soft commit when
> > autowarmcount=1024, which I do think is small enough.
> > So the idea came up of whether we could directly pass references from
> > the old caches over to the new caches, so that the autowarm processing
> > would take much less time.
> >
> >
> >
> > 2014-07-28 2:30 GMT+08:00 Erick Erickson <erickerick...@gmail.com>:
> >
> > > Why do you think you _need_ to autowarm the entire cache? It
> > > is, after all, an LRU cache, the theory being that the most recent
> > > queries are most likely to be reused.
> > >
> > > Personally I'd run some tests on using small autowarm counts
> > > before getting at all mixed up in some complex scheme that
> > > may not be useful at all. Say an autowarm count of 16. Then
> > > measure using that, then say 32, then... Ensure you have a real
> > > problem before worrying about a solution! ;)
> > >
> > > Best,
> > > Erick
> > >
> > >
> > > On Fri, Jul 25, 2014 at 6:45 AM, Shawn Heisey <s...@elyograg.org>
> > > wrote:
> > >
> > > > On 7/24/2014 8:45 PM, YouPeng Yang wrote:
> > > > > To Matt
> > > > >
> > > > >   Thank you, your opinion is very valuable, so I have checked the
> > > > > source code for how the cache warms up. It seems to just put items
> > > > > from the old caches into the new caches.
> > > > >   I will pull Mark Miller into this discussion. He is one of the
> > > > > Solr developers whom I had contacted.
> > > > >
> > > > >  To Mark Miller
> > > > >
> > > > >    Would you please check out what we are discussing in the last
> > > > > two posts? I need your help.
> > > >
> > > > Matt is completely right.  Any commit can drastically change the
> > > > Lucene document id numbers.  It would be too expensive to determine
> > > > which numbers haven't changed.  That means Solr must throw away all
> > > > cache information on commit.
> > > >
> > > > Two of Solr's caches support autowarming.  Those caches use queries
> > > > as keys and results as values.  Autowarming works by re-executing
> > > > the top N queries (keys) in the old cache to obtain fresh Lucene
> > > > document id numbers (values).  The cache code does take *keys* from
> > > > the old cache for the new cache, but not *values*.  I'm very sure
> > > > about this, as I wrote the current (and not terribly good) LFUCache.
> > > >
> > > > Thanks,
> > > > Shawn
> > > >
> > > >
> > >
> >
>



-- 
---
Thanks & Regards
Umesh Prasad
