Re: Solr7: Bad query throughput around commit time

2017-11-11 Thread Erick Erickson
Nawab:

bq: Cache hit ratios are all in 80+% (even when i decreased the
filterCache to 128)

This suggests that you use a relatively small handful of fq clauses,
which is perfectly fine. Having 450M docs and a cache size of 1024 is
_really_ scary! You had a potential for a 57G (yes, gigabyte)
filterCache. Fortunately you apparently don't use enough different fq
clauses to fill it up, or they match very few documents. I cheated a
little: if the result set is small, the individual doc IDs are stored
rather than a bitset 450M bits wide. Your
admin>>core>>plugins/stats>>filterCache page should show you how many
evictions there are, which is another interesting stat.

As it is, your filterCache might use up 7G or so. Hefty, but you have
lots of RAM.
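
For reference, the back-of-the-envelope math and a more conservative cache
entry might look like the sketch below (a solrconfig.xml fragment with
illustrative numbers, not a recommendation for your exact setup):

    <!-- Worst case for a bitset-based entry: 450,000,000 docs / 8 bits per
         byte ~= 56 MB per entry. 1024 entries ~= 57 GB; 128 entries ~= 7 GB. -->
    <filterCache class="solr.FastLRUCache"
                 size="128"
                 initialSize="128"
                 autowarmCount="16"/>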

*
bq:  Document cache hitratio is really bad,

This is often the case. Getting documents really means, here, getting
the _stored_ values. The point of the documentCache is to keep entries
in a cache for the various elements of a single request to use. To
name just two:
1> you get the stored values for the "fl" list
2> you highlight.

These are separate, and each accesses the stored values. The problem is,
"accessing the stored values" means
1> reading the document from disk
2> decompressing a 16K block minimum.

I'm skipping the fact that returning docValues doesn't need the stored
data, but you get the idea.

Anyway, not having to read/decompress for both the "fl" list and
highlighting is what the documentCache is about. That's where the
recommendation "size it as (max # of users) * (max rows)" comes in
(if you can afford the memory, certainly).
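
As a concrete illustration of that rule of thumb (the numbers are invented:
say 100 concurrent users and rows=20), the solrconfig.xml entry might look
like this sketch:

    <!-- (max # of concurrent users) * (max rows) = 100 * 20 = 2000 entries.
         autowarmCount stays 0: the documentCache cannot be autowarmed. -->
    <documentCache class="solr.LRUCache"
                   size="2000"
                   initialSize="2000"
                   autowarmCount="0"/>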

Some users have situations where the documentCache hit ratio is much
better, but I'd be surprised if any core with 450M docs even got
close.

*
bq: That supported the hypothesis that the query throughput decreases
after opening a new searcher and **not** after committing the index

Are you saying that you have something of a sawtooth pattern? I.e.
queries are slow "for a while" after opening a new searcher but then
improve until the next commit? This is usually an autowarm problem, so
you might address it with a more precise autowarm. Look particularly
for anything that sorts/groups/facets. Any such fields should have
docValues=true set. Unfortunately this will require a complete
re-index. Don't be frightened by the fact that enabling docValues will
cause your index size on disk to grow. Paradoxically, that will
actually _lower_ your JVM heap requirements. Essentially
the additional size on disk is the serialized structure that would
have to be built in the JVM. Since it is pre-built at index time, it
can be MMapped and use OS memory space and not JVM.
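
For example, a schema.xml field sketch plus a more targeted newSearcher
warming query in solrconfig.xml (the field name "category" and the warming
query are placeholders, not taken from this thread):

    <!-- Needs a full re-index after enabling docValues. -->
    <field name="category" type="string" indexed="true" stored="false"
           docValues="true" multiValued="false"/>

    <!-- Warm exactly the structures a sort/facet on that field needs. -->
    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst>
          <str name="q">*:*</str>
          <str name="sort">category asc</str>
          <str name="rows">0</str>
        </lst>
      </arr>
    </listener>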

*
450M docs and 800G index size is quite large and a prime candidate for
sharding FWIW.

Best,
Erick




On Sat, Nov 11, 2017 at 4:52 PM, Nawab Zada Asad Iqbal  wrote:
> ~248 gb
>
> Nawab
>
>
> On Sat, Nov 11, 2017 at 2:41 PM Kevin Risden  wrote:
>
>> > One machine runs with a 3TB drive, running 3 solr processes (each with
>> one core as described above).
>>
>> How much total memory on the machine?
>>
>> Kevin Risden
>>
>> On Sat, Nov 11, 2017 at 1:08 PM, Nawab Zada Asad Iqbal 
>> wrote:
>>
>> > Thanks for a quick and detailed response, Erick!
>> >
>> > Unfortunately i don't have a proof, but our servers with solr 4.5 are
>> > running really nicely with the above config. I had assumed that same  or
>> > similar settings will also perform well with Solr 7, but that assumption
>> > didn't hold. As, a lot has changed in 3 major releases.
>> > I have tweaked the cache values as you suggested but increasing or
>> > decreasing doesn't seem to do any noticeable improvement.
>> >
>> > At the moment, my one core has 800GB index, ~450 Million documents, 48 G
>> > Xmx. GC pauses haven't been an issue though.  One machine runs with a 3TB
>> > drive, running 3 solr processes (each with one core as described
>> above).  I
>> > agree that it is a very atypical system so i should probably try
>> different
>> > parameters with a fresh eye to find the solution.
>> >
>> >
>> > I tried with autocommits (commit with opensearcher=false very half
>> minute ;
>> > and softcommit every 5 minutes). That supported the hypothesis that the
>> > query throughput decreases after opening a new searcher and **not** after
>> > committing the index . Cache hit ratios are all in 80+% (even when i
>> > decreased the filterCache to 128, so i will keep it at this lower value).
>> > Document cache hitratio is really bad, it drops to around 40% after
>> > newSearcher. But i guess that is expected, since it cannot be warmed up
>> > anyway.
>> >
>> >
>> > Thanks
>> > Nawab
>> >
>> >
>> >
>> > On Thu, Nov 9, 2017 at 9:11 PM, Erick Erickson 
>> > wrote:
>> >
>> > > What evidence to you have that the changes you've made to your configs
>> > > are useful? There's lots of things in here that 

Re: Solr7: Bad query throughput around commit time

2017-11-11 Thread Nawab Zada Asad Iqbal
~248 gb

Nawab


On Sat, Nov 11, 2017 at 2:41 PM Kevin Risden  wrote:

> > One machine runs with a 3TB drive, running 3 solr processes (each with
> one core as described above).
>
> How much total memory on the machine?
>
> Kevin Risden
>
> On Sat, Nov 11, 2017 at 1:08 PM, Nawab Zada Asad Iqbal 
> wrote:
>
> > Thanks for a quick and detailed response, Erick!
> >
> > Unfortunately i don't have a proof, but our servers with solr 4.5 are
> > running really nicely with the above config. I had assumed that same  or
> > similar settings will also perform well with Solr 7, but that assumption
> > didn't hold. As, a lot has changed in 3 major releases.
> > I have tweaked the cache values as you suggested but increasing or
> > decreasing doesn't seem to do any noticeable improvement.
> >
> > At the moment, my one core has 800GB index, ~450 Million documents, 48 G
> > Xmx. GC pauses haven't been an issue though.  One machine runs with a 3TB
> > drive, running 3 solr processes (each with one core as described
> above).  I
> > agree that it is a very atypical system so i should probably try
> different
> > parameters with a fresh eye to find the solution.
> >
> >
> > I tried with autocommits (commit with opensearcher=false very half
> minute ;
> > and softcommit every 5 minutes). That supported the hypothesis that the
> > query throughput decreases after opening a new searcher and **not** after
> > committing the index . Cache hit ratios are all in 80+% (even when i
> > decreased the filterCache to 128, so i will keep it at this lower value).
> > Document cache hitratio is really bad, it drops to around 40% after
> > newSearcher. But i guess that is expected, since it cannot be warmed up
> > anyway.
> >
> >
> > Thanks
> > Nawab
> >
> >
> >
> > On Thu, Nov 9, 2017 at 9:11 PM, Erick Erickson 
> > wrote:
> >
> > > What evidence to you have that the changes you've made to your configs
> > > are useful? There's lots of things in here that are suspect:
> > >
> > >   1
> > >
> > > First, this is useless unless you are forceMerging/optimizing. Which
> > > you shouldn't be doing under most circumstances. And you're going to
> > > be rewriting a lot of data every time See:
> > >
> > > https://lucidworks.com/2017/10/13/segment-merging-deleted-
> > > documents-optimize-may-bad/
> > >
> > > filterCache size of size="10240" is far in excess of what we usually
> > > recommend. Each entry can be up to maxDoc/8 and you have 10K of them.
> > > Why did you choose this? On the theory that "more is better?" If
> > > you're using NOW then you may not be using the filterCache well, see:
> > >
> > > https://lucidworks.com/2012/02/23/date-math-now-and-filter-queries/
> > >
> > > autowarmCount="1024"
> > >
> > > Every time you commit you're firing off 1024 queries which is going to
> > > spike the CPU a lot. Again, this is super-excessive. I usually start
> > > with 16 or so.
> > >
> > > Why are you committing from a cron job? Why not just set your
> > > autocommit settings and forget about it? That's what they're for.
> > >
> > > Your queryResultCache is likewise kind of large, but it takes up much
> > > less space than the filterCache per entry so it's probably OK. I'd
> > > still shrink it and set the autowarm to 16 or so to start, unless
> > > you're seeing a pretty high hit ratio, which is pretty unusual but
> > > does happen.
> > >
> > > 48G of memory is just asking for long GC pauses. How many docs do you
> > > have in each core anyway? If you're really using this much heap, then
> > > it'd be good to see what you can do to shrink in. Enabling docValues
> > > for all fields you facet, sort or group on will help that a lot if you
> > > haven't already.
> > >
> > > How much memory on your entire machine? And how much is used by _all_
> > > the JVMs you running on a particular machine? MMapDirectory needs as
> > > much OS memory space as it can get, see:
> > >
> http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
> > >
> > > Lately we've seen some structures that consume memory until a commit
> > > happens (either soft or hard). I'd shrink my autocommit down to 60
> > > seconds or even less (openSearcher=false).
> > >
> > > In short, I'd go back mostly to the default settings and build _up_ as
> > > you can demonstrate improvements. You've changed enough things here
> > > that untangling which one is the culprit will be hard. You want the
> > > JVM to have as little memory as possible, unfortunately that's
> > > something you figure out by experimentation.
> > >
> > > Best,
> > > Erick
> > >
> > > On Thu, Nov 9, 2017 at 8:42 PM, Nawab Zada Asad Iqbal <
> khi...@gmail.com>
> > > wrote:
> > > > Hi,
> > > >
> > > > I am committing every 5 minutes using a periodic cron job  "curl
> > > > http://localhost:8984/solr/core1/update?commit=true;. Besides this,
> my
> > > app
> > > > doesn't do any soft or hard commits. With Solr 7 upgrade, I am
> noticing
> > > > 

Re: Solr7: Bad query throughput around commit time

2017-11-11 Thread Kevin Risden
> One machine runs with a 3TB drive, running 3 solr processes (each with
one core as described above).

How much total memory on the machine?

Kevin Risden

On Sat, Nov 11, 2017 at 1:08 PM, Nawab Zada Asad Iqbal 
wrote:

> Thanks for a quick and detailed response, Erick!
>
> Unfortunately i don't have a proof, but our servers with solr 4.5 are
> running really nicely with the above config. I had assumed that same  or
> similar settings will also perform well with Solr 7, but that assumption
> didn't hold. As, a lot has changed in 3 major releases.
> I have tweaked the cache values as you suggested but increasing or
> decreasing doesn't seem to do any noticeable improvement.
>
> At the moment, my one core has 800GB index, ~450 Million documents, 48 G
> Xmx. GC pauses haven't been an issue though.  One machine runs with a 3TB
> drive, running 3 solr processes (each with one core as described above).  I
> agree that it is a very atypical system so i should probably try different
> parameters with a fresh eye to find the solution.
>
>
> I tried with autocommits (commit with opensearcher=false very half minute ;
> and softcommit every 5 minutes). That supported the hypothesis that the
> query throughput decreases after opening a new searcher and **not** after
> committing the index . Cache hit ratios are all in 80+% (even when i
> decreased the filterCache to 128, so i will keep it at this lower value).
> Document cache hitratio is really bad, it drops to around 40% after
> newSearcher. But i guess that is expected, since it cannot be warmed up
> anyway.
>
>
> Thanks
> Nawab
>
>
>
> On Thu, Nov 9, 2017 at 9:11 PM, Erick Erickson 
> wrote:
>
> > What evidence to you have that the changes you've made to your configs
> > are useful? There's lots of things in here that are suspect:
> >
> >   1
> >
> > First, this is useless unless you are forceMerging/optimizing. Which
> > you shouldn't be doing under most circumstances. And you're going to
> > be rewriting a lot of data every time See:
> >
> > https://lucidworks.com/2017/10/13/segment-merging-deleted-
> > documents-optimize-may-bad/
> >
> > filterCache size of size="10240" is far in excess of what we usually
> > recommend. Each entry can be up to maxDoc/8 and you have 10K of them.
> > Why did you choose this? On the theory that "more is better?" If
> > you're using NOW then you may not be using the filterCache well, see:
> >
> > https://lucidworks.com/2012/02/23/date-math-now-and-filter-queries/
> >
> > autowarmCount="1024"
> >
> > Every time you commit you're firing off 1024 queries which is going to
> > spike the CPU a lot. Again, this is super-excessive. I usually start
> > with 16 or so.
> >
> > Why are you committing from a cron job? Why not just set your
> > autocommit settings and forget about it? That's what they're for.
> >
> > Your queryResultCache is likewise kind of large, but it takes up much
> > less space than the filterCache per entry so it's probably OK. I'd
> > still shrink it and set the autowarm to 16 or so to start, unless
> > you're seeing a pretty high hit ratio, which is pretty unusual but
> > does happen.
> >
> > 48G of memory is just asking for long GC pauses. How many docs do you
> > have in each core anyway? If you're really using this much heap, then
> > it'd be good to see what you can do to shrink in. Enabling docValues
> > for all fields you facet, sort or group on will help that a lot if you
> > haven't already.
> >
> > How much memory on your entire machine? And how much is used by _all_
> > the JVMs you running on a particular machine? MMapDirectory needs as
> > much OS memory space as it can get, see:
> > http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
> >
> > Lately we've seen some structures that consume memory until a commit
> > happens (either soft or hard). I'd shrink my autocommit down to 60
> > seconds or even less (openSearcher=false).
> >
> > In short, I'd go back mostly to the default settings and build _up_ as
> > you can demonstrate improvements. You've changed enough things here
> > that untangling which one is the culprit will be hard. You want the
> > JVM to have as little memory as possible, unfortunately that's
> > something you figure out by experimentation.
> >
> > Best,
> > Erick
> >
> > On Thu, Nov 9, 2017 at 8:42 PM, Nawab Zada Asad Iqbal 
> > wrote:
> > > Hi,
> > >
> > > I am committing every 5 minutes using a periodic cron job  "curl
> > > http://localhost:8984/solr/core1/update?commit=true;. Besides this, my
> > app
> > > doesn't do any soft or hard commits. With Solr 7 upgrade, I am noticing
> > > that query throughput plummets every 5 minutes - probably when the
> commit
> > > happens.
> > > What can I do to improve this? I didn't use to happen like this in
> > solr4.5.
> > > (i.e., i used to get a stable query throughput of  50-60 queries per
> > > second. Now there are spikes to 60 qps interleaved by drops to almost

Re: Solr7: Bad query throughput around commit time

2017-11-11 Thread Nawab Zada Asad Iqbal
Thanks for a quick and detailed response, Erick!

Unfortunately I don't have proof, but our servers with Solr 4.5 are
running really nicely with the above config. I had assumed that the same or
similar settings would also perform well with Solr 7, but that assumption
didn't hold, as a lot has changed in 3 major releases.
I have tweaked the cache values as you suggested, but increasing or
decreasing them doesn't seem to make any noticeable improvement.

At the moment, my one core has an 800GB index, ~450 million documents, and
48G Xmx. GC pauses haven't been an issue, though. One machine runs with a 3TB
drive, running 3 Solr processes (each with one core as described above). I
agree that it is a very atypical system, so I should probably try different
parameters with a fresh eye to find the solution.


I tried with autocommits (hard commit with openSearcher=false every half
minute, and soft commit every 5 minutes). That supported the hypothesis that
the query throughput decreases after opening a new searcher and **not** after
committing the index. Cache hit ratios are all 80+% (even when I decreased
the filterCache to 128, so I will keep it at this lower value). The document
cache hit ratio is really bad; it drops to around 40% after a newSearcher.
But I guess that is expected, since it cannot be warmed up anyway.
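
For reference, that autocommit setup corresponds roughly to this
solrconfig.xml sketch (inside <updateHandler>, times in milliseconds):

    <autoCommit>
      <maxTime>30000</maxTime>          <!-- hard commit every 30 seconds -->
      <openSearcher>false</openSearcher>
    </autoCommit>
    <autoSoftCommit>
      <maxTime>300000</maxTime>         <!-- new searcher every 5 minutes -->
    </autoSoftCommit>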


Thanks
Nawab



On Thu, Nov 9, 2017 at 9:11 PM, Erick Erickson 
wrote:

> What evidence do you have that the changes you've made to your configs
> are useful? There's lots of things in here that are suspect:
>
>   1
>
> First, this is useless unless you are forceMerging/optimizing. Which
> you shouldn't be doing under most circumstances. And you're going to
> be rewriting a lot of data every time. See:
>
> https://lucidworks.com/2017/10/13/segment-merging-deleted-
> documents-optimize-may-bad/
>
> filterCache size of size="10240" is far in excess of what we usually
> recommend. Each entry can be up to maxDoc/8 and you have 10K of them.
> Why did you choose this? On the theory that "more is better?" If
> you're using NOW then you may not be using the filterCache well, see:
>
> https://lucidworks.com/2012/02/23/date-math-now-and-filter-queries/
>
> autowarmCount="1024"
>
> Every time you commit you're firing off 1024 queries which is going to
> spike the CPU a lot. Again, this is super-excessive. I usually start
> with 16 or so.
>
> Why are you committing from a cron job? Why not just set your
> autocommit settings and forget about it? That's what they're for.
>
> Your queryResultCache is likewise kind of large, but it takes up much
> less space than the filterCache per entry so it's probably OK. I'd
> still shrink it and set the autowarm to 16 or so to start, unless
> you're seeing a pretty high hit ratio, which is pretty unusual but
> does happen.
>
> 48G of memory is just asking for long GC pauses. How many docs do you
> have in each core anyway? If you're really using this much heap, then
> it'd be good to see what you can do to shrink it. Enabling docValues
> for all fields you facet, sort or group on will help that a lot if you
> haven't already.
>
> How much memory on your entire machine? And how much is used by _all_
> the JVMs you're running on a particular machine? MMapDirectory needs as
> much OS memory space as it can get, see:
> http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
>
> Lately we've seen some structures that consume memory until a commit
> happens (either soft or hard). I'd shrink my autocommit down to 60
> seconds or even less (openSearcher=false).
>
> In short, I'd go back mostly to the default settings and build _up_ as
> you can demonstrate improvements. You've changed enough things here
> that untangling which one is the culprit will be hard. You want the
> JVM to have as little memory as possible, unfortunately that's
> something you figure out by experimentation.
>
> Best,
> Erick
>
> On Thu, Nov 9, 2017 at 8:42 PM, Nawab Zada Asad Iqbal 
> wrote:
> > Hi,
> >
> > I am committing every 5 minutes using a periodic cron job  "curl
> > http://localhost:8984/solr/core1/update?commit=true;. Besides this, my
> app
> > doesn't do any soft or hard commits. With Solr 7 upgrade, I am noticing
> > that query throughput plummets every 5 minutes - probably when the commit
> > happens.
> > What can I do to improve this? It didn't happen like this in Solr 4.5.
> > (i.e., i used to get a stable query throughput of  50-60 queries per
> > second. Now there are spikes to 60 qps interleaved by drops to almost
> > **0**).  Between those 5 minutes, I am able to achieve high throughput,
> > hence I guess that issue is related to indexing or merging, and not query
> > flow.
> >
> > I have 48G allotted to each solr process, and it seems that only ~50% is
> > being used at any time, similarly CPU is not spiking beyond 50% either.
> > There is frequent merging (every 5 minute) , but i am not sure if that is
> > a cause of the slowdown.
> >
> > Here 

Re: Streaming and large resultsets

2017-11-11 Thread Susmit Shukla
Hi Lanny,

For long-running streaming queries with many shards and huge result sets,
SolrJ's default settings for HTTP max connections / connections per host may
not be enough. If you are using the worker collection (/stream), it depends
on SolrClientCache dispensing HTTP clients with default limits. It could
be useful to turn on debug logging and check.
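
A rough SolrJ sketch of raising those limits and reusing one client cache
(the class and constant names are from SolrJ 6.x/7.x and the numbers are
made up -- verify both against the version you actually run):

    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.solr.client.solrj.impl.HttpClientUtil;
    import org.apache.solr.client.solrj.io.SolrClientCache;
    import org.apache.solr.common.params.ModifiableSolrParams;

    public class StreamClientSetup {
      public static SolrClientCache buildCache() {
        // Raise the HTTP connection pool limits before any clients are created.
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set(HttpClientUtil.PROP_MAX_CONNECTIONS, 1000);
        params.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 200);
        CloseableHttpClient httpClient = HttpClientUtil.createClient(params);

        // One cache shared by all streaming expressions instead of the
        // default-limited clients; pass it in via StreamContext.setSolrClientCache.
        return new SolrClientCache(httpClient);
      }
    }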

Thanks,
Susmit

On Thu, Nov 9, 2017 at 8:35 PM, Lanny Ripple  wrote:

> First, Joel, thanks for your help on this.
>
> 1) I have to admit we really haven't played with a lot of system tuning
> recently (before DocValues for sure).   We'll go through another tuning
> round.
>
> 2) At the time I ran these numbers this morning we were not indexing.  We
> build this collection once a month and then client jobs can update it.  I
> was watching our job queue and there were no jobs running at that time.
> It's possible someone else was querying against other collections but they
> wouldn't have been updating this collection.
>
> 3) I'll try /export on each node.  We're pretty cookie-cutter with all
> nodes being the same and configuration controlled with puppet.  We collect
> system metrics to a Graphite display panel and no host looks out of sorts
> relative to the others.  That said I wouldn't be surprised if a node was
> out of whack.
>
> Thanks again.
>   -ljr
>
> On Thu, Nov 9, 2017 at 2:34 PM Joel Bernstein  wrote:
>
> > In my experience this should be very fast:
> >
> >  search(graph-october,
> > q="outs:tokenA",
> > fl="id,token",
> > sort="id asc",
> > qt="/export",
> > wt=javabin)
> >
> >
> > When the DocValues cache is statically warmed for the two output fields I
> > would see somewhere around 500,000 docs per second exported from a single
> > node.
> >
> > You have sixteen shards which would give you 16 times the throughput. But
> > of course the data is being sent back through the single aggregator node
> > so your throughput is only as fast as the aggregator node can process the
> > results.
> >
> > This does not explain the slowness that you are seeing. I see a couple of
> > possible reasons:
> >
> > 1) The memory on the system is not tuned optimally. You allocated a large
> > amount of memory to the heap and are not providing enough memory to OS
> > filesystem. Lucene DocValues use the OS filesystem cache for the
> DocValues
> > caches. So I would bump down the size of heap considerably.
> >
> > 2) Are you indexing while querying at all? If you are you would need to
> be
> > statically warming the DocValues caches for the id field which is used
> for
> > sorting. Following each commit there is a top level docvalues cache that
> is
> > rebuilt for sorting on string fields. If you use a static warming query
> it
> > will warm the cache before making the new searcher live for searches. I
> > would also pause indexing if possible and run queries only to see how it
> > runs without indexing.
> >
> > 3) Try running a query directly to /export handler on each node. Possibly
> > one of your nodes is slow for some reason and that is causing the entire
> > query to respond slowly.
> >
> >
> >
> >
> >
> > Joel Bernstein
> > http://joelsolr.blogspot.com/
> >
> > On Thu, Nov 9, 2017 at 2:22 PM, Lanny Ripple 
> wrote:
> >
> > > Happy to do so.  I am testing streams for the first time so we don't
> have
> > > any 5.x experience.  The collection I'm testing was loaded after going
> to
> > > 6.6.1 and fixing up the solrconfig for lucene_version and removing the
> > > /export clause.  The indexes run 57G per replica.  We are using 64G
> hosts
> > > with 48G heaps using G1GC but this isn't our only large collection.
> This
> > > morning as I'm running these our cluster is quiet.  I realize some of
> the
> > > performance we are seeing is going to be our data size so not expecting
> > any
> > > silver bullets.
> > >
> > > We are storing 858M documents that are basically
> > >
> > > id: String
> > > token: String
> > > outs: String[]
> > > outsCount: Int
> > >
> > > All stored=true, docvalues=true.
> > >
> > > The `outs` reference a select number of tokens (1.26M).  Here are
> current
> > > percentiles of our outsCount
> > >
> > > `outsCount`
> > > 50%   12
> > > 85%  127
> > > 98%  937
> > > 99.9% 16,284
> > >
> > > I'll display the /stream query but I'm setting up the same thing in
> > solrj.
> > > I'm going to name our small result set "tokenA" and our large one
> > "tokenV".
> > >
> > >   search(graph-october,
> > > q="outs:tokenA",
> > > fl="id,token",
> > > sort="id asc",
> > > qt="/export",
> > > wt=javabin)
> > >
> > > I've placed this in file /tmp/expr and invoke with
> > >
> > >   curl -sSN -m 3600 --data-urlencode expr@/tmp/expr
> > > http://host/solr/graph-october/stream
> > >
> > > The large resultset query replaces "tokenA" with "tokenV".
> > >
> > > My /select query is
> > >
> > >   curl -sSN -m 3600 -d wt=csv -d rows=1 -d 

Re: Nested facet complete wrong counts

2017-11-11 Thread Kenny Knecht
RRGG - [banging my head against the wall]
Of course. You are absolutely right about the multi-valuedness.
Thanks for the 7.0 hint. It gives a reason to upgrade.
Do we need to re-index when upgrading?

Kenny



Kenny Knecht, PhD
CTO and technical lead
+32 486 75 66 16
ke...@ontoforce.com 
www.ontoforce.com Meetdistrict, Ottergemsesteenweg-Zuid 808, 9000 Gent,
Belgium
CIC, One Broadway, MA 02142 Cambridge, United States

On 11 November 2017 at 15:52, Yonik Seeley  wrote:

> Also, If you're looking at all constraints, you shouldn't need refine:true
> But if you do need it, it was only added in Solr 7.0 (and I see you're
> using 6.6)
>
> -Yonik
>
>
> On Sat, Nov 11, 2017 at 9:48 AM, Yonik Seeley  wrote:
> > On Sat, Nov 11, 2017 at 9:18 AM, Kenny Knecht 
> wrote:
> >> Hi Yonik,
> >>
> >> I am aware of the estimate on the hll. But we don't use the hll as a
> >> baseline for comparison. We ask the values for one facet (for example
> >> Gender). We store these counts for each bucket. Next we do another
> request.
> >> This time for a facet and a subfacet (for example Gender x Type). We sum
> >> all the values of Type with the same Gender and compare these sums with
> the
> >> numbers of previous request. These numbers differ by 60% which is quite
> >> worrying. Not always it depends on the facet, but still.
> >> Did you get any reports like this?
> >
> > Nope.  The counts for the scenario you describe should add up exactly
> > for single-valued fields.  Are you sure you're adding in the "missing"
> > bucket?
> >
> > When you some up the sub-facets on Type, do you get a value under or
> > over the counts on the parent facet?
> > Verify that Type is single-valued.  One would not expect facets on a
> > multi-valued field to add up in the same way.
> > Verify that you're getting all of the Type constraints by using a
> > limit of -1on that sub-facet.
> >
> > -Yonik
>


Re: Nested facet complete wrong counts

2017-11-11 Thread Yonik Seeley
Also, if you're looking at all constraints, you shouldn't need refine:true.
But if you do need it, it was only added in Solr 7.0 (and I see you're
using 6.6).

-Yonik


On Sat, Nov 11, 2017 at 9:48 AM, Yonik Seeley  wrote:
> On Sat, Nov 11, 2017 at 9:18 AM, Kenny Knecht  wrote:
>> Hi Yonik,
>>
>> I am aware of the estimate on the hll. But we don't use the hll as a
>> baseline for comparison. We ask the values for one facet (for example
>> Gender). We store these counts for each bucket. Next we do another request.
>> This time for a facet and a subfacet (for example Gender x Type). We sum
>> all the values of Type with the same Gender and compare these sums with the
>> numbers of previous request. These numbers differ by 60% which is quite
>> worrying. Not always it depends on the facet, but still.
>> Did you get any reports like this?
>
> Nope.  The counts for the scenario you describe should add up exactly
> for single-valued fields.  Are you sure you're adding in the "missing"
> bucket?
>
> When you some up the sub-facets on Type, do you get a value under or
> over the counts on the parent facet?
> Verify that Type is single-valued.  One would not expect facets on a
> multi-valued field to add up in the same way.
> Verify that you're getting all of the Type constraints by using a
> limit of -1on that sub-facet.
>
> -Yonik


Re: Nested facet complete wrong counts

2017-11-11 Thread Yonik Seeley
On Sat, Nov 11, 2017 at 9:18 AM, Kenny Knecht  wrote:
> Hi Yonik,
>
> I am aware of the estimate on the hll. But we don't use the hll as a
> baseline for comparison. We ask the values for one facet (for example
> Gender). We store these counts for each bucket. Next we do another request.
> This time for a facet and a subfacet (for example Gender x Type). We sum
> all the values of Type with the same Gender and compare these sums with the
> numbers of previous request. These numbers differ by 60% which is quite
> worrying. Not always it depends on the facet, but still.
> Did you get any reports like this?

Nope.  The counts for the scenario you describe should add up exactly
for single-valued fields.  Are you sure you're adding in the "missing"
bucket?

When you sum up the sub-facets on Type, do you get a value under or
over the counts on the parent facet?
Verify that Type is single-valued.  One would not expect facets on a
multi-valued field to add up in the same way.
Verify that you're getting all of the Type constraints by using a
limit of -1 on that sub-facet.
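
For illustration, an un-escaped sketch of such a request (field names taken
from your earlier examples, sent with q=*:*, rows=0 and fq=type:"something"
as before; limit:-1 and missing:true on the sub-facet so every bucket is
counted):

    json.facet = {
      "Gender_sf": {
        "type": "terms", "field": "Gender_sf",
        "missing": true, "limit": -1,
        "facet": {
          "Status_sf": {
            "type": "terms", "field": "Status_sf",
            "missing": true, "limit": -1
          }
        }
      }
    }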

-Yonik


Re: Nested facet complete wrong counts

2017-11-11 Thread Kenny Knecht
Hi Yonik,

I am aware of the estimate on the hll. But we don't use the hll as a
baseline for comparison. We ask for the values of one facet (for example
Gender) and store these counts for each bucket. Next we do another request,
this time for a facet and a sub-facet (for example Gender x Type). We sum
all the values of Type with the same Gender and compare these sums with the
numbers of the previous request. These numbers can differ by 60%, which is
quite worrying. Not always; it depends on the facet, but still.
Did you get any reports like this?

Thanks

Kenny

On 11 Nov 2017 at 01:47, "Yonik Seeley"  wrote:

> I do notice you are using hll (hyper-log-log) which is a distributed
> cardinality *estimate* : https://en.wikipedia.org/wiki/HyperLogLog
>
> -Yonik
>
>
> On Fri, Nov 10, 2017 at 11:32 AM, kenny  wrote:
> > Hi all,
> >
> > We are doing some tests in solr 6.6 with json facet api and we get
> > completely wrong counts for some combination of  facets
> >
> > Setting: We have a set of fields for 376k documents in our query (total
> 120M
> > documents). We work with 2 shards. When doing first a faceting over the
> > first facet and keeping these numbers, we subsequently do a nested
> faceting
> > over both facets.
> >
> > Then we add the numbers of sub-facet and expect to get the
> (approximately)
> > the same numbers back. Sometimes we get rounding errors of about 1%
> > difference. But on other occasions it seems to way off
> >
> > for example
> >
> > Gender (3 values) Country (211 values)
> > 16226 - 18424 = -2198 (-13.5461604832%)
> > 282854 - 464387 = -181533 (-64.1790464338%)
> > 40489 - 47902 = -7413 (-18.3086764306%)
> > 36672 - 49749 = -13077 (-35.6593586387%)
> >
> > Gender (3 values)  Status (17 Values)
> > 16226 - 16273 = -47 (-0.289658572661%)
> > 282854 - 435974 = -153120 (-54.1339348215%)
> > 40489 - 49925 = -9436 (-23.305095211%)
> > 36672 - 54019 = -17347 (-47.3031195462%)
> >
> > ...
> >
> > These are the typical requests we submit. So note that we have refine
> and an
> > overrequest, but we in the case of Gender vs Request we should query all
> the
> > buckets anyway.
> >
> > {"wt":"json","rows":0,"json.facet":"{\"Status_sfhll\":\"
> hll(Status_sf)\",\"Status_sf\":{\"type\":\"terms\",\"field\"
> :\"Status_sf\",\"missing\":true,\"refine\":true,\"
> overrequest\":50,\"limit\":50,\"offset\":0}}","q":"*:*","fq"
> :["type:\"something\""]}
> >
> > {"wt":"json","rows":0,"json.facet":"{\"Gender_sf\":{\"
> type\":\"terms\",\"field\":\"Gender_sf\",\"missing\":true,\
> "refine\":true,\"overrequest\":10,\"limit\":10,\"offset\":0,
> \"facet\":{\"Status_sf\":{\"type\":\"terms\",\"field\":\"
> Status_sf\",\"missing\":true,\"refine\":true,\"overrequest\"
> :50,\"limit\":50,\"offset\":0}}},\"Gender_sfhll\":\"hll(
> Gender_sf)\"}","q":"*:*","fq":["type:\"something\""]}
> >
> > Is this a known bug? Would switching to old facet api resolve this? Are
> > there other parameters we miss?
> >
> >
> > Thanks
> >
> >
> > kenny
> >
> >
>


Re: Nested facet complete wrong counts

2017-11-11 Thread Kenny Knecht
Thank you. But as I showed in my example, we used refine, and overrequest is
not strictly needed because we need all buckets anyway. But that can hardly
explain an error of 60%, right?

On 10 Nov 2017 at 19:29, "Amrit Sarkar"  wrote:

> Kenny,
>
> This is a known behavior in multi-sharded collections where the field values
> belonging to the same facet don't reside in the same shard. Yonik Seeley has
> improved the JSON Facet feature by introducing "overrequest" and "refine"
> parameters.
>
> Kindly checkout Jira:
> https://issues.apache.org/jira/browse/SOLR-7452
> https://issues.apache.org/jira/browse/SOLR-9432
>
> Relevant blog: https://medium.com/@abb67cbb46b/1acfa77cd90c
>
> On 10 Nov 2017 10:02 p.m., "kenny"  wrote:
>
> > Hi all,
> >
> > We are doing some tests in solr 6.6 with json facet api and we get
> > completely wrong counts for some combination of  facets
> >
> > Setting: We have a set of fields for 376k documents in our query (total
> > 120M documents). We work with 2 shards. When doing first a faceting over
> > the first facet and keeping these numbers, we subsequently do a nested
> > faceting over both facets.
> >
> > Then we add the numbers of sub-facet and expect to get the
> (approximately)
> > the same numbers back. Sometimes we get rounding errors of about 1%
> > difference. But on other occasions it seems to way off
> >
> > for example
> >
> > Gender (3 values) Country (211 values)
> > 16226 - 18424 = -2198 (-13.5461604832%)
> > 282854 - 464387 = -181533 (-64.1790464338%)
> > 40489 - 47902 = -7413 (-18.3086764306%)
> > 36672 - 49749 = -13077 (-35.6593586387%)
> >
> > Gender (3 values)  Status (17 Values)
> > 16226 - 16273 = -47 (-0.289658572661%)
> > 282854 - 435974 = -153120 (-54.1339348215%)
> > 40489 - 49925 = -9436 (-23.305095211%)
> > 36672 - 54019 = -17347 (-47.3031195462%)
> >
> > ...
> >
> > These are the typical requests we submit. So note that we have refine and
> > an overrequest, but we in the case of Gender vs Request we should query
> all
> > the buckets anyway.
> >
> > {"wt":"json","rows":0,"json.facet":"{\"Status_sfhll\":\"hll(
> > Status_sf)\",\"Status_sf\":{\"type\":\"terms\",\"field\":\"S
> > tatus_sf\",\"missing\":true,\"refine\":true,\"overrequest\":
> > 50,\"limit\":50,\"offset\":0}}","q":"*:*","fq":["type:\"something\""]}
> >
> > {"wt":"json","rows":0,"json.facet":"{\"Gender_sf\":{\"type\"
> > :\"terms\",\"field\":\"Gender_sf\",\"missing\":true,\"refine
> > \":true,\"overrequest\":10,\"limit\":10,\"offset\":0,\"
> > facet\":{\"Status_sf\":{\"type\":\"terms\",\"field\":\"Statu
> > s_sf\",\"missing\":true,\"refine\":true,\"overrequest\":50,\
> > "limit\":50,\"offset\":0}}},\"Gender_sfhll\":\"hll(Gender_
> > sf)\"}","q":"*:*","fq":["type:\"something\""]}
> >
> > Is this a known bug? Would switching to old facet api resolve this? Are
> > there other parameters we miss?
> >
> >
> > Thanks
> >
> >
> > kenny
> >
> >
> >
>