On Wed, 2013-04-24 at 23:10 +0200, Daniel Tyreus wrote:
> But why is it slow to generate facets on a result set of 0? Furthermore,
> why does it take the same amount of time to generate facets on a result set
> of 2000 as 100,000 documents?

The default faceting method for your query is field cache. Field cache
faceting works by generating a structure for all the values for the
field in the whole corpus. It is exactly the same work whether you hit
0, 2K or 100M documents with your query.

After the structure has been built, the actual counting of values in the
facet is fast. There is not much difference between 2K and 100K hits.

> This leads me to believe that the FQ is being applied AFTER the facets are
> calculated on the whole data set. For my use case it would make a ton of
> sense to apply the FQ first and then facet. Is it possible to specify this
> behavior or do I need to get into the code and get my hands dirty?

As you write later, you have tried fc, enum and fcs, with fcs having the
fastest first-request time. That is understandable as it is
segment-oriented and (nearly) just a matter of loading the values
sequentially from storage. However, the general observation is that it
is about 10 times as slow as the fc-method for subsequent queries. Since
you are doing NRT that might still leave fcs as the best method for you.
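For reference, the method is selected per request with the facet.method
parameter. A request along these lines (the field and filter names below
are just placeholders) would use the per-segment fcs variant:

  /select?q=*:*&fq=user_id:12345&facet=true&facet.field=category&facet.method=fcs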

As for creating a new faceting implementation that avoids the startup
penalty by using only the found documents, it is technically quite
simple: use stored fields, iterate the hits and request the values.
Unfortunately this scales poorly with the number of hits, so unless you
can guarantee that you will always have small result sets, this is
probably not a viable option.
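
A minimal sketch of that approach in Java, assuming a Lucene
IndexSearcher and a single-valued stored field (the class, field and
hit-limit names below are just placeholders, not an existing Solr API):

  import java.io.IOException;
  import java.util.HashMap;
  import java.util.Map;

  import org.apache.lucene.document.Document;
  import org.apache.lucene.search.IndexSearcher;
  import org.apache.lucene.search.Query;
  import org.apache.lucene.search.ScoreDoc;
  import org.apache.lucene.search.TopDocs;

  public class StoredFieldFacets {
      // Counts facet values only for the documents matching the query,
      // so the work grows with the number of hits, not the corpus size.
      public static Map<String, Integer> count(
              IndexSearcher searcher, Query query, String field, int maxHits)
              throws IOException {
          Map<String, Integer> counts = new HashMap<String, Integer>();
          TopDocs hits = searcher.search(query, maxHits);
          for (ScoreDoc hit : hits.scoreDocs) {
              // Fetch the stored document and read its facet value
              Document doc = searcher.doc(hit.doc);
              String value = doc.get(field);
              if (value != null) {
                  Integer current = counts.get(value);
                  counts.put(value, current == null ? 1 : current + 1);
              }
          }
          return counts;
      }
  }

As noted above, fetching stored fields costs I/O per hit, so this only
pays off when the result sets are consistently small.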

- Toke Eskildsen, State and University Library, Denmark
