hi all, 

We did try q=queryA AND queryB vs. q=queryA&fq=queryB.   For all tests we
commented out caching and reloaded the core between queries, to be sure we
were getting fair timing comparisons.
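
To be concrete, the two request forms we compared looked roughly like this
(field and term names here are just placeholders, not our actual schema):

    q=title:queryA AND status:queryB

versus

    q=title:queryA&fq=status:queryB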

We have so many unique fq values and such frequent commits that the caches
are always being invalidated, so for the most part our tests ran with the
caches commented out.   Further, we have seen some gains from an autoCommit
interval of 10 or 15 seconds, but the first queries after a commit are still
horrible.
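
The autoCommit setup we experimented with looks roughly like this in
solrconfig.xml (the maxTime value is what we varied between 10 and 15
seconds; the rest is only illustrative, not our exact config):

    <autoCommit>
      <maxTime>15000</maxTime>            <!-- 10000 for the 10-second variant -->
      <openSearcher>true</openSearcher>   <!-- new searcher on each hard commit -->
    </autoCommit>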

Also, we tried putting some of these fq values into the "newSearcher"
warming queries, so that at least the OS cache gets warmed once before the
searcher is registered as available.
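
Roughly what we mean by that, in solrconfig.xml terms (the q/fq values below
are placeholders for our real filters):

    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst>
          <str name="q">*:*</str>
          <str name="fq">status:active</str>   <!-- placeholder for one of our common filters -->
        </lst>
      </arr>
    </listener>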

The index size is around 121GB, so it's just outside the modest range but
not yet into unacceptable territory.  The docs are all modest in content:
small pieces of content, mostly strings.  Think metadata of PDF files for
the most part, not even their OCR content, just clean, well-defined metadata.

[quote]
 How much memory are you giving the JVM? Are you autowarming? Are you
indexing while this is going on, and if so, what are your commit parameters? If
you add &debug=true to your query, one of 
the returned sections 
[/quote]

We tried several heap sizes; the gains were minimal, and above a certain
point there was no gain at all.
If we use autowarming in either the filterCache or the newSearcher queries,
the warming takes too long, several warming searchers pile up, and we start
seeing "maxWarmingSearchers exceeded" errors.
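
For reference, the knobs involved are roughly these in solrconfig.xml (the
numbers are illustrative, not what we settled on; autowarmCount on the
filterCache is what drags the warm-up out, and maxWarmingSearchers is the
limit we keep hitting):

    <filterCache class="solr.FastLRUCache"
                 size="512"
                 initialSize="512"
                 autowarmCount="128"/>

    <maxWarmingSearchers>2</maxWarmingSearchers>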

It's by using &debug=true and &debug=timing that we isolated this: the query
component took the longest, and sometimes the prepare phase takes a little
time too.  Forget it if we add a facet, that adds another 500+ ms at the low
end ...
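
For anyone wanting to reproduce the breakdown, the requests looked along
these lines (host, core, and field names are placeholders):

    http://localhost:8983/solr/ourcore/select?q=title:queryA&fq=status:active&debug=timing

The timing section of the response then shows per-component prepare and
process times.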

Very perplexing and fun challenge.  Thanks, Toke, for the pointers on heap
size; we will dial the heap size down.

Anria 




