Anria B. wrote:
> Schema investigations: these are frequently on multivalued string
> fields, and we believe that may also be slowing things down even more,
> but we were wondering why. When we run on single-valued fields it is
> faster than on the multi-valued fields, even
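For reference, the single- vs multi-valued distinction being investigated lives in the schema; a minimal sketch, with hypothetical field names (not from the thread):

```xml
<!-- Hypothetical schema.xml fragment: the same string type, differing only
     in multiValued. Which side the observed slowdown comes from is exactly
     what the thread is probing. -->
<field name="docType"  type="string" indexed="true" stored="true" multiValued="false"/>
<field name="keywords" type="string" indexed="true" stored="true" multiValued="true"/>
```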
Toke Eskildsen, State and University Library, Denmark
--
View this message in context:
http://lucene.472066.n3.nabble.com/fq-degrades-qtime-in-a-20million-doc-collection-tp4250567p4251176.html
Sent from the Solr - User mailing list archive at Nabble.com.
On Wed, Jan 13, 2016 at 7:01 PM, Shawn Heisey wrote:
[...]
>> 2. q=*&fq=someField:SomeVal ---> takes 2.5 seconds
>> 3. q=someField:SomeVal --> 300ms
[...]
>>
>> have any of you encountered such a thing?
>> that FQ degrades query time by so much?
> A value of * for
and pointers and hints, you kept us busy
changing our mindset on a lot of things here.
Regards
Anria
Anria B. wrote:
> Thanks Toke for this. It gave us a ton to think about, and it really
> supports the notion of several smaller indexes over one very large one,
> where we can distribute a few JVM processes with a smaller heap each
> rather than have one massive one.
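A sketch of what that could look like with the stock Solr 5.x start script (ports and heap sizes are illustrative, not from the thread): several nodes with modest fixed heaps, leaving the bulk of the 256GB box to the OS page cache for the index files.

```shell
# Hypothetical layout: two Solr nodes with small heaps instead of one
# -Xmx=64GB process; index files are then served mostly from the page cache.
bin/solr start -p 8983 -m 8g
bin/solr start -p 8984 -m 8g
```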
Jack:
I think that was for faceting? SOLR-8096 maybe?
On Thu, Jan 14, 2016 at 12:25 AM, Toke Eskildsen wrote:
> On Wed, 2016-01-13 at 15:01 -0700, Anria B. wrote:
On Wed, 2016-01-13 at 15:01 -0700, Anria B. wrote:
[256GB RAM]
> 1. Collection has 20-30 million docs.
Just for completeness: How large is the collection in bytes?
> 2. q=*&fq=someField:SomeVal ---> takes 2.5 seconds
> 3. q=someField:SomeVal --> 300ms
> 4. as numFound -> infinity,
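The two timings above differ only in where the term query sits; a minimal sketch of the two request shapes, using the thread's placeholder field and a hypothetical endpoint. It is written with the match-all `*:*` — a bare `*` is a wildcard query over all terms and carries its own cost, which may be part of the 2.5 s:

```python
from urllib.parse import urlencode

# Hypothetical endpoint; field/value placeholders are from the thread.
base = "http://localhost:8983/solr/collection1/select"

# 2. above: main query matches everything, term query applied as a filter
#    (computed as a bitset over the whole index, cached in the filterCache).
with_fq = urlencode({"q": "*:*", "fq": "someField:SomeVal", "wt": "json"})

# 3. above: the same term query scored directly as the main query.
direct = urlencode({"q": "someField:SomeVal", "wt": "json"})

print(base + "?" + with_fq)
print(base + "?" + direct)
```

Both forms return the same documents; the fq form pays for a whole-index bitset on the first, uncached hit, which is one standard explanation for the 2.5 s vs 300 ms gap.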
"timing": {
  "time": 266,
  "prepare": {
    "time": 0,
    "query": {
      "time": 0
    },
    "facet": {
      "time": 0
    },
    "mlt": {
      "time": 0
    },
    "highlight": {
      "time": 0
    },
    "stats": {
      "time": 0
    },
    "debug": {
      "time": 0
    }
  },
  "process": {
    "time": 266,
    "query": {
      "time": 266
    },
    "facet": {
      "time": 0
    },
    "mlt": {
      "time": 0
    },
    "highlight": {
      "time": 0
    },
    "stats": {
      "time": 0
    },
    "debug": {
      "time": 0
    }
  }
}
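Reading the timing block above: the whole 266 ms sits in the query component's process phase, with faceting, mlt, highlighting and stats all at zero. A small sketch that just restates those numbers and checks where the time goes:

```python
# Mirrors the debug=timing output above (only the non-zero entries kept):
# total request time 266 ms, all of it in the query component's process phase.
timing = {
    "time": 266,
    "prepare": {"time": 0},
    "process": {"time": 266, "query": {"time": 266}},
}

# All the time is spent actually executing the (filter) query, so the
# query/filter evaluation itself is the bottleneck, not other components.
query_share = timing["process"]["query"]["time"] / timing["time"]
print(query_share)
```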
On 1/14/2016 12:07 PM, Anria B. wrote:
> Here are some actual examples, if it helps:
>
> wt=json&q=*:*&indent=on&fq=SolrDocumentType:"invalidValue"&fl=timestamp&start=0&rows=0&debug=timing
> "QTime": 590,
>
> Now we wipe out all caches, and put the filter in q.
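Cache state matters for that comparison: each distinct fq is cached as a bitset in the filterCache, configured in solrconfig.xml, so only the first, uncached hit pays the full cost. A stock-looking fragment (sizes illustrative):

```xml
<!-- solrconfig.xml: one entry per distinct fq; at 20-30M docs each cached
     bitset is roughly numDocs/8 bytes, i.e. ~2.5-4 MB per entry. -->
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
```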
r.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
On 1/14/2016 1:01 PM, Anria B. wrote:
> Here is a stacktrace from when we put a query in the autowarming, or in the
> "newSearcher" listener to warm up the collection after a commit:
> org.apache.solr.core.SolrCore - org.apache.solr.common.SolrException: Error
> opening new searcher. exceeded limit of maxWarmingSearchers
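That "exceeded limit" error is the standard symptom of commits opening new searchers faster than warming can finish; both knobs live in solrconfig.xml (the warming query here is illustrative, not from the thread):

```xml
<!-- solrconfig.xml: 2 is the default cap on concurrently warming searchers;
     expensive warming queries make the limit easier to hit. -->
<maxWarmingSearchers>2</maxWarmingSearchers>

<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <!-- hypothetical warming query -->
    <lst><str name="q">someField:SomeVal</str><str name="rows">0</str></lst>
  </arr>
</listener>
```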
That sounds like it. Sorry my memory is so hazy.
Maybe Yonik can either confirm that the Jira is still outstanding or close
it, and confirm whether these symptoms are related.
-- Jack Krupansky
On Thu, Jan 14, 2016 at 10:54 AM, Erick Erickson wrote:
> Jack:
>
> I think that was for faceting? SOLR-8096 maybe?
n, caches off ... this same
> > phenomenon persisted.
> >
> > As for Tomcat, it's an easy enough test to run it in Jetty. We will
> > certainly try that! For GC we've had both the default and G1 setups.
> >
> > Thanks for giving us something to think about
> >
> > Anria
> >
> >
> >
>
-Xmx=64GB.
Thanks
Anria
On 1/13/2016 3:01 PM, Anria B. wrote:
> I have a really fun question to ask. I'm sitting here looking at what is by
> far the beefiest box I've ever seen in my life: 256GB of RAM, terabytes of
> disk space, the works. A Linux server, properly partitioned.
>
> Yet, what we are seeing goes