: 29.557.308
Regards,
Bernd
Am 22.07.2011 00:10, schrieb Santiago Bazerque:
Hello Erick,
I have a 1.7MM-document, 3.6GB index. I also have an unusual number of
dynamic fields, which I use for sorting. My FieldCache currently has about
13.000 entries, even though the index only serves 1-3 queries per second.
Each query sorts by two dynamic fields and facets on 3-4 fields.
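(Background, not from the thread itself: in Lucene 3.x the FieldCache generally holds one un-inverted entry per segment and field, for every distinct field ever used in a sort — and in some faceting methods — so entries accumulate across all dynamic field names the queries have touched. A request of this shape, with hypothetical field names, is the kind that populates it:)

```
/select?q=*:*&sort=score_f_12 desc,rank_f_3 asc&facet=true&facet.field=tag_s_1&facet.field=tag_s_2
```

With 15.000 possible dynamic fields, even a handful of sort/facet fields per query can plausibly accumulate to ~13.000 cache entries over time.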
...
Best
Erick
On Sun, Jun 19, 2011 at 10:32 AM, Santiago Bazerque sbazer...@gmail.com
wrote:
Hello Erick, thanks for your answer!
Yes, our over-optimization is mainly due to paranoia over these strange
commit times. The long optimize time persisted in all the subsequent
commits.
Hello!
Here is a puzzling experiment:
I build an index of about 1.2MM documents using SOLR 3.1. The index has a
large number of dynamic fields (about 15.000). Each document has about 100
fields.
I add the documents in batches of 20, and every 50.000 documents I optimize
the index.
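(For reference, an explicit optimize in Solr 3.1 is an XML message posted to the update handler; a minimal sketch, with the URL and attribute values assumed rather than quoted from the thread:)

```xml
<!-- POST to http://localhost:8983/solr/update -->
<optimize waitFlush="true" waitSearcher="true"/>
```

An optimize rewrites the index down to a single segment, so doing it every 50.000 documents is expensive by design — which is consistent with the "over-optimization" remark earlier in the thread.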
The first 10
it after 10?
Best
Erick
On Sun, Jun 19, 2011 at 6:04 AM, Santiago Bazerque sbazer...@gmail.com
wrote:
Hello,
I have a 7GB index with 2MM documents. Each document has about 400 fields,
but the fields are dynamic, and in total I have ~200k distinct fields.
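(For context: dynamic fields are declared once by pattern in schema.xml, but every concrete name actually indexed still becomes a separate field in the index. A minimal sketch — the patterns and types here are assumptions, not taken from this poster's schema:)

```xml
<!-- schema.xml: one pattern can back thousands of concrete field names -->
<dynamicField name="*_s" type="string" indexed="true" stored="true"/>
<dynamicField name="*_f" type="float"  indexed="true" stored="false"/>
```

The ~200k concrete field names, not the handful of patterns, are what the index metadata has to enumerate, which is one plausible contributor to the slow start-up described below.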
We're using SOLR 3.1 and tomcat 5.5. We are seeing very slow start-up times
(from tomcat startup to SOLR being ready to answer queries: about 5 minutes).
We have
Hello,
I am using the new SOLR 3.1 for a 2.6GB, 1MM-document index. Reading the
forums and the archives I learned that SOLR and Lucene now manage commits and
transactions a bit differently than in previous versions, and indeed I feel
the behavior has changed.
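(One setting whose behavior is worth checking when commits feel different across versions is autocommit in solrconfig.xml; a hedged sketch of the 3.1-era syntax, with the threshold values being examples only:)

```xml
<!-- solrconfig.xml -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>10000</maxDocs>   <!-- commit after this many pending documents -->
    <maxTime>60000</maxTime>   <!-- or after this many milliseconds -->
  </autoCommit>
</updateHandler>
```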
Here's the thing: committing a few