Hi,

I have 2 Solr shards. One is filled with approx. 25 million documents (local
index 6GB), the other with 10 million documents (2.7GB index size).
I am trying to create some kind of 'word cloud' to see the frequency of
words in a *text_general* field.
For this I am currently faceting on this field, and I am also restricting
the document set with some other filters in the query.
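Roughly, the request I issue looks like the following (the field and filter
names here are placeholders, not my actual schema):

    http://localhost:8983/solr/shard1/select?q=*:*
        &fq=category:news            <- example filter, not my real one
        &rows=0                      <- only the facet counts are needed
        &facet=true
        &facet.field=content_text    <- the text_general field
        &facet.limit=200             <- top 200 terms for the word cloud
        &facet.mincount=5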

The performance is really bad for the first call and then pretty fast for
the following calls.

The maximum Java heap size is 3GB for each shard. Both shards are running on
the same physical server, which has 12GB of RAM.

Question: Should I reduce the number of documents in one shard so that its
index is equal to or smaller than the Java heap size for that shard? Or is
there another way to avoid these slow first calls?

Thank you

Daniel
