We had a similar case with multivalued fields that had many unique values per field. Switching from facet.method=fc to facet.method=enum fixed the problem, though it can run slower.
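As a sketch, the enum method can be requested just for that one field with a per-field override rather than globally; the host, port, and core name ("mycore") below are assumptions, only the field name comes from your mail:

```shell
# Per-field facet.method override: only author_exact uses enum,
# other facet fields keep the default (fc) method.
# Host/port/core name are placeholders for your setup.
SOLR="http://localhost:8983/solr/mycore/select"
PARAMS="q=*:*&rows=0&facet=true&facet.field=author_exact&f.author_exact.facet.method=enum"
echo "curl '${SOLR}?${PARAMS}'"
```

The f.<fieldname>.facet.method form limits the slower enum behavior to the problematic field.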
Dmitry

On Tue, Sep 3, 2013 at 5:04 PM, Dennis Schafroth <den...@indexdata.com> wrote:

> We are harvesting and indexing bibliographic data, so we have many
> distinct author names in our index. While testing Solr 4 I had pushed a
> single core to 100 million records (91GB of data) and everything was
> working fine and fast. After adding a little more to the index, the
> following started to happen:
>
> 17328668 [searcherExecutor-4-thread-1] WARN org.apache.solr.core.SolrCore
> – Approaching too many values for UnInvertedField faceting on field
> 'author_exact' : bucket size=16726546
> 17328701 [searcherExecutor-4-thread-1] INFO org.apache.solr.core.SolrCore
> – UnInverted multi-valued field
> {field=author_exact,memSize=336715415,tindexSize=5001903,time=31595,phase1=31465,nTerms=12048027,bigTerms=0,termInstances=57751332,uses=0}
> 18103757 [searcherExecutor-4-thread-1] ERROR org.apache.solr.core.SolrCore
> – org.apache.solr.common.SolrException: Too many values for UnInvertedField
> faceting on field author_exact
> at org.apache.solr.request.UnInvertedField.<init>(UnInvertedField.java:181)
> at
> org.apache.solr.request.UnInvertedField.getUnInvertedField(UnInvertedField.java:664)
>
> I can see that we have reached a limit on bucket size. Is there a way to
> adjust this? The index also seems to have exploded in size (217GB).
>
> Thinking that I had reached a limit for what a single core could handle
> in terms of faceting, I deleted records from the index, but even now at
> 1/3 the size (32 million records) it still fails with the above error. I
> have optimized with expungeDeletes=true. The index is somewhat larger
> (76GB) than I would have expected.
>
> While we can still use the index and get facets back using the enum
> method on that field, I would still like a way to fix the index if
> possible. Any suggestions?
>
> cheers,
> :-Dennis