@topcat: you need to call the close() method on Solr requests after using them.
In general,
// SolrQueryRequest is an interface, so it cannot be instantiated directly;
// LocalSolrQueryRequest is one concrete implementation.
SolrQueryRequest request = new LocalSolrQueryRequest(core, params);
try {
    // ... use the request ...
} finally {
    request.close();
}
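On Java 7 and later the same close-after-use pattern can be written with try-with-resources, provided the resource implements AutoCloseable. A minimal self-contained sketch of the pattern; FakeRequest is a stand-in for illustration only, not Solr's actual request class:

```java
public class CloseDemo {
    // Stand-in resource for illustration; NOT Solr's SolrQueryRequest.
    static class FakeRequest implements AutoCloseable {
        void search() { System.out.println("searching"); }
        @Override public void close() { System.out.println("closed"); }
    }

    public static void main(String[] args) {
        // The resource is closed automatically, even if search() throws.
        try (FakeRequest request = new FakeRequest()) {
            request.search();
        }
        System.out.println("done");
    }
}
```

The try-with-resources form avoids the classic bug this thread is about: forgetting the finally block and leaking the request.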
Dear erolagnab,
is this your code in the Solr server?
Which class can I put it in?
--
View this message in context:
http://lucene.472066.n3.nabble.com/fieldCache-problem-OOM-exception-tp3067057p3517780.html
Sent from the Solr - User mailing list archive at Nabble.com.
Sorry to pull this up again, but I've faced a similar issue and would like to
share the solution.
In my situation, I use SolrQueryRequest, SolrCore, and SolrQueryResponse to
explicitly perform the search.
The gotcha in my code was that I didn't call SolrQueryRequest.close(), hence
the increasing memory usage.
Bernd, in our case, optimizing the index seems to flush the FieldCache for
some reason. On the other hand, doing a few commits without optimizing seems
to make the problem worse.
Hope that helps, we would like to give it a try and debug this in Lucene,
but are pressed for time right now. Perhaps l
The current status of my installation is that with some tweaking of
the JVM I get a runtime of about 2 weeks until OldGen (14GB) is filled
to 100 percent and won't free anything, even with a full GC.
The share of the FieldCache in a heap dump at that time is over 80 percent
of the whole heap (20GB). And that
Hello Erick,
I have 1.7MM documents in a 3.6GB index. I also have an unusual number of
dynamic fields that I use for sorting. My FieldCache currently has about
13,000 entries, even though my index only gets 1-3 queries per second. Each
query sorts by two dynamic fields, and facets on 3-4 fields that
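The combination of thousands of dynamic sort fields and a large index explains the blow-up: each FieldCache StringIndex entry holds, among other things, one int ord per document. A back-of-envelope sketch in Java using the figures from this thread; purely illustrative, since real entries are per-segment and most are far smaller, so the true footprint is lower than this upper bound:

```java
public class FieldCacheEstimate {
    public static void main(String[] args) {
        long maxDoc = 1_700_000L;   // documents in the index (from the thread)
        long bytesPerOrd = 4;       // one int ord per document in a StringIndex entry
        long perEntry = maxDoc * bytesPerOrd;
        System.out.println("perEntryBytes=" + perEntry);

        // Upper bound if every one of the ~13,000 cache entries were a
        // full-size StringIndex over the whole index. Real entries are
        // per-segment and mostly smaller, but the order of magnitude
        // shows why a 20GB heap can still fill up.
        long entries = 13_000;
        System.out.println("allEntriesBytes=" + (perEntry * entries));
    }
}
```

Even at a few megabytes per entry, a cache entry for every dynamic sort field adds up to many gigabytes, which matches the OldGen growth described above.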
Hi Erik,
as far as I can see with MemoryAnalyzer from the heap:
- the FieldCache class has a HashMap
- one entry of the HashMap is FieldCacheImpl$StringIndex, which is "mister big"
- FieldCacheImpl$StringIndex is held in a WeakHashMap
- the WeakHashMap has three entries
-- 63.58 percent of the heap
-- 8.14 percent
Sorry, it was late last night when I typed that...
Basically, if you sort and facet on #all# the fields you mentioned, it
should populate
the cache in one go. If the problem is that you just have too many unique terms
for all those operations, then it should go bOOM.
But, frankly, that's unlikely
Hi Erik,
I will take some memory snapshots during the next week,
but how can it be that a single query causes OOMs?
- I started with 6g for JVM --> 1 day until OOM.
- increased to 8 g --> 2 days until OOM
- increased to 10g --> 3.5 days until OOM
- increased to 16g --> 5 days until OOM
- currently 20g
Well, if my theory is right, you should be able to generate OOMs at will by
sorting and faceting on all your fields in one query.
But Lucene's cache should be garbage collected, can you take some memory
snapshots during the week? It should hit a point and stay steady there.
How much memory are you
Hi Erik,
yes I'm sorting and faceting.
1) Fields for sorting:
sort=f_dccreator_sort, sort=f_dctitle, sort=f_dcyear
The "facet.sort=" parameter is empty; I am only using the "sort=" parameter.
2) Fields for faceting:
f_dcperson, f_dcsubject, f_dcyear, f_dccollection, f_dclang, f_dctypenorm,
f_d
The first question I have is whether you're sorting and/or
faceting on many unique string values? I'm guessing
that sometimes you are. So, some questions to help
pin it down:
1> what fields are you sorting on?
2> what fields are you faceting on?
3> how many unique terms are in each? (see the Solr admin page)