: Alright, I can now confirm the issue has been resolved by reducing precision. 
: The garbage collector on nodes without reduced precision has a real hard time 
: keeping up, and those nodes show a clearly different graph of heap consumption.
: 
: Consider using MINUTE, HOUR or DAY as precision in case you suffer from 
: excessive memory consumption:
: 
: recip(ms(NOW/<PRECISION>,<DATE_FIELD>),<TIME_FRACTION>,1,1)
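(As a concrete sketch of the suggestion above: with DAY precision, a boost
function might look like the following -- the field name `pub_date` is a
placeholder, and 3.16e-11 is the usual "approximately 1 / milliseconds-per-year"
constant from the Solr date-boosting examples:)

    recip(ms(NOW/DAY,pub_date),3.16e-11,1,1)

Because NOW/DAY only changes once a day, every request during that day produces
the identical function query string.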

FWIW: it sounds like your problem wasn't actually related to your 
fieldCache, but was probably instead because of how big your 
queryResultCache is....

: > > Am i correct when i assume that Lucene FieldCache entries are added for
: > > each unique function query?  In that case, every query is a unique cache

...no, the FieldCache has one entry per field name, and the value of that 
entry is an "array" keyed off of the internal docId of every doc in the 
index, holding the corresponding field value (it's an uninverted version of 
Lucene's inverted index, for doing fast value lookups by document)

changes in the *values* used in your function queries won't affect 
FieldCache usage -- only changing the *fields* used in your functions 
would impact that.
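for example (hypothetical field name), these two boosts compute different 
scores but share the single FieldCache entry for `my_date_field`, because 
only the NOW rounding differs, not the field:

    recip(ms(NOW/HOUR,my_date_field),3.16e-11,1,1)
    recip(ms(NOW/MINUTE,my_date_field),3.16e-11,1,1)

they would, however, be two distinct entries in the queryResultCache.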

: > > each unique function query?  In that case, every query is a unique cache
: > > entry because it operates on milliseconds. If all doesn't work i might be

what you describe is correct, but it happens in the queryResultCache, not 
the FieldCache -- the queryResultCache is where queries that deal with the 
main result set (ie: paginated and/or sorted) wind up.  having lots of 
distinct queries in the "bq" (or "q") param will make the number of unique 
items in that cache grow significantly (just like having lots of distinct 
queries in the "fq" param will cause your filterCache to grow 
significantly)
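as a sketch, the knob in question lives in solrconfig.xml -- the numbers 
below are purely illustrative, not a recommendation for your setup:

    <!-- solrconfig.xml: cap how many result sets get cached -->
    <queryResultCache class="solr.LRUCache"
                      size="512"
                      initialSize="512"
                      autowarmCount="128"/>

once "size" entries are cached, the LRU cache evicts the least recently 
used entry instead of growing without bound.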

you should definitely check what max size you have configured for your 
queryResultCache ... it sounds like it's probably too big, if you were 
getting OOM errors from having high precision dates in your boost queries.  
while I think using less precision is a wise choice, you should still 
consider dialing that max size down, so that if some other usage pattern 
still causes lots of unique queries in a short time period (a bot crawling 
your site map, perhaps) it doesn't fill up and cause another OOM



-Hoss
