Re: Performance degradation with distributed search

2012-02-06 Thread oleole
Yonik,

Thanks for your reply. Yes, that's the first thing I tried (adding fsv=true
to the query), and it surprised me too. Could it be because we use many
complex sort clauses (20 sorts built with dismax, and, or, ...)? Is there
anything that can be optimized? It looks like the sort values are being
calculated twice in Solr?

XJ



best way to update custom fieldcache after index commit?

2011-06-01 Thread oleole
Hi,

We use the Lucene FieldCache from Solr like this:

static DocTerms myfieldvalues =
    org.apache.lucene.search.FieldCache.DEFAULT.getTerms(reader, myField);

It is initialized on first use and stays in memory for fast retrieval of
field values by docID.
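
Lookups against it are roughly of this form (assuming the trunk
FieldCache.DocTerms API; variable names are just for illustration):

// fetch the cached term for a document by its docID
BytesRef val = myfieldvalues.getTerm(docId, new BytesRef());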

The problem is that after an index update and commit, the Lucene FieldCache
is reloaded for the new searcher, but this static reference needs to be
updated as well. What is the best way to handle this? Basically, we want to
refresh these custom fieldcache entries whenever there is a commit. The
possible solutions I can think of are:

1) Manually call a request handler to clean up the custom caches after each
commit, which is a hack and ugly.
2) Use some listener event (I'm not sure whether I can use the newSearcher
event listener in Solr for this); a rough sketch of what I have in mind is
below. There also seems to be a Lucene ticket
(https://issues.apache.org/jira/browse/LUCENE-2474, "Allow to plug in a Cache
Eviction Listener to IndexReader to eagerly clean custom caches that use the
IndexReader (getFieldCacheKey)"), though it is not clear to me how to use it.
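
For option 2, this is the kind of thing I have in mind (untested sketch; the
package, class name, and the "field" init argument are placeholders, and it
assumes the same trunk FieldCache.getTerms/DocTerms API as above plus a
SolrEventListener interface with init/postCommit/newSearcher):

package com.example; // placeholder package

import java.io.IOException;

import org.apache.lucene.search.FieldCache;
import org.apache.lucene.search.FieldCache.DocTerms;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.core.SolrEventListener;
import org.apache.solr.search.SolrIndexSearcher;

public class MyFieldCacheRefresher implements SolrEventListener {

  // the static reference the rest of our code reads from
  public static volatile DocTerms myfieldvalues;

  private String myField;

  public void init(NamedList args) {
    // field name passed in from solrconfig.xml, e.g. <str name="field">myField</str>
    myField = (String) args.get("field");
  }

  public void newSearcher(SolrIndexSearcher newSearcher,
                          SolrIndexSearcher currentSearcher) {
    try {
      // re-populate the static cache against the new searcher's reader
      myfieldvalues = FieldCache.DEFAULT.getTerms(
          newSearcher.getIndexReader(), myField);
    } catch (IOException e) {
      throw new RuntimeException("failed to refresh custom fieldcache", e);
    }
  }

  public void postCommit() {
    // nothing to do here; the refresh happens when the new searcher opens
  }

  // note: newer Solr versions also require implementing postSoftCommit()
}

It would be registered in solrconfig.xml under a <listener event="newSearcher"
class="com.example.MyFieldCacheRefresher"> element with the field name as an
init arg. Is that a reasonable approach, or is there a better hook?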

Any suggestions/comments are much appreciated. Thanks!

oleole