From what I heard, taking search offline is a solution the organization has said it doesn't want. We plan to increase the index reload frequency to several times a day, so taking search offline that often wouldn't be pragmatic.
Here is the current implementation: we use a store that holds the index details. During an index reload, existing searches continue to go through the old store. The reset creates a new store with the updated index details, warms it up by running a search against it, closes the old store's resources, and then swaps in the new store. The new store is then hit by a barrage of user searches.

Search results are restricted by a bit-set filter reflecting the access each user has. I believe the reader invalidates this cached filter after the store reset, so when many searches arrive at once, the filter struggles to rebuild its initial state for the various user types and their access rights. With so many threads in the BLOCKED state, most servers recover without much of a struggle, but each time there are a handful that don't. On these unlucky servers, heap usage jumps from 30% to almost 60-80%, and the request pool grows from 5-10 to almost 150-200. Servers with a request pool below about 80-90 usually recover on their own.

Your suggestions would be appreciated.

Thanks,
Raghavan

On Oct 24, 2012, at 6:10 PM, Vitaly Funstein <vfunst...@gmail.com> wrote:

> Just curious - why not take your search feature offline during the
> reindexing? That would seem sensible from an operational perspective, I
> think.
>
> On Tue, Oct 23, 2012 at 2:03 PM, Raghavan Parthasarathy <
> raghavan8...@gmail.com> wrote:
>
>> Hi,
>>
>> We are using lucene-core and we reindex once a day, and we plan to do it
>> more often per day soon.
>>
>> During reloading of the Lucene indexes, some of the slave servers don't
>> recover and we have to restart them to get them working again.
>>
>> Here is the full stack trace of the BLOCKED search thread:
>>
>> "daemon prio=5 state=BLOCKED
>>     at org.apache.lucene.index.SegmentReader$CoreReaders.getTermsReader(SegmentReader.java:169)
>>     at org.apache.lucene.index.SegmentTermDocs.seek(SegmentTermDocs.java:57)
>>     at org.apache.lucene.search.MultiTermQueryWrapperFilter.getDocIdSet(MultiTermQueryWrapperFilter.java:120)
>>     at org.apache.lucene.search.ChainedFilter.getDocIdSet(ChainedFilter.java:176)
>>     at org.apache.lucene.search.ChainedFilter.getDocIdSet(ChainedFilter.java:104)
>>     at org.apache.lucene.search.ChainedFilter.getDocIdSet(ChainedFilter.java:176)
>>     at org.apache.lucene.search.ChainedFilter.getDocIdSet(ChainedFilter.java:104)
>>     at org.apache.lucene.search.CachingWrapperFilter.getDocIdSet(CachingWrapperFilter.java:203)
>>     at org.apache.lucene.search.ChainedFilter.getDocIdSet(ChainedFilter.java:176)
>>     at org.apache.lucene.search.ChainedFilter.getDocIdSet(ChainedFilter.java:104)
>>     at org.apache.lucene.search.CachingWrapperFilter.getDocIdSet(CachingWrapperFilter.java:203)
>>     at org.apache.lucene.search.IndexSearcher.searchWithFilter(IndexSearcher.java:544)
>>     at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:525)
>>     at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:384)
>>     at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:291)
>>     at com.atypon.publish.search.SearchWorker.search(SearchWorker.java:98)
>>     at com.atypon.publish.search.SearchWorker.search(SearchWorker.java:41)
>>     .....
>>
>> We are using lucene-core version 3.1. Please tell us what we need to do to
>> resolve this issue.
>>
>> Thanks,
>> Raghavan

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org
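[Editor's note] The warm-then-swap reload Raghavan describes could be sketched roughly as below. This is not the actual code from the thread: the `Store` class, its `filterFor` method, and the user-type names are all hypothetical stand-ins. The key idea is that cached per-access-profile filters (in real Lucene 3.x, `CachingWrapperFilter` entries keyed by the `IndexReader`) go cold when the reader changes, so the new store should have its filters warmed for every known user type *before* it is published, rather than letting the first barrage of user queries all block on the same cold cache.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of a warm-then-swap store reload. Names are
// illustrative, not the real com.atypon.publish classes.
public class WarmSwapStore {

    // Stand-in for the real store holding an index reader plus cached
    // per-user-type bit-set access filters.
    static class Store {
        final String indexVersion;
        final Map<String, boolean[]> filterCache = new ConcurrentHashMap<>();

        Store(String indexVersion) { this.indexVersion = indexVersion; }

        // Simulates the expensive first computation of an access filter;
        // later calls for the same user type hit the cache.
        boolean[] filterFor(String userType) {
            return filterCache.computeIfAbsent(userType,
                    t -> new boolean[] { true }); // placeholder bit set
        }

        void close() { /* release reader resources here */ }
    }

    private final AtomicReference<Store> current;

    WarmSwapStore(Store initial) { current = new AtomicReference<>(initial); }

    Store current() { return current.get(); }

    // Reload: build the new store, warm its filter cache for every known
    // user type, and only THEN publish it and close the old store.
    void reload(String newVersion, List<String> knownUserTypes) {
        Store fresh = new Store(newVersion);
        for (String type : knownUserTypes) {
            fresh.filterFor(type); // warm before any user sees this store
        }
        Store old = current.getAndSet(fresh); // atomic publish
        old.close();                          // safe: searches now use fresh
    }
}
```

With this shape, in-flight searches keep using the old store until the atomic swap, and the first post-swap queries find pre-populated filter caches instead of contending on a single cold reader. (From Lucene 3.5 onward, `SearcherManager` with a warming hook offers a supported version of this pattern; on 3.1 it has to be hand-rolled as above.)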