Hello all,
            When I run a set of queries from within Eclipse, everything
works fine, but when I run the same thing on the test production server, the
searchers are leaked. Any hint would be appreciated. I have not used
CoreContainer.

Considering that the stock SearchHandler runs fine, I cannot think of a
reason why my extended version would not work. Does anyone have any idea?
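
For context, my understanding of the searcher life cycle is that every
reference taken from the core has to be released explicitly, otherwise the
old searcher can never be closed once a new one is registered. Below is a
minimal sketch of the pattern I am assuming (RefCounted, SolrCore.getSearcher()
and decref() are from the Solr code base; the method name runAgainstSearcher
is just for illustration):

      // import org.apache.solr.core.SolrCore;
      // import org.apache.solr.search.SolrIndexSearcher;
      // import org.apache.solr.util.RefCounted;
      void runAgainstSearcher(SolrCore core) {
            RefCounted<SolrIndexSearcher> holder = core.getSearcher(); // increments the reference count
            try {
                  SolrIndexSearcher searcher = holder.get();
                  // ... execute the query against searcher ...
            } finally {
                  holder.decref(); // if this is skipped on any path, the searcher is leaked
            }
      }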

On Fri, Jul 27, 2012 at 10:19 AM, Karthick Duraisamy Soundararaj <
karthick.soundara...@gmail.com> wrote:

> I have tons of these open.
> searcherName : Searcher@24be0446 main
> caching : true
> numDocs : 1331167
> maxDoc : 1338549
> reader : SolrIndexReader{this=5585c0de,r=ReadOnlyDirectoryReader@5585c0de
> ,refCnt=1,segments=18}
> readerDir : org.apache.lucene.store.NIOFSDirectory@
> /usr/local/solr/highlander/data/......@2f2d9d89
> indexVersion : 1336499508709
> openedAt : Fri Jul 27 09:45:16 EDT 2012
> registeredAt : Fri Jul 27 09:45:19 EDT 2012
> warmupTime : 0
>
> In my custom handler, I have the following implementation (it is not the
> full code, but it gives an overall idea of what I am doing):
>
>       class CustomHandler extends SearchHandler {
>
>             void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) {
>
>                   SolrCore core = req.getCore();
>                   List<SimpleOrderedMap<Object>> requestParams =
>                         new ArrayList<SimpleOrderedMap<Object>>();
>                   /* parse the params such that requestParams.get(i) holds
>                      the parameters of the i-th sub-request */
>                   ......
>
>                   List<LocalSolrQueryRequest> subQueries =
>                         new ArrayList<LocalSolrQueryRequest>();
>                   try {
>                        for (int i = 0; i < subQueryCount; i++)
>                              subQueries.add(new LocalSolrQueryRequest(core, requestParams.get(i)));
>
>                        for (int i = 0; i < subQueryCount; i++) {
>                              ResponseBuilder rb = new ResponseBuilder();
>                              rb.req = req;
>                              ....
>                              handleRequestBody(req, rsp, rb, comps); // calls SearchHandler's
>                              // modified handleRequestBody below; comps is the component list
>                        }
>                  } finally {
>                           for (int i = 0; i < subQueries.size(); i++)
>                                  subQueries.get(i).close();
>                  }
>             }
>       }
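>
> (Restating the loop above in a per-request form, just to make the intended
> ownership explicit; this is a sketch, not the deployed code, and it assumes
> the LocalSolrQueryRequest(SolrCore, NamedList) constructor:)
>
>       for (int i = 0; i < subQueryCount; i++) {
>             LocalSolrQueryRequest subReq =
>                   new LocalSolrQueryRequest(core, requestParams.get(i));
>             try {
>                   ResponseBuilder rb = new ResponseBuilder();
>                   rb.req = subReq;
>                   ....
>                   handleRequestBody(subReq, rsp, rb, comps);
>             } finally {
>                   subReq.close(); // release this sub-request before the next iteration
>             }
>       }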
>
> *Search Handler Changes*
>       class SearchHandler {
>             void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp,
>                                    ResponseBuilder rb, List<SearchComponent> comps) {
>                   // no longer creates its own ResponseBuilder; it uses the one passed in
>                   ......................
>             }
>
>             void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) {
>                   ResponseBuilder rb = new ResponseBuilder();
>                   rb.req = req;
>                   rb.rsp = rsp;
>                   handleRequestBody(req, rsp, rb, comps);
>             }
>       }
>
>
> I don't see the old index searcher getting closed after the new one has
> warmed up. Because I replicate every 5 minutes, it crashes within about 2 hours.
>
>  On Fri, Jul 27, 2012 at 3:36 AM, roz dev <rozde...@gmail.com> wrote:
>
>> In my case, I see only 1 searcher and no field cache, yet Old Gen is
>> almost full at 22 GB.
>>
>> Does it have to do with the index or some other configuration?
>>
>> -Saroj
>>
>> On Thu, Jul 26, 2012 at 7:41 PM, Lance Norskog <goks...@gmail.com> wrote:
>>
>> > What does the "Statistics" page in the Solr admin say? There might be
>> > several "searchers" open: org.apache.solr.search.SolrIndexSearcher
>> >
>> > Each searcher holds open different generations of the index. If
>> > obsolete index files are held open, it may be old searchers. How big
>> > are the caches? How long does it take to autowarm them?
>> >
>> > On Thu, Jul 26, 2012 at 6:15 PM, Karthick Duraisamy Soundararaj
>> > <karthick.soundara...@gmail.com> wrote:
>> > > Mark,
>> > >         We use Solr 3.6.0 on FreeBSD 9. Over a period of time, it
>> > > accumulates a lot of disk space!
>> > >
>> > > On Thu, Jul 26, 2012 at 8:47 PM, roz dev <rozde...@gmail.com> wrote:
>> > >
>> > >> Thanks Mark.
>> > >>
>> > >> We are never calling commit or optimize with openSearcher=false.
>> > >>
>> > >> As per logs, this is what is happening:
>> > >>
>> > >> openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false}
>> > >>
>> > >> --
>> > >> But, We are going to use 4.0 Alpha and see if that helps.
>> > >>
>> > >> -Saroj
>> > >>
>> > >> On Thu, Jul 26, 2012 at 5:12 PM, Mark Miller <markrmil...@gmail.com>
>> > >> wrote:
>> > >>
>> > >> > I'd take a look at this issue:
>> > >> > https://issues.apache.org/jira/browse/SOLR-3392
>> > >> >
>> > >> > Fixed late April.
>> > >> >
>> > >> > On Jul 26, 2012, at 7:41 PM, roz dev <rozde...@gmail.com> wrote:
>> > >> >
>> > >> > > it was from 4/11/12
>> > >> > >
>> > >> > > -Saroj
>> > >> > >
>> > >> > > On Thu, Jul 26, 2012 at 4:21 PM, Mark Miller <
>> markrmil...@gmail.com
>> > >
>> > >> > wrote:
>> > >> > >
>> > >> > >>
>> > >> > >> On Jul 26, 2012, at 3:18 PM, roz dev <rozde...@gmail.com>
>> wrote:
>> > >> > >>
>> > >> > >>> Hi Guys
>> > >> > >>>
>> > >> > >>> I am also seeing this problem.
>> > >> > >>>
>> > >> > >>> I am using SOLR 4 from Trunk and seeing this issue repeat every
>> > day.
>> > >> > >>>
>> > >> > >>> Any inputs about how to resolve this would be great
>> > >> > >>>
>> > >> > >>> -Saroj
>> > >> > >>
>> > >> > >>
>> > >> > >> Trunk from what date?
>> > >> > >>
>> > >> > >> - Mark
>> > >> > >>
>> > >> > >>
>> > >> > >>
>> > >> > >>
>> > >> > >>
>> > >> > >>
>> > >> > >>
>> > >> > >>
>> > >> > >>
>> > >> > >>
>> > >> >
>> > >> > - Mark Miller
>> > >> > lucidimagination.com
>> > >>
>> >
>> >
>> >
>> > --
>> > Lance Norskog
>> > goks...@gmail.com
>> >
>>
>
