See my other mail regarding your file descriptor leak.

A short note about your search code:

You should not instantiate a TopScoreDocCollector directly; instead, use the 
Searcher method that returns TopDocs. This has the benefit that the searcher 
automatically chooses the right setting for collecting documents in or out of 
order. In your example, the search would be a little faster with the in-order 
collector (which collects documents faster), but only for MatchAllDocs! Some 
BooleanQueries behave differently.

So simply use: TopDocs td = searcher.search(query, count);
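
For illustration, a minimal sketch of the two variants (assuming the Lucene 
2.9/3.0-era API; searcher, query and count stand in for your own objects):

  import org.apache.lucene.search.TopDocs;
  import org.apache.lucene.search.TopScoreDocCollector;

  // Manual collector creation (what the code does now): the second argument
  // hard-codes whether docs are expected in doc-id order (false = don't assume).
  TopScoreDocCollector collector = TopScoreDocCollector.create(count, false);
  searcher.search(query, collector);
  TopDocs fromCollector = collector.topDocs();

  // Recommended: let the searcher pick the right collector variant itself.
  TopDocs td = searcher.search(query, count);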

-----
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -----Original Message-----
> From: Justin [mailto:cry...@yahoo.com]
> Sent: Friday, March 05, 2010 12:52 AM
> To: java-user@lucene.apache.org
> Subject: File descriptor leak in ParallelReader.reopen()
> 
> Hi Mike and others,
> 
> I have a test case for you (attached) that exhibits a file descriptor
> leak in ParallelReader.reopen().  I listed the OS, JDK, and snapshot of
> Lucene that I'm using in the source code.
> 
> A loop adds just over 4000 documents to an index, reopening the index
> after each, before my system hits an already increased file descriptor
> limit of 8192.  I've also got a thread that reports the number of
> documents in the index and warms a searcher using the reader.  To
> simulate continued use by my application the searchers are not
> discarded.
> 
> Let me know if you need help reproducing the problem or can help
> identify it.
> 
> Thanks!
> Justin
> 
> 
> 


