Out of curiosity, I downloaded 2.4.1 and made the necessary source code 
modifications (attached).  There does appear to have been some sort of file 
descriptor cleanup in that version.  With the explicit close, the descriptor 
count stayed under 100.  Without the explicit close, the count peaked around 
3000 by the time 50,000 documents had been added (still under our increased 
limit).  Hopefully omitting the explicit close didn't cause us problems beyond 
the leaked file descriptors.
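
For reference, the close we re-added is essentially the block that is 
commented out in the attached test case.  A rough sketch of the pattern 
(parallelReader and lock as in the attachment):

    synchronized (lock) {
      // reopen() returns a new reader when the index has changed; the
      // old reader must then be closed explicitly, or its open files
      // are leaked.
      ParallelReader newReader = (ParallelReader) parallelReader.reopen();
      if (newReader != parallelReader) {
        parallelReader.close(); // releases the old reader's descriptors
        parallelReader = newReader;
      }
    }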




----- Original Message ----
From: Justin <cry...@yahoo.com>
To: java-user@lucene.apache.org
Sent: Thu, March 4, 2010 6:29:25 PM
Subject: Re: File descriptor leak in ParallelReader.reopen()

We must have been getting lucky.  Thanks, Mark and Uwe!




----- Original Message ----
From: Uwe Schindler <u...@thetaphi.de>
To: java-user@lucene.apache.org
Sent: Thu, March 4, 2010 6:20:56 PM
Subject: RE: File descriptor leak in ParallelReader.reopen()

That was always the case with reopen().  It's documented in the javadocs, with 
a short example:
http://lucene.apache.org/java/3_0_1/api/all/org/apache/lucene/index/IndexReader.html#reopen()

also in 2.4.1:
http://lucene.apache.org/java/2_4_1/api/org/apache/lucene/index/IndexReader.html#reopen()
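
Paraphrasing the example from those javadocs, the intended pattern is:

    IndexReader reader = IndexReader.open(directory);
    // ...
    IndexReader newReader = reader.reopen();
    if (newReader != reader) {
      // The index changed and the reader was actually reopened; the
      // caller is responsible for closing the old instance.
      reader.close();
    }
    reader = newReader;

If nothing changed, reopen() returns the same instance, which must not be 
closed.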

Uwe

-----
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -----Original Message-----
> From: Justin [mailto:cry...@yahoo.com]
> Sent: Friday, March 05, 2010 1:17 AM
> To: java-user@lucene.apache.org
> Subject: Re: File descriptor leak in ParallelReader.reopen()
> 
> Has this changed since 2.4.1?  Our application didn't explicitly close
> with 2.4.1 and that combination never had this problem.
> 
> 
> 
> ----- Original Message ----
> From: Mark Miller <markrmil...@gmail.com>
> To: java-user@lucene.apache.org
> Sent: Thu, March 4, 2010 6:00:02 PM
> Subject: Re: File descriptor leak in ParallelReader.reopen()
> 
> On 03/04/2010 06:52 PM, Justin wrote:
> > Hi Mike and others,
> >
> > I have a test case for you (attached) that exhibits a file descriptor
> leak in ParallelReader.reopen().  I listed the OS, JDK, and snapshot of
> Lucene that I'm using in the source code.
> >
> > A loop adds just over 4000 documents to an index, reopening the index
> after each, before my system hits an already increased file descriptor
> limit of 8192.  I've also got a thread that reports the number of
> documents in the index and warms a searcher using the reader.  To
> simulate continued use by my application the searchers are not
> discarded.
> >
> > Let me know if you need help reproducing the problem or can help
> identify it.
> >
> > Thanks!
> > Justin
> >
> Doesn't look like you are closing your old reader - reopen() will return
> a new one when there are changes to the index, and the old one must be
> closed.
> 
> --
> - Mark
> 
> http://www.lucidimagination.com
> 



import java.io.File;
import java.io.IOException;
import java.io.Reader;
import java.util.LinkedList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.snowball.SnowballFilter;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.LogDocMergePolicy;
import org.apache.lucene.index.ParallelReader;
import org.apache.lucene.index.SerialMergeScheduler;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TopDocCollector;
import org.apache.lucene.store.AlreadyClosedException;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.LockObtainFailedException;

/**
 * OS: CentOS 5.3
 * JDK/JRE: Sun JavaSE: 1.6.0_13
 *
 * # echo "* - nofile 8192" >>/etc/security/limits.conf
 * # ulimit -n 8192
 *
 * java -cp .:lucene-core-3.1-r917204.jar:lucene-analyzers-3.1-r917204.jar TestCase
 *
 * java.io.IOException: directory '/root/tmp1' exists and is a directory, but
 * cannot be listed: list() returned null
 *   at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:234)
 *   at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:245)
 *   at org.apache.lucene.index.IndexFileDeleter.refresh(IndexFileDeleter.java:298)
 *   at org.apache.lucene.index.IndexWriter.doFlushInternal(IndexWriter.java:3616)
 *   at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3519)
 *   at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3510)
 *   at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:3386)
 *   at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3461)
 *   at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3432)
 *   at TestCase.<init>(TestCase.java:135)
 *   at TestCase.main(TestCase.java:176)
 * java.io.IOException: directory '/root/tmp1' exists and is a directory, but
 * cannot be listed: list() returned null
 *   at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:234)
 *   at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:245)
 *   at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:571)
 *   at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:524)
 *   at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:307)
 *   at org.apache.lucene.index.SegmentInfos.readCurrentVersion(SegmentInfos.java:411)
 *   at org.apache.lucene.index.DirectoryReader.isCurrent(DirectoryReader.java:862)
 *   at org.apache.lucene.index.DirectoryReader.doReopenNoWriter(DirectoryReader.java:427)
 *   at org.apache.lucene.index.DirectoryReader.doReopen(DirectoryReader.java:406)
 *   at org.apache.lucene.index.DirectoryReader.reopen(DirectoryReader.java:366)
 *   at org.apache.lucene.index.ParallelReader.doReopen(ParallelReader.java:187)
 *   at org.apache.lucene.index.ParallelReader.reopen(ParallelReader.java:170)
 *   at TestCase$2.run(TestCase.java:108)
 *   at java.lang.Thread.run(Thread.java:619)
 */
class TestCase
{
  public static final int MAX_FIELD_LENGTH = 500000;
  public static final int RAM_BUFFER_SIZE = 48;
  public static Analyzer analyzer = new Analyzer() {
    @Override
    public TokenStream tokenStream(String fieldName, Reader reader) {
      TokenStream result = new StandardTokenizer(reader);
      result = new StandardFilter(result);
      result = new LowerCaseFilter(result);
      result = new StopFilter(result, StopAnalyzer.ENGLISH_STOP_WORDS);
      result = new SnowballFilter(result, "English");
      return result;
    }
  };

  private boolean warmSearcher = true;
  private List<IndexSearcher> searcherList = new LinkedList<IndexSearcher>();
  private ParallelReader parallelReader = new ParallelReader();
  private Object lock = new Object();
  private File dir1 = new File("tmp1/");
  private File dir2 = new File("tmp2/");

  public TestCase() throws IOException, InterruptedException,
      CorruptIndexException, LockObtainFailedException {
    System.out.println("Opening readers and writers...");
    FSDirectory luceneDir1 = FSDirectory.getDirectory(dir1);
    FSDirectory luceneDir2 = FSDirectory.getDirectory(dir2);
    IndexWriter writer1 = createIndex(luceneDir1);
    IndexWriter writer2 = createIndex(luceneDir2);
    parallelReader.add(IndexReader.open(luceneDir1));
    parallelReader.add(IndexReader.open(luceneDir2));

    System.out.println("Starting searcher-warming thread...");
    new Thread(new Runnable() {
      public void run() {
        while (warmSearcher) {
          try {
            int numDocs;
            synchronized (lock) { // don't reopen while adding documents
              ParallelReader newReader = (ParallelReader) parallelReader.reopen();
              // The explicit close of the old reader is deliberately
              // commented out; this is what leaks file descriptors.
              //if (newReader != parallelReader) {
              //  parallelReader.close();
              //}
              parallelReader = newReader;
              numDocs = parallelReader.numDocs();
            }
            System.out.println("Opening searcher for "+numDocs+" docs...");
            IndexSearcher searcher = new IndexSearcher(parallelReader);
            if (numDocs > 0) {
              TopDocCollector collector = new TopDocCollector(numDocs);
              searcher.search(new MatchAllDocsQuery(), collector);
            }
            searcherList.add(searcher);
            Thread.sleep(2000);
          }
          catch (AlreadyClosedException ace) { warmSearcher = false; }
          catch (Exception e) { e.printStackTrace(); System.exit(1); }
        }
      }
    }).start();

    System.out.println("Reading and writing index...");
    for (int i = 0; i < 100000; i++) {
      Document document1 = new Document();
      Document document2 = new Document();
      document1.add(new Field("id", Integer.toString(i), Field.Store.YES,
          Field.Index.NOT_ANALYZED, Field.TermVector.NO));
      document2.add(new Field("id", Integer.toString(i), Field.Store.YES,
          Field.Index.NOT_ANALYZED, Field.TermVector.NO));
      synchronized (lock) {
        writer1.addDocument(document1);
        writer2.addDocument(document2);
        writer1.commit();
        writer2.commit();
        ParallelReader newReader = (ParallelReader) parallelReader.reopen();
        // The explicit close is again deliberately commented out to
        // reproduce the leak.
        //if (newReader != parallelReader) {
        //  parallelReader.close();
        //}
        parallelReader = newReader;
      }
    }
    warmSearcher = false;

    System.out.println("Reading and writing to index complete!  Waiting 20s 
before cleanup...");
    Thread.sleep(20000);

    parallelReader.close();
    writer1.close();
    writer2.close();
    deleteDirectory(dir1);
    deleteDirectory(dir2);
  }

  public static IndexWriter createIndex(FSDirectory luceneDir) throws
      IOException, CorruptIndexException, LockObtainFailedException {
    IndexWriter indexWriter;
    indexWriter = new IndexWriter(luceneDir, analyzer, true, 
      new IndexWriter.MaxFieldLength(MAX_FIELD_LENGTH));
    indexWriter.setMergePolicy(new LogDocMergePolicy());
    indexWriter.setMergeScheduler(new SerialMergeScheduler());
    indexWriter.setRAMBufferSizeMB(RAM_BUFFER_SIZE);
    return indexWriter;
  }

  public static boolean deleteDirectory(File path) {
    if (path.exists()) {
      File[] files = path.listFiles();
      if (files != null) { // listFiles() returns null if path is not a directory
        for (int i = 0; i < files.length; i++) {
          if (files[i].isDirectory()) {
            deleteDirectory(files[i]);
          } else {
            files[i].delete();
          }
        }
      }
    }
    return path.delete();
  }

  public static void main(String[] args) {
    try {
      new TestCase();
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}