Hi,

If I replace my Lucene wrapper with a dummy one, the problem goes away. If I close my index thread every 30 minutes and start a new one, it also goes away. If I exit the thread on OutOfMemoryError, all the memory is reclaimed. I don't use static variables; and even if I did, statics wouldn't be garbage collected when the thread exits, so they can't explain the memory coming back.
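The restart workaround boils down to something like this. This is a minimal sketch, not my actual code — the class and method names are made up, and `indexBatch()` stands in for the real Lucene work. The point is just that the worker thread dies after each batch, so anything parked in its ThreadLocals becomes unreachable and collectable:

```java
// Sketch: run each indexing batch in a fresh thread so that when the
// thread exits, any ThreadLocal state it accumulated can be GC'd.
// indexBatch() is a hypothetical stand-in for the real Lucene calls.
public class FreshThreadIndexer {
    static int batchesDone = 0;

    static void indexBatch() {
        // real code would open an IndexWriter, add documents, close it
        batchesDone++;
    }

    static void runBatchInFreshThread() throws InterruptedException {
        // anonymous Runnable rather than a lambda, to stay JDK 1.3-compatible
        Thread worker = new Thread(new Runnable() {
            public void run() { indexBatch(); }
        }, "index-worker");
        worker.start();
        worker.join(); // once the thread dies, its ThreadLocal map is unreachable
    }

    public static void main(String[] args) throws InterruptedException {
        runBatchInFreshThread();
        runBatchInFreshThread();
        System.out.println("batches=" + batchesDone);
    }
}
```

In the real daemon the loop would sleep between batches (e.g. 30 minutes) instead of running back to back; the essential part is only the start/join cycle.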
My guess: Lucene builds up something in ThreadLocals that only gets garbage collected when the thread exits. Maybe it is related to this bug: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4414045

I can't upgrade the JDK without also upgrading the server, which won't happen until it reaches EOL next year :(

Original message: http://mail-archives.apache.org/mod_mbox/lucene-java-user/200507.mbox/[EMAIL PROTECTED]

I consider my problem more or less solved with the workaround of restarting the thread. It is unsatisfactory, but it seems to work.

/Lasse

On 7/13/05, Ian Lea <[EMAIL PROTECTED]> wrote:
> It might be interesting to know whether it still crashes at around 20000
> docs if you run it with a heap size of 512 MB. I guess you've already
> tried the default merge values. You shouldn't need to optimize after
> every 100 docs. JDK 1.3 is pretty ancient - can you use 1.5?
>
> I'd try it with a larger heap size, and then look for leaks in your
> own code. Maybe run the load program without the Lucene calls and see
> if it still fails.
>
>
> --
> Ian.
>
>
> On 13/07/05, Lasse L <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > I can see that this has come up before, but I still hope to get some
> > advice based on my specific environment.
> >
> > I index documents with 26 fields each. The index size for 10000
> > documents is 4 MB, so it shouldn't be an overwhelming amount of data
> > compared to what I have heard Lucene can handle.
> >
> > Nevertheless, when I run my daemon thread at night it consistently
> > crashes with an OutOfMemoryError after about 10000 documents. If I
> > restart the BEA 6.1 / Sun JDK 1.3 server it continues without a hitch,
> > then crashes again after about 10000 more documents.
> >
> > I manually call optimize after every 100 documents.
> >
> > I create my IndexWriter like this:
> >
> >     IndexWriter writer;
> >     writer = new IndexWriter(_indexPath, getAnalyzer(), false);
> >     writer.setUseCompoundFile(true);
> >     writer.mergeFactor = 5;
> >     writer.minMergeDocs = 10;
> >     writer.maxMergeDocs = 500;
> >
> > My heap size is just 256 MB. I can easily double that, but I can't
> > make it 15 times larger.
> >
> > I need to index a total of 150000 documents. Creating a document
> > takes about 1 second, so performance from Lucene is not so critical.
> > Just don't crash on me.
> >
> > I read elsewhere about some GC issues on JDK 1.3. Is that true, and
> > is there some workaround I would need to apply? ThreadLocals? Static
> > variables that can be reset?
> >
> > I haven't yet run a profiler on our app to see whether the way I use
> > our own code somehow creates a leak, but I don't see why it would.
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: [EMAIL PROTECTED]
> > For additional commands, e-mail: [EMAIL PROTECTED]
> >
> >