You also might try -Xbatch or -Xcomp to see if that fixes it (or reproduces it faster).
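For example, something like this (the jar name, classpath, and MyIndexer main class are just placeholders for your own setup):

    java -Xbatch -cp lucene-1.4.3.jar:. MyIndexer    (compile in the foreground instead of a background thread)
    java -Xcomp  -cp lucene-1.4.3.jar:. MyIndexer    (force compilation of every method on first invocation)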
Here's a great list of JVM options:
http://blogs.sun.com/roller/resources/watt/jvm-options-list.html

-Yonik

On 12/11/05, Yonik Seeley <[EMAIL PROTECTED]> wrote:
> Sounds like it's a hotspot bug.
> AFAIK, hotspot doesn't just compile a method once... it can do
> optimization over time.
>
> To work around it, have you tried the previous version, 1.5.0_05?
> It's possible it's a fairly new bug. We've been running with that
> version and Lucene 1.4.3 without problems (on Opteron, RHEL4).
>
> You could also try the latest Lucene 1.9 to see if that changes enough
> to avoid the bug.
>
> -Yonik
>
> On 12/11/05, Dan Gould <[EMAIL PROTECTED]> wrote:
> > First, thank you Chris, Yonik, and Dan for your ideas as to what might be
> > causing this problem.
> >
> > I tried moving things around so that the IndexReader is still open when it
> > calls TermFreqVector.getTerms()/TermFreqVector.getTermFrequencies(). It
> > didn't seem to make any difference.
> >
> > I also tried running Java with the flags:
> >     -Xmx2048m -XX:MaxPermSize=200m
> > (the box has 4GB of RAM) and it still crashes. It's hard to tell, but the
> > program does seem to run for a lot longer (maybe 10 hours), but that could
> > just be randomness in my tests.
> >
> > The JVM always seems to crash with
> >
> >     Current CompileTask:
> >     opto:1836
> >     org.apache.lucene.index.IndexReader$1.doBody()Ljava/lang/Object;  (99 bytes)
> >
> > which in the Lucene source is:
> >
> >   private static IndexReader open(final Directory directory,
> >                                   final boolean closeDirectory) throws IOException {
> >     synchronized (directory) {          // in- & inter-process sync
> >       return (IndexReader) new Lock.With(
> >           directory.makeLock(IndexWriter.COMMIT_LOCK_NAME),
> >           IndexWriter.COMMIT_LOCK_TIMEOUT) {
> >         public Object doBody() throws IOException {
> >           SegmentInfos infos = new SegmentInfos();
> >           infos.read(directory);
> >           if (infos.size() == 1) {      // index is optimized
> >             return SegmentReader.get(infos, infos.info(0), closeDirectory);
> >           }
> >           IndexReader[] readers = new IndexReader[infos.size()];
> >           for (int i = 0; i < infos.size(); i++)
> >             readers[i] = SegmentReader.get(infos.info(i));
> >           return new MultiReader(directory, infos, closeDirectory, readers);
> >         }
> >       }.run();
> >     }
> >   }
> >
> > That's definitely a non-trivial bit of code, but I can't imagine that
> > there's a problem that I'm seeing that no one else is. Moreover, that
> > code gets run hundreds or even thousands of times before it crashes, so I
> > don't imagine it's being HotSpot-compiled for the first time.
> >
> > I'm running the 1.4.3 release and the 1.5.0_06-b05 JVM on CentOS Linux on
> > an Opteron.
> >
> > Any further guesses?
> >
> > Thank you all very much,
> > Dan
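P.S. For reference, here's roughly the term-vector access pattern Dan describes (the index path, field name "contents", doc id 0, and class name are placeholders, not his actual code); the key point is keeping the reader open until getTerms()/getTermFrequencies() have returned:

    import java.io.IOException;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.TermFreqVector;

    public class TermVectorDump {
      public static void main(String[] args) throws IOException {
        IndexReader reader = IndexReader.open("/path/to/index");  // placeholder path
        try {
          // "contents" and doc id 0 stand in for a real field and document
          TermFreqVector tfv = reader.getTermFreqVector(0, "contents");
          if (tfv != null) {                // null if no term vector was stored for that field
            String[] terms = tfv.getTerms();
            int[] freqs = tfv.getTermFrequencies();
            for (int i = 0; i < terms.length; i++)
              System.out.println(terms[i] + " " + freqs[i]);
          }
        } finally {
          reader.close();                   // close only after the vectors have been read
        }
      }
    }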