I’m not sure about this particular error, but in general, once the JVM has OOM’d, it is completely unreliable and should be restarted. I’m assuming Lucene catches the OOM just so that it doesn’t get in a state where it will corrupt the index.
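The "already closed" behavior can be sketched with the pattern Lucene uses internally: once a fatal ("tragic") error such as an OutOfMemoryError is recorded, the IndexWriter is permanently closed, so later calls fail with AlreadyClosedException even after GC frees the heap. This is a minimal self-contained illustration of that pattern, not Lucene's actual code; the class and method names here are invented for the sketch.

```java
// Sketch of the "tragic event" pattern: after a fatal error is recorded,
// the writer rejects all further use, regardless of how much heap the GC
// later reclaims. Names (TragicWriterDemo, onTragicEvent) are hypothetical.
public class TragicWriterDemo {

    static class Writer {
        private Throwable tragedy; // non-null once a fatal error has occurred

        void addDoc(String doc) {
            ensureOpen();
            // ... indexing work would happen here ...
        }

        // Called when a fatal error (e.g. OutOfMemoryError) is caught;
        // permanently poisons the writer to protect the index from corruption.
        void onTragicEvent(Throwable t) {
            tragedy = t;
        }

        private void ensureOpen() {
            if (tragedy != null) {
                // Analogous to Lucene's AlreadyClosedException
                throw new IllegalStateException("this writer is closed", tragedy);
            }
        }

        boolean isOpen() {
            return tragedy == null;
        }
    }

    public static void main(String[] args) {
        Writer w = new Writer();
        w.addDoc("doc1");                        // works while the writer is healthy
        w.onTragicEvent(new OutOfMemoryError()); // simulated OOM during indexing
        try {
            w.addDoc("doc2"); // fails even though memory is now available
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Freeing heap does not help because the closed state is deliberate and sticky; the only recovery is to restart the JVM (or, at minimum, open a fresh writer), which matches the advice above.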
-Michael

From: Tom Burton-West [mailto:tburt...@umich.edu]
Sent: Friday, January 09, 2015 4:04 PM
To: java-user@lucene.apache.org
Subject: index writer closes due to OOM/heap space issue but no recovery after GC

Hello,

I'm testing Solr 4.10.2 with 4GB allocated to the heap. During the indexing process I get an error message saying the failure is caused by an "already closed IndexWriter" due to an OOM (see below). After this occurs it looks like the GC kicks in and there is plenty of heap space (see attached), but I continue getting this error.

Can someone please explain why, after the GC frees memory, I continue to get the error?

p.s. My documents average about 800KB, and at completion each shard has over 3 billion unique terms.

Tom Burton-West

--------------------------------------------------
org.apache.solr.common.SolrException; org.apache.solr.common.SolrException: Exception writing document id pst.000052087387 to the index; possible analysis error.
        at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:168)
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
        at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:698)
Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.<init>(FreqProxTermsWriterPerField.java:212)