>> If you're not actually hitting OutOfMemoryError, then my best guess about
>> what's happening is that you are running right at the edge of the
>> available Java heap memory, so your JVM is constantly running full garbage
>> collections to free up enough memory for normal operation. In this
>> situation, Solr is actually still running, but is spending most of its
>> time paused for garbage collection.
Thank you, Shawn, for taking the time to respond. Unfortunately, this is not the case. My heap is not even going past 50%, and I have a 10 GB heap on an instance that I just installed as a standalone version. I was only trying out these steps:

• Installed a standalone Solr 5.3.2 on my PC
• Indexed some 10 DB records
• Hit core reload / called commit frequently in quick intervals
• Saw "o.a.s.c.SolrCore [db] PERFORMANCE WARNING: Overlapping onDeckSearchers=2"
• Collection crashed
• Only way to recover was to stop Solr, delete the data folder, start Solr, and reindex

In any case, if this were a heap-related issue, I would think a Solr restart should help.

>> If I'm wrong about what's happening, then we'll need a lot more details
>> about your server and your Solr setup.

Nothing really. Just a standalone Solr 5.3.2 on a Windows 7 machine, 64-bit, 8 GB RAM. I bet anybody could reproduce the problem by following my steps above. Thank you all for spending time on this. I shall post back my findings, if they turn out to be useful.

Thank you,
Aswath NS
Mobile +1 424 345 5340
Office +1 310 468 6729

-----Original Message-----
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Monday, March 21, 2016 6:07 PM
To: solr-user@lucene.apache.org
Subject: Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

On 3/21/2016 6:49 PM, Aswath Srinivasan (TMS) wrote:
>>> Thank you for the responses. Collection crashes as in, I'm unable to open
>>> the core tab in the Solr console. Search is not returning. None of the
>>> pages in the Solr admin dashboard open.
>>>
>>> I do understand how and why this issue occurs, and I'm going to do all it
>>> takes to avoid it. However, in the event of accidental frequent hard
>>> commits close to each other, which throw this WARN, I'm just trying to
>>> figure out a way to make my collection return results without having to
>>> delete and re-create the collection or delete the data folder.
>>>
>>> Again, I know how to avoid this issue, but if it still happens, then what
>>> can be done to avoid a complete reindex?

If you're not actually hitting OutOfMemoryError, then my best guess about what's happening is that you are running right at the edge of the available Java heap memory, so your JVM is constantly running full garbage collections to free up enough memory for normal operation. In this situation, Solr is actually still running, but is spending most of its time paused for garbage collection.

https://wiki.apache.org/solr/SolrPerformanceProblems#GC_pause_problems

The first part of the "GC pause problems" section on the above wiki page talks about very large heaps, but there is a paragraph just before "Tools and Garbage Collection" that talks about heaps that are a little bit too small.

If I'm right about this, you're going to need to increase your Java heap size. Exactly how to do this will depend on which version of Solr you're running, how you installed it, and how you start it.

For 5.x versions using the included scripts, you can use the "-m" option on the "bin/solr" command when you start Solr manually, or you can edit the solr.in.sh file (usually found in /etc/default or /var/solr) if you used the service installer script on a UNIX/Linux platform. The default heap size in the 5.x scripts is 512MB, which is VERY small.

For earlier versions, there are too many install/start options available. There were no installation scripts included with Solr itself, so I won't know anything about your setup.
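To make the two approaches above concrete, here is a minimal sketch for Solr 5.x (the 2g figure is purely illustrative; size the heap to your machine and workload):

    # Start Solr manually with a 2 GB heap (use bin\solr.cmd on Windows):
    bin/solr start -m 2g

    # Or set it permanently by uncommenting/editing the memory variable in
    # bin/solr.in.sh (UNIX/Linux) or bin/solr.in.cmd (Windows):
    SOLR_JAVA_MEM="-Xms2g -Xmx2g"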
If I'm wrong about what's happening, then we'll need a lot more details about your server and your Solr setup.

Thanks,
Shawn
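About the warning itself: "Overlapping onDeckSearchers" means commits that open new searchers are arriving faster than earlier searchers can finish warming. A minimal solrconfig.xml sketch along those lines (the time values are illustrative assumptions, not recommendations) keeps frequent hard commits from opening a searcher each time:

    <updateHandler class="solr.DirectUpdateHandler2">
      <!-- Hard commits for durability only; do not open a new searcher -->
      <autoCommit>
        <maxTime>15000</maxTime>
        <openSearcher>false</openSearcher>
      </autoCommit>
      <!-- Soft commits control when new documents become visible -->
      <autoSoftCommit>
        <maxTime>60000</maxTime>
      </autoSoftCommit>
    </updateHandler>

    <query>
      <!-- Cap on concurrent warming searchers; raising it only masks
           commits that arrive too quickly -->
      <maxWarmingSearchers>2</maxWarmingSearchers>
    </query>

With a setup like this, the usual remaining trigger for the WARN is rapid explicit commits, for example repeatedly calling /update?commit=true or reloading the core, as in the reproduction steps above.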