RE: PERFORMANCE WARNING: Overlapping onDeckSearchers=2
>> Since you've already reproduced it on a small scale, we'll need your
>> entire Solr logfile. The mailing list eats attachments, so you'll need
>> to place it somewhere and provide a URL. Sites like gist and dropbox
>> are excellent for sharing large text content.

Sure, I will try and send it across. However, I don't see anything in the logs, and I have FINE level logging enabled.

>> Do you literally mean 10 records (a number I can count on my fingers)?
>> How much data is in each of those DB records? Which configset did you
>> use when creating the index?

Yes. Crazy, right? Actually, the select query I gave will yield only 10 records. The total record count in the table is 200,000; I restricted the query to reproduce the issue on a small scale. This issue started appearing in my QA environment, where at one point we happened to get accidental, closely spaced hard commits from two batch jobs. There is no autocommit set in solrconfig; only the batch jobs send a commit. I was never able to recover the collection, so I had to delete the data and reindex to fix it. Hence I decided to reproduce the issue on a very small scale and try to fix it properly, because deleting the data and reindexing cannot be a fix.

The DB records are just normal varchars, some 7 columns. I don't think the data is the problem. I cloned 'solr-5.3.2\example\example-DIH\solr\db', added some additional fields, and removed unused default fields.

>> You mentioned a 10GB heap and then said the machine has 8GB of RAM. Is
>> this correct? If so, this would be a source of serious performance
>> issues.

OOPS. It's a 1 GB heap; that was a typo. The consumed heap is around 300-400 MB.

Thank you,
Aswath NS

-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Tuesday, March 22, 2016 10:41 AM
To: solr-user@lucene.apache.org
Subject: Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

On 3/22/2016 11:32 AM, Aswath Srinivasan (TMS) wrote:
> Thank you Shawn for taking time and responding.
>
> Unfortunately, this is not the case. My heap is not even going past
> 50%, and I have a heap of 10 GB on an instance that I just installed as
> a standalone version. I was only trying out these steps:
>
> • Install a standalone Solr 5.3.2 on my PC
> • Index some 10 DB records
> • Hit core reload / call commit frequently in quick intervals
> • See the o.a.s.c.SolrCore [db] PERFORMANCE WARNING: Overlapping
>   onDeckSearchers=2
> • Collection crashes
> • Only way to recover is to stop Solr, delete the data folder, start
>   Solr, and reindex
>
> In any case, if this were a heap-related issue, a Solr restart should
> help, is what I think.

That shouldn't happen.

Since you've already reproduced it on a small scale, we'll need your entire Solr logfile. The mailing list eats attachments, so you'll need to place it somewhere and provide a URL. Sites like gist and dropbox are excellent for sharing large text content.

More questions:

Do you literally mean 10 records (a number I can count on my fingers)? How much data is in each of those DB records? Which configset did you use when creating the index?

You mentioned a 10GB heap and then said the machine has 8GB of RAM. Is this correct? If so, this would be a source of serious performance issues.

Thanks,
Shawn
Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2
On 3/22/2016 11:32 AM, Aswath Srinivasan (TMS) wrote:
> Thank you Shawn for taking time and responding.
>
> Unfortunately, this is not the case. My heap is not even going past
> 50%, and I have a heap of 10 GB on an instance that I just installed as
> a standalone version. I was only trying out these steps:
>
> • Install a standalone Solr 5.3.2 on my PC
> • Index some 10 DB records
> • Hit core reload / call commit frequently in quick intervals
> • See the o.a.s.c.SolrCore [db] PERFORMANCE WARNING: Overlapping
>   onDeckSearchers=2
> • Collection crashes
> • Only way to recover is to stop Solr, delete the data folder, start
>   Solr, and reindex
>
> In any case, if this were a heap-related issue, a Solr restart should
> help, is what I think.

That shouldn't happen.

Since you've already reproduced it on a small scale, we'll need your entire Solr logfile. The mailing list eats attachments, so you'll need to place it somewhere and provide a URL. Sites like gist and dropbox are excellent for sharing large text content.

More questions:

Do you literally mean 10 records (a number I can count on my fingers)? How much data is in each of those DB records? Which configset did you use when creating the index?

You mentioned a 10GB heap and then said the machine has 8GB of RAM. Is this correct? If so, this would be a source of serious performance issues.

Thanks,
Shawn
RE: PERFORMANCE WARNING: Overlapping onDeckSearchers=2
>>> If you're not actually hitting OutOfMemoryError, then my best guess
>> about what's happening is that you are running right at the edge of
>> the available Java heap memory, so your JVM is constantly running full
>> garbage collections to free up enough memory for normal operation. In
>> this situation, Solr is actually still running, but is spending most
>> of its time paused for garbage collection.

Thank you Shawn for taking time and responding. Unfortunately, this is not the case. My heap is not even going past 50%, and I have a heap of 10 GB on an instance that I just installed as a standalone version. I was only trying out these steps:

• Install a standalone Solr 5.3.2 on my PC
• Index some 10 DB records
• Hit core reload / call commit frequently in quick intervals
• See the o.a.s.c.SolrCore [db] PERFORMANCE WARNING: Overlapping onDeckSearchers=2
• Collection crashes
• Only way to recover is to stop Solr, delete the data folder, start Solr, and reindex

In any case, if this were a heap-related issue, a Solr restart should help, is what I think.

>> If I'm wrong about what's happening, then we'll need a lot more
>> details about your server and your Solr setup.

Nothing special, really. Just a standalone Solr 5.3.2 on a Windows 7 machine, 64-bit, 8 GB RAM. I bet anybody could reproduce the problem by following my steps above. Thank you all for spending time on this. I shall post back my findings, if they turn out to be useful.

Thank you,
Aswath NS
Mobile +1 424 345 5340
Office +1 310 468 6729

-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Monday, March 21, 2016 6:07 PM
To: solr-user@lucene.apache.org
Subject: Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

On 3/21/2016 6:49 PM, Aswath Srinivasan (TMS) wrote:
> Thank you for the responses. Collection crashes as in, I'm unable to
> open the core tab in the Solr console. Search is not returning. None of
> the pages open in the Solr admin dashboard.
>
> I do understand how and why this issue occurs, and I'm going to do all
> it takes to avoid it. However, in the event of accidental hard commits
> close to each other that throw this WARN, I'm just trying to figure out
> a way to make my collection return results without having to delete and
> re-create the collection or delete the data folder.
>
> Again, I know how to avoid this issue, but if it still happens, what
> can be done to avoid a complete reindex?

If you're not actually hitting OutOfMemoryError, then my best guess about what's happening is that you are running right at the edge of the available Java heap memory, so your JVM is constantly running full garbage collections to free up enough memory for normal operation. In this situation, Solr is actually still running, but is spending most of its time paused for garbage collection.

https://wiki.apache.org/solr/SolrPerformanceProblems#GC_pause_problems

The first part of the "GC pause problems" section on the above wiki page talks about very large heaps, but there is a paragraph just before "Tools and Garbage Collection" that talks about heaps that are a little bit too small.

If I'm right about this, you're going to need to increase your Java heap size. Exactly how to do this will depend on what version of Solr you're running, how you installed it, and how you start it. For 5.x versions using the included scripts, you can use the "-m" option on the "bin/solr" command when you start Solr manually, or you can edit the solr.in.sh file (usually found in /etc/default or /var/solr) if you used the service installer script on a UNIX/Linux platform. The default heap size in 5.x scripts is 512MB, which is VERY small.

For earlier versions, there are too many install/start options available. There were no installation scripts included with Solr itself, so I won't know anything about the setup.

If I'm wrong about what's happening, then we'll need a lot more details about your server and your Solr setup.

Thanks,
Shawn
Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2
On 3/21/2016 6:49 PM, Aswath Srinivasan (TMS) wrote:
> Thank you for the responses. Collection crashes as in, I'm unable to
> open the core tab in the Solr console. Search is not returning. None of
> the pages open in the Solr admin dashboard.
>
> I do understand how and why this issue occurs, and I'm going to do all
> it takes to avoid it. However, in the event of accidental hard commits
> close to each other that throw this WARN, I'm just trying to figure out
> a way to make my collection return results without having to delete and
> re-create the collection or delete the data folder.
>
> Again, I know how to avoid this issue, but if it still happens, what
> can be done to avoid a complete reindex?

If you're not actually hitting OutOfMemoryError, then my best guess about what's happening is that you are running right at the edge of the available Java heap memory, so your JVM is constantly running full garbage collections to free up enough memory for normal operation. In this situation, Solr is actually still running, but is spending most of its time paused for garbage collection.

https://wiki.apache.org/solr/SolrPerformanceProblems#GC_pause_problems

The first part of the "GC pause problems" section on the above wiki page talks about very large heaps, but there is a paragraph just before "Tools and Garbage Collection" that talks about heaps that are a little bit too small.

If I'm right about this, you're going to need to increase your Java heap size. Exactly how to do this will depend on what version of Solr you're running, how you installed it, and how you start it. For 5.x versions using the included scripts, you can use the "-m" option on the "bin/solr" command when you start Solr manually, or you can edit the solr.in.sh file (usually found in /etc/default or /var/solr) if you used the service installer script on a UNIX/Linux platform. The default heap size in 5.x scripts is 512MB, which is VERY small.

For earlier versions, there are too many install/start options available. There were no installation scripts included with Solr itself, so I won't know anything about the setup.

If I'm wrong about what's happening, then we'll need a lot more details about your server and your Solr setup.

Thanks,
Shawn
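The heap bump Shawn describes for 5.x can be sketched as below. This is an illustration only: the "1g" value, the solr.in.sh locations, and the assumption that you used the included scripts all come from his description, not from the original poster's setup.

```shell
# Sketch: raise the Solr 5.x heap from the 512MB script default.

# Option 1: one-off, when starting Solr manually with the bin/solr script:
bin/solr start -m 1g

# Option 2: persistent, for service installs -- edit solr.in.sh
# (commonly under /etc/default or /var/solr) and set the heap variable:
SOLR_HEAP="1g"
```

On Windows (as in this thread), the equivalent per-start form would be `bin\solr.cmd start -m 1g`.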
RE: PERFORMANCE WARNING: Overlapping onDeckSearchers=2
>> The only way that I can imagine any part of Solr *crashing* when this
>> message happens is if you are also hitting an OutOfMemoryError
>> exception. You've said that your collection crashes ... but not what
>> actually happens -- what "crash" means for your situation. I've never
>> heard of a collection crashing.

>> If you're running version 4.0 or later, you actually *do* want
>> autoCommit configured, with openSearcher set to false. This
>> configuration will not change document visibility at all, because it
>> will not open a new searcher. You need different commits for document
>> visibility.

Thank you for the responses. Collection crashes as in, I'm unable to open the core tab in the Solr console. Search is not returning. None of the pages open in the Solr admin dashboard.

I do understand how and why this issue occurs, and I'm going to do all it takes to avoid it. However, in the event of accidental hard commits close to each other that throw this WARN, I'm just trying to figure out a way to make my collection return results without having to delete and re-create the collection or delete the data folder.

Again, I know how to avoid this issue, but if it still happens, what can be done to avoid a complete reindex?

Thank you,
Aswath NS

-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Monday, March 21, 2016 4:19 PM
To: solr-user@lucene.apache.org
Subject: Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

On 3/21/2016 12:52 PM, Aswath Srinivasan (TMS) wrote:
> Fellow developers,
>
> PERFORMANCE WARNING: Overlapping onDeckSearchers=2
>
> I'm seeing this warning often, and whenever I see it, the collection
> crashes. The only way to overcome this is by deleting the data folder
> and reindexing.
>
> In my observation, this WARN comes when I hit frequent hard commits or
> hit re-load config. I'm not planning on frequent hard commits, but
> sometimes it happens accidentally. And when it happens, the collection
> crashes without a recovery.
>
> Have you faced this issue? Is there a recovery procedure for this WARN?
>
> Also, I don't want to increase maxWarmingSearchers or set autocommit.

This is a lot of the same info that you've gotten from Hoss. I'm just going to leave it all here and add a little bit related to the rest of the thread.

Increasing maxWarmingSearchers is almost always the WRONG thing to do. The reason that you are running into this message is that your commits (those that open a new searcher) are taking longer to finish than your commit frequency, so you end up warming multiple searchers at the same time. To limit memory usage, Solr will keep the number of warming searchers from exceeding a threshold. You need to either reduce the frequency of your commits that open a new searcher or change your configuration so they complete faster. Here's some info about slow commits:

http://wiki.apache.org/solr/SolrPerformanceProblems#Slow_commits

The only way that I can imagine any part of Solr *crashing* when this message happens is if you are also hitting an OutOfMemoryError exception. You've said that your collection crashes ... but not what actually happens -- what "crash" means for your situation. I've never heard of a collection crashing.

If you're running version 4.0 or later, you actually *do* want autoCommit configured, with openSearcher set to false. This configuration will not change document visibility at all, because it will not open a new searcher. You need different commits for document visibility. This is the updateHandler config that I use, which includes autoCommit:

  <autoCommit>
    <maxTime>120000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>

With this config, there will be at least two minutes between automatic hard commits. Because these commits will not open a new searcher, they cannot cause the message about onDeckSearchers. Commits that do not open a new searcher will normally complete VERY quickly.

The reason you want this kind of autoCommit configuration is to avoid extremely large transaction logs. See this blog post for more info than you ever wanted about commits:

http://lucidworks.com/blog/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

If you're going to do all your indexing with the dataimport handler, you could just let the commit option on the dataimport take care of document visibility.

Thanks,
Shawn
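Shawn's closing suggestion, letting the dataimport handler take care of the visibility commit, could be sketched as below. The core name "db" and the default host/port are assumptions taken from the example-DIH setup described earlier in the thread.

```shell
# Sketch: run a DIH full import and let DIH itself issue the commit
# that makes documents visible, instead of sending manual commits.
curl "http://localhost:8983/solr/db/dataimport?command=full-import&commit=true"

# Check the import status before triggering another import or a config
# reload, so consecutive runs don't overlap their searcher warm-ups.
curl "http://localhost:8983/solr/db/dataimport?command=status"
```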
Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2
On 3/21/2016 12:52 PM, Aswath Srinivasan (TMS) wrote:
> Fellow developers,
>
> PERFORMANCE WARNING: Overlapping onDeckSearchers=2
>
> I'm seeing this warning often, and whenever I see it, the collection
> crashes. The only way to overcome this is by deleting the data folder
> and reindexing.
>
> In my observation, this WARN comes when I hit frequent hard commits or
> hit re-load config. I'm not planning on frequent hard commits, but
> sometimes it happens accidentally. And when it happens, the collection
> crashes without a recovery.
>
> Have you faced this issue? Is there a recovery procedure for this WARN?
>
> Also, I don't want to increase maxWarmingSearchers or set autocommit.

This is a lot of the same info that you've gotten from Hoss. I'm just going to leave it all here and add a little bit related to the rest of the thread.

Increasing maxWarmingSearchers is almost always the WRONG thing to do. The reason that you are running into this message is that your commits (those that open a new searcher) are taking longer to finish than your commit frequency, so you end up warming multiple searchers at the same time. To limit memory usage, Solr will keep the number of warming searchers from exceeding a threshold. You need to either reduce the frequency of your commits that open a new searcher or change your configuration so they complete faster. Here's some info about slow commits:

http://wiki.apache.org/solr/SolrPerformanceProblems#Slow_commits

The only way that I can imagine any part of Solr *crashing* when this message happens is if you are also hitting an OutOfMemoryError exception. You've said that your collection crashes ... but not what actually happens -- what "crash" means for your situation. I've never heard of a collection crashing.

If you're running version 4.0 or later, you actually *do* want autoCommit configured, with openSearcher set to false. This configuration will not change document visibility at all, because it will not open a new searcher. You need different commits for document visibility. This is the updateHandler config that I use, which includes autoCommit:

  <autoCommit>
    <maxTime>120000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>

With this config, there will be at least two minutes between automatic hard commits. Because these commits will not open a new searcher, they cannot cause the message about onDeckSearchers. Commits that do not open a new searcher will normally complete VERY quickly.

The reason you want this kind of autoCommit configuration is to avoid extremely large transaction logs. See this blog post for more info than you ever wanted about commits:

http://lucidworks.com/blog/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

If you're going to do all your indexing with the dataimport handler, you could just let the commit option on the dataimport take care of document visibility.

Thanks,
Shawn
RE: PERFORMANCE WARNING: Overlapping onDeckSearchers=2
>> If you're seeing a crash, then that's a distinct problem from the WARN
>> -- it might be related to the warning, but it's not identical -- Solr
>> doesn't always (or even normally) crash in the "Overlapping
>> onDeckSearchers" situation.

That is what I hoped for, but I could see nothing else in the log. All I'm trying to do is run a full import in the DIH handler, index some 10 records from the DB, and check the "commit" check box. Then, when I immediately re-run the full import OR do a reload config, I start seeing this warning and my collection crashes. I have turned off autocommit in solrconfig. I can try to avoid frequent hard commits, but I wanted a solution to overcome this WARN if an accidental frequent hard commit happens.

Thank you,
Aswath NS

-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Monday, March 21, 2016 2:26 PM
To: solr-user@lucene.apache.org
Subject: RE: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

: What I'm wondering is, what should one do to fix this issue when it
: happens. Is there a way to recover after the WARN appears?

It's just a warning that you have a sub-optimal situation from a performance standpoint -- either committing too fast, or warming too much. It's not a failure, and Solr will continue to serve queries and process updates -- but meanwhile it's detected that the situation it's in involves wasted CPU/RAM.

: In my observation, this WARN comes when I hit frequent hard commits or
: hit re-load config. I'm not planning on frequent hard commits, but
: sometimes it happens accidentally. And when it happens, the collection
: crashes without a recovery.

If you're seeing a crash, then that's a distinct problem from the WARN -- it might be related to the warning, but it's not identical -- Solr doesn't always (or even normally) crash in the "Overlapping onDeckSearchers" situation.

So if you are seeing crashes, please give us more details about these crashes: namely more details about everything you are seeing in your logs (on all the nodes, even if only one node is crashing).

https://wiki.apache.org/solr/UsingMailingLists

-Hoss
http://www.lucidworks.com/
RE: PERFORMANCE WARNING: Overlapping onDeckSearchers=2
: What I'm wondering is, what should one do to fix this issue when it
: happens. Is there a way to recover after the WARN appears?

It's just a warning that you have a sub-optimal situation from a performance standpoint -- either committing too fast, or warming too much. It's not a failure, and Solr will continue to serve queries and process updates -- but meanwhile it's detected that the situation it's in involves wasted CPU/RAM.

: In my observation, this WARN comes when I hit frequent hard commits or
: hit re-load config. I'm not planning on frequent hard commits, but
: sometimes it happens accidentally. And when it happens, the collection
: crashes without a recovery.

If you're seeing a crash, then that's a distinct problem from the WARN -- it might be related to the warning, but it's not identical -- Solr doesn't always (or even normally) crash in the "Overlapping onDeckSearchers" situation.

So if you are seeing crashes, please give us more details about these crashes: namely more details about everything you are seeing in your logs (on all the nodes, even if only one node is crashing).

https://wiki.apache.org/solr/UsingMailingLists

-Hoss
http://www.lucidworks.com/
RE: PERFORMANCE WARNING: Overlapping onDeckSearchers=2
Please note that I'm not looking for ways to avoid this issue; there are a lot of internet articles on that topic. What I'm wondering is, what should one do to fix this issue when it happens? Is there a way to recover after the WARN appears?

Thank you,
Aswath NS

-Original Message-
From: Aswath Srinivasan (TMS) [mailto:aswath.sriniva...@toyota.com]
Sent: Monday, March 21, 2016 11:52 AM
To: solr-user@lucene.apache.org
Subject: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

Fellow developers,

PERFORMANCE WARNING: Overlapping onDeckSearchers=2

I'm seeing this warning often, and whenever I see it, the collection crashes. The only way to overcome this is by deleting the data folder and reindexing.

In my observation, this WARN comes when I hit frequent hard commits or hit re-load config. I'm not planning on frequent hard commits, but sometimes it happens accidentally. And when it happens, the collection crashes without a recovery.

Have you faced this issue? Is there a recovery procedure for this WARN?

Also, I don't want to increase maxWarmingSearchers or set autocommit.

Thank you,
Aswath NS