RE: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

2016-03-22 Thread Aswath Srinivasan (TMS)
>> Since you've already reproduced it on a small scale, we'll need your entire 
>> Solr logfile.  The mailing list eats attachments, so you'll need to place it 
>> somewhere and provide a URL.  Sites like gist and dropbox are excellent for 
>> sharing large text content.

Sure. I will try and send it across. However, I don't see anything in them. I 
have FINE-level logs.

>> Do you literally mean 10 records (a number I can count on my fingers)? 
>> How much data is in each of those DB records?  Which configset did you use 
>> when creating the index?

Yes. Crazy, right? Actually, the select query I gave will yield only 10 records. 
The total record count in the table is 200,000. I restricted the query to 
reproduce the issue at a small scale. This issue started appearing in my QA 
environment, where one time we happened to get accidentally frequent hard 
commits from two batch jobs. There is no autocommit set in solrconfig; only the 
batch jobs send a commit. I was never able to recover the collection, so I had 
to delete the data and reindex to fix it. Hence I decided to reproduce the 
issue at a very small scale and try to fix it, because deleting the data and 
reindexing cannot be a fix. The DB records are just normal varchars, some 7 
columns. I don't think the data is the problem.

I cloned the 'solr-5.3.2\example\example-DIH\solr\db' configset, added some 
additional fields, and removed unused default fields.

>> You mentioned a 10GB heap and then said the machine has 8GB of RAM.  Is this 
>> correct?  If so, this would be a source of serious performance issues.

Oops, it's a 1 GB heap. That was a typo. The consumed heap is around 300-400 MB.

Thank you,
Aswath NS


-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org] 
Sent: Tuesday, March 22, 2016 10:41 AM
To: solr-user@lucene.apache.org
Subject: Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

On 3/22/2016 11:32 AM, Aswath Srinivasan (TMS) wrote:
> Thank you Shawn for taking time and responding.
>
> Unfortunately, this is not the case. My heap is not even going past 
> 50% and I have a heap of 10 GB on an instance that I just installed as 
> a standalone version and was only trying out these,
>
> •   Install a standalone solr 5.3.2 in my PC
> •   Indexed some 10 db records
> •   Hit core reload/call commit frequently in quick intervals
> •   Seeing the  o.a.s.c.SolrCore [db] PERFORMANCE WARNING: Overlapping 
> onDeckSearchers=2
> •   Collection crashes
> •   Only way to recover is to stop solr – delete the data folder – start 
> solr – reindex
>
> In any case, if this is a heap-related issue, a Solr restart should help, is 
> what I think.

That shouldn't happen.

Since you've already reproduced it on a small scale, we'll need your entire 
Solr logfile.  The mailing list eats attachments, so you'll need to place it 
somewhere and provide a URL.  Sites like gist and dropbox are excellent for 
sharing large text content.

More questions:

Do you literally mean 10 records (a number I can count on my fingers)? 
How much data is in each of those DB records?  Which configset did you use when 
creating the index?

You mentioned a 10GB heap and then said the machine has 8GB of RAM.  Is this 
correct?  If so, this would be a source of serious performance issues.

Thanks,
Shawn



Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

2016-03-22 Thread Shawn Heisey
On 3/22/2016 11:32 AM, Aswath Srinivasan (TMS) wrote:
> Thank you Shawn for taking time and responding.
>
> Unfortunately, this is not the case. My heap is not even going past 50% and I 
> have a heap of 10 GB on an instance that I just installed as a standalone 
> version and was only trying out these,
>
> •   Install a standalone solr 5.3.2 in my PC
> •   Indexed some 10 db records
> •   Hit core reload/call commit frequently in quick intervals
> •   Seeing the  o.a.s.c.SolrCore [db] PERFORMANCE WARNING: Overlapping 
> onDeckSearchers=2
> •   Collection crashes
> •   Only way to recover is to stop solr – delete the data folder – start 
> solr – reindex
>
> In any case, if this is a heap-related issue, a Solr restart should help, is 
> what I think.

That shouldn't happen.

Since you've already reproduced it on a small scale, we'll need your
entire Solr logfile.  The mailing list eats attachments, so you'll need
to place it somewhere and provide a URL.  Sites like gist and dropbox
are excellent for sharing large text content.

More questions:

Do you literally mean 10 records (a number I can count on my fingers)? 
How much data is in each of those DB records?  Which configset did you
use when creating the index?

You mentioned a 10GB heap and then said the machine has 8GB of RAM.  Is
this correct?  If so, this would be a source of serious performance issues.

Thanks,
Shawn



RE: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

2016-03-22 Thread Aswath Srinivasan (TMS)
>> If you're not actually hitting OutOfMemoryError, then my best guess about 
>> what's happening is that you are running right at the edge of the available 
>> Java heap memory, so your JVM is constantly running full garbage collections 
>> to free up enough memory for normal operation.  In this situation, Solr is 
>> actually still running, but is spending most of its time paused for garbage 
>> collection.

Thank you Shawn for taking time and responding.

Unfortunately, this is not the case. My heap is not even going past 50% and I 
have a heap of 10 GB on an instance that I just installed as a standalone 
version and was only trying out these,

•   Install a standalone solr 5.3.2 in my PC
•   Indexed some 10 db records
•   Hit core reload/call commit frequently in quick intervals
•   Seeing the  o.a.s.c.SolrCore [db] PERFORMANCE WARNING: Overlapping 
onDeckSearchers=2
•   Collection crashes
•   Only way to recover is to stop solr – delete the data folder – start 
solr – reindex

In any case, if this is a heap-related issue, a Solr restart should help, is 
what I think.

>>If I'm wrong about what's happening, then we'll need a lot more details about 
>>your server and your Solr setup.

Nothing really. Just a standalone Solr 5.3.2 on a Windows 7 machine (64-bit, 8 
GB RAM). I bet anybody could reproduce the problem by following my steps above.

Thank you all for spending time on this. I shall post back my findings, if they 
turn out to be useful.

Thank you,
Aswath NS
Mobile  +1 424 345 5340
Office+1 310 468 6729

-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Monday, March 21, 2016 6:07 PM
To: solr-user@lucene.apache.org
Subject: Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

On 3/21/2016 6:49 PM, Aswath Srinivasan (TMS) wrote:
>>> Thank you for the responses. Collection crashes as in: I'm unable to open 
>>> the core tab in the Solr console. Search is not returning. None of the 
>>> pages open in the Solr admin dashboard.
>>>
>>> I do understand how and why this issue occurs and I'm going to do all it 
>>> takes to avoid this issue. However, in the event of accidental frequent 
>>> hard commits close to each other that throw this WARN, I'm just trying to 
>>> figure out a way to make my collection return results without having to 
>>> delete and re-create the collection or delete the data folder.
>>>
>>> Again, I know how to avoid this issue but if it still happens then what can 
>>> be done to avoid a complete reindexing.

If you're not actually hitting OutOfMemoryError, then my best guess about 
what's happening is that you are running right at the edge of the available 
Java heap memory, so your JVM is constantly running full garbage collections to 
free up enough memory for normal operation.  In this situation, Solr is 
actually still running, but is spending most of its time paused for garbage 
collection.

https://wiki.apache.org/solr/SolrPerformanceProblems#GC_pause_problems

The first part of the "GC pause problems" section on the above wiki page talks 
about very large heaps, but there is a paragraph just before "Tools and Garbage 
Collection" that talks about heaps that are a little bit too small.

If I'm right about this, you're going to need to increase your java heap size.  
Exactly how to do this will depend on what version of Solr you're running, how 
you installed it, and how you start it.

For 5.x versions using the included scripts, you can use the "-m" option on the 
"bin/solr" command when you start Solr manually, or you can edit the solr.in.sh 
file (usually found in /etc/default or /var/solr) if you used the service 
installer script on a UNIX/Linux platform.  The default heap size in 5.x 
scripts is 512MB, which is VERY small.
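As a sketch (the exact variable names depend on the Solr version and how it was 
installed, so verify these against your own scripts before relying on them), 
raising the heap looks like this:

```shell
# Start Solr manually with a 1 GB heap (the bin/solr -m option, Solr 5.x)
bin/solr start -m 1g

# Or, for a service install, set the heap in solr.in.sh:
SOLR_HEAP="1g"
# (some 5.x scripts use SOLR_JAVA_MEM="-Xms1g -Xmx1g" instead)
```

This is a configuration sketch rather than something to run verbatim; check 
which variable your solr.in.sh actually defines.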

For earlier versions, there are too many install/start options available.
There were no installation scripts included with Solr itself, so I won't know 
anything about the setup.

If I'm wrong about what's happening, then we'll need a lot more details about 
your server and your Solr setup.

Thanks,
Shawn



Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

2016-03-21 Thread Shawn Heisey
On 3/21/2016 6:49 PM, Aswath Srinivasan (TMS) wrote:
>>> Thank you for the responses. Collection crashes as in: I'm unable to open 
>>> the core tab in the Solr console. Search is not returning. None of the 
>>> pages open in the Solr admin dashboard.
>>>
>>> I do understand how and why this issue occurs and I'm going to do all it 
>>> takes to avoid this issue. However, in the event of accidental frequent 
>>> hard commits close to each other that throw this WARN, I'm just trying to 
>>> figure out a way to make my collection return results without having to 
>>> delete and re-create the collection or delete the data folder.
>>>
>>> Again, I know how to avoid this issue but if it still happens then what can 
>>> be done to avoid a complete reindexing.

If you're not actually hitting OutOfMemoryError, then my best guess
about what's happening is that you are running right at the edge of the
available Java heap memory, so your JVM is constantly running full
garbage collections to free up enough memory for normal operation.  In
this situation, Solr is actually still running, but is spending most of
its time paused for garbage collection.

https://wiki.apache.org/solr/SolrPerformanceProblems#GC_pause_problems

The first part of the "GC pause problems" section on the above wiki page
talks about very large heaps, but there is a paragraph just before
"Tools and Garbage Collection" that talks about heaps that are a little
bit too small.

If I'm right about this, you're going to need to increase your java heap
size.  Exactly how to do this will depend on what version of Solr you're
running, how you installed it, and how you start it.

For 5.x versions using the included scripts, you can use the "-m" option
on the "bin/solr" command when you start Solr manually, or you can edit
the solr.in.sh file (usually found in /etc/default or /var/solr) if you
used the service installer script on a UNIX/Linux platform.  The default
heap size in 5.x scripts is 512MB, which is VERY small.

For earlier versions, there are too many install/start options available. 
There were no installation scripts included with Solr itself, so I won't
know anything about the setup.

If I'm wrong about what's happening, then we'll need a lot more details
about your server and your Solr setup.

Thanks,
Shawn



RE: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

2016-03-21 Thread Aswath Srinivasan (TMS)
>> The only way that I can imagine any part of Solr *crashing* when this 
>> message happens is if you are also hitting an OutOfMemoryError exception.  
>> You've said that your collection crashes ... but not what actually happens 
>> -- what "crash" means for your situation.  I've never heard of a collection 
>> crashing.



>>If you're running version 4.0 or later, you actually *do* want autoCommit 
>>configured, with openSearcher set to false.  This configuration will not 
>>change document visibility at all, because it will not open a new searcher.  
>>You need different commits for document visibility.


Thank you for the responses. Collection crashes as in: I'm unable to open the 
core tab in the Solr console. Search is not returning. None of the pages open 
in the Solr admin dashboard.

I do understand how and why this issue occurs and I'm going to do all it takes 
to avoid this issue. However, in the event of accidental frequent hard commits 
close to each other that throw this WARN, I'm just trying to figure out a way 
to make my collection return results without having to delete and re-create 
the collection or delete the data folder.

Again, I know how to avoid this issue but if it still happens then what can be 
done to avoid a complete reindexing.

Thank you,
Aswath NS

-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Monday, March 21, 2016 4:19 PM
To: solr-user@lucene.apache.org
Subject: Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

On 3/21/2016 12:52 PM, Aswath Srinivasan (TMS) wrote:
> Fellow developers,
>
> PERFORMANCE WARNING: Overlapping onDeckSearchers=2
>
> I'm seeing this warning often and whenever I see this, the collection 
> crashes. The only way to overcome this is by deleting the data folder and 
> reindexing.
>
> In my observation, this WARN comes when I hit frequent hard commits or hit 
> re-load config. I'm not planning to hit frequent hard commits; however, 
> sometimes it happens accidentally. And when it happens, the collection 
> crashes without a recovery.
>
> Have you faced this issue? Is there a recovery procedure for this WARN?
>
> Also, I don't want to increase maxWarmingSearchers or set autocommit.

This is a lot of the same info that you've gotten from Hoss. I'm just going to 
leave it all here and add a little bit related to the rest of the thread.

Increasing maxWarmingSearchers is almost always the WRONG thing to do.
The reason that you are running into this message is that your commits (those 
that open a new searcher) are taking longer to finish than your commit 
frequency, so you end up warming multiple searchers at the same time. To limit 
memory usage, Solr will keep the number of warming searchers from exceeding a 
threshold.
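For reference, the cap mentioned above is the maxWarmingSearchers setting in 
solrconfig.xml; this sketch shows it with its common default of 2 (and, as 
noted above, raising it is almost always the wrong fix):

```xml
<!-- Maximum number of searchers that may be warming at the same time.
     Commits that would exceed this limit fail instead of stacking up
     more warming searchers. -->
<maxWarmingSearchers>2</maxWarmingSearchers>
```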

You need to either reduce the frequency of your commits that open a new 
searcher or change your configuration so they complete faster. Here's some info 
about slow commits:

http://wiki.apache.org/solr/SolrPerformanceProblems#Slow_commits

The only way that I can imagine any part of Solr *crashing* when this message 
happens is if you are also hitting an OutOfMemoryError
exception. You've said that your collection crashes ... but not what
actually happens -- what "crash" means for your situation. I've never heard of 
a collection crashing.

If you're running version 4.0 or later, you actually *do* want autoCommit 
configured, with openSearcher set to false. This configuration will not change 
document visibility at all, because it will not open a new searcher. You need 
different commits for document visibility.

This is the updateHandler config that I use which includes autoCommit:



<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>120000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>




With this config, there will be at least two minutes between automatic hard 
commits. Because these commits will not open a new searcher, they cannot cause 
the message about onDeckSearchers. Commits that do not open a new searcher will 
normally complete VERY quickly. The reason you want this kind of autoCommit 
configuration is to avoid extremely large transaction logs.

See this blog post for more info than you ever wanted about commits:

http://lucidworks.com/blog/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

If you're going to do all your indexing with the dataimport handler, you could 
just let the commit option on the dataimport take care of document visibility.
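As an illustration (the host, port, and core name here are hypothetical; the 
dataimport parameters are the standard DIH ones), a full import that commits 
when it finishes can be requested with:

```shell
curl "http://localhost:8983/solr/db/dataimport?command=full-import&commit=true"
```

This is a sketch that assumes a running Solr instance with the DIH configured 
on that core.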

Thanks,
Shawn


Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

2016-03-21 Thread Shawn Heisey
On 3/21/2016 12:52 PM, Aswath Srinivasan (TMS) wrote:
> Fellow developers,
>
> PERFORMANCE WARNING: Overlapping onDeckSearchers=2
>
> I'm seeing this warning often and whenever I see this, the collection 
> crashes. The only way to overcome this is by deleting the data folder and 
> reindexing.
>
> In my observation, this WARN comes when I hit frequent hard commits or hit 
> re-load config. I'm not planning to hit frequent hard commits; however, 
> sometimes it happens accidentally. And when it happens, the collection 
> crashes without a recovery.
>
> Have you faced this issue? Is there a recovery procedure for this WARN?
>
> Also, I don't want to increase maxWarmingSearchers or set autocommit.

This is a lot of the same info that you've gotten from Hoss.  I'm just
going to leave it all here and add a little bit related to the rest of
the thread.

Increasing maxWarmingSearchers is almost always the WRONG thing to do. 
The reason that you are running into this message is that your commits
(those that open a new searcher) are taking longer to finish than your
commit frequency, so you end up warming multiple searchers at the same
time.  To limit memory usage, Solr will keep the number of warming
searchers from exceeding a threshold.

You need to either reduce the frequency of your commits that open a new
searcher or change your configuration so they complete faster.  Here's
some info about slow commits:

http://wiki.apache.org/solr/SolrPerformanceProblems#Slow_commits

The only way that I can imagine any part of Solr *crashing* when this
message happens is if you are also hitting an OutOfMemoryError
exception.   You've said that your collection crashes ... but not what 
actually happens -- what "crash" means for your situation.  I've never
heard of a collection crashing.

If you're running version 4.0 or later, you actually *do* want
autoCommit configured, with openSearcher set to false.  This
configuration will not change document visibility at all, because it
will not open a new searcher.  You need different commits for document
visibility.

This is the updateHandler config that I use which includes autoCommit:



  
  <updateHandler class="solr.DirectUpdateHandler2">
    <autoCommit>
      <maxTime>120000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>
  </updateHandler>
  
  


With this config, there will be at least two minutes between automatic
hard commits.  Because these commits will not open a new searcher, they
cannot cause the message about onDeckSearchers.  Commits that do not
open a new searcher will normally complete VERY quickly.  The reason you
want this kind of autoCommit configuration is to avoid extremely large
transaction logs.

See this blog post for more info than you ever wanted about commits:

http://lucidworks.com/blog/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

If you're going to do all your indexing with the dataimport handler, you
could just let the commit option on the dataimport take care of document
visibility.

Thanks,
Shawn



RE: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

2016-03-21 Thread Aswath Srinivasan (TMS)
>> If you're seeing a crash, then that's a distinct problem from the WARN -- it 
>> might be related to the warning, but it's not identical -- Solr doesn't 
>> always (or even normally) crash in the "Overlapping onDeckSearchers" 
>> situation.

That is what I hoped for. But I could see nothing else in the log. All I'm 
trying to do is run a full import in the DIH handler, index some 10 records 
from the DB, and check the "commit" check box. Then, when I immediately re-run 
the full import OR do a reload config, I start seeing this warning and my 
collection crashes.

I have turned off autocommit in the solrconfig.

I can try to avoid frequent hard commits, but I wanted a way to recover from 
this WARN if accidental frequent hard commits happen.

Thank you,
Aswath NS



-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Monday, March 21, 2016 2:26 PM
To: solr-user@lucene.apache.org
Subject: RE: PERFORMANCE WARNING: Overlapping onDeckSearchers=2


: What I'm wondering is, what should one do to fix this issue when it
: happens. Is there a way to recover after the WARN appears?

It's just a warning that you have a sub-optimal situation from a performance 
standpoint -- either committing too fast, or warming too much.
It's not a failure, and Solr will continue to serve queries and process updates 
-- but meanwhile it's detected that the situation it's in involves wasted 
CPU/RAM.

: In my observation, this WARN comes when I hit frequent hard commits or
: hit re-load config. I'm not planning to hit frequent hard commits;
: however, sometimes it happens accidentally. And when it happens, the
: collection crashes without a recovery.

If you're seeing a crash, then that's a distinct problem from the WARN -- it 
might be related to the warning, but it's not identical -- Solr doesn't always 
(or even normally) crash in the "Overlapping onDeckSearchers"
situation.

So if you are seeing crashes, please give us more details about these
crashes: namely, more details about everything you are seeing in your logs (on 
all the nodes, even if only one node is crashing).

https://wiki.apache.org/solr/UsingMailingLists



-Hoss
http://www.lucidworks.com/


RE: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

2016-03-21 Thread Chris Hostetter

: What I'm wondering is, what should one do to fix this issue when it 
: happens. Is there a way to recover after the WARN appears?

It's just a warning that you have a sub-optimal situation from a 
performance standpoint -- either committing too fast, or warming too much.  
It's not a failure, and Solr will continue to serve queries and process 
updates -- but meanwhile it's detected that the situation it's in involves 
wasted CPU/RAM.

: In my observation, this WARN comes when I hit frequent hard commits or 
: hit re-load config. I'm not planning to hit frequent hard commits; 
: however, sometimes it happens accidentally. And when it happens, the 
: collection crashes without a recovery.

If you're seeing a crash, then that's a distinct problem from the WARN -- 
it might be related to the warning, but it's not identical -- Solr doesn't 
always (or even normally) crash in the "Overlapping onDeckSearchers" 
situation.

So if you are seeing crashes, please give us more details about these 
crashes: namely, more details about everything you are seeing in your logs 
(on all the nodes, even if only one node is crashing).

https://wiki.apache.org/solr/UsingMailingLists



-Hoss
http://www.lucidworks.com/


RE: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

2016-03-21 Thread Aswath Srinivasan (TMS)
Please note that I'm not looking for ways to avoid this issue. There are a 
lot of internet articles on that topic.

What I'm wondering is, what should one do to fix this issue when it happens. 
Is there a way to recover after the WARN appears?

Thank you,
Aswath NS

-Original Message-
From: Aswath Srinivasan (TMS) [mailto:aswath.sriniva...@toyota.com] 
Sent: Monday, March 21, 2016 11:52 AM
To: solr-user@lucene.apache.org
Subject: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

Fellow developers,

PERFORMANCE WARNING: Overlapping onDeckSearchers=2

I'm seeing this warning often, and whenever I see it, the collection crashes. 
The only way to overcome this is by deleting the data folder and reindexing.

In my observation, this WARN comes when I hit frequent hard commits or hit 
re-load config. I'm not planning to hit frequent hard commits; however, 
sometimes it happens accidentally. And when it happens, the collection crashes 
without a recovery.

Have you faced this issue? Is there a recovery procedure for this WARN?

Also, I don't want to increase maxWarmingSearchers or set autocommit.

Thank you,
Aswath NS


RE: Performance warning overlapping onDeckSearchers

2015-08-13 Thread Adrian Liew
Thanks very much for the useful info, Erick. I sincerely appreciate you 
pointing out those questions. In fact, I am currently working with a 
third-party product called Sitecore Web Content Management System (WCMS) that 
does the job of issuing updates to the Solr index.
 
I need to understand a bit more about when and how they are committing 
documents to Solr. Your question about how the new searchers arise to produce 
those performance warning messages is valid.

Your questions will be more for the Sitecore Product Team to investigate; I 
will chase them up for answers.

1. Whether we are overriding the system variable solr.autoSoftCommit.maxTime?
2. Whether we are overriding the solr.autoCommit.maxTime (although this really 
shouldn't matter).
3. Why we have too many hard commits?
4. How come new searchers are opened even when this is set with 
<openSearcher>false</openSearcher>?

I shall chase these questions up with them and will feed back to you where 
necessary.

Thanks.

Best regards,
Adrian Liew

-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: Wednesday, August 12, 2015 11:19 PM
To: solr-user@lucene.apache.org
Subject: Re: Performance warning overlapping onDeckSearchers

Something's not adding up here. Is your _client_ perhaps issuing commits when 
you index documents? This is Joel's question, so we need to see how you send 
docs to Solr. We really need to know how you're indexing docs to Solr.

My bet (and I suspect Joel's) is that you're either using SolrJ to send docs to 
Solr and have something like:

while (more docs) {
   create a doc
   send it to Solr
   commit
}

rather than:

while (more docs) {
   create a batch of docs (I usually start with 1,000) using the commitWithin 
   option, and make it as long as possible
   send the batch to Solr
}

Maybe commit at the very end, but that's not necessary if you're willing to 
wait for commitWithin.

Or, you're using post.jar in some kind of loop, which commits every time you 
use it by default. You can disable this; try 'java -jar post.jar -help' for all 
the options, but the one you want is -Dcommit=no.
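A minimal SolrJ sketch of the batching pattern described above (the core URL 
and field names are hypothetical, and it assumes the SolrJ 5.x client library 
on the classpath, so it is illustrative rather than runnable as-is):

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BatchIndexer {
    public static void main(String[] args) throws Exception {
        // Hypothetical standalone core URL
        SolrClient client = new HttpSolrClient("http://localhost:8983/solr/db");
        List<SolrInputDocument> batch = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", Integer.toString(i));
            doc.addField("name", "record-" + i);
            batch.add(doc);
            if (batch.size() == 1000) {
                // commitWithin of 60s: no explicit commit per batch
                client.add(batch, 60_000);
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            client.add(batch, 60_000);  // final partial batch
        }
        client.close();                 // commitWithin handles visibility
    }
}
```

The point is the shape: one add per 1,000-document batch with a long 
commitWithin, and no per-document commit.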

NOTE: you have to issue a commit _sometime_ to see the docs, either the 
commitWithin option in SolrJ or explicitly if you're using the post.jar tool. 
You can even issue a commit (this is suitable for
testing) via curl or a browser with
http://solr_node:8983/solr/collection/update?commit=true

The reason we're focusing here is that:

Soft commits are disabled in your setup; this is the -1 in autoSoftCommit.
Hard commits are not opening searchers; this is the 
<openSearcher>false</openSearcher> setting in the autoCommit section.

Are you perhaps overriding the system variable solr.autoSoftCommit.maxTime 
when you start up Solr?

What about solr.autoCommit.maxTime (although this really shouldn't matter).

If you're not overriding the above, then no searchers should be opened at all 
after you start Solr, and only one should open when you do start Solr. So you 
should not be getting the warning about Overlapping onDeckSearchers.

Forget the static warming queries, they are irrelevant until we understand why 
you're getting any new searchers. For future reference, these are the 
newSearcher and firstSearcher events in solrconfig.xml. newSearcher is fired 
every time one commits, firstSearcher when you start Solr.

The bottom line here is you need to find out why you're committing at all, 
which opens a new searcher, which, when it happens too often, generates the 
warning you're seeing.

Best,
Erick


On Wed, Aug 12, 2015 at 6:51 AM, Adrian Liew adrian.l...@avanade.com wrote:
 Hi Joel,

 I am fairly new to Solr (the version I am using is v5.2.1), so I suppose what 
 you may be asking is referring to the autocommit section:


 <!-- AutoCommit

      Perform a hard commit automatically under certain conditions.
      Instead of enabling autoCommit, consider using "commitWithin"
      when adding documents.

      http://wiki.apache.org/solr/UpdateXmlMessages

      maxDocs - Maximum number of documents to add since the last
                commit before automatically triggering a new commit.

      maxTime - Maximum amount of time in ms that is allowed to pass
                since a document was added before automatically
                triggering a new commit.
      openSearcher - if false, the commit causes recent index changes
                to be flushed to stable storage, but does not cause a new
                searcher to be opened to make those changes visible.

      If the updateLog is enabled, then it's highly recommended to
      have some sort of hard autoCommit to limit the log size.
   -->
 <autoCommit>
   <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
   <openSearcher>false</openSearcher>
 </autoCommit>

 <!-- softAutoCommit is like autoCommit except it causes a
      'soft' commit which only ensures that changes are visible
Re: Performance warning overlapping onDeckSearchers

2015-08-13 Thread Erick Erickson
Adrian:

You can find your answers a lot faster by inspecting the Solr logs.
You'll see a commit message, messages about opening new searchers,
autowarming, etc. All of that is in the log file, along with timestamps.
So rather than asking them an open-ended question, you can say something
like "I see commits coming through every N seconds. This is an
anti-pattern. Fix this" ;).

Best,
Erick

On Thu, Aug 13, 2015 at 12:46 AM, Adrian Liew adrian.l...@avanade.com wrote:
 Thanks very much for the useful info, Erick. I sincerely appreciate you 
 pointing out those questions. In fact, I am currently working with a 
 third-party product called Sitecore Web Content Management System (WCMS) that 
 does the job of issuing updates to the Solr index.

 I need to understand a bit more about when and how they are committing 
 documents to Solr. Your question about how the new searchers arise to produce 
 those performance warning messages is valid.

 Your questions will be more for the Sitecore Product Team to investigate; I 
 will chase them up for answers.

 1. Whether we are overriding the system variable solr.autoSoftCommit.maxTime?
 2. Whether we are overriding the solr.autoCommit.maxTime (although this 
 really shouldn't matter).
 3. Why we have too many hard commits?
 4. How come new searchers are opened even when this is set with 
 <openSearcher>false</openSearcher>?

 I shall chase these questions up with them and will feed back to you where 
 necessary.

 Thanks.

 Best regards,
 Adrian Liew

 -Original Message-
 From: Erick Erickson [mailto:erickerick...@gmail.com]
 Sent: Wednesday, August 12, 2015 11:19 PM
 To: solr-user@lucene.apache.org
 Subject: Re: Performance warning overlapping onDeckSearchers

 Something's not adding up here. Is your _client_ perhaps issuing commits when 
 you index documents? This is Joel's question, so we need to see how you send 
 docs to Solr. We really need to know how you're indexing docs to Solr.

 My bet (and I suspect Joel's) is that you're either using SolrJ to send docs 
 to Solr and have something like:

 while (more docs) {
    create a doc
    send it to Solr
    commit
 }

 rather than:

 while (more docs) {
    create a batch of docs (I usually start with 1,000) using the commitWithin 
    option, and make it as long as possible
    send the batch to Solr
 }

 Maybe commit at the very end, but that's not necessary if you're willing to 
 wait for commitWithin.

 Or, you're using post.jar in some kind of loop, which commits every time you 
 use it by default. You can disable this; try 'java -jar post.jar -help' for 
 all the options, but the one you want is -Dcommit=no.

 NOTE: you have to issue a commit _sometime_ to see the docs, either the 
 commitWithin option in SolrJ or explicitly if you're using the post.jar tool. 
 You can even issue a commit (this is suitable for
 testing) via curl or a browser with
 http://solr_node:8983/solr/collection/update?commit=true

 The reason we're focusing here is that:

 Soft commits are disabled in your setup; this is the -1 in autoSoftCommit.
 Hard commits are not opening searchers; this is the 
 <openSearcher>false</openSearcher> setting in the autoCommit section.

 Are you perhaps overriding the system variable solr.autoSoftCommit.maxTime 
 when you start up Solr?

 What about solr.autoCommit.maxTime (although this really shouldn't matter).

 If you're not overriding the above, then no searchers should be opened at all 
 after you start Solr, and only one should open when you do start Solr. So you 
 should not be getting the warning about Overlapping onDeckSearchers.

 Forget the static warming queries; they are irrelevant until we understand 
 why you're getting any new searchers at all. For future reference, these are 
 the newSearcher and firstSearcher events in solrconfig.xml: newSearcher fires 
 on every commit, firstSearcher when you start Solr.
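 For orientation, the newSearcher/firstSearcher hooks mentioned above live as
 event listeners in the query section of solrconfig.xml. A minimal sketch
 (the warming queries shown are illustrative, not from Adrian's config):

 ```xml
 <query>
   <!-- Fired every time a commit opens a new searcher -->
   <listener event="newSearcher" class="solr.QuerySenderListener">
     <arr name="queries">
       <!-- illustrative warming query; an empty arr disables warming -->
       <lst><str name="q">*:*</str><str name="rows">10</str></lst>
     </arr>
   </listener>
   <!-- Fired once, when Solr starts and opens the first searcher -->
   <listener event="firstSearcher" class="solr.QuerySenderListener">
     <arr name="queries">
       <lst><str name="q">*:*</str></lst>
     </arr>
   </listener>
 </query>
 ```

 Heavy queries here lengthen warm-up time, which widens the window in which
 two warming searchers can overlap.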

 The bottom line here is that you need to find out why you're committing at 
 all. Every commit opens a new searcher, and when that happens too often it 
 generates the warning you're seeing.

 Best,
 Erick


 On Wed, Aug 12, 2015 at 6:51 AM, Adrian Liew adrian.l...@avanade.com wrote:
 Hi Joel,

  I am fairly new to Solr (the version I am using is v5.2.1), so I suppose 
  what you are asking about is the autoCommit section:


  <!-- AutoCommit

       Perform a hard commit automatically under certain conditions.
       Instead of enabling autoCommit, consider using "commitWithin"
       when adding documents.

       http://wiki.apache.org/solr/UpdateXmlMessages

       maxDocs - Maximum number of documents to add since the last
                 commit before automatically triggering a new commit.

       maxTime - Maximum amount of time in ms that is allowed to pass
                 since a document was added before automatically
                 triggering a new commit.
       openSearcher - if false, the commit causes recent index changes
RE: Performance warning overlapping onDeckSearchers

2015-08-12 Thread Adrian Liew
Additionally,

I realized that autowarmCount is set to zero for the following cache entries, 
except perSegFilter:

<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="0"/>

<!-- Query Result Cache

     Caches results of searches - ordered lists of document ids
     (DocList) based on a query, a sort, and the range of documents
     requested.
     Additional supported parameter by LRUCache:
        maxRamMB - the maximum amount of RAM (in MB) that this cache is allowed
                   to occupy
  -->
<queryResultCache class="solr.LRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="0"/>

<!-- Document Cache

     Caches Lucene Document objects (the stored fields for each
     document).  Since Lucene internal document ids are transient,
     this cache will not be autowarmed.
  -->
<documentCache class="solr.LRUCache"
               size="512"
               initialSize="512"
               autowarmCount="0"/>

<!-- custom cache currently used by block join -->
<cache name="perSegFilter"
       class="solr.search.LRUCache"
       size="10"
       initialSize="0"
       autowarmCount="10"
       regenerator="solr.NoOpRegenerator"/>


The link, 
https://wiki.apache.org/solr/FAQ#What_does_.22PERFORMANCE_WARNING:_Overlapping_onDeckSearchers.3DX.22_mean_in_my_logs.3F
did suggest reducing the autowarmCount or reducing cache warm-up activity 
(which I am not sure where to begin doing).

Given the config above, I suspect the autowarmCount values are not very large.

Let me know what you think.

Best regards,
Adrian Liew


-Original Message-
From: Adrian Liew [mailto:adrian.l...@avanade.com] 
Sent: Wednesday, August 12, 2015 3:32 PM
To: solr-user@lucene.apache.org
Subject: RE: Performance warning overlapping onDeckSearchers

Thanks Shawn. Given that increasing maxWarmingSearchers is usually the wrong 
way to solve this, are there any implications if we set maxWarmingSearchers to 
zero to resolve this problem?

Or do you think there are some other settings that are worthwhile tuning to 
cater to the above?

Best regards,
Adrian 

-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org] 
Sent: Tuesday, August 11, 2015 11:02 PM
To: solr-user@lucene.apache.org
Subject: Re: Performance warning overlapping onDeckSearchers

On 8/11/2015 3:02 AM, Adrian Liew wrote:
 Has anyone come across this issue, [some_index] PERFORMANCE WARNING: 
 Overlapping onDeckSearchers=2?
 
 I am currently using Solr v5.2.1.
 
 What does this mean? Does this raise red flags?
 
 I am currently encountering an issue whereby my Sitecore system is unable to 
 update the index appropriately. I am not sure if this is linked to the 
 warnings above.

https://wiki.apache.org/solr/FAQ#What_does_.22PERFORMANCE_WARNING:_Overlapping_onDeckSearchers.3DX.22_mean_in_my_logs.3F

What the wiki page doesn't explicitly state is that increasing 
maxWarmingSearchers is usually the wrong way to solve this, because that can 
actually make the problem *worse*.  It is implied by the things the page DOES 
say, but it is not stated.

Thanks,
Shawn



RE: Performance warning overlapping onDeckSearchers

2015-08-12 Thread Adrian Liew
Thanks Shawn. Given that increasing maxWarmingSearchers is usually the wrong 
way to solve this, are there any implications if we set maxWarmingSearchers to 
zero to resolve this problem?

Or do you think there are some other settings that are worthwhile tuning to 
cater to the above?

Best regards,
Adrian 

-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org] 
Sent: Tuesday, August 11, 2015 11:02 PM
To: solr-user@lucene.apache.org
Subject: Re: Performance warning overlapping onDeckSearchers

On 8/11/2015 3:02 AM, Adrian Liew wrote:
 Has anyone come across this issue, [some_index] PERFORMANCE WARNING: 
 Overlapping onDeckSearchers=2?
 
 I am currently using Solr v5.2.1.
 
 What does this mean? Does this raise red flags?
 
 I am currently encountering an issue whereby my Sitecore system is unable to 
 update the index appropriately. I am not sure if this is linked to the 
 warnings above.

https://wiki.apache.org/solr/FAQ#What_does_.22PERFORMANCE_WARNING:_Overlapping_onDeckSearchers.3DX.22_mean_in_my_logs.3F

What the wiki page doesn't explicitly state is that increasing 
maxWarmingSearchers is usually the wrong way to solve this, because that can 
actually make the problem *worse*.  It is implied by the things the page DOES 
say, but it is not stated.

Thanks,
Shawn



Re: Performance warning overlapping onDeckSearchers

2015-08-12 Thread Joel Bernstein
You'll want to check to see if there are any static warming queries being
run as well.

How often are you committing and opening a new searcher? Are you using
autoCommits or explicitly committing?

Joel Bernstein
http://joelsolr.blogspot.com/

On Wed, Aug 12, 2015 at 3:57 AM, Adrian Liew adrian.l...@avanade.com
wrote:

 Additionally,

 I realized that autowarmCount is set to zero for the following cache
 entries, except perSegFilter:

 <filterCache class="solr.FastLRUCache"
              size="512"
              initialSize="512"
              autowarmCount="0"/>

 <!-- Query Result Cache

      Caches results of searches - ordered lists of document ids
      (DocList) based on a query, a sort, and the range of documents
      requested.
      Additional supported parameter by LRUCache:
         maxRamMB - the maximum amount of RAM (in MB) that this cache is
                    allowed to occupy
   -->
 <queryResultCache class="solr.LRUCache"
                   size="512"
                   initialSize="512"
                   autowarmCount="0"/>

 <!-- Document Cache

      Caches Lucene Document objects (the stored fields for each
      document).  Since Lucene internal document ids are transient,
      this cache will not be autowarmed.
   -->
 <documentCache class="solr.LRUCache"
                size="512"
                initialSize="512"
                autowarmCount="0"/>

 <!-- custom cache currently used by block join -->
 <cache name="perSegFilter"
        class="solr.search.LRUCache"
        size="10"
        initialSize="0"
        autowarmCount="10"
        regenerator="solr.NoOpRegenerator"/>


 The link,
 https://wiki.apache.org/solr/FAQ#What_does_.22PERFORMANCE_WARNING:_Overlapping_onDeckSearchers.3DX.22_mean_in_my_logs.3F
 did suggest reducing the autowarmCount or reducing cache warm-up activity
 (which I am not sure where to begin doing).

 Given the config above, I suspect the autowarmCount values are not very large.

 Let me know what you think.

 Best regards,
 Adrian Liew


 -Original Message-
 From: Adrian Liew [mailto:adrian.l...@avanade.com]
 Sent: Wednesday, August 12, 2015 3:32 PM
 To: solr-user@lucene.apache.org
 Subject: RE: Performance warning overlapping onDeckSearchers

 Thanks Shawn. Given that increasing maxWarmingSearchers is usually the wrong
 way to solve this, are there any implications if we set maxWarmingSearchers
 to zero to resolve this problem?

 Or do you think there are some other settings that are worthwhile tuning
 to cater to the above?

 Best regards,
 Adrian

 -Original Message-
 From: Shawn Heisey [mailto:apa...@elyograg.org]
 Sent: Tuesday, August 11, 2015 11:02 PM
 To: solr-user@lucene.apache.org
 Subject: Re: Performance warning overlapping onDeckSearchers

 On 8/11/2015 3:02 AM, Adrian Liew wrote:
  Has anyone come across this issue, [some_index] PERFORMANCE WARNING:
 Overlapping onDeckSearchers=2?
 
  I am currently using Solr v5.2.1.
 
  What does this mean? Does this raise red flags?
 
  I am currently encountering an issue whereby my Sitecore system is
 unable to update the index appropriately. I am not sure if this is linked
 to the warnings above.


 https://wiki.apache.org/solr/FAQ#What_does_.22PERFORMANCE_WARNING:_Overlapping_onDeckSearchers.3DX.22_mean_in_my_logs.3F

 What the wiki page doesn't explicitly state is that increasing
 maxWarmingSearchers is usually the wrong way to solve this, because that
 can actually make the problem *worse*.  It is implied by the things the
 page DOES say, but it is not stated.

 Thanks,
 Shawn




Re: Performance warning overlapping onDeckSearchers

2015-08-11 Thread Shawn Heisey
On 8/11/2015 3:02 AM, Adrian Liew wrote:
 Has anyone come across this issue, [some_index] PERFORMANCE WARNING: 
 Overlapping onDeckSearchers=2?
 
 I am currently using Solr v5.2.1.
 
 What does this mean? Does this raise red flags?
 
 I am currently encountering an issue whereby my Sitecore system is unable to 
 update the index appropriately. I am not sure if this is linked to the 
 warnings above.

https://wiki.apache.org/solr/FAQ#What_does_.22PERFORMANCE_WARNING:_Overlapping_onDeckSearchers.3DX.22_mean_in_my_logs.3F

What the wiki page doesn't explicitly state is that increasing
maxWarmingSearchers is usually the wrong way to solve this, because that
can actually make the problem *worse*.  It is implied by the things the
page DOES say, but it is not stated.

Thanks,
Shawn