Re: Out of memory errors with Spatial indexing

2020-07-06 Thread David Smiley
I believe you are experiencing this bug: LUCENE-5056

The fix would probably involve adjusting code here:
org.apache.lucene.spatial.query.SpatialArgs#calcDistanceFromErrPct

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


On Mon, Jul 6, 2020 at 5:18 AM Sunil Varma  wrote:

> Hi David
> Thanks for your response. Yes, I noticed that all the data causing the issue
> were at the poles. I tried the "RptWithGeometrySpatialField" field type
> definition but get a "Spatial context does not support S2 spatial
> index" error. Setting spatialContextFactory="Geo3D", I still see the
> original OOM error.
>
> On Sat, 4 Jul 2020 at 05:49, David Smiley  wrote:
>
> > Hi Sunil,
> >
> > Your shape is at a pole, and I'm aware of a bug causing an exponential
> > explosion of needed grid squares when you have polygons super-close to
> the
> > pole.  Might you try S2PrefixTree instead?  I forget if this would fix it
> > or not by itself.  For indexing non-point data, I recommend
> > class="solr.RptWithGeometrySpatialField" which internally is based off a
> > combination of a coarse grid and storing the original vector geometry for
> > accurate verification:
> > <fieldType class="solr.RptWithGeometrySpatialField"
> >   prefixTree="s2" />
> > The internally coarser grid will lessen the impact of that pole bug.
> >
> > ~ David Smiley
> > Apache Lucene/Solr Search Developer
> > http://www.linkedin.com/in/davidwsmiley
> >
> >
> > On Fri, Jul 3, 2020 at 7:48 AM Sunil Varma 
> > wrote:
> >
> > > We are seeing OOM errors  when trying to index some spatial data. I
> > believe
> > > the data itself might not be valid but it shouldn't cause the Server to
> > > crash. We see this on both Solr 7.6 and Solr 8. Below is the input that
> > is
> > > causing the error.
> > >
> > > {
> > > "id": "bad_data_1",
> > > "spatialwkt_srpt": "LINESTRING (-126.86037681029909 -90.0
> > > 1.000150474662E30, 73.58164711175415 -90.0 1.000150474662E30,
> > > 74.52836551959528 -90.0 1.000150474662E30, 74.97006811540834 -90.0
> > > 1.000150474662E30)"
> > > }
> > >
> > > Above dynamic field is mapped to field type "location_rpt" (
> > > solr.SpatialRecursivePrefixTreeFieldType).
> > >
> > >   Any pointers to get around this issue would be highly appreciated.
> > >
> > > Thanks!
> > >
> >
>


Re: Out of memory errors with Spatial indexing

2020-07-06 Thread Sunil Varma
Hi David
Thanks for your response. Yes, I noticed that all the data causing the issue
were at the poles. I tried the "RptWithGeometrySpatialField" field type
definition but get a "Spatial context does not support S2 spatial
index" error. Setting spatialContextFactory="Geo3D", I still see the
original OOM error.

On Sat, 4 Jul 2020 at 05:49, David Smiley  wrote:

> Hi Sunil,
>
> Your shape is at a pole, and I'm aware of a bug causing an exponential
> explosion of needed grid squares when you have polygons super-close to the
> pole.  Might you try S2PrefixTree instead?  I forget if this would fix it
> or not by itself.  For indexing non-point data, I recommend
> class="solr.RptWithGeometrySpatialField" which internally is based off a
> combination of a coarse grid and storing the original vector geometry for
> accurate verification:
> <fieldType class="solr.RptWithGeometrySpatialField"
>   prefixTree="s2" />
> The internally coarser grid will lessen the impact of that pole bug.
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Fri, Jul 3, 2020 at 7:48 AM Sunil Varma 
> wrote:
>
> > We are seeing OOM errors  when trying to index some spatial data. I
> believe
> > the data itself might not be valid but it shouldn't cause the Server to
> > crash. We see this on both Solr 7.6 and Solr 8. Below is the input that
> is
> > causing the error.
> >
> > {
> > "id": "bad_data_1",
> > "spatialwkt_srpt": "LINESTRING (-126.86037681029909 -90.0
> > 1.000150474662E30, 73.58164711175415 -90.0 1.000150474662E30,
> > 74.52836551959528 -90.0 1.000150474662E30, 74.97006811540834 -90.0
> > 1.000150474662E30)"
> > }
> >
> > Above dynamic field is mapped to field type "location_rpt" (
> > solr.SpatialRecursivePrefixTreeFieldType).
> >
> >   Any pointers to get around this issue would be highly appreciated.
> >
> > Thanks!
> >
>


Re: Out of memory errors with Spatial indexing

2020-07-03 Thread David Smiley
Hi Sunil,

Your shape is at a pole, and I'm aware of a bug causing an exponential
explosion of needed grid squares when you have polygons super-close to the
pole.  Might you try S2PrefixTree instead?  I forget if this would fix it
or not by itself.  For indexing non-point data, I recommend
class="solr.RptWithGeometrySpatialField" which internally is based off a
combination of a coarse grid and storing the original vector geometry for
accurate verification:
<fieldType class="solr.RptWithGeometrySpatialField" prefixTree="s2" />
The internally coarser grid will lessen the impact of that pole bug.

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


On Fri, Jul 3, 2020 at 7:48 AM Sunil Varma  wrote:

> We are seeing OOM errors  when trying to index some spatial data. I believe
> the data itself might not be valid but it shouldn't cause the Server to
> crash. We see this on both Solr 7.6 and Solr 8. Below is the input that is
> causing the error.
>
> {
> "id": "bad_data_1",
> "spatialwkt_srpt": "LINESTRING (-126.86037681029909 -90.0
> 1.000150474662E30, 73.58164711175415 -90.0 1.000150474662E30,
> 74.52836551959528 -90.0 1.000150474662E30, 74.97006811540834 -90.0
> 1.000150474662E30)"
> }
>
> Above dynamic field is mapped to field type "location_rpt" (
> solr.SpatialRecursivePrefixTreeFieldType).
>
>   Any pointers to get around this issue would be highly appreciated.
>
> Thanks!
>


Out of memory errors with Spatial indexing

2020-07-03 Thread Sunil Varma
We are seeing OOM errors when trying to index some spatial data. I believe
the data itself might not be valid, but it shouldn't cause the server to
crash. We see this on both Solr 7.6 and Solr 8. Below is the input that is
causing the error.

{
"id": "bad_data_1",
"spatialwkt_srpt": "LINESTRING (-126.86037681029909 -90.0
1.000150474662E30, 73.58164711175415 -90.0 1.000150474662E30,
74.52836551959528 -90.0 1.000150474662E30, 74.97006811540834 -90.0
1.000150474662E30)"
}

The above dynamic field is mapped to the field type "location_rpt"
(solr.SpatialRecursivePrefixTreeFieldType).

  Any pointers to get around this issue would be highly appreciated.

Thanks!
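
Since the server should not crash on bad input, one interim workaround is to reject
suspicious geometries on the client before they ever reach Solr. A minimal sketch in
plain Java (the class name, the thresholds, and the x-y-z triplet assumption are all
illustrative; nothing here is provided by Solr):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WktSanityCheck {
    // Matches signed decimals, including scientific notation such as 1.000150474662E30.
    private static final Pattern NUMBER = Pattern.compile("-?\\d+(?:\\.\\d+)?(?:[eE][+-]?\\d+)?");

    // Returns true only if every coordinate looks like a plausible lon/lat/z value.
    static boolean looksIndexable(String wkt) {
        Matcher m = NUMBER.matcher(wkt);
        int i = 0;
        while (m.find()) {
            double v = Double.parseDouble(m.group());
            int pos = i % 3;                                   // assumes x y z triplets, as in the LINESTRING above
            if (pos == 0 && Math.abs(v) > 180) return false;   // longitude out of range
            if (pos == 1 && Math.abs(v) >= 90) return false;   // latitude sitting on or past a pole
            if (pos == 2 && Math.abs(v) > 1e7) return false;   // absurd z such as 1.000150474662E30
            i++;
        }
        return i > 0;
    }

    public static void main(String[] args) {
        String bad = "LINESTRING (-126.86037681029909 -90.0 1.000150474662E30, "
                   + "73.58164711175415 -90.0 1.000150474662E30)";
        System.out.println(looksIndexable(bad));               // false: every latitude is exactly -90
    }
}

Documents rejected here can be logged and repaired instead of taking the whole JVM down.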


Re: FuzzyQuery causing Out of Memory Errors in 8.5.x

2020-04-23 Thread Colvin Cowie
https://issues.apache.org/jira/browse/SOLR-14428



On Thu, 23 Apr 2020 at 08:45, Colvin Cowie 
wrote:

> I created a little test that fires off fuzzy queries from random UUID
> strings for 5 minutes
> *FIELD_NAME + ":" + UUID.randomUUID().toString().replace("-", "") + "~2"*
>
> The change in heap usage is really severe.
>
> On 8.5.1 Solr went OOM almost immediately on a 512mb heap, and with a 4GB
> heap it only just stayed alive.
> On 8.3.1 it was completely happy.
>
> I'm guessing that the memory might be being leaked if the FuzzyQuery
> objects are referenced from the cache, while the FuzzyTermsEnum would not
> have been.
>
> I'm going to raise an issue
>
>
> On Wed, 22 Apr 2020 at 19:44, Colvin Cowie 
> wrote:
>
>> Hello,
>>
>> I'm moving our product from 8.3.1 to 8.5.1 in dev and we've got tests
>> failing because Solr is getting OOMEs with a 512mb heap where it was
>> previously fine.
>>
>> I ran our tests on both versions with jconsole to track the heap usage.
>> Here's a little comparison. 8.5.1 dies part way through
>> https://drive.google.com/open?id=113Ujts-lzv9ZBJOUB78LA2Qw5PsIsajO
>>
>> We have our own query parser as an extension to Solr, and we do various
>> things with user queries, including generating FuzzyQuery-s. Our
>> implementation of org.apache.solr.search.QParser.parse() isn't stateful and
>> parses the qstr and returns new Query objects each time it's called.
>> With JProfiler on I can see that the majority of the heap is being
>> allocated through FuzzyQuery's constructor.
>> https://issues.apache.org/jira/browse/LUCENE-9068 moved construction of
>> the automata from the FuzzyTermsEnum to the FuzzyQuery's constructor.
>>
>> When profiling on 8.3.1 we still have a fairly large number of
>> FuzzyTermEnums created at times, but that's accounting for about ~40mb of
>> the heap for a few seconds rather than the 100mb to 300mb of continual
>> allocation for FuzzyQuery I'm seeing in 8.5.
>>
>> It's definitely possible that we're doing something wrong in our
>> extension (which I can't share the source of) but it seems like the memory
>> cost of FuzzyQuery now is totally disproportionate to what it was before.
>> We've not had issues like this with our extension before (which doesn't
>> mean that our parser is flawless, but it's not been causing noticeable
>> problems for the last 4 years).
>>
>>
>> So I suppose the question is, are we misusing FuzzyQuery in some way
>> (hard for you to say without seeing the source), or are the recent changes
>> using more memory than they should?
>>
>> I will investigate further into what we're doing. But I could maybe use
>> some help to create a stress test for Lucene itself that compares the
>> memory consumption of the old FuzzyQuery vs the new, to see whether it's
>> fundamentally bad for memory or if it's just how we're using it.
>>
>> Regards,
>> Colvin
>>
>>
>>
>>


Re: FuzzyQuery causing Out of Memory Errors in 8.5.x

2020-04-23 Thread Colvin Cowie
I created a little test that fires off fuzzy queries from random UUID
strings for 5 minutes
*FIELD_NAME + ":" + UUID.randomUUID().toString().replace("-", "") + "~2"*

The change in heap usage is really severe.

On 8.5.1 Solr went OOM almost immediately on a 512mb heap, and with a 4GB
heap it only just stayed alive.
On 8.3.1 it was completely happy.

I'm guessing that the memory might be being leaked if the FuzzyQuery
objects are referenced from the cache, while the FuzzyTermsEnum would not
have been.

I'm going to raise an issue


On Wed, 22 Apr 2020 at 19:44, Colvin Cowie 
wrote:

> Hello,
>
> I'm moving our product from 8.3.1 to 8.5.1 in dev and we've got tests
> failing because Solr is getting OOMEs with a 512mb heap where it was
> previously fine.
>
> I ran our tests on both versions with jconsole to track the heap usage.
> Here's a little comparison. 8.5.1 dies part way through
> https://drive.google.com/open?id=113Ujts-lzv9ZBJOUB78LA2Qw5PsIsajO
>
> We have our own query parser as an extension to Solr, and we do various
> things with user queries, including generating FuzzyQuery-s. Our
> implementation of org.apache.solr.search.QParser.parse() isn't stateful and
> parses the qstr and returns new Query objects each time it's called.
> With JProfiler on I can see that the majority of the heap is being
> allocated through FuzzyQuery's constructor.
> https://issues.apache.org/jira/browse/LUCENE-9068 moved construction of
> the automata from the FuzzyTermsEnum to the FuzzyQuery's constructor.
>
> When profiling on 8.3.1 we still have a fairly large number of
> FuzzyTermEnums created at times, but that's accounting for about ~40mb of
> the heap for a few seconds rather than the 100mb to 300mb of continual
> allocation for FuzzyQuery I'm seeing in 8.5.
>
> It's definitely possible that we're doing something wrong in our extension
> (which I can't share the source of) but it seems like the memory cost of
> FuzzyQuery now is totally disproportionate to what it was before. We've not
> had issues like this with our extension before (which doesn't mean that our
> parser is flawless, but it's not been causing noticeable problems for the
> last 4 years).
>
>
> So I suppose the question is, are we misusing FuzzyQuery in some way (hard
> for you to say without seeing the source), or are the recent changes using
> more memory than they should?
>
> I will investigate further into what we're doing. But I could maybe use
> some help to create a stress test for Lucene itself that compares the
> memory consumption of the old FuzzyQuery vs the new, to see whether it's
> fundamentally bad for memory or if it's just how we're using it.
>
> Regards,
> Colvin
>
>
>
>


FuzzyQuery causing Out of Memory Errors in 8.5.x

2020-04-22 Thread Colvin Cowie
Hello,

I'm moving our product from 8.3.1 to 8.5.1 in dev and we've got tests
failing because Solr is getting OOMEs with a 512mb heap where it was
previously fine.

I ran our tests on both versions with jconsole to track the heap usage.
Here's a little comparison. 8.5.1 dies part way through
https://drive.google.com/open?id=113Ujts-lzv9ZBJOUB78LA2Qw5PsIsajO

We have our own query parser as an extension to Solr, and we do various
things with user queries, including generating FuzzyQuery-s. Our
implementation of org.apache.solr.search.QParser.parse() isn't stateful and
parses the qstr and returns new Query objects each time it's called.
With JProfiler on I can see that the majority of the heap is being
allocated through FuzzyQuery's constructor.
https://issues.apache.org/jira/browse/LUCENE-9068 moved construction of the
automata from the FuzzyTermsEnum to the FuzzyQuery's constructor.

When profiling on 8.3.1 we still have a fairly large number of
FuzzyTermEnums created at times, but that's accounting for about ~40mb of
the heap for a few seconds rather than the 100mb to 300mb of continual
allocation for FuzzyQuery I'm seeing in 8.5.

It's definitely possible that we're doing something wrong in our extension
(which I can't share the source of) but it seems like the memory cost of
FuzzyQuery now is totally disproportionate to what it was before. We've not
had issues like this with our extension before (which doesn't mean that our
parser is flawless, but it's not been causing noticeable problems for the
last 4 years).


So I suppose the question is, are we misusing FuzzyQuery in some way (hard
for you to say without seeing the source), or are the recent changes using
more memory than they should?

I will investigate further into what we're doing. But I could maybe use
some help to create a stress test for Lucene itself that compares the
memory consumption of the old FuzzyQuery vs the new, to see whether it's
fundamentally bad for memory or if it's just how we're using it.

Regards,
Colvin


Re: Out of Memory Errors

2017-06-14 Thread Susheel Kumar
The attachment will not come through. Can you upload it via Dropbox or another
file-sharing site?

On Wed, Jun 14, 2017 at 12:41 PM, Satya Marivada 
wrote:

> Susheel, please see attached. The heap towards the end of the graph has
> spiked.
>
>
>
> On Wed, Jun 14, 2017 at 11:46 AM Susheel Kumar 
> wrote:
>
>> You may have gc logs saved when OOM happened. Can you draw it in GC Viewer
>> or so and share.
>>
>> Thnx
>>
>> On Wed, Jun 14, 2017 at 11:26 AM, Satya Marivada <
>> satya.chaita...@gmail.com>
>> wrote:
>>
>> > Hi,
>> >
>> > I am getting Out of Memory Errors after a while on solr-6.3.0.
>> > The -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/
>> oom_solr.sh
>> > just kills the jvm right after.
>> > Using Jconsole, I see the nice triangle pattern, where it uses the heap
>> > and being reclaimed back.
>> >
>> > The heap size is set at 3g. The index size hosted on that particular
>> node
>> > is 17G.
>> >
>> > java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
>> > -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
>> > -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
>> > -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
>> > -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly -XX:
>> > CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
>> >
>> > Looking at the solr_gc.log.0, the eden space is being used 100% all the
>> > while and being successfully reclaimed. So don't think that has go to do
>> > with it.
>> >
>> > Apart from that in the solr.log, I see exceptions that are aftermath of
>> > killing the jvm
>> >
>> > org.eclipse.jetty.io.EofException: Closed
>> > at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.
>> java:383)
>> > at org.apache.commons.io.output.ProxyOutputStream.write(
>> > ProxyOutputStream.java:90)
>> > at org.apache.solr.common.util.FastOutputStream.flush(
>> > FastOutputStream.java:213)
>> > at org.apache.solr.common.util.FastOutputStream.flushBuffer(
>> > FastOutputStream.java:206)
>> > at org.apache.solr.common.util.JavaBinCodec.marshal(
>> > JavaBinCodec.java:136)
>> >
>> > Any suggestions on how to go about it.
>> >
>> > Thanks,
>> > Satya
>> >
>>
>


Re: Out of Memory Errors

2017-06-14 Thread Satya Marivada
Susheel, please see attached. The heap towards the end of the graph has
spiked.



On Wed, Jun 14, 2017 at 11:46 AM Susheel Kumar 
wrote:

> You may have gc logs saved when OOM happened. Can you draw it in GC Viewer
> or so and share.
>
> Thnx
>
> On Wed, Jun 14, 2017 at 11:26 AM, Satya Marivada <
> satya.chaita...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I am getting Out of Memory Errors after a while on solr-6.3.0.
> > The
> -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/oom_solr.sh
> > just kills the jvm right after.
> > Using Jconsole, I see the nice triangle pattern, where it uses the heap
> > and being reclaimed back.
> >
> > The heap size is set at 3g. The index size hosted on that particular node
> > is 17G.
> >
> > java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
> > -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
> > -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
> > -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
> > -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly -XX:
> > CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
> >
> > Looking at the solr_gc.log.0, the eden space is being used 100% all the
> > while and being successfully reclaimed. So don't think that has go to do
> > with it.
> >
> > Apart from that in the solr.log, I see exceptions that are aftermath of
> > killing the jvm
> >
> > org.eclipse.jetty.io.EofException: Closed
> > at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:383)
> > at org.apache.commons.io.output.ProxyOutputStream.write(
> > ProxyOutputStream.java:90)
> > at org.apache.solr.common.util.FastOutputStream.flush(
> > FastOutputStream.java:213)
> > at org.apache.solr.common.util.FastOutputStream.flushBuffer(
> > FastOutputStream.java:206)
> > at org.apache.solr.common.util.JavaBinCodec.marshal(
> > JavaBinCodec.java:136)
> >
> > Any suggestions on how to go about it.
> >
> > Thanks,
> > Satya
> >
>


Re: Out of Memory Errors

2017-06-14 Thread Susheel Kumar
You may have GC logs saved from when the OOM happened. Can you plot them in GC
Viewer or a similar tool and share?

Thnx

On Wed, Jun 14, 2017 at 11:26 AM, Satya Marivada 
wrote:

> Hi,
>
> I am getting Out of Memory Errors after a while on solr-6.3.0.
> The -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/oom_solr.sh
> just kills the jvm right after.
> Using Jconsole, I see the nice triangle pattern, where it uses the heap
> and being reclaimed back.
>
> The heap size is set at 3g. The index size hosted on that particular node
> is 17G.
>
> java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
> -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
> -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly -XX:
> CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
>
> Looking at the solr_gc.log.0, the eden space is being used 100% all the
> while and being successfully reclaimed. So don't think that has go to do
> with it.
>
> Apart from that in the solr.log, I see exceptions that are aftermath of
> killing the jvm
>
> org.eclipse.jetty.io.EofException: Closed
> at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:383)
> at org.apache.commons.io.output.ProxyOutputStream.write(
> ProxyOutputStream.java:90)
> at org.apache.solr.common.util.FastOutputStream.flush(
> FastOutputStream.java:213)
> at org.apache.solr.common.util.FastOutputStream.flushBuffer(
> FastOutputStream.java:206)
> at org.apache.solr.common.util.JavaBinCodec.marshal(
> JavaBinCodec.java:136)
>
> Any suggestions on how to go about it.
>
> Thanks,
> Satya
>


Out of Memory Errors

2017-06-14 Thread Satya Marivada
Hi,

I am getting Out of Memory Errors after a while on solr-6.3.0.
The -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/oom_solr.sh
just kills the jvm right after.
Using JConsole, I see the nice triangle pattern, where the heap is used and
then reclaimed.

The heap size is set at 3g. The index size hosted on that particular node
is 17G.

java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
-XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
-XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
-XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000

Looking at the solr_gc.log.0, the eden space is being used 100% all the
while and is being successfully reclaimed, so I don't think that has anything
to do with it.

Apart from that, in the solr.log I see exceptions that are the aftermath of
killing the JVM:

org.eclipse.jetty.io.EofException: Closed
        at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:383)
        at org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:90)
        at org.apache.solr.common.util.FastOutputStream.flush(FastOutputStream.java:213)
        at org.apache.solr.common.util.FastOutputStream.flushBuffer(FastOutputStream.java:206)
        at org.apache.solr.common.util.JavaBinCodec.marshal(JavaBinCodec.java:136)

Any suggestions on how to go about it.

Thanks,
Satya
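
To see which pool is actually filling up before the oom_solr.sh hook fires, the standard
java.lang.management API exposes the same numbers that jstat and JConsole show. A small
illustrative loop (it would have to run inside the Solr JVM, for example from a custom
plugin, to see Solr's heap; the 5-second interval is arbitrary):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class HeapPoolMonitor {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                MemoryUsage u = pool.getUsage();
                if (u.getMax() > 0) {
                    long pct = 100 * u.getUsed() / u.getMax();
                    System.out.printf("%-30s %3d%% of %d MB%n",
                            pool.getName(), pct, u.getMax() / (1024 * 1024));
                }
            }
            System.out.println("----");
            Thread.sleep(5000);    // sample every 5 seconds
        }
    }
}

Whichever pool stays pinned near 100% right before the crash (old gen, survivor space, or
one of the non-heap pools) is where to focus.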


Re: Detect Out of Memory Errors

2011-02-04 Thread Otis Gospodnetic
Hi,

There are external tools that one can use to watch Java processes, listen for 
errors, and restart processes if they die - monit, daemontools, and some 
Java-specific ones.

Otis

Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/



- Original Message 
> From: saureen 
> To: solr-user@lucene.apache.org
> Sent: Thu, January 27, 2011 9:41:56 AM
> Subject: Detect Out of Memory Errors
> 
> 
> Hi,
> 
> is ther a way by which i could detect the out of memory errors in  solr so
> that i could implement some functionality such as restarting the  tomcat or
> alert me via email whenever such error is detected.?
> -- 
> View  this message in context: 
>http://lucene.472066.n3.nabble.com/Detect-Out-of-Memory-Errors-tp2362872p2362872.html
>
> Sent  from the Solr - User mailing list archive at Nabble.com.
> 


Detect Out of Memory Errors

2011-01-27 Thread saureen

Hi,

Is there a way by which I could detect out of memory errors in Solr, so that
I could implement some functionality such as restarting Tomcat or alerting
myself via email whenever such an error is detected?
-- 
View this message in context: 
http://lucene.472066.n3.nabble.com/Detect-Out-of-Memory-Errors-tp2362872p2362872.html
Sent from the Solr - User mailing list archive at Nabble.com.
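
For detection from inside the JVM, the standard java.lang.management API can raise a
notification when a heap pool crosses a usage threshold, and a listener can turn that
into an email or a restart trigger. A minimal sketch (the 90% threshold and the listener
body are illustrative; note this warns when the heap is nearly exhausted rather than
catching the OutOfMemoryError itself):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.NotificationEmitter;

public class LowMemoryAlert {
    public static void install() {
        // Ask for a JMX notification when any heap pool that supports thresholds passes 90% usage.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                long max = pool.getUsage().getMax();
                if (max > 0) {
                    pool.setUsageThreshold((long) (max * 0.9));
                }
            }
        }
        NotificationEmitter emitter = (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener((notification, handback) -> {
            if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(notification.getType())) {
                // Replace this with sending an email or touching a file that a watchdog script checks.
                System.err.println("Heap usage crossed 90% - an OOM is likely soon");
            }
        }, null, null);
    }
}

Catching the actual OutOfMemoryError reliably from inside the JVM is harder, which is why
an external watcher (monit, daemontools) or a -XX:OnOutOfMemoryError hook is usually what
performs the restart or sends the alert.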


Re: Out of Memory Errors

2008-10-22 Thread Jae Joo
Here is what I am doing to check the memory status:
1. Run the servlet and Solr application.
2. On a command prompt, run jstat -gc <pid> 5s (5s means sampling every 5
seconds).
3. Watch it or pipe it to a file.
4. Analyze the gathered data.

Jae
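
To automate steps 2 and 3, a tiny wrapper along these lines (purely illustrative; it
assumes jstat is on the PATH and that the pid of the Solr/servlet JVM is passed as the
first argument) can pipe the samples straight to a file for later analysis:

import java.io.File;
import java.io.IOException;

public class JstatToFile {
    public static void main(String[] args) throws IOException, InterruptedException {
        String pid = args[0];                                        // pid of the JVM to watch
        ProcessBuilder pb = new ProcessBuilder("jstat", "-gc", pid, "5s");
        pb.redirectErrorStream(true);
        pb.redirectOutput(new File("jstat-gc-" + pid + ".log"));     // step 3: pipe to a file
        Process p = pb.start();
        p.waitFor();                                                 // runs until jstat or the target JVM exits
    }
}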

On Tue, Oct 21, 2008 at 9:48 PM, Willie Wong <[EMAIL PROTECTED]>wrote:

> Hello,
>
> I've been having issues with out of memory errors on searches in Solr. I
> was wondering if I'm hitting a limit with solr or if I've configured
> something seriously wrong.
>
> Solr Setup
> - 3 cores
> - 3163615 documents each
> - 10 GB size
> - approx 10 fields
> - document sizes vary from a few kb to a few MB
> - no faceting is used however the search query can be fairly complex with
> 8 or more fields being searched on at once
>
> Environment:
> - windows 2003
> - 2.8 GHz zeon processor
> - 1.5 GB memory assigned to solr
> - Jetty 6 server
>
> Once we get to around a few  concurrent users OOM start occuring and Jetty
> restarts.  Would this just be a case of more memory or are there certain
> configuration settings that need to be set?  We're using an out of the box
> Solr 1.3 beta version.
>
> A few of the things we considered that might help:
> - Removing sorts on the result sets (result sets are approx 40,000 +
> documents)
> - Reducing cache sizes such as the queryResultMaxDocsCached setting,
> document cache, queryResultCache, filterCache, etc
>
> Am I missing anything else that should be looked at, or is it time to
> simply increase the memory/start looking at distributing the indexes?  Any
> help would be much appreciated.
>
>
> Regards,
>
> WW
>


Re: Out of Memory Errors

2008-10-22 Thread Otis Gospodnetic
Hi,

Without knowing the details, I suspect it's just that a 1.5GB heap is not enough.
Yes, sorting will use your heap, as will various Solr caches, as will norms, so
double-check your schema to make sure you are using field types like string
where you can, not text, for example. If you sort by timestamp-like fields,
reduce their granularity as much as possible.


Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
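
On the timestamp-granularity point: if the documents are built in Java, one illustrative
option (Solr date math such as NOW/DAY can achieve the same effect; the field and method
names here are made up) is to round the sort field before indexing, so the field holds
far fewer distinct values:

import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class SortFieldGranularity {
    // Rounds an event time down to the day, so a per-day sort field has at most ~365 distinct values a year.
    static Instant toDayGranularity(Instant eventTime) {
        return eventTime.truncatedTo(ChronoUnit.DAYS);
    }

    public static void main(String[] args) {
        Instant raw = Instant.parse("2008-10-22T03:48:14.123Z");
        System.out.println(toDayGranularity(raw));   // 2008-10-22T00:00:00Z
    }
}

Fewer distinct values in a sort field means a much smaller field cache entry when Lucene
sorts on it.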



- Original Message 
> From: Willie Wong <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Tuesday, October 21, 2008 9:48:14 PM
> Subject: Out of Memory Errors
> 
> Hello,
> 
> I've been having issues with out of memory errors on searches in Solr. I 
> was wondering if I'm hitting a limit with solr or if I've configured 
> something seriously wrong.
> 
> Solr Setup
> - 3 cores 
> - 3163615 documents each
> - 10 GB size
> - approx 10 fields
> - document sizes vary from a few kb to a few MB
> - no faceting is used however the search query can be fairly complex with 
> 8 or more fields being searched on at once
> 
> Environment:
> - windows 2003
> - 2.8 GHz zeon processor
> - 1.5 GB memory assigned to solr
> - Jetty 6 server
> 
> Once we get to around a few  concurrent users OOM start occuring and Jetty 
> restarts.  Would this just be a case of more memory or are there certain 
> configuration settings that need to be set?  We're using an out of the box 
> Solr 1.3 beta version. 
> 
> A few of the things we considered that might help:
> - Removing sorts on the result sets (result sets are approx 40,000 + 
> documents)
> - Reducing cache sizes such as the queryResultMaxDocsCached setting, 
> document cache, queryResultCache, filterCache, etc
> 
> Am I missing anything else that should be looked at, or is it time to 
> simply increase the memory/start looking at distributing the indexes?  Any 
> help would be much appreciated.
> 
> 
> Regards,
> 
> WW



RE: Out of Memory Errors

2008-10-22 Thread r.prieto
Hi Willie,

Are you using highlighting?

If the answer is yes, you need to know that for each document retrieved, Solr
highlighting loads the full field used for this functionality into memory. If
the field is too long, you will have memory problems.

You can solve the problem using this patch:

http://mail-archives.apache.org/mod_mbox/lucene-solr-dev/200806.mbox/%3C1552
[EMAIL PROTECTED]

to copy the content of the field used for highlighting to another field
and reduce its size.

You also need to know that Windows has a 2 GB per-process memory limit.



-----Original Message-----
From: Willie Wong [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 22 October 2008 3:48
To: solr-user@lucene.apache.org
Subject: Out of Memory Errors

Hello,

I've been having issues with out of memory errors on searches in Solr. I 
was wondering if I'm hitting a limit with solr or if I've configured 
something seriously wrong.

Solr Setup
- 3 cores 
- 3163615 documents each
- 10 GB size
- approx 10 fields
- document sizes vary from a few kb to a few MB
- no faceting is used however the search query can be fairly complex with 
8 or more fields being searched on at once

Environment:
- windows 2003
- 2.8 GHz zeon processor
- 1.5 GB memory assigned to solr
- Jetty 6 server

Once we get to around a few  concurrent users OOM start occuring and Jetty 
restarts.  Would this just be a case of more memory or are there certain 
configuration settings that need to be set?  We're using an out of the box 
Solr 1.3 beta version. 

A few of the things we considered that might help:
- Removing sorts on the result sets (result sets are approx 40,000 + 
documents)
- Reducing cache sizes such as the queryResultMaxDocsCached setting, 
document cache, queryResultCache, filterCache, etc

Am I missing anything else that should be looked at, or is it time to 
simply increase the memory/start looking at distributing the indexes?  Any 
help would be much appreciated.


Regards,

WW
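
A lighter-weight variant of the same idea, if the indexing client is Java, is to truncate
the copy of the text that is only used for highlighting before it is sent (the 50,000
character cap and the notion of a separate highlight-only field are illustrative; the
full text still goes into the normal searchable field):

public class HighlightFieldTrim {
    private static final int MAX_HIGHLIGHT_CHARS = 50_000;   // arbitrary cap for the highlight-only copy

    // Returns a shortened copy of the body text destined for a separate highlight field.
    static String forHighlightField(String fullBody) {
        if (fullBody == null || fullBody.length() <= MAX_HIGHLIGHT_CHARS) {
            return fullBody;
        }
        return fullBody.substring(0, MAX_HIGHLIGHT_CHARS);
    }

    public static void main(String[] args) {
        StringBuilder big = new StringBuilder();
        for (int i = 0; i < 200_000; i++) {
            big.append('x');
        }
        System.out.println(forHighlightField(big.toString()).length());   // 50000
    }
}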



Re: Out of Memory Errors

2008-10-22 Thread Nick Jenkin
Have you confirmed Java's -Xmx setting? (Max memory)

e.g. java -Xmx2000m -jar start.jar
-Nick

On Wed, Oct 22, 2008 at 3:24 PM, Mark Miller <[EMAIL PROTECTED]> wrote:
> How much RAM in the box total? How many sort fields and what types? Sorts on
> each core?
>
> Willie Wong wrote:
>>
>> Hello,
>>
>> I've been having issues with out of memory errors on searches in Solr. I
>> was wondering if I'm hitting a limit with solr or if I've configured
>> something seriously wrong.
>>
>> Solr Setup
>> - 3 cores - 3163615 documents each
>> - 10 GB size
>> - approx 10 fields
>> - document sizes vary from a few kb to a few MB
>> - no faceting is used however the search query can be fairly complex with
>> 8 or more fields being searched on at once
>>
>> Environment:
>> - windows 2003
>> - 2.8 GHz zeon processor
>> - 1.5 GB memory assigned to solr
>> - Jetty 6 server
>>
>> Once we get to around a few  concurrent users OOM start occuring and Jetty
>> restarts.  Would this just be a case of more memory or are there certain
>> configuration settings that need to be set?  We're using an out of the box
>> Solr 1.3 beta version.
>> A few of the things we considered that might help:
>> - Removing sorts on the result sets (result sets are approx 40,000 +
>> documents)
>> - Reducing cache sizes such as the queryResultMaxDocsCached setting,
>> document cache, queryResultCache, filterCache, etc
>>
>> Am I missing anything else that should be looked at, or is it time to
>> simply increase the memory/start looking at distributing the indexes?  Any
>> help would be much appreciated.
>>
>>
>> Regards,
>>
>> WW
>>
>>
>
>


Re: Out of Memory Errors

2008-10-21 Thread Mark Miller
How much RAM in the box total? How many sort fields and what types? 
Sorts on each core?


Willie Wong wrote:

Hello,

I've been having issues with out of memory errors on searches in Solr. I 
was wondering if I'm hitting a limit with solr or if I've configured 
something seriously wrong.


Solr Setup
- 3 cores 
- 3163615 documents each

- 10 GB size
- approx 10 fields
- document sizes vary from a few kb to a few MB
- no faceting is used however the search query can be fairly complex with 
8 or more fields being searched on at once


Environment:
- windows 2003
- 2.8 GHz zeon processor
- 1.5 GB memory assigned to solr
- Jetty 6 server

Once we get to around a few  concurrent users OOM start occuring and Jetty 
restarts.  Would this just be a case of more memory or are there certain 
configuration settings that need to be set?  We're using an out of the box 
Solr 1.3 beta version. 


A few of the things we considered that might help:
- Removing sorts on the result sets (result sets are approx 40,000 + 
documents)
- Reducing cache sizes such as the queryResultMaxDocsCached setting, 
document cache, queryResultCache, filterCache, etc


Am I missing anything else that should be looked at, or is it time to 
simply increase the memory/start looking at distributing the indexes?  Any 
help would be much appreciated.



Regards,

WW

  




Out of Memory Errors

2008-10-21 Thread Willie Wong
Hello,

I've been having issues with out of memory errors on searches in Solr. I 
was wondering if I'm hitting a limit with solr or if I've configured 
something seriously wrong.

Solr Setup
- 3 cores 
- 3163615 documents each
- 10 GB size
- approx 10 fields
- document sizes vary from a few kb to a few MB
- no faceting is used however the search query can be fairly complex with 
8 or more fields being searched on at once

Environment:
- windows 2003
- 2.8 GHz Xeon processor
- 1.5 GB memory assigned to solr
- Jetty 6 server

Once we get to around a few concurrent users, OOMs start occurring and Jetty 
restarts.  Would this just be a case of needing more memory, or are there certain 
configuration settings that need to be set?  We're using an out-of-the-box 
Solr 1.3 beta version. 

A few of the things we considered that might help:
- Removing sorts on the result sets (result sets are approx 40,000 + 
documents)
- Reducing cache sizes such as the queryResultMaxDocsCached setting, 
document cache, queryResultCache, filterCache, etc

Am I missing anything else that should be looked at, or is it time to 
simply increase the memory/start looking at distributing the indexes?  Any 
help would be much appreciated.


Regards,

WW