Re: Out of memory errors with Spatial indexing

2020-07-06 Thread David Smiley
I believe you are experiencing this bug: LUCENE-5056

The fix would probably involve adjusting the code here:
org.apache.lucene.spatial.query.SpatialArgs#calcDistanceFromErrPct
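
For context, a rough sketch of that method's logic, paraphrased from memory
rather than copied from the Lucene source: the allowed error distance is a
fraction of the shape's bounding-box radius, so a degenerate shape hugging a
pole yields a near-zero distance and the prefix tree subdivides almost
without bound.

import org.locationtech.spatial4j.context.SpatialContext;
import org.locationtech.spatial4j.shape.Point;
import org.locationtech.spatial4j.shape.Rectangle;
import org.locationtech.spatial4j.shape.Shape;

// Paraphrased sketch of SpatialArgs#calcDistanceFromErrPct, not verbatim.
final class ErrPctSketch {
  static double calcDistanceFromErrPct(Shape shape, double distErrPct, SpatialContext ctx) {
    if (distErrPct == 0 || shape instanceof Point) {
      return 0;
    }
    Rectangle bbox = shape.getBoundingBox();
    // The distance from the bbox center to a corner approximates the shape's radius.
    Point ctr = bbox.getCenter();
    double y = ctr.getY() >= 0 ? bbox.getMaxY() : bbox.getMinY();
    double diagonalDist = ctx.getDistCalc().distance(ctr, bbox.getMaxX(), y);
    // A tiny radius (e.g. a line pinned to a pole) means a tiny allowed error,
    // which forces very deep grid recursion during indexing.
    return diagonalDist * distErrPct;
  }
}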

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


On Mon, Jul 6, 2020 at 5:18 AM Sunil Varma  wrote:

> Hi David
> Thanks for your response. Yes, I noticed that all the data causing the
> issue was at the poles. I tried the "RptWithGeometrySpatialField" field
> type definition but get a "Spatial context does not support S2 spatial
> index" error. Setting spatialContextFactory="Geo3D", I still see the
> original OOM error.
>
> On Sat, 4 Jul 2020 at 05:49, David Smiley  wrote:
>
> > Hi Sunil,
> >
> > Your shape is at a pole, and I'm aware of a bug causing an exponential
> > explosion of needed grid squares when you have polygons super-close to
> > the pole.  Might you try S2PrefixTree instead?  I forget if this would
> > fix it
> > or not by itself.  For indexing non-point data, I recommend
> > class="solr.RptWithGeometrySpatialField" which internally is based off a
> > combination of a course grid and storing the original vector geometry for
> > accurate verification:
> >  > class="solr.RptWithGeometrySpatialField"
> >   prefixTree="s2" />
> > The internally coarser grid will lessen the impact of that pole bug.
> >
> > ~ David Smiley
> > Apache Lucene/Solr Search Developer
> > http://www.linkedin.com/in/davidwsmiley
> >
> >
> > On Fri, Jul 3, 2020 at 7:48 AM Sunil Varma 
> > wrote:
> >
> > > We are seeing OOM errors when trying to index some spatial data. I
> > > believe the data itself might not be valid, but it shouldn't cause the
> > > server to crash. We see this on both Solr 7.6 and Solr 8. Below is the
> > > input that is causing the error.
> > >
> > > {
> > > "id": "bad_data_1",
> > > "spatialwkt_srpt": "LINESTRING (-126.86037681029909 -90.0
> > > 1.000150474662E30, 73.58164711175415 -90.0 1.000150474662E30,
> > > 74.52836551959528 -90.0 1.000150474662E30, 74.97006811540834 -90.0
> > > 1.000150474662E30)"
> > > }
> > >
> > > The above dynamic field is mapped to the field type "location_rpt"
> > > (solr.SpatialRecursivePrefixTreeFieldType).
> > >
> > >   Any pointers to get around this issue would be highly appreciated.
> > >
> > > Thanks!
> > >
> >
>


Re: Out of memory errors with Spatial indexing

2020-07-06 Thread Sunil Varma
Hi David
Thanks for your response. Yes, I noticed that all the data causing the
issue was at the poles. I tried the "RptWithGeometrySpatialField" field
type definition but get a "Spatial context does not support S2 spatial
index" error. Setting spatialContextFactory="Geo3D", I still see the
original OOM error.
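
For reference, the combination under discussion would look roughly like this
in the schema (a sketch: the type name is illustrative, and the S2 prefix
tree requires a spatial context that supports it, such as Geo3D):

<fieldType name="location_rpt_geom" class="solr.RptWithGeometrySpatialField"
           spatialContextFactory="Geo3D"
           prefixTree="s2"/>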

On Sat, 4 Jul 2020 at 05:49, David Smiley  wrote:

> Hi Sunil,
>
> Your shape is at a pole, and I'm aware of a bug causing an exponential
> explosion of needed grid squares when you have polygons super-close to the
> pole.  Might you try S2PrefixTree instead?  I forget if this would fix it
> or not by itself.  For indexing non-point data, I recommend
> class="solr.RptWithGeometrySpatialField" which internally is based off a
> combination of a course grid and storing the original vector geometry for
> accurate verification:
>  class="solr.RptWithGeometrySpatialField"
>   prefixTree="s2" />
> The internally coarser grid will lessen the impact of that pole bug.
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Fri, Jul 3, 2020 at 7:48 AM Sunil Varma 
> wrote:
>
> > We are seeing OOM errors when trying to index some spatial data. I
> > believe the data itself might not be valid, but it shouldn't cause the
> > server to crash. We see this on both Solr 7.6 and Solr 8. Below is the
> > input that is causing the error.
> >
> > {
> > "id": "bad_data_1",
> > "spatialwkt_srpt": "LINESTRING (-126.86037681029909 -90.0
> > 1.000150474662E30, 73.58164711175415 -90.0 1.000150474662E30,
> > 74.52836551959528 -90.0 1.000150474662E30, 74.97006811540834 -90.0
> > 1.000150474662E30)"
> > }
> >
> > The above dynamic field is mapped to the field type "location_rpt"
> > (solr.SpatialRecursivePrefixTreeFieldType).
> >
> >   Any pointers to get around this issue would be highly appreciated.
> >
> > Thanks!
> >
>


Re: Out of memory errors with Spatial indexing

2020-07-03 Thread David Smiley
Hi Sunil,

Your shape is at a pole, and I'm aware of a bug causing an exponential
explosion of needed grid squares when you have polygons super-close to the
pole.  Might you try S2PrefixTree instead?  I forget if this would fix it
or not by itself.  For indexing non-point data, I recommend
class="solr.RptWithGeometrySpatialField" which internally is based off a
combination of a course grid and storing the original vector geometry for
accurate verification:

The internally coarser grid will lessen the impact of that pole bug.
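
Independent of the schema, it can also be worth rejecting obviously bad
geometries before they ever reach Solr. A minimal client-side sketch,
assuming JTS is on the classpath; the Z cutoff treats the huge Z value seen
in the sample data as a "no data" sentinel, which is an assumption about
this particular feed:

import org.locationtech.jts.geom.Coordinate;
import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.io.WKTReader;

// Hypothetical pre-index guard, not part of Solr: reject geometries whose
// vertices sit at a pole or carry an implausibly large Z value.
public final class WktSanityCheck {
  // Assumption: in this feed, Z values this large are "no data" sentinels.
  private static final double MAX_PLAUSIBLE_Z = 1e8;

  public static boolean looksIndexable(String wkt) {
    try {
      Geometry g = new WKTReader().read(wkt);
      for (Coordinate c : g.getCoordinates()) {
        if (Math.abs(c.y) >= 90.0) return false;   // vertex at (or beyond) a pole
        if (!Double.isNaN(c.z) && Math.abs(c.z) > MAX_PLAUSIBLE_Z) return false;
      }
      return true;
    } catch (Exception e) {
      return false;  // unparseable WKT should not be indexed either
    }
  }
}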

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


On Fri, Jul 3, 2020 at 7:48 AM Sunil Varma  wrote:

> We are seeing OOM errors when trying to index some spatial data. I believe
> the data itself might not be valid, but it shouldn't cause the server to
> crash. We see this on both Solr 7.6 and Solr 8. Below is the input that is
> causing the error.
>
> {
> "id": "bad_data_1",
> "spatialwkt_srpt": "LINESTRING (-126.86037681029909 -90.0
> 1.000150474662E30, 73.58164711175415 -90.0 1.000150474662E30,
> 74.52836551959528 -90.0 1.000150474662E30, 74.97006811540834 -90.0
> 1.000150474662E30)"
> }
>
> The above dynamic field is mapped to the field type "location_rpt"
> (solr.SpatialRecursivePrefixTreeFieldType).
>
>   Any pointers to get around this issue would be highly appreciated.
>
> Thanks!
>


Re: Out of Memory Errors

2017-06-14 Thread Susheel Kumar
The attachment will not come through. Can you upload it via Dropbox or
another file-sharing site?

On Wed, Jun 14, 2017 at 12:41 PM, Satya Marivada 
wrote:

> Susheel, please see attached. The heap towards the end of the graph has
> spiked.
>
>
>
> On Wed, Jun 14, 2017 at 11:46 AM Susheel Kumar 
> wrote:
>
>> You may have GC logs saved from when the OOM happened. Can you plot them
>> in GCViewer or a similar tool and share the result?
>>
>> Thnx
>>
>> On Wed, Jun 14, 2017 at 11:26 AM, Satya Marivada <
>> satya.chaita...@gmail.com>
>> wrote:
>>
>> > Hi,
>> >
>> > I am getting Out of Memory Errors after a while on solr-6.3.0.
>> > The -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/oom_solr.sh
>> > just kills the JVM right after.
>> > Using JConsole, I see the nice triangle pattern, where the heap is used
>> > and then reclaimed.
>> >
>> > The heap size is set at 3g. The index size hosted on that particular
>> > node is 17G.
>> >
>> > java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
>> > -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
>> > -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
>> > -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
>> > -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
>> > -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
>> >
>> > Looking at solr_gc.log.0, the eden space is used 100% all the while and
>> > is successfully reclaimed, so I don't think that has anything to do
>> > with it.
>> >
>> > Apart from that, in solr.log I see exceptions that are the aftermath of
>> > killing the JVM:
>> >
>> > org.eclipse.jetty.io.EofException: Closed
>> > at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:383)
>> > at org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:90)
>> > at org.apache.solr.common.util.FastOutputStream.flush(FastOutputStream.java:213)
>> > at org.apache.solr.common.util.FastOutputStream.flushBuffer(FastOutputStream.java:206)
>> > at org.apache.solr.common.util.JavaBinCodec.marshal(JavaBinCodec.java:136)
>> >
>> > Any suggestions on how to go about it?
>> >
>> > Thanks,
>> > Satya
>> >
>>
>


Re: Out of Memory Errors

2017-06-14 Thread Satya Marivada
Susheel, please see attached. The heap towards the end of the graph has
spiked.



On Wed, Jun 14, 2017 at 11:46 AM Susheel Kumar 
wrote:

> You may have GC logs saved from when the OOM happened. Can you plot them
> in GCViewer or a similar tool and share the result?
>
> Thnx
>
> On Wed, Jun 14, 2017 at 11:26 AM, Satya Marivada <
> satya.chaita...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I am getting Out of Memory Errors after a while on solr-6.3.0.
> > The -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/oom_solr.sh
> > just kills the JVM right after.
> > Using JConsole, I see the nice triangle pattern, where the heap is used
> > and then reclaimed.
> >
> > The heap size is set at 3g. The index size hosted on that particular node
> > is 17G.
> >
> > java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
> > -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
> > -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
> > -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
> > -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
> > -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
> >
> > Looking at solr_gc.log.0, the eden space is used 100% all the while and
> > is successfully reclaimed, so I don't think that has anything to do
> > with it.
> >
> > Apart from that, in solr.log I see exceptions that are the aftermath of
> > killing the JVM:
> >
> > org.eclipse.jetty.io.EofException: Closed
> > at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:383)
> > at org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:90)
> > at org.apache.solr.common.util.FastOutputStream.flush(FastOutputStream.java:213)
> > at org.apache.solr.common.util.FastOutputStream.flushBuffer(FastOutputStream.java:206)
> > at org.apache.solr.common.util.JavaBinCodec.marshal(JavaBinCodec.java:136)
> >
> > Any suggestions on how to go about it?
> >
> > Thanks,
> > Satya
> >
>


Re: Out of Memory Errors

2017-06-14 Thread Susheel Kumar
You may have GC logs saved from when the OOM happened. Can you plot them in
GCViewer or a similar tool and share the result?

Thnx

On Wed, Jun 14, 2017 at 11:26 AM, Satya Marivada 
wrote:

> Hi,
>
> I am getting Out of Memory Errors after a while on solr-6.3.0.
> The -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/oom_solr.sh
> just kills the JVM right after.
> Using JConsole, I see the nice triangle pattern, where the heap is used
> and then reclaimed.
>
> The heap size is set at 3g. The index size hosted on that particular node
> is 17G.
>
> java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
> -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
> -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
> -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
>
> Looking at solr_gc.log.0, the eden space is used 100% all the while and is
> successfully reclaimed, so I don't think that has anything to do with it.
>
> Apart from that, in solr.log I see exceptions that are the aftermath of
> killing the JVM:
>
> org.eclipse.jetty.io.EofException: Closed
> at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:383)
> at org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:90)
> at org.apache.solr.common.util.FastOutputStream.flush(FastOutputStream.java:213)
> at org.apache.solr.common.util.FastOutputStream.flushBuffer(FastOutputStream.java:206)
> at org.apache.solr.common.util.JavaBinCodec.marshal(JavaBinCodec.java:136)
>
> Any suggestions on how to go about it?
>
> Thanks,
> Satya
>


Re: Out of Memory Errors

2008-10-22 Thread Nick Jenkin
Have you confirmed Java's -Xmx setting? (Max memory)

e.g. java -Xmx2000m -jar start.jar (the JVM accepts k/m/g suffixes, not "MB")
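
A quick way to confirm the heap the JVM actually received is to print
Runtime.maxMemory(); a minimal sketch:

// Prints the maximum heap granted to the running JVM.
public class MaxHeap {
  public static void main(String[] args) {
    long maxBytes = Runtime.getRuntime().maxMemory();
    System.out.printf("Max heap: %.1f MB%n", maxBytes / (1024.0 * 1024.0));
  }
}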
-Nick

On Wed, Oct 22, 2008 at 3:24 PM, Mark Miller [EMAIL PROTECTED] wrote:
 How much RAM in the box total? How many sort fields and what types? Sorts on
 each core?

 Willie Wong wrote:

 Hello,

 I've been having issues with out of memory errors on searches in Solr. I
 was wondering if I'm hitting a limit with solr or if I've configured
 something seriously wrong.

 Solr Setup
 - 3 cores - 3163615 documents each
 - 10 GB size
 - approx 10 fields
 - document sizes vary from a few kb to a few MB
 - no faceting is used however the search query can be fairly complex with
 8 or more fields being searched on at once

 Environment:
 - Windows 2003
 - 2.8 GHz Xeon processor
 - 1.5 GB memory assigned to solr
 - Jetty 6 server

 Once we get to around a few concurrent users, OOMs start occurring and
 Jetty restarts.  Would this just be a case of needing more memory, or are
 there certain
 configuration settings that need to be set?  We're using an out of the box
 Solr 1.3 beta version.
 A few of the things we considered that might help:
 - Removing sorts on the result sets (result sets are approx 40,000 +
 documents)
 - Reducing cache sizes such as the queryResultMaxDocsCached setting,
 document cache, queryResultCache, filterCache, etc

 Am I missing anything else that should be looked at, or is it time to
 simply increase the memory/start looking at distributing the indexes?  Any
 help would be much appreciated.


 Regards,

 WW






RE: Out of Memory Errors

2008-10-22 Thread r.prieto
Hi Willie,

Are you using highlighting?

If the answer is yes, you need to know that for each document retrieved,
Solr highlighting loads into memory the full field that is used for this
functionality. If the field is too long, you will have memory problems.

You can solve the problem using this patch:

http://mail-archives.apache.org/mod_mbox/lucene-solr-dev/200806.mbox/%3C1552
[EMAIL PROTECTED]

to copy the content of the field used for highlighting into another field
and reduce its size.
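
In the schema, the idea looks roughly like this (a sketch; the field names
and the maxChars value are illustrative). Highlighting would then be done on
the truncated copy instead of the original field:

<field name="content" type="text" indexed="true" stored="false"/>
<field name="content_hl" type="text" indexed="true" stored="true"/>
<copyField source="content" dest="content_hl" maxChars="10000"/>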

You also need to know that 32-bit Windows limits each process to 2 GB of
memory.



-Original Message-
From: Willie Wong [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 22, 2008 3:48
To: solr-user@lucene.apache.org
Subject: Out of Memory Errors

Hello,

I've been having issues with out of memory errors on searches in Solr. I 
was wondering if I'm hitting a limit with solr or if I've configured 
something seriously wrong.

Solr Setup
- 3 cores 
- 3163615 documents each
- 10 GB size
- approx 10 fields
- document sizes vary from a few kb to a few MB
- no faceting is used however the search query can be fairly complex with 
8 or more fields being searched on at once

Environment:
- Windows 2003
- 2.8 GHz Xeon processor
- 1.5 GB memory assigned to solr
- Jetty 6 server

Once we get to around a few concurrent users, OOMs start occurring and
Jetty restarts.  Would this just be a case of needing more memory, or are
there certain configuration settings that need to be set?  We're using an
out of the box Solr 1.3 beta version.

A few of the things we considered that might help:
- Removing sorts on the result sets (result sets are approx 40,000 + 
documents)
- Reducing cache sizes such as the queryResultMaxDocsCached setting, 
document cache, queryResultCache, filterCache, etc

Am I missing anything else that should be looked at, or is it time to 
simply increase the memory/start looking at distributing the indexes?  Any 
help would be much appreciated.


Regards,

WW



Re: Out of Memory Errors

2008-10-22 Thread Jae Joo
Here is what I am doing to check the memory status.
1. Run the servlet container and the Solr application.
2. At a command prompt, run jstat -gc <pid> 5s (5s means data is sampled
every 5 seconds.)
3. Watch it or pipe to the file.
4. Analyze the data gathered.
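
If attaching jstat to the process is awkward, the same heap numbers can be
polled in-process via JMX; a minimal sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Polls heap usage every 5 seconds, mirroring the jstat sampling above.
public class HeapWatcher {
  public static void main(String[] args) throws InterruptedException {
    MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
    while (true) {
      MemoryUsage heap = mem.getHeapMemoryUsage();
      System.out.printf("heap used=%dMB committed=%dMB max=%dMB%n",
          heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
      Thread.sleep(5000);
    }
  }
}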

Jae

On Tue, Oct 21, 2008 at 9:48 PM, Willie Wong [EMAIL PROTECTED] wrote:

 Hello,

 I've been having issues with out of memory errors on searches in Solr. I
 was wondering if I'm hitting a limit with solr or if I've configured
 something seriously wrong.

 Solr Setup
 - 3 cores
 - 3163615 documents each
 - 10 GB size
 - approx 10 fields
 - document sizes vary from a few kb to a few MB
 - no faceting is used however the search query can be fairly complex with
 8 or more fields being searched on at once

 Environment:
 - Windows 2003
 - 2.8 GHz Xeon processor
 - 1.5 GB memory assigned to solr
 - Jetty 6 server

 Once we get to around a few concurrent users, OOMs start occurring and
 Jetty restarts.  Would this just be a case of needing more memory, or are
 there certain configuration settings that need to be set?  We're using an
 out of the box
 Solr 1.3 beta version.

 A few of the things we considered that might help:
 - Removing sorts on the result sets (result sets are approx 40,000 +
 documents)
 - Reducing cache sizes such as the queryResultMaxDocsCached setting,
 document cache, queryResultCache, filterCache, etc

 Am I missing anything else that should be looked at, or is it time to
 simply increase the memory/start looking at distributing the indexes?  Any
 help would be much appreciated.


 Regards,

 WW



Re: Out of Memory Errors

2008-10-21 Thread Mark Miller
How much RAM in the box total? How many sort fields and what types? 
Sorts on each core?


Willie Wong wrote:

Hello,

I've been having issues with out of memory errors on searches in Solr. I 
was wondering if I'm hitting a limit with solr or if I've configured 
something seriously wrong.


Solr Setup
- 3 cores 
- 3163615 documents each

- 10 GB size
- approx 10 fields
- document sizes vary from a few kb to a few MB
- no faceting is used however the search query can be fairly complex with 
8 or more fields being searched on at once


Environment:
- Windows 2003
- 2.8 GHz Xeon processor
- 1.5 GB memory assigned to solr
- Jetty 6 server

Once we get to around a few concurrent users, OOMs start occurring and
Jetty restarts.  Would this just be a case of needing more memory, or are
there certain configuration settings that need to be set?  We're using an
out of the box
Solr 1.3 beta version. 


A few of the things we considered that might help:
- Removing sorts on the result sets (result sets are approx 40,000 + 
documents)
- Reducing cache sizes such as the queryResultMaxDocsCached setting, 
document cache, queryResultCache, filterCache, etc


Am I missing anything else that should be looked at, or is it time to 
simply increase the memory/start looking at distributing the indexes?  Any 
help would be much appreciated.



Regards,

WW