Re: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any tests with Java 13 and the new ZGC?

2020-02-23 Thread Paras Lehana
Hi,

We are running another 24 hour test with 8GB JVM and so far it is also
> running flawlessly.


If this is the case, as Erick mentioned, the failures were probably due to
long GC pauses. During a couple of my stress tests, I found that
decreasing the JVM heap size sometimes helps (it makes GC cycles more
frequent and less intensive). Try different heap sizes and also consider
tuning the GC.

Also, do let us know about the performance of ZGC against G1GC - I'm curious.
I'm using Java 11.
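For anyone who wants to try ZGC against G1GC, a hypothetical solr.in.sh fragment might look like the following (a sketch, not a tested recommendation; SOLR_HEAP and GC_TUNE are the standard solr.in.sh variables, but verify against your install, and note that on JDK 11-14 ZGC is experimental and must be unlocked, while on JDK 15+ only -XX:+UseZGC is needed):

```shell
# Hypothetical solr.in.sh fragment -- heap size and flags are examples only.
SOLR_HEAP="8g"
# JDK 11-14: ZGC requires the experimental unlock flag.
GC_TUNE="-XX:+UnlockExperimentalVMOptions -XX:+UseZGC"
```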

On Sun, 23 Feb 2020 at 01:28, tbarkley29  wrote:

> Yes 18% of total physical RAM. The failures in the G1GC and CMS setups did
> seem to be from stop-the-world pauses.
>
> We are using the Solr Docker image, which uses G1GC by default, and we tuned
> G1GC further. Even with tuning, the performance test failed after about 8
> hours.
> With ZGC we had consistent 12- and 24-hour performance tests which ran
> flawlessly.
>
> We are running another 24 hour test with 8GB JVM and so far it is also
> running flawlessly. I will post an update when completed.
>
> Garbage collection is not my area of expertise but so far I am following
> the
> data and out of the box ZGC is performing drastically better.
>
>
>
> --
> Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>


-- 
Regards,

Paras Lehana
Development Engineer, Auto-Suggest,
IndiaMART InterMESH Ltd


Re: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any tests with Java 13 and the new ZGC?

2020-02-22 Thread tbarkley29
Yes 18% of total physical RAM. The failures in the G1GC and CMS setups did seem
to be from stop-the-world pauses.

We are using the Solr Docker image, which uses G1GC by default, and we tuned
G1GC further. Even with tuning, the performance test failed after about 8 hours.
With ZGC we had consistent 12- and 24-hour performance tests which ran
flawlessly.

We are running another 24 hour test with 8GB JVM and so far it is also
running flawlessly. I will post an update when completed.

Garbage collection is not my area of expertise but so far I am following the
data and out of the box ZGC is performing drastically better.





Re: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any tests with Java 13 and the new ZGC?

2020-02-21 Thread Erick Erickson
People are certainly interested. You’re running on the bleeding edge of
technology, you’re very brave ;).

I’m not quite sure how to interpret “memory utilization stays around 18%”.
18% of total physical RAM or heap? I’m assuming the former.

I’m curious, how did CMS and G1GC fail? It’s perfectly understandable if
the failures were due to stop-the-world GC pauses; they can lead to timeouts
which can cause replicas to be put into recovery, or ZooKeeper to think
the node has died, etc. In extreme cases this means that the entire cluster goes 
down.

Best,
Erick

> On Feb 20, 2020, at 5:31 PM, tbarkley29  wrote:
> 
> We are currently running performance tests with Solr 8.2/OpenJDK11/ZGC. We've
> run multiple successful 12-hour tests and are currently running 24-hour
> tests. There are three nodes, each with 4 cores and 28GB of memory; the JVM is 16GB.
> We are getting a max of ~780 Pages Per Second with a max of ~8,000 users/min. CPU
> utilization stays around 80% and memory utilization stays around 18%. We
> tried various configurations with G1GC, which were unsuccessful after
> about 8 hours. We also tried CMS, which failed within an hour or so.
> Queries used in the tests were taken from Splunk, from production traffic. These
> performance tests are still ongoing and I'd be open to providing JVM metrics
> if interested.
> 
> 
> 



Re: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any tests with Java 13 and the new ZGC?

2020-02-20 Thread tbarkley29
We are currently running performance tests with Solr 8.2/OpenJDK11/ZGC. We've
run multiple successful 12-hour tests and are currently running 24-hour
tests. There are three nodes, each with 4 cores and 28GB of memory; the JVM is 16GB.
We are getting a max of ~780 Pages Per Second with a max of ~8,000 users/min. CPU
utilization stays around 80% and memory utilization stays around 18%. We
tried various configurations with G1GC, which were unsuccessful after
about 8 hours. We also tried CMS, which failed within an hour or so.
Queries used in the tests were taken from Splunk, from production traffic. These
performance tests are still ongoing and I'd be open to providing JVM metrics
if interested.





Re: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any tests with Java 13 and the new ZGC?

2019-10-15 Thread Shawn Heisey

On 10/15/2019 2:49 AM, Vassil Velichkov (Sensika) wrote:

I've reduced the JVM heap on one of the shards to 20GB and then simulated some 
heavy load to reproduce the issue in a faster way.
The solr.log ROOT logger was set to TRACE level, but I can't really see anything 
meaningful; the solr.log ends @ 07:31:40.352 GMT, while the GC log shows later entries 
and "Pause Full (Allocation Failure)".
BTW, I've never seen any automatic Full GC attempts in the GC logs. I can't 
see any OOME messages in any of the logs, only in the separate solr_oom_killer 
log, but that is the log of the killer script.

Also, to answer your previous questions:
1. We run completely stock Solr, no custom code, no plugins. 
Regardless, we never had such OOMs with Solr 4.x or Solr 6.x
2. It seems that Full GC is never triggered. In some cases in the past 
I've seen log entries for Full GC attempts, but the JVM crashes with OOM long 
before the Full GC could do anything.


The goal for good GC tuning is to avoid full GCs ever being needed.  It 
cannot be prevented entirely, especially when humongous allocations are 
involved ... but a well-tuned GC should not do them very often.


You have only included snippets from your logs.  We would need full logs 
for any of that information to be useful.  Attachments to the list 
rarely work, so you will need to use some kind of file sharing site.  I 
find dropbox to be useful for this, but if you prefer something else 
that works well, feel free to use it.


If the OutOfMemoryError exception is logged, it will be in solr.log. 
It is not always logged.  I will ask the Java folks if there is a way we 
can have the killer script provide the reason for the OOME.


It should be unnecessary to increase Solr's log level beyond INFO, but 
DEBUG might provide some useful info.  TRACE will be insanely large and 
I would not recommend it.


Thanks,
Shawn


RE: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any tests with Java 13 and the new ZGC?

2019-10-15 Thread Vassil Velichkov (Sensika)
Hi Shawn,

I've reduced the JVM heap on one of the shards to 20GB and then simulated some 
heavy load to reproduce the issue in a faster way.
The solr.log ROOT logger was set to TRACE level, but I can't really see anything 
meaningful; the solr.log ends @ 07:31:40.352 GMT, while the GC log shows later 
entries and "Pause Full (Allocation Failure)".
BTW, I've never seen any automatic Full GC attempts in the GC logs. I can't 
see any OOME messages in any of the logs, only in the separate solr_oom_killer 
log, but that is the log of the killer script.

Also, to answer your previous questions:
1. We run completely stock Solr, no custom code, no plugins. 
Regardless, we never had such OOMs with Solr 4.x or Solr 6.x
2. It seems that Full GC is never triggered. In some cases in the past 
I've seen log entries for Full GC attempts, but the JVM crashes with OOM long 
before the Full GC could do anything.
3. On a side note - it seems that when a Solr query spans across 
multiple shards (our sharding is by timePublished), the HTTP connections from 
the aggregation node to the other shards frequently time out @ 60 seconds, 
even though the Solr HTTP client request timeout is set dynamically to a much 
higher value (120-1200 seconds), and even though we've increased the timeout values 
in solr.xml for shardHandlerFactory (socketTimeout / connTimeout) to 1200 
seconds. In such cases, when we have inter-cluster aggregation timeouts, the 
end-users get "Error retrieving data" and they usually refresh the App, 
basically re-running the heavy Solr queries over and over again. I included a 
sample from the application logs below. This usage-pattern might also make 
things worse - I don't know what happens within the shards when the aggregation 
fails due to timed-out inter-shard connections. If all the shards keep 
executing the queries passed from the aggregation node, they just waste 
resources, and all subsequent query re-runs just increase the resource 
consumption.
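For reference, the shardHandlerFactory timeouts mentioned above live in solr.xml and look roughly like this (a sketch of the relevant section only; the values are in milliseconds, so 1200 seconds is 1200000):

```xml
<!-- Sketch of the solr.xml shard handler section; 1200000 ms = 1200 s -->
<shardHandlerFactory name="shardHandlerFactory"
                     class="HttpShardHandlerFactory">
  <int name="socketTimeout">1200000</int>
  <int name="connTimeout">1200000</int>
</shardHandlerFactory>
```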

>>> SOLR.LOG (last 1 minute)
2019-10-15 07:31:12.848 DEBUG (Connection evictor) [   ] 
o.a.s.u.s.InstrumentedPoolingHttpClientConnectionManager Closing expired 
connections
2019-10-15 07:31:12.848 DEBUG (Connection evictor) [   ] 
o.a.s.u.s.InstrumentedPoolingHttpClientConnectionManager Closing connections 
idle longer than 5 MILLISECONDS
2019-10-15 07:31:12.848 DEBUG (Connection evictor) [   ] 
o.a.s.u.s.InstrumentedPoolingHttpClientConnectionManager Closing expired 
connections
2019-10-15 07:31:40.352 DEBUG (Connection evictor) [   ] 
o.a.s.u.s.InstrumentedPoolingHttpClientConnectionManager Closing expired 
connections
2019-10-15 07:31:40.352 DEBUG (Connection evictor) [   ] 
o.a.s.u.s.InstrumentedPoolingHttpClientConnectionManager Closing connections 
idle longer than 5 MILLISECONDS

>>> SOLR_GC.LOG (last 1 minute)
[2019-10-15T10:32:07.509+0300][528.164s] GC(64) Pause Full (Allocation Failure)
[2019-10-15T10:32:07.539+0300][528.193s] GC(64) Phase 1: Mark live objects
[2019-10-15T10:32:16.785+0300][537.440s] GC(64) Cleaned string and symbol 
table, strings: 23724 processed, 0 removed, symbols: 149625 processed, 0 removed
[2019-10-15T10:32:16.785+0300][537.440s] GC(64) Phase 1: Mark live objects 
9246.644ms
[2019-10-15T10:32:16.785+0300][537.440s] GC(64) Phase 2: Compute new object 
addresses
[2019-10-15T10:32:23.065+0300][543.720s] GC(64) Phase 2: Compute new object 
addresses 6279.790ms
[2019-10-15T10:32:23.065+0300][543.720s] GC(64) Phase 3: Adjust pointers
[2019-10-15T10:32:28.905+0300][549.560s] GC(64) Phase 3: Adjust pointers 
5839.647ms
[2019-10-15T10:32:28.905+0300][549.560s] GC(64) Phase 4: Move objects
[2019-10-15T10:32:28.905+0300][549.560s] GC(64) Phase 4: Move objects 0.135ms
[2019-10-15T10:32:28.921+0300][549.576s] GC(64) Using 8 workers of 8 to rebuild 
remembered set
[2019-10-15T10:32:34.763+0300][555.418s] GC(64) Eden regions: 0->0(160)
[2019-10-15T10:32:34.763+0300][555.418s] GC(64) Survivor regions: 0->0(40)
[2019-10-15T10:32:34.763+0300][555.418s] GC(64) Old regions: 565->565
[2019-10-15T10:32:34.763+0300][555.418s] GC(64) Humongous regions: 75->75
[2019-10-15T10:32:34.763+0300][555.418s] GC(64) Metaspace: 
52093K->52093K(1097728K)
[2019-10-15T10:32:34.764+0300][555.418s] GC(64) Pause Full (Allocation Failure) 
20383M->20383M(20480M) 27254.340ms
[2019-10-15T10:32:34.764+0300][555.419s] GC(64) User=56.35s Sys=0.03s 
Real=27.26s

>>> solr_oom_killer-8983-2019-10-15_07_32_34.log
Running OOM killer script for process 953 for Solr on port 8983
Killed process 953

>>> JVM GC Settings
-Duser.timezone=UTC
-XX:+ParallelRefProcEnabled
-XX:+UseG1GC
-XX:+UseLargePages
-XX:ConcGCThreads=8
-XX:G1HeapRegionSize=32m
-XX:NewRatio=3
-XX:OnOutOfMemoryError=/opt/solr/bin/oom_solr.sh 8983 /var/log/solr
-XX:ParallelGCThreads=8
-XX:SurvivorRatio=4
-Xlog:gc*:file=/var/log/solr/solr_gc.log:time,uptime:filecount=9,filesize=20M
-Xms20480M
-Xmx20480M
-Xss256k

>>> App Log Sample Entries
[2019-10-15 00:01:06] PRD-01-WEB-04.ERROR [0.000580]: 

Re: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any tests with Java 13 and the new ZGC?

2019-10-14 Thread Shawn Heisey

On 10/14/2019 7:18 AM, Vassil Velichkov (Sensika) wrote:

After the migration from 6.x to 7.6 we kept the default GC for a couple of 
weeks, then we started experimenting with G1 and we've managed to achieve 
less frequent OOM crashes, but not by much.


Changing your GC settings will never prevent OOMs.  The only way to 
prevent them is to either increase the resource that's running out or 
reconfigure the program to use less of that resource.



As I explained in my previous e-mail, the unused filterCache entries are not 
discarded, even after a new SolrSearcher is started. The Replicas are synced 
with the Masters every 5 minutes, the filterCache is auto-warmed and the JVM 
heap utilization keeps going up. Within 1 to 2 hours a 64GB heap is being 
exhausted. The GC log entries clearly show that there are more and more 
humongous allocations piling up.


While it is true that the generation-specific collectors for G1 do not 
clean up humongous allocations that have become garbage, eventually Java will 
perform a full GC, which will be slow, but should clean them up.  If a 
full GC is not cleaning them up, that's a different problem, and one 
that I would suspect is actually a problem with your installation.  We 
have had memory leak bugs in Solr, but I am not aware of any that are as 
serious as your observations suggest.


You could be running into a memory leak ... but I really doubt that it 
is directly related to the filterCache or the humongous allocations. 
Upgrading to the latest release that you can would be advisable -- the 
latest 7.x version would be my first choice, or you could go all the way 
to 8.2.0.


Are you running completely stock Solr, or have you added custom code? 
One of the most common problems with custom code is leaking searcher 
objects, which will cause Java to retain the large cache entries.  We 
have seen problems where one Solr version will work perfectly with 
custom code, but when Solr is upgraded, the custom code has memory leaks.



We have a really stressful use-case: a single user opens a live-report with 
20-30 widgets, each widget performs a Solr Search or facet aggregations, 
sometimes with 5-15 complex filter queries attached to the main query, so the 
end results are visualized as pivot charts. So, one user could trigger hundreds 
of queries in a very short period of time and when we have several analysts 
working on the same time-period, we usually end up with OOM. This logic used to 
work quite well on Solr 6.x. The only other difference that comes to my mind is 
that with Solr 7.6 we've started using DocValues. I could not find 
documentation about DocValues memory consumption, so it might be related.


For cases where docValues are of major benefit, which is primarily 
facets and sorting, Solr will use less memory with docValues than it 
does with indexed terms.  Adding docValues should not result in a 
dramatic increase in memory requirements, and in many cases, should 
actually require less memory.



Yep, but I plan to generate some detailed JVM trace-dumps, so we could analyze 
which class / data structure causes the OOM. Any recommendations about what 
tool to use for a detailed JVM dump?


Usually the stacktrace itself is not helpful in diagnosing OOMs -- 
because the place where the error is thrown can be ANY allocation, not 
necessarily the one that is the major resource hog.


What I'm interested in here is the message immediately after the OOME, 
not the stacktrace.  Which I'll admit is slightly odd, because for many 
problems I *am* interested in the stacktrace.  OutOfMemoryError is one 
situation where the stacktrace is not very helpful, but the short 
message the error contains is useful.  I only asked for the stacktrace 
because collecting it will usually mean that nothing else in the message 
has been modified.


Here are two separate examples of what I am looking for:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

Caused by: java.lang.OutOfMemoryError: unable to create new native thread


Also, not sure if I could send attachments to the mailing list, but there must 
be a way to share logs...?


There are many websites that facilitate file sharing.  One example, and 
the one that I use most frequently, is dropbox.  Sending attachments to 
the list rarely works.


Thanks,
Shawn


RE: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any tests with Java 13 and the new ZGC?

2019-10-14 Thread Vassil Velichkov (Sensika)
Hi Shawn,

My answers are in-line below...

Cheers,
Vassil

-----Original Message-----
From: Shawn Heisey  
Sent: Monday, October 14, 2019 3:56 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any 
tests with Java 13 and the new ZGC?

On 10/14/2019 6:18 AM, Vassil Velichkov (Sensika) wrote:
> We have 1 x Replica with 1 x Solr Core per JVM and each JVM runs in a 
> separate VMware VM.
> We have 32 x JVMs/VMs in total, containing between 50M to 180M documents per 
> replica/core/JVM.

With 180 million documents, each filterCache entry will be 22.5 megabytes in 
size.  They will ALL be this size.

> Oops, I didn't know that, but this makes things even worse. By looking 
> at the GC log, it seems evicted entries are never discarded.

> In our case most filterCache entities (maxDoc/8 + overhead) are typically 
> more than 16MB, which is more than 50% of the max setting for 
> "XX:G1HeapRegionSize" (which is 32MB). That's why I am so interested in Java 
> 13 and ZGC, because ZGC does not have this weird limitation and collects even 
> _large_ garbage pieces :-). We have almost no documentCache or queryCache 
> entities.
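The arithmetic behind the humongous-allocation concern can be sketched as follows (G1 treats any single allocation of at least half a region as humongous; the document count here is illustrative):

```shell
# Is a filterCache BitSet "humongous" for G1? One bit per document -> maxDoc/8 bytes.
maxdoc=160000000
entry_mb=$((maxdoc / 8 / 1024 / 1024))   # ~19 MB for a 160M-doc core
region_mb=32                             # -XX:G1HeapRegionSize maximum
threshold_mb=$((region_mb / 2))          # G1's humongous threshold: half a region
echo "entry=${entry_mb}MB threshold=${threshold_mb}MB"
```

With the maximum 32MB region size, any cache entry over 16MB is allocated as a humongous object, which matches the behaviour described in the thread.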

I am not aware of any Solr testing with the new garbage collector.  I'm 
interested in knowing whether it does a better job than CMS and G1, but do not 
have any opportunities to try it.

> Currently we have some 2TB free RAM on the cluster, so I guess we could 
> test it in the next coming days. The plan is to re-index at least 2B 
> documents in a separate cluster and stress-test the new cluster with real 
> production data and real production code with Java 13 and ZGC.

Have you tried letting Solr use its default garbage collection settings instead 
of G1?  Have you tried Java 11?  Java 9 is one of the releases without long 
term support, so as Erick says, it is not recommended.

> After the migration from 6.x to 7.6 we kept the default GC for a couple 
> of weeks, then we started experimenting with G1 and we've managed to 
> achieve less frequent OOM crashes, but not by much.

> By some time tonight all shards will be rebalanced (we've added 6 more) and 
> will contain up to 100-120M documents (14.31MB + overhead should be < 16MB), 
> so hopefully this will help us to alleviate the OOM crashes.

It doesn't sound to me like your filterCache can cause OOM.  The total size of 
256 filterCache entries that are each 22.5 megabytes should be less than 6GB, 
and I would expect the other Solr caches to be smaller.
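That estimate can be checked with quick arithmetic (a sketch assuming 180M documents and the default filterCache size of 256 entries):

```shell
# Worst case: every filterCache slot holds a full maxDoc/8-byte BitSet.
maxdoc=180000000
entry_bytes=$((maxdoc / 8))        # 22,500,000 bytes, ~22.5 MB per entry
cache_size=256                     # default filterCache entry count
total_gb=$((entry_bytes * cache_size / 1024 / 1024 / 1024))
echo "total=~${total_gb}GB"        # comfortably below a 64GB heap
```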

> As I explained in my previous e-mail, the unused filterCache entries are 
> not discarded, even after a new SolrSearcher is started. The Replicas are 
> synced with the Masters every 5 minutes, the filterCache is auto-warmed 
> and the JVM heap utilization keeps going up. Within 1 to 2 hours a 64GB 
> heap is being exhausted. The GC log entries clearly show that there are 
> more and more humongous allocations piling up. 
 
If you are hitting OOMs, then some other aspect of your setup is the reason 
that's happening.  I would not normally expect a single core with
180 million documents to need more than about 16GB of heap, and 31GB should 
definitely be enough.  Hitting OOM with the heap sizes you have described is 
very strange.

>> We have a really stressful use-case: a single user opens a live-report 
>> with 20-30 widgets, each widget performs a Solr Search or facet 
>> aggregations, sometimes with 5-15 complex filter queries attached to the 
>> main query, so the end results are visualized as pivot charts. So, one 
>> user could trigger hundreds of queries in a very short period of time 
>> and when we have several analysts working on the same time-period, we 
>> usually end up with OOM. This logic used to work quite well on Solr 6.x. 
>> The only other difference that comes to my mind is that with Solr 7.6 
>> we've started using DocValues. I could not find documentation about 
>> DocValues memory consumption, so it might be related.

Perhaps the root cause of your OOMs is not heap memory, but some other system 
resource.  Do you have log entries showing the stacktrace on the OOM?

>> Yep, but I plan to generate some detailed JVM trace-dumps, so we could 
>> analyze which class / data structure causes the OOM. Any recommendations 
>> about what tool to use for a detailed JVM dump? 
Also, not sure if I could send attachments to the mailing list, but there must 
be a way to share logs...?

Thanks,
Shawn


Re: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any tests with Java 13 and the new ZGC?

2019-10-14 Thread Shawn Heisey

On 10/14/2019 6:18 AM, Vassil Velichkov (Sensika) wrote:

We have 1 x Replica with 1 x Solr Core per JVM and each JVM runs in a separate 
VMware VM.
We have 32 x JVMs/VMs in total, containing between 50M to 180M documents per 
replica/core/JVM.


With 180 million documents, each filterCache entry will be 22.5 
megabytes in size.  They will ALL be this size.



In our case most filterCache entities (maxDoc/8 + overhead) are typically more than 16MB, 
which is more than 50% of the max setting for "-XX:G1HeapRegionSize" (which is 
32MB). That's why I am so interested in Java 13 and ZGC, because ZGC does not have this 
weird limitation and collects even _large_ garbage pieces :-). We have almost no 
documentCache or queryCache entities.


I am not aware of any Solr testing with the new garbage collector.  I'm 
interested in knowing whether it does a better job than CMS and G1, but 
do not have any opportunities to try it.


Have you tried letting Solr use its default garbage collection settings 
instead of G1?  Have you tried Java 11?  Java 9 is one of the releases 
without long term support, so as Erick says, it is not recommended.



By some time tonight all shards will be rebalanced (we've added 6 more) and will 
contain up to 100-120M documents (14.31MB + overhead should be < 16MB), so 
hopefully this will help us to alleviate the OOM crashes.


It doesn't sound to me like your filterCache can cause OOM.  The total 
size of 256 filterCache entries that are each 22.5 megabytes should be 
less than 6GB, and I would expect the other Solr caches to be smaller. 
If you are hitting OOMs, then some other aspect of your setup is the 
reason that's happening.  I would not normally expect a single core with 
180 million documents to need more than about 16GB of heap, and 31GB 
should definitely be enough.  Hitting OOM with the heap sizes you have 
described is very strange.


Perhaps the root cause of your OOMs is not heap memory, but some other 
system resource.  Do you have log entries showing the stacktrace on the OOM?


Thanks,
Shawn


RE: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any tests with Java 13 and the new ZGC?

2019-10-14 Thread Vassil Velichkov (Sensika)
Hi Erick,

We have 1 x Replica with 1 x Solr Core per JVM and each JVM runs in a separate 
VMware VM.
We have 32 x JVMs/VMs in total, containing between 50M to 180M documents per 
replica/core/JVM.
In our case most filterCache entities (maxDoc/8 + overhead) are typically more 
than 16MB, which is more than 50% of the max setting for "-XX:G1HeapRegionSize" 
(which is 32MB). That's why I am so interested in Java 13 and ZGC, because ZGC 
does not have this weird limitation and collects even _large_ garbage pieces 
:-). We have almost no documentCache or queryCache entities.

By some time tonight all shards will be rebalanced (we've added 6 more) and 
will contain up to 100-120M documents (14.31MB + overhead should be < 16MB), so 
hopefully this will help us to alleviate the OOM crashes.

Cheers,
Vassil


-----Original Message-----
From: Erick Erickson  
Sent: Monday, October 14, 2019 3:03 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any 
tests with Java 13 and the new ZGC?

The filterCache isn’t a single huge allocation, it’s made up of _size_ entries, 
each individual entry shouldn’t be that big, each entry should cap around 
maxDoc/8 bytes + some overhead.

I just scanned the e-mail, I’m not clear how many _replicas_ per JVM you have, 
nor how many JVMs per server you’re running. One strategy to deal with large 
heaps if you have a lot of replicas is to run multiple JVMs, each with a 
smaller heap.

One peculiarity of heaps is that at 32G the JVM can no longer use compressed 
object pointers, so a 32G heap actually has less available memory than a 31G 
heap if many of the objects are small.


> On Oct 14, 2019, at 7:00 AM, Vassil Velichkov (Sensika) 
>  wrote:
> 
> Thanks Jörn,
> 
> Yep, we are rebalancing the cluster to keep up to ~100M documents per shard, 
> but that's not quite optimal in our use-case.
> 
> We've tried with various ratios between JVM Heap / OS RAM (up to 128GB / 
> 256GB) and we have the same Java Heap OOM crashes.
> For example, a BitSet of 160M documents is > 16MB and when we look at the G1 
> logs, it seems it never discards the humongous allocations, so they keep 
> piling. Forcing a full-garbage collection is just not practical - it takes 
> forever and the shard is not usable. Even when a new Searcher is started 
> (every several minutes) the old large filterCache entries are not freed and 
> sooner or later the JVM crashes.
> 
> On the other hand ZGC has a completely different architecture and does not 
> have the hard-coded threshold of 16MB for *humongous allocations*:
> https://wiki.openjdk.java.net/display/zgc/Main
> 
> Anyway, we will be probably testing Java 13 and ZGC with the real data, we 
> just have to reindex 30+ shards to new Solr servers, which will take a couple 
> of days :-)
> 
> Cheers,
> Vassil
> 
> -----Original Message-----
> From: Jörn Franke  
> Sent: Monday, October 14, 2019 1:47 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any 
> tests with Java 13 and the new ZGC?
> 
> I would try JDK11 - it works much better than JDK9 in general. 
> 
> I don‘t think JDK13 with ZGC will bring you better results. There seems to be 
> something strange with the JDK version or Solr version and some settings. 
> 
> Then, make sure that you have much more free memory for the OS cache than 
> the heap. Nearly 100 GB for the Solr heap sounds excessive. Try to reduce it to 
> much less.
> 
> Try the default options of Solr and use the latest 7.x version or 8.x version 
> of Solr.
> 
> Additionally you can try to shard more.
> 
>> On 14.10.2019 at 19:19, Vassil Velichkov (Sensika) 
>>  wrote:
>> 
>> Hi Everyone,
>> 
>> Since we’ve upgraded our cluster (legacy sharding) from Solr 6.x to Solr 7.6 
>> we have frequent OOM crashes on specific nodes.
>> 
>> All investigations (detailed below) lead to a hard-coded limitation in the 
>> G1 garbage collector. The Java Heap is exhausted due to too many filterCache 
>> allocations that are never discarded by the G1.
>> 
>> Our hope is to use Java 13 with the new ZGC, which is specifically designed 
>> for large heap-sizes, and supposedly would handle and dispose larger 
>> allocations. The Solr release notes claim that Solr 7.6 builds are tested 
>> with Java 11 / 12 / 13 (pre-release).
>> Does anyone use Java 13 in production and has experience with the new ZGC 
>> and large heap sizes / large document sets of more than 150M documents per 
>> shard?
>> 
>>> Some background information and reference to the possible 
>>> root-cause, described by Shawn Heisey in Solr 1.4 documentation 
>>> >
>> 
>> Our current setup is as follows:
>> 
>> 1.   All nodes are running on VMware 6.5 VMs with Debian 9u5 / Java 9 / 
>> Solr 7.6
>> 
>> 2.   Each VM has 6 or 8 x vCPUs, 128GB or 192GB RAM (50% for Java Heap / 
>> 50% for OS) and 1 x Solr Core with 80M to 160M documents, NO stored fields, 
>> DocValues ON
>> 
>> 3. 

Re: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any tests with Java 13 and the new ZGC?

2019-10-14 Thread Erick Erickson
The filterCache isn’t a single huge allocation, it’s made up of _size_ entries, 
each individual entry shouldn’t be that big, each entry should cap around 
maxDoc/8 bytes + some overhead.

I just scanned the e-mail, I’m not clear how many _replicas_ per JVM you have, 
nor how many JVMs per server you’re running. One strategy to deal with large 
heaps if you have a lot of replicas is to run multiple JVMs, each with a 
smaller heap.

One peculiarity of heaps is that at 32G the JVM can no longer use compressed 
object pointers, so a 32G heap actually has less available memory than a 31G 
heap if many of the objects are small.


> On Oct 14, 2019, at 7:00 AM, Vassil Velichkov (Sensika) 
>  wrote:
> 
> Thanks Jörn,
> 
> Yep, we are rebalancing the cluster to keep up to ~100M documents per shard, 
> but that's not quite optimal in our use-case.
> 
> We've tried with various ratios between JVM Heap / OS RAM (up to 128GB / 
> 256GB) and we have the same Java Heap OOM crashes.
> For example, a BitSet of 160M documents is > 16MB and when we look at the G1 
> logs, it seems it never discards the humongous allocations, so they keep 
> piling. Forcing a full-garbage collection is just not practical - it takes 
> forever and the shard is not usable. Even when a new Searcher is started 
> (every several minutes) the old large filterCache entries are not freed and 
> sooner or later the JVM crashes.
> 
> On the other hand ZGC has a completely different architecture and does not 
> have the hard-coded threshold of 16MB for *humongous allocations*:
> https://wiki.openjdk.java.net/display/zgc/Main
> 
> Anyway, we will be probably testing Java 13 and ZGC with the real data, we 
> just have to reindex 30+ shards to new Solr servers, which will take a couple 
> of days :-)
> 
> Cheers,
> Vassil
> 
> -----Original Message-----
> From: Jörn Franke  
> Sent: Monday, October 14, 2019 1:47 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any 
> tests with Java 13 and the new ZGC?
> 
> I would try JDK11 - it works much better than JDK9 in general. 
> 
> I don‘t think JDK13 with ZGC will bring you better results. There seems to be 
> something strange with the JDK version or Solr version and some settings. 
> 
> Then, make sure that you have much more free memory for the OS cache than 
> the heap. Nearly 100 GB for the Solr heap sounds excessive. Try to reduce it to 
> much less.
> 
> Try the default options of Solr and use the latest 7.x version or 8.x version 
> of Solr.
> 
> Additionally you can try to shard more.
> 
>> On 14.10.2019 at 19:19, Vassil Velichkov (Sensika) 
>>  wrote:
>> 
>> Hi Everyone,
>> 
>> Since we’ve upgraded our cluster (legacy sharding) from Solr 6.x to Solr 7.6 
>> we have frequent OOM crashes on specific nodes.
>> 
>> All investigations (detailed below) lead to a hard-coded limitation in the 
>> G1 garbage collector. The Java Heap is exhausted due to too many filterCache 
>> allocations that are never discarded by the G1.
>> 
>> Our hope is to use Java 13 with the new ZGC, which is specifically designed 
>> for large heap-sizes, and supposedly would handle and dispose larger 
>> allocations. The Solr release notes claim that Solr 7.6 builds are tested 
>> with Java 11 / 12 / 13 (pre-release).
>> Does anyone use Java 13 in production and has experience with the new ZGC 
>> and large heap sizes / large document sets of more than 150M documents per 
>> shard?
>> 
>>> Some background information and reference to the possible 
>>> root-cause, described by Shawn Heisey in Solr 1.4 documentation 
>>> >
>> 
>> Our current setup is as follows:
>> 
>> 1.   All nodes are running on VMware 6.5 VMs with Debian 9u5 / Java 9 / 
>> Solr 7.6
>> 
>> 2.   Each VM has 6 or 8 x vCPUs, 128GB or 192GB RAM (50% for Java Heap / 
>> 50% for OS) and 1 x Solr Core with 80M to 160M documents, NO stored fields, 
>> DocValues ON
>> 
>> 3.   The only “hot” and frequently used cache is filterCache, configured 
>> with the default value of 256 entries. If we increase the setting to 512 or 
>> 1024 entries, we are getting 4-5 times better hit-ratio, but the OOM crashes 
>> become too frequent.
>> 
>> 4.   Regardless of the Java Heap size (we’ve tested with even larger 
>> heaps and VM sizing up to 384GB), all nodes that have approx. more than 
>> 120-130M documents crash with OOM under heavy load (hundreds of simultaneous 
>> searches with a variety of Filter Queries).
>> 
>> FilterCache is really frequently used and some of the BitSets are spanning 
>> across 80-90% of the Docset of each shard, so in many cases the FC entries 
>> become larger than 16MB. We believe we’ve pinpointed the problem to the G1 
>> Garbage Collector and the hard-coded limit for "-XX:G1HeapRegionSize", which 
>> allows setting a maximum of 32MB, regardless if it is auto-calculated or set 
>> manually in the JVM startup options. The JVM memory allocation algorithm 
>> tracks every memory 

RE: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any tests with Java 13 and the new ZGC?

2019-10-14 Thread Vassil Velichkov (Sensika)
Thanks Jörn,

Yep, we are rebalancing the cluster to keep up to ~100M documents per shard, 
but that's not quite optimal in our use-case.

We've tried various ratios between JVM heap and OS RAM (up to 128GB / 256GB) 
and we get the same Java heap OOM crashes.
For example, a BitSet for 160M documents is > 16MB, and when we look at the G1 
logs it seems the humongous allocations are never discarded, so they keep 
piling up. Forcing a full garbage collection is just not practical - it takes 
forever and the shard is not usable. Even when a new Searcher is started (every 
few minutes) the old large filterCache entries are not freed, and sooner or 
later the JVM crashes.

On the other hand ZGC has a completely different architecture and does not have 
the hard-coded threshold of 16MB for *humongous allocations*:
https://wiki.openjdk.java.net/display/zgc/Main

Anyway, we will probably be testing Java 13 and ZGC with the real data; we just 
have to reindex 30+ shards to new Solr servers, which will take a couple of 
days :-)
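A minimal sketch of enabling ZGC for Solr on JDK 13, assuming the stock 
solr.in.sh configuration mechanism (SOLR_HEAP and GC_TUNE are standard 
variables there; the heap size below is only an example):

```shell
# Sketch only: solr.in.sh fragment for running Solr with ZGC on JDK 13.
# ZGC is still experimental in JDK 13, so it must be unlocked explicitly.
SOLR_HEAP="31g"   # example value; size to your node

# GC_TUNE replaces Solr's default G1 flags entirely.
GC_TUNE="-XX:+UnlockExperimentalVMOptions -XX:+UseZGC"
```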

Cheers,
Vassil


Re: Solr 7.6 frequent OOM with Java 9, G1 and large heap sizes - any tests with Java 13 and the new ZGC?

2019-10-14 Thread Jörn Franke
I would try JDK11 - it works much better than JDK9 in general. 

I don't think JDK13 with ZGC will bring you better results. There seems to be 
something strange with the JDK version or Solr version and some settings. 

Then, make sure that you have much more free memory for the OS cache than the 
heap. Nearly 100 GB for the Solr heap sounds excessive; try to reduce it to 
much less.

Try the default options of Solr and use the latest 7.x version or 8.x version 
of Solr.

Additionally you can try to shard more.
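As a concrete illustration of the sizing advice above (an assumed example, not 
a tested recommendation): on a 128 GB host you might cap the heap well below 
half of RAM so the OS page cache can hold the index files, e.g. via Solr's 
`-m` startup flag:

```shell
# Hypothetical sizing for a 128 GB node: a ~31 GB heap (which also keeps
# compressed oops enabled) leaves ~97 GB of RAM for the OS page cache
# over the index files.
bin/solr start -m 31g
```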

> On 14.10.2019 at 19:19, Vassil Velichkov (Sensika) wrote:
> 
> Hi Everyone,
> 
> Since we’ve upgraded our cluster (legacy sharding) from Solr 6.x to Solr 7.6 
> we have frequent OOM crashes on specific nodes.
> 
> All investigations (detailed below) lead to a hard-coded limitation in the G1 
> garbage collector. The Java Heap is exhausted due to too many filterCache 
> allocations that are never discarded by the G1.
> 
> Our hope is to use Java 13 with the new ZGC, which is specifically designed 
> for large heap sizes and supposedly would handle and dispose of larger 
> allocations. The Solr release notes claim that Solr 7.6 builds are tested 
> with Java 11 / 12 / 13 (pre-release).
> Does anyone use Java 13 in production and have experience with the new ZGC and 
> large heap sizes / large document sets of more than 150M documents per shard?
> 
>> Some background information and a reference to the possible 
>> root cause, described by Shawn Heisey in the Solr 1.4 documentation.
> 
> Our current setup is as follows:
> 
> 1.   All nodes are running on VMware 6.5 VMs with Debian 9u5 / Java 9 / 
> Solr 7.6
> 
> 2.   Each VM has 6 or 8 x vCPUs, 128GB or 192GB RAM (50% for Java Heap / 
> 50% for OS) and 1 x Solr Core with 80M to 160M documents, NO stored fields, 
> DocValues ON
> 
> 3.   The only “hot” and frequently used cache is filterCache, configured 
> with the default value of 256 entries. If we increase the setting to 512 or 
> 1024 entries, we are getting 4-5 times better hit-ratio, but the OOM crashes 
> become too frequent.
> 
> 4.   Regardless of the Java Heap size (we’ve tested with even larger 
> heaps and VM sizing up to 384GB), all nodes that have approx. more than 
> 120-130M documents crash with OOM under heavy load (hundreds of simultaneous 
> searches with a variety of Filter Queries).
> 
> FilterCache is really frequently used and some of the BitSets are spanning 
> across 80-90% of the Docset of each shard, so in many cases the FC entries 
> become larger than 16MB. We believe we’ve pinpointed the problem to the G1 
> Garbage Collector and the hard-coded limit for "-XX:G1HeapRegionSize", which 
> allows setting a maximum of 32MB, regardless if it is auto-calculated or set 
> manually in the JVM startup options. The JVM memory allocation algorithm 
> tracks every memory allocation request, and if the request exceeds 50% of 
> G1HeapRegionSize it is considered a humongous allocation (he-he, an extremely 
> large allocation in 2019?!?), so it is not scanned and evaluated during 
> standard garbage collection cycles. Unused humongous allocations are 
> basically freed only during Full Garbage Collection cycles, which are never 
> really invoked by the G1 garbage collector until it is too late and the JVM 
> crashes with OOM.
> 
> Now we are rebalancing the cluster to have up to 100-120M documents per 
> shard, following an ancient, but probably still valid, limitation suggested 
> in the Solr 1.4 documentation by Shawn Heisey: “If you 
> have an index with about 100 million documents in it, you'll want to use a 
> region size of 32MB, which is the maximum possible size. Because of this 
> limitation of the G1 collector, we recommend always keeping a Solr index 
> below a maxDoc value of around 100 to 120 million.”
> 
> Cheers,
> Vassil Velichkov
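The humongous-allocation arithmetic in the quoted message can be checked with 
a quick sketch (the 32 MiB region cap and the 50%-of-region humongous rule are 
as described above; the 1-bit-per-document estimate matches an uncompressed 
filterCache BitSet):

```shell
# Region size is capped at 32 MiB; allocations >= half a region are humongous.
region=$((32 * 1024 * 1024))
threshold=$((region / 2))          # 16,777,216 bytes
for docs in 80000000 120000000 160000000; do
  bytes=$((docs / 8))              # a BitSet uses 1 bit per document
  echo "$docs docs -> $bytes bytes (humongous: $((bytes >= threshold)))"
done
# The crossover is at threshold * 8 = 134,217,728 documents, which lines up
# with the ~120-130M documents-per-shard crash limit observed in the thread.
```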