On 10/11/2018 4:51 AM, yasoobhaider wrote:
Hi Shawn, thanks for the inputs.

I have uploaded the gc logs of one of the slaves here:
https://ufile.io/ecvag (should work till 18th Oct '18)

I uploaded the logs to gceasy as well, and it says that the problem is
consecutive full GCs. The solution they suggest is increasing the heap
size, but I am already running on a pretty big heap, so I don't think
increasing it is going to be a long-term solution.

Surprisingly, the GC performance in that logfile is actually pretty good.  I was more interested in how much heap was actually being used than the performance.

The "Heap after GC" button on the gceasy report page (which controls which graph is shown) shows that you really are using most of that 80GB.  If the info you shared about your index sizes is accurate, the only way I can imagine this much heap being necessary is something in your configuration.

It sounds like each system should have two configurations -- solrconfig.xml and the schema are the primary files in each.  Can you share the unique configuration directory for each of your indexes, which I think means there will be two of them?  For each configuration, indicate the number of documents and the index size on disk.  You'll need to use a file-sharing site.  It would be best to archive each directory into its own zipfile or .tar.gz file.
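Something like the following would do the archiving.  The directory names here are throwaway stand-ins for demonstration -- substitute the real paths to your two conf directories:

```shell
# Demo only: create stand-in config directories (replace these with
# your actual conf directories, e.g. the ones holding solrconfig.xml).
mkdir -p demo/index1_conf demo/index2_conf
echo '<schema/>' > demo/index1_conf/schema.xml
echo '<schema/>' > demo/index2_conf/schema.xml

# Archive each config directory into its own .tar.gz file.
# -C changes into the parent dir so the archive contains only the
# conf directory itself, not the leading path.
for conf in demo/index1_conf demo/index2_conf; do
  tar -czf "$(basename "$conf").tar.gz" -C "$(dirname "$conf")" "$(basename "$conf")"
done

ls *.tar.gz
```

Then upload the resulting index1_conf.tar.gz and index2_conf.tar.gz (or whatever yours end up named) to the file-sharing site.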

If your systems are running in cloud mode, the active configuration will be stored in ZooKeeper, so you'll need to download it from there.
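In that case the Solr CLI can pull a configset down from ZooKeeper with something like the following (the configset name, ZK connection string, and target directory are placeholders -- use your own):

```shell
# Download the named configset from ZooKeeper into a local directory.
# Replace "myconfig", the ZK host string, and the output path with
# the values for your cluster.
bin/solr zk downconfig -n myconfig \
  -z zk1:2181,zk2:2181,zk3:2181 \
  -d ./myconfig-local
```

After that, ./myconfig-local can be archived and shared the same way as an on-disk conf directory.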

Thanks,
Shawn
