Thanks, Pushkar. Solr was already killed by the OOM script, so I believe we
can't get a heap dump.

Hi Shawn, I used the Solr service scripts to launch Solr, and it looks like
bin/solr doesn't include the JVM parameters below by default.

"-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/the/dump"

Is that something we should add to the Solr launch scripts so it is
included by default, or at least present in commented-out form?
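For reference, here is a minimal sketch of what that could look like, assuming a typical Solr 6.x install where bin/solr sources bin/solr.in.sh and passes SOLR_OPTS through to the JVM (the dump path below is just the placeholder from the parameter above, not a real location):

```shell
# Hypothetical additions to bin/solr.in.sh (sourced by bin/solr at startup);
# SOLR_OPTS is appended to the JVM argument list.
SOLR_OPTS="$SOLR_OPTS -XX:+HeapDumpOnOutOfMemoryError"
SOLR_OPTS="$SOLR_OPTS -XX:HeapDumpPath=/path/to/the/dump"
```

Note that the HeapDumpPath directory must exist and be writable by the user running Solr, and a dump file can be roughly as large as the configured heap (8 GB in my case), so the target filesystem needs that much free space.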

Thanks,
Susheel

On Mon, Oct 24, 2016 at 8:20 PM, Shawn Heisey <apa...@elyograg.org> wrote:

> On 10/24/2016 4:27 PM, Susheel Kumar wrote:
> > I am seeing the OOM script kill Solr (Solr 6.0.0) on a couple of our
> > VMs today. So far our Solr cluster has been running fine, but suddenly
> > today many of the VMs' Solr instances got killed. I had 8 GB of heap
> > allocated on 64 GB machines, with 20+ GB of index on each shard.
> >
> > What could be examined to find the exact root cause? I suspect a
> > query (wildcard prefix query, etc.) might have caused this issue. The
> > ingestion and query load look normal compared to other days. I have
> > the Solr GC logs as well.
>
> It is unlikely that you will be able to figure out exactly what is using
> too much memory from Solr logs.  The place where the OOM happens may be
> completely unrelated to the parts of the system that are using large
> amounts of memory.  That point is just the place where Java ran out of
> memory to allocate, which could happen when allocating a tiny amount of
> memory just as easily as it could happen when allocating a large amount
> of memory.
>
> What I can tell you has been placed on this wiki page:
>
> https://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap
>
> Thanks,
> Shawn
>
