Thank you Shawn. I understand the two options.
After testing with a smaller heap, I more than tripled my heap size, but the
OOME still happens with my test cases under the controlled thread process.
Increasing the heap size only delayed the OOME.
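
In case it helps narrow down what is actually filling the heap, one option is
to capture a heap dump at the moment the OOME occurs. These are the standard
HotSpot flags; the heap size and dump path below are just placeholders, and
this assumes Solr 4.x is started with the bundled Jetty start.jar:

    java -Xmx6g \
         -XX:+HeapDumpOnOutOfMemoryError \
         -XX:HeapDumpPath=/var/solr/dumps \
         -jar start.jar

The resulting .hprof file can then be opened in a heap analyzer such as
Eclipse MAT to see which objects dominate the heap.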

Could you give me feedback on my second question: when the core reload
completes (whether or not it throws an exception), does Solr still need to
call the openNewSearcherAndUpdateCommitPoint method?

As I described in my previous email, a thread created from the
openNewSearcherAndUpdateCommitPoint method hangs and causes high CPU usage
and slow response times.  The attached image shows the hung thread.
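
In case the screenshot is hard to read, the same hung thread can also be
captured as a plain-text thread dump with jstack (the PID below is a
placeholder for the Solr process id):

    jstack -l <solr-pid> > solr-threads.txt

That makes it easier to search for the thread started from
openNewSearcherAndUpdateCommitPoint and see exactly where it is blocked.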



On Thu, Oct 20, 2016 at 9:29 AM, Shawn Heisey <apa...@elyograg.org> wrote:

> On 10/20/2016 8:44 AM, Jihwan Kim wrote:
> > We are using Solr 4.10.4 and experiencing an out of memory exception. It
> > seems the problem is caused by the following code & scenario.
>
> When you get an OutOfMemoryError exception that tells you there's not
> enough heap space, the place where the exception happens is frequently
> unrelated to the actual source of the problem.  Also, unless the
> programmer engages in extraordinary effort, encountering OOME will cause
> program behavior to become completely unpredictable.  Most of Solr has
> *NOT* had the benefit of extraordinary effort to handle OOME gracefully.
>
> Before continuing with troubleshooting of SnapPuller, you're going to
> need to fix the OOME error.  When you run out of memory, that is likely
> to be the CAUSE of any errors you're seeing, not a symptom.
>
> There are exactly two ways to deal with OOME:  Increase the max heap, or
> take steps to reduce the amount of heap required.  Increasing the heap
> is the easiest option, and typically the first step.  Sometimes it's the
> ONLY option.
>
> https://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap
>
> Thanks,
> Shawn
>
>
