Good points.
I am able to reproduce this with the periodic snap puller and only one
HTTP request.  When I load Solr on Tomcat, the initial memory usage was
between 600M and 800M.  The first time I used a 1.5G heap, and then I
increased the heap to 3.5G.  (When I said 'triple', I meant compared to
the initial memory consumption.)
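
For anyone who wants to check the same numbers, here is a minimal sketch
of reading the heap usage through the standard java.lang.management API.
A standalone run only reports its own JVM; to look at the Tomcat/Solr
process, do the equivalent over a remote JMX connection (JConsole and the
Solr admin dashboard show the same figures).  The class name is just for
illustration.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    public static void main(String[] args) {
        // Heap usage as seen by the JVM this code runs in.
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.println("heap used       = " + (heap.getUsed() >> 20) + "M");
        System.out.println("heap committed  = " + (heap.getCommitted() >> 20) + "M");
        System.out.println("heap max (-Xmx) = " + (heap.getMax() >> 20) + "M");
    }
}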

OK... Shall we focus on my second question: when the core reload happens
(whether it throws an exception or not), does Solr still need to call the
openNewSearcherAndUpdateCommitPoint method?  I think
openNewSearcherAndUpdateCommitPoint tries to open a new searcher on the
old SolrCore.
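
For reference, a simple way to confirm which thread is burning the CPU
(like the hung thread in the image attached to my previous email) is to
find the thread with the highest CPU time and look at its stack.  Below
is a minimal sketch of that, again using only java.lang.management; the
same caveat applies: run it in, or point the equivalent JMX calls at, the
Tomcat JVM, since a standalone run only sees its own threads.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class BusyThreadDump {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (!threads.isThreadCpuTimeSupported()) {
            System.out.println("Per-thread CPU time is not supported on this JVM.");
            return;
        }

        // Find the thread that has consumed the most CPU time so far.
        long busiestId = -1;
        long busiestCpuNanos = -1;
        for (long id : threads.getAllThreadIds()) {
            // -1 if CPU time measurement is disabled or the thread has exited.
            long cpu = threads.getThreadCpuTime(id);
            if (cpu > busiestCpuNanos) {
                busiestCpuNanos = cpu;
                busiestId = id;
            }
        }
        if (busiestId < 0) {
            return;  // e.g. CPU time measurement disabled
        }

        // Dump that thread's full stack to see where it is spinning.
        ThreadInfo info = threads.getThreadInfo(busiestId, Integer.MAX_VALUE);
        if (info == null) {
            return;  // the thread exited in the meantime
        }
        System.out.println("Busiest thread: " + info.getThreadName()
                + " (" + (busiestCpuNanos / 1000000L) + " ms of CPU)");
        for (StackTraceElement frame : info.getStackTrace()) {
            System.out.println("    at " + frame);
        }
    }
}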

Thank you!


On Thu, Oct 20, 2016 at 9:55 AM, Erick Erickson <erickerick...@gmail.com>
wrote:

> You say you tripled the memory. Up to what? Tripling from 500M to 1.5G
> isn't likely enough; tripling from 6G to 18G is something else
> again....
>
> One option is to take a look through any of the memory profilers and
> try to catch the objects (and where they're being allocated). The second
> is to look at the stack trace (presuming you don't have an OOM killer
> script running) and perhaps triangulate that way.
>
> Best,
> Erick
>
> On Thu, Oct 20, 2016 at 11:44 AM, Jihwan Kim <jihwa...@gmail.com> wrote:
> > Thank you, Shawn.  I understand the two options.
> > After my own testing with a smaller heap, I increased my heap size to
> > more than triple, but the OOME happens again in my test cases under a
> > controlled thread process.  Increasing the heap size just delayed the
> > OOME.
> >
> > Can you provide feedback on my second question: when the core reload
> > happens (whether it throws an exception or not), does Solr still need
> > to call the openNewSearcherAndUpdateCommitPoint method?
> >
> > As I described in my previous email, a thread created from the
> > openNewSearcherAndUpdateCommitPoint method hangs and causes high CPU
> > usage and slow response times.  The attached image shows the hung
> > thread.
> >
> >
> >
> > On Thu, Oct 20, 2016 at 9:29 AM, Shawn Heisey <apa...@elyograg.org>
> > wrote:
> >>
> >> On 10/20/2016 8:44 AM, Jihwan Kim wrote:
> >> > We are using Solr 4.10.4 and experiencing an out-of-memory exception.
> >> > It seems the problem is caused by the following code & scenario.
> >>
> >> When you get an OutOfMemoryError exception that tells you there's not
> >> enough heap space, the place where the exception happens is frequently
> >> unrelated to the actual source of the problem.  Also, unless the
> >> programmer engages in extraordinary effort, encountering OOME will cause
> >> program behavior to become completely unpredictable.  Most of Solr has
> >> *NOT* had the benefit of extraordinary effort to handle OOME gracefully.
> >>
> >> Before continuing with troubleshooting of SnapPuller, you're going to
> >> need to fix the OOME error.  When you run out of memory, that is likely
> >> to be the CAUSE of any errors you're seeing, not a symptom.
> >>
> >> There are exactly two ways to deal with OOME:  Increase the max heap, or
> >> take steps to reduce the amount of heap required.  Increasing the heap
> >> is the easiest option, and typically the first step.  Sometimes it's the
> >> ONLY option.
> >>
> >> https://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap
> >>
> >> Thanks,
> >> Shawn
> >>
> >
>
