and here is the GC log from when I create a collection (just creating the
collection, nothing else):

{Heap before GC invocations=1530 (full 412):
 garbage-first heap   total 10485760K, used 10483431K [0x0000000540000000, 0x0000000540405000, 0x00000007c0000000)
  region size 4096K, 0 young (0K), 0 survivors (0K)
 Metaspace       used 70694K, capacity 75070K, committed 75260K, reserved 1116160K
  class space    used 7674K, capacity 8836K, committed 8956K, reserved 1048576K
2021-01-28T21:24:18.396+0800: 34029.526: [GC pause (G1 Evacuation Pause) (young)
Desired survivor size 33554432 bytes, new threshold 15 (max 15)
, 0.0034128 secs]
   [Parallel Time: 2.2 ms, GC Workers: 4]
      [GC Worker Start (ms): Min: 34029525.7, Avg: 34029526.1, Max: 34029527.3, Diff: 1.6]
      [Ext Root Scanning (ms): Min: 0.0, Avg: 1.0, Max: 1.4, Diff: 1.4, Sum: 4.1]
      [Update RS (ms): Min: 0.3, Avg: 0.6, Max: 0.7, Diff: 0.4, Sum: 2.2]
         [Processed Buffers: Min: 2, Avg: 2.8, Max: 4, Diff: 2, Sum: 11]
      [Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [Object Copy (ms): Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.2, Sum: 0.2]
      [Termination (ms): Min: 0.0, Avg: 0.1, Max: 0.3, Diff: 0.3, Sum: 0.6]
         [Termination Attempts: Min: 1, Avg: 1.0, Max: 1, Diff: 0, Sum: 4]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [GC Worker Total (ms): Min: 0.6, Avg: 1.8, Max: 2.2, Diff: 1.6, Sum: 7.2]
      [GC Worker End (ms): Min: 34029527.9, Avg: 34029527.9, Max: 34029527.9, Diff: 0.0]
   [Code Root Fixup: 0.0 ms]
   [Code Root Purge: 0.0 ms]
   [Clear CT: 0.0 ms]
   [Other: 1.2 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 0.9 ms]
      [Ref Enq: 0.0 ms]
      [Redirty Cards: 0.0 ms]
      [Humongous Register: 0.1 ms]
      [Humongous Reclaim: 0.0 ms]
      [Free CSet: 0.0 ms]
   [Eden: 0.0B(512.0M)->0.0B(512.0M) Survivors: 0.0B->0.0B Heap: 10237.7M(10240.0M)->10237.7M(10240.0M)]
Heap after GC invocations=1531 (full 412):
 garbage-first heap   total 10485760K, used 10483431K [0x0000000540000000, 0x0000000540405000, 0x00000007c0000000)
  region size 4096K, 0 young (0K), 0 survivors (0K)
 Metaspace       used 70694K, capacity 75070K, committed 75260K, reserved 1116160K
  class space    used 7674K, capacity 8836K, committed 8956K, reserved 1048576K
}
 [Times: user=0.01 sys=0.00, real=0.01 secs]
2021-01-28T21:24:18.400+0800: 34029.529: Total time for which application threads were stopped: 0.0044183 seconds, Stopping threads took: 0.0000500 seconds
{Heap before GC invocations=1531 (full 412):
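
(If I am reading this pause correctly: the heap is 10240.0M with a 4096K
region size, i.e. about 2560 regions, and usage stays at 10237.7M both
before and after the young pause, with 0 young and 0 survivor regions. So
almost the entire heap is already old or humongous data that the
evacuation pause cannot reclaim, and the counters show 1530 GC invocations
so far, 412 of them full GCs.)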

On Thu, Jan 28, 2021 at 1:23 PM Luke <lucenew...@gmail.com> wrote:

> Mike,
>
> No, it's not Docker. It is just one Solr node (service) which connects to an
> external ZooKeeper; below are the JVM settings and memory usage.
>
> There are 25 collections holding only about 2,000 documents in total. I am
> wondering why Solr uses so much memory.
>
> -XX:+AlwaysPreTouch -XX:+ExplicitGCInvokesConcurrent
> -XX:+ParallelRefProcEnabled -XX:+PerfDisableSharedMem
> -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps
> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC
> -XX:+PrintTenuringDistribution -XX:+UseG1GC -XX:+UseGCLogFileRotation
> -XX:+UseLargePages -XX:-OmitStackTraceInFastThrow -XX:GCLogFileSize=20M
> -XX:MaxGCPauseMillis=250 -XX:NumberOfGCLogFiles=9
> -XX:OnOutOfMemoryError=/mnt/ume/software/solr-8.7.0-3/bin/oom_solr.sh 8983 /mnt/ume/logs/solr2
> -Xloggc:/mnt/ume/logs/solr2/solr_gc.log
> -Xms6g -Xmx10g -Xss256k -verbose:gc
> [image: image.png]
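> (For what it's worth, these are the flags as the JVM reports them; in a
> stock install bin/solr assembles them from bin/solr.in.sh -- SOLR_HEAP or
> SOLR_JAVA_MEM for the -Xms/-Xmx values, GC_TUNE for the G1 options, and
> GC_LOG_OPTS for the GC logging flags -- so that is where I would change
> them. Those variable names are from the stock script; my own split across
> them may differ.)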
>
> On Thu, Jan 28, 2021 at 4:40 AM Mike Drob <md...@mdrob.com> wrote:
>
>> Are you running these in docker containers?
>>
>> Also, I’m assuming this is a typo but just in case the setting is Xmx :)
>>
>> Can you share the OOM stack trace? It’s not always running out of memory,
>> sometimes Java throws OOM for file handles or threads.
>>
>> Mike
>>
>> On Wed, Jan 27, 2021 at 10:00 PM Luke <lucenew...@gmail.com> wrote:
>>
>> > Shawn,
>> >
>> > It's killed by an OOME exception. The problem is that I just created empty
>> > collections and the Solr JVM keeps growing and never goes down. There is
>> > no data at all. At the beginning I set Xxm=6G, then 10G, now 15G; Solr 8.7
>> > always uses all of it and gets killed by oom_solr.sh once JVM usage
>> > reaches 100%.
>> >
>> > I have another Solr 8.6.2 cloud (3 nodes) in a separate environment, which
>> > has over 100 collections; with Xxm = 6G its JVM always stays at 4-5G.
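>> >
>> > (If it helps, I can grab a class histogram on the problem node with the
>> > JDK's jmap -histo:live <solr pid>, or take a heap dump, and share which
>> > classes are holding the space.)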
>> >
>> >
>> >
>> > On Thu, Jan 28, 2021 at 2:56 AM Shawn Heisey <apa...@elyograg.org> wrote:
>> >
>> > > On 1/27/2021 5:08 PM, Luke Oak wrote:
>> > > > I just created a few collections with no data; memory keeps growing but
>> > > > never goes down, until I got an OOM and Solr was killed
>> > > >
>> > > > Any reason?
>> > >
>> > > Was Solr killed by the operating system's oom killer or did the death
>> > > start with a Java OutOfMemoryError exception?
>> > >
>> > > If it was the OS, then the entire system doesn't have enough memory for
>> > > the demands that are made on it.  The problem might be Solr, or it might
>> > > be something else.  You will need to either reduce the amount of memory
>> > > used or increase the memory in the system.
>> > >
>> > > If it was a Java OOME exception that led to Solr being killed, then some
>> > > resource (could be heap memory, but isn't always) will be too small and
>> > > will need to be increased.  To figure out what resource, you need to see
>> > > the exception text.  Such exceptions are not always recorded -- they may
>> > > occur in a section of code that has no logging.
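>> > >
>> > > (One option, if you suspect heap: start the JVM with
>> > > -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=/some/dir so a
>> > > heap dump is left behind, then open it in a tool such as Eclipse MAT to
>> > > see what was filling the heap. For errors like "unable to create new
>> > > native thread" it is the exception text itself that identifies the
>> > > exhausted resource.)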
>> > >
>> > > Thanks,
>> > > Shawn
>> > >
>> >
>>
>
