Hi Paras,

Thank you for your advice.

I will check JMeter's settings in addition to the JVM options.

Yes, we are using documentCache, and *after* the load test finishes we will
comment it out.
(At the customer's request, we cannot update the cache settings until the
test ends...)

Thanks,
Yasufumi

On Tue, 1 Oct 2019 at 19:38, Paras Lehana <paras.leh...@indiamart.com> wrote:

> Hi Yasufumi,
>
> The following is our current load test setup.
>
>
> Did you try decreasing ramp_time and increasing num_threads or the number
> of firing hosts? Out of the 1800 seconds, you are giving the 200 threads up
> to 600 seconds just to ramp up. I don't use such a slow ramp-up when I want
> to test parallel requests. In your case, the answer could be as simple as
> the test threads themselves not being capable of requesting more than 60
> qps. Do confirm this bottleneck by playing with the JMeter values; a rough
> sketch of one way to do that follows.
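>
> For example (just a sketch: it assumes your test plan reads these values
> through ${__P(num_threads)} / ${__P(ramp_time)} property functions, and the
> plan file name is hypothetical), a more aggressive non-GUI run would be:
>
>   jmeter -n -t solr_load.jmx -Jnum_threads=400 -Jramp_time=60 \
>     -Jduration=1800 -l results.jtl
>
> If qps scales roughly with the thread count, the bottleneck was the load
> generator rather than Solr.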
>
> Cache hit rate is not checked now. Those small cache sizes are intentional,
> but I want to change them.
>
>
> I suggest you check the cache stats in the Solr Dashboard > Select Core >
> Plugins/Stats > Cache. I'm assuming that all of your 20M queries are
> unique, though they could still use the documentCache. Why not just comment
> out the cache settings in solrconfig.xml?
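>
> You can also pull the same stats over HTTP while the test is running; a
> rough example (substitute your actual core name for "yourcore"):
>
>   curl "http://localhost:8983/solr/yourcore/admin/mbeans?stats=true&cat=CACHE&wt=json"
>
> The response lists lookups, hits and hitratio for documentCache,
> queryResultCache and filterCache.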
>
>
>
> On Tue, 1 Oct 2019 at 15:39, Yasufumi Mizoguchi <yasufumi0...@gmail.com> wrote:
>
> > Thank you for your reply.
> >
> > The following is our current load test setup.
> >
> > * Load test program : JMeter
> > * The number of Firing hosts : 6
> > * [Each host]ThreadGroup.num_threads : 200
> > * [Each host]ThreadGroup.ramp_time : 600
> > * [Each host]ThreadGroup.duration: 1800
> > * [Each host]ThreadGroup.delay: 0
> > * The number of sample queries: 20,000,000
> >
> > And we confirmed that the number of Jetty threads was increasing and
> > reached the limit (10000).
> > Therefore, we raised the MaxThreads value.
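> >
> > (For reference, one rough way to watch this is to count the qtp worker
> > threads of Jetty's QueuedThreadPool in a thread dump of the Solr process;
> > the pgrep pattern assumes the default start.jar launch.)
> >
> >   jstack $(pgrep -f start.jar) | grep -c 'qtp'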
> >
> > I checked the GC logs and found that no major GC happened, and almost all
> > minor GCs finished within 200ms.
> >
> > Cache hit rate is not checked now, but I think it is extremely low for
> > all kinds of cache, because the number of sample queries is large
> > (20,000,000) compared to the queryResult and filter cache sizes (both 512)
> > and there is little duplication in the fq and q parameters.
> > Those small cache sizes are intentional, but I want to change them....
> >
> > Thanks,
> > Yasufumi
> >
> >
> >
> > On Mon, 30 Sep 2019 at 20:49, Erick Erickson <erickerick...@gmail.com> wrote:
> >
> > > The most basic question is how you are load-testing it? Assuming you have
> > > some kind of client firing queries at Solr, keep adding threads so Solr is
> > > handling more and more queries in parallel. If you start to see the
> > > response time at the client get longer _and_ the QTime in Solr’s response
> > > stays about the same, then the queries are queueing up and you need to see
> > > about increasing the Jetty threads handling queries.
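> > >
> > > A crude single-request illustration of that comparison (hypothetical core
> > > name and query; under load you would read QTime out of the test tool's
> > > responses instead):
> > >
> > >   time curl -s "http://localhost:8983/solr/yourcore/select?q=foo&rows=0&wt=json" \
> > >     | grep -o '"QTime":[0-9]*'
> > >
> > > Client wall-clock time climbing while QTime stays flat points at queueing
> > > in front of Solr.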
> > >
> > > Second is whether you’re hitting GC pauses; look at the GC logs,
> > > especially for “stop the world” pauses. This is unlikely as you’re still
> > > getting 60 qps, but something to check.
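> > >
> > > Since -XX:+PrintGCApplicationStoppedTime is already in your settings, the
> > > pause lines can be pulled straight out of the log (log file name assumed):
> > >
> > >   grep "Total time for which application threads were stopped" solr_gc.log | tail -n 20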
> > >
> > > Setting your heap to 31G is good advice, but it won’t dramatically
> > > increase the throughput I’d guess.
> > >
> > > If your I/O isn’t very high, then your index is mostly memory-resident. A
> > > general bit of tuning advice is to _reduce_ the heap size, leaving OS
> > > memory for the index. See Uwe’s blog:
> > > https://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
> > > There’s a sweet spot between having too much heap and too little, and
> > > unfortunately you have to experiment to find out.
> > >
> > > But given the numbers you’re talking about, you won’t be getting 1,000 QPS
> > > on a single box and you’ll have to scale out with replicas to hit that
> > > number. Getting all the QPS you can out of the box is important, of course.
> > > Do be careful to use enough different queries that you don’t get them from
> > > the queryResultCache. I had one client who was thrilled they were getting
> > > 3ms response times… by firing the same query over and over and hitting the
> > > queryResultCache 99.9999% of the time ;).
> > >
> > > Best,
> > > Erick
> > >
> > > > On Sep 30, 2019, at 4:28 AM, Yasufumi Mizoguchi <yasufumi0...@gmail.com> wrote:
> > > >
> > > > Hi, Ere.
> > > >
> > > > Thank you for the valuable feedback.
> > > > I will try -Xmx31G and -Xms31G instead of the current values.
> > > >
> > > > Thanks and Regards,
> > > > Yasufumi.
> > > >
> > > > On Mon, 30 Sep 2019 at 17:19, Ere Maijala <ere.maij...@helsinki.fi> wrote:
> > > >
> > > >> Just a side note: -Xmx32G is really bad for performance as it forces
> > > >> Java to use non-compressed pointers. You'll actually get better results
> > > >> with -Xmx31G. For more information, see e.g.
> > > >> https://blog.codecentric.de/en/2014/02/35gb-heap-less-32gb-java-jvm-memory-oddities/
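> > > >>
> > > >> A quick way to check that compressed oops stay enabled at a given heap
> > > >> size (just an illustration):
> > > >>
> > > >>   java -Xms31g -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops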
> > > >>
> > > >> Regards,
> > > >> Ere
> > > >>
> > > >> Yasufumi Mizoguchi wrote on 30.9.2019 at 11.05:
> > > >>> Hi, Deepak.
> > > >>> Thank you for your reply.
> > > >>>
> > > >>> The JVM settings from the solr.in.sh file are as follows. (Sorry, I
> > > >>> could not show all of them due to our policy.)
> > > >>>
> > > >>> -verbose:gc
> > > >>> -XX:+PrintHeapAtGC
> > > >>> -XX:+PrintGCDetails
> > > >>> -XX:+PrintGCDateStamps
> > > >>> -XX:+PrintGCTimeStamps
> > > >>> -XX:+PrintTenuringDistribution
> > > >>> -XX:+PrintGCApplicationStoppedTime
> > > >>> -Dcom.sun.management.jmxremote.ssl=false
> > > >>> -Dcom.sun.management.jmxremote.authenticate=false
> > > >>> -Dcom.sun.management.jmxremote.port=18983
> > > >>> -XX:OnOutOfMemoryError=/home/solr/solr-6.2.1/bin/oom_solr.sh
> > > >>> -XX:NewSize=128m
> > > >>> -XX:MaxNewSize=128m
> > > >>> -XX:+UseG1GC
> > > >>> -XX:+PerfDisableSharedMem
> > > >>> -XX:+ParallelRefProcEnabled
> > > >>> -XX:G1HeapRegionSize=8m
> > > >>> -XX:MaxGCPauseMillis=250
> > > >>> -XX:InitiatingHeapOccupancyPercent=75
> > > >>> -XX:+UseLargePages
> > > >>> -XX:+AggressiveOpts
> > > >>> -Xmx32G
> > > >>> -Xms32G
> > > >>> -Xss256k
> > > >>>
> > > >>>
> > > >>> Thanks & Regards
> > > >>> Yasufumi.
> > > >>>
> > > >>> On Mon, 30 Sep 2019 at 16:12, Deepak Goel <deic...@gmail.com> wrote:
> > > >>>
> > > >>>> Hello
> > > >>>>
> > > >>>> Can you please share the JVM heap settings in detail?
> > > >>>>
> > > >>>> Deepak
> > > >>>>
> > > >>>> On Mon, 30 Sep 2019, 11:15 Yasufumi Mizoguchi, <yasufumi0...@gmail.com> wrote:
> > > >>>>
> > > >>>>> Hi,
> > > >>>>>
> > > >>>>> I am running some tests to confirm whether a single Solr instance can
> > > >>>>> perform over 1000 queries per second(!).
> > > >>>>>
> > > >>>>> But right now, although CPU usage is around 40% and iowait is almost
> > > >>>>> 0%, throughput does not go above 60 queries per second.
> > > >>>>>
> > > >>>>> I think there are some bottlenecks in the kernel, JVM, or Solr settings.
> > > >>>>>
> > > >>>>> The values we have already checked and configured are the following
> > > >>>>> (a quick way to double-check some of them is sketched after the list).
> > > >>>>>
> > > >>>>> * Kernel:
> > > >>>>> file descriptor
> > > >>>>> net.ipv4.tcp_max_syn_backlog
> > > >>>>> net.ipv4.tcp_syncookies
> > > >>>>> net.core.somaxconn
> > > >>>>> net.core.rmem_max
> > > >>>>> net.core.wmem_max
> > > >>>>> net.ipv4.tcp_rmem
> > > >>>>> net.ipv4.tcp_wmem
> > > >>>>>
> > > >>>>> * JVM:
> > > >>>>> Heap [ -> 32GB]
> > > >>>>> G1GC settings
> > > >>>>>
> > > >>>>> * Solr:
> > > >>>>> (Jetty) MaxThreads [ -> 20000]
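> > > >>>>>
> > > >>>>> For example, to verify what the running Solr process actually got
> > > >>>>> (the pgrep pattern assumes the default start.jar launch):
> > > >>>>>
> > > >>>>>   cat /proc/$(pgrep -f start.jar)/limits | grep -i "open files"
> > > >>>>>   sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog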
> > > >>>>>
> > > >>>>>
> > > >>>>> And the other info is as follows.
> > > >>>>>
> > > >>>>> CPU : 16 cores
> > > >>>>> RAM : 128 GB
> > > >>>>> Disk : SSD 500GB
> > > >>>>> NIC : 10Gbps(maybe)
> > > >>>>> OS : Ubuntu 14.04
> > > >>>>> JVM : OpenJDK 1.8.0u191
> > > >>>>> Solr : 6.2.1
> > > >>>>> Index size : about 60GB
> > > >>>>>
> > > >>>>> Any insights will be appreciated.
> > > >>>>>
> > > >>>>> Thanks and regards,
> > > >>>>> Yasufumi.
> > > >>>>>
> > > >>>>
> > > >>>
> > > >>
> > > >> --
> > > >> Ere Maijala
> > > >> Kansalliskirjasto / The National Library of Finland
> > > >>
> > >
> > >
> >
>
>
> --
> Regards,
>
> *Paras Lehana* [65871]
> Software Programmer, Auto-Suggest,
> IndiaMART Intermesh Ltd.
>
> 8th Floor, Tower A, Advant-Navis Business Park, Sector 142,
> Noida, UP, IN - 201303
>
> Mob.: +91-9560911996
> Work: 01203916600 | Extn:  *8173*
>
>
