Hello:

    Which version should we use, 5.3.2 or 6.2.0?



    Does 5.3.2 support HugePages?


    Does 5.3.2 only support Transparent Huge Pages?


    How about just using Transparent Huge Pages? Or is it better to disable it?
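
    For context: on stock kernels the THP mode is exposed through sysfs, so it
is easy to check and to turn off at runtime. A sketch, assuming the standard
sysfs paths (a runtime change does not survive a reboot):

# Show the current THP mode; the bracketed value is the active one
cat /sys/kernel/mm/transparent_hugepage/enabled
# Disable THP at runtime (requires root)
echo never > /sys/kernel/mm/transparent_hugepage/enabled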



    Is it related to the fragment size?


    We have encountered slow requests with Transparent Huge Pages on 5.3.2.
ink_freelist_new (malloc) and ink_freelist_free (freelist init) sometimes
block for more than a second while holding the vol lock. We suspect this is
caused by defragmentation and a long search for contiguous memory larger than
the huge page size. Could anyone analyse this? And why does it happen during
malloc and freelist initialization?
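
    One way to test that guess, assuming a standard kernel with the usual
procfs/sysfs interfaces: check whether THP defragments synchronously, and
watch the compaction counters while the stalls occur. If compact_stall climbs
during the slow requests, direct compaction is a likely culprit.

# If this shows [always], allocations can block on synchronous defragmentation
cat /sys/kernel/mm/transparent_hugepage/defrag
# compact_stall counts allocations that had to wait for memory compaction
egrep 'compact_(stall|fail|success)' /proc/vmstat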


    When should we use HugePages versus Transparent Huge Pages?

——————————————
Best regards,
Lampo


 Original Message
Sender: Steve Malenfant<[email protected]>
Recipient: users<[email protected]>
Date: Tuesday, Sep 6, 2016 20:24
Subject: Huge Page sizing for traffic server

I'm testing 5.3.2 with huge pages enabled. Here's the setup:

CONFIG proxy.config.cache.ram_cache.size = INT 137438953472 (128GB)

Normally I would need 65536 pages (2048 kB each), but I added ~12 GB extra:
vm.nr_hugepages=72000
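
For reference, the arithmetic behind those numbers (a sketch; the ~12 GB of
headroom is my own margin, not a derived requirement):

# 128 GiB of RAM cache divided by the 2 MiB huge page size
echo $(( 137438953472 / (2048 * 1024) ))   # 65536 pages
# 72000 - 65536 = 6464 spare pages, i.e. 6464 * 2 MiB ~= 12.6 GiB of headroom
# Reserve at runtime; add vm.nr_hugepages to /etc/sysctl.conf to persist
sysctl -w vm.nr_hugepages=72000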

The results are good: no performance issues (on the current build I have),
and the PageTables size is hovering around 8,000 kB (PageTables: 8440 kB).
Before, this would go higher than 200,000 kB, creating memory management
issues every 32-33 minutes and causing a high load average on the server (on
code with the hugepages modifications).
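
The PageTables numbers line up with what the page sizes predict. A rough
worked estimate, ignoring upper-level tables and any incomplete mapping:

# Mapping 128 GiB with 4 KiB pages needs one 8-byte PTE per page
echo $(( 137438953472 / 4096 * 8 / 1024 ))      # 262144 kB of PTEs
# With 2 MiB huge pages, one 8-byte PMD entry per page is enough
echo $(( 137438953472 / 2097152 * 8 / 1024 ))   # 512 kB of entries

So dropping from the ~200,000 kB range to a few thousand kB is expected.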

After a weekend of load testing, just so the RAM cache is fully filled up, I
see that all 72,000 pages are in use:

[root@psp6cdedge03 ~]# cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
HugePages_Total:   72000
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

If you enable huge pages, you'll see that traffic server no longer reports
that memory as RES in top, which can be a bit confusing...

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
7386 ats       20   0  154g 1.0g  39m S 45.0  0.4   2590:29 [ET_NET 0]
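
If you want to confirm what the process actually holds despite top, a couple
of checks (the per-process HugetlbPages field only exists on 4.4+ kernels, so
treat that part as kernel-dependent):

# System-wide hugetlb accounting, same as the meminfo output above
grep Huge /proc/meminfo
# Per-process hugetlb usage, assuming the binary is named traffic_server
grep HugetlbPages /proc/$(pidof traffic_server)/status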

This build also has the NUMA fix; before, you'd see a bunch of numa_miss and
numa_foreign:

# numastat
                           node0           node1
numa_hit             25212626736     25457088463
numa_miss                      0               0
numa_foreign                   0               0
interleave_hit          12464334        12464369
local_node           25212625913     25444350193
other_node                   823        12738270


How many extra huge pages do I need? Are there other parameters that would
help me size the number of reserve huge pages I need?

Thanks,

Steve
