Wow, quick reply. :-)

I provided two sample machines in my original post. One of the two is a VM, the other one is physical.

The VM "db_another":

[user@db_another ~]$ sudo numastat
                           node0
numa_hit              8701301862
numa_miss                      0
numa_foreign                   0
interleave_hit             48993
local_node            8701301862
other_node                     0

and the physical "db":

[user@db ~]$ sudo numastat
                           node0           node1
numa_hit              8800569447     10675116172
numa_miss             2549915869      2282676844
numa_foreign          2282676844      2549915869
interleave_hit             39195           38959
local_node            8800434335     10675002137
other_node            2550050981      2282790879
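
In case it helps with diagnosis, I can also pull the per-node breakdown for
the mysqld process itself. A sketch, assuming the numactl package is installed
on these hosts:

[user@db ~]$ numactl --hardware
[user@db ~]$ sudo numastat -p mysqld

The first command shows the node topology and per-node free memory; the second
should show how mysqld's allocations are spread across the two nodes.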

I will read up on "swapping insanity".
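
If it does turn out to be that, the usual mitigation appears to be
interleaving the buffer pool across NUMA nodes; if I read the MariaDB docs
correctly, there is a setting for it. A minimal sketch (untested here, and the
drop-in file name is made up):

# /etc/my.cnf.d/numa.cnf
[mysqld]
innodb_numa_interleave = ON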

Thanks for the reply! Appreciated!

MJ

On 9/23/24 13:45, Gordan Bobic wrote:
How many CPU sockets? Is NUMA exposed? This could be MySQL/MariaDB
"swapping insanity".

On Mon, Sep 23, 2024 at 2:44 PM cyusedfzfb via discuss
<[email protected]> wrote:
Hi all!

New here; I signed up just now to discuss some interesting MariaDB behaviour we
are seeing: MariaDB unexpectedly using swap space.

Look at this example:

RHEL 8.10, running mariadb-server-utils.x86_64 3:10.5.22-1.module+el8.8.0+20134+a92c7654:

[user@db ~]$ free -g
               total        used        free      shared  buff/cache   available
Mem:            187           9          11           0         165         175
Swap:             3           3           0

and:

top - 08:52:46 up 39 days, 15:50,  2 users,  load average: 1.70, 2.05, 1.99
Tasks: 616 total,   2 running, 614 sleeping,   0 stopped,   0 zombie
%Cpu(s):  6.0 us,  0.8 sy,  0.0 ni, 88.2 id,  4.7 wa,  0.1 hi,  0.1 si,  0.0 st
MiB Mem : 191529.7 total,  11761.7 free,   9764.9 used, 170003.1 buff/cache
MiB Swap:   4096.0 total,   1006.1 free,   3089.9 used. 179429.5 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND     SWAP
1952811 mysql     20   0   14.4g   5.0g  15172 S  65.7   2.7   9663:14 mysqld      1.7g
2934963 root      20   0    5208   2056   1408 S   8.9   0.0  10:39.29 gzip           0
2934962 root      20   0   33612   8268   6960 S   7.9   0.0   2:35.31 mysqldump      0

while:

[user@db ~]$ cat /proc/sys/vm/swappiness
1

We could (we should...) allocate more RAM to MariaDB, but the point is: there is
plenty of RAM available, and there always has been since boot. And yet, MariaDB
is using nearly all of the swap space.

I guess this has no real performance impact, since the swapped pages are
probably not actually being used, but Zabbix does not like 100% swap usage. And
frankly, neither do I.
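
One way to verify that the swapped pages really are idle would be to watch the
swap-in/swap-out rates for a while:

[user@db ~]$ vmstat 5

If the si and so columns stay at or near zero, the pages are just parked in
swap and the cost should be purely cosmetic.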

Why? Can anyone here explain? With swappiness set to 1, is MariaDB somehow
ignoring it? Is there something else we can configure?
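
One knob I am considering, in case anyone here has experience with it: asking
mysqld to lock its memory with the memlock option. A sketch, not tested on our
setup (the drop-in file name is made up), and it presumably needs a matching
LimitMEMLOCK in a systemd override for mariadb.service:

# /etc/my.cnf.d/memlock.cnf
[mysqld]
memlock = 1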

I read some older posts on RAM/swap, and here is the data that was requested in
one of those older threads:

[user@db ~]$ sudo ps -o pid,vsz,comm `pgrep mysqld`
     PID    VSZ COMMAND
1952811 15117612 mysqld

and

[user@db ~]$ for file in /proc/*/status ; do awk '/VmSwap|Name/{printf $2 " " $3}END{print ""}' $file; done | sort -k 2 -n -r | grep mysql
mysqld 1982672 kB
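
As a cross-check, smaps_rollup should report the same per-process figure
(assuming a single mysqld process, and a kernel recent enough to have it, which
RHEL 8 should be):

[user@db ~]$ grep '^Swap:' /proc/$(pgrep -x mysqld)/smaps_rollup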


We are seeing similar behaviour on another machine, running a more recent
MariaDB build: 10.5.22-MariaDB MariaDB Server:

[user@db_another ~]$ free -g
               total        used        free      shared  buff/cache   available
Mem:             62          40           0           1          21          20
Swap:             4           3           1

Again: enough RAM is available, yet swap is 3/4 used, mostly by MariaDB.


I hope the information provided is enough. Let me know if there is anything else
I can provide. Looking forward to any insight or help on the matter. :-)

Thanks!
MJ


_______________________________________________
discuss mailing list -- [email protected]
To unsubscribe send an email to [email protected]