Hi Thierry,

We have followed the directions to investigate a memory leak, which occurs once we 
reach many hundreds of thousands of entries, and captured the log file. It shows 
warnings such as:
==26735== Warning: set address range perms: large range [0x59eaf000, 
0xb3067000) (defined)
==26735== Thread 18:
==26735== Conditional jump or move depends on uninitialized value(s)
…
==26735== Thread 30:
==26735== Syscall param pwrite64(buf) points to uninitialised byte(s)
==26735==    at 0x7998023: ??? (in /usr/lib64/libpthread-2.17.so)

and similar messages. Memory usage climbs to about 90% on our 16 GB servers and 
never comes back down.
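For reference, the log above was captured per the earlier directions; a minimal sketch of the kind of invocation typically used (INSTANCE and the log path are placeholders, and the exact options in those directions may differ):

    # stop the instance, then run ns-slapd in the foreground under valgrind
    systemctl stop dirsrv@INSTANCE
    valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes \
        --num-callers=40 --log-file=/tmp/slapd-valgrind.%p.log \
        /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-INSTANCE \
        -i /var/run/dirsrv/slapd-INSTANCE.pid -d 0

The "set address range perms: large range" line usually just reflects one very large allocation or mapping (for example a big database or entry cache) rather than a leak by itself.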

Thoughts and suggestions are much appreciated.

- Alex

From: Thierry Bordaz <tbor...@redhat.com>
Date: Tuesday, April 18, 2023 at 12:37 PM
To: "Nazarenko, Alexander" <alexander_nazare...@harvard.edu>, "General 
discussion list for the 389 Directory server project." 
<389-users@lists.fedoraproject.org>
Subject: Re: [389-users] 389 DS memory growth


Thanks for the update.

I failed to reproduce any significant growth with groups(100)/members(1000) 
provisioning, nor with searches returning 1000 person entries (bound as Directory 
Manager). We will wait for your profiling info.

regards
Thierry
On 4/18/23 18:12, Nazarenko, Alexander wrote:
This is understood, thank you. It is not a big concern for us, as our servers have 
at least 16 GB.
We are not using pbkdf2 either.
It is the heap growth above 20 GB (and beyond) that concerns us, driven by queries 
like (objectclass=person) hitting the server.
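A search of that shape can be reproduced from the command line; a sketch with placeholder host, bind DN and suffix (not our exact query):

    # return all person entries, then check slapd's resident size
    ldapsearch -x -H ldap://ldap.example.com -D "cn=Directory Manager" -W \
        -b "dc=example,dc=com" "(objectclass=person)" cn > /dev/null
    ps -o rss=,vsz= -p $(pidof ns-slapd)

Comparing the RSS before and after repeated runs of such a search is a quick way to see whether these queries alone drive the growth.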

In the near future we plan to profile a typical server for memory usage, and we 
will keep you posted.

- Alex

From: Thierry Bordaz <tbor...@redhat.com>
Date: Tuesday, April 18, 2023 at 11:47 AM
To: "General discussion list for the 389 Directory server project." 
<389-users@lists.fedoraproject.org><mailto:389-users@lists.fedoraproject.org>, 
"Nazarenko, Alexander" 
<alexander_nazare...@harvard.edu><mailto:alexander_nazare...@harvard.edu>
Subject: Re: [389-users] 389 DS memory growth


Hi,

Note that the initial memory footprint of a 1.3.11 instance is larger than that of 
a 1.3.10 instance.

On a RHEL 7.9 VM with 2 GB of RAM, a 1.3.11 instance uses about 1 GB while a 1.3.10 
instance uses about 0.5 GB. Both instances have the same DS tuning.

The difference comes from extra chunks of anonymous memory (heap) that are 
possibly related to the new Rust plugin handling pbkdf2_sha512.

00007ffb0812e000   64328       0       0 -----   [ anon ]
00007ffb0c000000    1204    1060    1060 rw---   [ anon ]
00007ffb0c12d000   64332       0       0 -----   [ anon ]
00007ffb10000000    1028    1028    1028 rw---   [ anon ]
00007ffb10101000   64508       0       0 -----   [ anon ]
00007ffb14000000    1020    1020    1020 rw---   [ anon ]
00007ffb140ff000   64516       0       0 -----   [ anon ]
00007ffb18000000    1024    1024    1024 rw---   [ anon ]
00007ffb18100000   64512       0       0 -----   [ anon ]
00007ffb1c000000    1044    1044    1044 rw---   [ anon ]
00007ffb1c105000   64492       0       0 -----   [ anon ]
00007ffb20000000     540     540     540 rw---   [ anon ]
00007ffb20087000   64996       0       0 -----   [ anon ]
00007ffb271ce000       4       0       0 -----   [ anon ]

This is just the initial memory footprint and does not explain regular growth.
Thanks to progier who raised that point.
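In case you want to compare on your side, a rough way to pull the same numbers from a running instance (just a sketch; it assumes a single ns-slapd process on the box):

    # per-mapping detail, same format as the listing above
    pmap -x $(pidof ns-slapd) | grep anon
    # sum the resident (RSS) column of the anonymous mappings, in kB
    pmap -x $(pidof ns-slapd) | awk '/anon/ { sum += $3 } END { print sum " kB anon RSS" }'

In pmap -x output the third numeric column is RSS, so the awk sum counts only resident pages; the large reserved-but-untouched ranges show up in the size column with 0 RSS, as in the listing above.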

regards
thierry
On 4/17/23 03:07, Nazarenko, Alexander wrote:
Hello colleagues,
On March 22nd we updated the 389-ds-base.x86_64 and 389-ds-base-libs.x86_64 
packages on our eight RHEL 7.9 production servers from version 
1.3.10.2-17.el7_9 to version 1.3.11.1-1.el7_9.  We also updated the kernel from 
kernel 3.10.0-1160.80.1.el7.x86_64 to kernel-3.10.0-1160.88.1.el7.x86_64 during 
the same update.
Approximately 12 days later, on April 3rd, all of the hosts started exhibiting 
memory growth whereby the “slapd” process was using over 90% of the 32 GB of 
available system memory, something that had NOT happened in the couple of years 
prior to applying these package updates.

Two of the eight hosts act as Primaries (formerly referred to as masters), while 
six act as read-only replicas.  Three of the read-only replicas are used by our 
authorization system and the other three are used by customer-facing applications.

Currently we use system controls to restrict the memory usage.
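For illustration, one common way to impose such a restriction is a systemd resource control on the instance unit (a sketch, not necessarily exactly what we have in place; INSTANCE and the 12G value are placeholders):

    # cap the instance's memory via its cgroup (cgroup v1 on RHEL 7 uses MemoryLimit=)
    systemctl set-property dirsrv@INSTANCE.service MemoryLimit=12G

Of course a cap only bounds the growth; once the limit is hit the process can be OOM-killed within its cgroup, so it is a stopgap rather than a fix.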

My question is whether other users are experiencing this as well, and what the 
recommended way is to stabilize the DS servers in this kind of situation.
Thanks,
- Alex






_______________________________________________
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue
