Re: Connections blocked for some tens of seconds while a single slapd thread running 100%

2020-11-11 Thread Simone Piccardi
On 05/11/20 16:26, Howard Chu wrote:
> Traces from a stripped binary are useless.
>
Using a non-stripped binary in production leads to some performance
problems, so it took a while to plan a deployment long enough to collect data.

I'm attaching two gstack traces taken during two different events. What
I see is one thread (the second one; here the slapd pid was 31267) in
epoll_wait, and all the others waiting on a futex except one, 31280. But I'm
not skilled enough to understand what that one is doing.
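
For what it's worth, the blocked worker threads visible in the first attached
trace are sitting in mdb_txn_renew0() under __pthread_mutex_lock_full(),
which, if I understand LMDB correctly, is its single writer mutex: only one
write transaction can be open at a time, and every other writer blocks inside
mdb_txn_begin() until it is released. A minimal sketch of that wait (just an
illustration, not the slapd code; it assumes liblmdb is installed and a
hypothetical ./testdb directory already exists; compile with -llmdb -lpthread):

/* Two threads trying to start a write transaction on the same LMDB
 * environment: the second one blocks on the writer mutex until the
 * first commits or aborts. */
#include <lmdb.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static MDB_env *env;

static void *writer(void *arg)
{
    MDB_txn *txn;
    /* Blocks here while another write transaction is open. */
    mdb_txn_begin(env, NULL, 0, &txn);
    printf("writer %ld holds the write lock\n", (long)arg);
    sleep(1);               /* keep the lock for a moment */
    mdb_txn_abort(txn);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    mdb_env_create(&env);
    mdb_env_set_mapsize(env, 1UL << 30);     /* 1 GiB map, example value */
    mdb_env_open(env, "./testdb", 0, 0644);  /* ./testdb must already exist */
    pthread_create(&t1, NULL, writer, (void *)1L);
    pthread_create(&t2, NULL, writer, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    mdb_env_close(env);
    return 0;
}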

We also had a much longer event (lasting some minutes) after the ones from
which the attached traces were taken. During this one all the threads were
straced, but we don't have a gstack trace.

This time we also got some different errors in the log, and I'm
attaching a redacted excerpt hoping they may be a useful clue. What we
found were messages like:

Nov 10 00:49:56 ldp-11 slapd[31267]: nonpresent_callback: rid=012
present UUID 258f12bd-b531-426e-8dc8-49263545db58, dn
cn=905719,cn=protected,o=ourorg

They started appearing around five seconds before most of the slapd threads
blocked on a futex (that happened near 00:50:03). After that there were
still a lot of messages in the logs (but only "nonpresent_callback" ones)
up to around 00:50:57; then nothing more until activity resumed (around
00:53:06).

From the strace we saw that the second thread (MAIN_PID+1, here 31268)
kept calling epoll_wait, with some activity at the beginning and then just
waking up every 2500 ms and doing nothing. Another thread was sending (with
sendto) a lot of messages to fd 3 (judging from how they begin, they seem to
be syslog messages) for about 50 seconds (the log is full of
"nonpresent_callback" during this time); then it too stopped on the same
futex as the other threads.
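
The 2500 ms wakeups by themselves look like a normal idle poll: an
epoll_wait() call with a timeout returns periodically even when nothing is
ready. A tiny sketch of that pattern (just an illustration, not slapd's
actual daemon loop; the 2500 ms value is simply what the strace shows):

#include <stdio.h>
#include <sys/epoll.h>

int main(void)
{
    int epfd = epoll_create1(0);
    struct epoll_event events[8];

    if (epfd < 0)
        return 1;
    for (int i = 0; i < 3; i++) {
        /* No ready fds: returns 0 after the 2500 ms timeout. */
        int n = epoll_wait(epfd, events, 8, 2500);
        printf("epoll_wait returned %d ready fds\n", n);
    }
    return 0;
}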

The only thread (apart from the epoll waiter) that never stopped was doing
just the following system calls:

00:50:05.384738 mprotect(0x7f23bb1bc000, 9613312, PROT_READ|PROT_WRITE) = 0
00:53:05.415500 msync(0x7f23e6e56000, 25769803776, MS_SYNC) = 0
00:53:06.114513 futex(0x7f29eae6e040, FUTEX_WAKE, 1) = 1

The FUTEX_WAKE was on the futex all the other threads were blocked on, and
after it they resumed working.
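
For reference, the msync() length (25,769,803,776 bytes) is exactly 24 GiB,
which matches the order of the configured 25G database maxsize, so that call
seems to cover the whole memory-mapped database. A minimal sketch of what
such a call does (hypothetical tiny file, not the slapd code): msync() with
MS_SYNC blocks until the dirty pages of a shared, file-backed mapping have
been written back to the file.

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, 4096) < 0)
        return 1;
    char *map = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED)
        return 1;
    memcpy(map, "dirty page", 10);
    /* Synchronous writeback of the mapped range, as in the trace above
     * (there over the whole ~24 GiB map). */
    msync(map, 4096, MS_SYNC);
    munmap(map, 4096);
    close(fd);
    return 0;
}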

I hope this could be enough to pin down the problem.

Simone
-- 
Simone Piccardi                        Truelite Srl
picca...@truelite.it (email/jabber)    Via Monferrato, 6
Tel. +39-347-1032433                   50142 Firenze
http://www.truelite.it                 Tel. +39-055-7879597
Thread 18 (Thread 0x7f23e6e55700 (LWP 31268)):
#0  0x7f29e96baed3 in epoll_wait () from /lib64/libc.so.6
#1  0x0042944e in slapd_daemon_task (ptr=0x151fbb0) at daemon.c:2596
#2  0x7f29e9991e75 in start_thread () from /lib64/libpthread.so.0
#3  0x7f29e96ba8fd in clone () from /lib64/libc.so.6
Thread 17 (Thread 0x7f23e6654700 (LWP 31269)):
#0  0x7f29e998fa5b in __pthread_mutex_lock_full () from 
/lib64/libpthread.so.0
#1  0x004f4584 in mdb_txn_renew0 (txn=0x151cd40) at 
./../../../libraries/liblmdb/mdb.c:2752
#2  0x004f4c7b in mdb_txn_begin (env=0x141a060, parent=0x0, 
flags=524288, ret=0x7f23e66522e0) at ./../../../libraries/liblmdb/mdb.c:2910
#3  0x0055d715 in mdb_opinfo_get (op=0x7f23dc17ca70, 
mdb=0x7f29ead2d010, rdonly=0, moip=0x7f23e66522c8) at id2entry.c:470
#4  0x00506e05 in mdb_modify (op=0x7f23dc17ca70, rs=0x7f23e66539e0) at 
modify.c:511
#5  0x004bfaf8 in overlay_op_walk (op=0x7f23dc17ca70, 
rs=0x7f23e66539e0, which=op_modify, oi=0x136a4f0, on=0x0) at backover.c:677
#6  0x004bfd1c in over_op_func (op=0x7f23dc17ca70, rs=0x7f23e66539e0, 
which=op_modify) at backover.c:730
#7  0x004bfe50 in over_op_modify (op=0x7f23dc17ca70, rs=0x7f23e66539e0) 
at backover.c:769
#8  0x0044d666 in fe_op_modify (op=0x7f23dc17ca70, rs=0x7f23e66539e0) 
at modify.c:303
#9  0x0044cf48 in do_modify (op=0x7f23dc17ca70, rs=0x7f23e66539e0) at 
modify.c:177
#10 0x0042d756 in connection_operation (ctx=0x7f23e6653b10, 
arg_v=0x7f23dc17ca70) at connection.c:1175
#11 0x0042dceb in connection_read_thread (ctx=0x7f23e6653b10, 
argv=0x4a1) at connection.c:1311
#12 0x0058a4a2 in ldap_int_thread_pool_wrapper (xpool=0x12fb090) at 
tpool.c:696
#13 0x7f29e9991e75 in start_thread () from /lib64/libpthread.so.0
#14 0x7f29e96ba8fd in clone () from /lib64/libc.so.6
Thread 16 (Thread 0x7f23e5e53700 (LWP 31272)):
#0  0x7f29e998fa5b in __pthread_mutex_lock_full () from 
/lib64/libpthread.so.0
#1  0x004f4584 in mdb_txn_renew0 (txn=0x151cd40) at 
./../../../libraries/liblmdb/mdb.c:2752
#2  0x004f4c7b in mdb_txn_begin (env=0x141a060, parent=0x0, 
flags=524288, ret=0x7f23e5e512e0) at ./../../../libraries/liblmdb/mdb.c:2910
#3  0x0055d715 in mdb_opinfo_get (op=0x7f23dc022b50, 
mdb=0x7f29ead2d010, rdonly=0, moip=0x7f23e5e512c8) at id2entry.c:470
#4  0x00506e05 in mdb_modify (op=0x7f23dc022b50, rs=0x7f23e5e529e0) at 

Antw: [EXT] Re: Connections blocked for some tens of seconds while a single slapd thread running 100%

2020-11-06 Thread Ulrich Windl
>>> Simone Piccardi wrote on 05.11.2020 at 16:17 in message
<5a6d778a-b75b-3027-3a88-f5507c839...@truelite.it>:
> On 03/11/20 22:49, Quanah Gibson-Mount wrote:
> 
>>> The problem manifests itself without periodicity, and looking at the
>>> number of connections before it we could not see any usage peak. We tried
>>> to strace the slapd threads during the problem, and they seem blocked on a
>>> mutex, waiting for the one running at 100% (on a single CPU, in user time).
>>> I'm attaching the output of top taken during one of these events.
>> 
>> If you can attach to the process while this is occurring, I'd suggest
>> obtaining a full GDB backtrace to see what the different slapd threads
>> are doing at that time.  Also, what mutex specifically is slapd waiting on?
>> 
> I executed gstack on the slapd pid during one such event and saved the
> output; the traces are attached, but the running slapd is stripped, so they
> are quite obscure (at least to me).

I think even when stripped, you could "re-attach" the symbols (given that you
saved them before stripping). For some distributions, such symbol (debug)
packages are available for install. I don't know about your package source,
however.

> 
> We are trying to put a non-stripped version (compiled with CFLAGS="-g" and
> --enable-debug=yes) into use for a test, but that's a production machine,
> and it will take a while.
> 
> What should I do to find out which mutex it is? In the straces they are
> identified just by a number.
> 
>>> So a first question is: is there any other configuration parameter
>>> related to indexing that I can try?
>> 
>> If you really believe that this is indexing related, you should be able
>> to tell this from the slapd logs at "stats" logging, where you would see
>> a specific search taking a significant amount of time.  However that
>> generally does not lead to a system that's paused as searches shouldn't
>> trigger a mutex issue like what you're describing.
>> 
> No, it's not that I believe that; as I said, it was just a guess about
> something that could need full CPU for tens of seconds while blocking all
> other operations. But from what you are saying, the guess is probably
> plain wrong.
> 
>> Is this on RHEL7 or later?  If you have both "stats" and "sync" logging
>> enabled (the recommended setting for replicating nodes), what does the
>> slapd log show is happening at this time?
> 
> The server is running an updated version of Amazon Linux (Amazon Linux
> AMI 2018.03).
> 
> We enabled stats and sync logging, and I'm attaching a redacted excerpt
> from around the incident time, when I also took the gstack.txt (done at
> 00:39:04) and gstack2.txt (done at 00:39:20) backtraces. But during that
> time there is no data.
> 
> Simone




Re: Connections blocked for some tens of seconds while a single slapd thread running 100%

2020-11-05 Thread Simone Piccardi
On 03/11/20 22:49, Quanah Gibson-Mount wrote:

>> The problem manifests itself without periodicity, and looking at the
>> number of connections before it we could not see any usage peak. We tried
>> to strace the slapd threads during the problem, and they seem blocked on a
>> mutex, waiting for the one running at 100% (on a single CPU, in user time).
>> I'm attaching the output of top taken during one of these events.
> 
> If you can attach to the process while this is occurring, I'd suggest
> obtaining a full GDB backtrace to see what the different slapd threads
> are doing at that time.  Also, what mutex specifically is slapd waiting on?
> 
I executed gstack on the slapd pid during one such event and saved the
output; the traces are attached, but the running slapd is stripped, so they
are quite obscure (at least to me).

We are trying to put a non-stripped version (compiled with CFLAGS="-g" and
--enable-debug=yes) into use for a test, but that's a production machine,
and it will take a while.

What should I do to find out which mutex it is? In the straces they are
identified just by a number.

>> So a first question is: is there any other configuration parameter
>> related to indexing that I can try?
> 
> If you really believe that this is indexing related, you should be able
> to tell this from the slapd logs at "stats" logging, where you would see
> a specific search taking a significant amount of time.  However that
> generally does not lead to a system that's paused as searches shouldn't
> trigger a mutex issue like what you're describing.
> 
No, it's not that I believe that; as I said, it was just a guess about
something that could need full CPU for tens of seconds while blocking all
other operations. But from what you are saying, the guess is probably
plain wrong.

> Is this on RHEL7 or later?  If you have both "stats" and "sync" logging
> enabled (the recommended setting for replicating nodes), what does the
> slapd log show is happening at this time?

The server is running an updated version of Amazon Linux (Amazon Linux
AMI 2018.03).

We enabled stats and sync logging, and I'm attaching a redacted excerpt
from around the incident time, when I also took the gstack.txt (done at
00:39:04) and gstack2.txt (done at 00:39:20) backtraces. But during that
time there is no data.

Simone
Thread 18 (Thread 0x7f1718b84700 (LWP 23491)):
#0  0x7f1d1b3e9ed3 in epoll_wait () from /lib64/libc.so.6
#1  0x00420cc4 in ?? ()
#2  0x7f1d1b6c0e75 in start_thread () from /lib64/libpthread.so.0
#3  0x7f1d1b3e98fd in clone () from /lib64/libc.so.6
Thread 17 (Thread 0x7f1713fff700 (LWP 23492)):
#0  0x7f1d1b6bea5b in __pthread_mutex_lock_full () from 
/lib64/libpthread.so.0
#1  0x004c2652 in ?? ()
#2  0x004c3524 in ?? ()
#3  0x0051b3ff in ?? ()
#4  0x004d1b1e in ?? ()
#5  0x0049a592 in ?? ()
#6  0x0049a6ce in ?? ()
#7  0x00440ea7 in ?? ()
#8  0x00442e7b in ?? ()
#9  0x00425e54 in ?? ()
#10 0x0042612a in ?? ()
#11 0x00541a40 in ?? ()
#12 0x7f1d1b6c0e75 in start_thread () from /lib64/libpthread.so.0
#13 0x7f1d1b3e98fd in clone () from /lib64/libc.so.6
Thread 16 (Thread 0x7f17137fe700 (LWP 23494)):
#0  0x004c3b93 in ?? ()
#1  0x004c718f in ?? ()
#2  0x004cc8ae in ?? ()
#3  0x004cfab6 in ?? ()
#4  0x0051a462 in ?? ()
#5  0x004d22c7 in ?? ()
#6  0x0049a592 in ?? ()
#7  0x0049a6ce in ?? ()
#8  0x00440ea7 in ?? ()
#9  0x00442e7b in ?? ()
#10 0x00425e54 in ?? ()
#11 0x0042612a in ?? ()
#12 0x00541a40 in ?? ()
#13 0x7f1d1b6c0e75 in start_thread () from /lib64/libpthread.so.0
#14 0x7f1d1b3e98fd in clone () from /lib64/libc.so.6
Thread 15 (Thread 0x7f1712ffd700 (LWP 23495)):
#0  0x7f1d1b6bea5b in __pthread_mutex_lock_full () from 
/lib64/libpthread.so.0
#1  0x004c2652 in ?? ()
#2  0x004c3524 in ?? ()
#3  0x0051b3ff in ?? ()
#4  0x004d1b1e in ?? ()
#5  0x0049a592 in ?? ()
#6  0x0049a6ce in ?? ()
#7  0x00440ea7 in ?? ()
#8  0x00442e7b in ?? ()
#9  0x00425e54 in ?? ()
#10 0x0042612a in ?? ()
#11 0x00541a40 in ?? ()
#12 0x7f1d1b6c0e75 in start_thread () from /lib64/libpthread.so.0
#13 0x7f1d1b3e98fd in clone () from /lib64/libc.so.6
Thread 14 (Thread 0x7f17127fc700 (LWP 23496)):
#0  0x7f1d1b6bea5b in __pthread_mutex_lock_full () from 
/lib64/libpthread.so.0
#1  0x004c2652 in ?? ()
#2  0x004c3524 in ?? ()
#3  0x0051b3ff in ?? ()
#4  0x004d1b1e in ?? ()
#5  0x0049a592 in ?? ()
#6  0x0049a6ce in ?? ()
#7  0x00440ea7 in ?? ()
#8  0x00442e7b in ?? ()
#9  0x00425e54 in ?? ()
#10 0x0042612a in ?? ()
#11 0x00541a40 in ?? ()
#12 0x7f1d1b6c0e75 in start_thread () from /lib64/libpthread.so.0
#13 0x7f1d1b3e98fd in clone () from /lib64/libc.so.6
Thread 

RE: Connections blocked for some tens of seconds while a single slapd thread running 100%

2020-11-05 Thread Maucci, Cyrille
Your gstacks are mostly useless, because you probably launched them from a
directory that did not allow the lib paths to be resolved; therefore, instead
of seeing function names, we see "??".

++Cyrille

-Original Message-
From: Simone Piccardi [mailto:picca...@truelite.it] 
Sent: Thursday, November 5, 2020 4:17 PM
To: openldap-technical@openldap.org
Subject: Re: Connections blocked for some tens of seconds while a single slapd 
thread running 100%

On 03/11/20 22:49, Quanah Gibson-Mount wrote:

>> The problem manifests itself without periodicity, and looking at the
>> number of connections before it we could not see any usage peak. We
>> tried to strace the slapd threads during the problem, and they seem
>> blocked on a mutex, waiting for the one running at 100% (on a single CPU,
>> in user time).
>> I'm attaching the output of top taken during one of these events.
> 
> If you can attach to the process while this is occurring, I'd suggest 
> obtaining a full GDB backtrace to see what the different slapd threads 
> are doing at that time.  Also, what mutex specifically is slapd waiting on?
> 
I executed gstack on the slapd pid during one such event and saved the output;
the traces are attached, but the running slapd is stripped, so they are quite
obscure (at least to me).

We are trying to put a non-stripped version (compiled with CFLAGS="-g" and
--enable-debug=yes) into use for a test, but that's a production machine, and
it will take a while.

What should I do to find out which mutex it is? In the straces they are
identified just by a number.

>> So a first question is: is there any other configuration parameter
>> related to indexing that I can try?
> 
> If you really believe that this is indexing related, you should be 
> able to tell this from the slapd logs at "stats" logging, where you 
> would see a specific search taking a significant amount of time.  
> However that generally does not lead to a system that's paused as 
> searches shouldn't trigger a mutex issue like what you're describing.
> 
No, it's not that I believe that; as I said, it was just a guess about
something that could need full CPU for tens of seconds while blocking all
other operations. But from what you are saying, the guess is probably plain
wrong.

> Is this on RHEL7 or later?  If you have both "stats" and "sync" 
> logging enabled (the recommended setting for replicating nodes), what 
> does the slapd log show is happening at this time?

The server is running an updated version of Amazon Linux (Amazon Linux AMI 
2018.03).

We enabled stats and sync logging, and I'm attaching a redacted excerpt from
around the incident time, when I also took the gstack.txt (done at 00:39:04)
and gstack2.txt (done at 00:39:20) backtraces. But during that time there is
no data.

Simone


Re: Connections blocked for some tens of seconds while a single slapd thread running 100%

2020-11-05 Thread Howard Chu
Simone Piccardi wrote:
> On 03/11/20 22:49, Quanah Gibson-Mount wrote:
> 
>>> The problem manifests itself without periodicity, and looking at the
>>> number of connections before it we could not see any usage peak. We tried
>>> to strace the slapd threads during the problem, and they seem blocked on a
>>> mutex, waiting for the one running at 100% (on a single CPU, in user time).
>>> I'm attaching the output of top taken during one of these events.
>>
>> If you can attach to the process while this is occurring, I'd suggest
>> obtaining a full GDB backtrace to see what the different slapd threads
>> are doing at that time.  Also, what mutex specifically is slapd waiting on?
>>
> I executed gstack on the slapd pid during one such event and saved the
> output; the traces are attached, but the running slapd is stripped, so they
> are quite obscure (at least to me).

Traces from a stripped binary are useless.

> We are trying to put a non-stripped version (compiled with CFLAGS="-g" and
> --enable-debug=yes) into use for a test, but that's a production machine,
> and it will take a while.

-- 
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/


Re: Connections blocked for some tens of seconds while a single slapd thread running 100%

2020-11-03 Thread Quanah Gibson-Mount




--On Tuesday, November 3, 2020 6:41 PM +0100 Simone Piccardi wrote:




> The problem manifests itself without periodicity, and looking at the
> number of connections before it we could not see any usage peak. We tried
> to strace the slapd threads during the problem, and they seem blocked on a
> mutex, waiting for the one running at 100% (on a single CPU, in user time).
> I'm attaching the output of top taken during one of these events.


If you can attach to the process while this is occurring, I'd suggest 
obtaining a full GDB backtrace to see what the different slapd threads are 
doing at that time.  Also, what mutex specifically is slapd waiting on?



> From the behaviour I was suspecting (just a wild and uninformed guess)
> some indexing issue blocking all access.
>
> We tried changing tool-threads to 4 because I found it cited in some
> examples as related to the threads used for indexing, but the change had
> no effect. Re-reading the latest version of the man page, if I understand
> it correctly, it is effective only for slapadd etc.


Correct, this setting has nothing to do with a running slapd process.  It 
only affects how many threads are used by slapadd & slapindex while doing 
indexing during offline operations.  Additionally, any value above 2 has no 
impact with back-mdb (it'll just be set back to 2).



> So a first question is: is there any other configuration parameter
> related to indexing that I can try?


If you really believe that this is indexing related, you should be able to 
tell this from the slapd logs at "stats" logging, where you would see a 
specific search taking a significant amount of time.  However that 
generally does not lead to a system that's paused as searches shouldn't 
trigger a mutex issue like what you're describing.


Is this on RHEL7 or later?  If you have both "stats" and "sync" logging 
enabled (the recommended setting for replicating nodes), what does the 
slapd log show is happening at this time?


Regards,
Quanah

--

Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:



RE: Connections blocked for some tens of seconds while a single slapd thread running 100%

2020-11-03 Thread Maucci, Cyrille
If I were facing this symptom, I'd capture a couple of pstack  
outputs when the problem is occurring (and maybe correlate with perf top -p 
 if pstacks are not enough).
That should help avoid guessing.

++Cyrille

-Original Message-
From: Simone Piccardi [mailto:picca...@truelite.it] 
Sent: Tuesday, November 3, 2020 6:41 PM
To: openldap-technical@openldap.org
Subject: Connections blocked for some tens of seconds while a single slapd 
thread running 100%

Hi,

we have a quite strange behaviour in which a slapd server stops processing 
connections for some tens of seconds while a single thread runs at 100% on a 
single CPU and all the other CPUs are almost idle.
When the problem arises there is no significant iowait or disk I/O (and no 
swapping; it is disabled). Context switches drop to nearly zero (from some tens 
of thousands to some hundreds). Load average is almost always under 2.

The server has 32G of RAM and 4 HT processors, and is running
openldap-2.4.54 in mirror mode (but no delta replication) using the mdb 
backend. The same behaviour was also seen with 2.4.53. OpenLDAP is the only 
service running on it, apart from SSH and some monitoring tools.
The database maxsize is 25G, of which around 17G are used.

I'm attaching a redacted configuration of the main server (the secondary one is 
the same, with the IDs reversed for mirror-mode use).

Most of the time it works just fine, processing up to a few thousand read 
queries per second along with some tens of writes per second.
Connections are managed by HAProxy, which sends them to this server by default 
(it is used as the main node). Many of these stops are short (around 10 
seconds) and we don't lose connections, but when the problem arises and lasts 
long enough, HAProxy switches to the second node and we get downtime. Staying 
on the secondary node we see the same behaviour.

The problem manifests itself without periodicity, and looking at the number of 
connections before it we could not see any usage peak. We tried to strace the 
slapd threads during the problem, and they seem blocked on a mutex, waiting for 
the one running at 100% (on a single CPU, in user time).
I'm attaching the output of top taken during one of these events.

From the behaviour I was suspecting (just a wild and uninformed guess) some 
indexing issue blocking all access.

We tried changing tool-threads to 4 because I found it cited in some examples 
as related to the threads used for indexing, but the change had no effect. 
Re-reading the latest version of the man page, if I understand it correctly, 
it is effective only for slapadd etc.

So a first question is: is there any other configuration parameter related to 
indexing that I can try?

Anyway, I'm not sure there really is an indexing issue (the indexes are quite 
basic). I was suspecting it because there are a lot of writes, and there is no 
strace activity during the stall. Should I look somewhere else?

Any suggestions on further checks or configuration changes would be more than 
appreciated.

Regards
Simone


Connections blocked for some tens of seconds while a single slapd thread running 100%

2020-11-03 Thread Simone Piccardi
Hi,

we have a quite strange behaviour in which a slapd server stops
processing connections for some tens of seconds while a single thread runs
at 100% on a single CPU and all the other CPUs are almost idle.
When the problem arises there is no significant iowait or disk I/O (and
no swapping; it is disabled). Context switches drop to nearly zero (from
some tens of thousands to some hundreds). Load average is almost always
under 2.

The server has 32G of RAM and 4 HT processors, and is running
openldap-2.4.54 in mirror mode (but no delta replication) using the mdb
backend. The same behaviour was also seen with 2.4.53. OpenLDAP is the
only service running on it, apart from SSH and some monitoring tools.
The database maxsize is 25G, of which around 17G are used.

I'm attaching a redacted configuration of the main server (the secondary
one is the same, with the IDs reversed for mirror-mode use).

Most of the time it works just fine, processing up to a few thousand
read queries per second along with some tens of writes per second.
Connections are managed by HAProxy, which sends them to this server by
default (it is used as the main node). Many of these stops are short
(around 10 seconds) and we don't lose connections, but when the problem
arises and lasts long enough, HAProxy switches to the second node and we
get downtime. Staying on the secondary node we see the same behaviour.

The problem manifests itself without periodicity, and looking at the
number of connections before it we could not see any usage peak. We tried
to strace the slapd threads during the problem, and they seem blocked on a
mutex, waiting for the one running at 100% (on a single CPU, in user time).
I'm attaching the output of top taken during one of these events.

From the behaviour I was suspecting (just a wild and uninformed guess)
some indexing issue blocking all access.

We tried changing tool-threads to 4 because I found it cited in some
examples as related to the threads used for indexing, but the change had
no effect. Re-reading the latest version of the man page, if I understand
it correctly, it is effective only for slapadd etc.

So a first question is: is there any other configuration parameter related
to indexing that I can try?

Anyway, I'm not sure there really is an indexing issue (the indexes are
quite basic). I was suspecting it because there are a lot of writes, and
there is no strace activity during the stall. Should I look somewhere else?

Any suggestions on further checks or configuration changes would be more
than appreciated.

Regards
Simone

#
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
#

include /usr/local/openldap/etc/openldap/schema/corba.schema
include /usr/local/openldap/etc/openldap/schema/core.schema
include /usr/local/openldap/etc/openldap/schema/cosine.schema
include /usr/local/openldap/etc/openldap/schema/duaconf.schema
include /usr/local/openldap/etc/openldap/schema/dyngroup.schema
include /usr/local/openldap/etc/openldap/schema/inetorgperson.schema
include /usr/local/openldap/etc/openldap/schema/java.schema
include /usr/local/openldap/etc/openldap/schema/misc.schema
include /usr/local/openldap/etc/openldap/schema/nis.schema
include /usr/local/openldap/etc/openldap/schema/openldap.schema
include /usr/local/openldap/etc/openldap/schema/ppolicy.schema
include /usr/local/openldap/etc/openldap/schema/collective.schema

#add OurOrganization schema
include /usr/local/openldap/etc/openldap/schema/OurOrganization.schema

# Allow LDAPv2 client connections.  This is NOT the default.
allow bind_v2

# This is for mirrormode replication
serverID 11

# Global ACLs
include /usr/local/openldap/etc/openldap/acls/global.acl

# Do not enable referrals until AFTER you have a working directory
# service AND an understanding of referrals.
#referral   ldap://root.openldap.org

pidfile  /usr/local/openldap/var/run/slapd.pid
argsfile /usr/local/openldap/var/run/slapd.args

# options: none sync parse shell stats2 stats ACL config filter BER conns args packets trace any
# https://www.openldap.org/doc/admin24/slapdconfig.html
#loglevel none
#loglevel stats sync
loglevel stats
#loglevel none
#loglevel any


# The next three lines allow use of TLS for encrypting connections using a
# dummy test certificate which you can generate by running
# /usr/libexec/openldap/generate-server-cert.sh. Your client software may balk
# at self-signed certificates, however.
TLSCACertificatePath /usr/local/openldap/etc/openldap/certs
TLSCACertificateFile /usr/local/openldap/etc/openldap/certs/rootCA.pem
TLSCertificateFile /usr/local/openldap/etc/openldap/certs/server.crt
TLSCertificateKeyFile /usr/local/openldap/etc/openldap/certs/server.key


#TLSCertificateFile /etc/pki/tls/certs/ldap1_pubkey.pem
#TLSCertificateKeyFile /etc/pki/tls/certs/ldap1_privkey.pem

sizelimit 25

# Setup the idle timeout to prevent app servers from taking down ldap.
#