Hey,
thanks so much for your answers.
>> When restarting dirsrv we find in logs:
>>
>> libdb: BDB2034 unable to allocate memory for mutex; resize mutex region
>> mmap in opening database environment failed trying to allocate 50
>> bytes. (OS err 12 - Cannot allocate memory)
>>
>> Same error,
Hi all,
suddenly one of our LDAP servers crashed and doesn't restart.
When restarting dirsrv we find in logs:
libdb: BDB2034 unable to allocate memory for mutex; resize mutex region
mmap in opening database environment failed trying to allocate 50
bytes. (OS err 12 - Cannot allocate memory)
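For context, BDB2034 is libdb complaining that its mutex region is too small, and OS err 12 means the mmap itself was refused, so host memory pressure (or a guest shrunk/moved by migration) is worth ruling out first. libdb reads a DB_CONFIG file from the database environment directory when the environment is opened; a hedged sketch, with path and value as assumptions to verify for your instance:

```
# /var/lib/dirsrv/slapd-<instance>/db/DB_CONFIG  (hypothetical path and value)
mutex_set_max 1000000
```

DB_CONFIG is re-read at environment open, i.e. on the next dirsrv restart.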
S
On 13.12.2018 at 21:44, David Boreham wrote:
> It could be something like: the VM host changed (the guest may have been
> migrated live) such that the physical memory is much larger. This
> combined with the situation I mentioned earlier where the cache size is
> computed from the host physical memory n
Hi David,
thanks for the answer,
On 13.12.2018 at 20:53, David Boreham wrote:
>
> On 12/13/2018 12:30 PM, Jan Kowalsky wrote:
>>
>> after dirsrv crashed, when trying to restart, I got the following errors
>> and dirsrv doesn't start at all:
>>
>> [13/Dec/201
Hi all,
after dirsrv crashed, when trying to restart, I got the following errors
and dirsrv doesn't start at all:
[13/Dec/2018:20:17:28 +0100] - 389-Directory/1.3.3.5 B2018.298.1116
starting up
[13/Dec/2018:20:17:28 +0100] - Detected Disorderly Shutdown last time
Directory Server was running, recov
Hi all,
I just wanted to say that the high I/O rates were gone more or less
spontaneously. Probably they were related to multimaster replication
and writing data to the changelogdb. But we could not track down the
reason. After one restart, the high I/O rates of one of the three
suppliers in a mul
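If the writes really were dominated by the replication changelog, trimming its retention is the usual lever. A hedged sketch for the 1.3.x-era changelog entry (DN and retention value are assumptions to check against the deployed version):

```ldif
dn: cn=changelog5,cn=config
changetype: modify
replace: nsslapd-changelogmaxage
nsslapd-changelogmaxage: 7d
```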
Hi Rob,
thanks for the answer.
On 21.08.2018 at 14:05, Rob Crittenden wrote:
> Jan Kowalsky wrote:
>> Hi all,
>>
>> we are still struggling with multiple problems in our 389-ds setup.
>>
>> I already opened the threads "disk i/o: very high write rates"
Hi all,
we are still struggling with multiple problems in our 389-ds setup.
I already opened the threads "disk i/o: very high write rates" and
"ldapsearch performance problem".
Another (the biggest) problem is that dirsrv suddenly becomes totally
unresponsive. The server is still running, no erro
Hi all, thanks for answering,
it took some time until I came back to this problem.
On 09.08.2018 at 10:44, Ludwig Krispenz wrote:
>
> On 08/09/2018 01:53 AM, William Brown wrote:
>> Sadly this doesn't tell us much :(
> we could get a pstack along with iotop to see which threads do the IO,
un
Oops, the mail wasn't ready yet - I sent it by accident.
Hi all,
I'm running a set of three 389-ds servers with about 50 databases with
replication on each server.
Now I'm encountering a constant, very high disk write rate (about 300
write IOs/sec on average).
In the audit log there is nothing that w
Hi all,
I'm running a set of three 389-ds servers with about 50 databases with
replication on each server.
Now I'm encountering a constant, very high disk write rate (about 300
write IOs/sec).
In the audit log there is nothing that would explain this. But in iotop
I see a lot of threads like:
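iotop's batch mode (`iotop -botk -d 5`) makes such per-thread write rates easy to capture and total offline. A self-contained sketch over a fabricated excerpt; the column position of the write rate is an assumption about the capture layout and should be checked against a real sample:

```shell
# Fabricated iotop-style excerpt: TID PRIO USER READ-RATE WRITE-RATE ... COMMAND
cat > /tmp/iotop_sample.log <<'EOF'
1621 be/4 dirsrv 0.00 K/s 120.00 K/s 0.00 % 3.00 % ns-slapd -D /etc/dirsrv/slapd-x
1622 be/4 dirsrv 0.00 K/s 200.00 K/s 0.00 % 5.00 % ns-slapd -D /etc/dirsrv/slapd-x
EOF
# Field 6 holds the write rate in K/s in this layout; sum it across threads.
awk '{ sum += $6 } END { printf "total write: %.2f K/s\n", sum }' /tmp/iotop_sample.log
```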
1621
Hi Marc,
thanks for the help.
On 15.06.2018 at 22:50, Mark Reynolds wrote:
>
> You did not run logconv.pl the way I requested, can you please run it
> again this way:
>
> logconv.pl -ulatn
I omitted the detailed searches because there is user data in them...
but this one here doesn't look like that
Hi David,
thanks for the answer
On 15.06.2018 at 22:15, David Boreham wrote:
>
>
> On 6/15/2018 2:04 PM, Jan Kowalsky wrote:
>>
>>
>> What I can see are a lot of unindexed component queries, most of them
>> like:
>>
>> [15/Jun/2018:21:51:14 +0200]
Hi Marc,
thanks for the answer, and the hint about indexes.
On 15.06.2018 at 00:53, Mark Reynolds wrote:
> Can we see your access log showing the slow searches? Are they
> unindexed? If you have unindexed searches they will bog down the entire
Sending the whole log is difficult because of privacy, because it's
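Mark's question ("are they unindexed?") can be answered without sharing user data: 389-ds tags unindexed operations with notes=A or notes=U on the RESULT lines of the access log. A self-contained sketch with fabricated log lines (the real log normally lives under /var/log/dirsrv/slapd-<instance>/):

```shell
# Fabricated access-log excerpt; notes=U marks an unindexed search result.
cat > /tmp/sample_access.log <<'EOF'
[15/Jun/2018:21:51:14 +0200] conn=12 op=3 RESULT err=0 tag=101 nentries=1 etime=1.42 notes=U
[15/Jun/2018:21:51:15 +0200] conn=13 op=1 RESULT err=0 tag=101 nentries=1 etime=0.001
EOF
# Count unindexed results; no filters or user data need to leave the server.
grep -c 'notes=[AU]' /tmp/sample_access.log
```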
Jan Kowalsky wrote:
> Hi all,
>
> while moving 389ds server to another machine (and another version) I
> realize performance issues during ldapsearch.
>
> Normally a query is quite quick (about 20 ms), but sometimes (like every
> five seconds) it hangs for one or even several
Hi Mark,
On 01.06.2018 at 15:24, Mark Reynolds wrote:
> You have 50 backends on a single instance of DS?
It's a Kolab groupware setup with about 50 domains. Having a backend for
each domain is the default setup in Kolab. Maybe not ideal, e.g. in
this case.
> Also, this is something you co
Hi Viktor,
Thanks for the hint.
On 01.06.2018 at 12:16, Viktor Ashirov wrote:
> Hi,
> It's possible to regenerate encryption keys from the new certificate:
> https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html/administration_guide/updating_the_tls_certificates_used_for
Hi all,
we have the following situation: a 389ds with TLS/SSL configured with
a certificate from Let's Encrypt.
Since Let's Encrypt certificates are short-lived, we have an automated
update routine for regenerating the cert8.db.
Now we have this sort of error in the changelog.
[01/Jun/2018:11:46:40 +0200] attrcry
Hi all,
while moving a 389ds server to another machine (and another version) I
noticed performance issues during ldapsearch.
Normally a query is quite quick (about 20 ms), but sometimes (like every
five seconds) it hangs for one or even several seconds.
I test this with:
time ldapsearch -h loca
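To confirm that the hang really recurs every few seconds, sampling the latency in a loop is more telling than a single `time ldapsearch`. A minimal sketch; `true` is a placeholder for the actual query, and the host/base in the comment are made up:

```shell
# Swap 'true' for the real query, e.g.:
#   ldapsearch -x -h localhost -b "dc=example,dc=org" "(uid=someuser)"
for i in 1 2 3; do
  start=$(date +%s%N)
  true   # placeholder for the actual ldapsearch call
  end=$(date +%s%N)
  echo "run $i: $(( (end - start) / 1000000 )) ms"
done
```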
Hi William,
thanks a lot for the clarification.
On 13.11.2017 at 00:42, William Brown wrote:
> On Sun, 2017-11-12 at 23:06 +0100, Jan Kowalsky wrote:
>> In some comments I read that it's generally discouraged to use ACIs
>> with "not" logic like:
>>
>
Hi all,
after reading posts on the lists regarding ACIs, I was wondering what
would be the preferred way to grant access to the directory only for
hosts in our own network.
In some comments I read that it's generally discouraged to use ACIs
with "not" logic like:
ip != 10.0.0.*
or something li
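The usual advice is to state the rule positively instead: ACI evaluation is default-deny, so an explicit allow bound to the trusted network avoids the pitfalls of ip != negation. A hedged sketch of such an ACI (ACL name, attribute target, and network are placeholders):

```
aci: (targetattr = "*")(version 3.0; acl "allow own network";
 allow (read, search, compare)
 (userdn = "ldap:///anyone") and (ip = "10.0.0.*");)
```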
On Tuesday, 18 February 2014, Jan Kowalsky wrote:
> On Monday, 17 February 2014, Jan Kowalsky wrote:
> > On Saturday, 15 February 2014, Rich Megginson wrote:
> > > On 02/14/2014 05:20 PM, Jeroen van Meeuwen (Kolab Systems) wrote:
> > > > On 2014-02-14 23:03, J
On Saturday, 15 February 2014, Jeroen van Meeuwen (Kolab Systems) wrote:
> On 2014-02-15 00:53, Rich Megginson wrote:
> > On 02/14/2014 04:43 PM, Jeroen van Meeuwen (Kolab Systems) wrote:
> >> On 2014-02-12 23:25, Rich Megginson wrote:
> >>> Not sure what version 1.2.11.15-1 is on Debian. If it
On Saturday, 15 February 2014, Rich Megginson wrote:
> On 02/14/2014 05:20 PM, Jeroen van Meeuwen (Kolab Systems) wrote:
> > On 2014-02-14 23:03, Jan Kowalsky wrote:
> >> On 2014-02-14 22:15, Rich Megginson wrote:
> >>> On 02/14/2014 01:57 PM, Jan Kowalsky wrote:
On 2014-02-14 22:15, Rich Megginson wrote:
On 02/14/2014 01:57 PM, Jan Kowalsky wrote:
On 2014-02-13 15:12, Rich Megginson wrote:
On 02/13/2014 02:05 AM, Jan Kowalsky wrote:
On 2014-02-12 23:25, Rich Megginson wrote:
On 02/12/2014 02:34 PM, Jan Kowalsky wrote:
Hi Rich,
Not sure what
On 2014-02-13 15:12, Rich Megginson wrote:
On 02/13/2014 02:05 AM, Jan Kowalsky wrote:
On 2014-02-12 23:25, Rich Megginson wrote:
On 02/12/2014 02:34 PM, Jan Kowalsky wrote:
Hi Rich,
Not sure what version 1.2.11.15-1 is on Debian. If it is the same as
the upstream 1.2.11.15, that's
On 2014-02-12 23:25, Rich Megginson wrote:
On 02/12/2014 02:34 PM, Jan Kowalsky wrote:
Hi Rich,
thank you for answering,
Since this is my first experience with replication, I don't know if I
am doing something completely wrong or it's a bug. I followed the
documentation on
https://access.
tion_Guide/Managing_Replication-Configuring-Replication-cmd.html
I've added some more information and logs below.
On 2014-02-12 18:04, Rich Megginson wrote:
On 02/12/2014 06:29 AM, Jan Kowalsky wrote:
Version and platform please - rpm -q 389-ds-base
yes, sorry, I forgot. I'm running on de
Hi all,
this is my first post on the list. I'm using 389ds inside a Kolab
environment. We are going to migrate to the new Kolab version, which
now runs 389ds. At the moment we are testing different scenarios for
replication. I don't have much experience with LDAP and in particular
with 389ds.
I