[389-users] Re: [EXT] Re: Password policies and replication service accounts

2024-08-26 Thread Darby, Tim - (tdarby)
I know about the replication agreements, cn=replication manager, and all that, 
but...

In the process of moving my old instances to 2.5, I decided to use lib389 as 
much as possible to script my replicas, replication agreements, replication 
accounts, etc. lib389, as far as I can tell, only creates replication service 
accounts in ou=Services. As such, I assumed that the old system, where 
everything was in cn=config, was no longer the recommendation. It seems to me 
that having the replication accounts in an OU makes them susceptible to 
password policies, which I do believe was causing the issues I was seeing. If 
you're saying that having it all in cn=config is still fine, I will happily 
revert to that.

On subtree policies, if that's the expected behavior of dsconf (which I find 
rather unhelpful, to be honest), then it's all good. It just feels like there 
ought to be something in the system that you can point at any given OU and ask, 
"what's the policy that's active on this OU?"

Tim Darby

From: Mark Reynolds 
Sent: Monday, August 26, 2024 12:13
To: General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>; Darby, Tim - (tdarby) 
Subject: Re: [389-users] Re: [EXT] Re: Password policies and replication 
service accounts


External Email




On 8/26/24 1:18 PM, Darby, Tim - (tdarby) wrote:
Thanks, I'm reviewing what I did to see where this went wrong. I did supply 
credentials for the service accounts.

I was referring to the entry you use in the replication agreement (typically 
cn=replication manager, cn=config):


Supplier:


dn: cn=replication manager,cn=config
objectClass: top
objectClass: inetUser
objectClass: netscapeServer
objectClass: nsAccount
cn: replication manager
uid: replication manager
userPassword:: e1BCS0RGMi1TSEE1MTJ9MTAwMDAkQ0c5N08xK2crMWZyU0czQkdhcjN6WTNqM2Y
 2eHEvMHEkdXlPTnhmdW43RUEycVBFcmtKMHdXVGtSUXNpL3VBbDVpQTNySHJkRFJZejZGZTdjcHpi
 SHYzRnRQNlJ0Nnl4a1dvTFJ6clJ3a1lnYk9mMmo1OSsyZHc9PQ==


dn: cn=darby,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
objectClass: top
objectClass: nsds5replicationagreement
cn: darby
nsDS5ReplicaRoot: dc=example,dc=com
description: darby
nsDS5ReplicaHost: localhost
nsDS5ReplicaPort: 389
nsDS5ReplicaBindMethod: simple
nsDS5ReplicaTransportInfo: LDAP
nsDS5ReplicaBindDN: cn=replication manager,cn=config
nsDS5ReplicaCredentials: {AES-TUhNR0NTcUdTSWIzRFFFRkRUQm1NRVVHQ1NxR1NJYjNEUUVG
 RERBNEJDUmxObUk0WXpjM1l5MHdaVE5rTXpZNA0KTnkxaE9XSmhORGRoT0MwMk1ESmpNV014TUFBQ
 0FRSUNBU0F3Q2dZSUtvWklodmNOQWdjd0hRWUpZSVpJQVdVRA0KQkFFcUJCQUVqZlE0RzhhaHF5Rz
 dUN0F3Mmk3WQ==}14bMwBr/FBN6L1kPTBuyRA==



On the consumer side you must specify the bind DN that can perform 
replication updates in the replica configuration entry:


dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
objectClass: top
objectClass: nsds5Replica
cn: replica
nsDS5ReplicaRoot: dc=example,dc=com
nsDS5ReplicaBindDN: cn=replication manager,cn=config

...

...


Question about inheritance of subtree password policies: How do you see that it 
has been applied to a subtree of a subtree? dsconf claims that there is no 
subtree policy on ou=test,ou=accounts:

> dsconf -D "cn=Directory Manager" -w "" 
> ldap://localhost:3389<http://ldap//localhost:3389> localpwp list
ou=accounts,ou=etc etc (subtree policy)

> dsconf -D "cn=Directory Manager" -w "" 
> ldap://localhost:3389<http://ldap//localhost:3389> localpwp get 
> "ou=test,ou=accounts,ou=etc etc"
Error: No password policy was found for this entry

Right, so anything under "ou=accounts,ou=etc" should have that local policy 
applied to it.  It will only be listed under the original subtree.  So this 
all looks correct.  Please provide your password policy settings and describe 
how it's not working as you expect:


# dsconf slapd-INSTANCE localpwp get "ou=accounts,ou=etc"



Thanks,
Mark

Tim Darby
Systems Integration and Architecture | The University of Arizona | 
tda...@arizona.edu | they/he <https://diversity.arizona.edu/pronouns>

From: Mark Reynolds 
Sent: Monday, August 26, 2024 07:14
To: General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>; Darby, Tim - (tdarby) 
Subject: [EXT] Re: [389-users] Password policies and replication service 
accounts

External Email


On 8/19/24 11:05 AM, tda...@arizona.edu wrote:
> I encountered a problem with replication service accounts in the process of 
> moving my one remaining very old (1.9.x) 389ds system to the new container. 
> This is likely a misunderstanding on my part about how password policies 
> work, so I'd appreciate any insights on this.

[389-users] Re: [EXT] Re: Password policies and replication service accounts

2024-08-26 Thread Darby, Tim - (tdarby)
Thanks, I'm reviewing what I did to see where this went wrong. I did supply 
credentials for the service accounts.

Question about inheritance of subtree password policies: How do you see that it 
has been applied to a subtree of a subtree? dsconf claims that there is no 
subtree policy on ou=test,ou=accounts:

> dsconf -D "cn=Directory Manager" -w "" ldap://localhost:3389 localpwp list
ou=accounts,ou=etc etc (subtree policy)

> dsconf -D "cn=Directory Manager" -w "" ldap://localhost:3389 localpwp get 
> "ou=test,ou=accounts,ou=etc etc"
Error: No password policy was found for this entry

Tim Darby
Systems Integration and Architecture | The University of Arizona | 
tda...@arizona.edu | they/he <https://diversity.arizona.edu/pronouns>

From: Mark Reynolds 
Sent: Monday, August 26, 2024 07:14
To: General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>; Darby, Tim - (tdarby) 
Subject: [EXT] Re: [389-users] Password policies and replication service 
accounts

External Email


On 8/19/24 11:05 AM, tda...@arizona.edu wrote:
> I encountered a problem with replication service accounts in the process of 
> moving my one remaining very old (1.9.x) 389ds system to the new container. 
> This is likely a misunderstanding on my part about how password policies 
> work, so I'd appreciate any insights on this.
>
> This system has two MMR instances and an ou=Accounts containing thousands of 
> user accounts. It uses a global password policy for the user accounts 
> (there's no subtree policy on ou=Accounts). As I did with a previous 
> migration to 3.x, I created a new ou, ou=Services,ou=Accounts, to hold the 
> service accounts for replication. When I tried to do the initial replication, 
> it failed because the source instance couldn't authenticate to the 
> destination instance. I was seeing weird messages like "inappropriate 
> authentication", etc.
>
> It occurred to me that maybe the issue had to do with the fact that this 
> system had a global policy set whereas the previous system was not using a 
> global policy. I tried removing the global policy and adding a subtree policy 
> instead to ou=Accounts and that solved the problem. So, questions:
>
> - Is there a way to have a global password policy but not have it apply to a 
> particular ou?

The global password policy under cn=config applies to all entries in the
database.  Then subtree policies (or fine-grained policies) can
overrule the global policy.  If you want the global and subtree
policies to blend together, then you must set the policies to inherit the
global policy:

https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/configuration_command_and_file_reference/core_server_configuration_reference#cnconfig-nsslapd_pwpolicy_inherit_global_Inherit_Global_Password_Policy

But if you want the subtree policy to completely bypass the global
policy, then do NOT set that attribute from the doc.
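
For reference, a minimal sketch of turning that inheritance on with dsconf (the 
instance name is a placeholder):

# dsconf slapd-INSTANCE config replace nsslapd-pwpolicy-inherit-global=on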

> - It appears that setting a subtree policy on an ou (ou=Accounts), does not 
> inherit to a subtree of that tree (ou=Services). Is that right?
It should apply to all entries under the subtree policy; if not, it's a
bug.  But we have tests for this, so it should definitely be working in
newer versions of 389.
> - It's not clear to me what actually causes the global policy to be active. 
> Does it become active simply by changing any of its password attributes in 
> cn=config?

Yes, but like I said, subtree policies overrule it.  All changes to
global/fine-grained policies take effect immediately.


Now going back to your replication issue, the password policy should not
impact replication.  Inappropriate auth means a password/credential was
not provided.  It's possible the consumer does not have a replication
manager defined, or you left out the credentials attribute in the
agreement.  Either way that's a replication config issue (agreement or
replica config) and unrelated to password policy.
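
A quick way to check both (hostnames and DNs below are placeholders): on the
consumer, confirm the replica entry names the bind DN you expect, and on the
supplier, confirm the agreement actually carries credentials:

# ldapsearch -H ldap://consumer.example.com:389 -D "cn=Directory Manager" -W \
    -b cn=config "(objectClass=nsds5Replica)" nsDS5ReplicaBindDN
# ldapsearch -H ldap://supplier.example.com:389 -D "cn=Directory Manager" -W \
    -b cn=config "(objectClass=nsds5replicationagreement)" \
    nsDS5ReplicaBindDN nsDS5ReplicaCredentials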

HTH,

Mark

--
Identity Management Development Team
-- 
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Password policies and replication service accounts

2024-08-19 Thread tdarby
I encountered a problem with replication service accounts in the process of 
moving my one remaining very old (1.9.x) 389ds system to the new container. 
This is likely a misunderstanding on my part about how password policies work, 
so I'd appreciate any insights on this.

This system has two MMR instances and an ou=Accounts containing thousands of 
user accounts. It uses a global password policy for the user accounts (there's 
no subtree policy on ou=Accounts). As I did with a previous migration to 3.x, I 
created a new ou, ou=Services,ou=Accounts, to hold the service accounts for 
replication. When I tried to do the initial replication, it failed because the 
source instance couldn't authenticate to the destination instance. I was seeing 
weird messages like "inappropriate authentication", etc.

It occurred to me that maybe the issue had to do with the fact that this system 
had a global policy set whereas the previous system was not using a global 
policy. I tried removing the global policy and adding a subtree policy instead 
to ou=Accounts and that solved the problem. So, questions:

- Is there a way to have a global password policy but not have it apply to a 
particular ou?
- It appears that setting a subtree policy on an ou (ou=Accounts), does not 
inherit to a subtree of that tree (ou=Services). Is that right?
- It's not clear to me what actually causes the global policy to be active. 
Does it become active simply by changing any of its password attributes in 
cn=config?
-- 
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: [EXT] Re: container 3.1.1: BDB is failing to recover from hard shutdown

2024-08-05 Thread Darby, Tim - (tdarby)
Thanks, this may actually be the issue. It's odd that I've never seen the 
default 60 second timeout have an effect before.

Tim Darby


From: Viktor Ashirov 
Sent: Monday, August 5, 2024 09:35
To: General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>
Subject: [EXT] [389-users] Re: container 3.1.1: BDB is failing to recover from 
hard shutdown


External Email


Hi,

There is a default timeout of 60 seconds for the container to start up:
https://github.com/389ds/389-ds-base/blob/e7b0e20c1a76753a78c0c0e04d3579fd2037cdee/src/lib389/cli/dscontainer#L367
Try passing a DS_STARTUP_TIMEOUT of more than 60 seconds.
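
For example (the image tag, volume, port mapping, and timeout value are only 
illustrative; DS_STARTUP_TIMEOUT is the environment variable read by 
dscontainer per the source above):

# docker run -d -p 3389:3389 -v /srv/dirsrv:/data \
    -e DS_STARTUP_TIMEOUT=600 quay.io/389ds/dirsrv:latest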

HTH

On Mon, Aug 5, 2024 at 5:17 PM tda...@arizona.edu wrote:
I have a test instance with two 2.5 container instances replicating. I took one 
down, but not gracefully, and restarted it using the 3.1.1 container. It showed 
that it was recovering the RUV and then died 60 seconds after container start. 
I've tried to restart it several times and it continually fails after 60 
seconds. I've never seen a 389ds instance fail to recover so this is alarming: 
Here's what I see in the logs:

[02/Aug/2024:23:40:08.164519017 +] - INFO - main - 389-Directory/3.1.1 
B2024.213.0201 starting up
[02/Aug/2024:23:40:08.167190043 +] - INFO - main - Setting the maximum file 
descriptor limit to: 65535
[02/Aug/2024:23:40:08.324281525 +] - INFO - PBKDF2_SHA256 - Based on CPU 
performance, chose 2048 rounds
[02/Aug/2024:23:40:08.328414518 +] - INFO - 
ldbm_instance_config_cachememsize_set - force a minimal value 512000
[02/Aug/2024:23:40:08.334434881 +] - NOTICE - bdb_start_autotune - found 
126976000k physical memory
[02/Aug/2024:23:40:08.336991774 +] - NOTICE - bdb_start_autotune - found 
101486140k available
[02/Aug/2024:23:40:08.340297223 +] - NOTICE - bdb_start_autotune - total 
cache size: 29477568512 B;
[02/Aug/2024:23:40:08.343560343 +] - NOTICE - bdb_start - Detected 
Disorderly Shutdown last time Directory Server was running, recovering database.
[02/Aug/2024:23:40:50.311047857 +] - INFO - slapi_vattrspi_regattr - 
Because pwdpolicysubentry is a new registered virtual attribute , 
nsslapd-ignore-virtual-attrs was set to 'off'
[02/Aug/2024:23:40:50.367322989 +] - NOTICE - NSMMReplicationPlugin - 
changelog program - _cl5ConstructRUVs - Rebuilding the replication changelog 
RUV, this may take several minutes...
[02/Aug/2024:23:41:06.202467004 +] - NOTICE - NSMMReplicationPlugin - 
changelog program - _cl5ConstructRUVs - Rebuilding replication changelog RUV 
complete.  Result 0 (Success)

(dies here and the container exits)
--
___
389-users mailing list -- 
389-users@lists.fedoraproject.org
To unsubscribe send an email to 
389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: 
https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


--
Viktor
-- 
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: [EXT] Re: container 3.1.1: BDB is failing to recover from hard shutdown

2024-08-05 Thread Darby, Tim - (tdarby)
I'm looking at everything but I don't think it's Docker at this point. It does 
have the appearance of something killing it off with a 60 second timeout though.

Tim Darby

From: David Boreham 
Sent: Monday, August 5, 2024 08:47
To: 389-users@lists.fedoraproject.org <389-users@lists.fedoraproject.org>
Subject: [EXT] [389-users] Re: container 3.1.1: BDB is failing to recover from 
hard shutdown


External Email


Is docker killing the container before recovery completes?

On Mon, Aug 5, 2024, at 9:16 AM, tda...@arizona.edu 
wrote:
I have a test instance with two 2.5 container instances replicating. I took one 
down, but not gracefully, and restarted it using the 3.1.1 container. It showed 
that it was recovering the RUV and then died 60 seconds after container start. 
I've tried to restart it several times and it continually fails after 60 
seconds. I've never seen a 389ds instance fail to recover so this is alarming: 
Here's what I see in the logs:

[02/Aug/2024:23:40:08.164519017 +] - INFO - main - 389-Directory/3.1.1 
B2024.213.0201 starting up
[02/Aug/2024:23:40:08.167190043 +] - INFO - main - Setting the maximum file 
descriptor limit to: 65535
[02/Aug/2024:23:40:08.324281525 +] - INFO - PBKDF2_SHA256 - Based on CPU 
performance, chose 2048 rounds
[02/Aug/2024:23:40:08.328414518 +] - INFO - 
ldbm_instance_config_cachememsize_set - force a minimal value 512000
[02/Aug/2024:23:40:08.334434881 +] - NOTICE - bdb_start_autotune - found 
126976000k physical memory
[02/Aug/2024:23:40:08.336991774 +] - NOTICE - bdb_start_autotune - found 
101486140k available
[02/Aug/2024:23:40:08.340297223 +] - NOTICE - bdb_start_autotune - total 
cache size: 29477568512 B;
[02/Aug/2024:23:40:08.343560343 +] - NOTICE - bdb_start - Detected 
Disorderly Shutdown last time Directory Server was running, recovering database.
[02/Aug/2024:23:40:50.311047857 +] - INFO - slapi_vattrspi_regattr - 
Because pwdpolicysubentry is a new registered virtual attribute , 
nsslapd-ignore-virtual-attrs was set to 'off'
[02/Aug/2024:23:40:50.367322989 +] - NOTICE - NSMMReplicationPlugin - 
changelog program - _cl5ConstructRUVs - Rebuilding the replication changelog 
RUV, this may take several minutes...
[02/Aug/2024:23:41:06.202467004 +] - NOTICE - NSMMReplicationPlugin - 
changelog program - _cl5ConstructRUVs - Rebuilding replication changelog RUV 
complete.  Result 0 (Success)

(dies here and the container exits)
--
___
389-users mailing list -- 
389-users@lists.fedoraproject.org
To unsubscribe send an email to 
389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: 
https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


-- 
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] container 3.1.1: BDB is failing to recover from hard shutdown

2024-08-05 Thread tdarby
I have a test instance with two 2.5 container instances replicating. I took one 
down, but not gracefully, and restarted it using the 3.1.1 container. It showed 
that it was recovering the RUV and then died 60 seconds after container start. 
I've tried to restart it several times and it continually fails after 60 
seconds. I've never seen a 389ds instance fail to recover so this is alarming: 
Here's what I see in the logs:

[02/Aug/2024:23:40:08.164519017 +] - INFO - main - 389-Directory/3.1.1 
B2024.213.0201 starting up
[02/Aug/2024:23:40:08.167190043 +] - INFO - main - Setting the maximum file 
descriptor limit to: 65535
[02/Aug/2024:23:40:08.324281525 +] - INFO - PBKDF2_SHA256 - Based on CPU 
performance, chose 2048 rounds
[02/Aug/2024:23:40:08.328414518 +] - INFO - 
ldbm_instance_config_cachememsize_set - force a minimal value 512000
[02/Aug/2024:23:40:08.334434881 +] - NOTICE - bdb_start_autotune - found 
126976000k physical memory
[02/Aug/2024:23:40:08.336991774 +] - NOTICE - bdb_start_autotune - found 
101486140k available
[02/Aug/2024:23:40:08.340297223 +] - NOTICE - bdb_start_autotune - total 
cache size: 29477568512 B; 
[02/Aug/2024:23:40:08.343560343 +] - NOTICE - bdb_start - Detected 
Disorderly Shutdown last time Directory Server was running, recovering database.
[02/Aug/2024:23:40:50.311047857 +] - INFO - slapi_vattrspi_regattr - 
Because pwdpolicysubentry is a new registered virtual attribute , 
nsslapd-ignore-virtual-attrs was set to 'off'
[02/Aug/2024:23:40:50.367322989 +] - NOTICE - NSMMReplicationPlugin - 
changelog program - _cl5ConstructRUVs - Rebuilding the replication changelog 
RUV, this may take several minutes...
[02/Aug/2024:23:41:06.202467004 +] - NOTICE - NSMMReplicationPlugin - 
changelog program - _cl5ConstructRUVs - Rebuilding replication changelog RUV 
complete.  Result 0 (Success)

(dies here and the container exits)
-- 
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: [EXT] Re: Replication weirdness with 3.0.1 mdb instance

2024-07-09 Thread Darby, Tim - (tdarby)
Any update on this? I've run out of ideas.

Tim Darby

From: Darby, Tim - (tdarby) 
Sent: Wednesday, July 3, 2024 8:36 AM
To: General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>
Subject: [389-users] Re: [EXT] Re: Replication weirdness with 3.0.1 mdb instance


External Email


No, I'm not seeing any error messages on the supplier. On both sides it seems 
like everything is OK, it just doesn't fully replicate the database.

Tim Darby

From: Pierre Rogier 
Sent: Wednesday, July 3, 2024 7:22 AM
To: General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>
Subject: [EXT] [389-users] Re: Replication weirdness with 3.0.1 mdb instance


External Email


Hi,
I suspect it may be a bug that we are just discovering:
Have you checked the errors log on the supplier you are initializing from?

Are you seeing something like:
[03/Jul/2024:16:07:57.719471684 +0200] - ERR - check_suffix_entryID - Unable to 
retrieve entryid of the suffix entry dc=example,dc=com



On Tue, Jul 2, 2024 at 8:33 PM tda...@arizona.edu wrote:
I have several 389ds instances using MMR, but only one at the moment that uses 
3.0.1 and the new mdb. It's an instance I'm building from scratch and I script 
it with config files that are similar to the 2.5 instances. When I initiate 
replication, what I see is that it finishes without errors but only sends 
approx. 1000 entries to the consumer (there are over 1M entries). I've looked 
at all the config attributes that I'm aware of and I'm stumped.
Can you think of anything that would prevent a replication account from sending 
over more than 1000 entries? Here's a typical snippet from the logs for this:

[02/Jul/2024:17:22:05.137366809 +] - NOTICE - NSMMReplicationPlugin - 
multisupplier_be_state_change - Replica ou=netid,ou=ccit,o=university of 
arizona,c=us is going offline; disabling replication
[02/Jul/2024:17:22:07.913831452 +] - INFO - dbmdb_import_monitor_threads - 
import dsroot: Import writer thread usage: run: 9.31% read: 69.38% write: 
19.17% pause: 0.99% txnbegin: 0.00% txncommit: 1.15%
[02/Jul/2024:17:22:07.978516129 +] - INFO - dbmdb_import_monitor_threads - 
import dsroot: Workers finished; cleaning up...
[02/Jul/2024:17:22:07.981201324 +] - INFO - dbmdb_import_monitor_threads - 
import dsroot: Workers cleaned up.
[02/Jul/2024:17:22:07.983442852 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Indexing complete.  Post-processing...
[02/Jul/2024:17:22:07.985709898 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Generating numsubordinates (this may take several minutes to 
complete)...
[02/Jul/2024:17:22:07.991134437 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Generating numSubordinates complete.
[02/Jul/2024:17:22:07.993601582 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Flushing caches...
[02/Jul/2024:17:22:07.995667599 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Closing files...
[02/Jul/2024:17:22:07.997941832 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Import complete.  Processed 1097 entries in 3 seconds. (365.67 
entries/sec)
--
___
389-users mailing list -- 
389-users@lists.fedoraproject.org
To unsubscribe send an email to 
389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


--
--

389 Directory Server Development Team
-- 
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: [EXT] Re: Replication weirdness with 3.0.1 mdb instance

2024-07-03 Thread Darby, Tim - (tdarby)
No, I'm not seeing any error messages on the supplier. On both sides it seems 
like everything is OK, it just doesn't fully replicate the database.

Tim Darby

From: Pierre Rogier 
Sent: Wednesday, July 3, 2024 7:22 AM
To: General discussion list for the 389 Directory server project. 
<389-users@lists.fedoraproject.org>
Subject: [EXT] [389-users] Re: Replication weirdness with 3.0.1 mdb instance


External Email


Hi,
I suspect it may be a bug that we are just discovering:
Have you checked the errors log on the supplier you are initializing from?

Are you seeing something like:
[03/Jul/2024:16:07:57.719471684 +0200] - ERR - check_suffix_entryID - Unable to 
retrieve entryid of the suffix entry dc=example,dc=com



On Tue, Jul 2, 2024 at 8:33 PM tda...@arizona.edu wrote:
I have several 389ds instances using MMR, but only one at the moment that uses 
3.0.1 and the new mdb. It's an instance I'm building from scratch and I script 
it with config files that are similar to the 2.5 instances. When I initiate 
replication, what I see is that it finishes without errors but only sends 
approx. 1000 entries to the consumer (there are over 1M entries). I've looked 
at all the config attributes that I'm aware of and I'm stumped.
Can you think of anything that would prevent a replication account from sending 
over more than 1000 entries? Here's a typical snippet from the logs for this:

[02/Jul/2024:17:22:05.137366809 +] - NOTICE - NSMMReplicationPlugin - 
multisupplier_be_state_change - Replica ou=netid,ou=ccit,o=university of 
arizona,c=us is going offline; disabling replication
[02/Jul/2024:17:22:07.913831452 +] - INFO - dbmdb_import_monitor_threads - 
import dsroot: Import writer thread usage: run: 9.31% read: 69.38% write: 
19.17% pause: 0.99% txnbegin: 0.00% txncommit: 1.15%
[02/Jul/2024:17:22:07.978516129 +] - INFO - dbmdb_import_monitor_threads - 
import dsroot: Workers finished; cleaning up...
[02/Jul/2024:17:22:07.981201324 +] - INFO - dbmdb_import_monitor_threads - 
import dsroot: Workers cleaned up.
[02/Jul/2024:17:22:07.983442852 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Indexing complete.  Post-processing...
[02/Jul/2024:17:22:07.985709898 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Generating numsubordinates (this may take several minutes to 
complete)...
[02/Jul/2024:17:22:07.991134437 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Generating numSubordinates complete.
[02/Jul/2024:17:22:07.993601582 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Flushing caches...
[02/Jul/2024:17:22:07.995667599 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Closing files...
[02/Jul/2024:17:22:07.997941832 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Import complete.  Processed 1097 entries in 3 seconds. (365.67 
entries/sec)
--
___
389-users mailing list -- 
389-users@lists.fedoraproject.org
To unsubscribe send an email to 
389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: 
https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


--
--

389 Directory Server Development Team
-- 
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Replication weirdness with 3.0.1 mdb instance

2024-07-02 Thread tdarby
I have several 389ds instances using MMR, but only one at the moment that uses 
3.0.1 and the new mdb. It's an instance I'm building from scratch and I script 
it with config files that are similar to the 2.5 instances. When I initiate 
replication, what I see is that it finishes without errors but only sends 
approx. 1000 entries to the consumer (there are over 1M entries). I've looked 
at all the config attributes that I'm aware of and I'm stumped.
Can you think of anything that would prevent a replication account from sending 
over more than 1000 entries? Here's a typical snippet from the logs for this:

[02/Jul/2024:17:22:05.137366809 +] - NOTICE - NSMMReplicationPlugin - 
multisupplier_be_state_change - Replica ou=netid,ou=ccit,o=university of 
arizona,c=us is going offline; disabling replication
[02/Jul/2024:17:22:07.913831452 +] - INFO - dbmdb_import_monitor_threads - 
import dsroot: Import writer thread usage: run: 9.31% read: 69.38% write: 
19.17% pause: 0.99% txnbegin: 0.00% txncommit: 1.15% 
[02/Jul/2024:17:22:07.978516129 +] - INFO - dbmdb_import_monitor_threads - 
import dsroot: Workers finished; cleaning up...
[02/Jul/2024:17:22:07.981201324 +] - INFO - dbmdb_import_monitor_threads - 
import dsroot: Workers cleaned up.
[02/Jul/2024:17:22:07.983442852 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Indexing complete.  Post-processing...
[02/Jul/2024:17:22:07.985709898 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Generating numsubordinates (this may take several minutes to 
complete)...
[02/Jul/2024:17:22:07.991134437 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Generating numSubordinates complete.
[02/Jul/2024:17:22:07.993601582 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Flushing caches...
[02/Jul/2024:17:22:07.995667599 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Closing files...
[02/Jul/2024:17:22:07.997941832 +] - INFO - dbmdb_public_dbmdb_import_main 
- import dsroot: Import complete.  Processed 1097 entries in 3 seconds. (365.67 
entries/sec)
-- 
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] MDB db questions/concerns

2024-07-01 Thread tdarby
The mdb change came as a worrying surprise when the container pulled down 3.0.1 
without me realizing it. I've read as much as I can find (which is not much), 
but I don't understand all the changes. Questions:

1. Is there a way to continue to pull the 2.5 version of the container?
2. If I update an existing 2.5 container instance to 3.0.1, will it continue to 
use bdb?
3. The nsslapd-mdb-max-size attribute:
- Does this only set the size of the db on disk? What if it exceeds that limit?
- I was going to set it to "use all available space" until I saw these 
comments, but not sure what to make of them and I don't feel any closer to 
understanding what I should do with this attribute:

   - the file will tend to grow up to its maximum
   - if real memory is smaller than the file then pages will be swapped
   - My first experience configuring an oversized database was interesting:
     at first I did not cap the value to 1 GB and the basic import scenario
     was spending hours before failing because the disk was full (with a
     real db size around 10 GB on the test VM), while once capped to 1 GB
     the import was successful in less than 10 minutes
     ==> *Correct sizing matters*"
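
On question 1, the images are published with version tags, so pinning the tag 
(the tag name here is assumed) should keep an instance on 2.5 rather than 
following latest:

# docker pull quay.io/389ds/dirsrv:2.5

And nsslapd-mdb-max-size can be changed with a plain ldapmodify; the config 
entry DN below is an assumption (verify it on your instance first), the 20 GiB 
value is only illustrative, and the change most likely only takes effect after 
a restart:

# ldapmodify -H ldap://localhost:3389 -D "cn=Directory Manager" -W <<EOF
dn: cn=mdb,cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-mdb-max-size
nsslapd-mdb-max-size: 21474836480
EOF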
-- 
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: One way supplier replication is failing on newly installed instance

2024-04-22 Thread tdarby
This seems to be working now. I tried the repl agreement poke command and 
nothing happened, but when I tried it again the following day it caused 
replication to resume. It has continued to work over the weekend. Not sure what 
the cause was, but I only need this to work a little while longer and then I 
can retire the old instances.
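
For anyone hitting the same thing, the command in question (instance, suffix, 
and agreement name are placeholders) was along the lines of:

# dsconf slapd-INSTANCE repl-agmt poke --suffix "dc=example,dc=com" AGREEMENT_NAME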
--
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: One way supplier replication is failing on newly installed instance

2024-04-18 Thread tdarby
Thanks for the help!

On the new server, there is nothing related in the errors log. In the access 
log, I see the successful binds from the old server:

[18/Apr/2024:16:10:22.457956096 +] conn=182814 op=0 BIND 
dn="cn=eds2.iam.arizona.edu:389,ou=Services,dc=eds,dc=arizona,dc=edu" 
method=128 version=3
[18/Apr/2024:16:10:22.465767099 +] conn=182814 op=0 RESULT err=0 tag=97 
nentries=0 wtime=0.61482 optime=0.007844345 etime=0.007904970 
dn="cn=eds2.iam.arizona.edu:389,ou=services,dc=eds,dc=arizona,dc=edu"
[18/Apr/2024:16:10:22.490471613 +] conn=182814 op=1 SRCH base="" scope=0 
filter="(objectClass=*)" attrs="supportedControl supportedExtension"
[18/Apr/2024:16:10:22.491081053 +] conn=182814 op=1 RESULT err=0 tag=101 
nentries=1 wtime=0.000113881 optime=0.000613900 etime=0.000727198
[18/Apr/2024:16:10:22.492073298 +] conn=182814 op=2 SRCH base="" scope=0 
filter="(objectClass=*)" attrs="supportedControl supportedExtension"
[18/Apr/2024:16:10:22.492563635 +] conn=182814 op=2 RESULT err=0 tag=101 
nentries=1 wtime=0.000127618 optime=0.000492238 etime=0.000619213
[18/Apr/2024:16:10:22.493480021 +] conn=182814 op=3 EXT 
oid="2.16.840.1.113730.3.5.12" name="replication-multisupplier-extop"
[18/Apr/2024:16:10:22.497607017 +] conn=182814 op=3 RESULT err=0 tag=120 
nentries=0 wtime=0.45592 optime=0.004128811 etime=0.004173982

There's no more activity from that connection until it times out (10 minute 
timeout set):
[18/Apr/2024:16:20:22.443255138 +] conn=182814 op=-1 fd=127 Disconnect - 
Connection timed out - Idle Timeout (nsslapd-idletimeout) - T1

Other info -

Old server:
nsslapd-conntablesize: 65535
nsslapd-threadnumber: 96
nsslapd-maxdescriptors: 65535
net.core.somaxconn = 128
net.ipv4.tcp_max_syn_backlog = 2048

New server:
nsslapd-conntablesize (attribute doesn't exist in the schema)
nsslapd-threadnumber: 16
nsslapd-maxdescriptors: 16384
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096

I tried upping the threadnumber and maxdescriptors to match the old server, but 
that didn't seem to help.

Old server monitor command:
dn: cn=monitor
version: 389-Directory/1.3.9.0 B2018.304.1940
threads: 96
connection: (a bunch of these)
currentconnections: 80
totalconnections: 858539
currentconnectionsatmaxthreads: 0
maxthreadsperconnhits: 17
dtablesize: 65535
readwaiters: 0
opsinitiated: 8152665
opscompleted: 8152664
entriessent: 160176043
bytessent: 95033377994
currenttime: 20240418184550Z
starttime: 20240416182128Z
nbackends: 3

New server monitor command:
dn: cn=monitor
version: 389-Directory/2.5.0 B2024.017.
threads: 17
connection: 1:20240417165837Z:3:3:-:cn=Directory Manager:0:0:0:4:ip=local
connection: 2:20240418184424Z:3:2:-:cn=directory 
manager:0:0:0:200246:ip=127.0.0.1
currentconnections: 2
totalconnections: 200246
currentconnectionsatmaxthreads: 0
maxthreadsperconnhits: 0
dtablesize: 16258
readwaiters: 0
opsinitiated: 1913950
opscompleted: 1913949
entriessent: 1945806
bytessent: 206107062
currenttime: 20240418184424Z
starttime: 20240417165836Z
nbackends: 1
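
Given the T1 idle-timeout disconnects above, one thing worth trying (the value 
is illustrative) is raising nsslapd-idletimeout on the new server so the 
replication connection from the old supplier isn't dropped between update 
sessions:

# dsconf slapd-INSTANCE config replace nsslapd-idletimeout=7200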
--
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: One way supplier replication is failing on newly installed instance

2024-04-18 Thread tdarby
Just to add:

- I can manually make a connection from the old to new using the replication 
account and password, so connectivity is fine.
- The old server is in 2-way replication with another server of the same 
version and that is all working perfectly.
--
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] One way supplier replication is failing on newly installed instance

2024-04-18 Thread tdarby
I'm trying to get one-way replication going between a very old (1.3.7.8) server 
and a newly built (2.5.0 B2024.017.) server.

- The initial one-way full replication works.
- Incremental updates from the old to the new work for a time and then start 
failing with these messages logged on the old server:

[18/Apr/2024:16:10:19.456632850 +] - WARN - NSMMReplicationPlugin - 
send_updates - agmt="cn=eds2prod-eds-ldap-63421-agreement" 
(eds-ldap-63421:389): Failed to send update operation to consumer (uniqueid 
32245001-51c111ea-b1489889-5bf7d8fd, CSN 6620dbb500040015): Can't contact 
LDAP server. Will retry later.
[18/Apr/2024:16:10:19.458134915 +] - ERR - NSMMReplicationPlugin - 
release_replica - agmt="cn=eds2prod-eds-ldap-63421-agreement" 
(eds-ldap-63421:389): Unable to send endReplication extended operation (Can't 
contact LDAP server)
[18/Apr/2024:16:10:22.471856842 +] - INFO - NSMMReplicationPlugin - 
bind_and_check_pwp - agmt="cn=eds2prod-eds-ldap-63421-agreement" 
(eds-ldap-63421:389): Replication bind with SIMPLE auth resumed
[18/Apr/2024:16:20:22.661249153 +] - WARN - NSMMReplicationPlugin - 
repl5_inc_update_from_op_result - agmt="cn=eds2prod-eds-ldap-63421-agreement" 
(eds-ldap-63421:389): Consumer failed to replay change (uniqueid (null), CSN 
(null)): Can't contact LDAP server(-1). Will retry later.

- That set of errors repeats approximately every 2 hours. I'm assuming this has 
caused replication to halt and that it will not resume until it gets past 
whatever the issue is.

I was hoping that I could somehow get past this 2-hour retry by disabling and 
re-enabling the agreement, but that seems to have no effect.

Any thoughts on how to attack this?

- Tim
--
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Directory Administrators vs. Password Administrators

2024-03-15 Thread tdarby
I see in the docs that you can make a Password Administrators group, like so:

dn: cn=config
changetype: modify
replace: passwordAdminDN
passwordAdminDN: cn=Passwd Admins,ou=groups,dc=example,dc=com

I'm curious, though: what privileges does a Directory Administrator have over 
and above one of these Password Administrators?
--
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Migration: importing an OU to a new instance

2023-09-15 Thread tdarby
An interesting suggestion, but part of my goal for the migration is to move to 
a simpler configuration, and this will most likely be a one-time import.
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Migration: importing an OU to a new instance

2023-09-15 Thread tdarby
That's a clever idea, thanks. That may be just what I need.
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Migration: importing an OU to a new instance

2023-09-14 Thread tdarby
Thanks, this was my backup plan if I couldn't find a backup/restore script that 
would do it. It's kind of a large set though, over 1M entries.
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: Migration: importing an OU to a new instance

2023-09-13 Thread tdarby
Thanks for the quick reply. My issue is this:

Server A has two OUs, call them ou=A and ou=B. Server B has two OUs, ou=A 
(empty) and ou=C. I want to copy the data from ou=A on server A to ou=A on 
server B. There are no ou=B entries in the export file from server A and for 
the import task I add to server B, I set this attribute:
nsExcludeSuffix: ou=B

When this task runs, it populates ou=A on server B but also completely deletes 
ou=C. Any way around this?
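
One possible workaround, sketched here with placeholder host and file names: 
skip the offline import task and add the exported entries over LDAP instead. 
ldapadd only creates what is in the file, so ou=C is left untouched, and -c 
continues past any "already exists" errors. At 1M+ entries this is much slower 
than an offline import, and the export may need operational/replication 
attributes stripped first:

# ldapadd -c -H ldap://serverB.example.com:389 -D "cn=Directory Manager" -W \
    -f serverA-ouA.ldif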

Thanks for the additional info on the other question. I guess my problem is 
that I don't understand the significance of entry USNs in 389 server at all, so 
I'm not sure how to deal with them in general, and especially when it comes to 
instance migration.
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Migration: importing an OU to a new instance

2023-09-13 Thread tdarby
I've read this doc:
https://access.redhat.com/documentation/en-us/red_hat_directory_server/12/html/importing_and_exporting_data/importing-data-to-directory-server_importing-and-exporting-data

The export from server A to an LDIF file works and I've done some testing but 
it seems like the import feature always deletes existing OUs on server B that 
aren't in the exported LDIF file. Am I missing something? I'd like to simply 
get an LDIF of all the entries in Server A and populate only that OU in server 
B.

Related, this bit is bewildering:
Optional: By default, Directory Server sets the entry update sequence numbers 
(USNs) of all imported entries to 0. To set an alternative initial USN value, 
set the nsslapd-entryusn-import-initval parameter. For example, to set USN for 
all imported values to 12345, enter:

I don't understand what this means or the consequences of taking the default or 
not. Server B is already in multi-supplier replication with other servers, so I 
worry about screwing that up with any import choices I might make.
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Replication question

2023-08-24 Thread tdarby
I've got two 389 multi-supplier replicated instances that I want to replicate 
to two new ones just temporarily for migration purposes. Since this is 
production, it would be ideal if it could be replicating right up to the second 
I make the new ones live. However, I simplified the configuration on the new 
one and now I'm wondering if this scheme will still work.

On the old one, I have two replicated DBs, one mapped to 
ou=b,dc=a,dc=arizona,dc=edu and the other mapped to dc=a,dc=arizona,dc=edu. On 
the new one, I decided to have just one replicated DB, mapped to dc=arizona, 
dc=edu. Is it possible to replicate the old ones to the new instance? I'm 
thinking no, but I had to ask.
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Automated replication management

2022-09-09 Thread tdarby
I've been working with the official docker container and have succeeded in 
scripting a complete install and configuration of our LDAP instance(s). Now, 
I'd like to go further and automate the addition and removal of a multi-master 
instance. Has anyone already succeeded in doing this?

Currently, we have two EC2 instances in AWS that have a multi-master 
replication agreement. I'd like to be able to run a script that:
1. Brings up a new EC2/389ds container instance
2. Adds this to the existing multi-master replication agreements
3. When this is finished replicating, remove one of the current instances from 
the replication agreements

I can do part 1, but I'm concerned about making parts 2 and 3 bullet-proof.
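
A rough sketch of part 2 with dsconf (suffix, host, replica ID, and credentials 
are placeholders, and flag names may differ slightly between versions, so check 
the replication and repl-agmt --help output for your release):

On the new instance, enable replication as a supplier:
# dsconf slapd-NEW replication enable --suffix "dc=example,dc=com" \
    --role supplier --replica-id 3 \
    --bind-dn "cn=replication manager,cn=config" --bind-passwd SECRET

On each existing supplier, create and initialize an agreement pointing at it:
# dsconf slapd-OLD repl-agmt create --suffix "dc=example,dc=com" \
    --host new-host.example.com --port 3389 --conn-protocol LDAP \
    --bind-dn "cn=replication manager,cn=config" --bind-passwd SECRET \
    --bind-method SIMPLE --init new-agmt

Part 3 is roughly the reverse: delete the agreements that point at the retiring 
instance, disable replication on it, and clean its replica ID out of the RUV 
(cleanallruv) once it is gone.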
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: 389 server logging format

2022-08-26 Thread tdarby
https://www.ietf.org/rfc/rfc3339.txt
Something like 2022-06-16T13:56:05.344374, for example

Thanks for the code reference.
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] 389 server logging format

2022-08-26 Thread tdarby
Is there a way to customize the access and errors logging formats? In 
particular, it would be nice to use a more standard timestamp format. If not, 
is there some Python code available for parsing the logs?
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[389-users] Re: 389-ds opensuse container questions

2022-06-13 Thread tdarby
Ok, so I think what you're saying is that each domain (dc=arizona,dc=edu and 
dc=eds,dc=arizona,dc=edu) requires its own database? Maybe I will rethink this 
plan just to keep things simple.
___
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure


[389-users] Re: 389-ds opensuse container questions

2022-06-13 Thread tdarby
I've got things pretty well scripted now, but one thing I've never been sure of 
is what exactly maps to a database.

In our current dirsrv, we've got one database for our base suffix 
"dc=eds,dc=arizona,dc=edu" and everything is contained in that 
(ou=people,dc=eds,dc=arizona,dc=edu, for example).

In the new directory, I'd like the base suffix to be "dc=arizona,dc=edu" (where 
some new objects will live) and then the additional suffix 
"dc=eds,dc=arizona,dc=edu", which will contain all the same stuff as the 
current directory (ou=people, etc.).

So my question is: if I make a database for "dc=arizona,dc=edu", can 
"dc=eds,dc=arizona,dc=edu" also reside in that database (by making an 
additional domain for it?), or does it have to have its own database?
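For what it's worth, my understanding is that each suffix normally lives in its 
own backend (database), so this layout would mean two backends. Below is a rough, 
untested lib389 sketch with placeholder backend names and credentials; I'd also 
double-check whether your lib389 version sets the sub-suffix's mapping-tree 
parent (nsslapd-parent-suffix) automatically or whether that needs to be done 
separately.

from lib389 import DirSrv
from lib389.backend import Backends

inst = DirSrv()
inst.remote_simple_allocate("ldap://localhost:389",
                            binddn="cn=Directory Manager",
                            password="password")   # placeholder credentials
inst.open()

backends = Backends(inst)

# Backend for the new base suffix.
backends.create(properties={"cn": "arizonaRoot",
                            "nsslapd-suffix": "dc=arizona,dc=edu"})

# Separate backend for the sub-suffix that will hold the existing data.
backends.create(properties={"cn": "edsRoot",
                            "nsslapd-suffix": "dc=eds,dc=arizona,dc=edu"})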


[389-users] Re: 389-ds opensuse container questions

2022-06-07 Thread tdarby
Oh, brilliant, thanks!


[389-users] Re: 389-ds opensuse container questions

2022-06-06 Thread tdarby
Thanks, so is there a way to change the nsslapd-db-locks attribute in Python? 
It would be nice to do it all there instead of having to shell out to dsconf.

I'll try to put together a list of lib389 things.


[389-users] Re: 389-ds opensuse container questions

2022-06-05 Thread tdarby
Thanks! I've succeeded in getting all my configs scripted with Python, except 
for setting the nsslapd-db-locks attribute:

standalone.config.set (fail)
dse_ldif.replace (fail)
dsconf works!

What's the magic here? Is there a Pythonic way to do this?

Also, I'm surprised at how much I had to resort to python-ldap to get things 
done. I kinda thought that lib389 would do most of it?
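My guess at why the first two fail: nsslapd-db-locks doesn't live on cn=config 
but on the ldbm database plugin entry, so standalone.config.set is aimed at the 
wrong entry. A minimal sketch that should work over the same lib389 connection 
(the value is just an example, newer lib389 also has a DatabaseConfig helper 
whose exact API I haven't verified, and as far as I know the attribute only 
takes effect after a restart):

import ldap

LDBM_CONFIG_DN = "cn=config,cn=ldbm database,cn=plugins,cn=config"

# 'standalone' is the already-open lib389 DirSrv object from the script;
# DirSrv is a python-ldap connection underneath, so modify_s works on it.
standalone.modify_s(LDBM_CONFIG_DN,
                    [(ldap.MOD_REPLACE, "nsslapd-db-locks", b"100000")])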


[389-users] 389-ds opensuse container questions

2022-05-25 Thread tdarby
Hi,

My team is preparing to move to containerized 389-ds instances after years of 
running on two AWS EC2 instances in multi-master replication behind an AWS 
classic load balancer. All data is on separate EBS volumes. The 389-ds version 
is 1.3.9.0. I'm mainly curious about the right way to use the container, so 
questions:

1. Is the container considered production-ready?
2. I see that the container will create a new instance if it doesn't find one. 
How does it determine that an instance exists?
3. What setup config options are available to the container? I see mention of 
container.inf, but it's not clear what all I can put in that. For the install 
of our current directory, we need to make dse.ldif changes, ACI changes, schema 
changes and other things. I realize I can do all this after the container 
creates the bare instance, but I'm wondering how much the container install 
could do for me.
4. How big of a deal is it going to be moving from 1.3.9.0 to the latest 
version?

Additionally, we were wondering if it's possible to have an AWS load balancer 
handle the TLS exchanges instead of the LDAP instances. In other words, install 
the certificate on the load balancer and have it talk unencrypted to the LDAP 
instances over port 389.

Thanks,
Tim


[389-users] Re: How to containerize 389DS using Docker in production systems

2018-03-07 Thread tdarby
> On Wed, 2018-03-07 at 08:52 +0100, Alberto García Sola wrote:
> 
> Hi there,
> 
> I'm currently working on docker support in 389-ds.

William, I'm really glad to hear this. We've been running 389 server in Docker 
on EC2 instances for months now and it works great. We have home-grown scripts 
for automating the DS installation and replication between 2 DS instances, but 
it would be awesome to use a supported setup instead, so I'd really like to try 
what you have. Our setup uses mounted EBS volumes that contain all the 
necessary DS folders so that the EC2s can be blown away and recreated any time 
we want.


[389-users] PR_Write error

2018-01-22 Thread tdarby
OS: Fedora 26
389 Server 1.3.7.6

I'm getting one or two of these a day on two servers that are MM replicas of 
each other:
ERR - write_function - PR_Write(371) Netscape Portable Runtime error -12135 
(unknown error)

What is this? A write error certainly doesn't sound good.


[389-users] Re: 1.3.6 dirsrv crash: ERR - valueset_value_syntax_cmp - slapi_attr_values2keys_sv failed for type lastUpdated

2017-10-21 Thread tdarby
> So you are saying its syntax is:  1.3.6.1.4.1.1466.115.121.1.15
> 
Yes, and I checked to make sure that is the case.


[389-users] Re: 1.3.6 dirsrv crash: ERR - valueset_value_syntax_cmp - slapi_attr_values2keys_sv failed for type lastUpdated

2017-10-20 Thread tdarby
> On 10/20/2017 12:32 PM, tdarby(a)email.arizona.edu wrote:
> Is there a core file you can get a stack trace from? 

I'm not sure how to set things up to get a core dump in a Docker container.

> What are the schema definitions for lastUpdated & lastStarted?

As I said earlier, these are standard directory string attributes, 
multi-valued, nothing special. They are used in 3 different entries.

> Can you reproduce this in a non-production environment where you could
> run it under valgrind?
> Or can you run it under gdb so we can catch the crash live?

If I can figure out how to do this I will, but again, this is a difficult thing 
to reproduce. The 1.3.7 servers ran for a week before it happened.

> Are there any other errors messages at the time you see: 
> "valueset_value_syntax_cmp - slapi_attr_values2keys_sv failed for type
> lastStarted"?

These were the only errors.


[389-users] Re: 1.3.6 dirsrv crash: ERR - valueset_value_syntax_cmp - slapi_attr_values2keys_sv failed for type lastUpdated

2017-10-20 Thread tdarby
> I spent a lot of time yesterday trying different ideas for reproducing the 
> crash and
> haven't found the right sequence of events yet. I did discover that I was 
> able to
> bring back a failed server instance by deleting a different entry, 
> cn=grouperfeeds, which
> like cn=grouperstatus contains the lastUpdated and lastStarted attributes.
> 
> I'm testing now with 1.3.7.6, built from source, on Fedora 26.

It was looking good until both test servers crashed again this morning:

389-ds-base-libs-1.3.7.6-1.fc26.x86_64
389-ds-base-1.3.7.6-1.fc26.x86_64

I see hundreds of these errors:
[20/Oct/2017:01:55:11.852720430 -0700] - ERR - valueset_value_syntax_cmp - 
slapi_attr_values2keys_sv failed for type lastStarted

So, it's happening on both the lastStarted and lastUpdated attributes of 
cn=grouperstatus.

What I know for sure is that this never happened on 389 1.2.11.15 and has 
happened on 1.3.6.1, 1.3.6.8, and 1.3.7.6.


[389-users] Re: 1.3.6 dirsrv crash: ERR - valueset_value_syntax_cmp - slapi_attr_values2keys_sv failed for type lastUpdated

2017-10-12 Thread tdarby
> Can you provide me a simple test case to reproduce the crash?

I spent a lot of time yesterday trying different ideas for reproducing the 
crash and haven't found the right sequence of events yet. I did discover that I 
was able to bring back a failed server instance by deleting a different entry, 
cn=grouperfeeds, which like cn=grouperstatus contains the lastUpdated and 
lastStarted attributes.

I'm testing now with 1.3.7.6, built from source, on Fedora 26.


[389-users] Re: 1.3.6 dirsrv crash: ERR - valueset_value_syntax_cmp - slapi_attr_values2keys_sv failed for type lastUpdated

2017-10-11 Thread tdarby
> you can always get the latest (upstream) version.  If you could at least
> test this on Fedora with the latest version of 389 so we can rule out if
> it's a known issue or a new one.
> This is now fixed upstream on Fedora (26 and
> up)

I'm testing this in a docker container, and the latest container image is Fedora 
26, where the only package offered is 1.3.6.8. I was testing a new install of 
this last night on two instances and the failure happened again with the same 
errors; now 389 server won't stay up. It runs for a few minutes, generates 
hundreds of "slapi_attr_values2keys_sv failed for type lastUpdated" errors, and 
then dies. Again, this didn't seem to bother the two 1.2 servers that were 
replicating with them.

I tried setting up the system for core dumps but I can't find a core dump when 
it dies. Is there anything you want me to look at while it's in this state?


[389-users] Re: 1.3.6 dirsrv crash: ERR - valueset_value_syntax_cmp - slapi_attr_values2keys_sv failed for type lastUpdated

2017-10-10 Thread tdarby
> When the server crashes do you get a core dump or similar? That would
> really help. 

Where do I find a core dump?

> I think the issue with the lastUpdated type is that this is a custom
> element of your schema - I can't find any references to it at all in our
> code base. Can you send me your schema definition for this attribute? 

This is the entry that, once I deleted it, allowed the server to start up again:
dn: cn=grouperstatus, ou=People,dc=eds,dc=arizona,dc=edu
objectClass: edsstatus
objectClass: top
cn: grouperstatus

It contains the multi-valued attributes lastUpdated and lastStarted, both 
strings.


[389-users] Re: 1.3.6 dirsrv crash: ERR - valueset_value_syntax_cmp - slapi_attr_values2keys_sv failed for type lastUpdated

2017-10-10 Thread tdarby
> On 10/09/2017 05:33 PM, tdarby(a)email.arizona.edu wrote:
> Okay the version you have has a few
> known crashes.  They have been fixed
> in 1.3.6.1-20 and up.  This fix will also be part of RHEL's 7.4 batch
> update 2.

Thanks, I don't see a way to get a higher packaged version with CentOS 7.4. I'm 
not committed to CentOS. What about switching to Fedora 26 in order to get the 
1.3.6.8 package?

Is there anything I can do about the attrlist_replace - attr_replace 
nsslapd-referral messages filling up the errors log?


[389-users] Re: 1.3.6 dirsrv crash: ERR - valueset_value_syntax_cmp - slapi_attr_values2keys_sv failed for type lastUpdated

2017-10-09 Thread tdarby
> On 10/09/2017 05:20 PM, tdarby(a)email.arizona.edu wrote:
> This
> might be fixed in a newer version of 1.3.6, what version are you
> using now?   rpm -qa | grep 389-ds-base

389-ds-base-1.3.6.1-19.el7_4.x86_64
389-ds-base-libs-1.3.6.1-19.el7_4.x86_64


[389-users] Re: 1.3.6 dirsrv crash: ERR - valueset_value_syntax_cmp - slapi_attr_values2keys_sv failed for type lastUpdated

2017-10-09 Thread tdarby
I fixed the problem but the solution makes me concerned that this version of 
389 server is not going to work for me. In short, I found that deleting a 
particular entry on both servers brought them back to life. This actually makes 
sense because they both died at exactly the same time, so I'm guessing this 
entry was updated on one, got replicated to the other and then they both 
crashed. At the time of the crash, this entry is typically seeing a lot of 
individual attribute-value adds and deletes coming from multiple machines and 
processes.
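A hypothetical way to imitate that workload for a reproducer (my own sketch, not 
something from this thread): run a loop like the one below from several 
processes, pointing each at a different supplier. The DN and attribute name come 
from the entry shown above in this thread; the credentials and iteration count 
are placeholders.

import time
import ldap

DN = "cn=grouperstatus,ou=People,dc=eds,dc=arizona,dc=edu"

conn = ldap.initialize("ldap://localhost:389")          # point each copy at a different supplier
conn.simple_bind_s("cn=Directory Manager", "password")  # placeholder credentials

for i in range(10000):
    value = f"test-{time.time()}-{i}".encode()
    # Single-value add followed by a delete of the same value, mimicking the
    # frequent attribute-value churn described above.
    conn.modify_s(DN, [(ldap.MOD_ADD, "lastUpdated", [value])])
    conn.modify_s(DN, [(ldap.MOD_DELETE, "lastUpdated", [value])])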

I've never seen anything bad happen on version 1.2 servers and, in fact, I had 
two 1.2 servers replicating with these 1.3.6 servers and both of them stayed up.

So, what do I do now? CentOS won't let me go backwards. Should I go forward and 
use 1.3.7 instead?


[389-users] 1.3.6 dirsrv crash: ERR - valueset_value_syntax_cmp - slapi_attr_values2keys_sv failed for type lastUpdated

2017-10-07 Thread tdarby
OS: CentOS Linux release 7.4.1708 (Core) 
dirsrv: 1.3.6.1 B2017.249.1616

I've had two of these running in multi-master replication for a week now with 
no issues, but last night they both crashed at the same time and there were a 
lot of these just before they died:

[06/Oct/2017:22:30:41.990009449 -0700] - ERR - valueset_value_syntax_cmp - 
slapi_attr_values2keys_sv failed for type lastUpdated
[06/Oct/2017:22:30:41.991965822 -0700] - ERR - valueset_value_syntax_cmp - 
slapi_attr_values2keys_sv failed for type lastUpdated
[06/Oct/2017:22:30:41.993908534 -0700] - ERR - valueset_value_syntax_cmp - 
slapi_attr_values2keys_sv failed for type lastUpdated

When I try to start either server now, I get the usual recovery messages and 
then a bunch of these errors and a crash. I've checked as many things as I can 
think of, including dse.ldif, which is fine.

Probably unrelated, but annoying: my error logs are also filling up with lots 
of these:
[06/Oct/2017:21:51:16.987020789 -0700] - ERR - attrlist_replace - attr_replace 
(nsslapd-referral, 
ldap://ldap2.arizona.edu:389/dc%3Deds%2Cdc%3Darizona%2Cdc%3Dedu) failed.


[389-users] Re: Multi-master replication among 1.2 and 1.3 servers

2017-09-25 Thread tdarby
> I'm in a situation where I will need to have my two existing version 1.2.11.15
> B2015.345.187 servers, set up for multi-master replication, do multi-master 
> replication
> with two additional 1.3 servers. Is there any known problem with this and is 
> anyone doing
> it?

In my tests I haven't found any issues with this at all, FYI.


[389-users] Multi-master replication among 1.2 and 1.3 servers

2017-09-13 Thread tdarby
I'm in a situation where I will need to have my two existing version 1.2.11.15 
B2015.345.187 servers, set up for multi-master replication, do multi-master 
replication with two additional 1.3 servers. Is there any known problem with 
this and is anyone doing it?


[389-users] Re: Index corruption message in multimaster replication

2017-07-17 Thread tdarby
> But there are two issues, the BUFFER_SMALL was on one server and the 
> *zon error on the other. And it explicitly said that the data size was 5 
> instead of the expected 4, so this indicates that there is really 
> something broken in that index.
> 
> I would suggest to combine your and Mark's proposal: remove the 
> substring index and reindex the database

Thanks to all who replied. I'm going to reindex the database tonight.


[389-users] Re: Index corruption message in multimaster replication

2017-07-16 Thread tdarby
> Which version of 389-ds-base do you have configured? 

I'm running on RHEL 6.9.
In the errors log, it shows 389-Directory/1.2.11.15 B2015.345.187.
The RPM is 389-ds-base-1.2.11.15-69.el6_7.x86_64.

> Sorry, there is no way to see which index it is easily. It looks like a
> substring index, but you likely need to correlate this to an access log
> that made a query of "*zon".

I searched all the access logs there and couldn't find a string matching *zon. 
Subsequently, another identical error message showed up for the string "*urs". 
I was not able to find that in the access logs either.

So, I started doing dbscans on the indexes and eventually found two that are 
most likely the culprits, cn.db4 and ismemberof.db4. In the dbscans for these I 
found the following:

cn.db4:
*urs  403595
*zon  409451

ismemberof.db4:
*urs  403926
*zon  28

Are you saying then that I may not actually have a corrupt index?

Thanks,
Tim


[389-users] Index corruption message in multimaster replication

2017-07-13 Thread tdarby
I have two 389 servers configured for multimaster replication. I noticed these 
possibly related messages in the errors logs:

server1:
[12/Jul/2017:07:50:44 -0700] - database index is corrupt; key *zon has a data 
item with the wrong size (5)

server2:
[12/Jul/2017:07:55:48 -0700] - idl_new.c BAD 59, err=-30999 DB_BUFFER_SMALL: 
User memory too small for return value
[12/Jul/2017:07:55:48 -0700] - database index operation failed BAD 1050, 
err=-30999 DB_BUFFER_SMALL: User memory too small for

How can I determine what index this is referring to and how do I repair it?

Version info:
389-Directory/1.2.11.15 B2015.345.187 (389-ds-base-1.2.11.15-69.el6_7.x86_64 is 
the RPM) on RHEL 6.9