[Freeipa-users] Re: ipaserver.ipa.1017.abc can not serve DNS for 1017.abc

2023-08-09 Thread Alexander Bokovoy via FreeIPA-users

On Wed, 09 Aug 2023, Alan Latteri via FreeIPA-users wrote:

OK, but why is this?  It is a very clean, standard install of
FreeIPA, and the domains were added via the standard methods in the GUI.
Everything but the apex domain of the IPA server works totally fine.
There is no reason this should not work.  What is the solution to
achieve this scenario?


This is not a FreeIPA-specific problem. It is a generic DNS setup issue.

A DNS server needs to know where to go for the NS records of a specific
zone. Since your NS record points at a name BIND cannot resolve
(because it is still loading the parent zone that holds that NS
record's value), it cannot complete its validation of the loaded values.

You can imitate the same with a plain BIND setup as well: it will fail
to load that zone too. Other DNS server implementations might postpone
NS record validation to a later stage, though I doubt it; most validate
static values at zone load time.

If you want IPA to serve the parent zone, use a name in the NS record
that belongs to a different DNS zone, one that is hosted elsewhere.
Remember that DNS is hierarchical.
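
A minimal sketch of the failing pattern in plain BIND terms (the zone
and host names below are illustrative, not copied from any real config
in this thread):

```
; zone file for the apex zone "1017.abc"
$ORIGIN 1017.abc.
@  IN SOA ipaserver.ipa.1017.abc. hostmaster.1017.abc. (1 3600 900 604800 86400)
; Problem: the NS target lives under a child zone served by the same
; server and cannot be resolved while this zone is still being loaded
; and validated:
@  IN NS  ipaserver.ipa.1017.abc.
; Works: an NS name from a zone hosted elsewhere resolves independently
; of this zone's load state:
@  IN NS  ns1.example.net.
```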

--
/ Alexander Bokovoy
Sr. Principal Software Engineer
Security / Identity Management Engineering
Red Hat Limited, Finland
___
FreeIPA-users mailing list -- freeipa-users@lists.fedorahosted.org
To unsubscribe send an email to freeipa-users-le...@lists.fedorahosted.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedorahosted.org/archives/list/freeipa-users@lists.fedorahosted.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[Freeipa-users] Re: Visibility/access of Freeipa users to windows on trusted AD

2023-08-09 Thread Francis Augusto Medeiros-Logeay via FreeIPA-users




On 2023-02-07 08:20, Alexander Bokovoy via FreeIPA-users wrote:
On Mon, 06 Feb 2023, Francis Augusto Medeiros-Logeay via 
FreeIPA-users wrote:

Hi,

I have searched this everywhere, but can't find it.

I want to grant a FreeIPA user access to a Windows machine. When I 
try to grant the user access on Windows, adding it like 
FREEIPADOMAIN\freeipauser, I get an error. There is a trust between 
the two domains, but in every place where I see the trusted domain on 
Windows (for example when configuring a GPO), I can't search for 
FreeIPA users.


Is this how it is supposed to be, or how can I see my FreeIPA users 
on Windows the same way I see AD users on my freeipa linux clients?


This is how it is supposed to be. Using IPA users on Windows systems in
a trusted AD forest is not supported so far. We need to complete the
Global Catalog service implementation first, which is currently on hold
because other work has priority.



Hi,

I just wonder if any work was done towards this. Is there any place we 
can follow the progress of this?


Best,
Francis


[Freeipa-users] Re: In FreeIPA AD trust environment add AD user to local group

2023-08-09 Thread Sameer Gurung via FreeIPA-users
On Mon, 31 Jul 2023 at 12:53, Alexander Bokovoy wrote:

> On Mon, 31 Jul 2023, Sameer Gurung via FreeIPA-users wrote:
> >On Sun, Jul 30, 2023 at 10:20 PM Ronald Wimmer via FreeIPA-users <
> >freeipa-users@lists.fedorahosted.org> wrote:
> >
> >> The referenced thread is about merging local and IPA groups. Not
> >> explicitly about the direction.
> >>
> >> Cheers,
> >> Ronald
> >>
> >I don't quite follow. I have added a docker group to freeipa with the
> >--external option, then added my AD user to this group; this works fine.
> >However, on the client, group merging does not take place: the AD user
> >is not added to the local docker group of the client.
>
> You are using it the wrong way.
>
> The 'external' group in IPA is not a POSIX group. It is supposed to be
> included in a POSIX group; SSSD on the client system will then pull
> all external references from the 'external' group when building up
> the membership of the POSIX group. That's why the documentation talks
> about a two-group buildup:
>
>   - create an 'external' group and add AD objects as members of it
>   - create a POSIX group and add the 'external' group as a member
>
> The group merging feature in glibc works only for POSIX groups, because
> these are the only groups that exist in the POSIX environment where
> glibc operates. Unless an AD user is pulled into the POSIX group, the
> group cannot list the AD user as a member.
>
> So you should create a 'docker-external' 'external' group and add users
> there. Then create a 'docker' group in IPA and add the 'docker-external'
> group as a member. Then, upon login to a system governed by SSSD,
> this 'docker' group membership will be filled in by SSSD for the AD
> user, and glibc will handle group merging on top of that.
>
I thought this had solved my problem, but after the recent update to
freeipa, group merging no longer works.

1. New AD users added to the docker-external group are not added to the
local machine's docker group.

2. AD users that were already in the docker-external group and were added
to the local machine's docker group no longer have permission to run docker.
Running the id command to check user details shows them to be a member of
the docker group, but the id of the docker group is the id of the freeipa
docker posix group.

> >
> >>
> >> On 30.07.23 17:54, Sameer Gurung via FreeIPA-users wrote:
> >> > I have followed the link you sent and managed to add users to the
> local
> >> > docker group when the users are in FreeIPA. However in my case they
> are
> >> > AD users logging in to linux clients through the IPA AD trust
> >> >
> >> > *Sameer Kr. Gurung*
> >> >
> >> > On Sun, Jul 30, 2023 at 6:07 PM Ronald Wimmer via FreeIPA-users
> >> > wrote:
> >> >
> >> > Have a look at
> >> > https://lists.fedorahosted.org/archives/list/freeipa-users@lists.fedorahosted.org/thread/WR7JQOMWCEXNABNSZGFF2FYN6ENEHEIB/#BBVO35ZP4YFI7C27NASZLYWRWDFY6DRH
> >> >
> >> >
> >> > On 30 July 2023 at 11:17:37 CEST, Sameer Gurung via
> >> > FreeIPA-users wrote:
> >> >
> >> > I have integrated freeipa with AD via a two way trust.
> However I
> >> > now have a problem
> >> >
> >> > How do I add my AD users logging in to linux clients to the
> >> > local machine's docker group, so that they can run docker?
> >> > Any help would be appreciated.
> >> > Thanks everyone!
> >> >
> >> > Sameer K Gurung
> >> >
> >> >
> >> >

[Freeipa-users] Re: After "writeback to ldap failed" -- silent total freeipa failure / deadlock.

2023-08-09 Thread Mark Reynolds via FreeIPA-users


On 8/9/23 2:00 AM, Alexander Bokovoy wrote:

On Tue, 08 Aug 2023, Harry G Coin wrote:
Thanks for your help.  Details below.  The problem 'moved' in, I hope,
a diagnostically useful way, but the system remains broken.


On 8/8/23 08:54, Alexander Bokovoy wrote:

On Tue, 08 Aug 2023, Harry G Coin wrote:


On 8/8/23 02:43, Alexander Bokovoy wrote:

pstack $(pgrep ns-slapd) > ns-slapd.log

Tried an upgrade from 4.9.10 to 4.9.11; the "writeback to ldap
failed" error moved from the primary instance (on which the dns
records were being added) to the replica, which hung in the same
fashion.  Here's the log you asked for from attempting 'systemctl
restart dirsrv@...'; it just hangs at 100% cpu for about 10 minutes.


Thank you. Are you using schema compat for some legacy clients?



This is a fresh install of 4.9.10 from about a week ago, upgraded to
4.9.11 yesterday: just two freeipa instances, no appreciable user
load, using the install defaults.  The 'in house' system then starts
loading lots of dns records via the python ldap2 interface on the
first of the two systems installed, and the replica produced what you
see in this post.  There is no 'private' information involved of any
sort; it's supposed to field DNS calls from the public, but was so
unreliable I had to implement unbound on other servers, so all
freeipa does is IXFR to unbound for the heavy load.  I suppose there
may be <16 other in-house lab systems, maybe 2 or 3 with any
activity, that use it for dns.  The only other clue is that these are
running on VMs in older servers and have no software packages
installed other than freeipa, what freeipa needs to run, and the
in-house program that loads the dns.


Just to exclude potential problems with schema compat, it can be
disabled if you are not using it.

I don't think it is about named per se; it is a bit of an unfortunate
interop inside ns-slapd between different plugins. bind-dyndb-ldap
relies on the syncrepl extension, whose implementation in ns-slapd uses
the retro changelog content. The retro changelog plugin triggers some
updates that cause the schema compatibility plugin to lock itself up,
depending on the order of updates that the retro changelog captures. We
fixed that in the slapi-nis package some time ago and it *should* be
ignoring the retro changelog changes, but somehow they still propagate
into it. There are a few places in ns-slapd which were addressed just
recently, and those updates might help (out later this year in RHEL).
Disabling schema compat would be best.

What's worse, every reboot attempt waits the full '9 min 29 secs' 
before systemd forcibly terminates ns-slapd to finish the 'stop job'.


That's why I'm so troubled by all this, it's not like there is any 
interference from anything other than what freeipa puts out there, 
and it just locks with a message that gives no indication of what to 
do about it, with nothing in any logs and 'systemctl 
is-system-running' reports 'running'.


You could easily replicate this: imagine a simple validation test
that sets up two freeipa nodes, turns on dnssec, creates some
domains, then adds A, AAAA, and *.arpa records using the ldap2 api on
one of the nodes.  Maybe limit the net speed between the nodes to a
typical 1Gb link, maybe at most 4 processor cores of some older
vintage and 5GB memory.  It takes less than 2 minutes after the dns
load starts to lock up.


What's really odd is that bind9 / named keeps blasting out change
notifications for some of the updated domains, and then a few lines
later, with no intervening activity in any log or by any program
affecting the zone, will publish further change notifications with a
new serial number for the same zone.  This happens for all the zones
that get modifications.  I'm thinking 'rr' computations?  I wonder if
those entries, being auto-generated internally, are creating a
'flow control' issue between the primary and replica.


This is something that the retro changelog is responsible for, as it is
the data store used by the syncrepl protocol implementation. If these
'changes' appear again and again, it means the retro changelog plugin
marks them as new for this particular syncrepl client (bind-dyndb-ldap).

All threads other than thread 30 are normal (idle) ones, but this one
blocks the database backend in the log flush sequence while writing the
retro changelog entry for this updated DNS record:


Thread 30 (Thread 0x7f0e583ff700 (LWP 1438)):
#0  0x7f0e9bf7d8af in fdatasync () at target:/lib64/libc.so.6
#1  0x7f0e91cbe6b5 in __os_fsync () at target:/lib64/libdb-5.3.so
#2  0x7f0e91ca598c in __log_flush_int () at target:/lib64/libdb-5.3.so
#3  0x7f0e91ca7dd0 in __log_flush () at target:/lib64/libdb-5.3.so
#4  0x7f0e91ca7f73 in __log_flush_pp () at target:/lib64/libdb-5.3.so
#5  0x7f0e8afe1304 in bdb_txn_commit (li=, txn=0x7f0e583fd028, use_lock=1) at ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.c:2772
#6  0x7f0e8af95515 in dblayer_txn_commit (be
[Freeipa-users] IPA sub-domain in a lab?

2023-08-09 Thread Amos via FreeIPA-users
We currently use (Free)IPA (what's provided by Red Hat) in a forest trust
relationship with our Active Directory domains. All accounts are defined in
AD with the necessary POSIX attributes. The only things locally defined
within IPA are the automounter maps, sudo rules, and HBAC rules. (I must
say, these HBAC rules work rather nicely!)


A research group wants to create their own OU in AD to manage and rely on
AD for authentication. Centralized sudo rule configuration is also
important to them. They would like to have internal DNS for their lab of
entirely Linux machines so that these systems are more easily accessible
from within the lab instead of relying exclusively on IP addresses. (We use
Infoblox for centralized DNS, but since this is a private lab, there's a
question as to whether to leverage our Infoblox DNS or to use DNS in their
own IPA instance.)


On one hand, it makes sense to set them up using IPA. If so, would these
servers be in a sub-domain of the central IPA? They would need to be able
to manage this instance of IPA, but we would not want them to have admin
rights on the central IPA servers. Under this scenario, would the trust to
AD remain?


I'm fairly comfortable with the principles behind IPA, but only so far as
we're talking about the global environment. Setting things up in
semi-connected labs like this would be new to us, at least since we moved
to IPA.


There is some pressure to have their lab bind directly to AD. I pointed out
that currently there would be no way to centrally manage the sudo rules.
However, we're also currently considering adding the sudo schema to AD,
which if we did, might take care of that.


So, I'm just trying to wrap my head around all the possible approaches and
weigh the pros and cons with either approach. Any insight would be greatly
appreciated.


Thanks.


[Freeipa-users] Re: In FreeIPA AD trust environment add AD user to local group

2023-08-09 Thread Alexander Bokovoy via FreeIPA-users

On Wed, 09 Aug 2023, Sameer Gurung wrote:

On Mon, 31 Jul 2023 at 12:53, Alexander Bokovoy wrote:


On Mon, 31 Jul 2023, Sameer Gurung via FreeIPA-users wrote:
>On Sun, Jul 30, 2023 at 10:20 PM Ronald Wimmer via FreeIPA-users <
>freeipa-users@lists.fedorahosted.org> wrote:
>
>> The referenced thread is about merging local and IPA groups. Not
>> explicitly about the direction.
>>
>> Cheers,
>> Ronald
>>
>I don't quite follow. I have added a docker group to freeipa with the
>--external option, then added my AD user to this group; this works fine.
>However, on the client, group merging does not take place: the AD user
>is not added to the local docker group of the client.

You are using it the wrong way.

The 'external' group in IPA is not a POSIX group. It is supposed to be
included in a POSIX group; SSSD on the client system will then pull
all external references from the 'external' group when building up
the membership of the POSIX group. That's why the documentation talks
about a two-group buildup:

  - create an 'external' group and add AD objects as members of it
  - create a POSIX group and add the 'external' group as a member

The group merging feature in glibc works only for POSIX groups, because
these are the only groups that exist in the POSIX environment where
glibc operates. Unless an AD user is pulled into the POSIX group, the
group cannot list the AD user as a member.

So you should create a 'docker-external' 'external' group and add users
there. Then create a 'docker' group in IPA and add the 'docker-external'
group as a member. Then, upon login to a system governed by SSSD, this
'docker' group membership will be filled in by SSSD for the AD user,
and glibc will handle group merging on top of that.
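
The two-group buildup above can be sketched with the ipa CLI (the
group names come from this thread; the AD user name and domain are
illustrative only):

```
# non-POSIX 'external' group that can hold AD members by name/SID
ipa group-add docker-external --external
ipa group-add-member docker-external --external 'aduser@ad.example.com'

# POSIX group (the default group type) that clients actually resolve
ipa group-add docker
ipa group-add-member docker --groups=docker-external
```

On the client, `id aduser@ad.example.com` should then list `docker`
among the user's groups once SSSD has refreshed its cache.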


I thought this had solved my problem, but after the recent update to
freeipa, group merging no longer works.

1. New AD users added to the docker-external group are not added to the
local machine's docker group.

2. AD users that were already in the docker-external group and were added
to the local machine's docker group no longer have permission to run docker.
Running the id command to check user details shows them to be a member of
the docker group, but the id of the docker group is the id of the freeipa
docker posix group.


Since the user/group data properly comes from IPA, you need to check
your client system configuration. Group merging is a feature of glibc
and is driven by the configuration in /etc/nsswitch.conf.
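
For reference, glibc selects group merging per database via the action
syntax in /etc/nsswitch.conf; merging the local and SSSD views of a
group requires the [SUCCESS=merge] action between the services, e.g.:

```
# /etc/nsswitch.conf (fragment)
# Without [SUCCESS=merge], the lookup stops at the first service that
# answers, so a local 'docker' entry in /etc/group hides the IPA one.
group: files [SUCCESS=merge] sss
```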


--
/ Alexander Bokovoy
Sr. Principal Software Engineer
Security / Identity Management Engineering
Red Hat Limited, Finland


[Freeipa-users] Re: Visibility/access of Freeipa users to windows on trusted AD

2023-08-09 Thread Alexander Bokovoy via FreeIPA-users

On Wed, 09 Aug 2023, Francis Augusto Medeiros-Logeay via FreeIPA-users wrote:




On 2023-02-07 08:20, Alexander Bokovoy via FreeIPA-users wrote:
On Mon, 06 Feb 2023, Francis Augusto Medeiros-Logeay via 
FreeIPA-users wrote:

Hi,

I have searched this everywhere, but can't find it.

I want to grant a FreeIPA user access to a Windows machine. 
When I try to grant the user access on Windows, adding it like 
FREEIPADOMAIN\freeipauser, I get an error. There is a trust 
between the two domains, but in every place where I see the trusted 
domain on Windows (for example when configuring a GPO), I can't 
search for FreeIPA users.


Is this how it is supposed to be, or how can I see my FreeIPA 
users on Windows the same way I see AD users on my freeipa linux 
clients?


This is how it is supposed to be. Using IPA users on Windows systems in
a trusted AD forest is not supported so far. We need to complete the
Global Catalog service implementation first, which is currently on hold
because other work has priority.



Hi,

I just wonder if any work was done towards this. Is there any place we 
can follow the progress of this?


There is currently no work beyond what exists in my gc-wip branch
https://github.com/freeipa/freeipa/compare/master...abbra:freeipa:gc-wip

The roadmap view for IPA issues shows what still needs to be completed.
This is not a full list, because we have been moving iteratively,
finding issues one by one, and simply don't know how much is left:
https://pagure.io/freeipa/roadmap/Global%20Catalog%20and%20IPA-IPA%20trust/



--
/ Alexander Bokovoy
Sr. Principal Software Engineer
Security / Identity Management Engineering
Red Hat Limited, Finland


[Freeipa-users] Re: IPA sub-domain in a lab?

2023-08-09 Thread Alexander Bokovoy via FreeIPA-users

On Wed, 09 Aug 2023, Amos via FreeIPA-users wrote:

We currently use (Free)IPA (what's provided by Red Hat) in a forest trust
relationship with our Active Directory domains. All accounts are defined in
AD with the necessary POSIX attributes. The only things locally defined
within IPA are the automounter maps, sudo rules, and HBAC rules. (I must
say, these HBAC rules work rather nicely!)


A research group wants to create their own OU in AD to manage and rely on
AD for authentication. Centralized sudo rule configuration is also
important to them. They would like to have internal DNS for their lab of
entirely Linux machines so that these systems are more easily accessible
from within the lab instead of relying exclusively on IP addresses. (We use
Infoblox for centralized DNS, but since this is a private lab, there's a
question as to whether to leverage our Infoblox DNS or to use DNS in their
own IPA instance.)


On one hand, it makes sense to set them up using IPA. If so, would these
servers be in a sub-domain of the central IPA? They would need to be able
to manage this instance of IPA, but we would not want them to have admin
rights on the central IPA servers. Under this scenario, would the trust to
AD remain?


I'm fairly comfortable with the principles behind IPA, but only so far as
we're talking about the global environment. Setting things up in
semi-connected labs like this would be new to us, at least since we moved
to IPA.


There is some pressure to have their lab bind directly to AD. I pointed out
that currently there would be no way to centrally manage the sudo rules.
However, we're also currently considering adding the sudo schema to AD,
which if we did, might take care of that.


So, I'm just trying to wrap my head around all the possible approaches and
weigh the pros and cons with either approach. Any insight would be greatly
appreciated.


My suggestion would be for them to stand up their own IPA deployment,
unrelated to yours. That way they can configure their own trust to AD
and the settings they need.

You would not be able to establish a trust between your IPA setup and
theirs, as we do not yet support IPA-IPA trust.

Giving them administrative permissions in your IPA setup for only a
subset of objects is not really viable. IPA is not designed around the
concept of separate administrators for individual objects. If you are
an admin for, say, HBAC rules, you manage all of them.


--
/ Alexander Bokovoy
Sr. Principal Software Engineer
Security / Identity Management Engineering
Red Hat Limited, Finland


[Freeipa-users] Re: After "writeback to ldap failed" -- silent total freeipa failure / deadlock.

2023-08-09 Thread Harry G Coin via FreeIPA-users


On 8/9/23 01:00, Alexander Bokovoy wrote:

On Tue, 08 Aug 2023, Harry G Coin wrote:
Thanks for your help.  Details below.  The problem 'moved' in, I hope,
a diagnostically useful way, but the system remains broken.


On 8/8/23 08:54, Alexander Bokovoy wrote:

On Tue, 08 Aug 2023, Harry G Coin wrote:


On 8/8/23 02:43, Alexander Bokovoy wrote:

pstack $(pgrep ns-slapd) > ns-slapd.log

Tried an upgrade from 4.9.10 to 4.9.11; the "writeback to ldap
failed" error moved from the primary instance (on which the dns
records were being added) to the replica, which hung in the same
fashion.  Here's the log you asked for from attempting 'systemctl
restart dirsrv@...'; it just hangs at 100% cpu for about 10 minutes.


Thank you. Are you using schema compat for some legacy clients?



This is a fresh install of 4.9.10 from about a week ago, upgraded to
4.9.11 yesterday: just two freeipa instances, no appreciable user
load, using the install defaults.  The 'in house' system then starts
loading lots of dns records via the python ldap2 interface on the
first of the two systems installed, and the replica produced what you
see in this post.  There is no 'private' information involved of any
sort; it's supposed to field DNS calls from the public, but was so
unreliable I had to implement unbound on other servers, so all
freeipa does is IXFR to unbound for the heavy load.  I suppose there
may be <16 other in-house lab systems, maybe 2 or 3 with any
activity, that use it for dns.  The only other clue is that these are
running on VMs in older servers and have no software packages
installed other than freeipa, what freeipa needs to run, and the
in-house program that loads the dns.


Just to exclude potential problems with schema compat, it can be
disabled if you are not using it.


How?  The installs just use all the defaults, other than enabling dnssec
and PTR records for all A/AAAA records.


I'm officially in 'desperation mode', as not being able to populate DNS
in freeipa reduces everyone to pencil and paper and coffee, with full
project stoppage until it's fixed or at least 'worked around'.  So
anything that 'might help' can be sacrificed so that at least
'something' works 'somewhat'.  If old AD needs to be 'broken' or 'off'
but mostly the rest of it 'works sort of', then how do I do it?


Really, this can't be hard to reproduce: it's just two instances with a
1G link between them, each with a pair of old rusty hard drives in an
lvm mirror using a COW file system, dnssec on, and one of them loading
lots of dns with reverse pointers for each A/AAAA, with maybe 200 to 600
PTR records per *.arpa and maybe 10-200 records per subdomain, maybe 200
domains total.  A couple of python for loops and, hey presto, you'll see
freeipa lock up without notice in your lab as well.  I just can't
imagine that making these race conditions appear, when the only
important load is DNS adds/finds/shows, should be difficult.
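
A hypothetical shell sketch of the kind of bulk load described (the
original loader used the python ldap2 API directly; the zone names,
counts, and addresses here are illustrative only):

```shell
# Create many zones and A records with auto-generated reverse PTRs,
# approximating the bulk-load pattern that triggers the hang.
for d in $(seq 1 200); do
  ipa dnszone-add "lab${d}.example.test" --allow-sync-ptr=true
  for h in $(seq 1 50); do
    ipa dnsrecord-add "lab${d}.example.test" "host${h}" \
      --a-rec="10.$((d % 250)).0.${h}" --a-create-reverse
  done
done
```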


I appreciate the help, and have become officially fearful about
freeipa.  Maybe it's seldom used extensively for DNS, so my use case
is an outlier?  Why are so few seeing this?  It's a fully default
package install, with no custom changes to the OS, freeipa, or other
packages.  I don't get it.


Thanks for any leads or help!




I don't think it is about named per se; it is a bit of an unfortunate
interop inside ns-slapd between different plugins. bind-dyndb-ldap
relies on the syncrepl extension, whose implementation in ns-slapd uses
the retro changelog content. The retro changelog plugin triggers some
updates that cause the schema compatibility plugin to lock itself up,
depending on the order of updates that the retro changelog captures. We
fixed that in the slapi-nis package some time ago and it *should* be
ignoring the retro changelog changes, but somehow they still propagate
into it. There are a few places in ns-slapd which were addressed just
recently, and those updates might help (out later this year in RHEL).
Disabling schema compat would be best.

What's worse, every reboot attempt waits the full '9 min 29 secs' 
before systemd forcibly terminates ns-slapd to finish the 'stop job'.


That's why I'm so troubled by all this, it's not like there is any 
interference from anything other than what freeipa puts out there, 
and it just locks with a message that gives no indication of what to 
do about it, with nothing in any logs and 'systemctl 
is-system-running' reports 'running'.


You could easily replicate this:  imagine a simple validation test 
that sets up two freeipa nodes, turns on dnssec, creates some 
domains, then adds A  and *.arpa records using the ldap2 api on 
one of the nodes.  Maybe limit the net speed between the nodes to a 
1GB link typical, maybe at most 4 processor cores of some older 
vintage and 5GB memory.  It takes less than 2 minutes after dns load 
start to lock up.


What's really odd is bind9 / named keeps blasting out change 
notifications for some of the updated domains, then a fe

[Freeipa-users] Re: After "writeback to ldap failed" -- silent total freeipa failure / deadlock.

2023-08-09 Thread Thierry Bordaz via FreeIPA-users


On 8/9/23 17:15, Harry G Coin wrote:


On 8/9/23 01:00, Alexander Bokovoy wrote:

On Tue, 08 Aug 2023, Harry G Coin wrote:
Thanks for your help.  Details below.  The problem 'moved' in, I hope,
a diagnostically useful way, but the system remains broken.


On 8/8/23 08:54, Alexander Bokovoy wrote:

On Tue, 08 Aug 2023, Harry G Coin wrote:


On 8/8/23 02:43, Alexander Bokovoy wrote:

pstack $(pgrep ns-slapd) > ns-slapd.log

Tried an upgrade from 4.9.10 to 4.9.11; the "writeback to ldap
failed" error moved from the primary instance (on which the dns
records were being added) to the replica, which hung in the same
fashion.  Here's the log you asked for from attempting 'systemctl
restart dirsrv@...'; it just hangs at 100% cpu for about 10 minutes.


Thank you. Are you using schema compat for some legacy clients?



This is a fresh install of 4.9.10 from about a week ago, upgraded to
4.9.11 yesterday: just two freeipa instances, no appreciable user
load, using the install defaults.  The 'in house' system then starts
loading lots of dns records via the python ldap2 interface on the
first of the two systems installed, and the replica produced what you
see in this post.  There is no 'private' information involved of any
sort; it's supposed to field DNS calls from the public, but was so
unreliable I had to implement unbound on other servers, so all
freeipa does is IXFR to unbound for the heavy load.  I suppose there
may be <16 other in-house lab systems, maybe 2 or 3 with any
activity, that use it for dns.  The only other clue is that these are
running on VMs in older servers and have no software packages
installed other than freeipa, what freeipa needs to run, and the
in-house program that loads the dns.


Just to exclude potential problems with schema compat, it can be
disabled if you are not using it.


How?  The installs just use all the defaults, other than enabling
dnssec and PTR records for all A/AAAA records.


I'm officially in 'desperation mode', as not being able to populate DNS
in freeipa reduces everyone to pencil and paper and coffee, with full
project stoppage until it's fixed or at least 'worked around'.  So
anything that 'might help' can be sacrificed so that at least
'something' works 'somewhat'.  If old AD needs to be 'broken' or 'off'
but mostly the rest of it 'works sort of', then how do I do it?


Really this can't be hard to reproduce: it's just two instances with a 
1G link between them, each with a pair of old rusty hard drives in an 
lvm mirror using a COW file system, dnssec on, and one of them loading 
lots of dns with reverse pointers for each A/AAAA, with maybe 200 to 
600 PTR records per *.arpa zone and maybe 10-200 records per subdomain, 
maybe 200 domains total.  A couple of python for loops and hey presto, 
you'll see freeipa lock up without notice in your lab as well.  I just 
can't imagine that reproducing these race conditions, when the only 
important load is DNS adds/finds/shows, should be difficult.


I appreciate the help, and have become officially fearful about 
freeipa.  Maybe it's seldom used extensively for DNS and so my use 
case is an outlier?   Why are so few seeing this?  It's a fully 
default package install, no custom changes to the OS, freeipa, other 
packages.   I don't get it.


Thanks for any leads or help!



Hi Harry,


I agree with Mark, nothing suspicious on Thread 30. It is flushing its txn.
The discussion is quite long; do you mind re-explaining what the 
current symptoms are?

Is it hanging during an update? Consuming CPU?
Could you run: top -H -p <pid> -n 5 -d 3

If it is hanging, could you run 'db_stat -CA -h /dev/shm/slapd-<instance>/ -N'


regards
thierry






I don't think it is about named per se; it is a bit of an unfortunate
interop inside ns-slapd between different plugins. bind-dyndb-ldap
relies on the syncrepl extension, whose implementation in ns-slapd
uses the retro changelog content. The retro changelog plugin triggers some
updates that cause the schema compatibility plugin to lock itself up,
depending on the order of updates that the retro changelog captures. We
fixed that in the slapi-nis package some time ago and it *should* be
ignoring the retro changelog changes, but somehow they still propagate
into it. There are a few places in ns-slapd which were addressed just
recently, and those updates might help (out later this year in RHEL).
Disabling schema compat would be the best option.
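For reference, a minimal sketch of disabling the schema compatibility plugin on a FreeIPA server with the stock ipa-compat-manage tool; the dirsrv instance name below is a placeholder, not taken from this thread:

```shell
# Disable the Schema Compatibility (slapi-nis) plugin; prompts for the
# Directory Manager password. Only do this if no legacy clients rely on
# the cn=compat tree.
ipa-compat-manage disable

# The change takes effect after a directory server restart.
# EXAMPLE-COM is a placeholder instance name; check /etc/dirsrv/ for yours.
systemctl restart dirsrv@EXAMPLE-COM.service
```

Re-enabling later is the symmetric 'ipa-compat-manage enable' plus a restart.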

What's worse, every reboot attempt waits the full '9 min 29 secs' 
before systemd forcibly terminates ns-slapd to finish the 'stop job'.


That's why I'm so troubled by all this, it's not like there is any 
interference from anything other than what freeipa puts out there, 
and it just locks with a message that gives no indication of what to 
do about it, with nothing in any logs and 'systemctl 
is-system-running' reports 'running'.


You could easily replicate this: imagine a simple validation test 
that sets up two freeipa nodes, turns on dnssec, creates some 
domains, then adds A/AAAA

[Freeipa-users] Re: After "writeback to ldap failed" -- silent total freeipa failure / deadlock.

2023-08-09 Thread Harry G Coin via FreeIPA-users
Thierry asked for a recap summary below, so forgive the 'top post'.  
Here it is:


4.9.10 default install on two systems, call them primary (with kasp.db) 
and secondary, but otherwise multi-master; 1G link between them; 
modest/old cpu and drives; 5G memory; with dns/dnssec and adtrust (aimed 
at local samba share support only).  Unremarkable initial install.  
Normal operations, GUI, etc.


A python program using the ldap2 backend on Primary starts loading a few 
dozen default domains with A/AAAA and associated PTR records.  It 
first does dns find/show to check for existence and, if absent, adds the 
domain/subdomain, missing A/AAAA, associated PTR, etc.  Extensive traffic 
in the logs to do with dnssec, notifies being sent back and forth 
between primary and secondary by bind9 (which you'd think already had 
the info in ldap, so why 'notify' via bind, really?), serial numbers going 
up, dnssec updates.  Every now and then the program checks whether 
dnssec keys need rotating or if new zones appear, but that's fairly 
infrequent and seems unrelated.
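The check-then-add flow described above could equally be driven through the ipa CLI; a rough, hypothetical sketch (the zone, record name, and address are invented examples, not values from this thread):

```shell
ZONE=lab42.example.test    # hypothetical zone
NAME=host1                 # hypothetical record name
ADDR=192.0.2.10            # documentation address

# create the zone only if it does not already exist
ipa dnszone-show "$ZONE" >/dev/null 2>&1 || ipa dnszone-add "$ZONE"

# add the A record only if the name is absent;
# --a-create-reverse asks IPA to create the matching PTR record
ipa dnsrecord-show "$ZONE" "$NAME" >/dev/null 2>&1 || \
    ipa dnsrecord-add "$ZONE" "$NAME" --a-rec="$ADDR" --a-create-reverse
```

Wrapping the last two commands in a loop over a host list reproduces the kind of bulk add-if-absent load the message describes.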


After not more than a few minutes of adding records, "writeback to ldap 
failed" will appear in Primary's log.  There will be nothing in any 
log indicating anything else amiss; 'systemctl is-system-running' 
reports 'running'.  Login attempts on the GUI fail 'for an unknown 
reason'; named/bind9 queries for A/AAAA seem to work.  Anything that 
calls ns-slapd times out or hangs waiting forever.  CPU usage near 0.


'systemctl restart ipa' and/or a reboot restores operations -- HOWEVER 
there will be at least a 10 minute wait with ns-slapd at 100% CPU until 
the reboot process forcibly kills it.


Upgrading to 4.9.11 caused the 'writeback to ldap failed' message to 
move to  Secondary, not primary.   Same further consequences.


Alexander's dsconf notion changed the appearance: it broke dnssec 
updates with an LDAP timeout error message.


There is nothing whatever remarkable about this two-node setup. I 
suspect that test environments using the latest processors and all-nvme 
storage are just too performant to manifest it, or the test environments 
don't have dnssec enabled and don't add a few thousand records to a few 
dozen subdomains.


I need some way forward; it's dead in the water now.  Presently my 
'plan', such as it is, is to move the freeipa VMs to faster systems with 
more memory and 10gb interconnects in hopes of not hitting this, but of 
course this is one of those 'sword hanging over everyone's head by a 
thread', 'don't breathe on it wrong or you'll die' situations that needs 
an answer before trust can come back.


I appreciate the focus!


On 8/9/23 11:24, Thierry Bordaz wrote:


On 8/9/23 17:15, Harry G Coin wrote:


On 8/9/23 01:00, Alexander Bokovoy wrote:

On Tue, 08 Aug 2023, Harry G Coin wrote:
Thanks for your help.  Details below. The problem 'moved' in, I hope, 
a diagnostically useful way, but the system remains broken.



[Freeipa-users] Re: After "writeback to ldap failed" -- silent total freeipa failure / deadlock.

2023-08-09 Thread Thierry Bordaz via FreeIPA-users


On 8/9/23 18:55, Harry G Coin wrote:
Thierry asked for a recap summary below, so forgive the 'top post'.  
Here it is:


4.9.10 default install on two systems call them primary (with kasp.db) 
and secondary but otherwise multi-master, 1g link between them, 
modest/old cpu, drives, 5Gmemory, with dns/dnssec and adtrust (aimed 
at local samba share support only).  Unremarkable initial install.  
Normal operations, GUI, etc.


A python program using the ldap2 backend on Primary starts loading a 
few dozen default domains with A /  and associated PTR records.   
It first does dns find/show to check for existence, and if absent adds 
the domain/subdomain, missing A /  assoc PTR etc.    Extensive 
traffic in the logs to do with dnssec, notifies being sent back and 
forth between primary and secondary by bind9 (which you'd think 
already had the info in ldap so why 'notify' via bind really?)  serial 
numbers going up, dnssec updates.  Every now and then the program 
checks whether dnssec keys need rotating or if new zones appear, but 
that's fairly infrequent and seems unrelated.


After not more than a few minutes of adding records, in Primary's log 
"writeback to ldap failed" will appear.   There will be nothing in any 
log indicating anything else amiss, 'systemctl is-system-running' 
reports 'running'.  Login attempts on the GUI fail 'for an unknown 
reason', named/bind9 queries for A/ seem to work.  Anything that 
calls ns-slapd times out or hangs waiting forever.  CPU usage near 0.



Did you get a pstack (ns-slapd) at that time ?




'systemctl restart ipa' and or reboot restores operations-- HOWEVER 
there will be at least a 10 minute wait with ns-slapd at 100% CPU 
until the reboot process forcibly kills it.


I guess most ns-slapd workers (the threads running the requests) have 
been stopped, but I have no idea which one is eating CPU. A 'top -H' 
and a pstack would help.





[Freeipa-users] Re: ipaserver.ipa.1017.abc can not serve DNS for 1017.abc

2023-08-09 Thread Alan Latteri via FreeIPA-users
Thank you for the reply.  

What is the proper way to approach setting up a fresh IPA environment, trying 
to follow the best practice of having IPA and AD in separate subdomains?

I'm a bit confused about how to approach this if I'd like to be able to serve 
the apex domain from IPA.

According to best practice documentation, IPA and AD should be in separate 
subdomains.  For instance, ipaserver.ipa.example.com and 
adserver.ad.example.com.  This would instigate the original apex domain issue.

Sorry for any level of ignorance here.  Trying my best to plan out a 
future-proof deployment.

Thank you.
___
FreeIPA-users mailing list -- freeipa-users@lists.fedorahosted.org
To unsubscribe send an email to freeipa-users-le...@lists.fedorahosted.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedorahosted.org/archives/list/freeipa-users@lists.fedorahosted.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


[Freeipa-users] Re: After "writeback to ldap failed" -- silent total freeipa failure / deadlock.

2023-08-09 Thread Harry G Coin via FreeIPA-users


On 8/9/23 12:05, Thierry Bordaz wrote:


On 8/9/23 18:55, Harry G Coin wrote:
Thierry asked for a recap summary below, so forgive the 'top post'.  
Here it is:


4.9.10 default install on two systems call them primary (with 
kasp.db) and secondary but otherwise multi-master, 1g link between 
them, modest/old cpu, drives, 5Gmemory, with dns/dnssec and adtrust 
(aimed at local samba share support only). Unremarkable initial 
install.  Normal operations, GUI, etc.


A python program using the ldap2 backend on Primary starts loading a 
few dozen default domains with A /  and associated PTR records.   
It first does dns find/show to check for existence, and if absent 
adds the domain/subdomain, missing A /  assoc PTR etc.    
Extensive traffic in the logs to do with dnssec, notifies being sent 
back and forth between primary and secondary by bind9 (which you'd 
think already had the info in ldap so why 'notify' via bind really?)  
serial numbers going up, dnssec updates.  Every now and then the 
program checks whether dnssec keys need rotating or if new zones 
appear, but that's fairly infrequent and seems unrelated.


After not more than a few minutes of adding records, in Primary's log 
"writeback to ldap failed" will appear.   There will be nothing in 
any log indicating anything else amiss, 'systemctl is-system-running' 
reports 'running'.  Login attempts on the GUI fail 'for an unknown 
reason', named/bind9 queries for A/ seem to work.  Anything that 
calls ns-slapd times out or hangs waiting forever.  CPU usage near 0.



Did you get a pstack (ns-slapd) at that time ?


Yes, posted 8/8/23 and again now:

[root@registry2 ~]# pstack 1405 > ns-slapd3.log
[root@registry2 ~]# more ns-slapd3.log
Thread 33 (Thread 0x7f66366f5700 (LWP 2654)):
#0  0x7f6639b6d455 in pthread_rwlock_wrlock () at 
target:/lib64/libpthread.so.0
#1  0x7f6628e9d380 in map_wrlock () at 
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
#2  0x7f6628e8d393 in backend_shr_post_delete_cb.part () at 
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
#3  0x7f6628e8d508 in backend_shr_betxn_post_delete_cb () at 
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
#4  0x7f663d7bec79 in plugin_call_func (list=0x7f66324c8200, 
operation=operation@entry=563, pb=pb@entry=0x7f65fbdffcc0, 
call_one=call_one@entry=0) at ldap/servers/slapd/plugin.c:2032
#5  0x7f663d7beec4 in plugin_call_list (pb=0x7f65fbdffcc0, 
operation=563, list=<optimized out>) at ldap/servers/slapd/plugin.c:1973
#6  0x7f663d7beec4 in plugin_call_plugins 
(pb=pb@entry=0x7f65fbdffcc0, whichfunction=whichfunction@entry=563) at 
ldap/servers/slapd/plugin.c:442
#7  0x7f662ae6ac83 in ldbm_back_delete (pb=0x7f65fbdffcc0) at 
ldap/servers/slapd/back-ldbm/ldbm_delete.c:1289
#8  0x7f663d7696ac in op_shared_delete (pb=pb@entry=0x7f65fbdffcc0) 
at ldap/servers/slapd/delete.c:338
#9  0x7f663d7698bd in delete_internal_pb 
(pb=pb@entry=0x7f65fbdffcc0) at ldap/servers/slapd/delete.c:209
#10 0x7f663d769b3b in slapi_delete_internal_pb 
(pb=pb@entry=0x7f65fbdffcc0) at ldap/servers/slapd/delete.c:151
#11 0x7f66294c4fde in delete_changerecord (cnum=cnum@entry=27941) at 
ldap/servers/plugins/retrocl/retrocl_trim.c:89
#12 0x7f66294c51a1 in trim_changelog () at 
ldap/servers/plugins/retrocl/retrocl_trim.c:290
#13 0x7f66294c51a1 in changelog_trim_thread_fn (arg=<optimized out>) 
at ldap/servers/plugins/retrocl/retrocl_trim.c:333

#14 0x7f663a1cd968 in _pt_root () at target:/lib64/libnspr4.so
#15 0x7f6639b681ca in start_thread () at target:/lib64/libpthread.so.0
#16 0x7f663be12e73 in clone () at target:/lib64/libc.so.6
Thread 32 (Thread 0x7f65f61fc700 (LWP 1438)):
#0  0x7f6639b6d022 in pthread_rwlock_rdlock () at 
target:/lib64/libpthread.so.0
#1  0x7f6628e9d242 in map_rdlock () at 
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
#2  0x7f6628e88298 in backend_bind_cb () at 
target:/usr/lib64/dirsrv/plugins/schemacompat-plugin.so
#3  0x7f663d7bec79 in plugin_call_func (list=0x7f66324c7300, 
operation=operation@entry=401, pb=pb@entry=0x7f6600e07580, 
call_one=call_one@entry=0) at ldap/servers/slapd/plugin.c:2032
#4  0x7f663d7beec4 in plugin_call_list (pb=0x7f6600e07580, 
operation=401, list=<optimized out>) at ldap/servers/slapd/plugin.c:1973
#5  0x7f663d7beec4 in plugin_call_plugins 
(pb=pb@entry=0x7f6600e07580, whichfunction=whichfunction@entry=401) at 
ldap/servers/slapd/plugin.c:442
#6  0x55a716413add in ids_sasl_check_bind 
(pb=pb@entry=0x7f6600e07580) at ldap/servers/slapd/saslbind.c:1205
#7  0x55a7163fbd27 in do_bind (pb=pb@entry=0x7f6600e07580) at 
ldap/servers/slapd/bind.c:367
#8  0x55a716401835 in connection_dispatch_operation 
(pb=0x7f6600e07580, op=<optimized out>, conn=<optimized out>) at 
ldap/servers/slapd/connection.c:626
#9  0x55a716401835 in connection_threadmain (arg=<optimized out>) at 
ldap/servers/slapd/connection.c:1803

#10 0x7f663a1cd968 in _pt_root () at target:/lib64/libnspr4.so
#11 0x7f6639b681ca in start_thread () at t

[Freeipa-users] Re: After "writeback to ldap failed" -- silent total freeipa failure / deadlock.

2023-08-09 Thread Thierry Bordaz via FreeIPA-users


On 8/9/23 21:13, Harry G Coin wrote:



On 8/9/23 12:05, Thierry Bordaz wrote:





Did you get a pstack (ns-slapd) at that time ?


Yes, posted 8/8/23 and again now:

Threads 25 and 33 are in a fatal deadlock. Alexander suggested, as a 
workaround, disabling retroCL trimming. After you disabled retroCL 
trimming and restarted the instance, are you still seeing this kind of 
deadlock?
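A hedged sketch of what "disable retroCL trimming" can look like on a 389-ds instance; nsslapd-changelogmaxage is the Retro Changelog plugin's trim-age setting, and removing it stops age-based trimming. Verify attribute names against your 389-ds version before applying, and note the instance name is a placeholder:

```shell
# Remove the max-age limit from the Retro Changelog plugin so the trim
# thread has nothing to do; prompts for the Directory Manager password.
ldapmodify -D "cn=Directory Manager" -W <<'EOF'
dn: cn=Retro Changelog Plugin,cn=plugins,cn=config
changetype: modify
delete: nsslapd-changelogmaxage
EOF

# Restart the instance for the plugin change to take effect.
# EXAMPLE-COM is a placeholder; check /etc/dirsrv/ for your instance name.
systemctl restart dirsrv@EXAMPLE-COM.service
```

Keep in mind the retro changelog then grows without bound, so this is a diagnostic workaround, not a permanent setting.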


best regards
thierry



[Freeipa-users] Finding users with missing field entries

2023-08-09 Thread Ali Sobhi via FreeIPA-users
How do I search for logins where --departmentnumber value is null?


[Freeipa-users] Re: Finding users with missing field entries

2023-08-09 Thread Alexander Bokovoy via FreeIPA-users

On Thu, 10 Aug 2023, Ali Sobhi via FreeIPA-users wrote:

How do I search for logins where --departmentnumber value is null?


Use LDAP searches directly. The 'ipa *-find' commands do not allow 
searching for the absence of an attribute.

$ kinit admin
$ BASEDN=$(ipa env basedn|cut -d: -f2-|tr -d ' ')
$ ldapsearch -Y GSSAPI -b cn=users,cn=accounts,$BASEDN 
'(&(objectclass=inetorgperson)(!(departmentnumber=*)))'

Please note that the 'admin' user will be missing from this list even though
it does not have a department number. This is because its LDAP record
does not include the 'inetOrgPerson' object class, and hence the
'departmentNumber' attribute is not allowed there. Normal IPA users will
have the 'inetOrgPerson' object class by default:

$ ipa config-show --all --raw|grep ipaUserObjectClasses
  ipaUserObjectClasses: top
  ipaUserObjectClasses: person
  ipaUserObjectClasses: organizationalperson
  ipaUserObjectClasses: inetorgperson
  ipaUserObjectClasses: inetuser
  ipaUserObjectClasses: posixaccount
  ipaUserObjectClasses: krbprincipalaux
  ipaUserObjectClasses: krbticketpolicyaux
  ipaUserObjectClasses: ipaobject
  ipaUserObjectClasses: ipasshuser
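To also catch accounts like 'admin' that fall outside the first search because they lack 'inetOrgPerson' entirely, a complementary search might help (same assumptions as the command above: a valid admin ticket and $BASEDN set from 'ipa env basedn'):

```shell
# list POSIX accounts that do not carry the inetOrgPerson object class at
# all -- these can never have a departmentNumber attribute
ldapsearch -Y GSSAPI -b "cn=users,cn=accounts,$BASEDN" \
    '(&(objectclass=posixaccount)(!(objectclass=inetorgperson)))' uid
```

The union of the two result sets covers every account without a department number, whatever the reason.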


--
/ Alexander Bokovoy
Sr. Principal Software Engineer
Security / Identity Management Engineering
Red Hat Limited, Finland


[Freeipa-users] Re: ipaserver.ipa.1017.abc can not serve DNS for 1017.abc

2023-08-09 Thread Alexander Bokovoy via FreeIPA-users

On Wed, 09 Aug 2023, Alan Latteri via FreeIPA-users wrote:

Thank you for the reply.

What is the proper way to approach setting up a fresh IPA environment,
trying to follow the best practice of having IPA and AD in separate
subdomains?

I'm a bit confused on how to approach, if I'd like to be able to serve
apex domain from IPA.



If you want to add example.com and your IPA server is in
ipa.example.com, then use a 'fake' host name for IPA server in
example.com to specify NS record.

Something like:

ipa-ns.example.com.   300 IN  A   192.168.100.146
ipa.example.com.    86400 IN  NS  ipa-ns.example.com.

The only downside is that you must make sure ipa-ns.example.com gets
updated manually when the IPA server's actual address changes.
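If IPA itself serves example.com, the two records above could be created with the ipa CLI along these lines (a sketch; the names and the address are the examples from this message, so substitute your own):

```shell
# glue-style A record for the 'fake' NS host in the parent zone
ipa dnsrecord-add example.com. ipa-ns --a-rec=192.168.100.146

# delegate ipa.example.com to that host
ipa dnsrecord-add example.com. ipa --ns-rec=ipa-ns.example.com.
```

The delegation target deliberately lives in the parent zone, so BIND can resolve it while loading the zone.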


--
/ Alexander Bokovoy
Sr. Principal Software Engineer
Security / Identity Management Engineering
Red Hat Limited, Finland