On 2025-08-04 9:31 a.m., TomK via FreeIPA-users wrote:
On 2025-08-04 9:02 a.m., Florence Blanc-Renaud wrote:
Hi,
On Mon, Aug 4, 2025 at 2:31 PM TomK via FreeIPA-users <[email protected]> wrote:
GM Folks!
Getting the following:
[S-1-5-21-1803828911-4163023034-2461700517-1104] has a RID that is larger than the ldap_idmap_range_size.
However, my ID range is, or was, 200,000, and there have been no changes on this IPA 4.6.6 / CentOS 7 server in years.
A few messages surrounding the above:
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_process_result] (0x2000): Trace: sh[0x564bbffeced0], connected[1], ops[0x564bc0031710], ldap[0x564bc00049c0]
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_process_message] (0x4000): Message type: [LDAP_RES_SEARCH_REFERENCE]
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_get_generic_ext_add_references] (0x1000): Additional References: ldap://mds.xyz/CN=Configuration,DC=mds,DC=xyz
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_process_result] (0x2000): Trace: sh[0x564bbffeced0], connected[1], ops[0x564bc0031710], ldap[0x564bc00049c0]
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_process_message] (0x4000): Message type: [LDAP_RES_SEARCH_RESULT]
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_get_generic_op_finished] (0x0400): Search result: Success(0), no errmsg set
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_op_destructor] (0x2000): Operation 5 finished
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [generic_ext_search_handler] (0x4000): Request included referrals which were ignored.
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [generic_ext_search_handler] (0x4000): Ref: ldap://ForestDnsZones.mds.xyz/DC=ForestDnsZones,DC=mds,DC=xyz
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [generic_ext_search_handler] (0x4000): Ref: ldap://DomainDnsZones.mds.xyz/DC=DomainDnsZones,DC=mds,DC=xyz
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [generic_ext_search_handler] (0x4000): Ref: ldap://mds.xyz/CN=Configuration,DC=mds,DC=xyz
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_search_user_process] (0x0400): Search for users, returned 1 results.
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_search_user_process] (0x2000): Retrieved total 1 users
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [ldb] (0x4000): start ldb transaction (nesting: 0)
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_save_user] (0x0400): Save user
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sss_domain_get_state] (0x1000): Domain nix.mds.xyz is Active
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sss_domain_get_state] (0x1000): Domain mds.xyz is Active
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_get_primary_name] (0x0400): Processing object tom
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_save_user] (0x0400): Processing user [email protected]
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_save_user] (0x1000): Mapping user [[email protected]] objectSID [S-1-5-21-1803828911-4163023034-2461700517-1104] to unix ID
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_idmap_sid_to_unix] (0x0040): Object SID [S-1-5-21-1803828911-4163023034-2461700517-1104] has a RID that is larger than the ldap_idmap_range_size. See the "ID MAPPING" section of sssd-ad(5) for an explanation of how to resolve this issue.
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_idmap_sid_to_unix] (0x0080): Could not convert objectSID [S-1-5-21-1803828911-4163023034-2461700517-1104] to a UNIX ID
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_save_user] (0x0020): Failed to save user [[email protected]]
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_save_users] (0x0040): Failed to store user 0. Ignoring.
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [ldb] (0x4000): commit ldb transaction (nesting: 0)
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_get_users_done] (0x4000): Saving 1 Users - Done
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_id_op_done] (0x4000): releasing operation connection
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [ldb] (0x4000): Added timed event "ldb_kv_callback": 0x564bc0021ee0
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [ldb] (0x4000): Added timed event "ldb_kv_timeout": 0x564bc0023580
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [ldb] (0x4000): Running timer event 0x564bc0021ee0 "ldb_kv_callback"
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [ldb] (0x4000): Destroying timer event 0x564bc0023580 "ldb_kv_timeout"
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [ldb] (0x4000): Destroying timer event 0x564bc0021ee0 "ldb_kv_callback"
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sysdb_search_by_name] (0x0400): No such entry
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [ipa_get_ad_acct_ad_part_done] (0x0080): Object not found, ending request
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [sdap_id_op_destroy] (0x4000): releasing operation connection
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [dp_req_done] (0x0400): DP Request [Account #1]: Request handler finished [0]: Success
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [_dp_req_recv] (0x0400): DP Request [Account #1]: Receiving request data.
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [dp_req_reply_list_success] (0x0400): DP Request [Account #1]: Finished. Success.
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [dp_req_reply_std] (0x1000): DP Request [Account #1]: Returning [Success]: 0,0,Success
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [dp_table_value_destructor] (0x0400): Removing [0:1:0x0001:1::mds.xyz:[email protected]] from reply table
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [dp_req_destructor] (0x0400): DP Request [Account #1]: Request removed.
(Mon Aug 4 07:42:50 2025) [sssd[be[nix.mds.xyz]]] [dp_req_destructor] (0x0400): Number of active DP request: 0
Ranges, after increasing to 2,000,000:
ipa idrange-find mds.xyz_id_range
----------------
2 ranges matched
----------------
Range name: MDS.XYZ_id_range
First Posix ID of the range: 155600000
Number of IDs in the range: 2000000
First RID of the corresponding RID range: 155600000
Domain SID of the trusted domain:
S-1-5-21-1803828911-4163023034-2461700517
Range type: Active Directory domain range
The SID S-1-5-21-1803828911-4163023034-2461700517-1104 corresponds to the domain SID S-1-5-21-1803828911-4163023034-2461700517 and a RID of 1104. The domain SID falls into the range MDS.XYZ_id_range, which covers RIDs between 155600000 and 155600000+2000000, so RID 1104 is outside of the RID range.
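For reference, the range check described above can be sketched as follows (an illustration using the values from the idrange-find output, not sssd's actual code):

```python
# Values taken from the "ipa idrange-find" output above.
rid_base = 155600000   # First RID of the corresponding RID range
range_size = 2000000   # Number of IDs in the range
rid = 1104             # Last component of S-1-5-21-...-1104

# A RID maps into this range only if rid_base <= rid < rid_base + range_size.
in_range = rid_base <= rid < rid_base + range_size
print(in_range)  # -> False: RID 1104 is far below rid_base 155600000
```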
Did you manually create or modify this AD range?
flo
It was originally auto-selected during the installation. The issue is with users in the MDS.XYZ domain. No customization of this range happened. Just in the last 1-2 days I tried to fix the error by running:
ipa idrange-mod --base-id=155600000 --range-size=2000000 MDS.XYZ_id_range
to increase it to 2,000,000 from the 200,000 it was originally. I did do a yum update recently:
yum history info 31
Loaded plugins: fastestmirror
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Transaction ID : 31
Begin time : Sun Jul 20 21:51:12 2025
Begin rpmdb : 733:71ac40129a477426d1f0f9f1fd02ea2c07387297
End time : 21:51:22 2025 (10 seconds)
End rpmdb : 740:beba678a2cea950b4bd793b4f73711bd562328d6
User : root <root>
Return-Code : Success
Command Line : install python-devel openldap-devel
Transaction performed with:
Installed rpm-4.11.3-43.el7.x86_64 @base
Installed yum-3.4.3-167.el7.centos.noarch @base
Installed yum-plugin-fastestmirror-1.1.31-54.el7_8.noarch @updates
Packages Altered:
Dep-Install cyrus-sasl-2.1.26-24.el7_9.x86_64 @updates
Dep-Install cyrus-sasl-devel-2.1.26-24.el7_9.x86_64 @updates
Updated cyrus-sasl-gssapi-2.1.26-23.el7.x86_64 @base
Update 2.1.26-24.el7_9.x86_64 @updates
Updated cyrus-sasl-lib-2.1.26-23.el7.x86_64 @base
Update 2.1.26-24.el7_9.x86_64 @updates
Updated cyrus-sasl-md5-2.1.26-23.el7.x86_64 @base
Update 2.1.26-24.el7_9.x86_64 @updates
Updated cyrus-sasl-plain-2.1.26-23.el7.x86_64 @base
Update 2.1.26-24.el7_9.x86_64 @updates
Updated openldap-2.4.44-21.el7_6.x86_64 @base
Update 2.4.44-25.el7_9.x86_64 @updates
Updated openldap-clients-2.4.44-21.el7_6.x86_64 @base
Update 2.4.44-25.el7_9.x86_64 @updates
Install openldap-devel-2.4.44-25.el7_9.x86_64 @updates
Updated python-2.7.5-88.el7.x86_64 @base
Update 2.7.5-94.el7_9.x86_64 @updates
Install python-devel-2.7.5-94.el7_9.x86_64 @updates
Updated python-libs-2.7.5-88.el7.x86_64 @base
Update 2.7.5-94.el7_9.x86_64 @updates
Dep-Install python-rpm-macros-3-34.el7.noarch @base
Dep-Install python-srpm-macros-3-34.el7.noarch @base
Dep-Install python2-rpm-macros-3-34.el7.noarch @base
But I'm digging through the C code right now; not sure whether any of the above had an effect:
vi src/lib/idmap/sss_idmap.c
1385 enum idmap_error_code sss_idmap_sid_to_unix(struct sss_idmap_ctx *ctx,
1386                                             const char *sid,
1387                                             uint32_t *_id)
Still poking around on my end.
I have another set of IPA 4.6.6 servers on CentOS 7 and those continue to work fine. I did not do a yum update on those two. :) But... now that you mention it, checking that second cluster:
ipa idrange-find mds.xyz_id_range
----------------
2 ranges matched
----------------
Range name: MDS.XYZ_id_range
First Posix ID of the range: 155600000
Number of IDs in the range: 200000
First RID of the corresponding RID range: 0
Domain SID of the trusted domain:
S-1-5-21-1803828911-4163023034-2461700517
Range type: Active Directory domain range
Range name: MWS.MDS.XYZ_id_range
First Posix ID of the range: 1163400000
Number of IDs in the range: 200000
First RID of the corresponding RID range: 1000
First RID of the secondary RID range: 100000000
Range type: local domain range
----------------------------
Number of entries returned 2
----------------------------
The RID base is 0. Hmm. Does:
--base-id=155600000
correspond to "First RID of the corresponding RID range"? Because if so, then yes, that mod command I used:
ipa idrange-mod --base-id=155600000 --range-size=2000000 MDS.XYZ_id_range
might have changed it, but I need your confirmation.
I did add these to the sssd configuration at some point to try to fix this error, but I'm not 100% sure whether they would alter/impact the IPA ID ranges above:
[domain/nix.mds.xyz]
ldap_id_mapping = True
ldap_idmap_range_min = 100000
ldap_idmap_range_max = 2000000
ldap_idmap_range_size = 1000000
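For what it's worth, the error text quoted in the log fires when the RID is not smaller than the configured range size (see the "ID MAPPING" section of sssd-ad(5)). A simplified sketch of that one condition (not sssd's actual code):

```python
def rid_fits_range(rid: int, range_size: int) -> bool:
    """Mimic the sssd-ad(5) constraint: a RID can only be mapped into a
    slice if it is smaller than ldap_idmap_range_size."""
    return rid < range_size

# With the options above (range_size = 1000000), RID 1104 easily fits,
# so these settings alone shouldn't have triggered that error message.
print(rid_fits_range(1104, 1000000))  # -> True
```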
Tom
Ok so this worked.
ipa idrange-mod MDS.XYZ_id_range --rid-base=0
ipa: WARNING: Service sssd.service requires restart on IPA server
MDS.XYZ_id_range to apply configuration changes.
------------------------------------
Modified ID range "MDS.XYZ_id_range"
------------------------------------
Range name: MDS.XYZ_id_range
First Posix ID of the range: 155600000
Number of IDs in the range: 2000000
First RID of the corresponding RID range: 0
Domain SID of the trusted domain:
S-1-5-21-1803828911-4163023034-2461700517
Range type: Active Directory domain range
# id [email protected]
id: [email protected]: no such user
# systemctl stop sssd; rm -f /var/lib/sss/db/*; systemctl start sssd
# id [email protected]
uid=155601104([email protected]) gid=155601104([email protected])
#1 ---
Still the math troubles me:
"155600000 and 155600000+2000000 => 1104"
0 + 2000000 => 1104?
or do you mean:
155600000 + 155600000 + 2,000,000 ...?
#2 ---
Or perhaps:
If [email protected] user has SID:
S-1-5-21-1803828911-4163023034-2461700517-1104
Then given (original values):
BASE-ID: 155,600,000
RID: 1104
UID: BASE-ID + (RID - BASE-ID) = 155,600,000 + (1104 - 155,600,000) = 1104
and so 1104. But if the base is 0:
UID: BASE-ID + (RID - BASE-ID) = 0 + (1104 - 0) = 1,104
which gives me the same thing, of course, since the BASE-IDs cancel out.
#3 ---
Or perhaps it's this?
posix-base = First Posix ID in the range
rid-base = First RID of the corresponding RID range
rid = from the SID (1104)
So then:
posix-id = posix-base + (rid - rid-base)
then:
posix-id = 155600000 + (1104 - 155600000) = 1104
But, if I have rid-base = 0:
posix-id = 155600000 + (1104 - 0) = 155601104
Which is what I see on the command line. I'm thinking the last one is correct, then, since it now works.
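That #3 arithmetic can be sketched end to end (a hypothetical helper, using the values from this thread):

```python
def sid_to_posix_id(sid: str, posix_base: int, rid_base: int) -> int:
    """posix-id = posix-base + (rid - rid-base), where the RID is the
    last dash-separated component of the SID."""
    rid = int(sid.rsplit("-", 1)[1])
    return posix_base + (rid - rid_base)

SID = "S-1-5-21-1803828911-4163023034-2461700517-1104"

# Original (broken) state: rid-base equal to the POSIX base.
print(sid_to_posix_id(SID, 155600000, 155600000))  # -> 1104

# After "ipa idrange-mod MDS.XYZ_id_range --rid-base=0":
print(sid_to_posix_id(SID, 155600000, 0))          # -> 155601104
```

The second result matches the uid=155601104 that `id [email protected]` returned after the fix.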
Thx,
Tom
Range name: NIX.MDS.XYZ_id_range
First Posix ID of the range: 1746600000
Number of IDs in the range: 200000
First RID of the corresponding RID range: 1000
First RID of the secondary RID range: 100000000
Range type: local domain range
----------------------------
Number of entries returned 2
----------------------------
ipa trust-show mds.xyz
Realm name: mds.xyz
Domain NetBIOS name: MDS
Domain Security Identifier: S-1-5-21-1803828911-4163023034-2461700517
Trust direction: Two-way trust
Trust type: Active Directory domain
sssd.conf
[domain/nix.mds.xyz]
debug_level = 9
cache_credentials = True
krb5_store_password_if_offline = True
ipa_domain = nix.mds.xyz
id_provider = ipa
auth_provider = ipa
access_provider = ipa
ipa_hostname = idmipa01.nix.mds.xyz
chpass_provider = ipa
ipa_server = idmipa01.nix.mds.xyz
ipa_server_mode = True
ldap_tls_cacert = /etc/ipa/ca.crt
sudo_provider = ipa
ldap_sudo_search_base = ou=sudoers,dc=nix,dc=mds,dc=xyz
lookup_family_order = ipv4_only
[domain/sudoproxy]
debug_level = 9
id_provider = proxy
proxy_lib_name = files
ldap_uri = ldap://idmipa01.nix.mds.xyz, ldap://idmipa02.nix.mds.xyz
proxy_pam_target = system-auth-ac
sudo_provider = ipa
ldap_sudo_search_base = ou=sudoers,dc=nix,dc=mds,dc=xyz
ipa_domain = nix.mds.xyz
[sssd]
debug_level = 9
services = nss, ifp, sudo, ssh, pam
config_file_version = 2
domains = sudoproxy, nix.mds.xyz
[nss]
debug_level = 9
memcache_timeout = 600
homedir_substring = /home
[pam]
debug_level = 9
[sudo]
debug_level = 9
[autofs]
[ssh]
[pac]
debug_level = 9
[ifp]
allowed_uids = ipaapi, root
I do not have any specific ID ranges defined in /etc/sssd/sssd.conf.
I tried to modify the ranges using:
ipa idrange-mod --base-id=1746600000 --range-size=2000000 NIX.MDS.XYZ_id_range
But it doesn't fix the above issue, which is not surprising: at least based on my reading, 200000 should be plenty for RID 1104.
Looking for some hints and tips to get past this. I did keep a backup of the IPA servers, with a working cache, off the second node. But after the modifications to the RID ranges above, this second node also stopped working. The first host basically just stopped working on its own, and the second host stopped working after I modified the ID range to 2,000,000. Perhaps updates on the AD servers caused the issue?
Please let me know if more info is needed.
--
Thx,
Tom
P.S. I have started looking at upgrades to a higher version of IPA, though that broke on the current IPA server due to certs not having proper SAN values. Different story though. ;)
--
_______________________________________________
FreeIPA-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedorahosted.org/archives/list/[email protected]
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue
--
Thx,
TK.