Some work was done a few years back by a user who was storing a similar scale 
of users in their directory. They noticed some replication delays, but those 
issues were resolved. So hopefully it "just works".

If you were going to have any issues with this, they would be:

* network response sizes (since the group entries are large, returning them 
can block the connection; see the timing sketch below)
* replication delays (sorting and managing that much content)
* update/write delays
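
To get a rough feel for the first point, here is a minimal sketch using 
python-ldap; the server URI, bind credentials, and the group DN 
(cn=biggroup,ou=Groups,dc=example,dc=com) are placeholders for illustration, 
not anything specific to your deployment:

    # Rough timing of how long the server takes to return the full member
    # list of a large group. DNs and credentials below are placeholders.
    import time
    import ldap

    conn = ldap.initialize("ldap://localhost:389")
    conn.simple_bind_s("cn=Directory Manager", "password")

    start = time.monotonic()
    results = conn.search_s(
        "cn=biggroup,ou=Groups,dc=example,dc=com",
        ldap.SCOPE_BASE,
        "(objectClass=*)",
        ["member"],
    )
    elapsed = time.monotonic() - start

    members = results[0][1].get("member", [])
    print("Fetched %d member values in %.2fs" % (len(members), elapsed))

Running that against a test group of the size you expect should show whether 
the transfer time is acceptable for your clients.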

So I'd test this in a development environment if I were you, but as 
mentioned, since there are already users doing this, hopefully there are no 
hidden traps for you :) 
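
If it helps with that testing, here's a minimal sketch that writes out an 
LDIF for a 150,000-member static group you could load into a development 
instance; the suffix, group name, and member DN pattern are assumptions for 
illustration:

    # Generate an LDIF file describing one large static group for a test
    # instance. The suffix, group cn, and member DN pattern are placeholders.
    SUFFIX = "dc=example,dc=com"
    GROUP_DN = "cn=biggroup,ou=Groups," + SUFFIX
    NUM_MEMBERS = 150000

    with open("biggroup.ldif", "w") as f:
        f.write("dn: %s\n" % GROUP_DN)
        f.write("objectClass: top\n")
        f.write("objectClass: groupOfNames\n")
        f.write("cn: biggroup\n")
        for i in range(NUM_MEMBERS):
            # In a real test each member DN should refer to an existing user.
            f.write("member: uid=user%d,ou=People,%s\n" % (i, SUFFIX))

    print("Wrote biggroup.ldif with %d member values" % NUM_MEMBERS)

Loading that (for example with ldapadd, alongside matching user entries) and 
then watching how long the second supplier takes to converge should surface 
the replication and write delays mentioned above before anything hits 
production.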

> On 23 Oct 2020, at 08:32, murma...@hotmail.com wrote:
> 
> We have a two-machine 389DS multimaster cluster holding about 850,000 users. 
> It's been working great for over three years now.
> 
> But we are creating some big groups that will have about 150,000 users in 
> them.
> 
> I've read some posts on the list about groups larger than that.
> 
> But I would like to know if there is any limit or precaution when working 
> with groups of this size?

--
Sincerely,

William Brown

Senior Software Engineer, 389 Directory Server
SUSE Labs, Australia
_______________________________________________
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
