On 06/05/2014 01:12 PM, Simo Sorce wrote:
On Thu, 2014-06-05 at 16:46 +0200, Ludwig Krispenz wrote:
On 06/05/2014 03:14 PM, Ludwig Krispenz wrote:
On 06/05/2014 02:45 PM, Simo Sorce wrote:
On Thu, 2014-06-05 at 11:27 +0200, Ludwig Krispenz wrote:
On 06/04/2014 06:04 PM, thierry bordaz wrote:
But this requires that the node database is already initialized (has
the same replicageneration as the other nodes).
If it is not initialized, it could be done by any of the masters. But if it
is automatic, there is a risk that several masters will want to
initialize it.
I would not give the plugin the power to reinitialize a database on
another server, and I also would not put the responsibility to do it on
the plugin. Initializing a server is an administrative task done at
specific steps during deployment or in error conditions; most of the
time it has to be scheduled, and often on-line initialization is not the
preferred method.
Agree.

The plugin could still be used to trigger online initializations: if the
nsds5replicarefresh attribute is part of the information in the shared
tree, it can be modified; the plugin on the targeted server will detect
the change, update the replication agreement object and start the
initialization (but this should probably still be allowed to be done
directly).
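
To make the idea concrete, here is a minimal sketch of what the plugin (or an external helper; python-ldap is used purely for illustration, the real plugin would run inside the directory server) could do when it notices the refresh request in the shared tree. The shared-tree entry and the placement of nsds5replicarefresh there are assumptions from the proposal above; only nsds5BeginReplicaRefresh on the local agreement entry is the existing 389-ds mechanism for starting an online initialization.

import ldap

def propagate_refresh(conn, segment_dn, agreement_dn):
    # Hypothetical shared-tree entry carrying a refresh request; putting
    # nsds5replicarefresh there is part of the proposal, not existing schema.
    dn, attrs = conn.search_s(segment_dn, ldap.SCOPE_BASE,
                              attrlist=['nsds5replicarefresh'])[0]
    if attrs.get('nsds5replicarefresh', [b''])[0].lower() == b'start':
        # Translate it into the standard 389-ds trigger on the local
        # replication agreement entry under cn=config.
        conn.modify_s(agreement_dn,
                      [(ldap.MOD_REPLACE, 'nsds5BeginReplicaRefresh', b'start')])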
Nope, leave re-initialization to the plumbing. The topology plugin deals
only with topology, it should not be involved with re-initializations,
save for not going crazy and trying to remove agreements "during" a
re-initialization.

The question for me was more how the configuration information in the
shared tree is initialized (not the full shared tree).
We do agree that the info in the shared tree should be authoritative.
Yep.

To synchronize the info in the shared tree and cn=config I see two modes:
"passive" mode: the sync is only from the shared tree to cn=config; it
is the responsibility of an admin/tool to initialize the config in the
shared tree
This is my preferred, although leaves the problem open for migration, we
need a way to properly prime the topology so that it doesn't wipe out
current agreements before they are imported.
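
To illustrate the passive direction (shared tree -> cn=config), a minimal sketch with python-ldap: the segment layout read from the shared tree is an assumption, while the nsDS5ReplicationAgreement attributes written to cn=config are standard 389-ds ones (bind credentials and transport settings are omitted here).

import ldap
import ldap.modlist

def apply_segment(conn, segment, replica_dn, suffix):
    # "segment" is a dict read from a hypothetical shared-tree entry, e.g.
    # {'cn': 'master1-to-master2', 'peer': 'master2.example.com'}.
    # replica_dn is the cn=replica entry for this suffix under
    # cn=mapping tree,cn=config.
    agmt_dn = 'cn=%s,%s' % (segment['cn'], replica_dn)
    agmt = {
        'objectClass': [b'top', b'nsDS5ReplicationAgreement'],
        'cn': [segment['cn'].encode()],
        'nsDS5ReplicaHost': [segment['peer'].encode()],
        'nsDS5ReplicaPort': [b'389'],
        'nsDS5ReplicaRoot': [suffix.encode()],
        # nsDS5ReplicaBindDN / bind method / transport left out of this sketch
    }
    try:
        conn.search_s(agmt_dn, ldap.SCOPE_BASE, attrlist=['cn'])
    except ldap.NO_SUCH_OBJECT:
        # No matching agreement in cn=config yet: create it from the segment.
        conn.add_s(agmt_dn, ldap.modlist.addModlist(agmt))

The removal direction (a segment deleted in the shared tree leading to the agreement being removed from cn=config) would follow the same pattern.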

"supporting" mode: if the plugin starts, it checks if the info in the
shared tree exists, if not it initializes the topology info in the
shared tree and then only reacts to changes in the shared tree.
I would like some more details about what you have in mind here;
depending on the fine points, I might agree this is desirable to solve
the migration problem.
I will write it down for a few different scenarios.
Looks like this could get ugly: if the topology info initialization
happened on several masters at the same time, they would create the
same entries (at least the config root container) and we could get
replication conflicts :-(
This is exactly why I said I do not want the plugin to create the
topology tree itself :-)

It should only import agreements that are not yet there. There are still
potential conflicts; the topology plugin will have to handle them by
intercepting replication writes to the topology tree and smartly
merging/fencing, IMO.
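
A sketch of that "import only what is missing" step, again python-ldap for illustration; the shared-tree container DN, the direction-independent segment naming and the placeholder object class are all assumptions.

import ldap

AGMT_FILTER = '(objectClass=nsDS5ReplicationAgreement)'

def import_missing_segments(conn, local_host, shared_base):
    # Agreements this server already has in cn=config.
    agmts = conn.search_s('cn=mapping tree,cn=config', ldap.SCOPE_SUBTREE,
                          AGMT_FILTER, ['nsDS5ReplicaHost'])
    for dn, attrs in agmts:
        peer = attrs['nsDS5ReplicaHost'][0].decode()
        # Direction-independent key so both endpoints derive the same name.
        cn = '-to-'.join(sorted([local_host, peer]))
        seg_dn = 'cn=%s,%s' % (cn, shared_base)
        try:
            conn.search_s(seg_dn, ldap.SCOPE_BASE, attrlist=['cn'])
        except ldap.NO_SUCH_OBJECT:
            # Only segments not yet present in the shared tree are added.
            conn.add_s(seg_dn, [('objectClass', [b'top', b'nsContainer']),
                                ('cn', [cn.encode()])])

The merging/fencing of conflicting writes would sit in the plugin's interception of replication updates and is not shown here.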

We need to be careful about the process; I have an idea how it could work,
but need to think a bit more about it.
I am all ears.

Simo.

We already have several situations (CRL, DNSSEC, cert rotation) where a single server has to do the job first and all the rest should rely on that. We can simplify our life by making the initialization a special admin-initiated operation for the situations when we upgrade from earlier versions, so the general topology plugin does not initialize anything automatically.

If we build a new deployment, the ipa replica management tools will drive the modifications as servers are added. If it is an upgrade, the admin might go to the IPA configuration and say build/rebuild topology. This will drop all segment information, if there is any, and, using the master list from cn=masters, connect to each replica, query its replication agreements and create data for the replicated tree. If some replica is not online, the operation will report that the replica cannot be connected and that the topology is not complete.

I think it is acceptable for the topology plugin to focus only on keeping data in sync when the replica management tools are invoked and not worry about initialization after migration.
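
Roughly, such an admin-initiated build/rebuild could look like the sketch below (python-ldap, simple bind; all DNs except cn=masters,cn=ipa,cn=etc,<suffix> and cn=mapping tree,cn=config are placeholders, and dropping pre-existing segment data is left out).

import ldap

SUFFIX = 'dc=example,dc=com'                      # placeholder suffix
MASTERS = 'cn=masters,cn=ipa,cn=etc,' + SUFFIX    # existing IPA masters list
TOPOLOGY = 'cn=topology,' + SUFFIX                # hypothetical shared container
AGMT_FILTER = '(objectClass=nsDS5ReplicationAgreement)'

def rebuild_topology(conn, bind_dn, bind_pw):
    incomplete = []
    masters = conn.search_s(MASTERS, ldap.SCOPE_ONELEVEL, attrlist=['cn'])
    for dn, attrs in masters:
        host = attrs['cn'][0].decode()
        try:
            # Connect to each master and read its agreements from cn=config.
            peer = ldap.initialize('ldap://%s' % host)
            peer.simple_bind_s(bind_dn, bind_pw)
            agmts = peer.search_s('cn=mapping tree,cn=config',
                                  ldap.SCOPE_SUBTREE, AGMT_FILTER,
                                  ['nsDS5ReplicaHost'])
        except ldap.LDAPError:
            # Replica not reachable: topology data will be incomplete.
            incomplete.append(host)
            continue
        for agmt_dn, agmt in agmts:
            right = agmt['nsDS5ReplicaHost'][0].decode()
            cn = '-to-'.join(sorted([host, right]))
            seg_dn = 'cn=%s,%s' % (cn, TOPOLOGY)
            try:
                conn.add_s(seg_dn, [('objectClass', [b'top', b'nsContainer']),
                                    ('cn', [cn.encode()])])
            except ldap.ALREADY_EXISTS:
                pass  # segment already recorded from the other endpoint
    if incomplete:
        print('could not connect to: %s; topology is not complete'
              % ', '.join(incomplete))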

--
Thank you,
Dmitri Pal

Sr. Engineering Manager IdM portfolio
Red Hat, Inc.

