On 14.8.2018 15:19, Bogdan Dobrelya wrote:
On 8/13/18 9:47 PM, Giulio Fidente wrote:
Hello,

I'd like to get some feedback regarding the remaining
work for the split controlplane spec implementation [1]

Specifically, while for some services like nova-compute it is not
necessary to update the controlplane nodes after an edge cloud is
deployed, for other services, like cinder (or glance, probably
others), it is necessary to update the config files on the
controlplane when a new edge cloud is deployed.

In fact for services like cinder or glance, which are hosted in the
controlplane, we need to pull data from the edge clouds (for example
the newly deployed ceph cluster keyrings and fsid) to configure cinder
(or glance) with a new backend.
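
To illustrate, on the cinder side this would mean ending up with
something along these lines in cinder.conf on the controllers, one RBD
section per edge Ceph cluster (the backend/section names, pool and
paths below are made up):

  [DEFAULT]
  enabled_backends = tripleo_ceph,edge1_ceph

  [edge1_ceph]
  # conf file and keyring pulled from the newly deployed edge1 ceph cluster
  volume_backend_name = edge1_ceph
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_ceph_conf = /etc/ceph/edge1.conf
  rbd_user = openstack
  rbd_pool = volumes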

It looks like this calls for some architectural changes to solve the
following two problems:

- how do we trigger/drive updates of the controlplane nodes after the
edge cloud is deployed?

Note, there is also a strict(?) requirement of local management
capabilities for edge clouds temporarily disconnected from the central
controlplane. That complicates triggering the updates even more. We'll
need at least a notification-and-triggering system to perform the
required state synchronizations, including conflict resolution. If
that's the case, architecture changes to the TripleO deployment
framework are inevitable AFAICT.

Indeed this would complicate things a lot, but IIUC the spec [1] that Giulio referenced doesn't talk about local management at all.

Within the context of what the spec covers, i.e. one stack for the Controller role and other stack(s) for Compute or *Storage roles, I hope we could address the updates/upgrades workflow similarly to how the deployment workflow would be addressed -- working with the stacks one by one.

That would probably mean:

1. `update/upgrade prepare` on Controller stack

2. `update/upgrade prepare` on other stacks (perhaps reusing some outputs from Controller stack here)

3. `update/upgrade run` on Controller stack

4. `update/upgrade run` on other stacks

5. (`external-update/external-upgrade run` on other stacks where appropriate)

6. `update/upgrade converge` on Controller stack

7. `update/upgrade converge` on other stacks (again maybe reusing outputs from Controller stack)
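
For concreteness, a rough sketch of what that could look like with the
update CLI, assuming one stack per part (the stack names
`control-plane` and `edge1` and the environment files are made up, and
the exact option spellings may differ between releases):

  # 1./2. prepare the plans, Controller stack first
  openstack overcloud update prepare --stack control-plane --templates -e control-plane-env.yaml
  openstack overcloud update prepare --stack edge1 --templates -e edge1-env.yaml

  # 3./4. apply the update per stack, again Controller first
  openstack overcloud update run --stack control-plane --nodes Controller
  openstack overcloud update run --stack edge1 --nodes Compute

  # 5. external installers (e.g. ceph-ansible) on the other stacks, where relevant
  openstack overcloud external-update run --stack edge1 --tags ceph

  # 6./7. converge
  openstack overcloud update converge --stack control-plane --templates -e control-plane-env.yaml
  openstack overcloud update converge --stack edge1 --templates -e edge1-env.yaml

The interesting bit would be how outputs from the control-plane stack
get fed into the environments used for the other stacks in steps 2 and 7.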

I'm not *sure* such an approach would work, but at the moment I don't see a reason why it wouldn't :)

Jirka



- how do we scale the controlplane parameters to accommodate N
backends of the same type?

A very rough approach to the latter could be to use jinja to scale up
the CephClient service so that we can have multiple copies of it in the
controlplane.

Each instance of CephClient should provide the ceph config file and
keyring necessary for each cinder (or glance) backend.
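
For example, a very rough Jinja sketch of what that scaling could look
like (the `ceph_edge_clusters` variable and the template path are
hypothetical, nothing like this exists today):

  {# ceph_edge_clusters would be e.g. ['edge1', 'edge2'], one entry per edge ceph cluster #}
  resource_registry:
  {% for cluster in ceph_edge_clusters %}
    {# one CephClient copy per edge cluster; each should install
       /etc/ceph/{{ cluster }}.conf and its keyring on the controlplane nodes #}
    OS::TripleO::Services::CephClient{{ cluster|capitalize }}: path/to/ceph-client.yaml
  {% endfor %}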

Also note that Ceph is just one example; we'd need a similar
workflow for any backend type.

The etherpad for the PTG session [2] touches on this, but it'd be good to
start this conversation before then.

1.
https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html

2. https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane




