On 12.06.2017 14:02, Jiří Stránský wrote:
> On 9.6.2017 18:51, Flavio Percoco wrote:
>> A-ha, ok! I figured this was another option. In this case I guess we
>> would have 2 options:
>>
>> 1. Run confd + openstack service inside the container. My concern in
>> this case would be that we'd have to run 2 services inside the
>> container and structure things in a way we can monitor both services
>> and make sure they are both running. Nothing impossible, but one more
>> thing to do.
> 
> I see several cons with this option:
> 
> * Even if we do this in a sidecar container like Bogdan mentioned (which
> is better than running 2 "top-level" processes in a single container,
> IMO), we still have to figure out when to restart the main service,
> IIUC. I see that confd in daemon mode listens for backend changes and
> updates the conf files, but I can't find any mention that it can
> restart services. Even if we implemented this auto-restarting in
> OpenStack services, we'd still need to deal with services like MariaDB,
> Redis, etc., so additional wrappers might be needed to make this a
> generic solution.

AFAIK, confd can send a signal to the process, so the action to take is
up to the service: either reload its config files [0] or just exit and
be restarted by the container manager (which is currently the Docker
daemon in TripleO).
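
For illustration, the signal would be wired up via the reload_cmd of a
confd template resource. A minimal sketch, with the service name, paths
and keys made up:

  # /etc/confd/conf.d/nova.toml
  [template]
  src = "nova.conf.tmpl"
  dest = "/etc/nova/nova.conf"
  keys = ["/nova"]
  # send SIGHUP so the service re-reads its config files
  reload_cmd = "pkill -HUP -f nova-api"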

Speaking of the (TripleO-specific) HA services you mentioned, let
Pacemaker handle those on its own, but in the same way, based on signals
sent to services by confd. For example, a galera service instance may
exit on a signal from the confd sidecar, then be picked up by the next
monitor action, causing it to be restarted by the pcmk resource
management logic.
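
For that variant, the confd side would just send a terminating signal
instead of a reload, e.g. (process name illustrative):

  # let the process exit; the next pcmk monitor action restarts it
  reload_cmd = "pkill -TERM -f mysqld"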

[0] https://bugs.launchpad.net/oslo-incubator/+bug/1276694

> 
> * Assuming we've solved the above, if we push a config change to etcd,
> all services get restarted at roughly the same time, possibly creating
> downtime or capacity issues.
> 
> * It complicates reasoning about the container lifecycle, as we have to
> start distinguishing between changes that don't require a new container
> (config-only changes) vs. changes which do require one (image content
> changes). Mutable container config also hides this lifecycle from the
> operator -- the container changes on the inside without the COE knowing
> about it, so any operator queries to the COE would suggest that no
> changes happened.
> 
> I think ideally container config would be immutable, and every time we
> want to change anything, we'd do it via a rollout of a new set of
> containers. This way we have a single way of making changes to reason
> about, and when we're doing rolling updates, it shouldn't result in
> downtime or a tangible performance drop. (I'm not talking about
> migrating to a new major OpenStack release, which will remain a special
> case for the foreseeable future.)
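
For illustration, one way to get that immutability in k8s is to version
the ConfigMap name, so a config change means creating a new ConfigMap
and pointing the Deployment at it, which triggers a rolling replacement
of the pods. A sketch, with all names made up:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: nova-conf-v2        # bumped on every config change
  data:
    nova.conf: |
      [DEFAULT]
      debug = false

  # in the Deployment's pod template:
  volumes:
  - name: config
    configMap:
      name: nova-conf-v2      # changing this reference rolls the pods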
> 
>>
>> 2. Run confd `-onetime` and then run the openstack service.
> 
> This sounds simpler both in terms of reasoning and technical complexity,
> so if we go with confd, I'd lean towards this option. We'd have to
> rolling-replace the containers from outside, but that's what k8s can
> take care of, and at least the operator can see what's happening at a
> high level.
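
FWIW, option 2 typically boils down to an entrypoint along these lines
(the backend address and service name are illustrative):

  #!/bin/sh
  # render the config files once from etcd, then hand over to the service
  confd -onetime -backend etcd -node http://etcd:2379 || exit 1
  exec nova-api --config-file /etc/nova/nova.conf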
> 
> The issues that Michał mentioned earlier still remain to be solved --
> config versioning ("accidentally" picking up the latest config), and
> how to supply config elements that differ per host.
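
On the versioning point, one option could be to write each config
generation under a versioned etcd prefix and pin containers to it with
confd's -prefix flag, so a container can't "accidentally" pick up newer
values. Key layout made up:

  etcdctl set /config/v42/nova/debug false
  confd -onetime -backend etcd -node http://etcd:2379 -prefix /config/v42

Per-host elements could perhaps come from the environment via the getenv
function in confd templates.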
> 
> Also, it's probably worth diving a bit deeper into comparing `confd
> -onetime` and ConfigMaps...
> 
> 
> Jirka
> 
>>
>>
>> Either would work, but #2 means we won't have the config files
>> monitored, and the container would have to be restarted to pick up
>> config changes.
>>
>> Thanks, Doug.
>> Flavio


-- 
Best regards,
Bogdan Dobrelya,
IRC #bogdando
