On 11/1/17 7:04 PM, milanisko k wrote:
Folks,
=====

I've got a dilemma right now about how to proceed with containerising ironic-inspector:

Fat Container
------------------
Put ironic-inspector and dnsmasq into a single container, i.e. treat the container as a complete inspection-service shipping unit, and use supervisord inside it to fork and monitor both services.
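
A minimal supervisord.conf sketch of what that bundling might look like (paths, config-file names and flags are illustrative assumptions, not the actual TripleO layout):

    [supervisord]
    nodaemon=true

    [program:dnsmasq]
    ; --no-daemon keeps dnsmasq in the foreground so supervisord can monitor it
    command=/usr/sbin/dnsmasq --no-daemon --conf-file=/etc/dnsmasq-inspector.conf
    autorestart=true

    [program:ironic-inspector]
    command=/usr/bin/ironic-inspector --config-file /etc/ironic-inspector/inspector.conf
    autorestart=true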

Pros:

* decoupling: inspector's dnsmasq isn't used by any other service, which makes our life simpler as we won't need to reset the dnsmasq configuration if inspector dies (to avoid exposing an unfiltered DHCP service)

* we can use the dnsmasq filter driver (an on-line configuration-file updating facility) instead of iptables to limit access to dnsmasq, in a self-contained "package" that is configured to work as a single unit

* we don't have to worry about always scheduling the dnsmasq and inspector containers on the same node (both services are bundled)

* we have a *Spine-Leaf-deployment-capable & containerised undercloud*

* an *HA-capable inspector* service that can be reused in the overcloud

* an integrated solution, tested to work by inspector's upstream CI (compatibility, versioning, configuration, ...)

Cons:

* inflexibility: the container has to be rebuilt to be used with a different DHCP service (filter driver)

* a supervisord dependency and the need to refactor the current inspector container

* <put your input here>

Flat Container
-------------------

Put inspector and dnsmasq into separate containers and use the (current) iptables driver to protect dnsmasq. IIRC this is the current approach.
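
For context, the iptables driver boils down to host-level rules of roughly this shape on the inspection interface (the interface name and MAC below are placeholders, not the driver's literal output):

    # dedicated chain for DHCP traffic on the inspection interface
    iptables -N ironic-inspector
    iptables -I INPUT -i br-ctlplane -p udp --dport 67 -j ironic-inspector
    # deny DHCP to a node that should not be inspected right now
    iptables -A ironic-inspector -m mac --mac-source 52:54:00:aa:bb:cc -j DROP
    # everything else may still DHCP-boot into inspection
    iptables -A ironic-inspector -j ACCEPT

Since these rules live in one host's netfilter tables, they don't extend to multiple inspector instances or to remote (Spine-Leaf) segments, which is where the cons below come from.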

Pros:

* containerised undercloud

Cons:

* no decoupling of dnsmasq

* no Spine-Leaf (iptables)

* the containers have to be scheduled together on a single node

* no HA (iptables driver)

* the container won't be cool for the overcloud as it won't be HA

Flat container with dnsmasq filter driver
----------------------------------------------------

Same as above, but iptables isn't used anymore. Since this isn't the current approach, we'd have to reshape the dnsmasq and inspector containers to expose each other's configuration so that inspector can write the dnsmasq configuration on the fly (does inotify work in the mounted-directories case???)
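
To make the moving parts concrete: the dnsmasq filter driver relies on dnsmasq's inotify-watched dhcp-hostsdir. A sketch (the directory path is an assumption):

    # dnsmasq side: watch a directory for per-host files (picked up via inotify)
    dhcp-hostsdir=/var/lib/ironic-inspector/dhcp-hostsdir

    # inspector side: writes one file per MAC into that directory on the fly;
    # the "ignore" keyword tells dnsmasq not to serve DHCP to that MAC, e.g.
    # /var/lib/ironic-inspector/dhcp-hostsdir/52:54:00:aa:bb:cc containing:
    52:54:00:aa:bb:cc,ignore

Hence the inotify question above: both containers would have to mount that directory, and dnsmasq only notices new files if inotify events fire across the mount.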

Pros:

* containerised undercloud

* Spine-Leaf

Cons:

* No (easy) HA (dnsmasq would be exposed if inspector died)

Could it be managed by pacemaker bundles then?


* No decoupling of dnsmasq (shared between multiple services)

A dedicated side-car container could be used, just as the logging blueprint [0] implements it. Nothing would be shared then.

[0] https://blueprints.launchpad.net/tripleo/+spec/logging-stdout-rsyslog
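
A rough sketch of that side-car idea with a shared named volume (image and volume names below are hypothetical):

    docker volume create inspector-dhcp-hostsdir
    # dnsmasq side-car, dedicated to inspector
    docker run -d --name inspector-dnsmasq \
        -v inspector-dhcp-hostsdir:/var/lib/ironic-inspector/dhcp-hostsdir \
        <dnsmasq-image>
    # inspector writes its per-MAC filter files into the same volume
    docker run -d --name ironic-inspector \
        -v inspector-dhcp-hostsdir:/var/lib/ironic-inspector/dhcp-hostsdir \
        <ironic-inspector-image>

A local named volume is an ordinary host directory, so inotify should fire on it, though whether that holds for every volume driver is exactly the open question above.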


* containers to be reshaped to expose the configuration

Seems like this is inevitable anyway.


* overcloud-uncool container (lack of HA)

Could it be managed by pacemaker bundles then?
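
If so, a pacemaker bundle wrapping the inspector container might be created along these lines (bundle and image names are made up for illustration):

    pcs resource bundle create inspector-bundle \
        container docker image=<ironic-inspector-image> replicas=1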


No Container
------------------

We ship inspector as an ordinary (non-containerised) service and configure dnsmasq to be shut down in case inspector dies (to prevent exposing an unfiltered DHCP service). We use the dnsmasq (configuration) filter driver to get a Spine-Leaf-deployment-capable undercloud.
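
The shutdown coupling could be expressed as a systemd drop-in on the dnsmasq unit; a sketch, assuming the instack-undercloud unit names (worth verifying on a real undercloud):

    # /etc/systemd/system/openstack-ironic-inspector-dnsmasq.service.d/bindsto.conf
    [Unit]
    # BindsTo stops this dnsmasq whenever the inspector unit stops or crashes
    BindsTo=openstack-ironic-inspector.service
    After=openstack-ironic-inspector.service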

Pros:

* Spine-Leaf

Cons:

* no HA inspector (shared dnsmasq?)

* no containers

* no reusable container for overcloud

* if we want to use the dnsmasq filter driver, we'd have to update the undercloud systemd units to shut down dnsmasq in case inspector dies

* no decoupling

The Question
------------------

What is your take on it?

Cheers,
milan



--
Best regards,
Bogdan Dobrelya,
IRC: bogdando
