On Mon, Dec 17, 2012 at 8:30 PM, Roman V Shaposhnik <[email protected]> wrote:

> This looks really interesting and I can see how it can be very useful
> for things like buildouts of classes of virtual nodes.
>

That's our primary goal: we want a robust system that can create pools
of identical virtual machines on multiple clouds. From a user perspective
all clusters should be more or less identical: same base operating system,
same DNS settings, same SSH credentials, same firewall / security group
settings, same packages, same files, etc.
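Roughly, a pool description could capture those dimensions like this (a
minimal sketch; all field names and values are purely illustrative, not a
real API):

```python
# Illustrative only: a minimal pool description covering the dimensions
# we want to keep identical across clouds. Field names are hypothetical.
pool_description = {
    "size": 5,
    "provider": "aws-ec2",          # the same pool could target other clouds
    "base_os": "centos-6",          # same base operating system everywhere
    "dns": {"search_domains": ["example.internal"]},
    "ssh": {"admin_public_key": "ssh-rsa AAAA... admin"},
    "firewall": [                   # same security-group style rules
        {"port": 22, "protocol": "tcp", "cidr": "0.0.0.0/0"},
    ],
    "packages": ["ntp", "openssh-server"],
    "files": {"/etc/motd": "Managed pool node\n"},
}

def validate(description):
    """Check the description covers every dimension we keep identical."""
    required = {"size", "provider", "base_os", "dns",
                "ssh", "firewall", "packages"}
    missing = required - description.keys()
    if missing:
        raise ValueError("incomplete pool description: %s" % sorted(missing))
    return True
```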


> The question I have is this -- once you're done with automating the
> base-line provisioning,
> what's your involvement with higher-level orchestration?
>

I think higher-level (service-specific) orchestration should be a
completely different layer that reacts to pool structure change events and
only assumes SSH access (direct or through a gateway). Any time a set of
nodes is added to or removed from a pool, the configuration layer should
be notified and react as needed.
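The contract between the two layers could be as small as this (a sketch;
`PoolEvent` and `on_pool_change` are names I'm making up here, not part of
any real interface):

```python
# Sketch of the contract between the pool manager and a configuration
# layer: the manager emits membership events, the layer reacts (in a real
# system, by connecting over SSH). All names are illustrative.
from dataclasses import dataclass

@dataclass
class PoolEvent:
    kind: str            # "nodes-added" or "nodes-removed"
    nodes: list          # hostnames or IPs affected

class ConfigurationLayer:
    def __init__(self):
        self.members = set()

    def on_pool_change(self, event: PoolEvent):
        if event.kind == "nodes-added":
            self.members.update(event.nodes)
            # here we would ssh in and (re)configure services
        elif event.kind == "nodes-removed":
            self.members.difference_update(event.nodes)
            # here we would update the remaining nodes' configs

layer = ConfigurationLayer()
layer.on_pool_change(PoolEvent("nodes-added", ["10.0.0.1", "10.0.0.2"]))
layer.on_pool_change(PoolEvent("nodes-removed", ["10.0.0.2"]))
```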

We also want to make it possible to configure the pool management process
to repair the pool if virtual machines are destroyed by unexpected events
(chaos monkey style).
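Repair is essentially a reconciliation loop: compare the desired pool size
with what is actually running and replace the deficit. A sketch, with
`list_healthy_machines` / `start_machines` standing in for hypothetical
cloud calls:

```python
# Reconciliation sketch: compare desired vs. actual pool size and request
# replacements for machines lost to unexpected events.
# list_healthy_machines / start_machines are hypothetical callables.
def repair(desired_size, list_healthy_machines, start_machines):
    healthy = list_healthy_machines()
    deficit = desired_size - len(healthy)
    if deficit > 0:
        start_machines(deficit)   # replace the destroyed instances
    return deficit

started = []
deficit = repair(
    desired_size=5,
    list_healthy_machines=lambda: ["a", "b", "c"],  # two nodes were killed
    start_machines=lambda n: started.append(n),
)
```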


> It seems that one way for you to handle this is to hand off to the existing
> cluster orchestrators like CM and Ambari. This is fine, but I'm more
> interested
> in how extensible your architecture is.
>

That's exactly what we are doing at Axemblr, and we have had a good
experience so far.

Extensible in what sense? As in being able to handle new services? That's
not really important for us.

> So here's my favorite use case -- suppose I need to stand up a ZooKeeper
> cluster from ZooKeeper RPM packages from the Bigtop distribution. Could
> you, please, walk me through each step? The more detailed the better!
>

In this case I would start by creating a pool description that contains
instructions for registering a new RPM repository and installing the
ZooKeeper server RPM packages.
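That fragment of the description might look like this (again illustrative;
the repository URL is a placeholder, though `zookeeper` and
`zookeeper-server` are the actual Bigtop package names):

```python
# Hypothetical fragment of the pool description for this use case:
# register the Bigtop yum repository, then install the ZooKeeper server
# packages on every node. The baseurl is a placeholder -- the real Bigtop
# repo path depends on the release and target distribution.
zookeeper_pool = {
    "size": 3,
    "repositories": [
        {
            "name": "bigtop",
            "baseurl": "http://bigtop.example.org/repos/centos6/",
        }
    ],
    "packages": ["zookeeper", "zookeeper-server"],
}
```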

If this is something I need to do many times, I would enable automatic
base image caching to speed things up (avoid the JDK install, repeated
downloads, etc.).
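The caching idea boils down to deriving a key from everything that goes
into the base image, so identical inputs map to one cached image. A sketch
(the key format and inputs are assumptions on my part):

```python
# Sketch of base-image caching: derive a cache key from everything that
# goes into the base image, so identical inputs reuse one cached image.
import hashlib
import json

def image_cache_key(base_os, packages, repositories):
    payload = json.dumps(
        {"os": base_os,
         "packages": sorted(packages),      # order must not change the key
         "repos": sorted(repositories)},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

k1 = image_cache_key("centos-6", ["zookeeper-server", "java"], ["bigtop"])
k2 = image_cache_key("centos-6", ["java", "zookeeper-server"], ["bigtop"])
# same inputs (ordering aside) -> same cached image
```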

Because ZooKeeper does not (yet) support dynamic membership, I need to
wait for all the pool nodes to start before doing any configuration.
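In other words, configuration sits behind a simple barrier: poll until
every requested node is running or a timeout expires. A sketch, where
`poll` is a hypothetical callable returning the currently running nodes:

```python
# ZooKeeper needs the full member list up front, so configuration waits
# until every requested node is running. poll() is a hypothetical
# callable returning the currently running nodes.
import time

def wait_for_full_pool(expected_size, poll, timeout=600, interval=0.01):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        nodes = poll()
        if len(nodes) >= expected_size:
            return nodes
        time.sleep(interval)
    raise TimeoutError("pool did not reach %d nodes in time" % expected_size)

booted = []
def poll():
    booted.append("node-%d" % len(booted))   # simulate nodes coming up
    return booted

nodes = wait_for_full_pool(3, poll)
```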

The configuration layer should then generate the config files and start
the daemons on all nodes.
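For ZooKeeper that mostly means rendering a zoo.cfg on each node once the
full member list is known; the `server.N=host:peerPort:electionPort` lines
below follow ZooKeeper's static membership format, while the exact
parameter values are just reasonable defaults:

```python
# Once all members are known, render a zoo.cfg-style config for each
# node. The server list follows ZooKeeper's static membership format:
# server.N=host:peerPort:electionPort
def render_zoo_cfg(nodes, data_dir="/var/lib/zookeeper"):
    lines = [
        "tickTime=2000",
        "initLimit=10",
        "syncLimit=5",
        "dataDir=%s" % data_dir,
        "clientPort=2181",
    ]
    for i, host in enumerate(sorted(nodes), start=1):
        lines.append("server.%d=%s:2888:3888" % (i, host))
    return "\n".join(lines) + "\n"

cfg = render_zoo_cfg(["zk1.internal", "zk2.internal", "zk3.internal"])
```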

And the last step would be to deploy some sort of monitoring to close the
loop: Provisioning -> Configuration -> Monitoring -> and back, all
triggered by events.

What do you think?

-- A
