Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-11-30 Thread Dan Prince
On Fri, 2018-11-30 at 10:31 +0100, Bogdan Dobrelya wrote:
> On 11/29/18 6:42 PM, Jiří Stránský wrote:
> > On 28. 11. 18 18:29, Bogdan Dobrelya wrote:
> > > On 11/28/18 6:02 PM, Jiří Stránský wrote:
> > > > 
> > > > 
> > > > > Reiterating again on previous points:
> > > > > 
> > > > > -I'd be fine removing systemd. But let's do it properly and
> > > > > not via 'rpm
> > > > > -ev --nodeps'.
> > > > > -Puppet and Ruby *are* required for configuration. We can
> > > > > certainly put
> > > > > them in a separate container outside of the runtime service
> > > > > containers
> > > > > but doing so would actually cost you much more
> > > > > space/bandwidth for each
> > > > > service container. As both of these have to get downloaded to
> > > > > each node
> > > > > anyway in order to generate config files with our current
> > > > > mechanisms
> > > > > I'm not sure this buys you anything.
> > > > 
> > > > +1. I was actually under the impression that we concluded
> > > > yesterday on
> > > > IRC that this is the only thing that makes sense to seriously
> > > > consider.
> > > > But even then it's not a win-win -- we'd gain some security by
> > > > leaner
> > > > production images, but pay for it with space+bandwidth by
> > > > duplicating
> > > > image content (IOW we can help achieve one of the goals we had
> > > > in mind
> > > > by worsening the situation w/r/t the other goal we had in
> > > > mind.)
> > > > 
> > > > Personally i'm not sold yet but it's something that i'd
> > > > consider if we
> > > > got measurements of how much more space/bandwidth usage this
> > > > would
> > > > consume, and if we got some further details/examples about how
> > > > serious
> > > > are the security concerns if we leave config mgmt tools in
> > > > runtime 
> > > > images.
> > > > 
> > > > IIRC the other options (that were brought forward so far) were
> > > > already
> > > > dismissed in yesterday's IRC discussion and on the reviews.
> > > > Bin/lib bind
> > > > mounting being too hacky and fragile, and nsenter not really
> > > > solving the
> > > > problem (because it allows us to switch to having different
> > > > bins/libs
> > > > available, but it does not allow merging the availability of
> > > > bins/libs
> > > > from two containers into a single context).
> > > > 
> > > > > We are going in circles here I think
> > > > 
> > > > +1. I think too much of the discussion focuses on "why it's bad
> > > > to have
> > > > config tools in runtime images", but IMO we all sorta agree
> > > > that it
> > > > would be better not to have them there, if it came at no cost.
> > > > 
> > > > I think to move forward, it would be interesting to know: if we
> > > > do this
> > > > (i'll borrow Dan's drawing):
> > > > 
> > > > > base container| --> |service container| --> |service
> > > > > container w/
> > > > Puppet installed|
> > > > 
> > > > How much more space and bandwidth would this consume per node
> > > > (e.g.
> > > > separately per controller, per compute). This could help with
> > > > decision
> > > > making.
> > > 
> > > As I've already evaluated in the related bug, that is:
> > > 
> > > puppet-* modules and manifests ~ 16MB
> > > puppet with dependencies ~61MB
> > > dependencies of the seemingly largest dependency, systemd
> > > ~190MB
> > > 
> > > that would be an extra layer size for each of the container
> > > images to be
> > > downloaded/fetched into registries.
> > 
> > Thanks, i tried to do the math of the reduction vs. inflation in
> > sizes 
> > as follows. I think the crucial point here is the layering. If we
> > do 
> > this image layering:
> > 
> > > base| --> |+ service| --> |+ Puppet|
> > 
> > we'd drop ~267 MB from base image, but we'd be installing that to
> > the 
> > topmost level, per-component, right?
> 
> Given we detached systemd from puppet, cronie et al, that would be 
> 267-190MB, so the math below would look much better

Would it be worth writing a spec that summarizes what action items are
being taken to optimize our base image with regard to systemd?

It seems like the general consensus is that cleaning up some of the RPM
dependencies so that we don't install systemd is the biggest win.

What confuses me is why there are still patches posted to move Puppet
out of the base layer when we agree that doing so would actually make
our resulting container image set larger.

Dan


> 
> > In my basic deployment, undercloud seems to have 17 "components"
> > (49 
> > containers), overcloud controller 15 components (48 containers),
> > and 
> > overcloud compute 4 components (7 containers). Accounting for
> > overlaps, 
> > the total number of "components" used seems to be 19. (By
> > "components" 
> > here i mean whatever uses a different ConfigImage than other
> > services. I 
> > just eyeballed it but i think i'm not too far off the correct
> > number.)
> > 
> > So we'd subtract 267 MB from base image and add that to 19 leaf
> > images 
> > used in this 

Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-11-28 Thread Dan Prince
On Wed, 2018-11-28 at 13:28 -0500, James Slagle wrote:
> On Wed, Nov 28, 2018 at 12:31 PM Bogdan Dobrelya wrote:
> > Long story short, we cannot shoot both rabbits with a single shot,
> > not
> > with puppet :) Maybe we could with ansible replacing puppet
> > fully...
> > So splitting config and runtime images is the only choice yet to
> > address
> > the raised security concerns. And let's forget about edge cases for
> > now.
> > Tossing around a pair of extra bytes over 40,000 WAN-distributed
> > computes ain't gonna be our biggest problem for sure.
> 
> I think it's this last point that is the crux of this discussion. We
> can agree to disagree about the merits of this proposal and whether
> it's a pre-optimization or micro-optimization, which I admit are
> somewhat subjective terms. Ultimately, it's the question of *why* we
> need to do this that explains why the conversation seems to be going
> in circles a bit.
> 
> I'm all for reducing container image size, but the reality is that
> this proposal doesn't necessarily help us with the Edge use cases we
> are talking about trying to solve.
> 
> Why would we even run the exact same puppet binary + manifest
> individually 40,000 times so that we can produce the exact same set
> of
> configuration files that differ only by things such as IP address,
> hostnames, and passwords? Maybe we should instead be thinking about
> how we can do that *1* time centrally, and produce a configuration
> that can be reused across 40,000 nodes with little effort. The
> opportunity for a significant impact in terms of how we can scale
> TripleO is much larger if we consider approaching these problems with
> a wider net of what we could do. There's opportunity for a lot of
> better reuse in TripleO, configuration is just one area. The plan and
> Heat stack (within the ResourceGroup) are some other areas.

We run Puppet for configuration because that is what we did on
baremetal, and we didn't break backward compatibility for our
configuration options across upgrades. Our Puppet model relies on being
executed on each local host in order to splice in the correct IP
address and hostname. It executes in a distributed fashion, and works
fairly well considering the history of the project. It is robust,
guarantees no duplicate configs are being set, and is backwards
compatible with all the options TripleO supported on baremetal. Puppet
is arguably better for configuration than Ansible (which is what I hear
people most often suggest we replace it with). It suits our needs fine,
but it is perhaps a bit overkill considering we are only generating
config files.

I think the answer here is moving to something like Etcd. Perhaps
skipping over Ansible entirely as a config management tool (it is
arguably less capable than Puppet in this category anyway). Or we could
use Ansible for "legacy" services only, switch to Etcd for a majority
of the OpenStack services, and drop Puppet entirely (my favorite
option). Consolidating our technology stack would be wise.

We've already put some work and analysis into the Etcd effort. Just
need to push on it some more. Looking at the previous Kubernetes
prototypes for TripleO would be the place to start.

Config management migration is going to be tedious. It's technical debt
that needs to be handled at some point anyway. I think it is a general
TripleO improvement that could benefit all clouds, not just Edge.

Dan

> 
> At the same time, if some folks want to work on smaller optimizations
> (such as container image size), with an approach that can be agreed
> upon, then they should do so. We just ought to be careful about how
> we
> justify those changes so that we can carefully weigh the effort vs
> the
> payoff. In this specific case, I don't personally see this proposal
> helping us with Edge use cases in a meaningful way given the scope of
> the changes. That's not to say there aren't other use cases that
> could
> justify it though (such as the security points brought up earlier).
> 



Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-11-28 Thread Dan Prince
On Wed, 2018-11-28 at 15:12 +0100, Bogdan Dobrelya wrote:
> On 11/28/18 2:58 PM, Dan Prince wrote:
> > On Wed, 2018-11-28 at 12:45 +0100, Bogdan Dobrelya wrote:
> > > To follow up and explain the patches for code review:
> > > 
> > > The "header" patch https://review.openstack.org/620310 ->
> > > (requires)
> > > https://review.rdoproject.org/r/#/c/17534/, and also
> > > https://review.openstack.org/620061 -> (which in turn requires)
> > > https://review.openstack.org/619744 -> (Kolla change, the 1st to
> > > go)
> > > https://review.openstack.org/619736
> > 
> > This email was cross-posted to multiple lists and I think we may
> > have
> > lost some of the context in the process as the subject was changed.
> > 
> > Most of the suggestions and patches are about making our base
> > container(s) smaller in size. And the means by which the patches do
> > that is to share binaries/applications across containers with
> > custom
> > mounts/volumes. I've -2'd most of them. What concerns me however is
> > that some of the TripleO cores seemed open to this idea yesterday
> > on
> > IRC. Perhaps I've misread things, but what you appear to be doing
> > here is quite drastic. I think we need to consider all of this
> > carefully before proceeding with any of it.
> > 
> > 
> > > Please also read the commit messages, I tried to explain all
> > > "Whys"
> > > very
> > > carefully. Just to sum it up here as well:
> > > 
> > > The current self-contained (config and runtime bits)
> > > architecture
> > > of
> > > containers badly affects:
> > > 
> > > * the size of the base layer and all container images, as an
> > > additional 300MB (adds an extra 30% of size).
> > 
> > You are accomplishing this by removing Puppet from the base
> > container,
> > but you are also creating another container in the process. This
> > would
> > still be required on all nodes as Puppet is our config tool. So you
> > would still be downloading some of this data anyways. Understood
> > your
> > reasons for doing this are that it avoids rebuilding all containers
> > when there is a change to any of these packages in the base
> > container.
> > What you are missing however is how often it is the case that
> > Puppet is updated when something else in the base container isn't?
> 
> For CI jobs updating all containers, it's quite common to have
> changes in openstack/tripleo puppet modules to pull in. IIUC, that
> automatically picks up any updates for all of its dependencies and
> for the dependencies of dependencies, all multiplied by the hundred
> or so total containers that have to be updated. That is a *pain*
> we're used to these days, with CI jobs quite often timing out... Ofc,
> the main cause is delayed promotions though.

Regarding CI, I made a separate suggestion on that below: rebuilding
the base layer more often could be a good solution here. I
don't think the puppet-tripleo package is that large however so we
could just live with it.

> 
> For real deployments, I have no data on the cadence of minor updates
> in puppet and the tripleo & openstack modules for it; let's ask
> operators (as we happen to be on the merged openstack-discuss list)?
> For its dependencies though, like systemd and ruby, I'm pretty sure
> CVEs get fixed there quite often. So I expect that delivering
> security fixes for those "in the field" might bring some unwanted
> hassle for long-term maintenance of LTS releases. As Tengu noted on
> IRC:
> "well, between systemd, puppet and ruby, there are many security
> concerns, almost every month... and also, what's the point of keeping
> them in runtime containers when they are useless?"

Reiterating again on previous points:

-I'd be fine removing systemd. But let's do it properly and not via 'rpm
-ev --nodeps'.
-Puppet and Ruby *are* required for configuration. We can certainly put
them in a separate container outside of the runtime service containers
but doing so would actually cost you much more space/bandwidth for each
service container. As both of these have to get downloaded to each node
anyway in order to generate config files with our current mechanisms
I'm not sure this buys you anything.

We are going in circles here I think

Dan

> 
> > I would wager that it is more rare than you'd think. Perhaps
> > looking at
> > the history of an OpenStack distribution would be a valid way to
> > assess
> > 

Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-11-28 Thread Dan Prince
On Wed, 2018-11-28 at 12:45 +0100, Bogdan Dobrelya wrote:
> To follow up and explain the patches for code review:
> 
> The "header" patch https://review.openstack.org/620310 -> (requires) 
> https://review.rdoproject.org/r/#/c/17534/, and also 
> https://review.openstack.org/620061 -> (which in turn requires)
> https://review.openstack.org/619744 -> (Kolla change, the 1st to go)
> https://review.openstack.org/619736

This email was cross-posted to multiple lists and I think we may have
lost some of the context in the process as the subject was changed.

Most of the suggestions and patches are about making our base
container(s) smaller in size. And the means by which the patches do
that is to share binaries/applications across containers with custom
mounts/volumes. I've -2'd most of them. What concerns me however is
that some of the TripleO cores seemed open to this idea yesterday on
IRC. Perhaps I've misread things, but what you appear to be doing here
is quite drastic. I think we need to consider all of this carefully
before proceeding with any of it.


> 
> Please also read the commit messages, I tried to explain all "Whys"
> very 
> carefully. Just to sum it up here as well:
> 
> The current self-contained (config and runtime bits) architecture
> of 
> containers badly affects:
> 
> * the size of the base layer and all container images, as an
>additional 300MB (adds an extra 30% of size).

You are accomplishing this by removing Puppet from the base container,
but you are also creating another container in the process. This would
still be required on all nodes as Puppet is our config tool. So you
would still be downloading some of this data anyways. Understood your
reasons for doing this are that it avoids rebuilding all containers
when there is a change to any of these packages in the base container.
What you are missing however is how often it is the case that Puppet is
updated when something else in the base container isn't?

I would wager that it is more rare than you'd think. Perhaps looking at
the history of an OpenStack distribution would be a valid way to assess
this more critically. Without this data to back up the numbers I'm
afraid what you are doing here falls into "pre-optimization" territory
for me, and I don't think the means used in the patches warrant the
benefits you mention here.


> * Edge cases, where we have container images to be distributed, at
>    least once to hit local registries, over high-latency and limited-
>    bandwidth, highly unreliable WAN connections.
> * numbers of packages to update in CI for all containers for all
>services (CI jobs do not rebuild containers so each container gets
>updated for those 300MB of extra size).

It would seem to me there are other ways to solve the CI containers
update problems. Rebuilding the base layer more often would solve this
right? If we always build our service containers off of a base layer
that is recent there should be no updates to the system/puppet packages
there in our CI pipelines.

> * security and the attack surface, by introducing systemd et al as
>    additional subjects for CVE fixes to maintain for all containers.

We aren't actually using systemd within our containers. I think those
packages are getting pulled in by an RPM dependency elsewhere. So
rather than using 'rpm -ev --nodeps' to remove it we could create a
sub-package for containers in those cases and install it instead. In
short, rather than hack this to remove them, why not pursue a proper
packaging fix?

In general I am a fan of getting things out of the base container we
don't need... so yeah, let's do this. But let's do it properly.

> * services uptime, due to additional restarts of services for
>    security maintenance of components irrelevant to openstack,
>    sitting as dead weight in container images forever.

Like I said above, how often is it that these packages actually change
when something else in the base container doesn't? Perhaps we should
get more data here before blindly implementing a solution we aren't
sure really helps out in the real world.

> 
> On 11/27/18 4:08 PM, Bogdan Dobrelya wrote:
> > Changing the topic to follow the subject.
> > 
> > [tl;dr] it's time to rearchitect container images to stop
> > including 
> > config-time only (puppet et al) bits, which are not needed runtime
> > and 
> > pose security issues, like CVEs, to maintain daily.
> > 
> > Background: 1) For the Distributed Compute Node edge case, there
> > are potentially tens of thousands of single-compute-node remote
> > edge sites connected over WAN to a single control plane, which has
> > high latency, like 100ms or so, and limited bandwidth.
> > 2) For a generic security case,
> > 3) TripleO CI updates all
> > 
> > Challenge:
> > 
> > > Here is a related bug [0] and implementation [1] for that. PTAL
> > > folks!
> > > 
> > > [0] https://bugs.launchpad.net/tripleo/+bug/1804822
> > > [1] 
> > > 

Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-11-28 Thread Dan Prince
On Wed, 2018-11-28 at 00:31 +, Fox, Kevin M wrote:
> The pod concept allows you to have one tool per container do one
> thing and do it well.
> 
> You can have a container for generating config, and another container
> for consuming it.
> 
> In a Kubernetes pod, if you still wanted to do puppet,
> you could have a pod that:
> 1. had an init container that ran puppet and dumped the resulting
> config to an emptyDir volume.
> 2. had your main container pull its config from the emptyDir volume.

We have basically implemented the same workflow in TripleO today. First
we execute Puppet in an "init container" (really just an ephemeral
container that generates the config files and then goes away). Then we
bind mount those configs into the service container.

One improvement we could make (which we aren't doing yet) is to use a
data container/volume to store the config files instead of using the
host. Sharing *data* within a 'pod' (set of containers, etc.) is
certainly a valid use of container volumes.
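For illustration, the workflow Kevin describes maps onto a pod spec
roughly like this (a minimal sketch with hypothetical image and path
names, not our actual templates):

  apiVersion: v1
  kind: Pod
  metadata:
    name: keystone
  spec:
    initContainers:
    # 1. run puppet once and dump the rendered config files to a
    #    volume shared within the pod
    - name: config-gen
      image: keystone-config:latest        # hypothetical image
      command: ["puppet", "apply", "/etc/config.pp"]
      volumeMounts:
      - name: config
        mountPath: /etc/keystone
    containers:
    # 2. the long-running service container consumes the generated
    #    config read-only
    - name: keystone
      image: keystone:latest               # hypothetical image
      volumeMounts:
      - name: config
        mountPath: /etc/keystone
        readOnly: true
    volumes:
    # the emptyDir lives for the lifetime of the pod and is shared
    # by both containers
    - name: config
      emptyDir: {}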

None of this is what we are really talking about in this thread though.
Most of the suggestions and patches are about making our base
container(s) smaller in size. And the means by which the patches do
that is to share binaries/applications across containers with custom
mounts/volumes. I don't think it is a good idea at all as it violates
encapsulation of the containers in general, regardless of whether we
use pods or not.

Dan


> 
> Then each container would have no dependency on each other.
> 
> In a full blown Kubernetes cluster you might have puppet generate a
> configmap and ship it to your main container directly. That's
> another matter though. I think the pod example above is still
> usable without k8s?
> 
> Thanks,
> Kevin
> 
> From: Dan Prince [dpri...@redhat.com]
> Sent: Tuesday, November 27, 2018 10:10 AM
> To: OpenStack Development Mailing List (not for usage questions); 
> openstack-disc...@lists.openstack.org
> Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of
> containers for security and size of images (maintenance) sakes
> 
> On Tue, 2018-11-27 at 16:24 +0100, Bogdan Dobrelya wrote:
> > Changing the topic to follow the subject.
> > 
> > [tl;dr] it's time to rearchitect container images to stop
> > including
> > config-time only (puppet et al) bits, which are not needed runtime
> > and
> > pose security issues, like CVEs, to maintain daily.
> 
> I think your assertion that we need to rearchitect the config images
> to
> contain the puppet bits is incorrect here.
> 
> After reviewing the patches you linked to below it appears that you
> are
> proposing we use --volumes-from to bind mount application binaries
> from
> one container into another. I don't believe this is a good pattern
> for
> containers. On baremetal if we followed the same pattern it would be
> like using an /nfs share to obtain access to binaries across the
> network to optimize local storage. Now... some people do this (like
> maybe high performance computing would launch an MPI job like this)
> but
> I don't think we should consider it best practice for our containers
> in
> TripleO.
> 
> Each container should contain its own binaries and libraries as
> much
> as possible. And while I do think we should be using --volumes-from
> more often in TripleO it would be for sharing *data* between
> containers, not binaries.
> 
> 
> > Background:
> > 1) For the Distributed Compute Node edge case, there are potentially
> > tens of thousands of single-compute-node remote edge sites connected
> > over WAN to a single control plane, which has high latency, like
> > 100ms or so, and limited bandwidth. Reducing the base layer size
> > becomes
> > a decent goal there. See the security background below.
> 
> The reason we put Puppet into the base layer was in fact to prevent
> it
> from being downloaded multiple times. If we were to re-architect the
> image layers such that the child layers all contained their own
> copies
> of Puppet for example there would actually be a net increase in
> bandwidth and disk usage. So I would argue we are already addressing
> the goal of optimizing network and disk space.
> 
> Moving it out of the base layer so that you can patch it more often
> without disrupting other services is a valid concern. But addressing
> this concern while also preserving our definition of a container
> (see
> above, a container should contain all of its binaries) is going to
> cost
> you something, namely disk and network space because Puppet would
> need
> to be duplicated in each child container.
> 
> A

Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-11-27 Thread Dan Prince
On Tue, 2018-11-27 at 16:24 +0100, Bogdan Dobrelya wrote:
> Changing the topic to follow the subject.
> 
> [tl;dr] it's time to rearchitect container images to stop including 
> config-time only (puppet et al) bits, which are not needed runtime
> and 
> pose security issues, like CVEs, to maintain daily.

I think your assertion that we need to rearchitect the config images to
contain the puppet bits is incorrect here.

After reviewing the patches you linked to below it appears that you are
proposing we use --volumes-from to bind mount application binaries from
one container into another. I don't believe this is a good pattern for
containers. On baremetal if we followed the same pattern it would be
like using an /nfs share to obtain access to binaries across the
network to optimize local storage. Now... some people do this (like
maybe high performance computing would launch an MPI job like this) but
I don't think we should consider it best practice for our containers in
TripleO.

Each container should contain its own binaries and libraries as much
as possible. And while I do think we should be using --volumes-from
more often in TripleO it would be for sharing *data* between
containers, not binaries.


> 
> Background:
> 1) For the Distributed Compute Node edge case, there are potentially
> tens of thousands of single-compute-node remote edge sites connected
> over WAN to a single control plane, which has high latency, like
> 100ms or so, and limited bandwidth. Reducing the base layer size
> becomes 
> a decent goal there. See the security background below.

The reason we put Puppet into the base layer was in fact to prevent it
from being downloaded multiple times. If we were to re-architect the
image layers such that the child layers all contained their own copies
of Puppet for example there would actually be a net increase in
bandwidth and disk usage. So I would argue we are already addressing
the goal of optimizing network and disk space.

Moving it out of the base layer so that you can patch it more often
without disrupting other services is a valid concern. But addressing
this concern while also preserving our definition of a container (see
above, a container should contain all of its binaries) is going to cost
you something, namely disk and network space because Puppet would need
to be duplicated in each child container.

As Puppet is used to configure a majority of the services in TripleO
having it in the base container makes most sense. And yes, if there are
security patches for Puppet/Ruby those might result in a bunch of
containers getting pushed. But let Docker layers take care of this I
think... Don't try to solve things by constructing your own custom
mounts and volumes to work around the issue.


> 2) For a generic security (Day 2, maintenance) case, when
> puppet/ruby/systemd/name-it gets a CVE fixed, the base layer has to be
> updated, all layers on top have to be rebuilt, and all of those layers
> have to be re-fetched by cloud hosts and all containers restarted...
> And all of that because of some fixes that have nothing to do with
> OpenStack. By the remote edge sites as well -- remember the "tens of
> thousands", high latency and limited bandwidth?..
> 3) TripleO CI updates (including puppet*) packages in containers, not
> in a common base layer of those. So each CI job has to update puppet*
> and its dependencies - ruby/systemd as well. Reducing the number of
> packages to update for each container makes sense for CI as well.
> 
> Implementation related:
> 
> WIP patches [0],[1] for early review use a config "pod" approach and
> do not require maintaining two sets of config vs runtime images.
> Future work: a) cronie requires systemd; we'd want to fix that off
> the base layer as well. b) rework docker-puppet.py to use podman pods
> instead of --volumes-from a sidecar container (which can't be
> backported to Queens then; still nice to have support for the Edge
> DCN case, at least downstream only perhaps).
> 
> Some questions raised on IRC:
> 
> Q: does having a service be able to configure itself really need to 
> involve a separate pod?
> A: Highly likely yes; removing non-runtime things is a good idea, and 
> pods are an established PaaS paradigm already. That will require some 
> changes in the architecture though (see the topic with WIP patches).

I'm a little confused on this one. Are you suggesting that we have 2
containers for each service? One with Puppet and one without?

That is certainly possible, but to pull it off would likely require you
to have things built like this:

 |base container| --> |service container| --> |service container w/
Puppet installed|

The end result would be Puppet being duplicated in a layer for each
service's "config image". Very inefficient.
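To put a rough number on "very inefficient", borrowing the estimates
quoted elsewhere in this thread (~16MB of puppet modules plus ~61MB of
puppet and its non-systemd dependencies, and ~19 distinct config
images in a basic deployment):

  shared base layer today:   ~77MB, fetched once per node
  duplicated per service:    19 x ~77MB = ~1.5GB across the image set

Back-of-envelope only, but it shows the order of magnitude at stake.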

Again, I'm answering this assuming we aren't violating our container
constraints and best practices, where each container has the binaries
it needs to do its 

Re: [openstack-dev] [TripleO] PSA lets use deploy_steps_tasks

2018-11-05 Thread Dan Prince
On Mon, Nov 5, 2018 at 4:06 AM Cédric Jeanneret  wrote:
>
> On 11/2/18 2:39 PM, Dan Prince wrote:
> > I pushed a patch[1] to update our containerized deployment
> > architecture docs yesterday. There are 2 new fairly useful sections we
> > can leverage with TripleO's stepwise deployment. They appear to be
> > used somewhat sparingly so I wanted to get the word out.
>
> Good thing, it's important to highlight this feature and explain how it
> works, big thumb up Dan!
>
> >
> > The first is 'deploy_steps_tasks' which gives you a means to run
> > Ansible snippets on each node/role in a stepwise fashion during
> > deployment. Previously it was only possible to execute puppet or
> > docker commands, whereas now that we have deploy_steps_tasks we can
> > execute ad-hoc ansible in the same manner.
>
> I'm wondering if such a thing could be used for the "inflight
> validations" - i.e. a step to validate a service/container is working as
> expected once it's deployed, in order to get early failure.
> For instance, we deploy a rabbitmq container, and right after it's
> deployed, we'd like to ensure it's actually running and works as
> expected before going forward in the deploy.
>
> Care to have a look at that spec[1] and see if, instead of adding a new
> "validation_tasks" entry, we could "just" use the "deploy_steps_tasks"
> with the right step number? That would be really, really cool, and will
> probably avoid a lot of code in the end :).

It could work fine I think. As deploy_steps_tasks runs before the
"common container/baremetal" actions, special care would need to be
taken so that validations for a container's startup occur at the
beginning of the next step. So a container started at step 2 would be
validated early in step 3. This may also require us to have a "post
deploy_steps_tasks" iteration so that we can validate late-starting
containers.

If we use the more generic deploy_steps_tasks section we'd probably
rely on conventions to always add Ansible tags onto the validation
tasks. These could be useful for those wanting to selectively execute
them externally (not sure if that was part of your spec but I could
see someone wanting this).
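A rough sketch of what that convention could look like in a service
template (the task, tag and step numbers here are made up, just to
illustrate the shape):

  deploy_steps_tasks:
    # the container was started at step 2, so validate it early in
    # step 3, per the ordering discussed above
    - name: Validate that rabbitmq is running
      when: step|int == 3
      tags:
        - validation
      command: docker exec rabbitmq rabbitmqctl status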

Dan

>
> Thank you!
>
> C.
>
> [1] https://review.openstack.org/#/c/602007/
>
> >
> > The second is 'external_deploy_tasks' which allows you to run
> > Ansible snippets on the Undercloud during stepwise deployment. This is
> > probably most useful for driving an external installer but might also
> > help with some complex tasks that need to originate from a single
> > Ansible client.
> >
> > The only downside I see to these approaches is that both appear to be
> > implemented with Ansible's default linear strategy. I saw shardy's
> > comment here [2] that the :free strategy does not yet apparently work
> > with the any_errors_fatal option. Perhaps we can reach out to someone
> > in the Ansible community in this regard to improve running these
> > things in parallel like TripleO used to work with Heat agents.
> >
> > This is also how host_prep_tasks is implemented which BTW we should
> > now get rid of as a duplicate architectural step since we have
> > deploy_steps_tasks anyway.
> >
> > [1] https://review.openstack.org/#/c/614822/
> > [2] 
> > http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/common/deploy-steps.j2#n554
> >
>
> --
> Cédric Jeanneret
> Software Engineer
> DFG:DF
>


[openstack-dev] [TripleO] PSA lets use deploy_steps_tasks

2018-11-02 Thread Dan Prince
I pushed a patch[1] to update our containerized deployment
architecture docs yesterday. There are 2 new fairly useful sections we
can leverage with TripleO's stepwise deployment. They appear to be
used somewhat sparingly so I wanted to get the word out.

The first is 'deploy_steps_tasks' which gives you a means to run
Ansible snippets on each node/role in a stepwise fashion during
deployment. Previously it was only possible to execute puppet or
docker commands, whereas now that we have deploy_steps_tasks we can
execute ad-hoc ansible in the same manner.
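For example, a service template can carry something like this (a
minimal sketch; the task itself is made up):

  deploy_steps_tasks:
    # runs on every node hosting this service, once per step; the
    # 'step' variable lets you gate a task to a particular step
    - name: Ensure a state directory exists at step 1
      when: step|int == 1
      file:
        path: /var/lib/my-service   # hypothetical path
        state: directory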

The second is 'external_deploy_tasks' which allows you to run
Ansible snippets on the Undercloud during stepwise deployment. This is
probably most useful for driving an external installer but might also
help with some complex tasks that need to originate from a single
Ansible client.
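Sketched the same way (again hypothetical, and note this one executes
on the Undercloud rather than on the overcloud nodes):

  external_deploy_tasks:
    - name: Drive an external installer at step 2
      when: step|int == 2
      command: /usr/local/bin/external-installer --deploy   # made up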

The only downside I see to these approaches is that both appear to be
implemented with Ansible's default linear strategy. I saw shardy's
comment here [2] that the :free strategy does not yet apparently work
with the any_errors_fatal option. Perhaps we can reach out to someone
in the Ansible community in this regard to improve running these
things in parallel like TripleO used to work with Heat agents.

This is also how host_prep_tasks is implemented, which BTW we should
now get rid of as a duplicate architectural step since we have
deploy_steps_tasks anyway.

[1] https://review.openstack.org/#/c/614822/
[2] 
http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/common/deploy-steps.j2#n554



Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-25 Thread Dan Prince
On Wed, Oct 17, 2018 at 11:15 AM Alex Schultz  wrote:
>
> Time to resurrect this thread.
>
> On Thu, Jul 5, 2018 at 12:14 PM James Slagle  wrote:
> >
> > On Thu, Jul 5, 2018 at 1:50 PM, Dan Prince  wrote:
> > > Last week I was tinkering with my docker configuration a bit and was a
> > > bit surprised that puppet/services/docker.yaml no longer used puppet to
> > > configure the docker daemon. It now uses Ansible [1] which is very cool
> > > but brings up the question of how should we clearly indicate to
> > > developers and users that we are using Ansible vs Puppet for
> > > configuration?
> > >
> > > TripleO has been around for a while now, has supported multiple
> > > configuration ans service types over the years: os-apply-config,
> > > puppet, containers, and now Ansible. In the past we've used rigid
> > > directory structures to identify which "service type" was used. More
> > > recently we mixed things up a bit more even by extending one service
> > > type from another ("docker" services all initially extended the
> > > "puppet" services to generate config files and provide an easy upgrade
> > > path).
> > >
> > > Similarly we now use Ansible all over the place for other things in
> > > many of or docker and puppet services for things like upgrades. That is
> > > all good too. I guess the thing I'm getting at here is just a way to
> > > cleanly identify which services are configured via Puppet vs. Ansible.
> > > And how can we do that in the least destructive way possible so as not
> > > to confuse ourselves and our users in the process.
> > >
> > > Also, I think it's worth keeping in mind that TripleO was once a multi-
> > > vendor project with vendors that had different preferences on service
> > > configuration. Also having the ability to support multiple
> > > configuration mechanisms in the future could once again present itself
> > > (thinking of Kubernetes as an example). Keeping in mind there may be a
> > > conversion period that could well last more than a release or two.
> > >
> > > I suggested a 'services/ansible' directory with mixed responses in our
> > > #tripleo meeting this week. Any other thoughts on the matter?
> >
> > I would almost rather see us organize the directories by service
> > name/project instead of implementation.
> >
> > Instead of:
> >
> > puppet/services/nova-api.yaml
> > puppet/services/nova-conductor.yaml
> > docker/services/nova-api.yaml
> > docker/services/nova-conductor.yaml
> >
> > We'd have:
> >
> > services/nova/nova-api-puppet.yaml
> > services/nova/nova-conductor-puppet.yaml
> > services/nova/nova-api-docker.yaml
> > services/nova/nova-conductor-docker.yaml
> >
> > (or perhaps even another level of directories to indicate
> > puppet/docker/ansible?)
> >
> > Personally, such an organization is something I'm more used to. It
> > feels more similar to how most would expect a puppet module or ansible
> > role to be organized, where you have the abstraction (service
> > configuration) at a higher directory level than specific
> > implementations.
> >
> > It would also lend itself more easily to adding implementations only
> > for specific services, and address the question of if a new top level
> > implementation directory needs to be created. For example, adding a
> > services/nova/nova-api-chef.yaml seems a lot less contentious than
> > adding a top level chef/services/nova-api.yaml.
> >
> > It'd also be nice if we had a way to mark the default within a given
> > service's directory. Perhaps services/nova/nova-api-default.yaml,
> > which would be a new template that just consumes the default? Or
> > perhaps a symlink, although it was pointed out symlinks don't work in
> > swift containers. Still, that could possibly be addressed in our plan
> > upload workflows. Then the resource-registry would point at
> > nova-api-default.yaml. One could easily tell which is the default
> > without having to cross reference with the resource-registry.
> >
>
> So since I'm adding a new ansible service, I thought I'd try and take
> a stab at this naming thing. I've taken James's idea and proposed an
> implementation here:
> https://review.openstack.org/#/c/588111/
>
> The idea would be that the THT code for the service deployment would
> end up in something like:
>
> deployment//-.yaml

A matter of preference but I can live with this.
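For what it's worth, a resource registry under that layout might read
something like this (paths purely illustrative, borrowing the nova
examples from earlier in the thread):

  resource_registry:
    OS::TripleO::Services::NovaApi: deployment/nova/nova-api-docker.yaml
    OS::TripleO::Services::NovaConductor: deployment/nova/nova-conductor-docker.yaml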

>
> Additionally I took a

Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-25 Thread Dan Prince
On Thu, Oct 25, 2018 at 11:26 AM Alex Schultz  wrote:
>
> On Thu, Oct 25, 2018 at 9:16 AM Bogdan Dobrelya  wrote:
> >
> >
> > On 10/19/18 8:04 PM, Alex Schultz wrote:
> > > On Fri, Oct 19, 2018 at 10:53 AM James Slagle  
> > > wrote:
> > >>
> > >> On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz  
> > >> wrote:
> > >> > Additionally I took a stab at combining the puppet/docker service
> > >> > definitions for the aodh services in a similar structure to start
> > >> > reducing the overhead we've had from maintaining the docker/puppet
> > >> > implementations separately.  You can see the patch
> > >> > https://review.openstack.org/#/c/611188/ for an additional example of
> > >> > this.
> > >>
> > >> That patch takes the approach of removing baremetal support. Is that
> > >> what we agreed to do?
> > >>
> > >
> > > It's been deprecated since Queens [0], yes? I think it is time to stop
> > > supporting this method of installation.  Given that I'm not even sure
> >
> > My point and concern remain as before: unless we fully drop the
> > docker support for Queens (and the downstream LTS released for it),
> > we should not modify the t-h-t directory tree, due to the associated
> > complexity of maintaining backports
> >
>
> This is why we have duplication of things in THT.  For environment
> files this is actually an issue due to the fact they are the end user
> interface. But these service files should be internal and where they
> live should not matter.  We already have had this in the past and have
> managed to continue to do backports so I don't think this as a reason
> not to do this clean up.  It feels like we use this as a reason not to
> actually move forward on cleanup and we end up carrying the tech debt.
> By this logic, we'll never be able to cleanup anything if we can't
> handle moving files around.

Yeah. The environment files would contain some level of duplication
until we refactor our plan storage mechanism to use a plain old
tarball (stored in Swift still) instead of storing files in the
expanded format. Swift does not support softlinks, but a tarball would
and thus would allow us to de-dup things in the future.

The patch is here but it needs some love:

https://review.openstack.org/#/c/581153/

Dan

>
> I think there are some patches to do soft links (dprince might be able
> to provide the patches) which could at least handle this backward
> compatibility around locations, but I think we need to actually move
> forward on the simplification of the service definitions unless
> there's a blocking technical issue with this effort.
>
> Thanks,
> -Alex
>
> > > the upgrade process even works anymore with baremetal, I don't think
> > > there's a reason to keep it as it directly impacts the time it takes
> > > to perform deployments and also contributes to increased complexity
> > > all around.
> > >
> > > [0] 
> > > http://lists.openstack.org/pipermail/openstack-dev/2017-September/122248.html
> > >
> > >> I'm not specifically opposed, as I'm pretty sure the baremetal
> > >> implementations are no longer tested anywhere, but I know that Dan had
> > >> some concerns about that last time around.
> > >>
> > >> The alternative we discussed was using jinja2 to include common
> > >> data/tasks in both the puppet/docker/ansible implementations. That
> > >> would also result in reducing the number of Heat resources in these
> > >> stacks and hopefully reduce the amount of time it takes to
> > >> create/update the ServiceChain stacks.
> > >>
> > >
> > > I'd rather we officially get rid of the one of the two methods and
> > > converge on a single method without increasing the complexity via
> > > jinja to continue to support both. If there's an improvement to be had
> > > after we've converged on a single structure for including the base
> > > bits, maybe we could do that then?
> > >
> > > Thanks,
> > > -Alex
> >
> >
> > --
> > Best regards,
> > Bogdan Dobrelya,
> > Irc #bogdando
> >


Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-23 Thread Dan Prince
On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes  wrote:
>
> On 08/15/2018 04:01 PM, Emilien Macchi wrote:
> > On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi wrote:
> >
> > More seriously here: there is an ongoing effort to converge the
> > tools around containerization within Red Hat, and we, TripleO, are
> > interested in continuing the containerization of our services (which
> > was initially done with Docker & Docker-Distribution).
> > We're looking at how these containers could be managed by k8s one
> > day but way before that we plan to swap out Docker and join CRI-O
> > efforts, which seem to be using Podman + Buildah (among other things).
> >
> > I guess my wording wasn't the best but Alex explained way better here:
> > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52
> >
> > If I may have a chance to rephrase, I guess our current intention is to
> > continue our containerization and investigate how we can improve our
> > tooling to better orchestrate the containers.
> > We have a nice interface (openstack/paunch) that allows us to run
> > multiple container backends, and we're currently looking outside of
> > Docker to see how we could solve our current challenges with the new tools.
> > We're looking at CRI-O because it happens to be a project with a great
> > community, focusing on some problems that we, TripleO have been facing
> > since we containerized our services.
> >
> > We're doing all of this in the open, so feel free to ask any question.
>
> I appreciate your response, Emilien, thank you. Alex' responses to
> Jeremy on the #openstack-tc channel were informative, thank you Alex.
>
> For now, it *seems* to me that all of the chosen tooling is very Red Hat
> centric. Which makes sense to me, considering Triple-O is a Red Hat product.

Perhaps a slight clarification here is needed. "Director" is a Red Hat
product. TripleO is an upstream project that is now largely driven by
Red Hat and is today marked as single vendor. We welcome others to
contribute to the project upstream just like anybody else.

And for those who don't know the history, the TripleO project was once
multi-vendor as well. So a lot of the abstractions we have in place
could easily be extended to support distro specific implementation
details. (Kind of what I view podman as in the scope of this thread).

>
> I don't know how much of the current reinvention of container runtimes
> and various tooling around containers is the result of politics. I don't
> know how much is the result of certain companies wanting to "own" the
> container stack from top to bottom. Or how much is a result of technical
> disagreements that simply cannot (or will not) be resolved among
> contributors in the container development ecosystem.
>
> Or is it some combination of the above? I don't know.
>
> What I *do* know is that the current "NIH du jour" mentality currently
> playing itself out in the container ecosystem -- reminding me very much
> of the Javascript ecosystem -- makes it difficult for any potential
> *consumers* of container libraries, runtimes or applications to be
> confident that any choice they make towards one of the other will be the
> *right* choice or even a *possible* choice next year -- or next week.
> Perhaps this is why things like openstack/paunch exist -- to give you
> options if something doesn't pan out.

This is exactly why paunch exists.

Re the podman thing, I look at it as an implementation detail. The
good news is that given it is almost a parity replacement for what we
already use we'll still contribute to the OpenStack community in
similar ways. Ultimately whether you run 'docker run' or 'podman run'
you end up with the same thing as far as the existing TripleO
architecture goes.

Dan

>
> You have a tough job. I wish you all the luck in the world in making
> these decisions and hope politics and internal corporate management
> decisions play as little a role in them as possible.
>
> Best,
> -jay
>


Re: [openstack-dev] [tripleo] ansible roles in tripleo

2018-08-23 Thread Dan Prince
On Tue, Aug 14, 2018 at 1:53 PM Jill Rouleau  wrote:
>
> Hey folks,
>
> Like Alex mentioned[0] earlier, we've created a bunch of ansible roles
> for tripleo specific bits.  The idea is to start putting some basic
> cookiecutter type things in them to get things started, then move some
> low-hanging fruit out of tripleo-heat-templates and into the appropriate
> roles.  For example, docker/services/keystone.yaml could have
> upgrade_tasks and fast_forward_upgrade_tasks moved into ansible-role-
> tripleo-keystone/tasks/(upgrade.yml|fast_forward_upgrade.yml), and the
> t-h-t updated to
> include_role: ansible-role-tripleo-keystone
>   tasks_from: upgrade.yml
> without having to modify any puppet or heat directives.
>
> This would let us define some patterns for implementing these tripleo
> roles during Stein while looking at how we can make use of ansible for
> things like core config.
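In template terms, the include Jill sketches above would look roughly
like this (illustrative only):

  upgrade_tasks:
    - name: Run the keystone upgrade tasks from the shared role
      include_role:
        name: ansible-role-tripleo-keystone
        tasks_from: upgrade.yml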

I like the idea of consolidating the Ansible stuff and getting out of
the practice of inlining it into t-h-t. Especially the "core config"
which I take to mean moving away from Puppet and towards Ansible for
service level configuration. But presumably we are going to rely on
the upstream OpenStack ansible-os_* projects to do the heavy config
lifting for us here though, right? We won't have to do much on our side
to leverage that, I hope, other than translating old hiera to
equivalent settings for the config files to ensure some backward
compatibility.

While I agree with the goals, I do wonder if the sheer number of git
repos we've created here is needed. Like with puppet-tripleo we were
able to combine a set of "small lightweight" manifests in a way to
wrap them around the upstream Puppet modules. Why not do the same with
ansible-role-tripleo? My concern is that we've created so many cookie
cutter repos with boilerplate code in them that ends up being much
heavier than the files which will actually reside in many of these
repos. This in addition to the extra review work and RPM packages we
need to constantly maintain.

Dan

>
> t-h-t and config-download will still drive the vast majority of playbook
> creation for now, but for new playbooks (such as for operations tasks)
> tripleo-ansible[1] would be our project directory.
>
> So in addition to the larger conversation about how deployers can start
> to standardize how we're all using ansible, I'd like to also have a
> tripleo-specific conversation at PTG on how we can break out some of our
> ansible that's currently embedded in t-h-t into more modular and
> flexible roles.
>
> Cheers,
> Jill
>
> [0] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133119.html
> [1] 
> https://git.openstack.org/cgit/openstack/tripleo-ansible/tree/



Re: [openstack-dev] [tripleo] Patches to speed up plan operations

2018-08-07 Thread Dan Prince
Thanks for taking this on Ian! I'm fully on board with the effort. I
like the consolidation and performance improvements. Storing t-h-t
templates in Swift worked okay 3-4 years ago. Now that we have more
templates, many of which need .j2 rendering, the storage there has
become quite a bottleneck.

Additionally, since we'd be sending commands to Heat via local
filesystem template storage we could consider using softlinks again
within t-h-t which should help with refactoring and deprecation
efforts.

Dan
On Wed, Aug 1, 2018 at 7:35 PM Ian Main  wrote:
>
>
> Hey folks!
>
> So I've been working on some patches to speed up plan operations in TripleO.  
> This was originally driven by the UI needing to be able to perform a 'plan 
> upload' in something less than several minutes. :)
>
> https://review.openstack.org/#/c/581153/
> https://review.openstack.org/#/c/581141/
>
> I have a functioning set of patches, and it actually cuts over 2 minutes off 
> the overcloud deployment time.
>
> Without patch:
> + openstack overcloud plan create --templates 
> /home/stack/tripleo-heat-templates/ overcloud
> Creating Swift container to store the plan
> Creating plan from template files in: /home/stack/tripleo-heat-templates/
> Plan created.
> real    3m3.415s
>
> With patch:
> + openstack overcloud plan create --templates 
> /home/stack/tripleo-heat-templates/ overcloud
> Creating Swift container to store the plan
> Creating plan from template files in: /home/stack/tripleo-heat-templates/
> Plan created.
> real    0m44.694s
>
> This is on VMs.  On real hardware it now takes something like 15-20 seconds 
> to do the plan upload which is much more manageable from the UI standpoint.
>
> Some things about what this patch does:
>
> - It makes use of process-templates.py (written for the undercloud) to 
> process the jinjafied templates.  This reduces duplication with the existing 
> version in the code base and is very fast as it's all done on local disk.
> - It stores the bulk of the templates as a tarball in swift.  Any individual 
> files in swift take precedence over the contents of the tarball so it should 
> be backwards compatible.  This is a great speed up as we're not accessing a 
> lot of individual files in swift.
>
> There's still some work to do; cleaning up and fixing the unit tests, testing 
> upgrades etc.  I just wanted to get some feedback on the general idea and 
> hopefully some reviews and/or help - especially with the unit test stuff.
>
> Thanks everyone!
>
> Ian
>


Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-08-07 Thread Dan Prince
On Thu, Aug 2, 2018 at 5:42 PM Steve Baker  wrote:
>
>
>
> On 02/08/18 13:03, Alex Schultz wrote:
> > On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya  wrote:
> >> On 7/6/18 7:02 PM, Ben Nemec wrote:
> >>>
> >>>
> >>> On 07/05/2018 01:23 PM, Dan Prince wrote:
> >>>> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:
> >>>>>
> >>>>> I would almost rather see us organize the directories by service
> >>>>> name/project instead of implementation.
> >>>>>
> >>>>> Instead of:
> >>>>>
> >>>>> puppet/services/nova-api.yaml
> >>>>> puppet/services/nova-conductor.yaml
> >>>>> docker/services/nova-api.yaml
> >>>>> docker/services/nova-conductor.yaml
> >>>>>
> >>>>> We'd have:
> >>>>>
> >>>>> services/nova/nova-api-puppet.yaml
> >>>>> services/nova/nova-conductor-puppet.yaml
> >>>>> services/nova/nova-api-docker.yaml
> >>>>> services/nova/nova-conductor-docker.yaml
> >>>>>
> >>>>> (or perhaps even another level of directories to indicate
> >>>>> puppet/docker/ansible?)
> >>>>
> >>>> I'd be open to this but doing changes on this scale is a much larger
> >>>> developer and user impact than what I was thinking we would be willing
> >>>> to entertain for the issue that caused me to bring this up (i.e. how to
> >>>> identify services which get configured by Ansible).
> >>>>
> >>>> Its also worth noting that many projects keep these sorts of things in
> >>>> different repos too. Like Kolla fully separates kolla-ansible and
> >>>> kolla-kubernetes as they are quite divergent. We have been able to
> >>>> preserve some of our common service architectures but as things move
> >>>> towards kubernetes we may wish to change things structurally a bit
> >>>> too.
> >>>
> >>> True, but the current directory layout was from back when we intended to
> >>> support multiple deployment tools in parallel (originally
> >>> tripleo-image-elements and puppet).  Since I think it has become clear 
> >>> that
> >>> it's impractical to maintain two different technologies to do essentially
> >>> the same thing I'm not sure there's a need for it now.  It's also worth
> >>> noting that kolla-kubernetes basically died because there wasn't enough
> >>> people to maintain both deployment methods, so we're not the only ones who
> >>> have found that to be true.  If/when we move to kubernetes I would
> >>> anticipate it going like the initial containers work did - development 
> >>> for a
> >>> couple of cycles, then a switch to the new thing and deprecation of the 
> >>> old
> >>> thing, then removal of support for the old thing.
> >>>
> >>> That being said, because of the fact that the service yamls are
> >>> essentially an API for TripleO because they're referenced in user
> >>
> >> this ^^
> >>
> >>> resource registries, I'm not sure it's worth the churn to move everything
> >>> either.  I think that's going to be an issue either way though, it's just 
> >>> a
> >>> question of the scope.  _Something_ is going to move around no matter how 
> >>> we
> >>> reorganize so it's a problem that needs to be addressed anyway.
> >>
> >> [tl;dr] I can foresee that reorganizing that API becomes a nightmare for
> >> maintainers doing backports for queens (and the LTS downstream release 
> >> based
> >> on it). Now imagine kubernetes support comes within those next a few years,
> >> before we can let the old API just go...
> >>
> >> I have an example [0] to share all that pain brought by a simple move of
> >> 'API defaults' from environments/services-docker to environments/services
> >> plus environments/services-baremetal. Each time a file changed contents at
> >> its old location, like here [1], I had to run a lot of sanity checks to
> >> rebase it properly: checking that the updated paths in resource
> >> registries are still valid or have been moved as well, then picking the
> >> source of truth for diverged old vs. changed locations - all that to lose
> >> nothing important in the process.
> >>
>

Re: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker)

2018-07-18 Thread Dan Prince
> > While undercloud HA is important, it won’t
> > bring
> > operators as many benefits as the previously mentioned
> > improvements.
> > Let’s keep it in mind when we are considering the amount of work
> > needed for it.
> 
> +100
> 
> > E) One of the use-cases we want to take into account is expanding a
> > single-node deployment (all-in-one) to a 3-node HA controller. I think
> > it is important when evaluating PCMK/keepalived.
> 
> Right, so to be able to implement this, there is no way around having
> pacemaker (at least today until we have galera and rabbit).
> It still does not mean we have to default to it, but if you want to
> scale beyond one node, then there is no other option atm.
> 
> > HTH
> 
> It did, thanks!
> 
> Michele
> > — Jarda
> > 
> > > On Jul 17, 2018, at 05:04, Emilien Macchi 
> > > wrote:
> > > 
> > > Thanks everyone for the feedback, I've made a quick PoC:
> > > https://review.openstack.org/#/q/topic:bp/undercloud-pacemaker-default
> > > 
> > > And I'm currently doing local testing. I'll publish results when
> > > progress is made, but I've made it so we have the choice to
> > > enable pacemaker (disabled by default), where keepalived would
> > > remain the default for now.
> > > 
> > > On Mon, Jul 16, 2018 at 2:07 PM Michele Baldessari wrote:
> > > On Mon, Jul 16, 2018 at 11:48:51AM -0400, Emilien Macchi wrote:
> > > > On Mon, Jul 16, 2018 at 11:42 AM Dan Prince wrote:
> > > > [...]
> > > > 
> > > > > The biggest downside IMO is the fact that our Pacemaker
> > > > > integration is
> > > > > not containerized. Nor are there any plans to finish the
> > > > > containerization of it. Pacemaker has to currently run on
> > > > > baremetal
> > > > > and this makes the installation of it for small dev/test
> > > > > setups a lot
> > > > > less desirable. It can launch containers just fine but the
> > > > > pacemaker
> > > > > installation itself is what concerns me for the long term.
> > > > > 
> > > > > Until we have plans for containerizing it I suppose I would
> > > > > rather see
> > > > > us keep keepalived as an option for these smaller setups. We
> > > > > can
> > > > > certainly change our default Undercloud to use Pacemaker (if
> > > > > we choose
> > > > > to do so). But having keepalived around for "lightweight"
> > > > > (zero or low
> > > > > footprint) installs that work is really quite desirable.
> > > > > 
> > > > 
> > > > That's a good point, and I agree with your proposal.
> > > > Michele, what's the long term plan regarding containerized
> > > > pacemaker?
> > > 
> > > Well, we kind of started evaluating it (there was definitely not
> > > enough
> > > time around pike/queens as we were busy landing the bundles
> > > code), then
> > > due to discussions around k8s it kind of got off our radar. We
> > > can
> > > at least resume the discussions around it and see how much effort
> > > it
> > > would be. I'll bring it up with my team and get back to you.
> > > 
> > > cheers,
> > > Michele
> > > -- 
> > > Michele Baldessari
> > > C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D
> > > 
> > > 
> > > 
> > > -- 
> > > Emilien Macchi
> 
> 



Re: [openstack-dev] [tripleo] Stein blueprint - Plan to remove Keepalived support (replaced by Pacemaker)

2018-07-16 Thread Dan Prince
On Fri, Jul 13, 2018 at 2:34 PM Emilien Macchi  wrote:
>
> Greetings,
>
> We have been supporting both Keepalived and Pacemaker to handle VIP 
> management.
> Keepalived is actually the tool used by the undercloud when SSL is enabled 
> (for SSL termination).
> While Pacemaker is used on the overcloud to handle VIPs as well as service HA.
>
> I see some benefits in removing support for keepalived and deploying 
> Pacemaker by default:
> - pacemaker can be deployed on one node (we actually do it in CI), so it can be 
> deployed on the undercloud to handle VIPs and manage HA as well.
> - it'll allow us to extend undercloud & standalone use cases to support 
> multinode one day, with HA and SSL, like we already have on the overcloud.
> - it removes the complexity of managing two tools, so we'll potentially 
> remove code in TripleO.
> - pacemaker features from the overcloud would of course be usable in 
> standalone environments, but also on the undercloud.
>
> There is probably some downside. The first one, I think, is that Keepalived is much 
> more lightweight than Pacemaker; we probably need to run some benchmarks here 
> and make sure we don't make the undercloud heavier than it is now.
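>
> For a sense of the footprint difference: the entire keepalived side of a
> VIP is a config stanza of roughly this shape (illustrative only, not our
> actual template):
>
> vrrp_instance undercloud_vip {
>     interface br-ctlplane
>     virtual_router_id 51
>     priority 100
>     virtual_ipaddress {
>         192.168.24.2/32
>     }
> }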

The biggest downside IMO is the fact that our Pacemaker integration is
not containerized. Nor are there any plans to finish the
containerization of it. Pacemaker has to currently run on baremetal
and this makes the installation of it for small dev/test setups a lot
less desirable. It can launch containers just fine but the pacemaker
installation itself is what concerns me for the long term.

Until we have plans for containerizing it I suppose I would rather see
us keep keepalived as an option for these smaller setups. We can
certainly change our default Undercloud to use Pacemaker (if we choose
to do so). But having keepalived around for "lightweight" (zero or low
footprint) installs that work is really quite desirable.

Dan

>
> I went ahead and created this blueprint for Stein:
> https://blueprints.launchpad.net/tripleo/+spec/undercloud-pacemaker-default
> I also plan to prototype some basic code soon and provide an upgrade path if 
> we accept this blueprint.
>
> This is something I would like to discuss here and at the PTG, feel free to 
> bring questions/concerns,
> Thanks!
> --
> Emilien Macchi



Re: [openstack-dev] [tripleo] New "validation" subcommand for "openstack undercloud"

2018-07-16 Thread Dan Prince
On Mon, Jul 16, 2018 at 11:27 AM Cédric Jeanneret  wrote:
>
> Dear Stackers,
>
> In order to let operators properly validate their undercloud node, I
> propose to create a new subcommand in the "openstack undercloud" "tree":
> `openstack undercloud validate'
>
> This should only run the different validations we have in the
> undercloud_preflight.py¹
> That way, an operator will be able to ensure all is valid before
> starting "for real" any other command like "install" or "upgrade".
>
> Of course, this "validate" step is embedded in the "install" and
> "upgrade" already, but having the capability to just validate without
> any further action is something that can be interesting, for example:
>
> - ensure the current undercloud hardware/vm is sufficient for an update
> - ensure the allocated VM for the undercloud is sufficient for a deploy
> - and so on
>
> There are probably other possibilities, if we extend the "validation"
> scope outside the "undercloud" (like, tripleo, allinone, even overcloud).
>
> What do you think? Any pros/cons/thoughts?

I think this command could be very useful. I'm assuming the underlying
implementation would call a 'heat stack-validate' using an ephemeral
heat-all instance. If so, the way we implement it for the undercloud vs the
'standalone' use case would likely be a bit different. We can probably
subclass the implementations to share common code across the efforts
though.

For the undercloud you are likely to have a few extra 'local only'
validations. Perhaps extra checks for things on the client side.

For the all-in-one I had envisioned using the output from the 'heat
stack-validate' to create a sample config file for a custom set of
services. Similar to how tools like Packstack generate a config file
for example.
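
For comparison, the Packstack flow I'm thinking of is:

packstack --gen-answer-file=answers.txt
# edit answers.txt as needed, then:
packstack --answer-file=answers.txt

Something equivalent, driven by the stack-validate output, could give the
all-in-one case the same kind of workflow.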

Dan

>
> Cheers,
>
> C.
>
>
>
> ¹
> http://git.openstack.org/cgit/openstack/python-tripleoclient/tree/tripleoclient/v1/undercloud_preflight.py
> --
> Cédric Jeanneret
> Software Engineer
> DFG:DF
>



Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-07-05 Thread Dan Prince
On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:
> 
> I would almost rather see us organize the directories by service
> name/project instead of implementation.
> 
> Instead of:
> 
> puppet/services/nova-api.yaml
> puppet/services/nova-conductor.yaml
> docker/services/nova-api.yaml
> docker/services/nova-conductor.yaml
> 
> We'd have:
> 
> services/nova/nova-api-puppet.yaml
> services/nova/nova-conductor-puppet.yaml
> services/nova/nova-api-docker.yaml
> services/nova/nova-conductor-docker.yaml
> 
> (or perhaps even another level of directories to indicate
> puppet/docker/ansible?)

I'd be open to this but doing changes on this scale is a much larger
developer and user impact than what I was thinking we would be willing
to entertain for the issue that caused me to bring this up (i.e. how to
identify services which get configured by Ansible).

It's also worth noting that many projects keep these sorts of things in
different repos too. Like Kolla fully separates kolla-ansible and
kolla-kubernetes as they are quite divergent. We have been able to
preserve some of our common service architectures but as things move
towards kubernetes we may wish to change things structurally a bit
too.

Dan




[openstack-dev] [TripleO] easily identifying how services are configured

2018-07-05 Thread Dan Prince
Last week I was tinkering with my docker configuration a bit and was a
bit surprised that puppet/services/docker.yaml no longer used puppet to
configure the docker daemon. It now uses Ansible [1] which is very cool
but brings up the question of how should we clearly indicate to
developers and users that we are using Ansible vs Puppet for
configuration?

TripleO has been around for a while now, and has supported multiple
configuration and service types over the years: os-apply-config,
puppet, containers, and now Ansible. In the past we've used rigid
directory structures to identify which "service type" was used. More
recently we mixed things up a bit more even by extending one service
type from another ("docker" services all initially extended the
"puppet" services to generate config files and provide an easy upgrade
path).

Similarly we now use Ansible all over the place for other things in
many of our docker and puppet services for things like upgrades. That is
all good too. I guess the thing I'm getting at here is just a way to
cleanly identify which services are configured via Puppet vs. Ansible.
And how can we do that in the least destructive way possible so as not
to confuse ourselves and our users in the process.

Also, I think it's worth keeping in mind that TripleO was once a multi-
vendor project with vendors that had different preferences on service
configuration. Also having the ability to support multiple
configuration mechanisms in the future could once again present itself
(thinking of Kubernetes as an example). Keeping in mind there may be a
conversion period that could well last more than a release or two.

I suggested a 'services/ansible' directory with mixed responses in our
#tripleo meeting this week. Any other thoughts on the matter?

Thanks,

Dan

[1] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/puppet/services/docker.yaml?id=00f5019ef28771e0b3544d0aa3110d5603d7c159



Re: [openstack-dev] [tripleo] Status of Standalone installer (aka All-In-One)

2018-06-05 Thread Dan Prince
On Mon, Jun 4, 2018 at 8:26 PM, Emilien Macchi  wrote:
> TL;DR: we made nice progress and you can checkout this demo:
> https://asciinema.org/a/185533
>
> We started the discussion back in Dublin during the last PTG. The idea of
> Standalone (aka All-In-One, but can be mistaken with all-in-one overcloud)
> is to deploy a single node OpenStack where the provisioning happens on the
> same node (there is no notion of {under/over}cloud).
>
> A kind of a "packstack" or "devstack" but using TripleO which has can offer:
> - composable containerized services
> - composable upgrades
> - composable roles
> - Ansible driven deployment
>
> One of the key features we have been focusing so far are:
> - low bar to be able to dev/test TripleO (single machine: VM), with simpler
> tooling

One idea that might be worth adding to this list is
"zero-footprint". Right now you can use a VM to isolate the
installation of the all-in-one installer on your laptop, which is cool,
and you can always use a VM to isolate things. But now that we have
containers it might also be cool to have the installer itself ran in a
container rather than require the end user to install
python-tripleoclient at all.

A few of us tried out a similar sort of idea in Pike with the
undercloud_deploy interface (docker in docker, etc.). At the time we
didn't have config-download working so it had to all be done inside
the container. But now that we have config download working with the
undercloud/all-in-one installers the Ansible which is generated can
run anywhere so long as the relevant hooks are installed. (paunch,
etc.)

The benefit here is that the requirements are even less... the
developer can just use the framework to generate Ansible that spins up
containers on his/her laptop directly. Again, only the required
Ansible/heat hooks would need to be installed.

I mentioned a few months ago my old attempt was here (uses
undercloud_deploy) [1].

Also, worth mentioning that I got it working without installing Puppet
on my laptop too [2]. Now that our containers have
all the puppet-modules in them there is no real need to bind-mount them in from
the host anymore, unless you are using the last few (HA??!!) services
that require puppet modules on baremetal. Perhaps we should switch to
installing the required puppet modules there dynamically instead of
requiring them for any undercloud/all-in-one installer, which
largely focuses on non-HA deployments anyway I think.

Is anyone else interested in the zero-footprint idea? Perhaps this is
the next iteration of the all-in-one installer?... but the one I'm
perhaps most interested in as a developer.

[1] https://github.com/dprince/talon
[2] https://review.openstack.org/#/c/550848/ (Add
DockerPuppetMountHostPuppet parameter)
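
To make the zero-footprint idea concrete, here is roughly what I'm
picturing -- the image name and mounts are hypothetical, and the output
file names just follow the config-download conventions; nothing like this
ships today:

docker run --rm -it \
  -v $HOME/tht:/tht -v $HOME/output:/output \
  tripleo/client \
  openstack tripleo deploy --templates /tht --output-only --output-dir /output
# then run the generated playbooks from the host; only the relevant
# hooks (paunch, etc.) need to be installed there
ansible-playbook -i /output/inventory.yaml /output/deploy_steps_playbook.yaml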

Dan

> - make it fast (being able to deploy OpenStack in minutes)
> - being able to make a change in OpenStack (e.g. Keystone) and test the
> change immediately
>
> The workflow that we're currently targeting is:
> - deploy the system by yourself (centos7 or rhel7)
> - deploy the repos, install python-tripleoclient
> - run 'openstack tripleo deploy' (+ a few args; rough sketch below)
> - (optional) modify your container with a Dockerfile + Ansible
> - Test your change
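>
> A rough sketch of that command (arguments illustrative, not a final
> interface):
>
> sudo openstack tripleo deploy \
>   --templates /usr/share/openstack-tripleo-heat-templates \
>   --local-ip 192.168.24.2/24 \
>   -e /usr/share/openstack-tripleo-heat-templates/environments/standalone.yaml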
>
> Status:
> - tripleoclient was refactored in a way that the undercloud is actually a
> special configuration of the standalone deployment (still work in progress).
> We basically refactored the containerized undercloud to be more generic and
> configurable for standalone.
> - we can now deploy a standalone OpenStack with just Keystone + dependencies

Fwiw you could always do this with undercloud_deploy as well. But the
new interface is much nicer I agree. :)

> - which takes 12 minutes total (demo here: https://asciinema.org/a/185533
> and doc in progress:
> http://logs.openstack.org/27/571827/6/check/build-openstack-sphinx-docs/1885304/html/install/containers_deployment/standalone.html)
> - we have an Ansible role to push modifications to containers via a Docker
> file: https://github.com/openstack/ansible-role-tripleo-modify-image/
>
> What's next:
> - Documentation: as you can see the documentation is still in progress
> (https://review.openstack.org/#/c/571827/)
> - Continuous Integration: we're working on a new CI job:
> tripleo-ci-centos-7-standalone
> https://trello.com/c/HInL8pNm/7-upstream-ci-testing
> - Working on the standalone configuration interface, still WIP:
> https://review.openstack.org/#/c/569535/
> - Investigate the use case where a developer wants to prepare the containers
> before the deployment
>
> I hope this update was useful, feel free to give feedback or ask any
> questions,
> --
> Emilien Macchi
>


Re: [openstack-dev] [tripleo] Containerized Undercloud deep-dive

2018-05-30 Thread Dan Prince
We are on for this tomorrow (Thursday) at 2pm UTC (10am EST).

We'll meet here: https://redhat.bluejeans.com/dprince/ and record it
live. We'll do an overview presentation and then perhaps jump into a
terminal for some live questions.

Dan

On Tue, May 15, 2018 at 10:51 AM, Emilien Macchi  wrote:
> Dan and I are organizing a deep-dive session focused on the containerized
> undercloud.
>
> https://etherpad.openstack.org/p/tripleo-deep-dive-containerized-undercloud
>
> We proposed a date + list of topics but feel free to comment and ask for
> topics/questions.
> Thanks,
> --
> Emilien & Dan
>



Re: [openstack-dev] Fwd: [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-05 Thread Dan Prince
On Thu, Apr 5, 2018 at 12:24 PM, Emilien Macchi <emil...@redhat.com> wrote:
> On Thu, Apr 5, 2018 at 4:37 AM, Dan Prince <dpri...@redhat.com> wrote:
>
>> Much of the work on this is already there. We've been using this stuff
>> for over a year to dev/test the containerization efforts for a long
>> time now (and thanks for your help with this effort). The problem I
>> think is how it is all packaged. While you can use it today it
>> involves some tricks (docker in docker), or requires you to use an
>> extra VM to minimize the installation footprint on your laptop.
>>
>> Much of the remaining work here is really just about packaging and
>> technical debt. If we put tripleoclient and heat-monolith into a
>> container that solves much of the requirements problems and
>> essentially gives you a container which can transform Heat templates
>> to Ansible. From the ansible side we need to do a bit more work to
>> mimimize our dependencies (i.e. heat hooks). Using a virtual-env would
>> be one option for developers if we could make that work. I lighter set
>> of RPM packages would be another way to do it. Perhaps both...
>> Then a smaller wrapper around these things (which I personally would
>> like to name) to make it all really tight.
>
>
> So if I summarize the discussion:
>
> - A lot of positive feedback about the idea and many use cases, which is
> great.
>
> - Support for non-containerized services is not required, as long as we
> provide a way to update containers with under-review patches for developers.

I think we still desire some (basic, no upgrades) support for
non-containerized baremetal at this time.

>
> - We'll probably want to breakdown the "openstack undercloud deploy" process
> into pieces
> * start an ephemeral Heat container

It already supports this if you don't use the --heat-native option;
also you can customize the container used for heat via
--heat-container-image. So we already have this! But rather than do
this I personally prefer the container to have python-tripleoclient
and heat-monolith in it. That way everything is in there to
generate the Ansible playbooks. If you just use Heat you are missing some
of the pieces that you'd still have to install elsewhere on your host.
Having them all be in one scoped container which generates Ansible
playbooks from Heat templates is better IMO.
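
For example, dropping --heat-native gives you the containerized heat today
(the flags are real per the above; the image reference is just illustrative):

openstack undercloud deploy \
  --heat-container-image docker.io/tripleomaster/centos-binary-heat-all \
  -e ~/custom-env.yaml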

> * create the Heat stack passing all requested -e's
> * run config-download and save the output
>
> And then remove undercloud specific logic, so we can provide a generic way
> to create the config-download playbooks.

Yes. Let's remove some of the undercloud logic. But do note that most
of the undercloud-specific logic is now in undercloud_config.py anyway,
so this is mostly already on its way.

> This generic way would be consumed by the undercloud deploy commands but
> also by the new all-in-one wrapper.
>
> - Speaking of the wrapper, we will probably have a new one. Several names
> were proposed:
> * openstack tripleo deploy
> * openstack talon deploy
> * openstack elf deploy

The wrapper could be just another set of playbooks that we give a
name to... and perhaps put a CLI in front of as well.

>
> - The wrapper would work with deployed-server, so we would noop Neutron
> networks and use fixed IPs.

This would be configurable I think, depending on which templates were
used. Noop is a sensible default for developer deployments, but do note that
some services like Neutron aren't going to work unless you have some
basic network setup. Noop is useful if you prefer to do this manually,
but our os-net-config templates are quite useful to automate things.

>
> - Investigate the packaging work: containerize tripleoclient and
> dependencies, see how we can containerized Ansible + dependencies (and
> eventually reduce them at strict minimum).
>
> Let me know if I missed something important, hopefully we can move things
> forward during this cycle.
> --
> Emilien Macchi



Re: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines

2018-04-05 Thread Dan Prince
Sigh.

And the answer is: user error. Adminstrator != Administrator.

Well this was fun. Sorry for the bother. All is well. :)

Dan

On Thu, Apr 5, 2018 at 8:13 AM, Dan Prince <dpri...@redhat.com> wrote:
> On Wed, Apr 4, 2018 at 1:27 PM, Jim Rollenhagen <j...@jimrollenhagen.com> 
> wrote:
>> On Wed, Apr 4, 2018 at 1:18 PM, Jim Rollenhagen <j...@jimrollenhagen.com>
>> wrote:
>>>
>>> On Wed, Apr 4, 2018 at 8:39 AM, Dan Prince <dpri...@redhat.com> wrote:
>>>>
>>>> Kind of a support question but figured I'd ask here in case there are
>>>> suggestions for workarounds for specific machines.
>>>>
>>>> Setting up a new rack of mixed machines this week and hit this issue
>>>> with HP machines using the ipmi power driver for Ironic. Curious if
>>>> anyone else has seen this before? The same commands work great with my
>>>> Dell boxes!
>>>>
>>>> -
>>>>
>>>> [root@localhost ~]# cat x.sh
>>>> set -x
>>>> # this is how Ironic sends its IPMI commands it fails
>>>> echo -n password > /tmp/tmprmdOOv
>>>> ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
>>>> power status
>>>>
>>>> # this works great
>>>> ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
>>>> status
>>>>
>>>> [root@localhost ~]# bash x.sh
>>>> + echo -n password
>>>> + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
>>>> power status
>>>> Error: Unable to establish IPMI v2 / RMCP+ session
>>>> + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
>>>> status
>>>> Chassis Power is on
>>>
>>>
>>> Very strange. A tcpdump of both would probably be enlightening. :)
>>>
>>> Also curious what version of ipmitool this is, maybe you're hitting an old
>>> bug.
>>
>>
>> https://sourceforge.net/p/ipmitool/bugs/90/ would seem like a prime suspect
>> here.
>
> Thanks for the suggestion Jim! So I tried a few very short passwords
> and no dice so far. Looking into the tcpdump info a bit now.
>
> I'm in a bit of a rush so I may hack in a quick patch Ironic to make
> ipmitool to use the -P option to proceed and loop back to fix this a
> bit later.
>
> Dan
>
>>
>> // jim
>>



Re: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines

2018-04-05 Thread Dan Prince
On Wed, Apr 4, 2018 at 1:27 PM, Jim Rollenhagen <j...@jimrollenhagen.com> wrote:
> On Wed, Apr 4, 2018 at 1:18 PM, Jim Rollenhagen <j...@jimrollenhagen.com>
> wrote:
>>
>> On Wed, Apr 4, 2018 at 8:39 AM, Dan Prince <dpri...@redhat.com> wrote:
>>>
>>> Kind of a support question but figured I'd ask here in case there are
>>> suggestions for workarounds for specific machines.
>>>
>>> Setting up a new rack of mixed machines this week and hit this issue
>>> with HP machines using the ipmi power driver for Ironic. Curious if
>>> anyone else has seen this before? The same commands work great with my
>>> Dell boxes!
>>>
>>> -
>>>
>>> [root@localhost ~]# cat x.sh
>>> set -x
>>> # this is how Ironic sends its IPMI commands it fails
>>> echo -n password > /tmp/tmprmdOOv
>>> ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
>>> power status
>>>
>>> # this works great
>>> ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
>>> status
>>>
>>> [root@localhost ~]# bash x.sh
>>> + echo -n password
>>> + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
>>> power status
>>> Error: Unable to establish IPMI v2 / RMCP+ session
>>> + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
>>> status
>>> Chassis Power is on
>>
>>
>> Very strange. A tcpdump of both would probably be enlightening. :)
>>
>> Also curious what version of ipmitool this is, maybe you're hitting an old
>> bug.
>
>
> https://sourceforge.net/p/ipmitool/bugs/90/ would seem like a prime suspect
> here.

Thanks for the suggestion Jim! So I tried a few very short passwords
and no dice so far. Looking into the tcpdump info a bit now.

I'm in a bit of a rush so I may hack in a quick patch Ironic to make
ipmitool to use the -P option to proceed and loop back to fix this a
bit later.

Dan

>
> // jim
>



Re: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines

2018-04-05 Thread Dan Prince
On Wed, Apr 4, 2018 at 1:18 PM, Jim Rollenhagen <j...@jimrollenhagen.com> wrote:
> On Wed, Apr 4, 2018 at 8:39 AM, Dan Prince <dpri...@redhat.com> wrote:
>>
>> Kind of a support question but figured I'd ask here in case there are
>> suggestions for workarounds for specific machines.
>>
>> Setting up a new rack of mixed machines this week and hit this issue
>> with HP machines using the ipmi power driver for Ironic. Curious if
>> anyone else has seen this before? The same commands work great with my
>> Dell boxes!
>>
>> -
>>
>> [root@localhost ~]# cat x.sh
>> set -x
>> # this is how Ironic sends its IPMI commands it fails
>> echo -n password > /tmp/tmprmdOOv
>> ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
>> power status
>>
>> # this works great
>> ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
>> status
>>
>> [root@localhost ~]# bash x.sh
>> + echo -n password
>> + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
>> power status
>> Error: Unable to establish IPMI v2 / RMCP+ session
>> + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
>> status
>> Chassis Power is on
>
>
> Very strange. A tcpdump of both would probably be enlightening. :)

Ack, I will see about getting these.

>
> Also curious what version of ipmitool this is, maybe you're hitting an old
> bug.

RHEL 7.5 so this:

ipmitool-1.8.18-7.el7.rpm

Dan

>
> // jim
>



Re: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines

2018-04-04 Thread Dan Prince
On Wed, Apr 4, 2018 at 9:00 AM, Noam Angel <no...@mellanox.com> wrote:
> Hi,
>
> First check you can ping the.
> Then open a browser and login.
> Make sure ipmi enabled.
> Make sure user has permissions for admin or other role with reboot
> capabilities.
> Check again

Hi, yeah. So like I mention in my initial email IPMI is working great
with a password like this:

ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power status

It just fails when Ironic sends the similar command with a password
file. It appears that the password file is the issue. Tried it with
and without newlines even and no success.

Dan

>
> Get Outlook for Android
>
> ________
> From: Dan Prince <dpri...@redhat.com>
> Sent: Wednesday, April 4, 2018 3:39:00 PM
> To: List, OpenStack
> Subject: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines
>
> Kind of a support question but figured I'd ask here in case there are
> suggestions for workarounds for specific machines.
>
> Setting up a new rack of mixed machines this week and hit this issue
> with HP machines using the ipmi power driver for Ironic. Curious if
> anyone else has seen this before? The same commands work great with my
> Dell boxes!
>
> -
>
> [root@localhost ~]# cat x.sh
> set -x
> # this is how Ironic sends its IPMI commands it fails
> echo -n password > /tmp/tmprmdOOv
> ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
> power status
>
> # this works great
> ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power status
>
> [root@localhost ~]# bash x.sh
> + echo -n password
> + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
> power status
> Error: Unable to establish IPMI v2 / RMCP+ session
> + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
> status
> Chassis Power is on
>
> Dan
>



Re: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines

2018-04-04 Thread Dan Prince
On Wed, Apr 4, 2018 at 8:52 AM, Sanjay Upadhyay <supad...@redhat.com> wrote:
>
>
> On Wed, Apr 4, 2018 at 6:09 PM, Dan Prince <dpri...@redhat.com> wrote:
>>
>> Kind of a support question but figured I'd ask here in case there are
>> suggestions for workarounds for specific machines.
>>
>> Setting up a new rack of mixed machines this week and hit this issue
>> with HP machines using the ipmi power driver for Ironic. Curious if
>> anyone else has seen this before? The same commands work great with my
>> Dell boxes!
>>
>
> Are you using ILO Drivers?
> https://docs.openstack.org/ironic/latest/admin/drivers/ilo.html
> /sanjay

No. I was using the ipmi driver. Trying to keep things simple.

Dan

>>
>> -
>>
>> [root@localhost ~]# cat x.sh
>> set -x
>> # this is how Ironic sends its IPMI commands it fails
>> echo -n password > /tmp/tmprmdOOv
>> ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
>> power status
>>
>> # this works great
>> ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
>> status
>>
>> [root@localhost ~]# bash x.sh
>> + echo -n password
>> + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
>> power status
>> Error: Unable to establish IPMI v2 / RMCP+ session
>> + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
>> status
>> Chassis Power is on
>>
>> Dan
>>
>
>
>
>
> --
> Sanjay Upadhyay
> IRC #saneax
>
>



[openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines

2018-04-04 Thread Dan Prince
Kind of a support question but figured I'd ask here in case there are
suggestions for workarounds for specific machines.

Setting up a new rack of mixed machines this week and hit this issue
with HP machines using the ipmi power driver for Ironic. Curious if
anyone else has seen this before? The same commands work great with my
Dell boxes!

-

[root@localhost ~]# cat x.sh
set -x
# this is how Ironic sends its IPMI commands it fails
echo -n password > /tmp/tmprmdOOv
ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
power status

# this works great
ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power status

[root@localhost ~]# bash x.sh
+ echo -n password
+ ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
power status
Error: Unable to establish IPMI v2 / RMCP+ session
+ ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power status
Chassis Power is on

Dan



Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-03 Thread Dan Prince
On Tue, Apr 3, 2018 at 10:00 AM, Javier Pena  wrote:
>
>> Greeting folks,
>>
>> During the last PTG we spent time discussing some ideas around an All-In-One
>> installer, using 100% of the TripleO bits to deploy a single node OpenStack
>> very similar with what we have today with the containerized undercloud and
>> what we also have with other tools like Packstack or Devstack.
>>
>> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
>>
>
> I'm really +1 to this. And as a Packstack developer, I'd love to see this as a
> mid-term Packstack replacement. So let's dive into the details.

Curious on this one, actually: do you see a need for continued
baremetal support? Today we support both baremetal and containers.
Perhaps "support" is a strong word. We support both in terms of
installation but only containers now have fully supported upgrades.

The interfaces we have today still support baremetal and containers
but there were some suggestions about getting rid of baremetal support
and only having containers. If we were to remove baremetal support
though, could we keep the Packstack case intact by just using
containers instead?

Dan

>
>> One of the problems that we're trying to solve here is to give a simple tool
>> for developers so they can both easily and quickly deploy an OpenStack for
>> their needs.
>>
>> "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly and
>> without complexity, reproducing the same exact same tooling as TripleO is
>> using."
>> "As a Neutron developer, I need to develop a feature in Neutron and test it
>> with TripleO in my local env."
>> "As a TripleO dev, I need to implement a new service and test its deployment
>> in my local env."
>> "As a developer, I need to reproduce a bug in TripleO CI that blocks the
>> production chain, quickly and simply."
>>
>
> "As a packager, I want an easy/low overhead way to test updated packages with 
> TripleO bits, so I can make sure they will not break any automation".
>
>> Probably more use cases, but to me that's what came into my mind now.
>>
>> Dan kicked-off a doc patch a month ago:
>> https://review.openstack.org/#/c/547038/
>> And I just went ahead and proposed a blueprint:
>> https://blueprints.launchpad.net/tripleo/+spec/all-in-one
>> So hopefully we can start prototyping something during Rocky.
>>
>> Before talking about the actual implementation, I would like to gather
>> feedback from people interested by the use-cases. If you recognize yourself
>> in these use-cases and you're not using TripleO today to test your things
>> because it's too complex to deploy, we want to hear from you.
>> I want to see feedback (positive or negative) about this idea. We need to
>> gather ideas, use cases, needs, before we go design a prototype in Rocky.
>>
>
> I would like to offer help with initial testing once there is something in 
> the repos, so count me in!
>
> Regards,
> Javier
>
>> Thanks everyone who'll be involved,
>> --
>> Emilien Macchi
>>



Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-03 Thread Dan Prince
On Tue, Apr 3, 2018 at 9:23 AM, James Slagle <james.sla...@gmail.com> wrote:
> On Mon, Apr 2, 2018 at 9:05 PM, Dan Prince <dpri...@redhat.com> wrote:
>> On Thu, Mar 29, 2018 at 5:32 PM, Emilien Macchi <emil...@redhat.com> wrote:
>>> Greeting folks,
>>>
>>> During the last PTG we spent time discussing some ideas around an All-In-One
>>> installer, using 100% of the TripleO bits to deploy a single node OpenStack
>>> very similar with what we have today with the containerized undercloud and
>>> what we also have with other tools like Packstack or Devstack.
>>>
>>> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
>>>
>>> One of the problems that we're trying to solve here is to give a simple tool
>>> for developers so they can both easily and quickly deploy an OpenStack for
>>> their needs.
>>>
>>> "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly
>>> and without complexity, reproducing the same exact same tooling as TripleO
>>> is using."
>>> "As a Neutron developer, I need to develop a feature in Neutron and test it
>>> with TripleO in my local env."
>>> "As a TripleO dev, I need to implement a new service and test its deployment
>>> in my local env."
>>> "As a developer, I need to reproduce a bug in TripleO CI that blocks the
>>> production chain, quickly and simply."
>>>
>>> Probably more use cases, but to me that's what came into my mind now.
>>>
>>> Dan kicked-off a doc patch a month ago:
>>> https://review.openstack.org/#/c/547038/
>>> And I just went ahead and proposed a blueprint:
>>> https://blueprints.launchpad.net/tripleo/+spec/all-in-one
>>> So hopefully we can start prototyping something during Rocky.
>>
>> I've actually started hacking a bit here:
>>
>> https://github.com/dprince/talon
>>
>> Very early and I haven't committed everything yet. (Probably wouldn't
>> have announced it to the list yet but it might help some understand
>> the use case).
>>
>> I'm running this on my laptop to develop TripleO containers with no
>> extra VM involved.
>>
>> P.S. We should call it Talon!
>>
>> Dan
>>
>>>
>>> Before talking about the actual implementation, I would like to gather
>>> feedback from people interested by the use-cases. If you recognize yourself
>>> in these use-cases and you're not using TripleO today to test your things
>>> because it's too complex to deploy, we want to hear from you.
>>> I want to see feedback (positive or negative) about this idea. We need to
>>> gather ideas, use cases, needs, before we go design a prototype in Rocky.
>>
>> Sorry dude. Already prototyping :)
>
> A related use case to all this work that takes it a step further:
>
> I think it would be useful if we could eventually further break down
> "openstack undercloud deploy" into just the pieces needed to:
>
> - start an ephemeral Heat container
> - create the Heat stack passing all requested -e's
> - run config-download and save the output

Yes! This is pretty similar to what we outlined at the PTG here [1] (lines 21-23).

The high-level workflow here is already possible now if you use the
new --output-only option to config-download [2], and it is exactly what I
was doing with the Talon prototype: essentially trying to take it as
far as possible with our existing commands and then bring that to the
group as a "how do we want to package this better?" discussion.

One difference I've taken: instead of using a Heat container I
use a python-tripleoclient container (which I aim to push to
Kolla if I can whittle it down). This has the benefit of letting you
do everything in a single container. Also I needed a few other
cherry-picks [3] to pull it off, to do things like make it so that
docker-puppet.py consumes puppet-tripleo from within the container
instead of bind-mounting it from the host, and to disable puppet from
running on the host machine entirely (something I do not want on my
laptop).

The nice thing about all of this is you end up with a self-contained
'Heat template -> Ansible' generator that can translate a set of heat
templates into ansible playbooks which you then just run. What it does
highlight, however, is that there are perhaps still some dependencies that
must be on each host in order for our Ansible playbooks to work.
Things like paunch, and most of the heat-agent hooks still need to be
on each host OS or the resulting playbooks won't work. Continuing the
work to convert things to pure Ansible without requiring any
heat-agents 

Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-02 Thread Dan Prince
On Thu, Mar 29, 2018 at 5:32 PM, Emilien Macchi  wrote:
> Greeting folks,
>
> During the last PTG we spent time discussing some ideas around an All-In-One
> installer, using 100% of the TripleO bits to deploy a single node OpenStack
> very similar with what we have today with the containerized undercloud and
> what we also have with other tools like Packstack or Devstack.
>
> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
>
> One of the problems that we're trying to solve here is to give a simple tool
> for developers so they can both easily and quickly deploy an OpenStack for
> their needs.
>
> "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly
> and without complexity, reproducing the same exact same tooling as TripleO
> is using."
> "As a Neutron developer, I need to develop a feature in Neutron and test it
> with TripleO in my local env."
> "As a TripleO dev, I need to implement a new service and test its deployment
> in my local env."
> "As a developer, I need to reproduce a bug in TripleO CI that blocks the
> production chain, quickly and simply."
>
> Probably more use cases, but to me that's what came into my mind now.
>
> Dan kicked-off a doc patch a month ago:
> https://review.openstack.org/#/c/547038/
> And I just went ahead and proposed a blueprint:
> https://blueprints.launchpad.net/tripleo/+spec/all-in-one
> So hopefully we can start prototyping something during Rocky.

I've actually started hacking a bit here:

https://github.com/dprince/talon

Very early and I haven't committed everything yet. (Probably wouldn't
have announced it to the list yet but it might help some understand
the use case).

I'm running this on my laptop to develop TripleO containers with no
extra VM involved.

P.S. We should call it Talon!

Dan

>
> Before talking about the actual implementation, I would like to gather
> feedback from people interested by the use-cases. If you recognize yourself
> in these use-cases and you're not using TripleO today to test your things
> because it's too complex to deploy, we want to hear from you.
> I want to see feedback (positive or negative) about this idea. We need to
> gather ideas, use cases, needs, before we go design a prototype in Rocky.

Sorry dude. Already prototyping :)

>
> Thanks everyone who'll be involved,
> --
> Emilien Macchi
>



Re: [openstack-dev] [yaql] [tripleo] Backward incompatible change in YAQL minor version

2018-02-17 Thread Dan Prince
Thanks for the update Emilien. A couple of things to add:

1) This was really difficult to pin-point via the Heat stack error
message ('list index out of range'). I actually had to go and add
LOG.debug statements to Heat to get to the bottom of it. I aim to sync
with a few of the Heat folks next week on this to see if we can do
better here.

2) I had initially thought it would have been much better to revert
the (breaking) change to python-yaql. That said, it was from 2016! So I
think the window of opportunity for a revert has long since
passed. Sounds like we need to publish the yaql
package more often in RDO, etc. So your patch to update our queries is
probably our only option.

On Fri, Feb 16, 2018 at 8:36 PM, Emilien Macchi  wrote:
> Upgrading YAQL from 1.1.0 to 1.1.3 breaks advanced queries with groupBy
> aggregation.
>
> The commit that broke it is
> https://github.com/openstack/yaql/commit/3fb91784018de335440b01b3b069fe45dc53e025
>
> It broke TripleO: https://bugs.launchpad.net/tripleo/+bug/1750032
> But Alex and I figured (after a strong headache) that we needed to update
> the query like this: https://review.openstack.org/545498
>
> It would be great to avoid this kind of change within minor versions, please
> please.
>
> Happy weekend,
>
> PS: I'm adding YAQL to my linkedin profile right now.

Be careful here. Do you really want to write YAQL queries all day!

Dan

> --
> Emilien Macchi
>



Re: [openstack-dev] [TripleO] Proposing Ronelle Landy for Tripleo-Quickstart/Extras/CI core

2017-11-30 Thread Dan Prince
+1

On Wed, Nov 29, 2017 at 2:34 PM, John Trowbridge  wrote:

> I would like to propose Ronelle be given +2 for the above repos. She has
> been a solid contributor to tripleo-quickstart and extras almost since the
> beginning. She has solid review numbers, but more importantly has always
> done quality reviews. She also has been working in the very intense rover
> role on the CI squad in the past CI sprint, and has done very well in that
> role.
>


Re: [openstack-dev] [tripleo] Updates on the TripleO on Kubernetes work

2017-11-30 Thread Dan Prince
On Thu, Nov 16, 2017 at 11:56 AM, James Slagle 
wrote:
>
>
> When we consume these ansible-role-k8s-* roles from t-h-t, I think
> that should be a stepping stone towards migrating away from having to
> use Heat to deploy and configure those services. We know that these
> new ansible roles will be deployable standalone, and the interface to
> do that should be typical ansible best practices (role defaults, vars,
> etc).
>
> We can offer a mechanism such that one can migrate from a
> tripleo-heat-templates/docker/services/database/mysql.yaml deployed
> mariadb to one deployed via
> ansible-role-k8s-mariadb. The config-download mechanism could be
> updated to generate or pull from Heat the necessary ansible vars files
> for configuring the roles. We should make sure that the integration
> with tripleo-heat-templates results in the same inputs/outputs that
> someone would consume if using the roles standalone. Future iterations
> would then not have to require Heat for that service at all, unless
> the operator wanted to continue to configure the service via Heat
> parameters/environments.
>
> What I'm trying to propose is a path towards deprecating the Heat
> parameter/environment driven and hieradata driven approach to
> configuring the services. The ansible-role-k8s-* roles should offer a
> new interface, so I don't think we have to remain tied to Heat
> forever, so we should consider what we want the long term goal to be
> in an ideal world, and take some iterative steps to get there.
>

I like the idea of a leaner set of deployment tooling very much. I think we
are to the point where we need to consider "clean rooming" some things.

That said in moving towards a new interface like you talk about I think we
do have to have feature parity with things like composability and parameter
validation. Otherwise we are going to break things we currently rely on in
our high level (Mistral) deployment workflow. Specifically, I've yet to see
something that would give us the nested parameter validations we leverage
from Heat.

Dan


Re: [openstack-dev] [tripleo] Updates on the TripleO on Kubernetes work

2017-11-30 Thread Dan Prince
On Fri, Nov 17, 2017 at 4:43 AM, Steven Hardy  wrote:
>
>
> In the ansible/kubernetes model, it could work like:
>
> 1. Ansible role makes k8s API call creating pod with multiple containers
> 2. Pod starts temporary container that runs puppet, config files
> written out to shared volume
> 3. Service container starts, config consumed from shared volume
> 4. Optionally run temporary bootstrapping container inside pod
>
> This sort of pattern is documented here:
>
> https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
>
>
>
Regarding the use of the shared volume, I agree this is a nice iteration. We
considered using it within Pike as well, but due to the hybrid nature of the
deployment and the desire to keep config files easily debuggable on
the host itself, we ended up not going there.

In Queens, however, we are aiming for more or less full containerization, so
we could consider the merits of this approach again. Just pointing out that
I don't think Kubernetes is a requirement in order to be able to proceed
with some of this improvement.
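
For reference, that pattern in pod-spec form -- a minimal sketch assuming an
init container does the config generation; the image names are hypothetical
and this is not our actual templates:

apiVersion: v1
kind: Pod
metadata:
  name: service-with-generated-config
spec:
  volumes:
  - name: config
    emptyDir: {}             # shared volume, pod-lifetime only
  initContainers:
  - name: generate-config    # temporary puppet/config step (step 2 above)
    image: config-gen-image  # hypothetical image name
    volumeMounts:
    - name: config
      mountPath: /var/lib/config-data
  containers:
  - name: service            # consumes the generated config (step 3 above)
    image: service-image     # hypothetical image name
    volumeMounts:
    - name: config
      mountPath: /etc/service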

Dan


Re: [openstack-dev] [all] Community managed tech/dev blog: Call for opinions and ideas

2017-11-27 Thread Dan Prince
On Mon, Nov 27, 2017 at 8:55 AM, Flavio Percoco  wrote:

> Greetings,
>
> Last Thursday[0], at the TC office hours, we brainstormed a bit around the
> idea
> of having a tech blog. This idea came first from Joshua Harlow and it was
> then
> briefly discussed at the summit too.
>
> The idea, we have gathered, is to have a space where the community could
> write
> technical posts about OpenStack. The idea is not to have an aggregator
> (that's
> what our planet[1] is for) but a place to write original and curated
> content.
>

Why not just write articles on existing blogs, link them into planet, and
then if they are really good promote them at a higher level?

Having a separate blog that is maintained by a few seems a bit elitist to
me.

Dan


> During the conversation, we argued about what kind of content would be
> acceptable for this platform. Here are some ideas of things we could have
> there:
>
> - Posts that are dev-oriented (e.g: new functions on an oslo lib)
> - Posts that facilitate upstream development (e.g: My awesome dev setup)
> - Deep dive into libvirt internals
>

What is really missing in our current infrastructure setup that
actually prevents any of the above?


> - ideas?
>
> As Chris Dent pointed out on that conversation, we should avoid making this
> place a replacement for things that would otherwise go on the mailing list
> -
> activity reports, for example. Having dev news in this platform, we would
> overlap with things that go already on the mailing list and, arguably, we
> would
> be defeating the purpose of the platform. But, there might be room for
> both(?)
>
> Ultimately, we should avoid topics promoting new features in services as
> that's what
> superuser[2] is for.
>
> So, what are your thoughts about this? What kind of content would you
> rather
> have posted here? Do you like the idea at all?
>
> [0] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-11-23.log.html#t2017-11-23T15:01:25
> [1] http://planet.openstack.org/
> [2] http://superuser.openstack.org/
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nominate chem and matbu for tripleo-core !

2017-11-13 Thread Dan Prince
+1

On Thu, Nov 9, 2017 at 3:44 AM, Marios Andreou  wrote:

> Hello fellow owls,
>
> I would like to nominate (and imo these are both long overdue already):
>
> Sofer Athlan Guyot (chem)  and
>
> Mathieu Bultel (matbu)
>
> to tripleo-core. They have both made many many core contributions to the
> upgrades & updates over the last 3 cycles touching many of the tripleo
> repos (tripleo-heat-templates, tripleo-common, python-tripleoclient,
> tripleo-ci, tripleo-docs and others tripleo-quickstart/extras too unless am
> mistaken).
>
> IMO their efforts and contributions are invaluable for the upgrades squad
> (and beyond - see openstack overcloud config download for example) and we
> will be very lucky to have them as fully voting cores.
>
> Please vote with +1 or -1 for either or both chem and matbu - I'll keep it
> open for a week as customary,
>
> thank you,
>
> marios
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing John Fulton core on TripleO

2017-11-13 Thread Dan Prince
+1

On Wed, Nov 8, 2017 at 5:24 PM, Giulio Fidente  wrote:

> Hi,
>
> I would like to propose John Fulton core on TripleO.
>
> I think John did awesome work during the Pike cycle around the
> integration of ceph-ansible as a replacement for puppet-ceph, for the
> deployment of Ceph in containers.
>
> I think John has a good understanding of many different parts of TripleO
> given that the ceph-ansible integration has been a complicated effort
> involving changes in heat/tht/mistral workflows/ci and, last but not
> least, docs. He is more recently getting busier with reviews outside
> his main comfort zone.
>
> I am sure John would be a great addition to the team and I welcome him
> first to tune into radioparadise with the rest of us when joining #tripleo
>
> Feedback is welcomed!
> --
> Giulio Fidente
> GPG KEY: 08D733BA
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-17 Thread Dan Prince
On Tue, 2017-10-17 at 11:46 +, milanisko k wrote:
> 
> How about the shared container? Wouldn't it be better not to have to
> rely on t-h-t, especially if we're "scheduling" (and probably
> configuring) the services as a single logical entity?

The containers architecture for Pike and Queens is very much oriented
around preserving the way we already deployed the services on
baremetal... but moving them into containers. So for Ironic inspector
we had 2 services (2 systemd scripts), both living in separate
containers. Due to the shared nature of this architecture with regard
to network and host access, this works fine.

In the future, as we move towards Kubernetes, rearchitecting the
services so they work better in containers is totally fine. If the
services are that tightly coupled then why not just have one launch the
other? Then they could live in a single container and have a common
launch point. Seems fine to me, but it certainly isn't a requirement to
get these up and running in the current architecture.


> Also would allow us to get rid of iptables and better encapsulate the
> inspector services.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-17 Thread Dan Prince
On Tue, 2017-10-17 at 10:06 +, milanisko k wrote:
> 
> Does it mean dnsmasq was run from a stand-alone container?

Yes. There are separate containers for the ironic-inspector and
dnsmasq.

> 
> Could you please point me (in the patch probably) to the spot where
> we configure inspector container to be able to talk to the iptables
> to filter the DHCP traffic for dnsmasq?

Both services (ironic-inspector and dnsmasq) are using --net=host and
--privileged. This essentially has them on the same shared host
network, so the services can interact with the same iptables rules.

> 
> I guess this configuration binds the dnsmasq container to be
> "scheduled" together with inspector container on the same node
> (because of the iptables).

Both services are controlled via the same Heat template, and as such,
even though they are in separate containers, we can guarantee they
always get launched on the same machine.
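(An abridged sketch of the shape of that template's docker_config
section -- not the exact file:)

  docker_config:
    step_4:
      ironic_inspector:
        image: centos-binary-ironic-inspector:latest
        net: host
        privileged: true
        restart: always
      ironic_inspector_dnsmasq:
        # shares the host network namespace, and therefore the same
        # iptables state, with the inspector container above
        image: centos-binary-ironic-inspector:latest
        net: host
        privileged: true
        restart: always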

Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-16 Thread Dan Prince
On Wed, 2017-10-04 at 15:10 +0200, Dmitry Tantsur wrote:
> (top-posting, as it is not a direct response to a specific line)
> 
> This is your friendly reminder that we're not quite near
> containerized 
> ironic-inspector. The THT for it has probably never been tested at
> all, and the 
> iptables magic we do may simply not be containers-compatible. Milan
> would 
> appreciate any help with his ironic-inspector rework.


I spent some time today testing our (very old) ironic-inspector patch
from Pike.

https://review.openstack.org/#/c/457822/

Aside from one tweak I made to run dnsmasq as root (this is how systemd
runs this process as well) the service seems to be working perfectly.
Demo recording of what I did below:

https://asciinema.org/a/wGtvZwE65yoasKrRS8NeGMsrH

Also, I just want to reiterate that the t-h-t architecture is also
capable of serving as a baremetal installation tool. While I would like
to see inspector containerized, if we really need to run it on
baremetal this architecture would support that fine. It is the same
architecture we use for the overcloud, and as such it supports mixing
and matching containers alongside baremetal services.

If that doesn't make sense, let me just say: whatever you plan on doing
to Ironic in Queens, if you plan on supporting it with Puppet on
instack-undercloud, I have no doubts about being able to support it in
the architecture I'm proposing we adopt here... whether we run the
service on baremetal or in a container.

Dan

> 
> Dmitry
> 
> On 10/04/2017 03:00 PM, Dan Prince wrote:
> > On Tue, 2017-10-03 at 16:03 -0600, Alex Schultz wrote:
> > > On Tue, Oct 3, 2017 at 2:46 PM, Dan Prince <dpri...@redhat.com> wrote:
> > > > 
> > > > 
> > > > On Tue, Oct 3, 2017 at 3:50 PM, Alex Schultz <aschultz@redhat.com> wrote:
> > > > > 
> > > > > On Tue, Oct 3, 2017 at 11:12 AM, Dan Prince <dprince@redhat.com> wrote:
> > > > > > On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
> > > > > > > Hey Dan,
> > > > > > > 
> > > > > > > Thanks for sending out a note about this. I have a few
> > > > > > > questions
> > > > > > > inline.
> > > > > > > 
> > > > > > > On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince <dprince@redhat.com> wrote:
> > > > > > > > One of the things the TripleO containers team is
> > > > > > > > planning
> > > > > > > > on
> > > > > > > > tackling
> > > > > > > > in Queens is fully containerizing the undercloud. At
> > > > > > > > the
> > > > > > > > PTG we
> > > > > > > > created
> > > > > > > > an etherpad [1] that contains a list of features that
> > > > > > > > need
> > > > > > > > to be
> > > > > > > > implemented to fully replace instack-undercloud.
> > > > > > > > 
> > > > > > > 
> > > > > > > I know we talked about this at the PTG and I was
> > > > > > > skeptical
> > > > > > > that this
> > > > > > > will land in Queens. With the exception of the
> > > > > > > Container's
> > > > > > > team
> > > > > > > wanting this, I'm not sure there is an actual end user
> > > > > > > who is
> > > > > > > looking
> > > > > > > for the feature so I want to make sure we're not just
> > > > > > > doing
> > > > > > > more work
> > > > > > > because we as developers think it's a good idea.
> > > > > > 
> > > > > > I've heard from several operators that they were actually
> > > > > > surprised we
> > > > > > implemented containers in the Overcloud first. Validating a
> > > > > > new
> > > > > > deployment framework on a single node Undercloud (for
> > > > > > operators) before
> > > > > > overtaking their entire cloud deployment has a lot of merit
> > > > > > to
> > > > > > it IMO.
> > > > > > When you share the same deployment architecture across the

Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-04 Thread Dan Prince
On Wed, Oct 4, 2017 at 9:10 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:

> (top-posting, as it is not a direct response to a specific line)
>
> This is your friendly reminder that we're not quite near containerized
> ironic-inspector. The THT for it has probably never been tested at all, and
> the iptables magic we do may simply not be containers-compatible. Milan
> would appreciate any help with his ironic-inspector rework.
>
>
Thanks Dmitry. Exactly the update I was looking for. Look forward to
syncing w/ Milan on this.

Dan


> Dmitry
>
>
> On 10/04/2017 03:00 PM, Dan Prince wrote:
>
>> On Tue, 2017-10-03 at 16:03 -0600, Alex Schultz wrote:
>>
>>> On Tue, Oct 3, 2017 at 2:46 PM, Dan Prince <dpri...@redhat.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, Oct 3, 2017 at 3:50 PM, Alex Schultz <aschu...@redhat.com>
>>>> wrote:
>>>>
>>>>>
>>>>> On Tue, Oct 3, 2017 at 11:12 AM, Dan Prince <dpri...@redhat.com>
>>>>> wrote:
>>>>>
>>>>>> On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
>>>>>>
>>>>>>> Hey Dan,
>>>>>>>
>>>>>>> Thanks for sending out a note about this. I have a few
>>>>>>> questions
>>>>>>> inline.
>>>>>>>
>>>>>>> On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince <dpri...@redhat.com> wrote:
>>>>>>>
>>>>>>>> One of the things the TripleO containers team is planning
>>>>>>>> on
>>>>>>>> tackling
>>>>>>>> in Queens is fully containerizing the undercloud. At the
>>>>>>>> PTG we
>>>>>>>> created
>>>>>>>> an etherpad [1] that contains a list of features that need
>>>>>>>> to be
>>>>>>>> implemented to fully replace instack-undercloud.
>>>>>>>>
>>>>>>>>
>>>>>>> I know we talked about this at the PTG and I was skeptical
>>>>>>> that this
>>>>>>> will land in Queens. With the exception of the Container's
>>>>>>> team
>>>>>>> wanting this, I'm not sure there is an actual end user who is
>>>>>>> looking
>>>>>>> for the feature so I want to make sure we're not just doing
>>>>>>> more work
>>>>>>> because we as developers think it's a good idea.
>>>>>>>
>>>>>>
>>>>>> I've heard from several operators that they were actually
>>>>>> surprised we
>>>>>> implemented containers in the Overcloud first. Validating a new
>>>>>> deployment framework on a single node Undercloud (for
>>>>>> operators) before
>>>>>> overtaking their entire cloud deployment has a lot of merit to
>>>>>> it IMO.
>>>>>> When you share the same deployment architecture across the
>>>>>> overcloud/undercloud it puts us in a better position to decide
>>>>>> where to
>>>>>> expose new features to operators first (when creating the
>>>>>> undercloud or
>>>>>> overcloud for example).
>>>>>>
>>>>>> Also, if you read my email again I've explicitly listed the
>>>>>> "Containers" benefit last. While I think moving the undercloud
>>>>>> to
>>>>>> containers is a great benefit all by itself this is more of a
>>>>>> "framework alignment" in TripleO and gets us out of maintaining
>>>>>> huge
>>>>>> amounts of technical debt. Re-using the same framework for the
>>>>>> undercloud and overcloud has a lot of merit. It effectively
>>>>>> streamlines
>>>>>> the development process for service developers, and 3rd parties
>>>>>> wishing
>>>>>> to integrate some of their components on a single node. Why be
>>>>>> forced
>>>>>> to create a multi-node dev environment if you don't have to
>>>>>> (aren't
>>>>>> using HA for example).
>>>>>>
>>>>>> Lets be honest. While instack-undercloud helped solve the old

Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-04 Thread Dan Prince
On Tue, 2017-10-03 at 16:03 -0600, Alex Schultz wrote:
> On Tue, Oct 3, 2017 at 2:46 PM, Dan Prince <dpri...@redhat.com>
> wrote:
> > 
> > 
> > On Tue, Oct 3, 2017 at 3:50 PM, Alex Schultz <aschu...@redhat.com>
> > wrote:
> > > 
> > > On Tue, Oct 3, 2017 at 11:12 AM, Dan Prince <dpri...@redhat.com>
> > > wrote:
> > > > On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
> > > > > Hey Dan,
> > > > > 
> > > > > Thanks for sending out a note about this. I have a few
> > > > > questions
> > > > > inline.
> > > > > 
> > > > > On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince <dpri...@redhat.com> wrote:
> > > > > > One of the things the TripleO containers team is planning
> > > > > > on
> > > > > > tackling
> > > > > > in Queens is fully containerizing the undercloud. At the
> > > > > > PTG we
> > > > > > created
> > > > > > an etherpad [1] that contains a list of features that need
> > > > > > to be
> > > > > > implemented to fully replace instack-undercloud.
> > > > > > 
> > > > > 
> > > > > I know we talked about this at the PTG and I was skeptical
> > > > > that this
> > > > > will land in Queens. With the exception of the Container's
> > > > > team
> > > > > wanting this, I'm not sure there is an actual end user who is
> > > > > looking
> > > > > for the feature so I want to make sure we're not just doing
> > > > > more work
> > > > > because we as developers think it's a good idea.
> > > > 
> > > > I've heard from several operators that they were actually
> > > > surprised we
> > > > implemented containers in the Overcloud first. Validating a new
> > > > deployment framework on a single node Undercloud (for
> > > > operators) before
> > > > overtaking their entire cloud deployment has a lot of merit to
> > > > it IMO.
> > > > When you share the same deployment architecture across the
> > > > overcloud/undercloud it puts us in a better position to decide
> > > > where to
> > > > expose new features to operators first (when creating the
> > > > undercloud or
> > > > overcloud for example).
> > > > 
> > > > Also, if you read my email again I've explicitly listed the
> > > > "Containers" benefit last. While I think moving the undercloud
> > > > to
> > > > containers is a great benefit all by itself this is more of a
> > > > "framework alignment" in TripleO and gets us out of maintaining
> > > > huge
> > > > amounts of technical debt. Re-using the same framework for the
> > > > undercloud and overcloud has a lot of merit. It effectively
> > > > streamlines
> > > > the development process for service developers, and 3rd parties
> > > > wishing
> > > > to integrate some of their components on a single node. Why be
> > > > forced
> > > > to create a multi-node dev environment if you don't have to
> > > > (aren't
> > > > using HA for example).
> > > > 
> > > > Lets be honest. While instack-undercloud helped solve the old
> > > > "seed" VM
> > > > issue it was outdated the day it landed upstream. The entire
> > > > premise of
> > > > the tool is that it uses old style "elements" to create the
> > > > undercloud
> > > > and we moved away from those as the primary means driving the
> > > > creation
> > > > of the Overcloud years ago at this point. The new
> > > > 'undercloud_deploy'
> > > > installer gets us back to our roots by once again sharing the
> > > > same
> > > > architecture to create the over and underclouds. A demo from
> > > > long ago
> > > > expands on this idea a bit:  https://www.youtube.com/watch?v=y1qMDLAf26Q=5s
> > > > 
> > > > In short, we aren't just doing more work because developers
> > > > think it is
> > > > a good idea. This has potential to be one of the most useful
> > > > architectural changes in TripleO that we've made in years

Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-03 Thread Dan Prince
On Tue, Oct 3, 2017 at 3:50 PM, Alex Schultz <aschu...@redhat.com> wrote:

> On Tue, Oct 3, 2017 at 11:12 AM, Dan Prince <dpri...@redhat.com> wrote:
> > On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
> >> Hey Dan,
> >>
> >> Thanks for sending out a note about this. I have a few questions
> >> inline.
> >>
> >> On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince <dpri...@redhat.com>
> >> wrote:
> >> > One of the things the TripleO containers team is planning on
> >> > tackling
> >> > in Queens is fully containerizing the undercloud. At the PTG we
> >> > created
> >> > an etherpad [1] that contains a list of features that need to be
> >> > implemented to fully replace instack-undercloud.
> >> >
> >>
> >> I know we talked about this at the PTG and I was skeptical that this
> >> will land in Queens. With the exception of the Container's team
> >> wanting this, I'm not sure there is an actual end user who is looking
> >> for the feature so I want to make sure we're not just doing more work
> >> because we as developers think it's a good idea.
> >
> > I've heard from several operators that they were actually surprised we
> > implemented containers in the Overcloud first. Validating a new
> > deployment framework on a single node Undercloud (for operators) before
> > overtaking their entire cloud deployment has a lot of merit to it IMO.
> > When you share the same deployment architecture across the
> > overcloud/undercloud it puts us in a better position to decide where to
> > expose new features to operators first (when creating the undercloud or
> > overcloud for example).
> >
> > Also, if you read my email again I've explicitly listed the
> > "Containers" benefit last. While I think moving the undercloud to
> > containers is a great benefit all by itself this is more of a
> > "framework alignment" in TripleO and gets us out of maintaining huge
> > amounts of technical debt. Re-using the same framework for the
> > undercloud and overcloud has a lot of merit. It effectively streamlines
> > the development process for service developers, and 3rd parties wishing
> > to integrate some of their components on a single node. Why be forced
> > to create a multi-node dev environment if you don't have to (aren't
> > using HA for example).
> >
> > Lets be honest. While instack-undercloud helped solve the old "seed" VM
> > issue it was outdated the day it landed upstream. The entire premise of
> > the tool is that it uses old style "elements" to create the undercloud
> > and we moved away from those as the primary means driving the creation
> > of the Overcloud years ago at this point. The new 'undercloud_deploy'
> > installer gets us back to our roots by once again sharing the same
> > architecture to create the over and underclouds. A demo from long ago
> > expands on this idea a bit:  https://www.youtube.com/watch?v=y1qMDLAf26Q=5s
> >
> > In short, we aren't just doing more work because developers think it is
> > a good idea. This has potential to be one of the most useful
> > architectural changes in TripleO that we've made in years. Could
> > significantly decrease our CI reasources if we use it to replace the
> > existing scenarios jobs which take multiple VMs per job. Is a building
> > block we could use for other features like and HA undercloud. And yes,
> > it does also have a huge impact on developer velocity in that many of
> > us already prefer to use the tool as a means of streamlining our
> > dev/test cycles to minutes instead of hours. Why spend hours running
> > quickstart Ansible scripts when in many cases you can just doit.sh:
> > https://github.com/dprince/undercloud_containers/blob/master/doit.sh
> >
>
> So like I've repeatedly said, I'm not completely against it as I agree
> what we have is not ideal.  I'm not -2, I'm -1 pending additional
> information. I'm trying to be realistic and reduce our risk for this
> cycle.


This greatly reduces our complexity, I think: once it is completed it
will allow us to eliminate two projects (instack and instack-undercloud)
and the maintenance thereof. Furthermore, this dovetails nicely with the
Ansible effort.


>  IMHO doit.sh is not acceptable as an undercloud installer and
> this is what I've been trying to point out as the actual impact to the
> end user who has to use this thing.


doit.sh is an example of where the effort is today. It is essentially the
same stuff we document online here:
http://tripleo.org/install/c

Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-03 Thread Dan Prince
On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
> Hey Dan,
> 
> Thanks for sending out a note about this. I have a few questions
> inline.
> 
> On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince <dpri...@redhat.com>
> wrote:
> > One of the things the TripleO containers team is planning on
> > tackling
> > in Queens is fully containerizing the undercloud. At the PTG we
> > created
> > an etherpad [1] that contains a list of features that need to be
> > implemented to fully replace instack-undercloud.
> > 
> 
> I know we talked about this at the PTG and I was skeptical that this
> will land in Queens. With the exception of the Container's team
> wanting this, I'm not sure there is an actual end user who is looking
> for the feature so I want to make sure we're not just doing more work
> because we as developers think it's a good idea.

I've heard from several operators that they were actually surprised we
implemented containers in the Overcloud first. Validating a new
deployment framework on a single node Undercloud (for operators) before
overtaking their entire cloud deployment has a lot of merit to it IMO.
When you share the same deployment architecture across the
overcloud/undercloud it puts us in a better position to decide where to
expose new features to operators first (when creating the undercloud or
overcloud for example).

Also, if you read my email again I've explicitly listed the
"Containers" benefit last. While I think moving the undercloud to
containers is a great benefit all by itself this is more of a
"framework alignment" in TripleO and gets us out of maintaining huge
amounts of technical debt. Re-using the same framework for the
undercloud and overcloud has a lot of merit. It effectively streamlines
the development process for service developers, and 3rd parties wishing
to integrate some of their components on a single node. Why be forced
to create a multi-node dev environment if you don't have to (aren't
using HA for example).

Let's be honest. While instack-undercloud helped solve the old "seed" VM
issue it was outdated the day it landed upstream. The entire premise of
the tool is that it uses old style "elements" to create the undercloud,
and we moved away from those as the primary means of driving the
creation of the Overcloud years ago at this point. The new
'undercloud_deploy' installer gets us back to our roots by once again
sharing the same architecture to create the over- and underclouds. A
demo from long ago expands on this idea a bit:
https://www.youtube.com/watch?v=y1qMDLAf26Q=5s

In short, we aren't just doing more work because developers think it is
a good idea. This has the potential to be one of the most useful
architectural changes in TripleO that we've made in years. It could
significantly decrease our CI resources if we use it to replace the
existing scenario jobs, which take multiple VMs per job. It is a
building block we could use for other features like an HA undercloud.
And yes, it also has a huge impact on developer velocity in that many of
us already prefer to use the tool as a means of streamlining our
dev/test cycles to minutes instead of hours. Why spend hours running
quickstart Ansible scripts when in many cases you can just doit.sh:
https://github.com/dprince/undercloud_containers/blob/master/doit.sh

Lastly, this isn't just a containers team thing. We've been using the
undercloud_deploy architecture across many teams to help develop for
almost an entire cycle now. Huge benefits. I would go as far as saying
that undercloud_deploy was *the* biggest feature in Pike that enabled
us to bang out a majority of the docker/service templates in
tripleo-heat-templates.

>  Given that etherpad
> appears to contain a pretty big list of features, are we going to be
> able to land all of them by M2?  Would it be beneficial to craft a
> basic spec related to this to ensure we are not missing additional
> things?

I'm not sure there is a lot of value in creating a spec at this point.
We've already got an approved blueprint for the feature in Pike here:
https://blueprints.launchpad.net/tripleo/+spec/containerized-undercloud

I think we might get more velocity out of grooming the etherpad and
perhaps dividing this work among the appropriate teams.

> 
> > Benefits of this work:
> > 
> >  -Alignment: aligning the undercloud and overcloud installers gets
> > rid
> > of dual maintenance of services.
> > 
> 
> I like reusing existing stuff. +1
> 
> >  -Composability: tripleo-heat-templates and our new Ansible
> > architecture around it are composable. This means any set of
> > services
> > can be used to build up your own undercloud. In other words the
> > framework here isn't just useful for "underclouds". It is really
> > the ability to deploy TripleO on a single node with no external
> > dependencies.

[openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-02 Thread Dan Prince
One of the things the TripleO containers team is planning on tackling
in Queens is fully containerizing the undercloud. At the PTG we created
an etherpad [1] that contains a list of features that need to be
implemented to fully replace instack-undercloud.

Benefits of this work:

 -Alignment: aligning the undercloud and overcloud installers gets rid
of dual maintenance of services.

 -Composability: tripleo-heat-templates and our new Ansible
architecture around it are composable. This means any set of services
can be used to build up your own undercloud. In other words the
framework here isn't just useful for "underclouds". It is really the
ability to deploy TripleO on a single node with no external
dependencies. Single node TripleO installer. The containers team has
already been leveraging existing (experimental) undercloud_deploy
installer to develop services for Pike.

 -Development: The containerized undercloud is a great development
tool. It utilizes the same framework as the full overcloud deployment
but takes about 20 minutes to deploy.  This means faster iterations,
less waiting, and more testing.  Having this be a first class citizen
in the ecosystem will ensure this platform is functioning for
developers to use all the time.

 -CI resources: better use of CI resources. At the PTG we received
feedback from the OpenStack infrastructure team that our upstream CI
resource usage is quite high at times (even as high as 50% of the
total). Because of the shared framework and single node capabilities we
can re-architect much of our upstream CI matrix around single node.
We no longer require multinode jobs to be able to test many of the
services in tripleo-heat-templates... we can just use a single cloud VM
instead. We'll still want multinode undercloud -> overcloud jobs for
testing things like HA and baremetal provisioning. But we can cover a
large set of the services (in particular many of the new scenario jobs
we added in Pike) with single node CI test runs in much less time.

 -Containers: There are no plans to containerize the existing instack-
undercloud work. By moving our undercloud installer to a tripleo-heat-
templates and Ansible architecture we can leverage containers.
Interestingly, the same installer also supports baremetal (package)
installation at this point. Like the overcloud, however, I think making
containers our undercloud default would better align the TripleO
tooling.

We are actively working through a few issues with the deployment
framework Ansible effort to fully integrate that into the undercloud
installer. We are also reaching out to other teams like the UI and
Security folks to coordinate the efforts around those components. If
there are any questions about the effort or you'd like to be involved
in the implementation let us know. Stay tuned for more specific updates
as we organize to get as much of this in M1 and M2 as possible.

On behalf of the containers team,

Dan

[1] https://etherpad.openstack.org/p/tripleo-queens-undercloud-containers

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session

2017-09-25 Thread Dan Prince
On Thu, Sep 21, 2017 at 8:53 AM, Jiří Stránský  wrote:

> On 21.9.2017 12:31, Giulio Fidente wrote:
>
>> On 09/20/2017 07:36 PM, James Slagle wrote:
>>
>>> On Tue, Sep 19, 2017 at 8:37 AM, Giulio Fidente 
>>> wrote:
>>>
 On 09/18/2017 05:37 PM, James Slagle wrote:

> - The entire sequence and flow is driven via Mistral on the Undercloud
> by default. This preserves the API layer and provides a clean reusable
> interface for the CLI and GUI.
>

 I think it's worth saying that we want to move the deployment steps out
 of heat and into ansible, not into mistral, so that mistral will run the
 workflow only once and let ansible go through the steps

 I think having the steps in mistral would be a nice option to be able to
 rerun easily a particular deployment step from the GUI, versus having
 them in ansible which is instead a better option for CLI users ... but
 it looks like having them in ansible is the only option which permits us
 to reuse the same code to deploy an undercloud because having the steps
 in mistral would require the undercloud installation itself to depend on
 mistral which we don't want to

 James, Dan, please comment on the above if I am wrong

>>>
>>> That's correct. We don't want to require Mistral to install the
>>> Undercloud. However, I don't think that necessarily means it has to be
>>> a single call to ansible-playbook. We could have multiple invocations
>>> of ansible-playbook. Both Mistral and CLI code for installing the
>>> undercloud could handle that easily.
>>>
>>> You wouldn't be able to interleave an external playbook among the
>>> deploy steps however. That would have to be done under a single call
>>> to ansible-playbook (at least how that is written now). We could
>>> however have hooks that could serve as integration points to call
>>> external playbooks after each step.
>>>
>>
>> the benefits of driving the steps from mistral are that then we could
>> also interleave the deployment steps and we won't need the
>> ansible-playbook hook for the "external" services:
>>
>> 1) collect the ansible tasks *and* the workflow_tasks (per step) from heat
>>
>> 2) launch the stepN deployment workflow (ansible-playbook)
>>
>> 3) execute any workflow_task defined for stepN (like ceph-ansible
>> playbook)
>>
>> 4) repeat 2 and 3 for stepN+1
>>
>> I think this would also provide a nice interface for the UI ... but then
>> we'd need mistral to be able to deploy the undercloud
>>
>>
> Alternatively we could do the main step loop in Ansible directly, and have
> the tasks do whatever they need to get the particular service deployed,
> up to launching a nested ansible-playbook run if that's what it takes.
>
> That way we could run the whole thing end-to-end via ansible-playbook, or
> if needed one could execute smaller bits by themselves (steps or nested
> playbook runs) -- that capability is not baked in by default, but i think
> we could make it so.
>

This was the idea that had the most traction at the PTG when we discussed
it. Things can still be interleaved across the installers (stepwise) but we
effectively eliminate the complexity of having multiple tools involved
within the main deploy step loop as you described it.

I think we should consider making it so that the main Ansible loop can
call any external installer in a stepwise fashion though. It doesn't
have to be just Ansible that it calls. In this manner we would be
supporting calling into multiple phases of an external installer.

During the undercloud deployment we get all the benefits of Ansible driving
our primary deployment loop and can still call into external installers
like Kubernetes if we want to. On the overcloud we'd still be leveraging
the high level Mistral workflow to kick off the initial Ansible
playbooks... but once that happens it would be Ansible driving any external
installers directly.
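(A rough sketch of that outer loop in Ansible -- the file name and step
count are illustrative:)

  - hosts: overcloud
    tasks:
      # each step's task file can do whatever a service needs,
      # including launching a nested ansible-playbook run (e.g.
      # ceph-ansible) or calling into another external installer
      - include: deploy-steps-tasks.yaml step={{ item }}
        with_sequence: start=1 end=5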

Dan


>
> Also the interface for services would be clean and simple -- it's always
> the ansible tasks.
>
> And Mistral-less use cases become easier to handle too (= undercloud
> installation when Mistral isn't present yet, or development envs when you
> want to tune the playbook directly without being forced to go through
> Mistral).
>
> Logging becomes a bit more unwieldy in this scenario though, as for the
> nested ansible-playbook execution, all output would go into a task in the
> outer playbook, which would be harder to follow and the log of the outer
> playbook could be huge.
>
> So this solution is no silver bullet, but from my current point of view it
> seems a bit less conceptually foreign than using Mistral to provide step
> loop functionality to Ansible, which should be able to handle that on its
> own.
>
>
> - It would still be possible to run ansible-playbook directly for
> various use cases (dev/test/POC/demos). This preserves the quick
> iteration via Ansible that is often desired.
>

Re: [openstack-dev] [tripleo] TripleO UI and CLI feature parity

2017-09-13 Thread Dan Prince
On Tue, Sep 12, 2017 at 9:58 PM, Jiri Tomasek  wrote:

> Hello all,
>
> As we are in the planning phase for Queens cycle, I'd like to open the
> discussion on the topic of CLI (tripleoclient) and GUI (tripleo-ui) feature
> parity.
>
> Two years ago, when TripleO UI was started, it was agreed that in order to
> provide API for GUI and to achieve compatibility between GUI and CLI, the
> TripleO business logic gets extracted from tripleoclient into
> tripleo-common library and it will be provided through Mistral actions and
> workflows so GUI and other potential clients can use it.
>
> The problem:
>
> Currently we are facing a recurring problem that when a new feature is
> added to TripleO it often gets a correctly implemented business logic in
> form of utility functions in tripleo-common but those are then used
> directly by tripleoclient. At this point the feature is considered complete
> as it is integrated in CLI and passes CI tests. The consequences of this
> approach are:
>
> - there is no API for the new feature, so the feature is only usable by CLI
> - part of the business logic still lives in tripleoclient
> - GUI can not support the feature and gets behind CLI capabilities
> - GUI contributors need to identify the new feature, raise bugs [1],
> feature then gets API support in tripleo-common
> - API implementation is not tested in CI
> - GUI and CLI diverges in how that feature is operated as business logic
> is implemented twice, which has number of negative effects on TripleO
> functionality (backwards compatibility, upgrades...)
>

Nice summary here. I think we do need to be more careful in how we add
features to python-tripleoclient so that we guard against breaking some
of the UI use cases. We have guarded some features on this front in the
past during the review process. When TripleO validations were added, for
instance, we were extra careful about how we execute Ansible (via
Mistral) so that both the UI and CLI could run it.
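(Roughly the shape of it: both the UI and CLI invoke a Mistral workflow
rather than running Ansible themselves. The workflow and action names
below are illustrative, not necessarily the exact ones in
tripleo-common:)

  version: '2.0'

  run_validation:
    type: direct
    input:
      - validation_name
    tasks:
      run:
        action: tripleo.validations.run_validation
        input:
          validation: <% $.validation_name %>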


>
> The biggest point of divergence between GUI and CLI is that CLI tends to
> generate a set of local files which are then put together when deploy
> command is run whereas GUI operates on Deployment plan which is stored in
> Swift and accessed through API provided by tripleo-common.
>
> The described problem currently affects all of the features which CLI uses
> to generate files which are used in deploy command (e.g. Roles management,
> Container images preparation, Networks management etc.) There is no API for
> those features and therefore GUI can't support them until Mistral actions
> and workflows are implemented for it.
>
> Proposed solution:
>
> We should stop considering TripleO as a set of utility scripts used to
> construct 'deploy' command, we should rather consider TripleO as a
> deployment application which has its internal state (Deployment plan in
> Swift) which is accessed and modified via API.
> TripleO feature should be considered complete when API for it is created.
> CLI should solely use TripleO business logic through Mistral actions and
> workflows provided by tripleo-common - same as any other client has to.
>
> Results of this change are:
> - tripleoclient is extremely lightweight, containing no tripleo business
> logic
>

The python client may be "lightweight" but the downstream packages that
typically install it are extremely heavy. This is largely due to the
instack-undercloud requirements, which could arguably be split out into
a separate subpackage. Just a minor nit: we might consider making the
package lighter for RPMs as well.


> - tripleo-common API is tested in CI as it is used by CLI
> - tripleoclient and tripleo-ui are perfectly compatible, interoperable and
> its features and capabilities match
> - TripleO business logic lives solely in tripleo-common and is operated
> the same way by any client
> - no new backward compatibility problems caused by releasing features
> which are not supported by API are not introduced
> - new features related to Ansible or containers are available to all
> clients
> - upgrades work the same way for deployments deployed via CLI and GUI
> - deployment is replicable without need of keeping the the deploy command
> and generated files around (exported deployment plan has all the
> information)
>
> Note that argument of convenience of being able to modify deployment files
> locally is less and less relevant as we are incrementally moving from
> forcing user to modify templates manually (all the jinja templating,
> roles_data.yaml, network_data.yaml generation, container images
> preparation, derive parameters workflows etc.). In Pike we have made
> changes to simplify the way Deployment plan is stored and it is extremely
> easy to import and export it in case when some manual changes are needed.
>
> Proposed action items:
> - Document what feature complete means in TripleO and how features should
> be accessed by clients
> - Identify steps to achieve feature parity between CLI and GUI

[openstack-dev] [TripleO] breaking changes, new container image parameter formats

2017-07-19 Thread Dan Prince
I wanted to give a quick heads up on some breaking changes that started
landing last week with regards to how container images are specified
with Heat parameters in TripleO. There are a few patches associated
with converting over to the new changes but the primary patches are
listed below here [1] and here [2].

Here are a few examples where I'm using a local (insecure) docker
registry on 172.19.0.2.

The old parameters were:

  
  DockerNamespaceIsRegistry: true
  DockerNamespace: 172.19.0.2:8787/tripleoupstream
  DockerKeystoneImage: centos-binary-keystone:latest
  ...

The new parameters simplify things quite a bit so that each
Docker*Image parameter contains the *entire* URL required to pull the
docker image. It ends up looking something like this:

  ...
  DockerInsecureRegistryAddress: 172.19.0.2:8787/tripleoupstream
  DockerKeystoneImage: 172.19.0.2:8787/tripleoupstream/centos-binary-keystone:latest
  ...

The benefit of the new format is that it makes it possible to pull
images from multiple registries without first staging them to a local
docker registry. Also, we've removed the 'tripleoupstream' default
container names and now require them to be specified. Removing the
default should make it much more explicit that the end user has
specified container image names correctly, rather than falling back to
'tripleoupstream' by accident because one of the container image
parameters didn't get specified. Finally, the simplification of the
DockerInsecureRegistryAddress parameter into a single setting makes
things clearer to the end user as well.

A new python-tripleoclient command makes it possible to generate a
custom heat environment with defaults for your environment and
registry. For the examples above I can run 'overcloud container image
prepare' to generate a custom heat environment like this:

openstack overcloud container image prepare \
  --namespace=172.19.0.2:8787/tripleoupstream \
  --env-file=$HOME/containers.yaml
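The generated containers.yaml then pins every image explicitly,
something along these lines (abridged; the exact contents depend on
your registry):

  parameter_defaults:
    DockerInsecureRegistryAddress: 172.19.0.2:8787/tripleoupstream
    DockerKeystoneImage: 172.19.0.2:8787/tripleoupstream/centos-binary-keystone:latest
    ...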

We chose not to implement backwards compatibility with the old image
format, as almost all of the Heat parameters here are net new in Pike
and as such have not been released yet. The changes here should make it
much easier to manage containers and work with other community docker
registries like RDO, etc.

[1] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/?id=e76d84f784d27a7a2d9e5f3a8b019f8254cb4d6c
[2] https://review.openstack.org/#/c/479398/17

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Role updates

2017-06-13 Thread Dan Prince
On Fri, 2017-06-09 at 09:24 -0600, Alex Schultz wrote:
> Hey folks,
> 
> I wanted to bring to your attention that we've merged the change[0]
> to
> add a basic set of roles that can be combined to create your own
> roles_data.yaml as needed.  With this change the roles_data.yaml and
> roles_data_undercloud.yaml files in THT should not be changed by
> hand.

In general I like the feature.

I added some comments to your validations [1] patch below. We need
those validations, but I think we need to carefully consider adding a
hard dependency on python-tripleoclient simply to have validations in
tree. Wondering if perhaps a t-h-t-utils library project might be in
order here to contain routines we use in t-h-t and in higher level
workflow tools in Mistral and on the CLI? This might also make the
tools/process-templates.py stuff cleaner as well.

Thoughts?
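(For reference, each roles/*.yaml file mentioned below holds a single
role definition, roughly of this shape -- abridged:)

  - name: Compute
    description: Basic Compute Node role
    ServicesDefault:
      - OS::TripleO::Services::NovaCompute
      - OS::TripleO::Services::Ntp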

Dan

> Instead if you have an update to a role, please update the
> appropriate
> roles/*.yaml file. I have proposed a change[1] to THT with additional
> tools to validate that the roles/*.yaml files are updated and that
> there are no unaccounted for roles_data.yaml changes.  Additionally
> this change adds in a new tox target to assist in the generate of
> these basic roles data files that we provide.
> 
> Ideally I would like to get rid of the roles_data.yaml and
> roles_data_undercloud.yaml so that the end user doesn't have to
> generate this file at all but that won't happen this cycle.  In the
> mean time, additional documentation around how to work with roles has
> been added to the roles README[2].
> 
> Thanks,
> -Alex
> 
> [0] https://review.openstack.org/#/c/445687/
> [1] https://review.openstack.org/#/c/472731/
> [2] https://github.com/openstack/tripleo-heat-templates/blob/master/roles/README.rst
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] overcloud containers patches todo

2017-06-05 Thread Dan Prince
On Mon, 2017-06-05 at 16:11 +0200, Jiří Stránský wrote:
> On 5.6.2017 08:59, Sagi Shnaidman wrote:
> > Hi
> > I think a "deep dive" about containers in TripleO and some helpful
> > documentation would help a lot for valuable reviews of these
> > container
> > patches. The knowledge gap that's accumulated here is pretty big.
> 
> As per last week's discussion [1], i hope this is something i could
> do. 
> I'm drafting a preliminary agenda in this etherpad, feel free to add 
> more suggestions if i missed something:
> 
> https://etherpad.openstack.org/p/tripleo-deep-dive-containers
> 
> My current intention is to give a fairly high level view of the
> TripleO 
> container land: from deployment, upgrades, debugging failed CI jobs,
> to 
> how CI itself was done.
> 
> I'm hoping we could make it this Thursday still. If that's too short
> of 
> a notice for several folks or if i hit some trouble with preparation,
> we 
> might move it to 15th. Any feedback is welcome of course.

Nice Jirka. Thanks for organizing this!

Dan

> 
> Have a good day,
> 
> Jirka
> 
> > 
> > Thanks
> > 
> > On Jun 5, 2017 03:39, "Dan Prince" <dpri...@redhat.com> wrote:
> > 
> > > Hi,
> > > 
> > > Any help reviewing the following patches for the overcloud
> > > containerization effort in TripleO would be appreciated:
> > > 
> > > https://etherpad.openstack.org/p/tripleo-containers-todo
> > > 
> > > If you've got new services related to the containerization
> > > efforts feel
> > > free to add them here too.
> > > 
> > > Thanks,
> > > 
> > > Dan
> > > 
> > > _
> > > _
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:un
> > > subscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > 
> > 
> > 
> > 
> > ___
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsu
> > bscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] overcloud containers patches todo

2017-06-04 Thread Dan Prince
Hi,

Any help reviewing the following patches for the overcloud
containerization effort in TripleO would be appreciated:

https://etherpad.openstack.org/p/tripleo-containers-todo

If you've got new services related to the containerization efforts feel
free to add them here too.

Thanks,

Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla] default docker storage backend for TripleO

2017-05-18 Thread Dan Prince
On Thu, 2017-05-18 at 03:29 +, Steven Dake (stdake) wrote:
> My experience with BTRFS has been flawless.  My experience with
> overlayfs is that occasionally (older centos kernels) returned
> as permissions (rather than drwxrwrw).  This most often
> happened after using the yum overlay driver.  I’ve found overlay to
> be pretty reliable as a “read-only” filesystem – eg just serving up
> container images, not persistent storage.

We've now switched to 'overlay2' and things seem happier. CI passes,
and locally I'm not seeing any issues in TripleO CI yet either.

Curious to see if the Kolla tests upstream work with it as well:

https://review.openstack.org/#/c/465920/

Dan

>  
> YMMV.  Overlayfs is the long-term filesystem of choice for the use
> case you outlined.  I’ve heard overlayfs has improved over the last
> year in terms of backport quality so maybe it is approaching ready.
>  
> Regards
> -steve
>  
>  
> From: Steve Baker <sba...@redhat.com>
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" <openstack-dev@lists.openstack.org>
> Date: Wednesday, May 17, 2017 at 7:30 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-...@lists.openstack.org>, "dwa...@redhat.com" <dwalsh@redhat.com>
> Subject: Re: [openstack-dev] [TripleO][Kolla] default docker storage
> backend for TripleO
>  
>  
>  
> On Thu, May 18, 2017 at 12:38 PM, Fox, Kevin M <kevin@pnnl.gov>
> wrote:
> I've only used btrfs and devicemapper on el7. btrfs has worked well.
devicemapper ate my data on multiple occasions. Is Red Hat supporting
> overlay in the el7 kernels now?
>  
> overlay2 is documented as a Technology Preview graph driver in the
> Atomic Host 7.3.4 release notes:
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html-single/release_notes/
>  
>  
>  
> _
> From: Dan Prince [dpri...@redhat.com]
> Sent: Wednesday, May 17, 2017 5:24 PM
> To: openstack-dev
> Subject: [openstack-dev] [TripleO][Kolla] default docker storage
> backend for    TripleO
> 
> TripleO currently uses the default "loopback" docker storage device.
> This is not recommended for production (see 'docker info').
> 
> We've been poking around with docker storage backends in TripleO for
> almost 2 months now here:
> 
>  https://review.openstack.org/#/c/451916/
> 
> For TripleO there are a couple of considerations:
> 
>  - we intend to support in place upgrades from baremetal to
> containers
> 
>  - when doing in place upgrades re-partitioning disks is hard, if not
> impossible. This makes using devicemapper hard.
> 
>  - we'd like to use a docker storage backend that is production
> ready.
> 
>  - our target OS is latest Centos/RHEL 7
> 
> As we approach pike 2 I'm keen to move towards a more production
> docker
> storage backend. Is there consensus that 'overlay2' is a reasonable
> approach to this? Or is it too early to use that with the
> combinations
> above?
> 
> Looking around at what is recommended in other projects it seems to
> be
> a mix as well from devicemapper to btrfs.
> 
> [1] https://docs.openshift.com/container-platform/3.3/install_config/install/host_preparation.html#configuring-docker-storage
> [2] http://git.openstack.org/cgit/openstack/kolla/tree/tools/setup_RedHat.sh#n30
> 
>  
> I'd love to be able to use overlay2. I've CCed Daniel Walsh with the
> hope we can get a general overview of the maturity of overlay2 on
> rhel/centos.
>  
> I tried using overlay2 recently to create an undercloud and hit an
> issue doing a "cp -a *" on deleted files. This was with
> kernel-3.10.0-514.16.1 and docker-1.12.6.
>  
> I want to get to the bottom of it so I'll reproduce and raise a bug
> as appropriate.
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Kolla] default docker storage backend for TripleO

2017-05-17 Thread Dan Prince
TripleO currently uses the default "loopback" docker storage device.
This is not recommended for production (see 'docker info').

We've been poking around with docker storage backends in TripleO for
almost 2 months now here:

 https://review.openstack.org/#/c/451916/

For TripleO there are a couple of considerations:

 - we intend to support in place upgrades from baremetal to containers

 - when doing in place upgrades re-partitioning disks is hard, if not
impossible. This makes using devicemapper hard.

 - we'd like to use a docker storage backend that is production
ready.

 - our target OS is latest Centos/RHEL 7

As we approach Pike 2, I'm keen to move towards a more production-ready
docker storage backend. Is there consensus that 'overlay2' is a
reasonable approach to this? Or is it too early to use it with the
combinations above?

Looking around at what is recommended in other projects it seems to be
a mix as well from devicemapper to btrfs.

[1] https://docs.openshift.com/container-platform/3.3/install_config/install/host_preparation.html#configuring-docker-storage
[2] http://git.openstack.org/cgit/openstack/kolla/tree/tools/setup_RedHat.sh#n30


Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][logging] oslo.log fluentd native logging

2017-05-10 Thread Dan Prince
On Mon, 2017-04-24 at 07:47 -0400, Joe Talerico wrote:
> Hey owls - I have been playing with oslo.log fluentd integration [1]
> in a PoC commit here [2]. Enabling the native service logging is nice,
> and tracebacks no longer produce multiple inserts into elastic - there
> is a "traceback" key which would contain the traceback if there was
> one.
> 
> The system-level / kernel level logging is still needed with the
> fluent client on each Overcloud node.
> 
> I see Martin did the initial work [3] to integrate fluentd, is there
> anyone looking at migrating the OpenStack services to using the
> oslo.log facility?

Nobody is officially implementing this yet that I know of. But it does
look promising.

The idea of using oslo.log's fluentd formatter could dovetail very
nicely into our new containers (docker) services for Pike in that it
would allow us to log to stdout directly within the container... but
still support the Fluentd logging interfaces that we have today.
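
To sketch what that could look like (purely illustrative and untested;
it assumes the fluentd formatter class in oslo.log is
oslo_log.formatters.FluentFormatter, per the formatters module linked
in [1] below):

.. code-block:: python

    # Untested sketch: send a service's logs to stdout using (what we
    # assume is) oslo.log's fluentd formatter from [1].
    import logging
    import logging.config

    LOG_CONFIG = {
        'version': 1,
        'formatters': {
            'fluent': {'()': 'oslo_log.formatters.FluentFormatter'},
        },
        'handlers': {
            'stdout': {
                'class': 'logging.StreamHandler',
                'stream': 'ext://sys.stdout',
                'formatter': 'fluent',
            },
        },
        'root': {'handlers': ['stdout'], 'level': 'INFO'},
    }

    logging.config.dictConfig(LOG_CONFIG)
    logging.getLogger(__name__).info('hello from a containerized service')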

The only downside would be that not all services in OpenStack support
oslo.log (I don't think Swift does, for example). Nor do some of the
core services we deploy like Galera and RabbitMQ. So we'd perhaps have
a mixed bag of host and stdout logging for some things, or we would
need to integrate with Fluentd differently for services without
oslo.log support.

Our current approach to container logging in TripleO recently landed
here; it exposes the logs to a directory on the host specifically so
that we could aim to support Fluentd integrations:

https://review.openstack.org/#/c/442603/

Perhaps we should revisit this in the (near) future to improve our
containers deployments.

Dan

> 
> Joe
> 
> [1] https://github.com/openstack/oslo.log/blob/master/oslo_log/formatters.py#L167
> [2] https://review.openstack.org/#/c/456760/
> [3] https://specs.openstack.org/openstack/tripleo-specs/specs/newton/tripleo-opstools-centralized-logging.html
> 


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-05-04 Thread Dan Prince
On Thu, 2017-05-04 at 14:11 -0400, Emilien Macchi wrote:
> On Thu, May 4, 2017 at 9:41 AM, Dan Prince <dpri...@redhat.com>
> wrote:
> > On Thu, 2017-05-04 at 03:11 -0400, Luigi Toscano wrote:
> > > - Original Message -
> > > > On Wed, 2017-05-03 at 17:53 -0400, Emilien Macchi wrote:
> > > > > (cross-posting)
> > > > > Instead of running the Pingtest, we would execute a Tempest
> > > > > Scenario that boots an instance from a volume (like Pingtest
> > > > > already does) and see how it goes (in terms of coverage and
> > > > > runtime). I volunteer to kick off the work with someone more
> > > > > expert than I am with quickstart (Arx maybe?).
> > > > > 
> > > > > Another iteration could be to start building an easy
> > > > > interface to select which Tempest tests we want a TripleO CI
> > > > > job to run and plug it into our CI tooling
> > > > > (tripleo-quickstart I presume).
> > > > 
> > > > Running a subset of Tempest tests isn't the same thing as
> > > > designing
> > > > (and owning) your own test suite that targets the things that
> > > > mean
> > > > the
> > > > most to our community (namely speed and coverage). Even giving
> > > > up
> > > > 5-10
> > > > minutes of runtime...just to be able to run Tempest isn't
> > > > something
> > > > that some of us would be willing to do.
> > > 
> > > As I mentioned, you can do it with Tempest (the library). You can
> > > have your own test suite that does exactly what you are asking
> > > (namely, a set of scenario tests based on Heat which targets the
> > > TripleO use case) in a Tempest plugin and there is no absolute
> > > reason
> > > that those tests should add 5-10 minutes of runtime compared to
> > > pingtest.
> > > 
> > > It/they would be exactly pingtest, only implemented using a
> > > different
> > > library and running with a different runner, with the *exact*
> > > same
> > > run time.
> > > 
> > > Obvious advantages: only one technology used to run tests, so if
> > > anyone else wants to run additional tests, there is no need to
> > > maintain two code paths; reuse of a big and proven library of
> > > tests and test runner tools.
> > 
> > I like the idea of getting pingtest out of tripleo.sh as more of a
> > stand-alone tool. I would support an effort that re-implemented
> > it...
> > and using tempest-lib would be totally fine. And as you point out
> > one
> > could even combine these tests with a more common "Tempest" run
> > that
> > incorporates the scenarios, etc.
> 
> I don't understand why we would re-implement the pingtest in a
> tempest plugin.
> Could you please tell us what the technical difference is between
> this scenario:
> https://github.com/openstack/tempest/blob/master/tempest/scenario/test_volume_boot_pattern.py
> 
> And this pingtest:
> https://github.com/openstack/tripleo-heat-templates/blob/master/ci/pingtests/tenantvm_floatingip.yaml
> 
> They both create a Cinder volume, snapshot it in Glance, and spawn
> a Nova server from the volume.
> 
> What does one do that the other doesn't?

I don't think these are the same things. Does the Tempest test even
create a floating IP? And in the case of pingtest we also cover the
Heat API in the overcloud (also valuable coverage). And even if they
could be made to match today, is there any guarantee that they
wouldn't diverge in the future, or that the test would maintain the
same speed goals, as it lives in Tempest (and most TripleO cores don't
review there)?

The main difference that I care about is that it is easier for us to
maintain and fix the pingtest variant at this point. We care a lot
about our CI, and like I said before increasing the runtime isn't
something we could easily tolerate. I'm willing to entertain reuse so
long as it also allows us the speed and control we desire.

> 
> > To me the message is clear that we DO NOT want to consume the
> > normal
> > Tempest scenarios in TripleO upstream CI at this point. Sure there
> > is
> > overlap there, but the focus of those tests is just plain
> > different...
> 
> I haven't seen strong pushback in this thread except from you.

Perhaps most cores haven't weighed in on this 

Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-05-04 Thread Dan Prince
On Thu, 2017-05-04 at 03:11 -0400, Luigi Toscano wrote:
> - Original Message -
> > On Wed, 2017-05-03 at 17:53 -0400, Emilien Macchi wrote:
> > > (cross-posting)
> > 
> > > Instead of running the Pingtest, we would execute a Tempest
> > > Scenario that boots an instance from a volume (like Pingtest
> > > already does) and see how it goes (in terms of coverage and
> > > runtime). I volunteer to kick off the work with someone more
> > > expert than I am with quickstart (Arx maybe?).
> > > 
> > > Another iteration could be to start building an easy interface
> > > to select which Tempest tests we want a TripleO CI job to run
> > > and plug it into our CI tooling (tripleo-quickstart I presume).
> > 
> > Running a subset of Tempest tests isn't the same thing as designing
> > (and owning) your own test suite that targets the things that mean
> > the
> > most to our community (namely speed and coverage). Even giving up
> > 5-10
> > minutes of runtime...just to be able to run Tempest isn't something
> > that some of us would be willing to do.
> 
> As I mentioned, you can do it with Tempest (the library). You can
> have your own test suite that does exactly what you are asking
> (namely, a set of scenario tests based on Heat which targets the
> TripleO use case) in a Tempest plugin and there is no absolute reason
> that those tests should add 5-10 minutes of runtime compared to
> pingtest. 
> 
> It/they would be exactly pingtest, only implemented using a different
> library and running with a different runner, with the *exact* same
> run time. 
> 
> Obvious advantages: only one technology used to run tests, so if
> anyone else wants to run additional tests, there is no need to
> maintain two code paths; reuse of a big and proven library of tests
> and test runner tools.

I like the idea of getting pingtest out of tripleo.sh as more of a
stand-alone tool. I would support an effort that re-implemented it...
and using tempest-lib would be totally fine. And as you point out one
could even combine these tests with a more common "Tempest" run that
incorporates the scenarios, etc.

To me the message is clear that we DO NOT want to consume the normal
Tempest scenarios in TripleO upstream CI at this point. Sure there is
overlap there, but the focus of those tests is just plain different...
speed isn't a primary concern there as it is for us, so I don't think
we should do it now, and probably not ever unless the CI job time is
less than an hour. Even if we were able to tune a set of stock Tempest
smoke tests today to our liking, unless TripleO proper gated on the
runtime of those tests not increasing, we'd be at risk of breaking our
CI queues as the wall time got too long. In this regard this entire
thread is poorly named, I think, in that we are no longer talking
about 'pingtest vs. tempest' but rather the implementation details of
how we reimplement our existing pingtest to better suit the community.

So ++ for the idea of experimenting with the use of tempest.lib. But
stay away from the idea of using Tempest smoke tests and the like for
TripleO I think ATM.
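
For the sake of discussion, a pingtest-style check on top of
tempest.lib could look something like the rough sketch below. This is
purely illustrative: the tempest.lib service clients passed in are
assumed to be built elsewhere (credential/endpoint plumbing not
shown), and flavor_ref is a placeholder.

.. code-block:: python

    # Rough sketch only; the clients are assumed pre-built tempest.lib
    # compute/volume service clients.
    import time

    from tempest.lib.common.utils import data_utils


    def wait_for(getter, expected, timeout=300, interval=5):
        """Poll getter() until it returns the expected status."""
        start = time.time()
        while time.time() - start < timeout:
            if getter() == expected:
                return
            time.sleep(interval)
        raise RuntimeError('resource did not reach %s' % expected)


    def pingtest_boot_from_volume(volumes_client, servers_client,
                                  flavor_ref):
        """Create a volume and boot a server from it, pingtest-style."""
        volume = volumes_client.create_volume(size=1)['volume']
        wait_for(lambda: volumes_client.show_volume(
            volume['id'])['volume']['status'], 'available')
        server = servers_client.create_server(
            name=data_utils.rand_name('pingtest'),
            flavorRef=flavor_ref,
            block_device_mapping_v2=[{
                'uuid': volume['id'], 'source_type': 'volume',
                'destination_type': 'volume', 'boot_index': 0}])['server']
        wait_for(lambda: servers_client.show_server(
            server['id'])['server']['status'], 'ACTIVE')
        return server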

It's also worth noting there is some risk when maintaining your own in-
tree Tempest tests [1]. If I understood that thread correctly that
breakage wouldn't have occurred if the stable branch tests were gating
Tempest proper... which is a very hard thing to do if we have our own
in-tree stuff. So there is a cost to doing what you suggest here, but
probably one that we'd be willing to accept.

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116172.
html

Dan

> 
> Ciao



Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-05-03 Thread Dan Prince
On Wed, 2017-05-03 at 17:53 -0400, Emilien Macchi wrote:
> (cross-posting)
> 
> I've seen a bunch of interesting thoughts here.
> The most relevant feedback I've seen so far:
> 
> - TripleO folks want to keep testing fast and efficient.
> - Tempest folks understand this problematic and is willing to
> collaborate.
> 
> I propose that we move forward and experiment the usage of Tempest in
> TripleO CI for one job that could be experimental or non-voting to
> start.

Experimental or periodic at first please.

> Instead of running the Pingtest, we would execute a Tempest Scenario
> that boots an instance from a volume (like Pingtest already does)
> and see how it goes (in terms of coverage and runtime).
> I volunteer to kick off the work with someone more expert than I am
> with quickstart (Arx maybe?).
> 
> Another iteration could be to start building an easy interface to
> select which Tempest tests we want a TripleO CI job to run and plug
> it into our CI tooling (tripleo-quickstart I presume).

Running a subset of Tempest tests isn't the same thing as designing
(and owning) your own test suite that targets the things that mean the
most to our community (namely speed and coverage). Even giving up 5-10
minutes of runtime...just to be able to run Tempest isn't something
that some of us would be willing to do.

> I also hear some feedback about keeping the pingtest alive for some
> use cases, and I agree we could keep some CI jobs to run the
> pingtest when it makes more sense (when we want to test Heat for
> example, or just maintain it for developers who use it).



> 
> How does it sound? Please bring feedback.
> 
> 
> On Tue, Apr 18, 2017 at 7:41 AM, Attila Fazekas wrote:
> > 
> > 
> > On Tue, Apr 18, 2017 at 11:04 AM, Arx Cruz wrote:
> > > 
> > > 
> > > 
> > > On Tue, Apr 18, 2017 at 10:42 AM, Steven Hardy wrote:
> > > > 
> > > > On Mon, Apr 17, 2017 at 12:48:32PM -0400, Justin Kilpatrick
> > > > wrote:
> > > > > On Mon, Apr 17, 2017 at 12:28 PM, Ben Nemec wrote:
> > > > > > Tempest isn't really either of those things.  According to
> > > > > > another
> > > > > > message
> > > > > > in this thread it takes around 15 minutes to run just the
> > > > > > smoke
> > > > > > tests.
> > > > > > That's unacceptable for a lot of our CI jobs.
> > > 
> > > 
> > > I'd rather spend 15 minutes running tempest than add a
> > > regression or a new bug, which has already happened in the past.
> > > 
> > 
> > The smoke tests might not be the best test selection anyway; you
> > should pick some scenarios which, for example, snapshot images and
> > volumes. Yes, these are the slow ones, but they can run in
> > parallel.
> > 
> > Very likely you do not really want to run all tempest tests, but
> > 10~20 minutes sounds reasonable for a sanity test.
> > 
> > The tempest config utility should also be extended with some
> > parallel capability, and should be able to use already-downloaded
> > resources (part of the image).
> > 
> > Tempest/testr/subunit worker balance is not always the best;
> > technically it would be possible to do dynamic balancing, but it
> > would require a lot of work.
> > Let me know when it becomes the main concern, and I can check what
> > can/cannot be done.
> > 
> > 
> > > 
> > > > 
> > > > > Ben, is the issue merely the time it takes? Is it the effect
> > > > > that time taken has on hardware availability?
> > > > 
> > > > It's both, but the main constraint is the infra job timeout,
> > > > which is about 2.5hrs - if you look at our current jobs, many
> > > > regularly get close to (and sometimes exceed) this, so we just
> > > > don't have the time budget available to run exhaustive tests
> > > > on every commit.
> > > 
> > > 
> > > We have a green light from infra to increase the job timeout to
> > > 5 hours; we do that in our periodic full tempest job.
> > 
> > 
> > Sounds good, but I am afraid it could hurt more than help; it
> > could delay other things getting fixed by a lot, especially if we
> > got some extra flakiness because of foobar.
> > 
> > You cannot have all possible tripleo configs on the gate anyway,
> > so something will pass which will require a quick fix.
> > 
> > IMHO the only real solution is making the steps before the test
> > run faster or shorter.
> > 
> > Do you have any option to start the tempest-running jobs in a more
> > developed state? I mean, having more things already done at start
> > time (images/snapshots) and just doing a fast upgrade at the
> > beginning of the job.
> > 
> > OpenStack installation can be completed in a `fast` way (~a
> > minute) on RHEL/Fedora systems after the yum steps; also, if you
> > are able to aggregate all yum steps into a single command
> > execution (transaction), you are generally able to save a lot of
> > time.
> > 
> > There 

Re: [openstack-dev] [tripleo] container jobs are unstable

2017-04-07 Thread Dan Prince
On Thu, 2017-04-06 at 15:32 -0400, Paul Belanger wrote:
> On Thu, Mar 30, 2017 at 11:01:08AM -0400, Paul Belanger wrote:
> > On Thu, Mar 30, 2017 at 03:08:57PM +0100, Steven Hardy wrote:
> > > To be fair, we discussed this on IRC yesterday, everyone agreed
> > > infra
> > > supported docker cache/registry was a great idea, but you said
> > > there was no
> > > known timeline for it actually getting done.
> > > 
> > > So while we all want to see that happen, and potentially help out
> > > with the
> > > effort, we're also trying to mitigate the fact that work isn't
> > > done by
> > > working around it in our OVB environment.
> > > 
> > > FWIW I think we absolutely need multinode container jobs, e.g.
> > > using infra
> > > resources, as that has worked out great for our puppet based CI,
> > > but we
> > > really need to work out how to optimize the container download
> > > speed in
> > > that environment before that will work well AFAIK.
> > > 
> > > You referenced https://review.openstack.org/#/c/447524/ in your
> > > other
> > > reply, which AFAICS is a spec about publishing to dockerhub,
> > > which sounds
> > > great, but we have the opposite problem, we need to consume those
> > > published
> > > images during our CI runs, and currently downloading images takes
> > > too long.
> > > So we ideally need some sort of local registry/pull-through-cache 
> > > that
> > > speeds up that process.
> > > 
> > > How can we move forward here, is there anyone on the infra side
> > > we can work
> > > with to discuss further?
> > > 
> > 
> > Yes, I am currently working with clarkb to address some of these
> > concerns. Today we are looking at setting up our cloud mirrors to
> > cache[1] specific URLs; for example, we are testing out
> > http://trunk.rdoproject.org. This is not a long-term solution for
> > projects, but a short-term one. It will be opt-in for now, rather
> > than us setting it up for all jobs. Long term, we move
> > rdoproject.org into AFS.
> > 
> > I have been trying to see if we can do the same for docker hub,
> > and continue to run it. The main issue, at least for me, is we
> > don't want to depend on docker tooling for this. I'd rather not
> > install docker into our control plane at this point in time.
> > 
> > So, all of that to say, it will take some time. I understand it is
> > a high priority, but let's solve the current mirroring issues with
> > tripleo first (RDO, gems, github), and let's see if the apache
> > cache proxy will work for hub.docker.com too.
> > 
> > [1] https://review.openstack.org/451554
> 
> Wanted to follow up on this thread: we managed to get a reverse
> proxy cache[2] for https://registry-1.docker.io working. So far I've
> just tested ubuntu, fedora, and centos images, but the caching
> works. Once we land this, any jobs using docker can take advantage
> of the mirror.
> 
> [2] https://review.openstack.org/#/c/453811


Thanks for your help with this, Paul.

A reverse proxy cache wasn't exactly what I was expecting so it took a
few more patches to get all this initially wired into the TripleO OVB
jobs (6 patches so far). Once we have this we can duplicate a similar
setup for the multinode patches as well.

I created a quick etherpad below [1] to track the status of these
patches. I think they mostly need to land in the order they are listed
in the etherpad...

[1] https://etherpad.openstack.org/p/tripleo-docker-registry-mirror
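
For anyone wiring a node up to such a cache by hand, the docker
daemon's documented registry-mirrors option is the relevant knob. The
helper below is only illustrative (the mirror URL is a made-up
placeholder, not the real infra endpoint):

.. code-block:: python

    # Illustrative only: add a (hypothetical) registry mirror to
    # /etc/docker/daemon.json; restart the docker daemon afterwards.
    import json

    DAEMON_JSON = '/etc/docker/daemon.json'
    MIRROR = 'http://mirror.regionone.example.org:8081'  # placeholder

    try:
        with open(DAEMON_JSON) as f:
            conf = json.load(f)
    except (IOError, ValueError):
        conf = {}

    mirrors = conf.setdefault('registry-mirrors', [])
    if MIRROR not in mirrors:
        mirrors.append(MIRROR)

    with open(DAEMON_JSON, 'w') as f:
        json.dump(conf, f, indent=2)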



Re: [openstack-dev] [tripleo][kolla] extended start and hostpath persistent dirs (was: kolla_bootstrap for non-openstack services)

2017-04-04 Thread Dan Prince
On Tue, 2017-04-04 at 18:03 +0200, Bogdan Dobrelya wrote:
> On 03.04.2017 21:01, Dan Prince wrote:
> > On Mon, 2017-04-03 at 16:15 +0200, Bogdan Dobrelya wrote:
> > > Let's please re-evaluate the configuration of containerized
> > > non-openstack services (database, message queue, key-value, web)
> > > for tripleo heat templates and Kolla. Here is an example
> > > containerized etcd patch [0].
> > > 
> > > tl;dr use kolla images and bootstrap OR upstream images with
> > > direct commands:
> > > 
> > > .. code-block:: yaml
> > >     kolla_config:
> > >       /var/lib/kolla/config_files/foo.json:
> > >         command: /usr/bin/foo
> > > 
> > > vs
> > > 
> > > .. code-block:: yaml
> > >     foo:
> > >       image: upstream/foo:latest
> > >       command: /usr/bin/foo
> > > 
> > > Note that t-h-t already doesn't use configs [1] copied into the
> > > images at kolla build time. The next logical step might be to
> > > omit kolla's bootstrap, where applicable, as well.
> > 
> > The kolla config file copying proved to be a bit pedantic. So we
> > removed it. A good example of this would be how this played out for
> > the
> > keystone service:
> > 
> > http://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/?id=332e8ec10345ad5c8bf10a532f6f6003da682b68
> > 
> > > 
> > > There are two options:
> > > * use kolla images and bootstrap as t-h-t does now for all
> > > services
> > > being containerized
> > 
> > We do not use KOLLA_BOOTSTRAP for all services now. Only 4
> > services use it currently I think. I think the general consensus
> > is we should not be using it unless there is a functional
> > requirement to do so. Services like MySQL and RabbitMQ have some
> > extra initialization that needs to execute in-container before the
> > services start up. We could duplicate this in
> > tripleo-heat-templates, run it with docker-cmd perhaps, and do it
> > that way, but we initially made an exception for just a few
> > services.
> > 
> > Other services like Glance are using it, but that is just
> > historical.
> > There is already a patch to remove the use of KOLLA_BOOTSTRAP for
> > this
> > service:
> > 
> > https://review.openstack.org/#/c/440884/1
> > 
> > So in summary the consensus is we'd prefer not to be using the
> > KOLLA_BOOTSTRAP environment variables because in some cases there
> > is a
> > 'Kolla' flavor to these things that doesn't match how TripleO deploys
> > things.
> > 
> > It is worth pointing out that while we aren't using KOLLA_BOOTSTRAP
> > we
> > are using the kolla startup systems in many cases. This gives some
> > features around file permissions, extra sudoers files, etc. We may
> > be
> > able to stop using this for some services but I also think we are
> > getting value out of the interfaces today. They aren't nearly as
> > verbose as the Kolla config copying stuff so we could go either
> > way.
> 
> That's interesting; there is a related topic with hostpath-mounted
> persistent log dirs [0]. The issue is:
> 
> [tl;dr] kolla_config is a bad fit for logs, so let's run containers'
> steps as 'user: root' in a user namespace remapped [1] to a
> tripleocontainers system user. Thoughts? And would that work for
> stock centos7/rhel7 kernels?
> 
> 
> So, the issue is that when log/data dirs are created by
> host_prep_tasks, we must configure host permissions in a simple,
> non-opinionated way.
> What I can see for t-h-t to cover it on its own:
> * Custom entrypoints for containers add complexity and headaches.

Good point. But the entry points Kolla uses for many containers don't
match what our systemd services already use on baremetal. As we are
striving for an update path that does not break end users upgrading
from baremetal to containers, we have to have a mechanism that gives
us configuration parity across the implementations. Controlling the
entry point either by injecting it into the container (via something
like Kolla's template overrides mechanism) or via
tripleo-heat-templates direction (much more hackable) is where we
ended up.

In general we like Kolla images at the moment for what they provide.
But there are some cases where we need to control things that have too
much of a "kolla flavor" and would potentially break upgrades/features
if we used them directly.
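
For context, the kolla startup system mentioned above is driven by a
per-container JSON file. A minimal sketch of that config.json format
follows; the paths, owner and permissions here are illustrative, not
taken from tripleo-heat-templates:

.. code-block:: python

    # Minimal sketch of kolla's config.json; illustrative values only.
    import json

    foo_config = {
        'command': '/usr/bin/foo',
        'config_files': [{
            'source': '/var/lib/kolla/config_files/src/foo.conf',
            'dest': '/etc/foo/foo.conf',
            'owner': 'foo',
            'perm': '0600',
        }],
    }

    with open('/var/lib/kolla/config_files/foo.json', 'w') as f:
        json.dump(foo_config, f, indent=2)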

> * Mount logs wor

Re: [openstack-dev] [tripleo] Roadmap for Container CI work

2017-04-04 Thread Dan Prince
On Tue, 2017-04-04 at 16:01 -0400, Emilien Macchi wrote:
> After our weekly meeting of today, I found useful to share and
> discuss
> our roadmap for Container CI jobs in TripleO.
> They are ordered by priority from the highest to lowest:
> 
> 1. Swap ovb-nonha job with ovb-containers, enable introspection on
> the
> container job and shuffle other coverage (e.g ssl) to other jobs
> (HA?). It will help us to get coverage for ovb-containers scenario
> again, without consuming more rh1 resources and keep existing
> coverage.

The existing containers job already had introspection enabled I think.
So this is largely a swap (containers for nonha). We may move some of
the SSL features into the existing HA job if that makes more sense to
keep coverage on par with what the nonha job was already covering.

> 2. Get multinode coverage of deployments - this should integrate with
> the scenarios we already have defined for non-container deployment.
> This is super important to cover all overcloud services, like we did
> with classic deployments. It should be non voting to start and then
> voting once it works. We should find a way to keep the same templates
> as we have now, and just include the docker environment. In other
> words, find a way to keep using:
> https://github.com/openstack/tripleo-heat-templates/blob/master/ci/environments/scenario001-multinode.yaml
> so we don't duplicate scenario environments.
> 3. Implement container upgrade job, which for Pike will be deploy a
> baremetal overcloud, then migrate on upgrade to containers. Use
> multinode jobs for this task. Start with a non-voting job and move to
> the gate once it work. I also suggest to use scenarios framework, so
> we keep good coverage.

Using multinode here is a great start. We might also at some point
decide to swap out the existing OVB job to be a baremetal -> containers
version so we have end to end coverage there as well.

> 4. After we implement the workflow for minor updates, have a job with
> tests container-to-container updates for minor (rolling) updates,
> this
> ideally should add some coverage to ensure no downtime of APIs and
> possibly checks for service restarts (ref recent bugs about bouncing
> services on minor updates)
> 5. Once Pike is released and Queens starts, let's work on container
> to
> containers upgrade job.
> 
> Any feedback or question is highly welcome,
> 
> Note: The proposal comes from shardy's notes on
> https://etherpad.openstack.org/p/tripleo-container-ci - feel free to
> contribute to the etherpad or mailing list.
> 
> Thanks,



Re: [openstack-dev] [tripleo][kolla] kolla_bootstrap for non-openstack services

2017-04-03 Thread Dan Prince
On Mon, 2017-04-03 at 16:15 +0200, Bogdan Dobrelya wrote:
> Let's please re-evaluate the configuration of containerized
> non-openstack services (database, message queue, key-value, web) for
> tripleo heat templates and Kolla. Here is an example containerized
> etcd patch [0].
> 
> tl;dr use kolla images and bootstrap OR upstream images with direct
> commands:
> 
> .. code-block:: yaml
>     kolla_config:
>       /var/lib/kolla/config_files/foo.json:
>         command: /usr/bin/foo
> 
> vs
> 
> .. code-block:: yaml
>     foo:
>       image: upstream/foo:latest
>       command: /usr/bin/foo
> 
> Note that t-h-t already doesn't use configs [1] copied into the
> images at kolla build time. The next logical step might be to omit
> kolla's bootstrap, where applicable, as well.

The kolla config file copying proved to be a bit pedantic. So we
removed it. A good example of this would be how this played out for the
keystone service:

http://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/?id=332e8ec10345ad5c8bf10a532f6f6003da682b68

> 
> There are two options:
> * use kolla images and bootstrap as t-h-t does now for all services
> being containerized

We do not use KOLLA_BOOTSTRAP for all services now. Only 4 services use
it currently I think. I think the general consensus is we should not be
using it unless there is a functional requirement to do so. Services
like MySQL and RabbitMQ have some extra initialization that needs to
execute in-container before the services start up. We could duplicate
this in tripleo-heat-templates, run it with docker-cmd perhaps, and do
it that way, but we initially made an exception for just a few services.

Other services like Glance are using it, but that is just historical.
There is already a patch to remove the use of KOLLA_BOOTSTRAP for this
service:

https://review.openstack.org/#/c/440884/1

So in summary the consensus is we'd prefer not to be using the
KOLLA_BOOTSTRAP environment variables because in some cases there is a
'Kolla' flavor to these things that doesn't match how TripleO deploys
things.

It is worth pointing out that while we aren't using KOLLA_BOOTSTRAP we
are using the kolla startup systems in many cases. This gives some
features around file permissions, extra sudoers files, etc. We may be
able to stop using this for some services but I also think we are
getting value out of the interfaces today. They aren't nearly as
verbose as the Kolla config copying stuff so we could go either way.

> pros: same way to template everything; kolla build/start just works.
> risks: non-openstack services may eventually stop being supported by
> Kolla for a number of reasons. Kolla bootstrap changes aren't tested
> in tripleo CI and might break things.
> cons: locking in to the opinionated kolla bootstrap entry points and
> kolla_config's config.json and command.
> 
> * if applicable to the service, use upstream images (etcd example
> [2]) w/o any kolla parts.
> pros: fewer moving parts like custom entry points; no lock-in to the
> opinionated Kolla config/bootstrap.
> risks: upstream image changes aren't tested in tripleo CI and might
> break things.
> cons: different ways to template openstack/non-openstack services;
> kolla build/start doesn't work for the latter.
> 
> [0] https://review.openstack.org/#/c/447627/
> [1] https://review.openstack.org/#/c/451366
> [2]
> https://review.openstack.org/#/c/445883/2/contrib/overcloud_containers.yaml
> 



Re: [openstack-dev] [tripleo] Idempotence of the deployment process

2017-04-02 Thread Dan Prince
On Fri, 2017-03-31 at 17:21 -0600, Alex Schultz wrote:
> Hey folks,
> 
> I wanted to raise awareness of the concept of idempotence[0] and how
> it affects deployment(s).  In the puppet world, we consider this very
> important because puppet is all about ensuring a desired state
> (i.e. a system with config files + services). That being said, I feel
> that it is important for any deployment tool to be aware of this.
> When the same code is applied to the system repeatedly (as would be
> the case in a puppet master deployment) the subsequent runs should
> result in no changes if there is no need.  If you take a configured
> system and rerun the same deployment code you don't want your
> services
> restarting when the end state is supposed to be the same. In the case
> of TripleO, we should be able deploy an overcloud and rerun the
> deployment process should result in no configuration changes and 0
> services being restarted during the process. The second run should
> essentially be a noop.
> 
> We have recently uncovered various bugs[1][2][3][4] that have
> introduced service disruption due to a lack of idempotency causing
> service restarts. So when reviewing or developing new code what is
> important about the deployment is to think about what happens if I
> run
> this bit of code twice.  There are a few common items that come up
> around idempotency. Things like execs in puppet-tripleo should be
> refreshonly or use unless/onlyif to prevent running again if
> unnecessary.  Additionally in the TripleO configuration it's
> important
> to understand in which step a service is configured and if it
> possibly
> would get deconfigured in another step.  For example, we configure
> apache and some wsgi services in step 3. But we currently configure
> some additional wsgi openstack services in step 4 which is resulting
> in excessive httpd restarts and possible service unavailability[5]
> when updates are applied.
> 
> Another important place to understand this concept is in upgrades
> where we currently allow for ansible tasks to be used. These should
> result in an idempotent action when puppet is subsequently run which
> means that the two bits of code essentially need to result in the
> same
> configuration. For example in the nova-api upgrades for Newton to
> Ocata we needed to run the same commands[6] that would later be run
> by
> puppet to prevent clashing configurations and possible idempotency
> problems.
> 
> Idempotency issues can cause service disruptions, longer deployment
> times for end users, or even possible misconfigurations.  I think it
> might be beneficial to add an idempotency periodic job that is
> basically a double run of the deployment process to ensure no service
> or configuration changes on the second run. Thoughts?  Ideally one in
> the gate would be awesome but I think it would take too long to be
> feasible with all the other jobs we currently run.

How would we even verify that services aren't getting changed or
restarted? By checking process runtimes, perhaps?
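
One quick-and-dirty way such a check could work (just a sketch of the
idea, not an agreed design) is to snapshot process start times before
and after the second deployment run and flag anything that was
replaced:

.. code-block:: python

    # Sketch: diff process start times across the two deployment runs.
    import subprocess

    def process_snapshot():
        out = subprocess.check_output(
            ['ps', '-eo', 'pid,lstart,args', '--no-headers'])
        return set(out.decode('utf-8').splitlines())

    before = process_snapshot()
    # ... rerun the deployment here ...
    after = process_snapshot()

    # anything present before but gone after was stopped or restarted
    # (a restarted service shows up with a new pid/start time)
    for entry in sorted(before - after):
        print('stopped or restarted: %s' % entry.strip())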

If you used the multinode jobs or perhaps the new undercloud_deploy
installer (single node) it might be feasible to add this into the gate.
I would avoid adding this to the OVB queue as it is already too full
and we can probably gain the coverage we need without that type of
testing.

Dan

> 
> Thanks,
> -Alex
> 
> [0] http://binford2k.com/content/2015/10/idempotence-not-just-big-scary-word
> [1] https://bugs.launchpad.net/tripleo/+bug/1664650
> [2] https://bugs.launchpad.net/puppet-nova/+bug/1665443
> [3] https://bugs.launchpad.net/tripleo/+bug/1665405
> [4] https://bugs.launchpad.net/tripleo/+bug/1665426
> [5] https://review.openstack.org/#/c/434016/
> [6] https://review.openstack.org/#/c/405241/
> 


Re: [openstack-dev] [tripleo] container jobs are unstable

2017-03-30 Thread Dan Prince
On Wed, 2017-03-29 at 22:07 -0400, Paul Belanger wrote:
> On Thu, Mar 30, 2017 at 09:56:59AM +1300, Steve Baker wrote:
> > > On Thu, Mar 30, 2017 at 9:39 AM, Emilien Macchi <emil...@redhat.com> wrote:
> > 
> > > > On Mon, Mar 27, 2017 at 8:00 AM, Flavio Percoco <fla...@redhat.com> wrote:
> > > > On 23/03/17 16:24 +0100, Martin André wrote:
> > > > > 
> > > > > > On Wed, Mar 22, 2017 at 2:20 PM, Dan Prince <dprince@redhat.com> wrote:
> > > > > > 
> > > > > > On Wed, 2017-03-22 at 13:35 +0100, Flavio Percoco wrote:
> > > > > > > 
> > > > > > > On 22/03/17 13:32 +0100, Flavio Percoco wrote:
> > > > > > > > On 21/03/17 23:15 -0400, Emilien Macchi wrote:
> > > > > > > > > Hey,
> > > > > > > > > 
> > > > > > > > > I've noticed that container jobs look pretty unstable
> > > > > > > > > lately; to
> > > > > > > > > me,
> > > > > > > > > it sounds like a timeout:
> > > > > > > > > http://logs.openstack.org/19/447319/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-containers-oooq-nv/bca496a/console.html#_2017-03-22_00_08_55_358973
> > > > > > > > 
> > > > > > > > There are different hypotheses on what is going on
> > > > > > > > here. Some patches have landed to improve the write
> > > > > > > > performance of containers by using hostpath mounts,
> > > > > > > > but we think the real slowness is coming from the
> > > > > > > > image downloads.
> > > > > > > > 
> > > > > > > > This said, this is still under investigation and the
> > > > > > > > containers
> > > > > > > > squad will
> > > > > > > > report back as soon as there are new findings.
> > > > > > > 
> > > > > > > Also, to be more precise, Martin André is looking into
> > > > > > > this. He also
> > > > > > > fixed the
> > > > > > > gate in the last 2 weeks.
> > > > > > 
> > > > > > 
> > > > > > I spoke w/ Martin on IRC. He seems to think this is the
> > > > > > cause of some
> > > > > > of the failures:
> > > > > > 
> > > > > > http://logs.openstack.org/32/446432/1/check-tripleo/gate-tripleo-ci-centos-7-ovb-containers-oooq-nv/543bc80/logs/oooq/overcloud-controller-0/var/log/extra/docker/containers/heat_engine/log/heat/heat-engine.log.txt.gz#_2017-03-21_20_26_29_697
> > > > > > 
> > > > > > 
> > > > > > Looks like Heat isn't able to create Nova instances in the
> > > > > > overcloud due to "Host 'overcloud-novacompute-0' is not
> > > > > > mapped to any cell". This means our cells initialization
> > > > > > code for containers may not be quite right... or there is
> > > > > > a race somewhere.
> > > > > 
> > > > > 
> > > > > Here are some findings. I've looked at time measures from CI
> > > > > for
> > > > > https://review.openstack.org/#/c/448533/ which provided the
> > > > > most
> > > > > recent results:
> > > > > 
> > > > > * gate-tripleo-ci-centos-7-ovb-ha [1]
> > > > >    undercloud install: 23
> > > > >    overcloud deploy: 72
> > > > >    total time: 125
> > > > > * gate-tripleo-ci-centos-7-ovb-nonha [2]
> > > > >    undercloud install: 25
> > > > >    overcloud deploy: 48
> > > > >    total time: 122
> > > > > * gate-tripleo-ci-centos-7-ovb-updates [3]
> > > > >    undercloud install: 24
> > > > >    ov

Re: [openstack-dev] [tripleo] os-cloud-config retirement

2017-03-30 Thread Dan Prince
There is one case in which I was thinking about reusing this piece of
code: within a container, to help initialize keystone endpoints. It
would require some changes and updates (to match how puppet-*
configures endpoints).

For TripleO containers we use various puppet modules (along with hiera)
to drive the creation of endpoints. This functionally works fine, but
is quite slow to execute (puppet is slow here) and takes several
minutes to complete. I'm wondering if a single optimized python script
might serve us better here. It could be driven via YAML (perhaps
similar to our hiera), be idempotent, and likely run much faster than
having the code driven by puppet. This doesn't have to live in os-cloud-
config, but initially I thought that might be a reasonable place for
it. It is worth pointing out that this would be something that would
need to be driven by our t-h-t workflow and not a post-installation
task. So perhaps that makes it not a good fit for os-cloud-config. But
it is similar to the keystone initialization already there so I thought
I'd mention it.
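
To make the idea concrete, such a script might look roughly like the
sketch below. Everything here is hypothetical: the YAML layout
(endpoints.yaml), the credentials, and the helper names are invented
for illustration, not an existing os-cloud-config interface.

.. code-block:: python

    # Hypothetical sketch: idempotent, YAML-driven keystone endpoint setup.
    import yaml
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='https://keystone.example:5000/v3',
                       username='admin', password='secret',  # placeholders
                       project_name='admin', user_domain_name='Default',
                       project_domain_name='Default')
    keystone = client.Client(session=session.Session(auth=auth))

    with open('endpoints.yaml') as f:  # invented input format
        desired = yaml.safe_load(f)

    existing = {(e.service_id, e.interface)
                for e in keystone.endpoints.list()}

    for svc in desired['services']:
        found = keystone.services.list(name=svc['name'])
        service = found[0] if found else keystone.services.create(
            name=svc['name'], type=svc['type'])
        for interface in ('public', 'internal', 'admin'):
            # only create what is missing, so reruns are a noop
            if (service.id, interface) not in existing:
                keystone.endpoints.create(
                    service=service.id, interface=interface,
                    url=svc['urls'][interface], region=svc['region'])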

Dan

On Thu, 2017-03-30 at 08:13 -0400, Emilien Macchi wrote:
> Hi,
> 
> os-cloud-config was deprecated in the Ocata release and is going to
> be
> removed in Pike.
> 
> TripleO project doesn't need it anymore and after some investigation
> in codesearch.openstack.org, nobody is using it in OpenStack.
> I'm working on the removal this cycle, please let us know any
> concern.
> 
> Thanks,

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] patch abandoment policy

2017-03-27 Thread Dan Prince
On Mon, 2017-03-27 at 13:49 +0200, Flavio Percoco wrote:
> On 24/03/17 17:16 -0400, Dan Prince wrote:
> > On Thu, 2017-03-23 at 16:20 -0600, Alex Schultz wrote:
> > > Hey folks,
> > > 
> > > So after looking at the backlog of patches to review across all
> > > of
> > > the
> > > tripleo projects, I noticed we have a bunch of really old stale
> > > patches. I think it's time we address when we can abandon these
> > > stale
> > > patches.
> > > 
> > > Please comment on the proposed policy[0].  I know this has
> > > previously
> > > been brought up [1] but I would like to formalize the policy so
> > > we
> > > can
> > > reduce the backlog of stale patches.  If you're wondering what
> > > would
> > > be abandoned by this policy as it currently sits, I have a gerrit
> > > dashboard for you[2] (it excludes diskimage-builder) .
> > 
> > I think it is fine to periodically review patches and abandon them
> > if
> > need be. Last time this came up I wasn't a fan of auto-abandoning
> > though. Rather I just made a pass manually and did it in fairly
> > short
> > order. The reason I like the manual approach is a lot of ideas
> > could
> > get lost (or silently ignored) if nobody acts on them manually.
> > 
> > Rather than trying to automate this, would it serve us better to
> > add a link to your Gerrit query in [2] below to highlight these
> > patches and quickly go through them?
> 
> I used to do this in Glance. I had 2 scripts that ran every week. The
> first one
> would select the patches to abandon and comment on them saying that
> the patches
> would be abandoned in a week. The second script abandoned the patches
> that had
> been flagged to be abandoned that were not updated in a week.
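
(For reference, the flow Flavio describes maps onto a couple of small
calls against Gerrit's documented REST API. The sketch below is
hypothetical: the bot credentials, query and message are placeholders,
not the actual Glance Bot code.)

.. code-block:: python

    # Hypothetical sketch of an abandon bot using Gerrit's REST API.
    import json
    import requests

    GERRIT = 'https://review.openstack.org'
    AUTH = ('glance-bot', 'http-password')  # placeholder credentials

    def gerrit_get(path):
        resp = requests.get(GERRIT + '/a' + path, auth=AUTH)
        resp.raise_for_status()
        return json.loads(resp.text[4:])  # strip the )]}' XSSI prefix

    # week 1: comment a warning; week 2: abandon anything still untouched
    stale = gerrit_get('/changes/?q=project:openstack/glance'
                       '+status:open+age:6month')
    for change in stale:
        requests.post(
            GERRIT + '/a/changes/%s/abandon' % change['id'],
            json={'message': 'Abandoning: no activity in 6 months.'},
            auth=AUTH).raise_for_status()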

I don't think a week is enough time to react in all cases though. There
could be a really good idea that comes in, gets flagged as abandoned
and then nobody thinks about it again because it got abandoned.

There is sometimes a fine line between automation that helps humans do
their job better... and automation that goes too far. I don't think the
TripleO or Glance projects have enough patch volume that it would take
the core team more than an hour to triage patches that need to be
abandoned. We probably don't even need to do this weekly. Once a month,
or once a quarter for that matter, would probably be fine I think.

Dan

> 
> It was easy to know what patches needed to be checked since these
> scripts ran with their own user (Glance Bot). I believe this worked
> pretty well and the Glance team is now working on a better version
> of that bot.
> 
> I'd share the scripts I used but they are broken and depend on
> another broken
> library but you get the idea/rule we used in Glance.
> 
> Flavio
> 


Re: [openstack-dev] [tripleo] patch abandoment policy

2017-03-24 Thread Dan Prince
On Thu, 2017-03-23 at 16:20 -0600, Alex Schultz wrote:
> Hey folks,
> 
> So after looking at the backlog of patches to review across all of
> the
> tripleo projects, I noticed we have a bunch of really old stale
> patches. I think it's time we address when we can abandon these stale
> patches.
> 
> Please comment on the proposed policy[0].  I know this has previously
> been brought up [1] but I would like to formalize the policy so we
> can
> reduce the backlog of stale patches.  If you're wondering what would
> be abandoned by this policy as it currently sits, I have a gerrit
> dashboard for you[2] (it excludes diskimage-builder) .

I think it is fine to periodically review patches and abandon them if
need be. Last time this came up I wasn't a fan of auto-abandoning
though. Rather I just made a pass manually and did it in fairly short
order. The reason I like the manual approach is a lot of ideas could
get lost (or silently ignored) if nobody acts on them manually.

Rather than trying to automate this, would it serve us better to add a
link to your Gerrit query in [2] below to highlight these patches and
quickly go through them?

Dan

> 
> Thanks,
> -Alex
> 
> [0] https://review.openstack.org/#/c/449332/
> [1] http://lists.openstack.org/pipermail/openstack-dev/2015-October/07.html
> [2] https://goo.gl/XC9Hy7
> 


Re: [openstack-dev] [tripleo] container jobs are unstable

2017-03-22 Thread Dan Prince
On Wed, 2017-03-22 at 13:35 +0100, Flavio Percoco wrote:
> On 22/03/17 13:32 +0100, Flavio Percoco wrote:
> > On 21/03/17 23:15 -0400, Emilien Macchi wrote:
> > > Hey,
> > > 
> > > I've noticed that container jobs look pretty unstable lately; to
> > > me,
> > > it sounds like a timeout:
> > > http://logs.openstack.org/19/447319/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-containers-oooq-nv/bca496a/console.html#_2017-03-22_00_08_55_358973
> > 
> > There are different hypotheses on what is going on here. Some
> > patches have landed to improve the write performance of containers
> > by using hostpath mounts, but we think the real slowness is coming
> > from the image downloads.
> > 
> > This said, this is still under investigation and the containers
> > squad will
> > report back as soon as there are new findings.
> 
> Also, to be more precise, Martin André is looking into this. He also
> fixed the
> gate in the last 2 weeks.

I spoke w/ Martin on IRC. He seems to think this is the cause of some
of the failures:

http://logs.openstack.org/32/446432/1/check-tripleo/gate-tripleo-ci-centos-7-ovb-containers-oooq-nv/543bc80/logs/oooq/overcloud-controller-0/var/log/extra/docker/containers/heat_engine/log/heat/heat-engine.log.txt.gz#_2017-03-21_20_26_29_697


Looks like Heat isn't able to create Nova instances in the overcloud
due to "Host 'overcloud-novacompute-0' is not mapped to any cell'. This
means our cells initialization code for containers may not be quite
right... or there is a race somewhere.

Dan

> 
> Flavio
> 
> 
> 


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-13 Thread Dan Prince
Hi Heidi,

I like this one a good bit better. He might look a smidge cross-eyed
to me... but I'd take this one any day over the previous version.

Thanks for trying to capture the spirit of the original logos.

Dan

On Fri, 2017-03-10 at 08:26 -0800, Heidi Joy Tretheway wrote:
> Hi TripleO team, 
> 
> Here’s an update on your project logo. Our illustrator tried to be as
> true as possible to your original, while ensuring it matched the line
> weight, color palette and style of the rest. We also worked to make
> sure that the three Os in the logo are preserved. Thanks for your
> patience as we worked on this! Feel free to direct feedback to me.
> 


Re: [openstack-dev] [TripleO][Containers] Containers work tracking, updates and schedule

2017-03-13 Thread Dan Prince
On Tue, 2017-03-07 at 12:08 +0100, Flavio Percoco wrote:
> Greetings,
> 
> We've been organizing the containers work for TripleO in a more
> consumable way
> that will hopefully ease the engagement from different squads and
> teams in
> OpenStack.

For reference, as a lot of the initial services were completed as part
of the undercloud, we had already been tracking that upstream here as
well:

https://etherpad.openstack.org/p/tripleo-composable-containers-undercloud

A lot of the underpinnings that enable our docker approach with t-h-t
were in that slightly older etherpad revision.

Dan

> 
> The result of this work is all in this etherpad[0], which we'll use
> as a central
> place to keep providing updates and collecting information about the
> containers
> effort. The etherpad is organized as follows:
> 
> * One section defining the goals of the effort and its evolution
> * One section listing bugs that are critical (in addition to the link
> querying
>   all the bugs tagged as `containers`)
> * RDO Tasks are tasks specific for the RDO upstream community (like
> working on a
>   pipeline to build containers)
> * One section dedicated to CI's schedule: tasks we have pending and
>   when they should be completed.
> * One section for general overcloud tasks grouped by milestone
> * One section for review links. This section has been split into
> smaller groups
>   to make reviews easier.
> * One section with the list of services that still have to be
> containerized
> 
> We'll keep this etherpad updated, but we'll also be providing
> updates to the mailing list more frequently.
> 
> [0] https://etherpad.openstack.org/p/tripleo-composable-containers-overcloud
> 
> Let us know if you need anything,
> Flavio
> 


Re: [openstack-dev] [tripleo] A sneak peek to TripleO + Containers

2017-03-13 Thread Dan Prince
On Mon, 2017-02-13 at 11:46 +0100, Flavio Percoco wrote:
> Hello,
> 
> I've been playing with a self-installing container for the
> containerized TripleO
> undercloud and I thought I'd share some of the progress we've made so
> far.
> 
> This is definitely not at its final, ideal, state but I wanted to
> provide a
> sneak peek to what is coming and what the updates/content of the
> TripleO+Containers sessions will be next week at the PTG.
> 
> The image[0] shows the output of [1] after running the containerized
> composable undercloud deployment using a self-installing
> container[2]. Again, this is not stable and it still needs work. You
> can see in the screenshot that one of the neutron agents failed,
> and from the repo[3] that I'm using the scripts we've been using for
> development instead of using oooq or something like that. One
> interesting thing is that running[2] will leave you with an almost
> entirely
> clean host. It still writes some stuff in `/var/lib` and
> `/etc/puppet` but that
> can be improved for sure.
> 
> Anyway, after all the disclaimers, I hope you'll be able to
> appreciate the
> progress we've made. Dan Prince has been able to deploy an overcloud
> on top of
> the containerized undercloud already, which is great news.

I've been tracking the progress on the composable undercloud since
January upstream here too fwiw:

https://etherpad.openstack.org/p/tripleo-composable-containers-undercloud

Dan

> 
> [0] http://imgur.com/a/Mol28
> [1] docker ps -a --filter label=managed_by=docker-cmd
> [2] docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -ti
> flaper87/tripleo-undercloud-init-container
> [3] https://github.com/flaper87/tripleo-undercloud-init-container
> 
> Enjoy,
> Flavio
> 
> 


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-16 Thread Dan Prince
On Thu, 2017-02-16 at 19:54 +, Jeremy Stanley wrote:
> On 2017-02-16 14:09:53 -0500 (-0500), Dan Prince wrote:
> [...]
> > This isn't about aligning anything. It is about artistic control.
> > The
> > foundation wants to have icons their way playing the "community
> > card"
> > to make those who had icons they like conform. It is clear you buy
> > into
> > this.
> > 
> > Each team will have its own mascot anyway so does it really matter
> > if
> > there is some deviation in the mix? I think not. We have a mascot
> > we
> > like. It even fits the general requirements for OpenStack mascots
> > so
> > all we are arguing about here is artistic style really. I say let
> > the
> > developers have some leverage in this category... what is the harm
> > really?
> 
> [...]
> 
> You're really reading far too much conspiracy into this. Keep in
> mind that this was coming from the foundation's marketing team, and
> while they've been very eager to interface with the community on
> this effort they may have failed to some degree in explaining their
> reasons (which as we all know leaves a vacuum where conspiracy
> theories proliferate).
> 
> As I understand things there are some pages on the
> foundation-controlled www.openstack.org site where they want to
> refer to various projects/teams and having a set of icons
> representing them was a desire of the designers for that site, to
> make it more navigable and easier to digest. They place significant
> importance on consistency and aesthetics, and while that doesn't
> necessarily match my personal utilitarian nature I can at least
> understand their position on the matter. Rather than just words or
> meaningless symbols as icons they thought it would be compelling to
> base those icons on mascots, but to maintain the aesthetic of the
> site the specific renderings needed to follow some basic guidelines.
> They could have picked mascots at random out of the aether to use
> there, but instead wanted to solicit input from the teams whose work
> these would represent so that they might have some additional
> special meaning to the community at large.
> 
> As I said earlier in the thread, if you have existing art you like
> then use that in your documentation, in the wiki, on team tee-shirts
> you make, et cetera. The goal is not to take those away. This is a
> simple need for the marketing team and foundation Web site designers
> to have art they can use for their own purposes which meets their
> relatively strict design aesthetics... and if that art is also
> something the community wants to use, then all the better but it's
> in no way mandatory. The foundation has no direct control over
> community members' choices here, nor have they attempted to pretend
> otherwise that I've seen.

And there is that rub again. Implied along with this is pressure to
adopt the new logo. If you don't, you'll get a blank space as a sort
of punishment for going your own way. As Monty said directly... they
want conformance and cohesion over team identity.

Read the initial replies on this thread. Almost every single person
(besides Flavio and Monty) preferred to keep the original TripleO
mascot. Same thing on the Ironic thread as far as I can tell (those
devs almost all initially preferred the old mascot before they were
talked out of it). And then you wore them down. Keep asking the same
question again and again, and I guess over time people stop caring.

It's all just silliness really. I don't know why the foundation got
involved in this mascot business to begin with and didn't just leave
it to the individual projects.

And again. Not a great time to be talking about any of this. My sense
of urgency is largely based on the fact that Emilien sent out an
official team stance on this. I wasn't part of that... so apologies for
being late to this conversation.

Dan 





Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-16 Thread Dan Prince
On Thu, 2017-02-16 at 06:36 -0600, Monty Taylor wrote:
> On 02/15/2017 07:54 PM, Dan Prince wrote:
> > At the high level all of this sounds like a fine grand plan: "Help
> > projects create mascots" with a common theme. On the ground level I
> > can
> > tell you it feels a lot more like you are crushing the spirit of
> > creativity and motivation for some of us.
> 
> Haven't we had this argument on the tech side enough? Do we have to
> have
> it all over again just because there are some illustrators and
> foundation staff involved?
> 
> We KEEP deciding as a community that we value cohesion for OpenStack
> over individual projects having unlimited freedom to do whatever the
> heck they want. This is no different. There are now a set of
> logos/mascots that exist within a common visual language. Neat!
> 
> > What's in a mascot? I dunno. Call it a force of motivation. Call it
> > team building. In fact, one of the first things I did as PTL of
> > TripleO
> > was create a mascot for the project. Perhaps not officially... but
> > there was agreement among those in the project. And we liked the
> > little
> > guy. And he grew on us. And we even iterated on him a bit and made
> > him
> > better.
> 
> Yah - I hear that. But once again, if the project is "OpenStack" and
> not
> just TripleO - that's exactly what's going on here. And the project
> _is_
> OpenStack. That's what we're here to do, that's what we work on,
> that's
> what we are members of a Foundation to support. Not TripleO, not
> Infra,
> not Nova. This isn't the Oslo Foundation or the Ironic consortium.
> It's
> OpenStack.
> 
> That means, for exactly the reasons you list, that it's important.
> It's
> important to underscore and bolster the fact that we are One
> OpenStack.
> 
> > 
> > 
> > 6 months or so ago we were presented with a new owl from the
> > foundation... which had almost none of the same qualities as the
> > original. Many of us took a survey about that and provided
> > feedback,
> > but I haven't found anyone who was really happy with it. Consensus
> > was
> > we liked the originals. Sometimes sticking with your roots is a
> > good
> > thing.
> > 
> > I happened to be off yesterday but I was really discouraged to read
> > that the team is now convinced we have to adopt your version of the
> > owl: http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-02-14-14.00.log.html
> > 
> > This all sounds like we are being "steamrolled" into using the new
> > owl
> > because things have to align. I'm not asking that you use our owl
> > on
> > your website. But if you want to... then great. I think it is
> > possible
> > to show that things work together without forcing them all to have
> > the
> > same mascot styles:
> >  https://www.linuxfoundation.org/about
> 
> Except that we are not the linux foundation. The linux foundation IS
> a
> loose confederation of unrelated projects that happen to share a
> legal
> parent entity. The LF does great work on behalf of those projects -
> but
> that is not what we are.
> 

I missed that you replied down here as well so take two...

The point there was that it is entirely possible to make graphics that
show how things work together with differently themed mascots. The
foundation doesn't have to force (or even recommend) that projects
adhere to a strict styling guide.

> > But I do think the OpenStack Foundation has overstated its case
> > here
> > and should reverse track a bit. Make it *clear* that projects can
> > keep
> > their own version of their mascots. In fact I think the foundation
> > should encourage them to do so (keep the originals). The opposite
> > seems to be happening on several projects like TripleO and Ironic.
> > 
> > P.S. vintage TripleO owl "beanie babies" would be super cool
> 
> Totally. Make a vintage beanie - that's an awesome idea. I'd wear
> one.
> I've been considering trying to figure out how to make another "What
> the
> F**k is OpenStack?" shirt because mine is dying. The past is cool.
> But
> Dopenstack isn't the present - and honestly that's a good thing. So
> keep
> a sense of nostalgia for the old owl ... but I do NOT think the
> foundation should 'encourage' projects to 'keep' their originals. The
> foundation is expressing that they cannot FORCE a project to do
> anything
> - they do not have that power. But maybe alignment is a thing that
> can
> happen without anyone forcing anyone else to do it?
> 
> It IS important for th

Re: [openstack-dev] The end of OpenStack packages in Debian?

2017-02-16 Thread Dan Prince
Nice work on the packages, Thomas. I've always admired that you got the
Debian packages upstream first :).

Best wishes.

Dan


On Wed, 2017-02-15 at 13:42 +0100, Thomas Goirand wrote:
> Hi there,
> 
> It's been a while since I planned on writing this message. I couldn't
> write it because the situation makes me really sad. At this point, it
> starts to be urgent to post it.
> 
> As for many other folks, Mirantis decided to end its contract with
> me.
> This happened when I was at my most successful in the job, with all
> of
> the packaging CI moved to OpenStack infra at the end of the OpenStack
> Newton cycle, after we were able to release Newton this way. I was
> hoping to start packaging on every commit for Ocata. That's yet
> another
> reason for me to be very frustrated about all of this. Such is
> life...
> 
> Over the last few months, I hoped I would have enough strength to
> continue my packaging work anyway, and get Ocata packages done. But
> that's not what happened. The biggest reason for this is that I know
> that this needs to be a full time job. And at this point, I still
> don't
> know what my professional future will be. A company, in Barcelona,
> told
> me I'd get hired to continue my past work of packaging OpenStack in
> Debian, but so far, I'm still waiting for a definitive answer, so I'm
> looking into some other opportunities.
> 
> All this to say that, unless someone wants to hire me for it (which
> would be the best outcome, but I fear this won't happen), or if
> someone
> steps in (this seems unlikely at this point), both the packaging-deb
> and
> the fate of OpenStack packages in Debian are currently compromised.
> 
> I will continue to maintain OpenStack Newton during the lifetime of
> Debian Stretch though, but I don't plan on doing anything more. This
> means that maybe, Newton will be the last release of OpenStack in
> Debian. If things continue this way, I probably will ask for the
> removal
> of all OpenStack packages from Debian Sid after Stretch gets released
> (unless I know that someone will do the work).
> 
> As a consequence, the following projects won't get packages even in
> Ubuntu (as they were "community maintained", which means done by me
> and
> later sync into Ubuntu...):
> 
> - congress
> - gnocchi
> - magnum
> - mistral
> - murano
> - sahara
> - senlin
> - watcher
> - zaqar
> 
> Hopefully, Canonical will continue to maintain the other 15 (more
> core...) projects in UCA.
> 
> Thanks for the fish,
> 
> Thomas Goirand (zigo)
> 
> P,S: To the infra folks: please keep the packaging CI as it is, as it
> will be useful for the lifetime of Stretch.
> 



Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-16 Thread Dan Prince
On Thu, 2017-02-16 at 08:36 -0500, Emilien Macchi wrote:
> I'll shamelessly cross-post here, I didn't know where to reply.
> I agree with Monty and Flavio here.
> 
> Heidi has been doing an excellent job in helping the OpenStack
> community with $topic, always in the open, which has been very appreciated.
> 
> We, as a team, have agreed on iterating on a new proposal to find a
> new logo that meets both TripleO devs' taste and OpenStack community
> needs. That's how we work together.
> Unfortunately, this choice won't satisfy everyone and that's fine,
> that's how Open Source works I guess.

I don't think I see team agreement here. Was the IRC meeting last
Tuesday what we are using as a basis for this agreement? Not everyone
was present, I think... also, people are quite busy right now with the
upcoming release to be focusing on this discussion.

Wouldn't it be wise to postpone this a bit and gather a larger sampling
of the team's feelings before making a consolidated statement like this?
Also, before committing to using any option, wouldn't it be good to
actually see it first?

Dan

> 
> So the next steps are clear: let's work with Heidi and her team to
> find the best logo for TripleO, and let's all have a Gin Tonic in
> Atlanta.
> 
> And folks, that's a logo, I think we'll all survive.
> Peace.
> 
> On Thu, Feb 16, 2017 at 7:51 AM, Flavio Percoco 
> wrote:
> > On 13/02/17 21:38 -0500, Emilien Macchi wrote:
> > > 
> > > Team, I've got this email from Heidi.
> > > 
> > > I see 3 options :
> > > 
> > > 1. Keep existing logo: http://tripleo.org/_static/tripleo_owl.svg
> > >  .
> > > 
> > > 2. Re-design a new logo that "meets" OpenStack "requirements".
> > > 
> > > 3. Pick-up the one proposed (see below).
> > 
> > 
> > #3
> > 
> > Flavio
> > 
> > > 
> > > 
> > > Personally, I would vote for keeping our existing logo (1.)
> > > unless someone
> > > has time to create another one or if the team likes the proposed
> > > one.
> > > 
> > > The reason why I want to keep our logo is because our current
> > > logo was
> > > created by TripleO devs, we like it and we already have tee-
> > > shirts and
> > > other goodies with it. I don't see any good reason to change it.
> > > 
> > > Discussion is open and we'll vote as a team.
> > > 
> > > Thanks,
> > > 
> > > Emilien.
> > > 
> > > -- Forwarded message --
> > > From: Heidi Joy Tretheway 
> > > Date: Mon, Feb 13, 2017 at 8:27 PM
> > > Subject: TripleO mascot - how can I help your team?
> > > To: Emilien Macchi 
> > > 
> > > 
> > > Hi Emilien,
> > > 
> > > I’m following up on the much-debated TripleO logo. I’d like to
> > > help your
> > > team reach a solution that makes them happy but still fits within
> > > the
> > > family of logos we’re using at the PTG and going forward. Here’s
> > > what our
> > > illustrators came up with, which hides an “O” shape in the owl
> > > (face and
> > > wing arcs).
> > > 
> > > https://www.dropbox.com/sh/qz45miiiam3caiy/AAAzPGYEZRMGH6Otid3bLfHFa?dl=0
> > > At this point, I don’t have quorum from your team (I got a lot of
> > > conflicting feedback, most of which was “don’t like” but not
> > > actionable
> > > for
> > > the illustrators to make a revision). At the PTG, we’ll have
> > > mascot
> > > stickers and signage for all teams except for Ironic and TripleO,
> > > since
> > > we’re still waiting on your teams to make a final decision.
> > > 
> > > May I recommend that your team choose one person (or a small
> > > group of no
> > > more than three) to finalize this? I was able to work through all
> > > of
> > > Swift’s issues with just a quick 15-minute chat with John
> > > Dickinson and
> > > I’d
> > > like to believe we can solve this for TripleO as well.
> > > 
> > > We know some of your team has expressed concern over retiring the
> > > existing
> > > mascot. It’s not our intention to make anyone “get rid of” a
> > > beloved icon.
> > > Your team can certainly print it on vintage items like shirts and
> > > stickers.
> > > But for official channels like the website, we need a logo to
> > > represent
> > > TripleO that’s cohesive with the rest of the set.
> > > 
> > > Perhaps when you’re face to face with your team at the PTG, you
> > > can
> > > discuss
> > > and hopefully render a final decision to either accept this as a
> > > logo, or
> > > determine a few people willing to make any final changes with me?
> > > 
> > > Thanks in advance for your help!
> > > 
> > > 
> > > *Heidi Joy Tretheway*
> > > Senior Marketing Manager, OpenStack Foundation
> > > 503 816 9769 | Skype: heidi.tretheway
> > > 
> > > 
> > >  
> > > 
> > > 
> > > 
> > > 
> > > 
> > > 
> > > --
> > > Emilien Macchi
> > 
> > 

Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-16 Thread Dan Prince
On Thu, 2017-02-16 at 06:36 -0600, Monty Taylor wrote:
> On 02/15/2017 07:54 PM, Dan Prince wrote:
> > At the high level all of this sounds like a fine grand plan: "Help
> > projects create mascots" with a common theme. On the ground level I
> > can
> > tell you it feels a lot more like you are crushing the spirit of
> > creativity and motivation for some of us.
> 
> Haven't we had this argument on the tech side enough? Do we have to
> have
> it all over again just because there are some illustrators and
> foundation staff involved?
> 
> We KEEP deciding as a community that we value cohesion for OpenStack
> over individual projects having unlimited freedom to do whatever the
> heck they want. This is no different. There are now a set of
> logos/mascots that exist within a common visual language. Neat!

Is it *Neat* to make projects and teams feel bad because they don't
want to change their logo? Is it *Neat* to make arguments that we
aren't part of the same communities because some people have done
creative things with logos?

No. It's not neat. It's wrong. And it's sad. Go ahead... look at what
you are creating and tell yourself it is neat, I guess. And I'm sure
you'll get some nice websites and stickers out of it. But if you look
on the ground level at how it makes those who created some of the
original (vintage) stuff feel, it isn't good. There is a subtle "don't
go off creating your own version of things or you are a bad person"
theme here... and that, I'm afraid, will squelch, not encourage,
creativity and innovation.

> 
> > What's in a mascot? I dunno. Call it a force of motivation. Call it
> > team building. In fact, one of the first things I did as PTL of
> > TripleO
> > was create a mascot for the project. Perhaps not officially... but
> > there was agreement among those in the project. And we liked the
> > little
> > guy. And he grew on us. And we even iterated on him a bit and made
> > him
> > better.
> 
> Yah - I hear that. But once again, if the project is "OpenStack" and
> not
> just TripleO - that's exactly what's going on here. And the project
> _is_
> OpenStack. That's what we're here to do, that's what we work on,
> that's
> what we are members of a Foundation to support. Not TripleO, not
> Infra,
> not Nova. This isn't the Oslo Foundation or the Ironic consortium.
> It's
> OpenStack.
> 
> That means, for exactly the reasons you list, that it's important.
> It's
> important to underscore and bolster the fact that we are One
> OpenStack.
> 
> > 
> > 
> > 6 months or so ago we were presented with a new owl from the
> > foundation... which had almost none of the same qualities as the
> > original. Many of us took a survey about that and provided
> > feedback,
> > but I haven't found anyone who was really happy with it. Consensus
> > was
> > we liked the originals. Sometimes sticking with your roots is a
> > good
> > thing.
> > 
> > I happened to be off yesterday but I was really discouraged to read
> > that the team is now convinced we have to adopt your version of the
> > owl: http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-02-14-14.00.log.html
> > 
> > This all sounds like we are being "steamrolled" into using the new
> > owl
> > because things have to align. I'm not asking that you use our owl
> > on
> > your website. But if you want to... then great. I think it is
> > possible
> > to show that things work together without forcing them all to have
> > the
> > same mascot styles:
> >  https://www.linuxfoundation.org/about
> 
> Except that we are not the linux foundation. The linux foundation IS
> a
> loose confederation of unrelated projects that happen to share a
> legal
> parent entity. The LF does great work on behalf of those projects -
> but
> that is not what we are.
> 
> > But I do think the OpenStack Foundation has overstated its case
> > here
> > and should reverse track a bit. Make it *clear* that projects can
> > keep
> > their own version of their mascots. In fact I think the foundation
> > should encourage them to do so (keep the originals). The opposite
> > seems to be happening on several projects like TripleO and Ironic.
> > 
> > P.S. vintage TripleO owl "beanie babies" would be super cool
> 
> Totally. Make a vintage beanie - that's an awesome idea. I'd wear
> one.
> I've been considering trying to figure out how to make another "What
> the
> F**k is OpenStack?" shirt because mine is dying. The past is cool.
> But
> Dopenstack isn't the pr

Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-15 Thread Dan Prince
At the high level all of this sounds like a fine grand plan: "Help
projects create mascots" with a common theme. On the ground level I can
tell you it feels a lot more like you are crushing the spirit of
creativity and motivation for some of us.

What's in a mascot? I dunno. Call it a force of motivation. Call it
team building. In fact, one of the first things I did as PTL of TripleO
was create a mascot for the project. Perhaps not officially... but
there was agreement among those in the project. And we liked the little
guy. And he grew on us. And we even iterated on him a bit and made him
better.



6 months or so ago we were presented with a new owl from the
foundation... which had almost none of the same qualities as the
original. Many of us took a survey about that and provided feedback,
but I haven't found anyone who was really happy with it. Consensus was
we liked the originals. Sometimes sticking with your roots is a good
thing.

I happened to be off yesterday but I was really discouraged to read
that the team is now convinced we have to adopt your version of the
owl: http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-02-14-14.00.log.html

This all sounds like we are being "steamrolled" into using the new owl
because things have to align. I'm not asking that you use our owl on
your website. But if you want to... then great. I think it is possible
to show that things work together without forcing them all to have the
same mascot styles:
 https://www.linuxfoundation.org/about

But I do think the OpenStack Foundation has overstated its case here
and should reverse track a bit. Make it *clear* that projects can keep
their own version of their mascots. In fact I think the foundation
should encourage them to do so (keep the originals). The opposite seems
to be happening on several projects like TripleO and Ironic.

P.S. vintage TripleO owl "beanie babies" would be super cool

Dan


On Wed, 2017-02-15 at 13:26 -0800, Heidi Joy Tretheway wrote:
> Hi Dan, 
> I’m glad you asked! The value of creating a family of logos is in
> communicating that OpenStack projects work together. While some of
> the designs of the existing mascots were great, none of them
> looked like they were part of the same family, and that sent a
> message to the market that the projects themselves didn’t necessarily
> work well together. 
> 
> Also, many teams told us they were happy to have design resources to
> make a logo—about three-quarters of projects didn’t have an existing
> logo, and many wanted one but didn’t have the ability to create their
> own. It’s nice to be able to support all projects in the big tent on
> an even footing.
> 
> All teams were encouraged to choose their own mascots; none was
> forced to select one, and projects with existing logos got the first
> right to keep their mascots, which we worked to blend together in a
> consistent style. We also allow projects with existing mascots to
> continue printing vintage swag, like stickers and T-shirts, out of
> respect for the great efforts of the developers who designed the
> originals. 
> 
> The new logos are used on official channels, like the website, and
> they help us better showcase the projects as a group and promote
> them. I’m working with a few projects that haven’t yet settled on a
> design to ensure we can at least reach a compromise, such as for
> TripleO in moving the design closer to the team’s original. (And on
> that note - I’m doing my best to answer each question individually,
> so I appreciate your patience.) 
> 
> In any design undertaking—and especially with this one, which touches
> 60+ project teams—there will be a lot of conflicting views. That’s
> OK, and we’ve done our best to listen to feedback and adapt to teams’
> preferences. I assure you this isn’t an effort to “corporatize” our
> fabulous open source community, but rather to make it feel more
> cohesive and connected. 
> 
> I hope that when you see all of the logos together—and especially
> when you hear more about why teams chose these mascots—that you’ll
> enjoy them as much as I do. (Fun fact: Why did QA choose a little
> brown bat as its mascot? Because that creature eats its weight in
> bugs every day!) It’s been a real pleasure working with the community
> on this project. 
> 
> —Heidi Joy
> 
> 
> > On Feb 15, 2017, at 12:52 PM, Dan Prince <dpri...@redhat.com>
> > wrote:
> > 
> > The fact that the foundation is involved in "streamlining" team
> > logos
> > just kind of makes me a bit sad I guess. I mean, what value is this
> > really adding to the OpenStack projects?
> > 
> > Devs on many projects spend their own time on creating logos...
> > that
> > they like. I say let them be happy and have their own logo

Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-15 Thread Dan Prince
I was off earlier this week (and unfortunately couldn't attend the
weekly meeting) but from the sound of it a very different conclusion
has been reached within the TripleO community. Apparently, this was
understood to mean that we *could not* keep our existing logo. And that
is a sad thing to do "just because" someone wants to have a common theme.

So I say let's keep it. And continue using it as is...

At least that is the way it is in some communities:

https://www.kernel.org/

And yes. They have stylized old tux as well... but at least they kept
the original too.

Dan

On Wed, 2017-02-15 at 21:12 +, Jeremy Stanley wrote:
> On 2017-02-15 15:52:30 -0500 (-0500), Dan Prince wrote:
> > The fact that the foundation is involved in "streamlining" team
> > logos
> > just kind of makes me a bit sad I guess. I mean, what value is this
> > really adding to the OpenStack projects?
> > 
> > Devs on many projects spend their own time on creating logos...
> > that
> > they like. I say let them be happy and have their own logos. No
> > harm
> > here I think. Move along and let us focus on the important things.
> 
> [...]
> 
> There's definitely nothing stopping various teams from using other
> logos if they want. The OpenStack Foundation doesn't govern the
> project, they merely want to have a consistent set of logos _they_
> can use on their own site when providing aggregate information about
> multiple teams and were hoping that they could gather enough
> feedback to be able to put together something that those teams would
> also be interested in reusing. It's in no way mandatory or exclusive
> of other art you may already have and feel a connection with.



Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-15 Thread Dan Prince
The fact that the foundation is involved in "streamlining" team logos
just kind of makes me a bit sad I guess. I mean, what value is this
really adding to the OpenStack projects?

Devs on many projects spend their own time on creating logos... that
they like. I say let them be happy and have their own logos. No harm
here I think. Move along and let us focus on the important things.

Dan

On Tue, 2017-02-14 at 09:44 -0500, Emilien Macchi wrote:
> Heidi,
> 
> The TripleO team discussed the logo this morning during our weekly
> meeting.
> We understood that we had to change our existing logo because it
> doesn't meet OpenStack theme requirements, and that's fine.
> 
> Though we would like you to iterate on a new logo that would be a
> variation on the current logo, moving closer to the foundation's style.
> 
> Our existing logo is: http://tripleo.org/_static/tripleo_owl.svg
> 
> Please let us know if you can prepare a new logo close to this one,
> and I think we would find a middle ground here.
> 
> Thanks a lot,
> 
> On Tue, Feb 14, 2017 at 7:30 AM, Carlos Camacho Gonzalez
>  wrote:
> > I'll vote also for option 1 if we can keep it.
> > 
> > One thing, you can see how designers have iterated over ironic's
> > logo
> > http://lists.openstack.org/pipermail/openstack-dev/attachments/20170201/a016f685/attachment.png
> > to fit the OpenStack illustration style.
> > 
> > Is it possible for designers to iterate over
> > http://tripleo.org/_static/tripleo_owl.svg to make it fit the
> > guidelines?
> > 
> > Cheers,
> > Carlos.
> > 
> > On Tue, Feb 14, 2017 at 12:34 PM, Lucas Alvares Gomes
> >  wrote:
> > > 
> > > Hi,
> > > 
> > > Just an FYI, we have had similar discussions about the proposed
> > > logo for Ironic. There are still many unanswered questions but, if
> > > you guys are interested, this is the link to the ML thread:
> > > http://lists.openstack.org/pipermail/openstack-dev/2017-February/111401.html
> > > 
> > > Cheers,
> > > Lucas
> > > 
> > > On Tue, Feb 14, 2017 at 2:38 AM, Emilien Macchi wrote:
> > > > 
> > > > Team, I've got this email from Heidi.
> > > > 
> > > > I see 3 options :
> > > > 
> > > > 1. Keep existing logo: http://tripleo.org/_static/tripleo_owl.svg .
> > > > 
> > > > 2. Re-design a new logo that "meets" OpenStack "requirements".
> > > > 
> > > > 3. Pick-up the one proposed (see below).
> > > > 
> > > > 
> > > > Personally, I would vote for keeping our existing logo (1.)
> > > > unless
> > > > someone has time to create another one or if the team likes the
> > > > proposed
> > > > one.
> > > > 
> > > > The reason why I want to keep our logo is because our current
> > > > logo was
> > > > created by TripleO devs, we like it and we already have tee-
> > > > shirts and other
> > > > goodies with it. I don't see any good reason to change it.
> > > > 
> > > > Discussion is open and we'll vote as a team.
> > > > 
> > > > Thanks,
> > > > 
> > > > Emilien.
> > > > 
> > > > -- Forwarded message --
> > > > From: Heidi Joy Tretheway 
> > > > Date: Mon, Feb 13, 2017 at 8:27 PM
> > > > Subject: TripleO mascot - how can I help your team?
> > > > To: Emilien Macchi 
> > > > 
> > > > 
> > > > Hi Emilien,
> > > > 
> > > > I’m following up on the much-debated TripleO logo. I’d like to
> > > > help your
> > > > team reach a solution that makes them happy but still fits
> > > > within the family
> > > > of logos we’re using at the PTG and going forward. Here’s what
> > > > our
> > > > illustrators came up with, which hides an “O” shape in the owl
> > > > (face and
> > > > wing arcs).
> > > > 
> > > > https://www.dropbox.com/sh/qz45miiiam3caiy/AAAzPGYEZRMGH6Otid3bLfHFa?dl=0
> > > > At this point, I don’t have quorum from your team (I got a lot
> > > > of
> > > > conflicting feedback, most of which was “don’t like” but not
> > > > actionable for
> > > > the illustrators to make a revision). At the PTG, we’ll have
> > > > mascot stickers
> > > > and signage for all teams except for Ironic and TripleO, since
> > > > we’re still
> > > > waiting on your teams to make a final decision.
> > > > 
> > > > May I recommend that your team choose one person (or a small
> > > > group of no
> > > > more than three) to finalize this? I was able to work through
> > > > all of Swift’s
> > > > issues with just a quick 15-minute chat with John Dickinson and
> > > > I’d like to
> > > > believe we can solve this for TripleO as well.
> > > > 
> > > > We know some of your team has expressed concern over retiring
> > > > the
> > > > existing mascot. It’s not our intention to make anyone “get rid
> > > > of” a
> > > > beloved icon. Your team can certainly print it on vintage items
> > > > like shirts
> > > > and stickers. But for official channels like the website, we
> > > > need a logo to
> > > > represent TripleO that’s cohesive with the rest of the set.
> 

Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-15 Thread Dan Prince
On Tue, 2017-02-14 at 13:30 +0100, Carlos Camacho Gonzalez wrote:
> I'll vote also for option 1 if we can keep it.
> 
> One thing, you can see how designers have iterated over ironic's logo
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20170201/a016f685/attachment.png
> to fit the OpenStack
> illustration style.
> 
> Is it possible for designers to iterate over
> http://tripleo.org/_static/tripleo_owl.svg to make it fit the
> guidelines?

There was a survey that came out after the last summit and I thought
many of us already asked them to do that... but yes. If they can, it
might be acceptable. But I think we should get to decide if we like it
or not.

Dan

> 
> Cheers,
> Carlos.
> 
> > On Tue, Feb 14, 2017 at 12:34 PM, Lucas Alvares Gomes wrote:
> >
> > Hi,
> >
> > Just an FYI, we have had similar discussions about the proposed logo
> for Ironic. There are still many unanswered questions but, if you guys
> are interested, this is the link to the ML thread:
> http://lists.openstack.org/pipermail/openstack-dev/2017-February/111401.html
> >
> > Cheers,
> > Lucas
> >
> > On Tue, Feb 14, 2017 at 2:38 AM, Emilien Macchi wrote:
> >>
> >> Team, I've got this email from Heidi.
> >>
> >> I see 3 options :
> >>
> >> 1. Keep existing logo: http://tripleo.org/_static/tripleo_owl.svg
> .
> >>
> >> 2. Re-design a new logo that "meets" OpenStack "requirements".
> >>
> >> 3. Pick-up the one proposed (see below).
> >>
> >>
> >> Personally, I would vote for keeping our existing logo (1.) unless
> someone has time to create another one or if the team likes the
> proposed one.
> >>
> >> The reason why I want to keep our logo is because our current logo
> was created by TripleO devs, we like it and we already have tee-
> shirts and other goodies with it. I don't see any good reason to
> change it.
> >>
> >> Discussion is open and we'll vote as a team.
> >>
> >> Thanks,
> >>
> >> Emilien.
> >>
> >> -- Forwarded message --
> >> From: Heidi Joy Tretheway 
> >> Date: Mon, Feb 13, 2017 at 8:27 PM
> >> Subject: TripleO mascot - how can I help your team?
> >> To: Emilien Macchi 
> >>
> >>
> >> Hi Emilien,
> >>
> >> I’m following up on the much-debated TripleO logo. I’d like to
> help your team reach a solution that makes them happy but still fits
> within the family of logos we’re using at the PTG and going forward.
> Here’s what our illustrators came up with, which hides an “O” shape
> in the owl (face and wing arcs).
> >>
> >> https://www.dropbox.com/sh/qz45miiiam3caiy/AAAzPGYEZRMGH6Otid3bLfHFa?dl=0
> >> At this point, I don’t have quorum from your team (I got a lot of
> conflicting feedback, most of which was “don’t like” but not
> actionable for the illustrators to make a revision). At the PTG,
> we’ll have mascot stickers and signage for all teams except for
> Ironic and TripleO, since we’re still waiting on your teams to make a
> final decision.
> >>
> >> May I recommend that your team choose one person (or a small group
> of no more than three) to finalize this? I was able to work through
> all of Swift’s issues with just a quick 15-minute chat with John
> Dickinson and I’d like to believe we can solve this for TripleO as
> well.
> >>
> >> We know some of your team has expressed concern over retiring the
> existing mascot. It’s not our intention to make anyone “get rid of” a
> beloved icon. Your team can certainly print it on vintage items like
> shirts and stickers. But for official channels like the website, we
> need a logo to represent TripleO that’s cohesive with the rest of the
> set.
> >>
> >> Perhaps when you’re face to face with your team at the PTG, you
> can discuss and hopefully render a final decision to either accept
> this as a logo, or determine a few people willing to make any final
> changes with me?
> >>
> >> Thanks in advance for your help!
> >>
> >>
> >> Heidi Joy Tretheway
> >> Senior Marketing Manager, OpenStack Foundation
> >> 503 816 9769 | Skype: heidi.tretheway
> >>  
> >>
> >>
> >>
> >>
> >>
> >>
> >> --
> >> Emilien Macchi
> >>
> >>

Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-15 Thread Dan Prince
Option #1 keep it.

Dan

On Mon, 2017-02-13 at 21:38 -0500, Emilien Macchi wrote:
> Team, I've got this email from Heidi.
> 
> I see 3 options :
> 
> 1. Keep existing logo: http://tripleo.org/_static/tripleo_owl.svg .
> 
> 2. Re-design a new logo that "meets" OpenStack "requirements".
> 
> 3. Pick-up the one proposed (see below).
> 
> 
> Personally, I would vote for keeping our existing logo (1.) unless
> someone has time to create another one or if the team likes the
> proposed one.
> 
> The reason why I want to keep our logo is because our current logo
> was created by TripleO devs, we like it and we already have tee-
> shirts and other goodies with it. I don't see any good reason to
> change it.
> 
> Discussion is open and we'll vote as a team.
> 
> Thanks,
> 
> Emilien. 
> 
> -- Forwarded message --
> From: Heidi Joy Tretheway 
> Date: Mon, Feb 13, 2017 at 8:27 PM
> Subject: TripleO mascot - how can I help your team?
> To: Emilien Macchi 
> 
> 
> Hi Emilien, 
> 
> I’m following up on the much-debated TripleO logo. I’d like to help
> your team reach a solution that makes them happy but still fits
> within the family of logos we’re using at the PTG and going forward.
> Here’s what our illustrators came up with, which hides an “O” shape
> in the owl (face and wing arcs). 
> 
> https://www.dropbox.com/sh/qz45miiiam3caiy/AAAzPGYEZRMGH6Otid3bLfHFa?dl=0
> 
> At this point, I don’t have quorum from your team (I got a lot of
> conflicting feedback, most of which was “don’t like” but not
> actionable for the illustrators to make a revision). At the PTG,
> we’ll have mascot stickers and signage for all teams except for
> Ironic and TripleO, since we’re still waiting on your teams to make a
> final decision. 
> 
> May I recommend that your team choose one person (or a small group of
> no more than three) to finalize this? I was able to work through all
> of Swift’s issues with just a quick 15-minute chat with John
> Dickinson and I’d like to believe we can solve this for TripleO as
> well. 
> 
> We know some of your team has expressed concern over retiring the
> existing mascot. It’s not our intention to make anyone “get rid of” a
> beloved icon. Your team can certainly print it on vintage items like
> shirts and stickers. But for official channels like the website, we
> need a logo to represent TripleO that’s cohesive with the rest of the
> set. 
> 
> Perhaps when you’re face to face with your team at the PTG, you can
> discuss and hopefully render a final decision to either accept this
> as a logo, or determine a few people willing to make any final
> changes with me? 
> 
> Thanks in advance for your help!
> 
> 
> Heidi Joy Tretheway
> Senior Marketing Manager, OpenStack Foundation
> 503 816 9769 | Skype: heidi.tretheway
> 
> 
> 
> 
> 
> 
> -- 
> Emilien Macchi



Re: [openstack-dev] [tripleo] tripleo-heat-templates, vendor plugins and the new hiera hook

2017-01-27 Thread Dan Prince
On Wed, 2017-01-25 at 14:59 +0200, Marios Andreou wrote:
> Hi, as part of the composable upgrades workflow shaping up for Newton
> to
> Ocata, we need to install the new hiera hook that was first added
> with
> [1] and disable the old hook and data as part of the upgrade
> initialization [2]. Most of the existing hieradata was ported to use
> the
> new hook in [3]. The deletion of the old hiera data is necessary for
> the
> Ocata upgrade, but it also means it will break any plugins still
> using
> the 'old' os-apply-config hiera hook.


Nice catch on the old vendor hieradata. I clearly missed those
interfaces for the in-tree extraconfig data when doing these
conversions (sorry about that). It would be nice to get some sort of
coverage on these interfaces, I guess.

The new hook uses JSON and is much cleaner. We were accumulating a lot
of hacks in t-h-t to work around deficiencies with the old o-a-c YAML
element mechanism. What this means is that a conversion tool is hard.
Not impossible, but it might not cover 100% of the cases, I think, due
to the differences in how YAML and JSON can handle arrays and such with
all the conversions going on.
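
For a sense of what a best-effort conversion could look like, plain
hieradata can be round-tripped through Python. A sketch only -- the
file names here are made up, PyYAML is assumed to be installed, and
anything that leaned on o-a-c element extensions is exactly what this
won't catch:

  # Convert one old-style YAML hieradata file to the new JSON format.
  python -c 'import sys, yaml, json; json.dump(yaml.safe_load(sys.stdin), sys.stdout, indent=2)' \
      < old_hieradata.yaml > new_hieradata.json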

Updating the rest of the in-tree interfaces (like you have done) should
get most of it. For any out-of-tree extraconfig code that relies on the
old hiera element, would it be reasonable to fail fast instead? There
isn't a great place to do this unfortunately, but a couple of options:

1) in the agent hook itself: https://review.openstack.org/#/c/426241/1

2) in the old hiera hook: https://review.openstack.org/#/c/425955/

Option #1 handles signals more nicely but couples the old and new
implementations a bit with the extra check. Option #2 doesn't currently
handle signaling nicely (as shardy pointed out in the review).
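
As a rough sketch of what the fail-fast in option #2 could amount to
(purely illustrative -- the real behavior is whatever lands in the
reviews above), the old hook could be reduced to a stub that refuses to
run:

  #!/bin/bash
  # Hypothetical stub standing in for the old os-apply-config hiera
  # hook: any template still feeding data to it fails loudly at deploy
  # time instead of silently half-configuring the node.
  echo "ERROR: the legacy hiera element was removed in Ocata;" >&2
  echo "port your templates to the new JSON-based hiera hook." >&2
  exit 1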

Dan

> 
> In order to be able to upgrade to Ocata any templates that define
> hiera
> data need to be using the new hiera hook and then the overcloud nodes
> need to have the new hook installed (installing is done in [2] as a
> matter of necessity, and that is what prompted this email in the
> first
> place). I've had a go at updating all the plugin templates that are
> still using the old hiera data with a review at [4], which I have
> -1'd for now.
> 
> I'll try and reach out to some individuals more directly as well but
> wanted to get the review at [4] and this email out as a first step,
> 
> thanks, marios
> 
> [1] https://review.openstack.org/#/c/379733/
> [2]
> https://review.openstack.org/#/c/424715/2/extraconfig/tasks/newton_oc
> ata_upgrade_init_common.sh
> [3] https://review.openstack.org/#/c/384757/
> [4] https://review.openstack.org/#/c/425154/
> 



Re: [openstack-dev] [tripleo] proposing Michele Baldessari part of core team

2016-11-09 Thread Dan Prince
+1 from me.

Dan

On Fri, 2016-11-04 at 13:40 -0400, Emilien Macchi wrote:
> Michele Baldessari (bandini on IRC) has consistently demonstrated
> high levels of contribution in TripleO projects, specifically in the
> High Availability area, where he's a guru for us (I still don't
> understand how pacemaker works, but hopefully he does).
> 
> He has done incredible work on composable services and also on
> improving our HA configuration by following reference architectures.
> Always here during meetings, and on #tripleo to give support to our
> team, he's a great team player and we are lucky to have him onboard.
> I believe he would be a great core reviewer on HA-related work and we
> expect his review stats to continue improving as his scope broadens
> over time.
> 
> As usual, feedback is welcome and please vote for this proposal!
> 
> Thanks,



Re: [openstack-dev] [tripleo] Setting up to 3rd party CI OVB jobs

2016-10-12 Thread Dan Prince
On Fri, 2016-10-07 at 09:03 -0400, Paul Belanger wrote:
> Greetings,
> 
> I wanted to propose a work item, which I am happy to spearhead, about
> setting up
> a 3rd party CI system for the tripleo project. The work I am proposing
> wouldn't
> actually affect anything about tripleo-ci today, but would provide a
> working example
> of how 3rd party CI will work and a potential migration path.
> 
> This is just one example of how it would work; obviously everything
> is open for
> discussion, but I think you'll find the plan to be workable.
> Additionally, this
> topic would only apply to OVB jobs; existing jobs already running on
> cloud
> providers from openstack-infra would not be affected.


The plan you describe here sounds reasonable. Testing out a 3rd party
system in parallel to our existing CI causes no harm and certainly
allows us to evaluate things and learn from the new setup.

A couple of things I would like to see discussed a bit more (either
here or in a new thread if deemed unrelated) are how we benefit from
these changes in making the OVB jobs 3rd party.

There are at least 3 groups who likely care about this along with how
this benefits them:

-the openstack-infra team:

  * standardization: doesn't have to deal with special case OVB clouds

-the tripleo OVB cloud/CI maintainers:

  * Can manage the 3rd party cloud how they like it, using images or whatever,
with less regard for openstack-infra compatibility.

-the tripleo core team:

  * The OVB jobs are mostly the same, but their maintenance is
potentially diverging further from upstream. So is there any benefit to
3rd party for the core team? Unclear to me at this point. The OVB jobs
aren't running any faster than they are today, and maintaining them
might even get harder for some, given that we would have different base
images across our upstream infra multinode jobs and what we run via the
OVB 3rd party testing.



The tripleo-ci end-to-end test jobs have always fallen into the high
maintenance category. We've only recently switched to OVB and one of
the nice things about doing that is we are using something much closer
to stock OpenStack vs. our previous CI cloud. Sure there are some OVB
configuration differences to enable testing of baremetal in the cloud
but we are using more OpenStack to drive things. So by simply using
more OpenStack within our CI we should be more closely aligning with
infra. A move in the right direction anyway.

Going through all this effort, I really would like to see all the teams
gain from it. Like, for me the point of having upstream
tripleo-ci tests is that we catch breakages. Breakages that no other
upstream projects are catching. And the solution to stopping those
breakages from happening isn't IMO to move some of the most valuable CI
tests into 3rd party. That may cover over some of the maintenance rubs
in the short/mid term perhaps. But I view it as a bit of a retreat in
where we could be with upstream testing.

So rather than just taking what we have in the OVB jobs today and
recreating the same long-running (1.5 hours +) CI job (which catches
lots of things), could we re-imagine the pipeline a bit in the process
so we improve this? I guess my concern is that we'll go to all the
trouble of moving this and actually end up negatively impacting the
speed with which the tripleo core team can land code instead of
increasing it. What I'm asking is: in doing this move, can we raise the
bar for TripleO core a bit too?

Dan 


> 
> What I am proposing is we move tripleo-test-cloud-rh2 (currently
> disabled) from
> openstack-infra (nodepool) to rdoproject (nodepool).  This gives us a
> cloud we
> can use for OVB; we know it works because OVB jobs have run on it
> before.
> 
> There are a few issues we'd first need to work on; specifically, since
> rdoproject.org is currently using SoftwareFactory[1], we'd need to
> have them
> add support for nodepool-builder. This is needed so we can use the
> existing
> DIB elements that openstack-infra does to create centos-7 images
> (which tripleo
> uses today). We have 2 options, wait for SF team to add support for
> this (I
> don't know how long that is, but they know of the request) or we
> manually setup
> a external nodepool-builder instance for rdoproject.org, which
> connects to
> nodepool.rdoproject.org via gearman (I suggest we do this).
> 
> Once that issue is solved, things are a little easier.  It would just
> be a
> matter of porting upstream CI configuration to rdoproject.org and
> validating
> images, JJB jobs and test validation. Cloud credentials removed from
> openstack-infra and added to rdoproject.org.
> 
> I'd basically need help from rdoproject (eg: dmsimard) with some of
> the admin
> tasks, along with a VM for nodepool-builder. We already have the
> 3rdparty CI
> bits setup in rdoproject.org, we are actually running DLRN builds on
> python-tripleoclient / python-openstackclient upstream patches.
> 
> I think the biggest step is 

Re: [openstack-dev] [tripleo] website update

2016-10-11 Thread Dan Prince
On Tue, 2016-10-11 at 18:08 -0500, Ben Nemec wrote:
> 
> On 10/11/2016 02:42 PM, Dan Prince wrote:
> > 
> > A quick update on the tripleo.org website outage today. We are in
> > the
> > process of moving the server to a new host location. Until DNS
> > updates
> > please use the following URL if you want to access the CI status
> > report:
> > 
> > http://66.187.229.219/cistatus.html
> 
> This doesn't seem to be updating for me.  I'm pretty sure I've been 
> seeing the same patches listed there pretty much all day.  Was this 
> maybe driven by a cron job that we're missing on the new server?

Thanks for pointing this out, Ben. The script was in place to re-run
it, but I had a pushd without a popd. I've always had mixed feelings
about pushd/popd, rather preferring to just 'cd' to the directory
explicitly. Anyway, it is all fixed up, so automatic updates should be
back now.
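
For illustration, the shape of the fix (a sketch only -- the actual
cron script and paths aren't shown here, so these names are made up):

  #!/bin/bash
  # The bug: a `pushd /var/www/tripleo-ci` with no matching `popd`,
  # which left every later command running from the wrong directory.
  # The fix, in the spirit described above: skip the directory stack
  # and cd explicitly, bailing out if the path is missing.
  cd /var/www/tripleo-ci || exit 1
  ./generate-cistatus.sh > cistatus.html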

Dan

> 



[openstack-dev] [tripleo] website update

2016-10-11 Thread Dan Prince
A quick update on the tripleo.org website outage today. We are in the
process of moving the server to a new host location. Until DNS updates
please use the following URL if you want to access the CI status
report:

http://66.187.229.219/cistatus.html

Dan



Re: [openstack-dev] [TripleO] TripleO Core nominations

2016-09-23 Thread Dan Prince
On Thu, 2016-09-15 at 10:20 +0100, Steven Hardy wrote:
> Hi all,
> 
> As we work to finish the last remaining tasks for Newton, it's a good
> time
> to look back over the cycle, and recognize the excellent work done by
> several new contributors.
> 
> We've seen a different contributor pattern develop recently, where
> many
> folks are subsystem experts and mostly focus on a particular project
> or
> area of functionality.  I think this is a good thing, and it's
> hopefully
> going to allow our community to scale more effectively over time (and
> it
> fits pretty nicely with our new composable/modular architecture).
> 
> We do still need folks who can review with the entire TripleO
> architecture
> in mind, but I'm very confident folks will start out as subsystem
> experts
> and over time broaden their area of experience to encompass more of
> the TripleO projects (we're already starting to see this IMO).
> 
> We've had some discussion in the past[1] about strictly defining
> subteams,
> vs just adding folks to tripleo-core and expecting good judgement to
> be
> used (e.g only approve/+2 stuff you're familiar with - and note that
> it's
> totally fine for a core reviewer to continue to +1 things if the
> patch
> looks OK but is outside their area of experience).
> 
> So, I'm in favor of continuing that pattern and just welcoming some
> of our
> subsystem expert friends to tripleo-core, let me know if folks feel
> strongly otherwise :)
> 
> The nominations, are based partly on the stats[2] and partly on my
> own
> experience looking at reviews, patches and IRC discussion with these
> folks
> - I've included details of the subsystems I expect these folks to
> focus
> their +2A power on (at least initially):
> 
> 1. Brent Eagles
> 
> Brent has been doing some excellent work mostly related to Neutron
> this
> cycle - his reviews have been increasingly detailed, and show a solid
> understanding of our composable services architecture.  He's also
> provided
> a lot of valuable feedback on specs such as dpdk and sr-iov.  I
> propose
> Brent continues this excellent Neutron-focussed work, while also
> expanding
> his review focus such as the good feedback he's been providing on new
> Mistral actions in tripleo-common for custom-roles.
> 
> 2. Pradeep Kilambi
> 
> Pradeep has done a large amount of pretty complex work around
> Ceilometer
> and Aodh over the last two cycles - he's dealt with some pretty tough
> challenges around upgrades and has consistently provided good review
> feedback and solid analysis via discussion on IRC.  I propose Prad
> continues this excellent Ceilometer/Aodh focussed work, while also
> expanding review focus aiming to cover more of t-h-t and other repos
> over
> time.
> 
> 3. Carlos Camacho
> 
> Carlos has been mostly focussed on composability, and has done a
> great job
> of working through the initial architecture implementation, including
> writing some very detailed initial docs[3] to help folks make the
> transition
> to the new architecture.  I'd suggest that Carlos looks to maintain
> this
> focus on composable services, while also building depth of reviews in
> other
> repos.
> 
> 4. Ryan Brady
> 
> Ryan has been one of the main contributors implementing the new
> Mistral
> based API in tripleo-common.  His reviews, patches and IRC discussion
> have
> consistently demonstrated that he's an expert on the mistral
> actions/workflows and I think it makes sense for him to help with
> review
> velocity in this area, and also look to help with those subsystems
> interacting with the API such as tripleoclient.
> 
> 5. Dan Sneddon
> 
> For many cycles, Dan has been driving direction around our network
> architecture, and he's been consistently doing a relatively small
> number of
> very high-quality and insightful reviews on both os-net-config and
> the
> network templates for tripleo-heat-templates.  I'd suggest Dan
> continues
> this focus, and he's indicated he may have more bandwidth to help
> with
> reviews around networking in future.
> 
> Please can I get feedback from existing core reviewers - you're free
> to +1
> these nominations (or abstain), but any -1 will veto the
> process.  I'll
> wait one week, and if we have consensus add the above folks to
> tripleo-core.
> 
> Finally, there are quite a few folks doing great work that are not on
> this
> list, but seem to be well on track towards core status.  Some of
> those
> folks I've already reached out to, but if you're not nominated now,
> please
> don't be disheartened, and feel free to chat to me on IRC about
> it.  Also
> note the following:
> 
>  - We need folks to regularly show up, establishing a long-term
> pattern of
>    doing useful reviews, but core status isn't about raw number of
> reviews,
>    it's about consistent downvotes and detailed, well considered and
>    insightful feedback that helps increase quality and catch issues
> early.
> 
>  - Try to spend some time reviewing stuff outside your normal area of

Re: [openstack-dev] [TripleO] Easier way of trying TripleO

2016-08-22 Thread Dan Prince
On Fri, 2016-08-19 at 15:31 -0400, Dan Prince wrote:
> On Tue, 2013-11-19 at 16:40 -0500, James Slagle wrote:
> > 
> > I'd like to propose an idea around a simplified and complimentary
> > version of
> > devtest that makes it easier for someone to get started and try
> > TripleO.  
> > 
> > The goal being to get people using TripleO as a way to experience
> > the
> > deployment of OpenStack, and not necessarily a way to get an
> > experience of a
> > useable OpenStack cloud itself.
> > 
> > To that end, we could:
> > 
> > 1) Provide an undercloud vm image so that you could effectively
> > skip
> > the entire
> >    seed setup.
> 
> The question here for me is what are you proposing to use to create
> this image? Is it something that could live in tripleo-puppet-
> elements
> like we manage the overcloud package dependencies? Or is it more than
> this? I'd like to not have to build another alternate tool to help
> manage this.
> 
> What if instead of an undercloud image we just created the undercloud
> locally out of containers? Similar to what I've recently proposed
> with
> the heat all-in-one installer here:
> https://dprince.github.io/tripleo-onward-dark-owl.html
> we could leverage the containers composable service
> work for the overcloud in t-h-t and get containers support in the
> undercloud for free.
> 
> If you still want to run an undercloud VM you could configure things
> that way locally, or provide an image with containers in it I guess
> too.
> 
> I'm fine supporting an easier developer case for TripleO but I'd like
> to ultimately have less duplication across the maintenance of the
> Undercloud and Overcloud as part of our solutions for these things
> too.
> 
> Dan

I had a good laugh when James pinged me about this on IRC this morning.

I must have sorted my openstack-dev folder incorrectly... for whatever
reason this message came to my attention on Friday evening, so I
decided it was worth a reply.

So a bit of mismatched context here... probably best to have a laugh and move 
along. :)

Dan



> 
> > 
> > 2) Provide pre-built downloadable images for the overcloud and
> > deployment
> >    kernel and ramdisk.
> > 3) Instructions on how to use these images to deploy a running
> >    overcloud.
> > 
> > Images could be provided for Ubuntu and Fedora, since both those
> > work
> > fairly
> > well today.
> > 
> > The instructions would look something like:
> > 
> > 1) Download all the images.
> > 2) Perform initial host setup.  This would be much smaller than
> > what
> > is
> >    required for devtest and off the top of my head would mostly be:
> >    - openvswitch bridge setup
> >    - libvirt configuration
> >    - ssh configuration (for the baremetal virtual power driver)
> > 3) Start the undercloud vm.  It would need to be bootstrapped with
> > an
> > initial
> >    static json file for the heat metadata, same as the seed works
> > today.
> > 4) Any last mile manual configuration, such as nova.conf edits for
> > the virtual
> >    power driver user.
> > 6) Use tuskar+horizon (running on the undercloud)  to deploy the
> > overcloud.
> > 7) Overcloud configuration (don't see this being much different
> > than
> > what is
> >    there today).
> > 
> > All the openstack clients, heat templates, etc., are on the
> > undercloud vm, and
> > that's where they're used from, as opposed to from the host
> > (results
> > in less stuff
> > to install/configure on the host).
> > 
> > We could also provide instructions on how to configure the
> > undercloud
> > vm to
> > provision baremetal.  I assume this would be possible, given the
> > correct
> > bridged networking setup.
> > 
> > It could make sense to use an all-in-one overcloud for this as well,
> > given it's going for simplification.
> > 
> > Obviously, this approach implies some image management on the
> > community's part,
> > and I think we'd document and use all the existing tools (dib,
> > elements) to
> > build images, etc.
> > 
> > Thoughts on this approach?  
> > 
> > --
> > -- James Slagle
> > --
> > 


Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-20 Thread Dan Prince
On Wed, 2016-08-17 at 23:42 +, arkady_kanev...@dell.com wrote:
> What is the goal of undercloud?
> Primarily to deploy and manage/upgrade/update overcloud.
> It is not targeted for multitenancy and the only “application”
> running on it is overcloud.
> While it may have a couple of VMs running in the undercloud, that is
> more a convenience than an actual need.
>  
> So which OpenStack projects need to run in the undercloud to achieve
> its primary goal?

Our "TripleO" undercloud only requires a subset of the available
services that we run int the Overcloud. So ironic, heat, mistral,
zaqar, keystone, nova, swift, glance, neutron. These would mostly
satisfy our needs.

>  
> Having a robust undercloud that can handle faults, like node or
> network failures, is more important than being able to deploy all
> OpenStack services on it.

I think we can have all of these things. By using Heat we are a step
closer to an HA undercloud. The fact that we can deploy all the other
services on the undercloud too may seem irrelevant, until it isn't.
This sort of "everything can be in your undercloud" use case could be
quite cool in fact. I don't think we'd force the idea on anyone though
and if it takes some time for people to warm up to the latter use case
that is fine.

The primary point in doing all this is to benefit the TripleO
undercloud via code and template re-use. I think this stands on its
own.

Dan

>  
> Arkady
> -Original Message-
> From: Dan Prince [mailto:dpri...@redhat.com] 
> Sent: Friday, August 05, 2016 6:35 AM
> To: OpenStack Development Mailing List (not for usage questions) 
> Subject: Re: [openstack-dev] [TripleO] a new Undercloud install
> driven by Heat
> 
> On Fri, 2016-08-05 at 12:27 +0200, Dmitry Tantsur wrote:
> > On 08/04/2016 11:48 PM, Dan Prince wrote:
> > > 
> > > Last week I started some prototype work on what could be a new way
> > > to install the Undercloud. The driving force behind this was some of
> > > the recent "composable services" work we've done in TripleO, so
> > > initially I called it composable undercloud. There is an etherpad
> > > here with links to some of the patches already posted upstream (many
> > > of which stand as general improvements on their own outside the
> > > scope of what I'm talking about here).
> > > 
> > > https://etherpad.openstack.org/p/tripleo-composable-undercloud
> > > 
> > > The idea in short is that we could spin up a small single-process
> > > all-in-one heat-all (engine and API) and thereby avoid things like
> > > Rabbit and MySQL. Then we can use Heat templates to drive the
> > > Undercloud deployment just like we do in the Overcloud.
> > I don't want to sound rude, but please no. The fact that you have a
> > hammer does not mean everything around is nails :( What problem are
> > you trying to solve by doing it?
> 
> Several problems I think.
> 
> One is that TripleO has gradually moved away from elements. And while
> we still use DIB elements for some things, we no longer favor that tool
> and instead rely on Heat and config management tooling to do our
> stepwise deployment ordering. This leaves us using instack-undercloud,
> a tool built specifically to install elements locally, as a means to
> create our undercloud. It works... and I do think we've packaged it
> nicely, but it isn't the best architectural fit for where we are going,
> I think. I actually think that from an end-user contribution
> standpoint, using t-h-t could be quite nice for adding features to
> the Undercloud.
> 
> Second would be re-use. We just spent a huge amount of time in Newton
> (and some in Mitaka) refactoring t-h-t around composable services. So
> say you add a new composable service for Barbican in the Overcloud...
> wouldn't it be nice to be able to consume the same thing in your
> Undercloud as well? Right now you can't; you have to do some of the
> work twice, in quite different formats I think. Sure, there is
> some amount of shared puppet work, but that is only part of the
> picture I think.
> 
> There are new features to think about here too. Once upon a time
> TripleO supported multi-node underclouds. When we switched to
> instack-undercloud we moved away from that. By switching back to
> tripleo-heat-templates we could structure our templates around
> abstractions like resource groups and the new 'deployed-server' trick
> that allows you to create machines either locally or perhaps via
> Ironic too. We could avoid Ironic entirely and always install the
> Undercloud on existing servers via 'deployed-server' as well.
> 
> Lastly, there is container work ongoing for the Overcloud. Again, I'd
> like to see us adopt a format that would allow it to be used in the
> Undercloud as well as opposed to having to re-implement features in the
> Over and Under clouds all the time.

Re: [openstack-dev] [TripleO] Easier way of trying TripleO

2016-08-19 Thread Dan Prince
On Tue, 2013-11-19 at 16:40 -0500, James Slagle wrote:
> I'd like to propose an idea around a simplified and complementary
> version of devtest that makes it easier for someone to get started
> and try TripleO.
> 
> The goal being to get people using TripleO as a way to experience the
> deployment of OpenStack, and not necessarily a way to get an
> experience of a usable OpenStack cloud itself.
> 
> To that end, we could:
> 
> 1) Provide an undercloud vm image so that you could effectively skip
> the entire
>    seed setup.

The question here for me is what are you proposing to use to create
this image? Is it something that could live in tripleo-puppet-elements
like we manage the overcloud package dependencies? Or is it more than
this? I'd like to not have to build another alternate tool to help
manage this.

What if instead of an undercloud image we just created the undercloud
locally out of containers? Similar to what I've recently proposed with
the heat all-in-one installer here:
https://dprince.github.io/tripleo-onward-dark-owl.html
we could leverage the containers composable service work for the
overcloud in t-h-t and get containers support in the undercloud for
free.

If you still want to run an undercloud VM you could configure things
that way locally, or provide an image with containers in it I guess
too.

I'm fine supporting an easier developer case for TripleO but I'd like
to ultimately have less duplication across the maintenance of the
Undercloud and Overcloud as part of our solutions for these things too.

Dan

> 2) Provide pre-built downloadable images for the overcloud and
> deployment
>    kernel and ramdisk.
> 3) Instructions on how to use these images to deploy a running
>    overcloud.
> 
> Images could be provided for Ubuntu and Fedora, since both those work
> fairly
> well today.
> 
> The instructions would look something like:
> 
> 1) Download all the images.
> 2) Perform initial host setup.  This would be much smaller than what
> is
>    required for devtest and off the top of my head would mostly be:
>    - openvswitch bridge setup
>    - libvirt configuration
>    - ssh configuration (for the baremetal virtual power driver)
> 3) Start the undercloud vm.  It would need to be bootstrapped with an
> initial
>    static json file for the heat metadata, same as the seed works
> today.
> 4) Any last mile manual configuration, such as nova.conf edits for
> the virtual
>    power driver user.
> 5) Use tuskar+horizon (running on the undercloud) to deploy the
> overcloud.
> 6) Overcloud configuration (don't see this being much different than
>    what is there today).
> 
> All the openstack clients, heat templates, etc., are on the
> undercloud vm, and
> that's where they're used from, as opposed to from the host (results
> in less stuff
> to install/configure on the host).
> 
> We could also provide instructions on how to configure the undercloud
> vm to
> provision baremetal.  I assume this would be possible, given the
> correct
> bridged networking setup.
> 
> It could make sense to use an all-in-one overcloud for this as well,
> given it's going for simplification.
> 
> Obviously, this approach implies some image management on the
> community's part,
> and I think we'd document and use all the existing tools (dib,
> elements) to
> build images, etc.
> 
> Thoughts on this approach?  
> 
> --
> -- James Slagle
> --
> 


Re: [openstack-dev] [TripleO] Network Template Generator

2016-08-08 Thread Dan Prince
On Mon, 2016-08-08 at 15:42 -0500, Ben Nemec wrote:
> This is something that has existed for a while, but I had been
> hesitant
> to evangelize it until it was a little more proven.  At this point
> I've
> used it to generate templates for a number of different environments,
> and it has worked well.  I decided it was time to record another demo
> and throw it out there for the broader community to look at.  See
> details on my blog:
> http://blog.nemebean.com/content/tripleo-network-isolation-template-g
> enerator
> 
> Most of what you need to know is either there or in the video itself.
> Let me know what you think.

Very cool. For those that don't like "hand cutting" their own network
configuration templates, this is a good CLI-based generator.

Like you mention, it would be nice to eventually converge this tool
somehow into both the UI and CLI, but given that it works with older
releases as well it makes sense that it is CLI-only for now.

Dan

> 
> Thanks.
> 
> -Ben
> 


Re: [openstack-dev] [tripleo] HA with only one node.

2016-08-06 Thread Dan Prince
On Sat, 2016-08-06 at 13:21 -0400, Adam Young wrote:
> As I try to debug Federation problems, I am often finding I have to
> check three nodes to see where the actual request was processed.
> However, if I shut down two of the controller nodes in Nova, the whole
> thing just fails.
> 
> 
> So, while that in itself is a problem, what I would like to be able to
> do in development is have HA running, but with only a single
> controller node answering requests. How do I do that?

I have a $HOME/custom.yaml environment file which contains this:

parameters:
  ControllerCount: 1

If you do something similar and then include that environment in your
--environments list you should end up with just a single controller.

Do this in addition to using environments/puppet-pacemaker.yaml and you
should have "single node HA" (aka pacemaker on a single controller).

Dan

> 
> 


Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Dan Prince
On Fri, 2016-08-05 at 13:56 +0200, Dmitry Tantsur wrote:
> On 08/05/2016 01:21 PM, Steven Hardy wrote:
> > 
> > On Fri, Aug 05, 2016 at 12:27:40PM +0200, Dmitry Tantsur wrote:
> > > 
> > > On 08/04/2016 11:48 PM, Dan Prince wrote:
> > > > 
> > > > Last week I started some prototype work on what could be a new
> > > > way to install the Undercloud. The driving force behind this was
> > > > some of the recent "composable services" work we've done in
> > > > TripleO, so initially I called it composable undercloud. There is
> > > > an etherpad here with links to some of the patches already posted
> > > > upstream (many of which stand as general improvements on their
> > > > own outside the scope of what I'm talking about here).
> > > > 
> > > > https://etherpad.openstack.org/p/tripleo-composable-undercloud
> > > > 
> > > > The idea in short is that we could spin up a small single-process
> > > > all-in-one heat-all (engine and API) and thereby avoid things like
> > > > Rabbit and MySQL. Then we can use Heat templates to drive the
> > > > Undercloud deployment just like we do in the Overcloud.
> > > I don't want to sound rude, but please no. The fact that you have
> > > a hammer
> > > does not mean everything around is nails :( What problem are you
> > > trying to
> > > solve by doing it?
> > I think Dan explains it pretty well in his video, and your comment
> > indicates a fundamental misunderstanding around the entire TripleO
> > vision,
> > which is about symmetry and reuse between deployment tooling and
> > the
> > deployed cloud.
> Well, except you need some non-openstack starting point, because,
> unlike with e.g. ansible, installing any openstack service(s) does not
> end at "dnf install ".
> 
> > 
> > 
> > The problems this would solve are several:
> > 
> > 1. Remove divergence between undercloud and overcloud puppet
> > implementation
> > (instead of having an undercloud specific manifest, we reuse the
> > *exact*
> > same stuff we use for overcloud deployments)
> I'm not against reusing puppet bits; I'm against building the same
> heavy abstraction layer with heat around it.

What do you mean by heavy, exactly? The entire point here was to
demonstrate that this can work and *is* actually quite lightweight, I
think.

We are already building an abstraction layer. So why not just use it in
two places instead of one?

> 
> > 
> > 
> > 2. Better modularity, far easier to enable/disable services
> Why? Do you expect enabling/disabling Nova, for example? In this
> regard undercloud is fundamentally different from overcloud: for the
> former we have a list of required services and a pretty light list of
> optional services.

I think this is a very narrow view of the Undercloud, and it ignores the
fact that continually adding booleans to enable or disable features is
not scalable. Using the same composability and deployment framework we
have developed for the Overcloud makes more sense to me.

There is also real potential here to re-use this as a means to install
other package-based types of setups. An "anything is an undercloud"
sort of approach could be the next logical step... all of this for free
because we are building abstractions to install these things in the
Overcloud as well.

> 
> > 
> > 
> > 3. Get container integration "for free" when we land it in the
> > overcloud
> > 
> > 4. Any introspection and debugging workflow becomes identical
> > between the
> > undercloud and overcloud
> I would love a defined debugging workflow for the overcloud first...

The nice thing about the demo I showed for debugging is that all the
output comes back to the console. Heat, os-collect-config, puppet,
etc., all there at your fingertips. Set 'debug=True' and you have
everything you need, I think.

After building it I've quite enjoyed how fast it is to test and debug
while creating a prototype undercloud.yaml.

> 
> > 
> > 
> > 5. We remove dependencies on a bunch of legacy scripts which run
> > outside of
> > puppet
> If you mean the instack-undercloud elements, we're getting rid of them
> anyway, no?

We mean all of the elements. Besides a few bootstrapping things we have
gradually moved towards using Heat hooks to run things as opposed to
the traditional os-apply-config/os-refresh-config hooks. This provides
b

Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Dan Prince
On Fri, 2016-08-05 at 13:39 +0200, Thomas Herve wrote:
> On Thu, Aug 4, 2016 at 11:48 PM, Dan Prince <dpri...@redhat.com>
> wrote:
> > 
> > Last week I started some prototype work on what could be a new way
> > to
> > install the Undercloud. The driving force behind this was some of
> > the
> > recent "composable services" work we've done in TripleO so
> > initially I
> > called in composable undercloud. There is an etherpad here with
> > links
> > to some of the patches already posted upstream (many of which stand
> > as
> > general imporovements on their own outside the scope of what I'm
> > talking about here).
> > 
> > https://etherpad.openstack.org/p/tripleo-composable-undercloud
> > 
> > The idea in short is that we could spin up a small single process
> > all-
> > in-one heat-all (engine and API) and thereby avoid things like
> > Rabbit,
> > and MySQL.
> I saw those patches coming; I'm interested in the all-in-one approach,
> if only for testing purposes. I hope to be able to propose a solution
> with broker-less RPC instead of fake RPC at some point, but it's a
> good first step.
> 
> I'm a bit more intrigued by the no-auth patch. It seems that Heat
> would rely heavily on Keystone interactions even after initial
> authentication, so I wonder how that works. As it seems you would need
> to push the same approach to Ironic, have you considered starting
> Keystone instead? It's a simple WSGI service, and can work with
> SQLite
> as well I believe.

You are correct. Noauth wasn't enough. I had to add a bit more to make
OS::Heat::SoftwareDeployments happy to get the templates I showed in
the demo working. Surprisingly, though, if I avoided
OS::Heat::SoftwareDeployments and only used OS::Heat::SoftwareConfig
resources in my templates, no extra keystone auth was needed. This is
because heat only creates the extra Keystone user, trust, etc. when
realizing the software deployments, I think.
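
(For context, a bare OS::Heat::SoftwareConfig just stores a config and
deploys nothing by itself; a minimal illustration of the resource type,
not one of the actual demo templates:)

  heat_template_version: 2016-04-08
  resources:
    undercloud_config:
      type: OS::Heat::SoftwareConfig
      properties:
        group: script
        config: |
          #!/bin/bash
          # placeholder script body; a real template would configure services
          echo "configuring the undercloud"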

I started with this, which should work for multiple projects besides
just Heat:
https://review.openstack.org/#/c/351351/2/tripleoclient/fake_keystone.py

I'd be happy to swap in full Keystone if people prefer, but that would
be more memory and setup. Keystone dropped its eventlet runner
recently, so we'd have to fork another WSGI process to run it, I think,
somewhere in an out-of-the-way (non-default ports, etc.) fashion. I was
trying to keep the project list minimal, so I stubbed in only what was
functionally needed here, with an eye that we'd actually (at some
point) make heat support true noauth again.

Dan

> 



Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-05 Thread Dan Prince
On Fri, 2016-08-05 at 12:27 +0200, Dmitry Tantsur wrote:
> On 08/04/2016 11:48 PM, Dan Prince wrote:
> > 
> > Last week I started some prototype work on what could be a new way
> > to
> > install the Undercloud. The driving force behind this was some of
> > the
> > recent "composable services" work we've done in TripleO so
> > initially I
> > called in composable undercloud. There is an etherpad here with
> > links
> > to some of the patches already posted upstream (many of which stand
> > as
> > general imporovements on their own outside the scope of what I'm
> > talking about here).
> > 
> > https://etherpad.openstack.org/p/tripleo-composable-undercloud
> > 
> > The idea in short is that we could spin up a small single-process
> > all-in-one heat-all (engine and API) and thereby avoid things like
> > Rabbit and MySQL. Then we can use Heat templates to drive the
> > Undercloud deployment just like we do in the Overcloud.
> I don't want to sound rude, but please no. The fact that you have a 
> hammer does not mean everything around is nails :( What problem are
> you 
> trying to solve by doing it?

Several problems I think.

One is that TripleO has gradually moved away from elements. And while we
still use DIB elements for some things, we no longer favor that tool and
instead rely on Heat and config management tooling to do our stepwise
deployment ordering. This leaves us using instack-undercloud, a tool
built specifically to install elements locally, as a means to create our
undercloud. It works... and I do think we've packaged it nicely, but it
isn't the best architectural fit for where we are going, I think. I
actually think that from an end-user contribution standpoint, using
t-h-t could be quite nice for adding features to the Undercloud.

Second would be re-use. We just spent a huge amount of time in Newton
(and some in Mitaka) refactoring t-h-t around composable services. So
say you add a new composable service for Barbican in the Overcloud...
wouldn't it be nice to be able to consume the same thing in your
Undercloud as well? Right now you can't; you have to do some of the
work twice, in quite different formats I think. Sure, there is some
amount of shared puppet work, but that is only part of the picture I
think.

There are new features to think about here too. Once upon a time
TripleO supported multi-node underclouds. When we switched to
instack-undercloud we moved away from that. By switching back to
tripleo-heat-templates we could structure our templates around
abstractions like resource groups and the new 'deployed-server' trick
that allows you to create machines either locally or perhaps via
Ironic too. We could avoid Ironic entirely and always install the
Undercloud on existing servers via 'deployed-server' as well.

Lastly, there is container work ongoing for the Overcloud. Again, I'd
like to see us adopt a format that would allow it to be used in the
Undercloud as well as opposed to having to re-implement features in the
Over and Under clouds all the time.

> 
> Undercloud installation is already sometimes fragile, but it's
> probably the least fragile part right now (at least from my
> experience). And at the very least it's pretty obviously debuggable in
> most cases. THT is hard to understand and often impossible to debug.
> I'd prefer we move away from THT completely rather than trying to fix
> it in one more place where heat does not fit.

What tool did you have in mind? FWIW I started with heat because, by
using just Heat, I was able to take the initial steps to prototype this.

In my mind Mistral might be next here, and in fact it already supports
the single-process launching idea. Keeping the undercloud installer as
light as possible would be ideal though.

Dan

> 
> > 
> > 
> > I created a short video demonstration which goes over some of the
> > history behind the approach, and shows a live demo of all of this
> > working with the patches above:
> > 
> > https://www.youtube.com/watch?v=y1qMDLAf26Q
> > 
> > Thoughts? Would it be cool to have a session to discuss this more
> > in
> > Barcelona?
> > 
> > Dan Prince (dprince)
> > 

[openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-04 Thread Dan Prince
Last week I started some prototype work on what could be a new way to
install the Undercloud. The driving force behind this was some of the
recent "composable services" work we've done in TripleO, so initially I
called it composable undercloud. There is an etherpad here with links
to some of the patches already posted upstream (many of which stand as
general improvements on their own outside the scope of what I'm
talking about here).

https://etherpad.openstack.org/p/tripleo-composable-undercloud

The idea in short is that we could spin up a small single-process
all-in-one heat-all (engine and API) and thereby avoid things like
Rabbit and MySQL. Then we can use Heat templates to drive the Undercloud
deployment just like we do in the Overcloud.
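
(A rough sketch of what launching such a process could look like; the
heat-all launcher is what the patches above add, and the config options
shown are illustrative placeholders, not taken verbatim from the
prototype:)

  # minimal config: in-process fake RPC instead of Rabbit,
  # SQLite instead of MySQL (option names illustrative)
  cat > /tmp/heat.conf <<'EOF'
  [DEFAULT]
  rpc_backend = fake
  [database]
  connection = sqlite:////tmp/heat.sqlite
  EOF

  heat-all --config-file /tmp/heat.conf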

I created a short video demonstration which goes over some of the
history behind the approach, and shows a live demo of all of this
working with the patches above:

https://www.youtube.com/watch?v=y1qMDLAf26Q

Thoughts? Would it be cool to have a session to discuss this more in
Barcelona?

Dan Prince (dprince)



[openstack-dev] [TripleO] CI package build issues

2016-06-23 Thread Dan Prince
After discovering some regressions today we found what we think is a
package build issue in our CI environment which might be the cause of
our issues:

https://bugs.launchpad.net/tripleo/+bug/1595660

Specifically, there is a case where DLRN might not return an error
code when build failures occur, so our jobs don't get the updated
package symlink and give a false positive.

Until we get this solved, be careful when merging. You might look for
'packages not built correctly: not updating the consistent symlink' in
the job output. I see over 200 of these in the last 24 hours:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22not%20updating%20the%20consistent%20symlink%5C%22%20AND%20filename%3A%5C%22console.html%5C%22
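
(Or, to check a single job log by hand — a sketch, with a placeholder
log path:)

  # scan a job's console log for the failure marker (URL illustrative)
  curl -s http://logs.openstack.org/SOME/JOB/console.html | \
    grep 'not updating the consistent symlink'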

Dan



Re: [openstack-dev] [tripleo] all CI jobs are failing: error: seed_2.qcow2 not in qcow2 format

2016-06-19 Thread Dan Prince
Thanks Emilien,

I marked the bug below as a duplicate of this issue I had filed earlier
with the title "CI jobs failing, mirror-server is down
(192.168.1.101)":

https://bugs.launchpad.net/tripleo/+bug/1594161

The root cause of the issue seemed to be that the mirror server was
unresponsive. I took a look this afternoon; the compute host running the
mirror server was totally unresponsive. I rebooted it and then noticed
nova-api on the controller node was also throwing errors. It turns out
we also had a bunch of nodepool nodes stuck in error/deleting. After
cleaning those up I restarted services on the compute node and was able
to get the mirror server running again.

Let's see what happens now...

Dan

On Sun, 2016-06-19 at 20:03 -0400, Emilien Macchi wrote:
> Hi,
> 
> It seems like all CI jobs are currently red, I reported the bug here:
> https://bugs.launchpad.net/tripleo/+bug/1594203
> 
> "qemu-system-x86_64: -drive
> file=/var/lib/libvirt/images/seed_2.qcow2,if=none,id=drive-sata0-0-
> 0,format=qcow2,cache=unsafe:
> could not open disk image /var/lib/libvirt/images/seed_2.qcow2: not
> in
> qcow2 format"
> 
> Latest successful run: 2016-06-19 12:48:27 (EST)
> I have investigated a bit but have not found the root cause yet;
> moreover, it could be something on the CI servers, and I have no
> access.
> 
> Thanks,



Re: [openstack-dev] [UX] [TripleO] TripleO UI Initial Wireframes

2016-06-16 Thread Dan Prince
I left some comments on the wireframes themselves. One general concept
I would like to see captured is making sure that things across the UI
and CLI have parity.

Specifically, things like node registration: on the CLI we use a JSON
file format:

http://tripleo.org/environments/environments.html#instackenv
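
(For reference, an instackenv.json of that era looks roughly like the
following; every value here is a made-up placeholder, so take the whole
block as illustrative:)

  {
    "nodes": [
      {
        "pm_type": "pxe_ipmitool",
        "pm_addr": "10.0.0.8",
        "pm_user": "admin",
        "pm_password": "password",
        "mac": ["52:54:00:aa:bb:cc"],
        "cpu": "4",
        "memory": "8192",
        "disk": "40",
        "arch": "x86_64"
      }
    ]
  }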

Supporting creation of individual nodes is fine as well, since a
command-line user could just run Ironic client commands directly too.

I also left a comment about the screen with multiple plans. I like this
idea and it is something that we can pursue, but due to the fact that we
use a flat physical deployment network, there would need to be some
extra care in setting up the network ranges, VLANs, etc. across multiple
plans (see the sketch below). Again, this is something I would like to
see us support and document with the CLI before we go and expose the
capability in the UI.
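
(A hedged sketch of the kind of per-plan separation that implies, using
the network-isolation parameter names from t-h-t; the values are made
up:)

  # plan A's network environment (values illustrative)
  parameter_defaults:
    InternalApiNetCidr: 172.17.0.0/24
    InternalApiNetworkVlanID: 201
    InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]

  # a second plan would need a disjoint CIDR/VLAN/pool on the same flat network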

Dan


On Mon, 2016-06-06 at 15:03 -0400, Liz Blanchard wrote:
> Hi All,
> 
> I wanted to share some brainstorming we've done on the TripleO UI. I
> put together wireframes[1] to reflect some ideas we have on moving
> forward with features in the UI and would love to get any feedback
> you all have. Feel free to comment via this email or comment within
> InVision.
> 
> Best,
> Liz
> 
> [1] https://invis.io/KW7JTXBBR


Re: [openstack-dev] [TripleO] Proposed TripleO core changes

2016-06-16 Thread Dan Prince
On Thu, 2016-06-09 at 15:03 +0100, Steven Hardy wrote:
> Hi all,
> 
> I've been in discussion with Martin André and Tomas Sedovic, who are
> involved with the creation of the new tripleo-validations repo[1]
> 
> We've agreed that rather than create another gerrit group, they can
> be
> added to tripleo-core and agree to restrict +A to this repo for the
> time
> being (hopefully they'll both continue to review more widely, and
> obviously
> Tomas is a former TripleO core anyway, so welcome back! :)
> 
> If folks feel strongly we should create another group we can, but
> this
> seems like a low-overhead approach, and well aligned with the scope
> of the
> repo, let me know if you disagree.


For more isolated projects that can be used standalone I have a slight
preference for sub-teams. I recently proposed this for os-net-config:

https://review.openstack.org/#/c/307975/

If we think tripleo-validations is more of a "TripleO" thing and won't
be useful outside of TripleO proper then I think adding them to
tripleo-core is probably fine. If our intent is to make this a generic
set of validations then perhaps a subteam makes sense.

For now, I'm totally fine adding André and Tomas to tripleo-core too.

> 
> Also, while reviewing the core group[2] I noticed the following
> members who
> are no longer active and should probably be removed:
> 
> - Radomir Dopieralski
> - Martyn Taylor
> - Clint Byrum

+1 for these changes.

> 
> I know Clint is still involved with DiB (which has a separate core
> group),
> but he's indicated he's no longer going to be directly involved in
> other
> tripleo development, and AFAIK neither Martyn or Radomir are actively
> involved in TripleO reviews - thanks to them all for their
> contribution,
> we'll gladly add you back in the future should you wish to return :)
> 
> Please let me know if there are any concerns or objections, if there
> are
> none I will make these changes next week.
> 
> Thanks,
> 
> Steve
> 
> [1] https://github.com/openstack/tripleo-validations
> [2] https://review.openstack.org/#/admin/groups/190,members
> 


  1   2   3   >