Re: [openstack-dev] [puppet] [stable] Deprecation of newton branches

2018-11-19 Thread Alex Schultz
On Mon, Nov 19, 2018 at 1:18 AM Tobias Urdin  wrote:
>
> Hello,
>
> We've been talking for a while about the deprecation and removal of the
> stable/newton branches.
> I think it's time now that we get rid of them; we have no open patches
> and Newton is considered EOL.
>
> Could cores get back with quick feedback, and then the stable team can
> get rid of those whenever they have time?
>

yes please. let's EOL them

> Best regards
> Tobias
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [tripleo] Proposing Enrique Llorente Pastora as a core reviewer for TripleO

2018-11-15 Thread Alex Schultz
+1
On Thu, Nov 15, 2018 at 8:51 AM Sagi Shnaidman  wrote:
>
> Hi,
> I'd like to propose Quique (@quiquell) as a core reviewer for TripleO. Quique 
> is actively involved in improvements and development of TripleO and TripleO 
> CI. He also helps in other projects including but not limited to 
> Infrastructure.
> He shows a very good understanding of how TripleO and CI work, and I'd like 
> to suggest him as a core reviewer of TripleO for CI-related code.
>
> Please vote!
> My +1 is here :)
>
> Thanks
> --
> Best regards
> Sagi Shnaidman



[openstack-dev] [tripleo] puppet5 has broken the master gate

2018-11-12 Thread Alex Schultz
Just a heads up: we recently updated to puppet5 in the master
dependencies. It appears that this has completely hosed the master
scenarios and containers-multinode jobs.  Please do not recheck/approve
anything until we get this resolved.

See https://bugs.launchpad.net/tripleo/+bug/1803024

I have a possible fix (https://review.openstack.org/#/c/617441/) but
it's probably a better idea to roll back the puppet package if
possible.

Thanks,
-Alex



Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage

2018-11-05 Thread Alex Schultz
On Mon, Nov 5, 2018 at 3:47 AM Bogdan Dobrelya  wrote:
>
> Let's also think of removing puppet-tripleo from the base container.
> It really brings the world in (and yum updates in CI!) for each job and each
> container!
> So if we did so, we should then either install puppet-tripleo and co on
> the host and bind-mount it for the docker-puppet deployment task steps
> (bad idea IMO), OR use the magical --volumes-from 
> option to mount volumes from some "puppet-config" sidecar container
> inside each of the containers being launched by docker-puppet tooling.
>

This does bring up an interesting point, as we also include this in
overcloud-full. I know Dan had a patch to stop using the
puppet-tripleo from the host[0], which is the opposite of this.  While
these yum updates happen a bunch in CI, they aren't super large
updates. But yes I think we need to figure out the correct way forward
with these packages.

Thanks,
-Alex

[0] https://review.openstack.org/#/c/550848/
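For reference, the sidecar approach described above could look roughly like the following. This is a sketch of the pattern only, not the actual docker-puppet tooling; the container and image names are invented for illustration:

```shell
# A data-only "sidecar" container that exists solely to expose the puppet
# modules as a volume; it is created but never actually run.
docker create --name puppet-config \
  -v /usr/share/openstack-puppet/modules \
  example/puppet-modules:latest /bin/true

# Config containers then mount the modules read-only from the sidecar
# instead of carrying puppet-tripleo and friends in their own image layers.
docker run --rm --volumes-from puppet-config:ro \
  example/nova-api:latest \
  puppet apply --modulepath /usr/share/openstack-puppet/modules /etc/config.pp
```

The trade-off is that every config container now depends on the sidecar image being present and current on the host.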


> On 10/31/18 6:35 PM, Alex Schultz wrote:
> >
> > So this is a single layer that is updated once and shared by all the
> > containers that inherit from it. I did notice the same thing and have
> > proposed a change in the layering of these packages last night.
> >
> > https://review.openstack.org/#/c/614371/
> >
> > In general this does raise a point about dependencies of services and
> > what the actual impact of adding new ones to projects is. Especially
> > in the container world where this might be duplicated N times
> > depending on the number of services deployed.  With the move to
> > containers, much of the sharedness that being on a single host
> > provided has been lost at a cost of increased bandwidth, memory, and
> > storage usage.
> >
> > Thanks,
> > -Alex
> >
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando



Re: [openstack-dev] [tripleo] gate issues please do not approve/recheck

2018-11-01 Thread Alex Schultz
Ok, since the podman revert patch has been successfully merged and
we've landed most of the non-voting scenario patches, it should be OK
to restore/recheck.  It would be a good idea to prioritize things to
land, and if something is not critical, let's hold off on approving it
until we're sure the gate is in much better shape.

Thanks,
-Alex

On Wed, Oct 31, 2018 at 9:39 AM Alex Schultz  wrote:
>
> Hey folks,
>
> So we have identified an issue that has been causing a bunch of
> failures and proposed a revert of our podman testing[0].  We have
> cleared the gate and are asking that you not approve or recheck any
> patches at this time.  We will let you know when it is safe to start
> approving things.
>
> Thanks,
> -Alex
>
> [0] https://review.openstack.org/#/c/614537/



Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage

2018-10-31 Thread Alex Schultz
On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås  wrote:
>
> On Tue, 2018-10-30 at 15:00 -0600, Alex Schultz wrote:
> > On Tue, Oct 30, 2018 at 12:25 PM Clark Boylan 
> > wrote:
> > >
> > > On Tue, Oct 30, 2018, at 10:42 AM, Alex Schultz wrote:
> > > > On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec <
> > > > openst...@nemebean.com> wrote:
> > > > >
> > > > > Tagging with tripleo since my suggestion below is specific to
> > > > > that project.
> > > > >
> > > > > On 10/30/18 11:03 AM, Clark Boylan wrote:
> > > > > > Hello everyone,
> > > > > >
> > > > > > A little while back I sent email explaining how the gate
> > > > > > queues work and how fixing bugs helps us test and merge more
> > > > > > code. All of this is still true and we should keep
> > > > > > pushing to improve our testing to avoid gate resets.
> > > > > >
> > > > > > Last week we migrated Zuul and Nodepool to a new Zookeeper
> > > > > > cluster. In the process of doing this we had to restart Zuul
> > > > > > which brought in a new logging feature that exposes node
> > > > > > resource usage by jobs. Using this data I've been able to
> > > > > > generate some report information on where our node demand is
> > > > > > going. This change [0] produces this report [1].
> > > > > >
> > > > > > As with optimizing software we want to identify which changes
> > > > > > will have the biggest impact and to be able to measure
> > > > > > whether or not changes have had an impact once we have made
> > > > > > them. Hopefully this information is a start at doing that.
> > > > > > Currently we can only look back to the point Zuul was
> > > > > > restarted, but we have a thirty day log rotation for this
> > > > > > service and should be able to look at a month's worth of data
> > > > > > going forward.
> > > > > >
> > > > > > Looking at the data you might notice that Tripleo is using
> > > > > > many more node resources than our other projects. They are
> > > > > > aware of this and have a plan [2] to reduce their resource
> > > > > > consumption. We'll likely be using this report generator to
> > > > > > check progress of this plan over time.
> > > > >
> > > > > I know at one point we had discussed reducing the concurrency
> > > > > of the
> > > > > tripleo gate to help with this. Since tripleo is still using
> > > > > >50% of the
> > > > > resources it seems like maybe we should revisit that, at least
> > > > > for the
> > > > > short-term until the more major changes can be made? Looking
> > > > > through the
> > > > > merge history for tripleo projects I don't see a lot of cases
> > > > > (any, in
> > > > > fact) where more than a dozen patches made it through anyway*,
> > > > > so I
> > > > > suspect it wouldn't have a significant impact on gate
> > > > > throughput, but it
> > > > > would free up quite a few nodes for other uses.
> > > > >
> > > >
> > > > It's the failures in gate and resets.  At this point I think it
> > > > would
> > > > be a good idea to turn down the concurrency of the tripleo queue
> > > > in
> > > > the gate if possible. As of late it's been timeouts but we've
> > > > been
> > > > unable to track down why it's timing out specifically.  I
> > > > personally
> > > > have a feeling it's the container download times since we do not
> > > > have
> > > > a local registry available and are only able to leverage the
> > > > mirrors
> > > > for some levels of caching. Unfortunately we don't get the best
> > > > information about this out of docker (or the mirrors) and it's
> > > > really
> > > > hard to determine what exactly makes things run a bit slower.
> > >
> > > We actually tried this not too long ago
> > > https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=22d98f7aab0fb23849f715a8796384cffa84600b
> > >  but decided to revert it because it didn't decrease the check
> > >

[openstack-dev] [tripleo] reducing our upstream CI footprint

2018-10-31 Thread Alex Schultz
Hey everyone,

Based on previous emails around this[0][1], I have proposed a possible
reduction in our usage by switching the scenario001-011 jobs to
non-voting and removing them from the gate[2]. This will reduce the
likelihood of causing gate resets and hopefully allow us to land
corrective patches sooner.  In terms of risks, there is a risk that we
might introduce breaking changes in the scenarios now that they are
non-voting, but we will still be gating promotions on these
scenarios.  This means that if they are broken, they will need the
same attention and care to fix, so we should be vigilant when the
jobs are failing.

The hope is that we can switch these scenarios out for voting
standalone versions in the next few weeks, but until then I think we
should proceed by removing them from the gate.  I know this is less
than ideal, but as most failures with these jobs in the gate are either
timeouts or unrelated to the changes (or the gate queue), they are more
of a hindrance than a help at this point.

Thanks,
-Alex

[0] http://lists.openstack.org/pipermail/openstack-dev/2018-October/136141.html
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135396.html
[2] https://review.openstack.org/#/q/topic:reduce-tripleo-usage+(status:open+OR+status:merged)



[openstack-dev] [tripleo] gate issues please do not approve/recheck

2018-10-31 Thread Alex Schultz
Hey folks,

So we have identified an issue that has been causing a bunch of
failures and proposed a revert of our podman testing[0].  We have
cleared the gate and are asking that you not approve or recheck any
patches at this time.  We will let you know when it is safe to start
approving things.

Thanks,
-Alex

[0] https://review.openstack.org/#/c/614537/



Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage

2018-10-30 Thread Alex Schultz
On Tue, Oct 30, 2018 at 12:25 PM Clark Boylan  wrote:
>
> On Tue, Oct 30, 2018, at 10:42 AM, Alex Schultz wrote:
> > On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec  wrote:
> > >
> > > Tagging with tripleo since my suggestion below is specific to that 
> > > project.
> > >
> > > On 10/30/18 11:03 AM, Clark Boylan wrote:
> > > > Hello everyone,
> > > >
> > > > A little while back I sent email explaining how the gate queues work 
> > > > and how fixing bugs helps us test and merge more code. All of this 
> > > > is still true and we should keep pushing to improve our testing 
> > > > to avoid gate resets.
> > > >
> > > > Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In 
> > > > the process of doing this we had to restart Zuul which brought in a new 
> > > > logging feature that exposes node resource usage by jobs. Using this 
> > > > data I've been able to generate some report information on where our 
> > > > node demand is going. This change [0] produces this report [1].
> > > >
> > > > As with optimizing software we want to identify which changes will have 
> > > > the biggest impact and to be able to measure whether or not changes 
> > > > have had an impact once we have made them. Hopefully this information 
> > > > is a start at doing that. Currently we can only look back to the point 
> > > > Zuul was restarted, but we have a thirty day log rotation for this 
> > > > service and should be able to look at a month's worth of data going 
> > > > forward.
> > > >
> > > > Looking at the data you might notice that Tripleo is using many more 
> > > > node resources than our other projects. They are aware of this and have 
> > > > a plan [2] to reduce their resource consumption. We'll likely be using 
> > > > this report generator to check progress of this plan over time.
> > >
> > > I know at one point we had discussed reducing the concurrency of the
> > > tripleo gate to help with this. Since tripleo is still using >50% of the
> > > resources it seems like maybe we should revisit that, at least for the
> > > short-term until the more major changes can be made? Looking through the
> > > merge history for tripleo projects I don't see a lot of cases (any, in
> > > fact) where more than a dozen patches made it through anyway*, so I
> > > suspect it wouldn't have a significant impact on gate throughput, but it
> > > would free up quite a few nodes for other uses.
> > >
> >
> > It's the failures in gate and resets.  At this point I think it would
> > be a good idea to turn down the concurrency of the tripleo queue in
> > the gate if possible. As of late it's been timeouts but we've been
> > unable to track down why it's timing out specifically.  I personally
> > have a feeling it's the container download times since we do not have
> > a local registry available and are only able to leverage the mirrors
> > for some levels of caching. Unfortunately we don't get the best
> > information about this out of docker (or the mirrors) and it's really
> > hard to determine what exactly makes things run a bit slower.
>
> We actually tried this not too long ago 
> https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=22d98f7aab0fb23849f715a8796384cffa84600b
>  but decided to revert it because it didn't decrease the check queue backlog 
> significantly. We were still running at several hours behind most of the time.
>
> If we want to set up better monitoring and measuring and try it again we can 
> do that. But we probably want to measure queue sizes with and without the 
> change like that to better understand if it helps.
>
> As for container image download times can we quantify that via docker logs? 
> Basically sum up the amount of time spent by a job downloading images so that 
> we can see what the impact is but also measure if changes improve that? As 
> for other ideas improving things seems like many of the images that tripleo 
> use are quite large. I recall seeing a > 600MB image just for rsyslog. 
> Wouldn't it be advantageous for both the gate and tripleo in the real world 
> to trim the size of those images (which should improve download times). In 
> any case quantifying the size of the downloads and trimming those if possible 
> is likely also worthwhile.
>
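Summing per-image pull time, as suggested above, could be sketched roughly like this once the job log exposes pull start/finish events. The log format and timestamps below are fabricated for illustration; real docker/journald output would need its own parsing:

```shell
# Rough sketch: sum per-image pull wall time from a timestamped job log.
# Assumed (invented) line format: "<epoch-secs> pull start|done <image>".
sum_pull_seconds() {
  awk '
    /pull start/ { start[$4] = $1 }                           # remember start
    /pull done/  { if ($4 in start) total += $1 - start[$4] } # add elapsed
    END          { print total + 0 }                          # 0 if no matches
  '
}

# Fabricated example log:
printf '%s\n' \
  '100 pull start centos-binary-nova-api' \
  '160 pull done centos-binary-nova-api' \
  '200 pull start centos-binary-rsyslog' \
  '245 pull done centos-binary-rsyslog' | sum_pull_seconds   # prints 105
```

Graphing that number per job over time would make it easy to see whether image-size trimming or a local registry actually moves the needle.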

So it's not that simple as we don't just download all the images in a
distinct task and 

Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage

2018-10-30 Thread Alex Schultz
On Tue, Oct 30, 2018 at 11:36 AM Ben Nemec  wrote:
>
> Tagging with tripleo since my suggestion below is specific to that project.
>
> On 10/30/18 11:03 AM, Clark Boylan wrote:
> > Hello everyone,
> >
> > A little while back I sent email explaining how the gate queues work and 
> > how fixing bugs helps us test and merge more code. All of this is 
> > still true and we should keep pushing to improve our testing to avoid gate 
> > resets.
> >
> > Last week we migrated Zuul and Nodepool to a new Zookeeper cluster. In the 
> > process of doing this we had to restart Zuul which brought in a new logging 
> > feature that exposes node resource usage by jobs. Using this data I've been 
> > able to generate some report information on where our node demand is going. 
> > This change [0] produces this report [1].
> >
> > As with optimizing software we want to identify which changes will have the 
> > biggest impact and to be able to measure whether or not changes have had an 
> > impact once we have made them. Hopefully this information is a start at 
> > doing that. Currently we can only look back to the point Zuul was 
> > restarted, but we have a thirty day log rotation for this service and 
> > should be able to look at a month's worth of data going forward.
> >
> > Looking at the data you might notice that Tripleo is using many more node 
> > resources than our other projects. They are aware of this and have a plan 
> > [2] to reduce their resource consumption. We'll likely be using this report 
> > generator to check progress of this plan over time.
>
> I know at one point we had discussed reducing the concurrency of the
> tripleo gate to help with this. Since tripleo is still using >50% of the
> resources it seems like maybe we should revisit that, at least for the
> short-term until the more major changes can be made? Looking through the
> merge history for tripleo projects I don't see a lot of cases (any, in
> fact) where more than a dozen patches made it through anyway*, so I
> suspect it wouldn't have a significant impact on gate throughput, but it
> would free up quite a few nodes for other uses.
>

It's the failures in gate and resets.  At this point I think it would
be a good idea to turn down the concurrency of the tripleo queue in
the gate if possible. As of late it's been timeouts but we've been
unable to track down why it's timing out specifically.  I personally
have a feeling it's the container download times since we do not have
a local registry available and are only able to leverage the mirrors
for some levels of caching. Unfortunately we don't get the best
information about this out of docker (or the mirrors) and it's really
hard to determine what exactly makes things run a bit slower.

I've asked about the status of moving the scenarios off of multinode
to standalone, which would halve the number of systems being run for
these jobs. It's currently next on the list of things to tackle after
we get a single fedora28 job up and running.

Thanks,
-Alex

> *: I have no actual stats to back that up, I'm just looking through the
> IRC backlog for merge bot messages. If such stats do exist somewhere we
> should look at them instead. :-)
>
> >
> > Also related to the long queue backlogs is this proposal [3] to change how 
> > Zuul prioritizes resource allocations to try to be more fair.
> >
> > [0] https://review.openstack.org/#/c/613674/
> > [1] http://paste.openstack.org/show/733644/
> > [2] 
> > http://lists.openstack.org/pipermail/openstack-dev/2018-October/135396.html
> > [3] http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-October/000575.html
> >
> > If you find any of this interesting and would like to help feel free to 
> > reach out to myself or the infra team.
> >
> > Thank you,
> > Clark
> >
> >
>



[openstack-dev] [tripleo] retirement of instack

2018-10-29 Thread Alex Schultz
With the proposed retirement of instack-undercloud[0], we will also no
longer be supporting future development of the instack project. As
with instack-undercloud, we will continue to support the stable
branches of instack for their life, but we will not be doing any
further development.  Please let me know if there are any issues.

Thanks,
-Alex

[0] http://lists.openstack.org/pipermail/openstack-dev/2018-October/136098.html



[openstack-dev] [puppet][tripleo] puppet 5.5.7 breaks a bunch of stuff

2018-10-26 Thread Alex Schultz
Just a heads up. I've been battling with some unit test issues with
the latest version of puppet 5.5.  I've proposed some fixes[0][1], but
it appears that there is a larger issue with legacy functions which
affects the stable branches.  I've reported the issues[2][3] upstream
to Puppetlabs, but it'll likely be some time before we have any
resolution. In the meantime I would recommend pinning to 5.5.6 if
possible.

Thanks,
-Alex

[0] https://bugs.launchpad.net/puppet-nova/+bug/1799757
[1] https://bugs.launchpad.net/tripleo/+bug/1799786
[2] https://tickets.puppetlabs.com/browse/PUP-9270
[3] https://tickets.puppetlabs.com/browse/PUP-9271
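If it helps, the pin could look something like the following. The exact mechanism depends on how your environment installs puppet, so treat these commands as an illustrative sketch rather than a tested recipe:

```shell
# Illustrative only -- exact commands depend on how puppet is installed.

# Gem-based unit test environments: pin the gem below the broken release,
# e.g. in a Gemfile:  gem 'puppet', '= 5.5.6'
gem install puppet -v 5.5.6

# RPM-based environments: downgrade and hold the package via versionlock.
sudo yum install -y yum-plugin-versionlock
sudo yum downgrade -y puppet-5.5.6
sudo yum versionlock add puppet
```

Remember to remove the versionlock entry once the upstream PUP tickets are resolved, or the fixed release will never install.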



[openstack-dev] [tripleo] retirement of instack-undercloud

2018-10-26 Thread Alex Schultz
We have officially moved off of the instack-undercloud deployment
process in Rocky and have removed its support from
python-tripleoclient in Stein.  In order to prevent confusion I have
proposed a patch to start the retirement of instack-undercloud[0].  We
will continue to support the stable branches for their life but we
don't want any further patches to instack-undercloud going forward.
Please let me know if there are any issues.

Thanks,
-Alex

[0] https://review.openstack.org/#/c/613621/



Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-25 Thread Alex Schultz
On Thu, Oct 25, 2018 at 9:16 AM Bogdan Dobrelya  wrote:
>
>
> On 10/19/18 8:04 PM, Alex Schultz wrote:
> > On Fri, Oct 19, 2018 at 10:53 AM James Slagle  
> > wrote:
> >>
> >> On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz  
> >> wrote:
> >> > Additionally I took a stab at combining the puppet/docker service
> >> > definitions for the aodh services in a similar structure to start
> >> > reducing the overhead we've had from maintaining the docker/puppet
> >> > implementations separately.  You can see the patch
> >> > https://review.openstack.org/#/c/611188/ for an additional example of
> >> > this.
> >>
> >> That patch takes the approach of removing baremetal support. Is that
> >> what we agreed to do?
> >>
> >
> > It has been deprecated since Queens[0], so yes? I think it is time to stop
> > continuing this method of installation.  Given that I'm not even sure
>
> My point and concern remain as before: unless we have fully dropped the
> docker support for Queens (and the downstream LTS released for it), we
> should not modify the t-h-t directory tree, due to the associated
> backport maintenance complexity
>

This is why we have duplication of things in THT.  For environment
files this is actually an issue, due to the fact that they are the end
user interface. But these service files should be internal, and where
they live should not matter.  We have already had this in the past and
have managed to continue to do backports, so I don't think this is a
reason not to do this cleanup.  It feels like we use this as a reason
not to actually move forward on cleanup, and we end up carrying the
tech debt. By this logic, we'll never be able to clean up anything if
we can't handle moving files around.

I think there are some patches to do soft links (dprince might be able
to provide the patches) which could at least handle this backward
compatibility around locations, but I think we need to actually move
forward on the simplification of the service definitions unless
there's a blocking technical issue with this effort.
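To make the soft-link idea concrete, a minimal sketch might look like this. The paths are invented for illustration and do not reflect the real tripleo-heat-templates layout:

```shell
# Move a service template to a new location in the tree, but leave a
# relative symlink at the old path so any existing resource_registry
# entry that references the old location still resolves.
mkdir -p tht/puppet/services tht/deployment/nova
echo 'heat_template_version: rocky' > tht/deployment/nova/nova-api-puppet.yaml

# Old location becomes a symlink into the new tree:
ln -sf ../../deployment/nova/nova-api-puppet.yaml tht/puppet/services/nova-api.yaml

# Anything still using the old path keeps working:
cat tht/puppet/services/nova-api.yaml   # prints: heat_template_version: rocky
```

Using relative link targets keeps the compatibility shim working no matter where the template tree is checked out or installed.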

Thanks,
-Alex

> > the upgrade process even works anymore with baremetal, I don't think
> > there's a reason to keep it as it directly impacts the time it takes
> > to perform deployments and also contributes to increased complexity
> > all around.
> >
> > [0] 
> > http://lists.openstack.org/pipermail/openstack-dev/2017-September/122248.html
> >
> >> I'm not specifically opposed, as I'm pretty sure the baremetal
> >> implementations are no longer tested anywhere, but I know that Dan had
> >> some concerns about that last time around.
> >>
> >> The alternative we discussed was using jinja2 to include common
> >> data/tasks in both the puppet/docker/ansible implementations. That
> >> would also result in reducing the number of Heat resources in these
> >> stacks and hopefully reduce the amount of time it takes to
> >> create/update the ServiceChain stacks.
> >>
> >
> > I'd rather we officially get rid of one of the two methods and
> > converge on a single method without increasing the complexity via
> > jinja to continue to support both. If there's an improvement to be had
> > after we've converged on a single structure for including the base
> > bits, maybe we could do that then?
> >
> > Thanks,
> > -Alex
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>



Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-19 Thread Alex Schultz
On Fri, Oct 19, 2018 at 10:53 AM James Slagle  wrote:
>
> On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz  wrote:
> > Additionally I took a stab at combining the puppet/docker service
> > definitions for the aodh services in a similar structure to start
> > reducing the overhead we've had from maintaining the docker/puppet
> > implementations separately.  You can see the patch
> > https://review.openstack.org/#/c/611188/ for an additional example of
> > this.
>
> That patch takes the approach of removing baremetal support. Is that
> what we agreed to do?
>

It has been deprecated since Queens[0], so yes? I think it is time to stop
continuing this method of installation.  Given that I'm not even sure
the upgrade process even works anymore with baremetal, I don't think
there's a reason to keep it as it directly impacts the time it takes
to perform deployments and also contributes to increased complexity
all around.

[0] http://lists.openstack.org/pipermail/openstack-dev/2017-September/122248.html

> I'm not specifically opposed, as I'm pretty sure the baremetal
> implementations are no longer tested anywhere, but I know that Dan had
> some concerns about that last time around.
>
> The alternative we discussed was using jinja2 to include common
> data/tasks in both the puppet/docker/ansible implementations. That
> would also result in reducing the number of Heat resources in these
> stacks and hopefully reduce the amount of time it takes to
> create/update the ServiceChain stacks.
>

I'd rather we officially get rid of one of the two methods and
converge on a single method without increasing the complexity via
jinja to continue to support both. If there's an improvement to be had
after we've converged on a single structure for including the base
bits, maybe we could do that then?

Thanks,
-Alex

> --
> -- James Slagle
> --
>



Re: [openstack-dev] [tripleo] Proposing Bob Fournier as core reviewer

2018-10-19 Thread Alex Schultz
+1
On Fri, Oct 19, 2018 at 6:29 AM Emilien Macchi  wrote:
>
> On Fri, Oct 19, 2018 at 8:24 AM Juan Antonio Osorio Robles 
>  wrote:
>>
>> I would like to propose Bob Fournier (bfournie) as a core reviewer in
>> TripleO. His patches and reviews have spanned quite a wide range in our
> >> project, his reviews show great insight and quality, and I think he would
> >> be a great addition to the core team.
>>
>> What do you folks think?
>
>
> Big +1, Bob is a solid contributor/reviewer. His area of knowledge has been 
> critical in all aspects of Hardware Provisioning integration but also in 
> other TripleO bits.
> --
> Emilien Macchi



Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-17 Thread Alex Schultz
Time to resurrect this thread.

On Thu, Jul 5, 2018 at 12:14 PM James Slagle  wrote:
>
> On Thu, Jul 5, 2018 at 1:50 PM, Dan Prince  wrote:
> > Last week I was tinkering with my docker configuration a bit and was a
> > bit surprised that puppet/services/docker.yaml no longer used puppet to
> > configure the docker daemon. It now uses Ansible [1] which is very cool
> > but brings up the question of how should we clearly indicate to
> > developers and users that we are using Ansible vs Puppet for
> > configuration?
> >
> > TripleO has been around for a while now, and has supported multiple
> > configuration and service types over the years: os-apply-config,
> > puppet, containers, and now Ansible. In the past we've used rigid
> > directory structures to identify which "service type" was used. More
> > recently we mixed things up a bit more even by extending one service
> > type from another ("docker" services all initially extended the
> > "puppet" services to generate config files and provide an easy upgrade
> > path).
> >
> > Similarly we now use Ansible all over the place for other things in
> many of our docker and puppet services for things like upgrades. That is
> > all good too. I guess the thing I'm getting at here is just a way to
> > cleanly identify which services are configured via Puppet vs. Ansible.
> > And how can we do that in the least destructive way possible so as not
> > to confuse ourselves and our users in the process.
> >
> Also, I think it's worth keeping in mind that TripleO was once a multi-
> > vendor project with vendors that had different preferences on service
> > configuration. Also having the ability to support multiple
> > configuration mechanisms in the future could once again present itself
> > (thinking of Kubernetes as an example). Keeping in mind there may be a
> > conversion period that could well last more than a release or two.
> >
> I suggested a 'services/ansible' directory with mixed responses in our
> > #tripleo meeting this week. Any other thoughts on the matter?
>
> I would almost rather see us organize the directories by service
> name/project instead of implementation.
>
> Instead of:
>
> puppet/services/nova-api.yaml
> puppet/services/nova-conductor.yaml
> docker/services/nova-api.yaml
> docker/services/nova-conductor.yaml
>
> We'd have:
>
> services/nova/nova-api-puppet.yaml
> services/nova/nova-conductor-puppet.yaml
> services/nova/nova-api-docker.yaml
> services/nova/nova-conductor-docker.yaml
>
> (or perhaps even another level of directories to indicate
> puppet/docker/ansible?)
>
> Personally, such an organization is something I'm more used to. It
> feels more similar to how most would expect a puppet module or ansible
> role to be organized, where you have the abstraction (service
> configuration) at a higher directory level than specific
> implementations.
>
> It would also lend itself more easily to adding implementations only
> for specific services, and address the question of if a new top level
> implementation directory needs to be created. For example, adding a
> services/nova/nova-api-chef.yaml seems a lot less contentious than
> adding a top level chef/services/nova-api.yaml.
>
> It'd also be nice if we had a way to mark the default within a given
> service's directory. Perhaps services/nova/nova-api-default.yaml,
> which would be a new template that just consumes the default? Or
> perhaps a symlink, although it was pointed out symlinks don't work in
> swift containers. Still, that could possibly be addressed in our plan
> upload workflows. Then the resource-registry would point at
> nova-api-default.yaml. One could easily tell which is the default
> without having to cross reference with the resource-registry.
>

So since I'm adding a new ansible service, I thought I'd try and take
a stab at this naming thing. I've taken James's idea and proposed an
implementation here:
https://review.openstack.org/#/c/588111/

The idea would be that the THT code for the service deployment would
end up in something like:

deployment/<service>/<service>-<implementation>.yaml
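For illustration, here is a minimal sketch of the proposed per-service layout together with the default-marker idea from earlier in the thread. All paths, file names, and the NovaApi registry entry are hypothetical examples, not the final naming:

```shell
# Sketch of the proposed per-service layout (hypothetical paths/names)
mkdir -p deployment/nova environments
touch deployment/nova/nova-api-puppet.yaml \
      deployment/nova/nova-api-docker.yaml \
      deployment/nova/nova-api-default.yaml
# A resource registry entry could then point at the default marker template,
# so the default is visible without cross-referencing the resource-registry:
cat > environments/layout-sketch.yaml <<'EOF'
resource_registry:
  OS::TripleO::Services::NovaApi: ../deployment/nova/nova-api-default.yaml
EOF
ls deployment/nova
```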

Additionally I took a stab at combining the puppet/docker service
definitions for the aodh services in a similar structure to start
reducing the overhead we've had from maintaining the docker/puppet
implementations separately.  You can see the patch
https://review.openstack.org/#/c/611188/ for an additional example of
this.

Please let me know what you think.

Thanks,
-Alex

>
> --
> -- James Slagle
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [tripleo][puppet] clearing the gate and landing patches to help CI

2018-10-04 Thread Alex Schultz
And master is blocked again.  We need https://review.openstack.org/#/c/607952/

Thanks,
-Alex

On Fri, Sep 28, 2018 at 9:02 AM Alex Schultz  wrote:
>
> Hey Folks,
>
> Currently the tripleo gate is at 21 hours and we're continuing to have
> timeouts and now scenario001/004 (in queens/pike) appear to be broken.
> Additionally we've got some patches in puppet-openstack that we need
> to land in order to resolve broken puppet unit tests which is
> affecting both projects.
>
> Currently we need to wait for the following to land in puppet:
> https://review.openstack.org/#/q/I4875b8bc8b2333046fc3a08b4669774fd26c89cb
> https://review.openstack.org/#/c/605350/
>
> In tripleo we currently have not identified the root cause for any of
> the timeout failures, so I'd like for us to work on that before trying to
> land anything else because the gate resets are killing us and not
> helping anything.  We have landed a few patches that have improved the
> situation but we're still hitting issues.
>
> https://bugs.launchpad.net/tripleo/+bug/1795009 is the bug for the
> scenario001/004 issues.  It appears that we're ending up with a newer
> version of ansible on the system than what the packages provide. Still
> working on figuring out where it's coming from.
>
> Please do not approve anything or recheck unless it's to address CI
> issues at this time.
>
> Thanks,
> -Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][puppet] clearing the gate and landing patches to help CI

2018-09-28 Thread Alex Schultz
Hey Folks,

Currently the tripleo gate is at 21 hours and we're continuing to have
timeouts and now scenario001/004 (in queens/pike) appear to be broken.
Additionally we've got some patches in puppet-openstack that we need
to land in order to resolve broken puppet unit tests which is
affecting both projects.

Currently we need to wait for the following to land in puppet:
https://review.openstack.org/#/q/I4875b8bc8b2333046fc3a08b4669774fd26c89cb
https://review.openstack.org/#/c/605350/

In tripleo we currently have not identified the root cause for any of
the timeout failures, so I'd like for us to work on that before trying to
land anything else because the gate resets are killing us and not
helping anything.  We have landed a few patches that have improved the
situation but we're still hitting issues.

https://bugs.launchpad.net/tripleo/+bug/1795009 is the bug for the
scenario001/004 issues.  It appears that we're ending up with a newer
version of ansible on the system than what the packages provide. Still
working on figuring out where it's coming from.
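For anyone chasing this locally, a hedged debugging sketch to see which ansible wins on PATH and where it came from. Every probe is guarded so the script completes whether or not each tool is installed:

```shell
# Show which ansible binary wins on PATH, and whether it came from the
# distro package or from pip; each probe falls back to an informative
# message when the tool or package is missing.
command -v ansible || echo "ansible: not on PATH"
ansible --version 2>/dev/null | head -n 1 || true
rpm -q ansible 2>/dev/null || echo "ansible: no rpm package"
pip list 2>/dev/null | grep -i '^ansible' || echo "ansible: not pip-installed"
```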

Please do not approve anything or recheck unless it's to address CI
issues at this time.

Thanks,
-Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode?

2018-09-18 Thread Alex Schultz
On Tue, Sep 18, 2018 at 1:27 PM, Matt Riedemann  wrote:
> The release page says Ocata is planned to go into extended maintenance mode
> on Aug 27 [1]. There really isn't much to this except it means we don't do
> releases for Ocata anymore [2]. There is a caveat that project teams that do
> not wish to maintain stable/ocata after this point can immediately end of
> life the branch for their project [3]. We can still run CI using tags, e.g.
> if keystone goes ocata-eol, devstack on stable/ocata can still continue to
> install from stable/ocata for nova and the ocata-eol tag for keystone.
> Having said that, if there is no undue burden on the project team keeping
> the lights on for stable/ocata, I would recommend not tagging the
> stable/ocata branch end of life at this point.
>
> So, questions that need answering are:
>
> 1. Should we cut a final release for projects with stable/ocata branches
> before going into extended maintenance mode? I tend to think "yes" to flush
> the queue of backports. In fact, [3] doesn't mention it, but the resolution
> said we'd tag the branch [4] to indicate it has entered the EM phase.
>
> 2. Are there any projects that would want to skip EM and go directly to EOL
> (yes this feels like a Monopoly question)?
>

I believe TripleO would like to EOL instead of EM for Ocata as
indicated by the thread
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134671.html

Thanks,
-Alex

> [1] https://releases.openstack.org/
> [2]
> https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases
> [3]
> https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance
> [4]
> https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Regarding dropping Ocata related jobs from TripleO

2018-09-14 Thread Alex Schultz
On Fri, Sep 14, 2018 at 10:20 AM, Előd Illés  wrote:
> Hi,
>
> just a comment: Ocata release is not EOL [1][2] rather in Extended
> Maintenance. Do you really want to EOL TripleO stable/ocata?
>

Yes unless there are any objections.  We've already been keeping this
branch alive on life support but CI has started to fail and we've just
been turning off jobs as they fail. We had not planned on extended
maintenance for Ocata (or Pike).  We'll likely consider that starting
with Queens.  We could switch it to extended maintenance but without
the promotion jobs we won't have packages to run CI so it would be
better to just EOL it.

Thanks,
-Alex

> [1] https://releases.openstack.org/
> [2]
> https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html
>
> Cheers,
>
> Előd
>
>
>
> On 2018-09-14 09:20, Juan Antonio Osorio Robles wrote:
>>
>>
>> On 09/14/2018 09:01 AM, Alex Schultz wrote:
>>>
>>> On Fri, Sep 14, 2018 at 6:37 AM, Chandan kumar 
>>> wrote:
>>>>
>>>> Hello,
>>>>
>>>> As Ocata release is already EOL on 27-08-2018 [1].
>>>> In TripleO, we are running Ocata jobs in TripleO CI and in promotion
>>>> pipelines.
>>>> Can we drop all the jobs related to Ocata or do we need to keep some
>>>> jobs
>>>> to support upgrades in CI?
>>>>
>>> I think unless there are any objections around upgrades, we can drop
>>> the promotion pipelines. It's likely that we'll also want to
>>> officially EOL the tripleo ocata branches.
>>
>> sounds good to me.
>>>
>>> Thanks,
>>> -Alex
>>>
>>>> Links:
>>>> [1.] https://releases.openstack.org/
>>>>
>>>> Thanks,
>>>>
>>>> Chandan Kumar
>>>>
>>>>
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Regarding dropping Ocata related jobs from TripleO

2018-09-14 Thread Alex Schultz
On Fri, Sep 14, 2018 at 6:37 AM, Chandan kumar  wrote:
> Hello,
>
> As Ocata release is already EOL on 27-08-2018 [1].
> In TripleO, we are running Ocata jobs in TripleO CI and in promotion 
> pipelines.
> Can we drop all the jobs related to Ocata or do we need to keep some jobs
> to support upgrades in CI?
>

I think unless there are any objections around upgrades, we can drop
the promotion pipelines. It's likely that we'll also want to
officially EOL the tripleo ocata branches.

Thanks,
-Alex

> Links:
> [1.] https://releases.openstack.org/
>
> Thanks,
>
> Chandan Kumar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do

2018-09-11 Thread Alex Schultz
Thanks everyone for coming and chatting.  From the meeting we came away
with a few items where we can collaborate together.

Here are some specific bullet points:

- TripleO folks should feel free to propose some minor structural
changes if they make the integration easier.  TripleO is currently
investigating what it would look like to pull the keystone ansible
parts out of tripleo-heat-templates and put it into
ansible-role-tripleo-keystone.  It would be beneficial to use this
role as an example for how the os_keystone role can be consumed.
- The openstack-ansible-tests has some good examples of ansible-lint
rules that can be used to improve quality
- Tags could be used to limit the scope of OpenStack Ansible roles,
but it sounds like including tasks would be a better pattern.
- Need to establish a pattern for disabling packaging/service
configurations globally in OpenStack Ansible roles.
- Shared roles are open for reuse/replacement if something better is
available (upstream/elsewhere).

If anyone has any others, feel free to comment.

Thanks,
-Alex

On Mon, Sep 10, 2018 at 10:58 AM, Alex Schultz  wrote:
> I just realized I booked the room and put it in the etherpad but
> forgot to email out the time.
>
> Time: Tuesday 09:00-10:45
> Room: Big Thompson
>
> https://etherpad.openstack.org/p/ansible-collaboration-denver-ptg
>
> Thanks,
> -Alex
>
> On Tue, Sep 4, 2018 at 1:03 PM, Alex Schultz  wrote:
>> On Thu, Aug 9, 2018 at 2:43 PM, Mohammed Naser  wrote:
>>> Hi Alex,
>>>
>>> I am very much in favour of what you're bringing up.  We do have
>>> multiple projects that leverage Ansible in different ways and we all
>>> end up doing the same thing at the end.  The duplication of work is
>>> not really beneficial for us as it takes away from our use-cases.
>>>
>>> I believe that there is a certain number of steps that we all share
>>> regardless of how we deploy, some of the things that come up to me
>>> right away are:
>>>
>>> - Configuring infrastructure services (i.e.: create vhosts for service
>>> in rabbitmq, create databases for services, configure users for
>>> rabbitmq, db, etc)
>>> - Configuring inter-OpenStack services (i.e. keystone_authtoken
>>> section, creating endpoints, etc and users for services)
>>> - Configuring actual OpenStack services (i.e.
>>> /etc/<service>/<service>.conf file with the ability of extending
>>> options)
>>> - Running CI/integration on a cloud (i.e. common role that literally
>>> gets an admin user, password and auth endpoint and creates all
>>> resources and does CI)
>>>
>>> This would deduplicate a lot of work, and especially the last one, it
>>> might be beneficial for more than Ansible-based projects, I can
>>> imagine Puppet OpenStack leveraging this as well inside Zuul CI
>>> (optionally)... However, I think that this something which we should
>>> discus further for the PTG.  I think that there will be a tiny bit
>>> upfront work as we all standarize but then it's a win for all involved
>>> communities.
>>>
>>> I would like to propose that deployment tools maybe sit down together
>>> at the PTG, all share how we use Ansible to accomplish these tasks and
>>> then perhaps we can work all together on abstracting some of these
>>> concepts together for us to all leverage.
>>>
>>
>> I'm currently trying to get a spot on Tuesday morning to further
>> discuss some of these items.  In the meantime I've started an
>> etherpad[0] to start collecting ideas for things to discuss.  At the
>> moment I've got the tempest role collaboration and some basic ideas
>> for best practice items that we can discuss.  Feel free to add your
>> own and I'll update the etherpad with a time slot when I get one
>> nailed down.
>>
>> Thanks,
>> -Alex
>>
>> [0] https://etherpad.openstack.org/p/ansible-collaboration-denver-ptg
>>
>>> I'll let others chime in as well.
>>>
>>> Regards,
>>> Mohammed
>>>
>>> On Thu, Aug 9, 2018 at 4:31 PM, Alex Schultz  wrote:
>>>> Ahoy folks,
>>>>
>>>> I think it's time we come up with some basic rules/patterns on where
>>>> code lands when it comes to OpenStack related Ansible roles and as we
>>>> convert/export things. There was a recent proposal to create an
>>>> ansible-role-tempest[0] that would take what we use in
>>>> tripleo-quickstart-extras[1] and separate it for re-usability by
>>>> others.   So it was asked if we could work with the openstack-ansible
>>>> team and leverage the exist

Re: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do

2018-09-10 Thread Alex Schultz
I just realized I booked the room and put it in the etherpad but
forgot to email out the time.

Time: Tuesday 09:00-10:45
Room: Big Thompson

https://etherpad.openstack.org/p/ansible-collaboration-denver-ptg

Thanks,
-Alex

On Tue, Sep 4, 2018 at 1:03 PM, Alex Schultz  wrote:
> On Thu, Aug 9, 2018 at 2:43 PM, Mohammed Naser  wrote:
>> Hi Alex,
>>
>> I am very much in favour of what you're bringing up.  We do have
>> multiple projects that leverage Ansible in different ways and we all
>> end up doing the same thing at the end.  The duplication of work is
>> not really beneficial for us as it takes away from our use-cases.
>>
>> I believe that there is a certain number of steps that we all share
>> regardless of how we deploy, some of the things that come up to me
>> right away are:
>>
>> - Configuring infrastructure services (i.e.: create vhosts for service
>> in rabbitmq, create databases for services, configure users for
>> rabbitmq, db, etc)
>> - Configuring inter-OpenStack services (i.e. keystone_authtoken
>> section, creating endpoints, etc and users for services)
>> - Configuring actual OpenStack services (i.e.
>> /etc/<service>/<service>.conf file with the ability of extending
>> options)
>> - Running CI/integration on a cloud (i.e. common role that literally
>> gets an admin user, password and auth endpoint and creates all
>> resources and does CI)
>>
>> This would deduplicate a lot of work, and especially the last one, it
>> might be beneficial for more than Ansible-based projects, I can
>> imagine Puppet OpenStack leveraging this as well inside Zuul CI
>> (optionally)... However, I think that this is something which we should
>> discuss further at the PTG.  I think that there will be a tiny bit of
>> upfront work as we all standardize, but then it's a win for all involved
>> communities.
>>
>> I would like to propose that deployment tools maybe sit down together
>> at the PTG, all share how we use Ansible to accomplish these tasks and
>> then perhaps we can work all together on abstracting some of these
>> concepts together for us to all leverage.
>>
>
> I'm currently trying to get a spot on Tuesday morning to further
> discuss some of these items.  In the meantime I've started an
> etherpad[0] to start collecting ideas for things to discuss.  At the
> moment I've got the tempest role collaboration and some basic ideas
> for best practice items that we can discuss.  Feel free to add your
> own and I'll update the etherpad with a time slot when I get one
> nailed down.
>
> Thanks,
> -Alex
>
> [0] https://etherpad.openstack.org/p/ansible-collaboration-denver-ptg
>
>> I'll let others chime in as well.
>>
>> Regards,
>> Mohammed
>>
>> On Thu, Aug 9, 2018 at 4:31 PM, Alex Schultz  wrote:
>>> Ahoy folks,
>>>
>>> I think it's time we come up with some basic rules/patterns on where
>>> code lands when it comes to OpenStack related Ansible roles and as we
>>> convert/export things. There was a recent proposal to create an
>>> ansible-role-tempest[0] that would take what we use in
>>> tripleo-quickstart-extras[1] and separate it for re-usability by
>>> others.   So it was asked if we could work with the openstack-ansible
>>> team and leverage the existing openstack-ansible-os_tempest[2].  It
>>> turns out we have a few more already existing roles laying around as
>>> well[3][4].
>>>
>>> What I would like to propose is that we as a community come together
>>> to agree on specific patterns so that we can leverage the same roles
>>> for some of the core configuration/deployment functionality while
>>> still allowing for project-specific customization.  What I've
>>> noticed between all the projects is that we have a few specific core
>>> pieces of functionality that need to be handled (or skipped as it may
>>> be) for each service being deployed.
>>>
>>> 1) software installation
>>> 2) configuration management
>>> 3) service management
>>> 4) misc service actions
>>>
>>> Depending on which flavor of the deployment you're using, the content
>>> of each of these may be different.  Just about the only thing that is
>>> shared between them all would be the configuration management part.
>>> To that, I was wondering if there would be a benefit to establishing a
>>> pattern within say openstack-ansible where we can disable items #1 and
>>> #3 but reuse #2 in projects like kolla/tripleo where we need to do
>>> some configuration generation.  If we can't es

Re: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do

2018-09-04 Thread Alex Schultz
On Thu, Aug 9, 2018 at 2:43 PM, Mohammed Naser  wrote:
> Hi Alex,
>
> I am very much in favour of what you're bringing up.  We do have
> multiple projects that leverage Ansible in different ways and we all
> end up doing the same thing at the end.  The duplication of work is
> not really beneficial for us as it takes away from our use-cases.
>
> I believe that there is a certain number of steps that we all share
> regardless of how we deploy, some of the things that come up to me
> right away are:
>
> - Configuring infrastructure services (i.e.: create vhosts for service
> in rabbitmq, create databases for services, configure users for
> rabbitmq, db, etc)
> - Configuring inter-OpenStack services (i.e. keystone_authtoken
> section, creating endpoints, etc and users for services)
> - Configuring actual OpenStack services (i.e.
> /etc/<service>/<service>.conf file with the ability of extending
> options)
> - Running CI/integration on a cloud (i.e. common role that literally
> gets an admin user, password and auth endpoint and creates all
> resources and does CI)
>
> This would deduplicate a lot of work, and especially the last one, it
> might be beneficial for more than Ansible-based projects, I can
> imagine Puppet OpenStack leveraging this as well inside Zuul CI
> (optionally)... However, I think that this is something which we should
> discuss further at the PTG.  I think that there will be a tiny bit of
> upfront work as we all standardize, but then it's a win for all involved
> communities.
>
> I would like to propose that deployment tools maybe sit down together
> at the PTG, all share how we use Ansible to accomplish these tasks and
> then perhaps we can work all together on abstracting some of these
> concepts together for us to all leverage.
>

I'm currently trying to get a spot on Tuesday morning to further
discuss some of these items.  In the meantime I've started an
etherpad[0] to start collecting ideas for things to discuss.  At the
moment I've got the tempest role collaboration and some basic ideas
for best practice items that we can discuss.  Feel free to add your
own and I'll update the etherpad with a time slot when I get one
nailed down.

Thanks,
-Alex

[0] https://etherpad.openstack.org/p/ansible-collaboration-denver-ptg

> I'll let others chime in as well.
>
> Regards,
> Mohammed
>
> On Thu, Aug 9, 2018 at 4:31 PM, Alex Schultz  wrote:
>> Ahoy folks,
>>
>> I think it's time we come up with some basic rules/patterns on where
>> code lands when it comes to OpenStack related Ansible roles and as we
>> convert/export things. There was a recent proposal to create an
>> ansible-role-tempest[0] that would take what we use in
>> tripleo-quickstart-extras[1] and separate it for re-usability by
>> others.   So it was asked if we could work with the openstack-ansible
>> team and leverage the existing openstack-ansible-os_tempest[2].  It
>> turns out we have a few more already existing roles laying around as
>> well[3][4].
>>
>> What I would like to propose is that we as a community come together
>> to agree on specific patterns so that we can leverage the same roles
>> for some of the core configuration/deployment functionality while
>> still allowing for project-specific customization.  What I've
>> noticed between all the projects is that we have a few specific core
>> pieces of functionality that need to be handled (or skipped as it may
>> be) for each service being deployed.
>>
>> 1) software installation
>> 2) configuration management
>> 3) service management
>> 4) misc service actions
>>
>> Depending on which flavor of the deployment you're using, the content
>> of each of these may be different.  Just about the only thing that is
>> shared between them all would be the configuration management part.
>> To that, I was wondering if there would be a benefit to establishing a
>> pattern within say openstack-ansible where we can disable items #1 and
>> #3 but reuse #2 in projects like kolla/tripleo where we need to do
>> some configuration generation.  If we can't establish a similar
>> pattern it'll make it harder to reuse and contribute between the
>> various projects.
>>
>> In tripleo we've recently created a bunch of ansible-role-tripleo-*
>> repositories which we were planning on moving the tripleo specific
>> tasks (for upgrades, etc) to and were hoping that we might be able to
>> reuse the upstream ansible roles similar to how we've previously
>> leverage the puppet openstack work for configurations.  So for us, it
>> would be beneficial if we could maybe help align/contribute/guide the
>> configuration management and maybe m

Re: [openstack-dev] [tripleo] using multiple roles

2018-09-04 Thread Alex Schultz
On Tue, Sep 4, 2018 at 8:15 AM, Samuel Monderer
 wrote:
> Is it possible to have the roles_data.yaml file generated when running
> "openstack overcloud deploy"?
>

Not at this time.  That is something we'd like to get to, but is not
currently prioritized.

Thanks,
-Alex

> On Tue, Sep 4, 2018 at 4:52 PM Alex Schultz  wrote:
>>
>> On Tue, Sep 4, 2018 at 2:31 AM, Samuel Monderer
>>  wrote:
>> > Hi,
>> >
>> > Due to many different HW in our environment we have multiple roles.
>> > I would like to place each role definition in a different file.
>> > Is it possible for roles_data.yaml to refer to all the different files
>> > instead of having one long roles_data.yaml file?
>> >
>>
>> So you can have them in different files for general management;
>> however, in order to actually consume them they need to be combined into
>> a roles_data.yaml file for the deployment. We offer a few CLI commands
>> to help with this management.  The 'openstack overcloud roles
>> generate' command can be used to generate a roles_data.yaml for your
>> deployment. You can store the individual roles in a folder and use
>> 'openstack overcloud roles list --roles-path /your/folder' to view the
>> available roles.  This workflow is described in the roles README[0].
>>
>> Thanks,
>> -Alex
>>
>> [0]
>> http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/roles/README.rst
>>
>> > Regards,
>> > Samuel
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] using multiple roles

2018-09-04 Thread Alex Schultz
On Tue, Sep 4, 2018 at 2:31 AM, Samuel Monderer
 wrote:
> Hi,
>
> Due to many different HW in our environment we have multiple roles.
> I would like to place each role definition in a different file.
> Is it possible for roles_data.yaml to refer to all the different files
> instead of having one long roles_data.yaml file?
>

So you can have them in different files for general management;
however, in order to actually consume them they need to be combined into
a roles_data.yaml file for the deployment. We offer a few CLI commands
to help with this management.  The 'openstack overcloud roles
generate' command can be used to generate a roles_data.yaml for your
deployment. You can store the individual roles in a folder and use
'openstack overcloud roles list --roles-path /your/folder' to view the
available roles.  This workflow is described in the roles README[0].
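Under the hood, the generate step essentially collects the requested per-role files into one document. A rough stand-in for that behaviour is sketched below (the role files here are made-up minimal examples; the real commands against tripleo-heat-templates are shown in the trailing comments):

```shell
# 'openstack overcloud roles generate' collects individual role files into
# a single roles_data.yaml. A minimal stand-in for that behaviour, using
# hypothetical two-line role files:
mkdir -p my-roles
printf -- '- name: Controller\n  description: Control plane role\n' > my-roles/Controller.yaml
printf -- '- name: Compute\n  description: Compute role\n' > my-roles/Compute.yaml
cat my-roles/Controller.yaml my-roles/Compute.yaml > roles_data.yaml
grep 'name:' roles_data.yaml
# The real workflow would be roughly:
#   openstack overcloud roles list \
#       --roles-path /usr/share/openstack-tripleo-heat-templates/roles
#   openstack overcloud roles generate \
#       --roles-path /usr/share/openstack-tripleo-heat-templates/roles \
#       -o roles_data.yaml Controller Compute
```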

Thanks,
-Alex

[0] 
http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/roles/README.rst

> Regards,
> Samuel
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Rocky RC1 released!

2018-08-24 Thread Alex Schultz
On Fri, Aug 24, 2018 at 9:09 AM, Emilien Macchi  wrote:
> We just released Rocky RC1 and branched stable/rocky for most of tripleo
> repos, please let us know if we missed something.
> Please don't forget to backport the patches that land in master and that you
> want in Rocky.
>
> We're currently investigating whether or not we'll need an RC2, so
> don't be surprised if Launchpad bugs are moved around over the next few days.
>

I've created a Rocky RC2 milestone in launchpad and moved the current
open critical bugs over to it. I would like to target August 31, 2018
(next Friday) as a date to identify any major blockers that would
require an RC2.  If none are found, I propose that we mark RC1 as the
final release for Rocky.

Please take a look at the current open Critical issues and move them
to Stein if appropriate.

https://bugs.launchpad.net/tripleo/?field.searchtext==-importance%3Alist=NEW%3Alist=CONFIRMED%3Alist=TRIAGED%3Alist=INPROGRESS_option=any=_reporter=_commenter==_subscriber=%3Alist=86388=_combinator=ANY_cve.used=_dupes.used=_dupes=on_me.used=_patch.used=_branches.used=_branches=on_no_branches.used=_no_branches=on_blueprints.used=_blueprints=on_no_blueprints.used=_no_blueprints=on=Search

Thanks,
-Alex


> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release countdown for week R-0, August 27 - 31

2018-08-23 Thread Alex Schultz
On Thu, Aug 23, 2018 at 10:12 AM, Sean McGinnis  wrote:
> This is the final countdown email for the Rocky development cycle. Thanks to
> everyone involved in the Rocky release!
>
> Development Focus
> -
>
> Teams attending the PTG should be preparing for those discussions and 
> capturing
> information in the etherpads:
>
> https://wiki.openstack.org/wiki/PTG/Stein/Etherpads
>
> General Information
> ---
>
> The release team plans on doing the final Rocky release on 29 August. We will
> re-tag the last commit used for the final RC using the final version number.
>
> If you have not already done so, now would be a good time to take a look at 
> the
> Stein schedule and start planning team activities:
>
> https://releases.openstack.org/stein/schedule.html
>
> Actions
> -
>
> PTLs and release liaisons should watch for the final release patch from the
> release team. While not required, we would appreciate having an ack from each
> team before we approve it on the 29th.
>
> We are still missing releases for the following tempest plugins. Some are
> pending getting pypi and release jobs set up, but please try to prioritize
> getting these done as soon as possible.
>
> barbican-tempest-plugin
> blazar-tempest-plugin
> cloudkitty-tempest-plugin
> congress-tempest-plugin
> ec2api-tempest-plugin
> magnum-tempest-plugin
> mistral-tempest-plugin
> monasca-kibana-plugin
> monasca-tempest-plugin
> murano-tempest-plugin
> networking-generic-switch-tempest-plugin
> oswin-tempest-plugin
> senlin-tempest-plugin
> telemetry-tempest-plugin
> tripleo-common-tempest-plugin

To speak for tripleo-common-tempest-plugin, it's currently not
used and there aren't any tests, so I don't think it's in a spot for
its first release during Rocky. I'm not sure of the current status of
this effort, so it'll be something we'll need to raise at the PTG.

> trove-tempest-plugin
> watcher-tempest-plugin
> zaqar-tempest-plugin
>
> Upcoming Deadlines & Dates
> --
>
> Final RC deadline: August 23
> Rocky Release: August 29
> Cycle trailing RC deadline: August 30
> Stein PTG: September 10-14
> Cycle trailing Rocky release: November 28
>
> --
> Sean McGinnis (smcginnis)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] fedora28 python3 test environment

2018-08-20 Thread Alex Schultz
On Fri, Aug 17, 2018 at 5:18 PM, Alex Schultz  wrote:
> Ahoy folks,
>
> In order to get to a spot where we can start evaluating the current
> status of TripleO under python3, I've thrown together a set of ansible
> playbooks[0] to launch a fedora28 node and build the required
> python-tripleoclient (and dependencies). These playbooks spawn a
> VM on an OpenStack cloud, run through the steps from the RDO
> etherpad[1] for using the fedora stabilized repo and build all the
> currently outstanding python3 package builds[2] for
> python-tripleoclient & company.  Once the playbook has completed it
> should be at a spot to 'dnf install python3-tripleoclient'.
>
> I believe from here we can focus on getting the undercloud[3] and
> standalone[4] processes working correctly under python3.  I think
> initially we should use the existing CentOS7 containers we build under
> the existing processes to see if we can't get the services deployed as
> we work on building out all the required python3 packaging.
>

To follow up, I've started an etherpad[0] to track the various issues
related to the python3 version of tripleoclient.

[0] https://etherpad.openstack.org/p/tripleo-python3-tripleoclient-issues

> Thanks,
> -Alex
>
> [0] https://github.com/mwhahaha/tripleo-f28-testbed
> [1] https://review.rdoproject.org/etherpad/p/use-fedora-stabilized
> [2] 
> https://review.rdoproject.org/r/#/q/status:open+owner:%22Alex+Schultz+%253Caschultz%2540next-development.com%253E%22+topic:python3
> [3] 
> https://docs.openstack.org/tripleo-docs/latest/install/installation/installation.html
> [4] 
> https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/standalone.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] fedora28 python3 test environment

2018-08-17 Thread Alex Schultz
Ahoy folks,

In order to get to a spot where we can start evaluating the current
status of TripleO under python3, I've thrown together a set of ansible
playbooks[0] to launch a fedora28 node and build the required
python-tripleoclient (and dependencies). These playbooks spawn a
VM on an OpenStack cloud, run through the steps from the RDO
etherpad[1] for using the fedora stabilized repo and build all the
currently outstanding python3 package builds[2] for
python-tripleoclient & company.  Once the playbook has completed it
should be at a spot to 'dnf install python3-tripleoclient'.

I believe from here we can focus on getting the undercloud[3] and
standalone[4] processes working correctly under python3.  I think
initially we should use the existing CentOS7 containers we build under
the existing processes to see if we can't get the services deployed as
we work on building out all the required python3 packaging.

Thanks,
-Alex

[0] https://github.com/mwhahaha/tripleo-f28-testbed
[1] https://review.rdoproject.org/etherpad/p/use-fedora-stabilized
[2] 
https://review.rdoproject.org/r/#/q/status:open+owner:%22Alex+Schultz+%253Caschultz%2540next-development.com%253E%22+topic:python3
[3] 
https://docs.openstack.org/tripleo-docs/latest/install/installation/installation.html
[4] 
https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/standalone.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CI is blocked

2018-08-15 Thread Alex Schultz
Please do not approve or recheck anything until further notice. We've
got a few issues that have basically broken all the jobs.

https://bugs.launchpad.net/tripleo/+bug/1786764
https://bugs.launchpad.net/tripleo/+bug/1787226
https://bugs.launchpad.net/tripleo/+bug/1787244
https://bugs.launchpad.net/tripleo/+bug/1787268

Thanks,
-Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] ansible roles in tripleo

2018-08-15 Thread Alex Schultz
On Tue, Aug 14, 2018 at 11:51 AM, Jill Rouleau  wrote:
> Hey folks,
>
> Like Alex mentioned[0] earlier, we've created a bunch of ansible roles
> for tripleo specific bits.  The idea is to start putting some basic
> cookiecutter type things in them to get things started, then move some
> low-hanging fruit out of tripleo-heat-templates and into the appropriate
> roles.  For example, docker/services/keystone.yaml could have
> upgrade_tasks and fast_forward_upgrade_tasks moved into ansible-role-
> tripleo-keystone/tasks/(upgrade.yml|fast_forward_upgrade.yml), and the
> t-h-t updated to
> include_role: ansible-role-tripleo-keystone
>   tasks_from: upgrade.yml
> without having to modify any puppet or heat directives.
>

Do we have any examples of what the upgrade.yml would be, or what type
of variables (naming conventions or otherwise) would need to be
handled as part of this transition?  I assume we may want to continue
passing in some variable to indicate the current deployment step.  Is
there something along these lines that we will be proposing or need to
handle?  We're already doing something similar with the
host_prep_tasks for the docker registry[0], but we have a set_fact
block to pass parameters in.  I'm assuming we'll need to define
something similar.

Thanks,
-Alex

[0] 
http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/puppet/services/docker-registry.yaml#n54
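To make the question concrete, here's a rough sketch of the kind of
contract I mean -- the role name, file name, and 'step' variable are
purely hypothetical, not an agreed interface:

```yaml
# Hypothetical ansible-role-tripleo-keystone/tasks/upgrade.yml;
# 'step' would need to be an agreed-upon variable passed by the caller.
- name: Stop keystone container before upgrade
  when: step|int == 2
  docker_container:
    name: keystone
    state: stopped

# Hypothetical caller side (generated from t-h-t):
# - include_role:
#     name: ansible-role-tripleo-keystone
#     tasks_from: upgrade.yml
#   vars:
#     step: "{{ deploy_step }}"
```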

> This would let us define some patterns for implementing these tripleo
> roles during Stein while looking at how we can make use of ansible for
> things like core config.
>
> t-h-t and config-download will still drive the vast majority of playbook
> creation for now, but for new playbooks (such as for operations tasks)
> tripleo-ansible[1] would be our project directory.
>
> So in addition to the larger conversation about how deployers can start
> to standardize how we're all using ansible, I'd like to also have a
> tripleo-specific conversation at PTG on how we can break out some of our
> ansible that's currently embedded in t-h-t into more modular and
> flexible roles.
>
> Cheers,
> Jill
>
> [0] http://lists.openstack.org/pipermail/openstack-dev/2018-August/13311
> 9.html
> [1] https://git.openstack.org/cgit/openstack/tripleo-ansible/tree/
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [releases][requirements][cycle-with-intermediary][cycle-trailing] requirements is going to branch stable/rocky at ~08-15-2018 2100Z

2018-08-14 Thread Alex Schultz
On Tue, Aug 14, 2018 at 10:13 AM, Matthew Thode
 wrote:

.. snip ..

> ansible-role-container-registry
> ansible-role-redhat-subscription
> ansible-role-tripleo-modify-image
> instack-undercloud
> os-apply-config
> os-collect-config
> os-net-config
> os-refresh-config
> paunch
> python-tricircleclient
> tripleo-common-tempest-plugin
> tripleo-ipsec
> tripleo-ui
> tripleo-validations
> puppet-tripleo
> python-tripleoclient
> tripleo-common
> tripleo-heat-templates
> tripleo-image-elements
> tripleo-puppet-elements

From a tripleo aspect, we're aware and will likely branch the client
at the end of the week and the others soonish, but we're dependent on
packaging for the most part. We'll keep an eye on breakages due to any
requirement changes.  Thanks for the heads up.

> puppet-aodh
> puppet-barbican
> puppet-ceilometer
> puppet-cinder
> puppet-cloudkitty
> puppet-congress
> puppet-designate
> puppet-ec2api
> puppet-freezer
> puppet-glance
> puppet-glare
> puppet-gnocchi
> puppet-heat
> puppet-horizon
> puppet-ironic
> puppet-keystone
> puppet-magnum
> puppet-manila
> puppet-mistral
> puppet-monasca
> puppet-murano
> puppet-neutron
> puppet-nova
> puppet-octavia
> puppet-openstack_extras
> puppet-openstacklib
> puppet-oslo
> puppet-ovn
> puppet-panko
> puppet-qdr
> puppet-rally
> puppet-sahara
> puppet-swift
> puppet-tacker
> puppet-tempest
> puppet-trove
> puppet-vitrage
> puppet-vswitch
> puppet-watcher
> puppet-zaqar

puppet-* are fine as we rely on packaging and requirements bits are
only for docs.

> --
> Matthew Thode (prometheanfire)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do

2018-08-09 Thread Alex Schultz
On Thu, Aug 9, 2018 at 2:56 PM, Doug Hellmann  wrote:
> Excerpts from Alex Schultz's message of 2018-08-09 14:31:34 -0600:
>> Ahoy folks,
>>
>> I think it's time we come up with some basic rules/patterns on where
>> code lands when it comes to OpenStack related Ansible roles and as we
>> convert/export things. There was a recent proposal to create an
>> ansible-role-tempest[0] that would take what we use in
>> tripleo-quickstart-extras[1] and separate it for re-usability by
>> others.   So it was asked if we could work with the openstack-ansible
>> team and leverage the existing openstack-ansible-os_tempest[2].  It
>> turns out we have a few more already existing roles laying around as
>> well[3][4].
>>
>> What I would like to propose is that we as a community come together
>> to agree on specific patterns so that we can leverage the same roles
>> for some of the core configuration/deployment functionality while
>> still allowing for specific project specific customization.  What I've
>> noticed between all the project is that we have a few specific core
>> pieces of functionality that needs to be handled (or skipped as it may
>> be) for each service being deployed.
>>
>> 1) software installation
>> 2) configuration management
>> 3) service management
>> 4) misc service actions
>>
>> Depending on which flavor of the deployment you're using, the content
>> of each of these may be different.  Just about the only thing that is
>> shared between them all would be the configuration management part.
>
> Does that make the 4 things separate roles, then? Isn't the role
> usually the unit of sharing between playbooks?
>

It can be, but it doesn't have to be.  The problem comes in with the
granularity at which you define the concept of the overall
action.  If you want a role to encompass all that is "nova", you could
have a single nova role that you invoke 5 different times to do the
different actions during the overall deployment. Or you could create a
role for nova-install, nova-config, nova-service, nova-cells, etc.
I think splitting them out into their own roles is a bit too much in
terms of management.  In my particular case, openstack-ansible is
already creating a role to manage "nova".  So is there a way that I
can leverage part of their process within mine without having to
duplicate it?  You can pull in the task files themselves from a
different role, so technically I think you could define an
ansible-role-tripleo-nova that does some include_tasks:
../../os_nova/tasks/install.yaml, but then we'd have to duplicate the
variables in our playbook rather than invoking a role with some
parameters.

IMHO this structure is an issue with the general sharing concepts of
roles/tasks within ansible.  It's not really well defined, and there's
no real concept of inheritance, so I can't extend your tasks with mine
in a programming sense. I have to duplicate them or do something like
include a specific task file from another role.  Since I can't really
extend a role in the traditional OO programming sense, I would like to
figure out how I can leverage only part of it.  This can be done by
establishing ansible variables to trigger specific actions, or by
actually including the raw tasks themselves.  Either of these
approaches needs some sort of contract to be established so the other
side won't get broken.  We had this in puppet via parameters, which
are checked; there isn't really a similar concept in ansible, so it
seems we need to agree on some community-established rules.

For tripleo, I would like to just invoke the os_nova role and pass in
something like install: false, service: false, config_dir:
/my/special/location/, config_data: {...} and have it spit out the
configs.  Then my roles would actually consume those via
containers/etc.  Of course most of this would go away if we had a
unified (not file based) configuration method across all services
(openstack and non-openstack), but we don't. :D

Thanks,
-Alex

> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do

2018-08-09 Thread Alex Schultz
Ahoy folks,

I think it's time we come up with some basic rules/patterns on where
code lands when it comes to OpenStack related Ansible roles and as we
convert/export things. There was a recent proposal to create an
ansible-role-tempest[0] that would take what we use in
tripleo-quickstart-extras[1] and separate it for re-usability by
others.  So it was asked if we could work with the openstack-ansible
team and leverage the existing openstack-ansible-os_tempest[2].  It
turns out we have a few more already-existing roles lying around as
well[3][4].

What I would like to propose is that we as a community come together
to agree on specific patterns so that we can leverage the same roles
for some of the core configuration/deployment functionality while
still allowing for project-specific customization.  What I've
noticed across all the projects is that we have a few specific core
pieces of functionality that need to be handled (or skipped, as the
case may be) for each service being deployed.

1) software installation
2) configuration management
3) service management
4) misc service actions

Depending on which flavor of the deployment you're using, the content
of each of these may be different.  Just about the only thing that is
shared between them all would be the configuration management part.
To that end, I was wondering if there would be a benefit to
establishing a pattern within, say, openstack-ansible where we can
disable items #1 and #3 but reuse #2 in projects like kolla/tripleo
where we need to do some configuration generation.  If we can't
establish a similar pattern, it'll make it harder to reuse and
contribute between the various projects.
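One way to express that pattern would be boolean defaults in each
service role, so consumers can turn off the phases they replace (the
variable and file names here are illustrative, not an existing
openstack-ansible convention):

```yaml
# Hypothetical defaults/main.yml for a shared service role
service_manage_install: true   # 1) software installation
service_manage_config: true    # 2) configuration management
service_manage_service: true   # 3) service management

# tasks/main.yml would then gate each phase:
# - import_tasks: install.yml
#   when: service_manage_install | bool
# - import_tasks: config.yml
#   when: service_manage_config | bool
# - import_tasks: service.yml
#   when: service_manage_service | bool
```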

In tripleo we've recently created a bunch of ansible-role-tripleo-*
repositories which we were planning on moving the tripleo specific
tasks (for upgrades, etc) to and were hoping that we might be able to
reuse the upstream ansible roles, similar to how we've previously
leveraged the puppet openstack work for configurations.  So for us, it
would be beneficial if we could maybe help align/contribute/guide the
configuration management and maybe misc service action portions of the
openstack-ansible roles, but be able to disable the actual software
install/service management as that would be managed via our
ansible-role-tripleo-* roles.

Is this something that would be beneficial to further discuss at the
PTG? Anyone have any additional suggestions/thoughts?

My personal thoughts for tripleo would be that we'd have
tripleo-ansible call openstack-ansible-<service> for core config (but
with package installation and service management disabled) and call
ansible-role-tripleo-<service> for tripleo specific actions such as
opinionated packages/service configuration/upgrades.  Maybe this is
too complex? But at the same time, do we need to come up with 3
different ways to do this?

Thanks,
-Alex

[0] https://review.openstack.org/#/c/589133/
[1] 
http://git.openstack.org/cgit/openstack/tripleo-quickstart-extras/tree/roles/validate-tempest
[2] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest/
[3] 
http://git.openstack.org/cgit/openstack/kolla-ansible/tree/ansible/roles/tempest
[4] http://git.openstack.org/cgit/openstack/ansible-role-tripleo-tempest

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO

2018-08-06 Thread Alex Schultz
+1

On Mon, Aug 6, 2018 at 7:19 AM, Bogdan Dobrelya  wrote:
> +1
>
> On 8/1/18 1:31 PM, Giulio Fidente wrote:
>>
>> Hi,
>>
>> I would like to propose Lukas Bezdicka core on TripleO.
>>
>> Lukas did a lot work in our tripleoclient, tripleo-common and
>> tripleo-heat-templates repos to make FFU possible.
>>
>> FFU, which is meant to permit upgrades from Newton to Queens, requires
>> in depth understanding of many TripleO components (for example Heat,
>> Mistral and the TripleO client) but also of specific TripleO features
>> which were added during the course of the three releases (for example
>> config-download and upgrade tasks). I believe his FFU work to have been
>> very challenging.
>>
>> Given his broad understanding, more recently Lukas started helping doing
>> reviews in other areas.
>>
>> I am so sure he'll be a great addition to our group that I am not even
>> looking for comments, just votes :D
>>
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-08-03 Thread Alex Schultz
On Thu, Aug 2, 2018 at 11:32 PM, Cédric Jeanneret  wrote:
>
>
> On 08/02/2018 11:41 PM, Steve Baker wrote:
>>
>>
>> On 02/08/18 13:03, Alex Schultz wrote:
>>> On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya 
>>> wrote:
>>>> On 7/6/18 7:02 PM, Ben Nemec wrote:
>>>>>
>>>>>
>>>>> On 07/05/2018 01:23 PM, Dan Prince wrote:
>>>>>> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:
>>>>>>>
>>>>>>> I would almost rather see us organize the directories by service
>>>>>>> name/project instead of implementation.
>>>>>>>
>>>>>>> Instead of:
>>>>>>>
>>>>>>> puppet/services/nova-api.yaml
>>>>>>> puppet/services/nova-conductor.yaml
>>>>>>> docker/services/nova-api.yaml
>>>>>>> docker/services/nova-conductor.yaml
>>>>>>>
>>>>>>> We'd have:
>>>>>>>
>>>>>>> services/nova/nova-api-puppet.yaml
>>>>>>> services/nova/nova-conductor-puppet.yaml
>>>>>>> services/nova/nova-api-docker.yaml
>>>>>>> services/nova/nova-conductor-docker.yaml
>>>>>>>
>>>>>>> (or perhaps even another level of directories to indicate
>>>>>>> puppet/docker/ansible?)
>>>>>>
>>>>>> I'd be open to this but doing changes on this scale is a much larger
>>>>>> developer and user impact than what I was thinking we would be willing
>>>>>> to entertain for the issue that caused me to bring this up (i.e.
>>>>>> how to
>>>>>> identify services which get configured by Ansible).
>>>>>>
>>>>>> Its also worth noting that many projects keep these sorts of things in
>>>>>> different repos too. Like Kolla fully separates kolla-ansible and
>>>>>> kolla-kubernetes as they are quite divergent. We have been able to
>>>>>> preserve some of our common service architectures but as things move
>>>>>> towards kubernetes we may which to change things structurally a bit
>>>>>> too.
>>>>>
>>>>> True, but the current directory layout was from back when we
>>>>> intended to
>>>>> support multiple deployment tools in parallel (originally
>>>>> tripleo-image-elements and puppet).  Since I think it has become
>>>>> clear that
>>>>> it's impractical to maintain two different technologies to do
>>>>> essentially
>>>>> the same thing I'm not sure there's a need for it now.  It's also worth
>>>>> noting that kolla-kubernetes basically died because there wasn't enough
>>>>> people to maintain both deployment methods, so we're not the only
>>>>> ones who
>>>>> have found that to be true.  If/when we move to kubernetes I would
>>>>> anticipate it going like the initial containers work did -
>>>>> development for a
>>>>> couple of cycles, then a switch to the new thing and deprecation of
>>>>> the old
>>>>> thing, then removal of support for the old thing.
>>>>>
>>>>> That being said, because of the fact that the service yamls are
>>>>> essentially an API for TripleO because they're referenced in user
>>>>
>>>> this ^^
>>>>
>>>>> resource registries, I'm not sure it's worth the churn to move
>>>>> everything
>>>>> either.  I think that's going to be an issue either way though, it's
>>>>> just a
>>>>> question of the scope.  _Something_ is going to move around no
>>>>> matter how we
>>>>> reorganize so it's a problem that needs to be addressed anyway.
>>>>
>>>> [tl;dr] I can foresee reorganizing that API becomes a nightmare for
>>>> maintainers doing backports for queens (and the LTS downstream
>>>> release based
>>>> on it). Now imagine kubernetes support comes within those next a few
>>>> years,
>>>> before we can let the old API just go...
>>>>
>>>> I have an example [0] to share all that pain brought by a simple move of
>>>> 'API defaults' from environments/services-docker to
>>>> environments/services
>>>> plus environments/services-baremetal

Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-08-01 Thread Alex Schultz
On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya  wrote:
> On 7/6/18 7:02 PM, Ben Nemec wrote:
>>
>>
>>
>> On 07/05/2018 01:23 PM, Dan Prince wrote:
>>>
>>> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:


 I would almost rather see us organize the directories by service
 name/project instead of implementation.

 Instead of:

 puppet/services/nova-api.yaml
 puppet/services/nova-conductor.yaml
 docker/services/nova-api.yaml
 docker/services/nova-conductor.yaml

 We'd have:

 services/nova/nova-api-puppet.yaml
 services/nova/nova-conductor-puppet.yaml
 services/nova/nova-api-docker.yaml
 services/nova/nova-conductor-docker.yaml

 (or perhaps even another level of directories to indicate
 puppet/docker/ansible?)
>>>
>>>
>>> I'd be open to this but doing changes on this scale is a much larger
>>> developer and user impact than what I was thinking we would be willing
>>> to entertain for the issue that caused me to bring this up (i.e. how to
>>> identify services which get configured by Ansible).
>>>
>>> Its also worth noting that many projects keep these sorts of things in
>>> different repos too. Like Kolla fully separates kolla-ansible and
>>> kolla-kubernetes as they are quite divergent. We have been able to
>>> preserve some of our common service architectures but as things move
>>> towards kubernetes we may which to change things structurally a bit
>>> too.
>>
>>
>> True, but the current directory layout was from back when we intended to
>> support multiple deployment tools in parallel (originally
>> tripleo-image-elements and puppet).  Since I think it has become clear that
>> it's impractical to maintain two different technologies to do essentially
>> the same thing I'm not sure there's a need for it now.  It's also worth
>> noting that kolla-kubernetes basically died because there wasn't enough
>> people to maintain both deployment methods, so we're not the only ones who
>> have found that to be true.  If/when we move to kubernetes I would
>> anticipate it going like the initial containers work did - development for a
>> couple of cycles, then a switch to the new thing and deprecation of the old
>> thing, then removal of support for the old thing.
>>
>> That being said, because of the fact that the service yamls are
>> essentially an API for TripleO because they're referenced in user
>
>
> this ^^
>
>> resource registries, I'm not sure it's worth the churn to move everything
>> either.  I think that's going to be an issue either way though, it's just a
>> question of the scope.  _Something_ is going to move around no matter how we
>> reorganize so it's a problem that needs to be addressed anyway.
>
>
> [tl;dr] I can foresee reorganizing that API becomes a nightmare for
> maintainers doing backports for queens (and the LTS downstream release based
> on it). Now imagine kubernetes support comes within those next a few years,
> before we can let the old API just go...
>
> I have an example [0] to share all that pain brought by a simple move of
> 'API defaults' from environments/services-docker to environments/services
> plus environments/services-baremetal. Each time a file changes contents by
> its old location, like here [1], I had to run a lot of sanity checks to
> rebase it properly. Like checking for the updated paths in resource
> registries are still valid or had to/been moved as well, then picking the
> source of truth for diverged old vs changes locations - all that to loose
> nothing important in progress.
>
> So I'd say please let's do *not* change services' paths/namespaces in t-h-t
> "API" w/o real need to do that, when there is no more alternatives left to
> that.
>

Ok so it's time to dig this thread back up. I'm currently looking at
the chrony support which will require a new service[0][1]. Rather than
add it under puppet, we'll likely want to leverage ansible. So I guess
the question is where do we put services going forward?  Additionally
as we look to truly removing the baremetal deployment options and
puppet service deployment, it seems like we need to consolidate under
a single structure.  Given that we don't want force too much churn,
does this mean that we should align to the docker/services/*.yaml
structure or should we be proposing a new structure that we can try to
align on.

There is outstanding tech-debt around the nested stacks and references
within these services when we added the container deployments so it's
something that would be beneficial to start tackling sooner rather
than later.  Personally I think we're always going to have the issue
when we rename files that could have been referenced by custom
templates, but I don't think we can continue to carry the outstanding
tech debt around these static locations.  Should we be investing in
coming up with some sort of mapping that we can use to warn a user
when we move files?
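One illustrative shape for such a mapping -- a simple old-path to
new-path table the tooling could consult to warn users (the file name
and target paths below are invented for the example, not an agreed
layout):

```yaml
# Hypothetical deprecated_paths.yaml consumed by client/validations
deprecated_service_paths:
  puppet/services/nova-api.yaml: services/nova/nova-api-puppet.yaml
  docker/services/nova-api.yaml: services/nova/nova-api-docker.yaml
```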

Thanks,
-Alex

[0] 

Re: [openstack-dev] [tripleo][ci][metrics] FFE request for QDR integration in TripleO (Was: Stucked in the middle of work because of RDO CI)

2018-07-31 Thread Alex Schultz
On Tue, Jul 31, 2018 at 11:31 AM, Pradeep Kilambi  wrote:
> Hi Alex:
>
> Can you consider this our FFE for the QDR patches. Its mainly blocked on CI
> issues. Half the patches for QDR integration are already merged. The other 3
> referenced need to get merged once CI passes. Please consider this out
> formal request for FFE for QDR integration in tripleo.
>

Ok, if it's just these patches and there is no further work, it should
be OK. I did point out (prior to the CI issues) that the patch[0]
actually broke the OVB jobs back in June. It seemed to be related to
missing containers or something to that effect.  So we'll need to be
extra careful when merging this to ensure it does not break anything.
If we get clean jobs prior to RC1, we can merge it. If not, I'd say we
need to hold off.  I don't consider this a blocking feature.

Thanks,
-Alex

[0] https://review.openstack.org/#/c/578749/

> Cheers,
> ~ Prad
>
> On Tue, Jul 31, 2018 at 7:40 AM Sagi Shnaidman  wrote:
>>
>> Hi, Martin
>>
>> I see master OVB jobs are passing now [1], please recheck.
>>
>> [1] http://cistatus.tripleo.org/
>>
>> On Tue, Jul 31, 2018 at 12:24 PM, Martin Magr  wrote:
>>>
>>> Greetings guys,
>>>
>>>   it is pretty obvious that the RDO CI jobs in TripleO projects are broken
>>> [0]. Once the Zuul CI jobs pass, would it be possible to have the
>>> AMQP/collectd patches ([1],[2],[3]) merged, please, even with the negative
>>> result of the RDO CI jobs? Half of the patches for this feature are merged
>>> and the other half is stuck in this situation, where nobody reviews these
>>> patches because there is a red -1. Those patches passed the Zuul jobs
>>> several times already and were manually tested too.
>>>
>>> Thanks in advance for consideration of this situation,
>>> Martin
>>>
>>> [0]
>>> https://trello.com/c/hkvfxAdX/667-cixtripleoci-rdo-software-factory-3rd-party-jobs-failing-due-to-instance-nodefailure
>>> [1] https://review.openstack.org/#/c/578749
>>> [2] https://review.openstack.org/#/c/576057/
>>> [3] https://review.openstack.org/#/c/572312/
>>>
>>> --
>>> Martin Mágr
>>> Senior Software Engineer
>>> Red Hat Czech
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Best regards
>> Sagi Shnaidman
>
>
>
> --
> Cheers,
> ~ Prad



Re: [openstack-dev] [tripleo] deployement fails

2018-07-31 Thread Alex Schultz
On Mon, Jul 30, 2018 at 8:48 AM, Samuel Monderer
 wrote:
> Hi,
>
> I'm trying to deploy a small environment with one controller and one compute
> but i get a timeout with no specific information in the logs
>
> 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]:
> CREATE_IN_PROGRESS  state changed
> 2018-07-30 13:19:41Z [overcloud.Controller.0.ControllerConfig]:
> CREATE_COMPLETE  state changed
> 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: CREATE_FAILED  CREATE
> aborted (Task create from ResourceGroup "ComputeGammaV3" Stack "overcloud"
> [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out)
> 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3]: UPDATE_FAILED  Stack UPDATE
> cancelled
> 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED  Timed out
> 2018-07-30 14:04:51Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED  Stack
> CREATE cancelled
> 2018-07-30 14:04:51Z [overcloud.Controller]: CREATE_FAILED  CREATE aborted
> (Task create from ResourceGroup "Controller" Stack "overcloud"
> [690ee33c-8194-4713-a44f-9c8dcf88359f] Timed out)
> 2018-07-30 14:04:51Z [overcloud]: CREATE_FAILED  Timed out
> 2018-07-30 14:04:51Z [overcloud.Controller]: UPDATE_FAILED  Stack UPDATE
> cancelled
> 2018-07-30 14:04:51Z [overcloud.Controller.0]: CREATE_FAILED  Stack CREATE
> cancelled
> 2018-07-30 14:04:52Z [overcloud.ComputeGammaV3.0]: CREATE_FAILED
> resources[0]: Stack CREATE cancelled
>
>  Stack overcloud CREATE_FAILED
>
> overcloud.ComputeGammaV3.0:
>   resource_type: OS::TripleO::ComputeGammaV3
>   physical_resource_id: 5755d746-7cbf-4f3d-a9e1-d94a713705a7
>   status: CREATE_FAILED
>   status_reason: |
> resources[0]: Stack CREATE cancelled
> overcloud.Controller.0:
>   resource_type: OS::TripleO::Controller
>   physical_resource_id: 4bcf84c1-1d54-45ee-9f81-b6dda780cbd7
>   status: CREATE_FAILED
>   status_reason: |
> resources[0]: Stack CREATE cancelled
> Not cleaning temporary directory /tmp/tripleoclient-vxGzKo
> Not cleaning temporary directory /tmp/tripleoclient-vxGzKo
> Heat Stack create failed.
> Heat Stack create failed.
> (undercloud) [stack@staging-director ~]$
>

So this is a timeout, likely caused by a bad network configuration:
no response makes it back to Heat during the deployment, so it just
times out.  You'll need to check your host network configuration and
troubleshoot that.

Thanks,
-Alex

> It seems that it wasn't able to configure the OVS bridges
>
> (undercloud) [stack@staging-director ~]$ openstack software deployment show
> 4b4fc54f-7912-40e2-8ad4-79f6179fe701
> +---------------+--------------------------------------------------------+
> | Field         | Value                                                  |
> +---------------+--------------------------------------------------------+
> | id            | 4b4fc54f-7912-40e2-8ad4-79f6179fe701                   |
> | server_id     | 0accb7a3-4869-4497-8f3b-5a3d99f3926b                   |
> | config_id     | 2641b4dd-afc7-4bf5-a2e2-481c207e4b7f                   |
> | creation_time | 2018-07-30T13:19:44Z                                   |
> | updated_time  |                                                        |
> | status        | IN_PROGRESS                                            |
> | status_reason | Deploy data available                                  |
> | input_values  | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} |
> | action        | CREATE                                                 |
> +---------------+--------------------------------------------------------+
> (undercloud) [stack@staging-director ~]$ openstack software deployment show
> a297e8ae-f4c9-41b0-938f-c51f9fe23843
> +---------------+--------------------------------------------------------+
> | Field         | Value                                                  |
> +---------------+--------------------------------------------------------+
> | id            | a297e8ae-f4c9-41b0-938f-c51f9fe23843                   |
> | server_id     | 145167da-9b96-4eee-bfe9-399b854c1e84                   |
> | config_id     | d1baf0a5-de9b-48f2-b486-9f5d97f7e94f                   |
> | creation_time | 2018-07-30T13:17:29Z                                   |
> | updated_time  |                                                        |
> | status        | IN_PROGRESS                                            |
> | status_reason | Deploy data available                                  |
> | input_values  | {u'interface_name': u'nic1', u'bridge_name': u'br-ex'} |
> | action        | CREATE                                                 |
> +---------------+--------------------------------------------------------+
> (undercloud) [stack@staging-director ~]$
>
> Regards,
> Samuel
>

Re: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition

2018-07-30 Thread Alex Schultz
On Mon, Jul 30, 2018 at 8:32 AM, Martin Magr  wrote:
>
>
> On Tue, Jul 17, 2018 at 6:12 PM, Emilien Macchi  wrote:
>>
>> Your fellow reporter took a break from writing, but is now back on his
>> pen.
>>
>> Welcome to the twenty-fifth edition of a weekly update in TripleO world!
>> The goal is to provide a short reading (less than 5 minutes) to learn
>> what's new this week.
>> Any contributions and feedback are welcome.
>> Link to the previous version:
>> http://lists.openstack.org/pipermail/openstack-dev/2018-June/131426.html
>>
>> +-+
>> | General announcements |
>> +-+
>>
>> +--> Rocky Milestone 3 is next week. After, any feature code will require
>> Feature Freeze Exception (FFE), asked on the mailing-list. We'll enter a
>> bug-fix only and stabilization period, until we can push the first stable
>> version of Rocky.
>
>
> Hey guys,
>
>   I would like to ask for an FFE for backup and restore, where we ended up
> deciding on the best place for the code base for this project (please
> see [1] for details). We believe that backup and restore support for the
> overcloud control plane will be a good addition to the Rocky release, but we
> started with this initiative quite late indeed. The final result should be
> support in the openstack client, where "openstack overcloud (backup|restore)"
> would work like a charm. Thanks in advance for considering this feature.
>

Was there a blueprint/spec for this effort?  Additionally, do we have a
list of the outstanding work required for this? If it's just these two
playbooks, it might be OK for an FFE. But if there are additional
tripleoclient-related changes, I wouldn't necessarily feel comfortable
with these unless we have a complete list of the work.  Just as a side
note, I'm not sure tripleo-common is going to be the ideal place for
these.

Thanks,
-Alex

> Regards,
> Martin
>
> [1] https://review.openstack.org/#/c/582453/
>
>>
>> +--> Next PTG will be in Denver, please propose topics:
>> https://etherpad.openstack.org/p/tripleoci-ptg-stein
>> +--> Multiple squads are currently brainstorming a framework to provide
>> validations pre/post upgrades - stay in touch!
>>
>> +--+
>> | Continuous Integration |
>> +--+
>>
>> +--> Sprint theme: migration to Zuul v3 (More on
>> https://trello.com/c/vyWXcKOB/841-sprint-16-goals)
>> +--> Sagi is the rover and Chandan is the ruck. Please tell them any CI
>> issue.
>> +--> Promotion on master is 4 days, 0 days on Queens and Pike and 1 day on
>> Ocata.
>> +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
>>
>> +-+
>> | Upgrades |
>> +-+
>>
>> +--> Good progress on major upgrades workflow, need reviews!
>> +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status
>>
>> +---+
>> | Containers |
>> +---+
>>
>> +--> We switched python-tripleoclient to deploy containerized undercloud
>> by default!
>> +--> Image prepare via workflow is still work in progress.
>> +--> More:
>> https://etherpad.openstack.org/p/tripleo-containers-squad-status
>>
>> +--+
>> | config-download |
>> +--+
>>
>> +--> UI integration is almost done (need review)
>> +--> Bug with failure listing is being fixed:
>> https://bugs.launchpad.net/tripleo/+bug/1779093
>> +--> More:
>> https://etherpad.openstack.org/p/tripleo-config-download-squad-status
>>
>> +--+
>> | Integration |
>> +--+
>>
>> +--> We're enabling decoupled deployment plans e.g for OpenShift, DPDK
>> etc:
>> https://review.openstack.org/#/q/topic:alternate_plans+(status:open+OR+status:merged)
>> (need reviews).
>> +--> More:
>> https://etherpad.openstack.org/p/tripleo-integration-squad-status
>>
>> +-+
>> | UI/CLI |
>> +-+
>>
>> +--> Good progress on network configuration via UI
>> +--> Config-download patches are being reviewed and a lot of testing is
>> going on.
>> +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status
>>
>> +---+
>> | Validations |
>> +---+
>>
>> +--> Working on OpenShift validations, need reviews.
>> +--> More:
>> https://etherpad.openstack.org/p/tripleo-validations-squad-status
>>
>> +---+
>> | Networking |
>> +---+
>>
>> +--> No updates this week.
>> +--> More:
>> https://etherpad.openstack.org/p/tripleo-networking-squad-status
>>
>> +--+
>> | Workflows |
>> +--+
>>
>> +--> No updates this week.
>> +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status
>>
>> +---+
>> | Security |
>> +---+
>>
>> +--> Working on Secrets management and Limit TripleO users efforts
>> +--> More: https://etherpad.openstack.org/p/tripleo-security-squad
>>
>> ++
>> | Owl fact  |
>> ++
>> Elf owls live in cacti. They are the smallest owls, and live in the
>> southwestern United States and 

Re: [openstack-dev] [tripleo] Rocky Ceph update/upgrade regression risk (semi-FFE)

2018-07-27 Thread Alex Schultz
On Fri, Jul 27, 2018 at 5:48 AM, Emilien Macchi  wrote:
>
>
> On Fri, Jul 27, 2018 at 3:58 AM Jiří Stránský  wrote:
>>
>> I'd call this a semi-FFE, as a few of the patches have characteristics of
>> feature work,
>> but at the same time I don't believe we can afford having Ceph
>> unupgradable in Rocky, so it has characteristics of a regression bug
>> too. I reported a bug [2] and tagged the patches in case we end up
>> having to do backports.
>
>
> Right, let's consider it as a bug and not a feature. Also, it's upgrade
> related so it's top-priority as we did in prior cycles. Therefore I think
> it's fine.

I second this.  We must be able to upgrade so this needs to be addressed.

> --
> Emilien Macchi
>



Re: [openstack-dev] [tripleo] FFE request for config-download-ui

2018-07-26 Thread Alex Schultz
On Thu, Jul 26, 2018 at 2:31 AM, Jiri Tomasek  wrote:
> Hello,
>
> I would like to request a FFE for [1]. Current status of TripleO UI patches
> is here [2] there are last 2 patches pending review which currently depend
> on [3] which is close to land.
>
> [1] https://blueprints.launchpad.net/tripleo/+spec/config-download-ui/
> [2]
> https://review.openstack.org/#/q/project:openstack/tripleo-ui+branch:master+topic:bp/config-download-ui
> [3] https://review.openstack.org/#/c/583293/
>

Sounds good. Let's get those last two patches landed.

Thanks,
-Alex

> Thanks
> -- Jiri
>



Re: [openstack-dev] [tripleo] FFE request for container-prepare-workflow

2018-07-25 Thread Alex Schultz
On Wed, Jul 25, 2018 at 3:50 PM, Steve Baker  wrote:
> I'd like to request a FFE for this blueprint[1].
>
> The remaining changes will be tracked as Depends-On on this oooq change[2].
>
> Initially the aim of this blueprint was to do all container prepare
> operations in a mistral action before the overcloud deploy. However the
> priority for delivery switched to helping blueprint containerized-undercloud
> with its container prepare. Once this was complete it was apparent that the
> overcloud prepare could share the undercloud prepare approach.
>
> The undercloud prepare does the following:
>
> 1) During undercloud_config, do a dry-run prepare to populate the image
> parameters (but don't do any image transfers)
>
> 2) During tripleo-deploy, driven by tripleo-heat-templates, do the actual
> prepare after the undercloud registry is installed but before any containers
> are required
>
> For the overcloud, 1) will be done by a mistral action[3] and 2) will be
> done during overcloud deploy[4].
>
> The vast majority of code for this blueprint has landed and is exercised by
> containerized-undercloud. I don't expect issues with the overcloud changes
> landing, but in the worst case scenario the overcloud prepare can be done
> manually by running the new command "openstack tripleo container image
> prepare" as documented in this change [5].
>

Sounds good, hopefully we can figure out the issue with the reverted
patch and get it landed.

Thanks,
-Alex

> [1]
> https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow
>
> [2] https://review.openstack.org/#/c/573476/
>
> [3] https://review.openstack.org/#/c/558972/ (landed but currently being
> reverted)
>
> [4] https://review.openstack.org/#/c/581919/ (plus the series before it)
>
> [5] https://review.openstack.org/#/c/553104/
>
>
>



[openstack-dev] [tripleo] PTL non-candidacy

2018-07-25 Thread Alex Schultz
Hey folks,

So it's been great fun and we've accomplished much over the last two
cycles but I believe it is time for me to step back and let someone
else do the PTLing.  I'm not going anywhere so I'll still be around to
focus on the simplification and improvements that TripleO needs going
forward.  I look forward to continuing our efforts with everyone.

Thanks,
-Alex



Re: [openstack-dev] [tripleo] Proposing Jose Luis Franco for TripleO core reviewer on Upgrade bits

2018-07-23 Thread Alex Schultz
+1

On Fri, Jul 20, 2018 at 2:07 AM, Carlos Camacho Gonzalez
 wrote:
> Hi!!!
>
> I'd like to propose Jose Luis Franco [1][2] for core reviewer on all the
> TripleO upgrades bits. He shows constant and active involvement in
> improving and fixing our updates/upgrades workflows, and he also helps
> develop/improve/fix our upstream support for testing the updates/upgrades.
>
> Please vote -1/+1, and consider this my +1 vote :)
>
> [1]: https://review.openstack.org/#/q/owner:jfrancoa%2540redhat.com
> [2]: http://stackalytics.com/?release=all&metric=commits&user_id=jfrancoa
>
> Cheers,
> Carlos.
>



Re: [openstack-dev] [kolla][nova][tripleo] Safe guest shutdowns with kolla?

2018-07-13 Thread Alex Schultz
On Fri, Jul 13, 2018 at 1:54 AM, Bogdan Dobrelya  wrote:
> [Added tripleo]
>
> It would be nice to have this situation verified/improved for containerized
> libvirt for compute nodes deployed with TripleO as well.
>
> On 7/12/18 11:02 PM, Clint Byrum wrote:
>>
>> Greetings! We've been deploying with Kolla on CentOS 7 now for a while,
>> and
>> we've recently noticed a rather troubling behavior when we shutdown
>> hypervisors.
>>
>> Somewhere between systemd and libvirt's systemd-machined integration,
>> we see that guests get killed aggressively by SIGTERM'ing all of the
>> qemu-kvm processes. This seems to happen because they are scoped into
>> machine.slice, but systemd-machined is killed which drops those scopes
>> and thus results in killing off the machines.
>
>
> So far we had observed the similar [0] happening, but to systemd vs
> containers managed by docker-daemon (dockerd).
>
> [0] https://bugs.launchpad.net/tripleo/+bug/1778913
>
>
>>
>> In the past, we've used the libvirt-guests service when our libvirt was
>> running outside of containers. This worked splendidly, as we could
>> have it wait 5 minutes for VMs to attempt a graceful shutdown, avoiding
>> interrupting any running processes. But this service isn't available on
>> the host OS, as it won't be able to talk to libvirt inside the container.
>>
>> The solution I've come up with for now is this:
>>
>> [Unit]
>> Description=Manage libvirt guests in kolla safely
>> After=docker.service systemd-machined.service
>> Requires=docker.service
>>
>> [Install]
>> WantedBy=sysinit.target
>>
>> [Service]
>> Type=oneshot
>> RemainAfterExit=yes
>> TimeoutStopSec=400
>> ExecStart=/usr/bin/docker exec nova_libvirt /usr/libexec/libvirt-guests.sh
>> start
>> ExecStart=/usr/bin/docker start nova_compute
>> ExecStop=/usr/bin/docker stop nova_compute
>> ExecStop=/usr/bin/docker exec nova_libvirt /usr/libexec/libvirt-guests.sh
>> shutdown
>>
>> This doesn't seem to work, though I'm still trying to work out
>> the ordering and such. It should ensure that before we stop the
>> systemd-machined and destroy all of its scopes (thus, killing all the
>> vms), we run the libvirt-guests.sh script to try and shut them down. The
>> TimeoutStopSec=400 is because the script itself waits 300 seconds for any
>> VM that refuses to shutdown cleanly, so this gives it a chance to wait
>> for at least one of those. This is an imperfect solution but it allows us
>> to move forward after having made a reasonable attempt at clean shutdowns.
>>
>> Anyway, just wondering if anybody else using kolla-ansible or kolla
>> containers in general have run into this problem, and whether or not
>> there are better/known solutions.
>
>
> As I noted above, I think the issue may be valid for TripleO as well.
>

I think https://review.openstack.org/#/c/580351/ is trying to address this.

Thanks,
-Alex

>>
>> Thanks!
>>
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
>



Re: [openstack-dev] [tripleo][pre] removing default ssh rule from tripleo::firewall::pre

2018-07-13 Thread Alex Schultz
On Thu, Jul 12, 2018 at 8:17 PM, Lars Kellogg-Stedman  wrote:
> I've had a few operators complain about the permissive rule tripleo
> creates for ssh.  The current alternatives seem to be to either disable
> tripleo firewall management completely, or move from the default-deny
> model to a set of rules that include higher-priority blacklist rules
> for ssh traffic.
>
> I've just submitted a pair of reviews [1] that (a) remove the default
> "allow ssh from everywhere" rule in tripleo::firewall:pre and (b) add
> a DefaultFirewallRules parameter to the tripleo-firewall service.
>
> The default value for this new parameter is the same rule that was
> previously in tripleo::firewall::pre, but now it can be replaced by an
> operator as part of the deployment configuration.
>
> For example, a deployment can include:
>
> parameter_defaults:
>   DefaultFirewallRules:
>     tripleo.tripleo_firewall.firewall_rules:
>       '003 allow ssh from internal networks':
>         source: '172.16.0.0/22'
>         proto: 'tcp'
>         dport: 22
>       '003 allow ssh from bastion host':
>         source: '192.168.1.10'
>         proto: 'tcp'
>         dport: 22
>

I've commented on the reviews, but for the wider audience: I don't
think we should completely remove these default rules. As we've
switched to ansible (and ssh) as the deployment orchestration
mechanism, it is important that we don't allow users to lock
themselves out of their cloud via a bad ssh rule. I think we should
update the default rule to allow access over the control plane, but
there must be at least one rule whose existence we enforce, so that
the deployment and update processes continue to function.
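For example, instead of removing the default rule entirely, it could be narrowed to the control plane network. This mirrors the earlier example; the CIDR below is only an assumed ctlplane subnet, for illustration:

```yaml
parameter_defaults:
  DefaultFirewallRules:
    tripleo.tripleo_firewall.firewall_rules:
      '003 allow ssh from the ctlplane network':
        source: '192.168.24.0/24'  # assumed ctlplane CIDR
        proto: 'tcp'
        dport: 22
```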

Thanks,
-Alex

> [1] 
> https://review.openstack.org/#/q/topic:feature/firewall%20(status:open%20OR%20status:merged)
>
> --
> Lars Kellogg-Stedman  | larsks @ {irc,twitter,github}
> http://blog.oddbit.com/|
>



[openstack-dev] [tripleo] Rocky blueprints

2018-07-11 Thread Alex Schultz
Hello everyone,

As milestone 3 is quickly approaching, it's time to review the open
blueprints[0] and their status.  It appears that we have made good
progress on implementing significant functionality this cycle but we
still have some open items.  Below is the list of blueprints that are
still open. We'll want to see if they will make M3; if not, we'd
like to move them out to Stein, and they won't make Rocky without an
FFE.

Currently not marked implemented but without any open patches (likely
implemented):
- https://blueprints.launchpad.net/tripleo/+spec/major-upgrade-workflow
- 
https://blueprints.launchpad.net/tripleo/+spec/tripleo-predictable-ctlplane-ips

Currently open with pending patches (may need FFE):
- https://blueprints.launchpad.net/tripleo/+spec/config-download-ui
- https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow
- https://blueprints.launchpad.net/tripleo/+spec/containerized-undercloud
- https://blueprints.launchpad.net/tripleo/+spec/bluestore
- https://blueprints.launchpad.net/tripleo/+spec/gui-node-discovery-by-range
- https://blueprints.launchpad.net/tripleo/+spec/multiarch-support
- 
https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks-templates
- https://blueprints.launchpad.net/tripleo/+spec/sriov-vfs-as-network-interface
- https://blueprints.launchpad.net/tripleo/+spec/custom-validations

Currently open without work (should be moved to Stein):
- https://blueprints.launchpad.net/tripleo/+spec/automated-ui-testing
- https://blueprints.launchpad.net/tripleo/+spec/plan-from-git-in-gui
- https://blueprints.launchpad.net/tripleo/+spec/tripleo-ui-react-walkthrough
- 
https://blueprints.launchpad.net/tripleo/+spec/wrapping-workflow-for-node-operations
- https://blueprints.launchpad.net/tripleo/+spec/ironic-overcloud-ci


Please take some time to review this list and update it.  If you think
you are close to finishing out a feature and would like to request
an FFE, please start putting that together with appropriate details
and justification for the FFE.

Thanks,
-Alex

[0] https://blueprints.launchpad.net/tripleo/rocky



Re: [openstack-dev] [kolla] Removing old / unused images

2018-06-26 Thread Alex Schultz
On Tue, Jun 26, 2018 at 8:05 AM, Paul Bourke  wrote:
> Hi all,
>
> At the weekly meeting a week or two ago, we mentioned removing some old /
> unused images from Kolla in the interest of keeping the gate run times down,
> as well as general code hygiene.
>
> The images that I've determined are either no longer relevant, or were
> simply never made use of in kolla-ansible, are the following:
>
> * almanach
> * certmonger
> * dind
> * qdrouterd
> * rsyslog
>
> * helm-repository
> * kube
> * kubernetes-entrypoint
> * kubetoolbox
>
> If you still care about any of these or I've made an oversight, please have
> a look at the patch [0]
>

I have commented, as TripleO is using some of these. I would say that
you shouldn't just remove these; there needs to be a proper
deprecation policy. Just because you aren't using them in
kolla-ansible doesn't mean someone isn't actually using them.

Thanks,
-Alex

> Thanks!
> -Paul
>
> [0] https://review.openstack.org/#/c/578111/
>



Re: [openstack-dev] [tripleo] CI is down stop workflowing

2018-06-19 Thread Alex Schultz
On Tue, Jun 19, 2018 at 1:45 PM, Wesley Hayutin  wrote:
> Check and gate jobs look clear.
> More details in a bit.
>


So for a recap of the last 24 hours or so...

Mistral auth problems - https://bugs.launchpad.net/tripleo/+bug/1777541
 - caused by https://review.openstack.org/#/c/574878/
 - fixed by https://review.openstack.org/#/c/576336/

Undercloud install failure - https://bugs.launchpad.net/tripleo/+bug/1777616
- caused by https://review.openstack.org/#/c/570307/
- fixed by https://review.openstack.org/#/c/576428/

Keystone duplicate role - https://bugs.launchpad.net/tripleo/+bug/1777451
- caused by https://review.openstack.org/#/c/572243/
- fixed by https://review.openstack.org/#/c/576356 and
https://review.openstack.org/#/c/576393/

The puppet issues should be prevented in the future by adding TripleO
undercloud jobs back into the appropriate modules; see
https://review.openstack.org/#/q/topic:tripleo-ci+(status:open)
I recommended the undercloud jobs because that gives us some basic
coverage and the instack-undercloud job still uses puppet without
containers.  We'll likely want to replace these jobs with standalone
versions at some point as that configuration gets more mature.

We've restored any patches that were abandoned in the gate and it
should be ok to recheck.

Thanks,
-Alex

> Thanks
>
> Sent from my mobile
>
> On Tue, Jun 19, 2018, 07:33 Felix Enrique Llorente Pastora
>  wrote:
>>
>> Hi,
>>
>>We have the following bugs with fixes that need to land to unblock
>> check/gate jobs:
>>
>>https://bugs.launchpad.net/tripleo/+bug/1777451
>>https://bugs.launchpad.net/tripleo/+bug/1777616
>>
>>You can check them out at #tripleo ooolpbot.
>>
>>Please stop workflowing temporarily until they get merged.
>>
>> BR.
>>
>> --
>> Quique Llorente
>>
>> Openstack TripleO CI
>
>



Re: [openstack-dev] DeployArtifacts considered...complicated?

2018-06-19 Thread Alex Schultz
On Tue, Jun 19, 2018 at 9:17 AM, Jiří Stránský  wrote:
> On 19.6.2018 16:29, Lars Kellogg-Stedman wrote:
>>
>> On Tue, Jun 19, 2018 at 02:18:38PM +0100, Steven Hardy wrote:
>>>
>>> Is this the same issue Carlos is trying to fix via
>>> https://review.openstack.org/#/c/494517/ ?
>>
>>
>> That solves part of the problem, but it's not a complete solution.
>> In particular, it doesn't solve the problem that bit me: if you're
>> changing puppet providers (e.g., replacing
>> provider/keystone_config/ini_setting.rb with
>> provider/keystone_config/openstackconfig.rb), you still have the old
>> provider sitting around causing problems because unpacking a tarball
>> only *adds* files.
>>
>>> Yeah I think we've never seen this because normally the
>>> /etc/puppet/modules tarball overwrites the symlink, effectively giving
>>> you a new tree (the first time round at least).
>>
>>
>> But it doesn't, and that's the unexpected problem: if you replace the
>> /etc/puppet/modules/keystone symlink with a directory, then
>> /usr/share/openstack-puppet/modules/keystone is still there, and while
>> the manifests won't be used, the contents of the lib/ directory will
>> still be active.
>>
>>> Probably we could add something to the script to enable a forced
>>> cleanup each update:
>>>
>>>
>>> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/deploy-artifacts.sh#L9
>>
>>
>> We could:
>>
>> (a) unpack the replacement puppet modules into a temporary location,
>>then
>>
>> (b) for each module; rm -rf the target directory and then copy it into
>>place
>>
>> But! This would require deploy_artifacts.sh to know that it was
>> unpacking puppet modules rather than a generic tarball.
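For illustration, the "unpack to a temporary location, then replace each module wholesale" idea might look like this (a sketch only: the real deploy-artifacts.sh is a shell script, and the default target path here is an assumption):

```python
import os
import shutil
import tarfile
import tempfile

def replace_puppet_modules(tarball, target="/etc/puppet/modules"):
    """Unpack a puppet-modules tarball so each module fully replaces
    the old copy instead of being merged over it, which would leave
    stale providers from the old tree active."""
    tmp = tempfile.mkdtemp()
    try:
        with tarfile.open(tarball) as tf:
            tf.extractall(tmp)
        for module in os.listdir(tmp):
            dest = os.path.join(target, module)
            if os.path.islink(dest):
                os.unlink(dest)        # drop a symlink into /usr/share/...
            elif os.path.isdir(dest):
                shutil.rmtree(dest)    # drop the stale module tree
            shutil.move(os.path.join(tmp, module), dest)
    finally:
        shutil.rmtree(tmp, ignore_errors=True)
```

The difference from a plain tar extraction is the unlink/rmtree step, so files deleted upstream (like a replaced provider) cannot linger in the target tree.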
>>
>>> This would have to be optional, so we could add something like a
>>> DeployArtifactsCleanupDirs parameter perhaps?
>>
>>
>> If we went with the above, sure.
>>
>>> One more thought which just occurred to me - we could add support for
>>> a git checkout/pull to the script?
>>
>>
>> Reiterating our conversation in #tripleo, I think rather than adding a
>> bunch of specific functionality to the DeployArtifacts feature, it
>> would make more sense to add the ability to include some sort of
>> user-defined pre/post tasks, either as shell scripts or as ansible
>> playbooks or something.
>>
>> On the other hand, I like your suggestion of just ditching
>> DeployArtifacts for a new composable service that defines
>> host_prep_tasks (or re-implenting DeployArtifacts as a composable
>> service), so I'm going to look at that as a possible alternative to
>> what I'm currently doing.
>>
>
> For the puppet modules specifically, we might also add another
> directory+mount into the docker-puppet container, which would be blank by
> default (unlike the existing, already populated /etc/puppet and
> /usr/share/openstack-puppet/modules). And we'd put that directory at the
> very start of modulepath. Then i *think* puppet would use a particular
> module from that dir *only*, not merge the contents with the rest of
> modulepath, so stale files in /etc/... or /usr/share/... wouldn't matter
> (didn't test it though). That should get us around the "tgz only adds files"
> problem without any rm -rf.
>

So the described problem only affects puppet facts and providers, as
they all get loaded from the entire module path. Normal puppet classes
are less conflict-prone because Puppet takes the first one it finds and
stops.
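For reference, the modulepath ordering under discussion is a plain puppet.conf setting; an override directory prepended like this (the path is hypothetical) would win for manifests and classes, but, per the caveat above, facts and providers under lib/ would still be picked up from every entry:

```ini
# Hypothetical puppet.conf for the docker-puppet container: a blank
# override directory placed first on the modulepath (names illustrative)
[main]
modulepath = /etc/puppet/modules-override:/etc/puppet/modules:/usr/share/openstack-puppet/modules
```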

> The above is somewhat of an orthogonal suggestion to the composable service
> approach, they would work well alongside i think. (And +1 on
> "DeployArtifacts as composable service" as something worth investigating /
> implementing.)
>

-1 to more services. We take a Heat time penalty for each new
composable service we add, and I don't think this should be a service
itself.  This case is better suited to a host prep task than a defined
service, so providing a way for users to define external host prep
tasks might make more sense.
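A rough sketch of what user-defined host prep tasks could look like, as plain Ansible tasks (module names, paths, and the stale-file example are all illustrative, not an agreed interface):

```yaml
# Hypothetical user-supplied host prep tasks, run on each node before
# configuration starts; everything below is illustrative only.
- name: Unpack replacement puppet modules
  unarchive:
    src: /root/artifacts/puppet-modules.tar.gz
    dest: /etc/puppet/modules
    remote_src: true

- name: Remove a stale provider left behind by an older module version
  file:
    path: /usr/share/openstack-puppet/modules/keystone/lib/puppet/provider/keystone_config/ini_setting.rb
    state: absent
```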

Thanks,
-Alex

> Jirka
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits

2018-06-19 Thread Alex Schultz
On Wed, Jun 13, 2018 at 9:50 AM, Emilien Macchi  wrote:
> Alan Bishop has been highly involved in the Storage backends integration in
> TripleO and Puppet modules, always here to update with new features, fix
> (nasty and untestable third-party backends) bugs and manage all the
> backports for stable releases:
> https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22
>
> He's also well knowledgeable of how TripleO works and how containers are
> integrated, I would like to propose him as core on TripleO projects for
> patches related to storage things (Cinder, Glance, Swift, Manila, and
> backends).
>

Since there are no objections, I have added Alan to the cores list.

Thanks,
-Alex

> Please vote -1/+1,
> Thanks!
> --
> Emilien Macchi
>
>



Re: [openstack-dev] [cloudkitty] configuration, deployment or packaging issue?

2018-06-18 Thread Alex Schultz
On Mon, Jun 18, 2018 at 4:08 PM, Tobias Urdin  wrote:
> Hello CloudKitty team,
>
>
> I'm having an issue with this review not going through and being stuck after
> staring at it for a while now [1].
>
> Is there any configuration[2] issue that are causing the error[3]? Or is the
> package broken?
>

Likely due to https://review.openstack.org/#/c/538256/ which appears
to change the metrics.yaml format. It doesn't look backwards
compatible, so the puppet module probably needs updating.

>
> Thanks for helping out!
>
> Best regards
>
>
> [1] https://review.openstack.org/#/c/569641/
>
> [2]
> http://logs.openstack.org/41/569641/1/check/puppet-openstack-beaker-centos-7/ee4742c/logs/etc/cloudkitty/
>
> [3]
> http://logs.openstack.org/41/569641/1/check/puppet-openstack-beaker-centos-7/ee4742c/logs/cloudkitty/processor.txt.gz
>
>
>



Re: [openstack-dev] Puppet debugging help?

2018-06-18 Thread Alex Schultz
On Mon, Jun 18, 2018 at 9:13 AM, Lars Kellogg-Stedman  wrote:
> Hey folks,
>
> I'm trying to patch puppet-keystone to support multi-valued
> configuration options (like trusted_dashboard).  I have a patch that
> works, mostly, but I've run into a frustrating problem (frustrating
> because it would seem to be orthogonal to my patches, which affect the
> keystone_config provider and type).
>
> During the initial deploy, running tripleo::profile::base::keystone
> fails with:
>
>   "Error: Could not set 'present' on ensure: undefined method `new'
>   for nil:NilClass at
>   /etc/puppet/modules/tripleo/manifests/profile/base/keystone.pp:274",
>

It's likely erroring in the keystone_domain provider.

https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_domain/openstack.rb#L115-L122
or
https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_domain/openstack.rb#L155-L161

Providers are notoriously bad at error messaging.  Usually this error
happens when we get nil back from the underlying command and still try
to do something with the result.  That could point to a
misconfiguration of keystone if the command isn't returning anything.
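For what it's worth, the error shape is easy to reproduce outside Puppet: it is what Ruby raises whenever a lookup returns nil and the caller then invokes a method on it (a minimal illustration, not the actual provider code):

```ruby
# Minimal repro of the error class: a lookup (e.g. resolving a type or
# provider class) returns nil and the caller blindly calls .new on it.
def lookup_class(registry, name)
  registry[name]  # nil when nothing is registered under that name
end

klass = lookup_class({}, :keystone_domain)
begin
  klass.new
rescue NoMethodError => e
  puts e.message  # something like: undefined method `new' for nil:NilClass
end
```

Hunting for whichever underlying command returned nil (as in the provider code linked above) is usually more productive than chasing the `new` call itself.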

> The line in question is:
>
>   70: if $step == 3 and $manage_domain {
>   71:   if hiera('heat_engine_enabled', false) {
>   72: # create these seperate and don't use ::heat::keystone::domain since
>   73: # that class writes out the configs
>   74: keystone_domain { $heat_admin_domain:
> ensure  => 'present',
> enabled => true
>   }
>
> The thing is, despite the error...it creates the keystone domain
> *anyway*, and a subsequent run of the module will complete without any
> errors.
>
> I'm not entirely sure that the error is telling me, since *none* of
> the puppet types or providers have a "new" method as far as I can see.
> Any pointers you can offer would be appreciated.
>
> Thanks!
>
> --
> Lars Kellogg-Stedman  | larsks @ {irc,twitter,github}
> http://blog.oddbit.com/|
>



Re: [openstack-dev] [tripleo] Status of Standalone installer (aka All-In-One)

2018-06-15 Thread Alex Schultz
On Mon, Jun 4, 2018 at 6:26 PM, Emilien Macchi  wrote:
> TL;DR: we made nice progress and you can checkout this demo:
> https://asciinema.org/a/185533
>
> We started the discussion back in Dublin during the last PTG. The idea of
> Standalone (aka All-In-One, but can be mistaken with all-in-one overcloud)
> is to deploy a single node OpenStack where the provisioning happens on the
> same node (there is no notion of {under/over}cloud).
>
> A kind of a "packstack" or "devstack" but using TripleO which has can offer:
> - composable containerized services
> - composable upgrades
> - composable roles
> - Ansible driven deployment
>
> One of the key features we have been focusing so far are:
> - low bar to be able to dev/test TripleO (single machine: VM), with simpler
> tooling
> - make it fast (being able to deploy OpenStack in minutes)


So to provide an update: I spent this week nailing down the network
configuration for the standalone deployment. I've proposed docs[0]
covering two setups, both of which I was able to test:

a) 2 nic (requires a second nic with an accessible second "public"
network that is optionally routable for VM connectivity)
b) 1 nic (requires 3 ips)

Additionally I've proposed an update to the Standalone role[1] that
includes Controller + Compute on a single node. With this I was able
to try out Keystone, Nova, Neutron (with ovs, floating ips), Glance
(backed by Swift), Cinder (lvm).  This configuration took about 35
minutes to go from zero to cloud on a single 16GB VM hosted on some old
hardware.

Thanks,
-Alex

[0] https://review.openstack.org/#/c/575859/
[1] https://review.openstack.org/#/c/575862/



Re: [openstack-dev] [tripleo] Migration to Storyboard

2018-06-15 Thread Alex Schultz
On Fri, Jun 15, 2018 at 3:12 AM, Michele Baldessari  wrote:
> On Mon, May 21, 2018 at 01:58:26PM -0700, Emilien Macchi wrote:
>> During the Storyboard session today:
>> https://etherpad.openstack.org/p/continuing-the-migration-lp-sb
>>
>> We mentioned that TripleO would continue to migrate during Rocky cycle.
>> Like Alex mentioned in this thread, we need to migrate the scripts used by
>> the CI squad so they work with SB.
>> Once this is done, we'll proceed to the full migration of all blueprints
>> and bugs into tripleo-common project in SB.
>> Projects like tripleo-validations, tripleo-ui (more?) who have 1:1 mapping
>> between their "name" and project repository could use a dedicated project
>> in SB, although we need to keep things simple for our users so they know
>> where to file a bug without confusion.
>> We hope to proceed during Rocky but it'll probably take some time to update
>> our scripts and documentation, also educate our community to use the tool,
>> so we expect the Stein cycle the first cycle where we actually consume SB.
>>
>> I really wanted to thank the SB team for their patience and help, TripleO
>> is big and this migration hasn't been easy but we'll make it :-)
>
> Having used storyboard for the first time today to file a bug^Wstory in heat,
> I'd like to raise a couple of concerns on this migration. And by all
> means, if I just missed to RTFM, feel free to point me in the right
> direction.
>
> 1. Searching for bugs in a specific project is *extremely* cumbersome
>and I am not even sure I got it right (first you need to put
>openstack/project in the search bar, wait and click it. Then you add
>the term you are looking for. I have genuinely no idea if I get all
>the issues I was looking for or not as it is not obvious on what
>fields this search is performed
> 2. Advanced search is either very well hidden or not existant yet?
>E.g. how do you search for bugs filed by someone or over a certain
>release, or just generally more complex searches which are super
>useful in order to avoid filing duplicate bugs.
>
> I think Zane's additional list also matches my experience very well:
> http://lists.openstack.org/pipermail/openstack-dev/2018-June/131365.html
>
> So my take is that a migration atm is a bit premature and I would
> postpone it at least to Stein.
>

Given that my original request was to try and do it prior to M2, and
that milestone has passed, I'd side with waiting: let's focus on the
Rocky work and push the migration to early Stein instead. I'll bring
this up in the next IRC meeting if anyone wishes to discuss it further.

Thanks,
-Alex

> Thanks,
> Michele
>
>> Thanks,
>>
>> On Tue, May 15, 2018 at 7:53 AM, Alex Schultz  wrote:
>>
>> > Bumping this up so folks can review this.  It was mentioned in this
>> > week's meeting that it would be a good idea for folks to take a look
>> > at Storyboard to get familiar with it.  The upstream docs have been
>> > updated[0] to point to the differences when dealing with proposed
>> > patches.  Please take some time to review this and raise any
>> > concerns/issues now.
>> >
>> > Thanks,
>> > -Alex
>> >
>> > [0] https://docs.openstack.org/infra/manual/developers.html#
>> > development-workflow
>> >
>> > On Wed, May 9, 2018 at 1:24 PM, Alex Schultz  wrote:
>> > > Hello tripleo folks,
>> > >
>> > > So we've been experimenting with migrating some squads over to
>> > > storyboard[0] but this seems to be causing more issues than perhaps
>> > > it's worth.  Since the upstream community would like to standardize on
>> > > Storyboard at some point, I would propose that we do a cut over of all
>> > > the tripleo bugs/blueprints from Launchpad to Storyboard.
>> > >
>> > > In the irc meeting this week[1], I asked that the tripleo-ci team make
>> > > sure the existing scripts that we use to monitor bugs for CI support
>> > > Storyboard.  I would consider this a prerequisite for the migration.
>> > > I am thinking it would be beneficial to get this done before or as
>> > > close to M2.
>> > >
>> > > Thoughts, concerns, etc?
>> > >
>> > > Thanks,
>> > > -Alex
>> > >
>> > > [0] https://storyboard.openstack.org/#!/project_group/76
>> > > [1] http://eavesdrop.openstack.org/meetings/tripleo/2018/
>> > tripleo.2018-05-08-14.00.log.html#l-42
>> >

Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits

2018-06-13 Thread Alex Schultz
+1

On Wed, Jun 13, 2018 at 9:50 AM, Emilien Macchi  wrote:
> Alan Bishop has been highly involved in the Storage backends integration in
> TripleO and Puppet modules, always here to update with new features, fix
> (nasty and untestable third-party backends) bugs and manage all the
> backports for stable releases:
> https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22
>
> He's also well knowledgeable of how TripleO works and how containers are
> integrated, I would like to propose him as core on TripleO projects for
> patches related to storage things (Cinder, Glance, Swift, Manila, and
> backends).
>
> Please vote -1/+1,
> Thanks!
> --
> Emilien Macchi
>
>



Re: [openstack-dev] [tripleo][puppet] Hello all, puppet modules

2018-05-31 Thread Alex Schultz
On Wed, May 30, 2018 at 3:18 PM, Remo Mattei  wrote:

> Hello all,
> I have talked to several people about this and I would love to get this
> finalized once and for all. I have checked the OpenStack puppet modules
> which are mostly developed by the Red Hat team, as of right now, TripleO is
> using a combo of Ansible and puppet to deploy but in the next couple of
> releases, the plan is to move away from the puppet option.
>
>
So the OpenStack puppet modules are maintained by others besides Red
Hat, though we have been a major contributor since TripleO has relied on
them for some time.  That said, as TripleO has migrated to containers
built with Kolla, we've adapted our deployment mechanism to include Ansible,
and we now really only use puppet for configuration generation. Our goal for
TripleO is to eventually be fully containerized, which isn't something the
puppet modules support today, and I'm not sure it is on their road map.


>
> So consequently, what will be the plan of TripleO and the puppet modules?
>


As TripleO moves forward, we may continue to support deployments via puppet
modules but the amount of testing that we'll be including upstream will
mostly exercise external Ansible integrations (example, ceph-ansible,
openshift-ansible, etc) and Kolla containers.  As of Queens, most of the
services deployed via TripleO are deployed via containers and not on
baremetal via puppet. We no longer support deploying OpenStack services on
baremetal via the puppet modules and will likely be removing this support
in the code in Stein.  The end goal will likely be moving away from puppet
modules within TripleO if we can solve the backwards compatibility and
configuration generation via other mechanisms.  We will likely recommend
leveraging external Ansible role calls rather than including puppet modules
and using those to deploy services that are not inherently supported by
TripleO.  I can't really give a time frame as we are still working out the
details, but it is likely that over the next several cycles we'll see a
reduction in the dependence of puppet in TripleO and an increase in
leveraging available Ansible roles.

From the Puppet OpenStack standpoint, others are stepping up to continue to
ensure the modules are available and I know I'll keep an eye on them for as
long as TripleO leverages some of the functionality.  The Puppet OpenStack
modules are very stable, but without additional community folks stepping
up I'm not sure there will be support for newer functionality added by
the various OpenStack projects.  I'm sure others can chime in here on
their usage of and plans for the Puppet OpenStack modules.

Hope that helps.

Thanks,
-Alex


>
> Thanks
>
>
>


Re: [openstack-dev] [Release-job-failures][tripleo] Release of openstack/tripleo-validations failed

2018-05-29 Thread Alex Schultz
On Tue, May 29, 2018 at 9:31 AM, Doug Hellmann  wrote:
> Excerpts from zuul's message of 2018-05-29 14:28:57 +:
>> Build failed.
>>
>> - release-openstack-python 
>> http://logs.openstack.org/26/26956a27b95550e2162243da79d62bb1b19d50d7/release/release-openstack-python/2bd7f7d/
>>  : POST_FAILURE in 6m 34s
>> - announce-release announce-release : SKIPPED
>> - propose-update-constraints propose-update-constraints : SKIPPED
>>
>
> There appears to be an issue with the tripleo-validations README.rst
> file. It's likely this is new validation being done by PyPI, so rather
> than worrying about which change broke things, I suggest just working out
> how to fix it and move on.
>
> http://logs.openstack.org/26/26956a27b95550e2162243da79d62bb1b19d50d7/release/release-openstack-python/2bd7f7d/job-output.txt.gz#_2018-05-29_14_28_35_963702
>

https://bugs.launchpad.net/tripleo/+bug/1774001

I've proposed a fix to shuffle around the readme to clean this up.
https://review.openstack.org/570954

Thanks,
-Alex

> Doug
>



Re: [openstack-dev] [tripleo][ci][infra] Quickstart Branching

2018-05-23 Thread Alex Schultz
le-container-registry/
[1] http://git.openstack.org/cgit/openstack/ansible-role-redhat-subscription/
[2] http://git.openstack.org/cgit/openstack/ansible-role-tripleo-keystone/
[3] http://git.openstack.org/cgit/openstack/puppet-openstacklib/
[4] https://review.openstack.org/#/c/565856/
[5] https://review.openstack.org/#/c/569830

> Thanks
>
>
>
> On Wed, May 23, 2018 at 7:04 PM, Alex Schultz <aschu...@redhat.com> wrote:
>>
>> On Wed, May 23, 2018 at 8:30 AM, Sagi Shnaidman <sshna...@redhat.com>
>> wrote:
>> > Hi, Sergii
>> >
>> > thanks for the question. It's not first time that this topic is raised
>> > and
>> > from first view it could seem that branching would help to that sort of
>> > issues.
>> >
>> > Although it's not the case. Tripleo-quickstart(-extras) is part of CI
>> > code,
>> > as well as tripleo-ci repo which have never been branched. The reason
>> > for
>> > that is relative small impact on CI code from product branching. Think
>> > about
>> > backport almost *every* patch to oooq and extras to all supported
>> > branches,
>> > down to newton at least. This will be a really *huge* price and non
>> > reasonable work. Just think about active maintenance of 3-4 versions of
>> > CI
>> > code in each of 3 repositories. It will take all time of CI team with
>> > almost
>> > zero value of this work.
>> >
>>
>> So I'm not sure I completely agree with this assessment as there is a
>> price paid for every {%if release in [...]%} that we have to carry in
>> oooq{,-extras}.  These go away if we branch because we don't have to
>> worry about breaking previous releases or current release (which may
>> or may not actually have CI results).
>>
>> > What regards patch you listed, we would have backport this change to
>> > *every*
>> > branch, and it wouldn't really help to avoid the issue. The source of
>> > problem is not branchless repo here.
>> >
>>
>> No we shouldn't be backporting every change.  The logic in oooq-extras
>> should be version specific and if we're changing an interface in
>> tripleo in a breaking fashion we're doing it wrong in tripleo. If
>> we're backporting things to work around tripleo issues, we're doing it
>> wrong in quickstart.
>>
>> > Regarding catching such issues and Bogdans point, that's right we added
>> > a
>> > few jobs to catch such issues in the future and prevent breakages, and a
>> > few
>> > running jobs is reasonable price to keep configuration working in all
>> > branches. Comparing to maintenance nightmare with branches of CI code,
>> > it's
>> > really a *zero* price.
>> >
>>
>> Nothing is free. If there's a high maintenance cost, we haven't
>> properly identified the optimal way to separate functionality between
>> tripleo/quickstart.  I have repeatedly said that the provisioning
>> parts of quickstart should be separate because those aren't tied to a
>> tripleo version and this along with the scenario configs should be the
>> only unbranched repo we have. Any roles related to how to
>> configure/work with tripleo should be branched and tied to a stable
>> branch of tripleo. This would actually be beneficial for tripleo as
>> well because then we can see when we are introducing backwards
>> incompatible changes.
>>
>> Thanks,
>> -Alex
>>
>> > Thanks
>> >
>> >
>> > On Wed, May 23, 2018 at 3:43 PM, Sergii Golovatiuk <sgolo...@redhat.com>
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> Looking at [1], I am thinking about the price we paid for not
>> >> branching tripleo-quickstart. Can we discuss the options to prevent
>> >> the issues such as [1]? Thank you in advance.
>> >>
>> >> [1] https://review.openstack.org/#/c/569830/4
>> >>
>> >> --
>> >> Best Regards,
>> >> Sergii Golovatiuk
>> >>
>> >>
>> >
>> >
>> >
>> >
>> > --
>> > Best regards
>> > Sagi Shnaidman
>> >
>> >
>> >

Re: [openstack-dev] [tripleo][ci][infra] Quickstart Branching

2018-05-23 Thread Alex Schultz
On Wed, May 23, 2018 at 8:30 AM, Sagi Shnaidman  wrote:
> Hi, Sergii
>
> thanks for the question. It's not first time that this topic is raised and
> from first view it could seem that branching would help to that sort of
> issues.
>
> Although it's not the case. Tripleo-quickstart(-extras) is part of CI code,
> as well as tripleo-ci repo which have never been branched. The reason for
> that is relative small impact on CI code from product branching. Think about
> backport almost *every* patch to oooq and extras to all supported branches,
> down to newton at least. This will be a really *huge* price and non
> reasonable work. Just think about active maintenance of 3-4 versions of CI
> code in each of 3 repositories. It will take all time of CI team with almost
> zero value of this work.
>

So I'm not sure I completely agree with this assessment, as there is a
price paid for every {%if release in [...]%} that we have to carry in
oooq{,-extras}.  These go away if we branch, because we no longer have
to worry about breaking previous or current releases (which may or may
not actually have CI results).
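(To illustrate the pattern for readers outside CI: branchless oooq code carries inline per-release switches like the sketch below. This is an invented example, not actual oooq code; each such conditional has to be kept correct for every supported branch at once.)

```yaml
# Invented example of a per-release switch in branchless CI code;
# the variable names and values are illustrative only.
- name: Set a release-dependent option
  set_fact:
    some_option: "{{ 'legacy-value' if release in ['newton', 'ocata'] else 'current-value' }}"
```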

> What regards patch you listed, we would have backport this change to *every*
> branch, and it wouldn't really help to avoid the issue. The source of
> problem is not branchless repo here.
>

No, we shouldn't be backporting every change.  The logic in oooq-extras
should be version specific and if we're changing an interface in
tripleo in a breaking fashion we're doing it wrong in tripleo. If
we're backporting things to work around tripleo issues, we're doing it
wrong in quickstart.

> Regarding catching such issues and Bogdans point, that's right we added a
> few jobs to catch such issues in the future and prevent breakages, and a few
> running jobs is reasonable price to keep configuration working in all
> branches. Comparing to maintenance nightmare with branches of CI code, it's
> really a *zero* price.
>

Nothing is free. If there's a high maintenance cost, we haven't
properly identified the optimal way to separate functionality between
tripleo/quickstart.  I have repeatedly said that the provisioning
parts of quickstart should be separate because those aren't tied to a
tripleo version and this along with the scenario configs should be the
only unbranched repo we have. Any roles related to how to
configure/work with tripleo should be branched and tied to a stable
branch of tripleo. This would actually be beneficial for tripleo as
well because then we can see when we are introducing backwards
incompatible changes.

Thanks,
-Alex

> Thanks
>
>
> On Wed, May 23, 2018 at 3:43 PM, Sergii Golovatiuk 
> wrote:
>>
>> Hi,
>>
>> Looking at [1], I am thinking about the price we paid for not
>> branching tripleo-quickstart. Can we discuss the options to prevent
>> the issues such as [1]? Thank you in advance.
>>
>> [1] https://review.openstack.org/#/c/569830/4
>>
>> --
>> Best Regards,
>> Sergii Golovatiuk
>>
>
>
>
>
> --
> Best regards
> Sagi Shnaidman
>
>



[openstack-dev] [tripleo] Cancel IRC meeting for May 22, 2018

2018-05-16 Thread Alex Schultz
Since the summit is coming up, there will likely be very low
attendance. We'll carry any open items until the following week.

Thanks,
-Alex



Re: [openstack-dev] [docs] Automating documentation the tripleo way?

2018-05-16 Thread Alex Schultz
On Wed, May 16, 2018 at 1:04 PM, Doug Hellmann  wrote:
> Excerpts from Wesley Hayutin's message of 2018-05-16 12:51:25 -0600:
>> On Wed, May 16, 2018 at 2:41 PM Doug Hellmann  wrote:
>>
>> > Excerpts from Petr Kovar's message of 2018-05-16 17:39:14 +0200:
>> > > Hi all,
>> > >
>> > > In the past few years, we've seen several efforts aimed at automating
>> > > procedural documentation, mostly centered around the OpenStack
>> > > installation guide. This idea to automatically produce and verify
>> > > installation steps or similar procedures was mentioned again at the last
>> > > Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing).
>> > >
>> > > It was brought to my attention that the tripleo team has been working on
>> > > automating some of the tripleo deployment procedures, using a Bash script
>> > > with included comment lines to supply some RST-formatted narrative, for
>> > > example:
>> > >
>> > >
>> > https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2
>> > >
>> > > The Bash script can then be converted to RST, e.g.:
>> > >
>> > >
>> > https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/
>> > >
>> > > Source Code:
>> > >
>> > >
>> > https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs
>> > >
>> > > I really liked this approach and while I don't want to sound like selling
>> > > other people's work, I'm wondering if there is still an interest among
>> > the
>> > > broader OpenStack community in automating documentation like this?
>> > >
>> > > Thanks,
>> > > pk
>> > >
>> >
>> > Weren't the folks doing the training-labs or training-guides taking a
>> > similar approach? IIRC, they ended up implementing what amounted to
>> > their own installer for OpenStack, and then ended up with all of the
>> > associated upgrade and testing burden.
>> >
>> > I like the idea of trying to use some automation from this, but I wonder
>> > if we'd be better off extracting data from other tools, rather than
>> > building a new one.
>> >
>> > Doug
>> >
>>
>> So there really isn't anything new to create, the work is done and executed
>> on every tripleo change that runs in rdo-cloud.
>
> It wasn't clear what Petr was hoping to get. Deploying with TripleO is
> only one way to deploy, so we wouldn't be able to replace the current
> installation guides with the results of this work. It sounds like that's
> not the goal, though.
>
>>
>> Instead of dismissing the idea upfront I'm more inclined to set an
>> achievable small step to see how well it works.  My thought would be to
>> focus on the upcoming all-in-one installer and the automated doc generated
>> with that workflow.  I'd like to target publishing the all-in-one tripleo
>> installer doc to [1] for Stein and of course a section of tripleo.org.
>
> As an official project, why is TripleO still publishing docs to its own
> site? That's not something we generally encourage.
>

We publish on docs.o.o. It's the same docs, just different theme.

https://docs.openstack.org/tripleo-docs/latest/install/index.html
https://docs.openstack.org/tripleo-docs/latest/contributor/index.html

I guess we could just change tripleo.org to redirect to the docs.o.o
content; I'm not sure of the history behind this.  I would note that you
can't really find our docs from the main docs.o.o page unless you
search, so maybe that's part of it? I assume that's because we didn't
version our docs in the past, so they don't show up.  Is there a better
way to ensure the visibility of docs?

Thanks,
-Alex

> That said, publishing a new deployment guide based on this technique
> makes sense in general. What about Ben's comments elsewhere in the
> thread?
>
> Doug
>
>>
>> What do you think?
>>
>> [1] https://docs.openstack.org/queens/deploy/
>>
>> >
>> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Migration to Storyboard

2018-05-15 Thread Alex Schultz
Bumping this up so folks can review this.  It was mentioned in this
week's meeting that it would be a good idea for folks to take a look
at Storyboard to get familiar with it.  The upstream docs have been
updated[0] to point to the differences when dealing with proposed
patches.  Please take some time to review this and raise any
concerns/issues now.

Thanks,
-Alex

[0] https://docs.openstack.org/infra/manual/developers.html#development-workflow

On Wed, May 9, 2018 at 1:24 PM, Alex Schultz <aschu...@redhat.com> wrote:
> Hello tripleo folks,
>
> So we've been experimenting with migrating some squads over to
> storyboard[0] but this seems to be causing more issues than perhaps
> it's worth.  Since the upstream community would like to standardize on
> Storyboard at some point, I would propose that we do a cut over of all
> the tripleo bugs/blueprints from Launchpad to Storyboard.
>
> In the irc meeting this week[1], I asked that the tripleo-ci team make
> sure the existing scripts that we use to monitor bugs for CI support
> Storyboard.  I would consider this a prerequisite for the migration.
> I am thinking it would be beneficial to get this done before or as
> close to M2.
>
> Thoughts, concerns, etc?
>
> Thanks,
> -Alex
>
> [0] https://storyboard.openstack.org/#!/project_group/76
> [1] 
> http://eavesdrop.openstack.org/meetings/tripleo/2018/tripleo.2018-05-08-14.00.log.html#l-42

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-14 Thread Alex Schultz
On Mon, May 14, 2018 at 10:15 AM, Bogdan Dobrelya  wrote:
> An update for your review please folks
>
>> Bogdan Dobrelya  writes:
>>
>>> Hello.
>>> As Zuul documentation [0] explains, the names "check", "gate", and
>>> "post"  may be altered for more advanced pipelines. Is it doable to
>>> introduce, for particular openstack projects, multiple check
>>> stages/steps as check-1, check-2 and so on? And is it possible to make
>>> the consequent steps reusing environments from the previous steps
>>> finished with?
>>>
>>> Narrowing down to tripleo CI scope, the problem I'd want we to solve
>>> with this "virtual RFE", and using such multi-staged check pipelines,
>>> is reducing (ideally, de-duplicating) some of the common steps for
>>> existing CI jobs.
>>
>>
>> What you're describing sounds more like a job graph within a pipeline.
>> See:
>> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies
>> for how to configure a job to run only after another job has completed.
>> There is also a facility to pass data between such jobs.
>>
>> ... (skipped) ...
>>
>> Creating a job graph to have one job use the results of the previous job
>> can make sense in a lot of cases.  It doesn't always save *time*
>> however.
>>
>> It's worth noting that in OpenStack's Zuul, we have made an explicit
>> choice not to have long-running integration jobs depend on shorter pep8
>> or tox jobs, and that's because we value developer time more than CPU
>> time.  We would rather run all of the tests and return all of the
>> results so a developer can fix all of the errors as quickly as possible,
>> rather than forcing an iterative workflow where they have to fix all the
>> whitespace issues before the CI system will tell them which actual tests
>> broke.
>>
>> -Jim
>
>
> I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for
> undercloud deployments vs upgrades testing (and some more). Given that those
> undercloud jobs have not so high fail rates though, I think Emilien is right
> in his comments and those would buy us nothing.
>
> From the other side, what do you think folks of making the
> tripleo-ci-centos-7-3nodes-multinode depend on
> tripleo-ci-centos-7-containers-multinode [2]? The former seems quite faily
> and long running, and is non-voting. It deploys (see featuresets configs
> [3]*) a 3 nodes in HA fashion. And it seems almost never passing, when the
> containers-multinode fails - see the CI stats page [4]. I've found only a 2
> cases there for the otherwise situation, when containers-multinode fails,
> but 3nodes-multinode passes. So cutting off those future failures via the
> dependency added, *would* buy us something and allow other jobs to wait less
> to commence, by a reasonable price of somewhat extended time of the main
> zuul pipeline. I think it makes sense and that extended CI time will not
> overhead the RDO CI execution times so much to become a problem. WDYT?
>

I'm not sure it makes sense to add a dependency on other deployment
tests. It's going to add additional time to the CI run because the
upgrade won't start until well over an hour after the rest of the
jobs.  The only thing I could think of where this makes more sense is
to delay the deployment tests until the pep8/unit tests pass.  e.g.
let's not burn resources when the code is bad. There might be
arguments about lack of information from a deployment when developing
things but I would argue that the patch should be vetted properly
first in a local environment before taking CI resources.
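For reference, a gating relationship like the one described above
(don't start the expensive deployment job until the cheap checks pass)
can be expressed with Zuul v3's job `dependencies` attribute. A sketch
of a project-pipeline config, with the job pairing taken as
illustrative rather than an actual tripleo proposal:

```yaml
# Hypothetical Zuul v3 check pipeline: the long-running multinode
# deployment job only starts once the quick pep8/unit jobs succeed.
- project:
    check:
      jobs:
        - openstack-tox-pep8
        - openstack-tox-py27
        - tripleo-ci-centos-7-containers-multinode:
            dependencies:
              - openstack-tox-pep8
              - openstack-tox-py27
```

The trade-off Jim notes still applies: this saves CPU time on bad
patches at the cost of later feedback for good ones.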

Thanks,
-Alex

> [0] https://review.openstack.org/#/c/568275/
> [1] https://review.openstack.org/#/c/568278/
> [2] https://review.openstack.org/#/c/568326/
> [3]
> https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html
> [4] http://tripleo.org/cistatus.html
>
> * ignore the column 1, it's obsolete, all CI jobs now using configs download
> AFAICT...
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Migration to Storyboard

2018-05-09 Thread Alex Schultz
On Wed, May 9, 2018 at 2:20 PM, Wesley Hayutin <whayu...@redhat.com> wrote:
>
>
> On Wed, May 9, 2018 at 3:25 PM Alex Schultz <aschu...@redhat.com> wrote:
>>
>> Hello tripleo folks,
>>
>> So we've been experimenting with migrating some squads over to
>> storyboard[0] but this seems to be causing more issues than perhaps
>> it's worth.  Since the upstream community would like to standardize on
>> Storyboard at some point, I would propose that we do a cut over of all
>> the tripleo bugs/blueprints from Launchpad to Storyboard.
>>
>> In the irc meeting this week[1], I asked that the tripleo-ci team make
>> sure the existing scripts that we use to monitor bugs for CI support
>> Storyboard.  I would consider this a prerequisite for the migration.
>> I am thinking it would be beneficial to get this done before or as
>> close to M2.
>>
>> Thoughts, concerns, etc?
>
>
> Just clarifying.  You would like to have the tooling updated by M2, which is
> fine I think.  However squads are not expected to change all their existing
> procedures by M2 correct?   I'm concerned about migrating our current kanban
> boards to storyboard by M2.
>

I'm talking about tooling (irc bot/monitoring) and launchpad migration
complete by m2. Any other boards can wait until squads want to move
over.

Thanks,
-Alex

> Thanks
>
>>
>>
>> Thanks,
>> -Alex
>>
>> [0] https://storyboard.openstack.org/#!/project_group/76
>> [1]
>> http://eavesdrop.openstack.org/meetings/tripleo/2018/tripleo.2018-05-08-14.00.log.html#l-42
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Migration to Storyboard

2018-05-09 Thread Alex Schultz
Hello tripleo folks,

So we've been experimenting with migrating some squads over to
storyboard[0] but this seems to be causing more issues than perhaps
it's worth.  Since the upstream community would like to standardize on
Storyboard at some point, I would propose that we do a cut over of all
the tripleo bugs/blueprints from Launchpad to Storyboard.

In the irc meeting this week[1], I asked that the tripleo-ci team make
sure the existing scripts that we use to monitor bugs for CI support
Storyboard.  I would consider this a prerequisite for the migration.
I am thinking it would be beneficial to get this done before or as
close to M2.

Thoughts, concerns, etc?

Thanks,
-Alex

[0] https://storyboard.openstack.org/#!/project_group/76
[1] 
http://eavesdrop.openstack.org/meetings/tripleo/2018/tripleo.2018-05-08-14.00.log.html#l-42

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] The Weekly Owl - 20th Edition

2018-05-08 Thread Alex Schultz
Welcome to the twentieth edition of a weekly update in TripleO world!
The goal is to provide a short reading (less than 5 minutes) to learn
what's new this week.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-May/130090.html

+-+
| General announcements |
+-+

+--> Further discussions about Storyboard migration will be coming to
the ML this week.
+--> We have 4 more weeks until milestone 2! Check out the schedule:
https://releases.openstack.org/rocky/schedule.html

+--+
| Continuous Integration |
+--+

+--> Ruck is myoung and Rover is sshnaidm. Please let them know any
new CI issue.
+--> Master promotion is 0 day, Queens is 1 day, Pike is 3 days and
Ocata is 2 days. Kudos folks!
+--> Upcoming DLRN changes coming that may impact CI, see
http://lists.openstack.org/pipermail/openstack-dev/2018-May/130195.html
+--> Still working on libvirt based multinode reproducer, see
https://goo.gl/DYCnkx
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

+-+
| Upgrades |
+-+

+--> Continued progress on ffwd upgrades as well as cleaning up
upgrade/update jobs.
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+---+
| Containers |
+---+

+--> Continued efforts to align instack-undercloud & containerized undercloud
+--> all-in-one work beginning to extract the deployment
framework/tooling from the containerized undercloud
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+--+
| config-download |
+--+

+--> Progress on OpenStack operations Ansible role:
https://github.com/samdoran/ansible-role-openstack-operations
+--> Working on Skydive transition to external tasks
+--> Working on improving performance when deploying Ceph with Ansible.
+--> client/api/workflow for "play deployment failures list",
equivalent to "stack failures list"
+--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status

+--+
| Integration |
+--+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+-+
| UI/CLI |
+-+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+---+
| Validations |
+---+

+--> Custom validations
+--> Fixing node health validations
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+---+
| Networking |
+---+

+--> Continued work on neutron sidecar containers
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+--+
| Workflows |
+--+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+---+
| Security |
+---+

+--> Patches for public TLS by default are up,
https://review.openstack.org/#/q/topic:public-tls-default+status:open
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

++
| Owl fact  |
++

Burrowing owls migrate to the Rocky Mountain Arsenal National Wildlife
Refuge (near Denver, CO) every summer and raise their young in
abandoned prairie dog burrows.
https://www.fws.gov/nwrs/threecolumn.aspx?id=2147510941

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Retirement of tripleo-incubator

2018-05-03 Thread Alex Schultz
We haven't used tripleo-incubator in some time and it is no longer
maintained. We are planning on officially retiring it ASAP[0].   We
had previously said we would do it for the pike py35 goals[1] but we
never got around to removing it.  Efforts have begun to officially
retire it.  Please let us know if there are any issues.

Thanks,
-Alex

[0] 
https://review.openstack.org/#/q/topic:bug/1768590+(status:open+OR+status:merged)
[1] 
http://git.openstack.org/cgit/openstack/governance/tree/goals/pike/python35.rst#n868

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] container-to-container-upgrades CI job and tripleo-common versions

2018-05-03 Thread Alex Schultz
On Thu, May 3, 2018 at 8:29 AM, John Fulton  wrote:
> We hit a bug [1] in CI job container-to-container-upgrades because a
> workflow that was needed only for Pike and Queens was removed [2] as
> clean up for the migration to external_deploy_tasks.
>
> As we need to support an n undercloud deploying an n-1 overcloud and
> then upgrading it to an n overcloud, the CI job deploys with Queens
> THT and master tripleo-common. I take this to be by design as per this
> support requirement.
>

I think we've always had to support this for mixed version installs.
We need to be able to manage n-1 with the latest undercloud bits. So
it does seem that tripleo-common needs to continue to be backwards
compatible for one release.  So let's restore the workflow and get an
upgrade job in place so we can detect these types of breakages.
Alternatively, perhaps we need an n-1 deployment on the latest
undercloud job.

Thanks,
-Alex

> An implication of this is that we need to keep tripleo-common
> backwards compatible for the n-1 release and thus we couldn't delete
> this workflow until Stein.
>
> An alternative is to require that tripleo-common be of the same
> version as tripleo-heat-templates.
>
> Recommendations?
>
>   John
>
> PS: for the sake of getting CI I think we should restore the workflow
> for now [3]
>
> [1] https://bugs.launchpad.net/tripleo/+bug/1768116
> [2] https://review.openstack.org/#/c/563047
> [3] https://review.openstack.org/#/c/565580
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [heat-templates] Deprecated environment files

2018-05-03 Thread Alex Schultz
On Thu, Apr 26, 2018 at 6:08 AM, Waleed Musa  wrote:
> Hi guys,
>
>
> I'm wondering what the plan is for having these environments/*.yaml
> and environments/services-baremetal/*.yaml files.
>
> It seems that these are deprecated files. Please advise here.
>

The services-baremetal files were added to allow an end user to
continue running a service on baremetal during the deprecation
process, when we switched over to containers by default. For new
services, I would recommend not creating the services-baremetal/*.yaml
file.  If you have to update an existing service, please also update
the baremetal equivalent, at least for this cycle. We can probably
start removing services-baremetal/* in Stein.
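For illustration, such a baremetal environment file typically just
remaps the service's resource_registry entry back to the puppet/
template instead of the docker/ one used by default. The service and
file names below are made up, not a real tripleo-heat-templates entry:

```yaml
# environments/services-baremetal/example-api.yaml (hypothetical):
# run the service via its puppet (baremetal) implementation rather
# than the containerized docker/services/ template.
resource_registry:
  OS::TripleO::Services::ExampleApi: ../../puppet/services/example-api.yaml
```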

Thanks,
-Alex

>
> Regards
>
> Waleed Mousa
>
> SW Engineer at Mellanox
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] validating overcloud config changes on a redeploy

2018-05-02 Thread Alex Schultz
On Fri, Apr 27, 2018 at 9:49 AM, Ade Lee  wrote:
> Hi,
>
> Recently I starting looking at how we implement password changes in an
> existing deployment, and found that there were issues.  This made me
> wonder whether we needed a test job to confirm that password changes
> (and other config changes) are in fact executed properly.
>
> As far as I understand it, the way to do password changes is to -
> 1) Create a yaml file containing the parameters to be changed and
>their new values
> 2) call openstack overcloud deploy and append -e new_params.yaml
>
> Note that the above steps can really describe the testing of setting
> any config changes (not just passwords).
>
> Of course, if we do change passwords, we'll want to validate that the
> config files have changed, the keystone/dbusers have been modified, the
> mistral plan has been updated, services are still running etc.
>
> After talking with many folks, it seems there is no clear consensus
> where code to do the above tasks should live.  Should it be in tripleo-
> upgrades, or in tripleo-validations or in a separate repo?
>
> Is there anyone already doing something similar?
>
> If we end up creating a role to do this, ideally it should be
> deployment tool agnostic - usable by both infrared or quickstart or
> others.
>
> Whats the best way to do this?
>

So in my mind, this falls under a testing framework validation where
we want to perform a set of $deployment_actions and ensure that
$specific_things have been completed. For the most part we don't have
anything like that in the upstream tripleo project for actions that
aren't covered by tempest tests.  Even tempest tests are only ensuring
that we configured the services so they work but not a state
transition from A to B.  Honestly I don't think tripleo-upgrades or
tripleo-validations is the appropriate place for this type of check.
tripleo-validations might make sense if we expected an end user to do
this after performing a specific action, but I don't think there are
enough of these types of actions for that to be warranted.  It's more
likely that we would want to come up with a deployment test suite that
could be run offline, where a scenario like 'change all the passwords'
would be executed and verified to have functioned as expected (all the
passwords were changed).  Something like this might work in a periodic
upstream job, but it's more like a full validation suite that would
most likely need to be run offline.
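To make the "verify the config files have changed" step concrete, a
check of that shape can be sketched as a small standalone script. The
file contents, section, and option names below are illustrative
assumptions for the demo, not the real overcloud layout:

```python
# Minimal sketch of a post-redeploy validation: confirm that a changed
# password actually landed in a service config file.  The paths and
# option names here are made up.
import configparser
import tempfile

def option_equals(path, section, option, expected):
    """Return True if the INI file at `path` has `option` set to `expected`."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return cfg.get(section, option, fallback=None) == expected

# Stand-in for e.g. a keystone.conf fetched from an overcloud node.
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("[database]\n"
            "connection = mysql+pymysql://keystone:NEWPASS@db/keystone\n")
    conf_path = f.name

print(option_equals(conf_path, "database", "connection",
                    "mysql+pymysql://keystone:NEWPASS@db/keystone"))  # True
```

A real validation would additionally need to fetch the file from each
overcloud node and check the live service (keystone auth, DB users),
which is why it fits a full offline suite better than a quick check.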

Thanks,
-Alex

> Thanks,
> Ade
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Third party module commits to TripleO/Newton branch

2018-05-02 Thread Alex Schultz
On Wed, May 2, 2018 at 10:23 AM, Shyam Biradar
 wrote:
> Hi,
>
> I am working on TrilioVault deployment integration with TripleO.
> This integration will contain changes to TripleO heat templates repo and
> tripleO puppet module as shown in attached document.
>
> We are targeting this integration for OpenStack Newton release first.
> I just wanted to know, if we are allowed to commit new changes which are not
> related to any core components to Newton branch of tripleo heat templates
> repo,
> openstack tripleo repo.
>

No, we're looking to shut down the upstream Newton repos in the near
future (like this month[0]).  We're only keeping them open at this
point for fast-forward upgrade types of issues. You would need to work
on master, backport as far as available, and carry the rest downstream.

Thanks,
-Alex

[0] 
http://eavesdrop.openstack.org/meetings/tripleo/2018/tripleo.2018-05-01-14.00.log.html#l-164

>
>
> Thanks & Regards,
> Shyam Biradar,
> Email: shyambiradarsgg...@gmail.com,
> Contact: +91 8600266938.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptls] final stages of python 3 transition

2018-04-30 Thread Alex Schultz
On Mon, Apr 30, 2018 at 3:16 PM, Ben Nemec  wrote:
> Resending from an address that is subscribed to the list.  Apologies to
> those of you who get this twice.
>
> On 04/30/2018 10:06 AM, Doug Hellmann wrote:
>>
>> It would be useful to have more input from PTLs on this issue, so I'm
>> CCing all of them to get their attention.
>>
>> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400:
>>>
>>> It's time to talk about the next steps in our migration from python
>>> 2 to python 3.
>>>
>>> Up to this point we have mostly focused on reaching a state where
>>> we support both versions of the language. We are not quite there
>>> with all projects, as you can see by reviewing the test coverage
>>> status information at
>>>
>>> https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects
>>>
>>> Still, we need to press on to the next phase of the migration, which
>>> I have been calling "Python 3 first". This is where we use python
>>> 3 as the default, for everything, and set up the exceptions we need
>>> for anything that still requires python 2.
>>>
>>> To reach that stage, we need to:
>>>
>>> 1. Change the documentation and release notes jobs to use python 3.
>>> (The Oslo team recently completed this, and found that we did
>>> need to make a few small code changes to get them to work.)
>>> 2. Change (or duplicate) all functional test jobs to run under
>>> python 3.
>>> 3. Change the packaging jobs to use python 3.
>>> 4. Update devstack to use 3 by default and require setting a flag to
>>> use 2. (This may trigger other job changes.)
>>>
>>> At that point, all of our deliverables will be produced using python
>>> 3, and we can be relatively confident that if we no longer had
>>> access to python 2 we could still continue operating. We could also
>>> start updating deployment tools to use either python 3 or 2, so
>>> that users could actually deploy using the python 3 versions of
>>> services.
>>>
>>> Somewhere in that time frame our third-party CI systems will need
>>> to ensure they have python 3 support as well.
>>>
>>> After the "Python 3 first" phase is completed we should release
>>> one series using the packages built with python 3. Perhaps Stein?
>>> Or is that too ambitious?
>>>
>>> Next, we will be ready to address the prerequisites for "Python 3
>>> only," which will allow us to drop Python 2 support.
>>>
>>> We need to wait to drop python 2 support as a community, rather
>>> than going one project at a time, to avoid doubling the work of
>>> downstream consumers such as distros and independent deployers. We
>>> don't want them to have to package all (or even a large number) of
>>> the dependencies of OpenStack twice because they have to install
>>> some services running under python 2 and others under 3. Ideally
>>> they would be able to upgrade all of the services on a node together
>>> as part of their transition to the new version, without ending up
>>> with a python 2 version of a dependency along side a python 3 version
>>> of the same package.
>>>
>>> The remaining items could be fixed earlier, but this is the point
>>> at which they would block us:
>>>
>>> 1. Fix oslo.service functional tests -- the Oslo team needs help
>>> maintaining this library. Alternatively, we could move all
>>> services to use cotyledon (https://pypi.org/project/cotyledon/).
>
>
> For everyone's awareness, we discussed this in the Oslo meeting today and
> our first step is to see how many, if any, services are actually relying on
> the oslo.service functionality that doesn't work in Python 3 today.  From
> there we will come up with a plan for how to move forward.
>
> https://bugs.launchpad.net/manila/+bug/1482633 is the original bug.
>
>>>
>>> 2. Finish the unit test and functional test ports so that all of
>>> our tests can run under python 3 (this implies that the services
>>> all run under python 3, so there is no more porting to do).
>
>
> And integration tests?  I know for the initial python 3 goal we said just
> unit and functional, but it seems to me that we can't claim full python 3
> compatibility until we can run our tempest jobs against python 3-based
> OpenStack.
>
>>>
>>> Finally, after we have *all* tests running on python 3, we can
>>> safely drop python 2.
>>>
>>> We have previously discussed the end of the T cycle as the point
>>> at which we would have all of those tests running, and if that holds
>>> true we could reasonably drop python 2 during the beginning of the
>>> U cycle, in late 2019 and before the 2020 cut-off point when upstream
>>> python 2 support will be dropped.
>>>
>>> I need some info from the deployment tool teams to understand whether
>>> they would be ready to take the plunge during T or U and start
>>> deploying only the python 3 version. Are there other upgrade issues
>>> that need to be addressed to support moving from 2 to 3? Something
>>> that might be part of the platform(s), rather 

Re: [openstack-dev] [Openstack-operators] The Forum Schedule is now live

2018-04-30 Thread Alex Schultz
On Mon, Apr 30, 2018 at 9:47 AM, Jimmy McArthur  wrote:
> Project Updates are in their own track:
> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223
>

TripleO is still missing?

Thanks,
-Alex

> As are SIG, BoF and Working Groups:
> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218
>
> Amy Marrich
> April 30, 2018 at 10:44 AM
> Emilien,
>
> I believe that the Project Updates are separate from the Forum? I know I saw
> some in the schedule before the Forum submittals were even closed. Maybe
> contact speaker support or Jimmy will answer here.
>
> Thanks,
>
> Amy (spotz)
>
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> Emilien Macchi
> April 30, 2018 at 10:33 AM
>
>
>> Hello all -
>>
>> Please take a look here for the posted Forum schedule:
>> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224
>> You should also see it update on your Summit App.
>
>
> Why TripleO doesn't have project update?
> Maybe we could combine it with TripleO - Project Onboarding if needed but it
> would be great to have it advertised as a project update!
>
> Thanks,
> --
> Emilien Macchi
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Jimmy McArthur
> April 27, 2018 at 11:04 AM
> Hello all -
>
> Please take a look here for the posted Forum schedule:
> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224
> You should also see it update on your Summit App.
>
> Thank you and see you in Vancouver!
> Jimmy
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposing Tobias Urdin to join Puppet OpenStack core

2018-04-27 Thread Alex Schultz
+1

On Fri, Apr 27, 2018 at 11:41 AM, Emilien Macchi  wrote:
> +1, thanks Tobias for your contributions!
>
> On Fri, Apr 27, 2018 at 8:21 AM, Iury Gregory  wrote:
>>
>> +1
>>
>> On Fri, Apr 27, 2018, 12:15 Mohammed Naser  wrote:
>>>
>>> Hi everyone,
>>>
>>> I'm proposing that we add Tobias Urdin to the core Puppet OpenStack
>>> team as they've been putting great reviews over the past few months
>>> and they have directly contributed in resolving all the Ubuntu
>>> deployment issues and helped us bring Ubuntu support back and make the
>>> jobs voting again.
>>>
>>> Thank you,
>>> Mohammed
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits

2018-04-23 Thread Alex Schultz
+1

On Mon, Apr 23, 2018 at 5:55 AM, James Slagle  wrote:
> On Thu, Apr 19, 2018 at 1:01 PM, Emilien Macchi  wrote:
>> Greetings,
>>
>> As you probably know mcornea on IRC, Marius Cornea has been contributing to
>> TripleO for a while, especially on the upgrade bits.
>> Part of the quality team, he's always testing real customer scenarios and
>> brings a lot of good feedback in his reviews, and quite often takes care of
>> fixing complex bugs when it comes to advanced upgrades scenarios.
>> He's very involved in tripleo-upgrade repository where he's already core,
>> but I think it's time to let him +2 on other tripleo repos for the patches
>> related to upgrades (we trust people's judgement for reviews).
>>
>> As usual, we'll vote!
>>
>> Thanks everyone for your feedback and thanks Marius for your hard work and
>> involvement in the project.
>
> +1
>
>
> --
> -- James Slagle
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Rocky Milestone 1 has passed

2018-04-21 Thread Alex Schultz
Hey everyone,

We released Rocky Milestone 1 this week[0].  I have gone through and
updated the blueprints that were still targeted to rocky-1 to move
them to rocky-2.  Please take some time to review the outstanding
blueprints to make sure that we will still be able to deliver them
during the Rocky release. If any need to get pushed, please let me
know. We would like to continue doing a soft feature freeze at
Milestone 2, so make sure you are paying attention to the schedule.

Thanks,
-Alex

[0] https://launchpad.net/tripleo/+milestone/rocky-1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] use of tags in launchpad bugs

2018-04-05 Thread Alex Schultz
On Thu, Apr 5, 2018 at 12:55 PM, Wesley Hayutin  wrote:
> FYI...
>
> This is news to me so thanks to Emilien for pointing it out [1].
> There are official tags for tripleo launchpad bugs.  Personally, I like what
> I've seen recently with some extra tags as they could be helpful in finding
> the history of particular issues.
> So hypothetically would it be "wrong" to create an official tag for each
> featureset config number upstream.  I ask because that is adding a lot of
> tags but also serves as a good test case for what is good/bad use of tags.
>

We list official tags over in the specs repo[0].   That being said as
we investigate switching over to storyboard, we'll probably want to
revisit tags as they will have to be used more to replace some of the
functionality we had with launchpad (e.g. milestones).  You could
always add the tags without being an official tag. I'm not sure I
would really want all the featuresets as tags.  I'd rather see us
actually figure out what component is actually failing than relying on
a featureset (and the Rosetta stone for decoding featuresets to
functionality[1]).


Thanks,
-Alex


[0] 
http://git.openstack.org/cgit/openstack/tripleo-specs/tree/specs/policy/bug-tagging.rst#n30
[1] 
https://git.openstack.org/cgit/openstack/tripleo-quickstart/tree/doc/source/feature-configuration.rst#n21
> Thanks
>
> [1] https://bugs.launchpad.net/tripleo/+manage-official-tags
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Prototyping dedicated roles with unique repositories for Ansible tasks in TripleO

2018-04-02 Thread Alex Schultz
On Thu, Mar 29, 2018 at 11:32 AM, David Moreau Simard
 wrote:
> Nice!
>
> I don't have a strong opinion
> about this but what I might recommend would be to chat with the
> openshift-ansible [1] and the kolla-ansible [2] folks.
>
> I'm happy to do the introductions if necessary !
>
> Their models, requirements or context might be different than ours but at
> the end of the day, it's a set of Ansible roles and playbooks to install
> something.
> It would be a good idea just to informally chat about the reasons why their
> things are set up the way they are, what are the pros, cons.. or their
> challenges.
>
> I'm not saying we should structure our things like theirs.
> What I'm trying to say is that they've surely learned a lot over the years
> these projects have existed and it's surely worthwhile to chat with them so
> we don't repeat some of the same mistakes.
>
> Generally just draw from their experience, learn from their conclusions and
> take that into account before committing to any particular model we'd like
> to have in TripleO ?

Yea it'd probably be a good idea to check with them on some of their
structure choices.  I think we do not necessarily want to use a
similar structure to those based on our experiences with oooq,
openstack-puppet-modules, etc.  I think this first iteration to get
some of the upgrade tasks out of the various */services/*.yaml will
help us build out a decent structure that might be reusable.  I did
notice that kolla-ansible has a main.yaml[0] that might be interesting
for us to consider when we start using the ansible roles directly
rather than importing the tasks themselves.

What I'd really like for us to work on is better cookiecutter/testing
structure for ansible roles themselves so we stop just merging ansible
bits that are only tested via full deployment tests (which we may not
even run).  As much as I hate rspec puppet tests, it is really nice
for testing the logic without having to do an actual deployment.
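For illustration only (tool choice, file layout, and the role name are all assumptions, not a settled plan), a molecule-style scenario skeleton for one of these extracted roles could look something like:

```yaml
# molecule/default/molecule.yml -- hypothetical scenario for a role such as
# tripleo-role-keystone; the docker driver and testinfra verifier here are
# just one option for running role logic without a full deployment
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: centos7
    image: centos:7
provisioner:
  name: ansible
verifier:
  name: testinfra
```

Something along those lines would give the role-level feedback loop described above: the task logic runs against a throwaway container on every change instead of only being exercised by full deployment jobs.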

Thanks,
-Alex

[0] 
https://git.openstack.org/cgit/openstack/kolla-ansible/tree/ansible/roles/keystone/tasks/main.yml

>
> [1]: https://github.com/openshift/openshift-ansible
> [2]: https://github.com/openstack/kolla-ansible
>
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]
>
> On Thu, Mar 29, 2018, 12:34 PM David Peacock,  wrote:
>>
>> Hi everyone,
>>
>> During the recent PTG in Dublin, it was decided that we'd prototype a way
>> forward with Ansible tasks in TripleO that adhere to Ansible best practises,
>> creating dedicated roles with unique git repositories and RPM packaging per
>> role.
>>
>> With a view to moving in this direction, a couple of us on the TripleO
>> team have begun developing tooling to facilitate this.  Initially we've
>> worked on a tool [0] to extract Ansible tasks lists from
>> tripleo-heat-templates and move them into new formally structured Ansible
>> roles.
>>
>> An example with the existing keystone docker service [1]:
>>
>> The upgrade_tasks block will become:
>>
>> ```
>> upgrade_tasks:
>>   - import_role:
>>   name: tripleo-role-keystone
>>   tasks_from: upgrade.yaml
>> ```
>>
>> The fast_forward_upgrade_tasks block will become:
>>
>> ```
>> fast_forward_upgrade_tasks:
>>   - import_role:
>>   name: tripleo-role-keystone
>>   tasks_from: fast_forward_upgrade.yaml
>> ```
>>
>> And this role [2] will be structured:
>>
>> ```
>> tripleo-role-keystone/
>> └── tasks
>> ├── fast_forward_upgrade.yaml
>> ├── main.yaml
>> └── upgrade.yaml
>> ```
>>
>> We'd love to hear any feedback from the community as we move towards this.
>>
>> Thank you,
>> David Peacock
>>
>> [0]
>> https://github.com/davidjpeacock/openstack-role-extract/blob/master/role-extractor-creator.py
>> [1]
>> https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/keystone.yaml
>> [2] https://github.com/davidjpeacock/tripleo-role-keystone
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Blueprints for Rocky

2018-04-02 Thread Alex Schultz
On Tue, Mar 13, 2018 at 7:58 AM, Alex Schultz <aschu...@redhat.com> wrote:
> Hey everyone,
>
> So we currently have 63 blueprints currently targeted for
> Rocky[0].  Please make sure that any blueprints you are interested in
> delivering have an assignee set and have been approved.  I would like
> to have the ones we plan on delivering for Rocky to be updated by
> April 3, 2018.  Any blueprints that have not been updated will be
> moved out to the next cycle after this date.
>

Reminder this is tomorrow. I'll be going through the blueprints and
moving them out this week.

> Thanks,
> -Alex
>
> [0] https://blueprints.launchpad.net/tripleo/rocky

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures] Tag of openstack/instack-undercloud failed

2018-03-26 Thread Alex Schultz
On Mon, Mar 26, 2018 at 2:33 PM, Doug Hellmann  wrote:
> Excerpts from zuul's message of 2018-03-26 18:11:49 +:
>> Build failed.
>>
>> - publish-openstack-releasenotes 
>> http://logs.openstack.org/94/94bb28ae46bd263314c9d846069ca913d225e625/tag/publish-openstack-releasenotes/9440894/
>>  : POST_FAILURE in 3m 29s
>>
>
> This release notes build failure is probably not a problem, but I
> don't recognize the cause of the error so I wanted to bring it up
> in case someone else did.
>
> Is the ".pike.html.AEKeun" a lock file of some sort? Or a temporary
> file created for some other purpose?
>

I think that's part of the rsync process.  From
https://rsync.samba.org/how-rsync-works.html

> The receiver will read from the sender data for each file identified by the 
> file index number. It will open the local file (called the basis) and will 
> create a temporary file.
>
>  The receiver will expect to read non-matched data and/or to match records 
> all in sequence for the final file contents. When non-matched data is read it 
> will be written to the temp-file. When a block match record is received the
>  receiver will seek to the block offset in the basis file and copy the block 
> to the temp-file. In this way the temp-file is built from beginning to end.
>
> The file's checksum is generated as the temp-file is built. At the end of the 
> file, this checksum is compared with the file checksum from the sender. If 
> the file checksums do not match the temp-file is deleted. If the file fails 
> once it will > be reprocessed in a second phase, and if it fails twice an 
> error is reported.
>
> After the temp-file has been completed, its ownership and permissions and 
> modification time are set. It is then renamed to replace the basis file.
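
The temp-file/checksum/rename dance described above can be sketched in Python -- this is purely illustrative of the pattern, not rsync's actual implementation (and the hidden `.pike.html.XXXXXX` name in the error below matches this kind of temp-file naming):

```python
import hashlib
import os
import tempfile

def atomic_write(path, data, expected_checksum=None):
    """Build new contents in a temp file next to the target, verify the
    checksum, then rename over the original -- the receiver pattern rsync
    describes: the basis file is only replaced once the temp file is done."""
    dirname = os.path.dirname(os.path.abspath(path))
    # e.g. target "pike.html" gets a temp file like ".pike.html.AEKeun"
    fd, tmp = tempfile.mkstemp(
        dir=dirname, prefix='.' + os.path.basename(path) + '.')
    try:
        with os.fdopen(fd, 'wb') as f:
            f.write(data)
        checksum = hashlib.md5(data).hexdigest()
        if expected_checksum is not None and checksum != expected_checksum:
            raise ValueError('checksum mismatch, temp file discarded')
        os.rename(tmp, path)  # atomic replacement of the basis file
    except Exception:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise
```

Which also explains the failure mode below: the rename step can only succeed if the AFS directory holding the temp file is still visible when it runs.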


Thanks,
-Alex

> Doug
>
> rsync: failed to set permissions on 
> "/afs/.openstack.org/docs/releasenotes/instack-undercloud/.pike.html.AEKeun": 
> No such file or directory (2)
> rsync: rename 
> "/afs/.openstack.org/docs/releasenotes/instack-undercloud/.pike.html.AEKeun" 
> -> "pike.html": No such file or directory (2)
> rsync error: some files/attrs were not transferred (see previous errors) 
> (code 23) at main.c(1183) [sender=3.1.1]
> Traceback (most recent call last):
>   File "/tmp/ansible_b5fr54k3/ansible_module_zuul_afs.py", line 115, in 
> 
> main()
>   File "/tmp/ansible_b5fr54k3/ansible_module_zuul_afs.py", line 110, in main
> output = afs_sync(p['source'], p['target'])
>   File "/tmp/ansible_b5fr54k3/ansible_module_zuul_afs.py", line 95, in 
> afs_sync
> output['output'] = subprocess.check_output(shell_cmd, shell=True)
>   File "/usr/lib/python3.5/subprocess.py", line 626, in check_output
> **kwargs).stdout
>   File "/usr/lib/python3.5/subprocess.py", line 708, in run
> output=stdout, stderr=stderr)
> subprocess.CalledProcessError: Command '/bin/bash -c "mkdir -p 
> /afs/.openstack.org/docs/releasenotes/instack-undercloud/ && /usr/bin/rsync 
> -rtp --safe-links --delete-after --out-format='<>%i %n%L' 
> --filter='merge /tmp/tmpcoywd87i' 
> /var/lib/zuul/builds/9440894ee812414bb2ae813da1bbdfdd/work/artifacts/ 
> /afs/.openstack.org/docs/releasenotes/instack-undercloud/"' returned non-zero 
> exit status 23
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Bug status

2018-03-20 Thread Alex Schultz
Hey everyone,

In today's IRC meeting, I brought up[0] that we've been having an
increase in the number of open bugs of the last few weeks. We're
currently at about 635 open bugs.  It would be beneficial for everyone
to take a look at the bugs that they are currently assigned to and
ensure they are up to date.

Additionally, there was chat about possibly introducing some process
around the triaging of bugs such that we should be assigning squad
tags to all the bugs so that there's some potential ownership.  I'm
not sure what that would look like, so if others thinks this might be
a good idea, feel free to comment.

Thanks,
-Alex

[0] 
http://eavesdrop.openstack.org/meetings/tripleo/2018/tripleo.2018-03-20-14.01.log.html#l-69

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Blueprints for Rocky

2018-03-13 Thread Alex Schultz
Hey everyone,

So we currently have 63 blueprints currently targeted for
Rocky[0].  Please make sure that any blueprints you are interested in
delivering have an assignee set and have been approved.  I would like
to have the ones we plan on delivering for Rocky to be updated by
April 3, 2018.  Any blueprints that have not been updated will be
moved out to the next cycle after this date.

Thanks,
-Alex

[0] https://blueprints.launchpad.net/tripleo/rocky

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][kolla][openstack-ansible][puppet][tripleo] requirements unfreeze and you, how you should handle it

2018-02-08 Thread Alex Schultz
On Thu, Feb 8, 2018 at 2:29 PM, Matthew Thode  wrote:
> As the title states, cycle trailing projects will need to change their
> requirements update behavior until they create stable/queens branches.
>
> When requirements unfreezes we will be doing rocky work, meaning that
> requirements updates to your projects (our master to your master) will
> be for rocky.
>
> I requests that all the projects tagged in the email's subject get a +1
> from a requirements core before merging until they branch stable/queens.
>

For clarity: before merging requirements updates.  So for TripleO
folks please do not merge any requirements updates unless the
requirements cores have +1'd or we've branched Queens.

> Once they branch stable/queens the projects are free to proceed as
> normal.
>
> If the projects tagged in the subject can ack me (email or irc) I'd
> appreciate it, would give us some peace of mind to unfreeze tomorrow.
>
> --
> Matthew Thode (prometheanfire)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Unbranched repositories and testing

2018-02-08 Thread Alex Schultz
On Tue, Oct 10, 2017 at 2:24 PM, Emilien Macchi <emil...@redhat.com> wrote:
> On Fri, Oct 6, 2017 at 5:09 AM, Jiří Stránský <ji...@redhat.com> wrote:
>> On 5.10.2017 22:40, Alex Schultz wrote:
>>>
>>> Hey folks,
>>>
>>> So I wandered across the policy spec[0] for how we should be handling
>>> unbranched repository reviews and I would like to start a broader
>>> discussion around this topic.  We've seen it several times over the
>>> recent history where a change in oooqe or tripleo-ci ends up affecting
>>> either a stable branch or an additional set of jobs that were not run
>>> on the change.  I think it's unrealistic to run every possible job
>>> combination on every submission and it's also a giant waste of CI
>>> resources.  I also don't necessarily agree that we should be using
>>> depends-on to prove things are fine for a given patch for the same
>>> reasons. That being said, we do need to minimize our risk for patches
>>> to these repositories.
>>>
>>> At the PTG retrospective I mentioned component design structure[1] as
>>> something we need to be more aware of. I think this particular topic
>>> is one of those types of things where we could benefit from evaluating
>>> the structure and policy around these unbranched repositories to see
>>> if we can improve it.  Is there a particular reason why we continue to
>>> try and support deployment of (at least) 3 or 4 different versions
>>> within a single repository?  Are we adding new features that really
>>> shouldn't be consumed by these older versions such that perhaps it
>>> makes sense to actually create stable branches?  Perhaps there are
>>> some other ideas that might work?
>>
>>
>> Other folks probably have a better view of the full context here, but i'll
>> chime in with my 2 cents anyway..
>>
>> I think using stable branches for tripleo-quickstart-extras could be worth
>> it. The content there is quite tightly coupled with the expected TripleO
>> end-user workflows, which tend to evolve considerably between releases.
>> Branching extras might be a good way to "match the reality" in that sense,
>> and stop worrying about breaking older workflows. (Just recently it came up
>> that the upgrade workflow in O is slightly updated to make it work in P, and
>> will change quite a bit for Q. Minor updates also changed between O and P.)
>>
>> I'd say that tripleo-quickstart is a different story though. It seems fairly
>> release-agnostic in its focus. We may want to keep it unbranched (?). That
>> probably applies even more for tripleo-ci, where ability to make changes
>> which affect how TripleO does CIing in general, across releases, is IMO a
>> significant feature.
>>
>> Maybe branching quickstart-extras might require some code reshuffling
>> between what belongs there and what belongs into quickstart itself.
>
> I agree a lot with Jirka and I think branching oooq-extras would be a
> good first start to see how it goes.
> If we find it helpful and working correctly, we could go the next
> steps and see if there is any other repo that could be branched
> (tripleo-ci or oooq) but I guess for now the best candidate is
> oooq-extras.
>

I'm resurrecting this thread as we seemed to have done it again[0]
with a change to oooq-extras master breaking stable/pike.  So I would
propose that we start investigating branching oooq-extras.  Does
anyone see any blocking issues with starting to branch this
repository?

Thanks,
-Alex

[0] https://bugs.launchpad.net/tripleo/+bug/1748315


>> (Just my 2 cents, i'm likely not among the most important stakeholders in
>> this...)
>>
>> Jirka
>>
>>
>>>
>>> Thanks,
>>> -Alex
>>>
>>> [0] https://review.openstack.org/#/c/478488/
>>> [1] http://people.redhat.com/aschultz/denver-ptg/tripleo-ptg-retro.jpg
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][all] New Zuul Depends-On syntax

2018-02-05 Thread Alex Schultz
On Thu, Feb 1, 2018 at 11:55 AM, James E. Blair  wrote:
> Zane Bitter  writes:
>
>> Yeah, it's definitely nice to have that flexibility. e.g. here is a
>> patch that wouldn't merge for 3 months because the thing it was
>> dependent on also got proposed as a backport:
>>
>> https://review.openstack.org/#/c/514761/1
>>
>> From an OpenStack perspective, it would be nice if a Gerrit ID implied
>> a change from the same Gerrit instance as the current repo and the
>> same branch as the current patch if it exists (otherwise any branch),
>> and we could optionally use a URL instead to select a particular
>> change.
>
> Yeah, that's reasonable, and it is similar to things Zuul does in other
> areas, but I think one of the thing we want to do with Depends-On is
> consider that Zuul isn't the only audience.  It's there just as much for
> the reviewers, and other folks.  So when it comes to Gerrit change ids,
> I feel we had to constrain it to Gerrit's own behavior.  When you click
> on one of those in Gerrit, it shows you all of the changes across all of
> the repos and branches with that change-id.  So that result list is what
> Zuul should work with.  Otherwise there's a discontinuity between what a
> user sees when they click the hyperlink under the change-id and what
> Zuul does.
>
> Similarly, in the new system, you click the URL and you see what Zuul is
> going to use.
>
> And that leads into the reason we want to drop the old syntax: to make
> it seamless for a GitHub user to know how to Depends-On a Gerrit change,
> and vice versa, with neither requiring domain-specific knowledge about
> the system.
>

While I can appreciate that, having to manage URLs for backports in
commit messages will lead to missing patches and other PEBAC related
problems. Perhaps rather than throwing out this functionality we can
push for improvements in the gerrit interaction itself?  I'm really -1
on removing the change-id syntax just for this reasoning. The UX of
having to manage complex depends-on urls for things like backports
makes switching to URLs a non-starter unless I have a bunch of
external system deps (and I generally don't).
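
For reference, the two footer styles under discussion look like this (the change-id value here is made up; the URL is the review cited earlier in the thread):

```
# Gerrit change-id style -- one footer line covers the change on every
# branch that shares the Change-Id, backports included:
Depends-On: I6c7e2d8f1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d

# URL style -- each branch's change needs its own explicit line:
Depends-On: https://review.openstack.org/514761
```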

Thanks,
-Alex

> -Jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Rocky PTL candidacy

2018-02-05 Thread Alex Schultz
I would like to nominate myself for the TripleO PTL role for the Rocky cycle.

As PTL of TripleO for the Queens cycle, the focus was on improving containerized
services, improving the deployment process and CI, and improving
visibility of the status of the project. I personally believe over the last
cycle we've made great strides on all these fronts.  For Rocky, I would like
to continue to focus on:

* Reducing duplication and tech debt
  When we switched over to containerization, we've had to implement some items
  in multiple places to support backwards compatibility. I believe it's time
  to spend some effort to reduce duplication of code and processes and focus
  on simplifying actions for the end user.  An example of this will be efforts
  to align the undercloud and overcloud deployment processes.

* Simplifying the deployment process
  Additionally with the containerization switch, we've added new requirements
  for actions that must be performed by the end user to deploy OpenStack.
  I believe we should spend time looking at what actions we can remove or reduce
  by automating them as part of the deployment process.  An example of this
  will be efforts to enable autodiscovery for the nodes on the undercloud
  as well as switching to the config-download by default.

* Continued efforts around CI
  We've made great strides in stabilizing the CI as well as implementing Zuul v3.
  We need to continue to move our CI into fully native Zuul v3 actions and
  focus on developers' ability to reproduce CI outside of the upstream.

Thanks,
Alex Schultz
irc: mwhahaha

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][ptl] TripleO PTL unavailable

2018-01-14 Thread Alex Schultz
Due to a loss in my family, I will not be around for the next few
weeks. If you have any TripleO issues, please reach out to Emilien
Macchi (emil...@redhat.com) or Steven Hardy (sha...@redhat.com).

Thanks,
-Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI promotion blockers

2018-01-02 Thread Alex Schultz
On Tue, Jan 2, 2018 at 9:08 AM, Julie Pichon  wrote:
> Hi!
>
> On 27 December 2017 at 16:48, Emilien Macchi  wrote:
>> - Keystone removed _member_ role management, so we stopped using it
>> (only Member is enough): https://review.openstack.org/#/c/529849/
>
> There's been so many issues with the default member role and Horizon
> over the years, that one got my attention. I can see that
> puppet-horizon still expects '_member_' for role management [1].
> However trying to understand the Keystone patch linked to in the
> commit, it looks like there's total freedom in which role name to use
> so we can't just change the default in puppet-horizon to use 'Member'
> as other consumers may expect and settle on '_member_' in their
> environment. (Right?)
>
> In this case, the proper way to fix this for TripleO deployments may
> be to make the change in instack-undercloud (I presume in [2]) so that
> the default role is explicitly set to 'Member' for us? Does that sound
> like the correct approach to get to a working Horizon?
>

We probably should at least change _member_ to Member in
puppet-horizon. That fixes both projects for the default case.
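
In the meantime, a deployment can presumably override the default through the
class parameter visible at the init.pp link below (sketch only -- parameter
name taken from that link, other required horizon parameters omitted):

```puppet
# Illustrative override, assuming puppet-horizon exposes the default
# keystone role as a class parameter on the horizon class:
class { 'horizon':
  keystone_default_role => 'Member',
  # ... other required parameters (secret_key, etc.) omitted ...
}
```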

Thanks,
-Alex

> Julie
>
> [1] 
> https://github.com/openstack/puppet-horizon/blob/master/manifests/init.pp#L458
> [2] 
> https://github.com/openstack/instack-undercloud/blob/master/elements/puppet-stack-config/puppet-stack-config.yaml.template#L622
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-21 Thread Alex Schultz
> Just a note, the queens repo is not currently synced in the infra so
> the queens repo patch is failing on Ubuntu jobs. I've proposed adding
> queens to the infra configuration to resolve this:
> https://review.openstack.org/529670
>

As a follow up, the mirrors have landed and two of the four scenarios
now pass.  Scenario001 is failing on ceilometer-api which was removed
so I have a patch[0] to remove it. Scenario004 is having issues with
neutron and the db looks to be very unhappy[1].

Thanks,
-Alex

[0] https://review.openstack.org/529787
[1] 
http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_22_58_37_338

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-21 Thread Alex Schultz
On Thu, Dec 21, 2017 at 10:40 AM, Alex Schultz <aschu...@redhat.com> wrote:
> Currently they are all globally failing in master (we are using pike
> still[0] which is probably the problem) in the tempest run[1] due to:
> AttributeError: 'module' object has no attribute 'requires_ext'
>
> I've submit a patch[2] to switch UCA to queens. If history is any
> indication, it will probably end up with a bunch of failing tests that
> will need to be looked at. Feel free to follow along/help with the
> switch.
>

Just a note, the queens repo is not currently synced in the infra so
the queens repo patch is failing on Ubuntu jobs. I've proposed adding
queens to the infra configuration to resolve this:
https://review.openstack.org/529670

> Thanks,
> -Alex
>
> [0] 
> https://github.com/openstack/puppet-openstack-integration/blob/master/manifests/repos.pp#L6
> [1] 
> http://logs.openstack.org/62/529562/3/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/671f88e/job-output.txt.gz#_2017-12-21_14_54_49_779190
> [2] https://review.openstack.org/#/c/529657/
>
> On Thu, Dec 21, 2017 at 12:58 AM, Tobias Urdin
> <tobias.ur...@crystone.com> wrote:
>> Thanks for letting us know!
>>
>> I can push for time on this if we can get a list.
>>
>>
>> Best regards
>>
>> Tobias
>>
>>
>> On 12/21/2017 08:04 AM, Andrew Woodward wrote:
>>
>> Some pointers for perusal as to the observed problems would be helpful,
>> Thanks!
>>
>> On Wed, Dec 20, 2017 at 11:09 AM Chuck Short <zul...@gmail.com> wrote:
>>>
>>> Hi Mohammed,
>>>
>>> I might be able to help where can I find this info?
>>>
>>> Thanks
>>> chuck
>>>
>>> On Wed, Dec 20, 2017 at 12:03 PM, Mohammed Naser <mna...@vexxhost.com>
>>> wrote:
>>>>
>>>> Hi everyone,
>>>>
>>>> I'll get right into the point.
>>>>
>>>> At the moment, the Puppet OpenStack modules don't have much
>>>> contributors which can help maintain the Ubuntu support.  We deploy on
>>>> CentOS (so we try to get all the fixes in that we can) and there is a
>>>> lot of activity from the TripleO team as well which does their
>>>> deployments on CentOS which means that the CentOS support is very
>>>> reliable and CI is always sought after.
>>>>
>>>> However, starting a while back, we started seeing occasional failures
>>>> with Ubuntu deploys which lead us set the job to non-voting.  At the
>>>> moment, the Puppet integration jobs for Ubuntu are always failing
>>>> because of some Tempest issue.  This means that with every Puppet
>>>> change, we're wasting ~80 minutes of CI run time for a job that will
>>>> always fail.
>>>>
>>>> We've had a lot of support from the packaging team at RDO (which are
>>>> used in Puppet deployments) and they run our integration before
>>>> promoting packages which makes it helpful in finding issues together.
>>>> However, we do not have that with Ubuntu, nor has there been anyone
>>>> taking the initiative to look into and investigate those issues.
>>>>
>>>> I understand that there are users out there who use Ubuntu with Puppet
>>>> OpenStack modules.  We need your help to come and try and clear those
>>>> issues out. We'd be more than happy to assist and point you in
>>>> the right direction to fix those issues.
>>>>
>>>> Unfortunately, if we don't have any folks stepping up to resolve
>>>> this, we'll be forced to drop all CI for Ubuntu and make a note to
>>>> users that Ubuntu is not fully tested and hope that as users run into
>>>> issues, they can contribute fixes back (or that someone can work on
>>>> getting Ubuntu gating working again).
>>>>
>>>> Thanks for reading through this, I am quite sad that we'd have to drop
>>>> support for such a major operating system, but there's only so much we
>>>> can do with a much smaller team.
>>>>
>>>> Thank you,
>>>> Mohammed
>>>>
>>>>
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> --
>> Andrew Woodward
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-21 Thread Alex Schultz
Currently they are all failing globally in master (we are still using
pike[0], which is probably the problem); the tempest run[1] fails with:
AttributeError: 'module' object has no attribute 'requires_ext'

I've submitted a patch[2] to switch UCA to queens. If history is any
indication, it will probably end up with a bunch of failing tests that
will need to be looked at. Feel free to follow along/help with the
switch.
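
Since the UCA pocket a node tracks is the crux here, a quick sanity
check can confirm it. This is only a sketch: the apt source line below
is invented, and on a real node you'd grep it out of
/etc/apt/sources.list.d/ instead of hard-coding a string.

```shell
# Sketch only: pull the Ubuntu Cloud Archive pocket (pike, queens, ...)
# out of a cloud-archive apt source line. The line is an invented
# example, not taken from a real node.
line='deb http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/pike main'
printf '%s\n' "$line" | sed -n 's#.*xenial-updates/\([a-z]*\).*#\1#p'
# prints: pike
```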

Thanks,
-Alex

[0] 
https://github.com/openstack/puppet-openstack-integration/blob/master/manifests/repos.pp#L6
[1] 
http://logs.openstack.org/62/529562/3/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/671f88e/job-output.txt.gz#_2017-12-21_14_54_49_779190
[2] https://review.openstack.org/#/c/529657/

On Thu, Dec 21, 2017 at 12:58 AM, Tobias Urdin
 wrote:
> Thanks for letting us know!
>
> I can push for time on this if we can get a list.
>
>
> Best regards
>
> Tobias
>
>
> On 12/21/2017 08:04 AM, Andrew Woodward wrote:
>
> Some pointers for perusal as to the observed problems would be helpful,
> Thanks!
>
> On Wed, Dec 20, 2017 at 11:09 AM Chuck Short  wrote:
>>
>> Hi Mohammed,
>>
>> I might be able to help; where can I find this info?
>>
>> Thanks
>> chuck
>>
>> On Wed, Dec 20, 2017 at 12:03 PM, Mohammed Naser 
>> wrote:
>>>
>>> Hi everyone,
>>>
>>> I'll get right into the point.
>>>
>>> At the moment, the Puppet OpenStack modules don't have many
>>> contributors who can help maintain the Ubuntu support.  We deploy on
>>> CentOS (so we try to get all the fixes in that we can) and there is a
>>> lot of activity from the TripleO team as well which does their
>>> deployments on CentOS which means that the CentOS support is very
>>> reliable and CI is always sought after.
>>>
>>> However, starting a while back, we began seeing occasional failures
>>> with Ubuntu deploys, which led us to set the job to non-voting.  At the
>>> moment, the Puppet integration jobs for Ubuntu are always failing
>>> because of some Tempest issue.  This means that with every Puppet
>>> change, we're wasting ~80 minutes of CI run time for a job that will
>>> always fail.
>>>
>>> We've had a lot of support from the packaging team at RDO (whose
>>> packages are used in Puppet deployments); they run our integration
>>> before promoting packages, which helps us find issues together.
>>> However, we do not have that with Ubuntu, nor has there been anyone
>>> taking the initiative to look into and investigate those issues.
>>>
>>> I understand that there are users out there who use Ubuntu with Puppet
>>> OpenStack modules.  We need your help to come and try and clear those
>>> issues out. We'd be more than happy to assist and point you in
>>> the right direction to fix those issues.
>>>
>>> Unfortunately, if we don't have any folks stepping up to resolve
>>> this, we'll be forced to drop all CI for Ubuntu and make a note to
>>> users that Ubuntu is not fully tested and hope that as users run into
>>> issues, they can contribute fixes back (or that someone can work on
>>> getting Ubuntu gating working again).
>>>
>>> Thanks for reading through this, I am quite sad that we'd have to drop
>>> support for such a major operating system, but there's only so much we
>>> can do with a much smaller team.
>>>
>>> Thank you,
>>> Mohammed
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Andrew Woodward
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tis the season...for a cloud reboot

2017-12-19 Thread Alex Schultz
On Tue, Dec 19, 2017 at 9:53 AM, Ben Nemec  wrote:
> The reboot is done (mostly...see below).
>
> On 12/18/2017 05:11 PM, Joe Talerico wrote:
>>
>> Ben - Can you provide some links to the ovs port exhaustion issue for
>> some background?
>
>
> I don't know if we ever had a bug opened, but there's some discussion of it
> in
> http://lists.openstack.org/pipermail/openstack-dev/2016-December/109182.html
> I've also copied Derek since I believe he was the one who found it
> originally.
>
> The gist is that after about 3 months of tripleo-ci running in this cloud we
> start to hit errors creating instances because of problems creating OVS
> ports on the compute nodes.  Sometimes we see a huge number of ports in
> general, other times we see a lot of ports that look like this:
>
> Port "qvod2cade14-7c"
> tag: 4095
> Interface "qvod2cade14-7c"
>
> Notably they all have a tag of 4095, which seems suspicious to me.  I don't
> know whether it's actually an issue though.
>
> I've had some offline discussions about getting someone on this cloud to
> debug the problem.  Originally we decided not to pursue it since it's not
> hard to work around and we didn't want to disrupt the environment by trying
> to move to later OpenStack code (we're still back on Mitaka), but it was
> pointed out to me this time around that from a downstream perspective we
> have users on older code as well and it may be worth debugging to make sure
> they don't hit similar problems.
>
> To that end, I've left one compute node un-rebooted for debugging purposes.
> The downstream discussion is ongoing, but I'll update here if we find
> anything.
>

I just so happened to wander across the bug from last time,
https://bugs.launchpad.net/tripleo/+bug/1719334
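
For anyone poking at the node that was left un-rebooted, a rough way to
spot the symptom is counting ports stuck on tag 4095 (the dead VLAN tag
the neutron OVS agent sets on unbound ports). This sketch runs against
sample `ovs-vsctl show` output rather than a live switch:

```shell
# Sketch: count ports stuck on tag 4095 in `ovs-vsctl show`-style output.
# The sample text below stands in for real output from a compute node.
sample='    Port "qvod2cade14-7c"
        tag: 4095
        Interface "qvod2cade14-7c"
    Port "qvo11aa22bb-cc"
        tag: 12
        Interface "qvo11aa22bb-cc"'
printf '%s\n' "$sample" | grep -c 'tag: 4095'
# prints: 1
```

On a live node, something like `ovs-vsctl --columns=name find Port
tag=4095` should list the stuck ports directly (untested here).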

>
>>
>> Thanks,
>> Joe
>>
>> On Mon, Dec 18, 2017 at 10:43 AM, Ben Nemec 
>> wrote:
>>>
>>> Hi,
>>>
>>> It's that magical time again.  You know the one, when we reboot rh1 to
>>> avoid
>>> OVS port exhaustion. :-)
>>>
>>> If all goes well you won't even notice that this is happening, but there
>>> is
>>> the possibility that a few jobs will fail while the te-broker host is
>>> rebooted so I wanted to let everyone know.  If you notice anything else
>>> hosted in rh1 is down (tripleo.org, zuul-status, etc.) let me know.  I
>>> have
>>> been known to forget to restart services after the reboot.
>>>
>>> I'll send a followup when I'm done.
>>>
>>> -Ben
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Planning for job execution outside the gate with Zuul v3

2017-12-19 Thread Alex Schultz
On Mon, Nov 20, 2017 at 3:31 PM, David Moreau Simard  wrote:
> Hi,
>
> As the migration of review.rdoproject.org to Zuul v3 draws closer, I'd like
> to open up the discussion around how we want to approach an eventual
> migration to Zuul v3 outside the gate.
> I'd like to take this opportunity to allow ourselves to think outside the
> box, think about how we would like to shape the CI of TripleO from upstream
> to the product and then iterate to reach that goal.
>
> The reason why I mention "outside the gate" is because one of the features
> of Zuul v3 is to dynamically construct its configuration by including
> different repositories.
> For example, the Zuul v3 from review.rdoproject.org can selectively include
> parts of git.openstack.org/openstack-infra/tripleo-ci [1] and it will load
> the configuration found there for jobs, nodesets, projects, etc.
>
> This opens a great deal of opportunities for sharing content or centralizing
> the different playbooks, roles and job parameters in one single repository
> rather than spread across different repositories across the production
> chain.
> If we do things right, this could give us the ability to run the same jobs
> (which can be customized with parameters depending on the environment,
> release, scenario, etc.) from the upstream gate down to
> review.rdoproject.org and the later productization steps.
>
> There's pros and cons to the idea, but this is just an example of what we
> can do with Zuul v3.
>
> Another example of an interesting thought from Sagi is to boot virtual
> machines directly with pre-built images instead of installing the
> undercloud/overcloud every time.
> Something else to think about is how can we leverage all the Ansible things
> from TripleO Quickstart in Zuul v3 natively.
>
> There's of course constraints about what we can and can't do in the upstream
> gate... but let's avoid prematurely blocking ourselves and try to think
> about what we want to do ideally and figure out if, and how, we can do it.
> Whether it's about the things that we would like to do, can't do, or the
> things that don't work, I'm sure the feedback and outcome of this could
> prove useful to improve Zuul.
>
> How would everyone like to proceed? Should we start an etherpad? Do some
> "design session" meetings?
> I'm willing to help get the ball rolling and spearhead the effort but this
> is a community effort :)
>

So we had a meeting today around this topic and we chatted about two
distinct efforts on this front.  The first one is that we need to
figure out how/where to migrate the review.rdoproject jobs.

Some notes can be found at
https://etherpad.openstack.org/p/rdosf_zuulv3_planning

It was agreed that we should use openstack-infra/tripleo-ci for
the job configuration for review.rdoproject.org, as this is where we keep
the current upstream OpenStack Zuul v3 job definitions for tripleo.
The action items for this migration would be:

1) Compile a list of the jobs in review.rdo
  
https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/zuul/upstream.yaml
  
https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/jobs/tripleo-upstream.yml
2) Compare this list of jobs to already defined list of jobs in
openstack-infra/tripleoci
  https://github.com/openstack-infra/tripleo-ci/tree/master/zuul.d
3) Determine the ability to reuse existing jobs and convert any
missing jobs as necessary
4) Define new missing jobs in tripleo-ci
5) Import the project/jobs into a zuul v3 for review.rdoproject
6) Test
7) Switch over
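
As a trivial sketch of where the comparison in step 2 could start, the
defined job names can be pulled out of zuul.d-style YAML and then diffed
between the two repos. The file and job names below are invented, not
the real tripleo-ci definitions:

```shell
# Sketch: list the job names defined in a zuul.d-style YAML file so the
# upstream and review.rdo job lists can be compared with diff/comm.
cat > /tmp/sample-zuul-jobs.yaml <<'EOF'
- job:
    name: tripleo-ci-centos-7-undercloud-oooq
    parent: tripleo-ci-base
- job:
    name: tripleo-ci-centos-7-scenario001-multinode-oooq
    parent: tripleo-ci-base
EOF
grep 'name:' /tmp/sample-zuul-jobs.yaml | awk '{print $2}' | sort -u
# prints the two job names, sorted
```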


The other future actions that need to be discussed, around being able
to use Zuul v3 natively, require investigating how Zuul should be
executing code from quickstart.  It was mentioned that there might
need to be improvements in Zuul depending on what the execution of
quickstart needs to look like (multiple playbooks, where do the
variables come from, etc). It was also mentioned that we need to
understand/document the expectations we have around what the
invocation of quickstart actually means to both a developer and CI,
and that we shouldn't necessarily adapt it primarily for the CI use
case.  Since quickstart is essentially executing ansible, should we be
exposing that, or should the v3 jobs run the exact same interaction
as a developer would?  This sounded like a longer discussion, outside
of the scope of getting review.rdoproject switched over to leveraging
Zuul v3.


Thanks,
-Alex



> Thanks !
>
> [1]: http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/zuul.d
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Canceling weekly meetings for Dec 26th and Jan 2nd

2017-12-19 Thread Alex Schultz
Hey everyone,

Due to likely low attendance, we'll be canceling the next two weekly
meetings on Dec 26th and Jan 2nd. We'll resume weekly meetings back on
Jan 9th.  Happy holidays and stuff.

Thanks,
-Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-15 Thread Alex Schultz
On Thu, Dec 14, 2017 at 5:01 PM, Tony Breeds <t...@bakeyournoodle.com> wrote:
> On Wed, Dec 13, 2017 at 03:01:41PM -0700, Alex Schultz wrote:
>> I assume since some of this work was sort of done earlier outside of
>> tripleo and does not affect the default installation path that most
>> folks will consume, it shouldn't impact general testing or
>> increase regressions. My general requirement for anyone who needed an
>> FFE for functionality that isn't essential is that it's off by
>> default, has minimal impact on the existing functionality, and we have
>> a rough estimate on feature landing.  Do you have an idea when you
>> expect to land this functionality? Additionally the patches seem to be
>> primarily around the ironic integration, so have those been sorted out?
>
> Sadly this is going to be more impactful on x86 than anyone would like,
> and I apologise for not raising these issues before now.
>
> There are 3 main aspects:
> 1. Ironic integration/provisioning setup.
>1.1 Teaching ironic inspector how to deal with ppc64le memory
>detection.  There are a couple of approaches there but they don't
>directly impact tripleo
>    1.2 I think there will be some work with puppet-ironic to set up the
>    introspection dnsmasq in a way that's compatible with multi-arch.
>    Right now this is the introduction of a new tag (based on options
>    in the DHCP request) and then sending different responses in the
>    presence/absence of that.  Very much akin to the ipxe stuff there
>    today.
>    1.3 Helping tripleo understand that there is now more than one
>    deploy/overcloud image and correctly using that.  These are mostly
>    covered by the review Mark published, but there are the backwards
>    compat/corner cases to deal with.
>    1.4 Right now ppc64le has very specific requirements with respect to
>    the boot partition layout. Last time I checked these weren't
>    handled by default in ironic.  The simple workaround here is to
>    make the overcloud image on ppc64le a whole disk rather than a
>    single partition, and I think given the scope of everything else
>    that's the most likely outcome for queens.
>
> 2. Containers.
>    Here we run into several issues, not least of which is my general
>    lack of understanding of containers, but the challenges as I
>    understand them are:
>    2.1 Having a venue to build/publish/test ppc64le container builds.
>    This in many ways is tied to the CI issue below, but all of the
>    potential solutions require some container image for ppc64le to
>    be available to validate that adding them doesn't impact x86_64.
>    2.2 As I understand it the right way to do multi-arch containers is
>    with an image manifest or manifest list images[1].  There are so
>    many open questions here.
>    2.2.1 If the container registry supports manifest lists, when we
>          pull them onto the undercloud can we get *all*
>          layers/objects - or will we just get the one that matches
>          the host CPU?
>    2.2.2 If the container registry doesn't support manifest list
>          images, can we use something like manifest-tool[2] to pull
>          "nova" from multiple registries or orgs on the same
>          registry and combine them into a single manifest image on
>          the undercloud?
>2.2.3 Do we give up entirely on manifest images and just have
>  multiple images / tags on the undercloud for example:
> nova:latest
> nova:x86_64_latest
> nova:ppc64le_64_latest
>  and have the deployed node pull the $(arch)_latest tag
>  first and if $(arch) == x86_64 pull the :latest tag if the
>  first pull failed?
>2.3 All the things I can't describe/know about 'cause I haven't
>gotten there yet.
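
On 2.2.3 above, the fallback pull could be roughly the following. This
is only a control-flow sketch: `docker pull` is replaced with a stub so
it runs anywhere, the image name is invented, and the arch is
hard-coded where a real script would use `$(uname -m)`.

```shell
# Sketch of the per-arch tag fallback from 2.2.3. pull() is a stub for
# `docker pull` that pretends only the plain :latest tag exists.
pull() {
  case "$1" in
    *:latest) echo "pulled $1"; return 0 ;;
    *)        return 1 ;;
  esac
}
arch=x86_64   # hard-coded for the sketch; would be $(uname -m)
pull "nova:${arch}_latest" || { [ "$arch" = "x86_64" ] && pull "nova:latest"; }
# prints: pulled nova:latest
```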
> 3. CI
>    There isn't any ppc64le CI for tripleo and frankly there won't be in
>    the foreseeable future.  Given the CI that's in place on x86 we can
>    confidently assert that we won't break x86, but the code paths we add
>    for power will largely be untested (beyond unit tests) and any/all
>    issues will have to be caught by downstream teams.
>
> So as you can see the aim is to have minimal impact on x86_64 and
> default to the existing behaviour in the absence of anything
> specifically requesting multi-arch support.  but minimal *may* be > none
> :(
>
> As to code ETAs, realistically all of the ironic related code will be
> public by m3 but probably not merged, and the containers stuff is
> somewhat dependent on that work / directio

Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-14 Thread Alex Schultz
On Thu, Dec 14, 2017 at 12:38 PM, Mark Hamzy <ha...@us.ibm.com> wrote:
> Alex Schultz <aschu...@redhat.com> wrote on 12/14/2017 09:24:54 AM:
>> On Wed, Dec 13, 2017 at 6:36 PM, Mark Hamzy <ha...@us.ibm.com> wrote:
>> ... As I said previously, please post the
>> patches ASAP so we can get eyes on these changes.  Since this does
>> have an impact on the existing functionality this should have been
>> merged earlier in the cycle so we could work out any user facing
>> issues.
>
> Sorry about that.
> https://review.openstack.org/#/c/528000/
> https://review.openstack.org/#/c/528060/
>

I reviewed it a bit and I think you can put in the backwards
compatibility in the few spots I listed. The problem is really that a
Queens undercloud (tripleoclient/tripleo-common) needs to be able to
manage a Pike overcloud. For now I think we can grant the FFE because
it's not too bad if this is the only bit of change we need to make.
But we will need to solve for the backwards compatibility prior to
merging.  I'll update the blueprint with this.

Thanks,
-Alex

> I will see how easy it is to also support the old naming convention...
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   3   4   >