Re: [openstack-dev] [tripleo] CI promotion blockers

2018-01-02 Thread Alex Schultz
On Tue, Jan 2, 2018 at 9:08 AM, Julie Pichon  wrote:
> Hi!
>
> On 27 December 2017 at 16:48, Emilien Macchi  wrote:
>> - Keystone removed _member_ role management, so we stopped using it
>> (only Member is enough): https://review.openstack.org/#/c/529849/
>
> There have been so many issues with the default member role and Horizon
> over the years that this one got my attention. I can see that
> puppet-horizon still expects '_member_' for role management [1].
> However trying to understand the Keystone patch linked to in the
> commit, it looks like there's total freedom in which role name to use
> so we can't just change the default in puppet-horizon to use 'Member'
> as other consumers may expect and settle on '_member_' in their
> environment. (Right?)
>
> In this case, the proper way to fix this for TripleO deployments may
> be to make the change in instack-undercloud (I presume in [2]) so that
> the default role is explicitly set to 'Member' for us? Does that sound
> like the correct approach to get to a working Horizon?
>

We probably should at least change _member_ to Member in
puppet-horizon. That fixes both projects for the default case.
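If the default in puppet-horizon does change, deployments that want to keep
'_member_' could still pin it explicitly. A minimal sketch, assuming the
parameter is the keystone_default_role one from the init.pp Julie links as
[1] (the secret_key value here is only a placeholder):

```puppet
# Sketch only: pin the Horizon default role explicitly rather than rely on
# the module default. keystone_default_role is the puppet-horizon parameter
# from init.pp [1]; 'REPLACE_ME' is a placeholder.
class { 'horizon':
  secret_key            => 'REPLACE_ME',
  keystone_default_role => 'Member',
}
```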

Thanks,
-Alex

> Julie
>
> [1] 
> https://github.com/openstack/puppet-horizon/blob/master/manifests/init.pp#L458
> [2] 
> https://github.com/openstack/instack-undercloud/blob/master/elements/puppet-stack-config/puppet-stack-config.yaml.template#L622
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-21 Thread Alex Schultz
> Just a note, the queens repo is not currently synced in the infra so
> the queens repo patch is failing on Ubuntu jobs. I've proposed adding
> queens to the infra configuration to resolve this:
> https://review.openstack.org/529670
>

As a follow-up, the mirrors have landed and two of the four scenarios
now pass.  Scenario001 is failing on ceilometer-api, which has been
removed upstream, so I have a patch[0] to drop it from the scenario.
Scenario004 is having issues with neutron, and the db looks to be very
unhappy[1].

Thanks,
-Alex

[0] https://review.openstack.org/529787
[1] 
http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_22_58_37_338



Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-21 Thread Alex Schultz
On Thu, Dec 21, 2017 at 10:40 AM, Alex Schultz <aschu...@redhat.com> wrote:
> Currently they are all globally failing in master (we are still using
> pike[0], which is probably the problem) in the tempest run[1] due to:
> AttributeError: 'module' object has no attribute 'requires_ext'
>
> I've submitted a patch[2] to switch UCA to queens. If history is any
> indication, it will probably end up with a bunch of failing tests that
> will need to be looked at. Feel free to follow along/help with the
> switch.
>

Just a note, the queens repo is not currently synced in the infra so
the queens repo patch is failing on Ubuntu jobs. I've proposed adding
queens to the infra configuration to resolve this:
https://review.openstack.org/529670

> Thanks,
> -Alex
>
> [0] 
> https://github.com/openstack/puppet-openstack-integration/blob/master/manifests/repos.pp#L6
> [1] 
> http://logs.openstack.org/62/529562/3/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/671f88e/job-output.txt.gz#_2017-12-21_14_54_49_779190
> [2] https://review.openstack.org/#/c/529657/
>



Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2017-12-21 Thread Alex Schultz
Currently they are all globally failing in master (we are still using
pike[0], which is probably the problem) in the tempest run[1] due to:
AttributeError: 'module' object has no attribute 'requires_ext'

I've submitted a patch[2] to switch UCA to queens. If history is any
indication, it will probably end up with a bunch of failing tests that
will need to be looked at. Feel free to follow along/help with the
switch.

Thanks,
-Alex

[0] 
https://github.com/openstack/puppet-openstack-integration/blob/master/manifests/repos.pp#L6
[1] 
http://logs.openstack.org/62/529562/3/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/671f88e/job-output.txt.gz#_2017-12-21_14_54_49_779190
[2] https://review.openstack.org/#/c/529657/

On Thu, Dec 21, 2017 at 12:58 AM, Tobias Urdin
 wrote:
> Thanks for letting us know!
>
> I can push for time on this if we can get a list.
>
>
> Best regards
>
> Tobias
>
>
> On 12/21/2017 08:04 AM, Andrew Woodward wrote:
>
> Some pointers for perusal as to the observed problems would be helpful,
> Thanks!
>
> On Wed, Dec 20, 2017 at 11:09 AM Chuck Short  wrote:
>>
>> Hi Mohammed,
>>
>> I might be able to help where can I find this info?
>>
>> Thanks
>> chuck
>>
>> On Wed, Dec 20, 2017 at 12:03 PM, Mohammed Naser 
>> wrote:
>>>
>>> Hi everyone,
>>>
>>> I'll get right to the point.
>>>
>>> At the moment, the Puppet OpenStack modules don't have many
>>> contributors who can help maintain the Ubuntu support.  We deploy on
>>> CentOS (so we try to get in all the fixes that we can), and there is
>>> also a lot of activity from the TripleO team, which does its
>>> deployments on CentOS, which means that the CentOS support is very
>>> reliable and its CI is closely watched.
>>>
>>> However, a while back we started seeing occasional failures with
>>> Ubuntu deploys, which led us to set the job to non-voting.  At the
>>> moment, the Puppet integration jobs for Ubuntu are always failing
>>> because of some Tempest issue.  This means that with every Puppet
>>> change, we're wasting ~80 minutes of CI run time on a job that will
>>> always fail.
>>>
>>> We've had a lot of support from the packaging team at RDO (whose
>>> packages are used in Puppet deployments): they run our integration
>>> tests before promoting packages, which helps us find issues together.
>>> However, we do not have that with Ubuntu, nor has anyone taken the
>>> initiative to investigate those issues.
>>>
>>> I understand that there are users out there who use Ubuntu with the
>>> Puppet OpenStack modules.  We need your help to clear these issues
>>> out, and we'd be more than happy to point you in the right direction
>>> to fix them.
>>>
>>> Unfortunately, if we don't have any folks stepping up to resolve
>>> this, we'll be forced to drop all CI for Ubuntu, note to users that
>>> Ubuntu is not fully tested, and hope that as users run into issues
>>> they can contribute fixes back (or that someone can work on getting
>>> Ubuntu gating working again).
>>>
>>> Thanks for reading through this, I am quite sad that we'd have to drop
>>> support for such a major operating system, but there's only so much we
>>> can do with a much smaller team.
>>>
>>> Thank you,
>>> Mohammed
>>>
>>>
>>
>>
>
> --
> Andrew Woodward
>
>
>
>



Re: [openstack-dev] [TripleO] Tis the season...for a cloud reboot

2017-12-19 Thread Alex Schultz
On Tue, Dec 19, 2017 at 9:53 AM, Ben Nemec  wrote:
> The reboot is done (mostly...see below).
>
> On 12/18/2017 05:11 PM, Joe Talerico wrote:
>>
>> Ben - Can you provide some links to the ovs port exhaustion issue for
>> some background?
>
>
> I don't know if we ever had a bug opened, but there's some discussion of it
> in
> http://lists.openstack.org/pipermail/openstack-dev/2016-December/109182.html
> I've also copied Derek since I believe he was the one who found it
> originally.
>
> The gist is that after about 3 months of tripleo-ci running in this cloud we
> start to hit errors creating instances because of problems creating OVS
> ports on the compute nodes.  Sometimes we see a huge number of ports in
> general, other times we see a lot of ports that look like this:
>
> Port "qvod2cade14-7c"
> tag: 4095
> Interface "qvod2cade14-7c"
>
> Notably they all have a tag of 4095, which seems suspicious to me.  I don't
> know whether it's actually an issue though.
>
> I've had some offline discussions about getting someone on this cloud to
> debug the problem.  Originally we decided not to pursue it since it's not
> hard to work around and we didn't want to disrupt the environment by trying
> to move to later OpenStack code (we're still back on Mitaka), but it was
> pointed out to me this time around that from a downstream perspective we
> have users on older code as well and it may be worth debugging to make sure
> they don't hit similar problems.
>
> To that end, I've left one compute node un-rebooted for debugging purposes.
> The downstream discussion is ongoing, but I'll update here if we find
> anything.
>

I just so happened to wander across the bug from last time,
https://bugs.launchpad.net/tripleo/+bug/1719334
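For anyone poking at the leftover compute node, a rough sketch of counting
the suspicious tag-4095 ports from `ovs-vsctl show`-style output. This is
pure text parsing over the format Ben quoted; the helper name and the second
port in the sample are illustrative, not from any existing tool:

```python
import re

def count_stale_ports(ovs_show_output, stale_tag=4095):
    """Count ports carrying the 4095 'dead VLAN' tag Ben describes."""
    stale = 0
    current_port = None
    for raw in ovs_show_output.splitlines():
        line = raw.strip()
        # Lines like: Port "qvod2cade14-7c"
        port_match = re.match(r'Port "?([\w-]+)"?$', line)
        if port_match:
            current_port = port_match.group(1)
        elif current_port and line.startswith("tag:"):
            # Lines like: tag: 4095
            if int(line.split(":", 1)[1]) == stale_tag:
                stale += 1
            current_port = None
    return stale

sample = """Port "qvod2cade14-7c"
    tag: 4095
    Interface "qvod2cade14-7c"
Port "qvoexample-01"
    tag: 7
    Interface "qvoexample-01"
"""
print(count_stale_ports(sample))  # 1
```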

>
>>
>> Thanks,
>> Joe
>>
>> On Mon, Dec 18, 2017 at 10:43 AM, Ben Nemec 
>> wrote:
>>>
>>> Hi,
>>>
>>> It's that magical time again.  You know the one, when we reboot rh1 to
>>> avoid
>>> OVS port exhaustion. :-)
>>>
>>> If all goes well you won't even notice that this is happening, but there
>>> is
>>> the possibility that a few jobs will fail while the te-broker host is
>>> rebooted so I wanted to let everyone know.  If you notice anything else
>>> hosted in rh1 is down (tripleo.org, zuul-status, etc.) let me know.  I
>>> have
>>> been known to forget to restart services after the reboot.
>>>
>>> I'll send a followup when I'm done.
>>>
>>> -Ben
>>>
>>>
>>
>>
>>
>



Re: [openstack-dev] [TripleO] Planning for job execution outside the gate with Zuul v3

2017-12-19 Thread Alex Schultz
On Mon, Nov 20, 2017 at 3:31 PM, David Moreau Simard  wrote:
> Hi,
>
> As the migration of review.rdoproject.org to Zuul v3 draws closer, I'd like
> to open up the discussion around how we want to approach an eventual
> migration to Zuul v3 outside the gate.
> I'd like to take this opportunity to allow ourselves to think outside the
> box, think about how we would like to shape the CI of TripleO from upstream
> to the product and then iterate to reach that goal.
>
> The reason why I mention "outside the gate" is because one of the features
> of Zuul v3 is to dynamically construct its configuration by including
> different repositories.
> For example, the Zuul v3 from review.rdoproject.org can selectively include
> parts of git.openstack.org/openstack-infra/tripleo-ci [1] and it will load
> the configuration found there for jobs, nodesets, projects, etc.
>
> This opens a great deal of opportunities for sharing content or centralizing
> the different playbooks, roles and job parameters in one single repository
> rather than spread across different repositories across the production
> chain.
> If we do things right, this could give us the ability to run the same jobs
> (which can be customized with parameters depending on the environment,
> release, scenario, etc.) from the upstream gate down to
> review.rdoproject.org and the later productization steps.
>
> There are pros and cons to the idea, but this is just an example of what we
> can do with Zuul v3.
>
> Another example of an interesting thought from Sagi is to boot virtual
> machines directly with pre-built images instead of installing the
> undercloud/overcloud every time.
> Something else to think about is how can we leverage all the Ansible things
> from TripleO Quickstart in Zuul v3 natively.
>
> There's of course constraints about what we can and can't do in the upstream
> gate... but let's avoid prematurely blocking ourselves and try to think
> about what we want to do ideally and figure out if, and how, we can do it.
> Whether it's about the things that we would like to do, can't do, or the
> things that don't work, I'm sure the feedback and outcome of this could
> prove useful to improve Zuul.
>
> How would everyone like to proceed? Should we start an etherpad? Do some
> "design session" meetings?
> I'm willing to help get the ball rolling and spearhead the effort but this
> is a community effort :)
>

So we had a meeting today around this topic and we chatted about two
distinct efforts on this front.  The first one is that we need to
figure out how/where to migrate the review.rdoproject jobs.

Some notes can be found at
https://etherpad.openstack.org/p/rdosf_zuulv3_planning

It was agreed that we should use openstack-infra/tripleo-ci for the
job configuration for review.rdoproject.org, as this is where we keep
the current upstream OpenStack Zuul v3 job definitions for TripleO.
The action items for this migration would be:

1) Compile a list of the jobs in review.rdo
  
https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/zuul/upstream.yaml
  
https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/jobs/tripleo-upstream.yml
2) Compare this list to the jobs already defined in
openstack-infra/tripleo-ci
  https://github.com/openstack-infra/tripleo-ci/tree/master/zuul.d
3) Determine the ability to reuse existing jobs and convert any
missing jobs as necessary
4) Define new missing jobs in tripleo-ci
5) Import the project/jobs into a zuul v3 for review.rdoproject
6) Test
7) Switch over
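For step 5, this roughly means pointing the review.rdoproject.org Zuul
tenant at tripleo-ci as an untrusted config source, using the selective
include that dmsimard describes. A hypothetical sketch only — the tenant
and connection names here are placeholders, not the real config:

```yaml
# Hypothetical sketch: tenant name and 'upstream-gerrit' connection
# are placeholders.
- tenant:
    name: rdoproject.org
    source:
      upstream-gerrit:
        untrusted-projects:
          - openstack-infra/tripleo-ci:
              include:
                - job
                - nodeset
                - project-template
```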


The other effort, around using Zuul v3 natively in the future, requires
investigating how Zuul should execute code from quickstart.  It was
mentioned that there might need to be improvements in Zuul depending on
what the execution of quickstart needs to look like (multiple playbooks,
where the variables come from, etc).  It was also mentioned that we need
to understand/document the expectations we have around what invoking
quickstart actually means to both a developer and to CI; we shouldn't
just adapt it for the CI use case.  Since quickstart is essentially
executing Ansible, should we be exposing that directly, or should Zuul
v3 run the exact same interaction a developer would?  This sounded like
a longer discussion, outside the scope of getting review.rdoproject.org
switched over to leveraging Zuul v3.


Thanks,
-Alex



> Thanks !
>
> [1]: http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/zuul.d
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]



[openstack-dev] [tripleo] Canceling weekly meetings for Dec 26th and Jan 2nd

2017-12-19 Thread Alex Schultz
Hey everyone,

Due to likely low attendance, we'll be canceling the next two weekly
meetings on Dec 26th and Jan 2nd. We'll resume weekly meetings back on
Jan 9th.  Happy holidays and stuff.

Thanks,
-Alex



Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-15 Thread Alex Schultz
On Thu, Dec 14, 2017 at 5:01 PM, Tony Breeds <t...@bakeyournoodle.com> wrote:
> On Wed, Dec 13, 2017 at 03:01:41PM -0700, Alex Schultz wrote:
>> I assume that since some of this work was done earlier, outside of
>> tripleo, and does not affect the default installation path that most
>> folks will consume, it shouldn't impact general testing or increase
>> regressions. My general requirement for anyone who needed an FFE for
>> functionality that isn't essential is that it's off by default, has
>> minimal impact on the existing functionality, and that we have a rough
>> estimate on feature landing.  Do you have an idea of when you expect
>> to land this functionality? Additionally, the patches seem to be
>> primarily around the ironic integration, so have those been sorted out?
>
> Sadly this is going to be more impactful on x86 than anyone will like,
> and I apologise for not raising these issues before now.
>
> There are 3 main aspects:
> 1. Ironic integration/provisioning setup.
>    1.1 Teaching ironic inspector how to deal with ppc64le memory
>        detection.  There are a couple of approaches there, but they
>        don't directly impact tripleo.
>    1.2 I think there will be some work with puppet-ironic to set up the
>        introspection dnsmasq in a way that's compatible with
>        multi-arch.  Right now this is the introduction of a new tag
>        (based on options in the DHCP request) and then sending
>        different responses in the presence/absence of that tag.  Very
>        much akin to the iPXE stuff there today.
>    1.3 Helping tripleo understand that there is now more than one
>        deploy/overcloud image and correctly using them.  These are
>        mostly covered by the review Mark published, but there are
>        backwards-compat/corner cases to deal with.
>    1.4 Right now ppc64le has very specific requirements with respect to
>        the boot partition layout.  Last time I checked these weren't
>        handled by default in ironic.  The simple workaround here is to
>        make the overcloud image on ppc64le a whole disk rather than a
>        single partition, and I think, given the scope of everything
>        else, that's the most likely outcome for queens.
>
> 2. Containers.
>    Here we run into several issues, not least of which is my general
>    lack of understanding of containers, but the challenges as I
>    understand them are:
>    2.1 Having a venue to build/publish/test ppc64le container builds.
>        This in many ways is tied to the CI issue below, but all of the
>        potential solutions require some container image for ppc64le to
>        be available to validate that adding them doesn't impact x86_64.
>    2.2 As I understand it, the right way to do multi-arch containers is
>        with an image manifest or manifest list images[1].  There are so
>        many open questions here.
>        2.2.1 If the container registry supports manifest lists, when we
>              pull them onto the undercloud can we get *all*
>              layers/objects, or will we just get the one that matches
>              the host CPU?
>        2.2.2 If the container registry doesn't support manifest list
>              images, can we use something like manifest-tool[2] to pull
>              "nova" from multiple registries or orgs on the same
>              registry and combine them into a single manifest image on
>              the undercloud?
>        2.2.3 Do we give up entirely on manifest images and just have
>              multiple images/tags on the undercloud, for example:
>                  nova:latest
>                  nova:x86_64_latest
>                  nova:ppc64le_latest
>              and have the deployed node pull the $(arch)_latest tag
>              first, and if $(arch) == x86_64 pull the :latest tag if
>              the first pull failed?
>    2.3 All the things I can't describe/know about 'cause I haven't
>        gotten there yet.
> 3. CI
>    There isn't any ppc64le CI for tripleo, and frankly there won't be
>    in the foreseeable future.  Given the CI that's in place on x86 we
>    can confidently assert that we won't break x86, but the code paths
>    we add for power will largely be untested (beyond unit tests) and
>    any/all issues will have to be caught by downstream teams.
>
> So as you can see, the aim is to have minimal impact on x86_64 and to
> default to the existing behaviour in the absence of anything
> specifically requesting multi-arch support.  But minimal *may* be > none
> :(
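Option 2.2.3 is straightforward to sketch. A hypothetical helper for the
tag ordering Tony describes — note the plain :latest fallback only applies
on x86_64 (the function name is invented for illustration):

```python
import platform

def candidate_tags(arch=None):
    """Image tags to try, in order, per option 2.2.3: the arch-specific
    tag first, then plain :latest only as an x86_64 fallback."""
    arch = arch or platform.machine()
    tags = [f"{arch}_latest"]
    if arch == "x86_64":
        tags.append("latest")
    return tags

print(candidate_tags("ppc64le"))  # ['ppc64le_latest']
print(candidate_tags("x86_64"))   # ['x86_64_latest', 'latest']
```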
>
> As to code ETAs, realistically all of the ironic-related code will be
> public by M3 but probably not merged, and the containers stuff is
> somewhat dependent on that work / direction

Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-14 Thread Alex Schultz
On Thu, Dec 14, 2017 at 12:38 PM, Mark Hamzy <ha...@us.ibm.com> wrote:
> Alex Schultz <aschu...@redhat.com> wrote on 12/14/2017 09:24:54 AM:
>> On Wed, Dec 13, 2017 at 6:36 PM, Mark Hamzy <ha...@us.ibm.com> wrote:
>> ... As I said previously, please post the
>> patches ASAP so we can get eyes on these changes.  Since this does
>> have an impact on the existing functionality this should have been
>> merged earlier in the cycle so we could work out any user facing
>> issues.
>
> Sorry about that.
> https://review.openstack.org/#/c/528000/
> https://review.openstack.org/#/c/528060/
>

I reviewed it a bit, and I think you can add the backwards
compatibility in the few spots I listed. The problem is really that a
Queens undercloud (tripleoclient/tripleo-common) needs to be able to
manage a Pike overcloud. For now I think we can grant the FFE, because
it's not too bad if this is the only set of changes we need to make.
But we will need to solve the backwards compatibility prior to
merging.  I'll update the blueprint with this.

Thanks,
-Alex

> I will see how easy it is to also support the old naming convention...
>
>



Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-14 Thread Alex Schultz
On Wed, Dec 13, 2017 at 6:36 PM, Mark Hamzy <ha...@us.ibm.com> wrote:
> Alex Schultz <aschu...@redhat.com> wrote on 12/13/2017 04:29:49 PM:
>> On Wed, Dec 13, 2017 at 3:22 PM, Mark Hamzy <ha...@us.ibm.com> wrote:
>> > What I have done at a high level is to rename the images into
>> > architecture
>> > specific
>> > images.  For example,
>> >
>> > (undercloud) [stack@oscloud5 ~]$ openstack image list
>> > +--------------------------------------+-------------------------------+--------+
>> > | ID                                   | Name                          | Status |
>> > +--------------------------------------+-------------------------------+--------+
>> > | fa0ed7cb-21d7-427b-b8cb-7c62f0ff7760 | ppc64le-bm-deploy-kernel      | active |
>> > | 94dc2adf-49ce-4db5-b914-970b57a8127f | ppc64le-bm-deploy-ramdisk     | active |
>> > | 6c50587d-dd29-41ba-8971-e0abf3429020 | ppc64le-overcloud-full        | active |
>> > | 59e512a7-990e-4689-85d2-f1f4e1e6e7a8 | x86_64-bm-deploy-kernel       | active |
>> > | bcad2821-01be-4556-b686-31c70bb64716 | x86_64-bm-deploy-ramdisk      | active |
>> > | 3ab489fa-32c7-4758-a630-287c510fc473 | x86_64-overcloud-full         | active |
>> > | 661f18f7-4d99-43e8-b7b8-f5c8a9d5b116 | x86_64-overcloud-full-initrd  | active |
>> > | 4a09c422-3de0-46ca-98c3-7c6f1f7717ff | x86_64-overcloud-full-vmlinuz | active |
>> > +--------------------------------------+-------------------------------+--------+
>> >
>> > This will change existing functionality.
>> >
>>
>> Any chance of backwards compatibility if no arch is specified in the
>> image list so it's not that impacting?
>
> The patch as currently coded does not do that.  It is more consistent and
> cleaner as it is currently written.  How opposed is the community to a
> new convention?  I know we are pushing up against holidays and deadlines and
> don't know how much longer it will take to also support the old naming
> convention.
>

It's not that the community is averse to new conventions; the issue is
the lack of backwards compatibility, especially late in the cycle. If
we need to extend out until the middle of January/the end of M3 to get
the backwards compatibility, that is an option. I'm wondering if this
lack of backwards compatibility would be a problem for upgrades or the
more advanced use cases.  We do support custom role images for end
users, so I'm wondering what the impact would be for that.  As I said
previously, please post the patches ASAP so we can get eyes on these
changes.  Since this does have an impact on the existing functionality,
it should have been merged earlier in the cycle so we could work out
any user-facing issues.

> RedHat is asking for another identifier along with ppc64le given that there
> are
> different optimizations and CPU instructions between a Power 8 system and a
> Power 9 system.  The kernel is certainly different and the base operating
> system might be as well.
>
>



Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-13 Thread Alex Schultz
On Wed, Dec 13, 2017 at 3:22 PM, Mark Hamzy  wrote:
>> I just need an understanding on the impact and the timeline. Replying
>> here is sufficient.
>>
>> I assume that since some of this work was done earlier, outside of
>> tripleo, and does not affect the default installation path that most
>> folks will consume, it shouldn't impact general testing or increase
>> regressions. My general requirement for anyone who needed an FFE for
>> functionality that isn't essential is that it's off by default, has
>> minimal impact on the existing functionality, and that we have a rough
>> estimate on feature landing.  Do you have an idea of when you expect
>> to land this functionality? Additionally, the patches seem to be
>> primarily around the ironic integration, so have those been sorted out?
>
> I have been working on a multi-architecture patch for TripleO and am almost
> ready to submit a WIP to r.o.o.  I have delayed until I can get all of the
> testcases passing.
>

Please submit ASAP so we can get a proper review of what is actually
impacted. The failing test cases would also indicate how much of an
impact this really is.

> Currently the patches exist at:
> https://hamzy.fedorapeople.org/TripleO-multi-arch/05.bb2b96e/0001-fix_multi_arch-tripleo-common.patch
> https://hamzy.fedorapeople.org/TripleO-multi-arch/05.cc5fee3/0001-fix_multi_arch-python-tripleoclient.patch
>
> And the full installation instructions are at:
> https://fedoraproject.org/wiki/User:Hamzy/TripleO_mixed_undercloud_overcloud_try9
>
> What I have done at a high level is to rename the images into architecture
> specific
> images.  For example,
>
> (undercloud) [stack@oscloud5 ~]$ openstack image list
> +--------------------------------------+-------------------------------+--------+
> | ID                                   | Name                          | Status |
> +--------------------------------------+-------------------------------+--------+
> | fa0ed7cb-21d7-427b-b8cb-7c62f0ff7760 | ppc64le-bm-deploy-kernel      | active |
> | 94dc2adf-49ce-4db5-b914-970b57a8127f | ppc64le-bm-deploy-ramdisk     | active |
> | 6c50587d-dd29-41ba-8971-e0abf3429020 | ppc64le-overcloud-full        | active |
> | 59e512a7-990e-4689-85d2-f1f4e1e6e7a8 | x86_64-bm-deploy-kernel       | active |
> | bcad2821-01be-4556-b686-31c70bb64716 | x86_64-bm-deploy-ramdisk      | active |
> | 3ab489fa-32c7-4758-a630-287c510fc473 | x86_64-overcloud-full         | active |
> | 661f18f7-4d99-43e8-b7b8-f5c8a9d5b116 | x86_64-overcloud-full-initrd  | active |
> | 4a09c422-3de0-46ca-98c3-7c6f1f7717ff | x86_64-overcloud-full-vmlinuz | active |
> +--------------------------------------+-------------------------------+--------+
>
> This will change existing functionality.
>

Any chance of keeping backwards compatibility if no arch is specified
in the image name, so it's not as impacting?

> I still need to work with RedHat on changing the patch for their needs, but
> it currently can
> deploy an x86_64 undercloud, an x86_64 overcloud controller node and a
> ppc64le overcloud
> compute node.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-13 Thread Alex Schultz
On Wed, Dec 13, 2017 at 11:16 AM, Sven Anderson <s...@redhat.com> wrote:
>
> On Sat, Dec 9, 2017 at 12:35 AM Alex Schultz <aschu...@redhat.com> wrote:
>>
>> Please take some time to review the list of blueprints currently
>> associated with Rocky[0] to see if your efforts have been moved. If
>> you believe you're close to implementing the feature in the next week
>> or two, let me know and we can move it back into Queens. If you think
>> it will take an extended period of time (more than 2 weeks) to land
>> but we need it in Queens, please submit an FFE.
>
>
>  As discussed on IRC today, I'd like to try to implement
>
> https://blueprints.launchpad.net/tripleo/+spec/tripleo-realtime
>
> until Queens M3. It has been punted many releases already, and depends now
> on the ironic ansible driver, which just merged and now gets it's finishing
> touch. Since it's a pure add-on feature that is off by default and shouldn't
> have impact on existing functionality, it's a pretty safe thing to try on
> best effort basis. If we see it becomes unfeasible to land this until M3 I
> will punt it.
>
> Even if I make good progress next week, it is very unlikely to finish it
> this year, so I also like to submit an FFE for it.
>

Thanks Sven. As discussed, I updated the blueprint. Please keep me in
the loop if it will not make Queens.

Thanks,
-Alex

> Cheers,
>
> Sven
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-13 Thread Alex Schultz
On Tue, Dec 12, 2017 at 5:50 PM, Tony Breeds <t...@bakeyournoodle.com> wrote:
> On Fri, Dec 08, 2017 at 04:34:09PM -0700, Alex Schultz wrote:
>> Hey folks,
>>
>> So I went through the list of blueprints and moved some that were
>> either not updated or appeared to have a bunch of patches not in a
>> mergable state.
>>
>> Please take some time to review the list of blueprints currently
>> associated with Rocky[0] to see if your efforts have been moved. If
>> you believe you're close to implementing the feature in the next week
>> or two, let me know and we can move it back into Queens. If you think
>> it will take an extended period of time (more than 2 weeks) to land
>> but we need it in Queens, please submit an FFE.
>
> I'd like to get the ball rolling on applying for an FFE for:
> https://blueprints.launchpad.net/tripleo/+spec/multiarch-support
>
> So how do I do that thing?  For requirements it's just a thread on the
> mailing list; is there something more formal for tripleo?
>

I just need an understanding on the impact and the timeline. Replying
here is sufficient.

I assume since some of this work was sort of done earlier outside of
tripleo and does not affect the default installation path that most
folks will consume, it shouldn't be impacting to general testing or
increase regressions. My general requirement for anyone who needed an
FFE for functionality that isn't essential is that it's off by
default, has minimal impact to the existing functionality and we have
a rough estimate on feature landing.  Do you have an idea of when you expect
to land this functionality? Additionally the patches seem to be
primarily around the ironic integration so have those been sorted out?

Thanks,
-Alex


> Yours Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-12 Thread Alex Schultz
On Tue, Dec 12, 2017 at 4:56 AM, Moshe Levi <mosh...@mellanox.com> wrote:
> I believe so,
> Just one note regarding the sample-env-generator tool: it does not seem to
> support role-specific parameters, such as:
>
>   # Kernel arguments for ComputeSriov node
>   ComputeSriovParameters:
> KernelArgs: "intel_iommu=on iommu=pt"
> OvsHwOffload: True
>

It can, you just have to document them in the env generator file. See
https://github.com/openstack/tripleo-heat-templates/blob/master/sample-env-generator/composable-roles.yaml#L128

This has the advantage of being able to properly document the
parameters for end user consumption.
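For anyone following along, the generator input is plain YAML; documenting a
role-specific parameter amounts to listing it for the relevant template and
giving it sample values. A rough sketch only -- the file and environment
names here are invented for illustration, and the exact field layout should
be checked against the composable-roles.yaml example linked above:

```yaml
# sample-env-generator/sriov.yaml -- illustrative sketch, not a verified
# file from the tree; field names follow the composable-roles.yaml example.
environments:
  - name: sriov-hw-offload
    title: SR-IOV with OVS hardware offload
    description: |
      Sets the kernel arguments and enables OVS hardware offload on
      the ComputeSriov role.
    files:
      overcloud.yaml:
        parameters:
          # Role-specific parameter to document for end users
          - ComputeSriovParameters
    sample_values:
      ComputeSriovParameters:
        KernelArgs: "intel_iommu=on iommu=pt"
        OvsHwOffload: true
```

The generated sample environment then carries the parameter documentation,
which is what makes it consumable via the UI.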

> So can we merge the patches as is and fix the sample-env-generator later?

If you throw up a WIP patch for this on top of the existing one, I'll
merge it.  I just don't want it forgotten as it's important for the
end user to be able to consume these environment files via the UI as
well.

Thanks,
-Alex

>
>> -Original Message-
>> From: Alex Schultz [mailto:aschu...@redhat.com]
>> Sent: Tuesday, December 12, 2017 12:20 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> <openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky
>>
>> On Mon, Dec 11, 2017 at 1:20 PM, Brad P. Crochet <b...@redhat.com>
>> wrote:
>> >
>> >
>> > On Fri, Dec 8, 2017 at 6:35 PM Alex Schultz <aschu...@redhat.com> wrote:
>> >>
>> >> Hey folks,
>> >>
>> >> So I went through the list of blueprints and moved some that were
>> >> either not updated or appeared to have a bunch of patches not in a
>> >> mergable state.
>> >>
>> >> Please take some time to review the list of blueprints currently
>> >> associated with Rocky[0] to see if your efforts have been moved. If
>> >> you believe you're close to implementing the feature in the next week
>> >> or two, let me know and we can move it back into Queens. If you think
>> >> it will take an extended period of time (more than 2 weeks) to land
>> >> but we need it in Queens, please submit an FFE.
>> >>
>> >
>> > I think these are in a close enough state to warrant inclusion in Queens:
>> >
>> >
>> > https://blueprints.launchpad.net/tripleo/+spec/get-networks-action
>> > https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-list-available-roles-action
>> > https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-select-roles-workflow
>> > https://blueprints.launchpad.net/tripleo/+spec/update-networks-action
>> > https://blueprints.launchpad.net/tripleo/+spec/validate-roles-networks
>> > https://blueprints.launchpad.net/tripleo/+spec/update-roles-action

Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-11 Thread Alex Schultz
On Mon, Dec 11, 2017 at 1:20 PM, Brad P. Crochet <b...@redhat.com> wrote:
>
>
> On Fri, Dec 8, 2017 at 6:35 PM Alex Schultz <aschu...@redhat.com> wrote:
>>
>> Hey folks,
>>
>> So I went through the list of blueprints and moved some that were
>> either not updated or appeared to have a bunch of patches not in a
>> mergable state.
>>
>> Please take some time to review the list of blueprints currently
>> associated with Rocky[0] to see if your efforts have been moved. If
>> you believe you're close to implementing the feature in the next week
>> or two, let me know and we can move it back into Queens. If you think
>> it will take an extended period of time (more than 2 weeks) to land
>> but we need it in Queens, please submit an FFE.
>>
>
> I think these are in a close enough state to warrant inclusion in Queens:
>
> https://blueprints.launchpad.net/tripleo/+spec/get-networks-action
> https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-list-available-roles-action
> https://blueprints.launchpad.net/tripleo/+spec/tripleo-common-select-roles-workflow
> https://blueprints.launchpad.net/tripleo/+spec/update-networks-action
> https://blueprints.launchpad.net/tripleo/+spec/validate-roles-networks
> https://blueprints.launchpad.net/tripleo/+spec/update-roles-action
>

Ok I reviewed them and they do appear to have patches posted and are
getting reviews.  I'll pull them back in to Queens and set the
milestone to queens-3. Please make sure to update us on the status
during this week and next week's IRC meetings. I would like to make
sure these land ASAP. Do you think they should be in a state to land
by the end of next week, say 12/21?

Thanks,
-Alex

> There is a good chance of these being completed in the coming week.
>
> Thanks,
>
> Brad
>>
>>
> --
> Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS
> Principal Software Engineer
> (c) 704.236.9385
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-11 Thread Alex Schultz
On Fri, Dec 8, 2017 at 6:11 PM, Moshe Levi <mosh...@mellanox.com> wrote:
> Hi Alex,
>
> I don't see the tripleo ovs hardware offload feature. The spec was merged into
> queens [1], but for some reason the blueprint is not in the approved state [2].
>

Just as a reminder, it's everyone's responsibility to make sure their
blueprints are properly up to date. I've mentioned this in the weekly
meeting a few times[0][1] in the last month. I've added the patches
from this email into the blueprint for tracking.

> It has only 3 patches left:
> 1.  https://review.openstack.org/#/c/507401/ has 2 +2
> 2.  https://review.openstack.org/#/c/507100/ has 1 +2
> 3.  https://review.openstack.org/#/c/518715/
>
> I would appreciated if we can land all the patches to  queens release.

We should be able to land these and they appear to be in decent shape.
Please reach out on irc if you aren't getting additional reviews on
these.  It would be really beneficial to land these in the next week or
so if possible.

Thanks,
-Alex

[0] 
http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-11-28-14.00.log.html#l-106
[1] 
http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-11-14-14.01.log.html#l-170

>
> [1] - https://review.openstack.org/#/c/502313/
> [2] - https://blueprints.launchpad.net/tripleo/+spec/tripleo-ovs-hw-offload
>
>> -Original Message-
>> From: Alex Schultz [mailto:aschu...@redhat.com]
>> Sent: Saturday, December 9, 2017 1:34 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> <openstack-dev@lists.openstack.org>
>> Subject: [openstack-dev] [tripleo] Blueprints moved out to Rocky
>>
>> Hey folks,
>>
>> So I went through the list of blueprints and moved some that were either not
>> updated or appeared to have a bunch of patches not in a mergable state.
>>
>> Please take some time to review the list of blueprints currently associated
>> with Rocky[0] to see if your efforts have been moved. If you believe you're
>> close to implementing the feature in the next week or two, let me know and
>> we can move it back into Queens. If you think it will take an extended period
>> of time (more than 2 weeks) to land but we need it in Queens, please submit
>> an FFE.
>>
>> If you have a blueprint that is currently not implemented in Queens[1],
>> please make sure to update the blueprint status if possible.  For the ones I
>> left in due to the patches being in a decent state, please make sure those 
>> get
>> merged in the next few weeks or we will need to push them out to Rocky.
>>
>> Thanks,
>> -Alex
>>
>>
>> [0] https://blueprints.launchpad.net/tripleo/rocky
>> [1] https://blueprints.launchpad.net/tripleo/queens
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [tripleo] Blueprints moved out to Rocky

2017-12-08 Thread Alex Schultz
Hey folks,

So I went through the list of blueprints and moved some that were
either not updated or appeared to have a bunch of patches not in a
mergable state.

Please take some time to review the list of blueprints currently
associated with Rocky[0] to see if your efforts have been moved. If
you believe you're close to implementing the feature in the next week
or two, let me know and we can move it back into Queens. If you think
it will take an extended period of time (more than 2 weeks) to land
but we need it in Queens, please submit an FFE.

If you have a blueprint that is currently not implemented in
Queens[1], please make sure to update the blueprint status if
possible.  For the ones I left in due to the patches being in a decent
state, please make sure those get merged in the next few weeks or we
will need to push them out to Rocky.

Thanks,
-Alex


[0] https://blueprints.launchpad.net/tripleo/rocky
[1] https://blueprints.launchpad.net/tripleo/queens



Re: [openstack-dev] [tripleo] Nominate Gaël Chamoulaud (gchamoul) for tripleo-validations core

2017-12-08 Thread Alex Schultz
+1

On Thu, Dec 7, 2017 at 7:34 AM, Ana Krivokapic  wrote:
> Hey TripleO devs,
>
> Gaël has consistently been contributing high quality reviews and patches to
> the tripleo-validations project. The project would highly benefit from his
> addition to the core reviewer team.
>
> Assuming that there are no objections, we will add Gaël to the core team
> next week.
>
> Thanks!
>
> --
> Regards,
> Ana Krivokapic
> Senior Software Engineer
> OpenStack team
> Red Hat Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [tripleo] Proposing Wesley Hayutin core on TripleO CI

2017-12-06 Thread Alex Schultz
+1

On Wed, Dec 6, 2017 at 8:45 AM, Emilien Macchi  wrote:
> Team,
>
> Wes has been consistently and heavily involved in TripleO CI work.
> He has a very well understanding on how tripleo-quickstart and
> tripleo-quickstart-extras work, his number and quality of reviews are
> excellent so far. His experience with testing TripleO is more than
> valuable.
> Also, he's always here to help on TripleO CI issues or just
> improvements (he's the guy filling bugs on a Saturday evening).
> I think he would be a good addition to the TripleO CI core team
> (tripleo-ci, t-q and t-q-e repos for now).
> Anyway, thanks a lot Wes for your hard work on CI, I think it's time
> to move on and get you +2 ;-)
>
> As usual, it's open for voting, feel free to bring any feedback.
> Thanks everyone,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [tripleo] Queens milestone 2 released

2017-12-05 Thread Alex Schultz
Hello everyone,

We released TripleO Queens milestone 2 yesterday. As we
discussed at the PTG, now is the time to stop working on new features
and start stabilizing the release.  As mentioned in the meeting
today[0], we need to start reviewing the open patches for features
still in flight and see about finalizing them.  As we released early
in the week, I would expect that we spend this week finishing up
things we expected to land by milestone 2. Starting next week we
should be more strict on what we allow to land in terms of features.

Please take care when reviewing patches to ensure no new features are
being added unless they have been previously discussed and approved.
We should only move forward on adding any feature items if they are
disabled by default and do not impact existing functionality.  If you
are unsure if a patch in question is a feature please ask before
approving.  If there is not existing test coverage for a feature, that
should be improved and code should not be added without proper
coverage.

We will not be accepting any new blueprints or specs from now until
the end of Queens. As previously mentioned[1], we will be closing the
specs repo after milestone 2. We will be allowing a short grace period
(+1 week) for existing specs with +2s[2]. Please review these specs
and we should merge them this week. After this week, any open specs
should be re-targeted to Rocky.

We will be reviewing the open blueprints and moving any not currently
started blueprints to a future release. Please make sure they are
accurate and up to date.

If you have a feature you feel has to land in Queens that hasn't
already been started and approved, please raise a feature freeze
exception and we can investigate whether we should allow it.

Thanks,
-Alex


[0] 
http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-12-05-14.00.log.html#l-41
[1] http://lists.openstack.org/pipermail/openstack-dev/2017-October/123474.html
[2] 
https://review.openstack.org/#/q/project:openstack/tripleo-specs+status:open+label:Code-Review%253E%253D2



Re: [openstack-dev] [tripleo] rename ovb jobs?

2017-12-01 Thread Alex Schultz
On Fri, Dec 1, 2017 at 7:54 AM, Emilien Macchi  wrote:
> Bogdan and Dmitry's suggestions are imho a bit too much and would lead
> to very very (very) long names... Do we actually want that?
>

No, I don't think so. I think the <nodes>-<featureset> naming is ideal for
communicating at least the basics. If we did it this way for all the jobs
and linked the featureset docs[0] in the logs for reference, it would be an
improvement. I
personally dislike the scenarioXXX references because you have to
figure out the featureset/scenario mappings (and remember where those
docs live[1]).

Thanks,
-Alex

[0] 
https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html
[1] 
https://github.com/openstack/tripleo-heat-templates/blob/master/README.rst#service-testing-matrix

> On Fri, Dec 1, 2017 at 2:02 AM, Sanjay Upadhyay  wrote:
>>
>>
>> On Fri, Dec 1, 2017 at 2:17 PM, Bogdan Dobrelya  wrote:
>>>
>>> On 11/30/17 8:11 PM, Emilien Macchi wrote:

 A few months ago, we renamed ovb-updates to be
 tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024.
 The name is much longer but it describes better what it's doing.
 We know it's a job with one controller, one compute and one storage
 node, deploying the quickstart featureset n°24.

 For consistency, I propose that we rename all OVB jobs this way.
 For example, tripleo-ci-centos-7-ovb-ha-oooq would become
 tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001
 etc.

 Any thoughts / feedback before we proceed?
 Before someone asks, I'm not in favor of renaming the multinode
 scenarios now, because they became quite familiar now, and it would
 confuse people to rename the jobs.

 Thanks,

>>>
>>> I'd like to see featuresets clarified in the names as well, just to
>>> convey the main message without going into the test matrix details, e.g.
>>> tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-ovn/ceph/k8s/tempest
>>> or whatever it is.
>>>
>>
>> How is this looking?
>>
>> tripleo-ci/os/centos/7/ovb/ha/nodes/3ctrlr_1comp.yaml
>> tripleo-ci/os/centos/7/ovb/ha/featureset/ovn_ceph_k8s_with-tempest.yaml
>>
>> I also think we should have clear demarcation of the oooq variables ie
>> machine specific goes to nodes/* and feature related goes to featureset/*
>>
>> regards
>> /sanjay
>>
>>
>>>
>>> --
>>> Best regards,
>>> Bogdan Dobrelya,
>>> Irc #bogdando
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [tripleo] containerized undercloud update

2017-12-01 Thread Alex Schultz
On Fri, Dec 1, 2017 at 8:05 AM, Alex Schultz <aschu...@redhat.com> wrote:
> On Thu, Nov 30, 2017 at 2:36 PM, Wesley Hayutin <whayu...@redhat.com> wrote:
>> Greetings,
>>
>> Just wanted to share some progress with the containerized undercloud work.
>> Ian pushed some of the patches along and we now have a successful undercloud
>> install with containers.
>>
>> The initial undercloud install works [1]
>> The idempotency check failed where we reinstall the undercloud [2]
>>
>> Question: Do we expect the reinstallation to work at this point? Should the
>> check be turned off?
>
> So I would say for the undercloud-container's job it's not required at
> this point but for the main undercloud job yes it is required and
> should not be disabled. This is expected functionality that must be
> replicated in the containers version in order to make the switch.  The
> original ask that I had was that from an operator perspective the
> containerized install works exactly like the non-containerized
> undercloud.
>
>>
>> I will try it w/o the idempotency check, I suspect I will run into errors in
>> a full run with an overcloud deployment.  I ran into issues weeks ago.  I
>> suspect if we do hit something it will be CI related as Dan Price has been
>> deploying the overcloud for a while now.  Dan I may need to review your
>> latest doit.sh scripts to check for diffs in the CI.
>>
>
> What I would propose is switching the undercloud-containers job to use
> the 'openstack undercloud install --use-heat' command and we switch
> that to non-voting and see how it performs. Originally when we

Oops s/non-voting/voting/.  I would like that job voting but I know
we've seen failure issues in comparison with the instack-undercloud
job. That however might be related to the number of times we run the
undercloud-containers job (on all THT patches) than the instack jobs
(just puppet-tripleo and instack-undercloud). So we really need to
understand the passing numbers.

discussed this I wanted that job voting by milestone 1. Milestone 2 is
> next week so I'm very concerned at the state of this feature.  Do we
> have updates and upgrades with the containerized undercloud being
> tested anywhere in CI? That was one of items that I had mentioned[0]
> as a requirement to do the switch during the queens cycle. What I
> would really like to see is that we get those stable and then we can
> work on actually testing overcloud deploys and the various scenarios
> with the containerized undercloud.  If we update oooq to support
> adding the --use-heat flag it would make testing all the scenarios
> fairly trivial with a single patch and we would be able to see where
> there are issues.
>
> Thanks,
> -Alex
>
> [0] 
> http://lists.openstack.org/pipermail/openstack-dev/2017-October/123065.html
>
>
>> Thanks
>>
>>
>> [1]
>> http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_install.log.txt.gz
>> [2]
>> http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_reinstall.log.txt.gz#_2017-11-30_19_51_26
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>



Re: [openstack-dev] [tripleo] containerized undercloud update

2017-12-01 Thread Alex Schultz
On Fri, Dec 1, 2017 at 3:54 AM, Bogdan Dobrelya  wrote:
> On 11/30/17 10:36 PM, Wesley Hayutin wrote:
>>
>> Greetings,
>>
>> Just wanted to share some progress with the containerized undercloud work.
>> Ian pushed some of the patches along and we now have a successful
>> undercloud install with containers.
>>
>> The initial undercloud install works [1]
>> The idempotency check failed where we reinstall the undercloud [2]
>>
>> Question: Do we expect the reinstallation to work at this point? Should
>> the check be turned off?
>
>
> Yeah, there is a bug for that [0]. Not critical to fix, though nice to have
> for developers. I'm used to deploy with undercloud containers, and it's a
> pain to do a full teardown and reinstall for each change being tested.
>

It may not be critical now, but it is a critical requirement in order
to switch to containerized undercloud by default as this is the way it
functions today with instack-undercloud.

Thanks,
-Alex

> By the way, somewhat related, I have a PoC for undercloud containers
> all-in-one [1], by quickstart off-road. And a few 'enabler' bug-fixes
> [2],[3],[4], JFYI and review please.
>
> I think all-in-one uc may be useful either for CI, or dev cases. Like for
> those who want to deploy *things* on top of openstack, yet suffering from
> healing devstack and searching alternatives, like packstack et al. So they
> may want to switch suffering from healing tripleo (undercloud containers)
> instead.
>
> [0] https://bugs.launchpad.net/tripleo/+bug/1698349
> [1] https://github.com/bogdando/oooq-warp/blob/master/rdocloud-guide.md
> [2] https://review.openstack.org/#/c/524114/
> [3] https://review.openstack.org/#/c/524133/
> [4] https://review.openstack.org/#/c/524187
>
>>
>> I will try it w/o the idempotency check, I suspect I will run into errors
>> in a full run with an overcloud deployment.  I ran into issues weeks ago.  I
>> suspect if we do hit something it will be CI related as Dan Price has been
>> deploying the overcloud for a while now.  Dan I may need to review your
>> latest doit.sh scripts to check for diffs in the CI.
>>
>> Thanks
>>
>>
>> [1]
>> http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_install.log.txt.gz
>> [2]
>> http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_reinstall.log.txt.gz#_2017-11-30_19_51_26
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] containerized undercloud update

2017-12-01 Thread Alex Schultz
On Thu, Nov 30, 2017 at 2:36 PM, Wesley Hayutin  wrote:
> Greetings,
>
> Just wanted to share some progress with the containerized undercloud work.
> Ian pushed some of the patches along and we now have a successful undercloud
> install with containers.
>
> The initial undercloud install works [1]
> The idempotency check failed where we reinstall the undercloud [2]
>
> Question: Do we expect the reinstallation to work at this point? Should the
> check be turned off?

So I would say that for the undercloud-containers job it's not required
at this point, but for the main undercloud job it is required and
should not be disabled. This is expected functionality that must be
replicated in the containers version in order to make the switch.  The
original ask that I had was that from an operator perspective the
containerized install works exactly like the non-containerized
undercloud.

>
> I will try it w/o the idempotency check, I suspect I will run into errors in
> a full run with an overcloud deployment.  I ran into issues weeks ago.  I
> suspect if we do hit something it will be CI related as Dan Prince has been
> deploying the overcloud for a while now.  Dan I may need to review your
> latest doit.sh scripts to check for diffs in the CI.
>

What I would propose is switching the undercloud-containers job to use
the 'openstack undercloud install --use-heat' command and we switch
that to non-voting and see how it performs. Originally when we
discussed this I wanted that job voting my milestone 1. Milestone 2 is
next week so I'm very concerned at the state of this feature.  Do we
have updates and upgrades with the containerized undercloud being
tested anywhere in CI? That was one of items that I had mentioned[0]
as a requirement to do the switch during the queens cycle. What I
would really like to see is that we get those stable and then we can
work on actually testing overcloud deploys and the various scenarios
with the containerized undercloud.  If we update oooq to support
adding the --use-heat flag it would make testing all the scenarios
fairly trivial with a single patch and we would be able to see where
there are issues.

Thanks,
-Alex

[0] http://lists.openstack.org/pipermail/openstack-dev/2017-October/123065.html


> Thanks
>
>
> [1]
> http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_install.log.txt.gz
> [2]
> http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_reinstall.log.txt.gz#_2017-11-30_19_51_26
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Ronelle Landy for Tripleo-Quickstart/Extras/CI core

2017-11-29 Thread Alex Schultz
+1

On Wed, Nov 29, 2017 at 12:34 PM, John Trowbridge  wrote:
> I would like to propose Ronelle be given +2 for the above repos. She has
> been a solid contributor to tripleo-quickstart and extras almost since the
> beginning. She has solid review numbers, but more importantly has always
> done quality reviews. She also has been working in the very intense rover
> role on the CI squad in the past CI sprint, and has done very well in that
> role.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] configuring qemu.conf using puppet or ansible

2017-11-24 Thread Alex Schultz
On Fri, Nov 24, 2017 at 5:03 AM, Saravanan KR  wrote:
> Hello,
>
> For dpdk in ovs2.8, the default permission of vhost user ports are
> modified from root:root  to openvswitch:hugeltbfs. The vhost user
> ports are shared between ovs and libvirt (qemu). More details on BZ
> [1].
>
> The "group" option in /etc/libvirt/qemu.conf [2] need to set as
> "hugetlbfs" for vhost port to be shared between ovs and libvirt. In
> order to configure qemu.conf, I could think of multiple options:
>
> * By using puppet-libvirt[3] module, but this module is altering lot
> of configurations on the qemu.conf as it is trying to rewrite the
> complete qemu.conf file. It may be different version of conf file
> altogether as we might override the package defaults, depending on the
> package version used.
>

We currently do not use puppet-libvirt and qemu settings are managed
via puppet-nova with augeas[0][1].

> * Other possibility is to configure the qemu.conf file directly using
> the "init_setting" module like [4].
>
> * Considering the move towards ansible, I would prefer if we can add
> ansible based configuration along with docker-puppet for any new
> modules going forward. But I am not sure of the direction.
>

So you could use ansible provided that the existing settings are not
managed via another puppet module. The problem with mixing both puppet
and ansible is ensuring that only one owns the thing being touched.
Since we use augeas in puppet-nova, this should not conflict with the
usage of ini_setting with ansible.  Unfortunately libvirt is not
currently managed as a standalone service so perhaps it's time to
evaluate how we configure it since multiple services (nova/ovs) need
to factor into its configuration.
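For reference, a minimal sketch of what managing just that one setting could
look like with augeas, in the same style as the puppet-nova qemu.pp linked
below. The resource title and the notify target are assumptions for
illustration, not existing TripleO code:

```puppet
# Hedged sketch: set only the vhost-user socket group in
# /etc/libvirt/qemu.conf, leaving the rest of the file to the package
# defaults. Service['libvirtd'] is assumed to be defined elsewhere.
augeas { 'qemu-conf-group':
  context => '/files/etc/libvirt/qemu.conf',
  changes => [
    'set group hugetlbfs',
  ],
  notify  => Service['libvirtd'],
}
```

Because augeas only touches the named key, it avoids the whole-file rewrite
problem described for puppet-libvirt above.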

Thanks,
-Alex

[0] 
https://github.com/openstack/puppet-nova/blob/30f9d47ec43519599f63f8a6f8da43b7dcb86242/manifests/compute/libvirt/qemu.pp
[1] 
https://github.com/openstack/puppet-nova/blob/9b98e3b0dee5f103c9fa32b37ff1a29df4296957/manifests/migration/qemu.pp

> Prefer the feedback before proceeding with an approach.
>
> Regards,
> Saravanan KR
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1515269
> [2]  https://github.com/libvirt/libvirt/blob/master/src/qemu/qemu.conf#L412
> [3] https://github.com/thias/puppet-libvirt
> [4] https://review.openstack.org/#/c/522796/1/manifests/profile/base/dpdk.pp
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] IPSEC integration

2017-11-20 Thread Alex Schultz
On Thu, Nov 16, 2017 at 12:01 AM, Juan Antonio Osorio
 wrote:
> Hello folks!
>
> A few months ago Dan Sneddon and me worked in an ansible role that would
> enable IPSEC for the overcloud [1]. Currently, one would run it as an extra
> step after the overcloud deployment. But, I would like to start integrating
> it to TripleO itself, making it another option, probably as a composable
> service.
>

Is there a spec for this or at least some more detail as to what
exactly this is solving?  I would really like some more explanation
around this feature than just an ansible role proposal.

> For this, I'm planning to move the tripleo-ipsec ansible role repository
> under the TripleO umbrella. Would that be fine with everyone? Or should I
> add this ansible role as part of another repository? After that's available
> and packaged in RDO. I'll then look into the actual TripleO composable
> service.
>

As I've previously indicated it probably should live under the tripleo
umbrella but I would like to see more details around this prior to
further integration.  It's also very late in the cycle (almost m2) to
be proposing something like this. Is the target for this Rocky?

That being said I don't see anything specific to this role that would
cause problems as part of the deployment process as it exists today.
I do see some possible conflicts around the iptables configuration as
we currently manage that via heat/puppet but I think it's smart enough
to not stomp on each other if we carefully format the rules.  Another
implementation item that might be problematic is the more hard-coded
configuration via template files. What is the plan to make those more
dynamic to support other roles besides just compute/controller?  Right
now tripleo-heat-templates is the source of configuration items that
we expose for the deployment.  What would we be looking to expose to
deployers since what is currently exposed from the role is minimal?

> Any input and contributions are welcome!
>
> [1] https://github.com/JAORMX/tripleo-ipsec
>
> --
> Juan Antonio Osorio R.
> e-mail: jaosor...@gmail.com
>
>

Thanks,
-Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Status of CI

2017-11-20 Thread Alex Schultz
Hey folks,

So over the weekend we have successfully moved our jobs in-tree with
zuul v3 thanks to Emilien's hard work.  There are a few outstanding
issues with some stable branches that he is working on.  Additionally
we have switched scenario001 and scenario003 to non-voting due to Bug
1731063[0].  We are still occasionally hitting heat issues due to Bug
1731032[1]; however, there is a possible fix for that one.  So I think
we're OK to start merging items in master as needed.  Do take some
time to review the current outstanding alerts as they do affect our
ability to merge fixes.  Also please check any new failures that might
occur to ensure we are not just ignoring other possible issues.

Thanks,
-Alex

[0] https://bugs.launchpad.net/tripleo/+bug/1731063
[1] https://bugs.launchpad.net/tripleo/+bug/1731032

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] need help with tempest failures for Bug 1731063

2017-11-17 Thread Alex Schultz
Hello everyone,

Bug 1731063[0] has been kicking around for almost 10 days now. We're
now seeing something similar to it on scenario003 and will be
switching it to non-voting[1] as soon as the v3 cut over finishes.
This is removing additional test coverage and unless we start seeing
some movement on the critical bugs, I do not think we should continue
merging additional features until these bugs get resolved.  Since we
do not see this bug in Pike, this appears to be a regression and the
most recent review of the logs seems to point to neutron. If some
folks from the networking squad could take a look at the logs and help
that would be great.

Between this one and Bug 1731032[2], CI is randomly unhappy which is
not helping anyone get stuff landed.

Thanks,
-Alex

[0] https://bugs.launchpad.net/tripleo/+bug/1731063
[1] https://review.openstack.org/521205
[2] https://bugs.launchpad.net/tripleo/+bug/1731032

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Migrating TripleO CI in-tree tomorrow - please README

2017-11-17 Thread Alex Schultz
On Thu, Nov 16, 2017 at 11:20 AM, Emilien Macchi  wrote:
> TL;DR: don't approve or recheck any tripleo patch from now, until
> further notice on this thread.
>
> Some good progress has been made on migrating legacy tripleo CI jobs
> to be in-tree:
> https://review.openstack.org/#/q/topic:tripleo/migrate-to-zuulv3
>
> The next steps:
> - Let the current gate to finish their jobs running.
> - Stop approving patches from now, and wait the gate to be done and cleared
> - Alex and I will approve the migration patches tomorrow and we hope
> to have them in the gate by Friday afternoon (US time) when gate isn't
> busy anymore. We'll also have to backport them all.

They have been pushed to the gate. There are a few patches in front of
them before they will hit. Please do not approve anything until the v3
cut over lands as you'll end up with double the amount of jobs running
on your gate patches until the project-config change lands.

Thanks,
-Alex


> - When these patches will be merged (it might take the weekend to
> land, depending how the gate will be), we'll run duplicated jobs until
> https://review.openstack.org/514778 is merged. I'll try to ping
> someone from Infra over the week-end if we can land it, that would be
> great.
> - Once https://review.openstack.org/514778 is merged, people are free
> to do recheck or approve any patches. We hope it should happen over
> the weekend.
> - I'll continue to migrate all other tripleo projects to have in-tree
> layout. On the list: t-p-e, t-i-e, paunch, os-*-config,
> tripleo-validations.
>
> Thanks for your help,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Nominate akrivoka for tripleo-validations core

2017-11-16 Thread Alex Schultz
+1

On Mon, Nov 6, 2017 at 7:32 AM, Honza Pokorny  wrote:
> Hello people,
>
> I would like to nominate Ana Krivokapić (akrivoka) for the core team for
> tripleo-validations.  She has really stepped up her game on that project
> in terms of helpful reviews, and great patches.
>
> With Ana's help as a core, we can get more done, and innovate faster.
>
> If there are no objections within a week, we'll proceed with adding Ana
> to the team.
>
> Thanks
>
> Honza Pokorny
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Please do not approve or recheck anything not related to CI alert bugs

2017-11-15 Thread Alex Schultz
Ok so here's the latest. We've switched scenario001 to non-voting[0]
for now until Bug 1731063[1] can be resolved. We should be OK to start
merging thing in master as the other current issues don't appear to be
affecting the gate significantly as it stands.  We still need to
understand why we're hitting Bug 1731063 and address the problem so we
can revert the non-voting change ASAP.  Scenario001 provides lots of
coverage for TripleO so I do not want to see it non-voting for long.
If scenario001 is failing on your change, please make sure it is not
Bug 1731063 before rechecking or approving.  If you are approving
changes or rechecking and it fails, do not blindly recheck. Please
file a new bug and ping #tripleo so we can make sure we don't have
other things that may affect the gate.

Thanks,
-Alex

[0] https://review.openstack.org/#/c/520155/
[1] https://bugs.launchpad.net/tripleo/+bug/1731063

On Sat, Nov 11, 2017 at 8:47 PM, Alex Schultz <aschu...@redhat.com> wrote:
> Ok so here's the current status of things.  I've gone through some of
> the pending patches and sent them to the gate over the weekend since
> the gate was empty (yay!).  We've managed to land a bunch of patches.
> That being said for any patch for master with scenario jobs, please do
> not recheck/approve. Currently the non-containerized scenario001/004
> jobs are broken due to Bug 1731688[0] (these run on
> tripleo-quickstart-extras/tripleo-ci).  There is a patch[1] out for a
> revert of the breaking change. The scenario001-container job is super
> flaky due to Bug 1731063[2] and we could use some help figuring out
> what's going on.  We're also seeing some issues around heat
> interactions[3][4] but those seems to be less of a problem than the
> previously mentioned bugs.
>
> So at the moment any changes that don't have scenario jobs associated
> with them may be approved/rechecked freely.  We can discuss on Monday
> what to do about the scenario jobs if we still are running into issues
> without a solution in sight.  Also please keep an eye on the gate
> queue[5] and don't approve things if it starts getting excessively
> long.
>
> Thanks,
> -Alex
>
>
> [0] https://bugs.launchpad.net/tripleo/+bug/1731688
> [1] https://review.openstack.org/#/c/519041/
> [2] https://bugs.launchpad.net/tripleo/+bug/1731063
> [3] https://bugs.launchpad.net/tripleo/+bug/1731032
> [4] https://bugs.launchpad.net/tripleo/+bug/1731540
> [5] http://zuulv3.openstack.org/
>
> On Wed, Nov 8, 2017 at 3:39 PM, Alex Schultz <aschu...@redhat.com> wrote:
>> So we have some good news and some bad news.  The good news is that
>> we've managed to get the gate queue[0] under control since we've held
>> off on pushing new things to the gate.  The bad news is that we've
>> still got some random failures occurring during the deployment of
>> master.  Since we're not seeing infra related issues, we should be OK
>> to merge things to stable/* branches.  Unfortunately until we resolve
>> the issues in master[1] we could potentially backup the queue.  Please
>> do not merge things that are not critical bugs.  I would ask that
>> folks please take a look at the open bugs and help figure out what is
>> going wrong. I've created two issues today that I've seen in the gate
>> that we don't appear to have open patches for. One appears to be an
>> issue in the heat deployment process[3] and the other is related to
>> the tempest verification of being able to launch a VM & ssh to it[4].
>>
>> Thanks,
>> -Alex
>>
>> [3] https://bugs.launchpad.net/tripleo/+bug/1731032
>> [4] https://bugs.launchpad.net/tripleo/+bug/1731063
>>
>> On Tue, Nov 7, 2017 at 8:33 AM, Alex Schultz <aschu...@redhat.com> wrote:
>>> Hey Folks
>>>
>>> So we're at 24+ hours again in the gate[0] and the queue only
>>> continues to grow. We currently have 6 ci/alert bugs[1]. Please do not
>>> approve or recheck anything that isn't related to these bugs.  I will
>>> most likely need to go through the queue and abandon everything to
>>> clear it up as we are consistently hitting timeouts on various jobs
>>> which is preventing anything from merging.
>>>
>>> Thanks,
>>> -Alex
>>>
>> [0] http://zuulv3.openstack.org/
>> [1] 
>> https://bugs.launchpad.net/tripleo/+bugs?field.searchtext==-importance%3Alist=NEW%3Alist=CONFIRMED%3Alist=TRIAGED%3Alist=INPROGRESS%3Alist=CRITICAL_option=any=_reporter=_commenter==_subscriber==ci+alert_combinator=ALL_cve.used=_dupes.used=_dupes=on_me.used=_patch.used=_branches.used=_branches=on_no_branches.used=_no_branches=on_blueprints.used=_blueprints=on_no_blueprints.used=_no_blueprints=on=Search

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] [ironic] automatic migration from classic drivers to hardware types?

2017-11-14 Thread Alex Schultz
On Tue, Nov 14, 2017 at 8:10 AM, Dmitry Tantsur  wrote:
> Hi folks!
>
> This was raised several times, now I want to bring it to the wider audience.
> We're planning [1] to deprecate classic drivers in Queens and remove them in
> Rocky. It was pointed at the Forum that we'd better provide an automatic
> migration.
>
> I'd like to hear your opinion on the options:
>
> (1) Migration as part of 'ironic-dbsync upgrade'
>
> Pros:
> * nothing new to do for the operators
>
> Cons:
> * upgrade will fail completely, if for some nodes the matching hardware
> types and/or interfaces are not enabled in ironic.conf
>
> (2) A separate script for migration
>
> Pros:
> * can be done in advance (even while still on Pike)
> * a failure won't fail the whole upgrade
> * will rely on drivers enabled in actually running conductors, not on
> ironic.conf
>
> Cons:
> * a new upgrade action before Rocky
> * won't be available in packaging
> * unclear how to update nodes that are in some process (e.g. cleaning), will
> probably have to be run several times
>
> (3) Migration as part of 'ironic-dbsync online_data_migration'
>
> Pros:
> * nothing new to do for the operators, similar to (1)
> * probably a more natural place to do this than (1)
> * can rely on drivers enabled in actually running conductors, not on
> ironic.conf
>
> Cons:
> * data migration will fail, if for some nodes the matching hardware types
> and/or interfaces are not enabled in ironic.conf
>

Rather than fail in various ways, why not do what nova does with its
pre-upgrade status check[0] and then just handle it in ironic-dbsync
upgrade?  This would allow operators to check, prior to running the
upgrade, what might need to be changed.  Additionally the upgrade
command itself could leverage the status check to fail nicely.
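To make the shape of that concrete, here is a hedged sketch of the kind of
pre-upgrade check being suggested. The mapping, function names, and inputs
are illustrative assumptions, not ironic's real tables or API:

```python
# Hypothetical "status upgrade check" for the classic-driver migration.
# CLASSIC_TO_HARDWARE and the node/driver inputs are invented for
# illustration; they are not ironic's real data or interfaces.
CLASSIC_TO_HARDWARE = {
    'pxe_ipmitool': 'ipmi',
    'pxe_drac': 'idrac',
}

def check_driver_migration(node_drivers, enabled_hardware_types):
    """Return (node, driver) pairs that would block the migration."""
    failures = []
    for node, driver in sorted(node_drivers.items()):
        target = CLASSIC_TO_HARDWARE.get(driver)
        if target is None or target not in enabled_hardware_types:
            failures.append((node, driver))
    return failures
```

'ironic-dbsync upgrade' could then run the same kind of function up front and
abort with a readable list of blocking nodes instead of failing partway
through the migration.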


> (4) Do nothing, let operators handle the migration.
>

Please no.

>
> The most reasonable option for me seems (3), then (4). What do you think?
>

So this was chatted about in relation to some environment tooling we
have, where we currently have the older 'pxe_ipmitool' driver defined
and this will need to switch to 'ipmi'[1]. The issue with the hard
cutover on this one is any tooling which may have been written that
currently works with multiple openstack releases to generate the
required json for ironic will now have to take that into account.  I
know in our case we'll be needing to support newton for longer so
making the tooling openstack aware around this is just further
tech-debt that we'll be creating. Is there a better solution that
could be done either in ironic client or in the API to gracefully
handle this transition for a longer period of time?  I think this may
be one of those decisions that has a far reaching impact on
deployers/operators due changes they will have to make to support
multiple versions or as they upgrade between versions and they aren't
fully aware of yet as many may not be on Ocata.  This change seems
like it has a high UX impact and IMHO should be done very carefully.

Thanks,
-Alex

[0] https://docs.openstack.org/nova/pike/cli/nova-status.html
[1] 
http://eavesdrop.openstack.org/irclogs/%23tripleo/%23tripleo.2017-11-14.log.html#t2017-11-14T15:36:45


> Dmitry
>
> [1]
> http://specs.openstack.org/openstack/ironic-specs/specs/approved/classic-drivers-future.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [ironic] automatic migration from classic drivers to hardware types?

2017-11-14 Thread Alex Schultz
On Tue, Nov 14, 2017 at 8:10 AM, Dmitry Tantsur  wrote:
> Hi folks!
>
> This was raised several times, now I want to bring it to the wider audience.
> We're planning [1] to deprecate classic drivers in Queens and remove them in
> Rocky. It was pointed at the Forum that we'd better provide an automatic
> migration.
>
> I'd like to hear your opinion on the options:
>
> (1) Migration as part of 'ironic-dbsync upgrade'
>
> Pros:
> * nothing new to do for the operators
>
> Cons:
> * upgrade will fail completely, if for some nodes the matching hardware
> types and/or interfaces are not enabled in ironic.conf
>
> (2) A separate script for migration
>
> Pros:
> * can be done in advance (even while still on Pike)
> * a failure won't fail the whole upgrade
> * will rely on drivers enabled in actually running conductors, not on
> ironic.conf
>
> Cons:
> * a new upgrade action before Rocky
> * won't be available in packaging
> * unclear how to update nodes that are in some process (e.g. cleaning), will
> probably have to be run several times
>
> (3) Migration as part of 'ironic-dbsync online_data_migration'
>
> Pros:
> * nothing new to do for the operators, similar to (1)
> * probably a more natural place to do this than (1)
> * can rely on drivers enabled in actually running conductors, not on
> ironic.conf
>
> Cons:
> * data migration will fail, if for some nodes the matching hardware types
> and/or interfaces are not enabled in ironic.conf
>

Rather than fail in various ways, why not do what nova does with its
pre-upgrade status check[0] and then just handle it in ironic-dbsync
upgrade?  This would allow operators to check, prior to running the
upgrade, what might need to be changed.  Additionally the upgrade
command itself could leverage the status check to fail nicely.


> (4) Do nothing, let operators handle the migration.
>

Please no.

>
> The most reasonable option for me seems (3), then (4). What do you think?
>

So this was chatted about in relation to some environment tooling we
have, where we currently have the older 'pxe_ipmitool' driver defined
and this will need to switch to 'ipmi'[1]. The issue with the hard
cutover on this one is any tooling which may have been written that
currently works with multiple openstack releases to generate the
required json for ironic will now have to take that into account.  I
know in our case we'll be needing to support newton for longer so
making the tooling openstack aware around this is just further
tech-debt that we'll be creating. Is there a better solution that
could be done either in ironic client or in the API to gracefully
handle this transition for a longer period of time?  I think this may
be one of those decisions that has a far reaching impact on
deployers/operators due changes they will have to make to support
multiple versions or as they upgrade between versions and they aren't
fully aware of yet as many may not be on Ocata.  This change seems
like it has a high UX impact and IMHO should be done very carefully.

Thanks,
-Alex

[0] https://docs.openstack.org/nova/pike/cli/nova-status.html
[1] 
http://eavesdrop.openstack.org/irclogs/%23tripleo/%23tripleo.2017-11-14.log.html#t2017-11-14T15:36:45


> Dmitry
>
> [1]
> http://specs.openstack.org/openstack/ironic-specs/specs/approved/classic-drivers-future.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures] Release of openstack/puppet-swift failed

2017-11-13 Thread Alex Schultz
On Mon, Nov 13, 2017 at 8:02 AM, Doug Hellmann  wrote:
> Excerpts from zuul's message of 2017-11-13 09:24:15 +:
>> Build failed.
>>
>> - release-openstack-puppet 
>> http://logs.openstack.org/08/087081b12493632186b60dc2c5ab6c95ade61ef6/release/release-openstack-puppet/a7bbd5c/
>>  : POST_FAILURE in 1m 33s
>> - announce-release announce-release : SKIPPED
>>
>
> Another missing file:
>
> 2017-11-13 09:23:42.407283 | TASK [Build puppet module]
> 2017-11-13 09:23:43.527939 | ubuntu-xenial | ERROR
> 2017-11-13 09:23:43.528350 | ubuntu-xenial | {
> 2017-11-13 09:23:43.528449 | ubuntu-xenial |   "failed": true,
> 2017-11-13 09:23:43.528538 | ubuntu-xenial |   "msg": "[Errno 2] No such file 
> or directory",
> 2017-11-13 09:23:43.528632 | ubuntu-xenial |   "rc": 2
> 2017-11-13 09:23:43.528716 | ubuntu-xenial | }
>

So I think this is because the release hash is missing the change that
included the bindep.txt to pull in puppet.  Same for the other module
failures.  Have these been tagged or should we resubmit with updated
hashes for the failed modules?
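For context, the missing change is the one adding a bindep.txt so the build
node has puppet available; an illustrative entry of that kind (the real file
may differ) would be:

```
# Illustrative bindep.txt entry; see the actual change for the real contents.
puppet
```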

Thanks,
-Alex

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Please do not approve or recheck anything not related to CI alert bugs

2017-11-11 Thread Alex Schultz
Ok so here's the current status of things.  I've gone through some of
the pending patches and sent them to the gate over the weekend since
the gate was empty (yay!).  We've managed to land a bunch of patches.
That being said for any patch for master with scenario jobs, please do
not recheck/approve. Currently the non-containerized scenario001/004
jobs are broken due to Bug 1731688[0] (these run on
tripleo-quickstart-extras/tripleo-ci).  There is a patch[1] out for a
revert of the breaking change. The scenario001-container job is super
flaky due to Bug 1731063[2] and we could use some help figuring out
what's going on.  We're also seeing some issues around heat
interactions[3][4] but those seems to be less of a problem than the
previously mentioned bugs.

So at the moment any changes that don't have scenario jobs associated
with them may be approved/rechecked freely.  We can discuss on Monday
what to do about the scenario jobs if we still are running into issues
without a solution in sight.  Also please keep an eye on the gate
queue[5] and don't approve things if it starts getting excessively
long.

Thanks,
-Alex


[0] https://bugs.launchpad.net/tripleo/+bug/1731688
[1] https://review.openstack.org/#/c/519041/
[2] https://bugs.launchpad.net/tripleo/+bug/1731063
[3] https://bugs.launchpad.net/tripleo/+bug/1731032
[4] https://bugs.launchpad.net/tripleo/+bug/1731540
[5] http://zuulv3.openstack.org/

On Wed, Nov 8, 2017 at 3:39 PM, Alex Schultz <aschu...@redhat.com> wrote:
> So we have some good news and some bad news.  The good news is that
> we've managed to get the gate queue[0] under control since we've held
> off on pushing new things to the gate.  The bad news is that we've
> still got some random failures occurring during the deployment of
> master.  Since we're not seeing infra related issues, we should be OK
> to merge things to stable/* branches.  Unfortunately until we resolve
> the issues in master[1] we could potentially back up the queue.  Please
> do not merge things that are not critical bugs.  I would ask that
> folks please take a look at the open bugs and help figure out what is
> going wrong. I've created two issues today that I've seen in the gate
> that we don't appear to have open patches for. One appears to be an
> issue in the heat deployment process[3] and the other is related to
> the tempest verification of being able to launch a VM & ssh to it[4].
>
> Thanks,
> -Alex
>
> [3] https://bugs.launchpad.net/tripleo/+bug/1731032
> [4] https://bugs.launchpad.net/tripleo/+bug/1731063
>
> On Tue, Nov 7, 2017 at 8:33 AM, Alex Schultz <aschu...@redhat.com> wrote:
>> Hey Folks
>>
>> So we're at 24+ hours again in the gate[0] and the queue only
>> continues to grow. We currently have 6 ci/alert bugs[1]. Please do not
>> approve or recheck anything that isn't related to these bugs.  I will
>> most likely need to go through the queue and abandon everything to
>> clear it up as we are consistently hitting timeouts on various jobs
>> which is preventing anything from merging.
>>
>> Thanks,
>> -Alex
>>
> [0] http://zuulv3.openstack.org/
> [1] 
> https://bugs.launchpad.net/tripleo/+bugs?field.searchtext==-importance%3Alist=NEW%3Alist=CONFIRMED%3Alist=TRIAGED%3Alist=INPROGRESS%3Alist=CRITICAL_option=any=_reporter=_commenter==_subscriber==ci+alert_combinator=ALL_cve.used=_dupes.used=_dupes=on_me.used=_patch.used=_branches.used=_branches=on_no_branches.used=_no_branches=on_blueprints.used=_blueprints=on_no_blueprints.used=_no_blueprints=on=Search

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [puppet] keystone.conf and 'federation/trusted_dashboard' (multi valued)

2017-11-10 Thread Alex Schultz
On Fri, Nov 10, 2017 at 12:45 PM, Red Cricket
 wrote:
> Hi,
>
> I am using https://github.com/openstack/puppet-keystone (stable/newton
> branch) and we would like to implement a design that uses federation openid.
>
> As part of this design I need to add these lines to the keystone.conf file:
>
> [federation]
> ...
> trusted_dashboard = https://example.com/auth/websso
> trusted_dashboard = https://example.com/dashboard/auth/websso/
>
> I have attempted to use this yaml in my hiera data ...
>
> keystone::config::keystone_config:
> ...
> 'federation/trusted_dashboard':
>   value: "https://example.com/auth/websso"
> 'federation/trusted_dashboard':
>   value: "https://example.com/dashboard/auth/websso/"
>
> ... and some other various, but the resulting keystone.conf only gets the
> second federation/trusted_dashboard setting:
>
> keystone::config::keystone_config:
> ...
> 'federation/trusted_dashboard':
>   value: "https://example.com/dashboard/auth/websso/"
>
> If you could tell what I am doing wrong I'd appreciate it, but I suspect
> that the puppet-keystone module does not support
> 'federation/trusted_dashboard' (multi valued).
>

It appears from our other implementations that it can be a
comma-separated value.

https://github.com/openstack/puppet-keystone/blob/41f12aa800d46f914869618bd7afd6ccc4a4fa98/manifests/federation/mellon.pp#L114

So you may just try

 'federation/trusted_dashboard':
   value: "https://example.com/auth/websso,https://example.com/dashboard/auth/websso/"
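In case it helps explain why only one line survived: this is ordinary mapping semantics, not anything puppet-specific. A YAML (or Python) mapping keeps only the last value for a repeated key, so the first trusted_dashboard entry is silently dropped before puppet ever sees it, whereas a single comma-separated string survives intact. A quick illustrative sketch in plain Python (hypothetical, not hiera/puppet internals):

```python
# Hypothetical illustration: a mapping keeps only the LAST value for a
# repeated key, which is why only the second trusted_dashboard entry
# made it into keystone.conf.
settings = {
    'federation/trusted_dashboard': 'https://example.com/auth/websso',
    'federation/trusted_dashboard': 'https://example.com/dashboard/auth/websso/',
}
print(len(settings))                  # 1 -- the first URL was silently dropped
print(next(iter(settings.values())))  # only the second URL remains

# The workaround: hand puppet-keystone one comma-separated string.
value = ','.join([
    'https://example.com/auth/websso',
    'https://example.com/dashboard/auth/websso/',
])
print(value)
```

Whether keystone then splits that joined string back into two trusted dashboards depends on how the option is declared in oslo.config, so verify the resulting keystone.conf after the puppet run.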

Thanks,
-Alex
> Thank you.
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [tripleo] Nominate chem and matbu for tripleo-core !

2017-11-09 Thread Alex Schultz
+1

On Thu, Nov 9, 2017 at 1:48 AM, Marios Andreou  wrote:
> Hello fellow owls,
> (apologies for duplicate, forgot to add the tripleo in subject so worried
> it would be missed)
>
> I would like to nominate (and imo these are both long overdue already):
>
> Sofer Athlan Guyot (chem)  and
>
> Mathieu Bultel (matbu)
>
> to tripleo-core. They have both made many many core contributions to the
> upgrades & updates over the last 3 cycles touching many of the tripleo repos
> (tripleo-heat-templates, tripleo-common, python-tripleoclient, tripleo-ci,
> tripleo-docs and others tripleo-quickstart/extras too unless am mistaken).
>
> IMO their efforts and contributions are invaluable for the upgrades squad
> (and beyond - see openstack overcloud config download for example) and we
> will be very lucky to have them as fully voting cores.
>
> Please vote with +1 or -1 for either or both chem and matbu - I'll keep it
> open for a week as customary,
>
> thank you,
>
> marios
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Please do not approve or recheck anything not related to CI alert bugs

2017-11-08 Thread Alex Schultz
So we have some good news and some bad news.  The good news is that
we've managed to get the gate queue[0] under control since we've held
off on pushing new things to the gate.  The bad news is that we've
still got some random failures occurring during the deployment of
master.  Since we're not seeing infra related issues, we should be OK
to merge things to stable/* branches.  Unfortunately until we resolve
the issues in master[1] we could potentially back up the queue.  Please
do not merge things that are not critical bugs.  I would ask that
folks please take a look at the open bugs and help figure out what is
going wrong. I've created two issues today that I've seen in the gate
that we don't appear to have open patches for. One appears to be an
issue in the heat deployment process[3] and the other is related to
the tempest verification of being able to launch a VM & ssh to it[4].

Thanks,
-Alex

[3] https://bugs.launchpad.net/tripleo/+bug/1731032
[4] https://bugs.launchpad.net/tripleo/+bug/1731063

On Tue, Nov 7, 2017 at 8:33 AM, Alex Schultz <aschu...@redhat.com> wrote:
> Hey Folks
>
> So we're at 24+ hours again in the gate[0] and the queue only
> continues to grow. We currently have 6 ci/alert bugs[1]. Please do not
> approve or recheck anything that isn't related to these bugs.  I will
> most likely need to go through the queue and abandon everything to
> clear it up as we are consistently hitting timeouts on various jobs
> which is preventing anything from merging.
>
> Thanks,
> -Alex
>
[0] http://zuulv3.openstack.org/
[1] 
https://bugs.launchpad.net/tripleo/+bugs?field.searchtext==-importance%3Alist=NEW%3Alist=CONFIRMED%3Alist=TRIAGED%3Alist=INPROGRESS%3Alist=CRITICAL_option=any=_reporter=_commenter==_subscriber==ci+alert_combinator=ALL_cve.used=_dupes.used=_dupes=on_me.used=_patch.used=_branches.used=_branches=on_no_branches.used=_no_branches=on_blueprints.used=_blueprints=on_no_blueprints.used=_no_blueprints=on=Search



Re: [openstack-dev] [tripleo] Proposing John Fulton core on TripleO

2017-11-08 Thread Alex Schultz
On Wed, Nov 8, 2017 at 3:24 PM, Giulio Fidente  wrote:
> Hi,
>
> I would like to propose John Fulton core on TripleO.
>
> I think John did an awesome work during the Pike cycle around the
> integration of ceph-ansible as a replacement for puppet-ceph, for the
> deployment of Ceph in containers.
>
> I think John has good understanding of many different parts of TripleO
> given that the ceph-ansible integration has been a complicated effort
> involving changes in heat/tht/mistral workflows/ci and last but not
> least, docs and he is more recently getting busier with reviews outside
> his main comfort zone.
>
> I am sure John would be a great addition to the team and I welcome him
> first to tune into radioparadise with the rest of us when joining #tripleo
>

+1. Excellent work with the ceph-ansible items.

> Feedback is welcomed!
> --
> Giulio Fidente
> GPG KEY: 08D733BA
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-11-08 Thread Alex Schultz
On Tue, Nov 7, 2017 at 2:59 PM, Emilien Macchi  wrote:
> On Wed, Nov 8, 2017 at 3:30 AM, James Slagle  wrote:
>> On Sun, Nov 5, 2017 at 7:01 PM, Emilien Macchi  wrote:
>>> On Mon, Oct 2, 2017 at 5:02 AM, Dan Prince  wrote:
>>> [...]
>>>
  -CI resources: better use of CI resources. At the PTG we received
 feedback from the OpenStack infrastructure team that our upstream CI
 resource usage is quite high at times (even as high as 50% of the
 total). Because of the shared framework and single node capabilities we
 can re-architecture much of our upstream CI matrix around single node.
 We no longer require multinode jobs to be able to test many of the
 services in tripleo-heat-templates... we can just use a single cloud VM
 instead. We'll still want multinode undercloud -> overcloud jobs for
 testing things like HA and baremetal provisioning. But we can cover a
 large set of the services (in particular many of the new scenario jobs
 we added in Pike) with single node CI test runs in much less time.
>>>
>>> After the last (terrible) weeks in CI, it's pretty clear we need to
>>> find a solution to reduce and optimize our testing.
>>> I'm now really convinced by switching our current scenarios jobs to
>>> NOT deploy the overcloud, and just an undercloud with composable
>>> services & run tempest.
>>
>> +1 if you mean just the scenarios.
>
> Yes, just scenarios.
>
>> I think we need to keep at least 1 multinode job voting that deploys
>> the overcloud, probably containers-multinode.
>
> Yes, exactly, and also work on optimizing OVB jobs (maybe just keep
> one or 2 jobs, instead 3).
>
>>> Benefits:
>>> - deploy 1 node instead of 2 nodes, so we save nodepool resources
>>> - faster (no overcloud)
>>> - reduce gate queue time, faster development process, faster CI
>>>
>>> Challenges:
>>> - keep overcloud testing, with OVB
>>
>> This is why I'm not sure what you're proposing. Do you mean switch all
>> multinode jobs to be just an undercloud, or just the scenarios?
>
> Keep 1 or 2 OVB jobs, to test ironic + mistral + HA (HA could be
> tested with multinode though but well).
>
>>> - reduce OVB to strict minimum: Ironic, Nova, Mistral and basic
>>> containerized services on overcloud.
>>>
>>> I really want to get consensus on these points, please raise your
>>> voice now before we engage some work on that front.
>>
>> I'm fine to optimize the scenarios to be undercloud driven, but feel
>> we still need a multinode job that deploys the overcloud in the gate.
>> Otherwise, we'll have nothing that deploys an overcloud in the gate,
>> which is a step in the wrong direction imo. Primarily, b/c of the loss
>> of coverage around mistral and all of our workflows. Perhaps down the
>> road we could find ways to optimize that by using an ephemeral Mistral
>> (similar to the ephemeral Heat container), and then use a single node,
>> but we're not there yet.
>>
>> On the other hand, if the goal is just to test less upstream so that
>> we can more quickly merge code, then *not* deploying an overcloud in
>> the gate at all seems to fit that goal. Is that what you're after?
>
> Yes. Thanks for reformulate with better words.
> Just to be clear, I want to transform the scenarios into single-node
> jobs that deploy the SAME services (using composable services) from
> the undercloud, using the new ansible installer. I also want to keep
> running Tempest.
> And of course, like we said, keep one multinode job to test overcloud
> workflow, and OVB with some adjustments.
>

So I'm OK with switching to the containerized undercloud deploy to
smoke-test the functionality of more complex OpenStack service
deployments. What I would like to see prior to investing in this is
that the plain containerized undercloud deploy job reliability is on
par with the existing undercloud install.  We had to switch the
undercloud-containers back to non-voting due to higher failure rates
and it is still not voting.  With the current state of CI being
questionable due to random failures which have not been fully resolved,
I would prefer that we ensure existing CI is stable and that what we
plan to move is as stable.

Additionally, I think we need to ensure that the OVB jobs that still run
the full deployment process become voting by switching to 3rd-party CI.

Thanks,
-Alex

> Is it good?
>
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Please do not approve or recheck anything not related to CI alert bugs

2017-11-07 Thread Alex Schultz
Hey Folks

So we're at 24+ hours again in the gate[0] and the queue only
continues to grow. We currently have 6 ci/alert bugs[1]. Please do not
approve of recheck anything that isn't related to these bugs.  I will
most likely need to go through the queue and abandon everything to
clear it up as we are consistently hitting timeouts on various jobs
which is preventing anything from merging.

Thanks,
-Alex

[0] http://zuulv3.openstack.org/
[1] 
https://bugs.launchpad.net/tripleo/+bugs?field.searchtext==-importance%3Alist=NEW%3Alist=CONFIRMED%3Alist=TRIAGED%3Alist=INPROGRESS%3Alist=CRITICAL_option=any=_reporter=_commenter==_subscriber==ci+alert_combinator=ALL_cve.used=_dupes.used=_dupes=on_me.used=_patch.used=_branches.used=_branches=on_no_branches.used=_no_branches=on_blueprints.used=_blueprints=on_no_blueprints.used=_no_blueprints=on=Search



Re: [openstack-dev] [tripleo] legacy-tripleo-ci-centos-7-undercloud-containers job needs attention

2017-11-03 Thread Alex Schultz
On Thu, Nov 2, 2017 at 7:22 PM, Wesley Hayutin  wrote:
> Greetings,
>
> We noticed the job  legacy-tripleo-ci-centos-7-undercloud-containers failing
> in the gate in the following patches [1], [2].  The job was voting in the
> review here [3].  ATM the job is non-voting in check and voting in the gate.
> This is a dangerous combination where the job can fail in check unnoticed
> and also fail in the gate.  This can cause the gate to reset often and delay
> other patches from merging.
>
> We either need the job to become voting in check, or removed from the gate.
> Either action is fine but needs to be taken immediately.
>
> Looking at some stats for the job itself comparing the containerized
> undercloud vs. the old non-containerized job via [4].
>
> legacy-tripleo-ci-centos-7-undercloud-containers
> pass rate overall: 78%  as of 11/2/2017
>
> legacy-tripleo-ci-centos-7-undercloud-oooq
> pass rate overall: 92.6% as of 11/2/2017
>

The revert is https://review.openstack.org/#/c/517643/. I would like
to see this back voting again ASAP since we want to move to this
method for Queens. That being said, we need to track the stability of
this job better to ensure it's on par with the existing legacy one.

Thanks,
-Alex

> Thanks for reading through this and for helping out in advance!
>
> [1] https://review.openstack.org/#/c/514576/
> [2] https://review.openstack.org/#/c/517023/
> [3] https://review.openstack.org/#/c/513163/
> [4] http://cistatus.tripleo.org/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [oslo][tripleo][puppet] Logging format: let's discuss a bit about default format, format configuration and so on

2017-11-03 Thread Alex Schultz
On Fri, Nov 3, 2017 at 12:21 AM, Cédric Jeanneret
 wrote:
> On 11/02/2017 05:18 PM, Ben Nemec wrote:
>> Adding tags for the relevant projects.  I _think_ this is mostly
>> directed at Puppet/TripleO, but Oslo is obviously relevant as well.
>
> Thank you! First mail in there, wasn't really sure how to do that.
>
>>
>> On 11/01/2017 08:54 AM, Cédric Jeanneret wrote:
>>> Dear Stackers,
>>>
>>> While working on my locally deployed Openstack (Pike using TripleO), I
>>> was a bit struggling with the logging part. Currently, all logs are
>>> pushed to per-service files, in the standard format "one line per
>>> entry", plain flat text.
>>>
>>> It's nice, but if one is wanting to push and index those logs in an ELK,
>>> the current, default format isn't really good.
>>>
>>> After some discussions about oslo.log, it appears it provides a nice
>>> JSONFormatter handler¹ one might want to use for each (python) service
>>> using oslo central library.
>>>
>>> A JSON format is really cool, as it's easy to parse for machines, and it
>>> can be on a multi-line without any bit issue - this is especially
>>> important for stack traces, as their output is multi-line without real
>>> way to have a common delimiter so that we can re-format it and feed it
>>> to any log parser (logstash, fluentd, …).
>>>
>>> After some more talks, oslo.log will not provide a unified interface in
>>> order to output all received logs as JSON - this makes sense, as that
>>> would mean "rewrite almost all the python logging management
>>> interface"², and that's pretty useless, since (all?) services have their
>>> own "logging.conf" file.
>>>
>>> That said… to the main purpose of that mail:
>>>
>>> - Default format for logs
>>> A first question would be "are we all OK with the default output format"
>>> - I'm pretty sure "humans" are happy with that, as it's really
>>> convenient to read and grep. But on a "standard" Openstack deploy, I'm
>>> pretty sure one does not have only one controller, one ceph node and one
>>> compute. Hence comes the log centralization, and with that, the log
>>> indexation and treatments.
>>>
>>> For that, one might argue "I'm using plain files on my logger, and
>>> grep-ing -r in them". That's a way to do things, and for that, plain,
>>> flat logs are great.
>>>
>>> But… I'm pretty sure I'm not the only one wanting to use some kind of
>>> ELK cluster for that kind of purpose. So the right question is: what
>>> about switching the default log format to JSON? On my part, I don't see
>>> "cons", only "pros", but my judgment is of course biased, as I'm "alone
>>> in my corner". But what about you, Community?
>>
>> The major con I see is that we don't require an ELK stack and reading
>> JSON logs if you don't have one of those is probably worse, although I
>> will admit I haven't spent much time reading our JSON-formatted logs. My
>> experience with things that log in JSON has not generally been positive
>> though.  It's just not as human-readable.
>
> We're on the same line on that - hence the following proposal. I was
> pretty sure switching the default format was a bad thing, but I had to
> submit it in order to be complete ;).
> Let's focus on the other one, as it's more friendly for everyone.
>
>>
>> The other problem with changing log format defaults is that many people
>> have already set up filters and processing based on the existing log
>> format.  There are significant user impacts to changing that default.
>>
>>>
>>> - Provide a way to configure the output format/handler
>>> While poking around in the puppet modules code, I didn't find any way to
>>> set the output handler for the logs. For example, in puppet-nova³ we can
>>> set a lot of things, but not the useful handler for the output.
>>>
>>> It would be really cool to get, for each puppet module, the capability
>>> to set the handler so that one can just push some stuff in hiera, and
>>> Voilà, we have JSON logs.
>>>
>>> Doing so would allow people to chose between the default  (current)
>>> output, and something more "computable".
>>
>> I think we should do this regardless.  There are valid arguments for
>> people to want both log formats IMHO and we should allow people to use
>> what they want.
>
> If I understand the things correctly, that would require a "small"
> change in every puppet modules so that it configures the service with
> the proper log output. Any way to automate something on that? It might
> be worth to do some PoC on a specific module maybe?
>

So for the puppet modules, the available logging configuration lives
in puppet-oslo, so if there's a configuration option there that needs to
be updated[0], then that is the first change. From there you would need
to go through each module to expose this new configuration in the
logging classes. For example puppet-nova[1].  You could probably write
a script to modify each but I'm not sure they are completely
consistent between classes. I think they are but it would 
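To make the JSON-handler discussion above more concrete, here is a minimal, stdlib-only sketch of the kind of formatter oslo.log's JSONFormatter provides. This is only a stand-in for illustration — the real oslo_log.formatters.JSONFormatter emits many more fields (request context, traceback, hostname, and so on) — but it shows the key property: a multi-line message such as a stack trace becomes a single machine-parseable record.

```python
import json
import logging

class JSONFormatter(logging.Formatter):
    """Minimal stand-in for oslo_log.formatters.JSONFormatter.

    The real formatter emits many more fields; this only
    illustrates the idea of one JSON object per log record.
    """

    def format(self, record):
        # A multi-line message (e.g. a stack trace) stays inside a
        # single JSON entry, with the newlines escaped by json.dumps.
        return json.dumps({
            "name": record.name,
            "levelname": record.levelname,
            "message": record.getMessage(),
        })

# Wire it up the same way a logging.conf [formatter_...] section would.
handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
log = logging.getLogger("demo")
log.addHandler(handler)
log.warning("line one\nline two of the same record")
```

Pointing a service's logging.conf formatter class at oslo.log's JSONFormatter achieves the same effect per service, which is why exposing that knob through puppet-oslo and the per-module logging classes would be enough — no oslo.log rewrite needed.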

[openstack-dev] [tripleo] CI Status & Where to find your patch's status in CI

2017-10-19 Thread Alex Schultz
Hey Folks,

So the gate queue is quite backed up due to various reasons. If your
patch has been approved but you're uncertain of the CI status please,
please, please check the dashboard[0] before doing anything.  Do not
rebase or recheck things currently in a queue somewhere. When you
rebase a patch that's in the gate queue it will reset every job behind
it and restart the jobs for that change.

I've noticed that due to various restarts we did lose some comments on
things that are actually in the gate but there was no update in
gerrit. So please take some time and check out the dashboard if you are
not certain if it's currently being checked.

Thanks,
-Alex


[0] http://zuulv3.openstack.org/



Re: [openstack-dev] [keystone][all] v2.0 API removal

2017-10-19 Thread Alex Schultz
On Thu, Oct 19, 2017 at 11:49 AM, Lance Bragstad <lbrags...@gmail.com> wrote:
> Yeah - we specifically talked about this in a recent meeting [0]. We
> will be more verbose about this in the future.
>

I'm glad to see a review of this. In reading the meeting logs, I
understand it was well communicated that the API was going to go away
at some point. Yes, we all knew it was coming, but the exact time of
impact wasn't known outside of Keystone.  Also, saying "oh, it works in
devstack" is not enough when you do something this major.  So a "FYI,
patches to remove v2.0 will start landing next week (or today)" is more
what would have been helpful for the devs who consume master.  It
dramatically shortens the time spent debugging failures if you have an
idea about when something major changes and then we don't have to go
through git logs/gerrit to figure out what happened :)

IMHO when large efforts that affect the usage of your service are
going to start to land, it's good to send a note before landing those
patches. Or at least at the same time. Anyway I hope other projects
will also follow a similar pattern when they ultimately need to do
something like this in the future.

Thanks,
-Alex

>
> [0]
> http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-10-10-18.00.log.html#l-107
>
> On 10/19/2017 12:00 PM, Alex Schultz wrote:
>> On Thu, Oct 19, 2017 at 10:08 AM, Lance Bragstad <lbrags...@gmail.com> wrote:
>>> Hey all,
>>>
>>> Now that we're finishing up the last few bits of v2.0 removal, I'd like to
>>> send out a reminder that Queens will not include the v2.0 keystone APIs
>>> except the ec2-api. Authentication and validation of v2.0 tokens has been
>>> removed (in addition to the public and admin APIs) after a lengthy
>>> deprecation period.
>>>
>> In the future can we have a notice before the actual code removal
>> starts?  We've been battling various places where we thought we had
>> converted to v3 only to find out we hadn't correctly done so because
>> it used to just 'work' and the only way we know now is that CI blew up.
>> A heads up on the ML probably wouldn't have lessened the pain in this
>> instance but at least we might have been able to pinpoint the exact
>> problem quicker.
>>
>> Thanks,
>> -Alex
>>
>>
>>> Let us know if you have any questions.
>>>
>>> Thanks!
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>



Re: [openstack-dev] [keystone][all] v2.0 API removal

2017-10-19 Thread Alex Schultz
On Thu, Oct 19, 2017 at 10:08 AM, Lance Bragstad  wrote:
> Hey all,
>
> Now that we're finishing up the last few bits of v2.0 removal, I'd like to
> send out a reminder that Queens will not include the v2.0 keystone APIs
> except the ec2-api. Authentication and validation of v2.0 tokens has been
> removed (in addition to the public and admin APIs) after a lengthy
> deprecation period.
>

In the future can we have a notice before the actual code removal
starts?  We've been battling various places where we thought we had
converted to v3 only to find out we hadn't correctly done so because
it used to just 'work' and the only way we know now is that CI blew up.
A heads up on the ML probably wouldn't have lessened the pain in this
instance but at least we might have been able to pinpoint the exact
problem quicker.

Thanks,
-Alex


> Let us know if you have any questions.
>
> Thanks!
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [tc] [stable] [tripleo] [kolla] [ansible] [puppet] Proposing changes in stable policy for installers

2017-10-16 Thread Alex Schultz
On Mon, Oct 16, 2017 at 7:33 AM, Steven Dake (stdake)  wrote:
> Emilien,
>
> I generally thought the stable policy seemed reasonable enough for lifecycle 
> management tools.  I’m not sure what specific problems you had in TripleO 
> although I did read your review.  Kolla was just tagged with the stable 
> policy, and TMK, we haven’t run into trouble yet, although the Kolla project 
> is stable and has been following the stable policy for about 18 months.  If 
> the requirements are watered down, the tag could potentially be meaningless.  
> We haven’t experienced this specific tag enough to know if it needs some 
> refinement for the specific use case of lifecycle management tools.  That 
> said, the follows release policy was created to handle the special case of 
> lifecycle management tool’s upstream sources not being ready for lifecycle 
> management tools to release at one coordinated release time.
>
> Kollians?  Any problems thus far with the stable policy?
>
> Cheers
> -steve
>

I'm not a Kolla person, but from the Puppet OpenStack standpoint we
haven't been able to follow the stable policy because we can't
guarantee complete configuration coverage for all the services. So
while we don't backport breaking changes (i.e. removing parameters
from resources), we do have to backport additions (adding params to
resources, etc.) as folks start trying to use the upstream services.
People are not necessarily following master in their deployments; for
example, there's a significant lag from operators who start trying to
upgrade to Newton about the time we're releasing Pike. These types of
additions could be seen as features because we didn't know in the
previous cycle that we had to add additional code to support the use
case. Generally we're supporting our basic scenarios (which are pretty
extensive), but there are end-user cases we don't test on a regular
basis which pop up from time to time, where it would help to be able
to say we support a 'stable-policy' but will backport non-breaking
changes if necessary.

Thanks,
-Alex

>
> On 10/16/17, 4:27 AM, "Thierry Carrez"  wrote:
>
> Emilien Macchi wrote:
> > [...]
> > ## Proposal
> >
> > Proposal 1: create a new policy that fits for projects like installers.
> > I kicked-off something here: https://review.openstack.org/#/c/511968/
> > (open for feedback).
> > Content can be read here:
> > 
> http://docs-draft.openstack.org/68/511968/1/check/gate-project-team-guide-docs-ubuntu-xenial/1a5b40e//doc/build/html/stable-branches.html#support-phases
> > Tag created here: https://review.openstack.org/#/c/511969/ (same,
> > please review).
> >
> > The idea is really to not touch the current stable policy and create a
> > new one, more "relax" that suits well for projects like installers.
> >
> > Proposal 2: change the current policy and be more relax for projects
> > like installers.
> > I haven't worked on this proposal while it was something I was
> > considering doing first, because I realized it could bring confusion
> > in which projects actually follow the real stable policy and the ones
> > who have exceptions.
> > That's why I thought having a dedicated tag would help to separate them.
> >
> > Proposal 3: no change anywhere, projects like installer can't claim
> > stability etiquette (not my best option in my opinion).
> >
> > Anyway, feedback is welcome, I'm now listening. If you work on Kolla,
> > TripleO, OpenStack-Ansible, PuppetOpenStack (or any project who has
> > this need), please get involved in the review process.
>
> My preference goes to proposal 1, however rather than call it "relaxed"
> I would make it specific to deployment/lifecycle or cycle-trailing
> projects.
>
> Ideally this policy could get adopted by any such project. The
> discussion started on the review and it's going well, so let's see where
> it goes :)
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [puppet] Proposing Alfredo Moralejo core on puppet-openstack-integration

2017-10-13 Thread Alex Schultz
+1 thanks for the great contributions

On Fri, Oct 13, 2017 at 11:49 AM, Mohammed Naser  wrote:
>
>
> On Fri, Oct 13, 2017 at 1:21 PM, Emilien Macchi  wrote:
>>
>> Alfredo has been doing an incredible work on maintaining Puppet
>> OpenStack CI; by always testing OpenStack from trunk and taking care
>> of issues. He has been involved in fixing the actual CI problems in
>> OpenStack projects but also maintaining puppet-openstack-integration
>> repository in a consistent and solid manner.
>> Also, he has an excellent understanding how things work in this
>> project and I would like to propose him part of p-o-i maintainers
>> (among the rest of the whole core team and also dmsimard).
>
>
> Indeed, Alfredo has helped us a lot with assistance from the packaging
> side of things, making sure that working packages are released, and
> proposing fixes for issues that block promotion of packages to avoid
> breaking our CI.
>
> +2 from me :)
>
>>
>>
>> As usual, feel free to vote and give feedback.
>>
>> Thanks,
>> --
>> Emilien Macchi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Reminder about specs for Queens

2017-10-11 Thread Alex Schultz
Hey folks,

As we approach milestone 1 for Queens, we need to be mindful of the
open specs[0]. Please take some time to review the posted specs. Also
if you need to propose a new spec, please do so as soon as possible.
The goal for this cycle is to close the spec repo at milestone 2.  The
hope is to use the time after milestone 2 for additional stabilization
and hardening so we would like to get as much merged prior to
milestone 2 as possible.

If you have any questions or issues with this plan, please let us
know. We chatted about this a bit the last IRC meeting[1] but we can
certainly review this plan as necessary.

Thanks,
-Alex

[0] https://review.openstack.org/#/q/project:openstack/tripleo-specs+status:open
[1] 
http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-10-10-14.00.log.html#l-315

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Repo structure for ansible-k8s-roles-* under TripleO's umbrella

2017-10-10 Thread Alex Schultz
On Tue, Oct 10, 2017 at 5:24 AM, Flavio Percoco  wrote:
> On 09/10/17 12:41 -0700, Emilien Macchi wrote:
>>
>> On Mon, Oct 9, 2017 at 2:29 AM, Flavio Percoco  wrote:
>> [...]
>>>
>>> 1. A repo per role: Each role would have its own repo - this is the way
>>> I've
>>> been developing it on Github. This model is closer to the ansible way of
>>> doing
>>> things and it'll make it easier to bundle, ship, and collaborate on,
>>> individual
>>> roles. Going this way would produce something similar to what the
>>> openstack-ansible folks have.
>>
>>
>> +1 on #1 for the composability.
>>
>> [...]
>>
>> Have we considered renaming it to something without tripleo in the name?
>> Or is it too specific to TripleO that we want it in the name?
>
>
> The roles don't have tripleo in their names. The only role that mentions
> tripleo
> is tripleo specific. As for the APB, yeah, I had thought about renaming that
> repo to something without tripleo in there: Perhaps just `ansible-k8s-apbs`.
>
> I'm about to refactor this repo to remove all the code duplication. We
> should be able to generate most of the APB code that's in there from a
> python script. We could even have this script in tripleo_common, if it
> sounds sensible.
>

It should be its own thing and not in tripleo_common.  When I was
proposing a cookiecutter repo it was because in Puppet we do the same
thing to bootstrap the modules[0].  It would be a good idea to
establish this upfront with the appropriate repo & zuul v3
configurations that could be used to test these modules. We have a
similar getting started with a new module doc[1] that we should
probably establish for these ansible-k8s-* roles.
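To make the idea concrete, here is a rough sketch of the kind of skeleton
such a cookiecutter-style template could render for each new role repo.
The layout and file names below are hypothetical illustrations, not an
agreed-upon structure:

```python
# Hypothetical sketch: render the standard directory layout a
# cookiecutter template for ansible-k8s-* role repos might produce,
# so every role starts with the same layout and CI hooks.
import os

SKELETON = {
    "tasks/main.yml": "---\n# Tasks for the role go here.\n",
    "defaults/main.yml": "---\n# Default variables, overridable per deployment.\n",
    "zuul.d/jobs.yaml": "---\n# Shared Zuul v3 job definitions would be templated here.\n",
    "meta/main.yml": "---\n# Role metadata (author, license, dependencies).\n",
}

def render_role(name, base="."):
    """Create the standard directory layout for a new role repo."""
    root = os.path.join(base, name)
    for rel_path, content in SKELETON.items():
        path = os.path.join(root, rel_path)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(content)
    return root

render_role("ansible-role-k8s-example")
```

Driving this from a real template (as puppet-openstack-cookiecutter does)
would reduce the bulk-update problem to a template change plus a re-sync
across the role repos.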

Thanks,
-Alex

[0] https://github.com/openstack/puppet-openstack-cookiecutter
[1] 
https://docs.openstack.org/puppet-openstack-guide/latest/contributor/new-module.html

> Thoughts?
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Prepping for the stable/newton EOL

2017-10-09 Thread Alex Schultz
On Thu, Oct 5, 2017 at 5:01 PM, Tony Breeds <t...@bakeyournoodle.com> wrote:
> On Thu, Oct 05, 2017 at 09:00:00AM -0600, Alex Schultz wrote:
>> On Tue, Oct 3, 2017 at 9:51 PM, Tony Breeds <t...@bakeyournoodle.com> wrote:
>
>> Would it be possible to delay the Newton EOL for the TripleO projects
>> for ~1 month? We still have some patches outstanding and would like to
>> delay the EOL for now.  As previously mentioned elsewhere it would be
>> beneficial for us to wait until the end of Queens but for now I'd like
>> to pencil in ~1 month to give us additional time to evaluate if we need
>> more time.  Let me know if this isn't realistic or there are any
>> glaring issues with this.
>
> I'm happy to exclude tripleo from the initial EOL cycle (and had added
> tripleo's repos to my list of 'opt-out' repos based on previous emails).
> It'd be good if we could setup a one-off time with tripleo, stable (me)
> and infra to look at what CI you have an which parts are generally
> impacted by repo's EOLing.  For example it's probable that
> openstack/requirements (and that implies openstack-dev/devstack) need to
> be the last projects to retire.
>
> That isn't terrible but I'd still like to make sure we're on the same
> page so we can level set everyone's expectations.
>
> So who from tripleo needs to be there and what TZ are they in?
>

So probably Emilien and/or some folks from the TripleO CI Squad to
review.  I will bring it up tomorrow during the tripleo meeting.

Thanks,
-Alex

> Yours Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Repo structure for ansible-k8s-roles-* under TripleO's umbrella

2017-10-09 Thread Alex Schultz
On Mon, Oct 9, 2017 at 3:29 AM, Flavio Percoco  wrote:
> Greetings,
>
> I've been working on something called triple-apbs (and it's respective
> roles) in
> the last couple of months. You can find more info about this work
> here[0][1][2]
>
> This work is at the point where I think it would be worth start discussing
> how
> we want these repos to exist under the TripleO umbrella. As far as I can
> tell,
> we have 2 options (please comment with alternatives if there are more):
>
> 1. A repo per role: Each role would have its own repo - this is the way I've
> been developing it on Github. This model is closer to the ansible way of
> doing
> things and it'll make it easier to bundle, ship, and collaborate on,
> individual
> roles. Going this way would produce something similar to what the
> openstack-ansible folks have.
>

I think we've proven that this is a better way to handle these types
of things so I would prefer option #1. I would say that it might be
useful to also create a basic cookiecutter template for these repos so
we can quickly bootstrap new ones. One thing that has been a
repeated problem when you do split these modules is having to do bulk
updates for requirements or shared structure items and making sure we
don't accrue a ton of tech-debt over time.

Thanks,
-Alex


> 2. Everything in a single repo: this would ease the import process and
> integration with the rest of TripleO. It'll make the early days of this work
> a
> bit easier but it will take us in a direction that doesn't serve one of the
> goals of this work.
>
> My preferred option is #1 because one of the goals of this work is to have
> independent roles that can also be consumed standalone. In other words, I
> would
> like to stay closer to the ansible recommended structure for roles. Some
> examples[3][4]
>
> Any thoughts? preferences?
> Flavio
>
> [0] http://blog.flaper87.com/deploy-mariadb-kubernetes-tripleo.html
> [1]
> http://blog.flaper87.com/glance-keystone-mariadb-on-k8s-with-tripleo.html
> [2] https://github.com/tripleo-apb
> [3] https://github.com/tripleo-apb/ansible-role-k8s-mariadb
> [4] https://github.com/tripleo-apb/ansible-role-k8s-glance
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Unbranched repositories and testing

2017-10-05 Thread Alex Schultz
Hey folks,

So I wandered across the policy spec[0] for how we should be handling
unbranched repository reviews and I would like to start a broader
discussion around this topic.  We've seen it several times in recent
history where a change in oooqe or tripleo-ci ends up affecting
either a stable branch or an additional set of jobs that were not run
on the change.  I think it's unrealistic to run every possible job
combination on every submission and it's also a giant waste of CI
resources.  I also don't necessarily agree that we should be using
depends-on to prove things are fine for a given patch for the same
reasons. That being said, we do need to minimize our risk for patches
to these repositories.

At the PTG retrospective I mentioned component design structure[1] as
something we need to be more aware of. I think this particular topic
is one of those types of things where we could benefit from evaluating
the structure and policy around these unbranched repositories to see
if we can improve it.  Is there a particular reason why we continue to
try and support deployment of (at least) 3 or 4 different versions
within a single repository?  Are we adding new features that really
shouldn't be consumed by these older versions such that perhaps it
makes sense to actually create stable branches?  Perhaps there are
some other ideas that might work?

Thanks,
-Alex

[0] https://review.openstack.org/#/c/478488/
[1] http://people.redhat.com/aschultz/denver-ptg/tripleo-ptg-retro.jpg
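One possible middle ground between "run every job combination" and
"hope for the best" is to make the supported releases explicit in the
job matrix of the unbranched repo. A hypothetical Zuul v3 sketch (the
project and job names here are illustrative, not the real definitions)
might pin one cheap branch-scoped smoke variant per supported release:

```yaml
# Hypothetical sketch only: run one inexpensive smoke job per supported
# release on every change to the unbranched repo, instead of the full
# matrix or nothing.
- project:
    name: openstack/tripleo-quickstart-extras
    check:
      jobs:
        - tripleo-ci-smoke-master:
            vars:
              release: master
        - tripleo-ci-smoke-pike:
            vars:
              release: pike
        - tripleo-ci-smoke-newton:
            vars:
              release: newton
```

That keeps per-change cost bounded while still catching the case where
a change silently breaks an older release the repo claims to support.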

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Prepping for the stable/newton EOL

2017-10-05 Thread Alex Schultz
On Tue, Oct 3, 2017 at 9:51 PM, Tony Breeds  wrote:
> Hi All,
> This is a quick update on the process for tagging stable/newton as
> EOL:
>
> The published[1][2] timeline is:
> Sep 29 : Final newton library releases
> Oct 09 : stable/newton branches enter Phase III
> Oct 11 : stable/newton branches get tagged EOL
>
> Given that those key dates were a little disrupted I'm proposing adding
> a week to each so the new timeline looks like:
> Oct 08 : Final newton library releases
> Oct 16 : stable/newton branches enter Phase III
> Oct 18 : stable/newton branches get tagged EOL
>
> The transition to Phase II is important to set expectations about what
> backports are applicable while we process the EOL.
>
> I'll prep the list of repos that will be tagged EOL real soon now for
> review.
>

Would it be possible to delay the Newton EOL for the TripleO projects
for ~1 month? We still have some patches outstanding and would like to
delay the EOL for now.  As previously mentioned elsewhere it would be
beneficial for us to wait until the end of Queens but for now I'd like
to pencil in ~1 month to give us additional time to evaluate if we need
more time.  Let me know if this isn't realistic or there are any
glaring issues with this.

Thanks,
-Alex

> Yours Tony.
>
> [1] https://releases.openstack.org/index.html
> [2] https://releases.openstack.org/queens/schedule.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] bugs moved to queens-2

2017-10-04 Thread Alex Schultz
Hey everyone,

So during our IRC meeting[0], it was mentioned that we should move <=
Medium bugs over to queens-2 to help with visibility of more important
bugs. Since we had 470+ total bugs targeted to queens-1, I have moved
all the medium bugs that were not yet In Progress to queens-2 and will
be reviewing the remaining list[1] and checking if the patches were
actually merged or if they should be moved to queens-2 as they do not
have any activity. Please take some time to review the High/Critical
bugs as we should be working on those or closing them out if they are
no longer valid.


Thanks,
-Alex

[0] 
http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-10-03-13.59.log.html#l-156
[1] https://launchpad.net/tripleo/+milestone/queens-1
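For reference, the triage rule described above (bugs at Medium importance
or below that are not yet In Progress move to queens-2) can be expressed
as a small filter. This is an illustrative sketch, not the actual script
used; against real Launchpad you would fetch the bug tasks with
launchpadlib and save the retargeted milestone:

```python
# Sketch of the queens-1 -> queens-2 triage rule: bump bugs that are at
# most Medium importance and not yet In Progress. The tuples below are
# simplified stand-ins for Launchpad bug tasks (hypothetical helper).

IMPORTANCE_ORDER = ["Undecided", "Wishlist", "Low", "Medium", "High", "Critical"]

def should_bump(importance, status):
    """True if a queens-1 bug task should be retargeted to queens-2."""
    at_most_medium = (
        IMPORTANCE_ORDER.index(importance) <= IMPORTANCE_ORDER.index("Medium")
    )
    return at_most_medium and status != "In Progress"

def triage(tasks):
    """Split (importance, status, title) tuples into (keep, bump) titles."""
    keep, bump = [], []
    for importance, status, title in tasks:
        (bump if should_bump(importance, status) else keep).append(title)
    return keep, bump
```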

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-04 Thread Alex Schultz
On Wed, Oct 4, 2017 at 7:00 AM, Dan Prince <dpri...@redhat.com> wrote:
> On Tue, 2017-10-03 at 16:03 -0600, Alex Schultz wrote:
>> On Tue, Oct 3, 2017 at 2:46 PM, Dan Prince <dpri...@redhat.com>
>> wrote:
>> >
>> >
>> > On Tue, Oct 3, 2017 at 3:50 PM, Alex Schultz <aschu...@redhat.com>
>> > wrote:
>> > >
>> > > On Tue, Oct 3, 2017 at 11:12 AM, Dan Prince <dpri...@redhat.com>
>> > > wrote:
>> > > > On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
>> > > > > Hey Dan,
>> > > > >
>> > > > > Thanks for sending out a note about this. I have a few
>> > > > > questions
>> > > > > inline.
>> > > > >
>> > > > > On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince <dpri...@redhat.com>
>> > > > > wrote:
>> > > > > > One of the things the TripleO containers team is planning
>> > > > > > on
>> > > > > > tackling
>> > > > > > in Queens is fully containerizing the undercloud. At the
>> > > > > > PTG we
>> > > > > > created
>> > > > > > an etherpad [1] that contains a list of features that need
>> > > > > > to be
>> > > > > > implemented to fully replace instack-undercloud.
>> > > > > >
>> > > > >
>> > > > > I know we talked about this at the PTG and I was skeptical
>> > > > > that this
>> > > > > will land in Queens. With the exception of the Container's
>> > > > > team
>> > > > > wanting this, I'm not sure there is an actual end user who is
>> > > > > looking
>> > > > > for the feature so I want to make sure we're not just doing
>> > > > > more work
>> > > > > because we as developers think it's a good idea.
>> > > >
>> > > > I've heard from several operators that they were actually
>> > > > surprised we
>> > > > implemented containers in the Overcloud first. Validating a new
>> > > > deployment framework on a single node Undercloud (for
>> > > > operators) before
>> > > > overtaking their entire cloud deployment has a lot of merit to
>> > > > it IMO.
>> > > > When you share the same deployment architecture across the
>> > > > overcloud/undercloud it puts us in a better position to decide
>> > > > where to
>> > > > expose new features to operators first (when creating the
>> > > > undercloud or
>> > > > overcloud for example).
>> > > >
>> > > > Also, if you read my email again I've explicitly listed the
>> > > > "Containers" benefit last. While I think moving the undercloud
>> > > > to
>> > > > containers is a great benefit all by itself this is more of a
>> > > > "framework alignment" in TripleO and gets us out of maintaining
>> > > > huge
>> > > > amounts of technical debt. Re-using the same framework for the
>> > > > undercloud and overcloud has a lot of merit. It effectively
>> > > > streamlines
>> > > > the development process for service developers, and 3rd parties
>> > > > wishing
>> > > > to integrate some of their components on a single node. Why be
>> > > > forced
>> > > > to create a multi-node dev environment if you don't have to
>> > > > (aren't
>> > > > using HA for example).
>> > > >
>> > > > Let's be honest. While instack-undercloud helped solve the old
>> > > > "seed" VM
>> > > > issue it was outdated the day it landed upstream. The entire
>> > > > premise of
>> > > > the tool is that it uses old style "elements" to create the
>> > > > undercloud
>> > > > and we moved away from those as the primary means driving the
>> > > > creation
>> > > > of the Overcloud years ago at this point. The new
>> > > > 'undercloud_deploy'
>> > > > installer gets us back to our roots by once again sharing the
>> > > > same
>> > > > architecture to create the over and underclouds. A demo from
>> > > > long ago
>> > > &

Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-03 Thread Alex Schultz
On Tue, Oct 3, 2017 at 2:46 PM, Dan Prince <dpri...@redhat.com> wrote:
>
>
> On Tue, Oct 3, 2017 at 3:50 PM, Alex Schultz <aschu...@redhat.com> wrote:
>>
>> On Tue, Oct 3, 2017 at 11:12 AM, Dan Prince <dpri...@redhat.com> wrote:
>> > On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
>> >> Hey Dan,
>> >>
>> >> Thanks for sending out a note about this. I have a few questions
>> >> inline.
>> >>
>> >> On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince <dpri...@redhat.com>
>> >> wrote:
>> >> > One of the things the TripleO containers team is planning on
>> >> > tackling
>> >> > in Queens is fully containerizing the undercloud. At the PTG we
>> >> > created
>> >> > an etherpad [1] that contains a list of features that need to be
>> >> > implemented to fully replace instack-undercloud.
>> >> >
>> >>
>> >> I know we talked about this at the PTG and I was skeptical that this
>> >> will land in Queens. With the exception of the Container's team
>> >> wanting this, I'm not sure there is an actual end user who is looking
>> >> for the feature so I want to make sure we're not just doing more work
>> >> because we as developers think it's a good idea.
>> >
>> > I've heard from several operators that they were actually surprised we
>> > implemented containers in the Overcloud first. Validating a new
>> > deployment framework on a single node Undercloud (for operators) before
>> > overtaking their entire cloud deployment has a lot of merit to it IMO.
>> > When you share the same deployment architecture across the
>> > overcloud/undercloud it puts us in a better position to decide where to
>> > expose new features to operators first (when creating the undercloud or
>> > overcloud for example).
>> >
>> > Also, if you read my email again I've explicitly listed the
>> > "Containers" benefit last. While I think moving the undercloud to
>> > containers is a great benefit all by itself this is more of a
>> > "framework alignment" in TripleO and gets us out of maintaining huge
>> > amounts of technical debt. Re-using the same framework for the
>> > undercloud and overcloud has a lot of merit. It effectively streamlines
>> > the development process for service developers, and 3rd parties wishing
>> > to integrate some of their components on a single node. Why be forced
>> > to create a multi-node dev environment if you don't have to (aren't
>> > using HA for example).
>> >
>> > Let's be honest. While instack-undercloud helped solve the old "seed" VM
>> > issue it was outdated the day it landed upstream. The entire premise of
>> > the tool is that it uses old style "elements" to create the undercloud
>> > and we moved away from those as the primary means driving the creation
>> > of the Overcloud years ago at this point. The new 'undercloud_deploy'
>> > installer gets us back to our roots by once again sharing the same
>> > architecture to create the over and underclouds. A demo from long ago
>> > expands on this idea a bit:
>> > https://www.youtube.com/watch?v=y1qMDLAf26Q=5s
>> >
>> > In short, we aren't just doing more work because developers think it is
>> > a good idea. This has the potential to be one of the most useful
>> > architectural changes in TripleO that we've made in years. It could
>> > significantly decrease our CI resources if we use it to replace the
>> > existing scenario jobs, which take multiple VMs per job. It is a building
>> > block we could use for other features like an HA undercloud. And yes,
>> > it also has a huge impact on developer velocity in that many of
>> > us already prefer to use the tool as a means of streamlining our
>> > dev/test cycles to minutes instead of hours. Why spend hours running
>> > quickstart Ansible scripts when in many cases you can just doit.sh.
>> > https://github.com/dprince/undercloud_containers/blob/master/doit.sh
>> >
>>
>> So like I've repeatedly said, I'm not completely against it as I agree
>> what we have is not ideal.  I'm not -2, I'm -1 pending additional
>> information. I'm trying to be realistic and reduce our risk for this
>> cycle.
>
>
> This reduces our complexity greatly, I think, in that once it is completed
> it will allow us to eliminate two projects (instack and instack-undercloud) and

Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-03 Thread Alex Schultz
On Tue, Oct 3, 2017 at 1:50 PM, Alex Schultz <aschu...@redhat.com> wrote:
> On Tue, Oct 3, 2017 at 11:12 AM, Dan Prince <dpri...@redhat.com> wrote:
>> On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
>>> Hey Dan,
>>>
>>> Thanks for sending out a note about this. I have a few questions
>>> inline.
>>>
>>> On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince <dpri...@redhat.com>
>>> wrote:
>>> > One of the things the TripleO containers team is planning on
>>> > tackling
>>> > in Queens is fully containerizing the undercloud. At the PTG we
>>> > created
>>> > an etherpad [1] that contains a list of features that need to be
>>> > implemented to fully replace instack-undercloud.
>>> >
>>>
>>> I know we talked about this at the PTG and I was skeptical that this
>>> will land in Queens. With the exception of the Container's team
>>> wanting this, I'm not sure there is an actual end user who is looking
>>> for the feature so I want to make sure we're not just doing more work
>>> because we as developers think it's a good idea.
>>
>> I've heard from several operators that they were actually surprised we
>> implemented containers in the Overcloud first. Validating a new
>> deployment framework on a single node Undercloud (for operators) before
>> overtaking their entire cloud deployment has a lot of merit to it IMO.
>> When you share the same deployment architecture across the
>> overcloud/undercloud it puts us in a better position to decide where to
>> expose new features to operators first (when creating the undercloud or
>> overcloud for example).
>>
>> Also, if you read my email again I've explicitly listed the
>> "Containers" benefit last. While I think moving the undercloud to
>> containers is a great benefit all by itself this is more of a
>> "framework alignment" in TripleO and gets us out of maintaining huge
>> amounts of technical debt. Re-using the same framework for the
>> undercloud and overcloud has a lot of merit. It effectively streamlines
>> the development process for service developers, and 3rd parties wishing
>> to integrate some of their components on a single node. Why be forced
>> to create a multi-node dev environment if you don't have to (aren't
>> using HA for example).
>>
>> Let's be honest. While instack-undercloud helped solve the old "seed" VM
>> issue it was outdated the day it landed upstream. The entire premise of
>> the tool is that it uses old style "elements" to create the undercloud
>> and we moved away from those as the primary means driving the creation
>> of the Overcloud years ago at this point. The new 'undercloud_deploy'
>> installer gets us back to our roots by once again sharing the same
>> architecture to create the over and underclouds. A demo from long ago
>> expands on this idea a bit:
>> https://www.youtube.com/watch?v=y1qMDLAf26Q=5s
>>
>> In short, we aren't just doing more work because developers think it is
>> a good idea. This has the potential to be one of the most useful
>> architectural changes in TripleO that we've made in years. It could
>> significantly decrease our CI resources if we use it to replace the
>> existing scenario jobs, which take multiple VMs per job. It is a building
>> block we could use for other features like an HA undercloud. And yes,
>> it also has a huge impact on developer velocity in that many of
>> us already prefer to use the tool as a means of streamlining our
>> dev/test cycles to minutes instead of hours. Why spend hours running
>> quickstart Ansible scripts when in many cases you can just doit.sh.
>> https://github.com/dprince/undercloud_containers/blob/master/doit.sh
>>
>
> So like I've repeatedly said, I'm not completely against it as I agree
> what we have is not ideal.  I'm not -2, I'm -1 pending additional
> information. I'm trying to be realistic and reduce our risk for this
> cycle.   IMHO doit.sh is not acceptable as an undercloud installer and
> this is what I've been trying to point out as the actual impact to the
> end user who has to use this thing. We have an established
> installation method for the undercloud, that while isn't great, isn't
> a bash script with git fetches, etc.  So as for the implementation,
>> > this is what I want to see properly fleshed out prior to accepting
> this feature as complete for Queens (and the new default).  I would
> like to see a plan of what features need to be added (eg. the stuff on
> the etherpad), folks assigned to do this

Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-03 Thread Alex Schultz
On Tue, Oct 3, 2017 at 11:12 AM, Dan Prince <dpri...@redhat.com> wrote:
> On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
>> Hey Dan,
>>
>> Thanks for sending out a note about this. I have a few questions
>> inline.
>>
>> On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince <dpri...@redhat.com>
>> wrote:
>> > One of the things the TripleO containers team is planning on
>> > tackling
>> > in Queens is fully containerizing the undercloud. At the PTG we
>> > created
>> > an etherpad [1] that contains a list of features that need to be
>> > implemented to fully replace instack-undercloud.
>> >
>>
>> I know we talked about this at the PTG and I was skeptical that this
>> will land in Queens. With the exception of the Container's team
>> wanting this, I'm not sure there is an actual end user who is looking
>> for the feature so I want to make sure we're not just doing more work
>> because we as developers think it's a good idea.
>
> I've heard from several operators that they were actually surprised we
> implemented containers in the Overcloud first. Validating a new
> deployment framework on a single node Undercloud (for operators) before
> overtaking their entire cloud deployment has a lot of merit to it IMO.
> When you share the same deployment architecture across the
> overcloud/undercloud it puts us in a better position to decide where to
> expose new features to operators first (when creating the undercloud or
> overcloud for example).
>
> Also, if you read my email again I've explicitly listed the
> "Containers" benefit last. While I think moving the undercloud to
> containers is a great benefit all by itself this is more of a
> "framework alignment" in TripleO and gets us out of maintaining huge
> amounts of technical debt. Re-using the same framework for the
> undercloud and overcloud has a lot of merit. It effectively streamlines
> the development process for service developers, and 3rd parties wishing
> to integrate some of their components on a single node. Why be forced
> to create a multi-node dev environment if you don't have to (aren't
> using HA for example).
>
> Let's be honest. While instack-undercloud helped solve the old "seed" VM
> issue it was outdated the day it landed upstream. The entire premise of
> the tool is that it uses old style "elements" to create the undercloud
> and we moved away from those as the primary means driving the creation
> of the Overcloud years ago at this point. The new 'undercloud_deploy'
> installer gets us back to our roots by once again sharing the same
> architecture to create the over and underclouds. A demo from long ago
> expands on this idea a bit:
> https://www.youtube.com/watch?v=y1qMDLAf26Q=5s
>
> In short, we aren't just doing more work because developers think it is
> a good idea. This has the potential to be one of the most useful
> architectural changes in TripleO that we've made in years. It could
> significantly decrease our CI resources if we use it to replace the
> existing scenario jobs, which take multiple VMs per job. It is a building
> block we could use for other features like an HA undercloud. And yes,
> it also has a huge impact on developer velocity in that many of
> us already prefer to use the tool as a means of streamlining our
> dev/test cycles to minutes instead of hours. Why spend hours running
> quickstart Ansible scripts when in many cases you can just doit.sh.
> https://github.com/dprince/undercloud_containers/blob/master/doit.sh
>

So like I've repeatedly said, I'm not completely against it as I agree
what we have is not ideal.  I'm not -2, I'm -1 pending additional
information. I'm trying to be realistic and reduce our risk for this
cycle.   IMHO doit.sh is not acceptable as an undercloud installer and
this is what I've been trying to point out as the actual impact to the
end user who has to use this thing. We have an established
installation method for the undercloud, that while isn't great, isn't
a bash script with git fetches, etc.  So as for the implementation,
this is what I want to see properly fleshed out prior to accepting
this feature as complete for Queens (and the new default).  I would
like to see a plan of what features need to be added (eg. the stuff on
the etherpad), folks assigned to do this work, and estimated
timelines.  Given that we shouldn't be making major feature changes
after M2 (~9 weeks), I want to get an understanding of what is
realistically going to make it.  If after reviewing the initial
details we find that it's not actually going to make M2, then let's
agree to this now rather than trying to force it in at the end.

I know you've been a great proponent of the c

Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-02 Thread Alex Schultz
Hey Dan,

Thanks for sending out a note about this. I have a few questions inline.

On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince  wrote:
> One of the things the TripleO containers team is planning on tackling
> in Queens is fully containerizing the undercloud. At the PTG we created
> an etherpad [1] that contains a list of features that need to be
> implemented to fully replace instack-undercloud.
>

I know we talked about this at the PTG and I was skeptical that this
will land in Queens. With the exception of the Container's team
wanting this, I'm not sure there is an actual end user who is looking
for the feature so I want to make sure we're not just doing more work
because we as developers think it's a good idea. Given that etherpad
appears to contain a pretty big list of features, are we going to be
able to land all of them by M2?  Would it be beneficial to craft a
basic spec related to this to ensure we are not missing additional
things?

> Benefits of this work:
>
>  -Alignment: aligning the undercloud and overcloud installers gets rid
> of dual maintenance of services.
>

I like reusing existing stuff. +1

>  -Composability: tripleo-heat-templates and our new Ansible
> architecture around it are composable. This means any set of services
> can be used to build up your own undercloud. In other words the
> framework here isn't just useful for "underclouds". It is really the
> ability to deploy Tripleo on a single node with no external
> dependencies. Single node TripleO installer. The containers team has
> already been leveraging existing (experimental) undercloud_deploy
> installer to develop services for Pike.
>

Is this something that is actually being asked for or is this just an
added bonus because it allows developers to reduce what is actually
being deployed for testing?

>  -Development: The containerized undercloud is a great development
> tool. It utilizes the same framework as the full overcloud deployment
> but takes about 20 minutes to deploy.  This means faster iterations,
> less waiting, and more testing.  Having this be a first class citizen
> in the ecosystem will ensure this platform is functioning for
> developers to use all the time.
>

Seems to go with the previous question about the re-usability for
people who are not developers.  Has everyone (including non-container
folks) tried this out and can attest that it's a better workflow for
them?  Are there use cases that are made worse by switching?

>  -CI resources: better use of CI resources. At the PTG we received
> feedback from the OpenStack infrastructure team that our upstream CI
> resource usage is quite high at times (even as high as 50% of the
> total). Because of the shared framework and single node capabilities we
> can re-architecture much of our upstream CI matrix around single node.
> We no longer require multinode jobs to be able to test many of the
> services in tripleo-heat-templates... we can just use a single cloud VM
> instead. We'll still want multinode undercloud -> overcloud jobs for
> testing things like HA and baremetal provisioning. But we can cover a
> large set of the services (in particular many of the new scenario jobs
> we added in Pike) with single node CI test runs in much less time.
>

I like this idea but would like to see more details around this.
Since this is a new feature we need to make sure that we are properly
covering the containerized undercloud with CI as well.  I think we
need 3 jobs to properly cover this feature before marking it done. I
added them to the etherpad but I think we need to ensure the following
3 jobs are defined and voting by M2 to consider actually switching
from the current instack-undercloud installation to the containerized
version.

1) undercloud-containers - a containerized install, should be voting by m1
2) undercloud-containers-update - minor updates run on containerized
underclouds, should be voting by m2
3) undercloud-containers-upgrade - major upgrade from
non-containerized to containerized undercloud, should be voting by m2.

If we have these jobs, is there anything we can drop or mark as
covered that is currently being covered by an overcloud job?
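
To make the job list above concrete, the three jobs would eventually
become Zuul job definitions along these lines; this is only a
hypothetical sketch (the job and parent names are made up, not the real
tripleo-ci layout):

```yaml
- job:
    name: tripleo-ci-centos-7-undercloud-containers
    parent: tripleo-ci-base          # hypothetical parent job
    voting: false                    # target: voting by M1

- job:
    name: tripleo-ci-centos-7-undercloud-containers-update
    parent: tripleo-ci-centos-7-undercloud-containers
    voting: false                    # minor updates; target: voting by M2

- job:
    name: tripleo-ci-centos-7-undercloud-containers-upgrade
    parent: tripleo-ci-base          # hypothetical parent job
    voting: false                    # instack -> containers; target: voting by M2
```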

>  -Containers: There are no plans to containerize the existing instack-
> undercloud work. By moving our undercloud installer to a tripleo-heat-
> templates and Ansible architecture we can leverage containers.
> Interestingly, the same installer also supports baremetal (package)
> installation as well at this point. Like the overcloud, however, I think
> making containers our undercloud default would better align the TripleO
> tooling.
>
> We are actively working through a few issues with the deployment
> framework Ansible effort to fully integrate that into the undercloud
> installer. We are also reaching out to other teams like the UI and
> Security folks to coordinate the efforts around those components. If
> there are any questions about the effort or you'd like to be involved
> in the 

Re: [openstack-dev] [tripleo] Newton End-Of-Life (EOL) next month (reminder #1)

2017-09-27 Thread Alex Schultz
On Tue, Sep 26, 2017 at 11:57 PM, Tony Breeds  wrote:
> On Tue, Sep 26, 2017 at 10:31:59PM -0700, Emilien Macchi wrote:
>> On Tue, Sep 26, 2017 at 10:17 PM, Tony Breeds  
>> wrote:
>> > With that in mind I'd suggest that your review isn't appropriate for
>>
>> If we have to give up backports that help customers to get
>> production-ready environments, I would consider giving up stable
>> policy tag which probably doesn't fit for projects like installers. In
>> a real world, users don't deploy master or Pike (even not Ocata) but
>> are still on Liberty, and most of the time Newton.
>
> I agree the stable policy doesn't map very well to deployment projects
> and that's something I'd like to address.  I admit I'm not certain *how*
> to address it but it almost certainly starts with a discussion like this
> ;P
>
> I've proposed a forum session to further this discussion, even if that
> doesn't happen there's always the hall-way track :)
>

One idea would be to allow trailing projects additional trailing time on
the stable phases as well.  Honestly, 2 weeks of trailing just for the GA
is hard enough, let alone the fact that the actual end-users are 18+
months behind.  For some deployment projects like tripleo, there are
sections that should probably follow the stable policy as it exists
today, but also elements with 3rd-party integration or upgrade
implications (in the case of tripleo, THT/puppet-tripleo) that need to be
more flexible to modify things as necessary.  The word 'feature' doesn't
necessarily mean the same thing for these projects as it does for
something like nova/neutron/etc.

>> What proposing Giulio probably comes from the real world, the field,
>> who actually manage OpenStack at scale and on real environments (not
>> in devstack from master). If we can't have this code in-tree, we'll
>> probably carry this patch downstream (which is IMHO bad because of
>> maintenance and lack of CI). In that case, I'll vote to give up
>> stable:follows-policy so we can do what we need.
>
> Rather than give up on the stable:follows policy tag it is possibly
> worth looking at which portions of tripleo make that assertion.
>
> In this specific case, there isn't anything in the bug that indicates
> it comes from a user report which is all the stable team has to go on
> when making these types of decisions.
>

We'll need to re-evaluate what stable-policy means for tripleo.  We
don't want to open the door to backporting everything, but we also want
to reduce the patches carried downstream for specific use cases.  I
think in the case of 3rd-party integrations we need a better definition
of what that means, and perhaps to create a new repository like
THT-extras that doesn't follow the stable policy while the main one does.

Thanks,
-Alex

> Yours Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Alex Schultz
On Tue, Sep 26, 2017 at 2:34 PM, Michał Jastrzębski  wrote:
> In Kolla, during this PTG, we came up with idea of scenario based
> testing+documentation. Basically what we want to do is to provide set
> of kolla configurations, howtos and tempest configs to test out
> different "constellations" or use-cases. If, instead of in Kolla, do
> these in cross-community manner (and just host kolla-specific things
> in kolla), I think that would partially address what you're asking for
> here.
>

So I'd like to point out that we do a lot of these similar deployments
in puppet[0] and tripleo[1] for a while now but more to get the most
coverage out of the fewest jobs in terms of CI.  They aren't
necessarily realistic deployment use cases. We can't actually fully
test deployment scenarios given the limited resources available.

The problem with trying to push the constellation concept to
deployment tools is that you're effectively saying that the upstream
isn't going to bother doing it and is relying on understaffed (see the
chef/puppet people's emails) groups to now implement the thing you
expect end users to use.  Simplification in OpenStack needs to not be
pushed off to someone else, as we're all responsible for it.  Have you
seen the number of feature/configuration options the upstream services
have?  Now multiply by 20-30.  Welcome to OpenStack configuration
management.  Oh, and try to keep up with all the new ones and the ones
being deprecated every 6 months. /me cries

Honestly it's time to stop saying yes to things unless they have some
sort of minimum viability or it makes sense why we would force it on
the end user (as confirmed by the end user, not because it sounds like
a good idea).

OpenStack has always been a pick your poison and construct your own
cloud. The problem is that those pieces used for building are getting
more complex and have even more inter-dependencies being added each
cycle without a simple way for the operator to install or be able to
migrate between versions.

Thanks,
-Alex

[0] https://github.com/openstack/puppet-openstack-integration
[1] 
https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html

> On 26 September 2017 at 13:01, Jonathan Proulx  wrote:
>> On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:
>>
>> :OpenStack is big. Big enough that a user will likely be fine with learning
>> :a new set of tools to manage it.
>>
>> New users in the startup sense of new, probably.
>>
>> People with entrenched environments, I doubt it.
>>
>> But OpenStack is big. Big enough I think all the major config systems
>> are fairly well represented, so whether I'm right or wrong this
>> doesn't seem like an issue to me :)
>>
>> Having common targets (constellations, reference architectures,
>> whatever) so all the config systems build the same things (or a subset
>> or superset of the same things) seems like it would have benefits all
>> around.
>>
>> -Jon
>>


Re: [openstack-dev] [tripleo] Pike Retrospective & Status reporting

2017-09-26 Thread Alex Schultz
On Mon, Sep 18, 2017 at 12:50 PM, Alex Schultz <aschu...@redhat.com> wrote:
> Hey folks,
>
> We started off our PTG with a retrospective for Pike. The output of
> which can be viewed here[0][1].
>
> One of the recurring themes from the retrospective and the PTG was the
> need for better communication during the cycle.  One of the ideas that
> was mentioned was adding a section to the weekly meeting calling for
> current status from the various tripleo squads[2].  Starting next week
> (Sept 26th), I would like for folks who are members of one of the
> squads be able to provide a brief status or a link to the current
> status during the weekly meeting.  There will be a spot added to the
> agenda to do a status roll call.

I forgot to do this during the meeting[0] this week. I will make sure
to add it for the meeting next week.  Please remember to have a person
prepare a squad status for next time.

As a reminder for those who didn't want to click the link, the listed
squads are:
ci
ui/cli
upgrade
validations
workflows
containers
networking
integration
python3

Thanks,
-Alex

[0] 
http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-09-26-14.00.html

> It was mentioned that folks may
> prefer to send a message to the ML and just be able to link to it
> similar to what the CI squad currently does[3].  We'll give this a few
> weeks and review how it works.
>
> Additionally it might be a good time to re-evaluate the squad
> breakdown as currently defined. I'm not sure we have anyone working on
> python3 items.
>
> Thanks,
> -Alex
>
> [0] http://people.redhat.com/aschultz/denver-ptg/tripleo-ptg-retro.jpg
> [1] https://etherpad.openstack.org/p/tripleo-ptg-queens-pike-retrospective
> [2] 
> https://github.com/openstack/tripleo-specs/blob/master/specs/policy/squads.rst#squads
> [3] 
> http://lists.openstack.org/pipermail/openstack-dev/2017-September/121881.html



Re: [openstack-dev] [puppet][puppet-ceph] missing dependencies for openstacklib::openstackclient in metadata.json

2017-09-25 Thread Alex Schultz
On Mon, Sep 25, 2017 at 5:42 AM, Emil Enemærke  wrote:
> There is also missing dependency for openstack/keystone, which is also used
> in ceph::rgw::keystone::auth line 63, 68, 75, 85
>
> On Mon, Sep 25, 2017 at 1:00 PM, Emil Enemærke  wrote:
>>
>> Hi
>>
>> In class ceph::rgw::keystone::auth there is an include for
>> ::openstacklib::openstackclient (line 61), but there is no dependency for it
>> in the metadata.json file.
>>

https://review.openstack.org/507141
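
A fix along those lines would add the missing modules to the
dependencies section of puppet-ceph's metadata.json; a hedged sketch
(the version ranges are illustrative, not taken from the actual review):

```json
{
  "dependencies": [
    { "name": "openstack/openstacklib", "version_requirement": ">=11.0.0 <12.0.0" },
    { "name": "openstack/keystone", "version_requirement": ">=11.0.0 <12.0.0" }
  ]
}
```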

Thanks,
-Alex

>> Cheers
>> Emil
>
>
>


[openstack-dev] [tripleo] Blueprints for Queens

2017-09-18 Thread Alex Schultz
Hey folks,

At the end of the PTG we did some work to triage the blueprints for
TripleO. The goal was to ensure that some of the items we talked about
were properly captured for the Queens cycle.  Please take a look at
the blueprints targeted towards queens[0] and update them with a
reasonable milestone for delivery (queens-1/queens-2 ideally).  If you
need to add additional blueprints, please triage appropriately.  I
will be using this list for tracking completion of features during
this cycle and reaching out to the assignee on the blueprints for
status.  Please make sure there is an assignee for your blueprint(s).

In addition to the Queens blueprints, we also created a future target
that can be used for tracking future efforts or things that won't be
worked on during the Queens cycle. It would be advisable to review
this list[1] and make sure we did not accidentally move out work that
will be completed in Queens.  The same goes for the Pike list[2] to
make sure they have all been properly updated to reflect that they
have been implemented or need to be moved.

Thanks,
-Alex

[0] https://blueprints.launchpad.net/tripleo/queens
[1] https://blueprints.launchpad.net/tripleo/future
[2] https://blueprints.launchpad.net/tripleo/pike



[Openstack-operators] [tripleo] Making containerized service deployment the default

2017-09-18 Thread Alex Schultz
Hey ops & devs,

We talked about containers extensively at the PTG and one of the items
that needs to be addressed is that currently we still deploy the
services as bare metal services via puppet. For Queens we would like
to switch the default to be containerized services.  With this switch
we would also start the deprecation process for deploying services as
bare metal services via puppet.  We still execute the puppet
configuration as part of the container configuration process so the
code will continue to be leveraged but we would be investing more in
the continual CI of the containerized deployments and reducing the
traditional scenario coverage.

As we switch over to containerized services by default, we would also
begin to reduce installed software on the overcloud images that we
currently use.  We have an open item to better understand how we can
switch away from the golden images to a traditional software install
process during the deployment and make sure this is properly tested.
In theory it should work today by switching the default for
EnablePackageInstall[0] to true and configuring repositories, but this
is something we need to verify.

If anyone has any objections to this default switch, please let us know.

Thanks,
-Alex

[0] 
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/tripleo-packages.yaml#L33-L36
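
In theory that switch is just an environment file passed at deploy
time; a hypothetical sketch (untested, as noted above, and the
repository configuration still has to be handled separately):

```yaml
# Hypothetical environment file: install packages during deployment
# instead of relying on software pre-baked into the golden images.
parameter_defaults:
  EnablePackageInstall: true
```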

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [tripleo] Making containerized service deployment the default

2017-09-18 Thread Alex Schultz
Hey ops & devs,

We talked about containers extensively at the PTG and one of the items
that needs to be addressed is that currently we still deploy the
services as bare metal services via puppet. For Queens we would like
to switch the default to be containerized services.  With this switch
we would also start the deprecation process for deploying services as
bare metal services via puppet.  We still execute the puppet
configuration as part of the container configuration process so the
code will continue to be leveraged but we would be investing more in
the continual CI of the containerized deployments and reducing the
traditional scenario coverage.

As we switch over to containerized services by default, we would also
begin to reduce installed software on the overcloud images that we
currently use.  We have an open item to better understand how we can
switch away from the golden images to a traditional software install
process during the deployment and make sure this is properly tested.
In theory it should work today by switching the default for
EnablePackageInstall[0] to true and configuring repositories, but this
is something we need to verify.

If anyone has any objections to this default switch, please let us know.

Thanks,
-Alex

[0] 
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/tripleo-packages.yaml#L33-L36



[openstack-dev] [tripleo] Pike Retrospective & Status reporting

2017-09-18 Thread Alex Schultz
Hey folks,

We started off our PTG with a retrospective for Pike. The output of
which can be viewed here[0][1].

One of the recurring themes from the retrospective and the PTG was the
need for better communication during the cycle.  One of the ideas that
was mentioned was adding a section to the weekly meeting calling for
current status from the various tripleo squads[2].  Starting next week
(Sept 26th), I would like for folks who are members of one of the
squads be able to provide a brief status or a link to the current
status during the weekly meeting.  There will be a spot added to the
agenda to do a status roll call.  It was mentioned that folks may
prefer to send a message to the ML and just be able to link to it
similar to what the CI squad currently does[3].  We'll give this a few
weeks and review how it works.

Additionally it might be a good time to re-evaluate the squad
breakdown as currently defined. I'm not sure we have anyone working on
python3 items.

Thanks,
-Alex

[0] http://people.redhat.com/aschultz/denver-ptg/tripleo-ptg-retro.jpg
[1] https://etherpad.openstack.org/p/tripleo-ptg-queens-pike-retrospective
[2] 
https://github.com/openstack/tripleo-specs/blob/master/specs/policy/squads.rst#squads
[3] 
http://lists.openstack.org/pipermail/openstack-dev/2017-September/121881.html



Re: [openstack-dev] [tripleo] question about healthcheck

2017-09-05 Thread Alex Schultz
On Sun, Sep 3, 2017 at 7:44 PM, Emilien Macchi  wrote:
> Can someone explain me (sorry for the dumb question) why do we force
> healthcheck to be positive when deploying TripleO containers?
>
> healthcheck:
> test: /bin/true
>
> https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/keystone.yaml#L180-L181
>
> Instead of real checks?
> I probably missed something (it's WIP?) but I found useful to clarify
> now, since we're about to release final Pike.
>

Since that's the cron container, other than checking that cron is
running (which is the container command) I'm not sure if there's much
to check.
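
For non-cron service containers a real check would replace /bin/true
with something that actually probes the service; a hypothetical sketch
(the endpoint, port, and command are illustrative, not the real
template):

```yaml
healthcheck:
  # Hypothetical: probe the local API instead of unconditionally passing.
  test: curl -sf http://127.0.0.1:5000/v3 || exit 1
```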

Thanks,
-Alex

> Thanks,
> --
> Emilien Macchi
>


[openstack-dev] [puppet] Add Mohammed Naser to cores

2017-09-05 Thread Alex Schultz
Hey folks,

I'm writing to ask that we add mnaser to the cores list for the puppet
modules.  He's been a user and contributor for some time and is also
the new PTL.  Let me know if there are any objections.

Thanks,
-Alex



Re: [openstack-dev] [nova] Can we remove the monkey_patch_modules config option?

2017-08-28 Thread Alex Schultz
On Fri, Aug 25, 2017 at 3:51 PM, Matt Riedemann  wrote:
> I'm having a hard time tracing what this is necessary for. It's related to
> the notify_decorator which is around for legacy notifications but I don't
> actually see that decorator used anywhere. Given there are other options
> related to the notify_decorator, like "default_publisher_id" if we can start
> unwinding and removing this legacy stuff it would make the config (.005%)
> simpler.
>
> It also just looks like we have a monkey_patch option that is run at the
> beginning of every service, uses monkey_patch_modules and if loaded, monkey
> patches whatever is configured for modules.
>
> I mean, if we thought hooks were bad, this is pretty terrible.
>

JFYI, https://review.openstack.org/#/c/494305/

Since this was just added, someone is looking to use it or is using it.
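
To illustrate the mechanism being discussed: at service start nova
walks each module named in monkey_patch_modules and rebinds its
functions to decorated versions. A minimal standalone sketch of that
pattern (not nova's actual code; the module and decorator here are
made up):

```python
import functools
import types

calls = []  # records which patched functions were invoked


def notify_decorator(name, fn):
    """Wrap fn so every call is 'notified' (here: just recorded)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        calls.append(name)
        return fn(*args, **kwargs)
    return wrapper


# A stand-in for a module listed in monkey_patch_modules.
mod = types.ModuleType("fake_api")
mod.add = lambda a, b: a + b

# The monkey-patch step run at startup: rebind every public callable
# in the configured module to a decorated version.
for attr in dir(mod):
    obj = getattr(mod, attr)
    if not attr.startswith("_") and callable(obj):
        setattr(mod, attr, notify_decorator("fake_api." + attr, obj))

result = mod.add(2, 3)  # goes through the wrapper
```

Here result is 5 and calls ends up as ['fake_api.add'], showing that
every call through the patched module is observed.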

Thanks,
-Alex

> --
>
> Thanks,
>
> Matt
>


Re: [openstack-dev] [puppet neutron]

2017-08-21 Thread Alex Schultz
On Sat, Aug 19, 2017 at 12:37 AM, hanish gogada
 wrote:
> Hi all,
>
> Currently the neutron ml2 ovs agent puppet module does not support the
> configuration of ovsdb_connection. Is any work on this in progress?
>

We had 
https://github.com/openstack/puppet-neutron/blob/721fb14e1654d002b49d363dfbcca8fdddb46167/manifests/plugins/ovn.pp
but it seems that is deprecated in favor of
https://github.com/openstack/puppet-neutron/blob/721fb14e1654d002b49d363dfbcca8fdddb46167/manifests/plugins/ml2/ovn.pp
which would leverage, https://github.com/openstack/puppet-ovn

Not sure what you need specifically and I'm not aware of any work in
this area at the moment.
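
Until the module grows a dedicated parameter, one hedged workaround
(the neutron_agent_ovs resource type and the 'ovs' section name here
are assumptions about the module, not verified) might be:

```puppet
# Hypothetical workaround: write ovsdb_connection into
# openvswitch_agent.ini via puppet-neutron's generic OVS agent
# config resource until a proper class parameter exists.
neutron_agent_ovs { 'ovs/ovsdb_connection':
  value => 'tcp:127.0.0.1:6640',
}
```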

Thanks,
-Alex

> Thanks
> hanish gogada
>


Re: [openstack-dev] [tripleo] Proposal to require bugs for tech debt

2017-08-16 Thread Alex Schultz
On Wed, Aug 16, 2017 at 8:24 AM, Markus Zoeller
<mzoel...@linux.vnet.ibm.com> wrote:
> On 16.08.2017 02:59, Emilien Macchi wrote:
>> On Tue, Aug 15, 2017 at 5:46 PM, Alex Schultz <aschu...@redhat.com> wrote:
>>> Hey folks,
>>>
>>> I'm proposing that in order to track tech debt that we're adding in as
>>> part of development that we create a way to track these items and not
>>> approve them without a bug (and a reference to said bug)[0].  Please
>>> take a moment to review the proposed policy and comment. I would like
>>> to start this for the queens cycle.
>>
>> I also think we should frequently review the status of these bugs.
>> Maybe unofficially from time to time and officially during milestone-3
>> of each cycle.
>>
>> I like the proposal so far, thanks.
>>
>
> FWIW, for another (in-house) project, I create a page called "technical
> debt" in the normal docs directory of the project. That way, I can add
> the "reminder" with the same commit which introduced the technical debt
> in the code. Similar to what OpenStack already does with the
> release-notes. The list of technical debt items is then always visible
> in the docs and not a query in the bug-tracker with tags (or something
> like that).
> Just an idea, maybe it applicable here.
>

Yeah, that would be a good choice if we only had a single project or a
low number of projects under the tripleo umbrella.  The problem is we
have many different components which contribute to tech debt, so
storing it in each repo would be hard to track.  I proposed bugs
because that would be a singular place for reporting.  For projects
with fewer deliverables, storing it like release notes is a good option.

Thanks,
-Alex

> --
> Regards, Markus Zoeller (markus_z)
>
>>> A real world example of where this would be beneficial would be the
>>> workaround we had for buggy ssh[1]. This patch was merged 6 months ago
>>> to work around an issue in ssh that was recently fixed. However we
>>> would most likely never have remembered to revert this. It was only
>>> because someone[2] spotted it and mentioned it that it is being
>>> reverted now.
>>>
>>> Thanks,
>>> -Alex
>>>
>>> [0] https://review.openstack.org/#/c/494044/
>>> [1] 
>>> https://review.openstack.org/#/q/6e8e27488da31b3b282fe1ce5e07939b3fa11b2f,n,z
>>> [2] Thanks pabelanger
>>>


[openstack-dev] [tripleo] Proposal to require bugs for tech debt

2017-08-15 Thread Alex Schultz
Hey folks,

I'm proposing that, in order to track the tech debt we're adding as
part of development, we create a way to track these items and not
approve them without a bug (and a reference to said bug)[0].  Please
take a moment to review the proposed policy and comment. I would like
to start this for the Queens cycle.

A real world example of where this would be beneficial would be the
workaround we had for buggy ssh[1]. This patch was merged 6 months ago
to work around an issue in ssh that was recently fixed. However we
would most likely never have remembered to revert this. It was only
because someone[2] spotted it and mentioned it that it is being
reverted now.
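
Under the proposed policy, a tech-debt commit like that ssh workaround
would carry an explicit bug reference in its commit message footer,
e.g. (the bug number and Change-Id here are made up):

```text
Work around intermittent ssh connection resets

Temporary workaround until the upstream openssh fix is available;
revert once it lands.

Related-Bug: #1234567
Change-Id: I0123456789abcdef0123456789abcdef01234567
```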

Thanks,
-Alex

[0] https://review.openstack.org/#/c/494044/
[1] 
https://review.openstack.org/#/q/6e8e27488da31b3b282fe1ce5e07939b3fa11b2f,n,z
[2] Thanks pabelanger



Re: [openstack-dev] [tripleo][puppet] Hold off on approving patches until further notice

2017-08-10 Thread Alex Schultz
On Thu, Aug 10, 2017 at 11:12 AM, Paul Belanger <pabelan...@redhat.com> wrote:
> On Thu, Aug 10, 2017 at 07:03:32AM -0600, Alex Schultz wrote:
>> FYI,
>>
>> The gates are hosed for a variety of reasons[0][1] and we can't get
>> critical patches merged. Please hold off on rechecking or approving
>> anything new until further notice.   We're hoping to get some of the
>> fixes for this merged today.  I will send a note when it's OK to merge
>> again.
>>
>> [0] https://bugs.launchpad.net/tripleo/+bug/1709428
>> [1] https://bugs.launchpad.net/tripleo/+bug/1709327
>>
> So far, these are the 3 patches we need to land today:
>
>   Exclude networking-bagpipe from dlrn
> - https://review.openstack.org/491878
>
>   Disable existing repositories in tripleo-ci
> - https://review.openstack.org/492289
>
>   Stop trying to build networking-bagpipe with DLRN
> - https://review.openstack.org/492339
>
> These 3 fixes will take care of the large amount of gate resets tripleo is
> currently seeing. Like Alex says, please try not to approve / recheck anything
> until we land these.
>

Ok so we've managed to land patches to improve the reliability.

https://review.openstack.org/492339 - merged
https://review.openstack.org/491878 - still pending but we managed to
get the package fixed so this one is not as critical anymore
https://review.openstack.org/491522 - merged
https://review.openstack.org/492289 - merged

We found that the undercloud-containers job is still trying to pull
from buildlogs.centos.org, and I've proposed a fix:
https://review.openstack.org/#/c/492786/

I've restored (and approved) previously approved patches that have a
high/critical bug or a FFE approved blueprint associated.

It should be noted that the following patches for tripleo do not have
a bug or bp reference so they should be updated prior to being
re-approved:
https://review.openstack.org/#/c/400407/
https://review.openstack.org/#/c/489083/
https://review.openstack.org/#/c/475457/

For tripleo patches, please refer to Emilien's email[0] about the RC
schedule, which includes the rules about what patches should be
merged.  Please be careful with rechecks and check failures; do not
blindly recheck.  We have noticed some issues with citycloud nodes, so
if you spot problems with specific clouds please let us know so we can
track these and work with infra on them.

Thanks,
-Alex

[0] http://lists.openstack.org/pipermail/openstack-dev/2017-August/120806.html

> Thanks,
> PB
>



[openstack-dev] [tripleo][puppet] Hold off on approving patches until further notice

2017-08-10 Thread Alex Schultz
FYI,

The gates are hosed for a variety of reasons[0][1] and we can't get
critical patches merged. Please hold off on rechecking or approving
anything new until further notice.   We're hoping to get some of the
fixes for this merged today.  I will send a note when it's OK to merge
again.

[0] https://bugs.launchpad.net/tripleo/+bug/1709428
[1] https://bugs.launchpad.net/tripleo/+bug/1709327

Thanks,
-Alex



[openstack-dev] [elections][tripleo] Queens PTL candidacy

2017-08-02 Thread Alex Schultz
I would like to nominate myself for the TripleO PTL role for the Queens cycle.

I have been a contributor to various OpenStack projects since Liberty. I have
spent most of my time working on the deployment of OpenStack and with the
engineers who deploy it.  As many of you know, I believe the projects we work
on should simplify workflows and improve the end user's lives. During my time
as Puppet OpenStack PTL, I have promoted efforts to simplify and to establish
reusable patterns and best practices. I feel confident that TripleO is on the
right path and hope to continue to lead it in the right direction.

For the last few cycles we have moved TripleO forwards and improved not only
TripleO itself, but have provided additional tooling around deploying and
managing OpenStack. As we look forward to the Queens cycle, it is important
to recognize the work we have done and where we can continue to improve.

* Improving deployment of containerized services.
  We started the effort to switch over to containerized services being deployed
  with TripleO as part of the Pike cycle and we need to finalize the last few
  services. As we start the transition to including Kubernetes, we need to be
  mindful of the transition and make sure we evaluate and leverage already
  existing solutions.
* Continue making the deployers' lives easier.
  The recent cycles have been full of efforts to allow users to do more with
  TripleO. With the work to expose composable roles, composable networks and
  containerization we have added additional flexibility for the deployment
  engineers to be able to build out architectures needed for the end user.
  That being said, there is still work to be done to make the deployment
  process less error prone and more user friendly.
* Continued improvement of CI
  The process to transition over to tripleo-quickstart has made excellent
  progress over the last few cycles. We need to continue to refine the steps
  to ensure that developers can reuse the work and be able to quickly and
  easily troubleshoot when things break down.  Additionally, we need to make
  sure that we can classify repeated failures and work to address them quickly
  so as not to hold up bugs and features.
* Improve visibility of the project status
  As part of the Queens cycle, I would like to devote some time into capturing
  metrics and information about the status of the various projects under the
  TripleO umbrella. We've been doing lots of work, but I think it would be
  beneficial for us to know where this work has been occurring. I'm hoping to
  work on some of the reporting around the status of our CI, bugs and reviews
  to be able to see where we could use some more effort and hopefully improve
  our development velocity.

Thanks,
Alex Schultz
irc: mwhahaha



Re: [openstack-dev] [release][ptl] new "linter" rules for openstack/releases repository

2017-08-01 Thread Alex Schultz
On Tue, Aug 1, 2017 at 1:45 PM, Doug Hellmann  wrote:
> Excerpts from Alex Schultz's message of 2017-08-01 12:55:15 -0600:
>> On Tue, Aug 1, 2017 at 12:07 PM, Doug Hellmann  wrote:
>> > The release team is working on a series of patches to improve the
>> > data validation within openstack/releases. Part of that work is to
>> > apply consistent formatting rules for all of the YAML files, so it
>> > is easier to build tools that automatically update those files. To
>> > enable those consistent rules, we have had to "normalize" the use
>> > of whitespace in a bunch of the existing files.
>> >
>> > These changes mean that if you have built your own automation for
>> > adding new releases, you might have to make adjustments. If you do,
>> > please take the time to look at the tools within the repo (in
>> > particular the new-release, interactive-release, and edit-deliverable
>> > commands) to see if they meet your needs, or can be extended to do
>> > so. There's not much point in all of us building our own tools. :-)
>> >
>> > If you're curious about the actual changes, you can have a look at the
>> > patch series for "queens-indentation" at
>> > https://review.openstack.org/#/q/project:openstack/releases+topic:queens-indentation
>> >
>>
>> Were the linting rules applied via templates, or is this something that
>> can be done programmatically via PyYAML/ruamel.yaml or some other
>> library?  I know for the puppet release files I was using ruamel.yaml
>
> We're using yamllint and jsonschema. The whitepsace rules are enforced
> by yamllint, and now also by the yamlutils module in the releases repo
> which configures PyYAML to emit YAML that matches the linter rules.
>

Ok, I'll take a look. The issue I ran into with PyYAML was that the
ordering would end up different from what was previously being
generated.  I'm really not too keen on these types of changes, as they
seem to make things harder than they need to be.  In the future,
can we ask for commentary on this prior to making these changes, or
only do it at the beginning of the cycle? Now that we're working
towards the RCs, the last thing I want to be doing right now is
messing with YAML formatting while trying to get the final release
stuff together.
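
For illustration, the ordering problem described above can usually be
avoided with plain PyYAML by disabling key sorting on dump — a minimal
sketch, assuming PyYAML >= 5.1 (for the `sort_keys` argument); the
deliverable data below is invented for the example:

```python
import yaml

# Invented sample data roughly in the shape of an openstack/releases file.
release = {
    "launchpad": "puppet-swift",
    "releases": [
        {"version": "11.2.0",
         "projects": [
             {"repo": "openstack/puppet-swift",
              "hash": "6e8e27488da31b3b282fe1ce5e07939b3fa11b2f"}]},
    ],
}

# sort_keys=False preserves insertion order instead of alphabetizing keys,
# so regenerated files diff more cleanly against hand-maintained ones.
text = yaml.dump(release, default_flow_style=False, sort_keys=False)
print(text)
```

ruamel.yaml's round-trip mode goes further and also preserves comments and
quoting, which is why it is often preferred for editing existing files in
place.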

Thanks,
-Alex

> Doug
>
>> to match up with what was being generated, but this new formatting
>> seems to diverge from what it generates.  The last time I went
>> looking, the tooling provided seemed to be using templates which didn't
>> match anything if you were trying to manage the files as normal YAML
>> objects, which was kind of the problem.
>>
>> Thanks,
>> -Alex
>>
>> > Doug
>> >
>>
>



Re: [openstack-dev] [release][ptl] new "linter" rules for openstack/releases repository

2017-08-01 Thread Alex Schultz
On Tue, Aug 1, 2017 at 12:07 PM, Doug Hellmann  wrote:
> The release team is working on a series of patches to improve the
> data validation within openstack/releases. Part of that work is to
> apply consistent formatting rules for all of the YAML files, so it
> is easier to build tools that automatically update those files. To
> enable those consistent rules, we have had to "normalize" the use
> of whitespace in a bunch of the existing files.
>
> These changes mean that if you have built your own automation for
> adding new releases, you might have to make adjustments. If you do,
> please take the time to look at the tools within the repo (in
> particular the new-release, interactive-release, and edit-deliverable
> commands) to see if they meet your needs, or can be extended to do
> so. There's not much point in all of us building our own tools. :-)
>
> If you're curious about the actual changes, you can have a look at the
> patch series for "queens-indentation" at
> https://review.openstack.org/#/q/project:openstack/releases+topic:queens-indentation
>

Were the linting rules applied via templates, or is this something that
can be done programmatically via PyYAML/ruamel.yaml or some other
library?  I know for the puppet release files I was using ruamel.yaml
to match up with what was being generated, but this new formatting
seems to diverge from what it generates.  The last time I went
looking, the tooling provided seemed to be using templates which didn't
match anything if you were trying to manage the files as normal YAML
objects, which was kind of the problem.

Thanks,
-Alex

> Doug
>



[openstack-dev] [puppet] End of scheduled IRC meetings

2017-08-01 Thread Alex Schultz
Hey everyone,

So over the last cycle, as participation has dropped off, we've been
holding fewer of the scheduled meetings. In today's meeting[0] it was
expressed that the formal meeting is nice but realistically it's more
work than it's worth these days. So we will be dropping the formally
scheduled meeting in #openstack-meeting-4 in favor of ad-hoc meetings
in #puppet-openstack when needed.  I will be updating the appropriate
documentation and will be proposing the change to
openstack-infra/irc-meetings.  As always if anyone objects, please
feel free to reply.

Thanks,
-Alex


[0] 
http://eavesdrop.openstack.org/meetings/puppet_openstack/2017/puppet_openstack.2017-08-01-15.00.html



[openstack-dev] [puppet] Meeting tomorrow Aug 1, 2017 @ 1500 UTC

2017-07-31 Thread Alex Schultz
Just as a reminder, we do have a meeting tomorrow.  If you would like
to talk about anything, please make sure it's on the agenda[0].

Thanks,
-Alex

[0] https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20170801



Re: [openstack-dev] [tripleo] Proposing Saravanan KR core

2017-07-21 Thread Alex Schultz
On Fri, Jul 21, 2017 at 9:35 AM, Jason E. Rist  wrote:
> On 07/21/2017 09:08 AM, Yolanda Robla Mota wrote:
>> +1 from my side!
>> Saravanan is part of our team, and he is brilliant. He is one of our team
>> gurus when talking about TripleO and NFV integration. From our NFVPE point
>> of view, it will be great and well deserved, that Saravanan becomes core.
>>
>> On Fri, Jul 21, 2017 at 5:01 PM, Emilien Macchi  wrote:
>>
>> > Saravanan KR has shown an high level of expertise in some areas of
>> > TripleO, and also increased his involvement over the last months:
>> > - Major contributor in DPDK integration
>> > - Derived parameter works
>> > - and a lot of other things like improving UX and enabling new
>> > features to improve performances and networking configurations.
>> >
>> > I would like to propose Saravanan part of TripleO core and we expect
>> > his particular focus on t-h-t, os-net-config and tripleoclient for now
>> > but we hope to extend it later.
>> >
>> > As usual, we'll vote :-)
>> > Thanks,
>> > --
>> > Emilien Macchi
>> >
>> >
>>
>>
>>
>>
>>
>>
> +1
>

+1




Re: [openstack-dev] [tripleo] Proposing Bogdan Dobrelya core on TripleO / Containers

2017-07-21 Thread Alex Schultz
On Fri, Jul 21, 2017 at 12:58 PM, Pradeep Kilambi  wrote:
> On Fri, Jul 21, 2017 at 1:36 PM, Brent Eagles  wrote:
>>
>>
>> On Fri, Jul 21, 2017 at 12:25 PM, Emilien Macchi  wrote:
>>>
>>> Hi,
>>>
>>> Bogdan (bogdando on IRC) has been very active in Containerization of
>>> TripleO and his quality of review has increased over time.
>>> I would like to give him core permissions on container work in TripleO.
>>> Any feedback is welcome as usual, we'll vote as a team.
>>>
>>> Thanks,
>>> --
>>> Emilien Macchi
>>
>>
>> +1
>>
>>
>
> +1
>
>
> --
> Cheers,
> ~ Prad
>

+1




Re: [openstack-dev] [tripleo] Intermittent Jenkins failures

2017-07-20 Thread Alex Schultz
(updated topic to include [tripleo])

On Thu, Jul 20, 2017 at 9:20 AM, Abhishek Kane
 wrote:
> Hi,
>
>
>
> Recently saw intermittent jenkins failures in difference scenarios for patch
> https://review.openstack.org/#/c/475765/17.
>
>
>
> Current ones are-
>
> In overcloud deploy:
>
> http://logs.openstack.org/65/475765/17/check/gate-tripleo-ci-centos-7-scenario001-multinode-oooq/685b8bd/console.html
> http://logs.openstack.org/65/475765/17/check/gate-tripleo-ci-centos-7-scenario001-multinode-oooq-container/1dbde7d/console.html
>
>

https://bugs.launchpad.net/tripleo/+bug/1705481

>
> and undercloud install:
>
> http://logs.openstack.org/65/475765/17/gate/gate-tripleo-ci-centos-7-nonha-multinode-oooq/82
>
>

http://logs.openstack.org/65/475765/17/gate/gate-tripleo-ci-centos-7-nonha-multinode-oooq/82bd9ff/logs/undercloud/home/jenkins/undercloud_install.log.txt.gz#_2017-07-19_10_59_21

mirror issues

>
> Anybody else facing such issue?
>
>
>
> Thanks,
>
> Abhishek
>
>
>
>
>



Re: [openstack-dev] [neutron][l2-gateway] Added Ricardo Noriega to the core team

2017-07-19 Thread Alex Schultz
On Wed, Jul 19, 2017 at 9:58 AM, Ricardo Noriega De Soto
<rnori...@redhat.com> wrote:

> Thanks Gary for the opportunity! We'll keep fighting! :-)
>
>
Congrats. Your efforts in the puppet openstack repos to also get this
properly supported and tested have also been very appreciated.

Thanks,
-Alex


> On Wed, Jul 19, 2017 at 8:52 AM, Gary Kotton  wrote:
>
>> Hi,
>>
>> Over the last few months Ricardo Noriega has been making many
>> contributions to the project and has actually helped get it to the stage
>> where it’s a lot healthier than before ☺. I am adding him to the core
>> team.
>>
>> Congratulations!
>>
>> A luta continua
>>
>> Gary
>>
>> 
>>
>>
>
>
> --
> Ricardo Noriega
>
> Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
> Red Hat
> irc: rnoriega @freenode
>
>
>
>


Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-10 Thread Alex Schultz
On Fri, Jul 7, 2017 at 11:50 AM, James Slagle  wrote:
> I proposed a session for the PTG
> (https://etherpad.openstack.org/p/tripleo-ptg-queens) about forming a
> common plan and vision around Ansible in TripleO.
>
> I think it's important however that we kick this discussion off more
> broadly before the PTG, so that we can hopefully have some agreement
> for deeper discussions and prototyping when we actually meet in
> person.
>
> Right now, we have multiple uses of Ansible in TripleO:
>
> (0) tripleo-quickstart which follows the common and well accepted
> approach to bundling a set of Ansible playbooks/roles.
>
> (1) Mistral calling Ansible. This is the approach used by
> tripleo-validations where Mistral directly executes ansible playbooks
> using a dynamic inventory. The inventory is constructed from the
> server related stack outputs of the overcloud stack.
>
> (2) Ansible running playbooks against localhost triggered by the
> heat-config Ansible hook. This approach is used by
> tripleo-heat-templates for upgrade tasks and various tasks for
> deploying containers.
>
> (3) Mistral calling Heat calling Mistral calling Ansible. In this
> approach, we have Mistral resources in tripleo-heat-templates that are
> created as part of the overcloud stack and in turn, the created
> Mistral action executions run ansible. This has been prototyped with
> using ceph-ansible to install Ceph as part of the overcloud
> deployment, and some of the work has already landed. There are also
> proposed WIP patches using this approach to install Kubernetes.
>
> There are also some ideas forming around pulling the Ansible playbooks
> and vars out of Heat so that they can be rerun (or run initially)
> independently from the Heat SoftwareDeployment delivery mechanism:
>
> (4) https://review.openstack.org/#/c/454816/
>
> (5) Another idea I'd like to prototype is a local tool that runs on
> the undercloud and pulls all of the SoftwareDeployment data out of
> Heat as the stack is being created and generates corresponding Ansible
> playbooks to apply those deployments. Once a given playbook is
> generated by the tool, the tool would signal back to Heat that the
> deployment is complete. Heat then creates the whole stack without
> actually applying a single deployment to an overcloud node. At that
> point, Ansible (or Mistral->Ansible for an API) would be used to do
> the actual deployment of the Overcloud with the Undercloud as the
> ansible runner.
>
> All of this work has merit as we investigate longer term plans, and
> it's all at different stages with some being for dev/CI (0), some
> being used already in production (1 and 2), some just at the
> experimental stage (3 and 4), and some does not exist other than an
> idea (5).
>
> My intent with this mail is to start a discussion around what we've
> learned from these approaches and start discussing a consolidated plan
> around Ansible. And I'm not saying that whatever we come up with
> should only use Ansible a certain way. Just that we ought to look at
> how users/operators interact with Ansible and TripleO today and try
> and come up with the best solution(s) going forward.
>
> I think that (1) has been pretty successful, and my idea with (5)
> would use a similar approach once the playbooks were generated.
> Further, my idea with (5) would give us a fully backwards compatible
> solution with our existing template interfaces from
> tripleo-heat-templates. Longer term (or even in parallel for some
> time), the generated playbooks could stop being generated (and just
> exist in git), and we could consider moving away from Heat more
> permanently
>
> I recognize that saying "moving away from Heat" may be quite
> controversial. While it's not 100% the same discussion as what we are
> doing with Ansible, I think it is a big part of the discussion and if
> we want to continue with Heat as the primary orchestration tool in
> TripleO.
>
> I've been hearing a lot of feedback from various operators about how
> difficult the baremetal deployment is with Heat. While feedback about
> Ironic is generally positive, a lot of the negative feedback is around
> the Heat->Nova->Ironic interaction. And, if we also move more towards
> Ansible for the service deployment, I wonder if there is still a long
> term place for Heat at all.
>
> Personally, I'm pretty apprehensive about the approach taken in (3). I
> feel that it is a lot of complexity that could be done simpler if we
> took a step back and thought more about a longer term approach. I
> recognize that it's mostly an experiment/POC at this stage, and I'm
> not trying to directly knock down the approach. It's just that when I
> start to see more patches (Kubernetes installation) using the same
> approach, I figure it's worth discussing more broadly vs trying to
> have a discussion by -1'ing patch reviews, etc.
>
> I'm interested in all feedback of course. And I plan to take a shot at
> working on the prototype I 
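
For readers less familiar with approach (1): an Ansible dynamic inventory is
just an executable that, when called with --list, prints JSON in the
group/hostvars shape Ansible expects. A minimal hypothetical sketch follows —
the role names and addresses are invented for illustration, and a real
implementation would build the mapping from the overcloud stack's server
outputs rather than hard-coding it:

```python
import json

def build_inventory(stack_outputs):
    """Convert {role: {hostname: ip}} data into Ansible dynamic-inventory JSON."""
    inventory = {"_meta": {"hostvars": {}}}
    for role, hosts in stack_outputs.items():
        # One Ansible group per role, listing its hosts.
        inventory[role.lower()] = {"hosts": sorted(hosts)}
        # Per-host connection variables go under _meta.hostvars.
        for name, ip in hosts.items():
            inventory["_meta"]["hostvars"][name] = {"ansible_host": ip}
    return inventory

# Invented sample data standing in for real overcloud stack outputs.
outputs = {
    "Controller": {"overcloud-controller-0": "192.0.2.10"},
    "Compute": {"overcloud-novacompute-0": "192.0.2.20"},
}

print(json.dumps(build_inventory(outputs), indent=2))
```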

[openstack-dev] [puppet] Meeting on July 11, 2017

2017-07-10 Thread Alex Schultz
Hey folks,

Just a heads up we have a meeting scheduled for tomorrow July 11, 2017
@ 1500 UTC.  The agenda is available here[0].  I would like to take a
few mins to review any outstanding work for Pike.  Feel free to add
any additional topics.

Thanks,
-Alex

[0] https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20170711



[openstack-dev] [puppet][fuel] Proposal to drop Fuel CI jobs from Puppet CI

2017-07-06 Thread Alex Schultz
Hey folks,

Since Fuel has recently been moved to a hosted project and there is
very little activity anymore, I am proposing (with much sadness) that
we remove the Fuel CI jobs from Puppet CI.  This is being brought up
because we have a change in puppet-swift[0] that is going to break the
Fuel jobs, and without any movement on the proposed fix[1] for it in
Fuel, it would seem that we'll end up just breaking Fuel without
anyone to fix it.  I'd love to keep it around because it adds
additional coverage for us, but since it seems to no longer be kept
current, I'd rather not block future patches due to broken Fuel CI.
Let me know if there are any objections or alternatives.  If
there's no response I'll work on the removal next week (~July 13,
2017).

Thanks,
-Alex

[0] https://review.openstack.org/#/c/445998/
[1] https://review.openstack.org/#/c/475043/



Re: [openstack-dev] [TripleO] quickstart failing due to unresolved dependencies

2017-07-05 Thread Alex Schultz
On Mon, Jul 3, 2017 at 11:18 PM, Udi Kalifon <ukali...@redhat.com> wrote:
> I'm trying to install Pike. The undercloud has versions:
> puppet-tripleo-7.1.1-0.20170627224658.f99b72a.el7.centos.noarch
> instack-undercloud-7.1.1-0.20170619172442.4de8226.el7.centos.noarch
>

I think you're missing, https://review.openstack.org/#/c/475352/
because your puppet-tripleo has
https://review.openstack.org/#/c/475387/ which removed that profile.
Update your instack-undercloud.

Thanks,
-Alex

>
>
> Regards,
> Udi Kalifon; Senior QE; RHOS-UI Automation
>
>
> On Mon, Jul 3, 2017 at 8:58 PM, Alex Schultz <aschu...@redhat.com> wrote:
>>
>> On Sun, Jul 2, 2017 at 7:11 AM, Udi Kalifon <ukali...@redhat.com> wrote:
>> > Hi.
>> >
>> > I tried to install the latest TripleO with quickstart, and it failed in
>> > the
>> > undercloud install. I logged in to the undercloud and checked the
>> > install
>> > log, and I see errors there like "ModuleLoader: module 'rabbitmq' has
>> > unresolved dependencies". I pasted the output here:
>> > http://paste.openstack.org/show/614247/
>> >
>> > Any help on how to fix this will be appreciated !
>>
>> It seems to have failed while trying to run puppet and it couldn't
>> find a specific class.
>>
>> 2017-07-02 12:37:12,614 INFO: Error: Evaluation Error: Error while
>> evaluating a Function Call, Could not find class
>> ::tripleo::profile::base::ui for undercloud at
>> /etc/puppet/manifests/puppet-stack-config.pp:585:3 on node undercloud
>>
>> What version are you deploying?  This may happen if there's a
>> mismatch between the instack-undercloud and the puppet-tripleo
>> packages.
>>
>> Thanks,
>> -Alex
>>
>>
>> >
>> > Regards,
>> > Udi Kalifon
>> >
>> >
>> >
>>
>
>
>
>



Re: [openstack-dev] [ironic][magnum][kolla][ansible][puppet][rally] removing SSH drivers from Ironic

2017-07-05 Thread Alex Schultz
On Wed, Jul 5, 2017 at 5:01 AM, Pavlo Shchelokovskyy
 wrote:
> Hi all,
>
> as mitaka branch was finally EOLed, Ironic team is going to proceed with
> removal of SSH-based power and management drivers for virtualized HW which
> were deprecated back in newton release.
>
> Since newton the virtualbmc-based simulation of IPMI-capable HW is
> officially supported, and we plan to switch all gates away from using *_ssh
> drivers and eventually remove those from the ironic code, most probably in
> Pike release.
>
> I've skimmed through the project-config/zuul/layout.yaml and found a number
> of projects that use some gate jobs with 'ironic' in the job name which are
> not defined in project-config/jenkins/jobs/ironic.yaml, those project are
> tagged in the subject of this message.
>
> While kolla, magnum, OSA and rally seem to have only non-voting jobs with
> ironic and thus should not be completely broken by removal of SSH drivers
> anyway, puppet-ironic seems to have a voting
> "puppet-openstack-integration-jobs-scenario002" job.
>

We're OK, we don't have the ssh driver[0] in our test job.  We also
had previously deprecated the ssh driver in puppet-ironic[1] so we can
remove it now that we're in Pike.

Thanks,
-Alex

[0] 
https://github.com/openstack/puppet-openstack-integration/blob/master/manifests/ironic.pp#L73
[1] https://review.openstack.org/#/c/446918/

> This message especially concerns deployment-specific projects that do not
> install ironic via DevStack in their gate jobs. To successfully install a
> working ironic which is capable to actually deploy nodes, such projects will
> have to incorporate some additional steps to setup virtualbmc-based HW
> simulation for baremetal nodes and configure nodes enrolled in ironic
> accordingly.
>
> If your project is mentioned, please ensure that any ironic-including gate
> jobs you use do not setup ironic with *_ssh drivers, and switch to other
> supported drivers if needed. Examples of necessary steps can be found in
> ironic's DevStack plugin and in the playbooks of openstack/bifrost project.
>
> For any questions please do not hesitate to contact ironic team, we'll be
> glad to help you.
>
> Best regards,
> --
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com
>
>



Re: [openstack-dev] [TripleO] quickstart failing due to unresolved dependencies

2017-07-03 Thread Alex Schultz
On Sun, Jul 2, 2017 at 7:11 AM, Udi Kalifon  wrote:
> Hi.
>
> I tried to install the latest TripleO with quickstart, and it failed in the
> undercloud install. I logged in to the undercloud and checked the install
> log, and I see errors there like "ModuleLoader: module 'rabbitmq' has
> unresolved dependencies". I pasted the output here:
> http://paste.openstack.org/show/614247/
>
> Any help on how to fix this will be appreciated !

It seems to have failed while trying to run puppet and it couldn't
find a specific class.

2017-07-02 12:37:12,614 INFO: Error: Evaluation Error: Error while
evaluating a Function Call, Could not find class
::tripleo::profile::base::ui for undercloud at
/etc/puppet/manifests/puppet-stack-config.pp:585:3 on node undercloud

What version are you deploying?  This may happen if there's a
mismatch between the instack-undercloud and the puppet-tripleo
packages.
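For anyone debugging this, one rough way to spot that kind of mismatch is to compare the installed versions of the two packages. A minimal sketch, assuming an RPM-based undercloud; the major.minor comparison below is only a heuristic for illustration, not an official compatibility rule:

```python
import subprocess

def package_version(name):
    """Return the installed RPM version of a package, or None if it is
    not installed (or rpm itself is unavailable)."""
    try:
        result = subprocess.run(
            ["rpm", "-q", "--qf", "%{VERSION}", name],
            capture_output=True, text=True, check=True)
        return result.stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return None

def same_minor(v1, v2):
    """Heuristic: treat two versions as matching if their major.minor
    components agree (e.g. 7.0.1 vs 7.0.3)."""
    return v1.split(".")[:2] == v2.split(".")[:2]

# iu = package_version("instack-undercloud")
# pt = package_version("puppet-tripleo")
# if iu and pt and not same_minor(iu, pt):
#     print("possible mismatch:", iu, "vs", pt)
```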

Thanks,
-Alex


>
> Regards,
> Udi Kalifon
>



Re: [openstack-dev] [heat][mistral][deployment] how to add deployment roles

2017-07-03 Thread Alex Schultz
On Tue, Jun 27, 2017 at 2:19 PM, Dan Trainor  wrote:
> Hi -
>
> I'm looking for the glue that populates the overcloud role list.
>
> I can get a list of roles via 'openstack overcloud role list', however I'm
> looking to create new roles to incorporate into this list.
>
> I got as far as using 'mistral action-update' against what I believe to be
> the proper action (tripleo.role.list) but am not sure what to use as the
> source of what I would be updating, nor am I finding any information about
> how that runs and where it gets its data from.  I also had a nice exercise
> pruning the output of 'mistral action-*' commands which was pretty
> insightful and helped me hone in on what I was looking for, but still
> uncertain of.
>
> Pretty sure I'm missing a few details along the way here, too.
>
> Can someone please shed some light on this so I can have a better
> understanding of the process?
>

Sorry for the delay in replying (PTO). The overcloud roles actions
operate on a folder, which by default is
/usr/share/openstack-tripleo-heat-templates/roles. However, it does
provide a command line parameter to change this folder to wherever
you would like to keep your custom roles.  The docs around this are
currently up for review[0] but we do have an existing readme in the
roles folder[1] that has some of this information.  The plan is to
eventually allow for some mistral actions around roles, but we
probably won't have that until later.  Right now this is purely a
tripleoclient function.

Thanks,
-Alex

[0] https://review.openstack.org/#/c/476236/
[1] 
https://github.com/openstack/tripleo-heat-templates/blob/master/roles/README.rst

> Thanks!
> -dant
>
>



Re: [openstack-dev] [tripleo] Role updates

2017-06-14 Thread Alex Schultz
On Tue, Jun 13, 2017 at 11:11 AM, Alex Schultz <aschu...@redhat.com> wrote:
> On Tue, Jun 13, 2017 at 6:58 AM, Dan Prince <dpri...@redhat.com> wrote:
>> On Fri, 2017-06-09 at 09:24 -0600, Alex Schultz wrote:
>>> Hey folks,
>>>
>>> I wanted to bring to your attention that we've merged the change[0]
>>> to
>>> add a basic set of roles that can be combined to create your own
>>> roles_data.yaml as needed.  With this change the roles_data.yaml and
>>> roles_data_undercloud.yaml files in THT should not be changed by
>>> hand.
>>
>> In general I like the feature.
>>
>> I added some comments to your validations [1] patch below. We need
>> those validations, but I think we need to carefully consider adding a
>> hard dependency on python-tripleoclient simply to have validations in
>> tree. Wondering if perhaps a t-h-t-utils library project might be in
>> order here to contain routines we use in t-h-t and in higher level
>> workflow tools in Mistral and on the CLI? This might also make the
>> tools/process-templates.py stuff cleaner as well.
>>
>> Thoughts?
>
> So my original implementation of the roles stuff included a standalone
> script in THT to generate the roles_data.yaml files.  This was -1'd as
> realistically the actions for managing this should probably live
> within python-tripleoclient.  This made sense to me as that's how the
> end user really should be interacting with these things.  Given that
> tripleoclient and the UI are the two ways an operator is going to
> consume THT, I think there is already an undocumented requirement
> that should be there.
>
> An alternative would be to move the roles generation items into
> tripleo-common, but then we would have to write two distinct ways of
> executing this code: one being tripleoclient and the other being
> a standalone script which would basically have to reinvent the
> interface provided by tripleoclient/openstackclient.  Since we're not
> allowing folks to dynamically construct the roles_data.yaml as part of
> the overcloud deployment yet, I'm not sure we should try and move this
> around further unless there's an agreed upon way we want to handle
> this.
>
> I think the better work would be to split the
> tripleoclient/instack-undercloud dependency which is really where the
> problem lies.  We shouldn't be pulling in the world for tripleoclient
> if we are just going to operate on only the overcloud.

As a follow up, I've taken some time to move the roles functions in to
tripleo-common[0] and out of tripleoclient[1]. With this, I've also
updated the validation patch[2] with a small python script that leverages
the tripleo-common work.

Of course while writing this email I noticed that tripleo-common also
pulls in instack-undercloud[3] like tripleoclient[4] so I'm not sure
this is actually an improvement.  ¯\_(ツ)_/¯

Thanks,
-Alex

[0] https://review.openstack.org/#/c/474332/
[1] https://review.openstack.org/#/c/474343/
[2] https://review.openstack.org/#/c/472731/
[3] 
https://github.com/rdo-packages/tripleo-common-distgit/blob/rpm-master/openstack-tripleo-common.spec#L21
[4] 
https://github.com/rdo-packages/tripleoclient-distgit/blob/rpm-master/python-tripleoclient.spec#L36

>
> Thanks,
> -Alex
>
>>
>> Dan
>>
>>> Instead if you have an update to a role, please update the
>>> appropriate
>>> roles/*.yaml file. I have proposed a change[1] to THT with additional
>>> tools to validate that the roles/*.yaml files are updated and that
>>> there are no unaccounted for roles_data.yaml changes.  Additionally
>>> this change adds in a new tox target to assist in the generation of
>>> these basic roles data files that we provide.
>>>
>>> Ideally I would like to get rid of the roles_data.yaml and
>>> roles_data_undercloud.yaml so that the end user doesn't have to
>>> generate this file at all but that won't happen this cycle.  In the
>>> mean time, additional documentation around how to work with roles has
>>> been added to the roles README[2].
>>>
>>> Thanks,
>>> -Alex
>>>
>>> [0] https://review.openstack.org/#/c/445687/
>>> [1] https://review.openstack.org/#/c/472731/
>>> [2] https://github.com/openstack/tripleo-heat-templates/blob/master/r
>>> oles/README.rst
>>>

Re: [openstack-dev] [puppet][tripleo] Add ganesha puppet module

2017-06-13 Thread Alex Schultz
On Mon, Jun 12, 2017 at 4:27 AM, Jan Provaznik  wrote:
> Hi,
> we would like to use nfs-ganesha for accessing shares on ceph storage
> cluster[1]. There is not yet a puppet module which would install and
> configure the nfs-ganesha service. So to be able to set up nfs-ganesha with
> TripleO, I'd like to create a new ganesha puppet module under
> openstack-puppet umbrella unless there is a disagreement?
>

I don't have any particular issue with it.  Feel free to follow the guide[0].

Thanks,
-Alex

[0] https://docs.openstack.org/developer/puppet-openstack-guide/new-module.html

> Thanks, Jan
>
> [1] https://blueprints.launchpad.net/tripleo/+spec/nfs-ganesha
>



Re: [openstack-dev] [tripleo] Role updates

2017-06-13 Thread Alex Schultz
On Tue, Jun 13, 2017 at 6:58 AM, Dan Prince <dpri...@redhat.com> wrote:
> On Fri, 2017-06-09 at 09:24 -0600, Alex Schultz wrote:
>> Hey folks,
>>
>> I wanted to bring to your attention that we've merged the change[0]
>> to
>> add a basic set of roles that can be combined to create your own
>> roles_data.yaml as needed.  With this change the roles_data.yaml and
>> roles_data_undercloud.yaml files in THT should not be changed by
>> hand.
>
> In general I like the feature.
>
> I added some comments to your validations [1] patch below. We need
> those validations, but I think we need to carefully consider adding a
> hard dependency on python-tripleoclient simply to have validations in
> tree. Wondering if perhaps a t-h-t-utils library project might be in
> order here to contain routines we use in t-h-t and in higher level
> workflow tools in Mistral and on the CLI? This might also make the
> tools/process-templates.py stuff cleaner as well.
>
> Thoughts?

So my original implementation of the roles stuff included a standalone
script in THT to generate the roles_data.yaml files.  This was -1'd as
realistically the actions for managing this should probably live
within python-tripleoclient.  This made sense to me as that's how the
end user really should be interacting with these things.  Given that
tripleoclient and the UI are the two ways an operator is going to
consume THT, I think there is already an undocumented requirement
that should be there.

An alternative would be to move the roles generation items into
tripleo-common, but then we would have to write two distinct ways of
executing this code: one being tripleoclient and the other being
a standalone script which would basically have to reinvent the
interface provided by tripleoclient/openstackclient.  Since we're not
allowing folks to dynamically construct the roles_data.yaml as part of
the overcloud deployment yet, I'm not sure we should try and move this
around further unless there's an agreed upon way we want to handle
this.

I think the better work would be to split the
tripleoclient/instack-undercloud dependency which is really where the
problem lies.  We shouldn't be pulling in the world for tripleoclient
if we are just going to operate on only the overcloud.

Thanks,
-Alex

>
> Dan
>
>> Instead if you have an update to a role, please update the
>> appropriate
>> roles/*.yaml file. I have proposed a change[1] to THT with additional
>> tools to validate that the roles/*.yaml files are updated and that
>> there are no unaccounted for roles_data.yaml changes.  Additionally
>> this change adds in a new tox target to assist in the generation of
>> these basic roles data files that we provide.
>>
>> Ideally I would like to get rid of the roles_data.yaml and
>> roles_data_undercloud.yaml so that the end user doesn't have to
>> generate this file at all but that won't happen this cycle.  In the
>> mean time, additional documentation around how to work with roles has
>> been added to the roles README[2].
>>
>> Thanks,
>> -Alex
>>
>> [0] https://review.openstack.org/#/c/445687/
>> [1] https://review.openstack.org/#/c/472731/
>> [2] https://github.com/openstack/tripleo-heat-templates/blob/master/r
>> oles/README.rst
>>
>



Re: [openstack-dev] [tripleo] Role updates

2017-06-12 Thread Alex Schultz
On Mon, Jun 12, 2017 at 2:55 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:
> On 06/09/2017 05:24 PM, Alex Schultz wrote:
>>
>> Hey folks,
>>
>> I wanted to bring to your attention that we've merged the change[0] to
>> add a basic set of roles that can be combined to create your own
>> roles_data.yaml as needed.  With this change the roles_data.yaml and
>> roles_data_undercloud.yaml files in THT should not be changed by hand.
>> Instead if you have an update to a role, please update the appropriate
>> roles/*.yaml file. I have proposed a change[1] to THT with additional
>> tools to validate that the roles/*.yaml files are updated and that
>> there are no unaccounted for roles_data.yaml changes.  Additionally
>> this change adds in a new tox target to assist in the generation of
>> these basic roles data files that we provide.
>>
>> Ideally I would like to get rid of the roles_data.yaml and
>> roles_data_undercloud.yaml so that the end user doesn't have to
>> generate this file at all but that won't happen this cycle.  In the
>> mean time, additional documentation around how to work with roles has
>> been added to the roles README[2].
>
>
> Hi, this is awesome! Do we expect more example roles to be added? E.g. I
> could add a role for a reference Ironic Conductor node.
>

Yes. My expectation is that as we come up with new roles for supported
deployment types, we add them to the THT/roles directory so end
users can also use them.  The base set came from some work we did
during the Ocata cycle to have 3 base sets of architectures.

3 controller, 3 compute, 1 ceph (ha)
1 controller, 1 compute, 1 ceph (nonha)
3 controller, 3 database, 3 messaging, 2 networker, 1 compute, 1 ceph (advanced)

Feel free to propose additional roles if you have architectures you'd
like to have be reusable.

Thanks,
-Alex


>>
>> Thanks,
>> -Alex
>>
>> [0] https://review.openstack.org/#/c/445687/
>> [1] https://review.openstack.org/#/c/472731/
>> [2]
>> https://github.com/openstack/tripleo-heat-templates/blob/master/roles/README.rst
>>
>
>



Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-09 Thread Alex Schultz
On Tue, May 30, 2017 at 3:08 PM, Emilien Macchi  wrote:
> On Tue, May 30, 2017 at 8:36 PM, Matthew Thode
>  wrote:
>> We have a problem in requirements that projects that don't have the
>> cycle-with-intermediary release model (most of the cycle-with-milestones
>> model) don't get integrated with requirements until the cycle is fully
>> done.  This causes a few problems.
>>
>> * These projects don't produce a consumable release for requirements
>> until end of cycle (which does not accept beta releases).
>>
>> * The former causes old requirements to be kept in place, meaning caps,
>> exclusions, etc. are being kept, which can cause conflicts.
>>
>> * Keeping the old version in requirements means that cross dependencies
>> are not tested with updated versions.
>>
>> This has hit us with the mistral and tripleo projects particularly
>> (tagged in the title).  They disallow pbr-3.0.0 and in the case of
>> mistral sqlalchemy updates.
>>
>> [mistral]
>> mistral - blocking sqlalchemy - milestones
>>
>> [tripleo]
>> os-refresh-config - blocking pbr - milestones
>> os-apply-config - blocking pbr - milestones
>> os-collect-config - blocking pbr - milestones
>
> These are cycle-with-milestones, like os-net-config for example,
> which wasn't mentioned in this email. It has the same releases as
> os-net-config also, so I'm confused why these 3 cause an issue, I
> probably missed something.
>
> Anyway, I'm happy to change os-*-config (from TripleO) to be
> cycle-with-intermediary. Quick question though, which tag would you
> like to see, regarding what we already did for pike-1?
>

I ran into a case where I wanted to add python-tripleoclient to
test-requirements for tripleo-heat-templates but it's not in the
global requirements. In looking into adding this, I noticed that
python-tripleoclient and tripleo-common are not
cycle-with-intermediary either. Should/can we update these as well?
tripleo-common is already in the global requirements but I guess since
we've been releasing non-prerelease versions fairly regularly with the
milestones it hasn't been a problem.

Thanks,
-Alex

> Thanks,
>
>> [nova]
>> os-vif - blocking pbr - intermediary
>>
>> [horizon]
>> django-openstack-auth - blocking django - intermediary
>>
>>
>> So, here's what needs doing.
>>
>> Those projects that are already using the cycle-with-intermediary model
>> should just do a release.
>>
>> For those that are using cycle-with-milestones, you will need to change
>> to the cycle-with-intermediary model, and do a full release, both can be
>> done at the same time.
>>
>> If anyone has any questions or wants clarifications this thread is good,
>> or I'm on irc as prometheanfire in the #openstack-requirements channel.
>>
>> --
>> Matthew Thode (prometheanfire)
>>
>>
>
>
>
> --
> Emilien Macchi
>



[openstack-dev] [tripleo] Role updates

2017-06-09 Thread Alex Schultz
Hey folks,

I wanted to bring to your attention that we've merged the change[0] to
add a basic set of roles that can be combined to create your own
roles_data.yaml as needed.  With this change the roles_data.yaml and
roles_data_undercloud.yaml files in THT should not be changed by hand.
Instead if you have an update to a role, please update the appropriate
roles/*.yaml file. I have proposed a change[1] to THT with additional
tools to validate that the roles/*.yaml files are updated and that
there are no unaccounted for roles_data.yaml changes.  Additionally
this change adds in a new tox target to assist in the generation of
these basic roles data files that we provide.

Ideally I would like to get rid of the roles_data.yaml and
roles_data_undercloud.yaml so that the end user doesn't have to
generate this file at all but that won't happen this cycle.  In the
mean time, additional documentation around how to work with roles has
been added to the roles README[2].
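To make the idea concrete: the generation the tox target assists with essentially boils down to combining the selected roles/*.yaml files into a single roles_data.yaml. A simplified sketch of that idea (the real logic lives in the client tooling; the helper name and layout here are illustrative only):

```python
from pathlib import Path

def generate_roles_data(roles_dir, role_names, output_path):
    """Concatenate the chosen roles/<name>.yaml files into one
    roles_data.yaml-style file (simplified illustration)."""
    parts = []
    for name in role_names:
        role_file = Path(roles_dir) / ("%s.yaml" % name)
        parts.append(role_file.read_text().rstrip() + "\n")
    Path(output_path).write_text("".join(parts))

# e.g. a basic layout built from individual role files:
# generate_roles_data("roles", ["Controller", "Compute", "CephStorage"],
#                     "roles_data.yaml")
```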

Thanks,
-Alex

[0] https://review.openstack.org/#/c/445687/
[1] https://review.openstack.org/#/c/472731/
[2] 
https://github.com/openstack/tripleo-heat-templates/blob/master/roles/README.rst



Re: [openstack-dev] [tripleo] [ci] Adding idempotency job on overcloud deployment.

2017-06-07 Thread Alex Schultz
On Wed, Jun 7, 2017 at 5:20 AM, Sofer Athlan-Guyot  wrote:
> Hi,
>
> Emilien Macchi  writes:
>
>> On Wed, Jun 7, 2017 at 12:45 PM, Sofer Athlan-Guyot  
>> wrote:
>>> Hi,
>>>
>>> I don't think we have such a job in place.  Basically that would check
>>> that re-running the "openstack deploy ..." command won't do anything.
>>>
>>> We had such an error by the past[1], but I'm not sure this has been
>>> captured by an associated job.
>>>
>>> WDYT ?
>>
>> It would be interesting to measure how much time it takes to run
>> it again.
>
> Could you point out how such an experiment could be done ?
>
>> If it's short, we could add it to all our scenarios + ovb
>> jobs.  If it's long, maybe we need an additional job, but it would
>> take more resources, so maybe we could run it in periodic pipeline
>> (note that periodic jobs are not optimal since we could break
>> something quite easily).
>
> Just adding as context that the issue was already raised[1].  Besides
> the time constraint, it was pointed out that we would also need to parse the
> log to find out if anything was restarted.  But it could be a second
> step.  For parsing, this code was pointed out[2].
>

There are a few things that would need to be enabled in order to reuse
some of this work.  We'll need to add the ability to generate a report
on the puppet run[0], and then we'll need to be able to capture it[1]
somewhere we can run that parsing code against.  From there,
just rerunning the installation would be a simple start to the
idempotency check.  In Fuel, we had hacked in a special flag[2] that
we used in testing to rerun a task immediately and find when
a specific task was not idempotent, in addition to rerunning the
entire deployment. For TripleO a similar concept would be to rerun the
steps twice, but that's usually not where the issues crop up for us. So
rerunning the entire deployment would be better, as we
tend to have issues with configuration items conflicting between
steps.
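As a sketch of what the parsing side could eventually look like once reports are captured, here is a minimal check that flags a second run as non-idempotent when the run summary reports changed resources. The summary format here is simplified and hypothetical; a real check would also look for restarted services, as noted above:

```python
import re

def run_was_idempotent(summary_text):
    """Return True if a puppet run summary reports zero changed
    resources (simplified: looks for a 'changed: N' line)."""
    match = re.search(r"^\s*changed:\s*(\d+)", summary_text, re.MULTILINE)
    if match is None:
        raise ValueError("no 'changed' count found in summary")
    return int(match.group(1)) == 0

# Typical usage against puppet's last-run summary (path may vary by setup):
# with open("/var/lib/puppet/state/last_run_summary.yaml") as f:
#     print("idempotent" if run_was_idempotent(f.read()) else "changes made")
```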

Thanks,
-Alex

[0] https://review.openstack.org/#/c/273740/4/mcagents/puppetd.rb@204
[1] https://review.openstack.org/#/c/273740/4/mcagents/puppetd.rb@102
[2] https://review.openstack.org/#/c/273737/

> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/114836.html
> [2] 
> https://review.openstack.org/#/c/279271/9/fuelweb_test/helpers/astute_log_parser.py@212
>
>>
>>> [1] https://bugs.launchpad.net/tripleo/+bug/1664650
>>> --
>>> Sofer Athlan-Guyot
>>>
>>
>>
>>
>> --
>> Emilien Macchi
>>
> --
> Sofer Athlan-Guyot
>


