Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-07-17 Thread Lance Bragstad
On Mon, Jul 17, 2017 at 6:39 PM, Zane Bitter  wrote:

> So the application credentials spec has merged - huge thanks to Monty and
> the Keystone team for getting this done:
>
> https://review.openstack.org/#/c/450415/
> http://specs.openstack.org/openstack/keystone-specs/specs/
> keystone/pike/application-credentials.html
>
> However, it appears that there was a disconnect in how two groups of folks
> were reading the spec that only became apparent towards the end of the
> process. Specifically, at this exact moment:
>
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone
> /%23openstack-keystone.2017-06-09.log.html#t2017-06-09T17:43:59
>
> To summarise, Keystone folks are uncomfortable with the idea of
> application credentials that share the lifecycle of the project (rather
> than the user that created them), because a consumer could surreptitiously
> create an application credential and continue to use that to access the
> OpenStack APIs even after their User account is deleted. The agreed
> solution was to delete the application credentials when the User that
> created them is deleted, thus tying the lifecycle to that of the User.
>
> This means that teams using this feature will need to audit all of their
> applications for credential usage and rotate any credentials created by a
> soon-to-be-former team member *before* removing said team member's User
> account, or risk breakage. Basically we're relying on users to do the Right
> Thing (bad), but when they don't we're defaulting to breaking [some of]
> their apps over leaving them insecure (all things being equal, good).
>
> Unfortunately, if we do regard this as a serious problem, I don't think
> this solution is sufficient. Assuming that application credentials are
> stored on VMs in the project for use by the applications running on them,
> then anyone with access to those servers can obtain the credentials and
> continue to use them even if their own account is deleted. The solution to
> this is to rotate *all* application keys when a user is deleted. So really
> we're relying on users to do the Right Thing (bad), but when they don't
> we're defaulting to breaking [some of] their apps *and* [potentially]
> leaving them insecure (worst possible combination).
>
> (We're also being inconsistent, because according to the spec if you
> revoke a role from a User then any application credentials they've created
> that rely on that role continue to work. It's only if you delete the User
> that they're revoked.)
>
>
> As far as I can see, there are only two solutions to the fundamental
> problem:
>
> 1) Fine-grained user-defined access control. We can minimise the set of
> things that the application credentials are authorised to do. That's out of
> scope for this spec, but something we're already planning as a future
> enhancement.
> 2) Automated regular rotation of credentials. We can make sure that
> whatever a departing team member does manage to hang onto quickly becomes
> useless.
>
> By way of comparison, AWS does both. There's fine-grained defined access
> control in the form of IAM Roles, and these Roles can be associated with
> EC2 servers. The servers have an account with rotating keys provided
> through the metadata server. I can't find the exact period of rotation
> documented, but it's on the order of magnitude of 1 hour.
>
> There's plenty not to like about this design. Specifically, it's 2017 not
> 2007 and the idea that there's no point offering to segment permissions at
> a finer grained level than that of a VM no longer holds water IMHO, thanks
> to SELinux and containers. It'd be nice to be able to provide multiple sets
> of credentials to different services running on a VM, and it's probably
> essential to our survival that we find a way to provide individual
> credentials to containers. Nevertheless, what they have does solve the
> problem.
>
> Note that there's pretty much no sane way for the user to automate
> credential rotation themselves, because it's turtles all the way down. e.g.
> it's easy in principle to set up a Heat template with a Mistral workflow
> that will rotate the credentials for you, but they'll do so using trusts
> that are, in turn, tied back to the consumer who created the stack. (It
> suddenly occurs to me that this is a problem that all services using trusts
> are going to need to solve.) Somewhere it all has to be tied back to
> something that survives the entire lifecycle of the project.
>
> Would Keystone folks be happy to allow persistent credentials once we have
> a way to hand out only the minimum required privileges?
>

If I'm understanding correctly, this would make application credentials
dependent on several cycles of policy work. Right?


>
> If not I think we're back to https://review.openstack.org/#/c/93/
>
> cheers,
> Zane.
>
> __
> OpenStack Development Mailing List (not for usage questions)

[openstack-dev] [tripleo] [kolla] [rdo] EPEL and RDO in Kolla containers

2017-07-17 Thread David Moreau Simard
Hi,

Just FYI...

I took a bit of time to document what seem to be most (if not all) of the
dependencies that vanilla Kolla currently pulls from EPEL in its CentOS
binary builds.
This list is documented here [1].

TripleO doesn't use all the CentOS binary containers from Kolla; only
a subset of them is relevant to the projects supported in TripleO
right now.
In that context, removing EPEL means adding a few overrides [2] to get
the image builds to work in the first place.

I haven't tested the containers deployed with these overrides, so there
might be issues I'm not aware of.
There are also some odd package mismatches due to the use of EPEL that
haven't been picked up.
For example, both "python2-msgpack" and "python2-crypto" are provided
by EPEL; however, these packages aren't available from RDO under those
names, and are instead named "python-msgpack" and "python-crypto"
respectively.
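
A rough way to double-check this kind of mismatch from a CentOS build
environment (assuming both EPEL and RDO are enabled, and yum-utils is
installed for the second command):

    # show which repo offers each of the names mentioned above
    yum --showduplicates list python2-msgpack python-msgpack python2-crypto python-crypto

    # or print package name and source repo id directly
    repoquery --qf '%{name}-%{version} %{repoid}' python2-msgpack python-msgpack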

[1]: https://etherpad.openstack.org/p/kolla-epel
[2]: https://review.openstack.org/#/c/48/

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [infra][nova] Corrupt nova-specs repo

2017-07-17 Thread Jeremy Stanley
On 2017-06-30 16:11:42 +1000 (+1000), Ian Wienand wrote:
> Unfortunately it seems the nova-specs repo has undergone some
> corruption, currently manifesting itself in an inability to be pushed
> to github for replication.
[...]
> So you may notice this is refs/changes/26/463526/[2-9]
> 
> Just deleting these refs and expiring the objects might be the easiest
> way to go here, and seems to get things purged and fix up fsck
[...]

This plan seems reasonable to me. I can't personally think of any
alternatives and if someone else here knows of some arcane git
repair wizardry you haven't tried, they haven't chimed in to suggest
it either.
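
For reference, the sort of thing that plan boils down to, as a sketch only (the
change refs are the ones Ian quoted above, and the commands would be run in the
bare nova-specs repository on the Gerrit server):

    # drop the damaged change refs
    for n in 2 3 4 5 6 7 8 9; do
        git update-ref -d "refs/changes/26/463526/$n"
    done
    # expire reflog entries and prune the now-unreachable objects
    git reflog expire --expire=now --all
    git gc --prune=now
    # confirm the repository passes fsck again
    git fsck --full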
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Fox, Kevin M
I re-read this and maybe you mean that some containers will live only outside of
k8s and some will live in k8s, not that you want to support not having k8s
at all with the same code base? That would be a much easier thing, and I agree
ansible would be very good at that.

Thanks,
Kevin

From: Fox, Kevin M
Sent: Monday, July 17, 2017 4:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack 
services on Kubernetes

I think if you try to go down the Kubernetes & !Kubernetes path, you'll end up 
re-implementing pretty much all of Kubernetes, or you will use Kubernetes just 
like !Kubernetes and gain very little benefit from it.

Thanks,
Kevin

From: Flavio Percoco [fla...@redhat.com]
Sent: Monday, July 17, 2017 8:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack 
services on Kubernetes

On 17/07/17 09:47 -0400, James Slagle wrote:
>On Mon, Jul 17, 2017 at 8:05 AM, Flavio Percoco  wrote:
>> Thanks for all the feedback so far. This is one of the things I appreciate
>> the
>> most about this community, Open conversations, honest feedback and will to
>> collaborate.
>>
>> I'm top-posting to announce that we'll have a joint meeting with the Kolla
>> team
>> on Wednesday at 16:00 UTC. I know it's not an ideal time for many (it's not
>> for
>> me) but I do want to have a live discussion with the rest of the Kolla team.
>>
>> Some questions about the meeting:
>>
>> * How much time can we allocate?
>> * Can we prepare an agenda rather than just discussing "TripleO is thinking
>> of
>>  using Ansible and not kolla-kubernetes"? (I'm happy to come up with such
>>  agenda)
>
>It may help to prepare some high level requirements around what we
>need out of a solution. For the ansible discussion I started this
>etherpad:
>
>https://etherpad.openstack.org/p/tripleo-ptg-queens-ansible
>
>How we use Ansible and what we want to use it for is related to this
>discussion around Helm. Although, it's not the exact same discussion,
>so if you wanted to start a new etherpad more specific to
>tripleo/kubernetes that may be good as well.
>
>One thing I think is important in this discussion is that we should be
>thinking about deploying containers on both Kubernetes and
>!Kubernetes. That is one of the reasons I like the ansible approach,
>in that I think it could address both cases with a common interface
>and API. I don't think we should necessarily choose a solution that
>requires us to deploy on Kubernetes, because then we are stuck with that
>choice. It'd be really nice to just "docker run" sometimes for
>dev/test. I don't know if Helm has that abstraction or not, I'm just
>trying to capture the requirement.

Yes!

Thanks for pointing this out as this is one of the reasons why I was proposing
ansible as our common interface w/o any extra layer.

I'll probably start a new etherpad for this as I would prefer not to distract
the rest of the TripleO + ansible discussion. At the end, if ansible ends up
being the tool we pick, I'll make sure to update your etherpad.

Flavio

>If you consider the parallel with Heat in this regard, we are
>currently "stuck" deploying on OpenStack (undercloud with Heat). We've
>had to work on a lot of complementary features to add the flexibility
>to TripleO that are a result of having to use OpenStack (OVB,
>split-stack).
>
>That's exactly why we are starting a discussion around using Ansible,
>and is one of the fundamental changes that operators have been
>requesting in TripleO.
>
>--
>-- James Slagle
>--
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] git review -d + git rebase changing author?

2017-07-17 Thread melanie witt

On Tue, 18 Jul 2017 09:22:31 +0900, Ghanshyam Mann wrote:

Yes, this is the same case as when we fetch a patch set using git checkout;
I do not think it's something to do with git review -d.


Doing that shouldn't change the author, at least in my experience. I 
constantly 'git fetch' or 'git review -d ' to fetch a patch, 
update it, and push a new revision. When I do that, the author stays the 
same but it shows me as the "Committer".


I have seen the behavior Matt describes happen to other people though 
and I would guess it has something to do with the rebase. Though I feel 
like I've also rebased other people's changes and it didn't change the 
Author, it only changed the Committer to me.


-melanie

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Fox, Kevin M
I think that's a good question without an easy answer. I think TripleO's own 
struggle with orchestration has shown that it's maybe one of the hardest pieces. 
There are a lot of orchestration tools out there. Each has its 
strengths/weaknesses. I personally can't really pick what the best one is for 
this sort of thing. I've been trying to stay neutral, and let the low level 
kolla-kubernetes components be easily sharable between all the projects that 
already have chosen an orchestration strategy. I think the real answer is 
probably that the best orchestration tool for the job depends entirely on the 
deployment tool. So, TripleO's answer might be different than, say, something 
Ubuntu does.

Kolla-kubernetes has implemented reference orchestration a few different ways 
now. We deploy the gates using pure shell. It's not the prettiest way, but it works 
reliably now. (I would not recommend users do this.)

We have a document for manual orchestration.  (slow and very manual, but you 
get to see all the pieces, which can be instructive)

We have helm based orchestration that bundles several microservice charts into 
service charts and deploys similarly to openstack-helm. We built them to test 
the waters of this approach and they do work, but I have doubts they could be 
made robust enough to handle things like live rolling upgrades of OpenStack. It 
may be robust enough to do upgrades that require downtimes. I think it also may 
be hard to debug if the upgrade fails half way through. I admit I could totally 
be wrong though.

There have also been a couple of ansible based orchestrators proposed. They seem 
to work well, and I think they would be much easier to 
extend to do a live rolling OpenStack upgrade. I'd very much like to see an 
Ansible one finished and kick the tires with it. I do think having both some 
folks in Kolla-Kubernetes and folks in TripleO independently implement k8s 
deployment with it shows there is a lot of potential in that form of 
orchestration and that there's even more room for collaboration between the two 
projects.

Thanks,
Kevin

From: Bogdan Dobrelya [bdobr...@redhat.com]
Sent: Monday, July 17, 2017 1:10 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack 
services on Kubernetes

On 14.07.2017 22:55, Fox, Kevin M wrote:
> Part of the confusion I think is in the different ways helm can be used.
>
> Helm can be used to orchestrate the deployment of a whole service (ex, nova). 
> "launch these 3 k8s objects, template out this config file, run this job to 
> init the db, or this job to upgrade the db, etc", all as a single unit.
>
> It can also be used purely for its templating ability.
>
> So, "render this single k8s object using these values".
>
> This is one of the main differences between openstack-helm and 
> kolla-kubernetes.
>
> Openstack-helm has charts only for orchestrating the deployment of whole 
> openstack services.
>
> Kolla-kubernetes has taken a different track though. While it does use helm 
> for its golang templater, it has taken a microservices approach to be 
> shareable with other tools. So, each openstack process (nova-api, 
> neutron-server, neutron-openvswitch-agent), etc, has its own chart and can be 
> independently configured/placed as needed by an external orchestration 
> system. Kolla-Kubernetes microservice charts are to Kubernetes what 
> Kolla-Containers are to Docker. Reusable building blocks of known tested 
> functionality and assemblable any way the orchestration system/user feels is 
> in their best interest.

A great summary!
As the TripleO Pike docker-based containers architecture already relies a lot
on Kolla-Containers bits, namely the run-time kolla config/bootstrap and the
build-time image overrides, it seems reasonable to continue following that
path by relying on Kolla-Kubernetes microservice Helm charts for the
Kubernetes based architecture. Isn't it?

The remaining question, though, is: if Kolla-kubernetes doesn't consume
Openstack-helm's opinionated "orchestration of the deployment of whole
openstack services", which tools should then be used to fill the advanced
data parameterization gaps, like "happens before/after" relationships and
data dependencies/ordering?

>
> This is why I think kolla-kubernetes would be a good fit for TripleO, as you 
> can replace a single component at a time, however you want, using the config 
> files you already have and upgrade the system a piece at a time from non 
> container to containered. It doesn't have to happen all at once, even within 
> a single service, or within a single TripleO release. The orchestration of it 
> is totally up to you, and can be tailored very precisely to deal with the 
> particulars of the upgrade strategy needed by TripleO's existing deployments.
>
> Does that help to alleviate some of the confusion?
>
> Thanks,
> Kevin


--
Best regards,
Bogdan Dobrelya,
Irc #bogdando


Re: [openstack-dev] git review -d + git rebase changing author?

2017-07-17 Thread Ghanshyam Mann
On Tue, Jul 18, 2017 at 8:24 AM, Matt Riedemann  wrote:
> I don't have a strict recreate on this right now, but wanted to bring it up
> in case others have seen it. I've done this unknowingly and seen it happen
> to other changes, like:
>
> https://review.openstack.org/#/c/428241/7..8//COMMIT_MSG
>
> https://review.openstack.org/#/c/327564/3..4//COMMIT_MSG
>
> Where the author changes in the commit.
>
> When I've seen this, I think it's because I'm doing some combination of:
>
> 1. git review -d
> 2. git rebase -i master
> 3. change something
> 4. git commit

Yes, this is the same case as when we fetch a patch set using git checkout;
I do not think it's something to do with git review -d. I usually
mention the author explicitly during git commit to keep the original
author:
git commit --author <>
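
As a concrete sketch of that workflow (the name and address below are
placeholders, not taken from the reviews above):

    # after 'git review -d <change>' plus a rebase, check what git now records
    git log -1 --format='author:    %an <%ae>%ncommitter: %cn <%ce>'

    # if the author was accidentally replaced, restore it before pushing
    git commit --amend --no-edit --author='Original Author <original@example.com>'
    git review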

> 5. git rebase --continue (if in the middle of a series)
> 6. git review
>
> Something about the combination of the git review/rebase/commit changes the
> author.
>
> Again, I can try to recreate and come up with repeatable steps later, but
> wanted to bring this up while I'm thinking about it again.
>
> My versions:
>
> user@ubuntu:~/git/nova$ git --version
> git version 2.7.4
> user@ubuntu:~/git/nova$ pip show git-review
> Name: git-review
> Version: 1.25.0
>
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] git review -d + git rebase changing author?

2017-07-17 Thread Jeremy Stanley
On 2017-07-17 18:24:39 -0500 (-0500), Matt Riedemann wrote:
> I don't have a strict recreate on this right now, but wanted to bring it up
> in case others have seen it.
[...]

Any chance you can find the author change in your git reflog?
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][devstack] DIB builds after mysql.qcow2 removal

2017-07-17 Thread Ian Wienand

On 07/18/2017 10:01 AM, Tony Breeds wrote:

It wasn't forgotten as such; there are jobs still using it/them.  If
keeping the branches around causes bigger problems then EOLing them is
fine.  I'll try to generate a list of the affected projects/jobs and
turn them off.


Thanks; yeah this was pointed out to me later.

I think any jobs can use the -eol tag, rather than the
branch if required (yes, maybe easier said than done depending on how
many layers of magic there are :).  There doesn't seem to be much
point in branches we can't commit to due to broken CI, and I doubt
anyone is keen to maintain it.
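
For a job that currently checks out the branch, that would look roughly like
this (assuming devstack ends up with the usual <series>-eol style tag discussed
elsewhere in this thread; the clone URL is illustrative):

    git clone https://git.openstack.org/openstack-dev/devstack
    cd devstack
    git checkout mitaka-eol    # instead of: git checkout stable/mitaka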

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Steven Dake
On Mon, Jul 17, 2017 at 10:13 AM, Emilien Macchi  wrote:

> On Mon, Jul 17, 2017 at 5:32 AM, Flavio Percoco  wrote:
> > On 14/07/17 08:08 -0700, Emilien Macchi wrote:
> >>
> >> On Fri, Jul 14, 2017 at 2:17 AM, Flavio Percoco 
> wrote:
> >>>
> >>>
> >>> Greetings,
> >>>
> >>> As some of you know, I've been working on the second phase of TripleO's
> >>> containerization effort. This phase is about migrating the docker based
> >>> deployment onto Kubernetes.
> >>>
> >>> This phase requires work on several areas: Kubernetes deployment,
> >>> OpenStack
> >>> deployment on Kubernetes, configuration management, etc. While I've
> been
> >>> diving
> >>> into all of these areas, this email is about the second point,
> OpenStack
> >>> deployment on Kubernetes.
> >>>
> >>> There are several tools we could use for this task. kolla-kubernetes,
> >>> openstack-helm, ansible roles, among others. I've looked into these
> tools
> >>> and
> >>> I've come to the conclusion that TripleO would be better off by having
> >>> ansible
> >>> roles that would allow for deploying OpenStack services on Kubernetes.
> >>>
> >>> The existing solutions in the OpenStack community require using Helm.
> >>> While
> >>> I
> >>> like Helm and both, kolla-kubernetes and openstack-helm OpenStack
> >>> projects,
> >>> I
> >>> believe using any of them would add an extra layer of complexity to
> >>> TripleO,
> >>> which is something the team has been fighting for years -
> >>> especially
> >>> now
> >>> that the snowball is being chopped off.
> >>>
> >>> Adopting any of the existing projects in the OpenStack community would
> >>> require
> >>> TripleO to also write the logic to manage those projects. For example,
> in
> >>> the
> >>> case of openstack-helm, the TripleO team would have to write either
> >>> ansible
> >>> roles or heat templates to manage - install, remove, upgrade - the
> charts
> >>> (I'm
> >>> happy to discuss this point further but I'm keeping it at a high-level
> >>> on
> >>> purpose for the sake of not writing a 10k-words-long email).
> >>>
> >>> James Slagle sent an email[0], a couple of days ago, to form TripleO
> >>> plans
> >>> around ansible. One take-away from this thread is that TripleO is
> >>> adopting
> >>> ansible more and more, which is great and it fits perfectly with the
> >>> conclusion
> >>> I reached.
> >>>
> >>> Now, what this work means is that we would have to write an ansible
> role
> >>> for
> >>> each service that will deploy the service on a Kubernetes cluster.
> >>> Ideally
> >>> these
> >>> roles will also generate the configuration files (removing the need of
> >>> puppet
> >>> entirely) and they would manage the lifecycle. The roles would be
> >>> isolated
> >>> and
> >>> this will reduce the need of TripleO Heat templates. Doing this would
> >>> give
> >>> TripleO full control on the deployment process too.
> >>>
> >>> In addition, we could also write Ansible Playbook Bundles to contain
> >>> these
> >>> roles
> >>> and run them using the existing docker-cmd implementation that is
> coming
> >>> out
> >>> in
> >>> Pike (you can find a PoC/example of this in this repo[1]).
> >>>
> >>> Now, I do realize the amount of work this implies and that this is my
> >>> opinion/conclusion. I'm sending this email out to kick-off the
> discussion
> >>> and
> >>> gather thoughts and opinions from the rest of the community.
> >>>
> >>> Finally, what I really like about writing pure ansible roles is that
> >>> ansible
> >>> is
> >>> a known, powerful tool that has been adopted by many operators
> already.
> >>> It'll
> >>> provide the flexibility needed and, if structured correctly, it'll
> allow
> >>> for
> >>> operators (and other teams) to just use the parts they need/want
> without
> >>> depending on the full-stack. I like the idea of being able to separate
> >>> concerns
> >>> in the deployment workflow and the idea of making it simple for users
> of
> >>> TripleO
> >>> to do the same at runtime. Unfortunately, going down this road means
> that
> >>> my
> >>> hope of creating a field where we could collaborate even more with
> other
> >>> deployment tools will be a bit limited but I'm confident the result
> would
> >>> also
> >>> be useful for others and that we all will benefit from it... My hopes
> >>> might
> >>> be a
> >>> bit naive *shrugs*
> >>
> >>
> >> Of course I'm biased since I've been (a little) involved in that work
> >> but I like the idea of :
> >>
> >> - Moving forward with our containerization. docker-cmd will help us
> >> for sure for this transition (I insist on the fact TripleO is a
> >> product that you can upgrade and we try to make it smooth for our
> >> operators), so we can't just trash everything and switch to a new
> >> tool. I think the approach that we're taking is great and made of baby
> >> steps where we try to solve different problems.
> >> - Using more Ansible - the right way - when it makes sense : with the
> 

Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-07-17 Thread Tony Breeds
On Mon, Jul 17, 2017 at 07:09:13PM +0200, Andreas Jaeger wrote:
> On 2017-07-17 15:51, Andy McCrae wrote:
> > We held back the openstack-ansible repo to allow us to point to the
> > mitaka-eol tag for the other role/project repos.
> > That's been done - I've created the mitaka-eol tag in the
> > openstack-ansible repo, so can we remove the stable/mitaka branch from
> > openstack-ansible too?
> 
> So, only from openstack/openstack-ansible?

Yup, that was left so it could include the tags for the service projects
and roles.
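
For anyone following along, the per-repo EOL step is roughly this sketch (remote
name, signing and message are illustrative; removing the stable/mitaka branch
itself is then done through Gerrit by people with the right permissions):

    git fetch origin
    git tag -s -m "openstack-ansible mitaka EOL" mitaka-eol origin/stable/mitaka
    git push gerrit mitaka-eol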

> > Thanks again to all who have helped EOL all the branches etc!
> 
> I'll add to the list,

Thanks.
Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][devstack] DIB builds after mysql.qcow2 removal

2017-07-17 Thread Tony Breeds
On Mon, Jul 17, 2017 at 07:22:34PM +1000, Ian Wienand wrote:

> I have taken the liberty of EOL-ing stable/liberty and stable/mitaka
> for devstack.  I get the feeling it was just forgotten at the time.
> Comments in [4] support this theory.  I have also taken the liberty of
> approving backports of the fix to newton and ocata branches [5],[6].

It wasn't forgotten as such; there are jobs still using it/them.  If
keeping the branches around causes bigger problems then EOLing them is
fine.  I'll try to generate a list of the affected projects/jobs and
turn them off.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Fox, Kevin M
We do support some upstream charts but we started mariadb/rabbit before some of 
the upstream charts were written, so we duplicate a little bit of functionality 
at the moment. You can mix and match though. If an upstream chart doesn't work 
with kolla-kubernetes, I consider that a bug we should fix. Likewise, you 
should be able to run noncontainerized stuff mixed in too. If it doesn't work, 
its likewise a bug. You should be able to run kolla-kubernetes with a baremetal 
db.

Some known working stuff: prometheus/grafana upstream charts start collecting 
data from the containers as soon as they are launched.
I have also tested a bit with the upstream fluent-bit chart and have a ps in 
the works to make it work much better.

Thanks,
Kevin

From: Emilien Macchi [emil...@redhat.com]
Sent: Monday, July 17, 2017 10:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack 
services on Kubernetes

On Mon, Jul 17, 2017 at 5:32 AM, Flavio Percoco  wrote:
> On 14/07/17 08:08 -0700, Emilien Macchi wrote:
>>
>> On Fri, Jul 14, 2017 at 2:17 AM, Flavio Percoco  wrote:
>>>
>>>
>>> Greetings,
>>>
>>> As some of you know, I've been working on the second phase of TripleO's
>>> containerization effort. This phase is about migrating the docker based
>>> deployment onto Kubernetes.
>>>
>>> This phase requires work on several areas: Kubernetes deployment,
>>> OpenStack
>>> deployment on Kubernetes, configuration management, etc. While I've been
>>> diving
>>> into all of these areas, this email is about the second point, OpenStack
>>> deployment on Kubernetes.
>>>
>>> There are several tools we could use for this task. kolla-kubernetes,
>>> openstack-helm, ansible roles, among others. I've looked into these tools
>>> and
>>> I've come to the conclusion that TripleO would be better off by having
>>> ansible
>>> roles that would allow for deploying OpenStack services on Kubernetes.
>>>
>>> The existing solutions in the OpenStack community require using Helm.
>>> While
>>> I
>>> like Helm and both, kolla-kubernetes and openstack-helm OpenStack
>>> projects,
>>> I
>>> believe using any of them would add an extra layer of complexity to
>>> TripleO,
>>> which is something the team has been fighting for years -
>>> especially
>>> now
>>> that the snowball is being chopped off.
>>>
>>> Adopting any of the existing projects in the OpenStack community would
>>> require
>>> TripleO to also write the logic to manage those projects. For example, in
>>> the
>>> case of openstack-helm, the TripleO team would have to write either
>>> ansible
>>> roles or heat templates to manage - install, remove, upgrade - the charts
>>> (I'm
>>> happy to discuss this point further but I'm keeping it at a high-level
>>> on
>>> purpose for the sake of not writing a 10k-words-long email).
>>>
>>> James Slagle sent an email[0], a couple of days ago, to form TripleO
>>> plans
>>> around ansible. One take-away from this thread is that TripleO is
>>> adopting
>>> ansible more and more, which is great and it fits perfectly with the
>>> conclusion
>>> I reached.
>>>
>>> Now, what this work means is that we would have to write an ansible role
>>> for
>>> each service that will deploy the service on a Kubernetes cluster.
>>> Ideally
>>> these
>>> roles will also generate the configuration files (removing the need of
>>> puppet
>>> entirely) and they would manage the lifecycle. The roles would be
>>> isolated
>>> and
>>> this will reduce the need of TripleO Heat templates. Doing this would
>>> give
>>> TripleO full control on the deployment process too.
>>>
>>> In addition, we could also write Ansible Playbook Bundles to contain
>>> these
>>> roles
>>> and run them using the existing docker-cmd implementation that is coming
>>> out
>>> in
>>> Pike (you can find a PoC/example of this in this repo[1]).
>>>
>>> Now, I do realize the amount of work this implies and that this is my
>>> opinion/conclusion. I'm sending this email out to kick-off the discussion
>>> and
>>> gather thoughts and opinions from the rest of the community.
>>>
>>> Finally, what I really like about writing pure ansible roles is that
>>> ansible
>>> is
>>> a known, powerful tool that has been adopted by many operators already.
>>> It'll
>>> provide the flexibility needed and, if structured correctly, it'll allow
>>> for
>>> operators (and other teams) to just use the parts they need/want without
>>> depending on the full-stack. I like the idea of being able to separate
>>> concerns
>>> in the deployment workflow and the idea of making it simple for users of
>>> TripleO
>>> to do the same at runtime. Unfortunately, going down this road means that
>>> my
>>> hope of creating a field where we could collaborate even more with other
>>> deployment tools will be a bit limited but I'm confident the result would
>>> also
>>> be useful 

[openstack-dev] [keystone][nova] Persistent application credentials

2017-07-17 Thread Zane Bitter
So the application credentials spec has merged - huge thanks to Monty 
and the Keystone team for getting this done:


https://review.openstack.org/#/c/450415/
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/pike/application-credentials.html

However, it appears that there was a disconnect in how two groups of 
folks were reading the spec that only became apparent towards the end of 
the process. Specifically, at this exact moment:


http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-06-09.log.html#t2017-06-09T17:43:59

To summarise, Keystone folks are uncomfortable with the idea of 
application credentials that share the lifecycle of the project (rather 
than the user that created them), because a consumer could 
surreptitiously create an application credential and continue to use 
that to access the OpenStack APIs even after their User account is 
deleted. The agreed solution was to delete the application credentials 
when the User that created them is deleted, thus tying the lifecycle to 
that of the User.


This means that teams using this feature will need to audit all of their 
applications for credential usage and rotate any credentials created by 
a soon-to-be-former team member *before* removing said team member's 
User account, or risk breakage. Basically we're relying on users to do 
the Right Thing (bad), but when they don't we're defaulting to breaking 
[some of] their apps over leaving them insecure (all things being equal, 
good).


Unfortunately, if we do regard this as a serious problem, I don't think 
this solution is sufficient. Assuming that application credentials are 
stored on VMs in the project for use by the applications running on 
them, then anyone with access to those servers can obtain the 
credentials and continue to use them even if their own account is 
deleted. The solution to this is to rotate *all* application keys when a 
user is deleted. So really we're relying on users to do the Right Thing 
(bad), but when they don't we're defaulting to breaking [some of] their 
apps *and* [potentially] leaving them insecure (worst possible combination).


(We're also being inconsistent, because according to the spec if you 
revoke a role from a User then any application credentials they've 
created that rely on that role continue to work. It's only if you delete 
the User that they're revoked.)



As far as I can see, there are only two solutions to the fundamental 
problem:


1) Fine-grained user-defined access control. We can minimise the set of 
things that the application credentials are authorised to do. That's out 
of scope for this spec, but something we're already planning as a future 
enhancement.
2) Automated regular rotation of credentials. We can make sure that 
whatever a departing team member does manage to hang onto quickly 
becomes useless.


By way of comparison, AWS does both. There's fine-grained defined access 
control in the form of IAM Roles, and these Roles can be associated with 
EC2 servers. The servers have an account with rotating keys provided 
through the metadata server. I can't find the exact period of rotation 
documented, but it's on the order of magnitude of 1 hour.
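
For comparison, the consuming side of that is roughly the following (the
metadata endpoint and JSON fields are AWS's documented instance metadata
interface; the role name is simply whatever profile is attached to the server):

    ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
    curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE}"
    # returns JSON with AccessKeyId, SecretAccessKey, Token and an Expiration
    # timestamp; the application just re-reads it before the keys expire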


There's plenty not to like about this design. Specifically, it's 2017 
not 2007 and the idea that there's no point offering to segment 
permissions at a finer grained level than that of a VM no longer holds 
water IMHO, thanks to SELinux and containers. It'd be nice to be able to 
provide multiple sets of credentials to different services running on a 
VM, and it's probably essential to our survival that we find a way to 
provide individual credentials to containers. Nevertheless, what they 
have does solve the problem.


Note that there's pretty much no sane way for the user to automate 
credential rotation themselves, because it's turtles all the way down. 
e.g. it's easy in principle to set up a Heat template with a Mistral 
workflow that will rotate the credentials for you, but they'll do so 
using trusts that are, in turn, tied back to the consumer who created 
the stack. (It suddenly occurs to me that this is a problem that all 
services using trusts are going to need to solve.) Somewhere it all has 
to be tied back to something that survives the entire lifecycle of the 
project.


Would Keystone folks be happy to allow persistent credentials once we 
have a way to hand out only the minimum required privileges?


If not I think we're back to https://review.openstack.org/#/c/93/

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] git review -d + git rebase changing author?

2017-07-17 Thread Matt Riedemann
I don't have a strict recreate on this right now, but wanted to bring it 
up in case others have seen it. I've done this unknowingly and seen it 
happen to other changes, like:


https://review.openstack.org/#/c/428241/7..8//COMMIT_MSG

https://review.openstack.org/#/c/327564/3..4//COMMIT_MSG

Where the author changes in the commit.

When I've seen this, I think it's because I'm doing some combination of:

1. git review -d
2. git rebase -i master
3. change something
4. git commit
5. git rebase --continue (if in the middle of a series)
6. git review

Something about the combination of the git review/rebase/commit changes 
the author.


Again, I can try to recreate and come up with repeatable steps later, 
but wanted to bring this up while I'm thinking about it again.


My versions:

user@ubuntu:~/git/nova$ git --version
git version 2.7.4
user@ubuntu:~/git/nova$ pip show git-review
Name: git-review
Version: 1.25.0


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] restrictive umask / file permissions in target hosts

2017-07-17 Thread Major Hayden
On 07/04/2017 03:54 AM, Markus Zoeller wrote:
> How do you deal with hosts which have a restrictive umask of 077
> *before* openstack-ansible starts the setup? Do you start with the
> default umask of 022 and opt-in later to that security hardening[1]?

We don't test for that in the OpenStack-Ansible gates since those settings from 
openstack-ansible-security/ansible-hardening are disabled by default. It's 
possible to start with 022 and switch to 077 later, but that could cause 
additional problems.

> What's the development policy of openstack-ansible regarding setting
> file or directory permissions in tasks?
> 
> * is a umask value of 022 assumed for tasks to work?

Yes.

> * should tasks always explicitly set the file/dir mode?

They certainly should, and if they don't, we should adjust those tasks. I'd 
rather be as explicit as possible to reduce the chances of problems down the 
road if distribution defaults change.
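
A quick plain-shell illustration of why the explicit mode matters (not an actual
task from the roles):

    umask 077
    touch /tmp/implicit && ls -l /tmp/implicit
    # -> -rw------- : the result depends on whoever set the umask

    install -m 0644 /dev/null /tmp/explicit && ls -l /tmp/explicit
    # -> -rw-r--r-- : same result regardless of the umask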

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Ryan Hallisey
> I think this at some point might be single biggest benefit of using
> helm on the long run - leverage infrastructure charts that aren't
> openstack-centric. Things like etcd are already written and
> potentially we can just help support them.

I think the tools that are being discussed here are both very good
(helm & ansible), but I have a slightly different opinion about how
Helm should be used.

Helm is a *package manager*. Its scope is for simple applications
that need to bundle resources.  It's great at saving me time on doing
simple recipes like: kubectl create -f  and kubectl create -f
 over and over again. But, beyond a single app with a few
resources, Helm is going to struggle on its own. Meaning, either Helm
would have to change or another tool would have to fill the gaps.

If Helm wants to change, it becomes less differentiated from what
Ansible already does. Its niche as a simple app package manager will
evaporate and Ansible already owns the orchestration space. Therefore,
I think long term Helm as an orchestration tool doesn't make sense
because it's limited to Kubernetes and Ansible adoption is wide
spread.

That doesn't mean that Helm is useless.  In fact, I think the Helm
charts are great when used as simple standalone recipes. However, for
a complex app like OpenStack, I think you need something like Ansible
to provide the orchestration. Underneath, you can use whatever you
want to create the Kubernetes resources. In the end, the difference
will be `helm create mariadb` vs `kubectl create -f mariadb-pod.yaml`.
Both solutions will work, but the Helm work seems to be much farther
along.

One other thing to mention. Maybe folks can speed up writing these
playbooks by using kolla-ansible's playbooks as a shell. Here's an
example: [1] Take lines 1-16 and replace them with helm install mariadb
or
kubectl create -f mariadb-pod.yaml and set inventory to localhost.
Just a thought.
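
In other words, the orchestration step ends up being one of these two shapes
(release, namespace and chart names are illustrative; Helm v2 style syntax):

    # package-manager style: install the service chart as a release
    helm install --name mariadb --namespace kolla kolla/mariadb -f values.yaml

    # plain-Kubernetes style: apply a rendered resource directly
    kubectl create -f mariadb-pod.yaml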

There may be some other playbooks out there I don't know about that you
can use, but that could at least get some of the collaboration started
so folks don't have to start from scratch.

[1] - 
https://github.com/openstack/kolla-ansible/blob/afdd11b9a22ecca70962a4637d89ad50b7ded2e5/ansible/roles/mariadb/tasks/start.yml#L1-L16

Sincerely,
Ryan

On Mon, Jul 17, 2017 at 1:37 PM, Michał Jastrzębski  wrote:
> On 17 July 2017 at 10:13, Emilien Macchi  wrote:
>> On Mon, Jul 17, 2017 at 5:32 AM, Flavio Percoco  wrote:
>>> On 14/07/17 08:08 -0700, Emilien Macchi wrote:

 On Fri, Jul 14, 2017 at 2:17 AM, Flavio Percoco  wrote:
>
>
> Greetings,
>
> As some of you know, I've been working on the second phase of TripleO's
> containerization effort. This phase if about migrating the docker based
> deployment onto Kubernetes.
>
> These phase requires work on several areas: Kubernetes deployment,
> OpenStack
> deployment on Kubernetes, configuration management, etc. While I've been
> diving
> into all of these areas, this email is about the second point, OpenStack
> deployment on Kubernetes.
>
> There are several tools we could use for this task. kolla-kubernetes,
> openstack-helm, ansible roles, among others. I've looked into these tools
> and
> I've come to the conclusion that TripleO would be better of by having
> ansible
> roles that would allow for deploying OpenStack services on Kubernetes.
>
> The existing solutions in the OpenStack community require using Helm.
> While
> I
> like Helm and both, kolla-kubernetes and openstack-helm OpenStack
> projects,
> I
> believe using any of them would add an extra layer of complexity to
> TripleO,
> which is something the team has been fighting for years years -
> especially
> now
> that the snowball is being chopped off.
>
> Adopting any of the existing projects in the OpenStack communty would
> require
> TripleO to also write the logic to manage those projects. For example, in
> the
> case of openstack-helm, the TripleO team would have to write either
> ansible
> roles or heat templates to manage - install, remove, upgrade - the charts
> (I'm
> happy to discuss this point further but I'm keepping it at a high-level
> on
> purpose for the sake of not writing a 10k-words-long email).
>
> James Slagle sent an email[0], a couple of days ago, to form TripleO
> plans
> around ansible. One take-away from this thread is that TripleO is
> adopting
> ansible more and more, which is great and it fits perfectly with the
> conclusion
> I reached.
>
> Now, what this work means is that we would have to write an ansible role
> for
> each service that will deploy the service on a Kubernetes cluster.
> Ideally
> these
> roles will also generate the configuration files (removing the 

[openstack-dev] [os-upstream-institute] Meeting Cancelled

2017-07-17 Thread Kendall Nelson
Hello Everyone!

We will not be having our meeting today at 20:00 UTC because there isn't
much on the agenda :) If there is something you want to discuss, please
drop into the #openstack-upstream-institute channel and ping us.

Enjoy your extra hour!

-Kendall Nelson (diablo_rojo)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Election Season, PTL July/August 2017

2017-07-17 Thread Kendall Nelson
Hello Everyone!

Election details: https://governance.openstack.org/election/

Please read the stipulations and timelines for candidates and electorate
contained in this governance documentation.

Be aware that in the PTL elections, if the program only has one candidate, that
candidate is acclaimed and there will be no poll. There will only be a poll
if there is more than one candidate stepping forward for a program's PTL
position.

There will be further announcements posted to the mailing list as action is
required from the electorate or candidates. This email is for information
purposes only.

If you have any questions which you feel affect others, please reply to this
email thread. If you have any questions that you wish to discuss in
private, please email myself, Kendall Nelson (diablo_rojo), knelson at
openstack dot org, and Emmet Hikory (persia), email: emmet dot hikory at
codethink dot co dot uk, so that we may address your concerns.

Thank you,

Kendall Nelson (diablo_rojo)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]notification update week 29

2017-07-17 Thread Matt Riedemann

On 7/17/2017 2:36 AM, Balazs Gibizer wrote:

Hi,

Here is the status update / focus setting mail about notification work
for week 29.

Bugs

[Undecided] https://bugs.launchpad.net/nova/+bug/1684860 Versioned
server notifications don't include updated_at
The fix https://review.openstack.org/#/c/475276/ is in focus but 
comments need to be addressed.


[Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications
use nova-api as binary name instead of nova-osapi_compute
Agreed not to change the binary name in the notifications. Instead we
make an enum for that name to show that the name is intentional.
Patch needs review:  https://review.openstack.org/#/c/476538/

[Undecided] https://bugs.launchpad.net/nova/+bug/1702667 publisher_id of 
the versioned instance.update notification is not consistent with other 
notifications
The inconsistency of publisher_ids was revealed by #1696152. Patch needs 
review: https://review.openstack.org/#/c/480984


[Undecided] https://bugs.launchpad.net/nova/+bug/1699115 api.fault
notification is never emitted
Still no response on the ML thread about the way forward.
http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html

[Undecided] https://bugs.launchpad.net/nova/+bug/1700496 Notifications
are emitted per-cell instead of globally
Fix is to configure a global MQ endpoint for the notifications in cells 
v2. Patch looks good from notification perspective but affects other 
part of the system as well: https://review.openstack.org/#/c/477556/



Versioned notification transformation
-
Last week's merge conflicts are mostly cleaned up and there are 11 
patches waiting for core review:
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Verified%253C0+AND+NOT+label:Code-Review%253C0 



If you are afraid of the long list then here is a short list of live 
migration related transformations to look at:

* https://review.openstack.org/#/c/480214/
* https://review.openstack.org/#/c/420453/
* https://review.openstack.org/#/c/480119/
* https://review.openstack.org/#/c/469784/


Searchlight integration
---
bp additional-notification-fields-for-searchlight
~
The BDM addition has been merged.

As a last piece of the bp we are still missing the Add tags to 
instance.create Notification https://review.openstack.org/#/c/459493/ 
patch, but that depends on supporting tags at instance boot 
https://review.openstack.org/#/c/394321/ which is getting closer to being 
merged. Focus is on these patches.


There are a set of follow up patches for the BDM addition to optimize 
the payload generation but these are not mandatory for the functionality 
https://review.openstack.org/#/c/483324/



Instability of the notification sample tests

Multiple instabilities of the sample tests were detected last week. The nova 
functional tests fail intermittently for at least two distinct reasons:
* https://bugs.launchpad.net/nova/+bug/1704423 _test_unshelve_server 
intermittently fails in functional versioned notification tests
Possible solution found, fix proposed and it only needs a second +2: 
https://review.openstack.org/#/c/483986/
* https://bugs.launchpad.net/nova/+bug/1704392 
TestInstanceNotificationSample.test_volume_swap_server fails with 
"testtools.matchers._impl.MismatchError: 7 != 6"
Patch that improves logging of the failure has been merged 
https://review.openstack.org/#/c/483939/ and detailed log now available 
to look at 
http://logs.openstack.org/82/482382/4/check/gate-nova-tox-functional-ubuntu-xenial/38a4cb4/console.html#_2017-07-16_01_14_36_313757 




Small improvements
~~
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual
error reporting
* https://review.openstack.org/#/q/topic:refactor-notification-samples
Factor out duplicated notification sample data
This is a start of a longer patch series to deduplicate notification
sample data. The third patch already shows how much sample data can be
deleted from the nova tree. We added a minimal hand-rolled json ref
implementation to the notification sample tests as the existing python json
ref implementations are not well maintained.


Weekly meeting
--
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC
on openstack-meeting-4. The next meeting will be held on 18th of July.
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170718T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


What do you want to do 

[openstack-dev] [keystone] feature freeze and spec status

2017-07-17 Thread Lance Bragstad
Hi all,

I wanted to send a friendly reminder that feature freeze for keystone
will be in R-5 [0], which is the end of next week. That leaves just
under 10 business days for feature work (8 considering the time to get
through the gate). Of the specifications we've committed to for Pike,
the following are still in progress:

*Application Credentials*
Specification:
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/pike/application-credentials.html

*Project Tags*
Specification:
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/pike/project-tags.html
Implementation: https://review.openstack.org/#/c/470317/

*Extending the User API to support federated attributes*
Specification:
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/pike/support-federated-attr.html
Implementation:
https://review.openstack.org/#/q/topic:bp/support-federated-attr

With feature freeze just around the corner, we should be scaling up our
focus on bugs. We'll be continuing bug work tomorrow after the weekly
keystone meeting.

Thanks and let me know if you have any questions,

Lance


[0] https://releases.openstack.org/pike/schedule.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] weekly meetings on #tripleo

2017-07-17 Thread Emilien Macchi
Since we have mixed feelings but generally agree that we should give
it a try, let's try it at least once, starting tomorrow, and see how
it goes.

On Mon, Jul 10, 2017 at 10:01 AM, Michele Baldessari  wrote:
> On Mon, Jul 10, 2017 at 11:36:03AM -0230, Brent Eagles wrote:
>> +1 for giving it a try.
>
> Agreed.
>
>>
>> On Wed, Jul 5, 2017 at 2:26 PM, Emilien Macchi  wrote:
>>
>> > After reading http://lists.openstack.org/pipermail/openstack-dev/2017-
>> > June/118899.html
>> > - we might want to collect TripleO's community feedback on doing
>> > weekly meetings on #tripleo instead of #openstack-meeting-alt.
>> >
>> > I see some direct benefits:
>> > - if you come up late in meetings, you could easily read backlog in
>> > #tripleo
>> > - newcomers not aware about the meeting channel wouldn't have to search
>> > for it
>> > - meeting would maybe get more activity and we would expose the
>> > information more broadly
>> >
>> > Any feedback on this proposal is welcome before we make any change (or
>> > not).
>> >
>> > Thanks,
>> > --
>> > Emilien Macchi
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
> Michele Baldessari
> C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-07-17 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. Docs due to the docs re-org - See 
http://lists.openstack.org/pipermail/openstack-dev/2017-July/119221.html
1.1. Ironic - 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:doc-migration
1.2. Ironic-ui - 
https://review.openstack.org/#/q/status:open+project:openstack/ironic-ui+branch:master+topic:doc-migration
1.3. ironic-python-agent - 
https://review.openstack.org/#/q/status:open+project:openstack/ironic-python-agent+branch:master+topic:doc-migration
1.4. ironic-inspector - 
https://review.openstack.org/#/q/status:open+project:openstack/ironic-inspector+branch:master+topic:doc-migration
1.5. Other subprojects and repos have not been started: virtualbmc, sushy, 
sushy-tools, molteniron, ironic-inspector-client
2. Booting from volume:
2.1. https://review.openstack.org/#/c/472740 - Tempest Scenario
2.2. https://review.openstack.org/#/c/466333 - Devstack support
2.3. https://review.openstack.org/#/c/215385 - Nova patch
3. Rolling upgrades:
3.1. Modifications for rolling upgrades: 
https://review.openstack.org/#/c/476779/
3.2.  'Add new dbsync command with first online data migration': 
https://review.openstack.org/#/c/408556/
4. Physnet awareness:
4.1. API patch: https://review.openstack.org/469933
5. Nova patch for VIF attach/detach: https://review.openstack.org/#/c/419975/


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 10 Jul 2017 and 17 Jul 2017)
- Ironic: 257 bugs (-2) + 258 wishlist items. 28 new (+1), 209 in progress 
(-5), 1 critical, 32 high (-1) and 31 incomplete
- Inspector: 14 bugs (+1) + 28 wishlist items. 2 new, 12 in progress, 1 
critical (+1), 4 high (+1) and 3 incomplete
- Nova bugs with Ironic tag: 17 (+1). 5 new (+1), 0 critical, 0 high

Essential Priorities


CI refactoring and missing test coverage

- not considered a priority, it's a 'do it always' thing
- Standalone CI tests (vsaienk0)
- next patch to be reviewed, needed for 3rd party CI: 
https://review.openstack.org/#/c/429770/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- local boot with partition images: TODO 
https://bugs.launchpad.net/ironic/+bug/1531149
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests
- root device hints: TODO

Generic boot-from-volume (TheJulia, dtantsur)
-
- specs and blueprints:
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/volume-connection-information.html
- code: https://review.openstack.org/#/q/topic:bug/1526231
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/boot-from-volume-reference-drivers.html
- code: https://review.openstack.org/#/q/topic:bug/1559691
- https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume
- code: 
https://review.openstack.org/#/q/topic:bp/ironic-boot-from-volume
- status as of most recent weekly meeting:
- Python-ironicclient API support for volume connectors was landed last 
week.
- Version 1.14.0 was released last week and global-requirements was 
updated accordingly
- These should be part of the next release, meaning 1.15.0 when 
released.
- We have observed some review activity on the nova patch: 
https://review.openstack.org/#/c/215385/
- Mostly positive review feedback. Some concern from nova as to the 
lack of n-1 support where nova is upgraded prior to ironic, which is contrary 
to our upgrade guide.
- Patch/note tracking etherpad: https://etherpad.openstack.org/p/Ironic-BFV
Ironic Patches:
https://review.openstack.org/#/c/214586/ - Volume Connection 
Information Rest API Change MERGED
https://review.openstack.org/#/c/463930/ - CRUD notification 
updates for volume objects. MERGED
https://review.openstack.org/#/c/463908/ - Enable cinder storage 
interface for generic hardware - MERGED
https://review.openstack.org/#/c/463972/ - Add storage_interface to 
notifications  MERGED
https://review.openstack.org/#/c/466333/ - Devstack changes for Boot 
from Volume - Has review feedback
https://review.openstack.org/#/c/472740/ - Tempest test scenario 
for BFV
https://review.openstack.org/#/c/479326/ - BFV deploy follow-up - 
Requires revision
python-ironicclient:
https://review.openstack.org/#/c/427053/ - OSC volume connector - 
MERGED
https://review.openstack.org/#/c/427738 - OSC 

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Michał Jastrzębski
On 17 July 2017 at 10:13, Emilien Macchi  wrote:
> On Mon, Jul 17, 2017 at 5:32 AM, Flavio Percoco  wrote:
>> On 14/07/17 08:08 -0700, Emilien Macchi wrote:
>>>
>>> On Fri, Jul 14, 2017 at 2:17 AM, Flavio Percoco  wrote:


 Greetings,

 As some of you know, I've been working on the second phase of TripleO's
 containerization effort. This phase is about migrating the docker-based
 deployment onto Kubernetes.

 This phase requires work in several areas: Kubernetes deployment,
 OpenStack
 deployment on Kubernetes, configuration management, etc. While I've been
 diving
 into all of these areas, this email is about the second point, OpenStack
 deployment on Kubernetes.

 There are several tools we could use for this task. kolla-kubernetes,
 openstack-helm, ansible roles, among others. I've looked into these tools
 and
 I've come to the conclusion that TripleO would be better off by having
 ansible
 roles that would allow for deploying OpenStack services on Kubernetes.

 The existing solutions in the OpenStack community require using Helm.
 While
 I
 like Helm and both, kolla-kubernetes and openstack-helm OpenStack
 projects,
 I
 believe using any of them would add an extra layer of complexity to
 TripleO,
 which is something the team has been fighting for years -
 especially
 now
 that the snowball is being chopped off.

 Adopting any of the existing projects in the OpenStack community would
 require
 TripleO to also write the logic to manage those projects. For example, in
 the
 case of openstack-helm, the TripleO team would have to write either
 ansible
 roles or heat templates to manage - install, remove, upgrade - the charts
 (I'm
 happy to discuss this point further but I'm keeping it at a high-level
 on
 purpose for the sake of not writing a 10k-words-long email).

 James Slagle sent an email[0], a couple of days ago, to form TripleO
 plans
 around ansible. One take-away from this thread is that TripleO is
 adopting
 ansible more and more, which is great and it fits perfectly with the
 conclusion
 I reached.

 Now, what this work means is that we would have to write an ansible role
 for
 each service that will deploy the service on a Kubernetes cluster.
 Ideally
 these
 roles will also generate the configuration files (removing the need of
 puppet
 entirely) and they would manage the lifecycle. The roles would be
 isolated
 and
 this will reduce the need of TripleO Heat templates. Doing this would
 give
 TripleO full control on the deployment process too.

 In addition, we could also write Ansible Playbook Bundles to contain
 these
 roles
 and run them using the existing docker-cmd implementation that is coming
 out
 in
 Pike (you can find a PoC/example of this in this repo[1]).

 Now, I do realize the amount of work this implies and that this is my
 opinion/conclusion. I'm sending this email out to kick-off the discussion
 and
 gather thoughts and opinions from the rest of the community.

 Finally, what I really like about writing pure ansible roles is that
 ansible
 is
 a known, powerful tool that has been adopted by many operators already.
 It'll
 provide the flexibility needed and, if structured correctly, it'll allow
 for
 operators (and other teams) to just use the parts they need/want without
 depending on the full-stack. I like the idea of being able to separate
 concerns
 in the deployment workflow and the idea of making it simple for users of
 TripleO
 to do the same at runtime. Unfortunately, going down this road means that
 my
 hope of creating a field where we could collaborate even more with other
 deployment tools will be a bit limited but I'm confident the result would
 also
 be useful for others and that we all will benefit from it... My hopes
 might
 be a
 bit naive *shrugs*
>>>
>>>
>>> Of course I'm biased since I've been (a little) involved in that work
>>> but I like the idea of :
>>>
>>> - Moving forward with our containerization. docker-cmd will help us
>>> for sure for this transition (I insist on the fact TripleO is a
>>> product that you can upgrade and we try to make it smooth for our
>>> operators), so we can't just trash everything and switch to a new
>>> tool. I think the approach that we're taking is great and made of baby
>>> steps where we try to solve different problems.
>>> - Using more Ansible - the right way - when it makes sense : with the
>>> TripleO containerization, we only use Puppet for Configuration
>>> Management, managing a few resources but not for orchestration (or not
>>> all the 

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Emilien Macchi
On Mon, Jul 17, 2017 at 5:32 AM, Flavio Percoco  wrote:
> On 14/07/17 08:08 -0700, Emilien Macchi wrote:
>>
>> On Fri, Jul 14, 2017 at 2:17 AM, Flavio Percoco  wrote:
>>>
>>>
>>> Greetings,
>>>
>>> As some of you know, I've been working on the second phase of TripleO's
>>> containerization effort. This phase is about migrating the docker-based
>>> deployment onto Kubernetes.
>>>
>>> This phase requires work in several areas: Kubernetes deployment,
>>> OpenStack
>>> deployment on Kubernetes, configuration management, etc. While I've been
>>> diving
>>> into all of these areas, this email is about the second point, OpenStack
>>> deployment on Kubernetes.
>>>
>>> There are several tools we could use for this task. kolla-kubernetes,
>>> openstack-helm, ansible roles, among others. I've looked into these tools
>>> and
>>> I've come to the conclusion that TripleO would be better off by having
>>> ansible
>>> roles that would allow for deploying OpenStack services on Kubernetes.
>>>
>>> The existing solutions in the OpenStack community require using Helm.
>>> While
>>> I
>>> like Helm and both, kolla-kubernetes and openstack-helm OpenStack
>>> projects,
>>> I
>>> believe using any of them would add an extra layer of complexity to
>>> TripleO,
>>> which is something the team has been fighting for years -
>>> especially
>>> now
>>> that the snowball is being chopped off.
>>>
>>> Adopting any of the existing projects in the OpenStack community would
>>> require
>>> TripleO to also write the logic to manage those projects. For example, in
>>> the
>>> case of openstack-helm, the TripleO team would have to write either
>>> ansible
>>> roles or heat templates to manage - install, remove, upgrade - the charts
>>> (I'm
>>> happy to discuss this point further but I'm keeping it at a high-level
>>> on
>>> purpose for the sake of not writing a 10k-words-long email).
>>>
>>> James Slagle sent an email[0], a couple of days ago, to form TripleO
>>> plans
>>> around ansible. One take-away from this thread is that TripleO is
>>> adopting
>>> ansible more and more, which is great and it fits perfectly with the
>>> conclusion
>>> I reached.
>>>
>>> Now, what this work means is that we would have to write an ansible role
>>> for
>>> each service that will deploy the service on a Kubernetes cluster.
>>> Ideally
>>> these
>>> roles will also generate the configuration files (removing the need of
>>> puppet
>>> entirely) and they would manage the lifecycle. The roles would be
>>> isolated
>>> and
>>> this will reduce the need of TripleO Heat templates. Doing this would
>>> give
>>> TripleO full control on the deployment process too.
>>>
>>> In addition, we could also write Ansible Playbook Bundles to contain
>>> these
>>> roles
>>> and run them using the existing docker-cmd implementation that is coming
>>> out
>>> in
>>> Pike (you can find a PoC/example of this in this repo[1]).
>>>
>>> Now, I do realize the amount of work this implies and that this is my
>>> opinion/conclusion. I'm sending this email out to kick-off the discussion
>>> and
>>> gather thoughts and opinions from the rest of the community.
>>>
>>> Finally, what I really like about writing pure ansible roles is that
>>> ansible
>>> is
>>> a known, powerful tool that has been adopted by many operators already.
>>> It'll
>>> provide the flexibility needed and, if structured correctly, it'll allow
>>> for
>>> operators (and other teams) to just use the parts they need/want without
>>> depending on the full-stack. I like the idea of being able to separate
>>> concerns
>>> in the deployment workflow and the idea of making it simple for users of
>>> TripleO
>>> to do the same at runtime. Unfortunately, going down this road means that
>>> my
>>> hope of creating a field where we could collaborate even more with other
>>> deployment tools will be a bit limited but I'm confident the result would
>>> also
>>> be useful for others and that we all will benefit from it... My hopes
>>> might
>>> be a
>>> bit naive *shrugs*
>>
>>
>> Of course I'm biased since I've been (a little) involved in that work
>> but I like the idea of :
>>
>> - Moving forward with our containerization. docker-cmd will help us
>> for sure for this transition (I insist on the fact TripleO is a
>> product that you can upgrade and we try to make it smooth for our
>> operators), so we can't just trash everything and switch to a new
>> tool. I think the approach that we're taking is great and made of baby
>> steps where we try to solve different problems.
>> - Using more Ansible - the right way - when it makes sense : with the
>> TripleO containerization, we only use Puppet for Configuration
>> Management, managing a few resources but not for orchestration (or not
>> all the features that Puppet provide) and for Data Binding (Hiera). To
>> me, it doesn't make sense for us to keep investing much in Puppet
>> modules if we go k8s & Ansible. That said, see the next point.

Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-07-17 Thread Andreas Jaeger
On 2017-07-17 15:51, Andy McCrae wrote:
> We held back the openstack-ansible repo to allow us to point to the
> mitaka-eol tag for the other role/project repos.
> That's been done - I've created the mitaka-eol tag in the
> openstack-ansible repo; can we remove the stable/mitaka branch from
> openstack-ansible too?

So, only from openstack/openstack-ansible?

> Thanks again to all who have helped EOL all the branches etc!

I'll add it to the list,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][ironic] Patch needed to merge to fix ironic-inspector grenade job

2017-07-17 Thread milanisko k
John++

Thanks for looking into this!

--
milan

po 17. 7. 2017 v 17:13 odesílatel John Villalovos <
openstack@sodarock.com> napsal:

> Hi Infra,
>
> We were hoping to get a review of:
> https://review.openstack.org/483059
> Add the ironic-inspector services to features.yaml
> openstack-infra/devstack-gate
>
> As it is needed to fix the ironic-inspector grenade job.
>
> Thanks,
> John
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] release reminders

2017-07-17 Thread Emilien Macchi
A few reminders for our dear TripleO community about releases that are coming:

- pike-3 will be released by the end of next week:
https://releases.openstack.org/pike/schedule.html#p-3
- because we're using release:cycle-trailing model, it means our
Feature Freeze will happen during Aug 07 - Aug 11 week
https://releases.openstack.org/pike/schedule.html#p-trailing-ff
- the same week as Feature Freeze, we'll cut RC1:
https://releases.openstack.org/pike/schedule.html#p-rc1
- Final RCs will be cut by Aug 28 - Sep 01 week:
https://releases.openstack.org/pike/schedule.html#p-trailing-rc
- Final TripleO release: week of Sep 11 - Sep 15:
https://releases.openstack.org/pike/schedule.html#p-trailing-release
- OpenStack Queens PTG: same week as final TripleO release (we'll
probably do it live):
https://etherpad.openstack.org/p/tripleo-ptg-queens

Whole schedule can be found here:
https://releases.openstack.org/pike/schedule.html

Any question or feedback is very welcome,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] scheduling with custom resouce classes

2017-07-17 Thread Jay Pipes

On 07/17/2017 11:31 AM, Balazs Gibizer wrote:
On Thu, Jul 13, 2017 at 11:37 AM, Chris Dent  
wrote:

On Thu, 13 Jul 2017, Balazs Gibizer wrote:

/placement/allocation_candidates?resources=CUSTOM_MAGIC%3A512%2CMEMORY_MB%3A64%2CVCPU%3A1" 
but placement returns an empty response. Then nova scheduler falls 
back to legacy behavior [4] and places the instance without 
considering the custom resource request.


As far as I can tell at least one missing piece of the puzzle here
is that your MAGIC provider does not have the
'MISC_SHARES_VIA_AGGREGATE' trait. It's not enough for the compute
and MAGIC to be in the same aggregate, the MAGIC needs to announce
that its inventory is for sharing. The comments here have a bit more
on that:

https://github.com/openstack/nova/blob/master/nova/objects/resource_provider.py#L663-L678 


Thanks a lot for the detailed answer. Yes, this was the missing piece. 
However I had to add that trait both to the MAGIC provider and to my 
compute provider to make it work. Is it intentional that the compute 
also has to have that trait?


No. The compute node doesn't need that trait. It only needs to be 
associated to an aggregate that is associated to the provider that is 
marked with the MISC_SHARES_VIA_AGGREGATE trait.


In other words, you need to do this:

1) Create the provider record for the thing that is going to share the 
CUSTOM_MAGIC resources


2) Create an inventory record on that provider

3) Set the MISC_SHARES_VIA_AGGREGATE trait on that provider

4) Create an aggregate

5) Associate both the above provider and the compute node provider with 
the aggregate


That's it. The compute node provider will now have access to the 
CUSTOM_MAGIC resources that the other provider has in inventory.


Magic. :)
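
For anyone who wants to try those five steps by hand, a rough Python sketch
against the placement REST API might look like the following. The endpoint
URL, token, provider UUIDs and generation values are placeholders, and the
exact payload shapes depend on the placement microversion (1.10 behaviour is
assumed here), so treat it as an illustration rather than a reference:

    import uuid

    import requests

    PLACEMENT = 'http://controller/placement'      # placeholder endpoint
    HEADERS = {
        'X-Auth-Token': 'ADMIN_TOKEN',             # placeholder token
        'OpenStack-API-Version': 'placement 1.10',
    }

    magic_rp = str(uuid.uuid4())
    compute_rp = 'UUID-OF-EXISTING-COMPUTE-NODE-PROVIDER'
    aggregate = str(uuid.uuid4())

    # The custom resource class has to exist before it can be inventoried.
    requests.put(PLACEMENT + '/resource_classes/CUSTOM_MAGIC', headers=HEADERS)

    # 1) Create the provider that will share the CUSTOM_MAGIC resources.
    requests.post(PLACEMENT + '/resource_providers', headers=HEADERS,
                  json={'name': 'magic-provider', 'uuid': magic_rp})

    # 2) Give that provider an inventory of the custom resource class.
    #    Generation values below are illustrative; they must match the
    #    provider's current generation.
    requests.put(PLACEMENT + '/resource_providers/%s/inventories' % magic_rp,
                 headers=HEADERS,
                 json={'resource_provider_generation': 0,
                       'inventories': {'CUSTOM_MAGIC': {'total': 512}}})

    # 3) Mark the provider as sharing its inventory.
    requests.put(PLACEMENT + '/resource_providers/%s/traits' % magic_rp,
                 headers=HEADERS,
                 json={'resource_provider_generation': 1,
                       'traits': ['MISC_SHARES_VIA_AGGREGATE']})

    # 4 + 5) Put the sharing provider and the compute node provider into
    #        the same aggregate.
    for rp in (magic_rp, compute_rp):
        requests.put(PLACEMENT + '/resource_providers/%s/aggregates' % rp,
                     headers=HEADERS, json=[aggregate])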

Best,
-jay


I updated my script with the trait. [3]



It's quite likely this is not well documented yet as this style of
declaring that something is shared was a later development. The
initial code that added the support for GET /resource_providers
was around; it was later reused for GET /allocation_candidates:

https://review.openstack.org/#/c/460798/


What would be a good place to document this? I think I can help with 
enhancing the documentation from this perspective.


Thanks again.
Cheers,
gibi



--
Chris Dent  ┬──┬◡ノ(° -°ノ) https://anticdent.org/
freenode: cdent tw: @anticdent


[3] http://paste.openstack.org/show/615629/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][ptl][tc] IMPORTANT upcoming change to technical elections

2017-07-17 Thread Jeremy Stanley
For those who won't read the giant wall of text below, a quick
summary...

If you want to run or vote in upcoming elections for PTL and TC,
make sure your Foundation Individual Membership is active and has at
least one Email address which matches an Email address in your
Gerrit account: log in at https://www.openstack.org/profile/ and
check that it says "Current Member Level: Foundation Member" near
the top and that at least one of the Primary, Second or Third Email
Address fields contains an address which matches at least one of the
entries available in the Preferred Email drop-down at
https://review.openstack.org/#/settings/contact (case sensitivity
doesn't matter but they at least need to be spelled the same).

If you're an "extra-ATC" and don't have a Gerrit account (this is
common for translators on the I18n team) then you still need to be a
Foundation Member to participate in technical elections and should
make sure your member profile includes the Email address listed for
you on your team's page at
https://governance.openstack.org/tc/reference/projects/ .

Now on to the long, boring details for people (like me!) who enjoy
reading them; much of this is taken from an Infra specification[1]
related to some ongoing work in this area...

According to the Bylaws of the OpenStack Foundation Appendix 4
Technical Committee Member Policy[2] (section 3b) along with the
OpenStack Technical Committee Charter definitions for APC[3] and
ATC[4], we limit the voter rolls for technical elections to
Foundation Individual Members. In order to comply with this
requirement in prior elections, we required all contributors to
CLA-enforced Git repositories to submit contact info to the Gerrit
contact store[5] which in turn pinged a simple API in the foundation
member system to confirm the preferred Email address in Gerrit
matched the primary Email address of an existing OpenStack
Foundation Individual Member.

This had a number of drawbacks:

1. It forced contributors to join the OpenStack Foundation even if
they had no interest in voting or participating in other member
benefits.

2. Our interpretation of the meaning of "contributor" for these
purposes was unnaturally limited to change owners in Gerrit, in part
because commit authors and co-authors weren't constrained by the
contact store process and so might not have been members; manual
listing as extra ATCs in the governance repo was the sole
workaround, and required cumbersome manual verification of
foundation membership for each addition.

3. The model was inherently flawed since it's been possible for a
couple years now for a member to officially resign or allow their
membership to lapse, but contact store submission was only ever
enforced once when the account was first set up and so we may have
been allowing lapsed or resigned members to vote in technical
elections.

4. The implementation was brittle and process confusing, resulting
in opaque errors which often confounded new contributors and overall
inhibited onboarding.

5. Because the protocol only submitted a single Email address and
backend implementation in the member system only queried against a
single address field, it forced contributors to use the same
primary/preferred address in both systems (at least initially).

6. Gerrit removed contact store functionality[6] upstream after the
version we're currently running, and we'd like to be able to upgrade
to a newer Gerrit release soon.

So what's changing?

Very recently the OpenStackID Resources system introduced a member
directory API[7] which is public and anonymous. Integrating this
into the change owners script[8] we use for generating electoral
rolls allows us to expressly filter out non-member contributors.
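
Conceptually the filtering is as simple as the Python sketch below; the data
structures and names here are purely illustrative and do not correspond to
the real change owners script or to the member directory API's actual
response format:

    def eligible_voters(owners, member_addresses):
        """Keep only contributors who have at least one Gerrit address
        belonging to an active Foundation Individual Member; comparison
        is case-insensitive, as noted above."""
        members = {address.lower() for address in member_addresses}
        voters = []
        for owner in owners:
            gerrit_addresses = {a.lower() for a in owner['emails']}
            if gerrit_addresses & members:
                voters.append(owner)
        return voters

    # Illustrative data only.
    owners = [
        {'name': 'Alice', 'emails': ['Alice@Example.org']},
        {'name': 'Bob', 'emails': ['bob@corp.example']},
    ]
    member_addresses = ['alice@example.org']
    print([o['name'] for o in eligible_voters(owners, member_addresses)])
    # -> ['Alice']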

Side effect benefits include:

* it will properly limit voting rights for extra ATCs who have not
joined the foundation, eliminating any need for the current
cumbersome vetting process

* it may help further identify duplicate contributors where there
exist multiple Email addresses in the member system for a single
membership corresponding to multiple accounts in Gerrit with those
different addresses

* it can even enable us (should we choose) to more easily expand the
interpreted definition of ATC/APC to include a variety of other
types of verifiable contribution tied to a known Email address
including commit authors and co-authors

The reasons for this message:

1. Preliminary runs of the patched script suggest that nearly 13%
of our active contributors in the last year may not be eligible to
vote in or run in upcoming technical elections, so I want to make
sure this change doesn't take too many people by surprise.

2. I'd like to assess whether it's reasonable timing to implement
this change before the Queens PTL elections, or between the PTL and
TC elections, or should wait until after the coming TC election.

3. It would be nice to initiate a debate over ways we can
reinterpret the term "contributor" (per drawback #2 above) in the
future to 

Re: [openstack-dev] [tripleo] scenario006 conflict

2017-07-17 Thread Emilien Macchi
On Mon, Jul 17, 2017 at 9:04 AM, Michele Baldessari  wrote:
> On Wed, Jul 12, 2017 at 02:23:25PM -0700, Emilien Macchi wrote:
>> Hey folks,
>>
>> Derek, it seems like you want to deploy Ironic on scenario006
>> (https://review.openstack.org/#/c/474802). I was wondering how it
>> would work with multinode jobs.
>> Also, Flavio would like to test k8s on scenario006:
>> https://review.openstack.org/#/c/471759/ . To avoid having too much
>> scenarios and complexity, I think if ironic tests can be done on a
>> 2nodes job, then we can deploy ironic on scenario004 maybe. If not,
>> then please give the requirements so we can see how to structure it.
>>
>> For Flavio's need, I think we need a dedicated scenario for now, since
>> he's not going to deploy any OpenStack service on the overcloud for
>> now, just k8s.
>>
>> Thanks for letting us know the plans, so we can keep the scenarios in
>> good shape.
>> Note: Numans also wants to test OVN and I suggested to create
>> scenario007 (since we can't deploy OVN before Pike, so upgrades
>> wouldn't work).
>> Note2: it seems like efforts done to test complex HA architectures
>> weren't finished in scenario005 - Michele: any thoughts on this one?
>> should we remove it now or do we expect it working one day?
>
> I'm a bit on the fence on this. On one side I'd love to resurrect it
> and try and complete it (previous attempts were failing due to unrelated
> issues and we never circled back to it). What we could do is catch two
> birds with one stone: namely use scenario005 for both OVN and more
> complex HA. As long as we deploy a node which hosts OVN in a dedicate
> role (be it via full pacemaker or pacemaker remote) we are exercising
> both OVN *and* the composable HA work.
>
> What do you think? Does that sound feasible?

Yes but warnings:

* scenario005 is a 4-node deployment in OpenStack Infra; it takes
quite a lot of CI resources.
* we would like to run the scenario which has OVN into
openstack/networking-ovn gate.

So I'm not sure this is really efficient. Scenario005 was created to
be run when touching pacemaker files in THT / puppet-tripleo and test
composable HA. Adding OVN to the stack is fine, but it also means the
job would run much more often (every time we patch OVN files in
TripleO and also for every patch in openstack/networking-ovn) - do we
really want that?

> cheers,
> Michele
> --
> Michele Baldessari
> Email:
> C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare] Application for inclusion of Glare in the list of official projects - Answers

2017-07-17 Thread Flavio Percoco

On 17/07/17 12:34 +0300, Mikhail Fedosin wrote:

Hello! Thank you all for your answers and advice!

I will try to summarize all of the above.

The purpose of the application was to get the community's views on
including Glare in the list of official projects, and a potential
replacement of Glance in the foreseeable future. A large number of
inspirational mails were received, and there were a number of questions
that I should answer.



Hey Mike,

Thanks for addressing these comments and summarizing them in this email.


1. "Glare is being developed by one company and a very limited circle of
people."
   At this stage this is undoubtedly so. But I think this is more a plus
than a minus. Working in a small team allows us to move much faster
and not spend months discussing simple things. Also I want to note that
three full-time engineers is enough. Obviously, this will not always be the
same. When we give the project to the community (i.e. make it an official
project), I can guarantee that the distribution by companies will increase.


I understand your optimism on this point and how being agile is helping you move
Glare forward. I'd like to highlight, however, how important having a
bigger/more diverse team is. Teams change, priorities change and I'd recommend
not underestimating this factor. There are many examples of projects that are
interesting and important for OpenStack that are not receiving the right amount
of contributions just yet.


2. "Glance is used everywhere, Glare will be very hard to replace him."
   Well, no one said that it would be easy. For our part, we did our best
to simplify this transition as much as possible: the data of Glance can be
migrated to Glare by a simple script, Glare API is a cleaned and improved
version of the Glance v2 API (
https://docs.google.com/document/d/18Tqad0NUPyFfHUo1KMr6bDDISpQtzacvZtEQIGhNkf4/edit?usp=sharing).
From my experience I can say that the transition from Glance v1 to Glance
v2 was at times more painful than this.

3. "What are the pros / cons of the transition to Glare"
   I'll start with the pros:
   OpenStack will get the features that the customers wanted from us
for several years: dynamic quotas that determine how much data a particular
tenant can upload, versioning of artifacts, support for layers, which will
make a universal COW in Cinder regardless of the proposed backend, and many
others, including missing "copy-from" from Glance v1.
   Glare is much more stable by design. There are no race conditions
(artifacts are locked before updates), all the known problems of Glance
were also solved in Glare.
   Subjectively, but it seems to me that the Glare code is better and
the architecture is cleaner. This will allow people unfamiliar with the
project to adapt more quickly to it.
   Cons:
   Glance has been developed for a long time and has good documentation;
there are also many tests. In other words, the project has been studied,
which cannot be said about Glare. My feeling is that after the transfer of
the project we will need a minimum of a year for its adoption in the
industry.
   It will take some effort to move from one project to another in
existing clouds. I believe that this process can be automated, but at the
same time I understand the complexity of such operations.

4. "How can a transition be made".
   I have several ideas how to organize this. But still I believe that the
decision should be taken by all together after a series of discussions. In
the basic version, I see it like this:
   1. We create an adapter in glare client that hides the minimal
differences between Glance v2 and Glare v1 APIs. For example, the image
will be activated immediately after upload.
   2. In Nova, another glare.py module will be created, which in fact
is just a copy of glance.py with cosmetic changes.
   3. Existing data migrate without loss by a simple script.
   4. ?
   5. PROFIT!



I would like us to focus first in how we can get Glare in as an official
project, what changes (if any) are required and whether this makes sense. Then
it would be a good time for us to start discussing if/when/how we could replace
Glance with Glare.

I believe mixing both discussions is not helping and we cannot do any
replacement if Glare is not part of the big tent first. I understand that some
inclusion questions could be answered by having some of the replacement points
figured out but still, I believe doing the latter before the former is just
getting ahead of ourselves.


5. "There's enough overlap between glare and glance + barbican + swift"
   I do not think there is any overlap with Barbican and Swift. Swift
is used as one of the possible backends (as in Glance), Glare only stores
links to the data in it.
   Like in Barbican there is a potential opportunity to keep secrets in
Glare. This logic can be added with just one plugin. But in order to avoid
potential collisions, it was decided not to include this 

Re: [openstack-dev] [nova][placement] scheduling with custom resouce classes

2017-07-17 Thread Balazs Gibizer
On Thu, Jul 13, 2017 at 11:37 AM, Chris Dent  
wrote:

On Thu, 13 Jul 2017, Balazs Gibizer wrote:

/placement/allocation_candidates?resources=CUSTOM_MAGIC%3A512%2CMEMORY_MB%3A64%2CVCPU%3A1" 
but placement returns an empty response. Then nova scheduler falls 
back to legacy behavior [4] and places the instance without 
considering the custom resource request.


As far as I can tell at least one missing piece of the puzzle here
is that your MAGIC provider does not have the
'MISC_SHARES_VIA_AGGREGATE' trait. It's not enough for the compute
and MAGIC to be in the same aggregate, the MAGIC needs to announce
that its inventory is for sharing. The comments here have a bit more
on that:


https://github.com/openstack/nova/blob/master/nova/objects/resource_provider.py#L663-L678


Thanks a lot for the detailed answer. Yes, this was the missing piece. 
However I had to add that trait both to the MAGIC provider and to my 
compute provider to make it work. Is it intentional that the compute 
also has to have that trait?


I updated my script with the trait. [3]



It's quite likely this is not well documented yet as this style of
declaring that something is shared was a later development. The
initial code that added the support for GET /resource_providers
was around; it was later reused for GET /allocation_candidates:

https://review.openstack.org/#/c/460798/


What would be a good place to document this? I think I can help with 
enhancing the documentation from this perspective.


Thanks again.
Cheers,
gibi



--
Chris Dent  ┬──┬◡ノ(° -°ノ)   
https://anticdent.org/

freenode: cdent tw: @anticdent


[3] http://paste.openstack.org/show/615629/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Flavio Percoco

On 17/07/17 09:47 -0400, James Slagle wrote:

On Mon, Jul 17, 2017 at 8:05 AM, Flavio Percoco  wrote:

Thanks for all the feedback so far. This is one of the things I appreciate
the
most about this community: open conversations, honest feedback and the will to
collaborate.

I'm top-posting to announce that we'll have a joint meeting with the Kolla
team
on Wednesday at 16:00 UTC. I know it's not an ideal time for many (it's not
for
me) but I do want to have a live discussion with the rest of the Kolla team.

Some questions about the meeting:

* How much time can we allocate?
* Can we prepare an agenda rather than just discussing "TripleO is thinking
of
 using Ansible and not kolla-kubernetes"? (I'm happy to come up with such
>  an agenda)


It may help to prepare some high level requirements around what we
need out of a solution. For the ansible discussion I started this
etherpad:

https://etherpad.openstack.org/p/tripleo-ptg-queens-ansible

How we use Ansible and what we want to use it for, is related to this
discussion around Helm. Although, it's not the exact same discussion,
so if you wanted to start a new etherpad more specific to
tripleo/kubernetes that may be good as well.

One thing I think is important in this discussion is that we should be
thinking about deploying containers on both Kubernetes and
!Kubernetes. That is one of the reasons I like the ansible approach,
in that I think it could address both cases with a common interface
and API. I don't think we should necessarily choose a solution that
requires to deploy on Kubernetes. Because then we are stuck with that
choice. It'd be really nice to just "docker run" sometimes for
dev/test. I don't know if Helm has that abstraction or not, I'm just
trying to capture the requirement.


Yes!

Thanks for pointing this out as this is one of the reasons why I was proposing
ansible as our common interface w/o any extra layer.

I'll probably start a new etherpad for this as I would prefer not to distract
the rest of the TripleO + ansible discussion. At the end, if ansible ends up
being the tool we pick, I'll make sure to update your etherpad.

Flavio


If you consider the parallel with Heat in this regard, we are
currently "stuck" deploying on OpenStack (undercloud with Heat). We've
had to work an a lot of complimentary features to add the flexibility
to TripleO that are a result of having to use OpenStack (OVB,
split-stack).

That's exactly why we are starting a discussion around using Ansible,
and is one of the fundamental changes that operators have been
requesting in TripleO.

--
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] scenario006 conflict

2017-07-17 Thread Flavio Percoco

On 17/07/17 15:56 +0100, Derek Higgins wrote:

On 17 July 2017 at 15:37, Emilien Macchi  wrote:

On Thu, Jul 13, 2017 at 6:01 AM, Emilien Macchi  wrote:

On Thu, Jul 13, 2017 at 1:55 AM, Derek Higgins  wrote:

On 12 July 2017 at 22:33, Emilien Macchi  wrote:

On Wed, Jul 12, 2017 at 2:23 PM, Emilien Macchi  wrote:
[...]

Derek, it seems like you want to deploy Ironic on scenario006
(https://review.openstack.org/#/c/474802). I was wondering how it
would work with multinode jobs.


Derek, I also would like to point out that
https://review.openstack.org/#/c/474802 is missing the environment
file for non-containerized deployments & and also the pingtest file.
Just for the record, if we can have it before the job moves in gate.


I knew I had left out the ping test file, this is the next step but I
can create a noop one for now if you'd like?


Please create a basic pingtest with common things we have in other scenarios.


Is the non-containerized deployments a requirement?


Until we stop supporting non-containerized deployments, I would say yes.



Thanks,
--
Emilien Macchi


So if you create a libvirt domain, would it be possible to do it on
scenario004 for example and keep coverage for other services that are
already on scenario004? It would avoid to consume a scenario just for
Ironic. If not possible, then talk with Flavio and one of you will
have to prepare scenario007 or 0008, depending where Numans is in his
progress to have OVN coverage as well.


I haven't seen much resolution / answers about it. We still have the
conflict right now and open questions.

Derek, Flavio - let's solve this one this week if we can.

Yes, I'll be looking into using scenario004 this week. I was traveling
last week so wasn't looking at it.


Awesome! Thanks, Derek.
Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] PTL nominations

2017-07-17 Thread ChangBo Guo
Hi oslo folks,

The PTL nomination week is fast approaching [0], and as you might have
guessed by the subject of this email, I am not planning to run for Queens.
I'll still be in the team and will give some guidance about the oslo PTL's
daily work, as previous PTLs did before.
It has been my honor to be oslo PTL; I learned a lot and grew quickly. It's
time to give someone else the opportunity to grow in the amazing role of oslo PTL.

[0]https://review.openstack.org/#/c/481768/4/configuration.yaml

-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][ironic] Patch needed to merge to fix ironic-inspector grenade job

2017-07-17 Thread John Villalovos
Hi Infra,

We were hoping to get a review of:
https://review.openstack.org/483059
Add the ironic-inspector services to features.yaml
openstack-infra/devstack-gate

As it is needed to fix the ironic-inspector grenade job.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Queens Goal for policy-in-code

2017-07-17 Thread Emilien Macchi
On Wed, Jul 12, 2017 at 7:31 AM, Lance Bragstad  wrote:
> [...] I can look into organizing a slot at the PTG so we can work through 
> things as a group.

Good idea! In our experience from our last PTG, having one room for
each goal was a bit overkill and the room was under-used.
I would suggest booking one room for all goals, which would bring
folks involved with goals together in the same room, able to
collaborate without a wall :-)

Any feedback on this is welcome,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] next week is deadline for final release for non-client libraries

2017-07-17 Thread ChangBo Guo
We scheduled the final release(s) for oslo libraries on July 19; please
focus on bug fixing. Thanks!

2017-07-14 20:48 GMT+08:00 ChangBo Guo :

> Just a reminder,  I will add final releases of oslo libraries for Pike on
> next Monday.
>
> 2017-07-10 22:37 GMT+08:00 ChangBo Guo :
>
>> OpenStackers,
>>
>> According to Pike Schedule https://releases.openstack.org
>> /pike/schedule.html
>>
>> Jul 17 - Jul 21 is the deadline for final release for oslo libraries, so
>> please pay more attentions to your reviews which are needed for Pike. Feel
>> free to ping me if you want to quicken the review process.
>>
>> --
>> ChangBo Guo(gcb)
>>
>
>
>
> --
> ChangBo Guo(gcb)
>



-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] scenario006 conflict

2017-07-17 Thread Derek Higgins
On 17 July 2017 at 15:37, Emilien Macchi  wrote:
> On Thu, Jul 13, 2017 at 6:01 AM, Emilien Macchi  wrote:
>> On Thu, Jul 13, 2017 at 1:55 AM, Derek Higgins  wrote:
>>> On 12 July 2017 at 22:33, Emilien Macchi  wrote:
 On Wed, Jul 12, 2017 at 2:23 PM, Emilien Macchi  wrote:
 [...]
> Derek, it seems like you want to deploy Ironic on scenario006
> (https://review.openstack.org/#/c/474802). I was wondering how it
> would work with multinode jobs.

 Derek, I also would like to point out that
 https://review.openstack.org/#/c/474802 is missing the environment
 file for non-containerized deployments & and also the pingtest file.
 Just for the record, if we can have it before the job moves in gate.
>>>
>>> I knew I had left out the ping test file, this is the next step but I
>>> can create a noop one for now if you'd like?
>>
>> Please create a basic pingtest with common things we have in other scenarios.
>>
>>> Is the non-containerized deployments a requirement?
>>
>> Until we stop supporting non-containerized deployments, I would say yes.
>>

 Thanks,
 --
 Emilien Macchi
>>
>> So if you create a libvirt domain, would it be possible to do it on
>> scenario004 for example and keep coverage for other services that are
>> already on scenario004? It would avoid to consume a scenario just for
>> Ironic. If not possible, then talk with Flavio and one of you will
>> have to prepare scenario007 or 0008, depending where Numans is in his
>> progress to have OVN coverage as well.
>
> I haven't seen much resolution / answers about it. We still have the
> conflict right now and open questions.
>
> Derek, Flavio - let's solve this one this week if we can.
Yes, I'll be looking into using scenario004 this week. I was traveling
last week so wasn't looking at it.

>
> Thanks,
> --
> Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] scenario006 conflict

2017-07-17 Thread Emilien Macchi
On Thu, Jul 13, 2017 at 6:01 AM, Emilien Macchi  wrote:
> On Thu, Jul 13, 2017 at 1:55 AM, Derek Higgins  wrote:
>> On 12 July 2017 at 22:33, Emilien Macchi  wrote:
>>> On Wed, Jul 12, 2017 at 2:23 PM, Emilien Macchi  wrote:
>>> [...]
 Derek, it seems like you want to deploy Ironic on scenario006
 (https://review.openstack.org/#/c/474802). I was wondering how it
 would work with multinode jobs.
>>>
>>> Derek, I also would like to point out that
>>> https://review.openstack.org/#/c/474802 is missing the environment
>>> file for non-containerized deployments & and also the pingtest file.
>>> Just for the record, if we can have it before the job moves in gate.
>>
>> I knew I had left out the ping test file, this is the next step but I
>> can create a noop one for now if you'd like?
>
> Please create a basic pingtest with common things we have in other scenarios.
>
>> Is the non-containerized deployments a requirement?
>
> Until we stop supporting non-containerized deployments, I would say yes.
>
>>>
>>> Thanks,
>>> --
>>> Emilien Macchi
>
> So if you create a libvirt domain, would it be possible to do it on
> scenario004 for example and keep coverage for other services that are
> already on scenario004? It would avoid to consume a scenario just for
> Ironic. If not possible, then talk with Flavio and one of you will
> have to prepare scenario007 or 0008, depending where Numans is in his
> progress to have OVN coverage as well.

I haven't seen much resolution / answers about it. We still have the
conflict right now and open questions.

Derek, Flavio - let's solve this one this week if we can.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-07-17 Thread Andy McCrae
Hi Andreas,

OK, added. I'm waiting for packstack confirmation.
>
> The current list is now:
>
>
> openstack/sahara-extra   stable/icehousePlease retire
> openstack/sahara-extra   stable/mitaka  Please retire
> openstack/packstack  stable/kiloWaiting for confirmation
> openstack/packstack  stable/liberty Waiting for confirmation
> openstack/packstack  stable/mitaka  Waiting for confirmation
> openstack/bareon-ironic  stable/mitaka  Please retire
>
> openstack/rpm-packaging  stable/mitaka  Please retire
> openstack/training-labs  stable/mitaka  Please retire
>
> Not done in
> https://gist.githubusercontent.com/tbreeds/c99e62bf8da19380e4eb130be8783b
> e7/raw/6d02deb40e07516ce8fc529d2ba8c74af11a5a6b/mitaka_eol_data.txt
>
> openstack/astara stable/mitaka  Please retire
>
>
> Special treatment:
> openstack/training-labs icehouse-eol (just delete branch, tag exists)
> openstack/training-labs juno-eol (delete branch, create tag instead)
>
>
> Any other late comers?
>
> We held back the openstack-ansible repo to allow us to point to the
mitaka-eol tag for the other role/project repos.
That's been done - I've created the mitaka-eol tag in the openstack-ansible
repo; can we remove the stable/mitaka branch from openstack-ansible too?

Thanks again to all who have helped EOL all the branches etc!
Andy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread James Slagle
On Mon, Jul 17, 2017 at 8:05 AM, Flavio Percoco  wrote:
> Thanks for all the feedback so far. This is one of the things I appreciate
> the
> most about this community: open conversations, honest feedback and the will to
> collaborate.
>
> I'm top-posting to announce that we'll have a joint meeting with the Kolla
> team
> on Wednesday at 16:00 UTC. I know it's not an ideal time for many (it's not
> for
> me) but I do want to have a live discussion with the rest of the Kolla team.
>
> Some questions about the meeting:
>
> * How much time can we allocate?
> * Can we prepare an agenda rather than just discussing "TripleO is thinking
> of
>  using Ansible and not kolla-kubernetes"? (I'm happy to come up with such
>  an agenda)

It may help to prepare some high level requirements around what we
need out of a solution. For the ansible discussion I started this
etherpad:

https://etherpad.openstack.org/p/tripleo-ptg-queens-ansible

How we use Ansible and what we want to use it for, is related to this
discussion around Helm. Although, it's not the exact same discussion,
so if you wanted to start a new etherpad more specific to
tripleo/kubernetes that may be good as well.

One thing I think is important in this discussion is that we should be
thinking about deploying containers on both Kubernetes and
!Kubernetes. That is one of the reasons I like the ansible approach,
in that I think it could address both cases with a common interface
and API. I don't think we should necessarily choose a solution that
requires to deploy on Kubernetes. Because then we are stuck with that
choice. It'd be really nice to just "docker run" sometimes for
dev/test. I don't know if Helm has that abstraction or not, I'm just
trying to capture the requirement.

If you consider the parallel with Heat in this regard, we are
currently "stuck" deploying on OpenStack (undercloud with Heat). We've
had to work on a lot of complementary features to add the flexibility
to TripleO that are a result of having to use OpenStack (OVB,
split-stack).

That's exactly why we are starting a discussion around using Ansible,
and is one of the fundamental changes that operators have been
requesting in TripleO.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][devstack] DIB builds after mysql.qcow2 removal

2017-07-17 Thread Paul Belanger
On Mon, Jul 17, 2017 at 07:22:34PM +1000, Ian Wienand wrote:
> Hi,
> 
> The removal of the mysql.qcow2 image [1] had a flow-on effect noticed
> first by Paul in [2] that the tools/image_list.sh "sanity" check was
> not updated, leading to DIB builds failing in a most unhelpful way as
> it tries to cache the images for CI builds.
> 
> So while [2] fixes the problem; one complication here is that the
> caching script [3] loops through the open devstack branches and tries
> to collect the images to cache.
> 
> Now it seems we hadn't closed the liberty or mitaka branches.  This
> causes a problem, because the old branches refer to the old image, but
> we can't actually commit a fix to change them because the branch is
> broken (such as [4]).
> 
> I have taken the liberty of EOL-ing stable/liberty and stable/mitaka
> for devstack.  I get the feeling it was just forgotten at the time.
> Comments in [4] support this theory.  I have also taken the liberty of
> approving backports of the fix to newton and ocata branches [5],[6].
> 
> A few 3rd-party CI people using dib have noticed this failure.  As the
> trio of [4],[5],[6] move through, your builds should start working
> again.
> 
> Thanks,
> 
> -i
> 
> [1] https://review.openstack.org/482600
> [2] https://review.openstack.org/484001
> [3] 
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/cache-devstack/extra-data.d/55-cache-devstack-repos
> [4] https://review.openstack.org/482604
> [5] https://review.openstack.org/484299
> [6] https://review.openstack.org/484298
> 
Thanks, I had patches up last week but was hitting random devstack job failures.
I'll watch this today and make sure our image builds are back online.

Also, thanks for cleaning up the old branches.  I was planning on doing that
this week.

-PB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Interface configuration and assumptions for masters/minions launched by magnum

2017-07-17 Thread Waines, Greg
When MAGNUM launches a VM or Ironic instance for a COE master or minion node, 
with the COE image,
what are the interface configuration and assumptions for these nodes?
e.g.
- only a single interface ?
- master and minion communication over that interface ?
- communication to Docker Registry or public Docker Hub over that interface ?
- public communications for containers over that interface ?

Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Architecture support for either VM or Ironic instance as Containers' Host ?

2017-07-17 Thread Waines, Greg
I believe the MAGNUM architecture supports using either a VM Instance or an 
Ironic Instance as the Host for the COE’s masters and minions.

How is this done / abstracted within the MAGNUM architecture?
i.e. is there a ‘container-host-driver API’ that is defined and implemented 
for both VM and Ironic?
(Feel free to just refer me to a URL that describes this.)

The reason I ask is that I have a proprietary bare metal service that I would 
like to have MAGNUM run on top of.

Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-17 Thread Dmitry Tantsur

On 07/12/2017 04:18 AM, Steve Baker wrote:



On Wed, Jul 12, 2017 at 11:47 AM, James Slagle wrote:


On Tue, Jul 11, 2017 at 6:53 PM, Steve Baker wrote:
 >
 >
 > On Tue, Jul 11, 2017 at 6:51 AM, James Slagle wrote:
 >>
 >> On Mon, Jul 10, 2017 at 11:37 AM, Lars Kellogg-Stedman wrote:
 >> > On Fri, Jul 7, 2017 at 1:50 PM, James Slagle wrote:
 >> >>
 >> >> There are also some ideas forming around pulling the Ansible playbooks
 >> >>
 >> >> and vars out of Heat so that they can be rerun (or run initially)
 >> >> independently from the Heat SoftwareDeployment delivery mechanism:
 >> >
 >> >
 >> > I think the closer we can come to "the operator runs ansible-playbook to
 >> > configure the overcloud" the better, but not because I think Ansible is
 >> > inherently a great tool: rather, I think the many layers of indirection in
 >> > our existing model make error reporting and diagnosis much more complicated
 >> > than it needs to be.  Combined with Puppet's "fail as late as possible"
 >> > model, this means that (a) operators waste time waiting for a deployment
 >> > that is ultimately going to fail but hasn't yet, and (b) when it does fail,
 >> > they need relatively intimate knowledge of our deployment tools to backtrack
 >> > through logs and find the root cause of the failure.
 >> >
 >> > If we can offer a deployment mode that reduces the number of layers between
 >> > the operator and the actions being performed on the hosts I think we would
 >> > win on both fronts: faster failures and reporting errors as close as
 >> > possible to the actual problem will result in less frustration across the
 >> > board.
 >> >
 >> > I do like Steve's suggestion of a split model where Heat is responsible for
 >> > instantiating OpenStack resources while Ansible is used to perform host
 >> > configuration tasks.  Despite all the work done on Ansible's OpenStack
 >> > modules, they feel inflexible and frustrating to work with when compared to
 >> > Heat's state-aware, dependency ordered deployments.  A solution that allows
 >> > Heat to output configuration that can subsequently be consumed by Ansible --
 >> > either running manually or perhaps via Mistral for API-driven-deployments --
 >> > seems like an excellent goal.  Using Heat as a "front-end" to the process
 >> > means that we get to keep the parameter validation and documentation that is
 >> > missing in Ansible, while still following the Unix philosophy of giving you
 >> > enough rope to hang yourself if you really want it.
 >>
 >> This is excellent input, thanks for providing it.
 >>
 >> I think it lends itself towards suggesting that we may like to pursue
 >> (again) adding native Ironic resources to Heat. If those were written
 >> in a way that also addressed some of the feedback about TripleO and
 >> the baremetal deployment side, then we could continue to get the
 >> advantages from Heat that you mention.
 >>
 >> My personal opinion to date is that Ansible's os_ironic* modules are
 >> superior in some ways to the Heat->Nova->Ironic model. However, just a
 >> Heat->Ironic model may work in a way that has the advantages of both.
 >
 >
 > I too would dearly like to get nova out of the picture. Our placement needs
 > mean the scheduler is something we need to work around, and it discards
 > basically all context for the operator when ironic can't deploy for some
 > reason.
 >
 > Whether we use a mistral workflow[1], a heat resource, or ansible os_ironic,
 > there will still need to be some python logic to build the config drive ISO
 > that injects the ssh keys and os-collect-config bootstrap.
 >
 > Unfortunately ironic iPXE boot from iSCSI[2] doesn't support config-drive
 > (still?) so the only option to inject ssh keys is the nova ec2-metadata
 > service (or equivalent). I suspect if we can't make every ironic deployment
 > method support config-drive then we're stuck with nova.
 >
 > I don't have a strong preference for a heat resource vs mistral vs ansible
 > os_ironic, but given there is some python logic required anyway, I would
 > lean towards a heat resource. If the resource is general enough we could
 > propose it to heat upstream, otherwise we could carry it in tripleo-common.
 >
 > Alternatively, we 

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Flavio Percoco

On 14/07/17 08:08 -0700, Emilien Macchi wrote:

On Fri, Jul 14, 2017 at 2:17 AM, Flavio Percoco  wrote:


Greetings,

As some of you know, I've been working on the second phase of TripleO's
containerization effort. This phase is about migrating the docker-based
deployment onto Kubernetes.

This phase requires work on several areas: Kubernetes deployment, OpenStack
deployment on Kubernetes, configuration management, etc. While I've been
diving
into all of these areas, this email is about the second point, OpenStack
deployment on Kubernetes.

There are several tools we could use for this task. kolla-kubernetes,
openstack-helm, ansible roles, among others. I've looked into these tools
and
I've come to the conclusion that TripleO would be better off by having
ansible
roles that would allow for deploying OpenStack services on Kubernetes.

The existing solutions in the OpenStack community require using Helm. While
I
like Helm and both, kolla-kubernetes and openstack-helm OpenStack projects,
I
believe using any of them would add an extra layer of complexity to TripleO,
which is something the team has been fighting for years - especially
now
that the snowball is being chopped off.

Adopting any of the existing projects in the OpenStack community would
require
TripleO to also write the logic to manage those projects. For example, in
the
case of openstack-helm, the TripleO team would have to write either ansible
roles or heat templates to manage - install, remove, upgrade - the charts
(I'm
happy to discuss this point further but I'm keeping it at a high-level on
purpose for the sake of not writing a 10k-words-long email).

James Slagle sent an email[0], a couple of days ago, to form TripleO plans
around ansible. One take-away from this thread is that TripleO is adopting
ansible more and more, which is great and it fits perfectly with the
conclusion
I reached.

Now, what this work means is that we would have to write an ansible role for
each service that will deploy the service on a Kubernetes cluster. Ideally
these
roles will also generate the configuration files (removing the need of
puppet
entirely) and they would manage the lifecycle. The roles would be isolated
and
this will reduce the need of TripleO Heat templates. Doing this would give
TripleO full control on the deployment process too.

In addition, we could also write Ansible Playbook Bundles to contain these
roles
and run them using the existing docker-cmd implementation that is coming out
in
Pike (you can find a PoC/example of this in this repo[1]).

Now, I do realize the amount of work this implies and that this is my
opinion/conclusion. I'm sending this email out to kick-off the discussion
and
gather thoughts and opinions from the rest of the community.

Finally, what I really like about writing pure ansible roles is that ansible
is
a known, powerful tool that has been adopted by many operators already.
It'll
provide the flexibility needed and, if structured correctly, it'll allow for
operators (and other teams) to just use the parts they need/want without
depending on the full-stack. I like the idea of being able to separate
concerns
in the deployment workflow and the idea of making it simple for users of
TripleO
to do the same at runtime. Unfortunately, going down this road means that my
hope of creating a field where we could collaborate even more with other
deployment tools will be a bit limited but I'm confident the result would
also
be useful for others and that we all will benefit from it... My hopes might
be a
bit naive *shrugs*


Of course I'm biased since I've been (a little) involved in that work
but I like the idea of :

- Moving forward with our containerization. docker-cmd will help us
for sure for this transition (I insist on the fact that TripleO is a
product that you can upgrade and we try to make it smooth for our
operators), so we can't just trash everything and switch to a new
tool. I think the approach that we're taking is great and made of baby
steps where we try to solve different problems.
- Using more Ansible - the right way - when it makes sense : with the
TripleO containerization, we only use Puppet for Configuration
Management, managing a few resources but not for orchestration (or not
all the features that Puppet provide) and for Data Binding (Hiera). To
me, it doesn't make sense for us to keep investing much in Puppet
modules if we go k8s & Ansible. That said, see the next point.
- Having a transition path between TripleO with Puppet and TripleO
with apbs, and having some sort of binding between previous hieradata
generated by TripleO & a similar data binding within Ansible playbooks
would help. I saw your PoC Flavio, I found it great and I think we
should make 
https://github.com/tripleo-apb/ansible-role-k8s-keystone/blob/331f405bd3f7ad346d99e964538b5b27447a0ebf/provision-keystone-apb/tasks/hiera.yaml
optional when running apbs, and allow to provide another format (more
Ansiblish) to let folks not 

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Flavio Percoco

Thanks for all the feedback so far. This is one of the things I appreciate the
most about this community: open conversations, honest feedback and the will to
collaborate.

I'm top-posting to announce that we'll have a joint meeting with the Kolla team
on Wednesday at 16:00 UTC. I know it's not an ideal time for many (it's not for
me) but I do want to have a live discussion with the rest of the Kolla team.

Some questions about the meeting:

* How much time can we allocate?
* Can we prepare an agenda rather than just discussing "TripleO is thinking of
 using Ansible and not kolla-kubernetes"? (I'm happy to come up with such
 agenda)

One last point. I'm not interested in conversations around competition,
re-invention, etc. I think I speak for the entire TripleO team when I say that
this is not about "winning" in this space but rather seeing how/if we can
collaborate and how/if it makes sense to keep exploring the path described in
the email below.

Flavio

On 14/07/17 11:17 +0200, Flavio Percoco wrote:


Greetings,

As some of you know, I've been working on the second phase of TripleO's
containerization effort. This phase is about migrating the docker-based
deployment onto Kubernetes.

This phase requires work on several areas: Kubernetes deployment, OpenStack
deployment on Kubernetes, configuration management, etc. While I've been diving
into all of these areas, this email is about the second point, OpenStack
deployment on Kubernetes.

There are several tools we could use for this task. kolla-kubernetes,
openstack-helm, ansible roles, among others. I've looked into these tools and
I've come to the conclusion that TripleO would be better off by having ansible
roles that would allow for deploying OpenStack services on Kubernetes.

The existing solutions in the OpenStack community require using Helm. While I
like Helm and both, kolla-kubernetes and openstack-helm OpenStack projects, I
believe using any of them would add an extra layer of complexity to TripleO,
which is something the team has been fighting for years - especially now
that the snowball is being chopped off.

Adopting any of the existing projects in the OpenStack community would require
TripleO to also write the logic to manage those projects. For example, in the
case of openstack-helm, the TripleO team would have to write either ansible
roles or heat templates to manage - install, remove, upgrade - the charts (I'm
happy to discuss this point further but I'm keeping it at a high-level on
purpose for the sake of not writing a 10k-words-long email).

James Slagle sent an email[0], a couple of days ago, to form TripleO plans
around ansible. One take-away from this thread is that TripleO is adopting
ansible more and more, which is great and it fits perfectly with the conclusion
I reached.

Now, what this work means is that we would have to write an ansible role for
each service that will deploy the service on a Kubernetes cluster. Ideally these
roles will also generate the configuration files (removing the need of puppet
entirely) and they would manage the lifecycle. The roles would be isolated and
this will reduce the need of TripleO Heat templates. Doing this would give
TripleO full control on the deployment process too.

In addition, we could also write Ansible Playbook Bundles to contain these roles
and run them using the existing docker-cmd implementation that is coming out in
Pike (you can find a PoC/example of this in this repo[1]).

Now, I do realize the amount of work this implies and that this is my
opinion/conclusion. I'm sending this email out to kick-off the discussion and
gather thoughts and opinions from the rest of the community.

Finally, what I really like about writing pure ansible roles is that ansible is
a known, powerful tool that has been adopted by many operators already. It'll
provide the flexibility needed and, if structured correctly, it'll allow for
operators (and other teams) to just use the parts they need/want without
depending on the full-stack. I like the idea of being able to separate concerns
in the deployment workflow and the idea of making it simple for users of TripleO
to do the same at runtime. Unfortunately, going down this road means that my
hope of creating a field where we could collaborate even more with other
deployment tools will be a bit limited but I'm confident the result would also
be useful for others and that we all will benefit from it... My hopes might be a
bit naive *shrugs*

Flavio

[0] http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html
[1] https://github.com/tripleo-apb/tripleo-apbs

--
@flaper87
Flavio Percoco




--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CI Squad Meeting Summary (week 28) - some announcements

2017-07-17 Thread Attila Darazs
If the topics below interest you and you want to contribute to the 
discussion, feel free to join the next meeting:


Time: Thursdays, 14:30-15:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

= Announcements =

TripleO Cores who would like to +workflow changes on tripleo-quickstart, 
tripleo-quickstart-extras and tripleo-ci should attend the Squad meeting 
to gain the necessary overview for deciding when to submit changes to 
these repos. This was discussed by the repo specific cores over this 
meeting.


In other news the https://thirdparty-logs.rdoproject.org/ logserver 
(hosted on OS1) migrated to https://thirdparty.logs.rdoproject.org/ (on 
RDO cloud).


= Discussion topics =

This week we had a more balanced agenda, with multiple small topics. 
Here they are:


* John started working on the much requested 3 node multinode feature 
for Quickstart. Here's his WIP change[1]. This is necessary to test HA + 
containers on multinode jobs.


* The OVB job transition is almost complete. Sagi was cleaning up 
the last few tasks, replacing the 
gate-tripleo-ci-centos-7-ovb-nonha-puppet-* jobs for ceph and cinder with 
featureset024, which deploys ceph (former updates job), and the 
gate-tripleo-ci-centos-7-ovb-nonha-convergence job, which runs on 
experimental for the Heat repo.


* Gabriele made a nice solution to run periodic jobs on demand if 
necessary. The patch[2] is still not merged, but it looks promising.


* Ronelle and Gabriele continue to work on the RDO cloud migration 
(both OVB and multinode). Some new and some already existing jobs have 
been migrated there as a test.


That's it for last week.

Best regards,
Attila

[1] https://review.openstack.org/483078
[2] https://review.openstack.org/478516

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [horizon-plugin] Raising Django version cap

2017-07-17 Thread Rob Cresswell (rcresswe)
Awesome, I appreciate the work here. I spoke with a couple Django folk on IRC 
and they didn’t have any other solution than “vendor the code”, so I think your 
approach is probably the most reasonable. I’m fairly annoyed that they dropped 
an interface like that, but oh well.

I’ll take a look at the patch and give some feedback. Hoping to also get some 
feedback on the D_O_A merge this week too, otherwise I’ll probably just go 
ahead and do it.

Rob

On 17 Jul 2017, at 09:12, Adrian Turjak wrote:


Was hoping to play with this much sooner, but here is a quick hack for horizon 
working with django 1.11

https://review.openstack.org/#/c/484277/

The main issues are with widgets from Django which are no longer there, and all 
I've done is move them to our repo from django 1.10's code. This is probably 
not a good long term solution.

Then django-babel doesn't yet have a version that has django 1.11 support, 
although the change has been merged to master. Just needs a new release. For 
now an install from source works for testing.

And... because it was easier, I did this off the patch that brings 
openstack_auth into horizon. Because of some import order changes in django (I 
assume), there was an issue due to an import that caused a call to 
get_user_model before openstack_auth was fully loaded.

Beyond that, it all 'appears' to work. I launched some instances, created some 
networks, changed my password, managed some projects and users. There is tons 
to actually test, but mostly it just seems to work.

On 05/07/17 22:24, Rob Cresswell (rcresswe) wrote:
If you want to install Django 1.11 and test it, that would be very helpful, 
even if its just to open bugs. I’m in the process of adding a non-voting job 
for 1.11 right now, so we should be able to move quickly.

Rob

On 5 Jul 2017, at 01:36, Adrian Turjak wrote:

Great work!

Is there anything that can be done to help meet that July deadline and get 
1.11.x in? I'm more than happy to help with reviews or even fixes for newer 
Django in Horizon, and we should try and get others involved if it will help as 
I think this is a little urgent.

Running anything less than Django 1.11 leaves us in a weird spot because of the 
point where support ends for any versions below it.

Looking at the release timelines, if we don't get 1.11 in for Pike, we'll have 
released a version of Horizon that will be for an unsupported version of Django 
in about 6 months time (8 if deployers stick with django 1.8):
https://releases.openstack.org/pike/schedule.html
https://www.djangoproject.com/download/

It isn't as bad as it could be, but it's an awkward situation to be in. 1.9 is no 
longer supported and 1.10 support stops in 2018, so realistically 1.8 is the only 
version that keeps Pike 'safe' until Queens, which is not particularly great either. 
Getting 1.11 support in would be ideal.


On 05/07/17 03:01, Rob Cresswell wrote:
Hi everyone,

I've put up a patch to global-requirements to raise the Django cap to "<1.11", 
with the upper-constraint at "1.10.7" 
(https://review.openstack.org/#/c/480215). Depending on the remaining time, I'd 
quite like to move us to "1.11.x" before Requirements Freeze, which will fall 
around the 27th of July.

Rob



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glare] Application for inclusion of Glare in the list of official projects - Answers

2017-07-17 Thread Mikhail Fedosin
Hello! Thank you all for your answers and advice!

I will try to summarize all of the above.

The purpose of the application was to get the community's views on
including Glare in the list of official projects, and a potential
replacement of Glance in the foreseeable future. A large number of
inspirational mails were received, and there were a number of questions
that I should answer.

1. "Glare is being developed by one company and a very limited circle of
people."
At this stage this is undoubtedly so. But I think this is more a plus
than a minus. Working in a small team allows us to move much faster
and not spend months discussing simple things. Also I want to note that
three full-time engineers are enough. Obviously, this will not always be the
case. When we give the project to the community (i.e. make it an official
project), I can guarantee that the distribution by companies will increase.

2. "Glance is used everywhere, Glare will be very hard to replace him."
Well, no one said that it would be easy. For our part, we did our best
to simplify this transition as much as possible: the data of Glance can be
migrated to Glare by a simple script, Glare API is a cleaned and improved
version of the Glance v2 API (
https://docs.google.com/document/d/18Tqad0NUPyFfHUo1KMr6bDDISpQtzacvZtEQIGhNkf4/edit?usp=sharing).
From my experience I can say that the transition from Glance v1 to Glance
v2 was at times more painful than this.

3. "What are the pros / cons of the transition to Glare"
I'll start with the pros:
OpenStack will get the features that the customers wanted from us
for several years: dynamic quotas that determine how much data a particular
tenant can upload, versioning of artifacts, support for layers, which will
make a universal COW in Cinder regardless of the proposed backend, and many
others, including missing "copy-from" from Glance v1.
Glare is much more stable by design. There are no race conditions
(artifacts are locked before updates), and all the known problems of Glance
were also solved in Glare.
Subjectively, it seems to me that the Glare code is better and
the architecture is cleaner. This will allow people unfamiliar with the
project to adapt more quickly to it.
Cons:
Glance has been developed for a long time; it has good documentation
and there are many tests. In other words, the project is well understood,
which cannot be said about Glare. My feeling is that after the
transfer of the project we will need a year at minimum for its adoption
in the industry.
It will take some effort to move from one project to another in
existing clouds. I believe that this process can be automated, but at the
same time I understand the complexity of such operations.

4. "How can a transition be made".
I have several ideas on how to organize this. But still I believe that the
decision should be taken by everyone together after a series of discussions. In
the basic version, I see it like this:
1. We create an adapter in the glare client that hides the minimal
differences between the Glance v2 and Glare v1 APIs. For example, the image
will be activated immediately after upload (a rough sketch of this idea
follows the list below).
2. In Nova, another glare.py module will be created, which in fact
is just a copy of glance.py with cosmetic changes.
3. Existing data migrate without loss by a simple script.
4. ?
5. PROFIT!
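As a rough illustration of point 1 (a sketch only; the client object and its
method names below are hypothetical placeholders, not the actual
python-glareclient API), such an adapter could look like this:

class GlanceLikeImages(object):
    """Expose Glance-v2-style image semantics on top of an artifact client."""

    def __init__(self, artifact_client):
        self.client = artifact_client

    def create(self, name, data, **properties):
        # Glance v2 images become "active" once their data is uploaded, so
        # upload and activate in one step to hide the artifact draft/active
        # lifecycle from the caller.
        artifact = self.client.create('images', name=name, **properties)
        self.client.upload_blob('images', artifact['id'], 'image', data)
        self.client.activate('images', artifact['id'])
        return self.client.get('images', artifact['id'])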

5. "There's enough overlap between glare and glance + barbican + swift"
I do not think there is any overlap with Barbican and Swift. Swift
is used as one of the possible backends (as in Glance), Glare only stores
links to the data in it.
As in Barbican, there is a potential opportunity to keep secrets in
Glare. This logic can be added with just one plugin. But in order to avoid
potential collisions, it was decided not to include this plugin in the
official repository, since it has not yet been properly tested.

6. "Is there any documentation to familiarize with the project closer"
Yes, there is documentation, but it obviously is not enough. Here are
the main links:
Glare repo: https://github.com/openstack/glare
Glare client repo: https://github.com/openstack/python-glareclient

How to deploy Glare in Docker:
https://github.com/Fedosin/docker-glare

How to deploy Glare in Devstack:
https://github.com/openstack/glare/tree/master/devstack

Glare API description:
https://github.com/openstack/glare/blob/master/doc/source/developer/webapi/v1.rst

Glare architecture description:
https://github.com/openstack/glare/blob/master/doc/source/architecture.rst

Set of glare demos (slightly outdated):
  Glare artifact lifecycle: https://asciinema.org/a/97985
  Listing of artifacts in Glare: https://asciinema.org/a/97986
  Creating a new artifact type: https://asciinema.org/a/97987
  Locations, Tags, Links and Folders: https://asciinema.org/a/99771

Now I'm 

[openstack-dev] [infra][devstack] DIB builds after mysql.qcow2 removal

2017-07-17 Thread Ian Wienand
Hi,

The removal of the mysql.qcow2 image [1] had a flow-on effect noticed
first by Paul in [2] that the tools/image_list.sh "sanity" check was
not updated, leading to DIB builds failing in a most unhelpful way as
it tries to cache the images for CI builds.

So while [2] fixes the problem; one complication here is that the
caching script [3] loops through the open devstack branches and tries
to collect the images to cache.

Now it seems we hadn't closed the liberty or mitaka branches.  This
causes a problem, because the old branches refer to the old image, but
we can't actually commit a fix to change them because the branch is
broken (such as [4]).

I have taken the liberty of EOL-ing stable/liberty and stable/mitaka
for devstack.  I get the feeling it was just forgotten at the time.
Comments in [4] support this theory.  I have also taken the liberty of
approving backports of the fix to newton and ocata branches [5],[6].

A few 3rd-party CI people using dib have noticed this failure.  As the
trio of [4],[5],[6] move through, your builds should start working
again.

Thanks,

-i

[1] https://review.openstack.org/482600
[2] https://review.openstack.org/484001
[3] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/cache-devstack/extra-data.d/55-cache-devstack-repos
[4] https://review.openstack.org/482604
[5] https://review.openstack.org/484299
[6] https://review.openstack.org/484298

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the Vitrage Graph

2017-07-17 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi Volodymyr,

Thinking about this again, I understand that Vitrage is not supposed to use the 
Collectd message as part of the alarm unique key. We currently have a bug with 
clearing Collectd alarms, as a result of the Vitrage ID refactoring that 
happened in Pike. Until we fix it, you can use this patch[1] for your demo.

[1] https://review.openstack.org/#/c/484300

Let me know if it helped,
Ifat.


From: "Afek, Ifat (Nokia - IL/Kfar Sava)" 
Date: Sunday, 16 July 2017 at 12:28
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: "Tahhan, Maryam" 
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Volodymyr,

According to the vitrage-collector.log, when the alarm is cleared it has a 
different message:

Raise alarm:
{'vitrage_datasource_action': 'update', 'resource_name': u'qvo818dd156-be', 
u'severity': u'WARNING', u'plugin': u'ovs_events', 'vitrage_entity_type': 
'collectd', u'id': u'd211725834f26fa268016d8b23adf7d7', 'vitrage_sample_date': 
'2017-07-14 07:31:21.405670+00:00', u'host': u'silpixa00399503', u'time': 
1500017481.363748, u'collectd_type': u'gauge', u'plugin_instance': 
u'qvo818dd156-be', u'type_instance': u'link_status', 'vitrage_event_type': 
u'collectd.alarm.warning', u'message': u'link state of "qvo818dd156-be" 
interface has been changed to "DOWN"', 'resource_type': u'neutron.port'}

Clear alarm:
{'vitrage_datasource_action': 'update', 'resource_name': u'qvo818dd156-be', 
u'severity': u'OK', u'plugin': u'ovs_events', 'vitrage_entity_type': 
'collectd', u'id': u'd211725834f26fa268016d8b23adf7d7', 'vitrage_sample_date': 
'2017-07-14 07:31:35.851112+00:00', u'host': u'silpixa00399503', u'time': 
1500017495.841522, u'collectd_type': u'gauge', u'plugin_instance': 
u'qvo818dd156-be', u'type_instance': u'link_status', 'vitrage_event_type': 
u'collectd.alarm.ok', u'message': u'link state of "qvo818dd156-be" interface 
has been changed to "UP"', 'resource_type': u'neutron.port'}

The ‘message’ is converted to the name of the alarm, which is considered part 
of its unique key. If the message is changed from “DOWN” to “UP”, we don’t 
recognize that it’s the same alarm.
Any idea how this can be solved? Can you modify the message so it will be the 
same in both cases? Or is there another field that can uniquely identify the 
alarm?
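To make the idea concrete, here is a minimal sketch (illustrative only, not
Vitrage code) of building the key only from fields that are identical in the
raise and clear payloads above:

def alarm_key(event):
    # 'message' and 'severity' are deliberately left out because they differ
    # between the raise ("DOWN"/WARNING) and clear ("UP"/OK) notifications.
    return (event['vitrage_entity_type'],  # 'collectd'
            event['resource_type'],        # 'neutron.port'
            event['resource_name'],        # 'qvo818dd156-be'
            event['plugin'],               # 'ovs_events'
            event['type_instance'])        # 'link_status'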

Thanks,
Ifat.


From: "Mytnyk, VolodymyrX" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 14 July 2017 at 10:56
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: "Tahhan, Maryam" 
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Ifat,

Thank you for fixing the issue. The patch works and I’m able to 
map the alarm to the port now. Also, as a workaround, I was able to fix/resolve the 
issue by creating the static datasource (attached static_port.yaml) and 
disabling the neutron port datasource in the vitrage.conf.

Another issue that I still observe is the deletion of the alarm from the graph 
when an OK collectd notification is sent (the port becomes up). Currently, it is 
not removed from the entity graph. Is this an issue in Vitrage too? Attaching 
all logs (collected using the fix you provided).

The 3rd issue is the Vitrage-Mistral integration, but I will describe it in a 
separate mail thread.

Thanks and Regards,
Volodymyr

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Thursday, July 13, 2017 5:47 PM
To: OpenStack Development Mailing List (not for usage questions) 

Cc: Tahhan, Maryam 
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Volodymyr,

I believe that this change[1] will fix your problem.

[1] https://review.openstack.org/#/c/482212/

Best Regards,
Ifat.

From: "Mytnyk, VolodymyrX" 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday, 11 July 2017 at 12:48
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Cc: "Tahhan, Maryam" >
Subject: Re: [openstack-dev] [Vitrage] Collectd notification isn't shown on the 
Vitrage Graph

Hi Ifat,

Thank you for investigating the issue.

The port name is unique on the graph.  The ovs port name in collectd ovs_events 
plugin is identified by the ‘plugin_instance’ notification field.

Thanks and Regards,
Volodymyr

From: Afek, Ifat (Nokia - IL/Kfar Sava) 

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Jiří Stránský

On 14.7.2017 23:00, Ben Nemec wrote:



On 07/14/2017 11:43 AM, Joshua Harlow wrote:

Out of curiosity, since I keep on hearing/reading all the tripleo
discussions on how tripleo folks are apparently thinking/doing?
redesigning the whole thing to use ansible + mistral + heat, or ansible
+ kubernetes or ansible + mistral + heat + ansible (a second time!) or ...

Seeing all those kinds of questions and suggestions around what should
be used and why and how (and even this thread) makes me really wonder
who actually uses tripleo and can afford/understand such kinds of changes?

Does anyone?

If there are, is there going to be an upgrade
path for their existing cloud/s to whatever this solution is?

What operator(s) has the ability to do such a massive shift at this
point in time? Who are these 'mystical' operators?

All this has really piqued my curiosity because I am personally trying
to do that shift (not exactly the same solution...) and I know it is a
massive undertaking (that will take quite a while to get right) even for
a simple operator with limited needs out of openstack (ie godaddy); so I
don't really understand how the generic solution for all existing
tripleo operators can even work...


This is a valid point.  Up until now the answer has been that we
abstracted most of the ugliness of major changes behind either Heat or
tripleoclient.  If we end up essentially dropping those two in favor of
some other method of driving deployments it's going to be a lot harder
to migrate.  And I could be wrong, but I'm pretty sure it _is_ important
to our users to have an in-place upgrade path (see the first bullet
point in [1]).

New, shiny technology is great and all, but we do need to remember that
we have a lot of users out there already depending on the old,
not-so-shiny bits too.  They're not going to be happy if we leave them
hanging.


Exactly. Reuse is nice to have, while some sort of an upgrade path is a 
must have. We should be aware of this when selecting tools for Kubernetes.


Jirka



1: http://lists.openstack.org/pipermail/openstack-dev/2017-June/119063.html



Flavio Percoco wrote:


Greetings,

As some of you know, I've been working on the second phase of TripleO's
containerization effort. This phase is about migrating the docker-based
deployment onto Kubernetes.

This phase requires work on several areas: Kubernetes deployment,
OpenStack
deployment on Kubernetes, configuration management, etc. While I've been
diving
into all of these areas, this email is about the second point, OpenStack
deployment on Kubernetes.

There are several tools we could use for this task. kolla-kubernetes,
openstack-helm, ansible roles, among others. I've looked into these
tools and
I've come to the conclusion that TripleO would be better off by having
ansible
roles that would allow for deploying OpenStack services on Kubernetes.

The existing solutions in the OpenStack community require using Helm.
While I
like Helm and both, kolla-kubernetes and openstack-helm OpenStack
projects, I
believe using any of them would add an extra layer of complexity to
TripleO,
which is something the team has been fighting for years -
especially now
that the snowball is being chopped off.

Adopting any of the existing projects in the OpenStack community would
require
TripleO to also write the logic to manage those projects. For example,
in the
case of openstack-helm, the TripleO team would have to write either
ansible
roles or heat templates to manage - install, remove, upgrade - the
charts (I'm
happy to discuss this point further but I'm keeping it at a
high-level on
purpose for the sake of not writing a 10k-words-long email).

James Slagle sent an email[0], a couple of days ago, to form TripleO
plans
around ansible. One take-away from this thread is that TripleO is
adopting
ansible more and more, which is great and it fits perfectly with the
conclusion
I reached.

Now, what this work means is that we would have to write an ansible role
for
each service that will deploy the service on a Kubernetes cluster.
Ideally these
roles will also generate the configuration files (removing the need of
puppet
entirely) and they would manage the lifecycle. The roles would be
isolated and
this will reduce the need of TripleO Heat templates. Doing this would
give
TripleO full control on the deployment process too.

In addition, we could also write Ansible Playbook Bundles to contain
these roles
and run them using the existing docker-cmd implementation that is coming
out in
Pike (you can find a PoC/example of this in this repo[1]).

Now, I do realize the amount of work this implies and that this is my
opinion/conclusion. I'm sending this email out to kick-off the
discussion and
gather thoughts and opinions from the rest of the community.

Finally, what I really like about writing pure ansible roles is that
ansible is
a known, powerful tool that has been adopted by many operators
already. It'll
provide the flexibility needed and, if 

Re: [openstack-dev] [horizon] [horizon-plugin] Raising Django version cap

2017-07-17 Thread Adrian Turjak
Was hoping to play with this much sooner, but here is a quick hack for
horizon working with django 1.11

https://review.openstack.org/#/c/484277/

The main issues are with widgets from Django which are no longer there,
and all I've done is move them to our repo from django 1.10's code. This
is probably not a good long term solution.
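For illustration only, the fallback amounts to something like the following
(the widget name below is one example of the affected classes, and the
vendored import path is a hypothetical location, not what the patch uses):

try:
    # Available up to Django 1.10; removed by the 1.11 widget rendering
    # refactor.
    from django.forms.widgets import RadioChoiceInput
except ImportError:
    # Hypothetical in-tree copy of the Django 1.10 code.
    from horizon.forms.compat import RadioChoiceInput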

Then django-babel doesn't yet have a version that has django 1.11
support, although the change has been merged to master. Just needs a new
release. For now an install from source works for testing.

And... because it was easier, I did this off the patch that brings
openstack_auth into horizon. Because of some import order changes in
django (I assume), there was an issue due to an import that caused a
call to get_user_model before openstack_auth was fully loaded.

Beyond that, it all 'appears' to work. I launched some instances,
created some networks, changed my password, managed some projects and
users. There is tons to actually test, but mostly it just seems to work.


On 05/07/17 22:24, Rob Cresswell (rcresswe) wrote:
> If you want to install Django 1.11 and test it, that would be very
> helpful, even if its just to open bugs. I’m in the process of adding a
> non-voting job for 1.11 right now, so we should be able to move quickly.
>
> Rob
>
>> On 5 Jul 2017, at 01:36, Adrian Turjak wrote:
>>
>> Great work!
>>
>> Is there anything that can be done to help meet that July deadline
>> and get 1.11.x in? I'm more than happy to help with reviews or even
>> fixes for newer Django in Horizon, and we should try and get others
>> involved if it will help as I think this is a little urgent.
>>
>> Running anything less than Django 1.11 leaves us in a weird spot
>> because of the point where support ends for any versions below it.
>>
>> Looking at the release timelines, if we don't get 1.11 in for Pike,
>> we'll have released a version of Horizon that will be for an
>> unsupported version of Django in about 6 months time (8 if deployers
>> stick with django 1.8):
>> https://releases.openstack.org/pike/schedule.html
>> https://www.djangoproject.com/download/
>>
>> It isn't as bad as it could be, but it's an awkward situation to be in.
>> 1.9 is no longer supported and 1.10 support stops in 2018, so
>> realistically 1.8 is the only version that keeps Pike 'safe' until
>> Queens, which is not particularly great either. Getting 1.11 support in
>> would be ideal.
>>
>>
>> On 05/07/17 03:01, Rob Cresswell wrote:
>>> Hi everyone,
>>>
>>> I've put up a patch to global-requirements to raise the Django cap
>>> to "<1.11", with the upper-constraint at "1.10.7"
>>> (https://review.openstack.org/#/c/480215). Depending on the
>>> remaining time, I'd quite like to move us to "1.11.x" before
>>> Requirements Freeze, which will fall around the 27th of July.
>>>
>>> Rob
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Bogdan Dobrelya
On 14.07.2017 22:55, Fox, Kevin M wrote:
> Part of the confusion I think is in the different ways helm can be used.
> 
> Helm can be used to orchestrate the deployment of a whole service (ex, nova). 
> "launch these 3 k8s objects, template out this config file, run this job to 
> init the db, or this job to upgrade the db, etc", all as a single unit.
> 
> It can also be used purely for its templating ability.
> 
> So, "render this single k8s object using these values".
> 
> This is one of the main differences between openstack-helm and 
> kolla-kubernetes.
> 
> Openstack-helm has charts only for orchestrating the deployment of whole 
> openstack services.
> 
> Kolla-kubernetes has taken a different track though. While it does use helm 
> for its golang templater, it has taken a microservices approach to be 
> shareable with other tools. So, each openstack process (nova-api, 
> neutron-server, neutron-openvswitch-agent), etc, has its own chart and can be 
> independently configured/placed as needed by an external orchestration 
> system. Kolla-Kubernetes microservice charts are to Kubernetes what 
> Kolla-Containers are to Docker. Reusable building blocks of known tested 
> functionality and assemblable any way the orchestration system/user feels is 
> in their best interest.

A great summary!
As the TripleO Pike docker-based containers architecture relies a lot on
Kolla-Containers bits (the run-time kolla config/bootstrap and the
build-time image overrides), it seems reasonable to continue
following that path by relying on the Kolla-Kubernetes microservice Helm
charts for a Kubernetes based architecture. Doesn't it?

The remaining question, though, is: if Kolla-kubernetes doesn't consume
the Openstack-helm's opinionated "orchestration of the deployment of
whole openstack services", which tools should then be used to fill the advanced
data parameterization gaps like "happens before/after" relationships and
data dependencies/ordering?

> 
> This is why I think kolla-kubernetes would be a good fit for TripleO, as you 
> can replace a single component at a time, however you want, using the config 
> files you already have and upgrade the system a piece at a time from non 
> container to containered. It doesn't have to happen all at once, even within 
> a single service, or within a single TripleO release. The orchestration of it 
> is totally up to you, and can be tailored very precisely to deal with the 
> particulars of the upgrade strategy needed by TripleO's existing deployments.
> 
> Does that help to alleviate some of the confusion?
> 
> Thanks,
> Kevin


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]notification update week 29

2017-07-17 Thread Balazs Gibizer

Hi,

Here is the status update / focus setting mail about notification work
for week 29.

Bugs

[Undecided] https://bugs.launchpad.net/nova/+bug/1684860 Versioned
server notifications don't include updated_at
The fix https://review.openstack.org/#/c/475276/ is in focus but 
comments need to be addressed.


[Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications
use nova-api as binary name instead of nova-osapi_compute
Agreed not to change the binary name in the notifications. Instead we
make an enum for that name to show that the name is intentional.
Patch needs review:  https://review.openstack.org/#/c/476538/
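A minimal sketch of that idea (the names here are illustrative, not the
actual nova patch):

import enum

class NotificationSource(enum.Enum):
    # Keeping 'nova-api' documents that the emitted binary name is
    # intentional rather than an accidental mismatch with the
    # nova-osapi_compute service name (bug 1696152).
    COMPUTE = 'nova-compute'
    API = 'nova-api'
    CONDUCTOR = 'nova-conductor'
    SCHEDULER = 'nova-scheduler'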

[Undecided] https://bugs.launchpad.net/nova/+bug/1702667 publisher_id 
of the versioned instance.update notification is not consistent with 
other notifications
The inconsistency of publisher_ids was revealed by #1696152. Patch 
needs review: https://review.openstack.org/#/c/480984


[Undecided] https://bugs.launchpad.net/nova/+bug/1699115 api.fault
notification is never emitted
Still no response on the ML thread about the way forward.
http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html

[Undecided] https://bugs.launchpad.net/nova/+bug/1700496 Notifications
are emitted per-cell instead of globally
Fix is to configure a global MQ endpoint for the notifications in cells 
v2. Patch looks good from notification perspective but affects other 
part of the system as well: https://review.openstack.org/#/c/477556/



Versioned notification transformation
-------------------------------------
Last week's merge conflicts are mostly cleaned up and there are 11 
patches waiting for core review:

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Verified%253C0+AND+NOT+label:Code-Review%253C0

If you are afraid of the long list then here is a short list of live 
migration related transformations to look at:

* https://review.openstack.org/#/c/480214/
* https://review.openstack.org/#/c/420453/
* https://review.openstack.org/#/c/480119/
* https://review.openstack.org/#/c/469784/


Searchlight integration
-----------------------
bp additional-notification-fields-for-searchlight
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The BDM addition has been merged.

As the last piece of the bp we are still missing the "Add tags to 
instance.create notification" patch https://review.openstack.org/#/c/459493/ 
but that depends on supporting tags at instance boot 
https://review.openstack.org/#/c/394321/ which is getting closer to being 
merged. Focus is on these patches.


There is a set of follow-up patches for the BDM addition to optimize 
the payload generation but these are not mandatory for the 
functionality https://review.openstack.org/#/c/483324/



Instability of the notification sample tests

Multiple instabilities of the sample tests were detected last week. The 
nova functional tests fail intermittently for at least two distinct 
reasons:
* https://bugs.launchpad.net/nova/+bug/1704423 _test_unshelve_server 
intermittently fails in functional versioned notification tests
Possible solution found, fix proposed and it only needs a second +2:  
https://review.openstack.org/#/c/483986/
* https://bugs.launchpad.net/nova/+bug/1704392 
TestInstanceNotificationSample.test_volume_swap_server fails with 
"testtools.matchers._impl.MismatchError: 7 != 6"
A patch that improves logging of the failure has been merged 
https://review.openstack.org/#/c/483939/ and a detailed log is now available 
to look at 
http://logs.openstack.org/82/482382/4/check/gate-nova-tox-functional-ubuntu-xenial/38a4cb4/console.html#_2017-07-16_01_14_36_313757



Small improvements
~~~~~~~~~~~~~~~~~~
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual
error reporting
* https://review.openstack.org/#/q/topic:refactor-notification-samples
Factor out duplicated notification sample data
This is the start of a longer patch series to deduplicate notification
sample data. The third patch already shows how much sample data can be
deleted from the nova tree. We added a minimal hand-rolled json ref
implementation to the notification sample tests as the existing python json
ref implementations are not well maintained.
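For illustration, a resolver of this kind (handling only local "#/..."
references within one document; not the code that was added to nova) can fit
in a few lines:

import copy

def resolve_refs(node, root):
    # Recursively replace {"$ref": "#/path/to/node"} with a deep copy of the
    # referenced part of the same document.
    if isinstance(node, dict):
        ref = node.get('$ref')
        if isinstance(ref, str) and ref.startswith('#/'):
            target = root
            for part in ref[2:].split('/'):
                target = target[part]
            return resolve_refs(copy.deepcopy(target), root)
        return {key: resolve_refs(value, root) for key, value in node.items()}
    if isinstance(node, list):
        return [resolve_refs(item, root) for item in node]
    return node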


Weekly meeting
--------------
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC
on openstack-meeting-4. The next meeting will be held on 18th of July.
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170718T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rpm-packaging][karbor]

2017-07-17 Thread Chen Ying
Hi Chandan,

Thank you for your work on packaging abclient.

chenying

2017-07-14 16:29 GMT+08:00 Jiong Liu :

> Message: 2
> Date: Fri, 14 Jul 2017 12:10:00 +0530
> From: Chandan kumar 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [rpm-packaging][karbor]
> Message-ID:
>  gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hello Jiong,
>
> Thank you for packaging karbor.
>
> On Fri, Jul 14, 2017 at 11:49 AM, Jiong Liu 
> wrote:
> > Hello rpm-packaging team and folks,
> >
> >
> >
> > I ran into trouble packaging an OpenStack project (karbor), which depends
> > on two
> > packages: icalendar and abclient.
> >
> > icalendar has a pip package and an RPM package, but the RPM package cannot be
> > found by RDO CI.
>
> python-icalendar is available in Fedora:
> https://koji.fedoraproject.org/koji/packageinfo?packageID=10783
> We can pull it soon in RDO.
>
> >
> > While abclient only has a pip package but no RPM package.
> >
>
> abclient is not available in Fedora or RDO. I am packaging it. It will
> soon be available in RDO.
>
> >
> >
> > So in this case, what should I do to make sure these two packages can
> > be installed via RPM when packaging karbor?
> >
> >
> >
> > My patch is uploaded to rpm-package review list, as you can find here
> > https://review.openstack.org/#/c/480806/
> >
>
> Thanks,
>
> Chandan Kumar
>
>
>
>
> Hi Chandan,
>
> Thank you so much for doing this.
>
> So theoretically, how long will it take to have these two packages
> installed via RPM successfully by the rpm-packaging CI? Would it be possible to
> have them in before the OpenStack Pike release?
>
> Thanks!
> Jeremy
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Deprecated Parameters Warning

2017-07-17 Thread Saravanan KR
Thanks Emilien.

Now, the warning message for using a deprecated parameter is available
in the CLI. A sample message from CI [1] looks like below:

2017-07-14 19:45:09 | WARNING: Following parameters are deprecated and
still defined. Deprecated parameters will be removed soon!
2017-07-14 19:45:09 |   NeutronL3HA
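For anyone curious how such a check can work, here is a rough sketch (plan
templates assumed to be parsed into plain dicts; this is not the actual
tripleo-common workflow code):

def warn_deprecated(plan_templates, user_parameters):
    # Collect every parameter that any template lists under a
    # parameter_groups entry labelled "deprecated".
    deprecated = set()
    for template in plan_templates:
        for group in template.get('parameter_groups', []):
            if group.get('label') == 'deprecated':
                deprecated.update(group.get('parameters', []))

    still_set = sorted(deprecated & set(user_parameters))
    if still_set:
        print('WARNING: Following parameters are deprecated and still '
              'defined. Deprecated parameters will be removed soon!')
        for name in still_set:
            print('  %s' % name)
    return still_set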

The next step is to add a warning (or rather error) message if a
deployment contains a parameter which is not part of the plan
(including custom templates). I will work on it.

Regards,
Saravanan KR

[1] 
http://logs.openstack.org/77/479277/6/check-tripleo/gate-tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024/fb07fd6/logs/undercloud/home/jenkins/overcloud_deploy.log.txt.gz#_2017-07-14_19_45_09

On Tue, Jun 6, 2017 at 9:47 PM, Emilien Macchi  wrote:
> On Tue, Jun 6, 2017 at 6:53 AM, Saravanan KR  wrote:
>> Hello,
>>
>> I am working on a patch [1] to list the deprecated parameters of the
>> current plan. It depends on a heat patch[2] which provides
>> parameter_group support for nested stacks. The change is to add a new
>> workflow to analyze the plan templates and find out the list of
>> deprecated parameters, identified by parameter_groups with label
>> "deprecated".
>>
>> This workflow can be used by CLI and UI to provide a warning to the
>> user about the deprecated parameters. This is only the listing,
>> changes are required in tripleoclient to invoke it and provide a
>> warning. I am sending this mail to update the group, to bring
>> awareness on the parameter deprecation.
>
> I find this feature very helpful, specially with all the THT
> parameters that we have and that are moving quite fast over the
> cycles.
> Thanks for working on it!
>
>> Regards,
>> Saravanan KR
>>
>> [1] https://review.openstack.org/#/c/463949/
>> [2] https://review.openstack.org/#/c/463941/
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Deriving Parameters from Introspection Data

2017-07-17 Thread Saravanan KR
On Sun, Jul 16, 2017 at 6:10 AM, Don maillist  wrote:
> Looks interesting. Wish I had this or something like it now for Newton and
> OVS 2.6.1 which just dropped. Wondering why you don't include the grub
> command line?
The KernelArgs parameter, which will have the iommu and huge page args, is
derived as part of this workflow and will be applied to grub. Are
you looking for any specific parameter?
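As a purely illustrative sketch (not the tripleo-common derive-params code;
the introspection data layout below is an assumption), the derivation boils
down to something like:

def derive_kernel_args(inspect_data, hugepages_1g=64):
    # Assumed ironic-inspector style data with CPU flags under
    # inventory/cpu/flags.
    flags = set(inspect_data['inventory']['cpu']['flags'])
    args = []
    if 'vmx' in flags:
        args.append('intel_iommu=on iommu=pt')
    elif 'svm' in flags:
        args.append('amd_iommu=on iommu=pt')
    if 'pdpe1gb' in flags:
        # 1G hugepages are only usable when the CPU advertises pdpe1gb.
        args.append('default_hugepagesz=1GB hugepagesz=1G hugepages=%d'
                    % hugepages_1g)
    return ' '.join(args)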

>
> Do you have a stand alone utility?
Not as of now. But we are looking into the possibility of developing
a utility tool so that it can be used with Newton. I will post it when we have
it.

Regards,
Saravanan KR

>
> Best Regards,
> Don
>
> On Thu, Jul 6, 2017 at 4:10 AM, Saravanan KR  wrote:
>>
>> Hello,
>>
>> DPDK was integrated with the TripleO deployment during the Newton cycle.
>> Since then, we have been getting queries on how to decide the right
>> parameters for the deployment: which CPUs to choose, how much memory
>> to allocate and so on.
>>
>> In Pike, a new feature, "derive parameters", has been brought in to
>> help operators automatically derive the parameters from the
>> introspection data. I have created a 2 min demo [1] to illustrate the
>> feature integrated with the CLI. This demo is created by integrating the
>> in-progress patches. Let me know if you have any comments.
>>
>> The feature is almost at the last leg with the help from many folks.
>> Following are the list of patches pending:
>> https://review.openstack.org/#/c/480525/ (tripleoclient)
>> https://review.openstack.org/#/c/468989/ (tripleo-common)
>> https://review.openstack.org/#/c/471462/ (tripleo-common)
>>
>> Regards,
>> Saravanan KR
>>
>> [1] https://asciinema.org/a/127903
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev