Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-06-01 Thread Zhipeng Huang
For me, nitpicking during review is really not a good experience; however, I
do think we should tolerate at least one round of nitpicking.

On another note, the nitpicking review culture also in some way encourages,
and lends legitimacy to, padding activities. People feel OK about "fixing the
dictionary", as we joked.



On Fri, Jun 1, 2018 at 4:55 AM, Jeremy Stanley  wrote:

> On 2018-05-31 16:49:13 -0400 (-0400), John Dennis wrote:
> > On 05/30/2018 08:23 PM, Jeremy Stanley wrote:
> > > I think this is orthogonal to the thread. The idea is that we should
> > > avoid nettling contributors over minor imperfections in their
> > > submissions (grammatical, spelling or typographical errors in code
> > > comments and documentation, mild inefficiencies in implementations,
> > > et cetera). Clearly we shouldn't merge broken features, changes
> > > which fail tests/linters, and so on. For me the rule of thumb is,
> > > "will the software be better or worse if this is merged?" It's not
> > > about perfection or imperfection, it's about incremental
> > > improvement. If a proposed change is an improvement, that's enough.
> > > If it's not perfect... well, that's just opportunity for more
> > > improvement later.
> >
> > I appreciate the sentiment concerning accepting any improvement yet on
> > the other hand waiting for improvements to the patch to occur later is
> > folly, it won't happen.
> >
> > Those of us familiar with working with large bodies of code from multiple
> > authors spanning an extended time period will tell you it's very confusing
> > when it's obvious most of the code follows certain conventions but there
> > are odd exceptions (often without comments). This inevitably leads to
> > investing a lot of time trying to understand why the exception exists
> > because "clearly it's there for a reason and I'm just missing the
> > rationale". At that point the reason for the inconsistency is lost.
> >
> > At the end of the day it is more important to keep the code base clean
> > and consistent for those that follow than it is to coddle in the near term.
>
> Sure, I suppose it comes down to your definition of "improvement." I
> don't consider a change proposing incomplete or unmaintainable code
> to be an improvement. On the other hand I think it's fine to approve
> changes which are "good enough" even if there's room for
> improvement, so long as they're "good enough" that you're fine with
> them possibly never being improved on due to shifts in priorities.
> I'm certainly not suggesting that it's a good idea to merge
> technical debt with the expectation that someone will find time to
> solve it later (any more than it's okay to merge obvious bugs in
> hopes someone will come along and fix them for you).
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-01 Thread Zhipeng Huang
I agree with Zane's proposal here; it is a good rule to have 2 core
reviewers from different companies provide +2 for a patch. However it should
not be applied too strictly, given that projects in their early stages usually
have to rely on devs from one or two companies.

But it should be recommended that projects applying for the diversity tag
at least state that they have adopted this rule.

On Sat, Jun 2, 2018 at 3:19 AM, Zane Bitter  wrote:

> On 01/06/18 12:18, Doug Hellmann wrote:
>
>> Excerpts from Zane Bitter's message of 2018-06-01 10:10:31 -0400:
>>
>>> Crazy idea: what if we dropped the idea of measuring the diversity and
>>> allowed teams to decide when they applied the tag to themselves like we
>>> do for other tags. (No wait! Come back!)
>>>
>>> Some teams enforce a requirement that the 2 core +2s come from reviewers
>>> with different affiliations. We would say that any project that enforces
>>> that rule would get the diversity tag. Then it's actually attached to
>>> something concrete, and teams could decide for themselves when to drop
>>> it (because they would start having difficulty merging stuff otherwise).
>>>
>>> I'm not entirely sold on this, but it's an idea I had that I wanted to
>>> throw out there :)
>>>
>>> cheers,
>>> Zane.
>>>
>>>
>> The point of having the tags is to help consumers of the projects
>> understand their health in some capacity. In this case we were
>> trying to use measures of actual activity within the project to
>> help spot projects that are really only maintained by one company,
>> with the assumption that such projects are less healthy than others
>> being maintained by contributors with more diverse backing.
>>
>
> (Clarification for readers: there are actually 3 levels; getting the
> diverse-affiliations tag has a higher bar than dropping the single-vendor
> tag.)
>
> >> Does basing the tag definition on whether approvals need to come
>> from people with diverse affiliation provide enough project health
>> information that it would let us use it to replace the current tag?
>>
>
> Yes. Project teams will soon drop this rule if it's the only way to get
> patches in. A single-vendor project by definition cannot adopt this rule
> and continue to... exist as a project, really.
>
> It would tell potential users that if one organisation drops out there
> is at least somebody left to review patches, and also guarantee that the
> project's direction is not down to the whim of one organisation.
>
> >> How many teams enforce the rule you describe?
>>
>
> I don't know.
>
> I do know that in Heat we never enforced it - at first because it was a
> single-vendor project, and then later because it was so diverse (and not
> subject to any particular cross-company animosity) that nobody particularly
> saw the need to change, and now, because many of those vendors have pulled
> out of OpenStack, it would be an obstacle to getting patches approved
> again.
>
> I was kind of under the impression that all of the projects used this rule
> prior to Heat and Ceilometer being incubated. That may be incorrect. At
> least Nova and the projects that have a lot of vendor drivers (and are thus
> susceptible to suspicions of bias) - i.e. Cinder & Neutron mainly - may
> still follow this rule? I haven't yet found a mention of it in any of the
> contributor guides though, so possibly it was dropped OpenStack-wide and I
> never noticed.
>
> >> Is that rule a sign of a healthy team dynamic, that we would want
>> to spread to the whole community?
>>
>
> Yeah, this part I am pretty unsure about too. For some projects it
> probably is. For others it may just be an unnecessary obstacle, although I
> don't think it'd actually be *un*healthy for any project, assuming a big
> enough and diverse enough team (which should be a goal for the whole
> community).
>
> For most projects with small core teams it would obviously be a
> showstopper, but the idea would be for them to continue to opt out.
>
> cheers,
> Zane.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [placement] cinder + placement forum session etherpad

2018-06-01 Thread Chris Dent

On Wed, 9 May 2018, Chris Dent wrote:


I've started an etherpad for the forum session in Vancouver devoted
to discussing the possibility of tracking and allocating resources
in Cinder using the Placement service. This is not a done deal.
Instead the session is to discuss if it could work and how to make
it happen if it seems like a good idea.

The etherpad is at

   https://etherpad.openstack.org/p/YVR-cinder-placement


The session went well. Some of the members of the cinder team who
might have had more questions had not been able to be at the summit, so
we were unable to get their input.

We clarified some of the things that cinder wants to be able to
accomplish (run multiple schedulers in active-active and avoid race
conditions) and the fact that this is what placement is built for.
We also made it clear that placement itself can be highly available
(and scalable) because of its nature as a dead-simple web app over a
database.
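
To give a flavour of what that means in practice, the core interaction is
just a handful of REST calls. A rough sketch with plain HTTP (the endpoint,
token, names and numbers are placeholders, and payload details vary by
microversion):

import uuid
import requests

PLACEMENT = 'http://placement.example.com/placement'   # placeholder endpoint
HEADERS = {
    'X-Auth-Token': 'REDACTED',                         # placeholder token
    'OpenStack-API-Version': 'placement 1.17',          # assumed microversion
}

# 1. Register a resource provider, e.g. one per storage backend/pool.
rp_uuid = str(uuid.uuid4())
requests.post(PLACEMENT + '/resource_providers', headers=HEADERS,
              json={'name': 'ceph-pool-1', 'uuid': rp_uuid})

# 2. Describe its inventory (total capacity, plus optional reserved,
#    allocation_ratio, etc.).
requests.put(PLACEMENT + '/resource_providers/%s/inventories' % rp_uuid,
             headers=HEADERS,
             json={'resource_provider_generation': 0,
                   'inventories': {'DISK_GB': {'total': 102400}}})

# 3. Atomically claim resources for a consumer (here: a volume). This is
#    the call that lets multiple active-active schedulers avoid racing.
volume_uuid = str(uuid.uuid4())
requests.put(PLACEMENT + '/allocations/%s' % volume_uuid, headers=HEADERS,
             json={'allocations': {rp_uuid: {'resources': {'DISK_GB': 100}}},
                   'project_id': str(uuid.uuid4()),
                   'user_id': str(uuid.uuid4())})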

The next steps are for the cinder team to talk amongst themselves
and socialize the capabilities of placement (with the help of
placement people) and see if it will be suitable. It is unlikely
there will be much visible progress in this area before Stein.

See the etherpad for a bit more detail.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Keystone Team Update - Week of 28 May 2018

2018-06-01 Thread Colleen Murphy
# Keystone Team Update - Week of 28 May 2018

## News

### Summit Recap

We had a productive summit last week. Lance has posted a recap[1].

[1] https://www.lbragstad.com/blog/openstack-summit-vancouver-recap

### Quota Models

There was a productive discussion at the forum on hierarchical quotas (which I 
missed), which resulted in some new thoughts about safely tracking quota 
which Adam captured[2]. We then discussed some performance implications for 
unlimited-depth project trees[3]. The spec for a strict two-level model still 
needs reviews[4].

[2] http://adam.younglogic.com/2018/05/tracking-quota/#more-5542
[3] 
http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-05-29-16.02.log.html#l-9
[4] https://review.openstack.org/540803

## Open Specs

Search query: https://bit.ly/2G8Ai5q

Last week we merged the Default Roles spec[5] after discussing it at the 
Summit. We still need to review and merge the update to the hierarchical unified 
limits spec[6], which has been updated following discussions at the summit.

[5] https://review.openstack.org/566377
[6] https://review.openstack.org/540803

## Recently Merged Changes

Search query: https://bit.ly/2IACk3F

We merged 5 changes this week. One of those was to partially remove the 
deprecated TokenAuth middleware[7], which has implications for upgrades.

[7] https://review.openstack.org/508412

## Changes that need Attention

Changes with no negative feedback:  https://bit.ly/2wv7QLK
Changes with only human negative feedback: https://bit.ly/2LeW1vC

There are 42 changes that are passing CI, not in merge conflict, have no 
negative reviews and aren't proposed by bots. This data is provided to 
highlight patches that are currently waiting for any feedback.

There are 81 total changes that are ready for review.

## Bugs

This week we opened 6 new bugs and closed 4.

One of the bugs opened and fixed was for our docs builds, which had been broken since 
the latest docs PTI updates[8]. I also opened a bug regarding the usage of 
groups with application credentials[9], which has implications for federated 
users using application credentials.

[8] https://bugs.launchpad.net/keystone/+bug/1774508
[9] https://bugs.launchpad.net/keystone/+bug/1773967

## Milestone Outlook

https://releases.openstack.org/rocky/schedule.html

Next week is specification freeze (I think unified limits is the only remaining 
specification that needs attention). Our next deadline after that is feature 
proposal freeze on June 22nd.

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: 
https://etherpad.openstack.org/p/keystone-team-newsletter
Dashboard generated using gerrit-dash-creator and 
https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-06-01 Thread Dan Smith
> FWIW, I don't have a problem with the virt driver "knowing about
> allocations". What I have a problem with is the virt driver *claiming
> resources for an instance*.

+1000.

> That's what the whole placement claims resources thing was all about,
> and I'm not interested in stepping back to the days of long racy claim
> operations by having the compute nodes be responsible for claiming
> resources.
>
> That said, once the consumer generation microversion lands [1], it
> should be possible to *safely* modify an allocation set for a consumer
> (instance) and move allocation records for an instance from one
> provider to another.

Agreed. I'm hesitant to have the compute nodes arguing with the
scheduler even to patch things up, given the mess we just cleaned
up. The thing that I think makes this okay is that one compute node
cleaning/pivoting allocations for instances isn't going to be fighting
anything else whilst doing it. Migrations and new instance builds, where
it isn't clear whether the source/destination or scheduler/compute owns the
allocation, are a problem.

That said, we need to make sure we can handle the case where an instance
is in resize_confirm state across a boundary where we go from non-NRP to
NRP. It *should* be okay for the compute to handle this by updating the
instance's allocation held by the migration instead of the instance
itself, if the compute determines that it is the source.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-01 Thread Zane Bitter

On 01/06/18 12:18, Doug Hellmann wrote:

Excerpts from Zane Bitter's message of 2018-06-01 10:10:31 -0400:

Crazy idea: what if we dropped the idea of measuring the diversity and
allowed teams to decide when they applied the tag to themselves like we
do for other tags. (No wait! Come back!)

Some teams enforce a requirement that the 2 core +2s come from reviewers
with different affiliations. We would say that any project that enforces
that rule would get the diversity tag. Then it's actually attached to
something concrete, and teams could decide for themselves when to drop
it (because they would start having difficulty merging stuff otherwise).

I'm not entirely sold on this, but it's an idea I had that I wanted to
throw out there :)

cheers,
Zane.



The point of having the tags is to help consumers of the projects
understand their health in some capacity. In this case we were
trying to use measures of actual activity within the project to
help spot projects that are really only maintained by one company,
with the assumption that such projects are less healthy than others
being maintained by contributors with more diverse backing.


(Clarification for readers: there are actually 3 levels; getting the 
diverse-affiliations tag has a higher bar than dropping the 
single-vendor tag.)



Does basing the tag definition on whether approvals need to come
from people with diverse affiliation provide enough project health
information that it would let us use it to replace the current tag?


Yes. Project teams will soon drop this rule if it's the only way to get 
patches in. A single-vendor project by definition cannot adopt this rule 
and continue to... exist as a project, really.


It would tell potential users that if one organisation drops out 
there is at least somebody left to review patches, and also guarantee 
that the project's direction is not down to the whim of one organisation.



How many teams enforce the rule you describe?


I don't know.

I do know that in Heat we never enforced it - at first because it was a 
single-vendor project, and then later because it was so diverse (and not 
subject to any particular cross-company animosity) that nobody 
particularly saw the need to change, and now, because many of those vendors 
have pulled out of OpenStack, it would be an obstacle to getting 
patches approved again.


I was kind of under the impression that all of the projects used this 
rule prior to Heat and Ceilometer being incubated. That may be 
incorrect. At least Nova and the projects that have a lot of vendor 
drivers (and are thus susceptible to suspicions of bias) - i.e. Cinder & 
Neutron mainly - may still follow this rule? I haven't yet found a 
mention of it in any of the contributor guides though, so possibly it 
was dropped OpenStack-wide and I never noticed.



Is that rule a sign of a healthy team dynamic, that we would want
to spread to the whole community?


Yeah, this part I am pretty unsure about too. For some projects it 
probably is. For others it may just be an unnecessary obstacle, although 
I don't think it'd actually be *un*healthy for any project, assuming a 
big enough and diverse enough team (which should be a goal for the whole 
community).


For most projects with small core teams it would obviously be a 
showstopper, but the idea would be for them to continue to opt out.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-06-01 Thread Jay Pipes

On 06/01/2018 03:02 PM, Dan Smith wrote:

Dan, you are leaving out the parts of my response where I am agreeing
with you and saying that your "Option #2" is probably the thing we
should go with.


No, what you said was:


I would vote for Option #2 if it comes down to it.


Implying (to me at least) that you still weren't in favor of either, but
would choose that as the least offensive option :)

I didn't quote it because I didn't have any response. I just wanted to
address the other assertions about what is and isn't a common upgrade
scenario, which I think is the important data we need to consider when
making a decision here.


Understood. I've now accepted the fact that we will need to do something to 
transform the data model without requiring operators to move workloads.



I didn't mean to imply or hide anything with my message trimming, so
sorry if it came across as such.


No worries.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-06-01 Thread Dan Smith
> Dan, you are leaving out the parts of my response where I am agreeing
> with you and saying that your "Option #2" is probably the thing we
> should go with.

No, what you said was:

>> I would vote for Option #2 if it comes down to it.

Implying (to me at least) that you still weren't in favor of either, but
would choose that as the least offensive option :)

I didn't quote it because I didn't have any response. I just wanted to
address the other assertions about what is and isn't a common upgrade
scenario, which I think is the important data we need to consider when
making a decision here.

I didn't mean to imply or hide anything with my message trimming, so
sorry if it came across as such.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-06-01 Thread Jay Pipes

On 05/31/2018 02:26 PM, Eric Fried wrote:

1. Make everything perform the pivot on compute node start (which can be
re-used by a CLI tool for the offline case)
2. Make everything default to non-nested inventory at first, and provide
a way to migrate a compute node and its instances one at a time (in
place) to roll through.


I agree that it sure would be nice to do ^ rather than requiring the
"slide puzzle" thing.

But how would this be accomplished, in light of the current "separation
of responsibilities" drawn at the virt driver interface, whereby the
virt driver isn't supposed to talk to placement directly, or know
anything about allocations?
FWIW, I don't have a problem with the virt driver "knowing about 
allocations". What I have a problem with is the virt driver *claiming 
resources for an instance*.


That's what the whole placement claims resources thing was all about, 
and I'm not interested in stepping back to the days of long racy claim 
operations by having the compute nodes be responsible for claiming 
resources.


That said, once the consumer generation microversion lands [1], it 
should be possible to *safely* modify an allocation set for a consumer 
(instance) and move allocation records for an instance from one provider 
to another.


[1] https://review.openstack.org/#/c/565604/
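
To make that concrete, a rough sketch of what such a "safe move" could look
like once [1] lands (the microversion number, field names and values below
are assumptions based on the proposal, not the final API; the identifiers are
placeholders):

import requests

PLACEMENT = 'http://placement.example.com/placement'    # placeholder endpoint
HEADERS = {
    'X-Auth-Token': 'REDACTED',                          # placeholder token
    'OpenStack-API-Version': 'placement 1.28',           # assumed version number
}

# Placeholder identifiers; in reality these come from the instance and the
# provider tree reported by the compute node.
instance_uuid = '7c9aae2e-0000-0000-0000-000000000000'
root_rp_uuid = '2f2cb4d7-0000-0000-0000-000000000000'
child_rp_uuid = '5a4c0f1b-0000-0000-0000-000000000000'

# Replace the instance's allocations wholesale, moving (say) the VGPU part
# from the root provider to a child provider. consumer_generation must match
# what placement last returned for this consumer; if not, the request is
# rejected and has to be retried, which is what makes the move safe against
# concurrent writers.
requests.put(PLACEMENT + '/allocations/' + instance_uuid, headers=HEADERS,
             json={
                 'allocations': {
                     root_rp_uuid: {'resources': {'VCPU': 2, 'MEMORY_MB': 4096}},
                     child_rp_uuid: {'resources': {'VGPU': 1}},
                 },
                 'consumer_generation': 3,   # as previously read from placement
                 'project_id': '00000000-0000-0000-0000-000000000000',
                 'user_id': '00000000-0000-0000-0000-000000000000',
             })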


Here's a first pass:

The virt driver, via the return value from update_provider_tree, tells
the resource tracker that "inventory of resource class A on provider B
have moved to provider C" for all applicable AxBxC.  E.g.

[ { 'from_resource_provider': ,
 'moved_resources': [VGPU: 4],
 'to_resource_provider': 
   },
   { 'from_resource_provider': ,
 'moved_resources': [VGPU: 4],
 'to_resource_provider': 
   },
   { 'from_resource_provider': ,
 'moved_resources': [
 SRIOV_NET_VF: 2,
 NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND: 1000,
 NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND: 1000,
 ],
 'to_resource_provider': 
   }
]

As today, the resource tracker takes the updated provider tree and
invokes [1] the report client method update_from_provider_tree [2] to
flush the changes to placement.  But now update_from_provider_tree also
accepts the return value from update_provider_tree and, for each "move":

- Creates provider C (as described in the provider_tree) if it doesn't
already exist.
- Creates/updates provider C's inventory as described in the
provider_tree (without yet updating provider B's inventory).  This ought
to create the inventory of resource class A on provider C.


Unfortunately, right here you'll introduce a race condition. As soon as 
this operation completes, the scheduler will have the ability to throw 
new instances on provider C and consume the inventory from it that you 
intend to give to the existing instance that is consuming from provider B.



- Discovers allocations of rc A on rp B and POSTs to move them to rp C*.


For each consumer of resources on rp B, right?


- Updates provider B's inventory.


Again, this is problematic because the scheduler will have already begun 
to place new instances on B's inventory, which could very well result in 
incorrect resource accounting on the node.


We basically need to have one giant new REST API call that accepts the 
list of "move instructions" and performs all of the instructions in a 
single transaction. :(



(*There's a hole here: if we're splitting a glommed-together inventory
across multiple new child providers, as the VGPUs in the example, we
don't know which allocations to put where.  The virt driver should know
which instances own which specific inventory units, and would be able to
report that info within the data structure.  That's getting kinda close
to the virt driver mucking with allocations, but maybe it fits well
enough into this model to be acceptable?)


Well, it's not really the virt driver *itself* mucking with the 
allocations. It's more that the virt driver is telling something *else* 
the move instructions that it feels are needed...



Note that the return value from update_provider_tree is optional, and
only used when the virt driver is indicating a "move" of this ilk.  If
it's None/[] then the RT/update_from_provider_tree flow is the same as
it is today.

If we can do it this way, we don't need a migration tool.  In fact, we
don't even need to restrict provider tree "reshaping" to release
boundaries.  As long as the virt driver understands its own data model
migrations and reports them properly via update_provider_tree, it can
shuffle its tree around whenever it wants.


Due to the many race conditions we would have in trying to fudge 
inventory amounts (the reserved/total thing) and allocation movement for 
>1 consumer at a time, I'm pretty sure the only safe thing to do is 
have a single new HTTP endpoint that would take this list of move 
operations and perform them atomically (on the placement server side of 
course).


Here's a strawman for how that HTTP 

Re: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project

2018-06-01 Thread Sean McGinnis
On Fri, Jun 01, 2018 at 01:29:41PM -0400, Doug Hellmann wrote:
> That presentation says "Users should do their own tagging/release
> management" (6:31). I don't think that's really an approach we want
> to be encouraging project teams to take.
> 
I hadn't had a chance to watch the presentation yet. It also states right
around there that there is only one dev on the project. That really concerns
me.

And in very strong agreement - we definitely do not want to be encouraging
project consumers to be the ones tagging and doing their own releases.

We would certainly welcome anyone interested to get involved in the project and
be added as an official release liaison so they can request official releases
though.

> I would suggest placing Dragonflow in maintenance mode, but if the
> team doesn't have the resources to participate in the normal community
> processes, maybe it should be moved out of the official project
> list instead?
> 
> Do we have any sort of indication of how many deployments rely on
> Dragonflow? Does the neutron team have capacity to bring Dragonflow
> back in to their list of managed repos and help them with releases
> and other common process tasks?
> 
> Excerpts from Miguel Lavalle's message of 2018-06-01 11:38:53 -0500:
> > There was a project update presentation in Vancouver:
> > https://www.openstack.org/videos/vancouver-2018/dragonflow-project-update-2
> > 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project

2018-06-01 Thread Doug Hellmann
That presentation says "Users should do their own tagging/release
management" (6:31). I don't think that's really an approach we want
to be encouraging project teams to take.

I would suggest placing Dragonflow in maintenance mode, but if the
team doesn't have the resources to participate in the normal community
processes, maybe it should be moved out of the official project
list instead?

Do we have any sort of indication of how many deployments rely on
Dragonflow? Does the neutron team have capacity to bring Dragonflow
back in to their list of managed repos and help them with releases
and other common process tasks?

Excerpts from Miguel Lavalle's message of 2018-06-01 11:38:53 -0500:
> There was a project update presentation in Vancouver:
> https://www.openstack.org/videos/vancouver-2018/dragonflow-project-update-2
> 
> On Fri, Jun 1, 2018 at 11:31 AM, Sean McGinnis 
> wrote:
> 
> > Hello DragonFlow team,
> >
> > As part of reviewing release activities it was noticed that there was
> > never a
> > final Queens release for DragonFlow and there was never a stable/queens
> > branch
> > created.
> >
> > It appears there is still activity with this project [1], so I am
> > wondering if
> > we could get an update on the status of the DragonFlow.
> >
> > DragonFlow is under the "independent" release model, so it does not need to
> > have regular cycle milestone releases [2], but we just want to make sure
> > the
> > project should continue under OpenStack governance and that we are not just
> > missing communication on release needs.
> >
> > Thanks!
> > Sean
> >
> > [1] https://github.com/openstack/dragonflow/compare/stable/pike...master
> > [2] http://git.openstack.org/cgit/openstack/releases/tree/
> > deliverables/_independent/dragonflow.yaml
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-06-01 Thread Jay Pipes
Dan, you are leaving out the parts of my response where I am agreeing 
with you and saying that your "Option #2" is probably the thing we 
should go with.


-jay

On 06/01/2018 12:22 PM, Dan Smith wrote:

So, you're saying the normal process is to try upgrading the Linux
kernel and associated low-level libs, wait the requisite amount of
time that takes (can be a long time) and just hope that everything
comes back OK? That doesn't sound like any upgrade I've ever seen.


I'm saying I think it's a process practiced by some to install the new
kernel and libs and then reboot to activate, yeah.


No, sorry if I wasn't clear. They can live-migrate the instances off
of the to-be-upgraded compute host. They would only need to
cold-migrate instances that use the aforementioned non-movable
resources.


I don't think it's reasonable to force people to have to move every
instance in their cloud (live or otherwise) in order to upgrade. That
means that people who currently do their upgrades in-place in one step,
now have to do their upgrade in N steps, for N compute nodes. That
doesn't seem reasonable to me.


If we are going to go through the hassle of writing a bunch of
transformation code in order to keep operator action as low as
possible, I would prefer to consolidate all of this code into the
nova-manage (or nova-status) tool and put some sort of
attribute/marker on each compute node record to indicate whether a
"heal" operation has occurred for that compute node.


We need to know details of each compute node in order to do that. We
could make the tool external and something they run per-compute node,
but that still makes it N steps, even if the N steps are lighter
weight.


Someone (maybe Gibi?) on this thread had mentioned having the virt
driver (in update_provider_tree) do the whole set reserved = total
thing when first attempting to create the child providers. That would
work to prevent the scheduler from attempting to place workloads on
those child providers, but we would still need some marker on the
compute node to indicate to the nova-manage heal_nested_providers (or
whatever) command that the compute node has had its provider tree
validated/healed, right?


So that means you restart your cloud and it's basically locked up until
you perform the N steps to unlock N nodes? That also seems like it's not
going to make us very popular on the playground :)

I need to go read Eric's tome on how to handle the communication of
things from virt to compute so that this translation can be done. I'm
not saying I have the answer, I'm just saying that making this the
problem of the operators doesn't seem like a solution to me, and that we
should figure out how we're going to do this before we go down the
rabbit hole.

--Dan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][tc] Tagging rights

2018-06-01 Thread Doug Hellmann
Excerpts from Sean McGinnis's message of 2018-06-01 11:45:04 -0500:
> Hi Andrey,
> 
> Sorry for the delay getting back to this. I had meant to wait for the 
> responses
> from the other projects included in the original thread, but never made it 
> back
> to follow up.
> 
> Officially governed projects are required to use the releases repo for driving
> the automated release process. This ensures peer-reviewed releases and
> consistency throughout the release process. So to be a governed project, we
> really do need
> to switch you over to this process.

I'm curious about the relationship between rally and "xRally"
(https://github.com/xrally). The repo there says the core of rally is
going to be moved to github soon, can you elaborate on that? Is there a
plan to remove Rally from OpenStack?

Doug

> 
> Some other notes inline below.
> 
> Thanks,
> Sean
> 
> > Hi Sean!
> > 
> > Thanks for raising this question.
> > 
> > As for the Rally team, we are using a self-tagging approach for several reasons:
> >
> > - Release notes
> >
> >   Check the difference between
> > https://github.com/openstack/nova/releases/tag/17.0.2 and
> > https://github.com/openstack/rally-openstack/releases/tag/1.0.0.
> >   The first one includes just autogenerated metadata. The second one
> > user-friendly notes (they are not ideal, but we are working on making them
> > better).
> >   I do not find a way to add custom release notes via openstack/releases
> > project.
> 
> Nearly all projects have standardized on reno for release notes. This is the
> preferred method for this and where general consumers of OpenStack 
> deliverables
> are now used to looking for these details. I would strongly recommend doing
> that instead.
> 
> >
> > - Time
> >
> >   Self-tagging the repo allows me to schedule/reschedule the release in
> > whatever timeframe I decide without pinging anyone and waiting for folks to
> > return from summit/PTG.
> >   I do not want to offend anyone, but we all know that such events take
> > much time for preparation, holding and resting after it.
> >
> >   Since there are no official OpenStack projects built on top of Rally,
> > launching any "integration" jobs while making a Rally release is a waste
> > of time and money (resources).
> >   Also, such jobs can block making a release. I remember it can sometimes
> > take weeks to pass all the gates, with tons of rechecks.
> >
> >   https://github.com/openstack/releases#release-approval == "Freezes and no
> > late releases". It is an opensource and I want to make releases on weekends
> > if there is any
> >   reason for doing this (critical fix or the last blocking feature is
> > merged or whatever).
> 
> We do generally avoid releasing on Fridays or weekends, but now that our
> requirements management has some checks, and especially for projects that are
> not dependencies for other projects, we can certainly do releases on these 
> days
> as long as we are told of the urgency of getting them out there. The release
> team does not want to be a bottleneck for getting other work done.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][tc] Tagging rights

2018-06-01 Thread Sean McGinnis
Hi Andrey,

Sorry for the delay getting back to this. I had meant to wait for the responses
from the other projects included in the original thread, but never made it back
to follow up.

Officially governed projects are required to use the releases repo for driving
the automated release process. This ensures peer-reviewed releases and
consistency throughout the release process. So to be a governed project, we really do need
to switch you over to this process.
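
In practice that just means proposing a small YAML file to openstack/releases
rather than pushing tags by hand, roughly along these lines (the file name,
version, hash and other values below are purely illustrative):

# deliverables/_independent/rally-openstack.yaml (illustrative sketch)
---
launchpad: rally
team: rally
release-model: independent
releases:
  - version: 1.0.1
    projects:
      - repo: openstack/rally-openstack
        hash: 0000000000000000000000000000000000000000  # commit to tag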

Some other notes inline below.

Thanks,
Sean

> Hi Sean!
> 
> Thanks for raising this question.
> 
> As for the Rally team, we are using a self-tagging approach for several reasons:
>
> - Release notes
>
>   Check the difference between
> https://github.com/openstack/nova/releases/tag/17.0.2 and
> https://github.com/openstack/rally-openstack/releases/tag/1.0.0.
>   The first one includes just autogenerated metadata. The second one
> user-friendly notes (they are not ideal, but we are working on making them
> better).
>   I do not find a way to add custom release notes via openstack/releases
> project.

Nearly all projects have standardized on reno for release notes. This is the
preferred method for this and where general consumers of OpenStack deliverables
are now used to looking for these details. I would strongly recommend doing
that instead.
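
For reference, adding a note is just a matter of running "reno new some-slug"
in the repo and editing the generated YAML alongside your change; something
like this (file name is generated by reno, contents purely illustrative):

# releasenotes/notes/some-slug-0123456789abcdef.yaml
---
features:
  - |
    Added a new scenario option; describe the user-visible change here.
fixes:
  - |
    Fixed an issue with the foo plugin; one bullet per note.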

>
> - Time
>
>   Self-tagging the repo allows me to schedule/reschedule the release in
> whatever timeframe I decide without pinging anyone and waiting for folks to
> return from summit/PTG.
>   I do not want to offend anyone, but we all know that such events take
> much time for preparation, holding and resting after it.
>
>   Since there are no official OpenStack projects built on top of Rally,
> launching any "integration" jobs while making a Rally release is a waste
> of time and money (resources).
>   Also, such jobs can block making a release. I remember it can sometimes
> take weeks to pass all the gates, with tons of rechecks.
>
>   https://github.com/openstack/releases#release-approval == "Freezes and no
> late releases". It is an opensource and I want to make releases on weekends
> if there is any
>   reason for doing this (critical fix or the last blocking feature is
> merged or whatever).

We do generally avoid releasing on Fridays or weekends, but now that our
requirements management has some checks, and especially for projects that are
not dependencies for other projects, we can certainly do releases on these days
as long as we are told of the urgency of getting them out there. The release
team does not want to be a bottleneck for getting other work done.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project

2018-06-01 Thread Miguel Lavalle
There was a project update presentation in Vancouver:
https://www.openstack.org/videos/vancouver-2018/dragonflow-project-update-2

On Fri, Jun 1, 2018 at 11:31 AM, Sean McGinnis 
wrote:

> Hello DragonFlow team,
>
> As part of reviewing release activities it was noticed that there was
> never a
> final Queens release for DragonFlow and there was never a stable/queens
> branch
> created.
>
> It appears there is still activity with this project [1], so I am
> wondering if
> we could get an update on the status of the DragonFlow.
>
> DragonFlow is under the "independent" release model, so it does not need to
> have regular cycle milestone releases [2], but we just want to make sure
> the
> project should continue under OpenStack governance and that we are not just
> missing communication on release needs.
>
> Thanks!
> Sean
>
> [1] https://github.com/openstack/dragonflow/compare/stable/pike...master
> [2] http://git.openstack.org/cgit/openstack/releases/tree/
> deliverables/_independent/dragonflow.yaml
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-06-01 Thread Davanum Srinivas
Josh,

The Kata team is talking to the QEMU maintainers about how best to move
forward, especially around stripping down things that aren't needed for
their use case. From what I got to know, they are not adding code (just
removing stuff).

-- Dims

On Fri, Jun 1, 2018 at 12:12 PM, Joshua Harlow  wrote:
> Slightly off topic but,
>
> Have you by any chance looked at what kata has forked for qemu:
>
> https://github.com/kata-containers/qemu/tree/qemu-lite-2.11.0
>
> I'd be interested in an audit of that code for similar reasons to this
> libvirt fork (hard to know from my view point if there are new issues in
> that code like the ones you are finding in libvirt).
>
> Kashyap Chamarthy wrote:
>>
>> On Tue, May 22, 2018 at 01:54:59PM -0500, Dean Troyer wrote:
>>>
>>> StarlingX (aka STX) was announced this week at the summit, there is a
>>> PR to create project repos in Gerrit at [0]. STX is basically Wind
>>
>>
>>  From a cursory look at the libvirt fork, there are some questionable
>> choices.  E.g. the config code (libvirt/src/qemu/qemu.conf) is modified
>> such that QEMU is launched as 'root'.  That means a bug in QEMU ==
>> instant host compromise.
>>
>> All Linux distributions (that matter) configure libvirt to launch QEMU
>> as a regular user ('qemu').  E.g. from Fedora's libvirt RPM spec file:
>>
>>  libvirt.spec:%define qemu_user  qemu
>>  libvirt.spec:   --with-qemu-user=%{qemu_user} \
>>
>>  * * *
>>
>> There are multiple other such issues in the forked libvirt code.
>>
>> [...]
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [DragonFlow][TC] State of the DragonFlow project

2018-06-01 Thread Sean McGinnis
Hello DragonFlow team,

As part of reviewing release activities it was noticed that there was never a
final Queens release for DragonFlow and there was never a stable/queens branch
created.

It appears there is still activity with this project [1], so I am wondering if
we could get an update on the status of the DragonFlow.

DragonFlow is under the "independent" release model, so it does not need to
have regular cycle milestone releases [2], but we just want to make sure the
project should continue under OpenStack governance and that we are not just
missing communication on release needs.

Thanks!
Sean

[1] https://github.com/openstack/dragonflow/compare/stable/pike...master
[2] 
http://git.openstack.org/cgit/openstack/releases/tree/deliverables/_independent/dragonflow.yaml

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-06-01 Thread Dan Smith
> So, you're saying the normal process is to try upgrading the Linux
> kernel and associated low-level libs, wait the requisite amount of
> time that takes (can be a long time) and just hope that everything
> comes back OK? That doesn't sound like any upgrade I've ever seen.

I'm saying I think it's a process practiced by some to install the new
kernel and libs and then reboot to activate, yeah.

> No, sorry if I wasn't clear. They can live-migrate the instances off
> of the to-be-upgraded compute host. They would only need to
> cold-migrate instances that use the aforementioned non-movable
> resources.

I don't think it's reasonable to force people to have to move every
instance in their cloud (live or otherwise) in order to upgrade. That
means that people who currently do their upgrades in-place in one step,
now have to do their upgrade in N steps, for N compute nodes. That
doesn't seem reasonable to me.

> If we are going to go through the hassle of writing a bunch of
> transformation code in order to keep operator action as low as
> possible, I would prefer to consolidate all of this code into the
> nova-manage (or nova-status) tool and put some sort of
> attribute/marker on each compute node record to indicate whether a
> "heal" operation has occurred for that compute node.

We need to know details of each compute node in order to do that. We
could make the tool external and something they run per-compute node,
but that still makes it N steps, even if the N steps are lighter
weight.

> Someone (maybe Gibi?) on this thread had mentioned having the virt
> driver (in update_provider_tree) do the whole set reserved = total
> thing when first attempting to create the child providers. That would
> work to prevent the scheduler from attempting to place workloads on
> those child providers, but we would still need some marker on the
> compute node to indicate to the nova-manage heal_nested_providers (or
> whatever) command that the compute node has had its provider tree
> validated/healed, right?

So that means you restart your cloud and it's basically locked up until
you perform the N steps to unlock N nodes? That also seems like it's not
going to make us very popular on the playground :)

I need to go read Eric's tome on how to handle the communication of
things from virt to compute so that this translation can be done. I'm
not saying I have the answer, I'm just saying that making this the
problem of the operators doesn't seem like a solution to me, and that we
should figure out how we're going to do this before we go down the
rabbit hole.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-01 Thread Doug Hellmann
Excerpts from Zane Bitter's message of 2018-06-01 10:10:31 -0400:
> On 26/05/18 17:46, Mohammed Naser wrote:
> > Hi everyone!
> > 
> > During the TC retrospective at the OpenStack summit last week, the
> > topic of the organizational diversity tag becoming irrelevant was
> > brought up by Thierry (ttx)[1].  It seems that for projects that are
> > not very active, they can easily lose this tag with a few changes by
> > perhaps the infrastructure team for CI related fixes.
> > 
> > As an action item, Thierry and I have paired up in order to look into
> > a way to resolve this issue.  There have been ideas to switch this to
> > a report that is published at the end of the cycle rather than
> > continuously.  Julia (TheJulia) suggested that we change or track
> > different types of diversity.
> > 
> > Before we start diving into solutions, I wanted to bring this topic up
> > to the mailing list and ask for any suggestions.  In digging the
> > codebase behind this[2], I've found that there are some knobs that we
> > can also tweak if need-be, or perhaps we can adjust those numbers
> > depending on the number of commits.
> 
> Crazy idea: what if we dropped the idea of measuring the diversity and 
> allowed teams to decide when they applied the tag to themselves like we 
> do for other tags. (No wait! Come back!)
> 
> Some teams enforce a requirement that the 2 core +2s come from reviewers 
> with different affiliations. We would say that any project that enforces 
> that rule would get the diversity tag. Then it's actually attached to 
> something concrete, and teams could decide for themselves when to drop 
> it (because they would start having difficulty merging stuff otherwise).
> 
> I'm not entirely sold on this, but it's an idea I had that I wanted to 
> throw out there :)
> 
> cheers,
> Zane.
> 

The point of having the tags is to help consumers of the projects
understand their health in some capacity. In this case we were
trying to use measures of actual activity within the project to
help spot projects that are really only maintained by one company,
with the assumption that such projects are less healthy than others
being maintained by contributors with more diverse backing.

Does basing the tag definition on whether approvals need to come
from people with diverse affiliation provide enough project health
information that it would let us use it to replace the current tag?

How many teams enforce the rule you describe?

Is that rule a sign of a healthy team dynamic, that we would want
to spread to the whole community?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-06-01 Thread Joshua Harlow

Slightly off topic but,

Have you by any chance looked at what kata has forked for qemu:

https://github.com/kata-containers/qemu/tree/qemu-lite-2.11.0

I'd be interested in an audit of that code for similar reasons to this 
libvirt fork (hard to know from my view point if there are new issues in 
that code like the ones you are finding in libvirt).


Kashyap Chamarthy wrote:

On Tue, May 22, 2018 at 01:54:59PM -0500, Dean Troyer wrote:

StarlingX (aka STX) was announced this week at the summit, there is a
PR to create project repos in Gerrit at [0]. STX is basically Wind


 From a cursory look at the libvirt fork, there are some questionable
choices.  E.g. the config code (libvirt/src/qemu/qemu.conf) is modified
such that QEMU is launched as 'root'.  That means a bug in QEMU ==
instant host compromise.

All Linux distributions (that matter) configure libvirt to launch QEMU
as a regular user ('qemu').  E.g. from Fedora's libvirt RPM spec file:

 libvirt.spec:%define qemu_user  qemu
 libvirt.spec:   --with-qemu-user=%{qemu_user} \
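
To be explicit, the relevant knobs in qemu.conf normally look like the
following; this excerpt is only illustrative of the distro default versus
what the fork effectively amounts to, not copied verbatim from either tree.

 Distro default in /etc/libvirt/qemu.conf (QEMU runs unprivileged):

     user = "qemu"
     group = "qemu"

 What the forked configuration effectively amounts to:

     user = "root"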

 * * *

There are multiple other such issues in the forked libvirt code.

[...]



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][neutron] Extraroute support

2018-06-01 Thread Kevin Benton
The neutron API now supports compare and swap updates with an If-Match
header so the race condition can be avoided.
https://bugs.launchpad.net/neutron/+bug/1703234
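
Roughly like this (a sketch only; the endpoint, token, router ID and revision
number are placeholders):

import requests

NEUTRON = 'http://neutron.example.com:9696'               # placeholder endpoint
HEADERS = {
    'X-Auth-Token': 'REDACTED',                            # placeholder token
    # Make the write conditional on the revision_number observed when reading.
    'If-Match': 'revision_number=42',
}

router_id = '00000000-0000-0000-0000-000000000000'         # placeholder

# Read-modify-write of the router's routes list; the If-Match header rejects
# the update if the router changed since it was read.
requests.put('%s/v2.0/routers/%s' % (NEUTRON, router_id), headers=HEADERS,
             json={'router': {'routes': [
                 {'destination': '10.10.0.0/24', 'nexthop': '192.168.1.254'},
             ]}})

A 412 Precondition Failed response means someone else updated the router in
the meantime, so the caller can re-read and retry rather than silently
clobbering routes added by another stack.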



On Fri, Jun 1, 2018, 04:57 Rabi Mishra  wrote:

>
> On Fri, Jun 1, 2018 at 3:57 PM, Lajos Katona 
> wrote:
>
>> Hi,
>>
>> Could somebody help me out with Neutron's Extraroute support in Hot
>> templates.
>> The support status of the Extraroute is support.UNSUPPORTED in heat, and
>> only create and delete are the supported operations.
>> see:
>> https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/extraroute.py#LC35
>>
>>
> As I see the unsupported tag was added when the feature was moved from the
>> contrib folder to in-tree (https://review.openstack.org/186608)
>> Perhaps you can help me out why only create and delete are supported and
>> update not.
>>
>>
> I think most of the resources when moved from contrib to in-tree are
> marked as unsupported. Adding routes to an existing router by multiple
> stacks can be racy and is probably the reason use of this resource is not
> encouraged and hence it's not supported. You can see the discussion in the
> original patch that proposed this resource
> https://review.openstack.org/#/c/41044/
>
> Not sure if things have changed on neutron side for us to revisit the
> concerns.
>
> Also it does not have any update_allowed properties, hence no
> handle_update(). It would be replaced if you change any property.
>
> Hope it helps.
>
>
>
>> Thanks in advance for  the help.
>>
>> Regards
>> Lajos
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Regards,
> Rabi Mishra
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Mistral Monthly June 2018

2018-06-01 Thread Dougal Matthews
Hey Mistralites!

Welcome to the second edition of Mistral Monthly.

# Summit

Brad Crochet did a great job giving the Mistral project update talk. Check
it out: https://www.youtube.com/watch?v=y9qieruccO4

Also check out the Congress update; they discuss their recent support for
Mistral. https://www.youtube.com/watch?v=5YYcysVyLCo


# Releases

Fairly quiet this month. Just a few bugfix releases. One still in flight.

- Pike
  - Mistral 5.2.4 will be released soon: https://review.openstack.org/#/c/568881/
- Queens
  - Mistral 6.0.3 https://docs.openstack.org/releasenotes/mistral/queens.html

Rocky Milestone 2 will be released next week. So there will be more release
news next time.


# Notable changes and additions

- We now have Zun and Qinling OpenStack actions in master.
- Two significant performance improvements were made relating to workflow
environments and the deletion of objects.
- Mistral is now using stestr in all repos. For more details, see:
https://review.openstack.org/#/c/519751/


# Milestones, Reviews, Bugs and Blueprints

- We have 105 open bugs (down from 109 last month).
  - Zero are untriaged
  - One is "critical" (but that is likely a lie as it has been critical and
ignored for some time)
- Rocky-2 now has 58 bugs assigned to it (it was 44 last month!).
Only 13 are Fix Released. Most of these will move to Rocky-3 next week.
- 4 blueprints are targeted at Rocky 2 (I have already bumped 4 that were
inactive). Two are implemented. The other two will likely slip to Rocky-3
- 29 commits were merged.
- There were 176 reviews in total, 126 of these from the core team.


That's all for this time. See you next month!

Dougal


P.S. The format of this newsletter is still somewhat fluid. Feedback would
be very welcome. What do you find interesting or useful? What is missing?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

2018-06-01 Thread Eric Fried
Sylvain-

On 05/31/2018 02:41 PM, Sylvain Bauza wrote:
> 
> 
> On Thu, May 31, 2018 at 8:26 PM, Eric Fried  > wrote:
> 
> > 1. Make everything perform the pivot on compute node start (which can be
> >    re-used by a CLI tool for the offline case)
> > 2. Make everything default to non-nested inventory at first, and provide
> >    a way to migrate a compute node and its instances one at a time (in
> >    place) to roll through.
> 
> I agree that it sure would be nice to do ^ rather than requiring the
> "slide puzzle" thing.
> 
> But how would this be accomplished, in light of the current "separation
> of responsibilities" drawn at the virt driver interface, whereby the
> virt driver isn't supposed to talk to placement directly, or know
> anything about allocations?  Here's a first pass:
> 
> 
> 
> What we usually do is to implement either at the compute service level
> or at the virt driver level some init_host() method that will reconcile
> what you want.
> For example, we could just imagine a non-virt specific method (and I
> like that because it's non-virt specific) - ie. called by compute's
> init_host() that would look up the compute root RP inventories, see
> whether one or more inventories tied to specific resource classes have
> to be moved from the root RP and be attached to a child RP.
> The only subtlety that would require a virt-specific update would be
> the name of the child RP (as both Xen and libvirt plan to use the child
> RP name as the vGPU type identifier), but that's an implementation detail
> that a virt driver update invoked by the resource tracker could
> reconcile.

The question was rhetorical; my suggestion (below) was an attempt at
designing exactly what you've described.  Let me know if I can
explain/clarify it further.  I'm looking for feedback as to whether it's
a viable approach.

> The virt driver, via the return value from update_provider_tree, tells
> the resource tracker that "inventory of resource class A on provider B
> have moved to provider C" for all applicable AxBxC.  E.g.
> 
> [ { 'from_resource_provider': ,
>     'moved_resources': [VGPU: 4],
>     'to_resource_provider': 
>   },
>   { 'from_resource_provider': ,
>     'moved_resources': [VGPU: 4],
>     'to_resource_provider': 
>   },
>   { 'from_resource_provider': ,
>     'moved_resources': [
>         SRIOV_NET_VF: 2,
>         NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND: 1000,
>         NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND: 1000,
>     ],
>     'to_resource_provider': 
>   }
> ]
> 
> As today, the resource tracker takes the updated provider tree and
> invokes [1] the report client method update_from_provider_tree [2] to
> flush the changes to placement.  But now update_from_provider_tree also
> accepts the return value from update_provider_tree and, for each "move":
> 
> - Creates provider C (as described in the provider_tree) if it doesn't
> already exist.
> - Creates/updates provider C's inventory as described in the
> provider_tree (without yet updating provider B's inventory).  This ought
> to create the inventory of resource class A on provider C.
> - Discovers allocations of rc A on rp B and POSTs to move them to rp C*.
> - Updates provider B's inventory.
> 
> (*There's a hole here: if we're splitting a glommed-together inventory
> across multiple new child providers, as the VGPUs in the example, we
> don't know which allocations to put where.  The virt driver should know
> which instances own which specific inventory units, and would be able to
> report that info within the data structure.  That's getting kinda close
> to the virt driver mucking with allocations, but maybe it fits well
> enough into this model to be acceptable?)
> 
> Note that the return value from update_provider_tree is optional, and
> only used when the virt driver is indicating a "move" of this ilk.  If
> it's None/[] then the RT/update_from_provider_tree flow is the same as
> it is today.
> 
> If we can do it this way, we don't need a migration tool.  In fact, we
> don't even need to restrict provider tree "reshaping" to release
> boundaries.  As long as the virt driver understands its own data model
> migrations and reports them properly via update_provider_tree, it can
> shuffle its tree around whenever it wants.
> 
> Thoughts?
> 
> -efried
> 
> [1]
> 
> https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2fc34d7e8e1b/nova/compute/resource_tracker.py#L890
> 
> 
> [2]
> 
> https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2fc34d7e8e1b/nova/scheduler/client/report.py#L1341
> 
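
For illustration, a minimal, self-contained sketch (not actual Nova code;
the function name, provider names and data shapes are assumptions) of the
bookkeeping a "move" record implies: create the inventory on the destination
provider, re-home the matching allocations, and only then drop it from the
source.

    def apply_moves(inventories, allocations, moves):
        """inventories: {provider: {resource_class: total}}
        allocations: {consumer: {provider: {resource_class: used}}}
        moves: [{'from': provider, 'to': provider,
                 'moved_resources': {resource_class: total}}]
        """
        for move in moves:
            src, dst = move['from'], move['to']
            for rc, total in move['moved_resources'].items():
                # Create the inventory on the new (child) provider first...
                inventories.setdefault(dst, {})[rc] = total
                # ...move any allocations of that class from src to dst...
                for allocs in allocations.values():
                    used = allocs.get(src, {}).pop(rc, None)
                    if used is not None:
                        allocs.setdefault(dst, {})[rc] = used
                # ...and only then drop it from the old (root) provider.
                inventories.get(src, {}).pop(rc, None)
        return inventories, allocations

    # Example: 4 VGPUs move from the compute root RP to a child RP.
    inv = {'cn1': {'VCPU': 16, 'VGPU': 4}}
    alloc = {'instance-a': {'cn1': {'VCPU': 2, 'VGPU': 1}}}
    moves = [{'from': 'cn1', 'to': 'cn1_pgpu0',
              'moved_resources': {'VGPU': 4}}]
    print(apply_moves(inv, alloc, moves))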
> 

Re: [openstack-dev] Updated PTI for documentation

2018-06-01 Thread Jeremy Stanley
On 2018-06-01 14:58:02 +0100 (+0100), Stephen Finucane wrote:
[...]
>  * The recent move to zuul v3 has changed how documentation is built in
>the gate. Previously, zuul called the 'docs' target in tox (e.g.
>'tox -e docs'), which would run whatever the project team had
>defined for that target.

Nope, it never did that. It previously called `tox -evenv -- python
setup.py build_sphinx` and those "docs" envs in tox were only ever
for developer convenience, not used at all in any standard CI jobs.

>With zuul v3, zuul no longer calls this.
>Instead, it calls either 'python setup.py build_sphinx' or 'sphinx-
>build' (more on this below). This means everything you wish to do as
>part of the documentation build must now be done via Sphinx
>extensions.
[...]

You've got your cause and effect a bit backwards. The new docs jobs
(which weren't really related to the move to Zuul v3 but happened
around the same timeframe) were in service of the change to the PTI,
not the other way around. The commit message for the change[*] which
introduced the documentation section in the PTI has a fair bit to
say about reasons, but the gist of it is that we wanted to switch to
a workflow which 1. didn't assume you were a Python-oriented
project, and 2. was more in line with how most projects outside
OpenStack make use of Sphinx.

[*] https://review.openstack.org/508694
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-01 Thread Zane Bitter

On 26/05/18 17:46, Mohammed Naser wrote:

Hi everyone!

During the TC retrospective at the OpenStack summit last week, the
topic of the organizational diversity tag becoming irrelevant was
brought up by Thierry (ttx)[1].  It seems that projects that are
not very active can easily lose this tag with a few changes by,
say, the infrastructure team for CI-related fixes.

As an action item, Thierry and I have paired up in order to look into
a way to resolve this issue.  There have been ideas to switch this to
a report that is published at the end of the cycle rather than
continuously.  Julia (TheJulia) suggested that we change or track
different types of diversity.

Before we start diving into solutions, I wanted to bring this topic up
on the mailing list and ask for any suggestions.  In digging into the
codebase behind this[2], I've found that there are some knobs that we
can also tweak if need be, or perhaps we can adjust those numbers
depending on the number of commits.


Crazy idea: what if we dropped the idea of measuring the diversity and 
allowed teams to decide when to apply the tag to themselves, like we do 
for other tags? (No wait! Come back!)


Some teams enforce a requirement that the 2 core +2s come from reviewers 
with different affiliations. We would say that any project that enforces 
that rule would get the diversity tag. Then it's actually attached to 
something concrete, and teams could decide for themselves when to drop 
it (because they would start having difficulty merging stuff otherwise).


I'm not entirely sold on this, but it's an idea I had that I wanted to 
throw out there :)


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Updated PTI for documentation

2018-06-01 Thread Stephen Finucane
There have been a couple of threads about an updated "PTI" for
documentation bouncing around the mailing list of late.

 * http://lists.openstack.org/pipermail/openstack-dev/2018-March/128817.html
 * http://lists.openstack.org/pipermail/openstack-dev/2018-March/128594.html

I've been told the reasoning behind this change and what is required
has not been made clear so here goes my attempt at explaining it. In
short, there are two problems we're trying to work around with this
change.

 * The legacy 'build_sphinx' setuptools command provided by pbr has
   been found to be lacking. It's buggy as hell, frequently breaks with
   Sphinx version bumps, and is generally a PITA to maintain. We (the
   oslo team) want to remove this feature to ease our maintenance
   burden.
 * The recent move to zuul v3 has changed how documentation is built in
   the gate. Previously, zuul called the 'docs' target in tox (e.g.
   'tox -e docs'), which would run whatever the project team had
   defined for that target. With zuul v3, zuul no longer calls this.
   Instead, it calls either 'python setup.py build_sphinx' or 'sphinx-
   build' (more on this below). This means everything you wish to do as
   part of the documentation build must now be done via Sphinx
   extensions.

Both the oslo and infra teams have a strong incentive to drop support
for the 'build_sphinx' command (albeit for different reasons) but doing
so isn't simply a case of calling 'sphinx-build' instead. In order to
migrate, some steps are required:

   1. pbr's 'build_sphinx' setuptools command provides some additional
  functionality on top of 'sphinx-build'. This must be replaced by
  Sphinx extensions.
   2. Calls to 'python setup.py build_sphinx' must be replaced by
  additional calls to 'sphinx-build'
   3. Documentation requirements must be moved to 'doc/requirements.txt'
  to avoid having to install every requirement of a project simply to
  build documentation.

The first of these has already been achieved: 'openstackdocstheme'
recently gained support for automatically configuring the project name
and version in generated documentation [1], which replaced that aspect
of the 'build_sphinx' command. Similarly, the 'sphinxcontrib-apidoc'
Sphinx extension [2] was produced in order to provide a way to
automatically generate API documentation as part of 'sphinx-build'
rather than by having to make a secondary call to 'sphinx-apidoc'
(which the gate, which, once again, no longer runs anything but
'sphinx-build' or 'python setup.py build_sphinx', would not do).

The second step is the troublesome bit and has been the reason for most
of the individual patches to various projects. The steps necessary to
make this change have been documented multiple times on the list but
they're listed here once again for posterity:

 * If necessary, enable 'sphinxcontrib.apidoc' as described at [3] (a
   minimal conf.py sketch follows this list).
 * Make sure you're using 'openstackdocstheme', assuming your project
   is an official OpenStack one.
 * Remove the 'build_sphinx' section from 'setup.cfg' (this is
   described at [3] but applies whether you need that or not).
 * Update your doc/releasenotes/api-guide targets in 'tox.ini' so
   you're using the same commands as the gate.
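
A minimal doc/source/conf.py sketch covering the first two bullets (the
module path is a placeholder; see [1] and [3] for the authoritative options):

    # Sketch only -- adapt the paths to your own project.
    extensions = [
        'openstackdocstheme',     # sets project/version info automatically [1]
        'sphinxcontrib.apidoc',   # replaces the separate sphinx-apidoc call [2]
    ]

    # sphinxcontrib-apidoc options, as documented in [3]:
    apidoc_module_dir = '../../myproject'   # placeholder package path
    apidoc_output_dir = 'reference/api'
    apidoc_excluded_paths = ['tests']
    apidoc_separate_modules = True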

The third change should be self-explanatory and infra have reasons for
requesting it. It's generally easiest to do this as part of the above.

Hopefully this clears things up for people. If anyone has any
questions, feel free to reach out to me on IRC (stephenfin) and I'll be
happy to help.

Cheers,
Stephen

PS: For those that are curious, the decision on whether to run 'python
setup.py build_sphinx' command or 'sphinx-build' in the gate is based
on the presence of a 'build_sphinx' section in 'setup.cfg'. If present,
the former is run. If not, we use 'sphinx-build'. This is why it's
necessary to remove that section from 'setup.cfg'.
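
As a rough illustration of that decision rule (this is not the actual Zuul
role, just a sketch of the logic it applies):

    import configparser

    cfg = configparser.ConfigParser()
    cfg.read('setup.cfg')
    if cfg.has_section('build_sphinx'):
        # Legacy path: pbr's setuptools command is still configured.
        print('python setup.py build_sphinx')
    else:
        # New PTI path.
        print('sphinx-build -W -b html doc/source doc/build/html')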

[1] https://docs.openstack.org/openstackdocstheme/latest/#using-the-theme
[2] https://pypi.org/project/sphinxcontrib-apidoc/
[3] https://pypi.org/project/sphinxcontrib-apidoc/#migration-from-pbr

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Deprecation of nova.image.download.modules extension point

2018-06-01 Thread Moore, Curt
On 6/1/2018 12:44 AM, Chris Friesen wrote:
> On 05/31/2018 04:14 PM, Curt Moore wrote:
>> The challenge is that the Glance image transfer is _glacially slow_
>> when using the Glance HTTP API (~30 min for a 50GB
>> Windows image (It’s Windows, it’s huge with all of the necessary
>> tools installed)). If libvirt can instead perform an RBD export on
>> the image using the image download functionality, it is able to
>> download the same image in ~30 sec.
> This seems oddly slow. I just downloaded a 1.6 GB image from glance in
> slightly under 10 seconds. That would map to about 5 minutes for a
> 50GB image.
Agreed.  There's nothing really special about the Glance API setup, we
have multiple load balanced instances behind HAProxy.  However, in our
use case, we are very sensitive to node spin-up time so anything we can
do to reduce this time is desired.  If a VM lands on a compute node
where the image isn't yet locally cached, paying an additional 5 min
penalty is undesired.
>> We could look at attaching an additional ephemeral disk to the
>> instance and have cloudbase-init use it as the pagefile but it
>> appears that if libvirt is using rbd for its images_type, _all_ disks
>> must then come from Ceph, there is no way at present to allow the VM
>> image to run from Ceph and have an ephemeral disk mapped in from
>> node-local storage. Even still, this would have the effect of
>> "wasting" Ceph IOPS for the VM disk itself which could be better used
>> for other purposes. Based on what I have explained about our use
>> case, is there a better/different way to accomplish the same goal
>> without using the deprecated image download functionality? If not,
>> can we work to "un-deprecate" the download extension point? Should I
>> work to get the code for this RBD download into the upstream repository?
> Have you considered using compute nodes configured for local storage
> but then use boot-from-volume with cinder and glance both using ceph?
> I *think* there's an optimization there such that the volume creation
> is fast. Assuming the volume creation is indeed fast, in this scenario
> you could then have a local ephemeral/swap disk for your pagefile.
> You'd still have your VM root disks on ceph though.
Understood. Booting directly from a Cinder volume would work, but as you
mention, we'd still have the VM root disks in Ceph, using the expensive
Ceph SSD IOPS for no good reason.  I'm trying to get the best of both
worlds by keeping the Glance images in Ceph and also keeping all VM I/O
local to the compute node.

-Curt




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Questions about token scopes

2018-06-01 Thread Lance Bragstad
It looks like I had a patch up to improve some developer documentation
that is relevant to this discussion [0].

[0] https://review.openstack.org/#/c/554727/

On 06/01/2018 08:01 AM, Jens Harbott wrote:
> 2018-05-30 20:37 GMT+00:00 Matt Riedemann :
>> On 5/30/2018 9:53 AM, Lance Bragstad wrote:
>>> While scope isn't explicitly denoted by an
>>> attribute, it can be derived from the attributes of the token response.
>>>
>> Yeah, this was confusing to me, which is why I reported it as a bug in the
>> API reference documentation:
>>
>> https://bugs.launchpad.net/keystone/+bug/1774229
>>
>>>>  * It looks like python-openstackclient doesn't allow specifying a
>>>>  scope when issuing a token, is that going to be added?
>>> Yes, I have a patch up for it [6]. I wanted to get this in during
>>> Queens, but it missed the boat. I believe this and a new release of
>>> oslo.context are the only bits left in order for services to have
>>> everything they need to easily consume system-scoped tokens.
>>> Keystonemiddleware should know how to handle system-scoped tokens in
>>> front of each service [7]. The oslo.context library should be smart
>>> enough to handle system scope set by keystonemiddleware if context is
>>> built from environment variables [8]. Both keystoneauth [9] and
>>> python-keystoneclient [10] should have what they need to generate
>>> system-scoped tokens.
>>>
>>> That should be enough to allow the service to pass a request environment
>>> to oslo.context and use the context object to reason about the scope of
>>> the request. As opposed to trying to understand different token scope
>>> responses from keystone. We attempted to abstract that away in to the
>>> context object.
>>>
>>> [6]https://review.openstack.org/#/c/524416/
>>> [7]https://review.openstack.org/#/c/564072/
>>> [8]https://review.openstack.org/#/c/530509/
>>> [9]https://review.openstack.org/#/c/529665/
>>> [10]https://review.openstack.org/#/c/524415/
>>
>> I think your reply in IRC was more what I was looking for:
>>
>> lbragstad   mriedem: if you install
>> https://review.openstack.org/#/c/524416/5 locally with devstack and setup a
>> clouds.yaml, ``openstack token issue --os-cloud devstack-system-admin``
>> should work 15:39
>> lbragstad   http://paste.openstack.org/raw/722357/  15:39
>>
>> So users with the system role will need to create a token using that role to
>> get the system-scoped token, as far as I understand. There is no --scope
>> option on the 'openstack token issue' CLI.
> IIUC there is no option to the "token issue" command because that
> command creates a token just like any other OSC command would do from
> the global authentication parameters specified, either on the command
> line, in the environment or via a clouds.yaml file. The "token issue"
> command simply outputs the token that is then received instead of
> using it as authentication for the "real" action taken by other
> commands.
>
> So the option to request a system scope would seem to be
> "--os-system-scope all" or the corresponding env var OS_SYSTEM_SCOPE.
> And if you do that, the resulting system-scoped token will directly be
> used when you issue a command like "openstack server list".
>
> One thing to watch out for, however, is that that option seems to be
> silently ignored if the credentials also specify either a project or a
> domain. Maybe generating a warning or even an error in that situation
> would be a cleaner solution.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [osc][python-openstackclient] osc-included image signing

2018-06-01 Thread Josephine Seifert
Hi,

our team has implemented a prototype for an osc-included image signing.
We would like to propose a spec or something like this, but haven't
found where to start at. So here is a brief concept of what we want to
contribute:

https://etherpad.openstack.org/p/osc-included_image_signing

Please advise us which steps to take next!

Regards,
Josephine
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Questions about token scopes

2018-06-01 Thread Jens Harbott
2018-05-30 20:37 GMT+00:00 Matt Riedemann :
> On 5/30/2018 9:53 AM, Lance Bragstad wrote:
>>
>> While scope isn't explicitly denoted by an
>> attribute, it can be derived from the attributes of the token response.
>>
>
> Yeah, this was confusing to me, which is why I reported it as a bug in the
> API reference documentation:
>
> https://bugs.launchpad.net/keystone/+bug/1774229
>
>>> * It looks like python-openstackclient doesn't allow specifying a
>>> scope when issuing a token, is that going to be added?
>>
>> Yes, I have a patch up for it [6]. I wanted to get this in during
>> Queens, but it missed the boat. I believe this and a new release of
>> oslo.context are the only bits left in order for services to have
>> everything they need to easily consume system-scoped tokens.
>> Keystonemiddleware should know how to handle system-scoped tokens in
>> front of each service [7]. The oslo.context library should be smart
>> enough to handle system scope set by keystonemiddleware if context is
>> built from environment variables [8]. Both keystoneauth [9] and
>> python-keystoneclient [10] should have what they need to generate
>> system-scoped tokens.
>>
>> That should be enough to allow the service to pass a request environment
>> to oslo.context and use the context object to reason about the scope of
>> the request. As opposed to trying to understand different token scope
>> responses from keystone. We attempted to abstract that away in to the
>> context object.
>>
>> [6]https://review.openstack.org/#/c/524416/
>> [7]https://review.openstack.org/#/c/564072/
>> [8]https://review.openstack.org/#/c/530509/
>> [9]https://review.openstack.org/#/c/529665/
>> [10]https://review.openstack.org/#/c/524415/
>
>
> I think your reply in IRC was more what I was looking for:
>
> lbragstad   mriedem: if you install
> https://review.openstack.org/#/c/524416/5 locally with devstack and setup a
> clouds.yaml, ``openstack token issue --os-cloud devstack-system-admin``
> should work 15:39
> lbragstad   http://paste.openstack.org/raw/722357/  15:39
>
> So users with the system role will need to create a token using that role to
> get the system-scoped token, as far as I understand. There is no --scope
> option on the 'openstack token issue' CLI.

IIUC there is no option to the "token issue" command because that
command creates a token just like any other OSC command would do from
the global authentication parameters specified, either on the command
line, in the environment or via a clouds.yaml file. The "token issue"
command simply outputs the token that is then received instead of
using it as authentication for the "real" action taken by other
commands.

So the option to request a system scope would seem to be
"--os-system-scope all" or the corresponding env var OS_SYSTEM_SCOPE.
And if you do that, the resulting system-scoped token will directly be
used when you issue a command like "openstack server list".

One thing to watch out for, however, is that that option seems to be
silently ignored if the credentials also specify either a project or a
domain. Maybe generating a warning or even an error in that situation
would be a cleaner solution.
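
For completeness, a rough keystoneauth1 sketch of requesting a system-scoped
token programmatically (this assumes the system-scope support referenced
earlier in the thread; the auth URL and credentials are placeholders):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Ask for a system-scoped token instead of a project/domain-scoped one.
    auth = v3.Password(
        auth_url='http://keystone.example.com/identity/v3',  # placeholder
        username='admin',
        password='secret',
        user_domain_id='default',
        system_scope='all',
    )
    sess = session.Session(auth=auth)
    print(sess.get_token())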

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-06-01 Thread Kashyap Chamarthy
On Tue, May 22, 2018 at 01:54:59PM -0500, Dean Troyer wrote:
> StarlingX (aka STX) was announced this week at the summit, there is a
> PR to create project repos in Gerrit at [0]. STX is basically Wind

From a cursory look at the libvirt fork, there are some questionable
choices.  E.g. the config code (libvirt/src/qemu/qemu.conf) is modified
such that QEMU is launched as 'root'.  That means a bug in QEMU ==
instant host compromise.

All Linux distributions (that matter) configure libvirt to launch QEMU
as a regular user ('qemu').  E.g. from Fedora's libvirt RPM spec file:

libvirt.spec:%define qemu_user  qemu
libvirt.spec:   --with-qemu-user=%{qemu_user} \

* * *

There are multiple other such issues in the forked libvirt code.

[...]

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][neutron] Extraroute support

2018-06-01 Thread Rabi Mishra
On Fri, Jun 1, 2018 at 3:57 PM, Lajos Katona 
wrote:

> Hi,
>
> Could somebody help me out with Neutron's Extraroute support in Hot
> templates.
> The support status of the Extraroute is support.UNSUPPORTED in heat, and
> only create and delete are the supported operations.
> see: https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/extraroute.py#LC35
>
>
As I see the unsupported tag was added when the feature was moved from the
> contrib folder to in-tree (https://review.openstack.org/186608)
> Perhaps you can help me out why only create and delete are supported and
> update not.
>
>
I think most of the resources, when moved from contrib to in-tree, were marked
as unsupported. Adding routes to an existing router from multiple stacks can
be racy, which is probably the reason the use of this resource is not
encouraged and hence not supported. You can see the discussion in the original
patch that proposed this resource: https://review.openstack.org/#/c/41044/

Not sure if things have changed on neutron side for us to revisit the
concerns.

Also it does not have any update_allowed properties, hence no
handle_update(). It would be replaced if you change any property.

Hope it helps.
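
To illustrate that last point, a stripped-down sketch of a Heat resource
plugin (made-up names, not the actual ExtraRoute code): only properties
declared with update_allowed=True are passed to handle_update(); changing
any other property causes the resource to be replaced.

    from heat.engine import properties
    from heat.engine import resource

    class ExampleRoute(resource.Resource):
        properties_schema = {
            'destination': properties.Schema(
                properties.Schema.STRING,
                update_allowed=True),       # changes arrive in handle_update()
            'nexthop': properties.Schema(
                properties.Schema.STRING),  # default: change => replacement
        }

        def handle_create(self):
            # Call out to Neutron here.
            pass

        def handle_update(self, json_snippet, tmpl_diff, prop_diff):
            # prop_diff only ever contains update_allowed properties.
            pass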



> Thanks in advance for  the help.
>
> Regards
> Lajos
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] Core team updates

2018-06-01 Thread Tom Barron

Hi all,

Clinton Knight and Valeriy Ponomaryov have been focusing on projects 
outside Manila for some time so I'm removing them from the core team. 

Valeriy and Clinton made great contributions to Manila over the years 
both as reviewers and as contributors.  We are fortunate to have been 
able to work with them and they are certainly welcome back to the core 
team in the future if they return to active reviewing.


Clinton & Valeriy, thank you for your contributions!

-- Tom

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][neutron] Extraroute support

2018-06-01 Thread Lajos Katona

Hi,

Could somebody help me out with Neutron's Extraroute support in Hot 
templates.
The support status of the Extraroute is support.UNSUPPORTED in heat, and 
only create and delete are the supported operations.
see: 
https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/extraroute.py#LC35


As I see the unsupported tag was added when the feature was moved from 
the contrib folder to in-tree (https://review.openstack.org/186608)
Perhaps you can help me out why only create and delete are supported and 
update not.


Thanks in advance for  the help.

Regards
Lajos


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer

2018-06-01 Thread Paul Bourke

+1

On 31/05/18 18:02, Borne Mace wrote:

Greetings all,

I would like to propose the addition of Steve Noyes to the kolla-cli 
core reviewer team.  Consider this nomination as my personal +1.


Steve has a long history with the kolla-cli and should be considered its 
co-creator as probably half or more of the existing code was due to his 
efforts.  He has now been working diligently since it was pushed 
upstream to improve the stability and testability of the cli and has the 
second most commits on the project.


The kolla core team consists of 19 people, and the kolla-cli team of 2, 
for a total of 21.  Steve therefore requires a minimum of 11 votes (so 
just 10 more after my +1), with no veto -2 votes within a 7 day voting 
window to end on June 6th.  Voting will be closed immediately on a veto 
or in the case of a unanimous vote.


As I'm not sure how active all of the 19 kolla cores are, your attention 
and timely vote is much appreciated.


Thanks!

-- Borne


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] CD tangent - was: A culture change (nitpicking)

2018-06-01 Thread Clint Byrum
Quoting Sean McGinnis (2018-05-31 09:54:46)
> On 05/31/2018 03:50 AM, Thierry Carrez wrote:
> > Right... There might be a reasonable middle ground between "every 
> > commit on master must be backward-compatible" and "rip out all 
> > testing" that allows us to routinely revert broken feature commits (as 
> > long as they don't cross a release boundary).
> >
> > To be fair, I'm pretty sure that's already the case: we did revert 
> > feature commits on master in the past, therefore breaking backward 
> > compatibility if someone started to use that feature right away. It's 
> > the issue with implicit rules: everyone interprets them the way they 
> > want... So I think that could use some explicit clarification.
> >
> > [ This tangent should probably gets its own thread to not disrupt the 
> > no-nitpicking discussion ]
> >
> Just one last one on this, then I'm hoping this tangent ends.
> 
> I think what Thierry said is exactly what Dims and I were saying. I'm 
> not sure how that turned into
> the idea of supporting committing broken code. The point (at least mine)
> was just that, if HEAD~4 committed something that we realize was not
> right, we should not have the mindset that "someone might have deployed
> that broken behavior so we need to make sure we don't break them." HEAD
> should always be deployable, just not treated like an official release
> that needs to be maintained.
> 

We are what we test.

We don't test upgrading from one commit to the next. We test upgrading
from the previous stable release. And as such, that's what has to keep
working.

So no, a revert shouldn't ever be subject to "oh no somebody may have
deployed this and you don't revert the db change". That's definitely a
downstream consideration and those who CD things have ways of detecting
and dealing with this on their end. That said, it would be nice if
developers consider this corner case, and try not to make it a huge
mess to unwind.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer

2018-06-01 Thread Christian Berendt
+1

> On 31. May 2018, at 19:02, Borne Mace  wrote:
> 
> Greetings all,
> 
> I would like to propose the addition of Steve Noyes to the kolla-cli core 
> reviewer team.  Consider this nomination as my personal +1.
> 
> Steve has a long history with the kolla-cli and should be considered its 
> co-creator as probably half or more of the existing code was due to his 
> efforts.  He has now been working diligently since it was pushed upstream to 
> improve the stability and testability of the cli and has the second most 
> commits on the project.
> 
> The kolla core team consists of 19 people, and the kolla-cli team of 2, for a 
> total of 21.  Steve therefore requires a minimum of 11 votes (so just 10 more 
> after my +1), with no veto -2 votes within a 7 day voting window to end on 
> June 6th.  Voting will be closed immediately on a veto or in the case of a 
> unanimous vote.
> 
> As I'm not sure how active all of the 19 kolla cores are, your attention and 
> timely vote is much appreciated.
> 
> Thanks!
> 
> -- Borne
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Christian Berendt
Chief Executive Officer (CEO)

Mail: bere...@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-06-01 Thread Kashyap Chamarthy
On Tue, May 22, 2018 at 05:41:18PM -0400, Brian Haley wrote:
> On 05/22/2018 04:57 PM, Jay Pipes wrote:

[...]

> > Please don't take this the wrong way, Dean, but you aren't seriously
> > suggesting that anyone outside of Windriver/Intel would ever contribute
> > to these repos are you?
> > 
> > What motivation would anyone outside of Windriver/Intel -- who must make
> > money on this effort otherwise I have no idea why they are doing it --
> > have to commit any code at all to StarlingX?

Yes, same question as Jay here.

What this product-turned-project (i.e. "Downstream First") is implicitly
asking for is the review time of the upstream community, which is
already at a premium -- for a fork.

> I read this the other way - the goal is to get all the forked code from
> StarlingX into upstream repos.  That seems backwards from how this should
> have been done (i.e. upstream first), and I don't see how a project would
> prioritize that over other work.
> 
> > I'm truly wondering why was this even open-sourced to begin with? I'm as
> > big a supporter of open source as anyone, but I'm really struggling to
> > comprehend the business, technical, or marketing decisions behind this
> > action. Please help me understand. What am I missing?
> 
> I'm just as confused.

Equally stupefied here.

> > My personal opinion is that I don't think that any products, derivatives
> > or distributions should be hosted on openstack.org infrastructure.

Yes, it should be unmistakably clear that contributions to "upstream
Nova", for example, means the 'primary' (this qualifier itself is
redundant) upstream Nova.  No slippery slope such as: "OpenStack-hosted
Nova, but not exactly _that_ OpenStack Nova".

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer

2018-06-01 Thread Mark Goddard
+1

On 1 June 2018 at 08:55, Eduardo Gonzalez  wrote:

> +1
>
> 2018-06-01 8:57 GMT+02:00 Michał Jastrzębski :
>
>> +1 from me:)
>>
>> On Thu, May 31, 2018, 11:40 PM Martin André  wrote:
>>
>>> If Steve wrote half of kolla-cli then it's a no brainer to me. +1!
>>>
>>> On Thu, May 31, 2018 at 7:02 PM, Borne Mace 
>>> wrote:
>>> > Greetings all,
>>> >
>>> > I would like to propose the addition of Steve Noyes to the kolla-cli
>>> core
>>> > reviewer team.  Consider this nomination as my personal +1.
>>> >
>>> > Steve has a long history with the kolla-cli and should be considered
>>> its
>>> > co-creator as probably half or more of the existing code was due to his
>>> > efforts.  He has now been working diligently since it was pushed
>>> upstream to
>>> > improve the stability and testability of the cli and has the second
>>> most
>>> > commits on the project.
>>> >
>>> > The kolla core team consists of 19 people, and the kolla-cli team of
>>> 2, for
>>> > a total of 21.  Steve therefore requires a minimum of 11 votes (so
>>> just 10
>>> > more after my +1), with no veto -2 votes within a 7 day voting window
>>> to end
>>> > on June 6th.  Voting will be closed immediately on a veto or in the
>>> case of
>>> > a unanimous vote.
>>> >
>>> > As I'm not sure how active all of the 19 kolla cores are, your
>>> attention and
>>> > timely vote is much appreciated.
>>> >
>>> > Thanks!
>>> >
>>> > -- Borne
>>> >
>>> >
>>> > 
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer

2018-06-01 Thread Eduardo Gonzalez
+1

2018-06-01 8:57 GMT+02:00 Michał Jastrzębski :

> +1 from me:)
>
> On Thu, May 31, 2018, 11:40 PM Martin André  wrote:
>
>> If Steve wrote half of kolla-cli then it's a no brainer to me. +1!
>>
>> On Thu, May 31, 2018 at 7:02 PM, Borne Mace 
>> wrote:
>> > Greetings all,
>> >
>> > I would like to propose the addition of Steve Noyes to the kolla-cli
>> core
>> > reviewer team.  Consider this nomination as my personal +1.
>> >
>> > Steve has a long history with the kolla-cli and should be considered its
>> > co-creator as probably half or more of the existing code was due to his
>> > efforts.  He has now been working diligently since it was pushed
>> upstream to
>> > improve the stability and testability of the cli and has the second most
>> > commits on the project.
>> >
>> > The kolla core team consists of 19 people, and the kolla-cli team of 2,
>> for
>> > a total of 21.  Steve therefore requires a minimum of 11 votes (so just
>> 10
>> > more after my +1), with no veto -2 votes within a 7 day voting window
>> to end
>> > on June 6th.  Voting will be closed immediately on a veto or in the
>> case of
>> > a unanimous vote.
>> >
>> > As I'm not sure how active all of the 19 kolla cores are, your
>> attention and
>> > timely vote is much appreciated.
>> >
>> > Thanks!
>> >
>> > -- Borne
>> >
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Summit][qa] Vancouver Summit 2018 QA Recap

2018-06-01 Thread Chandan kumar
Hello Ghanshyam,

Thanks for putting this all together. Great summary :-)

On Fri, Jun 1, 2018 at 12:27 PM, Ghanshyam  wrote:
> Hi All,
>
> We had another good Summit in Vancouver  and got good amount of feedback for 
> QA which really important and helpful.
> I am summarizing the QA discussions during Summit.
>
> QA feedback sessions:
> =
> Etherpad: https://etherpad.openstack.org/p/YVR18-forum-qa-ops-user-feedback
> We had good number of people this time and so does more feedback.
>
> Key points, improvement and features requested in QA:
> - AT Cloud QA is by AQuA API which is tooling around upstream tools like 
> Tempest, Patrole, OpenStack Health etc.
> - Tempest, Patrole are widely used tool in Cloud testing. Patrole is being 
> used with 10 Roles in parallel testing on containers.
> - There are few more support needed from Tempest which AT (Doug 
> Schveninger) would like to see in upstream. Few of them are:
> - Better support for LDAP
> - Service available detection for plugins
> - Configure volume_type for Cinder multiple storage types tests
> - more tooling in Tempest like - tempest.conf generator,

For generating tempest.conf, we have python-tempestconf. It might help.

> iproject_generator.py, advance cleanup/Leak detector,

> assembling tempest plugin in a docker container etc

By the beginning of the Rocky cycle, we had added all tempest plugins to the
Kolla tempest container, and it is currently consumed in TripleO CI:
https://hub.docker.com/r/kolla/centos-source-tempest/tags/

It might help.

> - Tempest gabbi support
>
> ACTION ITEM:  gmann to follow up on each requested features and start 
> discussion in separate thread/IRC.
>
> Tagging all the Tempest plugins along with Tempest tag
> =
> Currently, we tag Tempest on release, intermediately or EOL  so that people 
> can use that tag against particular openstack code base/release.  Tempest 
> plugins are not being tagged as such.  So there are difficulty in using 
> plugins with particular Tempest tag in compatible way. We discussed to tag 
> all tempest plugins together everytime Tempest new tag is pushed. While 
> writing this mail, I got to know that dmellado already doing the new tag for 
> kuryr tempest plugin which is what we need.
>
> ACTION ITEM: gmann to start the ML thread to get the broader agreement from 
> each plugins and then define the process and responsible team to tag all 
> plugins and Tempest together.
>
> Patrole
> ==
> This is one of the important project now which is being requested/talked by 
> many people/operator. This was one the item in keystone Default Roles forum 
> session[1] also to start gating patrole on keystone. Below is initial plan I 
> discussed with Felipe:
> - Start gating patrole in keystone with non-voting/experimental job. This one 
> - https://review.openstack.org/#/c/464678/ . Rocky.
> - multi-policy support - Rocky
> - Make  stable release of Patrole. S cycle may be. This include various 
> things about framework stability, plugin support etc
> - Start proposing the Patrole gating on other projects like nova, cinder etc 
> - T Cycle or early if possible.
>
> ACTION ITEM: Felipe to work on above plan and gmann will be helping him on 
> that.
>
> QA onboarding sessions:
> ===
> Etherpad: https://etherpad.openstack.org/p/YVR18-forum-qa-onboarding-vancouver
>
> Around  6-7 people joined which gradually increasing since previous summits 
> :). We started with asking people about their engagement in QA or what they 
> are looking forward from QA.
> Doug Schveninger(AT) talked about his team members who can helps on QA 
> things and the new features/tooling he would like to see in Tempest, Patrole 
> etc. They might not be permanent but it is good to have more people in 
> contribution. QA team will help to get them on-boarded in all perspective. 
> Thanks Doug for your support.
>
> Other item fro this sessions was to have a centralized place (etherpad, 
> document) for all the current feature or working items where we are looking 
> for volunteer like CLI unit tests, schema validation etc. Where we document 
> the enough background and helping material which will help new contributors 
> to start working on those items.
>
> ACTION ITEM:
> - gmann to find the better place to document the working item with enough 
> background for new contributors.
> - Doug to start his team member to get involve in QA.
>
> Extended Maintenance Stable Branch
> =
> During discussion of Extended Maintenance sessions[2], we discussed about 
> testing support of EM branch in QA and we all agreed on below points:
> - QA will keep doing the same number of stable branches support as it is 
> doing now. Means support till "Maintained"  phase branches. EM branch will 
> not be in scope of guaranteed support of QA.
> - As Tempest is branchless, it 

Re: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer

2018-06-01 Thread Michał Jastrzębski
+1 from me:)

On Thu, May 31, 2018, 11:40 PM Martin André  wrote:

> If Steve wrote half of kolla-cli then it's a no brainer to me. +1!
>
> On Thu, May 31, 2018 at 7:02 PM, Borne Mace  wrote:
> > Greetings all,
> >
> > I would like to propose the addition of Steve Noyes to the kolla-cli core
> > reviewer team.  Consider this nomination as my personal +1.
> >
> > Steve has a long history with the kolla-cli and should be considered its
> > co-creator as probably half or more of the existing code was due to his
> > efforts.  He has now been working diligently since it was pushed
> upstream to
> > improve the stability and testability of the cli and has the second most
> > commits on the project.
> >
> > The kolla core team consists of 19 people, and the kolla-cli team of 2,
> for
> > a total of 21.  Steve therefore requires a minimum of 11 votes (so just
> 10
> > more after my +1), with no veto -2 votes within a 7 day voting window to
> end
> > on June 6th.  Voting will be closed immediately on a veto or in the case
> of
> > a unanimous vote.
> >
> > As I'm not sure how active all of the 19 kolla cores are, your attention
> and
> > timely vote is much appreciated.
> >
> > Thanks!
> >
> > -- Borne
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Summit][qa] Vancouver Summit 2018 QA Recap

2018-06-01 Thread Ghanshyam
Hi All,

We had another good Summit in Vancouver and got a good amount of feedback for 
QA, which is really important and helpful. 
I am summarizing the QA discussions during the Summit. 

QA feedback sessions:
=
Etherpad: https://etherpad.openstack.org/p/YVR18-forum-qa-ops-user-feedback
We had a good number of people this time, and so more feedback.

Key points, improvement and features requested in QA:
- AT&T Cloud QA is done via the AQuA API, which is tooling around upstream tools 
like Tempest, Patrole, OpenStack Health etc.
- Tempest and Patrole are widely used tools in Cloud testing. Patrole is being 
used with 10 roles in parallel testing on containers. 
- There is some more support needed from Tempest which AT&T (Doug Schveninger) 
would like to see upstream. A few of them are:
- Better support for LDAP
- Service available detection for plugins
- Configure volume_type for Cinder multiple storage types tests
- more tooling in Tempest, like a tempest.conf generator, 
iproject_generator.py, advanced cleanup/leak detector, assembling tempest 
plugins in a docker container etc
- Tempest gabbi support 

ACTION ITEM:  gmann to follow up on each requested feature and start 
discussion in a separate thread/IRC. 

Tagging all the Tempest plugins along with Tempest tag
=
Currently, we tag Tempest on release, at intermediate points or at EOL so that 
people can use that tag against a particular openstack code base/release.  
Tempest plugins are not being tagged as such, so there is difficulty in using 
plugins with a particular Tempest tag in a compatible way. We discussed tagging 
all tempest plugins together every time a new Tempest tag is pushed. While 
writing this mail, I got to know that dmellado is already doing the new tag for 
the kuryr tempest plugin, which is what we need. 

ACTION ITEM: gmann to start the ML thread to get broader agreement from 
each plugin and then define the process and the responsible team to tag all 
plugins and Tempest together. 

Patrole
==
This is one of the important projects now, being requested/talked about by 
many people/operators. This was also one of the items in the keystone Default 
Roles forum session[1]: to start gating Patrole on keystone. Below is the 
initial plan I discussed with Felipe: 
- Start gating Patrole in keystone with a non-voting/experimental job. This one - 
https://review.openstack.org/#/c/464678/ . Rocky. 
- multi-policy support - Rocky
- Make a stable release of Patrole, maybe in the S cycle. This includes various 
things about framework stability, plugin support etc
- Start proposing the Patrole gating on other projects like nova, cinder etc - 
T cycle or earlier if possible. 

ACTION ITEM: Felipe to work on above plan and gmann will be helping him on 
that. 

QA onboarding sessions:
===
Etherpad: https://etherpad.openstack.org/p/YVR18-forum-qa-onboarding-vancouver 

Around 6-7 people joined, which is gradually increasing since previous summits :). 
We started by asking people about their engagement in QA or what they are 
looking forward to from QA. 
Doug Schveninger (AT&T) talked about his team members who can help on QA things 
and the new features/tooling he would like to see in Tempest, Patrole etc. They 
might not be permanent, but it is good to have more people contributing. The QA 
team will help to get them on-boarded in every respect. Thanks Doug for your 
support. 

Another item from this session was to have a centralized place (etherpad, 
document) for all the current feature or working items where we are looking for 
volunteers, like CLI unit tests, schema validation etc., where we document 
enough background and helping material to help new contributors 
start working on those items. 

ACTION ITEM: 
- gmann to find a better place to document the working items with enough 
background for new contributors. 
- Doug to get his team members involved in QA.

Extended Maintenance Stable Branch
=
During the Extended Maintenance sessions[2], we discussed testing support for 
EM branches in QA and we all agreed on the below points:
- QA will keep supporting the same number of stable branches as it does now, 
meaning support up to the "Maintained" phase branches. EM branches will not be 
in scope of guaranteed QA support. 
- As Tempest is branchless, it should work for EM phase branches also, but if 
any new change breaks EM branch testing then we stop testing master 
Tempest on EM branches. 
Matt has already pushed a patch to document the above agreement [3]. Thanks 
for always doing good documentation :), 

Eris
===
Spec- https://review.openstack.org/#/c/443504/
It came up in the feedback sessions also, and people really want to see some 
progress on this. We have a spec under review for that and need more volunteers 
to drive this forward. I will also check with SamP on this. Other than that, 
there was not much discussion/progress on this at the summit.

ACTION ITEM:  gmann 

Re: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer

2018-06-01 Thread Martin André
If Steve wrote half of kolla-cli then it's a no brainer to me. +1!

On Thu, May 31, 2018 at 7:02 PM, Borne Mace  wrote:
> Greetings all,
>
> I would like to propose the addition of Steve Noyes to the kolla-cli core
> reviewer team.  Consider this nomination as my personal +1.
>
> Steve has a long history with the kolla-cli and should be considered its
> co-creator as probably half or more of the existing code was due to his
> efforts.  He has now been working diligently since it was pushed upstream to
> improve the stability and testability of the cli and has the second most
> commits on the project.
>
> The kolla core team consists of 19 people, and the kolla-cli team of 2, for
> a total of 21.  Steve therefore requires a minimum of 11 votes (so just 10
> more after my +1), with no veto -2 votes within a 7 day voting window to end
> on June 6th.  Voting will be closed immediately on a veto or in the case of
> a unanimous vote.
>
> As I'm not sure how active all of the 19 kolla cores are, your attention and
> timely vote is much appreciated.
>
> Thanks!
>
> -- Borne
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Questions about token scopes

2018-06-01 Thread Ghanshyam Mann
On Thu, May 31, 2018 at 11:24 PM, Lance Bragstad  wrote:
>
>
> On 05/31/2018 12:09 AM, Ghanshyam Mann wrote:
>> On Wed, May 30, 2018 at 11:53 PM, Lance Bragstad  wrote:
>>>
>>> On 05/30/2018 08:47 AM, Matt Riedemann wrote:
 I know the keystone team has been doing a lot of work on scoped tokens
 and Lance has been trying to roll that out to other projects (like nova).

 In Rocky the nova team is adding granular policy rules to the
 placement API [1] which is a good opportunity to set scope on those
 rules as well.

 For now, we've just said everything is system scope since resources in
 placement, for the most part, are managed by "the system". But we do
 have some resources in placement which have project/user information
 in them, so could theoretically also be scoped to a project, like GET
 /usages [2].
>> Just adding that this is the same for nova policy also. As you might know,
>> spec [1] tries to make nova policy more granular but is on hold because of
>> the default roles work. We will do the policy rule split with better
>> default values, like read-only for GET APIs.
>>
>> Along with that, as you mentioned about scope setting for placement
>> policy rules, we need to do the same for nova policy also. That can be
>> done later or together with the nova policy granular spec.
>>
>> [1] https://review.openstack.org/#/c/547850/
>>
 While going through this, I've been hammering Lance with questions but
 I had some more this morning and wanted to send them to the list to
 help spread the load and share the knowledge on working with scoped
 tokens in the other projects.
>>> ++ good idea
>>>
 So here goes with the random questions:

 * devstack has the admin project/user - does that by default get
 system scope tokens? I see the scope is part of the token create
 request [3] but it's optional, so is there a default value if not
 specified?
>>> No, not necessarily. The keystone-manage bootstrap command is what
>>> bootstraps new deployments with the admin user, an admin role, a project
>>> to work in, etc. It also grants the newly created admin user the admin
>>> role on a project and the system. This functionality was added in Queens
>>> [0]. This should be backwards compatible and allow the admin user to get
>>> tokens scoped to whatever they had authorization on previously. The only
>>> thing they should notice is that they have another role assignment on
>>> something called the "system". That being said, they can start
>>> requesting system-scoped tokens from keystone. We have a document that
>>> tries to explain the differences in scopes and what they mean [1].
>> Another related question is: will scope setting impact existing
>> operators? I mean, when policy rules start setting scope, that might
>> break existing operators, as their current token (say project
>> scoped) might not be able to authorize against a policy modified to
>> set system scope.
>>
>> In that case, how are we going to avoid the upgrade break? One way could
>> be to soft-enforce scope for a cycle with a warning and then
>> start enforcing it after one cycle (like we do for any policy rule
>> change)? But I am not sure at this point.
>
> Good question. This was the primary driver behind adding a new
> configuration option to the oslo.policy library called `enforce_scope`
> [0]. This lets operators turn off scope checking while they do a few
> things.
>
> They'll need to audit their users and give administrators of the
> deployment access to the system via a system role assignment (as opposed
> to the 'admin' role on some random project). They also need to ensure
> those people understand the concept of system scope. They might also
> send emails or notifications explaining the incoming changes and why
> they're being done, et cetera. Ideally, this should buy operators time
> to clean things up by reassessing their policy situation with the new
> defaults and scope types before enforcing those constraints. If
> `enforce_scope` is False, then a warning is logged during the
> enforcement check saying something along the lines of "someone used a
> token scoped to X to do something in Y".
>
> [0]
> https://docs.openstack.org/oslo.policy/latest/configuration/index.html#oslo_policy.enforce_scope
>

Thanks Lance, that is what I was looking for, and it defaults to
False, which keeps things safe without a behavior change.

-gmann
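
As a concrete sketch of the knobs discussed above (illustrative only -- the
rule name is made up, not an actual placement/nova policy), a policy default
declares its scope_types, and [oslo_policy]/enforce_scope decides whether a
scope mismatch is fatal or merely logged:

    from oslo_policy import policy

    example_rule = policy.DocumentedRuleDefault(
        name='example:usages:get',          # made-up rule name
        check_str='role:reader',
        description='List usages (illustrative).',
        operations=[{'path': '/usages', 'method': 'GET'}],
        # Only system-scoped tokens satisfy this when enforce_scope=True;
        # with enforce_scope=False a mismatch only logs a warning.
        scope_types=['system'],
    )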
>>
>>> [0] https://review.openstack.org/#/c/530410/
>>> [1] https://docs.openstack.org/keystone/latest/admin/identity-tokens.html
>>>
 * Why don't the token create and show APIs return the scope?
>>> Good question. In a way, they do. If you look at a response when you
>>> authenticate for a token or validate a token, you should see an object
>>> contained within the token reference for the purpose of scope. For
>>> example, a project-scoped token will have a project object in the
>>> response [2]. A domain-scoped token will