Re: [openstack-dev] [nova] Can we deprecate the server backup API please?

2018-11-18 Thread Alex Xu
Sounds reasonable to me; then we also needn't fix this strange behaviour:
https://review.openstack.org/#/c/409644/

Jay Pipes wrote on Sat, Nov 17, 2018 at 3:56 AM:

> The server backup API was added 8 years ago. It has Nova basically
> implementing a poor-man's cron for some unknown reason (probably because
> the original RAX Cloud Servers API had some similar or identical
> functionality, who knows...).
>
> Can we deprecate this functionality please? It's confusing for end users
> to have an `openstack server image create` and `openstack server backup
> create` command where the latter does virtually the same thing as the
> former only sets up some whacky cron-like thing and deletes images after
> some number of rotations.
>
> If a cloud provider wants to offer some backup thing as a service, they
> could implement this functionality separately IMHO, store the user's
> requested cronjob state in their own system (or in glance which is kind
> of how the existing Nova createBackup functionality works), and run a
> simple cronjob executor that ran `openstack server image create` and
> `openstack image delete` as needed.
>
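
A rough sketch of that external executor (server name and rotation count
invented), driving the same CLI commands from a plain cron job:

    import subprocess

    SERVER = 'my-server'  # invented server name
    KEEP = 7              # invented rotation count

    # Snapshot the server (run this from cron, e.g. nightly).
    subprocess.run(['openstack', 'server', 'image', 'create',
                    '--name', SERVER + '-backup', SERVER], check=True)

    # List images oldest-first and delete all but the newest KEEP backups.
    out = subprocess.run(
        ['openstack', 'image', 'list', '-f', 'value', '-c', 'ID', '-c', 'Name',
         '--sort', 'created_at:asc'],
        check=True, capture_output=True, text=True).stdout
    backups = [line.split()[0] for line in out.splitlines()
               if line.endswith(SERVER + '-backup')]
    for image_id in backups[:-KEEP]:
        subprocess.run(['openstack', 'image', 'delete', image_id], check=True)
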
> This is a perfect example of an API that should never have been added to
> the Compute API, in my opinion, and removing it would be a step in the
> right direction if we're going to get serious about cleaning the Compute
> API up.
>
> Thoughts?
> -jay
>


Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread Alex Xu
FYI, in case people missed this spec, there is a spec from John:
https://review.openstack.org/#/c/602201/3/specs/stein/approved/unified-limits-stein.rst@170

The roadmap in this spec also calls for deprecating the quota-class API.

melanie witt wrote on Thu, Oct 25, 2018 at 3:54 AM:

> On Wed, 24 Oct 2018 13:57:05 -0500, Matt Riedemann wrote:
> > On 10/24/2018 10:10 AM, Jay Pipes wrote:
> >> I'd like to propose deprecating this API and getting rid of this
> >> functionality since it conflicts with the new Keystone /limits endpoint,
> >> is highly coupled with RAX's turnstile middleware and I can't seem to
> >> find anyone who has ever used it. Deprecating this API and functionality
> >> would make the transition to a saner quota management system much easier
> >> and straightforward.
> > I was trying to do this before it was cool:
> >
> > https://review.openstack.org/#/c/411035/
> >
> > I think it was the Pike PTG in ATL where people said, "meh, let's just
> > wait for unified limits from keystone and let this rot on the vine".
> >
> > I'd be happy to restore and update that spec.
>
> Yeah, we were thinking the presence of the API and code isn't harming
> anything and sometimes we talk about situations where we could use them.
>
> Quota classes come up occasionally whenever we talk about preemptible
> instances. Example: we could create and use a quota class "preemptible"
> and decorate preemptible flavors with that quota_class in order to give
> them unlimited quota. There's also talk of quota classes in the "Count
> quota based on resource class" spec [1] where we could have leveraged
> quota classes to create and enforce quota limits per custom resource
> class. But I think the consensus there was to hold off on quota by
> custom resource class until we migrate to unified limits and oslo.limit.
>
> So, I think my concern in removing the internal code that is capable of
> enforcing quota limit per quota class is the preemptible instance use
> case. I don't have my mind wrapped around if/how we could solve it using
> unified limits yet.
>
> And I was just thinking, if we added a project_id column to the
> quota_classes table and correspondingly added it to the
> os-quota-class-sets API, we could pretty simply implement quota by
> flavor, which is a feature operators like Oath need. An operator could
> create a quota class limit per project_id and then decorate flavors with
> quota_class to enforce them per flavor.
>
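
Purely as a hedged sketch of that idea (nothing below is existing nova
schema or API; every name is invented):

    # A per-project quota class row, once quota_classes grows a project_id
    # column, could bound instances of a decorated flavor per project.
    quota_class_limit = {
        'class_name': 'gold-flavor',   # invented quota class
        'project_id': 'PROJECT_UUID',  # the proposed new column
        'resource': 'instances',
        'hard_limit': 5,
    }
    # The flavor would then be decorated to point at that class.
    flavor_extra_specs = {'quota:class': 'gold-flavor'}  # invented key
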
> I recognize that maybe it would be too confusing to solve use cases with
> quota classes given that we're going to migrate to unified limits. At the
> same time, I'm hesitant to close the door on a possibility before we
> have some idea about how we'll solve them without quota classes. Has
> anyone thought about how we can solve the use cases with unified limits
> for things like preemptible instances and quota by flavor?
>
> Cheers,
> -melanie
>
> [1] https://review.openstack.org/569011
>
>
>
>


Re: [openstack-dev] [nova] [ironic] agreement on how to specify options that impact scheduling and configuration

2018-10-07 Thread Alex Xu
Jay Pipes wrote on Fri, Oct 5, 2018 at 9:25 PM:

> Added [ironic] topic.
>
> On 10/04/2018 06:06 PM, Chris Friesen wrote:
> > While discussing the "Add HPET timer support for x86 guests"
> > blueprint[1] one of the items that came up was how to represent what are
> > essentially flags that impact both scheduling and configuration.  Eric
> > Fried posted a spec to start a discussion[2], and a number of nova
> > developers met on a hangout to hash it out.  This is the result.
> >
> > In this specific scenario the goal was to allow the user to specify that
> > their image required a virtual HPET.  For efficient scheduling we wanted
> > this to map to a placement trait, and the virt driver also needed to
> > enable the feature when booting the instance.  (This can be generalized
> > to other similar problems, including how to specify scheduling and
> > configuration information for Ironic.)
> >
> > We discussed two primary approaches:
> >
> > The first approach was to specify an arbitrary "key=val" in flavor
> > extra-specs or image properties, which nova would automatically
> > translate into the appropriate placement trait before passing it to
> > placement.  Once scheduled to a compute node, the virt driver would look
> > for "key=val" in the flavor/image to determine how to proceed.
> >
> > The second approach was to directly specify the placement trait in the
> > flavor extra-specs or image properties.  Once scheduled to a compute
> > node, the virt driver would look for the placement trait in the
> > flavor/image to determine how to proceed.
> >
> > Ultimately, the decision was made to go with the second approach.  The
> > result is that it is officially acceptable for virt drivers to key off
> > placement traits specified in the image/flavor in order to turn on/off
> > configuration options for the instance.  If we do get down to the virt
> > driver and the trait is set, and the driver for whatever reason
> > determines it's not capable of flipping the switch, it should fail.
>
> Ironicers, pay attention to the above! :) It's a green light from Nova
> to use the traits list contained in the flavor extra specs and image
> metadata when (pre-)configuring an instance.
>
> > It should be noted that it only makes sense to use placement traits for
> > things that affect scheduling.  If it doesn't affect scheduling, then it
> > can be stored in the flavor extra-specs or image properties separate
> > from the placement traits.  Also, this approach only makes sense for
> > simple booleans.  Anything requiring more complex configuration will
> > likely need additional extra-spec and/or config and/or unicorn dust.
>
> Ironicers, also pay close attention to the advice above. Things that are
> not "scheduleable" -- in other words, things that don't filter the list
> of hosts that a workload can land on -- should not go in traits.
>
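
For concreteness: under this decision, requesting the feature is just a
matter of putting the required trait on the flavor or image. For the HPET
case that would presumably be the extra spec
trait:COMPUTE_TIME_HPET=required on a flavor (e.g. via `openstack flavor
set --property trait:COMPUTE_TIME_HPET=required <flavor>`), or the same
trait property on an image, with COMPUTE_TIME_HPET being the trait added
by the os-traits patch linked below.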

++, see where I talked about the same thing before:
https://review.openstack.org/#/c/504952/5/specs/approved/config-template-traits.rst@95
:)

>
> Finally, here's the HPET os-traits patch. Reviews welcome (it's a tiny
> patch):
>
> https://review.openstack.org/608258
>
> Best,
> -jay
>
> > Chris
> >
> > [1] https://blueprints.launchpad.net/nova/+spec/support-hpet-on-guest
> > [2]
> >
> https://review.openstack.org/#/c/607989/1/specs/stein/approved/support-hpet-on-guest.rst
> >
> >
> >


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Alex Xu
Sorry for appending another email with something I missed saying.

Alex Xu wrote on Sat, Sep 29, 2018 at 10:01 AM:

>
>
> Jay Pipes wrote on Sat, Sep 29, 2018 at 5:51 AM:
>
>> On 09/28/2018 04:42 PM, Eric Fried wrote:
>> > On 09/28/2018 09:41 AM, Balázs Gibizer wrote:
>> >> On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried 
>> wrote:
>> >>> It's time somebody said this.
>> >>>
>> >>> Every time we turn a corner or look under a rug, we find another use
>> >>> case for provider traits in placement. But every time we have to have
>> >>> the argument about whether that use case satisfies the original
>> >>> "intended purpose" of traits.
>> >>>
>> >>> That's the only reason I've ever been able to glean: that it (whatever
>> "it"
>> >>> is) wasn't what the architects had in mind when they came up with the
>> >>> idea of traits. We're not even talking about anything that would
>> require
>> >>> changes to the placement API. Just, "Oh, that's not a *capability* -
>> >>> shut it down."
>> >>>
>> >>> Bubble wrap was originally intended as a textured wallpaper and a
>> >>> greenhouse insulator. Can we accept the fact that traits have (many,
>> >>> many) uses beyond marking capabilities, and quit with the arbitrary
>> >>> restrictions?
>> >>
>> >> How far are we willing to go? Is an arbitrary (key: value) pair
>> >> encoded in a trait name like key_`str(value)` (e.g.
>> CURRENT_TEMPERATURE:
>> >> 85 encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see in
>> >> placement?
>> >
>> > Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but
>> > TEMPERATURE_ is not.
>>
>> That's correct, because you're encoding >1 piece of information into the
>> single string (the fact that it's a temperature *and* the value of that
>> temperature are the two pieces of information encoded into the single
>> string).
>>
>> Now that there's multiple pieces of information encoded in the string
>> the reader of the trait string needs to know how to decode those bits of
>> information, which is exactly what we're trying to avoid doing (because
>> we can see from the ComputeCapabilitiesFilter, the extra_specs mess, and
>> the giant hairball that is the NUMA and CPU pinning "metadata requests"
>> how that turns out).
>>
>
> May I check my understanding: is one of Jay's complaints that the metadata
> API is undiscoverable? That is, the extra_specs mess and the
> ComputeCapabilitiesFilter mess?
>

If so, then we resolved the discoverability problem with the /traits API.


>
> The other complaint is about encoding information in the string. Agreed
> that TEMPERATURE_ is terrible.
> I now prefer the approach I used in the nvdimm proposal: I don't want to
> use traits like NVDIMM_DEVICE_500GB and NVDIMM_DEVICE_1024GB. I want to
> put the devices into different resource providers and use min_size and
> max_size to limit the allocation. The user would then request a resource
> class like RC_NVDIMM_GB=512.
>
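
A hedged sketch of that quoted approach against the placement REST API
(endpoint, token, and provider UUID invented; note custom resource classes
take a CUSTOM_ prefix and the inventory fields are spelled
min_unit/max_unit):

    import requests

    PLACEMENT = 'http://placement.example:8778'  # invented endpoint
    HEADERS = {'x-auth-token': 'ADMIN_TOKEN',    # invented token
               'openstack-api-version': 'placement 1.29'}
    RP = 'NVDIMM_RP_UUID'                        # invented provider UUID

    # Model the capacity as inventory of a custom resource class; min_unit
    # and max_unit bound what a single allocation may consume.
    requests.put('%s/resource_providers/%s/inventories' % (PLACEMENT, RP),
                 headers=HEADERS,
                 json={'resource_provider_generation': 0,
                       'inventories': {'CUSTOM_NVDIMM_GB': {
                           'total': 1024, 'min_unit': 128, 'max_unit': 1024}}})

    # The user request is then quantitative; no size is encoded in any
    # trait name.
    requests.get('%s/allocation_candidates' % PLACEMENT, headers=HEADERS,
                 params={'resources': 'CUSTOM_NVDIMM_GB:512'})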

TEMPERATURE_ is wrong because of the way it is used. But I don't think a
BIOS-version trait is wrong: I don't expect the end user to read the
information from the trait directly; the admin should provide documentation
explaining it. The BIOS version only needs to be something the admin
understands, and that is enough.


>
>>
>> > This thread isn't about setting these parameters; it's about getting
>> > us to a point where we can discuss a question just like this one
>> > without running up against: >
>> > "That's a hard no, because you shouldn't encode key/value pairs in
>> traits."
>> >
>> > "Oh, why's that?"
>> >
>> > "Because that's not what we intended when we created traits."
>> >
>> > "But it would work, and the alternatives are way harder."
>> >
>> > "-1"
>> >
>> > "But..."
>> >
>> > "-I
>>
>> I believe I've articulated a number of times why traits should remain
>> unary pieces of information, and not just said "because that's what we
>> intended when we created traits".
>>
>> I'm tough on this because I've seen the garbage code and unmaintainable
>> mess that not having structurally sound data modeling concepts and
>> information interpretation rules leads to in Nova and I don't want to
>> encourage any more of it.
>>
>> -jay
>>
>>


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Alex Xu
Jay Pipes wrote on Sat, Sep 29, 2018 at 5:51 AM:

> On 09/28/2018 04:42 PM, Eric Fried wrote:
> > On 09/28/2018 09:41 AM, Balázs Gibizer wrote:
> >> On Fri, Sep 28, 2018 at 3:25 PM, Eric Fried  wrote:
> >>> It's time somebody said this.
> >>>
> >>> Every time we turn a corner or look under a rug, we find another use
> >>> case for provider traits in placement. But every time we have to have
> >>> the argument about whether that use case satisfies the original
> >>> "intended purpose" of traits.
> >>>
> >>> That's the only reason I've ever been able to glean: that it (whatever "it"
> >>> is) wasn't what the architects had in mind when they came up with the
> >>> idea of traits. We're not even talking about anything that would
> require
> >>> changes to the placement API. Just, "Oh, that's not a *capability* -
> >>> shut it down."
> >>>
> >>> Bubble wrap was originally intended as a textured wallpaper and a
> >>> greenhouse insulator. Can we accept the fact that traits have (many,
> >>> many) uses beyond marking capabilities, and quit with the arbitrary
> >>> restrictions?
> >>
> >> How far are we willing to go? Is an arbitrary (key: value) pair
> >> encoded in a trait name like key_`str(value)` (e.g. CURRENT_TEMPERATURE:
> >> 85 encoded as CUSTOM_TEMPERATURE_85) something we would be OK to see in
> >> placement?
> >
> > Great question. Perhaps TEMPERATURE_DANGEROUSLY_HIGH is okay, but
> > TEMPERATURE_ is not.
>
> That's correct, because you're encoding >1 piece of information into the
> single string (the fact that it's a temperature *and* the value of that
> temperature are the two pieces of information encoded into the single
> string).
>
> Now that there's multiple pieces of information encoded in the string
> the reader of the trait string needs to know how to decode those bits of
> information, which is exactly what we're trying to avoid doing (because
> we can see from the ComputeCapabilitiesFilter, the extra_specs mess, and
> the giant hairball that is the NUMA and CPU pinning "metadata requests"
> how that turns out).
>

May I check my understanding: is one of Jay's complaints that the metadata
API is undiscoverable? That is, the extra_specs mess and the
ComputeCapabilitiesFilter mess?

The other complaint is about encoding information in the string. Agreed
that TEMPERATURE_ is terrible.
I now prefer the approach I used in the nvdimm proposal: I don't want to
use traits like NVDIMM_DEVICE_500GB and NVDIMM_DEVICE_1024GB. I want to
put the devices into different resource providers and use min_size and
max_size to limit the allocation. The user would then request a resource
class like RC_NVDIMM_GB=512.


>
> > This thread isn't about setting these parameters; it's about getting
> > us to a point where we can discuss a question just like this one
> > without running up against: >
> > "That's a hard no, because you shouldn't encode key/value pairs in
> traits."
> >
> > "Oh, why's that?"
> >
> > "Because that's not what we intended when we created traits."
> >
> > "But it would work, and the alternatives are way harder."
> >
> > "-1"
> >
> > "But..."
> >
> > "-I
>
> I believe I've articulated a number of times why traits should remain
> unary pieces of information, and not just said "because that's what we
> intended when we created traits".
>
> I'm tough on this because I've seen the garbage code and unmaintainable
> mess that not having structurally sound data modeling concepts and
> information interpretation rules leads to in Nova and I don't want to
> encourage any more of it.
>
> -jay
>
>


Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-09-28 Thread Alex Xu
Chris Dent wrote on Sat, Sep 29, 2018 at 1:19 AM:

> On Fri, 28 Sep 2018, Jay Pipes wrote:
>
> > On 09/28/2018 09:25 AM, Eric Fried wrote:
> >> It's time somebody said this.
>
> Yes, a useful topic, I think.
>

++, I'm interested in this topic too, since it has confused me for a long time...


>
> >> Every time we turn a corner or look under a rug, we find another use
> >> case for provider traits in placement. But every time we have to have
> >> the argument about whether that use case satisfies the original
> >> "intended purpose" of traits.
> >>
> >> That's the only reason I've ever been able to glean: that it (whatever "it"
> >> is) wasn't what the architects had in mind when they came up with the
> >> idea of traits.
> >
> > Don't pussyfoot around things. It's me you're talking about, Eric. You
> could
> > just ask me instead of passive-aggressively posting to the list like
> this.
>
> It's not just you. Ed and I have also expressed some fairly strong
> statement about how traits are "supposed" to be used and I would
> guess that from Eric's perspective all three of us (amongst others)
> have some form of architectural influence. Since it takes a village
> and all that.
>
> > They aren't arbitrary. They are there for a reason: a trait is a boolean
> > capability. It describes something that either a provider is capable of
> > supporting or it isn't.
>
> This is somewhat (maybe even only slightly) different from what I
> think the definition of a trait is, and that nuance may be relevant.
>
> I describe a trait as a "quality that a resource provider has" (the
> car is blue). This contrasts with a resource class which is a
> "quantity that a resource provider has" (the car has 4 doors).
>
>
Yes, this is what I was thinking when I proposed traits. Basically, I was
trying to address two points in the proposal: #1 we need a qualitative
description of resources; #2 we don't want another metadata API, since a
metadata API isn't discoverable and is a wild place where people put
anything into it. Nobody knows what metadata is available except by digging
into the code.

For #1, it is just as Chris said.
For #2, you have to create a trait before using it, and we have an API to
query traits, which makes them discoverable. Standard traits follow naming
rules, and, as Jay suggested, we have the os-traits library to store all
the standard traits. But we also have to have custom traits, since there
are use cases for managing resources outside of OpenStack.



> Our implementation is pretty much exactly that ^. We allow
> clients to ask "give me things that have qualities x, y, z, not
> qualities a, b, c, and quantities of G of 5 and H of 7".
>
> Add in aggregates and we have exactly what you say:
>
> > * Does the provider have *capacity* for the requested resources?
> > * Does the provider have the required (or forbidden) *capabilities*?
> > * Does the provider belong to some group?
>
> The nuance of difference is that your description of *capabilities*
> seems more narrow than my description of *qualities* (aka
> characteristics). You've got something fairly specific in mind, as a
> way of constraining the profusion of noise that has happened with
> how various kinds of information about resources of all sorts is
> managed in OpenStack, as you describe in your message.
>
> I do not think it should be placement's job to control that noise.
> It should be placement's job to provide a very strict contract about
> what you can do with a trait:
>
> * create it, if necessary
> * assign it to one or more resource providers
> * ask for providers that either have it
> * ... or do not have it
>
> That's all. Placement _code_ should _never_ be aware of the value of
> a trait (except for the magical MISC_SHARES...). It should never
> become possible to regex on traits or do comparisons
> (required=

++
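
A hedged sketch of exactly that create / assign / query contract against
the placement REST API (endpoint, token, and provider UUID invented; the
forbidden "!" syntax is the one microversion 1.22 added):

    import requests

    PLACEMENT = 'http://placement.example:8778'  # invented endpoint
    HEADERS = {'x-auth-token': 'ADMIN_TOKEN',    # invented token
               'openstack-api-version': 'placement 1.22'}
    RP = 'PROVIDER_UUID'                         # invented provider UUID

    # Create it, if necessary (custom traits must be CUSTOM_ prefixed).
    requests.put('%s/traits/CUSTOM_BLUE' % PLACEMENT, headers=HEADERS)

    # Assign it to one or more resource providers.
    requests.put('%s/resource_providers/%s/traits' % (PLACEMENT, RP),
                 headers=HEADERS,
                 json={'resource_provider_generation': 0,
                       'traits': ['CUSTOM_BLUE']})

    # Ask for providers that either have it...
    requests.get('%s/allocation_candidates' % PLACEMENT, headers=HEADERS,
                 params={'resources': 'VCPU:1', 'required': 'CUSTOM_BLUE'})
    # ...or do not have it.
    requests.get('%s/allocation_candidates' % PLACEMENT, headers=HEADERS,
                 params={'resources': 'VCPU:1', 'required': '!CUSTOM_BLUE'})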


>
> > If we want to add further constraints to the placement allocation
> candidates
> > request that ask things like:
> >
> > * Does the provider have version 1.22.61821 of BIOS firmware from
> Marvell
> > installed on it?
>
> That's a quality of the provider in a moment.
>
> > * Does the provider support an FPGA that has had an OVS program flashed
> to it
> > in the last 20 days?
>
> If you squint, so is this.
>
> > * Does the provider belong to physical network "corpnet" and also
> support
> > creation of virtual NICs of type either "DIRECT" or "NORMAL"?
>
> And these.
>
> But at least some of them are dynamic rather than some kind of
> platonic ideal associated with the resource provider.
>
> I don't think placement should be concerned about temporal aspects
> of traits. If we can't write a web service that can handle setting
> lots of traits every second of every day, we should go home. If
> clients of placement want to set weird traits, more power to them.
>
> However, if clients of placement (such as nova) which are being the
> orchestrator of resource providers manipulated by multiple systems
> (neutron, cinder, ironic, cyborg, etc) wish to set some constraints
> on how and what traits ca

Re: [openstack-dev] [nova] When can/should we change additionalProperties=False in GET /servers(/detail)?

2018-09-17 Thread Alex Xu
That only means that after 599276, only the servers API and the
os-instance-actions API stop accepting undefined query parameters.

What I'm thinking about is auditing all the APIs and adding JSON-schema
query-parameter checking with additionalProperties=True to any API that
doesn't have it yet, and then using another microversion to set
additionalProperties to False, so the whole Nova API becomes consistent.
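
A minimal sketch of the two behaviours with the jsonschema library (the
parameter name follows the changes-before patch; 'bogus' is invented):

    import jsonschema

    schema = {
        'type': 'object',
        'properties': {'changes-before': {'type': 'string'}},
        # True: undefined query parameters are silently ignored.
        'additionalProperties': True,
    }
    params = {'changes-before': '2018-09-17T00:00:00Z', 'bogus': 'x'}

    jsonschema.validate(params, schema)   # passes; 'bogus' is tolerated

    schema['additionalProperties'] = False
    jsonschema.validate(params, schema)   # now raises ValidationError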

Jay Pipes wrote on Tue, Sep 18, 2018 at 4:07 AM:

> On 09/17/2018 03:28 PM, Matt Riedemann wrote:
> > This is a question from a change [1] which adds a new changes-before
> > filter to the servers, os-instance-actions and os-migrations APIs.
> >
> > For context, the os-instance-actions API stopped accepting undefined
> > query parameters in 2.58 when we added paging support.
> >
> > The os-migrations API stopped allowing undefined query parameters in
> > 2.59 when we added paging support.
> >
> > The open question on the review is if we should change GET /servers and
> > GET /servers/detail to stop allowing undefined query parameters starting
> > with microversion 2.66 [2]. Apparently when we added support for 2.5 and
> > 2.26 for listing servers we didn't think about this. It means that a
> > user can specify a query parameter, documented in the API reference, but
> > with an older microversion and it will be silently ignored. That is
> > backward compatible but confusing from an end user perspective since it
> > would appear to them that the filter is not being applied, when it fact
> > it would be if they used the correct microversion.
> >
> > So do we want to start enforcing query parameters when listing servers
> > to our defined list with microversion 2.66 or just continue to silently
> > ignore them if used incorrectly?
> >
> > Note that starting in Rocky, the Neutron API will start rejecting
> > unknown query parameters [3] if the filter-validation extension is
> > enabled (since Neutron doesn't use microversions). So there is some
> > precedent in OpenStack for starting to enforce query parameters.
> >
> > [1] https://review.openstack.org/#/c/599276/
> > [2]
> >
> https://review.openstack.org/#/c/599276/23/nova/api/openstack/compute/schemas/servers.py
> >
> > [3]
> > https://docs.openstack.org/releasenotes/neutron/rocky.html#upgrade-notes
>
> My vote would be just change additionalProperties to False in the 599276
> patch and be done with it.
>
> Add a release note about the change, of course.
>
> -jay
>


Re: [openstack-dev] [election][tc]Question for candidates about global reachout

2018-09-16 Thread Alex Xu
Fred Li wrote on Mon, Sep 17, 2018 at 8:25 AM:

> There are many WeChat groups about OpenStack; some of them are regional
> (like the southeastern China, Beijing, and Xi'an groups), some are event
> oriented, and some are for other purposes. Yes, you need to be invited,
> which is not convenient. So far as I know there is no nova group, but
> maybe Alex knows of one.
>

No, I don't know of any nova group.


> Thanks, I will invite you to 1 or 2 active groups.
>
> On Sun, Sep 16, 2018 at 11:33 PM, Matt Riedemann 
> wrote:
>
>> On 9/15/2018 9:50 PM, Fred Li wrote:
>>
>>> As a non-native English speaker, it is nice to have some TC or BoD
>>> members stay on local social media, like the WeChat groups in China. But
>>> it is also very difficult for non-native Chinese speakers to find useful
>>> information in a ton of Chinese chats.
>>> My thoughts (even though I am not a TC candidate) on this are:
>>> 1. it is kind of you to stay in the local groups.
>>> 2. if we know that you are in, we will speak English when we want you to
>>> notice.
>>> 3. since there is a local OpenStack operation manager, hopefully he/she
>>> can identify some information and help to translate, or remind people to
>>> translate.
>>>
>>> My one cent.
>>>
>>
>> Is there a generic openstack group on wechat? Does one have to be invited
>> to it? Is there a specific openstack/nova group on wechat? I'm on wechat
>> anyway so I don't mind being in those groups if someone wants to reach out.
>>
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>
>
>
> --
> Regards
> Fred Li (李永乐)


Re: [openstack-dev] [election][tc]Question for candidates about global reachout

2018-09-16 Thread Alex Xu
I'm happy to be the translator or forwarder for nova issues if you need one
(although the nova team isn't happy with me right now; I also see it isn't
personal, and I guess it won't make my other work harder). I can see a lot
of Chinese operators/users complaining about some issues, but they never
send their feedback to the mailing list. This may be due to the language
barrier, or because people don't know the open-source culture in China. (To
be honest, OpenStack is the first project that let a lot of developers here
understand what open source is and how it works. Before that, since the
Linux kernel is hard, really only a few people in China had experienced
open source.)




Matt Riedemann wrote on Sun, Sep 16, 2018 at 11:34 PM:

> On 9/15/2018 9:50 PM, Fred Li wrote:
> > As a non-native English speaker, it is nice to have some TC or BoD
> > members stay on local social media, like the WeChat groups in China. But
> > it is also very difficult for non-native Chinese speakers to find useful
> > information in a ton of Chinese chats.
> > My thoughts (even though I am not a TC candidate) on this are:
> > 1. it is kind of you to stay in the local groups.
> > 2. if we know that you are in, we will speak English when we want you to
> > notice.
> > 3. since there is a local OpenStack operation manager, hopefully he/she
> > can identify some information and help to translate, or remind people to
> > translate.
> >
> > My one cent.
>
> Is there a generic openstack group on wechat? Does one have to be
> invited to it? Is there a specific openstack/nova group on wechat? I'm
> on wechat anyway so I don't mind being in those groups if someone wants
> to reach out.
>
> --
>
> Thanks,
>
> Matt
>


Re: [openstack-dev] Nominating Chris Dent for placement-core

2018-09-04 Thread Alex Xu
+1

Eric Fried wrote on Fri, Aug 31, 2018 at 11:45 PM:

> The openstack/placement project [1] and its core team [2] have been
> established in gerrit.
>
> I hereby nominate Chris Dent for membership in the placement-core team.
> He has been instrumental in the design, implementation, and stewardship
> of the placement API since its inception and has shown clear and
> consistent leadership.
>
> As we are effectively bootstrapping placement-core at this time, it
> would seem appropriate to consider +1/-1 responses from heavy placement
> contributors as well as existing cores (currently nova-core).
>
> [1] https://review.openstack.org/#/admin/projects/openstack/placement
> [2] https://review.openstack.org/#/admin/groups/1936,members
>


Re: [openstack-dev] [nova] [placement] extraction (technical) update

2018-08-28 Thread Alex Xu
2018-08-27 23:31 GMT+08:00 Matt Riedemann :

> On 8/24/2018 7:36 AM, Chris Dent wrote:
>
>>
>> Over the past few days a few of us have been experimenting with
>> extracting placement to its own repo, as has been discussed at
>> length on this list, and in some etherpads:
>>
>> https://etherpad.openstack.org/p/placement-extract-stein
>> https://etherpad.openstack.org/p/placement-extraction-file-notes
>>
>> As part of that, I've been doing some exploration to tease out the
>> issues we're going to hit as we do it. None of this is work that
>> will be merged, rather it is stuff to figure out what we need to
>> know to do the eventual merging correctly and efficiently.
>>
>> Please note that doing that is just the near edge of a large
>> collection of changes that will cascade in many ways to many
>> projects, tools, distros, etc. The people doing this are aware of
>> that, and the relative simplicity (and fairly immediate success) of
>> these experiments is not misleading people into thinking "hey, no
>> big deal". It's a big deal.
>>
>> There's a strategy now (described at the end of the first etherpad
>> listed above) for trimming the nova history to create a thing which
>> is placement. From the first run of that Ed created a github repo
>> and I branched that to eventually create:
>>
>> https://github.com/EdLeafe/placement/pull/2
>>
>> In that, all the placement unit and functional tests are now
>> passing, and my placecat [1] integration suite also passes.
>>
>> That work has highlighted some gaps in the process for trimming
>> history which will be refined to create another interim repo. We'll
>> repeat this until the process is smooth, eventually resulting in an
>> openstack/placement.
>>
>
> We talked about the github strategy a bit in the placement meeting today
> [1]. Without being involved in this technical extraction work for the past
> few weeks, I came in with a different perspective on the end-game, and it
> was not aligned with what Chris/Ed thought as far as how we get to the
> official openstack/placement repo.
>
> At a high level, Ed's repo [2] is a fork of nova with large changes on top
> using pull requests to do things like remove the non-placement nova files,
> update import paths (because the import structure changes from
> nova.api.openstack.placement to just placement), and then changes from
> Chris [3] to get tests working. Then the idea was to just use that to seed
> the openstack/placement repo and rather than review the changes along the
> way*, people that care about what changed (like myself) would see the tests
> passing and be happy enough.
>
> However, I disagree with this approach since it bypasses our community
> code review system of using Gerrit and relying on a core team to approve
> changes at the sake of expediency.
>
> What I would like to see are the changes that go into making the seed repo
> and what gets it to passing tests done in gerrit like we do for everything
> else. There are a couple of options on how this is done though:
>
> 1. Seed the openstack/placement repo with the filter_git_history.sh script
> output as Ed has done here [4]. This would include moving the placement
> files to the root of the tree and dropping nova-specific files. Then make
> incremental changes in gerrit like with [5] and the individual changes
> which make up Chris's big pull request [3]. I am primarily interested in
> making sure there are not content changes happening, only mechanical
> tree-restructuring type changes, stuff like that. I'm asking for more
> changes in gerrit so they can be sanely reviewed (per normal).
>
> 2. Eric took a slightly different tack in that he's OK with just a couple
> of large changes (or even large patch sets within a single change) in
> gerrit rather than ~30 individual changes. So that would be more like at
> most 3 changes in gerrit for [4][5][3].
>
> 3. The 3rd option is we just don't use gerrit at all and seed the official
> repo with the results of Chris and Ed's work in Ed's repo in github.
> Clearly this would be the fastest way to get us to a new repo (at the
> expense of bucking community code review and development process - is an
> exception worth it?).
>
> Option 1 would clearly be a drain on at least 2 nova cores to go through
> the changes. I think Eric is on board for reviewing options 1 or 2 in
> either case, but he prefers option 2. Since I'm throwing a wrench in the
> works, I also need to stand up and review the changes if we go with option
> 1 or 2. Jay said he'd review them but consider these reviews lower
> priority. I expect we could get some help from some other nova cores
> though, maybe not on all changes, but at least some (thinking gibi,
> alex_xu, sfinucan).
>

I can help some. And yes, small changes are better than one huge change.


>
> Any CI jobs would be non-voting while going through options 1 or 2 until
> we get to a point that tests should finally be passing and we can make them
> voting (it should be possible to contro

Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-24 Thread Alex Xu
> fully exercised and bugs and performance problems found.
>
> The projects above, which might like to--and at various times have
> expressed desire to do so--work on features within placement that
> would benefit their projects, are forced to compete with existing
> priorities to get blueprint attention. Though runways seemed to help
> a bit on that front this just-ending cycle, it's simply too dense a
> competitive environment for good, clean progress.
>
> 4. While extracting the placement code into another repo within the
> compute umbrella might help a small amount with some of the
> competition described in item 3, it would be insufficient. The same
> forces would apply.
>
> Similarly, _if_ there are factors which are preventing some people
> from being willing to participate with a compute-associated project,
> a repo within compute is an insufficient break.
>
> Also, if we are going to go to the trouble of doing any kind of
> disrupting transition of the placement code, we may as well take as
> a big a step as possible in this one instance as these opportunities
> are rare and our capacity for change is slow. I started working on
> placement in early 2016, at that time we had plans to extract it to
> "it's own thing". We've passed the half-way point in 2018.
>
> 5. In OpenStack we have a tradition of the contributors having a
> strong degree of self-determination. If that tradition is to be
> upheld, then it would make sense that the people who designed and
> wrote the code that is being extracted would get to choose what
> happens with it. As much as Mel's and Dan's (only picking on them
> here because they are the dissenting voices that have showed up so
> far) input has been extremely important and helpful in the evolution
> of placement, they are not those people.
>
> So my hope is that (in no particular order) Jay Pipes, Eric Fried,
> Takashi Natsume, Tetsuro Nakamura, Matt Riedemann, Andrey Volkov,
> Alex Xu, Balazs Gibizer, Ed Leafe, and any other contributor to
> placement whom I'm forgetting [1] would express their preference on
> what they'd like to see happen.
>

Sorry, I didn't read all the replies; compared to reading 70 replies, I
prefer to review some specs... English is heavy for me.

I don't care much about the extraction itself. But in the current
situation, I think placement contributors and nova contributors still need
to work together; the reshape API is an example. So whether we extract
placement or not, nova and placement will surely have to work together.

And I really hope we won't have separate rooms at the PTG for placement and
nova... I don't want to have to make a hard choice about which one to
listen to... I'm already used to staying in one spot for the week.


>
> At the same time, if people from neutron, cinder, blazar, zun,
> mogan, ironic, and cyborg could express their preferences, we can get
> through this by acclaim and get on with getting things done.
>
> Thank you.
>
> [1] My apologies if I have left you out. It's Saturday, I'm tired
> from trying to make this happen for so long, and I'm using various
> forms of git blame and git log to extract names from the git history
> and there's some degree of magic and guessing going on.
>
>
> --
> Chris Dent   ٩◔̯◔۶   https://anticdent.org/
> freenode: cdent tw: @anticdent
>


Re: [openstack-dev] [Nova] A multi-cell instance-list performance test

2018-08-19 Thread Alex Xu
2018-08-17 2:44 GMT+08:00 Dan Smith :

> >  yes, the DB query was in serial, after some investigation, it seems
> >  that we are unable to perform eventlet.monkey_patch in uWSGI mode, so
> >  Yikun made this fix:
> >
> >  https://review.openstack.org/#/c/592285/
>
> Cool, good catch :)
>
> >
> >  After making this change, we test again, and we got this kind of data:
> >
> >   total collect sort view
> >  before monkey_patch 13.5745 11.7012 1.1511 0.5966
> >  after monkey_patch 12.8367 10.5471 1.5642 0.6041
> >
> >  The performance improved a little, and from the log we can saw:
>
> Since these all took ~1s when done in series, but now take ~10s in
> parallel, I think you must be hitting some performance bottleneck in
> either case, which is why the overall time barely changes. Some ideas:
>
> 1. In the real world, I think you really need to have 10x database
>servers or at least a DB server with plenty of cores loading from a
>very fast (or separate) disk in order to really ensure you're getting
>full parallelism of the DB work. However, because these queries all
>took ~1s in your serialized case, I expect this is not your problem.
>
> 2. What does the network look like between the api machine and the DB?
>
> 3. What do the memory and CPU usage of the api process look like while
>this is happening?
>
> Related to #3, even though we issue the requests to the DB in parallel,
> we still process the result of those calls in series in a single python
> thread on the API. That means all the work of reading the data from the
> socket, constructing the SQLA objects, turning those into nova objects,
> etc, all happens serially. It could be that the DB query is really a
> small part of the overall time and our serialized python handling of the
> result is the slow part. If you see the api process pegging a single
> core at 100% for ten seconds, I think that's likely what is happening.
>

I remember I did a test on sqlalchemy, the sqlalchemy object construction
is super slow than fetch the data from remote.
Maybe you can try profile it, to figure out how much time spend on the
wire, how much time spend on construct the object.
http://docs.sqlalchemy.org/en/latest/faq/performance.html
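
A rough sketch of such a measurement, assuming a pre-existing SQLAlchemy
1.x `engine`, `session`, `instances_table`, and `Instance` ORM model (all
names illustrative):

    import contextlib
    import time

    @contextlib.contextmanager
    def timed(label):
        start = time.monotonic()
        yield
        print('%s: %.3fs' % (label, time.monotonic() - start))

    with timed('raw rows'):      # wire plus DB-driver cost only
        rows = engine.execute(instances_table.select()).fetchall()

    with timed('ORM objects'):   # adds SQLAlchemy object construction
        objs = session.query(Instance).all()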


>
> >  so, now the queries are in parallel, but the whole thing still seems
> >  serial.
>
> In your table, you show the time for "1 cell, 1000 instances" as ~3s and
> "10 cells, 1000 instances" as 10s. The problem with comparing those
> directly is that in the latter, you're actually pulling 10,000 records
> over the network, into memory, processing them, and then just returning
> the first 1000 from the sort. A closer comparison would be the "10
> cells, 100 instances" with "1 cell, 1000 instances". In both of those
> cases, you pull 1000 instances total from the db, into memory, and
> return 1000 from the sort. In that case, the multi-cell situation is
> faster (~2.3s vs. ~3.1s). You could also compare the "10 cells, 1000
> instances" case to "1 cell, 10,000 instances" just to confirm at the
> larger scale that it's better or at least the same.
>
> We _have_ to pull $limit instances from each cell, in case (according to
> the sort key) the first $limit instances are all in one cell. We _could_
> try to batch the results from each cell to avoid loading so many that we
> don't need, but we punted this as an optimization to be done later. I'm
> not sure it's really worth the complexity at this point, but it's
> something we could investigate.
>
> --Dan
>


Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-01 Thread Alex Xu
2018-08-02 4:09 GMT+08:00 Jay Pipes :

> On 08/01/2018 02:02 PM, Chris Friesen wrote:
>
>> On 08/01/2018 11:32 AM, melanie witt wrote:
>>
>> I think it's definitely a significant issue that troubleshooting "No
>>> allocation
>>> candidates returned" from placement is so difficult. However, it's not
>>> straightforward to log detail in placement when the request for
>>> allocation
>>> candidates is essentially "SELECT * FROM nodes WHERE cpu usage < needed
>>> and disk
>>> usage < needed and memory usage < needed" and the result is returned
>>> from the API.
>>>
>>
>> I think the only way to get useful info on a failure would be to break
>> down the huge SQL statement into subclauses and store the results of the
>> intermediate queries.
>>
>
> This is a good idea and something that can be done.
>

That sounds like you'd need a separate SQL query per resource to get the
intermediate results. Wouldn't that perform much worse than a single query
that returns the final result? Something like the sketch below.
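
A purely hypothetical sketch (`all_provider_ids` and
`providers_with_capacity` are invented helpers, not real placement code):

    import logging

    LOG = logging.getLogger(__name__)

    # Filter the candidate providers one resource class at a time so that
    # each step can log what it eliminated.
    def debuggable_candidates(session, requested):
        candidates = all_provider_ids(session)
        for rc, amount in requested.items():
            candidates = providers_with_capacity(session, rc, amount,
                                                 candidates)
            LOG.debug('%d providers left after requiring %s >= %d',
                      len(candidates), rc, amount)
        return candidates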


>
> Unfortunately, it's refactoring work and as a community, we tend to
> prioritize fancy features like NUMA topology and CPU pinning over
> refactoring work.
>
> Best,
> -jay
>
>


Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-29 Thread Alex Xu
Oh, right, sorry. I kept thinking of it as the user's usage within a
specific tenant, just like other resources. You are right: a keypair has
nothing to do with the tenant, only with the user. Thanks.

2018-07-26 23:22 GMT+08:00 Chris Friesen :

> On 07/25/2018 06:22 PM, Alex Xu wrote:
>
>>
>>
>> 2018-07-26 1:43 GMT+08:00 Chris Friesen :
>>
>
>> Keypairs are weird in that they're owned by users, not projects.  This
>> is arguably wrong, since it can cause problems if a user boots an
>> instance with their keypair and then gets removed from a project.
>>
>> Nova microversion 2.54 added support for modifying the keypair
>> associated with an instance when doing a rebuild.  Before that there was
>> no clean way to do it.
>>
>>
>> I don't understand this; we don't count keypair usage together with the
>> instance, we just count keypair usage for a specific user.
>> together, we just count the keypair usage for specific user.
>>
>
>
> I was giving an example of why it's strange that keypairs are owned by
> users rather than projects.  (When instances are owned by projects, and
> keypairs are used to access instances.)
>
>
> Chris
>
>
>


Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-25 Thread Alex Xu
2018-07-26 1:43 GMT+08:00 Chris Friesen :

> On 07/25/2018 10:29 AM, William M Edmonds wrote:
>
>>
>> Ghanshyam Mann  wrote on 07/25/2018 05:44:46 AM:
>> ... snip ...
>>  > 1. is it OK to show the keypair usage info via the API? Was there an
>>  > original rationale for not doing so, or was it just like that from
>>  > the start?
>>
>> keypairs aren't tied to a tenant/project, so how could nova track/report
>> a quota
>> for them on a given tenant/project? Which is how the API is
>> constructed... note
>> the "tenant_id" in GET /os-quota-sets/{tenant_id}/detail
>>
>>  > 2. Because this change will show the used keypair quota information
>>  > in the API's existing field 'in_use', it is an API behaviour change
>>  > (not an interface signature change in a backward-incompatible way)
>>  > which can cause an interop issue. Should we bump the microversion
>>  > for this change?
>>
>> If we find a meaningful way to return in_use data for keypairs, then yes,
>> I
>> would expect a microversion bump so that callers can distinguish between
>> a)
>> talking to an older installation where in_use is always 0 vs. b) talking
>> to a
>> newer installation where in_use is 0 because there are really none in
>> use. Or if
>> we remove keypairs from the response, which at a glance seems to make more
>> sense, that should also have a microversion bump so that someone who
>> expects the
>> old response format will still get it.
>>
>
> Keypairs are weird in that they're owned by users, not projects.  This is
> arguably wrong, since it can cause problems if a user boots an instance
> with their keypair and then gets removed from a project.
>
> Nova microversion 2.54 added support for modifying the keypair associated
> with an instance when doing a rebuild.  Before that there was no clean way
> to do it.


I don't understand this; we don't count keypair usage together with the
instance, we just count keypair usage for a specific user.


>
>
> Chris
>
>


Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-25 Thread Alex Xu
2018-07-26 0:29 GMT+08:00 William M Edmonds :

>
> Ghanshyam Mann  wrote on 07/25/2018 05:44:46 AM:
> ... snip ...
> > 1. is it OK to show the keypair usage info via the API? Was there an
> > original rationale for not doing so, or was it just like that from the
> > start?
>
> keypairs aren't tied to a tenant/project, so how could nova track/report a
> quota for them on a given tenant/project? Which is how the API is
> constructed... note the "tenant_id" in GET /os-quota-sets/{tenant_id}/
> detail
>

Keypair usage is only meaningful for the API 'GET
/os-quota-sets/{tenant_id}/detail?user_id={user_id}'; a sketch of what that
could return is below.
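
An abridged sketch of such a response (values invented; the per-resource
limit/in_use/reserved triple is the shape the detail endpoint returns):

    response = {
        'quota_set': {
            'id': 'TENANT_UUID',  # invented tenant id
            'key_pairs': {'limit': 100, 'in_use': 0, 'reserved': 0},
            'instances': {'limit': 10, 'in_use': 2, 'reserved': 0},
        },
    }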

>
>
> > 2. Because this change will show the used keypair quota information
> > in the API's existing field 'in_use', it is an API behaviour change
> > (not an interface signature change in a backward-incompatible way)
> > which can cause an interop issue. Should we bump the microversion for
> > this change?
>
> If we find a meaningful way to return in_use data for keypairs, then yes,
> I would expect a microversion bump so that callers can distinguish between
> a) talking to an older installation where in_use is always 0 vs. b) talking
> to a newer installation where in_use is 0 because there are really none in
> use. Or if we remove keypairs from the response, which at a glance seems to
> make more sense, that should also have a microversion bump so that someone
> who expects the old response format will still get it.
>
>


Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-25 Thread Alex Xu
2018-07-25 17:44 GMT+08:00 Ghanshyam Mann :

> Hi All,
>
> During today's API office hour, we were discussing the keypair quota usage
> bug (newton) [1]. The key_pair 'in_use' quota is always 0, even when
> requested per user, because it is always being set to 0 [2].
>
> From checking the history and the review discussion on [3], it seems that
> it was like that from the start. The key_pair quota is counted when
> actually creating the keypair, but it is not shown in the API's 'in_use'
> field. Vishakha (the assignee of this bug) is currently planning to work
> on this bug, and before that we have a few queries:
>
> 1. is it OK to show the keypair usage info via the API? Was there an
> original rationale for not doing so, or was it just like that from the
> start?
>

It doesn't make sense to show the usage when the user queries the project
quota, but it makes sense to show it when the user queries a specific
user's quota. And we have no way to show usage for
server_group_members/security_group_rules, since those are limits on a
specific server group or security group, and we have no way to express
that in our quota API.



>
> 2. Because this change will show the used keypair quota information in
> the API's existing field 'in_use', it is an API behaviour change (not an
> interface signature change in a backward-incompatible way) which can
> cause an interop issue. Should we bump the microversion for this change?
>

If we are going to bump the microversion, I'd prefer to set the usage to -1
for server_group_members/security_group_rules, since 0 is really confusing
for the end user.


>
> [1] https://bugs.launchpad.net/nova/+bug/1644457
> [2] https://github.com/openstack/nova/blob/bf497cc47497d3a5603bf60de65205
> 4ac5ae1993/nova/quota.py#L189
> [3] https://review.openstack.org/#/c/446239/
>
> -gmann
>
>


Re: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http

2018-07-08 Thread Alex Xu
The version API isn't governed by microversions, since the version API is
what clients use to discover the microversions; an abridged sketch of its
response is below.
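
An abridged sketch of nova's GET /v2.1 version document; the
'version'/'min_version' pair is how clients discover the supported
microversion range (the max value varies by deployment):

    version_doc = {
        'version': {
            'id': 'v2.1',
            'status': 'CURRENT',
            'min_version': '2.1',  # oldest supported microversion
            'version': '2.65',     # newest supported microversion (varies)
        },
    }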

2018-07-07 5:37 GMT+08:00 Matt Riedemann :

> On 7/6/2018 6:28 AM, Kristi Nikolla wrote:
>
>> If the answer is 'no', can we find a process that gets us there? Or
>> are we doomed
>> by the inability to version the version document?
>>
>
> We could always microversion the version document couldn't we? Not saying
> we want to, but it's an option right?
>
> --
>
> Thanks,
>
> Matt
>
>


Re: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http

2018-07-05 Thread Alex Xu
2018-07-06 10:03 GMT+08:00 Alex Xu :

>
>
> 2018-07-06 2:55 GMT+08:00 melanie witt :
>
>> +openstack-dev@
>>
>> On Wed, 4 Jul 2018 14:50:26 +, Bogdan Katynski wrote:
>>
>>>> But I can not use the nova command; the nova endpoint has been
>>>> redirected from https to http. Here: http://prntscr.com/k2e8s6
>>>> (command: nova --insecure service list)
>>>>
>>> First of all, it seems that the nova client is hitting /v2.1 instead of
>>> /v2.1/ URI and this seems to be triggering the redirect.
>>>
>>> Since openstack CLI works, I presume it must be using the correct URL
>>> and hence it’s not getting redirected.
>>>
>>>>   And this is the error log: Unable to establish connection to
>>>> http://192.168.30.70:8774/v2.1/: ('Connection aborted.', BadStatusLine("''",))
>>>>
>>>>
>>> Looks to me that nova-api does a redirect to an absolute URL. I suspect
>>> SSL is terminated on the HAProxy and nova-api itself is configured without
>>> SSL so it redirects to an http URL.
>>>
>>> In my opinion, nova would be more load-balancer friendly if it used a
>>> relative URI in the redirect but that’s outside of the scope of this
>>> question and since I don’t know the context behind choosing the absolute
>>> URL, I could be wrong on that.
>>>
>>
>> Thanks for mentioning this. We do have a bug open in python-novaclient
>> around a similar issue [1]. I've added comments based on this thread and
>> will consult with the API subteam to see if there's something we can do
>> about this in nova-api.
>>
>>
> Hmm... checking the RFC, it says the value of the Location header is an
> absolute URL: https://tools.ietf.org/html/rfc2616.html#section-14.30
>

Sorry, correction: RFC 7231 updated that, and a relative URL is fine:
https://tools.ietf.org/html/rfc7231#section-7.1.2
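So a proxy-friendly redirect would be something like this (a sketch, not
what nova-api emits today):

    HTTP/1.1 302 Found
    Location: /v2.1/

instead of an absolute URL rebuilt from what nova-api itself sees:

    HTTP/1.1 302 Found
    Location: http://192.168.30.70:8774/v2.1/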


>
>
>> -melanie
>>
>> [1] https://bugs.launchpad.net/python-novaclient/+bug/1776928
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http

2018-07-05 Thread Alex Xu
2018-07-06 2:55 GMT+08:00 melanie witt :

> +openstack-dev@
>
> On Wed, 4 Jul 2018 14:50:26 +, Bogdan Katynski wrote:
>
>> But, I can not use nova command, endpoint nova have been redirected from
>>> https to http. Here: http://prntscr.com/k2e8s6  (command: nova –insecure
>>> service list)
>>>
>> First of all, it seems that the nova client is hitting /v2.1 instead of
>> /v2.1/ URI and this seems to be triggering the redirect.
>>
>> Since openstack CLI works, I presume it must be using the correct URL and
>> hence it’s not getting redirected.
>>
>>   And this is error log: Unable to establish connection to http://
>>> 192.168.30.70:8774/v2.1/: ('Connection aborted.', BadStatusLine("''",))
>>>
>>>
>> Looks to me that nova-api does a redirect to an absolute URL. I suspect
>> SSL is terminated on the HAProxy and nova-api itself is configured without
>> SSL so it redirects to an http URL.
>>
>> In my opinion, nova would be more load-balancer friendly if it used a
>> relative URI in the redirect but that’s outside of the scope of this
>> question and since I don’t know the context behind choosing the absolute
>> URL, I could be wrong on that.
>>
>
> Thanks for mentioning this. We do have a bug open in python-novaclient
> around a similar issue [1]. I've added comments based on this thread and
> will consult with the API subteam to see if there's something we can do
> about this in nova-api.
>
>
Hmm... checking the RFC, it says the value of the Location header is an
absolute URL: https://tools.ietf.org/html/rfc2616.html#section-14.30
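To illustrate the suspected failure mode (a reconstruction, assuming TLS
terminates at HAProxy and nova-api itself listens on plain HTTP):

    client --(https)--> HAProxy:8774 --(http)--> nova-api

    GET /v2.1 HTTP/1.1            (nova-api sees plain http)

    HTTP/1.1 302 Found
    Location: http://192.168.30.70:8774/v2.1/

The client then retries over plain http against an endpoint expecting TLS,
which would explain an error like the BadStatusLine quoted above.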


> -melanie
>
> [1] https://bugs.launchpad.net/python-novaclient/+bug/1776928
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cyborg] [Nova] Cyborg traits

2018-06-06 Thread Alex Xu
After reading the spec
https://review.openstack.org/#/c/554717/14/doc/specs/rocky/cyborg-nova-sched.rst,
I'm confused about the meaning of CUSTOM_ACCELERATOR_FPGA. Initially I
thought it meant a region, but per the spec it can be a device, a region or
a function. Is that by design?

Sounds like we need agreement on the naming as well. We already have the
resource class `VGPU`, so would we only need to add another resource class
'FPGA' (though, same question as above, I thought it should be
FPGA_REGION)? I don't see any requirement for the 'ACCELERATOR' prefix.
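For reference, whichever names win, the flavor-side request would look
roughly like this (resource class from the spec; the trait name is
illustrative):

    $ openstack flavor set fpga.small \
        --property resources:CUSTOM_ACCELERATOR_FPGA=1 \
        --property trait:CUSTOM_FPGA_FUNCTION_X=required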

2018-05-31 4:18 GMT+08:00 Eric Fried :

> This all sounds fully reasonable to me.  One thing, though...
>
> >>   * There is a resource class per device category e.g.
> >> CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA.
>
> Let's propose standard resource classes for these ASAP.
>
> https://github.com/openstack/nova/blob/d741f624c81baf89fc8b6b94a2bc20eb5355a818/nova/rc_fields.py
>
> -efried
> .
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

2018-06-05 Thread Alex Xu
2018-06-05 22:53 GMT+08:00 Eric Fried :

> Alex-
>
> Allocations for an instance are pulled down by the compute manager
> and
> passed into the virt driver's spawn method since [1].  An allocation
> comprises a consumer, provider, resource class, and amount.  Once we can
> schedule to trees, the allocations pulled down by the compute manager
> will span the tree as appropriate.  So in that sense, yes, nova-compute
> knows which amounts of which resource classes come from which providers.
>

Eric, thanks, that is the piece I missed. Initially I thought we would
return the allocations from the scheduler and pass them down to the compute
manager. I see we already pull the allocations in the compute manager now.
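Concretely, for the two-PF example below, the allocations pulled down would
look roughly like this (structure per the placement allocations API; UUIDs
invented), which is how the driver can tell the VF came from PF1 rather
than PF2:

    allocations = {
        'uuid-of-compute-node-rp': {'resources': {'VCPU': 2,
                                                  'MEMORY_MB': 4096}},
        'uuid-of-pf1-rp': {'resources': {'SRIOV_NET_VF': 1}},
    }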


>
> However, if you're asking about the situation where we have two
> different allocations of the same resource class coming from two
> separate providers: Yes, we can still tell which RCxAMOUNT is associated
> with which provider; but No, we still have no inherent way to correlate
> a specific one of those allocations with the part of the *request* it
> came from.  If just the provider UUID isn't enough for the virt driver
> to figure out what to do, it may have to figure it out by looking at the
> flavor (and/or image metadata), inspecting the traits on the providers
> associated with the allocations, etc.  (The theory here is that, if the
> virt driver can't tell the difference at that point, then it actually
> doesn't matter.)
>
> [1] https://review.openstack.org/#/c/511879/
>
> On 06/05/2018 09:05 AM, Alex Xu wrote:
> > Maybe I missed something. Is there anyway the nova-compute can know the
> > resources are allocated from which child resource provider? For example,
> > the host has two PFs. The request is asking one VF, then the
> > nova-compute needs to know the VF is allocated from which PF (resource
> > provider). As my understand, currently we only return a list of
> > alternative resource provider to the nova-compute, those alternative is
> > root resource provider.
> >
> > 2018-06-05 21:29 GMT+08:00 Jay Pipes :
> >
> > On 06/05/2018 08:50 AM, Stephen Finucane wrote:
> >
> > I thought nested resource providers were already supported by
> > placement? To the best of my knowledge, what is /not/ supported
> > is virt drivers using these to report NUMA topologies but I
> > doubt that affects you. The placement guys will need to weigh in
> > on this as I could be missing something but it sounds like you
> > can start using this functionality right now.
> >
> >
> > To be clear, this is what placement and nova *currently* support
> > with regards to nested resource providers:
> >
> > 1) When creating a resource provider in placement, you can specify a
> > parent_provider_uuid and thus create trees of providers. This was
> > placement API microversion 1.14. Also included in this microversion
> > was support for displaying the parent and root provider UUID for
> > resource providers.
> >
> > 2) The nova "scheduler report client" (terrible name, it's mostly
> > just the placement client at this point) understands how to call
> > placement API 1.14 and create resource providers with a parent
> provider.
> >
> > 3) The nova scheduler report client uses a ProviderTree object [1]
> > to cache information about the hierarchy of providers that it knows
> > about. For nova-compute workers managing hypervisors, that means the
> > ProviderTree object contained in the report client is rooted in a
> > resource provider that represents the compute node itself (the
> > hypervisor). For nova-compute workers managing baremetal, that means
> > the ProviderTree object contains many root providers, each
> > representing an Ironic baremetal node.
> >
> > 4) The placement API's GET /allocation_candidates endpoint now
> > understands the concept of granular request groups [2]. Granular
> > request groups are only relevant when a user wants to specify that
> > child providers in a provider tree should be used to satisfy part of
> > an overall scheduling request. However, this support is yet
> > incomplete -- see #5 below.
> >
> > The following parts of the nested resource providers modeling are
> > *NOT* yet complete, however:
> >
> > 5) GET /allocation_candidates does not currently return *results*
> > when granular request groups are specified. So, while the placement

Re: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs

2018-06-05 Thread Alex Xu
Maybe I missed something. Is there any way for nova-compute to know which
child resource provider the resources were allocated from? For example, the
host has two PFs and the request asks for one VF; nova-compute then needs
to know which PF (resource provider) the VF was allocated from. As I
understand it, we currently only return a list of alternate resource
providers to nova-compute, and those alternates are root resource providers.

2018-06-05 21:29 GMT+08:00 Jay Pipes :

> On 06/05/2018 08:50 AM, Stephen Finucane wrote:
>
>> I thought nested resource providers were already supported by placement?
>> To the best of my knowledge, what is /not/ supported is virt drivers using
>> these to report NUMA topologies but I doubt that affects you. The placement
>> guys will need to weigh in on this as I could be missing something but it
>> sounds like you can start using this functionality right now.
>>
>
> To be clear, this is what placement and nova *currently* support with
> regards to nested resource providers:
>
> 1) When creating a resource provider in placement, you can specify a
> parent_provider_uuid and thus create trees of providers. This was placement
> API microversion 1.14. Also included in this microversion was support for
> displaying the parent and root provider UUID for resource providers.
>
> 2) The nova "scheduler report client" (terrible name, it's mostly just the
> placement client at this point) understands how to call placement API 1.14
> and create resource providers with a parent provider.
>
> 3) The nova scheduler report client uses a ProviderTree object [1] to
> cache information about the hierarchy of providers that it knows about. For
> nova-compute workers managing hypervisors, that means the ProviderTree
> object contained in the report client is rooted in a resource provider that
> represents the compute node itself (the hypervisor). For nova-compute
> workers managing baremetal, that means the ProviderTree object contains
> many root providers, each representing an Ironic baremetal node.
>
> 4) The placement API's GET /allocation_candidates endpoint now understands
> the concept of granular request groups [2]. Granular request groups are
> only relevant when a user wants to specify that child providers in a
> provider tree should be used to satisfy part of an overall scheduling
> request. However, this support is yet incomplete -- see #5 below.
>
> The following parts of the nested resource providers modeling are *NOT*
> yet complete, however:
>
> 5) GET /allocation_candidates does not currently return *results* when
> granular request groups are specified. So, while the placement service
> understands the *request* for granular groups, it doesn't yet have the
> ability to constrain the returned candidates appropriately. Tetsuro is
> actively working on this functionality in this patch series:
>
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers-allocation-candidates
>
> 6) The virt drivers need to implement the update_provider_tree() interface
> [3] and construct the tree of resource providers along with appropriate
> inventory records for each child provider in the tree. Both libvirt and
> XenAPI virt drivers have patch series up that begin to take advantage of
> the nested provider modeling. However, a number of concerns [4] about
> in-place nova-compute upgrades when moving from a single resource provider
> to a nested provider tree model were raised, and we have begun
> brainstorming how to handle the migration of existing data in the
> single-provider model to the nested provider model. [5] We are blocking any
> reviews on patch series that modify the local provider modeling until these
> migration concerns are fully resolved.
>
> 7) The scheduler does not currently pass granular request groups to
> placement. Once #5 and #6 are resolved, and once the migration/upgrade path
> is resolved, clearly we will need to have the scheduler start making
> requests to placement that represent the granular request groups and have
> the scheduler pass the resulting allocation candidates to its filters and
> weighers.
>
> Hope this helps highlight where we currently are and the work still left
> to do (in Rocky) on nested resource providers.
>
> Best,
> -jay
>
>
> [1] https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py
>
> [2] https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html
>
> [3] https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/virt/driver.py#L833
>
> [4] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130783.html
>
> [5] https://etherpad.openstack.org/p/placement-making-the-(up)grade
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.

Re: [openstack-dev] [Cyborg] [Nova] Cyborg traits

2018-05-31 Thread Alex Xu
I can help with that.
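For reference, the shape of the change under discussion is roughly this (a
sketch only, not the actual patch):

    # nova/rc_fields.py (sketch) -- names subject to the review
    # bikeshedding Eric mentions; the real patch also needs to include
    # these in the set of standard resource classes and update the tests.
    ACCELERATOR_GPU = 'ACCELERATOR_GPU'
    ACCELERATOR_FPGA = 'ACCELERATOR_FPGA'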

2018-05-31 21:49 GMT+08:00 Eric Fried :

> Yup.  I'm sure reviewers will bikeshed the names, but the review is the
> appropriate place for that to happen.
>
> A couple of test changes will also be required.  You can have a look at
> [1] as an example to follow.
>
> -efried
>
> [1] https://review.openstack.org/#/c/511180/
>
> On 05/31/2018 01:02 AM, Nadathur, Sundar wrote:
> > On 5/30/2018 1:18 PM, Eric Fried wrote:
> >> This all sounds fully reasonable to me.  One thing, though...
> >>
> * There is a resource class per device category e.g.
>   CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA.
> >> Let's propose standard resource classes for these ASAP.
> >>
> >> https://github.com/openstack/nova/blob/d741f624c81baf89fc8b6b94a2bc20eb5355a818/nova/rc_fields.py
> >>
> >>
> >> -efried
> > Makes sense, Eric. The obvious names would be ACCELERATOR_GPU and
> > ACCELERATOR_FPGA. Do we just submit a patch to rc_fields.py?
> >
> > Thanks,
> > Sundar
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cyborg] [nova] Cyborg quotas

2018-05-16 Thread Alex Xu
2018-05-17 9:38 GMT+08:00 Alex Xu :

>
>
> 2018-05-17 1:24 GMT+08:00 Jay Pipes :
>
>> On 05/16/2018 01:01 PM, Nadathur, Sundar wrote:
>>
>>> Hi,
>>> The Cyborg quota spec [1] proposes to implement a quota (maximum
>>> usage) for accelerators on a per-project basis, to prevent one project
>>> (tenant) from over-using some resources and starving other tenants. There
>>> are separate resource classes for different accelerator types (GPUs, FPGAs,
>>> etc.), and so we can do quotas per RC.
>>>
>>> The current proposal [2] is to track the usage in Cyborg agent/driver. I
>>> am not sure that scheme will work, as I have indicated in the comments on
>>> [1]. Here is another possible way.
>>>
>>>   * The operator configures the oslo.limit in keystone per-project
>>> per-resource-class (GPU, FPGA, ...).
>>>   o Until this gets into Keystone, Cyborg may define its own quota
>>> table, as defined in [1].
>>>   * Cyborg implements a table to track per-project usage, as defined in
>>> [1].
>>>
>>
>> Placement already stores usage information for all allocations of
>> resources. There is already even a /usages API endpoint that you can
>> specify a project and/or user:
>>
>> https://developer.openstack.org/api-ref/placement/#list-usages
>>
>> I see no reason not to use it.
>>
>> There is already actually a spec to use placement for quota usage checks
>> in Nova here:
>>
>> https://review.openstack.org/#/c/509042/
>
>
> FYI, I'm working on a spec which appends to that spec. It's about counting
> quota for resource classes (GPU, custom RCs, etc.) other than the nova
> built-in resources (cores, ram). It should be able to count the resource
> classes used by cyborg. But yes, we probably should answer Matt's question
> first: whether we should let Nova count the quota instead of Cyborg.
>

Here is the link: https://review.openstack.org/#/c/569011/


>
>
>>
>>
>> Probably best to have a look at that and see if it will end up meeting
>> your needs.
>>
>>   * Cyborg provides a filter for the Nova scheduler, which checks
>>> whether the project making the request has exceeded its own quota.
>>>
>>
>> Quota checks happen before Nova's scheduler gets involved, so having a
>> scheduler filter handle quota usage checking is pretty much a non-starter.
>>
>> I'll have a look at the patches you've proposed and comment there.
>>
>> Best,
>> -jay
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cyborg] [nova] Cyborg quotas

2018-05-16 Thread Alex Xu
2018-05-17 1:24 GMT+08:00 Jay Pipes :

> On 05/16/2018 01:01 PM, Nadathur, Sundar wrote:
>
>> Hi,
>> The Cyborg quota spec [1] proposes to implement a quota (maximum
>> usage) for accelerators on a per-project basis, to prevent one project
>> (tenant) from over-using some resources and starving other tenants. There
>> are separate resource classes for different accelerator types (GPUs, FPGAs,
>> etc.), and so we can do quotas per RC.
>>
>> The current proposal [2] is to track the usage in Cyborg agent/driver. I
>> am not sure that scheme will work, as I have indicated in the comments on
>> [1]. Here is another possible way.
>>
>>   * The operator configures the oslo.limit in keystone per-project
>> per-resource-class (GPU, FPGA, ...).
>>   o Until this gets into Keystone, Cyborg may define its own quota
>> table, as defined in [1].
>>   * Cyborg implements a table to track per-project usage, as defined in
>> [1].
>>
>
> Placement already stores usage information for all allocations of
> resources. There is already even a /usages API endpoint that you can
> specify a project and/or user:
>
> https://developer.openstack.org/api-ref/placement/#list-usages
>
> I see no reason not to use it.
>
> There is already actually a spec to use placement for quota usage checks
> in Nova here:
>
> https://review.openstack.org/#/c/509042/


FYI, I'm working on a spec which appends to that spec. It's about counting
quota for resource classes (GPU, custom RCs, etc.) other than the nova
built-in resources (cores, ram). It should be able to count the resource
classes used by cyborg. But yes, we probably should answer Matt's question
first: whether we should let Nova count the quota instead of Cyborg.
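For reference, the counting Jay describes above is a single placement call
of this shape (resource classes illustrative):

    GET /usages?project_id={project_id}

    {"usages": {"VCPU": 4,
                "MEMORY_MB": 8192,
                "CUSTOM_ACCELERATOR_FPGA": 2}}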


>
>
> Probably best to have a look at that and see if it will end up meeting
> your needs.
>
>   * Cyborg provides a filter for the Nova scheduler, which checks
>> whether the project making the request has exceeded its own quota.
>>
>
> Quota checks happen before Nova's scheduler gets involved, so having a
> scheduler filter handle quota usage checking is pretty much a non-starter.
>
> I'll have a look at the patches you've proposed and comment there.
>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild

2018-04-24 Thread Alex Xu
2018-04-24 20:53 GMT+08:00 Eric Fried :

> > The problem isn't just checking the traits in the nested resource
> > provider. We also need to ensure the trait in the exactly same child
> > resource provider.
>
> No, we can't get "granular" with image traits.  We accepted this as a
> limitation for the spawn aspect of this spec [1], for all the same
> reasons [2].  And by the time we've spawned the instance, we've lost the
> information about which granular request groups (from the flavor) were
> satisfied by which resources - retrofitting that information from a new
> image would be even harder.  So we need to accept the same limitation
> for rebuild.
>
> [1] "Due to the difficulty of attempting to reconcile granular request
> groups between an image and a flavor, only the (un-numbered) trait group
> is supported. The traits listed there are merged with those of the
> un-numbered request group from the flavor."
> (http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/glance-image-traits.html#proposed-change)
> [2]
> https://review.openstack.org/#/c/554305/2/specs/rocky/approved/glance-image-traits.rst@86


Why would we return an RP that has a specific trait but from which we won't
consume any resources? If the case is that we request two VFs and the two
VFs have different required traits, then that should be a granular request.
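To make that concrete, a granular request for that case would look roughly
like this (syntax per the granular request groups spec; the offload trait
name is the one used earlier in this thread):

    GET /allocation_candidates
        ?resources1=SRIOV_NET_VF:1
        &resources2=SRIOV_NET_VF:1&required2=HW_NIC_OFFLOAD_X

so each VF's trait requirement must be satisfied by the same child provider
that supplies that VF.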


>
>
> __
>

> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild

2018-04-24 Thread Alex Xu
2018-04-24 5:51 GMT+08:00 Arvind N :

> Thanks for the detailed options Matt/eric/jay.
>
> Just few of my thoughts,
>
> For #1, we can make the explanation very clear that we rejected the
> request because the original traits specified in the original image and the
> new traits specified in the new image do not match and hence rebuild is not
> supported.
>
> For #2,
>
> Other Cons:
>
>1. None of the filters currently make other API requests and my
>understanding is we want to avoid reintroducing such a pattern. But
>definitely workable solution.
>2. If the user disables the image properties filter, then traits based
>filtering will not be run in rebuild case
>
> For #3,
>
> Even though it handles the nested provider, there is a potential issue.
>
> Lets say a host with two SRIOV nic. One is normal SRIOV nic(VF1), another
> one with some kind of offload feature(VF2).(Described by alex)
>
> Initial instance launch happens with VF:1 allocated, rebuild launches with
> modified request with traits=HW_NIC_OFFLOAD_X, so basically we want the
> instance to be allocated VF2.
>
> But the original allocation happens against VF1 and since in rebuild the
> original allocations are not changed, we have wrong allocations.
>


Yes, that is the case I described, and none of #1, #2, #3, #4, nor the
proposal in this thread handles it.

The problem isn't just checking the traits on the nested resource provider.
We also need to ensure the trait is on exactly the same child resource
provider, or we need to adjust the allocations for the child resource
provider.



> for #4, there is a good amount of pushback against modifying the
> allocation_candidates api to not have resources.
>
> Jay:
> for the GET /resource_providers?in_tree=&required=,
> nested resource providers and allocation pose a problem see #3 above.
>
> I will investigate erics option and update the spec.
> --
> Arvind N
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-19 Thread Alex Xu
I'm trying hard to catch up on the discussion since I missed a few
rounds... it is really hard...

In my mind, I always think of the request group as just binding the trait
and the resource class together. I've also been wondering whether we need
an explicit tree structure to describe the request. So the proximity
parameter sounds right to me.
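For concreteness, the Rocky-iteration proposal discussed below would look
something like this (hypothetical syntax from this thread, not an
implemented API):

    GET /allocation_candidates
        ?resources1=VCPU:8,MEMORY_MB:16384
        &resources2=SRIOV_NET_VF:1
        &proximity=isolate

where proximity=isolate forces the numbered groups onto different
providers, and proximity=any lets them land wherever they fit.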

2018-04-19 6:45 GMT+08:00 Eric Fried :

> > I have a feeling we're just going to go back and forth on this, as we
> > have for weeks now, and not reach any conclusion that is satisfactory to
> > everyone. And we'll delay, yet again, getting functionality into this
> > release that serves 90% of use cases because we are obsessing over the
> > 0.01% of use cases that may pop up later.
>
> So I vote that, for the Rocky iteration of the granular spec, we add a
> single `proximity={isolate|any}` qparam, required when any numbered
> request groups are specified.  I believe this allows us to satisfy the
> two NUMA use cases we care most about: "forced sharding" and "any fit".
> And as you demonstrated, it leaves the way open for finer-grained and
> more powerful semantics to be added in the future.
>
> -efried
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] The createBackup API

2018-03-29 Thread Alex Xu
There is a spec proposal to fix a bug in the createBackup API with a
microversion (https://review.openstack.org/#/c/511825/).

When the rotation parameter is '0', the createBackup API just takes a
snapshot and then deletes all the snapshots. That is meaningless behaviour.

But there is one thing I hope to get wider input on. We have said before
that all the nova APIs should be primitive; an API shouldn't just be a
wrapper around another API.

And createBackup is essentially just the createImage API: it creates a
snapshot, uploads the snapshot to glance with an index number in the image
name, and rotates the images after each snapshot.

So it should be something a client script can do itself with the
createImage API.
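A minimal sketch of such a client script, assuming openstacksdk and that
snapshots carry the instance_uuid image property as nova sets it (error
handling omitted):

    import openstack

    def backup_and_rotate(server_name, rotation):
        # rotation must be >= 1; rotation=0 is the meaningless case above.
        conn = openstack.connect()
        server = conn.compute.find_server(server_name)
        # The createImage part: snapshot the server.
        conn.compute.create_server_image(server, name=server_name + '-backup')
        # The rotation part: keep only the newest `rotation` snapshots
        # of this server, oldest deleted first.
        snaps = sorted((img for img in conn.image.images()
                        if (img.properties or {}).get('instance_uuid')
                        == server.id),
                       key=lambda img: img.created_at)
        for old in snaps[:-rotation]:
            conn.image.delete_image(old)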

We have two options here:
#1. Fix the bug with a microversion. We aren't sure anyone really uses '0'
in real life, so it's not clear that spending a microversion on the fix is
worth it.
#2. Deprecate the backup API with a microversion and leave the bug alone.
Document how the user can do the same thing with a client script.

Looking for your comments.

Thanks
Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-29 Thread Alex Xu
Agreed. Whether we tweak inventory or traits, neither approach works.

As with VGPU, we can support a pre-programmed mode for multi-function
regions, where each region supports only one function type.

There were two reasons why Cyborg has a filter:
* it records the usage of functions in a region
* it records which function is programmed.

For #1, each region provides multiple functions, and each function can be
assigned to a VM. So we should create a ResourceProvider for the region,
with the function as the resource class. That is similar to an SR-IOV
device: the region (the PF) provides functions (VFs).

For #2, we should use a trait to distinguish the function type.

Then we no longer keep any inventory info in cyborg at all, we don't need
any filter in cyborg either, and there is no race condition anymore.
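A sketch of that modeling, in the same notation as the example quoted below
(resource class and trait names illustrative):

    REGION_1_RP: {                 # pre-programmed region, function A
        inventory: { CUSTOM_FPGA_FUNCTION: 2 },
        traits:    [ CUSTOM_FUNCTION_A ],
    }
    REGION_2_RP: {                 # pre-programmed region, function B
        inventory: { CUSTOM_FPGA_FUNCTION: 4 },
        traits:    [ CUSTOM_FUNCTION_B ],
    }

A request for function A is then
resources=CUSTOM_FPGA_FUNCTION:1&required=CUSTOM_FUNCTION_A, and usage
accounting falls out of normal placement allocations.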

2018-03-29 2:48 GMT+08:00 Eric Fried :

> Sundar-
>
> We're running across this issue in several places right now.   One
> thing that's definitely not going to get traction is
> automatically/implicitly tweaking inventory in one resource class when
> an allocation is made on a different resource class (whether in the same
> or different RPs).
>
> Slightly less of a nonstarter, but still likely to get significant
> push-back, is the idea of tweaking traits on the fly.  For example, your
> vGPU case might be modeled as:
>
> PGPU_RP: {
>   inventory: {
>   CUSTOM_VGPU_TYPE_A: 2,
>   CUSTOM_VGPU_TYPE_B: 4,
>   }
>   traits: [
>   CUSTOM_VGPU_TYPE_A_CAPABLE,
>   CUSTOM_VGPU_TYPE_B_CAPABLE,
>   ]
> }
>
> The request would come in for
> resources=CUSTOM_VGPU_TYPE_A:1&required=VGPU_TYPE_A_CAPABLE, resulting
> in an allocation of CUSTOM_VGPU_TYPE_A:1.  Now while you're processing
> that, you would *remove* CUSTOM_VGPU_TYPE_B_CAPABLE from the PGPU_RP.
> So it doesn't matter that there's still inventory of
> CUSTOM_VGPU_TYPE_B:4, because a request including
> required=CUSTOM_VGPU_TYPE_B_CAPABLE won't be satisfied by this RP.
> There's of course a window between when the initial allocation is made
> and when you tweak the trait list.  In that case you'll just have to
> fail the loser.  This would be like any other failure in e.g. the spawn
> process; it would bubble up, the allocation would be removed; retries
> might happen or whatever.
>
> Like I said, you're likely to get a lot of resistance to this idea
> as
> well.  (Though TBH, I'm not sure how we can stop you beyond -1'ing your
> patches; there's nothing about placement that disallows it.)
>
> The simple-but-inefficient solution is simply that we'd still be
> able
> to make allocations for vGPU type B, but you would have to fail right
> away when it came down to cyborg to attach the resource.  Which is code
> you pretty much have to write anyway.  It's an improvement if cyborg
> gets to be involved in the post-get-allocation-candidates
> weighing/filtering step, because you can do that check at that point to
> help filter out the candidates that would fail.  Of course there's still
> a race condition there, but it's no different than for any other resource.
>
> efried
>
> On 03/28/2018 12:27 PM, Nadathur, Sundar wrote:
> > Hi Eric and all,
> > I should have clarified that this race condition happens only for
> > the case of devices with multiple functions. There is a prior thread
> > <http://lists.openstack.org/pipermail/openstack-dev/2018-March/127882.html>
> > about it. I was trying to get a solution within Cyborg, but that faces
> > this race condition as well.
> >
> > IIUC, this situation is somewhat similar to the issue with vGPU types
> > <http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-03-27.log.html#t2018-03-27T13:41:00>
> > (thanks to Alex Xu for pointing this out). In the latter case, we could
> > start with an inventory of (vgpu-type-a: 2; vgpu-type-b: 4).  But, after
> > consuming a unit of  vGPU-type-a, ideally the inventory should change
> > to: (vgpu-type-a: 1; vgpu-type-b: 0). With multi-function accelerators,
> > we start with an RP inventory of (region-type-A: 1, function-X: 4). But,
> > after consuming a unit of that function, ideally the inventory should
> > change to: (region-type-A: 0, function-X: 3).
> >
> > I understand that this approach is controversial :) Also, one difference
> > from the vGPU case is that the number and count of vGPU types is static,
> > whereas with FPGAs, one could reprogram it to result in more or fewer
> > functions. That said, we could hopefully keep this analogy in mind for
> > future discussions.
> >
> > We 

Re: [openstack-dev] [nova] Proposing Eric Fried for nova-core

2018-03-26 Thread Alex Xu
+1

2018-03-27 10:00 GMT+08:00 melanie witt :

> Howdy everyone,
>
> I'd like to propose that we add Eric Fried to the nova-core team.
>
> Eric has been instrumental to the placement effort with his work on nested
> resource providers and has been actively contributing to many other areas
> of openstack [0] like project-config, gerritbot, keystoneauth, devstack,
> os-loganalyze, and so on.
>
> He's an active reviewer in nova [1] and elsewhere in openstack and reviews
> in-depth, asking questions and catching issues in patches and working with
> authors to help get code into merge-ready state. These are qualities I look
> for in a potential core reviewer.
>
> In addition to all that, Eric is an active participant in the project in
> general, helping people with questions in the #openstack-nova IRC channel,
> contributing to design discussions, helping to write up outcomes of
> discussions, reporting bugs, fixing bugs, and writing tests. His
> contributions help to maintain and increase the health of our project.
>
> To the existing core team members, please respond with your comments, +1s,
> or objections within one week.
>
> Cheers,
> -melanie
>
> [0] https://review.openstack.org/#/q/owner:efried
> [1] http://stackalytics.com/report/contribution/nova/90
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions

2018-03-19 Thread Alex Xu
2018-03-19 0:34 GMT+08:00 Nadathur, Sundar :

> Sorry for the delayed response. I broadly agree with previous replies.
> For the concerns about the impact of Cyborg weigher on scheduling
> performance , there are some options (apart from filtering candidates as
> much as possible in Placement):
> * Handle hosts in bulk by extending BaseWeigher
> <https://github.com/openstack/nova/blob/master/nova/weights.py#L67> and
> overriding weigh_objects
> <https://github.com/openstack/nova/blob/master/nova/weights.py#L92>(),
> instead of handling one host at a time.
>

It's still an external REST call; I guess people still won't like that.


>
* If we have to handle one host at a time for whatever reason, since the
> weigher is maintained by Cyborg, it could directly query Cyborg DB rather
> than go through Cyborg REST API. This will be not unlike other weighers.
>

That means when the cyborg DB schema changes, we have to restart the
nova-scheduler to update the weigher as well. That couples the two
services' upgrades together.


> Given these and other possible optimizations, it may be too soon to worry
> about the performance impact.
>

Yeah, maybe. What about preferred traits?


>
> I am working on a spec that will capture the flow discussed in the PTG. I
> will try to address these aspects as well.
>
> Thanks & Regards,
> Sundar
>
>
> On 3/8/2018 4:53 AM, Zhipeng Huang wrote:
>
> @jay I'm also against a weigher in nova/placement. This should be an
> optional step depends on vendor implementation, not a default one.
>
> @Alex I think we should explore the idea of preferred trait.
>
> @Mathew: Like Sean said, Cyborg wants to support both reprogrammable FPGA
> and pre-programed ones.
> Therefore it is correct that in your description, the programming
> operation should be a call from Nova to Cyborg, and cyborg will complete
> the operation while nova waits. The only problem is that the weigher step
> should be an optional one.
>
>
> On Wed, Mar 7, 2018 at 9:21 PM, Jay Pipes  wrote:
>
>> On 03/06/2018 09:36 PM, Alex Xu wrote:
>>
>>> 2018-03-07 10:21 GMT+08:00 Alex Xu :
>>>
>>>
>>>
>>> 2018-03-06 22:45 GMT+08:00 Mooney, Sean K :
>>>
>>> __ __
>>>
>>> __ __
>>>
>>> *From:* Matthew Booth [mailto:mbo...@redhat.com]
>>> *Sent:* Saturday, March 3, 2018 4:15 PM
>>> *To:* OpenStack Development Mailing List (not for usage questions)
>>> *Subject:* Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple
>>> functions
>>>
>>> __ __
>>>
>>> On 2 March 2018 at 14:31, Jay Pipes wrote:
>>>
>>> On 03/02/2018 02:00 PM, Nadathur, Sundar wrote:
>>>
>>> Hello Nova team,
>>>
>>>   During the Cyborg discussion at Rocky PTG, we
>>> proposed a flow for FPGAs wherein the request spec asks
>>> for a device type as a resource class, and optionally a
>>> function (such as encryption) in the extra specs. This
>>> does not seem to work well for the usage model that I’ll
>>> describe below.
>>>
>>> An FPGA device may implement more than one function. For
>>> example, it may implement both compression and
>>> encryption. Say a cluster has 10 devices of device type
>>> X, and each of them is programmed to offer 2 instances
>>> of function A and 4 instances of function B. More
>>> specifically, the device may implement 6 PCI functions,
>>> with 2 of them tied to function A, and the other 4 tied
>>> to function B. So, we could have 6 separate instances
>>> accessing functions on the same device.
>>>
>>> __ __
>>>
>>> Does this imply that Cyborg can't reprogram the FPGA at all?
>>>
>>> */[Mooney, Sean K] cyborg is intended to support fixed function
>>> accelerators also so it will not always be able to program the
>>> accelerator. In this case where an FPGA is preprogrammed with a
>>

Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions

2018-03-06 Thread Alex Xu
2018-03-07 10:21 GMT+08:00 Alex Xu :

>
>
> 2018-03-06 22:45 GMT+08:00 Mooney, Sean K :
>
>>
>>
>>
>>
>> *From:* Matthew Booth [mailto:mbo...@redhat.com]
>> *Sent:* Saturday, March 3, 2018 4:15 PM
>> *To:* OpenStack Development Mailing List (not for usage questions) <
>> openstack-dev@lists.openstack.org>
>> *Subject:* Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple
>> functions
>>
>>
>>
>> On 2 March 2018 at 14:31, Jay Pipes  wrote:
>>
>> On 03/02/2018 02:00 PM, Nadathur, Sundar wrote:
>>
>> Hello Nova team,
>>
>>  During the Cyborg discussion at Rocky PTG, we proposed a flow for
>> FPGAs wherein the request spec asks for a device type as a resource class,
>> and optionally a function (such as encryption) in the extra specs. This
>> does not seem to work well for the usage model that I’ll describe below.
>>
>> An FPGA device may implement more than one function. For example, it may
>> implement both compression and encryption. Say a cluster has 10 devices of
>> device type X, and each of them is programmed to offer 2 instances of
>> function A and 4 instances of function B. More specifically, the device may
>> implement 6 PCI functions, with 2 of them tied to function A, and the other
>> 4 tied to function B. So, we could have 6 separate instances accessing
>> functions on the same device.
>>
>>
>>
>> Does this imply that Cyborg can't reprogram the FPGA at all?
>>
>> *[Mooney, Sean K] cyborg is intended to support fixed function
>> accelerators also, so it will not always be able to program the accelerator.
>> In this case, where an FPGA is preprogrammed with a multi-function bitstream
>> that is statically provisioned, cyborg will not be able to reprogram the
>> slot if any of the functions from that slot are already allocated to an
>> instance. In this case it will have to treat it like a fixed function
>> device and simply allocate an unused VF of the correct type if available. *
>>
>>
>>
>>
>>
>> In the current flow, the device type X is modeled as a resource class, so
>> Placement will count how many of them are in use. A flavor for ‘RC
>> device-type-X + function A’ will consume one instance of the RC
>> device-type-X.  But this is not right because this precludes other
>> functions on the same device instance from getting used.
>>
>> One way to solve this is to declare functions A and B as resource classes
>> themselves and have the flavor request the function RC. Placement will then
>> correctly count the function instances. However, there is still a problem:
>> if the requested function A is not available, Placement will return an
>> empty list of RPs, but we need some way to reprogram some device to create
>> an instance of function A.
>>
>>
>> Clearly, nova is not going to be reprogramming devices with an instance
>> of a particular function.
>>
>> Cyborg might need to have a separate agent that listens to the nova
>> notifications queue and upon seeing an event that indicates a failed build
>> due to lack of resources, then Cyborg can try and reprogram a device and
>> then try rebuilding the original request.
>>
>>
>>
>> It was my understanding from that discussion that we intend to insert
>> Cyborg into the spawn workflow for device configuration in the same way
>> that we currently insert resources provided by Cinder and Neutron. So while
>> Nova won't be reprogramming a device, it will be calling out to Cyborg to
>> reprogram a device, and waiting while that happens.
>>
>> My understanding is (and I concede some areas are a little hazy):
>>
>> * The flavors says device type X with function Y
>>
>> * Placement tells us everywhere with device type X
>>
>> * A weigher orders these by devices which already have an available
>> function Y (where is this metadata stored?)
>>
>> * Nova schedules to host Z
>>
>> * Nova host Z asks cyborg for a local function Y and blocks
>>
>>   * Cyborg hopefully returns function Y which is already available
>>
>>   * If not, Cyborg reprograms a function Y, then returns it
>>
>> Can anybody correct me/fill in the gaps?
>>
>> *[Mooney, Sean K] that correlates closely to my recollection also. As for
>> the metadata I think the weigher may need to call to cyborg to retrieve
>> this as it will not be available in the host state object.*
>>
> Is it the nova scheduler weigher or we want to support weigh on placement?
> F

Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions

2018-03-06 Thread Alex Xu
2018-03-06 22:45 GMT+08:00 Mooney, Sean K :

>
>
>
>
> *From:* Matthew Booth [mailto:mbo...@redhat.com]
> *Sent:* Saturday, March 3, 2018 4:15 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions
>
>
>
> On 2 March 2018 at 14:31, Jay Pipes  wrote:
>
> On 03/02/2018 02:00 PM, Nadathur, Sundar wrote:
>
> Hello Nova team,
>
>  During the Cyborg discussion at Rocky PTG, we proposed a flow for
> FPGAs wherein the request spec asks for a device type as a resource class,
> and optionally a function (such as encryption) in the extra specs. This
> does not seem to work well for the usage model that I’ll describe below.
>
> An FPGA device may implement more than one function. For example, it may
> implement both compression and encryption. Say a cluster has 10 devices of
> device type X, and each of them is programmed to offer 2 instances of
> function A and 4 instances of function B. More specifically, the device may
> implement 6 PCI functions, with 2 of them tied to function A, and the other
> 4 tied to function B. So, we could have 6 separate instances accessing
> functions on the same device.
>
>
>
> Does this imply that Cyborg can't reprogram the FPGA at all?
>
> *[Mooney, Sean K] cyborg is intended to support fixed function accelerators
> also, so it will not always be able to program the accelerator. In this case,
> where an FPGA is preprogrammed with a multi-function bitstream that is
> statically provisioned, cyborg will not be able to reprogram the slot if
> any of the functions from that slot are already allocated to an instance. In
> this case it will have to treat it like a fixed function device and simply
> allocate an unused VF of the correct type if available. *
>
>
>
>
>
> In the current flow, the device type X is modeled as a resource class, so
> Placement will count how many of them are in use. A flavor for ‘RC
> device-type-X + function A’ will consume one instance of the RC
> device-type-X.  But this is not right because this precludes other
> functions on the same device instance from getting used.
>
> One way to solve this is to declare functions A and B as resource classes
> themselves and have the flavor request the function RC. Placement will then
> correctly count the function instances. However, there is still a problem:
> if the requested function A is not available, Placement will return an
> empty list of RPs, but we need some way to reprogram some device to create
> an instance of function A.
>
>
> Clearly, nova is not going to be reprogramming devices with an instance of
> a particular function.
>
> Cyborg might need to have a separate agent that listens to the nova
> notifications queue and upon seeing an event that indicates a failed build
> due to lack of resources, then Cyborg can try and reprogram a device and
> then try rebuilding the original request.
>
>
>
> It was my understanding from that discussion that we intend to insert
> Cyborg into the spawn workflow for device configuration in the same way
> that we currently insert resources provided by Cinder and Neutron. So while
> Nova won't be reprogramming a device, it will be calling out to Cyborg to
> reprogram a device, and waiting while that happens.
>
> My understanding is (and I concede some areas are a little hazy):
>
> * The flavors says device type X with function Y
>
> * Placement tells us everywhere with device type X
>
> * A weigher orders these by devices which already have an available
> function Y (where is this metadata stored?)
>
> * Nova schedules to host Z
>
> * Nova host Z asks cyborg for a local function Y and blocks
>
>   * Cyborg hopefully returns function Y which is already available
>
>   * If not, Cyborg reprograms a function Y, then returns it
>
> Can anybody correct me/fill in the gaps?
>
> *[Mooney, Sean K] that correlates closely to my recollection also. As for
> the metadata I think the weigher may need to call to cyborg to retrieve
> this as it will not be available in the host state object.*
>
Is this the nova scheduler weigher, or do we want to support weighing in
placement? A function is a trait, I think, so could we have
preferred_traits? I remember we talked about that parameter in the past,
but we didn't have a good use case at that time. This is a good use case.
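To make the idea concrete (purely hypothetical syntax; no such parameter
exists):

    GET /allocation_candidates
        ?resources=CUSTOM_ACCELERATOR_FPGA:1
        &preferred=CUSTOM_FUNCTION_Y

i.e. candidates already carrying the trait would sort ahead of ones that
need reprogramming, but both would still be returned.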


> Matt
>
>
>
> --
>
> Matthew Booth
>
> Red Hat OpenStack Engineer, Compute DFG
>
>
>
> Phone: +442070094448 <+44%2020%207009%204448> (UK)
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.o

Re: [openstack-dev] [nova][cyborg]Dublin PTG Cyborg Nova Interaction Discussion

2018-02-13 Thread Alex Xu
+1, I'm interested also.

2018-02-12 23:27 GMT+08:00 Eric Fried :

> I'm interested.  No date/time preference so far as long as it sticks to
> Monday/Tuesday.
>
> efried
>
> On 02/12/2018 09:13 AM, Zhipeng Huang wrote:
> > Hi Nova team,
> >
> > Cyborg will have ptg sessions on Mon and Tue from 2:00pm to 6:00pm, and
> > we would love to invite any of you guys who is interested in nova-cyborg
> > interaction to join the discussion. The discussion will mainly focus on:
> >
> > (1) Cyborg team recap on the resource provider features that are
> > implemented in Queens.
> > (2) Joint discussion on what will be the impact on Nova side and future
> > collaboration areas.
> >
> > The session is planned for 40 mins long.
> >
> > If you are interested plz feedback which date best suit for your
> > arrangement so that we could arrange the topic accordingly :)
> >
> > Thank you very much.
> >
> >
> >
> > --
> > Zhipeng (Howard) Huang
> >
> > Standard Engineer
> > IT Standard & Patent/IT Product Line
> > Huawei Technologies Co,. Ltd
> > Email: huangzhip...@huawei.com 
> > Office: Huawei Industrial Base, Longgang, Shenzhen
> >
> > (Previous)
> > Research Assistant
> > Mobile Ad-Hoc Network Lab, Calit2
> > University of California, Irvine
> > Email: zhipe...@uci.edu 
> > Office: Calit2 Building Room 2402
> >
> > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Adding Takashi Natsume to python-novaclient core

2018-02-10 Thread Alex Xu
+1

2018-02-09 23:01 GMT+08:00 Matt Riedemann :

> I'd like to add Takashi to the python-novaclient core team.
>
> python-novaclient doesn't get a ton of activity or review, but Takashi has
> been a solid reviewer and contributor to that project for quite awhile now:
>
> http://stackalytics.com/report/contribution/python-novaclient/180
>
> He's always fast to get new changes up for microversion support and help
> review others that are there to keep moving changes forward.
>
> So unless there are objections, I'll plan on adding Takashi to the
> python-novaclient-core group next week.
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] PTL Election Season

2018-01-22 Thread Alex Xu
Matt, thanks for your leadership and help over the past years! Your
obsessive paperwork has been really helpful, and I appreciate it.

2018-01-23 7:09 GMT+08:00 Matt Riedemann :

> On 1/15/2018 11:04 AM, Kendall Nelson wrote:
>
>> Election details: https://governance.openstack.org/election/
>>
>> Please read the stipulations and timelines for candidates and electorate
>> contained in this governance documentation.
>>
>> Be aware, in the PTL elections if the program only has one candidate,
>> that candidate is acclaimed and there will be no poll. There will only be a
>> poll if there is more than one candidate stepping forward for a program's
>> PTL position.
>>
>> There will be further announcements posted to the mailing list as action
>> is required from the electorate or candidates. This email is for
>> information purposes only.
>>
>> If you have any questions which you feel affect others please reply to
>> this email thread.
>>
>>
> To anyone that cares, I don't plan on running for Nova PTL again for the
> Rocky release. Queens was my fourth tour and it's definitely time for
> someone else to get the opportunity to lead here. I don't plan on going
> anywhere and I'll be here to help with any transition needed assuming
> someone else (or a couple of people hopefully) will run in the election.
> It's been a great experience and I thank everyone that has had to put up
> with me and my obsessive paperwork and process disorder in the meantime.
>
> --
>
> Thanks,
>
> Matt
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ResMgmt SIG]Proposal to form Resource Management SIG

2018-01-18 Thread Alex Xu
++, I also want to join this party :)

2018-01-09 8:40 GMT+08:00 Zhipeng Huang :

> Agree 100% to avoid regular meeting and it is better to have bi-weekly
> email report. Meeting should be arranged event based, and I think given the
> status of OpenStack community's work on resource provider, mostly what we
> need to do is attend k8s meetings (sig-scheduler, wg-resource-management,
> etc.)
>
> BTW for the RM SIG proposed here, let's not limit the scope to k8s only
> since we might have broader collaborative efforts happening in the future.
> k8s is our first primary target community to sync up with.
>
> On Tue, Jan 9, 2018 at 4:12 AM, Jay Pipes  wrote:
>
>> On 01/08/2018 12:26 PM, Zhipeng Huang wrote:
>>
>>> Hi all,
>>>
>>> With the maturing of resource provider/placement feature landing in
>>> OpenStack in recent release, and also in light of Kubernetes community
>>> increasing attention to the similar effort, I want to propose to form a
>>> Resource Management SIG as a contact point for OpenStack community to
>>> communicate with Kubernetes Resource Management WG[0] and other related
>>> SIGs.
>>>
>>> The formation of the SIG is to provide a gathering of similar interested
>>> parties and establish an official channel. Currently we have already
>>> OpenStack developers actively participating in kubernetes discussion (e.g.
>>> [1]), we would hope the ResMgmt SIG could further help such activities and
>>> better align the resource mgmt mechanism, especially the data modeling
>>> between the two communities (or even more communities with similar desire).
>>>
>>> I have floated the idea with Jay Pipes and Chris Dent and received
>>> positive feedback. The SIG will have a co-lead structure so that people
>>> could spearheading in the area they are most interested in. For example for
>>> me as Cyborg dev, I will mostly lead in the area of acceleration[2].
>>>
>>> If you are also interested please reply to this thread, and let's find a
>>> efficient way to form this SIG. Efficient means no extra unnecessary
>>> meetings and other undue burdens.
>>>
>>
>> +1
>>
>> From the Nova perspective, the scheduler meeting (which is Mondays at
>> 1400 UTC) is the primary meeting where resource tracking and accounting
>> issues are typically discussed.
>>
>> Chris Dent has done a fabulous job recording progress on the resource
>> providers and placement work over the last couple releases by issuing
>> status emails to the openstack-dev@ mailing list each Friday.
>>
>> I think having a bi-weekly cross-project (or even cross-ecosystem if
>> we're talking about OpenStack+k8s) status email reporting any big events in
>> the resource tracking world would be useful. As far as regular meetings for
>> a resource management SIG, I'm +0 on that. I prefer to have targeted
>> topical meetings over regular meetings.
>>
>> Best,
>> -jay
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co,. Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Why we return a redirect (302) when request to 'GET /v2.1'?

2017-11-02 Thread Alex Xu
2017-11-02 20:42 GMT+08:00 Sean Dague :

> On 11/01/2017 11:04 PM, Alex X wrote:
>
>> There is a bug complaining about the redirect returned by 'GET
>> /v2.1': https://launchpad.net/bugs/1728732
>>
>> 'GET /v2.1' will return a redirect to 'GET /v2.1/'. The response of
>> 'GET /v2.1/' is the API version info. This seems to have been the nova API
>> behaviour for a long time.
>>
>> In the keystone catalog, the endpoint should be the version API, I think.
>> For nova, 'GET /v2.1' returns a redirect instead of the version info.
>>
>> Does anybody know why we return a redirect?
>>
>
> I thought it was an artifact of the way that paste builds pipelines, and
> the way our resources need urls. I was trying to see if we generate it on
> our side, but I'm not seeing it, so I suspect this is just a consequence of
> the resource mapper and paste.


It is generated on our side:
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/routes.py#L410
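
For reference, producing that kind of redirect in a WSGI stack is a
one-liner; a minimal sketch (hypothetical code, not Nova's actual handler):

    import webob.dec
    import webob.exc

    @webob.dec.wsgify
    def version_redirect(req):
        # GET /v2.1 -> 302 with Location: /v2.1/
        return webob.exc.HTTPFound(location=req.url + '/')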



>
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How do you use the instance IP filter?

2017-11-02 Thread Alex Xu
FYI, Nova already uses a regex:
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L2408

2017-10-27 11:35 GMT+08:00 Matt Riedemann :

> On 10/26/2017 9:54 PM, Tony Breeds wrote:
>
>> Can you use RLIKE/REGEX? or is that too MySQL specific ?
>>
>
> I thought about that, and my gut response is 'no' because even if it does
> work for mysql, I'm assuming regex pattern matching for postgresql is
> different. And then you have different API behavior between clouds based on
> the backend database they are using, and now we've opened that whole can of
> worms again.
>
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api][nova] Why we return a redirect (302) when request to 'GET /v2.1'?

2017-11-01 Thread Alex Xu
There is a bug complaining about the redirect returned by 'GET /v2.1':
https://launchpad.net/bugs/1728732

'GET /v2.1' will return a redirect to 'GET /v2.1/'. The response of
'GET /v2.1/' is the API version info. This seems to have been the nova API
behaviour for a long time.

In the keystone catalog, the endpoint should be the version API, I think.
For nova, 'GET /v2.1' returns a redirect instead of the version info.

Does anybody know why we return a redirect?

Thanks
Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] ironic and traits

2017-10-23 Thread Alex Xu
It sounds like Ironic uses traits to configure the instance:
https://review.openstack.org/#/c/504952/5/specs/approved/config-template-traits.rst@95

The downside I can see is the extra burden added to placement.
As an example, the spec uses:
* CUSTOM_BM_CONFIG_BIOS_VMX_ON
* CUSTOM_BM_CONFIG_BIOS_VMX_OFF

Actually, placement only needs to find a host whose CPU has the VMX
feature, so only one trait, "HW_CPU_X86_VMX", is needed. But to use traits to
configure the instance, we have to add each possible value as a trait to
placement.

That isn't too terrible for a boolean value, but it doesn't scale if there
are 10 possible values, or if the value is an integer.

That sounds like we are putting information that isn't about scheduling into
placement, and that information adds an extra burden to placement.
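
To make the combinatorial point concrete (illustrative trait names only, in
the spirit of the spec's example):

    # Scheduling on a capability needs exactly one trait:
    scheduling_trait = 'HW_CPU_X86_VMX'

    # Carrying a config *value* in traits needs one custom trait per
    # possible value:
    config_traits = ['CUSTOM_BM_CONFIG_BIOS_VMX_%s' % v
                     for v in ('ON', 'OFF')]

    # A 10-valued option would need 10 CUSTOM_* traits, and an
    # integer-valued option cannot be enumerated at all.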

2017-10-23 22:09 GMT+08:00 Eric Fried :

> We discussed this a little bit further in IRC [1].  We're all in
> agreement, but it's worth being precise on a couple of points:
>
> * We're distinguishing between a "feature" and the "trait" that
> represents it in placement.  For the sake of this discussion, a
> "feature" can (maybe) be switched on or off, but a "trait" can either be
> present or absent on a RP.
> * It matters *who* can turn a feature on/off.
>   * If it can be done by virt at spawn time, then it makes sense to have
> the trait on the RP, and you can switch the feature on/off via a
> separate extra_spec.
>   * But if it's e.g. an admin action, and spawn has no control, then the
> trait needs to be *added* whenever the feature is *on*, and *removed*
> whenever the feature is *off*.
>
> [1]
> http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-10-23.log.html#t2017-10-23T13:12:13
>
> On 10/23/2017 08:15 AM, Sylvain Bauza wrote:
> >
> >
> > On Mon, Oct 23, 2017 at 2:54 PM, Eric Fried  > > wrote:
> >
> > I agree with Sean.  In general terms:
> >
> > * A resource provider should be marked with a trait if that feature
> >   * Can be turned on or off (whether it's currently on or not); or
> >   * Is always on and can't ever be turned off.
> >
> >
> > No, traits are not boolean. If a resource provider stops providing a
> > capability, then the existing related trait should just be removed,
> > that's it.
> > If you see a trait, that's just means that the related capability for
> > the Resource Provider is supported, that's it too.
> >
> > MHO.
> >
> > -Sylvain
> >
> >
> >
> > * A consumer wanting that feature present (doesn't matter whether
> it's
> > on or off) should specify it as a required *trait*.
> > * A consumer wanting that feature present and turned on should
> >   * Specify it as a required trait; AND
> >   * Indicate that it be turned on via some other mechanism (e.g. a
> > separate extra_spec).
> >
> > I believe this satisfies Dmitry's (Ironic's) needs, but also Jay's
> drive
> > for placement purity.
> >
> > Please invite me to the hangout or whatever.
> >
> > Thanks,
> > Eric
> >
> > On 10/23/2017 07:22 AM, Mooney, Sean K wrote:
> > >
> > >
> > >
> > >
> > > *From:*Jay Pipes [mailto:jaypi...@gmail.com
> > ]
> > > *Sent:* Monday, October 23, 2017 12:20 PM
> > > *To:* OpenStack Development Mailing List
> >  > >
> > > *Subject:* Re: [openstack-dev] [ironic] ironic and traits
> > >
> > >
> > >
> > > Writing from my phone... May I ask that before you proceed with
> any plan
> > > that uses traits for state information that we have a hangout or
> > > videoconference to discuss this? Unfortunately today and tomorrow
> I'm
> > > not able to do a hangout but I can do one on Wednesday any time of
> the day.
> > >
> > >
> > >
> > > [Mooney, Sean K] On the uefi boot topic, I did bring up at the
> > > ptg that we wanted to standardize traits for "verified boot",
> > > which included a trait for uefi secure boot enabled and one to
> > > indicate a hardware root of trust, e.g. intel boot guard or similar.
> > >
> > > We distinctly wanted to be able to tag nova compute hosts with those
> > > new traits so we could require that vms that request
> > > a host with uefi secure boot enabled and a hardware root of trust are
> > > scheduled only to those nodes.
> > >
> > > There are many other examples that affect both vms and bare metal, such
> > > as ecc/interleaved memory, cluster on die,
> > > l3 cache code and data prioritization, vt-d/vt-c, HPET, hyper
> > > threading, power states ... all of these features may be present on the
> > > platform,
> > > but I also need to know if they are turned on. Ruling out state in
> > > traits means all of this logic will eventually get pu

Re: [openstack-dev] [nova][api] why need PUT /servers/{server_id}/metadata/{key} ?

2017-09-20 Thread Alex Xu
2017-09-20 21:14 GMT+08:00 Matt Riedemann :

> On 9/20/2017 12:48 AM, Chen CH Ji wrote:
>
>> In analyzing other code, it seems we don't need PUT
>> /servers/{server_id}/metadata/{key}?
>>
>> The id is only used to check whether it's in the body, and we will
>> honor the whole body (body['meta'] in the code):
>> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/server_metadata.py#L80
>>
>> looks like it's identical to
>> PUT /servers/{server_id}/metadata
>>
>> Why do we need this API? Or should it be something like
>>
>> PUT /servers/{server_id}/metadata/{key}, but where we only accept a value
>> to modify the meta given by {key} on the API side?
>>
>> Best Regards!
>>
>> Kevin (Chen) Ji 纪 晨
>>
>> Engineer, zVM Development, CSTL
>> Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
>> Phone: +86-10-82451493
>> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
>> Beijing 100193, PRC
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> This API is a bit confusing, and the code is too since it all goes down to
> some common code, and I think you're missing the 'delete' flag:
>
> https://github.com/openstack/nova/blob/5bf1bb47c7e17c26592a699d07c2faa59d98bfb8/nova/compute/api.py#L3830
>
> If delete=False, as it is in this case, we only add/update the existing
> metadata with the new metadata from the request body. If delete=True, then
> we overwrite the instance metadata with whatever is in the request.
>
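
(In other words, a dict merge versus a wholesale replace; a rough
illustrative helper, not Nova's actual code:)

    def apply_metadata(existing, new, delete=False):
        if delete:
            # replace the instance metadata wholesale
            return dict(new)
        # otherwise only add/update the given keys
        merged = dict(existing)
        merged.update(new)
        return merged
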
> Does that answer your question?
>
> This API is problematic and we have bugs against it since it's not atomic,
> i.e. two concurrent requests will overwrite one of them. We should really
> have a generation ID or etag on this data to be sure it's atomically
> updated.


Is there any use case where people update a server's metadata that frequently?


>
> --
>
> Thanks,
>
> Matt
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Less than 24 hours to Pike RC2

2017-08-23 Thread Alex Xu
2017-08-24 11:34 GMT+08:00 Matt Riedemann :

> This is just a reminder that we're in the final stretch for Pike RC2 which
> happens tomorrow.
>
> There are a couple of fixes in flight yet for RC2 at the top of the
> etherpad:
>
> https://etherpad.openstack.org/p/nova-pike-release-candidate-todo
>
> And another bug that Alex pointed out tonight not yet reported in
> launchpad, but we don't cleanup allocations from the current node before
> doing a reschedule. If you have Ocata computes or are doing super-conductor
> mode tiered conductors for cells v2 then it's not an issue, but any
> installs that are doing single conductor relying on reschedules will have
> this issue, which I'd consider something we should fix for RC2 as it means
> we'll be reporting usage against compute nodes in Placement which isn't
> really there, thus potentially taking them out of scheduling decisions.
>

here is the bug https://bugs.launchpad.net/nova/+bug/1712718 and the fix
https://review.openstack.org/496995

>
> If you find anything else in the next few hours, please report a bug and
> tag it with pike-rc-potential.
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposing Balazs Gibizer for nova-core

2017-08-22 Thread Alex Xu
Yay!

2017-08-23 9:18 GMT+08:00 Matt Riedemann :

> I'm proposing that we add gibi to the nova core team. He's been around for
> awhile now and has shown persistence and leadership in the multi-release
> versioned notifications effort, which also included helping new
> contributors to Nova get involved which helps grow our contributor base.
>
> Beyond that though, gibi has a good understanding of several areas of
> Nova, gives thoughtful reviews and feedback, which includes -1s on changes
> to get them in shape before a core reviewer gets to them, something I
> really value and look for in people doing reviews who aren't yet on the
> core team. He's also really helpful with not only reporting and triaging
> bugs, but writing tests to recreate bugs so we know when they are fixed,
> and also works on fixing them - something I expect from a core maintainer
> of the project.
>
> So to the existing core team members, please respond with a yay/nay and
> after about a week or so we should have a decision (knowing a few cores are
> on vacation right now).
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should PUT /os-services be idempotent?

2017-07-11 Thread Alex Xu
2017-07-12 9:18 GMT+08:00 Matt Riedemann :

> I'm looking for some broader input on something being discussed in this
> change:
>
> https://review.openstack.org/#/c/464280/21/nova/api/openstac
> k/compute/services.py
>
> This is collapsing the following APIs into a single API:
>
> Old:
>
> * PUT /os-services/enable
> * PUT /os-services/disable
> * PUT /os-services/disable-log-reason
> * PUT /os-services/force-down
>
> New:
>
> * PUT /os-services
>
> With the old APIs, if you tried to enable an already enabled service, it
> was not an error. The same if you tried to disable an already disabled
> service. It doesn't change anything, but it's not an error.
>
> The question coming up in the new API is whether trying to enable an enabled
> service should be a 400, or trying to disable a disabled service. The way I
> wrote the new API, those are not 400 conditions. They don't do anything,
> like before, but they aren't errors.
>

Sorry, I didn't describe it clearly in the comment.

Some of those comments are about saving a DB call with more condition checks:
if we enable an already enabled service, we don't need a DB call, and we can
just return 200 to the user directly.

One of those comments is about when the API user specifies 'status=enabled'
and 'disabled_reason' in the request body: we just ignore the
'disabled_reason' and don't save it into the db either. That doesn't sound
right. We should return 400 to the API user; you can't specify both
'status=enabled' and 'disabled_reason'.
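
A rough sketch of the check being argued for (hypothetical helper names, not
the actual patch):

    class BadRequest(Exception):
        pass

    def validate_service_update(body):
        status = body.get('status')
        reason = body.get('disabled_reason')
        if status == 'enabled' and reason:
            # enabling a service while passing a disabled_reason is
            # contradictory input, so reject it up front with a 400
            raise BadRequest('disabled_reason requires status=disabled')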


>
> Looking at [1] it seems this should not be an error condition if you're
> trying to update the state of a resource and it's already at that state.
>
> I don't have a PhD in REST though so would like broader discussion on this.
>
> [1] http://www.restapitutorial.com/lessons/idempotency.html
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update 27

2017-07-07 Thread Alex Xu
2017-07-07 19:44 GMT+08:00 Chris Dent :

>
> After 40 days in the desert I've returned with placement update 27.
>
> Unfortunately, as far as I can tell, no one did any updates while I
> was gone so I don't have anything to crib from to have the full
> story on what's going on. I suspect I will miss some relevant
> reviews when making this list. If I have, please let me know.
> Otherwise, let's begin:
>
> # What Matters Most
>
> Claims in the scheduler remains the key feature we'd like to get in
> before feature freeze. After some hiccups on how to do it, making
> requests of the new /allocation_candidates (of which more, below) is
> the way to go. Changes for that are starting at
>
> https://review.openstack.org/#/c/476631/
>
> # What's Changed
>
> As mentioned, there's now a new URL in the placement API:
> GET /allocation_candidates. It has a similar interface to GET
> /resource_providers (in that you can filter the results by the kind
> of resources required) but the information is formatted as a
> two-tuple of lists of allocation requests and a dictionary of
> resource provider information. The latter will provide the initial
> list of available resource providers and augment the process of
> filtering and weighing those providers. The former provides a
> collection of correctly formed JSON bodies that can be sent in a PUT
> to /allocations/{consumer_uuid} when making a claim.
>
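(For reference, the payload shape is roughly as below; illustrative UUID and
amounts, not a verbatim response:)

    {"allocation_requests": [
         {"allocations": [
             {"resource_provider": {"uuid": "30742363-...-c4f6"},
              "resources": {"VCPU": 1, "MEMORY_MB": 512}}]}],
     "provider_summaries": {
         "30742363-...-c4f6": {"resources": {
             "VCPU": {"capacity": 16, "used": 2},
             "MEMORY_MB": {"capacity": 32768, "used": 1024}}}}}
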
> I'm still a bit confused about where the concept of "alternatives"
> that are going to be passed to the cell conductors fits into this,
> but I guess that will become more clear soon.
>
> It also seems like this model creates a pretty strong conceptual
> coupling between a thing which behaves like a nova-scheduler
> (request, process, then claim resources). As placement becomes
> useful to other services it will be important to revisit some of
> these decisions and make sure the HTTP API is not imposing too many
> behaviour requirements on the client side (otherwise why bother
> having an HTTP API?). But that's for later. Right now we're on a
> tight schedule trying to make sure that claims get in in Ocata.
>
> Because there's a bit of a dependency hierarchy with the various
> threads of work going on in placement, the work on claims may punt
> traits and/or nested resource providers further down the timeline.
> Work continues on all three concurrently.
>
> Another change is that allocations now include project id and user
> id information and usages by those id can be retrieved.
>
> # Help Wanted
>
> Areas where volunteers are needed.
>
> * General attention to bugs tagged placement:
>  https://bugs.launchpad.net/nova/+bugs?field.tag=placement
>
> # Main Themes
>
> ## Claims in the Scheduler
>
> As described above there's been a change in direction. That probably
> means some or all of the code now at
>
> https://review.openstack.org/#/q/status:open+topic:bp/placement-claims
>
> can be abandoned in favor of the work at
>
> https://review.openstack.org/#/q/topic:bp/placement-allocation-requests+status:open
>
> The main starting point for that is
>
> https://review.openstack.org/#/c/476631/
>
> ## Traits
>
> The concept of traits now exists in the placement service, but
> filtering resource providers on traits is in flux. With the advent
> of /allocation_candidates as the primary scheduling interface, that
> needs to support traits. Work for that is in a stack starting at
>
> https://review.openstack.org/#/c/478464/
>
> It's not yet clear if we'll want to support traits at both
> /allocation_candidates and /resource_providers. I think we should,
> but the immediate need is on /allocation_candidates.
>

For traits support in /allocation_candidates, I started some patches:
https://review.openstack.org/478464
https://review.openstack.org/479766
https://review.openstack.org/479776


>
> There's some proposed code to get the latter started:
>
> https://review.openstack.org/#/c/474602/


>
> ## Shared Resource Providers
>
> Support for shared resource providers is "built in" to the
> /allocation_candidates concept and one of the drivers for having it.
>
> ## Nested Resource Providers
>
> Work continues on nested resource providers.
>
>   https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers
>
> The need with these is simply more review, but they are behind
> claims in priority.
>
> ## Docs
>
> Lots of placement-related api docs have merged or are in progress:
>
> https://review.openstack.org/#/q/status:open+topic:cd/placement-api-ref
>
> Shortly there will be a real publishing job:
>
> https://review.openstack.org/#/c/480991/
>
> and the tooling which tests that new handlers are documented
> will be turned on:
>
> https://review.openstack.org/#/c/480924/
>
> Some changes have been proposed to document the scheduler's
> workflow, including visual aids, starting at:
>
> https://review.openstack.org/#/c/475810/
>
> # Other Code/Specs
>
> * https:

Re: [openstack-dev] [nova] The definition of 'Optional' parameter in API reference

2017-07-04 Thread Alex Xu
2017-07-04 15:40 GMT+08:00 Ghanshyam Mann :

> On Mon, Jul 3, 2017 at 1:38 PM, Takashi Natsume
>  wrote:
> > Hi, all.
> >
> > In Nova API reference, there is inconsistency in
> > whether to define parameters added in new microversion as 'optional' or
> not.
>
> Those should be defined based on how they are defined in the respective
> microversion. If they are 'optional' in that microversion they should
> be mentioned as 'optional' and vice versa. Any parameter added in a
> microversion is mentioned as 'New in version 2.xy', which shows the
> non-availability of that parameter in earlier versions. The same applies
> to removal of a parameter.
>
> But if any microversion changes a parameter from optional to required,
> or vice versa, then it is tricky, but IMO documenting the latest
> behavior is the right thing, with clear notes.
> For example, in microversion 2.37, 'network' in the request was made
> required instead of optional. In this case, the api-ref has the latest
> behavior of that param, which is 'required', and clear notes about
> until when it was optional and from when it is mandatory.
>
> In all cases, the doc should reflect the latest behavior of the param, with
> notes (manual or auto-generated with min_version & max_version).
>

++


>
> >
> > In the case that the parameter is always included in the response after a
> > certain microversion,
> > some parameters(e.g. 'type' [1]) are defined as 'required', but some
> > parameters (e.g. 'project_id', 'user_id'[2])
> > are defined as 'optional'.
> >
> > [1] List Keypairs in Keypairs (keypairs)
> > https://developer.openstack.org/api-ref/compute/?expanded=list-keypairs-detail#list-keypairs


'keypairs_links' in the response should be a required parameter, because
it always shows up after microversion 2.35.


>
> >
> > [2] List Server Groups in Server groups (os-server-groups)
> > https://developer.openstack.org/api-ref/compute/?expanded=list-server-groups-detail#list-server-groups
>
> 'project_id', 'user_id' are introduced as 'required' from version 2.13
> [2] and should be added as 'required' in api doc also. i reported bug
> on this - https://bugs.launchpad.net/nova/+bug/1702238
>
>
> >
> > In the case that the parameter is always included in the response after a
> > certain microversion,
> > should it be defined as 'required' instead of 'optional'?
> >
> > Regards,
> > Takashi Natsume
> > NTT Software Innovation Center
> > E-mail: natsume.taka...@lab.ntt.co.jp
> >
>
> ..1 https://developer.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server
> ..2 https://github.com/openstack/nova/blob/038619cce803c3522701886aa59c0c2750532b3a/nova/api/openstack/compute/server_groups.py#L104-L106
>
> -gmann
>
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-20 Thread Alex Xu
2017-06-19 22:17 GMT+08:00 Jay Pipes :

> On 06/19/2017 09:04 AM, Edward Leafe wrote:
>
>> Current flow:
>> * Scheduler gets a req spec from conductor, containing resource
>> requirements
>> * Scheduler sends those requirements to placement
>> * Placement runs a query to determine the root RPs that can satisfy those
>> requirements
>>
>
> Not root RPs. Non-sharing resource providers, which currently effectively
> means compute node providers. Nested resource providers isn't yet merged,
> so there is currently no concept of a hierarchy of providers.
>
> * Placement returns a list of the UUIDs for those root providers to
>> scheduler
>>
>
> It returns the provider names and UUIDs, yes.
>
> * Scheduler uses those UUIDs to create HostState objects for each
>>
>
> Kind of. The scheduler calls ComputeNodeList.get_all_by_uuid(), passing
> in a list of the provider UUIDs it got back from the placement service. The
> scheduler then builds a set of HostState objects from the results of
> ComputeNodeList.get_all_by_uuid().
>
> The scheduler also keeps a set of AggregateMetadata objects in memory,
> including the association of aggregate to host (note: this is the compute
> node's *service*, not the compute node object itself, thus the reason
> aggregates don't work properly for Ironic nodes).
>
> * Scheduler runs those HostState objects through filters to remove those
>> that don't meet requirements not selected for by placement
>>
>
> Yep.
>
> * Scheduler runs the remaining HostState objects through weighers to order
>> them in terms of best fit.
>>
>
> Yep.
>
> * Scheduler takes the host at the top of that ranked list, and tries to
>> claim the resources in placement. If that fails, there is a race, so that
>> HostState is discarded, and the next is selected. This is repeated until
>> the claim succeeds.
>>
>
> No, this is not how things work currently. The scheduler does not claim
> resources. It selects the top (or random host depending on the selection
> strategy) and sends the launch request to the target compute node. The
> target compute node then attempts to claim the resources and in doing so
> writes records to the compute_nodes table in the Nova cell database as well
> as the Placement API for the compute node resource provider.
>
> * Scheduler then creates a list of N UUIDs, with the first being the
>> selected host, and the the rest being alternates consisting of the next
>> hosts in the ranked list that are in the same cell as the selected host.
>>
>
> This isn't currently how things work, no. This has been discussed, however.
>
> * Scheduler returns that list to conductor.
>> * Conductor determines the cell of the selected host, and sends that list
>> to the target cell.
>> * Target cell tries to build the instance on the selected host. If it
>> fails, it unclaims the resources for the selected host, and tries to claim
>> the resources for the next host in the list. It then tries to build the
>> instance on the next host in the list of alternates. Only when all
>> alternates fail does the build request fail.
>>
>
> This isn't currently how things work, no. There has been discussion of
> having the compute node retry alternatives locally, but nothing more than
> discussion.
>
> Proposed flow:
>> * Scheduler gets a req spec from conductor, containing resource
>> requirements
>> * Scheduler sends those requirements to placement
>> * Placement runs a query to determine the root RPs that can satisfy those
>> requirements
>>
>
> Yes.
>
> * Placement then constructs a data structure for each root provider as
>> documented in the spec. [0]
>>
>
> Yes.
>
> * Placement returns a number of these data structures as JSON blobs. Due
>> to the size of the data, a page size will have to be determined, and
>> placement will have to either maintain that list of structured datafor
>> subsequent requests, or re-run the query and only calculate the data
>> structures for the hosts that fit in the requested page.
>>
>
> "of these data structures as JSON blobs" is kind of redundant... all our
> REST APIs return data structures as JSON blobs.
>
> While we discussed the fact that there may be a lot of entries, we did not
> say we'd immediately support a paging mechanism.
>
> * Scheduler continues to request the paged results until it has them all.
>>
>
> See above. Was discussed briefly as a concern but not work to do for first
> patches.
>
> * Scheduler then runs this data through the filters and weighers. No
>> HostState objects are required, as the data structures will contain all the
>> information that scheduler will need.
>>
>
> No, this isn't correct. The scheduler will have *some* of the information
> it requires for weighing from the returned data from the GET
> /allocation_candidates call, but not all of it.
>
> Again, operators have insisted on keeping the flexibility currently in the
> Nova scheduler to weigh/sort compute nodes by things like thermal metrics
> and kinds of data that the Placement API will never be

[openstack-dev] [nova][api] Strict validation in query parameters

2017-06-15 Thread Alex Xu
We added a new decorator, 'query_schema', to support validating query
parameters with JSON-Schema.

It provides stricter validation as below (a small sketch follows the patch
link):
* Set 'additionalProperties=False' in the schema, which means any invalid
query parameter is rejected and HTTPBadRequest 400 is returned to the user.
* Use the macro function 'single_param' to declare that a specific query
parameter supports only a single value. For example, the 'marker' parameter
for pagination actually has only one valid value; if the user specifies
multiple values, "marker=1&marker=2", the validation will return
400 to the user.

Currently there is patch related to this:
https://review.openstack.org/#/c/459483/13/nova/api/openstack/compute/schemas/server_migrations.py
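
To illustrate (the property names are examples for this sketch, not the real
server_migrations schema):

    from nova.api.validation import parameter_types

    example_query_schema = {
        'type': 'object',
        'properties': {
            # single-value only: 'marker=1&marker=2' fails with a 400
            'marker': parameter_types.single_param({'type': 'string'}),
        },
        # any query parameter not listed above is rejected with a 400
        'additionalProperties': False,
    }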

So my question is:
are we all good with this strict validation in all future microversions?

I don't remember us explicitly agreeing on this anywhere; I just want to
double-check that this is the direction everybody wants to go.

Thanks
Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [libvirt] Whether kvm supports NVIDIA VGPU

2017-04-17 Thread Alex Xu
2017-04-15 3:03 GMT+08:00 Jay Pipes :

> On 04/12/2017 10:53 PM, 文峰sx-9149 wrote:
>
>> Will the openstack or libvirt (kvm) support NVIDIA VGPU?
>>  I am here
>> 
>> to see a mail introduction libvirt kvm support VGPU.
>>  But I do not know the current development situation of this feature.
>> Who can tell me about VGPU in Openstack?
>> Thanks.
>>
>
> A number of things need to happen before vGPU resources are a reality in
> OpenStack/Nova. In order, they are:
>
> 1) Completion of the "traits framework" for resource providers [1]. This
> should be completed in Pike.
>

Yea, the traits API is already merged; just one patch is left:
https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/resource-provider-traits


>
> 2) Completion of the "nested resource providers framework" [2]. This is
> critical because physical GPUs (or physical GPU *groups* in the case of
> XenServer) are child providers to the compute node resource provider and
> need to be tracked in a hierarchical relationship for resource accounting
> purposes. It is a stretch goal to get this work complete for Pike.
>
> 3) The spec for VGPU resources needs to be approved and merged [3]. This
> should happen today or Monday.
>
> 4) The os-traits library [4] needs to have GPU traits added to it.
> Jianghua from Citrix and myself are working on this.
>
> 5) The virt driver's get_inventory() methods [5] need to be reworked to
> account for physical GPUs (or physical GPU groups in the case of XenServer)
> having a set inventory of VGPU resources for each unique combination of max
> resolution size and other traits.
>
> 6) The flavor extra specs and image metadata need to be updated to allow
> an admin to configure and a user to request one or more VGPU resources from
> a VGPU resource provider having a set of required traits.
>

This spec, https://review.openstack.org/#/c/351063/, is going to enable
configuring a flavor to include a set of required or preferred traits.
But it was abandoned by you; I'm not sure of the reason yet, so I'll try to
catch you later.


>
> Best,
> -jay
>
> [1] https://blueprints.launchpad.net/nova/+spec/resource-provider-traits
> [2] https://blueprints.launchpad.net/nova/+spec/nested-resource-providers
> [3] https://review.openstack.org/#/c/450122/
> [4] https://github.com/openstack/os-traits
> [5] https://github.com/openstack/nova/blob/master/nova/virt/driver.py#L778
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Usability question for the server migrations API

2017-04-16 Thread Alex Xu
2017-04-15 4:38 GMT+08:00 Chet Burgess :

> On Fri, Apr 14, 2017 at 1:27 PM, Matt Riedemann 
> wrote:
>
>> The GET /servers/{server_id}/migrations API only lists in-progress live
>> migrations. This is an artifact of when it was originally introduced as the
>> os-migrations API which was tightly coupled with the API operation to
>> cancel a live migration.
>>
>> There is a spec [1] which is now approved which proposes to expand that
>> to also return other types of in-progress migrations, like cold migrations,
>> resizes and evacuations.
>>
>> What I don't like about the proposal is that it still filters out
>> completed migrations from being returned. I never liked the original design
>> where only in-progress live migrations would be returned. I understand why
>> it was done that way, as a convenience for using those results to then
>> cancel a live migration, but seriously that's something that can be
>> filtered out properly.
>>
>> So what I'd propose is that in a new microversion, we'd return all
>> migration records for a server, regardless of status. We could provide a
>> status filter query parameter if desired to just see in-progress
>> migrations, or completed migrations, etc. And the live migration cancel
>> action API would still validate that the requested migration to cancel is
>> indeed in progress first, else it's a 400 error.
>>
>> The actual migration entries in the response are quite detailed, so if
>> that's a problem, we could change listing to just show some short info (id,
>> status, source and target host), and then leave the actual details for the
>> show API.
>>
>> What do operators think about this? Is this used at all? Would you like
>> to get all migrations and not just in-progress migrations, with the ability
>> to filter as necessary?
>>
>> [1] https://review.openstack.org/#/c/407237/
>
>
> +1
>
> I would love to see this. Our support team frequently has to figure out
> the "history" of a VM and today they have to use tool that relies on logs
> and/or the DB to figure out where a VM used to be and when it was moved. It
> would wonderful if that whole tool can just be replaced with a single call
> to the nova API to return a full history.
>

Chet, do you have a requirement to query the migrations for multiple VMs?
'/servers/{uuid}/migrations' will be painful for that.

Also note that we still have the '/os-migrations' API; it returns all
the migration records in any status for all the VMs, and it supports
filters like 'instance_uuid', 'status', and 'migration_type', etc. I can't
remember clearly whether we said we would deprecate it; at least for now,
we haven't deprecated it yet. I want to figure out whether it still has a
useful use case for querying multiple VMs' migration records.


>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][deployment] FYI: changes to cells v2 setup guide (pike only)

2017-04-16 Thread Alex Xu
Is it strange that 'nova service-list' and 'nova host-list' return
hosts which don't have host mappings yet?

How is the user supposed to know whether a host was added to a cell or not?

2017-04-14 23:45 GMT+08:00 Matt Riedemann :

> Nova is working on adding multi-cell aware support to the compute API. A
> side effect of this is we can now have a chicken-and-egg situation during
> deployment such that if your tooling is depending on the compute API to
> list compute hosts, for example, before running the discover_hosts command,
> nothing will actually show up. This is because to list compute hosts, like
> using 'nova hypervisor-list', we get those from the cells now and until you
> run discover_hosts, they aren't mapped to a cell.
>
> The solution is to use "nova service-list --binary nova-compute" instead
> of "nova hypervisor-list" since we can pull services from all cells before
> the hosts are mapped using discover_hosts.
>
> I have a patch up to update our docs and add a release note:
>
> https://review.openstack.org/#/c/456923/
>
> I'll be updating the official install guide docs later.
>
> Note that this is master branch (Pike) only, this does not impact Ocata.
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] quota-class-show not sync to quota-show

2017-04-11 Thread Alex Xu
We have talked about removing the quota-class API multiple times (
http://lists.openstack.org/pipermail/openstack-dev/2016-July/099218.html)

I guess we can deprecate the entire quota-class API directly.

2017-04-07 18:19 GMT+08:00 Chen CH Ji :

> Version 2.35 removed most deprecated output like floating ip etc so we
> won't have following in quota-show output
> | floating_ips | 10 |
> | fixed_ips | -1 |
> | security_groups | 10 |
> | security_group_rules | 20 |
>
> however, quota-class-show still have those output, should we use 2.35 to
> fix this bug or add a new microversion or because os-quota-class-sets is
> about to deprecate, we can let it be ? Thanks
>
> DEBUG (session:347) REQ: curl -g -i -X GET http://192.168.123.10:8774/v2.1/os-quota-class-sets/1 -H "OpenStack-API-Version: compute 2.41" -H
> "User-Agent: python-novaclient" -H "Accept: application/json" -H
> "X-OpenStack-Nova-API-Version: 2.41" -H "X-Auth-Token: {SHA1}
> 5008bb2787a9548d65b063f4db2525b4e3bf7163"
>
> RESP BODY: {"quota_class_set": {"injected_file_content_bytes": 10240,
> "metadata_items": 128, "ram": 51200, "floating_ips": 10, "key_pairs": 100,
> "id": "1", "instances": 10, "security_group_rules": 20, "injected_files":
> 5, "cores": 20, "fixed_ips": -1, "injected_file_path_bytes": 255,
> "security_groups": 10}}
>
> Best Regards!
>
> Kevin (Chen) Ji 纪 晨
>
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
> Phone: +86-10-82451493 <010%208245%201493>
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> Beijing 100193, PRC
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] What the behavior of AddFixedIp API should be?

2017-03-29 Thread Alex Xu
Oops, sorry, the correct link is https://review.openstack.org/#/c/384261/;
I must have removed the last number accidentally.

2017-03-30 14:34 GMT+08:00 Kevin Benton :

> Not sure what you meant to link to, but that's not a spec. :)
>
> On Wed, Mar 29, 2017 at 11:21 PM, Alex Xu  wrote:
>
>> I just move the spec into Pike release https://review.opensta
>> ck.org/#/c/38426.
>>
>> The problem description section describes the strange API behaviour, and
>> proposes to deprecate the API, since there isn't a clear use case for this
>> API.
>>
>> 2017-03-29 8:59 GMT+08:00 Kevin Benton :
>>
>>> +1. If there is a use case missing from the neutron API that this
>>> allows, we can also expand the API to address it.
>>>
>>> On Mar 28, 2017 07:16, "Matt Riedemann"  wrote:
>>>
>>>> On 3/27/2017 11:42 PM, Rui Chen wrote:
>>>>
>>>>> Thank you Matt, the background information is important. Seems all the
>>>>> peoples don't know how the add-fixed-ip API works,
>>>>> and there is no exact use case about it. Now neutron port-update API
>>>>> also support to set multiple fixed ip for a port, and
>>>>> the fixed-ip updating will sync to nova side automatically (I had
>>>>> verified it in my latest devstack). Updating fixed-ip for
>>>>> specified port is easier to understand for me in multiple nics case
>>>>> than
>>>>> nova add-fixed-ip API.
>>>>>
>>>>> So if others known the orignal API design or had used nova add/remove
>>>>> fixed-ip API and would like to show your use cases,
>>>>> it's nice for us to understand how the API works and when we should use
>>>>> it, we can update the api-ref and add exact usage,
>>>>> avoid users' confusion about it. Feel free to reply something, thank
>>>>> you.
>>>>>
>>>>>
>>>> If the functionality is available via Neutron APIs, we should just
>>>> deprecate the multinic API like we did for the other network API proxies in
>>>> microversion 2.36. This reminds me that Alex Xu had a blueprint for
>>>> deprecating the multinic API [1] but it needs to be updated for Pike.
>>>>
>>>> [1] https://review.openstack.org/#/c/384261/
>>>>
>>>> --
>>>>
>>>> Thanks,
>>>>
>>>> Matt
>>>>
>>>> 
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] What the behavior of AddFixedIp API should be?

2017-03-29 Thread Alex Xu
I just move the spec into Pike release
https://review.openstack.org/#/c/38426.

The problem description section describes the strange API behaviour, and
proposes to deprecate the API, since there isn't a clear use case for this
API.

2017-03-29 8:59 GMT+08:00 Kevin Benton :

> +1. If there is a use case missing from the neutron API that this allows,
> we can also expand the API to address it.
>
> On Mar 28, 2017 07:16, "Matt Riedemann"  wrote:
>
>> On 3/27/2017 11:42 PM, Rui Chen wrote:
>>
>>> Thank you Matt, the background information is important. Seems all the
>>> peoples don't know how the add-fixed-ip API works,
>>> and there is no exact use case about it. Now neutron port-update API
>>> also support to set multiple fixed ip for a port, and
>>> the fixed-ip updating will sync to nova side automatically (I had
>>> verified it in my latest devstack). Updating fixed-ip for
>>> specified port is easier to understand for me in multiple nics case than
>>> nova add-fixed-ip API.
>>>
>>> So if others known the orignal API design or had used nova add/remove
>>> fixed-ip API and would like to show your use cases,
>>> it's nice for us to understand how the API works and when we should use
>>> it, we can update the api-ref and add exact usage,
>>> avoid users' confusion about it. Feel free to reply something, thank you.
>>>
>>>
>> If the functionality is available via Neutron APIs, we should just
>> deprecate the multinic API like we did for the other network API proxies in
>> microversion 2.36. This reminds me that Alex Xu had a blueprint for
>> deprecating the multinic API [1] but it needs to be updated for Pike.
>>
>> [1] https://review.openstack.org/#/c/384261/
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Removes the extra URL routes in Nova API

2017-03-29 Thread Alex Xu
Currently I'm working on removing stevedore [0] and moving the URL
route mappings into a plain list [1].

But actually, the original URL mappings which Nova created [2] are more
than that list, because the interface 'routes.Mapper.resource' creates a
set of routes conforming to the Atom publishing protocol [3].

So there are some URL mappings we never documented anywhere (at least
none that I know of):

POST /servers.:(format)
GET /servers/detail.:(format)

GET /servers/new.:(format)  (the API doesn't work, return 404)
GET /servers/new (the API doesn't work, return 404)
GET /servers/:(id)/edit.:(format) (the API doesn't work, return 404)

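You can see where these extra routes come from with the plain 'routes'
library, outside of Nova (a standalone sketch; the exact match output depends
on the routes version):

    import routes

    mapper = routes.Mapper()
    # mapper.resource() generates the Atom-publishing-style route set:
    # index/create/show/update/delete plus the 'new' and 'edit' member
    # routes and the .:(format) variants listed above.
    mapper.resource('server', 'servers')

    environ = {'REQUEST_METHOD': 'GET'}
    print(mapper.match('/servers/new', environ=environ))
    print(mapper.match('/servers/1/edit', environ=environ))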

I plan to remove the support for those URL mappings in the patch [1], and
it will fix the bug https://bugs.launchpad.net/nova/+bug/1615475.

But I want to send this email to ensure we are ok to remove those URL
mappings.

Thanks
Alex

[0] https://review.openstack.org/#/c/445864/
[1]
https://review.openstack.org/#/c/445864/7/nova/api/openstack/compute/routes.py@119
[2]
https://github.com/openstack/nova/blob/master/nova/api/openstack/__init__.py#L174
[3] https://routes.readthedocs.io/en/latest/restful.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2017-02-28 Thread Alex Xu
Hi,

We have the weekly Nova API meeting tomorrow. The meeting is held on
Wednesday at UTC 1300, and the irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC for Intel RDT/CAT Support in Nova for Virtual Machine QoS

2017-02-22 Thread Alex Xu
@Jay, actually I'm here for CAT. I also have another idea about the
proposal, so I'll catch you about it; let us sync all the ideas. :)

Thanks
Alex

2017-02-22 11:17 GMT-05:00 Jay Pipes :

> Hi Eli,
>
> Sorry for top-posting. Just a quick note to say I had a good conversation
> on Monday about this with Sean Mooney. I think we have some ideas on how to
> model all of these resources in the new placement/resource providers schema.
>
> Are you at the PTG? If so, would be great to meet up to discuss...
>
> Best,
> -jay
>
> On 02/21/2017 05:38 AM, Qiao, Liyong wrote:
>
>> Hi folks:
>>
>>
>>
>> Seeking community input on an initial design for Intel Resource Director
>> Technology (RDT), in particular for Cache Allocation Technology in
>> OpenStack Nova to protect workloads from co-resident noisy neighbors, to
>> ensure quality of service (QoS).
>>
>>
>>
>> 1. What is Cache Allocation Technology (CAT)?**
>>
>> Intel's RDT (Resource Director Technology) [1] is an umbrella of
>> *hardware* support to facilitate the monitoring and reservation of
>> shared resources such as cache, memory and network bandwidth towards
>> obtaining Quality of Service. RDT will enable fine grain control of
>> resources which in particular is valuable in cloud environments to meet
>> Service Level Agreements while increasing resource utilization through
>> sharing. CAT is a part of RDT and concerns itself with reserving for a
>> process(es) a portion of last level cache with further fine grain
>> control as to how much for code versus data. The below figure shows a
>> single processor composed of 4 cores and the cache hierarchy. The L1
>> cache is split into Instruction and Data, the L2 cache is next in speed
>> to L1. The L1 and L2 caches are per core. The Last Level Cache (LLC) is
>> shared among all cores. With CAT on the currently available hardware the
>> LLC can be partitioned on a per process (virtual machine, container, or
>> normal application) or process group basis.
>>
>>
>>
>> Libvirt and OpenStack [2] already support monitoring cache (CMT) and
>> memory bandwidth usage local to a processor socket (MBM_local) and total
>> memory bandwidth usage across all processor sockets (MBM_total) for a
>> process or process group.
>>
>>
>>
>>
>> 2. How CAT works  **
>>
>> To learn more about CAT please refer to the Intel Software Developer's
>> Manual, volume 3b, chapters 17.16 and 17.17 [3]. Linux kernel support for
>> the same is expected in release 4.10 and documented at [4].
>>
>>
>> 3. Libvirt Interface**
>>
>>
>> Libvirt support for CAT is underway, with the patch at revision 7.
>>
>>
>>
>> Interface changes of libvirt:
>>
>>
>>
>> 3.1 The capabilities xml has been extended to reveal cache information **
>>
>>
>>
>>     <cache>
>>       <bank id='0' type='l3' size='56320' unit='KiB' cpus='0-21'>
>>         <control min='2816' reserved='2816' unit='KiB' scope='L3'/>
>>       </bank>
>>       <bank id='1' type='l3' size='56320' unit='KiB' cpus='44-65'>
>>         <control min='2816' reserved='2816' unit='KiB' scope='L3'/>
>>       </bank>
>>     </cache>
>>
>>
>>
>> The new `cache` xml element shows that the host has two *banks* of
>> *type* L3 or Last Level Cache (LLC), one per processor socket. The cache
>> *type* is l3 cache, its *size* 56320 KiB, and the *cpus* attribute
>> indicates the physical CPUs associated with the same, here ‘0-21’,
>> ‘44-65’ respectively.
>>
>>
>>
>> The *control* tag shows that the bank belongs to scope L3, with a
>> minimum possible allocation of 2816 KiB, and still has 2816 KiB available
>> to be reserved.
>>
>>
>>
>> If the host has CDP (Code and Data Prioritization) enabled, the l3 cache
>> will be divided into code (L3CODE) and data (L3DATA).
>>
>>
>>
>> Control tag will be extended to:
>>
>> ...
>>   <control min='2816' unit='KiB' scope='L3CODE'/>
>>   <control min='2816' unit='KiB' scope='L3DATA'/>
>> ...
>>
>>
>>
>> The L3CODE and L3DATA scopes show that we can allocate cache for code and
>> data usage respectively; they share the same amount of l3 cache.
>>
>>
>>
>> 3.2 Domain xml extended to include new CacheTune element **
>>
>>
>>
>> <cachetune>
>>   <cache host_id='0' type='l3' size='2816' unit='KiB' vcpus='0,1'/>
>>   <cache host_id='1' type='l3' size='2816' unit='KiB' vcpus='2,3'/>
>> ...
>> </cachetune>
>>
>>
>>
>> This means the guest will have vcpus 0, 1 running on the host's socket 0,
>> with 2816 KiB of cache exclusively allocated to them, and vcpus 2, 3
>> running on the host's socket 1, with 2816 KiB of cache exclusively
>> allocated to them.
>>
>>
>>
>> Here we need to make sure vcpus 0, 1 are pinned to the pcpus of socket
>> 0, and that vcpus 2, 3 are pinned to the pcpus of socket 1; refer to the
>> cpus attribute of each bank in the capabilities XML above.
>>  :.
>>
>>
>>
>> 3.3 Libvirt work flow for CAT**
>>
>>
>>
>>  1. Create the qemu process and get its PIDs
>>  2. Define a new resource control domain, also known as
>> Class-of-Service (CLOS), under /sys/fs/resctrl and set the
>> desired Cache Bit Mask (CBM) from the libvirt domain xml file, in
>> addition to updating the default schemata of the host
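
(For concreteness, the /sys/fs/resctrl side of step 2 looks roughly like this
from Python; the paths and CBM values are illustrative, and the patch drives
this through libvirt rather than writing resctrl directly:)

    import os

    def create_clos(name, pids, l3_schemata):
        # each directory under /sys/fs/resctrl is one CLOS
        group = os.path.join('/sys/fs/resctrl', name)
        os.makedirs(group, exist_ok=True)
        # the CBM per cache id, e.g. l3_schemata = '0=0x0f;1=0xf0'
        with open(os.path.join(group, 'schemata'), 'w') as f:
            f.write('L3:%s\n' % l3_schemata)
        # move the qemu PIDs into the new CLOS
        with open(os.path.join(group, 'tasks'), 'w') as f:
            for pid in pids:
                f.write('%d\n' % pid)
                f.flush()
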
>>
>>
>>
>> 4. Proposed Nova Changes**
>>
>>
>>
>>  1. Get host capabilities from libvirt and extend the compute node's fields
>>  2. Add a new scheduler filter and weight to help sched

[openstack-dev] [nova] Nova API sub-team meeting

2017-02-14 Thread Alex Xu
Hi,

We have the weekly Nova API meeting tomorrow. The meeting is held on
Wednesday at UTC 1300, and the irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update 11

2017-02-12 Thread Alex Xu
2017-02-11 0:10 GMT+08:00 Ed Leafe :

> Your regular reporter, Chris Dent, is on PTO today, so I'm filling in.
> I'll be brief.
>
> After the flurry of activity to get as much in before the Ocata RCs, this
> past week was relatively calm. Work continued on the patch to have Ironic
> resources tracked as, well, individual entities instead of pseudo-VMs, and
> with a little more clarity, should be ready to merge soon.
>
> https://review.openstack.org/#/c/404472/
>
> The patch series to add the concept of nested resource providers is moving
> forward a bit more slowly. Nested RPs allow for modeling of complex
> resources, such as a compute node that contains PCI devices, each of which
> has multiple physical and virtual functions. The series starts here:
>
> https://review.openstack.org/#/c/415920/
>
> We largely ignored traits, which represent the qualitative part of a
> resource, and focused on the quantitative side during Ocata. With Pike
> development now open, we look to begin discussing and developing the traits
> work in more detail. The spec for traits is here:
>
> https://review.openstack.org/#/c/345138/
>
> …and the series of POC code starts with:
>
> https://review.openstack.org/#/c/377381/9


Ed, thanks for the summary! I will add comments for the PoC. I hope this can
help people understand what it is about more easily:

The first patch https://review.openstack.org/#/c/377381/9 is about the
os_traits library, which is a clone of
https://github.com/jaypipes/os-traits created by Jay. I just put it in the
nova tree to implement the PoC.

The Traits API starts from the second patch
https://review.openstack.org/#/c/376198/ and the last one is
https://review.openstack.org/#/c/376202. It is all about the data model and
the API implementation. The newly added API endpoints are '/traits' and
'/resource_providers/{rp_uuid}/traits'.

The client of the Traits API is
https://review.openstack.org/#/c/417288. It makes the ResourceTracker
report the CPU features as traits to the placement service via the Traits
API.

The last one https://review.openstack.org/#/c/429364 is for getting a
filtered list of resource providers. The patch is based on Sylvain's patch
https://review.openstack.org/#/c/392569. It adds two new filters,
'required_traits' and 'preferred_traits'. Then you can get a list of RPs
which match the required resources and traits with the request 'GET
/resource_providers?resources=...&required_traits=...&preferred_traits=...'.
(The API layer patch will be submitted soon.)
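
A rough sketch of such a request, using the PoC filters above (the resource
amounts, trait names, endpoint and token are only examples):

    import requests

    resp = requests.get(
        'http://placement.example.com/resource_providers',
        params={'resources': 'VCPU:2,MEMORY_MB:2048',
                'required_traits': 'HW_CPU_X86_AVX2',
                'preferred_traits': 'HW_CPU_X86_SSE42'},
        headers={'X-Auth-Token': 'admin'},  # illustrative token
    )
    rps = resp.json()['resource_providers']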

Thanks
Alex


We've also begun planning for the discussions at the PTG around what our
goals for Pike will be. I'm sure that there will be a summary of those
discussions in one of these emails after the PTG.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2017-02-07 Thread Alex Xu
Hi,

We have weekly Nova API meeting tomorrow. And it is time to talk about
the plan for Pike. I created an etherpad for Pike ideas,
https://etherpad.openstack.org/p/nova-api-pike; please feel free to add
ideas and comments.


The meeting is being held Wednesday UTC1300 and irc channel is
#openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] The status of servers API's filters

2017-01-28 Thread Alex Xu
2017-01-29 1:31 GMT+08:00 Matt Riedemann :

> On 1/27/2017 3:37 AM, Alex Xu wrote:
>
>> The patches that validate the filters and sorts for the servers API are
>> merged [0]. But we still have something left [1].
>>
>> What's left is the proposal to introduce the new rule
>> 'os_compute_api:servers:all_tenants_visible', which is soft enforcement.
>> The new rule will replace the old hard enforcement rule
>> "os_compute_api:servers:index:get_all_tenants".
>>
>> In the discussion at the nova API meeting, John pointed out that the
>> change from hard enforcement to soft enforcement needs a microversion.
>> The API used to return 403 when the user didn't have permission for the
>> all_tenants parameter. But now the API returns 200 with the user's own
>> instances when there is no permission for the all_tenants parameter. So
>> the proposal should be separated into two parts:
>>
>> i. rename the policy from "get_all_tenants" to "all_tenants_visible"
>> ii. change the enforcement from hard to soft via a microversion.
>>
>> In the old microversions, the rule remains hard enforcement.
>>
>> So in Ocata, "get_all_tenants" will be deprecated. If the deployer has
>> overridden the rule in the policy file, the old rule will still be
>> enforced, and a warning message will be emitted to notify the user that
>> they need to move their custom rule to the new rule 'all_tenants_visible'.
>> And if the API user requests the new microversion, the rule will become
>> soft enforcement.
>>
>> So if that makes sense, there is also another question about whether we
>> have enough time to merge it. I think Matt will make a call on it.
>>
>> And due to holidays in China, both Kevin and I are on vacation. I really,
>> really appreciate Ghanshyam taking care of those patches! The spec [3]
>> and the patch [1] have already been updated by him.
>>
>> Anyway... Happy Chinese New Year to everyone (yea, new year again \o/).
>>
>> Thanks
>> Alex
>>
>> [0] https://review.openstack.org/408571 and
>> https://review.openstack.org/415142
>> [1] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/add-whitelist-for-server-list-filter-sort-parameters
>> [3] https://review.openstack.org/425533
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> My immediate question is, does this need to happen in Ocata or can we
> defer the all_tenants_visible policy change to Pike? Is there anything we
> merged in Ocata that is now broken, or weird, or blocks us from doing
> something later, etc if we don't get this done now?
>

I don't see anything broken or blocked without the all_tenants_visible policy
change. The policy change is just part of the vision of how filters should
look for admin users versus non-admin users.


>
> Honestly I never really understood why the all_tenants policy change was
> being lumped in with the server sort/filter whitelist blueprint, except
> maybe just because of convenience?
>

Emm... I don't remember any discussion about why we put all of them into one
spec or not.


> Anyway, this seems like something we can defer to Pike unless I'm missing
> something.


I'm OK with that, since I don't have any critical reason. The only thing is
we would need one more cycle to remove an old policy rule. But currently the
new proposal needs more discussion, and we only have 1 week left for spec
changes and patches. It isn't worth taking that risk, I guess.

Anyway, Matt, thanks for your response.


>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] The status of servers API's filters

2017-01-27 Thread Alex Xu
The patches that validate the filters and sorts for the servers API are
merged [0]. But we still have something left [1].

What's left is the proposal to introduce the new rule
'os_compute_api:servers:all_tenants_visible', which is soft enforcement. The
new rule will replace the old hard enforcement rule
"os_compute_api:servers:index:get_all_tenants".

In the discussion at the nova API meeting, John pointed out that the change
from hard enforcement to soft enforcement needs a microversion. The API used
to return 403 when the user didn't have permission for the all_tenants
parameter. But now the API returns 200 with the user's own instances when
there is no permission for the all_tenants parameter. So the proposal should
be separated into two parts:

i. rename the policy from "get_all_tenants" to "all_tenants_visible"
ii. change the enforcement from hard to soft via a microversion.

In the old microversions, the rule remains hard enforcement.

So in Ocata, "get_all_tenants" will be deprecated. If the deployer has
overridden the rule in the policy file, the old rule will still be enforced,
and a warning message will be emitted to notify the user that they need to
move their custom rule to the new rule 'all_tenants_visible'. And if the API
user requests the new microversion, the rule will become soft enforcement.

So if that makes sense, there is also another question about whether we have
enough time to merge it. I think Matt will make a call on it.

And due to holidays in China, both Kevin and I are on vacation. I really,
really appreciate Ghanshyam taking care of those patches! The spec [3] and
the patch [1] have already been updated by him.

Anyway... Happy Chinese New Year to everyone (yea, new year again \o/).

Thanks
Alex

[0] https://review.openstack.org/408571 and https://review.openstack.org/415142
[1] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/add-whitelist-for-server-list-filter-sort-parameters
[3] https://review.openstack.org/425533
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2017-01-24 Thread Alex Xu
Hi,

We have weekly Nova API meeting today. The meeting is being held Wednesday
UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Device tag in the API breaks in the old microversion

2017-01-24 Thread Alex Xu
2017-01-25 0:27 GMT+08:00 Matt Riedemann :

> On 1/24/2017 9:18 AM, Matt Riedemann wrote:
>
>>
>> First, thanks to Kevin and Alex for finding this issue and explaining it
>> in detail so we can understand the scope.
>>
>> This is a nasty unfortunate issue which I really wish we could just fix
>> without a microversion bump but we have microversions for a reason,
>> which is to fix issues in the API. In thinking about if this were the
>> legacy 2.0 API, we always had a rule that you couldn't fix bugs in the
>> API if they changed the behavior, no matter how annoying.
>>
>> So let's fix this with a microversion. I don't think we need to hold it
>> to the feature freeze deadline as it's a microversion only for a bug
>> fix, it's not a new feature. So that's a compromise at least and gives
>> us some time to get this done correctly and still have it fixed in
>> Ocata. We'll also want to document this in the api-ref and REST API
>> version history in whatever way makes it clear about the limitations
>> between microversions.
>>
>> As for testing, I think using a mix of test inheritance and using
>> 2.latest is probably a good step to take. I know we've had a mix of that
>> in different places in the functional API samples tests, but there was
>> never a clear rule about what do to with testing microversions and if
>> you should use inheritance to build on existing tests.
>>
>>
> One other thing: we're going to need to also fix this in
> python-novaclient, which we might want to do first, or work concurrently,
> since that's going to give us the client side perspective on how gross it
> will be to deal with this issue.


+1, thanks for this good point!


>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Device tag in the API breaks in the old microversion

2017-01-24 Thread Alex Xu
Unfortunately, the device tag support in the API was broken in older
microversions (https://bugs.launchpad.net/nova/+bug/1658571); thanks to
Kevin Zheng for finding that out.

Actually there are two bugs, and both of them are about device tags. The
first one [0] is a mistake in the initial introduction of the device tag.
The new schema is only available for version 2.32 exactly; when the request
version is > 2.32, the schema falls back to the old one.

The second one [1] is that when we bumped the API to 2.37, the network
device tag, which was also added in 2.32 [2], was removed accidentally.

So the current API behavior is as below:

2.32: BDM tag and network device tag added.
2.33 - 2.36: 'tag' in the BDM disappeared. The network device tag still
works.
2.37: The network device tag disappeared also.

There are a few questions we should think about:

1. Should we fix that with a microversion?
Thanks to Chris Dent for pointing that out in the review. I also think we
need to bump the microversion, which follows the rules of microversions.

2. If we need a microversion, is that something we can do before release?
We are very close to the feature freeze, and normally we need a spec for a
microversion. Maybe we can only do that in Pike. For now we can update the
api-ref and the microversion history to note that, and maybe add a reno
also.

3. How can we prevent this from happening again?
   Both of those patches were reviewed over multiple cycles, but we still
missed this. It is worth thinking about how to prevent it from happening
again.

   I talked with Sean. He suggests we stop passing a plain string version to
the schema extension point. We should always pass an APIVersionRequest
object instead of a plain string. Since "version == APIVersionRequest('2.32')"
is always wrong, we should remove '__eq__'. The developer should always
use the 'APIVersionRequest.matches' [3] method.
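
   A minimal sketch of the difference (names as in nova's
api_version_request module [3]; 'use_tag_schema' is only illustrative):

    from nova.api.openstack.api_version_request import APIVersionRequest

    req = APIVersionRequest('2.34')

    # broken: only true for exactly 2.32, so every later microversion
    # silently fell back to the old schema
    if req == APIVersionRequest('2.32'):
        use_tag_schema = True

    # correct: true for 2.32 and every microversion after it
    # (an empty APIVersionRequest() means no upper bound)
    if req.matches(APIVersionRequest('2.32'), APIVersionRequest()):
        use_tag_schema = True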

   That can prevent the first mistake we made, but it doesn't help with the
second one. Currently we only run tests on the specific microversion for the
specific point of interest. Previously, the new tests always inherited from
the previous microversion's tests, just like [4]. That verifies that the old
API behavior won't be changed in the new microversion. But now we say that
is wasteful, and we don't do that anymore, just like [5]. Should we change
that back?

Thanks
Alex

[0]
https://review.openstack.org/#/c/304510/64/nova/api/openstack/compute/block_device_mapping.py
[1]
https://review.openstack.org/#/c/316398/37/nova/api/openstack/compute/schemas/servers.py@88
[2]
https://review.openstack.org/#/c/316398/37/nova/api/openstack/compute/schemas/servers.py@79
[3]
https://github.com/openstack/nova/blob/master/nova/api/openstack/api_version_request.py#L219
[4]
https://github.com/openstack/nova/blob/master/nova/tests/unit/api/openstack/compute/test_evacuate.py#L415
[5]
https://github.com/openstack/nova/blob/master/nova/tests/unit/api/openstack/compute/test_serversV21.py#L3584
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-20 Thread Alex Xu
2017-01-19 23:43 GMT+08:00 Sylvain Bauza :

>
>
> Le 19/01/2017 16:27, Matt Riedemann a écrit :
> > Sylvain and I were talking about how he's going to work placement
> > microversion requests into his filter scheduler patch [1]. He needs to
> > make requests to the placement API with microversion 1.4 [2] or later
> > for resource provider filtering on specific resource classes like VCPU
> > and MEMORY_MB.
> >
> > The question was what happens if microversion 1.4 isn't available in the
> > placement API, i.e. the nova-scheduler is running Ocata code now but the
> > placement service is running Newton still.
> >
> > Our rolling upgrades doc [3] says:
> >
> > "It is safest to start nova-conductor first and nova-api last."
> >
> > But since placement is bundled with n-api that would cause issues since
> > n-sch now depends on the n-api code.
> >
> > If you package the placement service separately from the nova-api
> > service then this is probably not an issue. You can still roll out n-api
> > last and restart it last (for control services), and just make sure that
> > placement is upgraded before nova-scheduler (we need to be clear about
> > that in [3]).
> >
> > But do we have any other issues if they are not packaged separately? Is
> > it possible to install the new code, but still only restart the
> > placement service before nova-api? I believe it is, but want to ask this
> > out loud.
> >
> > I think we're probably OK here but I wanted to ask this out loud and
> > make sure everyone is aware and can think about this as we're a week
> > from feature freeze. We also need to look into devstack/grenade because
> > I'm fairly certain that we upgrade n-sch *before* placement in a grenade
> > run which will make any issues here very obvious in [1].
> >
> > [1] https://review.openstack.org/#/c/417961/
> > [2]
> > http://docs.openstack.org/developer/nova/placement.html#
> filter-resource-providers-having-requested-resource-capacity
> >
> > [3]
> > http://docs.openstack.org/developer/nova/upgrade.html#
> rolling-upgrade-process
> >
> >
>
> I thought out loud in the nova channel at the following possibility :
> since we always ask to upgrade n-cpus *AFTER* upgrading our other
> services, we could imagine to allow the nova-scheduler gently accept to
> have a placement service be Newton *UNLESS* you have Ocata computes.
>
> On other technical words, the scheduler getting a response from the
> placement service is an hard requirement for Ocata. That said, if the
> response code is a 400 with a message saying that the schema is
> incorrect, it would be checking the max version of all the computes and
> then :
>  - either the max version is Newton and then call back the
> ComputeNodeList.get_all() for getting the list of nodes
>  - or, the max version is Ocata (at least one node is upgraded), and
> then we would throw a NoValidHosts
>

Emm... when you request a microversion which isn't supported by the service,
you get a 406 response. Then you know the placement service is old, and you
needn't check the version of the computes? Roughly like the sketch below.
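
A rough sketch of that check (the URL and token are made up; the header
follows the placement microversion convention):

    import requests

    resp = requests.get(
        'http://placement.example.com/resource_providers',
        params={'resources': 'VCPU:2,MEMORY_MB:2048'},
        headers={'OpenStack-API-Version': 'placement 1.4',
                 'X-Auth-Token': 'admin'},  # illustrative token
    )
    if resp.status_code == 406:
        # placement doesn't know microversion 1.4 yet (still Newton),
        # so fall back to the legacy code path
        ...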


>
> That way, the upgrade path would be :
>  1/ upgrade your conductor
>  2/ upgrade all your other services but n-cpus (we could upgrade and
> restart n-sch before n-api, that would still work, or the contrary would
> be fine too)
>  3/ rolling upgrade your n-cpus
>
> I think we would keep then the existing upgrade path and we would still
> have the placement service be mandatory for Ocata.
>
> Thoughts ?
> -Sylvain
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-16 Thread Alex Xu
2017-01-17 10:26 GMT+08:00 Matt Riedemann :

> On 1/16/2017 7:12 PM, Zhenyu Zheng wrote:
>
>> Hi Nova,
>>
>> I just discovered something interesting, the tag has a limited length,
>> and in the current implementation, it is 60 in the tag object definition:
>> http://git.openstack.org/cgit/openstack/nova/tree/nova/objects/tag.py#n18
>>
>> but 80 in the db model:
>> http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sq
>> lalchemy/models.py#n1464
>>
>> As asked in the IRC and some of the cores responded(thanks to Matt and
>> Jay), it seems to be an
>> oversight and has no particular reason to do it this way.
>>
>> Since we have already created a 80 long space in DB and the current
>> implementation might be confusing,  maybe we should expand the
>> limitation in tag object definition to 80. Besides, users can enjoy
>> longer tags.
>>
>> And the question could be, does anyone know why it is 60 in object but
>> 80 in DB model? is it an oversight or we have some particular reason?
>>
>> If we could expand it to be the same as DB model (80 for both), it is ok
>> to do this tiny change without microversion?
>>
>> Thanks,
>>
>> Kevin Zheng
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> As I said in IRC, the tags feature took a long time to land (several
> releases) so between the time that the spec was written and then the data
> model patch and finally the REST API change, we might have just totally
> missed that the length of the column in the DB was different than what was
> allowed in the REST API.
>
> I'm not aware of any technical reason why they are different. I'm hoping
> that Sergey Nikitin might remember something about this. But even looking
> at the spec:
>
> https://specs.openstack.org/openstack/nova-specs/specs/liber
> ty/approved/tag-instances.html
>
> The column was meant to be 60 so my guess is someone noticed that in the
> REST API review but missed it in the data model review.
>

I can't remember the details either. Hoping Sergey can remember something.


>
> As for needing a microversion of changing this, I tend to think we don't
> need a microversion because we're not restricting the schema in the REST
> API, we're just increasing it to match the length in the data model. But
> I'd like opinions from the API subteam about that.
>
>
We still need a microversion so the user can discover the max length across
different cloud deployments.


> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2017-01-10 Thread Alex Xu
Hi,

We have weekly Nova API meeting today. The meeting is being held Wednesday
UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update 6

2017-01-06 Thread Alex Xu
2017-01-07 5:55 GMT+08:00 Matt Riedemann :

> On 12/16/2016 6:40 AM, Chris Dent wrote:
>
>>
>> ## Resource Provider Traits
>>
>> There's been some recent activity on the spec for resource provider
>> traits. These are a way of specifying qualitative resource
>> requirements (e.g., "I want my disk to be SSD").
>>
>> https://review.openstack.org/#/c/345138/
>>
>> I'm not clear on whether this is still targeting Ocata or not?
>>
>
> Sorry for the late reply to this older update, but just to be clear,
> traits were never a goal or planned item for Ocata. We have to get the
> quantitative stuff working first before moving onto the qualitative stuff,
> but it's for sure fine to discuss designs/ideas or hack on POC code.
>
>
yeah, ++, it isn't a goal for Ocata. I began the PoC to ensure I can finish
it before the PTG (and right after Ocata-3, the holidays in China). Then
people can have more material to discuss.


> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2017-01-03 Thread Alex Xu
Happy new year! The Nova API meeting starts again.

The meeting is being held Wednesday UTC1300 and irc channel is
#openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-12-20 Thread Alex Xu
Hi,

It is really close to the holidays, but we didn't say we would cancel the
meeting. I will be there if people show up.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][zun] Effective URL for resource actions

2016-12-19 Thread Alex Xu
2016-12-20 11:34 GMT+08:00 Qiming Teng :

> On Tue, Dec 20, 2016 at 10:14:49AM +0800, Alex Xu wrote:
> > Yea, looks like there is no consensus here. Looking at the discussion
> > Chris pointed to, the "/containers//action" URL sounds like a good API
> > for tasks.
>
> Did you mean "/containers//actions" ?
>

Yea, you are right. Sorry about that.


>
> > But we also see a disadvantage to it. When we want to use the URL to
> > identify an action, we find that all the actions sit behind a single
> > API. We have faced this problem multiple times:
>
> Then we should stop identifying actions based on URL. :)
> The following URL for representing a 'pause' action is really ugly:
>
>/containers//pause
>
> First of all, 'pause' is a verb. I cannot persuade myself that doing a
> POST to a verb is a ReST call. And I cannot do a 'GET', 'DELETE' not
> due to I'm not an admin ... rather, it is because I cannot explain what
> it means by 'deleting a pause'.
>
> > 1. In the beginning of thinking about capability discovery, one idea was
> > to return a list of URLs the user has the ability to execute. But we
> > found that all the actions are behind a single URL.
> > 2. There was an idea that if the policy rule name were the URL, then the
> > user could easily know the mapping between policy rules and APIs. The
> > same problem: all the actions are behind a single URL.
>
> Neither capability discovery or policy enforcement should be based
> solely on URL, IMO. Capabilities should have its own resource
> representation if possible. As for policy, it still seems an over
> simplification of authorization. There many scenarios where users
> want a finer granularity of access control. Even if we want to stick to
> the policy.json approach today, we can somehow improve the checking, for
> example:
>
>"containers:action:reboot": "role:owner"
>
> Similarly, auditing and logging can be done in the same way.
>
> > 3. We thought about using OpenAPI (swagger) to generate the api doc, but
> > the OpenAPI spec doesn't support multiple "POST containers//action"
> > entries, which means we need to put all the actions into a single entry.
> > That makes the generated doc unreadable.
>
> That is history now. We are moving to the api-ref way of documenting
> APIs. Don't tell me there are plans to migrate it back, :D
>

Yea, just as I said, none of the above problems is blocked on this issue. It
is also hard to say this is the key issue that led us to another solution. I
just provide some info on what we met before, in case people really care
about this issue.


>
> - Qiming
> > But yes, that doesn't means we block on this problem. Finally we go to
> > another direction. So just input something we met before for your
> > consideration.
> >
> > Thanks
> > Alex
> >
> > 2016-12-19 19:57 GMT+08:00 Chris Dent :
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][zun] Effective URL for resource actions

2016-12-19 Thread Alex Xu
Yea, looks like there is no consensus here. Looking at the discussion Chris
pointed to, the "/containers//action" URL sounds like a good API for tasks.

But we also see a disadvantage to it. When we want to use the URL to
identify an action, we find that all the actions sit behind a single API. We
have faced this problem multiple times:

1. In the beginning of thinking about capability discovery, one idea was to
return a list of URLs the user has the ability to execute. But we found that
all the actions are behind a single URL.
2. There was an idea that if the policy rule name were the URL, then the
user could easily know the mapping between policy rules and APIs. The same
problem: all the actions are behind a single URL.
3. We thought about using OpenAPI (swagger) to generate the api doc, but the
OpenAPI spec doesn't support multiple "POST containers//action" entries,
which means we need to put all the actions into a single entry. That makes
the generated doc unreadable.

But yes, that doesn't mean we are blocked on this problem; in the end we
went in another direction. So this is just some input on what we met before,
for your consideration.

Thanks
Alex

2016-12-19 19:57 GMT+08:00 Chris Dent :

> On Fri, 16 Dec 2016, Hongbin Lu wrote:
>
> I am from the Zun team and I wanted to consult with you about the API
>> design. Our team was discussing what is the best API design for
>> exposing different container operations [1]. There are two proposed
>> options:
>>
>> 1. Expose multiple URLs for individual container operation. For example:
>>
> [...]
>
>>
>> 2. Expose a single URL for all operations. For example:
>>
> [...]
>
> How to deal with "actions" is something we've struggled to reach
> consensus about in the API-WG. There have been a few proposals over
> the years (including some like both of the options you've listed),
> but none have been loved by everyone. There's a great deal of
> discussion that has happened around this issue and could still
> happen. Below is my personal perspective.
>
> There's a third option you may wish to consider that is perhaps a
> bit more resource oriented: use PUT or PATCH to update the state of
> the containers representation. Note that the examples should be take
> as describing an option, not indicating the right choices for the
> terms involved.
>
> a) GET /containers/
>
>This gets you a representation including some indicator of state.
>
>{...
> "uptime": 542819,
> "state": "running",
> ...
>}
>
> b) Change that state value to the target and PUT the representation
>back.
>
>PUT /containers/
>
>{...
> "state": "rebooting"
> ...
>}
>
>Or, if for some reason you need to save some bytes, you could
>PATCH the state attribute.
>
> c) If the change takes time and the request is asynchronous, the
>response could be 202 and doing a GET will representing the
>change in progress:
>
>GET /containers/
>
>{...
> "state": "rebooting",
> ...
>}
>
>[time passes..]
>
>GET /containers/
>
>{...
> "uptime": 30,
> "state": "running",
> ...
>}
>
> Like everything this mode has advantages and disadvantages. One
> advantage is that we avoid adding URLs. One disadvantage (for some)
> is that passing around the full representation is complex and/or
> confusing.
>
> As discussed on the abandoned review[1] referenced from the etherpad
> it is important to distinguish between actions which are atomic (at
> least from the perspective of the user-agent) and don't need
> observation and sequences of tasks which may need observation and
> may need to be interrupted and continued.
>
> The former are changes in resource state, and thus could be
> represented as such (as I've described above).
>
> In the latter, the task is (or tasks are ) the actual resource and
> should be the thing that is addressed by URL. That resource should
> make reference to the other entities which are being manipulated by
> the task.
>
> From a user's standpoint stop, start, pause, unpause, reboot etc are
> isolated actions that describe a state of the container resource.
>
> [1] https://review.openstack.org/#/c/234994/
>
> --
> Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
> freenode: cdent tw: @anticdent
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-12-13 Thread Alex Xu
Hi,

We have weekly Nova API meeting tomorrow. The meeting is being held
Wednesday UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-12-06 Thread Alex Xu
Hi,

We have weekly Nova API meeting tomorrow. The meeting is being held
Wednesday UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nominating Stephen Finucane for nova-core

2016-12-02 Thread Alex Xu
+1

2016-12-02 23:22 GMT+08:00 Matt Riedemann :

> I'm proposing that we add Stephen Finucane to the nova-core team. Stephen
> has been involved with nova for at least around a year now, maybe longer,
> my ability to tell time in nova has gotten fuzzy over the years.
> Regardless, he's always been eager to contribute and over the last several
> months has done a lot of reviews, as can be seen here:
>
> https://review.openstack.org/#/q/reviewer:sfinucan%2540redhat.com
>
> http://stackalytics.com/report/contribution/nova/180
>
> Stephen has been a main contributor and mover for the config option
> cleanup series that last few cycles, and he's a go-to person for a lot of
> the NFV/performance features in Nova like NUMA, CPU pinning, huge pages,
> etc.
>
> I think Stephen does quality reviews, leaves thoughtful comments, knows
> when to hold a +1 for a patch that needs work, and when to hold a -1 from a
> patch that just has some nits, and helps others in the project move their
> changes forward, which are all qualities I look for in a nova-core member.
>
> I'd like to see Stephen get a bit more vocal / visible, but we all handle
> that differently and I think it's something Stephen can grow into the more
> involved he is.
>
> So with all that said, I need a vote from the core team on this
> nomination. I honestly don't care to look up the rules too much on number
> of votes or timeline, I think it's pretty obvious once the replies roll in
> which way this goes.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Stepping back

2016-11-23 Thread Alex Xu
Yea, thanks for all the help you gave, and yes, I learned a lot from you as
well. Really sad to lose a teacher.

Wish all the best to you!

Thanks
Alex

2016-11-24 9:56 GMT+08:00 Ghanshyam Mann 
:

> Thanks a lot alaski for your incredible contribution in Nova. You were a
> great learning guide and mentor.
> Very best of luck for your new responsibility and hoping to work together
> in community again.
>
> Thanks & Regards,
> gmann
>
>
> > -Original Message-
> > From: Andrew Laski [mailto:and...@lascii.com]
> > Sent: 23 November 2016 01:40
> > To: openstack-dev@lists.openstack.org
> > Subject: [openstack-dev] [Nova] Stepping back
> >
> > I should have sent this weeks ago but I'm a bad person who forgets common
> > courtesy. My employment situation has changed in a way that does not
> > afford me the necessary time to remain a regular contributor to Nova, or
> the
> > broader OpenStack community. So it is with regret that I announce that I
> will
> > no longer be actively involved in the project.
> >
> > Fortunately, for those of you reading this, the Nova community is full of
> > wonderful and talented individuals who have picked up the work that I'm
> not
> > able to continue. Primarily this means parts of the cellsv2 effort, for
> which I
> > am extremely grateful.
> >
> > It has been a true pleasure working with you all these past few years
> and I'm
> > thankful to have had the opportunity. As I've told people many times when
> > they ask me what it's like to work on an open source project like this:
> working
> > on proprietary software exposes you to smart people but you're limited to
> > the small set of people within an organization, working on a project
> like this
> > exposed me to smart people from many companies and many parts of the
> > world. I have learned a lot working with you all. Thanks.
> >
> > I will continue to lurk in #openstack-nova, and try to stay minimally
> involved
> > as time allows, so feel free to ping me there.
> >
> > -Laski
> >
> > __
> > 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-11-22 Thread Alex Xu
Hi,

We have weekly Nova API meeting today. The meeting is being held Wednesday
UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-11-15 Thread Alex Xu
Hi,

We have weekly Nova API meeting tomorrow. The meeting is being held
Wednesday UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] About doing the migration claim with Placement API

2016-11-11 Thread Alex Xu
2016-11-03 4:52 GMT+08:00 Jay Pipes :

> On 11/01/2016 10:14 AM, Alex Xu wrote:
>
>> Currently we only update the resource usage with the Placement API in the
>> instance claim and the available-resource-update periodic task. But
>> there is no claim for migration with the placement API yet. This work is
>> tracked by https://bugs.launchpad.net/nova/+bug/1621709. In Newton, we
>> only fixed one bit, which makes the resource update periodic task work
>> correctly; then it will auto-heal everything. The migration claim part
>> wasn't a goal for the Newton release.
>>
>> So the first question is: do we want to fix it in this release? If the
>> answer is yes, there is a concern that needs to be discussed.
>>
>
> Yes, I believe we should fix the underlying problem in Ocata. The
> underlying problem is what Sylvain brought up: live migrations do not
> currently use any sort of claim operation. The periodic resource audit is
> relied upon to essentially clean up the state of claimed resources over
> time, and as Chris points out in review comments on
> https://review.openstack.org/#/c/244489/, this leads to the scheduler
> operating on stale data and can lead to an increase in retry operations.
>
> This needs to be fixed before even attempting to address the issue you
> bring up with the placement API calls from the resource tracker.


OK, let me see if I can help with something here.


>
>
>> In order to implement the drop of the migration claim, the RT needs to
>> remove allocation records on a specific RP (on the source/destination
>> compute node). But there isn't any API that can do that. The API for
>> removing allocation records is 'DELETE /allocations/{consumer_uuid}', but
>> it deletes all the allocation records for the consumer. So the initial
>> fix (https://review.openstack.org/#/c/369172/) adds a new API 'DELETE
>> /resource_providers/{rp_uuid}/allocations/{consumer_id}'. But Chris Dent
>> pointed out this goes against the original design: all the allocations
>> for a specific consumer can only be dropped together.
>>
>
> Yes, and this is by design. Consumption of resources -- or the freeing
> thereof -- must be an atomic, transactional operation.
>
>> There is also a suggestion from Andrew: we can update all the allocation
>> records for the consumer each time. That means the RT would build the
>> original allocation records and the new allocation records for the claim
>> together, and put them into one API call. That API should be 'PUT
>> /allocations/{consumer_uuid}'. Unfortunately that API doesn't replace
>> all the allocation records for the consumer; it always amends the new
>> allocation records for the consumer.
>>
>
> I see no reason why we can't change the behaviour of the `PUT
> /allocations/{consumer_uuid}` call to allow changing either the amounts of
> the allocated resources (a resize operation) or the set of resource
> provider UUIDs referenced in the allocations list (a move operation).
>
> For instance, let's say we have an allocation for an instance "i1" that is
> consuming 2 VCPU and 2048 MEMORY_MB on compute node "rpA", 50 DISK_GB on a
> shared storage pool "rpC".
>
> The allocations table would have the following records in it:
>
> resource_provider resource_class consumer used
> - --  
> rpA   VCPU   i1  2
> rpA   MEMORY_MB  i1   2048
> rpC   DISK_GBi1 50
>
> Now, we need to migrate instance "i1" to compute node "rpB". The instance
> disk uses shared storage so the only allocation records we actually need to
> modify are the VCPU and MEMORY_MB records.
>

Yea, thinking about it with shared storage, this makes a lot of sense.
Thanks for such a detailed explanation here!


>
> We would create the following REST API call from the resource tracker on
> the destination node:
>
> PUT /allocations/i1
> {
>   "allocations": [
>   {
> "resource_provider": {
>   "uuid": "rpB",
> },
> "resources": {
>   "VCPU": 2,
>   "MEMORY_MB": 2048
> }
>   },
>   {
> "resource_provider": {
>   "uuid": "rpC",
> },
> "resources": {
>   "DISK_GB": 50
> }
>   }
>   ]
> }
>
> The placement service would receive that request payload and immediately
> grab any existing allocation records referencing consumer_uuid of "i1". It
> would notice that records referencing "rpA"

[openstack-dev] [nova] Nova API sub-team meeting

2016-11-08 Thread Alex Xu
Hi,

We have weekly Nova API meeting tomorrow. The meeting is being held
Wednesday UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] About doing the migration claim with Placement API

2016-11-02 Thread Alex Xu
2016-11-02 16:26 GMT+08:00 Sylvain Bauza :

>
>
> Le 01/11/2016 15:14, Alex Xu a écrit :
>
> Currently we only update the resource usage with the Placement API in the
> instance claim and the available-resource-update periodic task. But there
> is no claim for migration with the placement API yet. This work is tracked
> by https://bugs.launchpad.net/nova/+bug/1621709. In Newton, we only fixed
> one bit, which makes the resource update periodic task work correctly;
> then it will auto-heal everything. The migration claim part wasn't a goal
> for the Newton release.
>
>
> To be clear, there are two distinct points :
> #1 there are MoveClaim objects that are synchronously made on resize (and
> cold-migrate) and rebuild (and evacuate), but there is no claim done by the
> live-migration path.
> There is a long-standing bugfix https://review.openstack.org/#/c/244489/
> that's been tracked by https://bugs.launchpad.net/nova/+bug/1289064
>

Yea, thanks for the info. When I say `migration claim`, it is more about the
move claim. Maybe I should say "move claim".

>
>
> #2 all those claim operations don't trigger an allocation request to the
> placement API, while the regular boot operation does (hence your bug
> report).
>

Yes, except for booting a new instance, other claims won't trigger an
allocation request to the placement API.


>
>
>
>
> So the first question is: do we want to fix it in this release? If the
> answer is yes, there is a concern that needs to be discussed.
>
>
> I'd appreciate if we could merge first #1 before #2 because the placement
> API decisions could be wrong if we decide to only allocate for certain move
> operations.
>

Sorry, I didn't get you. What do 'the placement API decisions' refer to?


>
>
> In order to implement the drop of the migration claim, the RT needs to
> remove allocation records on a specific RP (on the source/destination
> compute node). But there isn't any API that can do that. The API for
> removing allocation records is 'DELETE /allocations/{consumer_uuid}', but
> it deletes all the allocation records for the consumer. So the initial
> fix (https://review.openstack.org/#/c/369172/) adds a new API 'DELETE
> /resource_providers/{rp_uuid}/allocations/{consumer_id}'. But Chris Dent
> pointed out this goes against the original design: all the allocations for
> a specific consumer can only be dropped together.
>
> There is also a suggestion from Andrew: we can update all the allocation
> records for the consumer each time. That means the RT would build the
> original allocation records and the new allocation records for the claim
> together, and put them into one API call. That API should be 'PUT
> /allocations/{consumer_uuid}'. Unfortunately that API doesn't replace all
> the allocation records for the consumer; it always amends the new
> allocation records for the consumer.
>
> So which direction should we go here?
>
> Thanks
> Alex
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-11-01 Thread Alex Xu
Hi,

We have weekly Nova API meeting tomorrow. The meeting is being held
Wednesday UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] About doing the migration claim with Placement API

2016-11-01 Thread Alex Xu
Currently we only update the resource usage with the Placement API in the
instance claim and the available-resource-update periodic task. But there
is no claim for migration with the placement API yet. This work is tracked by
https://bugs.launchpad.net/nova/+bug/1621709. In Newton, we only fixed one
bit, which makes the resource update periodic task work correctly; then it
will auto-heal everything. The migration claim part wasn't a goal for the
Newton release.

So the first question is: do we want to fix it in this release? If the
answer is yes, there is a concern that needs to be discussed.

In order to implement the drop of the migration claim, the RT needs to
remove allocation records on a specific RP (on the source/destination
compute node). But there isn't any API that can do that. The API for
removing allocation records is 'DELETE /allocations/{consumer_uuid}', but it
deletes all the allocation records for the consumer. So the initial fix (
https://review.openstack.org/#/c/369172/) adds a new API 'DELETE
/resource_providers/{rp_uuid}/allocations/{consumer_id}'. But Chris Dent
pointed out this goes against the original design: all the allocations for a
specific consumer can only be dropped together.

There is also a suggestion from Andrew: we can update all the allocation
records for the consumer each time. That means the RT would build the
original allocation records and the new allocation records for the claim
together, and put them into one API call. That API should be 'PUT
/allocations/{consumer_uuid}'. Unfortunately that API doesn't replace all
the allocation records for the consumer; it always amends the new
allocation records for the consumer.

So which direction should we go here?

Thanks
Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [nova] microversion edge case query

2016-10-20 Thread Alex Xu
2016-10-19 0:58 GMT+08:00 Ed Leafe :

> On Oct 18, 2016, at 11:01 AM, Chris Dent  wrote:
> >
> > If the requested microversion is greater than the maximum, a 404 still
> > makes some sense (no mapping _now_), but a 406 could as well because it
> > provides a signal that if you used a different microversion the
> > situation could be different and the time represented by the
> > requested microversion has conceptual awareness of its past.
> >
> > What do people think?
> >
> > I think I recall there was some discussion of this sort of thing
> > with regard to some of the proxy APIs at the nova midcycle but I
> > can't remember the details of the outcome.
>
> The only way that that could happen (besides a total collapse of the
> review process) is when a method is removed from the API. When that
> happens, the latest version has its max set to the last microversion where
> that method is supported. For microversions after that, 404 is the correct
> response. For all other methods, the latest version should not have a
> maximum specified.
>


I also think 404 is right here. If you return 406 as a signal that the
situation could be different with a different microversion, things will
become strange when we raise the acceptable min_version someday.



>
> -- Ed Leafe
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-10-18 Thread Alex Xu
Hi,

We have weekly Nova API meeting today. The meeting is being held Wednesday
UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-10-11 Thread Alex Xu
Hi,

We have weekly Nova API meeting today. The meeting is being held Wednesday
UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-09-27 Thread Alex Xu
Hi,

We have weekly Nova API meeting today. The meeting is being held Wednesday
UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Can all virt drivers provide a disk 'id' for the diagnostics API?

2016-09-26 Thread Alex Xu
2016-09-23 20:38 GMT+08:00 Daniel P. Berrange :

> On Fri, Sep 23, 2016 at 07:32:36AM -0500, Matt Riedemann wrote:
> > On 9/23/2016 3:54 AM, Daniel P. Berrange wrote:
> > > On Thu, Sep 22, 2016 at 01:54:21PM -0500, Matt Riedemann wrote:
> > > > Sergey is working on a spec to use the standardized virt driver
> instance
> > > > diagnostics in the os-diagnostics API. A question came up during
> review of
> > > > the spec about how to define a disk 'id':
> > > >
> > > > https://review.openstack.org/#/c/357884/2/specs/ocata/
> approved/restore-vm-diagnostics.rst@140
> > > >
> > > > The existing diagnostics code doesn't set a disk id in the list of
> disk
> > > > dicts, but I think with at least libvirt we can set that to the
> target
> > > > device from the disk device xml.
> > > >
> > > > The xenapi code for getting this info is a bit confusing for me at
> least,
> > > > but it looks like it's possible to get the disks, but the id might
> need to
> > > > be parsed out (as a side note, it looks like the cpu/memory/disk
> diagnostics
> > > > are not even populated in the get_instance_diagnostics method for
> xen).
> > > >
> > > > vmware is in the same boat as xen, it's not fully implemented:
> > > >
> > > > https://github.com/openstack/nova/blob/64cbd7c51a5a82b965dab53eccfaecba45be9c27/nova/virt/vmwareapi/vmops.py#L1561
> > > >
> > > > Hyper-v and Ironic virt drivers haven't implemented
> get_instance_diagnostics
> > > > yet.
> > >
> > > The key value of this field (which we should call "device_name", not
> "id"),
> > > is to allow the stats data to be correlated with the entries in the
> block
> > > device mapping list used to configure storage when booting the VM. As
> such
> > > we should declare its value to match the corresponding field in BDM.
> > >
> > > Regards,
> > > Daniel
> > >
> >
> > Well, except that we don't want people specifying a device name in the
> block
> > device list when creating a server, and the libvirt driver ignores that
> > altogether. In fact, I think Dan Smith was planning on adding a
> microversion
> > in Ocata to remove that field from the server create request since we
> can't
> > guarantee it's what you'll end up with for all virt drivers.
>
> We don't want people specifying it, but we should report the auto-allocated
> names back when you query the data after instance creation, shouldn't we? If
> we don't, then there's no way for users to correlate the disks that they
> requested with the instance diagnostic stats, which severely limits their
> usefulness.
>

So what is the use case for this API? I thought it is used by admin users to
diagnose the cloud. If that is the right use case, we can expose the disk
image path in the API so admin users can correlate the disks. With libvirt,
it would look like
"/opt/stack/data/nova/instances/cbc7985c-434d-4ec3-8d96-d99ad6afb618/disk".
As this is an admin-only API intended for diagnostics, this info is safe to
expose here.
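
(For illustration, pulling the target device name and the backing file for
each disk out of the libvirt domain XML is straightforward; a minimal,
untested sketch — the connection URI and the instance UUID are just the
example values from above:)

    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('cbc7985c-434d-4ec3-8d96-d99ad6afb618')
    root = ET.fromstring(dom.XMLDesc(0))

    # Each <disk> carries the auto-allocated device name and its backing file.
    for disk in root.findall('./devices/disk'):
        target = disk.find('target')  # e.g. <target dev="vda" bus="virtio"/>
        source = disk.find('source')  # e.g. <source file=".../disk"/>
        print(target.get('dev') if target is not None else '?',
              source.get('file') if source is not None else '-')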



>
> > I'm fine with calling the field device_name though.
>
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o-  http://virt-manager.org :|
> |: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Latest news on placement API and Ocata rough goals

2016-09-23 Thread Alex Xu
> 't want to say we won't do that, just that it looks like a stretch
> goal for Ocata. At least, I think the discussion in the spec is a priority
> for Ocata, sure.



Yeah, it's a very short cycle. I plan to update the spec; the update is
about hiding the standard traits validation behind the placement API.
Yingxin and I are working on a PoC to show what that would look like, and we
hope to finish it next week. Then we should have enough for people to
discuss, and it will also help people judge what is worth doing in Ocata and
what isn't.
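
(To make that concrete, the kind of check I have in mind is roughly the
following — purely illustrative, not the actual PoC code, and the trait
names are made up:)

    # A trait is accepted only if it is in the standard catalogue or uses
    # the CUSTOM_ namespace; anything else is rejected by the API.
    STANDARD_TRAITS = frozenset([
        'HW_CPU_X86_AVX2',
        'STORAGE_DISK_SSD',
    ])

    def validate_trait(name):
        if name in STANDARD_TRAITS or name.startswith('CUSTOM_'):
            return name
        raise ValueError('%s is neither a standard trait nor CUSTOM_*' % name)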


>
>
> 5. Nested resource providers
>>
>> Things like SR-IOV PCI devices are actually resource providers that are
>> embedded within another resource provider (the compute node itself). In
>> order to tag things like SR-IOV PFs or VFs with a set of traits, we need to
>> have discovery code run on the compute node that registers things like
>> SR-IOV PF/VFs or SR-IOV FPGAs as nested resource providers.
>>
>> Some steps needed here:
>>
>> a) agreement on schema for placement DB for representing this nesting
>> relationship
>> b) write the discovery code in nova-compute for adding these resource
>> providers to the placement API when found
>>
>>
> Again, that looks like a stretch goal to me, given how little we have
> discussed it so far. But sure, Ocata would be fine for a first discussion.
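
(For the schema question in (a) above, the simplest thing that comes to mind
is a self-referencing parent column on the existing table; a rough
SQLAlchemy sketch, untested and with guessed names:)

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class ResourceProvider(Base):
        __tablename__ = 'resource_providers'
        id = Column(Integer, primary_key=True)
        uuid = Column(String(36), nullable=False, unique=True)
        name = Column(String(200))
        # NULL for a root provider (the compute node itself); set to the
        # parent's id for an embedded provider such as an SR-IOV PF or VF.
        parent_provider_id = Column(
            Integer, ForeignKey('resource_providers.id'), nullable=True)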
>
> Anyway, in conclusion, we've got a ton of work to do and I'm going to
>> spend time before the summit trying to get good agreement on direction and
>> proposed implementation for a number of the items listed above. Hopefully
>> by mid-October we'll have a good idea of assignees for various work and
>> what is going to be realistic to complete in Ocata.
>>
>> Best,
>> -jay
>>
>> [1] I'd like to personally thank Chris Dent, Dan Smith, Sean Dague, Ed
>> Leafe, Sylvain Bauza, Andrew Laski, Alex Xu and Matt Riedemann for
>> tolerating my sometimes lengthy absences and for pushing through
>> communication breakdowns resulting from my inability to adequately express
>> my ideas or document agreed solutions.
>>
>>
> Heh, thanks buddy. No worries about your absences, we had an awesome Dan
> helping us :-)
>
>
> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-09-20 Thread Alex Xu
Hi,

We have our weekly Nova API meeting today. The meeting is held Wednesdays at
1300 UTC in the #openstack-meeting-4 IRC channel.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-09-13 Thread Alex Xu
Hi,

We have our weekly Nova API meeting tomorrow. The meeting is held Wednesdays
at 1300 UTC in the #openstack-meeting-4 IRC channel.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-09-06 Thread Alex Xu
Hi,

We have our weekly Nova API meeting tomorrow. The meeting is held Wednesdays
at 1300 UTC in the #openstack-meeting-4 IRC channel.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] resignation from bug czar role

2016-09-05 Thread Alex Xu
Markus, thank you for all your help!

2016-09-05 19:19 GMT+08:00 Markus Zoeller :

> TL;DR: bug czar role for Nova is vacant from now on
>
>
> After doing bug triage for ~1 year, which was quite interesting, it's
> time for me to move on to different topics. My tasks within the company
> internal team are shifting too. Unfortunately that means less Nova for me
> for the (hopefully short) time being, so I'm resigning from the bug czar
> role as of now.
>
>
> Observations in this timeframe
> --
>
> * The quality of most of the bug reports could be better. Very often
> they are not actionable. A bug report which isn't actionable burns
> resources without any benefit. The pattern I've seen is:
> * 1/3 : invalid because they are support requests or based on a
> misunderstanding
> * 1/3 : could be reasonable but essential information is missing
> * 1/3 : sounds reasonable + has a little info, should be looked at
>   Very few reporters follow this template, which is shown when you open a
> new report: https://wiki.openstack.org/wiki/Nova/BugsTeam/BugReportTemplate
>
> * We get ~40 new bug reports per week. With the current number of people
> who do bug triage, the overall number of bug reports doesn't decline. I
> started collecting data 6 months ago:
>
> http://45.55.105.55:3000/dashboard/db/openstack-bugs?from=now-6M&panelId=1&fullscreen
>
> * I wish the cores would engage more in bug triage. If the cores took
> turns doing bug triage for a week at a time, each core would only have to
> do it once per dev cycle. I'm aware of the review backlog though :/
>
> * I wish more non-cores would engage in bug triage as well.
>
> * We don't have contacts for a lot of areas in Nova:
>   https://wiki.openstack.org/wiki/Nova/BugTriage#Tag_Owner_List
>
> * Keeping the bug reports in a consistent state is cumbersome:
>   http://45.55.105.55:8082/bugs-dashboard.html#tabInProgressStale
>   We could introduce more automation here.
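
(As one example of what such automation could look like — a rough, untested
launchpadlib sketch; the application name and the 30-day threshold are made
up:)

    import datetime

    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_with('nova-bug-janitor', 'production')
    nova = lp.projects['nova']
    cutoff = (datetime.datetime.now(datetime.timezone.utc)
              - datetime.timedelta(days=30))

    # Print "In Progress" reports nobody has touched for a month, as
    # candidates for being set back to Confirmed.
    for task in nova.searchTasks(status='In Progress'):
        bug = task.bug
        if bug.date_last_updated < cutoff:
            print(bug.id, bug.title)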
>
>
> Things we should continue
> -
>
> * Bug reports older than the oldest supported stable release should be
>   expired. Maybe best when the EOL tag gets applied.
>
> https://github.com/openstack-infra/release-tools/blob/master/expire_old_bug_reports.py
>   http://lists.openstack.org/pipermail/openstack-dev/2016-May/095654.html
>
> * We never came to a real conclusion on how ops should communicate RFEs
> to us. Using "wishlist" bug reports for that wasn't successful IMO. The
> last proposal was to use the ops ML to bring an RFE into some actionable
> shape and then create a backlog spec out of it.
>   http://lists.openstack.org/pipermail/openstack-dev/2016-March/089365.html
>
>
>
> Things we should start
> --
>
> * A cross-project discussion of (easy) ways to collect and send debug
> data to upstream OpenStack. Almost no bug report in Nova had the output
> of "sosreport" attached, although we ask for that in the report template.
>
>
>
> Some last words
> ---
>
> * Whoever wants to do the job next, I offer some kind of onboarding.
>
> * I'll push a change to remove the IRC meetings in the next few days:
>   http://eavesdrop.openstack.org/#Nova_Bugs_Team_Meeting
>
> * The tooling I used will still be available at:
>   https://github.com/markuszoeller/openstack/tree/master/scripts/launchpad
>
> * My server which hosts some dashboards will still be available at:
>   http://45.55.105.55:3000/dashboard/db/openstack-bugs
>   http://45.55.105.55:8082/bugs-dashboard.html
>   http://45.55.105.55:8082/bugs-stats.html
>
> * I did an evaluation of Storyboard in July 2016 and it looks promising.
> Give it a shot at: https://storyboard-dev.openstack.org/#!/project/2 If
> you don't like something there, push a change; it's Python-based.
>
> * I'll still hang out in the IRC channels, but don't expect much from me.
>
>
> Thanks a lot to the people who helped making Nova a better project by
> doing bug triage! Special thanks to auggy who put a lot(!) of effort
> into that.
>
> See you (hopefully) in Barcelona!
>
> --
> Regards,
> Markus Zoeller (markus_z)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-08-30 Thread Alex Xu
Hi,

We have our weekly Nova API meeting today. The meeting is held Wednesdays at
1300 UTC in the #openstack-meeting-4 IRC channel.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-08-24 Thread Alex Xu
Hi,

We have our weekly Nova API meeting today. The meeting is held Wednesdays at
1300 UTC in the #openstack-meeting-4 IRC channel.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

