Re: [openstack-dev] [nova] placement/resource providers update 33

2017-08-28 Thread Matt Riedemann

On 8/28/2017 5:09 PM, Matt Riedemann wrote:
There are some issues with this, mainly that it's talking about 
scheduling as I expected it to work today when I came into reading 
this, but it goes into detail about the alternatives stuff, which was 
not implemented in Pike. I think that should be removed, or amended 
with a big fat note that it's not available yet and that anything to do 
with alternatives is future work.


Since patches are welcome, I've put forth a patch:

https://review.openstack.org/#/c/498613/

--

Thanks,

Matt



Re: [openstack-dev] [nova] placement/resource providers update 33

2017-08-28 Thread Matt Riedemann

On 8/25/2017 7:54 AM, Chris Dent wrote:


There's a stack that documents (with visual aids!) the flow of
scheduler and placement. It is pretty much ready:

https://review.openstack.org/#/c/475810/


I see I am late to the party here, but I've left comments in the 
now-merged patch.


There are some issues with this, mainly that it's talking about 
scheduling as I expected it to work today when I came into reading 
this, but it goes into detail about the alternatives stuff, which was 
not implemented in Pike. I think that should be removed, or amended 
with a big fat note that it's not available yet and that anything to do 
with alternatives is future work.


--

Thanks,

Matt



Re: [openstack-dev] [nova] notification update week 35

2017-08-28 Thread Matt Riedemann

On 8/28/2017 10:52 AM, Balazs Gibizer wrote:
[Undecided] https://bugs.launchpad.net/nova/+bug/1700496 Notifications 
are emitted per-cell instead of globally
Devstack config has already been modified so notifications are emitted 
to the top level MQ. It seems that only a nova cells doc update is 
needed that tells the admin how to configure the transport_url for the 
notifications.


This was done as part of the cells v2 layout docs here:

https://docs.openstack.org/nova/latest/user/cellsv2_layout.html#notifications
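For anyone wiring this up by hand, the relevant knob is the oslo.messaging 
notifications transport; a minimal nova.conf sketch for the services that 
should emit to the top level MQ (the rabbit URL is a placeholder):

    [oslo_messaging_notifications]
    # Send notifications to the top-level MQ rather than the cell-local one.
    transport_url = rabbit://user:pass@top-level-mq.example.org:5672/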

--

Thanks,

Matt



Re: [openstack-dev] [nova] Can we remove the monkey_patch_modules config option?

2017-08-28 Thread Matt Riedemann

On 8/28/2017 9:51 AM, Alex Schultz wrote:

JFYI, https://review.openstack.org/#/c/494305/

Since this was just added, someone is looking to use it or is using it.


Thanks for pointing this out. I've asked in the puppet review that the 
submitter of that change explain if/why they are using this config 
option and reply to this thread, since the change to deprecate the 
option in nova just merged:


https://review.openstack.org/498113

--

Thanks,

Matt



[openstack-dev] [nova] Can we remove the monkey_patch_modules config option?

2017-08-25 Thread Matt Riedemann
I'm having a hard time tracing what this is necessary for. It's related 
to the notify_decorator, which is around for legacy notifications, but I 
don't actually see that decorator used anywhere. Given there are other 
options related to the notify_decorator, like "default_publisher_id", if 
we can start unwinding and removing this legacy stuff it would make the 
config (.005%) simpler.


It also just looks like we have a monkey_patch option that is checked at 
the start of every service and, if enabled, monkey patches whatever 
modules are configured in monkey_patch_modules.
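For reference, here's roughly what the option pair looks like in 
nova.conf; I believe the decorator path below matches the historical 
default for monkey_patch_modules, but treat the exact value as 
illustrative:

    [DEFAULT]
    # When True, services monkey patch the configured modules at startup.
    monkey_patch = True
    # Comma-separated module:decorator pairs to apply.
    monkey_patch_modules = nova.compute.api:nova.notifications.notify_decorator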


I mean, if we thought hooks were bad, this is pretty terrible.

--

Thanks,

Matt



[openstack-dev] [nova] Less than 24 hours to Pike RC2

2017-08-23 Thread Matt Riedemann
This is just a reminder that we're in the final stretch for Pike RC2 
which happens tomorrow.


There are still a couple of fixes in flight for RC2 at the top of the 
etherpad:


https://etherpad.openstack.org/p/nova-pike-release-candidate-todo

There's also another bug that Alex pointed out tonight, not yet reported 
in launchpad: we don't clean up allocations from the current node before 
doing a reschedule. If you have Ocata computes, or are doing 
super-conductor mode tiered conductors for cells v2, then it's not an 
issue, but any installs doing single conductor and relying on 
reschedules will have this issue. I'd consider it something we should 
fix for RC2, as it means we'll be reporting usage in Placement against 
compute nodes where it isn't really there, thus potentially taking them 
out of scheduling decisions.


If you find anything else in the next few hours, please report a bug and 
tag it with pike-rc-potential.


--

Thanks,

Matt



[openstack-dev] [nova] Proposing Balazs Gibizer for nova-core

2017-08-22 Thread Matt Riedemann
I'm proposing that we add gibi to the nova core team. He's been around 
for a while now and has shown persistence and leadership in the 
multi-release versioned notifications effort, which also included 
helping new contributors to Nova get involved, helping grow our 
contributor base.


Beyond that though, gibi has a good understanding of several areas of 
Nova, gives thoughtful reviews and feedback, which includes -1s on 
changes to get them in shape before a core reviewer gets to them, 
something I really value and look for in people doing reviews who aren't 
yet on the core team. He's also really helpful with not only reporting 
and triaging bugs, but writing tests to recreate bugs so we know when 
they are fixed, and also works on fixing them - something I expect from 
a core maintainer of the project.


So to the existing core team members, please respond with a yay/nay and 
after about a week or so we should have a decision (knowing a few cores 
are on vacation right now).


--

Thanks,

Matt



Re: [openstack-dev] [ironic] [nova] [tripleo] heads up: custom resource classes, bare metal scheduling and you

2017-08-22 Thread Matt Riedemann

On 8/22/2017 12:36 PM, Dmitry Tantsur wrote:
All operators running ironic will have to set the resource class field 
before upgrading to Pike and change their flavors before upgrading to 
Queens. See our upgrade notes [6] for details.


It might be worth mentioning that the ironic virt driver in nova is 
going to be automatically migrating the embedded flavor within existing 
instances in Pike as long as the associated node has a resource_class set:


https://review.openstack.org/#/c/487954/

Also worth mentioning that a periodic task in the nova-compute service 
will automatically create any custom resource class from an ironic node 
in the Placement service if it does not already exist, so it can be used 
later for scheduling with flavors that have the resource class set.
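For operators, the node-side setup is a one-liner with the baremetal 
CLI; a quick sketch (the class name here is just an example, and the 
flavor property follows the CUSTOM_* naming mapping):

    # Tag the ironic node with a custom resource class:
    openstack baremetal node set --resource-class baremetal.gold <node-uuid>

    # Flavors can then request exactly one unit of that class:
    openstack flavor set --property resources:CUSTOM_BAREMETAL_GOLD=1 my-bm-flavor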


--

Thanks,

Matt



[openstack-dev] [nova] RequestSpec questions about force_hosts/nodes and requested_destination

2017-08-21 Thread Matt Riedemann
I don't dabble in the RequestSpec code much, but in trying to fix bug 
1712008 [1] I'm venturing in there and have some questions. This is 
mostly an email to Sylvain for when he gets back from vacation but I 
wanted to dump it before moving forward.


Mainly, what is the difference between 
RequestSpec.force_hosts/force_nodes and RequestSpec.requested_destination?


When should one be used over the other? I take it that 
requested_destination is the newest and coolest thing and we should 
always use that first, and that's what the nova-api code is using, but I 
also see the scheduler code checking force_hosts/force_nodes.


Is that all legacy compatibility code? And if so, then why don't we 
handle requested_destination in RequestSpec routines like 
reset_forced_destinations() and to_legacy_filter_properties_dict(), i.e. 
for the latter, if it's a new style RequestSpec with 
requested_destination set, but we have to backport and call 
to_legacy_filter_properties_dict(), shouldn't requested_destination be 
used to set force_hosts/force_nodes on the old style filter properties?


Since RequestSpec.requested_destination is the thing that restricts a 
move operation to a single cell, it seems pretty important to always be 
using that field when forcing where an instance is moving to. But I'm 
confused about whether or not both requested_destination *and* 
force_hosts/force_nodes should be set since the compat code doesn't seem 
to transform the former into the latter.


If this is all transitional code, we should really document the hell out 
of this in the RequestSpec class itself for anyone trying to write new 
client side code with it, like me.


[1] https://bugs.launchpad.net/nova/+bug/1712008

--

Thanks,

Matt



Re: [openstack-dev] Minimum version of shred in our supported distros?

2017-08-21 Thread Matt Riedemann

On 8/20/2017 1:11 AM, Michael Still wrote:
Specifically we could do something like this: 
https://review.openstack.org/#/c/495532


Sounds like we're OK with doing this in Queens given the other 
discussion in this thread. However, this is part of a much larger 
series. It looks like it doesn't need to be, though, so could you split 
this out so we could just merge it on its own?


--

Thanks,

Matt



Re: [openstack-dev] [Nova] On idmapshift deprecation

2017-08-21 Thread Matt Riedemann

On 8/20/2017 3:28 AM, Michael Still wrote:
I'm going to take the general silence on this as permission to remove 
the idmapshift binary from nova. You're welcome.




The reality is that no one is using the LXC code as far as I know. 
Rackspace was the only one ever contributing changes for LXC and we 
never got a CI stood up for it in the gate. So if the changes break 
something, then as a Rackspace employee yourself I'd hope you'd find out 
soon enough. Having said that, I think it's fine to go forward with 
removing the binary dependency if you can replace it with privsep.


--

Thanks,

Matt



[openstack-dev] [nova] Things for next week (8/14-8/18)

2017-08-11 Thread Matt Riedemann

Several cores are on vacation (myself, sdague, bauzas, stephenfin).

RC1 is cut, and we know we have some things for RC2. Those are tracked 
at the top of the etherpad here:


https://etherpad.openstack.org/p/nova-pike-release-candidate-todo

dansmith is the only person on the release team that is going to be 
around to approve stable/pike backports for RC2:


https://review.openstack.org/#/admin/groups/147,members

So all things go through Dan. You were warned.

The deadline for the final release candidate is Thursday 8/24, so we 
don't need to tag RC2 next week.


Besides tagging RC2, I'd like to do a release for stable/ocata and 
stable/newton soon (around the same time that we tag RC2).


A few of us went through open stable branch reviews today, of which 
there were many, and we should be close to flushing things out but need 
another +2 on a lot of them:


https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/newton

Again, that's probably going to fall largely on dansmith since several 
of the other stable cores [1] aren't around next week, unless we can 
poke lyarwood to come out of hiding.


Beyond that, I hope it's quiet. Try and break stuff and find Pike 
regressions that should go into RC2. Triage bugs daily to see if 
something gets reported. Brush your teeth. Clean your room. Etc.


[1] https://review.openstack.org/#/admin/groups/540,members

--

Thanks,

Matt



[openstack-dev] [nova][docs] O search where art thou?

2017-08-11 Thread Matt Riedemann
Before the great docs migration, searching for something in the nova 
devref was restricted to the nova devref:


https://docs.openstack.org/nova/ocata/search.html?q=rbd&check_keywords=yes&area=default

Now searching for something in the nova docs searches docs.o.o, ask.o.o, 
and maybe other places, but it returns so many unrelated results that 
it's basically not usable for me:


https://docs.openstack.org/nova/latest/search.html#stq=rbd&stp=1

Is there a way we can just get the content-specific search results back 
(restricted to whatever is in the nova repo for docs), and if people 
want more, they can go to docs.o.o to search for stuff?


Because when I'm in nova docs looking for rbd stuff, I don't want to 
sift through forum questions or glance docs or cinder docs, etc.


--

Thanks,

Matt



[openstack-dev] [nova] Thanks gibi!

2017-08-10 Thread Matt Riedemann
Apparently we don't have community contributor awards at the PTG, only 
the summit, and seeing as that's several months away now, which is kind 
of an eternity, I wanted to take the time now to thank gibi (Balazs 
Gibizer to his parents) for all the work he's been doing in Nova.


Not only does gibi lead the versioned notification transformation work, 
which includes running a weekly meeting (that only one other person 
shows up to) and sending a weekly status email, and does it in a 
ridiculously patient and kind way, but he's also been identifying 
several critical issues late in the release related to the Placement and 
claims-in-the-scheduler work that's going on.


And it's not just doing manual testing, reporting a bug and throwing it 
over the wall - which is a major feat in OpenStack on its own - but also 
taking the time to write automated functional regression tests to 
exhibit the bugs so that when we have a fix we can tell it's actually 
working, and he's been fixing some on his own as well.


So with all that, I just wanted to formally and publicly say thanks to 
gibi for the great work he's doing which often goes overlooked when 
we're crunching toward a deadline.


--

Thanks,

Matt



[openstack-dev] [nova] Self-nomination for PTL in Queens

2017-08-08 Thread Matt Riedemann

Hi everyone,

This is my self-nomination to continue running as Nova PTL for the 
Queens cycle.


If elected, this would be my fourth term as Nova PTL. While I try to 
continually improve, at this point I have a fairly steady way of doing 
things and most people are probably used to that by now. That's not to 
say it's the best way of doing things, so I'm always open to getting 
feedback on what people would like to see more (or less) of from me.


I really see this as a service role and I'm happy to continue being of 
service for another release, which includes preparing for the PTG and 
Forum, being aware of the schedule and communicating major changes or 
plans to various groups (developers, operators, Foundation staff, etc).


I'm also happy to say that I'm fortunate enough to have an employer that
supports me doing this again, along with the amount of time it takes 
when working mostly full time in the community.


As for Queens content, we made a lot of progress again in Pike but some 
things are left undone and that's what I'd like to focus on in Queens. 
Specifically:


- Continue to evolve and solidify Nova's interaction with the Placement
  service, which includes getting the allocations code out of the
  compute service, fully supporting shared storage providers (and
  testing that in the Ceph CI job), and finally adding the nested
  resource providers support which will enable other features like vGPUs
  and other hardware-accelerated configurations.
- Close some gaps in our multi-cell support, mainly related to up-calls
  for reschedules during build and affinity/anti-affinity sanity checks,
  and also work on real multi-cell deployment testing in a multi-node CI
  job.
- Finish the Cinder 3.27 API integration early (before the PTG) so we
  can finally get volume multi-attach support.
- Clean up the documentation now that it has moved in-tree.

Finally, a personal goal for me is going to be working on helping mentor 
someone into the PTL role for the Rocky release, so if you are 
interested in this role, please reach out.


Thanks for your consideration.

--

Thanks,

Matt



Re: [openstack-dev] [all][barbican][freezer][horizon][karbor][keystone][mistral][nova][pack-deb][refstack][solum][storlets][swift][tacker][telemetry][watcher][zaqar] Last days for PTL candidate announ

2017-08-07 Thread Matt Riedemann

On 8/7/2017 2:50 PM, Kendall Nelson wrote:

Hello Everyone :)

A quick reminder that we are in the last days for PTL candidate 
announcements.



If you want to stand for PTL, don't delay, follow the instructions at 
[1] to make sure the community knows your intentions.



Make sure your candidacy has been submitted to the openstack/election 
repository and approved by election officials.



Election statistics [2]:


This means that, with approximately 2.5 days left, more than 27% of 
projects will be deemed leaderless. In this case the TC will be bound 
by [3].



Thank you,


-Kendall Nelson (diablo_rojo)


[1] http://governance.openstack.org/election/#how-to-submit-your-candidacy

[2] Assuming the open reviews below are validated

https://review.openstack.org/#/q/is:open+project:openstack/election

[3] 
http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html












Despite mikal flaming up the twitter, I'll post something for Nova PTL 
in the next day or so. I had mentioned intentions in the nova meeting a 
couple of weeks ago basically saying I'd be happy to do it again for 
Queens unless someone else was thinking about doing it and wanted to 
talk first - not that anyone has to talk to me about it to nominate 
themselves.


Also, I've been kind of sort of distracted by this whole release 
candidate thing going on right now...


--

Thanks,

Matt



Re: [openstack-dev] [python-openstackclient][python-openstacksdk][neutron][nova] supporting resource extensions with our CLI

2017-08-07 Thread Matt Riedemann

On 8/3/2017 1:39 PM, Boden Russell wrote:

I think we have a gap in our OSC CLI for non-stadium plugin/driver
projects (neutron plugin projects, nova driver projects) that implement
RESTful resource API attribute extensions.

For details, go directly to [1].
For a summary read on...


For OpenStack APIs that support extensions (ex [2]), the classic
python-client CLIs worked "out of the box" for extensions
adding attributes to existing RESTful resources.

For example, your project has a neutron plugin that adds a 'my_bool'
boolean attribute to 'network' resources that can be set via POST/PUT
and is returned with GET. This just works with the python-neutronclient
CLI without any client-side code changes.


However, with OSC resource attributes must be added directly/statically
to the sdk's resource and then consumed in the client; the support does
not come "for free" in the CLI. While this is fine for stadium projects
(they can contribute directly to the sdk/client), non-stadium projects
have no viable option to plugin/extend the CLI today for this type of
API extension mechanism.

With the phasing out of the python clients, a number of our users will
be left without a CLI to interface with these extensions.

I'd like to try and close this gap in Queens and welcome discussion in [1].

Thanks


[1] https://bugs.launchpad.net/python-openstacksdk/+bug/1705755
[2] https://wiki.openstack.org/wiki/NeutronDevelopment#API_Extensions




There is nothing like this for Nova so I'm not sure why Nova should be 
involved here. We dropped all support for extending the API via 
stevedore extension loading in Pike [1]. The virt drivers don't extend 
the REST API either.


[1] https://blueprints.launchpad.net/nova/+spec/api-no-more-extensions-pike

--

Thanks,

Matt



Re: [openstack-dev] VMWare Type Snapshot for Openstack 4.3

2017-08-04 Thread Matt Riedemann

On 8/4/2017 8:41 AM, Tom Kennedy wrote:
Is there an Openstack document that shows how to extend openstack to do 
something like this?


This is the create snapshot API in upstream Nova:

https://developer.openstack.org/api-ref/compute/#create-image-createimage-action

There is no distinction between a live and cold snapshot in the end user 
REST API. That's dependent on the backend compute driver. For example, 
the libvirt driver may attempt to perform a live snapshot if possible, 
but falls back to a cold snapshot if that's not possible. Other drivers 
could do the same.
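Concretely, a snapshot request goes through the generic server action 
endpoint; a minimal sketch (the server ID and image name are 
placeholders):

    # REST: POST the createImage action to the server:
    POST /servers/{server_id}/action
    {"createImage": {"name": "my-snapshot"}}

    # Or with the nova CLI:
    nova image-create <server> my-snapshot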


As for the difference between the OpenStack concept of a snapshot and 
the VMware concept of a snapshot, I don't know what that is, but I can 
say we wouldn't add a VMware-specific REST API for snapshots to the 
compute API when we already have the createImage API. So some design 
work would be involved if you wanted to upstream this.


For information on contributing features to Nova, you can start here:

https://docs.openstack.org/nova/latest/contributor/blueprints.html

--

Thanks,

Matt



Re: [openstack-dev] VMWare Type Snapshot for Openstack 4.3

2017-08-03 Thread Matt Riedemann

On 8/3/2017 3:16 PM, Tom Kennedy wrote:
I see that this is implemented in 
nova (nova/api/openstack/compute/contrib/server_snapshot.py), but is not 
available in Horizon.


I think you're looking at some forked code because that doesn't exist in 
upstream Nova:


https://github.com/openstack/nova/tree/master/nova/api/openstack/compute

I seem to remember a team in China at IBM working on VMware snapshots 
years ago, or something like this, for a product, so maybe you stumbled 
upon that.


--

Thanks,

Matt



[openstack-dev] [trove] Can we move some non-voting broken jobs to the experimental queue?

2017-08-02 Thread Matt Riedemann
I don't dabble in Trove-land often, but today I pushed a change and was 
watching it in zuul, and noticed that the change runs 26 jobs in the 
check queue. Several of those (cassandra, couch, mongo, percona) failed 
nearly immediately with something in diskimage-builder, like 
this:


http://logs.openstack.org/42/490042/1/check/gate-trove-scenario-dsvm-cassandra-single-ubuntu-xenial-nv/d38a8c1/logs/devstack-gate-post_test_hook.txt.gz#_2017-08-02_14_58_16_953

diskimage_builder.element_dependencies.MissingElementException: Element 
'ubuntu-xenial-cassandra' not found


Is anyone maintaining these jobs? If not, they should be moved to the 
experimental queue so they can be run on demand, not in the check queue 
for every patch set proposed to Trove. These are also single and 
multi-node jobs, so they are needlessly eating up node pool resources.
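For whoever picks this up, in the zuul v2 project-config layout this is 
roughly a matter of moving the job names from the project's check list 
to its experimental list; a sketch from memory, so double-check the 
exact structure against zuul/layout.yaml:

    - name: openstack/trove
      check:
        # ... jobs that should keep running on every patch set ...
      experimental:
        - gate-trove-scenario-dsvm-cassandra-single-ubuntu-xenial-nv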


--

Thanks,

Matt



Re: [openstack-dev] Trying again on wait_for_compute in devstack

2017-08-02 Thread Matt Riedemann

On 8/2/2017 10:04 AM, Matt Riedemann wrote:

and we're not going
to use it in multinode scenarios.


Why would you not run this in multinode scenarios? That's the only time 
this is really a problem, because in the single node case we're 
discovering and mapping the compute node late enough that it's not been 
a problem.


Did you mean we're not going to use "cells v1" in multinode scenarios? I 
read "we're not going to use it" with "it" being the discovery patch in 
devstack, not cells v1.
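For context, the discovery being talked about here is the cells v2 host 
mapping step, which can also be run by hand:

    # Map any unmapped compute hosts into their cell:
    nova-manage cell_v2 discover_hosts --verbose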


--

Thanks,

Matt



Re: [openstack-dev] Trying again on wait_for_compute in devstack

2017-08-02 Thread Matt Riedemann

On 8/2/2017 6:17 AM, Sean Dague wrote:

and we're not going
to use it in multinode scenarios.


Why would you not run this in multinode scenarios? That's the only time 
this is really a problem, because in the single node case we're 
discovering and mapping the compute node late enough that it's not been 
a problem.


The main failures are in the multinode jobs:

https://bugs.launchpad.net/grenade/+bug/1708039

https://goo.gl/xXhW8r

The devstack change is also failing on XenServer CI, and who knows how 
many other 3rd party CIs that don't run against devstack changes but 
will explode once it merges and they are running on Cinder or Neutron 
changes. I've dropped the tags in the subject line of this email so that 
this gets broader awareness, as this isn't really just going to impact 
nova and ironic jobs if third party CIs aren't set up to handle this.


Why don't we wait until after RC1 on 8/10 before doing this? We already 
broke the gate and lost at least 6-10 hours last week on the day of 
feature freeze because of this.


--

Thanks,

Matt



[openstack-dev] [nova][docs] Concerns with docs migration

2017-08-02 Thread Matt Riedemann
Now that Stephen Finucane is back from enjoying his youth and 
gallivanting all over Europe, and we talked about a few things in IRC 
this morning on the docs migration for Nova, I wanted to dump my 
concerns here for broader consumption.


1. We know we have to fix a bunch of broken links by adding in redirects 
[1] which sdague started here [2]. However, that apparently didn't catch 
everything, e.g. [3], so I'm concerned we're missing other broken links. 
Is there a way to find out?


2. The bottom change in the docs migration series for Nova is a massive 
refactor of the layout of the Nova devref [4]. That's something I don't 
want to do in Pike for two reasons:


a) It's a huge change and we simply don't have the time to invest in 
properly assessing and reviewing it before Pike RC1.


b) I think that if we're going to refactor the Nova devref home page to 
be a certain format, then we should really consider doing the same thing 
in the other projects, because today they are all different formats 
[5][6][7]. This is likely a cross-project discussion for the Queens PTG 
to determine if the home page for the projects should look similar. It 
seems they should given the uniformity that the Foundation has been 
working toward lately.


3. The patch for the import of the admin guide [8] is missing some CLI 
specific pages which are pretty useful given they aren't documented 
anywhere else, like the forced_host part of the compute API [9]. 
Basically anything that's cli-nova-* in the admin guide should be in the 
Nova docs. It's also missing the compute-flavors page [10] which is 
pretty important for using OpenStack at all.


4. Similar to #3, but we don't have a patch yet for importing the user 
guide and there are several docs in the user guide that are Nova 
specific so I'd like to make sure we include those, like [11][12].


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-August/120418.html

[2] https://review.openstack.org/#/c/489650/
[3] https://review.openstack.org/#/c/489641/
[4] https://review.openstack.org/#/c/478485/
[5] https://docs.openstack.org/cinder/latest/
[6] https://docs.openstack.org/glance/latest/
[7] https://docs.openstack.org/neutron/latest/
[8] https://review.openstack.org/#/c/477497/
[9] 
https://github.com/openstack/openstack-manuals/blob/stable/ocata/doc/admin-guide/source/cli-nova-specify-host.rst
[10] 
https://github.com/openstack/openstack-manuals/blob/stable/ocata/doc/admin-guide/source/compute-flavors.rst
[11] 
https://github.com/openstack/openstack-manuals/blob/stable/ocata/doc/user-guide/source/cli-launch-instances.rst
[12] 
https://github.com/openstack/openstack-manuals/blob/stable/ocata/doc/user-guide/source/cli-delete-an-instance.rst


--

Thanks,

Matt



[openstack-dev] [nova] Working toward Pike RC1

2017-08-01 Thread Matt Riedemann
Now that we're past feature freeze for Pike, I've started an etherpad 
for tracking items needed to get done before the first release candidate 
here:


https://etherpad.openstack.org/p/nova-pike-release-candidate-todo

Just a reminder but Pike RC1 is Thursday August 10th.

--

Thanks,

Matt



Re: [openstack-dev] [Cinder][Nova][Requirements] Lib freeze exception for os-brick

2017-07-31 Thread Matt Riedemann

On 7/31/2017 5:21 PM, Tony Breeds wrote:

We need a +1 from the release team (are they okay to accept a late
release of glance_store); and a +1 from glance (are they okay to do said
release)


Glance doesn't actually need this minimum version bump for os-brick; the 
fix is for some attached volume extend stuff, which isn't related to 
Glance. So does having the minimum bump in glance* matter?


--

Thanks,

Matt



Re: [openstack-dev] [Cinder] Requirements for re-adding Gluster support

2017-07-28 Thread Matt Riedemann

On 7/26/2017 4:16 PM, Eric Harney wrote:

From a technical point of view there are not a lot of steps involved
here; we can restore the previous gate jobs and driver code and I expect
things would still be in working order.

I can help coordinate these things with the new owner.


Note that the libvirt volume driver would also have to be added back to 
Nova, since we dropped that after Cinder dropped the volume driver.


https://review.openstack.org/#/c/463992/

--

Thanks,

Matt



Re: [openstack-dev] [nova] hardware offload support for openvswitch feature exception

2017-07-27 Thread Matt Riedemann

On 7/26/2017 10:42 PM, Moshe Levi wrote:

Hi all,

In the last few weeks I have been working on hardware offload support 
for openvswitch.


The idea is to leverage SR-IOV technology with OVS control plane management.

Just last month the ovs community merged all the required patches to 
enable this feature [1]; it should be in OVS 2.8.0.


I was working on the required patches to enable this in OpenStack.

On the neutron side the RFE is approved [2] and the neutron patch is 
already merged [3]


On the OS-VIF side the patch is merged [4]

On the Third party CI side we have a Mellanox CI which is currently 
commenting on os-vif [5] (we will extend it to nova and neutron as well)


The missing piece is the nova patch [6]

I just noticed that this week is feature freeze in OpenStack and I would 
like to request an exception for this feature.


I would appreciate it if nova-core reviewers would review it.

(Jay, Sean and Jan have already reviewed it several times and I think it 
is close to being merged.)


[1] - 
https://github.com/openvswitch/ovs/commit/bf090264e68d160d0ae70ebc93d59bc09d34cc8b 



[2] - https://bugs.launchpad.net/neutron/+bug/1627987

[3] - https://review.openstack.org/#/c/275616/

[4] - https://review.openstack.org/#/c/460278/

[5] - 
http://52.169.200.208/25/485125/6/check-os-vif/OVS_HW_offload/aaf2792/


[6] - https://review.openstack.org/#/c/398265/






Given everything else done here, and that the change in nova is (1) a 
single change, (2) self-contained and (3) relatively uncomplicated, I'm 
OK with this. Please create a blueprint in nova so we can track it as a 
feature, since that's what it is.


--

Thanks,

Matt



Re: [openstack-dev] [OpenStack-Dev][Nova] - https://launchpad.net/bugs/1667794 Changing hostname not to be treated as a pattern instead exact match will be done.

2017-07-26 Thread Matt Riedemann

On 7/26/2017 11:34 AM, Matt Riedemann wrote:

On 7/26/2017 11:23 AM, Matt Riedemann wrote:


Given this, what else do you need? Please be clear about what your use 
case is and how it is not solved using the 2.53 microversion. There 
may need to be changes to the CLI but let's separate that concern from 
the REST API changes.


I think your issue might be with these commands which use the 
hypervisors.search python API binding in novaclient:


1. nova host-meta # Set or Delete metadata on all instances of a host.
2. nova host-evacuate # Evacuate all instances from failed host
3. nova host-evacuate-live # Live migrate all instances of the specified 
host (we should really rename this command since it doesn't 'evacuate', 
it live migrates)
4. nova host-servers-migrate # Cold migrate all instances off the 
specified host


The risk with any of these is on the hostname match hitting more 
hypervisors than you wanted or expected. So if I have 10 computes in a 
London region in data center 1 named something like 
london.dc1.compute1.foo.bar, london.dc1.compute2.foo.bar, etc, and I do:


nova host-evacuate london.dc1

It's going to match all of those and evacuate instances from all of 
those dc1 computes in my london region at once, which is probably not 
something you want, unless dc1 is being attacked by Harry Potter fans 
and you need to get those instances to another data center.


The solution here is you specify the fully qualified domain name for the 
host you want to evacuate:


nova host-evacuate london.dc1.compute1.foo.bar

Right? What am I missing here?

If you wanted to change the CLI to be more strict, we could do that by 
just adding a --strict_hostname option or something and fail if we get 
back more than one host, as a way to guard the operator from making a 
mistake.


But none of this sounds like changes that need to be done in the REST 
API, and arguably isn't a bug in the CLI either.




Also, FYI in case you haven't read this:

http://www.danplanet.com/blog/2016/03/03/evacuate-in-nova-one-command-to-confuse-us-all/

--

Thanks,

Matt



Re: [openstack-dev] [OpenStack-Dev][Nova] - https://launchpad.net/bugs/1667794 Changing hostname not to be treated as a pattern instead exact match will be done.

2017-07-26 Thread Matt Riedemann

On 7/26/2017 11:23 AM, Matt Riedemann wrote:


Given this, what else do you need? Please be clear about what your use 
case is and how it is not solved using the 2.53 microversion. There may 
need to be changes to the CLI but let's separate that concern from the 
REST API changes.


I think your issue might be with these commands which use the 
hypervisors.search python API binding in novaclient:


1. nova host-meta # Set or Delete metadata on all instances of a host.
2. nova host-evacuate # Evacuate all instances from failed host
3. nova host-evacuate-live # Live migrate all instances of the specified 
host (we should really rename this command since it doesn't 'evacuate', 
it live migrates)
4. nova host-servers-migrate # Cold migrate all instances off the 
specified host


The risk with any of these is on the hostname match hitting more 
hypervisors than you wanted or expected. So if I have 10 computes in a 
London region in data center 1 named something like 
london.dc1.compute1.foo.bar, london.dc1.compute2.foo.bar, etc, and I do:


nova host-evacuate london.dc1

It's going to match all of those and evacuate instances from all of 
those dc1 computes in my london region at once, which is probably not 
something you want, unless dc1 is being attacked by Harry Potter fans 
and you need to get those instances to another data center.


The solution here is you specify the fully qualified domain name for the 
host you want to evacuate:


nova host-evacuate london.dc1.compute1.foo.bar

Right? What am I missing here?

If you wanted to change the CLI to be more strict, we could do that by 
just adding a --strict_hostname option or something and fail if we get 
back more than one host, as a way to guard the operator from making a 
mistake.


But none of this sounds like changes that need to be done in the REST 
API, and arguably isn't a bug in the CLI either.


--

Thanks,

Matt



Re: [openstack-dev] [OpenStack-Dev][Nova] - https://launchpad.net/bugs/1667794 Changing hostname not to be treated as a pattern instead exact match will be done.

2017-07-26 Thread Matt Riedemann

On 7/26/2017 12:24 AM, nidhi.h...@wipro.com wrote:

Hello All,
This is a follow-up to yesterday's mail regarding bug
https://bugs.launchpad.net/python-novaclient/+bug/1667794
It looks like all the CLI commands that are related to the
http://10.141.67.190:8774/v2.1/os-hypervisors/wipro/servers API
are expecting an exact match.
Commands which are affected if we change the
http://10.141.67.190:8774/v2.1/os-hypervisors/wipro/servers API
from pattern match to exact match are as below:
It's clearly seen that these commands expect an exact match only.


How is this clear?  The help on the --host parameter to the 'nova 
hypervisor-servers' command says:


"The hypervisor hostname (or pattern) to search for."

Hence, I am planning to correct/change the
http://10.141.67.190:8774/v2.1/os-hypervisors/wipro/servers API
to match hostname as an exact match, NOT as a pattern.


This is basically already done for you in microversion 2.53:

https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id48

If you need to list hypervisors and get the servers hosted on those 
hypervisors in the response, you can do that with the "with_servers" 
query parameter on these APIs:


GET /os-hypervisors?with_servers=True
GET /os-hypervisors/detail?with_servers=True
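Since those query parameters only exist at microversion 2.53 and later, 
the request has to opt in via the microversion header; a quick sketch 
with curl (the token and endpoint variables are placeholders):

    curl -s "$NOVA_ENDPOINT/os-hypervisors?with_servers=True" \
         -H "X-Auth-Token: $TOKEN" \
         -H "X-OpenStack-Nova-API-Version: 2.53"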

The /servers and /search routes are deprecated in the 2.53 microversion, 
meaning they won't work with microversion >= 2.53; you'll get a 404 
response.


The novaclient change for 2.53 is here:

https://review.openstack.org/#/c/485435/

Given this, what else do you need? Please be clear about what your use 
case is and how it is not solved using the 2.53 microversion. There may 
need to be changes to the CLI but let's separate that concern from the 
REST API changes.


--

Thanks,

Matt



Re: [openstack-dev] [nova][searchlight] status of an instance on the REST API and in the instance notifications

2017-07-25 Thread Matt Riedemann

On 7/19/2017 10:54 AM, Balazs Gibizer wrote:
On Wed, Jul 19, 2017 at 5:38 PM, McLellan, Steven 
 wrote:

Thanks Balazs for noticing and replying to my message!

The Status field is quite important to us since it's the indicator of 
VM state that Horizon displays most prominently and the most simple 
description of whether a VM is currently usable or not without having 
to parse the various _state fields. If we can't get this change added 
in Pike I'll probably implement a simplified version of the mapping in 
[2], but it would be really good to get it into the notifications in 
Pike if possible. I understand though that this late in the cycle it 
may not be possible.


I can create a patch to add the status to the instance notifications but 
I don't know if nova cores will accept it this late in Pike.

@Cores: Do you?

Cheers,
gibi


It's probably too late to be dealing with this right now in Pike. I'd 
like to defer this to Queens, where we can refactor the REST API common 
view code into a better place so it can be re-used by the notifications 
code, if this is something we're going to add to the versioned 
notifications; that's probably easy enough to do.


--

Thanks,

Matt



[openstack-dev] Heads up - nova conductor fleet round 2 is coming to devstack

2017-07-25 Thread Matt Riedemann
The second iteration of the nova conductor fleet change to devstack is 
approved:


https://review.openstack.org/#/c/477556/

The first attempt blew up a few jobs because of things like quotas and 
notifications which are now fixed on master, either in the devstack 
change itself (for notifications) or in nova (for quotas).


I went through the various non-voting job failures in the change this 
afternoon and triaged them all. The only one that looked remotely 
related was the dvr-ha multinode (3-node) job which failed to map the 
subnode-3 node to the cell, but I think that's more due to a latent race 
that's been around in our devstack-gate/devstack setup since Ocata, and 
is maybe made worse in a 3-node job.


I know it's not fun doing this so close to feature freeze but we need as 
much time as possible to burn this in before the pike-rc1 phase.


If you see anything blow up as a result, please reply to this thread or 
yell at us (mriedem/dansmith/melwitt) in the openstack-nova channel.


--

Thanks,

Matt



Re: [openstack-dev] [nova] placement/resource providers update 29

2017-07-22 Thread Matt Riedemann

On 7/21/2017 6:54 AM, Chris Dent wrote:

## Custom Resource Classes for Ironic

A spec for custom resource classes is being updated to reflect the
need to update the flavor and allocations of a previously allocated
ironic node that now has a custom resource class (such as
CUSTOM_SILVER_IRON):

https://review.openstack.org/#/c/481748/

The implementation of those changes has started at:

https://review.openstack.org/#/c/484949/

That gets the flavor adjustment. Do we also need to do allocation
cleanups or was that already done at some point in the past?


That's done:

https://review.openstack.org/#/c/484935/

--

Thanks,

Matt



[openstack-dev] [nova] Queens PTG Planning

2017-07-22 Thread Matt Riedemann
I wanted to get some things documented for the Queens PTG so I've 
started an etherpad:


https://etherpad.openstack.org/p/nova-ptg-queens

Feel free to put down things you'd like to discuss. We'll shape it up later.

--

Thanks,

Matt



Re: [openstack-dev] [nova] loss of WSGI request logs with request-id under uwsgi/apache

2017-07-19 Thread Matt Riedemann

On 7/19/2017 6:16 AM, Sean Dague wrote:

I was just starting to look through some logs to see if I could line up
request ids (part of global request id efforts), when I realized that in
the process to uwsgi by default, we've entirely lost the INFO wsgi
request logs. :(

Instead of the old format (which was coming out of oslo.service) we get
the following -
http://logs.openstack.org/97/479497/3/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/5a0fb17/logs/screen-n-api.txt.gz#_Jul_19_03_44_58_233532


That definitely takes us a step backwards in understanding the world, as
we lose our request id on entry that was extremely useful to match up
everything. We hit a similar issue with placement, and added custom
paste middleware for that. Maybe we need to consider a similar thing
here, that would only emit if running under uwsgi/apache?

Thoughts?

-Sean



I'm noticing some other weirdness here:

http://logs.openstack.org/65/483565/4/check/gate-tempest-dsvm-py35-ubuntu-xenial/9921636/logs/screen-n-sch.txt.gz#_Jul_19_20_17_18_801773

The first part of the log message got cut off:

Jul 19 20:17:18.801773 ubuntu-xenial-infracloud-vanilla-9950433 
nova-scheduler[22773]: 
-01dc-4de3-9da7-8eb3de9e305e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active'), 
'a4eba582-075a-4200-ae6f-9fc7797c95dd':


--

Thanks,

Matt



Re: [openstack-dev] [nova] loss of WSGI request logs with request-id under uwsgi/apache

2017-07-19 Thread Matt Riedemann

On 7/19/2017 6:16 AM, Sean Dague wrote:

We hit a similar issue with placement, and added custom
paste middleware for that. Maybe we need to consider a similar thing
here, that would only emit if running under uwsgi/apache?


For example, this:

http://logs.openstack.org/97/479497/3/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/5a0fb17/logs/screen-placement-api.txt.gz#_Jul_19_03_41_21_429324

If it's not optional for placement, why would we make it optional for 
the compute API? Would turning it on always make it log the request IDs 
twice or something?


Is this a problem for glance/cinder/neutron/keystone and whoever else is 
logging request IDs in the API?


--

Thanks,

Matt



Re: [openstack-dev] [os-vif] 1.6.1 release for pike.

2017-07-18 Thread Matt Riedemann

On 7/18/2017 12:07 PM, Mooney, Sean K wrote:

Resending with correct subject line


The real correct subject line tag would be [nova] or [nova][neutron]. :P

--

Thanks,

Matt



Re: [openstack-dev] [nova] notification update week 29

2017-07-18 Thread Matt Riedemann

On 7/18/2017 7:22 AM, Balazs Gibizer wrote:
Unfortunately the assignee left OpenStack during Pike so that BP did not 
progress. We definitely cannot make this to Pike. However I don't even 
know if there is somebody who will have time to work with it in Queens. 
Can we move this to the backlog somehow?


I've deferred the blueprint to Queens. We can re-assess at the PTG.

--

Thanks,

Matt



[openstack-dev] git review -d + git rebase changing author?

2017-07-17 Thread Matt Riedemann
I don't have a strict recreate on this right now, but wanted to bring it 
up in case others have seen it. I've done this unknowingly and seen it 
happen to other changes, like:


https://review.openstack.org/#/c/428241/7..8//COMMIT_MSG

https://review.openstack.org/#/c/327564/3..4//COMMIT_MSG

Where the author changes in the commit.

When I've seen this, I think it's because I'm doing some combination of:

1. git review -d
2. git rebase -i master
3. change something
4. git commit
5. git rebase --continue (if in the middle of a series)
6. git review

Something about the combination of the git review/rebase/commit changes 
the author.
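When it does happen, the authorship is easy to inspect and restore with 
plain git; a quick sketch (the example author is a placeholder):

    # Check who git thinks wrote the top commit:
    git log -1 --format='author: %an <%ae>%ncommitter: %cn <%ce>'

    # If the author got swapped during the rebase, reset it to yourself,
    # or pass --author to re-credit the original owner explicitly:
    git commit --amend --no-edit --reset-author
    git commit --amend --no-edit --author='Original Owner <owner@example.org>'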


Again, I can try to recreate and come up with repeatable steps later, 
but wanted to bring this up while I'm thinking about it again.


My versions:

user@ubuntu:~/git/nova$ git --version
git version 2.7.4
user@ubuntu:~/git/nova$ pip show git-review
Name: git-review
Version: 1.25.0


--

Thanks,

Matt



Re: [openstack-dev] [nova] notification update week 29

2017-07-17 Thread Matt Riedemann


What do you want to do with this blueprint?

https://blueprints.launchpad.net/nova/+spec/json-schema-for-versioned-notifications

I don't know if all of the dependencies are done, and it looks like the 
Nova changes are pretty stale. Should we just defer this to Queens?


--

Thanks,

Matt



Re: [openstack-dev] [OpenStack-Dev][Nova] - https://launchpad.net/bugs/1667794 Changing hostname not to be treated as a pattern instead exact match will be done.

2017-07-14 Thread Matt Riedemann

On 7/14/2017 6:49 AM, nidhi.h...@wipro.com wrote:

Hello all,

This is regarding bug 1667794 as mentioned in the subject.

Its review is going on here.

https://review.openstack.org/#/c/474949/

Bug is - Nova treats hostname as a pattern

Description

Nova commands such as "hypervisor-list --matching ", host-evacuate-live 
and host-evacuate and a few more treat the user-specified "host-name" as 
the input to the HTTP 
/os-hypervisors/{hypervisor_hostname_pattern}/search API.

Nova checks "host-name" as a pattern instead of an exact match, which 
causes problems with some commands such as nova host-evacuate-live 
compute-1, where the host-evacuate action will apply to all of 
"compute-1", "compute-10". That is not right.

Correcting it by using an exact match.

We have fixed it and put it up for review. We need your opinion on this.

Kindly share your opinion in case this does not seem to be an acceptable 
fix to anyone.


Thanks

Nidhi







Thanks for bringing this up. Your fix is in the wrong place; see the 
comments in the patch.


--

Thanks,

Matt



[openstack-dev] [nova] Blueprint review focus toward feature freeze (July 27)

2017-07-13 Thread Matt Riedemann
As discussed in the nova meeting today [1] I have started an etherpad of 
blueprint changes up for review [2] broken down into categories to help 
focus reviews.


I did something similar in the Newton release and use this to help 
myself organize my TODO list, sort of like a Kanban board. As things 
make progress they move to the top.


I've already filled out some of the changes which are very close to 
completing the blueprint and we could get done this week.


Then I've started noting changes against priority efforts with notes.

I will then start filling in categories for other blueprints that need 
attention and try to prioritize those based on what I think can actually 
get completed in Pike. For example, if there are two changes which 
haven't gotten much review attention but I feel like one has a better 
chance of getting completed before the feature freeze, I will prioritize 
that one higher. Some people might think this is unfair, but the way I 
see it is, if we're going to focus on something, I'd rather it be the 
thing that can be done, rather than divide our attention and fail to get 
either done.


Please let me know if there are questions.

[1] 
http://eavesdrop.openstack.org/meetings/nova/2017/nova.2017-07-13-14.00.html

[2] https://etherpad.openstack.org/p/nova-pike-feature-freeze-status

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Should PUT /os-services be idempotent?

2017-07-11 Thread Matt Riedemann
I'm looking for some broader input on something being discussed in this 
change:


https://review.openstack.org/#/c/464280/21/nova/api/openstack/compute/services.py

This is collapsing the following APIs into a single API:

Old:

* PUT /os-services/enable
* PUT /os-services/disable
* PUT /os-services/disable-log-reason
* PUT /os-services/force-down

New:

* PUT /os-services

With the old APIs, if you tried to enable an already enabled service, 
it was not an error. The same is true if you tried to disable an 
already disabled service. It doesn't change anything, but it's not an 
error.


The question coming up in the new API is whether trying to enable an 
enabled service should be a 400, or trying to disable a disabled 
service. The way I wrote the new API, those are not 400 conditions. 
They don't do anything, like before, but they aren't errors.


Looking at [1] it seems this should not be an error condition if you're 
trying to update the state of a resource and it's already at that state.


I don't have a PhD in REST though so would like broader discussion on this.

[1] http://www.restapitutorial.com/lessons/idempotency.html
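For what it's worth, the behavior I implemented looks roughly like the
following minimal sketch of idempotent PUT semantics (hypothetical
names, not the actual handler code):

    class BadRequest(Exception):
        pass

    def put_service(service, body):
        status = body.get('status')
        if status not in ('enabled', 'disabled'):
            # 400 is reserved for malformed input, not repeated updates.
            raise BadRequest("status must be 'enabled' or 'disabled'")
        # Re-applying the state a service is already in is a no-op, not
        # an error; the response is a 200 either way.
        service.disabled = (status == 'disabled')
        service.save()
        return service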

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [stable] another ironic-stable-maint update proposal

2017-07-10 Thread Matt Riedemann

On 6/30/2017 11:10 AM, Dmitry Tantsur wrote:

Hi all!

I'd like to propose another round of changes to the ironic-stable-maint 
group [0]:


1. Add Julia Kreger (TheJulia) to the group. Julia is one of the top 
reviewers in Ironic, and she is quite active on stable branches as well 
[1].


2. Remove Jim Rollenhagen (sigh..) as he no longer works on OpenStack.

So for those on the team already, please reply with a +1 or -1 vote.
I'll also need somebody to apply this change, as I don't have ACL for that.

[0] https://review.openstack.org/#/admin/groups/950,members
[1] 
https://review.openstack.org/#/q/(project:openstack/ironic+OR+project:openstack/ironic-python-agent+OR+project:openstack/ironic-lib)+NOT+branch:master+reviewer:%22Julia+Kreger+%253Cjuliaashleykreger%2540gmail.com%253E%22 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Done.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] notification update week 28

2017-07-10 Thread Matt Riedemann

On 7/10/2017 3:22 AM, Balazs Gibizer wrote:

Hi,

Here is the status update / focus setting mail about notification work
for week 28.

Bugs

[Undecided] https://bugs.launchpad.net/nova/+bug/1684860 Versioned
server notifications don't include updated_at
Takashi's fix needs a second +2 https://review.openstack.org/#/c/475276/

[Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications
use nova-api as binary name instead of nova-osapi_compute
Agreed not to change the binary name in the notifications. Instead we
make an enum for that name to show that the name is intentional.
Patch has been proposed:  https://review.openstack.org/#/c/476538/

[Undecided] https://bugs.launchpad.net/nova/+bug/1702667 publisher_id of 
the versioned instance.update notification is not consistent with other 
notifications
The inconsistency of publisher_ids was revealed by #1696152. Fix has 
been proposed https://review.openstack.org/#/c/480984


[Undecided] https://bugs.launchpad.net/nova/+bug/1699115 api.fault
notification is never emitted
Still no response on the ML thread about the way forward.
http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html

[Undecided] https://bugs.launchpad.net/nova/+bug/1700496 Notifications
are emitted per-cell instead of globally
Fix is to configure a global MQ endpoint for the notifications in cells
v2. Patch is being worked on: https://review.openstack.org/#/c/477556/

Versioned notification transformation
-
There is quite a long list of ready notification transformations for the 
cores to look at:
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Code-Review%253C0 



If you are afraid of the long list, then here is a short list of live 
migration related transformations:

* https://review.openstack.org/#/c/480214/
* https://review.openstack.org/#/c/420453/
* https://review.openstack.org/#/c/480119/
* https://review.openstack.org/#/c/469784/

Searchlight integration
---
bp additional-notification-fields-for-searchlight
~
The BDM addition needs core review, it just lost +2 due to a rebase:
https://review.openstack.org/#/c/448779/

Besides the BDM patch we are still missing the Add tags to
instance.create Notification https://review.openstack.org/#/c/459493/
patch but that depends on supporting tags and instance boot
https://review.openstack.org/#/c/394321/ which is still not ready.


One of my goals for this week is to get these two done so we can close 
out both of those blueprints.




Small improvements
~~
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual
error reporting
* https://review.openstack.org/#/q/topic:refactor-notification-samples
Factor out duplicated notification sample data
This is the start of a longer patch series to deduplicate notification
sample data. The third patch already shows how much sample data can be
deleted from the nova tree. We added a minimal hand-rolled json ref
implementation to the notification sample tests, as the existing Python
json ref implementations are not well maintained (a rough sketch of the
idea follows below).
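As a hedged illustration only (the implementation merged in nova
differs), such a resolver amounts to inlining referenced sample
fragments:

    import json
    import os

    def resolve_refs(node, base_dir):
        # Inline any {"$ref": "<file>"} nodes found in a sample document.
        if isinstance(node, dict):
            if '$ref' in node:
                with open(os.path.join(base_dir, node['$ref'])) as f:
                    return resolve_refs(json.load(f), base_dir)
            return {key: resolve_refs(value, base_dir)
                    for key, value in node.items()}
        if isinstance(node, list):
            return [resolve_refs(item, base_dir) for item in node]
        return node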

Weekly meeting
--
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC
on openstack-meeting-4. The next meeting will be held on 11th of July.
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170711T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] The definition of 'Optional' parameter in API reference

2017-07-09 Thread Matt Riedemann

On 7/4/2017 7:13 PM, Takashi Natsume wrote:

On 2017/07/04 21:12, Alex Xu wrote:

2017-07-04 15:40 GMT+08:00 Ghanshyam Mann:


On Mon, Jul 3, 2017 at 1:38 PM, Takashi Natsume wrote:

In the Nova API reference, there is inconsistency in whether parameters
added in a new microversion are defined as 'optional' or not.

Those should be defined based on how they are defined in the
respective microversion. If they are 'optional' in that microversion
they should be documented as 'optional' and vice versa. Any parameter
added in a microversion is marked as 'New in version 2.xy', which shows
that the parameter is not available in earlier versions. The same
applies to the removal of a parameter.

But if a microversion changes a parameter from optional to required or
vice versa, then it is tricky; IMO documenting the latest behavior is
the right thing, but with clear notes. For example, in microversion
2.37 'network' in the request was changed from optional to required. In
that case, the api-ref shows the latest behavior of the parameter,
which is 'required', with a clear note about until when it was optional
and from when it became mandatory.

In all cases, the doc should reflect the latest behavior of the
parameter, with notes (manual or auto-generated with min_version &
max_version).



++


Thank you for your replies and the fix in 
https://review.openstack.org/#/c/480162/ .


In the case that the parameter is always included in the response
after a certain microversion, some parameters (e.g. 'type' [1]) are
defined as 'required', but some parameters (e.g. 'project_id',
'user_id' [2]) are defined as 'optional'.

[1] List Keypairs in Keypairs (keypairs)
https://developer.openstack.org/api-ref/compute/?expanded=list-keypairs-detail#list-keypairs



'keypairs_links' in the response should be a required parameter,
because it always shows up after 2.35.


The 'keypairs_links' is an optional parameter.
When the 'get_links' method of the view builder for keypairs operations
returns an empty list, the 'keypairs_links' does not appear
in the response.

https://github.com/openstack/nova/blob/32e613b9cd499847b7a7dc49d43020523b96c1d1/nova/api/openstack/compute/keypairs.py#L286-L288 


I noticed the same thing with hypervisor_links in the os-hypervisors 
API. The links are not shown if a limit is not in the request query 
parameters and there aren't more results than the default max limit 
(1000). In other words, you don't need links to another page if there is 
not another page to get.
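A hedged sketch of that linking rule (helper and field names are
illustrative, not nova's view-builder code):

    def maybe_links(results, requested_limit, default_max=1000):
        # With no explicit limit and fewer results than the max, there
        # cannot be another page, so the "*_links" key is omitted.
        page_size = (requested_limit if requested_limit is not None
                     else default_max)
        if requested_limit is None and len(results) < page_size:
            return []
        return [{'rel': 'next',
                 'href': '/os-hypervisors?marker=%s' % results[-1]['id']}]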





Regards,
Takashi Natsume
NTT Software Innovation Center
E-mail: natsume.taka...@lab.ntt.co.jp



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Thanks for starting this thread. I've been struggling with this a bit 
too lately in some changes I'm working on, for example:


https://review.openstack.org/#/c/454322/

In there, instance_action_events are optional before 2.50, but required 
in >= 2.50. It gets doubly confusing because actually the 'events' in 
the response are required if you are an admin user before 2.50. So there 
are really three cases there:


1. Microversion < 2.50, admin user: 'events' are required
2. Microversion < 2.50, non-admin user: 'events' are optional
3. Microversion >= 2.50, admin or non-admin user: 'events' are required

I've tried to clarify with the description on each, and used the max/min 
versions for the behavior differences for when they are optional or not, 
but I can see where it's still a bit confusing.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update 27

2017-07-07 Thread Matt Riedemann
* https://review.openstack.org/#/c/470578/
  Add functional test for local delete allocations

* https://review.openstack.org/#/c/427200/
  Add a status check for legacy filters in nova-status.

* https://review.openstack.org/#/c/469048/
  Provide more information about installing placement

* https://review.openstack.org/#/c/468928/
  Disambiguate resource provider conflict message

* https://review.openstack.org/#/c/468797/
  Spec for requesting traits in flavors

* https://review.openstack.org/#/c/480379/
  ensure shared RP maps with correct root RP
  (Some discussion on this one about what the goal is and whether the
  approach is the right one.)

# End

That's all I've got this week, next week I should be a bit more
caught up and aware of any bits I've missed. No prize this week, but
maybe next week.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Another thing to mention that wasn't in this list:

https://blueprints.launchpad.net/nova/+spec/custom-resource-classes-in-flavors

Ed got the part for the scheduler done, but there are two other changes 
needed for that blueprint. I also have a change up to amend the spec 
since it was missing some details:


https://review.openstack.org/#/c/481748/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder] Status update on 3.27 attachments integration

2017-07-07 Thread Matt Riedemann
I wanted to provide an update on where we're at with the series of nova 
changes to integrate with the new style 3.27 volume attachments API.


This is building a foundation between both projects for supporting 
multi-attach volumes in Nova in Queens.
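For context, the new-style flow from nova's side looks roughly like
this (a hedged sketch; the method names follow cinder's attachments
API, the client plumbing is illustrative):

    def new_style_attach(cinder, volume_id, instance_uuid, connector):
        # Reserve the volume by creating an attachment record (no
        # connector yet).
        attachment = cinder.attachment_create(volume_id, instance_uuid)
        # Once the target host's connector is known, update the
        # attachment; cinder returns the connection_info needed to do
        # the host-side attach.
        connection_info = cinder.attachment_update(
            attachment['id'], connector)
        # Tell cinder the attach completed so the volume goes 'in-use'.
        cinder.attachment_complete(attachment['id'])
        return attachment, connection_info

On detach (or failure cleanup), the attachment record is deleted with a
corresponding attachment_delete call.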


The series has 3 major changes: 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/cinder-new-attach-apis


a) Updating swap volume to use the new volume attachment flow. I've been 
working on this with Ildiko Vancsa. Integration tests are passing on 
this with the new style attachments enabled at the end of the series. 
This is should be ready to go but has not had review from other Nova 
cores, so is at risk.


b) Updating live migration to use the new volume attachment flow. Steve 
Noyes has been working on this. I've reviewed and helped with some 
testing (verified with CI that volume-backed live migration is passing 
against this change with the new flow enabled with libvirt). No other 
Nova cores have reviewed this yet so it's at risk. Also, even if we get 
that change in, we have to also update the Hyper-v and XenAPI drivers 
which also support live migration.


c) Updating the compute API code to start using the new attachments API 
when (1) all computes are upgraded to the latest version and (2) Cinder 
3.27 is available in the deployment. Those constraints are meant to 
support rolling upgrades before the new code flows are running. The 
change is passing Tempest CI but has not gotten much Nova core review 
yet. I've done some review but not a deep review on the latest revision.


Risk: High - this is high risk mainly because of the lack of another 
Nova core being involved in the changes. John Garbutt was the other core 
helping with this series but he has not been active since he was laid 
off from OSIC, which was back in May.


We also need to add some upgrade testing in the Grenade project so that 
we can be sure a volume attached in Ocata is properly detached in Pike. 
Steve Noyes was investigating this.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Nova] Special scenario tests

2017-07-06 Thread Matt Riedemann

On 7/6/2017 9:50 AM, Matt Riedemann wrote:
Another alternative is we should be able to test evacuate with the 
in-tree functional tests using fixtures. That kind of testing only gives 
us so much coverage since we have to stub some things out, but it would 
at least test the mechanics of an evacuate through all of the services.


To be clear, the main disadvantage to this is we have to stub out 
anything related to networking and cinder volumes during evacuate, which 
are the most complicated parts of evacuate which actually need the test 
coverage.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Nova] Special scenario tests

2017-07-06 Thread Matt Riedemann

On 7/5/2017 4:16 AM, Ferenc Horváth wrote:

I looked at how novaclient's functional test suite works.
As far as I understand there is a functional environment in tox
in which all the test cases are executed in a common dsvm job.
Obviously, the test cases are using the CLI to communicate
with devstack.

My current idea is that we could break down the problem into
at least three work items. So, from a (very) high-level view, we
will need the following things to implement the aforementioned
nova-integration environment:
1. A new tox environment, which will run test from a directory
(which might be /nova/tests/functional/integration for example).


If it's a new tox env, I'd put them in nova/tests/integration, and leave 
the 'functional' subdirectory to the "tox -e functional" tests.



2. A new dsvm job, which will execute only the new integration
environment.


Yes.


3. Some way to communicate with devstack. In case of tempest
or novaclient this layer is already implemented, but my guess is
that we will have to implement new clients/fixtures in nova.


It probably depends on what needs to be tested. If you're going through 
the REST API, then a simple REST client using keystoneauth1 is probably 
good enough. We can use os-client-config for auth credentials like the 
novaclient functional job uses.
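A minimal sketch of such a client, assuming password auth (the
credential values are placeholders):

    from keystoneauth1 import loading, session

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller/identity/v3',
        username='admin',
        password='secret',
        project_name='admin',
        user_domain_id='default',
        project_domain_id='default')
    sess = session.Session(auth=auth)

    # Hit the compute endpoint from the service catalog; the
    # microversion header can be bumped per request as needed.
    resp = sess.get('/servers',
                    endpoint_filter={'service_type': 'compute'},
                    headers={'OpenStack-API-Version': 'compute 2.1'})
    print(resp.json())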




I think the most critical is #3, but if we can discuss this and the idea
is mature enough then I'd like to start with proposing tests for evacuate.


Note that for evacuate testing, we'd need a multi-node job (minimum of 2 
nodes), and we'd also need to run the tests serially since evacuate 
requires that the nova-compute service is 'down' before you can evacuate 
instances from it.


The novaclient functional job runs its tests serially, but is single node.

If we wanted to test evacuate cheaply without an entirely new test / job 
infrastructure, we could add a multi-node novaclient functional job 
which runs serially and tests evacuate for us - we have all of the CLIs 
we'd need to do that and the majority of the infra and test framework 
plumbing is already in place to do this.


It seems a bit odd to build that into novaclient functional testing 
rather than in something that runs against nova changes though, i.e. we 
can't verify fixes to evacuate in nova itself, and if we break something 
in evacuate because we aren't testing it in nova, then novaclient 
changes will all be blocked until that's fixed.


We'd have to weigh the pros/cons to having some testing in the 
short-term with novaclient vs the effort it would take to get a new job 
framework setup for nova.


Another alternative is we should be able to test evacuate with the 
in-tree functional tests using fixtures. That kind of testing only gives 
us so much coverage since we have to stub some things out, but it would 
at least test the mechanics of an evacuate through all of the services.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Allow passing security groups when attaching interfaces?

2017-07-06 Thread Matt Riedemann

On 7/6/2017 6:39 AM, Gary Kotton wrote:

Hi,

When you attach an interface there are a number of options:

1. Pass an existing port

2. Pass a network

In the second case a new port will be created and by default that will 
have the default security group.


You could try the first option by attaching the security group to the port

Thanks

Gary

From: Zhenyu Zheng
Reply-To: OpenStack List
Date: Thursday, July 6, 2017 at 12:45 PM
To: OpenStack List
Subject: [openstack-dev] [Nova][Neutron] Allow passing security groups 
when attaching interfaces?


Hi,

Our product has met this kind of problem: when we boot instances, we 
are allowed to pass security groups, and if we provide a network id, 
ports with the security groups we passed will be created, and when we 
show the instances, the security groups field of the instance is the sg 
we provided. But when we later attach some new interfaces (using 
network_id), the newly added interfaces will be in the default security 
group.

We are wondering: would it be better to allow passing security groups 
when attaching interfaces? Or is that considered a proxy API, which we 
do not like?


BR,

Kevin Zheng



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I don't think we want this, it's more proxy orchestration that would 
have to live in Nova. As Gary pointed out, if you want a non-default 
security group, create the port in neutron ahead of time, associate the 
non-default security group(s) and then attach that port to the server 
instance in nova.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Need volunteer(s) to help migrate project docs

2017-06-23 Thread Matt Riedemann
The spec [1] with the plan to migrate project-specific docs from 
docs.openstack.org to each project has merged.


There are a number of steps outlined in there which need people from the 
project teams, e.g. nova, to do for their project. Some of it we're 
already doing, like building a config reference, API reference, using 
the openstackdocstheme, etc. But there are other things like moving the 
install guide for compute into the nova repo.


Is anyone interested in owning this work? There are enough tasks that it 
could probably be a couple of people coordinating. It also needs to be 
done by the end of the Pike release, so time is a factor.


[1] https://review.openstack.org/#/c/472275/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-22 Thread Matt Riedemann

On 6/21/2017 4:28 PM, Rochelle Grober wrote:




From: Matt
On 6/21/2017 7:04 AM, Shewale, Bhagyashri wrote:

I would like to write functional tests to check the exact req/resp for
each placement API for all supported versions, similar to what is
already done for other APIs under
nova/tests/functional/api_sample_tests/api_samples/*.

These request/response json samples can be used by api.openstack.org
and in the manuals.

There are already functional tests written for placement APIs under
nova/tests/functional/api/openstack/placement, but these tests don't
check the entire HTTP response for each API for all supported versions.

I think adding such functional tests for checking the response for each
placement API would be beneficial to the project.

If there is an interest in creating such functional tests, I can file a
new blueprint for this activity.



This has come up before and we don't want to use the same functional API
samples infrastructure for generating API samples for the placement API.
The functional API samples tests are confusing and have a steep learning
curve for new contributors (and even long-time contributors still get
confused by them).


I second that you talk with Chris Dent (mentioned below), but I also want to 
encourage you to write tests.  Write API tests that demonstrate *exactly* what 
is allowed and not allowed, and verify that, whether the API call is constructed 
correctly or not, the responses are appropriate and correct.  By writing 
these new/extra/improved tests, the Interop guidelines can use these tests to 
improve interop expectations across clouds.  Plus, operators will be able to 
more quickly identify what the problem is when the tests demonstrate the 
problem-response patterns.  And, like you said, knowing what to expect makes 
documenting expected behaviors, for both correct and incorrect uses, much more 
straightforward.  Details are very important when tracking down issues based on 
the responses logged.

I want to encourage you to work with Chris to help expand our tests and their 
specificity and their extent.

Thanks!

--Rocky (with my interop, QA and ops hats on)




Talk with Chris Dent about ideas here for API samples with placement.
He's talked about building something into the gabbi library for this, but I 
don't
know if that's being worked on or not.

Chris is also on vacation for a couple of weeks, just FYI.

--

Thanks,

Matt

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-
requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Rocky, we have tests, we just don't have API samples for documentation 
purposes like in the compute API reference docs.


This doesn't have anything to do with interop guidelines, and it 
wouldn't, since the Placement APIs are all admin-only and interop is 
strictly about non-admin APIs.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-21 Thread Matt Riedemann
a 
is met.


That still doesn't mean you're going to get the attendance you need from 
all parties. I don't know how you solve that one. People are going to 
work on what they are paid to work on.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-21 Thread Matt Riedemann

On 6/21/2017 9:59 AM, Thierry Carrez wrote:

Hi everyone,

One of the areas identified as a priority by the Board + TC + UC
workshop in March was the need to better close the feedback loop and
make unanswered requirements emerge. Part of the solution is to ensure
that groups that look at specific use cases, or specific problem spaces
within OpenStack get participation from a wide spectrum of roles, from
pure operators of OpenStack clouds, to upstream developers, product
managers, researchers, and every combination thereof. In the past year
we reorganized the Design Summit event, so that the design / planning /
feedback gathering part of it would be less dev- or ops-branded, to
encourage participation of everyone in a neutral ground, based on the
topic being discussed. That was just a first step.

In OpenStack we have a number of "working groups", groups of people
interested in discussing a given use case, or addressing a given problem
space across all of OpenStack. Examples include the API working group,
the Deployment working group, the Public clouds working group, the
Telco/NFV working group, or the Scientific working group. However, for
governance reasons, those are currently set up either as a User
Committee working group[1], or a working group depending on the
Technical Committee[2]. This branding of working groups artificially
discourages participation from one side to the others group, for no
specific reason. This needs to be fixed.

We propose to take a page out of Kubernetes playbook and set up "SIGs"
(special interest groups), that would be primarily defined by their
mission (i.e. the use case / problem space the group wants to
collectively address). Those SIGs would not be Ops SIGs or Dev SIGs,
they would just be OpenStack SIGs. While possible some groups will lean
more towards an operator or dev focus (based on their mission), it is
important to encourage everyone to join in early and often. SIGs could
be very easily set up, just by adding your group to a wiki page,
defining the mission of the group, a contact point and details on
meetings (if the group has any). No need for prior vetting by any
governance body. The TC and UC would likely still clean up dead SIGs
from the list, to keep it relevant and tidy. Since they are neither dev
or ops, SIGs would not use the -dev or the -operators lists: they would
use a specific ML (openstack-sigs ?) to hold their discussions without
cross-posting, with appropriate subject tagging.

Not everything would become a SIG. Upstream project teams would remain
the same (although some of them, like Security, might turn into a SIG).
Teams under the UC that are purely operator-facing (like the Ops Tags
Team or the AUC recognition team) would likewise stay as UC subteams.

Comments, thoughts ?

[1]
https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Working_Groups_and_Teams
[2] https://wiki.openstack.org/wiki/Upstream_Working_Groups



How does the re-branding or re-categorization of these groups solve the 
actual feedback problem? If the problem is getting different people from 
different groups together, how does this solve that? For example, how do 
we get upstream developers aware of operator issues or product managers 
communicating their needs and feature priorities to the upstream 
developers? No one can join all work groups or SIGs and be aware of all 
things at the same time, and actually have time to do anything else.


Is the number of various work groups/SIGs a problem?

Maybe what I'd need is an example of an existing problem case and how 
the new SIG model would fix that - concrete examples would be really 
appreciated when communicating suggested governance changes.


For example, is there some feature/requirement/issue that one group has 
wanted implemented/fixed for a long time but another group isn't aware 
of it? How would SIGs fix that in a way that work groups haven't?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-21 Thread Matt Riedemann

On 6/21/2017 7:04 AM, Shewale, Bhagyashri wrote:
I would like to write functional tests to check the exact req/resp for 
each placement API for all supported versions, similar to what is 
already done for other APIs under 
nova/tests/functional/api_sample_tests/api_samples/*.

These request/response json samples can be used by api.openstack.org 
and in the manuals.

There are already functional tests written for placement APIs under 
nova/tests/functional/api/openstack/placement, but these tests don't 
check the entire HTTP response for each API for all supported versions.

I think adding such functional tests for checking the response for each 
placement API would be beneficial to the project.

If there is an interest in creating such functional tests, I can file a 
new blueprint for this activity.




This has come up before and we don't want to use the same functional API 
samples infrastructure for generating API samples for the placement API. 
The functional API samples tests are confusing and have a steep learning 
curve for new contributors (and even long-time contributors still get 
confused by them).


Talk with Chris Dent about ideas here for API samples with placement. 
He's talked about building something into the gabbi library for this, 
but I don't know if that's being worked on or not.


Chris is also on vacation for a couple of weeks, just FYI.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] How to handle nova show --minimal with embedded flavors

2017-06-20 Thread Matt Riedemann
Microversion 2.47 embeds the instance.flavor in the server response 
body. Chris Friesen is adding support for this microversion to 
novaclient [1] and a question has come up over how to deal with the 
--minimal option which before this microversion would just show the 
flavor id. When --minimal is not specified today, the flavor name and id 
are shown.


In Chris' change, he's showing the full flavor information regardless of 
the --minimal option.


The help for the --minimal option is different between show/rebuild 
commands and list.


show/rebuild: "Skips flavor/image lookups when showing servers."

list: "Get only UUID and name."

Personally I think that if I specify --minimal I want minimal output, 
which would just be the flavor's original name after the new 
microversion, which is closer in behavior to how --minimal works today 
before the 2.47 microversion.
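To illustrate what changes at 2.47 (a hedged illustration; the field
values are made up):

    server_pre_2_47 = {'flavor': {'id': '42'}}

    server_2_47 = {
        'flavor': {
            'original_name': 'm1.tiny',  # what --minimal could keep showing
            'vcpus': 1,
            'ram': 512,
            'disk': 1,
            'ephemeral': 0,
            'swap': 0,
            'extra_specs': {},
        },
    }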


I'm posting this in the mailing list for wider discussion/input.

[1] https://review.openstack.org/#/c/435141/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][tricircle] CellsV2 in Pike?

2017-06-19 Thread Matt Riedemann

On 6/19/2017 8:02 PM, joehuang wrote:

Hello,

In May, Tricircle did some work to make Nova cells V2 + Neutron + 
Tricircle work together [1]: each cell will have a corresponding local 
Neutron with the Tricircle local plugin installed, and one central 
Neutron server works together with the Nova API server, where the 
Tricircle central plugin is installed.

We would like to know how far multi-cell support for cells V2 will go 
in the Pike release, so that Tricircle can do more verification of this 
deployment option.


[1]http://lists.openstack.org/pipermail/openstack-dev/2017-May/117599.html

Best Regards
Chaoyi Huang (joehuang)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Hi Joe,

Tempest is passing on this devstack change [1] which enables a 
multi-cell environment. We're still finding some random things that need 
to be aware of a multi-cell deployment and are working through those, 
but at this point we expect to be able to declare support for multiple 
cells-v2 cells in Pike.


[1] https://review.openstack.org/#/c/436094/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-19 Thread Matt Fischer
Amrith,

Some good thoughts in your email. I've replied to a few specific pieces
below. Overall I think it's a good start to a plan.

On Sun, Jun 18, 2017 at 5:35 AM, Amrith Kumar 
wrote:

> Trove has evolved rapidly over the past several years, since integration
> in IceHouse when it only supported single instances of a few databases.
> Today it supports a dozen databases including clusters and replication.
>
> The user survey [1] indicates that while there is strong interest in the
> project, there are few large production deployments that are known of (by
> the development team).
>
> Recent changes in the OpenStack community at large (company realignments,
> acquisitions, layoffs) and the Trove community in particular, coupled with
> a mounting burden of technical debt have prompted me to make this proposal
> to re-architect Trove.
>
> This email summarizes several of the issues that face the project, both
> structurally and architecturally. This email does not claim to include a
> detailed specification for what the new Trove would look like, merely the
> recommendation that the community should come together and develop one so
> that the project can be sustainable and useful to those who wish to use it
> in the future.
>
> TL;DR
>
> Trove, with support for a dozen or so databases today, finds itself in a
> bind because there are few developers, and a code-base with a significant
> amount of technical debt.
>
> Some architectural choices which the team made over the years have
> consequences which make the project less than ideal for deployers.
>
> Given that there are no major production deployments of Trove at present,
> this provides us an opportunity to reset the project, learn from our v1 and
> come up with a strong v2.
>
> An important aspect of making this proposal work is that we seek to
> eliminate the effort (planning, and coding) involved in migrating existing
> Trove v1 deployments to the proposed Trove v2. Effectively, with work
> beginning on Trove v2 as proposed here, Trove v1 as released with Pike will
> be marked as deprecated and users will have to migrate to Trove v2 when it
> becomes available.
>
> While I would very much like to continue to support the users on Trove v1
> through this transition, the simple fact is that absent community
> participation this will be impossible. Furthermore, given that there are no
> production deployments of Trove at this time, it seems pointless to build
> that upgrade path from Trove v1 to Trove v2; it would be the proverbial
> bridge from nowhere.
>
> This (previous) statement is, I realize, contentious. There are those who
> have told me that an upgrade path must be provided, and there are those who
> have told me of unnamed deployments of Trove that would suffer. To this,
> all I can say is that if an upgrade path is of value to you, then please
> commit the development resources to participate in the community to make
> that possible. But equally, preventing a v2 of Trove or delaying it will
> only make the v1 that we have today less valuable.
>
> We have learned a lot from v1, and the hope is that we can address that in
> v2. Some of the more significant things that I have learned are:
>
> - We should adopt a versioned front-end API from the very beginning;
> making the REST API versioned is not a ‘v2 feature’
>
> - A guest agent running on a tenant instance, with connectivity to a
> shared management message bus is a security loophole; encrypting traffic,
> per-tenant-passwords, and any other scheme is merely lipstick on a security
> hole
>

This was a major concern when we deployed it and drove the architectural
decisions. I'd be glad to see it resolved or re-architected.


>
> - Reliance on Nova for compute resources is fine, but dependence on Nova
> VM specific capabilities (like instance rebuild) is not; it makes things
> like containers or bare-metal second class citizens
>
> - A fair portion of what Trove does is resource orchestration; don’t
> reinvent the wheel, there’s Heat for that. Admittedly, Heat wasn’t as far
> along when Trove got started but that’s not the case today and we have an
> opportunity to fix that now
>

+1


>
> - A similarly significant portion of what Trove does is to implement a
> state-machine that will perform specific workflows involved in implementing
> database specific operations. This makes the Trove taskmanager a stateful
> entity. Some of the operations could take a fair amount of time. This is a
> serious architectural flaw
>
> - Tenants should not ever be able to directly interact with the underlying
> storage and compute used by database instances; that should be the default
> configuration, not an untested deployment alternative
>

+1 to this also. Trove should offer a black-box DB as a Service, not
something the user sees as an instance+storage that they feel they can
manipulate.


>
> - The CI should test all databases that are considered to be ‘supported’
> without excessive use of resources

Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-19 Thread Matt Riedemann

On 6/17/2017 10:55 AM, Jay Bryant wrote:


I am responding under Tim's note because I think it gets at what we 
really want to communicate and takes me to what we have presented in 
OUI.  We have Core OpenStack Projects and then a whole community of 
additional projects that support cloud functionality.


So, without it being named, or cutesy, though I liked "Friends of 
Openstack", can we go with "OpenStack Core Projects" and "Peripheral 
OpenStack Projects"?


Because then you have to define what "core" means, and how you get to be 
"core", which is like the old system of integrated and incubated 
projects. I agree that a "core" set of projects is more understandable 
at first, probably most for an outsider. But it gets confusing from a 
governance perspective within the community.


And if you want to run just containers with Kubernetes and you want to 
use Keystone and Cinder with it, you don't need Nova, so is Nova "core" 
or not?


This is probably where the constellations idea comes in [1].

At the end of the day it's all OpenStack to me if it's hosted on 
OpenStack infra, but I'm not the guy making budget decisions at a 
company determining what to invest in. I think Doug has tried to explain 
that perspective a bit elsewhere in this thread, and it sounds like 
that's the key issue, the outside perspective from people making budget 
decisions.


[1] https://review.openstack.org/#/c/453262/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-19 Thread Matt Riedemann
ent will have to either maintain that list of structured data for 
subsequent requests, or re-run the query and only calculate the data 
structures for the hosts that fit in the requested page.


"of these data structures as JSON blobs" is kind of redundant... all our 
REST APIs return data structures as JSON blobs.


While we discussed the fact that there may be a lot of entries, we did 
not say we'd immediately support a paging mechanism.


I believe we said in the initial version we'd have the configurable 
limit in the DB API queries, like we have today - the default limit is 
1000. There was agreement to eventually build paging support into the API.


This does make me wonder though what happens when you have 100K or more 
compute nodes reporting into placement and we limit on the first 1000. 
Aren't we going to be imposing a packing strategy then just because of 
how we pull things out of the database for Placement? Although I don't 
see how that would be any different from before we had Placement and the 
nova-scheduler service just did a ComputeNode.get_all() to the nova DB 
and then filtered/weighed those objects.





* Scheduler continues to request the paged results until it has them all.


See above. Was discussed briefly as a concern but not work to do for 
first patches.


* Scheduler then runs this data through the filters and weighers. No 
HostState objects are required, as the data structures will contain 
all the information that scheduler will need.


No, this isn't correct. The scheduler will have *some* of the 
information it requires for weighing from the returned data from the GET 
/allocation_candidates call, but not all of it.


Again, operators have insisted on keeping the flexibility currently in 
the Nova scheduler to weigh/sort compute nodes by things like thermal 
metrics and kinds of data that the Placement API will never be 
responsible for.


The scheduler will need to merge information from the 
"provider_summaries" part of the HTTP response with information it has 
already in its HostState objects (gotten from 
ComputeNodeList.get_all_by_uuid() and AggregateMetadataList).


* Scheduler then selects the data structure at the top of the ranked 
list. Inside that structure is a dict of the allocation data that 
scheduler will need to claim the resources on the selected host. If 
the claim fails, the next data structure in the list is chosen, and 
repeated until a claim succeeds.


Kind of, yes. The scheduler will select a *host* that meets its needs.

There may be more than one allocation request that includes that host 
resource provider, because of shared providers and (soon) nested 
providers. The scheduler will choose one of these allocation requests 
and attempt a claim of resources by simply PUT 
/allocations/{instance_uuid} with the serialized body of that allocation 
request. If 202 returned, cool. If not, repeat for the next allocation 
request.
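In code form, a hedged sketch of that loop (the client object is
illustrative; the success code follows the description above):

    def claim_resources(placement, instance_uuid, allocation_requests):
        for alloc_req in allocation_requests:
            resp = placement.put('/allocations/%s' % instance_uuid,
                                 json=alloc_req)
            if resp.status_code == 202:  # claim succeeded
                return alloc_req
        # Every candidate was claimed out from under us; no valid host.
        return None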


* Scheduler then creates a list of N of these data structures, with 
the first being the data for the selected host, and the the rest being 
data structures representing alternates consisting of the next hosts 
in the ranked list that are in the same cell as the selected host.


Yes, this is the proposed solution for allowing retries within a cell.


* Scheduler returns that list to conductor.
* Conductor determines the cell of the selected host, and sends that 
list to the target cell.
* Target cell tries to build the instance on the selected host. If it 
fails, it uses the allocation data in the data structure to unclaim 
the resources for the selected host, and tries to claim the resources 
for the next host in the list using its allocation data. It then tries 
to build the instance on the next host in the list of alternates. Only 
when all alternates fail does the build request fail.


I'll let Dan discuss this last part.

Best,
-jay


[0] https://review.openstack.org/#/c/471927/





__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-19 Thread Matt Riedemann

On 6/16/2017 8:58 AM, Eric Harney wrote:

I'm not convinced yet that this failure is purely Ceph-specific, at a
quick look.

I think what happens here is, unshelve performs an asynchronous delete
of a glance image, and returns as successful before the delete has
necessarily completed.  The check in tempest then sees that the image
still exists, and fails -- but this isn't valid, because the unshelve
API doesn't guarantee that this image is no longer there at the time it
returns.  This would fail on any image delete that isn't instantaneous.

Is there a guarantee anywhere that the unshelve API behaves how this
tempest test expects it to?


There are no guarantees, no. The unshelve API reference is here [1]. The 
asynchronous postconditions section just says:


"After you successfully shelve a server, its status changes to ACTIVE. 
The server appears on the compute node.


The shelved image is deleted from the list of images returned by an API 
call."


It doesn't say the image is deleted immediately, or that it waits for 
the image to be gone before changing the instance status to ACTIVE.


I see there is also a typo in there, that should say after you 
successfully *unshelve* a server.


From an API user point of view, this is all asynchronous because it's 
an RPC cast from the nova-api service to the nova-conductor and finally 
nova-compute service when unshelving the instance.


So I think the test is making some wrong assumptions on how fast the 
image is going to be deleted when the instance is active.


As Ken'ichi pointed out in the Tempest change, Glance returns a 204 when 
deleting an image in the v2 API [2]. If the image delete is asynchronous 
then that should probably be a 202.


Either way the Tempest test should probably be in a wait loop for the 
image to be gone if it's really going to assert this.


[1] 
https://developer.openstack.org/api-ref/compute/?expanded=unshelve-restore-shelved-server-unshelve-action-detail#unshelve-restore-shelved-server-unshelve-action
[2] 
https://developer.openstack.org/api-ref/image/v2/index.html?expanded=delete-an-image-detail#delete-an-image
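Something along these lines would do it (a sketch; the helper name is
mine, the client method follows tempest's v2 images client):

    import time

    from tempest.lib import exceptions as lib_exc

    def wait_for_image_deleted(images_client, image_id,
                               timeout=60, interval=2):
        # Poll until glance reports the image gone, instead of
        # asserting immediately after unshelve returns.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                images_client.show_image(image_id)
            except lib_exc.NotFound:
                return  # the asynchronous delete finally finished
            time.sleep(interval)
        raise lib_exc.TimeoutException(
            'image %s still exists after unshelve' % image_id)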


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Matt Riedemann

On 6/16/2017 9:46 AM, Eric Harney wrote:

On 06/16/2017 10:21 AM, Sean McGinnis wrote:


I don't think merging tests that are showing failures, then blacklisting
them, is the right approach. And as Eric points out, this isn't
necessarily just a failure with Ceph. There is a legitimate logical
issue with what this particular test is doing.

But in general, to get back to some of the earlier points, I don't think
we should be merging tests with known breakages until those breakages
can be first addressed.



As another example, this was the last round of this, in May:

https://review.openstack.org/#/c/332670/

which is a new tempest test for a Cinder API that is not supported by
all drivers.  The Ceph job failed on the tempest patch, correctly, the
test was merged, then the Ceph jobs broke:

https://bugs.launchpad.net/glance/+bug/1687538
https://review.openstack.org/#/c/461625/

This is really not a sustainable model.

And this is the _easy_ case, since Ceph jobs run in OpenStack infra and
are easily visible and trackable.  I'm not sure what the impact is on
Cinder third-party CI for other drivers.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



This is generally why we have config options in Tempest to not run tests 
that certain backends don't implement, like all of the backup/snapshot 
volume tests that the NFS job was failing on forever.


I think it's perfectly valid to have tests in Tempest for things that 
not all backends implement as long as they are configurable. It's up to 
the various CI jobs to configure Tempest properly for what they support 
and then work on reducing the number of things they don't support. We've 
been doing that for ages now.
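A sketch following tempest's existing pattern for this (the test class
is illustrative; the flag is tempest's volume-feature-enabled config
group):

    from tempest.api.volume import base
    from tempest import config

    CONF = config.CONF


    class VolumesBackupsTest(base.BaseVolumeTest):

        @classmethod
        def skip_checks(cls):
            super(VolumesBackupsTest, cls).skip_checks()
            # Jobs for backends without cinder-backup set
            # [volume-feature-enabled]/backup = False in tempest.conf.
            if not CONF.volume_feature_enabled.backup:
                raise cls.skipException('cinder backup is disabled')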


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Matt Riedemann

On 6/16/2017 8:13 PM, Matt Riedemann wrote:
Yeah there is a distinction between the ceph nv job that runs on 
nova/cinder/glance changes and the ceph job that runs on os-brick and 
glance_store changes. When we made the tempest dsvm ceph job non-voting 
we failed to mirror that in the os-brick/glance-store jobs. We should do 
that.


Here you go:

https://review.openstack.org/#/c/475095/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-16 Thread Matt Riedemann

On 6/16/2017 3:32 PM, Sean McGinnis wrote:


So, before we go further, ceph seems to be -nv on all projects right
now, right? So I get there is some debate on that patch, but is it
blocking anything?



Ceph is voting on os-brick patches. So it does block some things when
we run into this situation.

But again, we should avoid getting into this situation in the first
place, voting or no.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah there is a distinction between the ceph nv job that runs on 
nova/cinder/glance changes and the ceph job that runs on os-brick and 
glance_store changes. When we made the tempest dsvm ceph job non-voting 
we failed to mirror that in the os-brick/glance-store jobs. We should do 
that.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] Strict validation in query parameters

2017-06-15 Thread Matt Riedemann

On 6/15/2017 8:43 PM, Alex Xu wrote:
We added a new decorator 'query_schema' to support validating the query 
parameters by JSON-Schema.

It provides stricter validation as below:
* Set 'additionalProperties=False' in the schema; this means any 
invalid query parameters are rejected and HTTPBadRequest 400 is 
returned to the user.
* Use the macro function 'single_param' to declare that a specific 
query parameter only supports a single value. For example, for the 
'marker' parameter used for pagination, only one value is actually 
valid. If the user specifies multiple values, "marker=1&marker=2", the 
validation will return 400 to the user.


Currently there is patch related to this:
https://review.openstack.org/#/c/459483/13/nova/api/openstack/compute/schemas/server_migrations.py

So my question is:
Are we all good with this strict validation in all future 
microversions?

I don't remember us explicitly agreeing on this anywhere; I just want 
to double-check that this is the direction everybody wants to go.


Thanks
Alex


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I think this is fine and makes sense for new microversions. The spec for 
consistent query parameter validation does talk about it a bit:


https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/consistent-query-parameters-validation.html#proposed-change

"The behaviour additionalProperties as below:

* When the value of additionalProperties is True means the extra query 
parameters are allowed. But those extra query parameters will be 
stripped out.
* When the value of additionalProperties is False means the extra query 
aren’t allowed.


The value of additionalProperties will be True until we decide to 
restrict the parameters in the future, and it will be changed with new 
microversion."


I don't see a point in allowing someone to specify a query parameter 
multiple times if we only pick the first one from the list and use that.


There are certain query parameters that we allow multiple instances of, 
for sorting I believe. But for other things like filtering, restricting 
to 1 should be fine, and using additionalProperties=False should also be 
fine on new microversions. For example, if we allow additional 
properties, someone could type the parameter name incorrectly and we'd 
just ignore it. With strict validation, we'll return a 400, which should 
make it clear to the end user that what they requested was invalid and 
they need to fix it on their end.
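To make the discussion concrete, a hedged sketch of a strict query
schema in this style (the shape follows nova's JSON-Schema query
schemas; exact helper names in parameter_types may differ):

    list_query_schema = {
        'type': 'object',
        'properties': {
            # Query values arrive as lists; maxItems=1 is the
            # 'single_param' idea, so 'marker=1&marker=2' is a 400.
            'marker': {
                'type': 'array',
                'items': {'type': 'string'},
                'maxItems': 1,
            },
        },
        # Any undeclared query parameter is a 400, so a typo like
        # '?markr=abc' is rejected instead of silently ignored.
        'additionalProperties': False,
    }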


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-15 Thread Matt Riedemann

On 6/15/2017 9:57 AM, Thierry Carrez wrote:

Obviously we are not the target audience for that term. I think we are
deep enough in OpenStack and technically-focused enough to see through
that. But reality is, the majority of the rest of the world is confused,
and needs help figuring it out. Giving the category a name is a way to
do that.


Maybe don't ask the inmates what the asylum/prison should be called. Why 
don't we have people that are confused about this weighing in on this 
thread? Oh right because they don't subscribe to, or read, or reply to a 
development mailing list.


God I feel like I waste an inordinate amount of time each week reading 
about what new process or thing we're going to call something, rather 
than actually working on getting stuff done for the release or reviewing 
changes. I'm tired of constant bureaucratic distraction. I believe it 
has to be demoralizing to the development community.


I'm not trying to offend or troll so much as vent some frustration.

--

Thanks,

Matt



Re: [openstack-dev] [all] os-api-ref 1.4.0 about to hit upper-constraints

2017-06-14 Thread Matt Riedemann

On 6/14/2017 6:01 AM, Sean Dague wrote:

There were some changes in Sphinx 1.6.x that removed functions that
os-api-ref was using to warn for validation. Which meant that when
things failed instead of getting the warning you got a huge cryptic
stack trace. :(


https://bugs.launchpad.net/openstack-doc-tools/+bug/1697736 if you want 
the dirty details.




Those are fixed in 1.4.0, however with the other changes in Sphinx about
how warnings work, I'm not 100% confident this is going to be smooth for
everyone. If you hit an issue in your api-ref building after this lands,
please pop up on #openstack-dev and we'll try to work through it.

-Sean




--

Thanks,

Matt



Re: [openstack-dev] [Openstack-operators] [nova] Does anyone rely on PUT /os-services/disable for non-compute services?

2017-06-13 Thread Matt Riedemann

On 6/13/2017 8:17 PM, Dan Smith wrote:

So it seems our options are:

1. Allow PUT /os-services/{service_uuid} on any type of service, even if
it doesn't make sense for non-nova-compute services.

2. Change the behavior of [1] to only disable new "nova-compute" 
services.


Please, #2. Please.

--Dan



Are we allowed to cheat and say auto-disabling non-nova-compute services 
on startup is a bug and just fix it that way for #2? :) Because (1) it 
doesn't make sense, as far as we know, and (2) it forces the operator to 
have to use the API to enable them later just to fix their nova 
service-list output.


--

Thanks,

Matt



Re: [openstack-dev] [nova] Does anyone rely on PUT /os-services/disable for non-compute services?

2017-06-13 Thread Matt Riedemann

On 6/13/2017 12:19 PM, Matt Riedemann wrote:

With this change in Pike:

https://review.openstack.org/#/c/442162/

The PUT /os-services/* APIs to enable/disable/force-down a service will 
now only work with nova-compute services. If you're using those to try 
and disable a non-compute service, like nova-scheduler or 
nova-conductor, those APIs will result in a 404 response because there 
won't be host mappings for non-compute services.


There really never was a good reason to disable/enable non-compute 
services anyway since it wouldn't do anything. The scheduler and API are 
checking the status and forced_down fields to see if instance builds can 
be scheduled to a compute host or if instances can be evacuated from a 
downed compute host. There is nothing that relies on a disabled or 
downed conductor or scheduler service.


I realize the docs aren't justification for API behavior, but the API 
reference has always pointed out that these PUT operations are for 
*compute* services:


https://developer.openstack.org/api-ref/compute/#compute-services-os-services 



This has come up while working on an API microversion [1] where we'll 
now expose service uuids in GET calls and take a service uuid in PUT and 
DELETE calls to the os-services API. The uuid is needed to uniquely 
identify a service across cells. I plan on restricting PUT 
/os-services/{service_id} calls to only nova-compute services, and 
return a 400 on any other service like nova-conductor or nova-scheduler, 
since it doesn't make sense to enable/disable/force-down non-compute 
services.


This email is to provide awareness of this change and to also see if 
there are any corner cases in which people are relying on any of this 
behavior that we don't know about - this is your chance to speak up 
before we make the change.


[1] 
https://review.openstack.org/#/c/464280/11/nova/api/openstack/compute/services.py@288 





Kris Lindgren brought up a good point in IRC today about this.

If you configure enable_new_services=False, when new services are 
created they will be automatically disabled [1].


As noted earlier, disabled nova-conductor, nova-scheduler, etc, doesn't 
really mean anything. However, if we don't allow you to enable them via 
the API (the new PUT /os-services/{service_uuid} microversion), then 
those are going to be listed as disabled until you tweak them in the 
database directly, which isn't good.


And trying to get around this by using "PUT /os-services/enable" with 
microversion 2.1 won't work in Pike because of the host mapping issue I 
mentioned before.


So it seems our options are:

1. Allow PUT /os-services/{service_uuid} on any type of service, even if 
it doesn't make sense for non-nova-compute services.


2. Change the behavior of [1] to only disable new "nova-compute" services.

[1] 
https://github.com/openstack/nova/blob/d26b3e7051a89160ad26c38548fcf0c08c06dc33/nova/db/sqlalchemy/api.py#L588
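
If we go with #2, a rough sketch of the change against the 
service_create() flow linked at [1] could look like this (untested; 
models and CONF are that module's existing imports, details may differ):

    def service_create(context, values):
        service_ref = models.Service()
        service_ref.update(values)
        # Only auto-disable new nova-compute services; a disabled
        # conductor or scheduler record doesn't mean anything anyway.
        if (not CONF.enable_new_services
                and values.get('binary') == 'nova-compute'):
            service_ref.disabled = True
        ...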


--

Thanks,

Matt



Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-13 Thread Matt Riedemann

On 6/8/2017 7:45 AM, Jim Rollenhagen wrote:

Hey friends,

I've been mostly missing for the past six weeks while looking for a new 
job, so maybe you've forgotten me already, maybe not. I'm happy to tell 
you I've found one that I think is a great opportunity for me. But, I'm 
sad to tell you that it's totally outside of the OpenStack community.


The last 3.5 years have been amazing. I'm extremely grateful that I've 
been able to work in this community - I've learned so much and met so 
many awesome people. I'm going to miss the insane(ly awesome) level of 
collaboration, the summits, the PTGs, and even some of the bikeshedding. 
We've built amazing things together, and I'm sure y'all will continue to 
do so without me.


I'll still be lurking in #openstack-dev and #openstack-ironic for a 
while, if people need me to drop a -2 or dictate old knowledge or 
whatever, feel free to ping me. Or if you just want to chat. :)


<3 jroll

P.S. obviously my core permissions should be dropped now :P


How can you drop a -2 if you don't have core anymore Jim?!

Good luck on the new position. We'll miss you around the nova channel. 
We were just talking today about how much better you made the 
nova/ironic interaction for users and operators, and developers by 
bridging the gap on both sides.


--

Thanks,

Matt



[openstack-dev] [nova] Does anyone rely on PUT /os-services/disable for non-compute services?

2017-06-13 Thread Matt Riedemann

With this change in Pike:

https://review.openstack.org/#/c/442162/

The PUT /os-services/* APIs to enable/disable/force-down a service will 
now only work with nova-compute services. If you're using those to try 
and disable a non-compute service, like nova-scheduler or 
nova-conductor, those APIs will result in a 404 response because there 
won't be host mappings for non-compute services.


There really never was a good reason to disable/enable non-compute 
services anyway since it wouldn't do anything. The scheduler and API are 
checking the status and forced_down fields to see if instance builds can 
be scheduled to a compute host or if instances can be evacuated from a 
downed compute host. There is nothing that relies on a disabled or 
downed conductor or scheduler service.


I realize the docs aren't justification for API behavior, but the API 
reference has always pointed out that these PUT operations are for 
*compute* services:


https://developer.openstack.org/api-ref/compute/#compute-services-os-services

This has come up while working on an API microversion [1] where we'll 
now expose service uuids in GET calls and take a service uuid in PUT and 
DELETE calls to the os-services API. The uuid is needed to uniquely 
identify a service across cells. I plan on restricting PUT 
/os-services/{service_id} calls to only nova-compute services, and 
return a 400 on any other service like nova-conductor or nova-scheduler, 
since it doesn't make sense to enable/disable/force-down non-compute 
services.


This email is to provide awareness of this change and to also see if 
there are any corner cases in which people are relying on any of this 
behavior that we don't know about - this is your chance to speak up 
before we make the change.


[1] 
https://review.openstack.org/#/c/464280/11/nova/api/openstack/compute/services.py@288


--

Thanks,

Matt



Re: [openstack-dev] [oslo.db] Stepping down from core

2017-06-12 Thread Matt Riedemann

On 6/11/2017 9:32 AM, Roman Podoliaka wrote:

Hi all,

I recently changed job and hasn't been able to devote as much time to
oslo.db as it is expected from a core reviewer. I'm no longer working
on OpenStack, so you won't see me around much.

Anyway, it's been an amazing experience to work with all of you! Best
of luck! And see ya at various PyCon's around the world! ;)

Thanks,
Roman




Good luck with the new position Roman. You've always been a great help 
not only in Oslo land but also helping us out in Nova. You'll be missed.


--

Thanks,

Matt



Re: [openstack-dev] [forum] Future of Stackalytics

2017-06-11 Thread Matt Riedemann

On 6/8/2017 12:57 AM, Adam Harwell wrote:
As a core reviewer for LBaaS I actually find Stackalytics quite helpful 
for giving me a quick snapshot of contributions, and it lines up almost 
perfectly in my experience with what I see when I'm actually reviewing 
and working with people (if you know which statistics to look at -- just 
sorting by sheer number of reviews or commits and ignoring everything 
else is of course not useful, and as you say possibly misleading). In 
all though I actually find that it is a very accurate representation of 
people's work.


For example, in looking at reviewer contributions, I make a mental score 
based on both the number of reviews, but also the +% (this shouldn't be 
too high) and the disagreement score (low is generally good, but 0% with 
a high review count might be questionable). So, I know to discount 
someone who just spams +1 at everything that has a +2 already and 
doesn't contribute anything else, which can go unnoticed while reading 
reviews but sticks out like a sore thumb in Stackalytics. The other side 
of the coin is someone who posts a ton of useless comments and -1's 
everything, which then is super obvious to anyone who actually reads 
reviews.


Maybe the experience with the projects I work on is a little different 
than some of the more populous "base" services like Nova or Neutron? 
Regardless, I'd be really sad to see it go, as I use it multiple times a 
week for various reasons. So, I definitely agree with keeping it around 
and possibly focusing on improving the way the data is displayed. It is 
definitely best used as one tool in a toolkit, not taken alone as a 
single source of truth. Is that the main problem people are trying to solve?




I agree with you, and the experience in Nova is the same. I use 
Stackalytics the same way.


Note, however, that reviewstats is also published from a site that 
russellb has running, e.g.:


http://russellbryant.net/openstack-stats/nova-reviewers-30.txt

--

Thanks,

Matt



Re: [openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Matt Riedemann

On 6/8/2017 6:17 PM, John Griffith wrote:
The attachment_update call could do this for you; it might need some 
slight tweaks because I tried to make sure that we weren't having 
attachment records be modified into things that lived forever and were 
dynamic. This particular case seems like a decent fit though: issue 
the call; cinder queries the backend to get any updated connection info 
and sends it back. I'd leave it to Nova to figure out if said info has 
been updated or not. Just iterate through the attachment_ids in the bdm 
and update/refresh each one maybe?


Yeah, although we have to keep in mind that's a new API we're not even 
using yet for volume attach, so anything I'm thinking about here has to 
handle old-style attachments (old-style as in, you know, today). Plus we 
don't have a migration plan yet for the old style attachments to the new 
style. At the Pike PTG we said we'd work on that in Queens.


I definitely want to use new shiny things at some point, we just have to 
handle the old crufty things too.


--

Thanks,

Matt



Re: [openstack-dev] [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Matt Riedemann

On 6/8/2017 1:39 PM, melanie witt wrote:

On Thu, 8 Jun 2017 08:58:20 -0500, Matt Riedemann wrote:
Nova stores the output of the Cinder os-initialize_connection info API 
in the Nova block_device_mappings table, and uses that later for 
making volume connections.


This data can get out of whack or need to be refreshed, like if your 
ceph server IP changes, or you need to recycle some secret uuid for 
your ceph cluster.


I think the only ways to do this on the nova side today are via volume 
detach/re-attach, reboot, migrations, etc - all of which, except live 
migration, are disruptive to the running guest.


I believe the only way to work around this currently is by doing a 'nova 
shelve' followed by a 'nova unshelve'. That will end up querying the 
connection_info from Cinder and update the block device mapping record 
for the instance. Maybe detach/re-attach would work too but I can't 
remember trying it.


Shelve has its own fun set of problems, like the fact that it doesn't 
terminate the connection to the volume backend on shelve. Maybe that's 
not a problem for Ceph, I don't know. You do end up on another host 
though potentially, and it's a full delete and spawn of the guest on 
that other host. Definitely disruptive.




I've kicked around the idea of adding some sort of admin API interface 
for refreshing the BDM.connection_info on-demand if needed by an 
operator. Does anyone see value in this? Are operators doing stuff 
like this already, but maybe via direct DB updates?


We could have something in the compute API which calls down to the 
compute for an instance and has it refresh the connection_info from 
Cinder and updates the BDM table in the nova DB. It could be an admin 
action API, or part of the os-server-external-events API, like what we 
have for the 'network-changed' event sent from Neutron which nova uses 
to refresh the network info cache.


Other ideas or feedback here?


We've discussed this a few times before and we were thinking it might be 
best to handle this transparently and just do a connection_info refresh 
+ record update inline with the request flows that will end up reading 
connection_info from the block device mapping records. That way, 
operators won't have to intervene when connection_info changes.


The thing that sucks about this is that we'd be refreshing something 
that maybe rarely changes for every volume-related operation 
on the instance. That seems like a lot of overhead to me (nova/cinder 
API interactions, Cinder interactions to the volume backend, 
nova-compute round trips to conductor and the DB to update the BDM 
table, etc).




At least in the case of Ceph, as long as a guest is running, it will 
continue to work fine if the monitor IPs or secrets change because it 
will continue to use its existing connection to the Ceph cluster. Things 
go wrong when an instance action such as resize, stop/start, or reboot 
is done because when the instance is taken offline and being brought 
back up, the stale connection_info is read from the block_device_mapping 
table and injected into the instance, and so it loses contact with the 
cluster. If we query Cinder and update the block_device_mapping record 
at the beginning of those actions, the instance will get the new 
connection_info.


-melanie





--

Thanks,

Matt



Re: [openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Matt Riedemann

On 6/8/2017 10:17 AM, Arne Wiebalck wrote:


On 08 Jun 2017, at 15:58, Matt Riedemann <mriede...@gmail.com> wrote:


Nova stores the output of the Cinder os-initialize_connection info API 
in the Nova block_device_mappings table, and uses that later for 
making volume connections.


This data can get out of whack or need to be refreshed, like if your 
ceph server IP changes, or you need to recycle some secret uuid for 
your ceph cluster.


I think the only ways to do this on the nova side today are via volume 
detach/re-attach, reboot, migrations, etc - all of which, except live 
migration, are disruptive to the running guest.


I've kicked around the idea of adding some sort of admin API interface 
for refreshing the BDM.connection_info on-demand if needed by an 
operator. Does anyone see value in this? Are operators doing stuff 
like this already, but maybe via direct DB updates?


We could have something in the compute API which calls down to the 
compute for an instance and has it refresh the connection_info from 
Cinder and updates the BDM table in the nova DB. It could be an admin 
action API, or part of the os-server-external-events API, like what we 
have for the 'network-changed' event sent from Neutron which nova uses 
to refresh the network info cache.


Other ideas or feedback here?


I have opened https://bugs.launchpad.net/cinder/+bug/1452641 for this 
issue some time ago.
Back then I was more thinking of using an alias and not deal with IP 
addresses directly. From
what I understand, this should work with Ceph. In any case, there is 
still interest in a fix :-)


Cheers,
  Arne


--
Arne Wiebalck
CERN IT






Yeah this was also discussed in the dev mailing list over a year ago:

http://lists.openstack.org/pipermail/openstack-dev/2016-May/095170.html

At that time I was opposed to a REST API for a *user* doing this, but 
I'm more open to an *admin* (by default) doing this. Also, if it were 
initiated via the volume API then Cinder could call the Nova 
os-server-external-events API which is admin-only by default and then 
Nova can do a refresh.


Later in that thread Melanie Witt also has an idea about doing a refresh 
in a periodic task on the compute service, like we do for refreshing the 
instance network info cache with Neutron in a periodic task.


--

Thanks,

Matt



Re: [openstack-dev] [Nova][Scheduler]

2017-06-08 Thread Matt Riedemann

On 6/8/2017 3:36 AM, Narendra Pal Singh wrote:
Do the Ocata bits support adding a custom resource monitor, say network 
bandwidth?


I don't believe so in the upstream code. There is only a CPU bandwidth 
monitor in-tree today, but only supported by the libvirt driver and 
untested anywhere in our integration testing.


Nova Scheduler should consider the new metric data for cost calculation 
of each filtered host.


There was an attempt in Liberty, Mitaka and Newton to add a new memory 
bandwidth monitor:


https://specs.openstack.org/openstack/nova-specs/specs/newton/approved/memory-bw.html

But we eventually said no to that, and stated why here:

https://docs.openstack.org/developer/nova/policies.html#metrics-gathering

--

Thanks,

Matt



[openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Matt Riedemann
Nova stores the output of the Cinder os-initialize_connection info API 
in the Nova block_device_mappings table, and uses that later for making 
volume connections.


This data can get out of whack or need to be refreshed, like if your 
ceph server IP changes, or you need to recycle some secret uuid for your 
ceph cluster.


I think the only ways to do this on the nova side today are via volume 
detach/re-attach, reboot, migrations, etc - all of which, except live 
migration, are disruptive to the running guest.


I've kicked around the idea of adding some sort of admin API interface 
for refreshing the BDM.connection_info on-demand if needed by an 
operator. Does anyone see value in this? Are operators doing stuff like 
this already, but maybe via direct DB updates?


We could have something in the compute API which calls down to the 
compute for an instance and has it refresh the connection_info from 
Cinder and updates the BDM table in the nova DB. It could be an admin 
action API, or part of the os-server-external-events API, like what we 
have for the 'network-changed' event sent from Neutron which nova uses 
to refresh the network info cache.
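
To make that concrete, here is a hypothetical sketch of what such a 
refresh could look like on the compute manager - the method name is 
made up, but the objects/volume_api calls are the existing internal 
interfaces:

    from oslo_serialization import jsonutils

    def refresh_volume_connection_info(self, context, instance):
        connector = self.driver.get_volume_connector(instance)
        bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
            context, instance.uuid)
        for bdm in bdms:
            if not bdm.volume_id:
                continue
            # Re-ask Cinder for current connection info and persist it.
            new_info = self.volume_api.initialize_connection(
                context, bdm.volume_id, connector)
            bdm.connection_info = jsonutils.dumps(new_info)
            bdm.save()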


Other ideas or feedback here?

--

Thanks,

Matt



Re: [openstack-dev] [nova] allow vfs to be trusted

2017-06-07 Thread Matt Riedemann

On 6/7/2017 8:28 AM, Sahid Orentino Ferdjaoui wrote:

I still have a question: do
I need to provide a spec for this?


There is a spec for it:

https://review.openstack.org/#/c/397932/

So why not just revive that for Queens? Specs also serve as 
documentation of a feature. Release notes are not a substitute for 
documenting how to use a feature. Specs aren't really either, or 
shouldn't be, but sometimes that's the only thing we have since we don't 
get things into the manuals or in-tree devref.


That's my way of saying I think a spec is a good idea.

--

Thanks,

Matt



[openstack-dev] [nova] How to move on from live_migration_uri?

2017-06-07 Thread Matt Riedemann
The [libvirt]/live_migration_uri config option was deprecated in Ocata 
[1] in favor of two other config options:


live_migration_scheme: defaults to tcp (could be ssh), only used for 
kvm/qemu virt types


live_migration_inbound_addr: defaults to None, only used if doing a 
non-tunneled live migration


Those are used here:

https://github.com/openstack/nova/blob/7815108d4892525b0047c787cbd2fe2f26c204c2/nova/virt/libvirt/driver.py#L652

If you leave a %s in the URI, the libvirt driver will replace that with 
the destination target host.


Devstack is configuring the live_migration_uri option and setting it to 
"qemu+ssh://stack@%s/system" in our CI jobs. That %s gets replaced with 
the target destination host IP as noted above.


Since live_migration_uri is deprecated, I tried to update devstack to 
use the new options that replace it [2], but then I ran into some 
problems [3].


What I'm trying to do is continue to use ssh as the scheme since that's 
what devstack sets up. So I set live_migration_scheme=ssh.


Within the libvirt driver, it starts with a URL like this for qemu:

qemu+%s://%s/system

And does a string replace on that URL with (scheme, destination), which 
would give us:


qemu+ssh://<dest>/system

The problem lies in the dest part. Devstack is trying to specify the 
username for the ssh URI, so it wants "stack@%s" for the dest part. I 
tried setting live_migration_inbound_addr="stack@%s" but that doesn't 
work because the driver doesn't replace the dest on top of that again, 
so we just end up with this:


qemu+ssh://stack@%s/system
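
To boil the substitution problem down, here is a standalone sketch - not 
the actual driver code, just the shape of the failure:

    uri_template = 'qemu+%s://%s/system'
    scheme = 'ssh'         # [libvirt]/live_migration_scheme
    dest = 'stack@%s'      # [libvirt]/live_migration_inbound_addr
    uri = uri_template % (scheme, dest)
    # uri is now 'qemu+ssh://stack@%s/system' -- the inner %s that came
    # from the inbound_addr value is never replaced with the target host.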

Is there some other way to be doing this? We could try to use tunneling 
but the config option help text for live_migration_tunnelled makes that 
sound scary, e.g. "Enable this option will definitely impact performance 
massively." Holy crap Scoobs, let's scram!

Should we check if the scheme is ssh and try a final string replacement 
with the destination host after we've already applied 
(live_migration_scheme, live_migration_inbound_addr)?


Other ideas? Given the bazillion config options related to libvirt live 
migration, this is just a landmine of terrible so I'm interested in what 
people are doing config-wise if they are using ssh.


[1] https://review.openstack.org/#/c/410817/
[2] https://review.openstack.org/#/c/471011/
[3] 
http://logs.openstack.org/19/471019/1/check/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/546935b/logs/screen-n-cpu.txt.gz?level=TRACE#_Jun_05_15_56_42_184587


--

Thanks,

Matt



Re: [openstack-dev] [nova][neutron]Nova cells v2+Neutron+Tricircle, it works

2017-06-06 Thread Matt Riedemann
4.html
[2]https://docs.openstack.org/developer/tricircle/installation-guide.html#work-with-nova-cell-v2-experiment

Best Regards
Chaoyi Huang (joehuang)






--

Thanks,

Matt



Re: [openstack-dev] [tc] [all] TC Report 23

2017-06-06 Thread Matt Riedemann

On 6/6/2017 5:10 PM, Chris Dent wrote:

This week had a scheduled TC meeting for the express purpose of
discussing what to do about PostgreSQL. The remainder of this
document has notes from that meeting.


Just wanted to say thanks for the nice summary. I just got done writing 
up something similar and I'm happy to say they said the same things. 
Just wish I'd seen this earlier. :)


--

Thanks,

Matt



Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Move policy and policy docs into code

2017-06-02 Thread Matt Riedemann

On 6/1/2017 12:54 PM, Lance Bragstad wrote:

Hi all,

I've proposed a community-wide goal for Queens to move policy into code 
and supply documentation for each policy [0]. I've included references 
to existing documentation and specifications completed by various 
projects and attempted to lay out the benefits for both developers and 
operators.


I'd greatly appreciate any feedback or discussion.

Thanks!

Lance


[0] https://review.openstack.org/#/c/469954/





+1, especially because now I don't have to write the governance patch 
for this which was a TODO of mine from the summit.


--

Thanks,

Matt



Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off Paste

2017-06-02 Thread Matt Riedemann

On 6/2/2017 1:14 PM, Clay Gerrard wrote:

Can we make this (at least) two (community?) goals?

#1 Make a thing that is not paste that is better than paste (i.e. > 
works, ie >= works & is maintained)

#2 Have some people/projects "migrate" to it

If the goal is just "take over paste maintenance" that's maybe ok - but 
is that an "OpenStack community" goal or just something that someone who 
has the bandwidth to do could do?  It also sounds cheaper and probably 
about as good.


Alternatively we can just keep using paste until we're tired of working 
around its bugs/limitations - and then replace it with something in 
tree that implements only 100% of what the project using it needs to get 
done - then if a few projects do this and they see they're maintaining 
similar code they could extract it to a common library - but iff sharing 
their complexity isolated behind an abstraction sounds better than 
having multiple simpler and more agile ways to do similar-ish stuff - 
and only *then* make a thing that is not paste but serves a similar 
use-case as paste and is also maintained and easy to migrate too from 
paste.  At which point it might be reasonable to say "ok, community, new 
goal, if you're not already using the thing that's not paste but does 
about the same as paste - then we want to organize some people in the 
community experienced with the effort of such a migration to come assist 
*all openstack projects* (who use paste) in completing the goal of 
getting off paste - because srly, it's *that* important"


-Clay



I don't think the maintenance issue is the prime motivator, it's the 
fact paste is in /etc which makes it a config file and therefore an 
impediment to smooth upgrades. The more we can move into code, like 
default policy and privsep, the better.


--

Thanks,

Matt



Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-06-02 Thread Matt Riedemann

On 6/2/2017 12:40 AM, 한승진 wrote:

Hello, stackers

I am just curious about the results of lots of discussions on the below 
blueprint.


https://blueprints.launchpad.net/nova/+spec/support-volume-type-with-bdm-parameter

Can I ask what the conclusion is?







There wasn't one really. There is a mailing list discussion here:

http://lists.openstack.org/pipermail/openstack-dev/2017-May/117242.html

Which turned into a discussion about porcelain APIs.

--

Thanks,

Matt



Re: [openstack-dev] [cinder][nova][os-brick] Testing for proposed iSCSI OS-Brick code

2017-05-31 Thread Matt Riedemann
UT

Then doing the same with a Nova test just to verify that it is correctly
configured to use multipathing:

  $ ostestr -n 
cinder.tests.tempest.scenario.test_multipath.TestMultipath.test_attach_detach_once_with_errors_1

And if these work we can go ahead and run the 10 operations scenarios,
since the individual ones don't have any added value over those.  I usually
run the tests like this:

  $ OS_TEST_TIMEOUT=7200 ostestr -r 
'cinder.tests.tempest.scenario.test_multipath.TestMultipath.test_(create_volumes|attach_detach_many)_with_errors_*'
 --serial -- -f

Friendly warning: some of the tests take forever, which is why we are increasing
the keystone token expiration time and the test timeout.  For example with 2
paths some tests take around 40 minutes, so don't despair.

The only backends I've actually tried the tests on are QNAP and XtremIO,
so I'm really hoping someone else will have the inclination and the time
to run the tests on different backends, and maybe even do some
additional testing.  :-)


Cheers,
Gorka,


PS: For my tests I actually changed iscsid login retries to reduce the
running time by setting a value of 2 as the configuration parameter of
"node.session.initial_login_retry_max".


[1] https://gorka.eguileor.com/iscsi-multipath/
[2] https://gorka.eguileor.com/revamping-iscsi-connections-in-openstack/
[3] 
https://github.com/open-iscsi/open-iscsi/commit/5e32aea95741a07d53153c658a0572588eae494d
[4] 
https://github.com/open-iscsi/open-iscsi/commit/d5483b0df96bd2a1cf86039cf4c6822ec7d7f609
[5] https://review.openstack.org/455392
[6] https://review.openstack.org/455393
[7] https://review.openstack.org/455394
[8] https://review.openstack.org/459453
[9] https://review.openstack.org/459454
[10] https://review.openstack.org/469445




Gorka, this is really all about testing and making multipath support 
more robust, right? For those not using multipath, does any of this matter?


The reason I ask is I was thinking we were going to also fix some other 
long standing issues, like [1][2], where we don't terminate connections 
and remove exports properly when shelve-offloading an instance. I guess 
that's totally unrelated here.


As for the testing concern in Tempest with serial tests, it is possible 
to run tests in Tempest with a LockFixture but you'd likely have to lock 
all tests that involve a volume from running at the same time. We have 
the same issue with needing to test the evacuate feature in Nova but 
evacuate requires that the nova-compute service is down on the host so 
we'd have to run it serially.


So do you plan on leaving those tests in Tempest or moving them into the 
Cinder repo and making them run under a separate tox serial environment?


[1] https://bugs.launchpad.net/nova/+bug/1547142
[2] https://bugs.launchpad.net/cinder/+bug/1527278

--

Thanks,

Matt



Re: [openstack-dev] [nova] Why don't we unbind ports or terminate volume connections on shelve offload?

2017-05-31 Thread Matt Riedemann

On 4/13/2017 11:45 AM, Matt Riedemann wrote:
This came up in the nova/cinder meeting today, but I can't for the life 
of me think of why we don't unbind ports or terminate the volume 
connections when we shelve offload an instance from a compute host.


When you unshelve, if the instance was shelved offloaded, the conductor 
asks the scheduler for a new set of hosts to build the instance on 
(unshelve it). That could be a totally different host.


So am I just missing something super obvious? Or is this the most latent 
bug ever?




Looks like this is a known bug:

https://bugs.launchpad.net/nova/+bug/1547142

The fix on the nova side apparently depends on some changes on the 
cinder side. The new v3.27 APIs in cinder might help with all of this, 
but it doesn't fix old attachments.


By the way, search for shelve + volume in nova bugs and you're rewarded 
with a treasure trove of bugs:


https://bugs.launchpad.net/nova/?field.searchtext=shelved+volume&search=Search&field.status%3Alist=NEW&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.assignee=&field.bug_reporter=&field.omit_dupes=on&field.has_patch=&field.has_no_package=

--

Thanks,

Matt



[openstack-dev] [nova][vmware] NSX CI seems to be at 100% fail

2017-05-30 Thread Matt Riedemann

I've reported a bug here:

https://bugs.launchpad.net/nova/+bug/1694543

--

Thanks,

Matt



[openstack-dev] [nova] cells v1 job is at 100% fail on master

2017-05-26 Thread Matt Riedemann

Fix is proposed here:

https://review.openstack.org/#/c/468585/

This is just FYI so people aren't needlessly rechecking.

--

Thanks,

Matt



[openstack-dev] [nova] Newton 14.0.7 and Ocata 15.0.5 releases

2017-05-26 Thread Matt Riedemann
This is a follow up to an email from Melanie Witt [1] calling attention 
to a high severity performance regression identified in Newton. That 
change is merged and the fix will be in the Ocata 15.0.5 release [2] and 
Newton 14.0.7 release [3].


Those releases will also contain a fix for a bug where we didn't 
properly handle special characters in the database connection URL when 
running the simple_cell_setup or map_cell0 commands, which are used when 
setting up cells v2 (optional in Newton, required in Ocata).


Finally, the Newton release is also going to include a fix to generate 
the cell0 database connection URL to use the 'nova_cell0' database name 
rather than 'nova_api_cell0' if you allow Nova to generate the name 
(note that Nova doesn't create the database, that's your job). Again, 
setting up cells v2 is optional in Newton but people have been getting 
started with it there and some people have hit this. This fix doesn't 
help anyone that has already upgraded, but is there for people who 
haven't done it yet (which I'm assuming is the majority).


Details are in the release notes for both:

https://docs.openstack.org/releasenotes/nova/newton.html

https://docs.openstack.org/releasenotes/nova/ocata.html

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117132.html
[2] https://review.openstack.org/#/c/468388/
[3] https://review.openstack.org/#/c/468387/

--

Thanks,

Matt



Re: [openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-25 Thread Matt Riedemann

On 5/23/2017 10:23 AM, Zane Bitter wrote:
Yes! Everything is much easier if you tell all the users to re-architect 
their applications from scratch :) Which, I mean, if you can... great! 
Meanwhile here on planet Earth, it's 2017 and 95% of payment card 
transactions are still processed using COBOL at some point. (Studies 
show that 79% of statistics are made up, but I actually legit read this 
last week.)


That's one reason I don't buy any of the 'OpenStack is dead' commentary. 
If we respond appropriately to the needs of users who run a *mixture* of 
legacy, cloud-aware, and cloud-native applications then OpenStack will 
be relevant for a very long time indeed.


I enjoyed this, thank you.

--

Thanks,

Matt



Re: [openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-25 Thread Matt Riedemann

On 5/22/2017 11:01 AM, Zane Bitter wrote:
If the user does a stack update that changes the network from 'auto' to 
'none', or vice-versa.


OK I guess we should make this a side discussion at some point, or hit 
me up in IRC, but if you're requesting networks='none' with microversion 
>= 2.37, then nova should not allocate any networking; it should not 
even attempt to do so.
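
For example, a minimal sketch with python-novaclient, assuming its 
support for the 'auto'/'none' network sentinels with microversion 2.37+ 
(sess, image and flavor are placeholders for a keystoneauth session and 
real references):

    from novaclient import client

    nova = client.Client('2.37', session=sess)
    # No networking should be allocated, or even attempted.
    server = nova.servers.create(
        name='no-net-vm', image=image, flavor=flavor, nics='none')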


Maybe the issue is the server is created with networks='auto' and has a 
port, and then when you 'update the stack' it doesn't delete that server 
and create a new one, but it tries to do something with the same server, 
and in this case you'd have to detach the port(s) that were previously 
created?


I don't know how Heat works, but if that's the case, then yeah that 
doesn't sound fun, but I think Nova provides the APIs to be able to do this.


--

Thanks,

Matt



Re: [openstack-dev] [release] Issues with reno

2017-05-25 Thread Matt Riedemann

On 5/24/2017 2:46 PM, Doug Hellmann wrote:

Please take a look at
the results and let me know if that's doing what you all expect.


Tested this out locally and it fixes my issue. Thanks Doug!

--

Thanks,

Matt



Re: [openstack-dev] [Massively distributed] Optimizing the interaction of OpenStack components

2017-05-24 Thread Matt Riedemann

On 5/24/2017 9:59 AM, Matt Riedemann wrote:
I started going down a path the other night of trying to see if we could 
bulk query floating IPs when building the internal instance network info 
cache [1] but it looks like that's not supported. The REST API docs for 
Neutron say that you can't OR filter query parameters together, but at 
the time looking at the code it seemed like it might be possible.


Kevin Benton pointed out the bug in my code, so the bulk query for 
floating IPs is working now it seems:


https://review.openstack.org/#/c/465792/

http://logs.openstack.org/92/465792/3/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/fd0e93f/logs/screen-q-svc.txt.gz#_May_24_20_54_02_457529

So we can probably iterate on that a bit to bulk query other things, but 
I'd have to dig through the code to see where we're doing that.
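
For reference, a hedged sketch of the bulk query via 
python-neutronclient (repeated values for the same filter field are 
ORed together by the Neutron API; 'neutron' and 'ports' are assumed to 
be a v2.0 client and the instance's port dicts):

    fips = neutron.list_floatingips(
        port_id=[p['id'] for p in ports])['floatingips']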


--

Thanks,

Matt



Re: [openstack-dev] [Massively distributed] Optimizing the interaction of OpenStack components

2017-05-24 Thread Matt Riedemann

On 5/11/2017 1:44 PM, Georg Kunz wrote:
Nevertheless, one concrete thing which came to my mind is this proposed 
improvement of the interaction between Nova and Neutron:


https://review.openstack.org/#/c/390513/

In a nutshell, the idea is that Neutron adds more information to a port 
object so that Nova does not need to make multiple calls to Neutron to 
collect all required networking information. It seems to have stalled 
for the time being, but bringing forward the edge computing use case 
might increase the interest again.




Yes we've needed a sort of bulk query capability with the networking API 
for years. I started going down a path the other night of trying to see 
if we could bulk query floating IPs when building the internal instance 
network info cache [1] but it looks like that's not supported. The REST 
API docs for Neutron say that you can't OR filter query parameters 
together, but at the time looking at the code it seemed like it might be 
possible.


Chris Friesen from Wind River has also been looking at some of this 
lately, see [2].


But getting people looking at doing performance profiling at scale and 
then identifying the major pain points would be really really helpful 
for the upstream development team that don't have access to those types 
of resources. Then we could prioritize investigating ways to fix those 
issues to improve performance.


[1] https://review.openstack.org/#/c/465792/
[2] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117096.html

--

Thanks,

Matt



Re: [openstack-dev] [Massively distributed] Optimizing the interaction of OpenStack components

2017-05-24 Thread Matt Riedemann

On 5/24/2017 6:48 AM, Ronan-Alexandre Cherrueau wrote:

You can find examples of such diagrams
that have been automatically generated on the website where we host
results of our experiments[5].


This is nice, I've seen others do things like this before with Rally and 
osprofiler. The super thin vertical layout is hard to follow though; it would 
be nice if the graph could be expanded horizontally.


--

Thanks,

Matt



[openstack-dev] Documenting config drive - what do you want to see?

2017-05-24 Thread Matt Riedemann
Rocky tipped me off to a request to document config drive which came up 
at the Boston Forum, and I tracked that down to Clark's wishlist 
etherpad [1] (L195) which states:


"Document the config drive. The only way I have been able to figure out 
how to make a config drive is by either reading nova's source code or by 
reading cloud-init's source code."


So naturally I have some questions, and I'm looking to flesh the idea / 
request out a bit so we can start something in the in-tree nova devref.


Question the first: is this existing document [2] helpful? At a high 
level, that's more about 'how' rather than 'what', as in what's in the 
config drive.


Question the second: are people mostly looking for documentation on the 
content of the config drive? I assume so, because without reading the 
source code you wouldn't know, which is the terrible part.


Based on this, I can think of a few things we can do:

1. Start documenting the versions which come out of the metadata API 
service, which regardless of whether or not you're using it, is used to 
build the config drive. I'm thinking we could start with something like 
the in-tree REST API version history [3]. This would basically be a 
change log of each version, e.g. in 2016-06-30 you got device tags, in 
2017-02-22 you got vlan tags, etc.


2. Start documenting the contents similar to the response tables in the 
compute API reference [4]. For example, network_data.json has an example 
response in this spec [5]. So have an example response and a table with 
an explanation of the fields in the response: describe 
ethernet_mac_address and vif_id, their type, whether they are 
optional or required, and in which version they were added to the 
response, similar to how we document microversions in the compute REST 
API reference.
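
As a concrete taste of item 2, here is a minimal sketch of reading 
network_data.json off a mounted config drive (the /mnt/config mount 
point is an assumption):

    import json

    path = '/mnt/config/openstack/latest/network_data.json'
    with open(path) as f:
        network_data = json.load(f)
    for link in network_data.get('links', []):
        print(link.get('id'), link.get('vif_id'),
              link.get('ethernet_mac_address'))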


--

Are there other thoughts here or things I'm missing? At this point I'm 
just trying to gather requirements so we can get something started. I 
don't have volunteers to work on this, but I'm thinking we can at least 
start with some basics and then people can help flesh it out over time.


[1] https://etherpad.openstack.org/p/openstack-user-api-improvements
[2] https://docs.openstack.org/user-guide/cli-config-drive.html
[3] https://docs.openstack.org/developer/nova/api_microversion_history.html
[4] https://developer.openstack.org/api-ref/compute/
[5] 
https://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/metadata-service-network-info.html#rest-api-impact


--

Thanks,

Matt



Re: [openstack-dev] on the subject of when we should be deprecating API's in a release cycle

2017-05-23 Thread Matt Riedemann

On 5/23/2017 7:50 PM, Amrith Kumar wrote:

TL;DR

When IaaS projects in OpenStack deprecate their API's after milestone 1, it
puts PaaS projects in a pickle. I think it would be much better for PaaS
projects if the IaaS projects could please do their deprecations well before
milestone-1

The longer issue:

OK, the guy from Trove is bitching again. The Trove gate is broken (again).
This time, it appears to be because Trove was using a deprecated Nova
Networking API call, and even though everyone and their brother knew that
Nova Networking was gone-gone, Trove never got the memo, and like a few
others got hit by it.

But the fact of the matter is this: it happened. This has happened in
previous releases as well where at milestone 2 we are scrambling to fix
something because an IaaS project did a planned deprecation.

I'm wondering whether we can get a consensus around doing these earlier in
the cycle, like before milestone-1, so other projects which depend on the
API have a chance to handle it with enough time to test and verify.

Just to be explicitly clear, I AM NOT pointing fingers at Nova. I knew that
NN was gone, just that a couple of API's remained in use and we got bit in
the gluteus maximus. I asked Matt for help to find out what API's had been
deprecated, he almost immediately helped me with a list and I'm working
through getting them fixed (Thanks Matt).

I'm merely raising the generic question of whether or not planned
deprecations should be done before Milestone 1.

Thanks for reading the longer version ...

--
Amrith Kumar
amrith.ku...@gmail.com







The novaclient changes to deprecate the networking proxy CLIs and APIs 
was done in the Newton release. They were removed and released in 8.0.0 
which was milestone 1 of the Pike release. So what are you specifically 
asking for here? Maybe Trove didn't get hit until recently because 
novaclient 8.0.0 wasn't pulled into upper-constraints? That might have 
been why it seems recent for Trove. I think the u-c change was gating on 
Horizon fixing their stuff, but maybe u-c changes aren't gated on Trove 
unit tests?


Admittedly the python API binding deprecations in novaclient weren't 
using the python warnings module with the DeprecationWarning, which 
we've been pretty consistent about with other API deprecations in the 
novaclient (like with the volume, image and baremetal proxy APIs). We 
dropped the ball on the networking ones though. We have docs in 
novaclient about how to deprecate things, but it's mostly CLI-focused so 
I'm going to update that to be explicit about deprecation warnings in 
the API bindings too.
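
For the record, a hedged sketch of what such a warning could look like 
in an API binding - not the actual novaclient code:

    import warnings

    def list(self):
        warnings.warn('The nova-network proxy API is deprecated and '
                      'will be removed; use python-neutronclient.',
                      DeprecationWarning)
        return self._list('/os-networks', 'networks')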


--

Thanks,

Matt



Re: [openstack-dev] [tc] [all] TC Report 21

2017-05-23 Thread Matt Riedemann

On 5/23/2017 2:44 PM, Chris Dent wrote:

Doing LTS is probably too big for that, but "stable branch
reviews" is not.


Oh if we only had more things to review on stable branches. It's also 
just at a bare minimum having people propose backports. Very few 
people/organizations actually do that upstream. So it's always funny (in 
a sad way) how much people clamor for stable branch support upstream, 
and for a long time period, but people aren't even proposing backports 
upstream en masse. Anyway, there is my dig since you brought it back up. :)


--

Thanks,

Matt



<    2   3   4   5   6   7   8   9   10   11   >