On 4/29/2018 10:53 PM, Gilles Dubreuil wrote:
Remember Boston's Summit presentation [1] about GraphQL [2] and how it
addresses REST limitations.
I wonder if any project has been thinking about using GraphQL. I haven't
found any mention of or pointers to it.
GraphQL takes a completely different
On 4/27/2018 4:02 AM, Tomáš Vondra wrote:
Also, Windows host isolation is done using image metadata. I have filed
a bug somewhere that it does not work correctly with Boot from Volume.
Likely because for boot from volume the instance.image_id is ''. The
request spec, which the filter has
On 4/25/2018 4:07 PM, Jimmy McArthur wrote:
Please have a look at the Vancouver Forum schedule:
https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing
(also attached as a CSV) The proposed schedule was put together by two
members from UC, TC and
On 4/25/2018 10:34 AM, Sreeram Vancheeswaran wrote:
Thank you so much Matt for the detailed steps. We are doing boot from
image and are probably running into the issue mentioned in [2] in your
email.
Hmm, OK, but then it doesn't really make sense that you're going down this
path [1] in the code
On 4/25/2018 3:32 AM, Sreeram Vancheeswaran wrote:
Hi team!
We are currently facing an issue in our out-of-tree driver nova-dpm [1]
with nova and cinder on master, where instance launch in devstack is
failing due to communication/time-out issues in nova-cinder. We are
unable to get to the
I wanted to advertise the need for some help in adding multiattach
volume support to Horizon. There is a blueprint tracking the changes
[1]. I started the ball rolling with [2] but there is more work to do,
listed in the work items section of the blueprint.
[2] was I think my first real code
On 4/24/2018 12:58 PM, Morgan Fainberg wrote:
Hi,
I am proposing making some changes to the Keystone Stable Maint team.
A lot of this is cleanup for contributors that have moved on from
OpenStack. For the most part, I've been the only one responsible for
Keystone Stable Maint reviews, and I'm
On 4/23/2018 8:07 AM, Balázs Gibizer wrote:
Add versioned notifications for removing a member from a server group
-
The specless bp
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications
is
On 4/23/2018 3:26 PM, Eric Fried wrote:
No, the question you're really asking in this case is, "Do the resource
providers in this tree contain (or not contain) these traits?" Which to
me, translates directly to:
GET /resource_providers?in_tree=$rp_uuid&required={$TRAIT|!$TRAIT, ...}
...which we
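To make the shape of that query concrete, here is a minimal client-side
sketch; the endpoint, token, provider UUID, trait names and microversion are
illustrative assumptions, not the final API.

import requests

PLACEMENT = 'http://placement.example.com'  # assumed endpoint
TOKEN = 'gAAAA-example-token'               # assumed auth token
RP_UUID = '4c2c2f2a-0000-0000-0000-000000000000'  # hypothetical root provider

# Hypothetical query: providers in this tree that have CUSTOM_TRAIT_A
# but do not have CUSTOM_TRAIT_B.
resp = requests.get(
    PLACEMENT + '/resource_providers',
    params={
        'in_tree': RP_UUID,
        'required': 'CUSTOM_TRAIT_A,!CUSTOM_TRAIT_B',
    },
    headers={
        'X-Auth-Token': TOKEN,
        'OpenStack-API-Version': 'placement 1.22',  # assumed microversion
    },
)
for rp in resp.json().get('resource_providers', []):
    print(rp['uuid'], rp['name'])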
Looking over the things in the runways queue [1], excluding the zVM
driver (because I'm not sure what the status is on that thread), the
next in line is blueprint list-show-all-server-migration-types [2].
I know this has been approved since Pike, but I wanted to raise some
questions again [3]
We seem to be at a bit of an impasse in this spec amendment [1] so I
want to try and summarize the alternative solutions as I see them.
The overall goal of the blueprint is to allow defining traits via image
properties, like flavor extra specs. Those image-defined traits are used
to filter
On 4/23/2018 1:24 PM, Jeremy Stanley wrote:
Some of us also urged existing leaders in various projects to record
videos encouraging contributors to get more involved by demystifying
processes like code review or bug triage. This could be as simple as
signing up for an available lightning talk
On 4/23/2018 12:18 PM, Doug Hellmann wrote:
I would like for us to collect some more data about what efforts
teams are making with encouraging new contributors, and what seems
to be working or not. In the past we've done pretty well at finding
new techniques by experimenting within one team and
On 4/20/2018 2:04 AM, Andreas Jaeger wrote:
In openstack-manuals we use "# root-command" and "$ non-root command", see:
https://docs.openstack.org/install-guide/common/conventions.html
I learned something new today.
But looking at
How loose are we with saying things like, "you should run this as root"
in the docs?
I was triaging this nova bug [1] which is saying that the docs should
tell you to run nova-status (which implies also nova-manage) as root,
but isn't it implied that running admin-level CLIs requires root, or
On 4/19/2018 1:15 PM, Doug Hellmann wrote:
Second, releasing early and often gives us more time to fix issues,
so we aren't rushing around at deadline trying to solve a problem
while the gate is full of other last minute patches for other
projects.
Yup, case in point: I waited too long to
On 4/19/2018 10:46 AM, Chris Friesen wrote:
From the CLI perspective, it makes no sense that "nova evacuate"
operates after a host is already down, but "nova evacuate-live" operates
on a running host.
http://www.danplanet.com/blog/2016/03/03/evacuate-in-nova-one-command-to-confuse-us-all/
On 4/19/2018 11:06 AM, Matthew Booth wrote:
I'm ambivalent, tbh, but I think it's better to pick one. I thought
we'd picked 'evacuate' based on the TODOs from Matt R:
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n2985
On 4/18/2018 12:40 PM, Jay Pipes wrote:
We can go even deeper if you'd like, since NFV means "never-ending
feature velocity". Just let me know.
Cool. So let's not use a GET for this and instead change it to a POST
with a request body that can more cleanly describe what the user is
On 4/17/2018 8:49 PM, 赵超 wrote:
Thanks for approving the stable branch patches of trove and
python-trove, we also have some in the trove-dashboard.
I also went through the trove-dashboard ones, just need another
stable-maint-core to approve those.
On 4/18/2018 12:09 PM, Chris Friesen wrote:
If this happens, is it clear to the end-user that the reason the boot
failed is that the cloud doesn't support trusted cert IDs for
boot-from-vol? If so, then I think that's totally fine.
If you're creating an image-backed server and requesting
On 4/18/2018 11:57 AM, Jay Pipes wrote:
There is a compute REST API change proposed [1] which will allow users
to pass trusted certificate IDs to be used with validation of images
when creating or rebuilding a server. The trusted cert IDs are based
on certificates stored in some key manager,
There is a compute REST API change proposed [1] which will allow users
to pass trusted certificate IDs to be used with validation of images
when creating or rebuilding a server. The trusted cert IDs are based on
certificates stored in some key manager, e.g. Barbican.
The full nova spec is
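For a sense of the user-facing shape, a hypothetical server-create request
body passing trusted certificate IDs might look like the sketch below; the
field name and all IDs are taken from the proposal and are illustrative, not
final.

# Hypothetical POST /servers body based on the proposed API change; the
# trusted_image_certificates field name and all IDs are illustrative.
server_create_body = {
    'server': {
        'name': 'signed-image-server',
        'imageRef': '70a599e0-31e7-49b7-b260-868f441e862b',
        'flavorRef': '1',
        'networks': 'auto',
        'trusted_image_certificates': [
            '8031cbe8-b839-4e01-b333-7b1d1a5d90da',  # cert IDs stored in a
            '4b026e44-0e1b-40e7-86c6-d24b09e5fe66',  # key manager, e.g. Barbican
        ],
    }
}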
On 4/18/2018 9:06 AM, Jay Pipes wrote:
"By default, should resources/traits submitted in different numbered
request groups be supplied by separate resource providers?"
Without knowing all of the hairy use cases, I'm trying to channel my
inner sdague and some of the similar types of
On 4/16/2018 3:04 AM, 赵超 wrote:
There are some patches to stable branches to the different trove repos,
and they are always progressing slowly, because none of the current
trove team core members are in the trove-stable-maint. I tried to
contact the previous PTLs about expanding the
On 4/12/2018 7:42 AM, Eric Fried wrote:
This sounds reasonable to me. I'm glad the issue was raised, but IMO it
shouldn't derail progress on an approved blueprint with ready code.
Jichen, would you please go ahead and file that blueprint template (no
need to write a spec yet) and link it in a
On 4/11/2018 5:09 PM, Michael Still wrote:
https://review.openstack.org/#/c/523387 proposes adding a z/VM specific
dependency to nova's requirements.txt. When I objected, the counter
argument was that we have examples of windows specific dependencies
(os-win) and powervm specific dependencies
On 4/9/2018 9:57 PM, Chen CH Ji wrote:
Could you please help share whether this kind of event is sent by
neutron-server or the neutron agent? I searched the neutron code;
from [1][2] it seems the agent itself needs to tell the neutron server the
device (VIF) is up, and then the neutron server will send the notification
On 4/9/2018 4:58 AM, Kashyap Chamarthy wrote:
Keep in mind that Matt has a tendency to sometimes unfairly
over-simplify others' views ;-). More seriously, c'mon Matt; I went out
of my way to spend time learning about Debian's packaging structure and
trying to get the details right by talking to
On 4/9/2018 1:00 PM, Duncan Thomas wrote:
Hopefully this flow means we can do rebuild root filesystem from
snapshot/backup too? It seems rather artificially limiting to only do
restore-from-image. I'd expect restore-from-snap to be a more common
use case, personally.
Hmm, now you've got me
As part of a bug fix [1], the internal
ComputeVirtAPI.wait_for_instance_event interface is changing to no
longer accept event names that are strings, and will now require the
(name, tag) tuple form which all of the in-tree virt drivers are already
using.
If you have an out of tree driver
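For out-of-tree driver maintainers, the change just means passing (name, tag)
tuples instead of plain strings; a minimal sketch of the usage pattern the
in-tree drivers follow (helper names and the timeout are illustrative):

def plug_vifs_and_wait(virtapi, instance, network_info, timeout=300):
    # One (event_name, tag) tuple per VIF rather than a bare string like
    # 'network-vif-plugged-<vif_id>'.
    events = [('network-vif-plugged', vif['id']) for vif in network_info]

    with virtapi.wait_for_instance_event(instance, events, deadline=timeout):
        # Do the plugging work while the compute manager waits for neutron
        # to send the external events back.
        do_plug_vifs(instance, network_info)  # hypothetical driver helper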
On 4/9/2018 3:51 AM, Gorka Eguileor wrote:
As I see it, the process would look something like this:
- Nova detaches volume using OS-Brick
- Nova calls Cinder re-image passing the node's info (like we do when
attaching a new volume)
- Cinder would:
- Ensure only that node is connected to
On 4/6/2018 12:07 PM, Kashyap Chamarthy wrote:
FWIW, I'd suggest so, if it's not too much maintenance. It'll just
spare you additional bug reports in that area, and the overall default
experience when dealing with CPU models would be relatively much better.
(Another way to look at it is,
On 4/6/2018 5:09 AM, Matthew Booth wrote:
I think you're talking at cross purposes here: this won't require a
swap volume. Apart from anything else, swap volume only works on an
attached volume, and as previously discussed Nova will detach and
re-attach.
Gorka, the Nova api Matt is referring to
On 4/5/2018 3:32 PM, Thomas Goirand wrote:
If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0
is fine, please choose 3.0.0 as minimum.
If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is
fine, please choose 2.8.0 as minimum.
If you don't absolutely need
On 4/5/2018 3:15 AM, Gorka Eguileor wrote:
But just to be clear, Nova will have to initialize the connection with
the re-imaged volume and attach it again to the node, as in all cases
(except when defaulting to downloading the image and dd-ing it to the
volume) the result will be a new volume
On 4/2/2018 6:59 AM, Gorka Eguileor wrote:
I can only see one benefit from implementing this feature in Cinder
versus doing it in Nova, and that is that we can preserve the volume's
UUID, but I don't think this is even relevant for this use case, so why
is it better to implement this in Cinder
On 3/29/2018 6:50 PM, Sean McGinnis wrote:
Maybe we can add a "Reimaging" state to the volume? Then Nova could poll for it
to go from that back to Available?
That would be fine with me, and maybe similar to how 'extending' and
'retyping' work for an attached volume?
Nova wouldn't wait for the
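If Cinder did grow such a state, the Nova side would presumably be a simple
status poll; a rough sketch using python-cinderclient, where the 'reimaging'
status, interval and timeout are all hypothetical:

import time

def wait_for_reimage(cinder, volume_id, interval=2, timeout=300):
    # Poll a hypothetical 'reimaging' status until the volume settles.
    deadline = time.time() + timeout
    while time.time() < deadline:
        volume = cinder.volumes.get(volume_id)
        if volume.status == 'reimaging':   # hypothetical transient state
            time.sleep(interval)
            continue
        if volume.status in ('available', 'in-use'):
            return volume
        raise RuntimeError('re-image failed, volume is %s' % volume.status)
    raise RuntimeError('timed out waiting for volume re-image')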
On 3/29/2018 3:36 AM, Tony Breeds wrote:
Hi all,
At Sydney we started the process of change on the stable branches.
Recently we merged a TC resolution[1] to alter the EOL process. The
next step is refining the stable policy itself.
I've created a review to do that. I think it covers
On 3/29/2018 9:28 AM, Sean McGinnis wrote:
I do not think changing the revert to snapshot implementation is appropriate
here. There may be some cases where this can get the desired result, but there
is no guarantee that there is a snapshot on the volume's base image state to
revert to. It also
On 3/29/2018 2:44 AM, Radoslav Gerganov wrote:
While running the VMware CI continues to be a challenge, I must say this
patch fixes a regression introduced by Matt Riedemann's patch:
https://review.openstack.org/#/c/549411/
for which the VMware CI clearly indicated there was a problem and
On 3/29/2018 7:53 AM, William M Edmonds wrote:
running only on virt/vmwareapi changes would not catch problems caused
by changes elsewhere, such as compute/manager.py or virt/driver.py
Right, I think virt driver 3rd party CI should run on at least some
select sub-trees, the major ones that
On 3/29/2018 5:19 AM, melanie witt wrote:
Thanks. Just curious, how is the CI passing if the driver is currently
broken for detach_volume? I had thought maybe particular tests were
skipped in response to my original email that linked the bug fix patch,
but it looks like that run was from
On 3/27/2018 10:37 AM, Jay Pipes wrote:
If we want to actually fix the issue once and for all, we need to make
availability zones a real thing that has a permanent identifier (UUID)
and store that permanent identifier in the instance (not the instance
metadata).
Or we can continue to paper
On 3/28/2018 11:21 AM, Andrey Kurilin wrote:
PS: https://review.openstack.org/#/c/59694/
PS2: it was abandoned due to several -2 :)
Look how nice I was as a reviewer 5 years ago...
--
Thanks,
Matt
On 3/28/2018 11:07 AM, melanie witt wrote:
We were reviewing a bug fix for the vmware driver [0] today and we
noticed it appears that the VMware NSX CI is no longer running, not even
on only the nova/virt/vmwareapi/ tree.
From the third-party CI dashboard, I see some claims of it running but
On 3/26/2018 9:00 PM, melanie witt wrote:
To the existing core team members, please respond with your comments,
+1s, or objections within one week.
+1
--
Thanks,
Matt
On 3/27/2018 10:37 AM, Jay Pipes wrote:
If we want to actually fix the issue once and for all, we need to make
availability zones a real thing that has a permanent identifier (UUID)
and store that permanent identifier in the instance (not the instance
metadata).
Aggregates have a UUID now,
Sylvain has had a spec up for awhile [1] about solving an old issue
where admins can rename an AZ (via host aggregate metadata changes)
while it has instances in it, which likely results in at least user
confusion, but probably other issues later if you try to move those
instances, e.g. the
FYI
Forwarded Message
Subject: [openstack-dev] [nova][placement] Upgrade placement first!
Date: Mon, 26 Mar 2018 15:02:23 -0500
From: Eric Fried
Reply-To: OpenStack Development Mailing List (not for usage questions)
On 3/26/2018 6:24 AM, ChangBo Guo wrote:
What's your use case for ListOpt? Just making sure the value (a list) is
part of 'choices'? Maybe we need another parameter to distinguish
It came up because of this change in nova:
https://review.openstack.org/#/c/534384/
We want to backport that as
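For reference, one way the per-item validation being discussed can be
expressed with oslo.config today is to put choices on the item type rather
than on the list itself; a minimal sketch with an illustrative option name
and values:

from oslo_config import cfg, types

# Each list element is validated against choices; the list itself can be
# any combination of those values.
opt = cfg.ListOpt(
    'enabled_things',  # illustrative option name
    item_type=types.String(choices=['foo', 'bar', 'baz']),
    default=['foo'],
    help='List option whose items are restricted to a set of choices.')

conf = cfg.ConfigOpts()
conf.register_opt(opt)
print(conf.enabled_things)  # -> ['foo']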
On 3/21/2018 6:34 AM, 李杰 wrote:
So what should we do then about rebuilding the volume-backed server? Wait
until Cinder can re-image a volume?
I've added the spec to the 'stuck reviews' section of the nova meeting
agenda so it can at least get some discussion there next week.
On 3/22/2018 10:30 PM, Chen CH Ji wrote:
It seems we have an EC2 implementation in the API layer that has been
deprecated since Mitaka; maybe it is eligible to be removed this cycle?
That is easier said than done. There have been a couple of related
attempts in the past:
https://review.openstack.org/#/c/266425/
On 3/22/2018 10:47 PM, 李杰 wrote:
This is the spec about rebuilding an instance booted from
volume; anyone who is interested in
boot from volume can help to review this. Any suggestion is
welcome. Thank you very much!
The link is here.
Re: the rebuild
On 3/22/2018 2:59 PM, melanie witt wrote:
And (MHO) I'm not sure we need help in reviewing more specs.
I wholly disagree here. If you're on the core team, or want to be on the
core team, you should be reviewing specs, because those are the things
that lay out the high level design and
On 3/22/2018 2:59 PM, melanie witt wrote:
Maybe a good compromise would be to start runways now and move spec
freeze out to r-2 (Jun 7). That way we have less pressure on spec review
earlier on, more time to review the current queue of approved
implementations via runways, and a chance to
On 3/20/2018 6:44 PM, melanie witt wrote:
We were thinking of starting the runways process after the spec review
freeze (which is April 19) so that reviewers won't be split between spec
reviews and reviews of work in runways.
I'm going to try and rein in the other thread [1] and bring it
On 3/20/2018 6:47 PM, melanie witt wrote:
I was thinking that 2-3 weeks ahead of spec freeze would be appropriate,
so that would be March 27 (next week) or April 3 if we do it on a Tuesday.
It's spring break here on April 3 so I'll be listening to screaming
kids, I mean on vacation. Not that
On 3/20/2018 5:57 PM, melanie witt wrote:
* For rebuild, we're going to defer the instance.save() until
conductor has passed scheduling and before it casts to compute in order
to address the issue of rolling back instance values if something fails
during rebuild scheduling
I got to
On 3/20/2018 1:45 PM, Sean McGinnis wrote:
All known patches are merged now and the last step of reverting the non-voting
state of the one job is just about to finish in the gate queue.
Stable branches should now be OK to recheck any failed jobs from the last
couple of days. If you see anything
On 3/16/2018 10:33 AM, Peter Penchev wrote:
Would there be any major opposition to adding a StorPool shared
storage image backend, so that our customers are not limited to
volume-backed instances? Right now, creating a StorPool volume and
snapshot from a Glance image and then booting instances
On 3/16/2018 1:22 AM, 양유석 wrote:
Our company operates OpenStack clusters and we have a legacy DNS system
which needs to check hostnames more strictly, including RFC 952.
Also, our operators demand unique hostnames in a region (we do not
have tenant networks yet, using an L3-only network). So
On 3/16/2018 9:29 AM, Kwan, Louie wrote:
In the stable/queens branch, since openstacksdk 0.11.3 and os-service-types 1.1.0
are described in openstack's upper-constraints.txt,
https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
If you've noticed any volume-related tests failing this week, it's not
just you. There is an old bug that is back where the c-sch
CapacityFilter is kicking out the host because there is too much going
on in the single host at once.
http://status.openstack.org/elastic-recheck/#1741275
On 3/15/2018 3:30 PM, melanie witt wrote:
* We don't need to block bandwidth-based scheduling support for
doing port creation in conductor (it's not trivial), however, if nova
creates a port on a network with a QoS policy, nova is going to have to
munge the allocations and update
On 3/15/2018 5:29 PM, Dan Smith wrote:
Yep, for sure. I think if there are snapshots, we have to refuse to do
the thing. My comment was about the "does nova have authority to destroy
the root volume during a rebuild" and I think it does, if
delete_on_termination=True, and if there are no
On 3/15/2018 11:05 AM, Kendall Waters wrote:
The schedule is organized by new tracks according to use cases: private
& hybrid cloud, public cloud, container infrastructure, CI / CD, edge
computing, HPC / GPUs / AI, and telecom / NFV. You can sort within the
schedule to find sessions and
On 3/15/2018 7:27 AM, 李杰 wrote:
It seems that we can only delete the snapshots of the original volume
first, and then delete the original volume, if the original volume has
snapshots.
Nova won't be deleting the volume snapshots just to delete the volume
during a rebuild.
If we decide to
On 3/14/2018 3:42 AM, 李杰 wrote:
This is the spec about rebuilding an instance booted from
volume. In the spec, there is a
question about whether we should delete the old root_volume. Anyone who
is interested in
boot from volume can help to review this. Any suggestion is
On 3/13/2018 7:37 AM, Balázs Gibizer wrote:
I filed the bp
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
I added it to the weekly meeting agenda. I know you likely won't attend
the meeting this week, but I can probably be your proxy on this one.
--
On 3/9/2018 6:26 AM, Balázs Gibizer wrote:
The instance-action REST API already provides the traceback to the
user (to the admin by default), and the notifications are also admin-only
things as they are emitted to the message bus by default. So I assume
that security is not a bigger concern
On 2/5/2018 11:44 PM, Massimo Sgaravatto wrote:
But if I try to specify the long list of projects, I get a "Value ... is
too long" error message [*].
I can see two workarounds for this problem:
1) Create a host aggregate per project:
HA1 including CA1, C2, ... Cx and with
On 3/8/2018 6:51 AM, Jay Pipes wrote:
- VGPU_DISPLAY_HEAD resource class should be removed and replaced with
a set of os-traits traits that indicate the maximum supported number of
display heads for the vGPU type
How does a trait express a quantifiable limit? Would we end up having
several
On 3/7/2018 8:43 AM, Thierry Carrez wrote:
mriedem volunteered to work on a TC resolution to define
what we exactly meant by that (the proposal is now being discussed at
https://review.openstack.org/#/c/548916/).
A new revision is now up for this after much discussion in the review
itself and
On 3/7/2018 2:24 PM, Lance Bragstad wrote:
I tried bringing this up during the PTG feedback session last Thursday
Unless you wanted to talk about snow, there was no feedback to be had at
the feedback session.
Being able to actually give feedback on the PTG during the PTG feedback
session
On 3/7/2018 6:12 AM, Chris Dent wrote:
# Talking about the PTG at the PTG
At the [board
meeting](http://lists.openstack.org/pipermail/foundation/2018-March/002570.html),
the future of the PTG was a big topic. As currently constituted it
presents some challenges:
* It is difficult for some
On 3/5/2018 6:26 AM, Andrey Kurilin wrote:
A year ago, Nova team decided to deprecate the addFixedIP,
removeFixedIP, addFloatingIP, removeFloatingIP server action APIs and it
was done[1].
It looks like not all the consumers paid attention to this change so
after novaclient 10.0.0 release
On 3/1/2018 10:44 AM, Ilya Shakhat wrote:
For those who do not know, DriverLog is a community registry of
3rd-party drivers for OpenStack hosted together with Stackalytics [1].
The project started 4 years ago and by now contains information about
220 drivers. The data from DriverLog is also
On 2/28/2018 5:55 AM, Vitalii Solodilov wrote:
Wouldn't it be a good idea to check for the more general DBError?
So like catching Exception? How are you going to distinguish the
IntegrityErrors, which shouldn't be retried but are also DBErrors?
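To make the distinction concrete, a retry wrapper that catches the broad
oslo.db DBError would also swallow integrity failures that should bubble up;
a minimal sketch of retrying only deadlocks (function names are illustrative):

from oslo_db import exception as db_exc

def create_record_with_retry(create_func, max_retries=5):
    # Retry only transient deadlocks; let constraint violations bubble up.
    for attempt in range(max_retries):
        try:
            return create_func()
        except db_exc.DBDuplicateEntry:
            # An IntegrityError-style failure: retrying will never help.
            raise
        except db_exc.DBDeadlock:
            if attempt == max_retries - 1:
                raise
        # Catching db_exc.DBError here instead would hide the distinction,
        # since both exceptions above derive from it.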
--
Thanks,
Matt
On 2/27/2018 6:34 PM, John Griffith wrote:
So replication is set on create of the volume, you could have a rule
that keeps the two features mutually exclusive, but I'm still not quite
sure why that would be a requirement here.
Yeah I didn't think of that either, the attachment record has
On 2/27/2018 10:02 AM, Matthew Booth wrote:
Sounds like the work Nova will have to do is identical to volume update
(swap volume). i.e. Change where a disk's backing store is without
actually changing the disk.
That's not what I'm hearing. I'm hearing disconnect/reconnect. Only the
libvirt
On 2/27/2018 10:43 AM, melanie witt wrote:
It will be really cold outside, so be prepared for that.
If you live in California, sure...
--
Thanks,
Matt
On 2/26/2018 9:52 PM, John Griffith wrote:
Yeah, it seems like this would be pretty handy with what's there. So
are folks good with that? Wanted to make sure there's nothing
contentious there before I propose a spec on the Nova and Cinder sides.
If you think it seems at least worth
On 2/26/2018 9:28 PM, John Griffith wrote:
I'm also wondering how much of the extend actions we can leverage here,
but I haven't looked through all of that yet.
The os-server-external-events API in nova is generic. We'd just add a
new microversion to register a new tag for this event. Like
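For reference, the external events API already takes a list of tagged events
per server, so a new event would just be another allowed name behind a
microversion; a sketch of the request body shape, where
'volume-refresh-connection' is a purely hypothetical name for the kind of
event being discussed:

# Shape of a POST /os-server-external-events request body; 'volume-extended'
# is an existing event, while 'volume-refresh-connection' is hypothetical.
external_events_body = {
    'events': [
        {
            'name': 'volume-extended',
            'server_uuid': '9c2a2a40-0000-0000-0000-000000000000',
            'tag': '0a1b2c3d-0000-0000-0000-000000000000',  # volume id as tag
        },
        {
            'name': 'volume-refresh-connection',  # hypothetical new event
            'server_uuid': '9c2a2a40-0000-0000-0000-000000000000',
            'tag': '0a1b2c3d-0000-0000-0000-000000000000',
        },
    ]
}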
On 2/26/2018 8:09 PM, John Griffith wrote:
I'm interested in looking at creating a mechanism to "refresh" all of
the existing/current attachments as part of the Cinder Failover process.
What would be involved on the nova side for the refresh? I'm guessing
disconnect/connect the volume via
On 2/16/2018 7:54 AM, Chris Dent wrote:
Before I get to the meat of this week's report, I'd like to request
some feedback from readers on how to improve the report. Over its
lifetime it has grown and it has now reached the point that while it
tries to give the impression of being complete, it
On 2/21/2018 4:30 AM, Édouard Thuleau wrote:
Hi Seán, Michael,
Since patch [1] moved Contrail VIF plugging under privsep, Nova fails to
plug TAP on the Contrail software switch (named vrouter) [2]. I proposed
a fix in the beginning of the year [3] but it is still pending approval
even though it got a
On 2/1/2018 9:51 AM, Lance Bragstad wrote:
Just like with feature freeze, I put together a review dashboard that
contains patches we need to land in order to cut a release candidate
[0]. I'll be adding more patches throughout the day, but so far there
are 21 changes there waiting for review. If
On 2/9/2018 9:01 AM, Matt Riedemann wrote:
I'd like to add Takashi to the python-novaclient core team.
python-novaclient doesn't get a ton of activity or review, but Takashi
has been a solid reviewer and contributor to that project for quite
awhile now:
http://stackalytics.com/report
I sent a similar email after Pike was released [1] and these are our
blueprint burndown chart results for Queens [2].
Comparing to Pike, the trends are similar, with the overall numbers
down. Things ramp up until the spec freeze, then tail off, with a little
spike toward the end to get things
On 2/5/2018 9:00 PM, Matt Riedemann wrote:
Given the size and detail of this thread, I've tried to summarize the
problems and possible solutions/workarounds in this etherpad:
https://etherpad.openstack.org/p/nova-aggregate-filter-allocation-ratio-snafu
For those working on this, please
On 2/13/2018 10:31 AM, gordon chung wrote:
was there a resolution for this? iiuc, pgsql is not supported by glance
based on:
https://github.com/openstack/glance/commit/f268df1cbc3c356c472ace04bd4f2d4b3da6c026
i don't know if it was a bad commit but it seems to break any case that
tries to use
On 2/12/2018 11:11 AM, Balázs Gibizer wrote:
Add the user id and project id of the user who initiated the instance
action to the notification
-
The bp
I'm going through the proposed stable/queens backports and marking them
as -Workflow if they are not fixing a regression introduced in queens
itself or required for a queens-rc2 tag.
If we have a need for a queens-rc2 tag then we can assess if any of
these other backports should be included,
I triaged this bug a couple of weeks ago:
https://bugs.launchpad.net/nova/+bug/1746483
It looks like it's been regressed since Mitaka when that filter started
using the RequestSpec object rather than legacy filter_properties dict.
Looking a bit deeper though, it looks like this filter never
I'd like to add Takashi to the python-novaclient core team.
python-novaclient doesn't get a ton of activity or review, but Takashi
has been a solid reviewer and contributor to that project for quite
awhile now:
http://stackalytics.com/report/contribution/python-novaclient/180
He's always
On 2/6/2018 8:44 AM, Emilien Macchi wrote:
The TC voted (though not yet formally approved) and selected 2 goals that
will likely be approved if no strong voice is raised this week:
Remove mox
https://review.openstack.org/#/c/532361/
Toggle the debug option at runtime
https://review.openstack.org/#/c/534605/
On 2/5/2018 9:32 AM, Balázs Gibizer wrote:
Introduce instance.lock and instance.unlock notifications
-
A specless bp has been proposed to the Rocky cycle
Given the size and detail of this thread, I've tried to summarize the
problems and possible solutions/workarounds in this etherpad:
https://etherpad.openstack.org/p/nova-aggregate-filter-allocation-ratio-snafu
For those working on this, please check that what I have written down is
correct
On 2/1/2018 2:56 AM, Saverio Proto wrote:
Hello !
thanks for accepting the patch :)
It looks like it is always best to send an email and have a short
discussion together when we are not sure about a patch.
thank you
Cheers,
Saverio
There is also the #openstack-stable IRC channel if you