https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/compute-service.html#compute-service-delete
--
Thanks,
Matt
which depending on your release you might not have:
https://review.openstack.org/#/q/I7b8622b178d5043ed1556d7bdceaf60f47e5ac80
--
Thanks,
Matt
On 11/28/2018 4:19 AM, Ignazio Cassano wrote:
Hi Matt, sorry but I lost your answer and Gianpiero forwarded it to me.
I am sure the KVM node names are not changed.
The tables where the uuids are duplicated are:
resource_providers in the nova_api db
compute_nodes in the nova db
Regards
Ignazio
It would … during the
upgrade. Is the deleted value in the database the same (0) for both of
those records?
* The exception to this is for the ironic driver which re-uses the
ironic node uuid as of this change: https://review.openstack.org/#/c/571535/
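A quick way to check that (a sketch; adjust DB credentials and the
hostname to your deployment) is to query the nova DB directly:

  mysql nova -e "SELECT id, uuid, hypervisor_hostname, deleted \
                 FROM compute_nodes \
                 WHERE hypervisor_hostname = 'podto1-kvm01';"

If both rows come back with deleted=0 you have two live records for
the same host.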
--
Thanks,
Matt
I don't think you can do this:
GET /flavors?spec=hw%3Acpu_policy=dedicated
Maybe you'd do:
GET /flavors?hw%3Acpu_policy=dedicated
The problem with that is we wouldn't be able to perform any kind of
request schema validation of it, especially since flavor extra specs are
not standardized. … "breaks" wouldn't result in anything breaking.
--
Thanks,
Matt
that because of a critical bug, lazy translation was
disabled in Havana, to be fixed in Icehouse, but I don't think that ever
happened before the IBM developers dropped it upstream, which is further
justification for nuking this code from the various projects.
--
Thanks,
Matt
+spec/user-locale-api
--
Thanks,
Matt
is deleted and archived/purged? Because if so, that might be something
we want to add as a nova-manage command.
[1] https://bugs.launchpad.net/nova/+bug/1800755
[2] https://review.openstack.org/#/c/409943/
--
Thanks,
Matt
On 10/18/2018 5:07 PM, Matt Riedemann wrote:
It's been deprecated since Pike, and the time has come to remove it [1].
mgagne has been the most vocal CachingScheduler operator I know and he
has tested out the "nova-manage placement heal_allocations" CLI, added
in Rocky, and said it worked for migrating his deployment from the
CachingScheduler to the FilterScheduler + Placement.
If you are using the CachingScheduler and have a problem with its
removal, now is the time to speak up or forever hold your peace.
[1] https://review.openstack.org/#/c/611723/1
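If you want to kick the tires ahead of time, the CLI usage is roughly
(a sketch; check "nova-manage placement heal_allocations --help" on
your release for the exact options):

  nova-manage placement heal_allocations --max-count 100

Re-run it (or omit --max-count) until it reports there is nothing left
to heal.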
--
Thanks,
Matt
https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc
--
Thanks,
Matt
"podto1-kvm01" and see if a record existed at one point, was deleted and
then archived to the shadow tables.
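Something along these lines would show that (a sketch; the shadow
table and column names are from the stock nova schema, adjust DB
credentials for your setup):

  mysql nova -e "SELECT uuid, hypervisor_hostname, deleted, deleted_at \
                 FROM shadow_compute_nodes \
                 WHERE hypervisor_hostname = 'podto1-kvm01';"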
--
Thanks,
Matt
[4]
http://lists.openstack.org/pipermail/openstack-dev/2018-October/135688.html
--
Thanks,
Matt
attempt that. Just fail
if being forced and nested allocations exist on the source.
--
Thanks,
Matt
those types of servers.
--
Thanks,
Matt
registered on that thing, and then
you pass a handle (ID reference) to that to nova when creating the
(baremetal) server, nova pulls it down from glare and hands it off to
the virt driver. It's just that no one is doing that work.
--
Thanks,
Matt
not a problem because they never
created any other CLI outside of OSC.
[1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc
[2] https://etherpad.openstack.org/p/nova-ptg-stein (~L721)
--
Thanks,
Matt
That should do it, thanks!
--
Thanks,
Matt
led? In the review I
had suggested that we add a pre-upgrade check which inspects the flavors
and if any of these are found, we report a warning meaning those flavors
need to be updated to use traits rather than capabilities. Would that be
reasonable?
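For operators who want to scan for these ahead of time, something like
this rough sketch would flag the affected flavors (property output
formatting varies by OSC version):

  for f in $(openstack flavor list -f value -c ID); do
    openstack flavor show $f -f value -c properties \
      | grep -q 'capabilities:' && echo "flavor $f uses capabilities"
  done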
--
Thanks,
Matt
someone else is going to cover it?
--
Thanks,
Matt
l.html#end-of-life
--
Thanks,
Matt
[2] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791679
[3] https://blueprints.launchpad.net/nova/+spec/shelve-on-stop
--
Thanks,
Matt
On 3/28/2018 4:35 PM, Jay Pipes wrote:
On 03/28/2018 03:35 PM, Matt Riedemann wrote:
On 3/27/2018 10:37 AM, Jay Pipes wrote:
If we want to actually fix the issue once and for all, we need to
make availability zones a real thing that has a permanent identifier
(UUID) and store that permanent
The UC is also expected to enlist others, and hopefully through our
efforts others are encouraged to participate and enlist others.
[1] https://etherpad.openstack.org/p/uc-stein-ptg
[2] https://etherpad.openstack.org/p/UC-Election-Qualifications
Awesome, thank you Melvin and others on the UC.
--
Thanks,
Matt
certainly may be
one), it's the role of the TC to do the same across openstack as a
whole. If a PTL doesn't have the time or willingness to do that within
their project, they shouldn't be the PTL. The same goes for TC members IMO.
--
Thanks,
Matt
I'd rather he not
stop doing those things to spend all his time acting as a project
manager.
I specifically called out what Doug is doing as an example of things I
want to see the TC doing. I want more/all TC members doing that.
--
Thanks,
Matt
to give the impression
that you must be on the TC to have such an impact.
See my reply to Thierry. This isn't what I'm saying. But I expect the
elected TC members to be *much* more *directly* involved in managing and
driving hard cross-project technical deliverables.
--
Thanks,
Matt
pe that no
one sees the way forward, document a decision and then drop it.
[1]
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134490.html
[2]
https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html
[3] htt
check" to verify a green
install, it's probably good to leave the old checks in place, i.e.
you're likely always going to want those cells v2 and placement checks
we added in ocata even long after ocata EOL.
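For reference, the command is simply:

  nova-status upgrade check

It exits non-zero if any check fails, which makes it easy to wire into
deployment tooling.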
--
Thanks,
Matt
value to hide it - you have to archive/purge those
records to get them out of the main table.
[1] https://bugs.launchpad.net/nova/+bug/1791824
[2] https://etherpad.openstack.org/p/upgrade-sig-ptg-stein
[3] https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html
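The commands for that flow are roughly (a sketch; "nova-manage db
purge" is only available starting in Rocky):

  # move soft-deleted rows to the shadow tables, in batches
  nova-manage db archive_deleted_rows --max_rows 1000 --until-complete
  # then (Rocky+) delete the shadow table rows entirely
  nova-manage db purge --all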
--
Thanks,
Matt
priorities.
--
Thanks,
Matt
https://docs.openstack.org/grenade/latest/readme.html#theory-of-upgrade
--
Thanks,
Matt
On 9/6/2018 2:56 PM, Jeremy Stanley wrote:
On 2018-09-06 14:31:01 -0500 (-0500), Matt Riedemann wrote:
On 8/29/2018 1:08 PM, Jim Rollenhagen wrote:
On Wed, Aug 29, 2018 at 12:51 PM, Jimmy McArthur <ji...@openstack.org> wrote:
Examples of typical sessions that make for a
on in IRC: this is an example,
right? Not a new thing being announced?
// jim
FYI for those that didn't see this on the other ML:
http://lists.openstack.org/pipermail/foundation/2018-August/002617.html
--
Thanks,
Matt
plan to do for the database
migration to minimize downtime.
+openstack-operators ML since this is an operators discussion now.
--
Thanks,
Matt
the
known issues with cross-cell migration but also the things I'm not
thinking about.
[1] https://etherpad.openstack.org/p/nova-ptg-stein
[2] https://etherpad.openstack.org/p/nova-ptg-stein-cells
--
Thanks,
Matt
Comments are welcome here, in the review, or in IRC.
[1] https://review.openstack.org/#/c/596502/
[2] https://bugs.launchpad.net/tripleo/+bug/1787910
--
Thanks,
Matt
non-admins (or some other role of
user) to hit the API directly. I would ask why that is.
--
Thanks,
Matt
+operators
On 8/24/2018 4:08 PM, Matt Riedemann wrote:
On 8/23/2018 10:22 AM, Sean McGinnis wrote:
I haven't gone through the workflow, but I thought shelve/unshelve
could detach
the volume on shelving and reattach it on unshelve. In that workflow,
assuming
the networking is in place
operators, especially those
running with multiple cells today, as possible. Thanks in advance.
[1] https://etherpad.openstack.org/p/nova-ptg-stein-cells
--
Thanks,
Matt
left!
Please complete your User Survey by *tomorrow*, *Tuesday, August 21 at 11:59pm UTC.*
Get started now: https://www.openstack.org/user-survey
Let me know if you have any questions.
Thank you,
VW
--
Matt Van Winkle
Senior Manager, Software Engineering | Salesforce
+ops list
On 8/18/2018 10:20 PM, Matt Riedemann wrote:
On 8/13/2018 9:30 PM, Rambo wrote:
1. In a single-region deployment, what will happen in the cloud as the
cluster size expands? How is that solved? Is there a limit on the number
of physical nodes in a single region? How many nodes
This would be a blueprint and not a bug fix - it's not something we'd
backport to stable branches upstream, for example.
--
Thanks,
Matt
CFP work is hard as hell. Much respect to the review panel members. It's
a thankless difficult job.
So, in lieu of being thankless, THANK YOU
-Matt
On Mon, Aug 13, 2018 at 9:59 AM, Allison Price
wrote:
> Hi everyone,
>
> One quick clarification. The speakers will be announced on…
[5]
https://github.com/openstack/nova/blob/c18b1c1bd646d7cefa3d3e4b25ce59460d1a6ebc/nova/virt/libvirt/driver.py#L5196
--
Thanks,
Matt
should provide
the binary.
--
Thanks,
Matt
things they were trying to
essentially disable in the API.
On the whole I think the quality is OK. It's not really possible to
accurately judge that when looking at a single diff this large.
--
Thanks,
Matt
search?search=live%20migration
--
Thanks,
Matt
-__JnxCFUbSVlEoPX26Jz6VaOyNg-jZbBsmmKA2f0c/edit?usp=sharing
--
Thanks,
Matt
upgrade it. I wanted to ask because "evacuate" as a server
operation is something else entirely (it's a rebuild on another host,
which is definitely disruptive to the workload on that server).
http://www.danplanet.com/blog/2016/03/03/evacuate-in-nova-one-command-to-confuse-us-all
shared MQ between the controller services and the cells. In
other words, just do the right thing from the start rather than have to
worry about maybe changing the deployment / configuration for that one
cell down the road when it's harder.
--
Thanks,
Matt
, not creating a
snapshot from a server. That would be 'nova image-create':
https://docs.openstack.org/python-novaclient/latest/cli/nova.html#nova-image-create
What is the error message in the 400 response? It should be in the CLI
output but if not, what's in the nova-api logs?
--
Thanks,
Matt
Just an update on an old thread, but I've been working on the
cross_az_attach=False issues again this past week and I think I have a
couple of decent fixes.
On 5/31/2017 6:08 PM, Matt Riedemann wrote:
This is a request for any operators out there that configure nova to set:
[cinder]
cross_az_attach = False
quite sure.
Oh wow, great timing:
http://lists.openstack.org/pipermail/openstack-dev/2018-June/131308.html
I've also queued that up for the upcoming bug smash in China next week.
--
Thanks,
Matt
thread first.
--
Thanks,
Matt
https://github.com/openstack/nova/blob/343c2bee234568855fd9e6ba075a05c2e70f3388/nova/virt/libvirt/driver.py#L8136
However, StarlingX has a patch for that (pretty sure anyway, I know
WindRiver had one):
https://review.openstack.org/#/c/337334/
--
Thanks,
Matt
as a template for other deployment projects to
integrate similar checks into their upgrade (and install verification)
flows.
[1] https://bugs.launchpad.net/nova/+bug/1772973
[2] https://docs.openstack.org/nova/latest/cli/nova-status.html
[3] https://review.openstack.org/#/c/575125/
--
Thanks,
Matt
+operators (I forgot)
On 6/7/2018 1:07 PM, Matt Riedemann wrote:
On 6/7/2018 12:56 PM, melanie witt wrote:
Recently, we've received interest about increasing the maximum number
of allowed volumes to attach to a single instance > 26. The limit of
26 is because of a historical limitation…
On 2/6/2018 6:44 PM, Matt Riedemann wrote:
On 2/6/2018 2:14 PM, Chris Apsey wrote:
but we would rather have intermittent build failures rather than
compute nodes falling over in the future.
Note that once a compute has a successful build, the consecutive build
failures counter is reset. So
https://review.openstack.org/#/c/557369/
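For reference, the threshold is configurable in nova.conf (a sketch;
check the option docs for your release):

  [compute]
  # 0 disables the auto-disable behavior entirely
  consecutive_build_service_disable_threshold = 0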
--
Thanks,
Matt
+openstack-operators since we need to have more operator feedback in our
community-wide goals decisions.
+Melvin as my elected user committee person for the same reasons as
adding operators into the discussion.
On 6/4/2018 3:38 PM, Matt Riedemann wrote:
On 6/4/2018 1:07 PM, Sean McGinnis
legal approval, license
agreements, etc.? If so, please be up front about that.
--
Thanks,
Matt
in that
aggregate unless you also assign those to their own aggregates.
It sounds like you might be looking for a dedicated hosts feature?
There is an RFE from the public cloud work group for that:
https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771523
--
Thanks,
Matt
in the neutron agent would
eliminate the need for this option altogether and still gain the
performance benefits.
--
Thanks,
Matt
On 5/30/2018 9:30 AM, Matt Riedemann wrote:
I can start pushing some docs patches and report back here for review help.
Here are the docs patches in both nova and neutron:
https://review.openstack.org/#/q/topic:bug/1774217+(status:open+OR+status:merged)
--
Thanks,
Matt
+openstack-operators
On 5/31/2018 3:04 PM, Matt Riedemann wrote:
On 5/31/2018 1:35 PM, melanie witt wrote:
This cycle at the PTG, we had decided to start making some progress
toward removing nova-network [1] (thanks to those who have helped!)
and so far, we've landed some patches to extract
On 5/30/2018 9:41 AM, Matt Riedemann wrote:
Thanks for your patience in debugging this Massimo! I'll get a bug
reported and patch posted to fix it.
I'm tracking the problem with this bug:
https://bugs.launchpad.net/nova/+bug/1774205
I found that this has actually been fixed since Pike
breaks down and takes the project_id
from the current context (admin) rather than the instance:
https://github.com/openstack/nova/blob/stable/ocata/nova/objects/request_spec.py#L407
Thanks for your patience in debugging this Massimo! I'll get a bug
reported and patch posted to fix it.
--
Thanks,
Matt
/aggregate_multitenancy_isolation.py#L50
And make sure when it fails, it matches what you'd expect. If it's None
or '' or something weird then we have a bug.
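For reference, that filter keys off the filter_tenant_id metadata on
the host aggregate; a sketch with made-up names:

  openstack aggregate create dedicated-agg
  openstack aggregate add host dedicated-agg compute-01
  openstack aggregate set --property filter_tenant_id=$PROJECT_ID dedicated-agg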
--
Thanks,
Matt
if this is a new server created in your Ocata
test environment that you're trying to move? Or is this a server created
before Ocata?
--
Thanks,
Matt
though if this is a clean Ocata environment with new instances, you
shouldn't have that problem.
--
Thanks,
Matt
in the instance and creates
allocations against the compute node provider using the flavor. It has
no explicit knowledge about granular request groups or more advanced
features like that.
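If you want to eyeball the result for a given instance afterwards, the
osc-placement plugin can show the consumer's allocations (a sketch;
assumes the osc-placement plugin is installed):

  openstack resource provider allocation show <instance_uuid>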
--
Thanks,
Matt
in this iteration, let me know what's
missing and I can add that in to the patch.
[1] https://review.openstack.org/#/c/565886/
--
Thanks,
Matt
[2] https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#tenant-isolation-with-placement
[3]
https://www.openstack.org/videos/vancouver-2018/moving-from-cellsv1-to-cellsv2-at-cern
--
Thanks,
Matt
https://github.com/openstack/nova/blob/de52fefa1fd52ccaac6807e5010c5f2a2dcbaab5/nova/objects/instance.py#L66
--
Thanks,
Matt
On 5/15/2018 3:48 AM, saga...@nttdata.co.jp wrote:
We store the service logs which are created by VM on that storage.
I don't mean to be glib, but have you considered maybe not doing that?
--
Thanks,
Matt
the server would
report as ACTIVE but the ports weren't wired up so ssh would fail.
Having an ACTIVE guest that you can't actually do anything with is kind
of pointless.
--
Thanks,
Matt
The neutron-server logs should show when external events are being sent
to nova for the given port, you probably need to trace the requests and
compare the nova-compute and neutron logs for a given server create request.
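A rough way to line the two up (a sketch; exact messages and log file
paths vary by release and deployment):

  grep "network-vif-plugged" /var/log/neutron/server.log
  grep "network-vif-plugged" /var/log/nova/nova-compute.log

network-vif-plugged is the external event nova waits on during server
create before starting the guest.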
--
Thanks,
Matt
set
region_name to RegionOne and see if it makes a difference (although I
thought RegionOne was the default if not specified?).
--
Thanks,
Matt
function).
--
Thanks,
Matt
a unique problem.
--
Thanks,
Matt
On 5/2/2018 12:39 PM, Matt Riedemann wrote:
FWIW, I think we can also backport the data migration CLI to stable
branches once we have it available so you can do your migration in let's
say Queens before getting to Rocky.
FYI, here is the start on the data migration CLI:
https://review.openstack.org/#/c
let's say they change the hypervisor or something less drastic
that still invalidates the image properties.
--
Thanks,
Matt
your migration in let's
say Queens before getting to Rocky.
--
Thanks,
Matt
, required in Ocata).
[1] https://review.openstack.org/#/c/492210/
[2] https://etherpad.openstack.org/p/nova-ptg-rocky-placement
--
Thanks,
Matt
tus:merged)
--
Thanks,
Matt
in with a different
perspective, like melwitt or dansmith.
All of the solutions are bad in their own way, either because they add
technical debt and poor user experience, or because they make rebuild
more complicated and harder to maintain for the developers.
--
Thanks,
Matt
in #openstack-nova and discussing it.
--
Thanks,
Matt
https://review.openstack.org/#/c/543263/
If you can find the bug, or report a new one, I could take a look.
--
Thanks,
Matt
progress if possible.
[1] https://review.openstack.org/#/c/486204/
[2]
https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/nova-validate-certificates.html
--
Thanks,
Matt
On 4/9/2018 4:58 AM, Kashyap Chamarthy wrote:
Keep in mind that Matt has a tendency to sometimes unfairly
over-simplify others' views ;-). More seriously, c'mon Matt; I went out
of my way to spend time learning about Debian's packaging structure and
trying to get the details right by talking
about bumping from the
minimum required (in Rocky) libvirt 1.3.1 to at least 3.0.0 (in Stein)
and qemu 2.5.0 to at least 2.8.0, so I think that's already covering
some good ground. Let's not get greedy. :)
--
Thanks,
Matt
can still support the features for the newer versions if
you're running a system with those versions, but not penalize people
with slightly older versions if not.
--
Thanks,
Matt