[Yahoo-eng-team] [Bug 1989391] [NEW] Allowed address pairs with a netmask don't work

2022-09-12 Thread ZhouHeng
Public bug reported:

I have an environment ovn==21.03

I set a port's allowed address pair to 192.168.1.12/24 and found that
only traffic from ip=192.168.1.12 can pass through; traffic from other
IPs in the subnet, e.g. 192.168.1.11, cannot. If I instead set the
allowed address pair to 192.168.1.0/24, traffic from 192.168.1.11 can
pass through.

I looked at the relevant code in ovn:
https://github.com/ovn-org/ovn/blob/98bac97c656c720780fae9b1e4c700eb13c36c29/northd/ovn-northd.c#L4333

I think Neutron should convert such an address pair to its network
address before sending it to OVN.
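For illustration, the proposed conversion could look like this (a sketch
using Python's ipaddress module, not the actual Neutron code):

```python
import ipaddress

def pair_to_ovn_cidr(pair):
    # 192.168.1.12/24 describes a host inside 192.168.1.0/24; OVN only
    # matches the whole subnet if it receives the network-address form.
    return str(ipaddress.ip_network(pair, strict=False))

print(pair_to_ovn_cidr("192.168.1.12/24"))  # 192.168.1.0/24
print(pair_to_ovn_cidr("192.168.1.12/32"))  # 192.168.1.12/32
```

A /32 (single host) pair is already in network form, so the conversion
leaves it unchanged.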

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1989391



-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586731] Re: restart neutron ovs agent will leave the fanout queue behind

2022-09-12 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/856411
Committed: 
https://opendev.org/openstack/neutron/commit/2402145713bb349a9d1852b92e2b37a56f26874b
Submitter: "Zuul (22348)"
Branch:master

commit 2402145713bb349a9d1852b92e2b37a56f26874b
Author: Felix Huettner 
Date:   Thu Sep 8 09:44:50 2022 +0200

Cleanup fanout queues on ovs agent stop (part 2)

As a followup from the previous commit we here now also cleanup the
SubPort and Trunk fanout queues.

Closes-Bug: #1586731
Change-Id: I047603b647dec7787c2471d9edb70fa4ec599a2a


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586731

Title:
  restart neutron ovs agent will leave the fanout queue behind

Status in neutron:
  Fix Released

Bug description:
  To reproduce:
  sudo rabbitmqctl list_queues
  restart neutron-openvswitch-agent
  sudo rabbitmqctl list_queues

  
  q-agent-notifier-dvr-update   0
  q-agent-notifier-dvr-update.ubuntu64  0
  q-agent-notifier-dvr-update_fanout_714f4e99b33a4a41863406fcc26b9162   0
  q-agent-notifier-dvr-update_fanout_a2771eb21e914195b9a6cc3f930b5afb   0
  q-agent-notifier-l2population-update  0
  q-agent-notifier-l2population-update.ubuntu64 0
  q-agent-notifier-l2population-update_fanout_6b2637e57995416ab772259a974315e0  3
  q-agent-notifier-l2population-update_fanout_fe9c07aaa8894f55bfb49717f955aa55  0
  q-agent-notifier-network-update   0
  q-agent-notifier-network-update.ubuntu64  0
  q-agent-notifier-network-update_fanout_1ae903109fe844a39c925e49d5f06498   0
  q-agent-notifier-network-update_fanout_8c15bef355c645e58226a9b98efe3f28   0
  q-agent-notifier-port-delete  0
  q-agent-notifier-port-delete.ubuntu64 0
  q-agent-notifier-port-delete_fanout_cd794c4456cc4bedb7993f5d32f0b1b9  0
  q-agent-notifier-port-delete_fanout_f09ffae3b0fa48c882eddd59baae2169  0
  q-agent-notifier-port-update  0
  q-agent-notifier-port-update.ubuntu64 0
  q-agent-notifier-port-update_fanout_776b9b5b1d0244fc8ddc0a1e309d9ab2  0
  q-agent-notifier-port-update_fanout_f3345013434545fd9b72b7f54a5c9818  0
  q-agent-notifier-security_group-update  0
  q-agent-notifier-security_group-update.ubuntu64   0
  q-agent-notifier-security_group-update_fanout_b5421c8ae5e94c318502ee8fbc62852d  0
  q-agent-notifier-security_group-update_fanout_f4d73a80c9a9444c8a9899cbda3e71ed  0
  q-agent-notifier-tunnel-delete  0
  q-agent-notifier-tunnel-delete.ubuntu64   0
  q-agent-notifier-tunnel-delete_fanout_743b58241f6243c0a776a0dbf58da652  0
  q-agent-notifier-tunnel-delete_fanout_ddb8fad952b348a8bf12bc5c741d0a25  0
  q-agent-notifier-tunnel-update  0
  q-agent-notifier-tunnel-update.ubuntu64   0
  q-agent-notifier-tunnel-update_fanout_1e0b0f7ca63f404ba5f41def9d12f00d  0
  q-agent-notifier-tunnel-update_fanout_e86e9b073ec74766b9e755439827badc  1
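The leftover entries can be picked out mechanically; a sketch assuming
the naming convention visible in the listing above (topic name,
"_fanout_", then a 32-character hex suffix unique to one agent run):

```python
import re

# Per-agent fanout queues carry a random 32-hex-char suffix; the plain
# topic queues and the host-specific ".ubuntu64" queues do not.
FANOUT_RE = re.compile(r"^q-agent-notifier-[\w-]+_fanout_[0-9a-f]{32}$")

queues = [
    "q-agent-notifier-dvr-update",
    "q-agent-notifier-dvr-update.ubuntu64",
    "q-agent-notifier-dvr-update_fanout_714f4e99b33a4a41863406fcc26b9162",
]
print([q for q in queues if FANOUT_RE.match(q)])
# only the _fanout_ queue is reported
```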

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1586731/+subscriptions




[Yahoo-eng-team] [Bug 1981631] Re: Nova fails to reuse mdev vgpu devices

2022-09-12 Thread Sylvain Bauza
OK, I may have mistriaged this bug report, as this is specific to the
Ampere architecture with SR-IOV support, so never mind comment #2.

FWIW, this hardware support is very special, as you indeed need to enable VFs,
as described in the nvidia docs:
https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#creating-sriov-vgpu-device-red-hat-el-kvm

Indeed, 32 VFs would be configured *but* if you specify
enabled_vgpu_types to the right nvidia-471 type for the PCI address,
then the VGPU inventory for this PCI device will have a total of 4, not
32 as I tested earlier.

Anyway, this whole Ampere support is fragile, as it is not fully
supported upstream, so I'm about to set this bug to Opinion, since
Ampere GPUs can't be tested upstream.

Please do further testing to identify whether something is missing in
the current vGPU support in Nova that would break Ampere support, but
please understand that upstream support is absolutely
hardware-independent and must not be nvidia-specific.

** Tags added: vgpu

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1981631

Title:
  Nova fails to reuse mdev vgpu devices

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Description:
  
  Hello, we are experiencing a weird issue: Nova creates the mdev devices
  from virtual functions when none exist yet, but then will not reuse them
  once they have all been created and the vGPU instances are removed.

  
  I believe part of this issue was the uuid issue from this bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1701281

  Manually applying the latest patch partially fixed the issue
  (placement stopped reporting no hosts available), now the error is on
  the hypervisor side saying 'no vgpu resources available'.

  If I manually remove an mdev device with a command like the following:
  echo "1" > /sys/bus/mdev/devices/150c155c-da0b-45a6-8bc1-a8016231b100/remove

  then I'm able to spin up an instance again.

  All mdev devices match between mdevctl list and virsh nodedev-list.

  Steps to reproduce:
  
  1) freshly setup hypervisor with no mdev devices created yet
  2) spin up vgpu instances until all mdevs are created that will fit on 
physical gpu(s)
  3) delete vgpu instances
  4) try and spin up new vgpu instances

  Expected Result:
  =
  Instances spin up and reuse the existing mdev vGPU devices

  Actual Result:
  =
  Build error from Nova API:
  Error: Failed to perform requested operation on instance "colby_gpu_test23", 
the instance has an error status: Please try again later [Error: Exceeded 
maximum number of retries. Exhausted all hosts available for retrying build 
failures for instance c18565f9-da37-42e9-97b9-fa33da5f1ad0.].

  Error in hypervisor logs:
  nova.exception.ComputeResourcesUnavailable: Insufficient compute resources: 
vGPU resource is not available

  mdevctl output:
  cdc98056-8597-4531-9e55-90ab44a71b4e 0000:21:00.7 nvidia-563 manual
  298f1e4b-784d-42a9-b3e5-bdedd0eeb8e1 0000:21:01.2 nvidia-563 manual
  2abee89e-8cb4-4727-ac2f-62888daab7b4 0000:21:02.4 nvidia-563 manual
  32445186-57ca-43f4-b599-65a455fffe65 0000:21:04.2 nvidia-563 manual
  0c4f5d07-2893-49a1-990e-4c74c827083b 0000:81:00.7 nvidia-563 manual
  75d1b78c-b097-42a9-b736-4a8518b02a3d 0000:81:01.2 nvidia-563 manual
  a54d33e0-9ddc-49bb-8908-b587c72616a9 0000:81:02.5 nvidia-563 manual
  cd7a49a8-9306-41bb-b44e-00374b1e623a 0000:81:03.4 nvidia-563 manual

  virsh nodedev-list --cap mdev:
  mdev_0c4f5d07_2893_49a1_990e_4c74c827083b_0000_81_00_7
  mdev_298f1e4b_784d_42a9_b3e5_bdedd0eeb8e1_0000_21_01_2
  mdev_2abee89e_8cb4_4727_ac2f_62888daab7b4_0000_21_02_4
  mdev_32445186_57ca_43f4_b599_65a455fffe65_0000_21_04_2
  mdev_75d1b78c_b097_42a9_b736_4a8518b02a3d_0000_81_01_2
  mdev_a54d33e0_9ddc_49bb_8908_b587c72616a9_0000_81_02_5
  mdev_cd7a49a8_9306_41bb_b44e_00374b1e623a_0000_81_03_4
  mdev_cdc98056_8597_4531_9e55_90ab44a71b4e_0000_21_00_7
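That the two listings agree can be checked mechanically; a sketch
assuming libvirt's nodedev naming convention of "mdev_" plus the UUID
plus the parent PCI address, with '-', ':' and '.' replaced by '_'
(two of the entries above, with the PCI domain written out as 0000):

```python
import re

# Two of the mdevctl entries above (UUID and PCI address columns).
mdevctl_lines = [
    "cdc98056-8597-4531-9e55-90ab44a71b4e 0000:21:00.7 nvidia-563 manual",
    "0c4f5d07-2893-49a1-990e-4c74c827083b 0000:81:00.7 nvidia-563 manual",
]
# The corresponding virsh nodedev names.
virsh_names = {
    "mdev_cdc98056_8597_4531_9e55_90ab44a71b4e_0000_21_00_7",
    "mdev_0c4f5d07_2893_49a1_990e_4c74c827083b_0000_81_00_7",
}

def nodedev_name(uuid, pci):
    # Join UUID and PCI address, replacing the separators '-', ':'
    # and '.' with underscores, as libvirt does.
    return "mdev_" + re.sub(r"[-:.]", "_", uuid + "_" + pci)

derived = {nodedev_name(*line.split()[:2]) for line in mdevctl_lines}
print(derived == virsh_names)  # True -> the listings are consistent
```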

  nvidia-smi vgpu output:
  Wed Jul 13 20:15:16 2022
  +-----------------------------------------------------------------------------+
  | NVIDIA-SMI 510.73.06              Driver Version: 510.73.06                 |
  |---------------------------------+------------------------------+------------+
  | GPU  Name                       | Bus-Id                       | GPU-Util   |
  |      vGPU ID     Name           | VM ID     VM Name            | vGPU-Util  |
  |=================================+==============================+============|
  |   0  NVIDIA A40                 | 00000000:21:00.0             |   0%       |
  |      3251635106  NVIDIA A40-12Q | 2786...  instance-00014520   |   0%       |
  |      3251635117

[Yahoo-eng-team] [Bug 1989361] [NEW] extension using collection_actions and collection_methods with path_prefix doesn't get proper URLs

2022-09-12 Thread Johannes Kulik
Public bug reported:

We're creating a new extension downstream to add some special-sauce API
endpoints. During that, we tried to use "collection_actions" to create
some special actions for our resource. Those ended up being uncallable,
always returning a 404, because the call was interpreted as a standard
"update" call instead of reaching our special function.

We debugged this and it turns out that the Route object created when
registering the API endpoint in [0] ff. doesn't contain a "/" at the
start of its regexp. Therefore, it doesn't match.

This seems to come from the fact that we - other than e.g. the quotasv2
extension [1] - have to set a "path_prefix".

Looking at the underlying "routes" library, we automatically get a "/"
prefixed for the "resource()" call [2], while the "SubMapper"'s
"submapper()" call needs the leading "/" to already be present, as
exemplified in [3].

Therefore, I propose to prepend a "/" to the "path_prefix" for the code
handling "collection_actions" and "collection_methods" and will open a
review-request for this.

[0] 
https://github.com/sapcc/neutron/blob/64bef10cd97d1f56647a4d20a7ce0644c18b8ece/neutron/api/extensions.py#L159
[1] 
https://github.com/sapcc/neutron/blob/64bef10cd97d1f56647a4d20a7ce0644c18b8ece/neutron/extensions/quotasv2.py#L210-L215
[2] https://github.com/bbangert/routes/blob/main/routes/mapper.py#L1126-L1132
[3] https://github.com/bbangert/routes/blob/main/routes/mapper.py#L78
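The effect can be illustrated with plain regular expressions
(hypothetical simplified patterns and names, not the exact regexps the
routes library generates):

```python
import re

url = "/prefix/widgets/special_action"

# Without the leading "/" in the regexp, the request URL -- which does
# start with "/" -- can never match, so the action surfaces as a 404.
broken = re.compile(r"prefix/widgets/special_action$")
# With the proposed "/" prepended to the path_prefix, it matches again.
fixed = re.compile(r"/prefix/widgets/special_action$")

print(broken.match(url))              # None
print(fixed.match(url) is not None)   # True
```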

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1989361





[Yahoo-eng-team] [Bug 1987378] Re: [RFE] Add DSCP mark 44

2022-09-12 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron-lib/+/854117
Committed: 
https://opendev.org/openstack/neutron-lib/commit/b0eaf6e1534b472f45923267ab38bbee5c04fa83
Submitter: "Zuul (22348)"
Branch:master

commit b0eaf6e1534b472f45923267ab38bbee5c04fa83
Author: Rodolfo Alonso Hernandez 
Date:   Fri Aug 12 10:56:18 2022 +0200

Add DSCP mark 44

Added a new DSCP mark value: 44. This new mark value was included
recently in the RFC5865 [1].

[1]https://www.rfc-editor.org/rfc/rfc5865.html

Change-Id: Ieba8835cbb5a71e83791324ed1fcbb983afe19fa
Closes-Bug: #1987378


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1987378

Title:
  [RFE] Add DSCP mark 44

Status in neutron:
  Fix Released

Bug description:
  This RFE proposes to add a new valid DSCP mark value: 44.

  This value was recently added in RFC 5865 [1]: "A Differentiated
  Services Code Point (DSCP) for Capacity-Admitted Traffic".

  
  [1]https://www.rfc-editor.org/rfc/rfc5865.html
  [2]https://www.iana.org/assignments/dscp-registry/dscp-registry.xhtml
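For context on what the mark means on the wire (an illustrative
calculation, not neutron-lib code): DSCP occupies the upper six bits of
the IPv4 TOS / IPv6 Traffic Class byte, above the two ECN bits.

```python
def dscp_to_tos(dscp):
    # DSCP is a 6-bit value, shifted left past the two ECN bits.
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP marks are 6-bit values")
    return dscp << 2

# The new mark 44 (VOICE-ADMIT, RFC 5865) appears on the wire as 0xb0.
print(hex(dscp_to_tos(44)))  # 0xb0
```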

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1987378/+subscriptions




[Yahoo-eng-team] [Bug 1989357] Re: Nova doesn’t update Neutron about changing compute instance name

2022-09-12 Thread Arkady Shtempler
** Package changed: sssd (Ubuntu) => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1989357

Title:
  Nova doesn’t update Neutron about changing compute instance name

Status in OpenStack Compute (nova):
  New

Bug description:
  This was raised during Neutron Designate DNS integration testing.
  When a VM is created, its Nova name is used by Neutron as a hostname
  and then propagated to the DNS backends via Designate DNS, for example:
  https://docs.openstack.org/neutron/yoga/admin/config-dns-int-ext-serv.html#use-case-1-floating-ips-are-published-with-associated-port-dns-attributes

  A VM named “my_vm” results in the following “A” type recordset being
  propagated to the DNS backends:
  my-vm.example.org. | A | 198.51.100.4

  Now, suppose the customer decides to change the VM’s name and expects
  the previously created “A” type recordset to be changed accordingly.
  Unfortunately, such a change affects neither Neutron nor Designate DNS,
  because Nova doesn’t notify Neutron when a VM’s name changes.
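The underscore-to-hyphen change visible above ("my_vm" becoming
"my-vm") happens because "my_vm" is not a valid DNS label; a rough
sketch of that kind of sanitization (illustrative only, not Neutron's
actual code):

```python
import re

def to_dns_label(name):
    # Hostname labels allow only letters, digits and hyphens, so every
    # other character is replaced before the name is published to DNS.
    return re.sub(r"[^a-zA-Z0-9-]", "-", name).strip("-").lower()

print(to_dns_label("my_vm") + ".example.org.")  # my-vm.example.org.
```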

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1989357/+subscriptions




[Yahoo-eng-team] [Bug 1989357] [NEW] Nova doesn’t update Neutron about changing compute instance name

2022-09-12 Thread Launchpad Bug Tracker
You have been subscribed to a public bug.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: nova
-- 
Nova doesn’t update Neutron about changing compute instance name
https://bugs.launchpad.net/bugs/1989357
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1989269] Re: Wrong assertion method in a unit test

2022-09-12 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/856926
Committed: 
https://opendev.org/openstack/neutron/commit/653949808d2d5078102e2f6e2a70643071006b5b
Submitter: "Zuul (22348)"
Branch:master

commit 653949808d2d5078102e2f6e2a70643071006b5b
Author: Takashi Natsume 
Date:   Sun Sep 11 15:19:51 2022 +0900

Fix a wrong assertion method

Replace 'has_calls' with 'assert_has_calls'.

Change-Id: Iff796608ac981aea2d093ab0e99e2de0c2cbb9b1
Closes-Bug: 1989269
Signed-off-by: Takashi Natsume 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1989269

Title:
  Wrong assertion method in a unit test

Status in neutron:
  Fix Released

Bug description:
  There is a wrong assertion method in a unit test on the master branch.

  
https://opendev.org/openstack/neutron/src/commit/ead685b9381bccb536a943409dc8ded57b65c70f/neutron/tests/unit/agent/l3/test_ha_router.py#L154

  mock_pm.disable.has_calls(calls)

  'has_calls' should be 'assert_has_calls'.
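The pitfall is easy to reproduce (a generic sketch, not the test from
the report): because Mock auto-creates attributes, 'has_calls' looks
like an assertion but silently does nothing.

```python
from unittest import mock

mock_pm = mock.Mock()
calls = [mock.call("ns1"), mock.call("ns2")]

# BUG: nothing was called, yet this "assertion" passes -- has_calls is
# just an auto-created child mock, so the test never checked anything.
mock_pm.disable.has_calls(calls)

# The real assertion method raises, because the calls were never made.
try:
    mock_pm.disable.assert_has_calls(calls)
except AssertionError:
    print("assert_has_calls failed as it should")
```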

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1989269/+subscriptions

