[Yahoo-eng-team] [Bug 1989361] Re: extension using collection_actions and collection_methods with path_prefix doesn't get proper URLs

2022-09-13 Thread Johannes Kulik
Looking a little more into it, the tests [0] actually always have a "/"
prefix in their "path_prefix", which works fine because the "routes"
library calls "stripslashes()" in the "resource()" call, so we shouldn't
end up with double slashes.

[0]
https://github.com/sapcc/neutron/blob/64bef10cd97d1f56647a4d20a7ce0644c18b8ece/neutron/tests/unit/api/test_extensions.py#L237

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1989361

Title:
  extension using collection_actions and collection_methods with
  path_prefix doesn't get proper URLs

Status in neutron:
  Invalid

Bug description:
  We're creating a new extension downstream to add some special-sauce
  API endpoints. While doing that, we tried to use "collection_actions"
  to create some special actions for our resource. Those ended up being
  uncallable, always returning a 404, because the call was interpreted
  as a standard "update" call instead of being dispatched to our special
  function.

  We debugged this and it turns out that the Route object created when
  registering the API endpoint in [0] ff. doesn't contain a "/" at the
  start of its regexp. Therefore, it doesn't match.

  This seems to come from the fact that we - unlike e.g. the quotasv2
  extension [1] - have to set a "path_prefix".

  Looking at the underlying "routes" library, a "/" is automatically
  prepended in the "resource()" call [2], while the path handed to the
  "submapper()" call already needs to contain the leading "/", as
  exemplified in [3].

  Therefore, I propose prepending a "/" to the "path_prefix" in the
  code handling "collection_actions" and "collection_methods" and will
  open a review request for this.

  [0] 
https://github.com/sapcc/neutron/blob/64bef10cd97d1f56647a4d20a7ce0644c18b8ece/neutron/api/extensions.py#L159
  [1] 
https://github.com/sapcc/neutron/blob/64bef10cd97d1f56647a4d20a7ce0644c18b8ece/neutron/extensions/quotasv2.py#L210-L215
  [2] https://github.com/bbangert/routes/blob/main/routes/mapper.py#L1126-L1132
  [3] https://github.com/bbangert/routes/blob/main/routes/mapper.py#L78
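
  As a rough illustration of the difference (a standalone sketch against
  the "routes" library only; the names below are made up and this is not
  the actual Neutron extension code):

    from routes import Mapper

    mapper = Mapper()

    # Without a leading "/", the generated regexp does not start with "/",
    # so a request for /prefix/widgets/special never matches this route.
    broken = mapper.submapper(path_prefix='prefix/widgets')
    broken.connect('broken_special', '/special', action='special')

    # With a "/" prepended to the path_prefix (the proposed fix) the route
    # matches as expected.
    fixed = mapper.submapper(path_prefix='/prefix/widgets')
    fixed.connect('fixed_special', '/special', action='special')

    print(mapper.match('/prefix/widgets/special'))  # resolves to the fixed route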

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1989361/+subscriptions



[Yahoo-eng-team] [Bug 1989254] Re: Neutron-dynamic-routing: misuse of assertTrue

2022-09-13 Thread OpenStack Infra
Reviewed:  
https://review.opendev.org/c/openstack/neutron-dynamic-routing/+/856900
Committed: 
https://opendev.org/openstack/neutron-dynamic-routing/commit/1c15cd1bc594201dc5cc1dc3593ee7264c8759e2
Submitter: "Zuul (22348)"
Branch: master

commit 1c15cd1bc594201dc5cc1dc3593ee7264c8759e2
Author: Takashi Natsume 
Date:   Sat Sep 10 22:12:50 2022 +0900

Fix misuse of assertTrue

Replace assertTrue with assertEqual.

Change-Id: I0f0b6e5a7a4b7799ecda2fcc5b5179c0f97fd44f
Closes-Bug: 1989254
Signed-off-by: Takashi Natsume 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1989254

Title:
  Neutron-dynamic-routing: misuse of assertTrue

Status in neutron:
  Fix Released

Bug description:
  There is a misuse of assertTrue in the neutron-dynamic-routing master
  (commit ddac34b3845a261d7c12cc37bf053948f6a3cfb9).

  neutron_dynamic_routing/tests/unit/db/test_bgp_dragentscheduler_db.py:83-84

  self.assertTrue(bgp_speaker_id,
                  res['bgp_speakers'][0]['id'])
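
  For context, the problem with the original call and the corrected form
  applied by the fix ("Replace assertTrue with assertEqual"); this is a
  minimal sketch, not the full test:

    # assertTrue(expr, msg): the second argument is only a failure message,
    # so the call above merely checks that bgp_speaker_id is truthy.
    self.assertTrue(bgp_speaker_id, res['bgp_speakers'][0]['id'])

    # assertEqual actually compares the expected and observed values.
    self.assertEqual(bgp_speaker_id, res['bgp_speakers'][0]['id'])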

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1989254/+subscriptions



[Yahoo-eng-team] [Bug 1989460] [NEW] [ovn-octavia-provider] HealthMonitor event received on port deleted

2022-09-13 Thread Fernando Royo
Public bug reported:

When the port associated with a VM is deleted, no event is received by
the driver agent, so the LB keeps reflecting a wrong ONLINE
operating_status for the member associated with the affected VM.

Since the port associated with a VM can be deleted, that case needs to
be covered.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1989460

Title:
  [ovn-octavia-provider] HealthMonitor event received on port deleted

Status in neutron:
  New

Bug description:
  When the port associated with a VM is deleted, no event is received by
  the driver agent, so the LB keeps reflecting a wrong ONLINE
  operating_status for the member associated with the affected VM.

  Since the port associated with a VM can be deleted, that case needs to
  be covered.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1989460/+subscriptions



[Yahoo-eng-team] [Bug 1989357] Re: Nova doesn’t update Neutron about changing compute instance name

2022-09-13 Thread Lucas Kanashiro
I added a task for sssd here so as not to miss the other bug report,
which is a dup of this one (#1989358). However, I am not sure how sssd
is involved in this issue. Please provide more information and detailed
steps on how to reproduce the issue.

** Also affects: sssd (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: sssd (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1989357

Title:
  Nova doesn’t update Neutron about changing compute instance name

Status in OpenStack Compute (nova):
  Incomplete
Status in sssd package in Ubuntu:
  Incomplete

Bug description:
  This is something that was raised on Neutron Designate DNS integration 
testing. 
  When VM server is created, its Nova name is used by Neutron as a hostname and 
then propagated to the DNS backends using Designate DNS, for example:
  
https://docs.openstack.org/neutron/yoga/admin/config-dns-int-ext-serv.html#use-case-1-floating-ips-are-published-with-associated-port-dns-attributes

  Created VM is named “my_vm” and the “A” type Recordset propagated to the DNS 
backends is:
  my-vm.example.org. | A| 198.51.100.4 

  Now, let’s say that the customer has decided to change VM’s name and that he 
would expect the previously created “A” type recordset to be change accordingly.
  Unfortunately,  such a change won’t affect either Neutron or Designate DNS, 
because Nova doesn’t update Neutron about changing VMs’ names.
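
  A hedged sketch of observing this with the openstacksdk (the cloud
  name and server names are made up; this only illustrates the reported
  behaviour, it is not a fix):

    import openstack

    conn = openstack.connect(cloud='devstack')  # assumes a configured clouds.yaml

    server = conn.compute.find_server('my_vm')
    port = next(conn.network.ports(device_id=server.id))
    print(port.dns_name)  # e.g. 'my-vm', derived from the Nova server name

    # Rename the instance in Nova...
    conn.compute.update_server(server, name='my_new_vm')

    # ...but Nova does not tell Neutron about it, so the port's dns_name
    # (and hence the Designate recordset) still reflects the old name.
    print(conn.network.get_port(port.id).dns_name)  # still 'my-vm'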

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1989357/+subscriptions



[Yahoo-eng-team] [Bug 1989480] [NEW] [OVN] Neutron server floods logs with hash ring messages on startup

2022-09-13 Thread Lucas Alvares Gomes
Public bug reported:

Reported at: https://bugzilla.redhat.com/show_bug.cgi?id=2125828

Neutron server issues over 300 messages per second during startup with
debug on:

2022-09-10 21:55:32.998 55 DEBUG networking_ovn.common.hash_ring_manager
[-] Disallow caching, nodes 0<26 _wait_startup_before_caching
/usr/lib/python3.6/site-
packages/networking_ovn/common/hash_ring_manager.py:61

This message is logged when the number of nodes in the hash ring is
different than the number of API workers at Neutron's startup. The hash
ring waits until all API workers are connected to OVSDB prior to
building the hash ring cache.

We need to rate-limit this message so that, in case there are problems
with the API workers not being able to connect to OVSDB, it does not
spam the Neutron logs.

The message itself is still useful IMHO, for knowing that the hash ring
cache is not yet built.
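
A minimal sketch of one way to rate-limit that debug message (a
hypothetical helper with assumed names and interval, not the actual
Neutron patch):

    import time

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    _LAST_LOGGED = 0.0
    _LOG_INTERVAL = 10  # seconds between repeats; assumed value

    def _log_caching_disallowed(nodes, workers):
        # Emit the "Disallow caching" message at most once per interval so a
        # stuck startup cannot flood the logs, while still recording that the
        # hash ring cache is not built yet.
        global _LAST_LOGGED
        now = time.monotonic()
        if now - _LAST_LOGGED >= _LOG_INTERVAL:
            LOG.debug('Disallow caching, nodes %s<%s', nodes, workers)
            _LAST_LOGGED = now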

** Affects: neutron
 Importance: High
 Assignee: Lucas Alvares Gomes (lucasagomes)
 Status: In Progress


** Tags: ovn

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Lucas Alvares Gomes (lucasagomes)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1989480

Title:
  [OVN] Neutron server floods logs with hash ring messages on startup

Status in neutron:
  In Progress

Bug description:
  Reported at: https://bugzilla.redhat.com/show_bug.cgi?id=2125828

  Neutron server issues over 300 messages per second during startup with
  debug on:

  2022-09-10 21:55:32.998 55 DEBUG
  networking_ovn.common.hash_ring_manager [-] Disallow caching, nodes
  0<26 _wait_startup_before_caching /usr/lib/python3.6/site-
  packages/networking_ovn/common/hash_ring_manager.py:61

  This message is logged when the number of nodes in the hash ring is
  different than the number of API workers at Neutron's startup. The
  hash ring waits until all API workers are connected to OVSDB prior to
  building the hash ring cache.

  We need to rate-limit this message so that, in case there are problems
  with the API workers not being able to connect to OVSDB, it does not
  spam the Neutron logs.

  The message itself is still useful IMHO, for knowing that the hash
  ring cache is not yet built.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1989480/+subscriptions



[Yahoo-eng-team] [Bug 1988311] Re: Concurrent evacuation of vms with pinned cpus to the same host fail randomly

2022-09-13 Thread Sylvain Bauza
Setting to High as we need to bump our requirements on master to exclude
older releases of oslo.concurrency.

Also, we need to backport the patch into the stable releases of
oslo.concurrency for Yoga.

** Also affects: nova/yoga
   Importance: Undecided
   Status: New

** Changed in: nova/yoga
   Status: New => Confirmed

** Changed in: nova/yoga
   Importance: Undecided => High

** Changed in: nova
   Importance: Critical => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1988311

Title:
  Concurrent evacuation of vms with pinned cpus to the same host fail
  randomly

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) yoga series:
  Confirmed
Status in oslo.concurrency:
  Fix Released

Bug description:
  Reproduction:

  Boot two vms (each with one pinned cpu) on devstack0.
  Then evacuate them to devtack0a.
  devstack0a has two dedicated cpus, so both vms should fit.
  However, sometimes (for example 6 out of 10 times) the evacuation of
  one vm fails with this error message: 'CPU set to pin [0] must be a
  subset of free CPU set [1]'.

  devstack0 - all-in-one host
  devstack0a - compute-only host

  # have two dedicated cpus for pinning on the evacuation target host
  devstack0a:/etc/nova/nova-cpu.conf:
  [compute]
  cpu_dedicated_set = 0,1

  # the dedicated cpus are properly tracked in placement
  $ openstack resource provider list
  +--------------------------------------+------------+------------+--------------------------------------+----------------------+
  | uuid                                 | name       | generation | root_provider_uuid                   | parent_provider_uuid |
  +--------------------------------------+------------+------------+--------------------------------------+----------------------+
  | a0574d87-42ee-4e13-b05a-639dc62c1196 | devstack0a |          2 | a0574d87-42ee-4e13-b05a-639dc62c1196 | None                 |
  | 2e6fac42-d6e3-4366-a864-d5eb2bdc2241 | devstack0  |          2 | 2e6fac42-d6e3-4366-a864-d5eb2bdc2241 | None                 |
  +--------------------------------------+------------+------------+--------------------------------------+----------------------+
  $ openstack resource provider inventory list a0574d87-42ee-4e13-b05a-639dc62c1196
  +----------------+------------------+----------+----------+----------+-----------+-------+------+
  | resource_class | allocation_ratio | min_unit | max_unit | reserved | step_size | total | used |
  +----------------+------------------+----------+----------+----------+-----------+-------+------+
  | MEMORY_MB      |              1.5 |        1 |     3923 |      512 |         1 |  3923 |    0 |
  | DISK_GB        |              1.0 |        1 |       28 |        0 |         1 |    28 |    0 |
  | PCPU           |              1.0 |        1 |        2 |        0 |         1 |     2 |    0 |
  +----------------+------------------+----------+----------+----------+-----------+-------+------+

  # use vms with one pinned cpu
  openstack flavor create cirros256-pinned --public --ram 256 --disk 1 \
    --vcpus 1 --property hw_rng:allowed=True --property hw:cpu_policy=dedicated

  # boot two vms (each with one pinned cpu) on devstack0
  n=2 ; for i in $( seq $n ) ; do \
    openstack server create --flavor cirros256-pinned \
      --image cirros-0.5.2-x86_64-disk --nic net-id=private \
      --availability-zone :devstack0 --wait vm$i ; \
  done

  # kill n-cpu on devstack0
  devstack0 $ sudo systemctl stop devstack@n-cpu
  # and force it down, so we can start evacuating
  openstack compute service set devstack0 nova-compute --down

  # evacuate both vms to devstack0a concurrently
  for vm in $( openstack server list --host devstack0 -f value -c ID ) ; do \
    openstack --os-compute-api-version 2.29 server evacuate --host devstack0a $vm & \
  done

  # follow up on how the evacuation is going, check if the bug occurred,
  # see details a bit below
  for i in $( seq $n ) ; do \
    openstack server show vm$i -f value -c OS-EXT-SRV-ATTR:host -c status ; \
  done

  # clean up
  devstack0 $ sudo systemctl start devstack@n-cpu
  openstack compute service set devstack0 nova-compute --up
  for i in $( seq $n ) ; do openstack server delete vm$i --wait ; done

  This bug is not deterministic. For example out of 10 tries (like
  above) I have seen 4 successes - when both vms successfully evacuated
  to (went to ACTIVE on) devstack0a.

  But in the other 6 cases only one vm evacuated successfully. The other
  vm went to ERROR state, with the error message: "CPU set to pin [0]
  must be a subset of free CPU set [1]". For example:

  $ openstack server show vm2
  ...
  | fault   | {'code': 400, 'created': 
'2022-08-24T13:50:33Z', 'message': 'CPU set to pin [0] must be a subset of free 
CPU set [1]'} |
  ...

  In n-cpu logs we see the following:

  aug 24 1

[Yahoo-eng-team] [Bug 1896617] Re: [SRU] Creation of image (or live snapshot) from the existing VM fails if libvirt-image-backend is configured to qcow2 starting from Ussuri

2022-09-13 Thread Sylvain Bauza
Putting the bug to Opinion/Wishlist as this sounds like half a Nova
problem (since we set the chmod) and half a distro-specific
configuration issue.

I'm not against any modification, but we need to address this gap
properly, ideally as a blueprint.

** Changed in: nova
   Status: Triaged => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1896617

Title:
  [SRU] Creation of image (or live snapshot) from the existing VM fails
  if libvirt-image-backend is configured to qcow2 starting from Ussuri

Status in OpenStack Nova Compute Charm:
  Invalid
Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive ussuri series:
  Fix Released
Status in Ubuntu Cloud Archive victoria series:
  Fix Released
Status in OpenStack Compute (nova):
  Opinion
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Focal:
  Fix Released
Status in nova source package in Groovy:
  Fix Released

Bug description:
  [Impact]

  tl;dr

  1) Creating an image from an existing VM fails if the qcow2 image
  backend is used, but everything is fine when using the rbd image
  backend in nova-compute.
  2) openstack server image create --name   fails with some unrelated
  error:

  $ openstack server image create --wait 842fa12c-19ee-44cb-bb31-36d27ec9d8fc
  HTTP 404 Not Found: No image found with ID
  f4693860-cd8d-4088-91b9-56b2f173ffc7

  == Details ==

  Two Tempest tests ([1] and [2]) from the 2018.02 Refstack test lists
  [0] are failing with the following exception:

  49701867-bedc-4d7d-aa71-7383d877d90c
  Traceback (most recent call last):
    File 
"/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/api/compute/base.py",
 line 369, in create_image_from_server
  waiters.wait_for_image_status(client, image_id, wait_until)
    File 
"/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/common/waiters.py",
 line 161, in wait_for_image_status
  image = show_image(image_id)
    File 
"/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/lib/services/compute/images_client.py",
 line 74, in show_image
  resp, body = self.get("images/%s" % image_id)
    File 
"/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/lib/common/rest_client.py",
 line 298, in get
  return self.request('GET', url, extra_headers, headers)
    File 
"/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/lib/services/compute/base_compute_client.py",
 line 48, in request
  method, url, extra_headers, headers, body, chunked)
    File 
"/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/lib/common/rest_client.py",
 line 687, in request
  self._error_checker(resp, resp_body)
    File 
"/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/lib/common/rest_client.py",
 line 793, in _error_checker
  raise exceptions.NotFound(resp_body, resp=resp)
  tempest.lib.exceptions.NotFound: Object not found
  Details: {'code': 404, 'message': 'Image not found.'}

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File 
"/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/api/compute/images/test_images_oneserver.py",
 line 69, in test_create_delete_image
  wait_until='ACTIVE')
    File 
"/home/ubuntu/snap/fcbtest/14/.rally/verification/verifier-2d9cbf4d-fcbb-491d-848d-5137a9bde99e/repo/tempest/api/compute/base.py",
 line 384, in create_image_from_server
  image_id=image_id)
  tempest.exceptions.SnapshotNotFoundException: Server snapshot image 
d82e95b0-9c62-492d-a08c-5bb118d3bf56 not found.

  So far I was able to identify the following:

  1) 
https://github.com/openstack/tempest/blob/master/tempest/api/compute/images/test_images_oneserver.py#L69
 invokes a "create image from server"
  2) It fails with the following error message in the nova-compute logs: 
https://pastebin.canonical.com/p/h6ZXdqjRRm/

  The same occurs if "openstack server image create --wait" is executed;
  however, according to
  https://docs.openstack.org/nova/ussuri/admin/migrate-instance-with-snapshot.html
  the VM has to be shut down before the image is created:

  "Shut down the source VM before you take the snapshot to ensure that
  all data is flushed to disk. If necessary, list the instances to view
  the instance name. Use the openstack server stop command to shut down
  the instance:"

  This step is definitely being skipped by the test (e.g it's trying to

[Yahoo-eng-team] [Bug 1988199] Re: [OVN][live-migration] Nova port binding request and "LogicalSwitchPortUpdateUpEvent" race condition

2022-09-13 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/855257
Committed: 
https://opendev.org/openstack/neutron/commit/91f0864dc0ccf0f67be7162f011706dbc6383cb3
Submitter: "Zuul (22348)"
Branch: master

commit 91f0864dc0ccf0f67be7162f011706dbc6383cb3
Author: Rodolfo Alonso Hernandez 
Date:   Tue Aug 30 18:09:34 2022 +0200

Add an active wait during the port provisioning event

In ML2/OVN, during a live-migration process, it could
happen that the port provisioning event is received before
the port binding has been updated. That means the port has
been created on the destination host and the event received
(this event will remove any pending provisioning block), but
the Nova port binding request, which updates the port binding
registers, has not arrived yet. Because the port is considered
"not bound" (yet), the port provisioning doesn't set the port
status to ACTIVE.

This patch creates an active wait during the port provisioning
event method. If the port binding is still "unbound", the method
retries the port retrieval several times, giving some time to the
port binding request from Nova to arrive.

Closes-Bug: #1988199
Change-Id: I50091c84e67c172c94ce9140f23235421599185c


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1988199

Title:
  [OVN][live-migration] Nova port binding request and
  "LogicalSwitchPortUpdateUpEvent" race condition

Status in neutron:
  Fix Released

Bug description:
  Related Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2120409

  Summary: after a live-migration, the VM port status is DOWN.

  During a live-migration, the following events happen in the Neutron server:
  1) We receive a port update. Because the "migrating_to" field is in the port 
binding, the OVN mech driver forces a port update from DOWN to UP. This (1) 
sets the port status to UP and (2) sends the vif-plugged event to Nova. That 
will trigger the port creation (layer 1) in the destination node.

  2) Then we receive the "LogicalSwitchPortUpdateDownEvent", because the
  source port was deleted. That sets the port status to DOWN.

  3) At the same time we receive the "LogicalSwitchPortUpdateUpEvent",
  because the port in the destination host has been created. This last
  event won't manually set the port status to UP. Instead it will remove
  any port provisioning block [1].

  3.1) If the port provisioning is considered complete
  ("provisioning_complete" event), it is processed in
  "Ml2Plugin._port_provisioned". The problem we are hitting here is that
  the port has no host (the port is still not bound):

  2022-08-26 10:08:23.373 17 DEBUG neutron.plugins.ml2.plugin
  [req-2b13d263-5748-46e2-9fdf-33df50634607 - - - - -] Port
  943db0db-773f-45e9-8b68-0ebcc1840207 cannot update to ACTIVE because
  it is not bound. _port_provisioned /usr/lib/python3.9/site-
  packages/neutron/plugins/ml2/plugin.py:339

  4) Right after the Nova port binding request is received and the port
  is bound: https://paste.opendev.org/show/bIUoJkiStCIe8TBb0573/

  This is basically the issue we have here: there is a race condition
  between (1) the Nova port binding request and (2) the
  "LogicalSwitchPortUpdateUpEvent" that is received when the OVS port is
  created on a chassis.

  Just for testing, if I add a 1 second sleep at the very first line of
  "_port_provisioned", allowing the Nova port binding request (which
  binds the port to a host) to arrive, the port provisioning succeeds
  and the port is set to UP. I'll find a way to fix that in the
  Ml2Plugin code.
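
  As a rough sketch of the active-wait idea from the committed fix (the
  helper name, retry count and interval below are made up, not the
  actual Neutron code):

    import time

    RETRIES = 6        # assumed values; the real patch picks its own
    INTERVAL = 0.5     # seconds between port re-reads

    def wait_for_port_binding(get_port, port_id):
        # Re-read the port a few times, giving the Nova port binding
        # request time to arrive before giving up on the ACTIVE transition.
        for _ in range(RETRIES):
            port = get_port(port_id)
            if port.get('binding:host_id'):
                return port   # bound: provisioning may set the port ACTIVE
            time.sleep(INTERVAL)
        return None           # still unbound: keep the current status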

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1988199/+subscriptions



[Yahoo-eng-team] [Bug 1988793] Re: OVN as a Provider Driver for Octavia in ovn-octavia-provider

2022-09-13 Thread Luis Tomas Bolivar
This is not a limitation. The failover action is already properly
handled to state that it is not supported. But this is not due to a
limitation in the ovn-octavia driver; it is because this functionality
is not needed at all (I would say this is an improvement). In the
amphora case you have a VM that needs to be recovered (failover) on
certain occasions. In ovn-octavia there is no VM for load balancing
(with its pros and cons), and the flows are already distributed across
all the nodes, so there is no need to fail over/recover anything.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1988793

Title:
  OVN as a Provider Driver for Octavia in ovn-octavia-provider

Status in neutron:
  Invalid

Bug description:
  - [X] This is a doc addition request.

  Under "Limitations of the OVN Provider Driver" [1] I believe we should
  add that manual failover is not supported, as per [2].

  This should also be updated, imo [3].

  [1] https://docs.openstack.org/ovn-octavia-
  provider/latest/admin/driver.html#limitations-of-the-ovn-provider-
  driver

  [2] https://bugs.launchpad.net/neutron/+bug/1901936

  [3] https://docs.openstack.org/ovn-octavia-
  provider/latest/contributor/loadbalancer.html#limitations

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1988793/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp