[Yahoo-eng-team] [Bug 1964575] Re: [DB] Migration to SQLAlchemy 2.0

2024-01-24 Thread Rodolfo Alonso
Yes, I think we can close this one and consider it released. So far
we haven't found any new errors in the CI jobs. Any new issue can be
tracked in a new LP bug.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1964575

Title:
  [DB] Migration to SQLAlchemy 2.0

Status in neutron:
  Fix Released

Bug description:
  This is a container for the efforts to be done in Neutron, neutron-lib
  and plugins projects to migrate to SQLAlchemy 2.0.

  There is currently a patch in neutron-lib to disable the session
  "autocommit" flag; disabling it is optional in SQLAlchemy 1.4 and
  mandatory in SQLAlchemy 2.0 [1]. We have found problems with how the
  session transactions are now handled by SQLAlchemy.

  In Neutron there are many places where we make a database call inside
  an implicit transaction, meaning we don't explicitly create a
  reader/writer context. With "autocommit=True", that transaction is
  discarded immediately; with non-autocommit sessions, the transaction
  that was created remains open. That leads to database errors, as seen
  in the tempest tests.

  In [2], as recommended by Mike Bayer (main maintainer and author of
  SQLAlchemy), we have re-enabled the "autocommit" flag and added a log
  message to track when Neutron tries to execute a command on a session
  with an inactive transaction.

  The goal of this bug is to move all Neutron database interactions to
  be SQLAlchemy 2.0 compliant.
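
  As an illustration, a minimal sketch (assuming neutron-lib's
  CONTEXT_READER enginefacade helper and Neutron's Port model; get_ports
  is a placeholder function, not Neutron code) of the explicit reader
  context that SQLAlchemy 2.0 compliant code uses instead of relying on
  an implicit transaction:

  ```python
  from neutron_lib.db import api as db_api

  from neutron.db import models_v2


  @db_api.CONTEXT_READER
  def get_ports(context):
      # The decorator opens an explicit reader transaction for the
      # duration of the call and closes it on return, so no transaction
      # is left dangling on the session.
      return context.session.query(models_v2.Port).all()
  ```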

  
  [1]https://review.opendev.org/c/openstack/neutron-lib/+/828738
  [2]https://review.opendev.org/c/openstack/neutron-lib/+/833103

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1964575/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2051172] [NEW] [ovn-octavia-provider] Deleting or updating Load Balancer members in one pool can affect other pools, or the same pool if the same IP is used

2024-01-24 Thread Jay Rhine
Public bug reported:

* High level description:
We have discovered that when using Octavia OVN load balancers, if you have two
or more pools with the same member IPs (but different ports), removing a
member from one pool removes that IP from the "ip_port_mappings" of the OVN
Load Balancer even if another pool is still using it. Additionally, if you
update the pool members to point to the same member IPs but a different port,
the "ip_port_mappings" entries for those IPs are also cleared. The update
behaviour occurs because the provider performs the additions before the
deletions, so if the IP addresses are the same, the final deletions clear out
the IPs.

We experienced this problem when using Magnum with OVN load balancers,
as Magnum in some cases would update the load balancers to ensure proper
configuration, and this would result in the load balancers not
functioning even though the final Octavia configuration was correct
(since the ip_port_mappings table was cleared, the LBs were not
operational).

This behavior was likely introduced by the fix for bug:

https://bugs.launchpad.net/neutron/+bug/2007835

Which resulted in commit:
https://opendev.org/openstack/ovn-octavia-provider/commit/e40e0d62ac089afd14c03102d80990e792211af3

This commit updates the OVS / OVN databases in a more efficient way
(only adding or deleting what is needed). However, since it does not
recompute all ip_port_mappings on every update, it's possible to hit
this scenario.

The "ovn-octavia-provider/ovn_octavia_provider/helper.py" code already
includes a solution to this scenario in the "_clean_ip_port_mappings"
method.  In this case, the method will verify if the ip address is in
use by any other pools before removing the ip address from the
"ip_port_mappings" table.  The same logic can also be applied to the
"_update_ip_port_mappings" to prevent this issue from occurring in the
pool member update or delete cases.  I have tried making such a change,
and it works well for us.

However, it should be noted that the logic in
"_clean_ip_port_mappings" only looks at other pools, not at the current
pool. Therefore, if pool members were updated from one port to another
with the same IP, my patch applying the same logic to
"_update_ip_port_mappings" may not cover all scenarios.

Therefore, it may be optimal to update the checks to be more thorough.
If the maintainers of this project are in agreement with the logic of my
proposed updates, I could work to submit an official patch for this
issue.

* Version:
  ** OpenStack version
OpenStack Antelope (2023.1)
  ** Linux distro, kernel.
Ubuntu 22.04
  ** DevStack or other _deployment_ mechanism?
Kolla Ansible

Patch for fixing issue:
```
--- helper.py.new_orig  2024-01-21 03:05:27.0 +
+++ helper.py.proposed1 2024-01-24 19:12:17.843028336 +
@@ -2441,7 +2441,7 @@
         self._execute_commands(commands)
         return True
 
-    def _update_ip_port_mappings(self, ovn_lb, backend_ip, port_name, src_ip,
+    def _update_ip_port_mappings(self, ovn_lb, backend_ip, port_name, src_ip, pool_key,
                                  delete=False):
 
         # ip_port_mappings:${MEMBER_IP}=${LSP_NAME_MEMBER}:${HEALTH_SRC}
@@ -2449,10 +2449,42 @@
         #  MEMBER_IP: IP of member_lsp
         #  LSP_NAME_MEMBER: Logical switch port
         #  HEALTH_SRC: source IP of hm_port
-
         if delete:
-            self.ovn_nbdb_api.lb_del_ip_port_mapping(ovn_lb.uuid,
-                                                     backend_ip).execute()
+            # NOTE(jrhine): This is basically the same code as in the
+            # _clean_ip_port_mappings function, but with a single member ip.
+            # Just like in that function, before removing a member from the
+            # ip_port_mappings list, we need to ensure that the member is not
+            # being used by an other pool to prevent accidentally removing the
+            # member we can use the neutron:member_status to search for any
+            # other members with the same address
+            other_members = []
+            for k, v in ovn_lb.external_ids.items():
+                if ovn_const.LB_EXT_IDS_POOL_PREFIX in k and k != pool_key:
+                    other_members.extend(self._extract_member_info(
+                        ovn_lb.external_ids[k]))
+
+            member_statuses = ovn_lb.external_ids.get(
+                ovn_const.OVN_MEMBER_STATUS_KEY)
+
+            try:
+                member_statuses = jsonutils.loads(member_statuses)
+            except TypeError:
+                LOG.debug("no member status on external_ids: %s",
+                          str(member_statuses))
+                member_statuses = {}
+
+            execute_delete = True
+            for member_id in [item[3] for item in other_members
+                              if item[0] == backend_ip]:
+                if member_statuses.get(
+                        member_id,

[Yahoo-eng-team] [Bug 2051171] [NEW] SQLAlchemy 2.0 warning in neutron-lib

2024-01-24 Thread Brian Haley
Public bug reported:

Running 'tox -e pep8' in neutron-lib or neutron repo generates this new
warning:

/home/bhaley/git/neutron-lib/neutron_lib/db/model_base.py:113: 
MovedIn20Warning: Deprecated API features detected! These feature(s) are not 
compatible with SQLAlchemy 2.0. To prevent incompatible upgrades prior to 
updating applications, ensure requirements files are pinned to 
"sqlalchemy<2.0". Set environment variable SQLALCHEMY_WARN_20=1 to show all 
deprecation warnings.  Set environment variable 
SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message. (Background on 
SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
  BASEV2 = declarative.declarative_base(cls=NeutronBaseV2)

Google eventually points in this direction:

https://docs.sqlalchemy.org/en/20/changelog/whatsnew_20.html#step-one-orm-declarative-base-is-superseded-by-orm-declarativebase

So moving to the sqlalchemy.orm.DeclarativeBase class is the way forward.

Might be a little tricky to implement as sqlalchemy is currently pinned
in upper-constraints (UC):

sqlalchemy===1.4.50

** Affects: neutron
 Importance: High
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2051171

Title:
  SQLAlchemy 2.0 warning in neutron-lib

Status in neutron:
  Confirmed

Bug description:
  Running 'tox -e pep8' in neutron-lib or neutron repo generates this
  new warning:

  /home/bhaley/git/neutron-lib/neutron_lib/db/model_base.py:113: 
MovedIn20Warning: Deprecated API features detected! These feature(s) are not 
compatible with SQLAlchemy 2.0. To prevent incompatible upgrades prior to 
updating applications, ensure requirements files are pinned to 
"sqlalchemy<2.0". Set environment variable SQLALCHEMY_WARN_20=1 to show all 
deprecation warnings.  Set environment variable 
SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message. (Background on 
SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
BASEV2 = declarative.declarative_base(cls=NeutronBaseV2)

  Google eventually points in this direction:

  https://docs.sqlalchemy.org/en/20/changelog/whatsnew_20.html#step-one-orm-declarative-base-is-superseded-by-orm-declarativebase

  So moving to the sqlalchemy.orm.DeclarativeBase class is the way forward.

  Might be a little tricky to implement as sqlalchemy is currently
  pinned in upper-constraints (UC):

  sqlalchemy===1.4.50
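
  A minimal sketch (not the actual neutron-lib change; the NeutronBaseV2
  here is only a stand-in mixin), assuming SQLAlchemy 2.0 is installed,
  of the replacement described in the link above:

  ```python
  from sqlalchemy.orm import DeclarativeBase


  class NeutronBaseV2:
      """Stand-in for neutron-lib's real NeutronBaseV2 mixin."""


  # SQLAlchemy 1.x style that triggers MovedIn20Warning:
  #   BASEV2 = declarative.declarative_base(cls=NeutronBaseV2)

  # SQLAlchemy 2.0 style:
  class BASEV2(NeutronBaseV2, DeclarativeBase):
      pass
  ```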

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2051171/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2019190] Re: [RBD] Retyping of in-use boot volumes renders instances unusable (possible data corruption)

2024-01-24 Thread Edward Hope-Morley
** Also affects: cinder (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: cinder (Ubuntu Noble)
   Importance: Undecided
   Status: New

** Also affects: cinder (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Also affects: cinder (Ubuntu Mantic)
   Importance: Undecided
   Status: New

** Also affects: cinder (Ubuntu Lunar)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2019190

Title:
  [RBD] Retyping of in-use boot volumes renders instances unusable
  (possible data corruption)

Status in Cinder:
  New
Status in Cinder wallaby series:
  New
Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive antelope series:
  New
Status in Ubuntu Cloud Archive bobcat series:
  New
Status in Ubuntu Cloud Archive caracal series:
  New
Status in Ubuntu Cloud Archive yoga series:
  New
Status in Ubuntu Cloud Archive zed series:
  New
Status in OpenStack Compute (nova):
  Invalid
Status in cinder package in Ubuntu:
  New
Status in cinder source package in Jammy:
  New
Status in cinder source package in Lunar:
  New
Status in cinder source package in Mantic:
  New
Status in cinder source package in Noble:
  New

Bug description:
  While trying out the volume retype feature in cinder, we noticed that
  after an instance is rebooted it will not come back online and will be
  stuck in an error state, or if it does come back online, its
  filesystem is corrupted.

  ## Observations

  Say there are two volume types, `fast` (stored in ceph pool `volumes`)
  and `slow` (stored in ceph pool `volumes.hdd`). Before the retyping we
  can see that the volume, for example, is present in the `volumes.hdd`
  pool and has a watcher accessing the volume.

  ```sh
  [ceph: root@mon0 /]# rbd ls volumes.hdd
  volume-81cfbafc-4fbb-41b0-abcb-8ec7359d0bf9

  [ceph: root@mon0 /]# rbd status 
volumes.hdd/volume-81cfbafc-4fbb-41b0-abcb-8ec7359d0bf9
  Watchers:
  watcher=[2001:XX:XX:XX::10ad]:0/3914407456 client.365192 
cookie=140370268803456
  ```

  Starting the retyping process using the migration policy `on-demand`
  for that volume, either via the horizon dashboard or the CLI, causes
  the volume to be correctly transferred to the `volumes` pool within the
  ceph cluster. However, the watcher does not get transferred, so nobody
  is accessing the volume after it has been transferred.
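
  A minimal sketch (credentials, URLs and the volume type name are
  placeholders) of kicking off the same retype from Python with
  python-cinderclient instead of Horizon or the CLI:

  ```python
  from cinderclient import client as cinder_client
  from keystoneauth1.identity import v3
  from keystoneauth1 import session

  # Placeholder credentials; substitute your own cloud's values.
  auth = v3.Password(auth_url='http://keystone.example:5000/v3',
                     username='admin', password='secret',
                     project_name='admin',
                     user_domain_id='default',
                     project_domain_id='default')
  sess = session.Session(auth=auth)
  cinder = cinder_client.Client('3', session=sess)

  # Retype the volume from the report to the `fast` type; the 'on-demand'
  # migration policy allows the cross-pool migration described above.
  volume_id = '81cfbafc-4fbb-41b0-abcb-8ec7359d0bf9'
  cinder.volumes.retype(volume_id, 'fast', 'on-demand')
  ```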

  ```sh
  [ceph: root@mon0 /]# rbd ls volumes
  volume-81cfbafc-4fbb-41b0-abcb-8ec7359d0bf9

  [ceph: root@mon0 /]# rbd status 
volumes/volume-81cfbafc-4fbb-41b0-abcb-8ec7359d0bf9
  Watchers: none
  ```

  Taking a look at the libvirt XML of the instance in question, one can
  see that the `rbd` volume path does not change after the retyping is
  completed. Therefore, if the instance is restarted, nova will not be
  able to find its volume, preventing the instance from starting.

   Pre retype

  ```xml
  [...]
  
  
  
  
  
  [...]
  ```

   Post retype (no change)

  ```xml
  [...]
  
  
  
  
  
  [...]
  ```

  ### Possible cause

  While looking through the code that is responsible for the volume
  retype, we found a function `swap_volume` which, by our understanding,
  should be responsible for fixing the association above. As we
  understand it, cinder should use an internal API path to let nova
  perform this action. This doesn't seem to happen.

  (`_swap_volume`:
  https://github.com/openstack/nova/blob/stable/wallaby/nova/compute/manager.py#L7218)

  ## Further observations

  If one tries to regenerate the libvirt XML by e.g. live migrating the
  instance and then rebooting it, the filesystem gets corrupted.

  ## Environmental Information and possibly related reports

  We are running the latest version of TripleO Wallaby using the
  hardened (whole disk) overcloud image for the nodes.

  Cinder Volume Version:
  `openstack-cinder-18.2.2-0.20230219112414.f9941d2.el8.noarch`

  ### Possibly related

  - https://bugzilla.redhat.com/show_bug.cgi?id=1293440

  
  (might want to paste the above to a markdown file for better readability)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/2019190/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2051126] [NEW] stores-info fails if unrecognised backend specified

2024-01-24 Thread Abhishek Kekane
Public bug reported:

If a deployer specifies an invalid backend for glance using
'enabled_backends' in the glance-api.conf file, then the glance
stores-info command fails with an HTTP 500 error and the stacktrace
below:

Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi 
[None req-71e90940-8b19-4711-b2fa-910c145c1960 admin admin] Caught error: no 
such option foo in group [DEFAULT]: oslo_config.cfg.NoSuchOptError: no such 
option foo in group [DEFAULT]
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi 
Traceback (most recent call last):
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
 File "/opt/stack/data/venv/lib/python3.10/site-packages/oslo_config/cfg.py", 
line 2219, in __getattr__
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
   return self._get(name)
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
 File "/opt/stack/data/venv/lib/python3.10/site-packages/oslo_config/cfg.py", 
line 2653, in _get
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
   value, loc = self._do_get(name, group, namespace)
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
 File "/opt/stack/data/venv/lib/python3.10/site-packages/oslo_config/cfg.py", 
line 2671, in _do_get
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
   info = self._get_opt_info(name, group)
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
 File "/opt/stack/data/venv/lib/python3.10/site-packages/oslo_config/cfg.py", 
line 2876, in _get_opt_info
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
   raise NoSuchOptError(opt_name, group)
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi 
oslo_config.cfg.NoSuchOptError: no such option foo in group [DEFAULT]
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi 
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi 
During handling of the above exception, another exception occurred:
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi 
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi 
Traceback (most recent call last):
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
 File "/opt/stack/glance/glance/common/wsgi.py", line 1297, in __call__
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
   action_result = self.dispatch(self.controller, action,
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
 File "/opt/stack/glance/glance/common/wsgi.py", line 1340, in dispatch
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
   return method(*args, **kwargs)
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
 File "/opt/stack/glance/glance/api/v2/discovery.py", line 68, in get_stores
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
   description = getattr(CONF, backend).store_description
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
 File "/opt/stack/data/venv/lib/python3.10/site-packages/oslo_config/cfg.py", 
line 2223, in __getattr__
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi  
   raise NoSuchOptError(name)
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi 
oslo_config.cfg.NoSuchOptError: no such option foo in group [DEFAULT]
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR glance.common.wsgi 
Jan 24 14:25:08 devstack-caracal glance-api[1906424]: INFO eventlet.wsgi.server 
[None req-71e90940-8b19-4711-b2fa-910c145c1960 admin admin] 
10.0.109.128,10.0.109.128 - - [24/Jan/2024 14:25:08] "GET /v2/info/stores 
HTTP/1.1" 500 454 0.075514


How to reproduce:

1. Define enabled_backends as shown below in glance-api.conf
[DEFAULT]
enabled_backends = fast:file,foo:bar

[glance_store]
default_backend = fast

[fast]
filesystem_store_datadir = /opt/stack/data/glance/images/

[foo]
foo = bar

2. Restart g-api service
3. Run glance stores-info command
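
A minimal standalone sketch (not Glance code; the registered option is only
illustrative) of why the lookup in discovery.py blows up: 'fast' is a
registered option group, 'foo' is not, so getattr(CONF, 'foo') falls back to
the [DEFAULT] group and raises NoSuchOptError, which surfaces as the HTTP 500
above:

```python
from oslo_config import cfg

CONF = cfg.ConfigOpts()
# 'fast' gets an option group, the way a real glance_store backend would.
CONF.register_opts([cfg.StrOpt('store_description')], group='fast')

print(CONF.fast.store_description)      # registered group -> None

try:
    print(CONF.foo.store_description)   # 'foo' was never registered
except cfg.NoSuchOptError as exc:
    print(exc)   # no such option foo in group [DEFAULT]
```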

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2051126

Title:
  stores-info fails if unrecognised backend specified

Status in Glance:
  New

Bug description:
  If a deployer specifies an invalid backend for glance using
  'enabled_backends' in the glance-api.conf file, then the glance
  stores-info command fails with an HTTP 500 error and the stacktrace
  below:

  Jan 24 14:25:08 devstack-caracal glance-api[1906424]: ERROR 
glance.common.wsgi [None req-71e90940-8b19-4711-b2fa-910c145c1960 admin admin] 
Caught error: no 

[Yahoo-eng-team] [Bug 1884708] Re: explicitly_egress_direct prevents learning of local MACs and causes flooding of ingress packets

2024-01-24 Thread Bence Romsics
I'm reopening this because I believe the committed fix addresses only
part of the problem. With firewall_driver=noop the unnecessary ingress
flooding on br-int is gone. However, we still see the same unnecessary
flooding with firewall_driver=openvswitch. For details and a full
reproduction, please see the comments on bug #2048785:

https://bugs.launchpad.net/neutron/+bug/2048785/comments/2
https://bugs.launchpad.net/neutron/+bug/2048785/comments/6


** Changed in: neutron
   Status: Fix Released => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1884708

Title:
  explicitly_egress_direct prevents learning of local MACs and causes
  flooding of ingress packets

Status in neutron:
  New

Bug description:
  We took this bug fix: https://bugs.launchpad.net/neutron/+bug/1732067
  and then also backported https://bugs.launchpad.net/neutron/+bug/1866445
  ourselves.

  The latter is for the iptables-based firewall.

  We have VLAN-based networks and are seeing ingress packets destined to
  local MACs being flooded. We are not seeing any local MACs present
  under ovs-appctl fdb/show br-int.

  Consider following example:

  HOST 1:
  MAC A = fa:16:3e:c1:01:43
  MAC B = fa:16:3e:de:0b:8a

  HOST 2:
  MAC C = fa:16:3e:d6:3f:31

  A is talking to C. Snooping on the qvo interface of B, we are seeing
  all the traffic destined to MAC A (along with other unicast traffic
  not destined to or sourced from MAC B). Neither MAC A nor MAC B is
  present in the br-int FDB, despite heavy traffic being sent.

  
  Here is an ofproto trace for such a packet; in_port 8313 is the qvo of MAC A:

  sudo ovs-appctl ofproto/trace br-int 
in_port=8313,tcp,dl_src=fa:16:3e:c1:01:43,dl_dst=fa:16:3e:d6:3f:31
  Flow: 
tcp,in_port=8313,vlan_tci=0x,dl_src=fa:16:3e:c1:01:43,dl_dst=fa:16:3e:d6:3f:31,nw_src=0.0.0.0,nw_dst=0.0.0.0,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=0,tcp_flags=0

  bridge("br-int")
  
   0. in_port=8313, priority 9, cookie 0x9a67096130ac45c2
  goto_table:25
  25. in_port=8313,dl_src=fa:16:3e:c1:01:43, priority 2, cookie 
0x9a67096130ac45c2
  goto_table:60
  60. in_port=8313,dl_src=fa:16:3e:c1:01:43, priority 9, cookie 
0x9a67096130ac45c2
  resubmit(,61)
  61. 
in_port=8313,dl_src=fa:16:3e:c1:01:43,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00,
 priority 10, cookie 0x9a67096130ac45c2
  push_vlan:0x8100
  set_field:4098->vlan_vid
  output:1

  bridge("br-ext")
  
   0. in_port=2, priority 2, cookie 0xab09adf2af892674
  goto_table:1
   1. priority 0, cookie 0xab09adf2af892674
  goto_table:2
   2. in_port=2,dl_vlan=2, priority 4, cookie 0xab09adf2af892674
  set_field:4240->vlan_vid
  NORMAL
   -> forwarding to learned port

  bridge("br-vlan")
  -
   0. priority 1, cookie 0x651552fc69601a2d
  goto_table:3
   3. priority 1, cookie 0x651552fc69601a2d
  NORMAL
   -> forwarding to learned port

  Final flow: 
tcp,in_port=8313,dl_vlan=2,dl_vlan_pcp=0,vlan_tci1=0x,dl_src=fa:16:3e:c1:01:43,dl_dst=fa:16:3e:d6:3f:31,nw_src=0.0.0.0,nw_dst=0.0.0.0,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=0,tcp_flags=0
  Megaflow: 
recirc_id=0,eth,ip,in_port=8313,vlan_tci=0x/0x1fff,dl_src=fa:16:3e:c1:01:43,dl_dst=fa:16:3e:d6:3f:31,nw_frag=no
  Datapath actions: push_vlan(vid=144,pcp=0),51

  
  Because the packet took the output: action from table=61 (added by
  the explicitly_egress_direct fix), the local MAC is not learned. But
  on ingress, the packet hits table=60's NORMAL action and is flooded,
  because the bridge never learned where to send the local MAC.

  sudo ovs-appctl ofproto/trace br-int 
in_port=1,dl_vlan=144,dl_src=fa:16:3e:d6:3f:31,dl_dst=fa:16:3e:c1:01:43
  Flow: 
in_port=1,dl_vlan=144,dl_vlan_pcp=0,vlan_tci1=0x,dl_src=fa:16:3e:d6:3f:31,dl_dst=fa:16:3e:c1:01:43,dl_type=0x

  bridge("br-int")
  
   0. in_port=1,dl_vlan=144, priority 3, cookie 0x9a67096130ac45c2
  set_field:4098->vlan_vid
  goto_table:60
  60. priority 3, cookie 0x9a67096130ac45c2
  NORMAL
   -> no learned MAC for destination, flooding

  bridge("br-vlan")
  -
   0. in_port=4, priority 2, cookie 0x651552fc69601a2d
  goto_table:1
   1. priority 0, cookie 0x651552fc69601a2d
  goto_table:2
   2. in_port=4, priority 2, cookie 0x651552fc69601a2d
  drop

  bridge("br-tun")
  
   0. in_port=1, priority 1, cookie 0xf1baf24d000c6f7c
  goto_table:1
   1. priority 0, cookie 0xf1baf24d000c6f7c
  goto_table:2
   2. dl_dst=00:00:00:00:00:00/01:00:00:00:00:00, priority 0, cookie 
0xf1baf24d000c6f7c
  goto_table:20
  20. priority 0, cookie 0xf1baf24d000c6f7c
  goto_table:22
  22. priority 0, cookie 0xf1baf24d000c6f7c
  drop

  Final flow: 
in_port=1,dl_vlan=2,dl_vlan_pcp=0,vlan_tci1=0x,dl_src=fa:16:3e:d6:3f:31,dl_dst=fa:16:3e:c1:01:43,dl_type=0x
  

[Yahoo-eng-team] [Bug 2038978] Re: [OVN] Floating IP <=> Floating IP across subnets

2024-01-24 Thread Rodolfo Alonso
*** This bug is a duplicate of bug 2035281 ***
https://bugs.launchpad.net/bugs/2035281

This issue is the same as
https://bugs.launchpad.net/neutron/+bug/2035281 and was fixed in
https://review.opendev.org/c/openstack/neutron/+/895260.

** This bug has been marked a duplicate of bug 2035281
   [ML2/OVN] DGP/Floating IP issue - no flows for chassis gateway port

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2038978

Title:
  [OVN] Floating IP <=> Floating IP across subnets

Status in neutron:
  In Progress

Bug description:
  When using OVN, if you have a virtual router with a gateway in subnet
  A and a port with a floating IP attached to it from subnet B, they
  seem not to be reachable.

  https://mail.openvswitch.org/pipermail/ovs-dev/2021-July/385253.html

  There was a fix brought into OVN for this not long ago; it introduces
  an option, `options:add_route`, which can be set to `true`.

  see: https://mail.openvswitch.org/pipermail/ovs-dev/2021-July/385255.html

  I think we should do this in order to mirror the same behaviour as
  ML2/OVS, since there we install scope-link routes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2038978/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2051108] [NEW] Support for the "bring your own keys" approach for Cinder

2024-01-24 Thread NotTheEvilOne
Public bug reported:

Description
===
Cinder currently lacks an API to create a volume with a predefined encryption
key (e.g. one already stored in Barbican). This feature would be useful for
use cases where end users should be able to store keys that are later used to
encrypt volumes.

Work flow would be as follows:
1. End user creates a new key and stores it in OpenStack Barbican.
2. User requests a new volume with volume type "LUKS" and provides an "encryption_reference_key_id" (or just "key_id").
3. Internally the key is copied (like in volume_utils.clone_encryption_key_()) and a new "encryption_key_id" is created.

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

- Cinder currently lags support the API to create a volume with a
- predefined (e.g. already stored in Barbican) encryption key. This
- feature would be useful for use cases where end-users should be enabled
- to store keys later on used to encrypt volumes.
+ Description
+ ===
+ Cinder currently lags support the API to create a volume with a predefined 
(e.g. already stored in Barbican) encryption key. This feature would be useful 
for use cases where end-users should be enabled to store keys later on used to 
encrypt volumes.
  
  Work flow would be as follow:
  1. End user creates a new key and stores it in OpenStack Barbican
  2. User requests a new volume with volume type "LUKS" and gives an 
"encryption_reference_key_id" (or just "key_id").
  3. Internally the key is copied (like in 
volume_utils.clone_encryption_key_()) and a new "encryption_key_id".

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2051108

Title:
  Support for the "bring your own keys" approach for Cinder

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Cinder currently lacks an API to create a volume with a predefined
  encryption key (e.g. one already stored in Barbican). This feature
  would be useful for use cases where end users should be able to store
  keys that are later used to encrypt volumes.

  Work flow would be as follows:
  1. End user creates a new key and stores it in OpenStack Barbican (see the sketch after this list).
  2. User requests a new volume with volume type "LUKS" and provides an "encryption_reference_key_id" (or just "key_id").
  3. Internally the key is copied (like in volume_utils.clone_encryption_key_()) and a new "encryption_key_id" is created.
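
  A minimal sketch of step 1 (credentials and names are placeholders;
  the Cinder-side "encryption_reference_key_id" from step 2 is only the
  proposal above and does not exist today), storing a user-supplied key
  in Barbican with python-barbicanclient:

  ```python
  import os

  from barbicanclient import client as barbican_client
  from keystoneauth1.identity import v3
  from keystoneauth1 import session

  # Placeholder credentials; substitute your own cloud's values.
  auth = v3.Password(auth_url='http://keystone.example:5000/v3',
                     username='demo', password='secret',
                     project_name='demo',
                     user_domain_id='default',
                     project_domain_id='default')
  sess = session.Session(auth=auth)
  barbican = barbican_client.Client(session=sess)

  # Step 1: create and store a 256-bit key the end user controls
  # (stored here as a hex string for simplicity).
  secret = barbican.secrets.create(name='my-luks-key',
                                   payload=os.urandom(32).hex(),
                                   algorithm='aes', bit_length=256,
                                   mode='cbc')
  key_ref = secret.store()
  # key_ref is what the user would hand to Cinder in step 2 via the
  # proposed "encryption_reference_key_id" parameter.
  ```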

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2051108/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp