[Yahoo-eng-team] [Bug 2061947] [NEW] stores-info --detail command fails if swift store is enabled

2024-04-16 Thread Abhishek Kekane
Public bug reported:

If you enable the swift store in a multiple-stores setup, the glance
stores-info --detail and glance stores-info commands fail with a 500
error: "oslo_config.cfg.NoSuchOptError: no such option store_description
in group [dummy]".

Note: this error only occurs when you specify "swift_store_config_file =
/etc/glance/glance-swift-store.conf" together with
"swift_store_multi_tenant = True" for the swift store. It is recommended
not to use "swift_store_config_file" when multi-tenant mode is enabled
for swift, but the command should not fail with a 500 error; it should
return an appropriate 400 BadRequest error to the user.

Sample configuration glance-api.conf:

[DEFAULT]
enabled_backends = dummy:swift

[dummy]
swift_store_multi_tenant = True
default_swift_reference = ref1
swift_store_config_file = /etc/glance/glance-swift-store.conf
swift_store_create_container_on_put = True
store_description = "This is swift store"


In a second scenario, if you configure the swift store as below in
glance-api.conf, the glance stores-info --detail command gives a 500
error: "Apr 17 04:40:20 akekane-zed-dev glance-api[3389648]: ERROR
glance.common.wsgi [None req-7dcd5c18-7b31-43e5-9b22-77e20505cab7 admin
admin] Caught error: 'MultiTenantStore' object has no attribute
'container': AttributeError: 'MultiTenantStore' object has no attribute
'container'".

[dummy]
swift_store_multi_tenant = True
default_swift_reference = ref1
swift_store_create_container_on_put = True
store_description = "This is swift store"


Ideally, in scenario 1 the glance stores-info --detail command should
return 400 Bad Request, and for scenario 2 we need to identify the ideal
configuration for swift multi-tenant mode and shape the response
accordingly.
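
As a hedged sketch (not Glance's actual code; the helper names
store_description and store_container are invented for illustration),
the API layer could guard both failure modes, turning the config error
into a 400 and tolerating the missing attribute:

from oslo_config import cfg
import webob.exc


def store_description(store_id):
    # Scenario 1: oslo.config raises NoSuchOptError when the option was
    # never registered for this backend, e.g. with the multi-tenant
    # swift misconfiguration above.
    try:
        return cfg.CONF[store_id].store_description
    except cfg.NoSuchOptError:
        msg = ("Store %s is misconfigured: check swift_store_config_file"
               " and swift_store_multi_tenant." % store_id)
        raise webob.exc.HTTPBadRequest(explanation=msg)


def store_container(store):
    # Scenario 2: MultiTenantStore derives the container name per
    # tenant, so a static "container" attribute may simply not exist.
    return getattr(store, "container", None)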

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2061947

Title:
  stores-info --detail command fails if swift store is enabled

Status in Glance:
  New

Bug description:
  If you enable the swift store in a multiple-stores setup, the glance
  stores-info --detail and glance stores-info commands fail with a 500
  error: "oslo_config.cfg.NoSuchOptError: no such option
  store_description in group [dummy]".

  Note: this error only occurs when you specify
  "swift_store_config_file = /etc/glance/glance-swift-store.conf"
  together with "swift_store_multi_tenant = True" for the swift store.
  It is recommended not to use "swift_store_config_file" when
  multi-tenant mode is enabled for swift, but the command should not
  fail with a 500 error; it should return an appropriate 400 BadRequest
  error to the user.

  Sample configuration glance-api.conf:

  [DEFAULT]
  enabled_backends = dummy:swift

  [dummy]
  swift_store_multi_tenant = True
  default_swift_reference = ref1
  swift_store_config_file = /etc/glance/glance-swift-store.conf
  swift_store_create_container_on_put = True
  store_description = "This is swift store"

  
  In a second scenario, if you configure the swift store as below in
  glance-api.conf, the glance stores-info --detail command gives a 500
  error: "Apr 17 04:40:20 akekane-zed-dev glance-api[3389648]: ERROR
  glance.common.wsgi [None req-7dcd5c18-7b31-43e5-9b22-77e20505cab7
  admin admin] Caught error: 'MultiTenantStore' object has no attribute
  'container': AttributeError: 'MultiTenantStore' object has no
  attribute 'container'".

  [dummy]
  swift_store_multi_tenant = True
  default_swift_reference = ref1
  swift_store_create_container_on_put = True
  store_description = "This is swift store"

  
  Ideally, in scenario 1 the glance stores-info --detail command should
  return 400 Bad Request, and for scenario 2 we need to identify the
  ideal configuration for swift multi-tenant mode and shape the response
  accordingly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2061947/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2061922] [NEW] max_password_length config and logs inconsistent

2024-04-16 Thread Sam Morrison
Public bug reported:

We recently rolled out a config change updating max_password_length to
avoid all the log messages. We set it to 54, as mentioned in the release
notes, which we discovered was a big mistake: it broke authentication
for everyone using existing application credentials.

There is some confusion about what to do here, because the code and the
release notes are inconsistent.


After upgrading to Zed we got a lot of these messages in the logs [1]:

"Truncating password to algorithm specific maximum length 72
characters."

In the config help [2] for "max_password_length" it says:

"The bcrypt max_password_length is 72 bytes."

In the release notes [3] it says:

"Currently only bcrypt has fixed allowed lengths defined which is 54
characters."


[1] 
https://github.com/openstack/keystone/blob/9b0b414e3eb915c89c9786abeb1307ba734f5901/keystone/common/password_hashing.py#L89
[2] 
https://github.com/openstack/keystone/blob/9b0b414e3eb915c89c9786abeb1307ba734f5901/keystone/conf/identity.py#L106
[3] https://docs.openstack.org/releasenotes/keystone/zed.html
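
For reference, the 72-byte limit is a property of the bcrypt algorithm
itself and is easy to demonstrate. A small illustration using the
standalone pyca/bcrypt package (an assumption for the example; keystone
hashes via passlib, but the truncation behaviour is the same):

import bcrypt

# bcrypt only feeds the first 72 bytes of the password into the hash.
hashed = bcrypt.hashpw(b"x" * 72, bcrypt.gensalt())

# Both checks pass, because the bytes past 72 never enter the hash.
# This is also why truncating at a different length (54) breaks
# verification of existing hashes for any secret longer than that.
print(bcrypt.checkpw(b"x" * 72, hashed))            # True
print(bcrypt.checkpw(b"x" * 72 + b"tail", hashed))  # True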

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2061922

Title:
  max_password_length config and logs inconsistent

Status in OpenStack Identity (keystone):
  New

Bug description:
  We recently rolled out a config change updating max_password_length
  to avoid all the log messages. We set it to 54, as mentioned in the
  release notes, which we discovered was a big mistake: it broke
  authentication for everyone using existing application credentials.

  There is some confusion about what to do here, because the code and
  the release notes are inconsistent.

  
  After upgrading to Zed we got a lot of these messages in the logs [1]:

  "Truncating password to algorithm specific maximum length 72
  characters."

  In the config help [2] for "max_password_length" it says:

  "The bcrypt max_password_length is 72 bytes."

  In the release notes [3] it says:

  "Currently only bcrypt has fixed allowed lengths defined which is 54
  characters."

  
  [1] 
https://github.com/openstack/keystone/blob/9b0b414e3eb915c89c9786abeb1307ba734f5901/keystone/common/password_hashing.py#L89
  [2] 
https://github.com/openstack/keystone/blob/9b0b414e3eb915c89c9786abeb1307ba734f5901/keystone/conf/identity.py#L106
  [3] https://docs.openstack.org/releasenotes/keystone/zed.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2061922/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2060587] Re: [ML2][OVS] more precise flow table cleaning

2024-04-16 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/915302
Committed: 
https://opendev.org/openstack/neutron/commit/bac1b1f721e6b23da2063340827576fd9c59d0f4
Submitter: "Zuul (22348)"
Branch: master

commit bac1b1f721e6b23da2063340827576fd9c59d0f4
Author: LIU Yulong 
Date:   Tue Apr 9 09:11:03 2024 +0800

More precise flow table cleaning

The OVS agent intends to clean the flow tables one table at a time
during restart, but in fact it does not: if one table shares a cookie
with other tables, all related flows are cleaned at once.

This patch adds the table_id param to the related call to limit the
flow cleaning to one table at a time.

Closes-Bug: #2060587
Change-Id: I266eb0f5115af718b91f930d759581616310999d
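
In rough terms the fix works like the sketch below (hedged: the
uninstall_flows keyword names follow the agent's ofswitch.py helper,
but this is an illustration, not the literal patch):

def cleanup_flows(bridge, old_cookies, table_ids):
    # Scoping the delete by table_id means a cookie that happens to be
    # shared across tables no longer wipes every table's flows at once.
    for table_id in table_ids:
        for cookie in old_cookies:
            bridge.uninstall_flows(table_id=table_id,
                                   cookie=cookie,
                                   cookie_mask=0xffffffffffffffff)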


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2060587

Title:
  [ML2][OVS] more precise flow table cleaning

Status in neutron:
  Fix Released

Bug description:
  The OVS agent intends to clean the flow tables one table at a time
  during restart, but in fact it does not. [1] If one table shares a
  cookie with other tables, all related flows are cleaned at once,
  which is rather heavy-handed.

  
  [1] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py#L186

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2060587/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2061883] [NEW] [fwaas] Duplicate entry for key 'default_firewall_groups.PRIMARY'

2024-04-16 Thread Lajos Katona
Public bug reported:

In the periodic neutron-tempest-plugin-fwaas job there are sporadic
failures with an internal server error (see [1]):

Apr 13 08:51:56.167863 np0037278106 neutron-server[59018]: ERROR
neutron.api.v2.resource oslo_db.exception.DBDuplicateEntry:
(pymysql.err.IntegrityError) (1062, "Duplicate entry
'802cc07da18040609dc5772f1d4149b9' for key 'default_firewall_groups.PRIMARY'")

802cc07da18040609dc5772f1d4149b9 is the uuid of the project/tenant in
the above exception.


Opensearch link:
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(_source),filters:!(),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22line%20269,%20in%20test_create_show_delete_firewall_group%22'),sort:!())

[1]: https://paste.opendev.org/show/bIwAbuJ88F8IPdTCJjYN/
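
The usual remedy for this kind of race is an idempotent get-or-create.
A hedged sketch of that pattern (_create_default_fwg and
_get_default_fwg are invented helper names, not the plugin's API):

from oslo_db import exception as db_exc


def ensure_default_fwg(context, project_id):
    try:
        return _create_default_fwg(context, project_id)
    except db_exc.DBDuplicateEntry:
        # A concurrent request inserted the row first (the 1062 error
        # above); fall back to reading the existing default group.
        return _get_default_fwg(context, project_id)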

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2061883

Title:
  [fwaas] Duplicate entry  for key 'default_firewall_groups.PRIMARY'

Status in neutron:
  New

Bug description:
  In the periodic neutron-tempest-plugin-fwaas job there are sporadic
  failures with an internal server error (see [1]):

  Apr 13 08:51:56.167863 np0037278106 neutron-server[59018]: ERROR
  neutron.api.v2.resource oslo_db.exception.DBDuplicateEntry:
  (pymysql.err.IntegrityError) (1062, "Duplicate entry
  '802cc07da18040609dc5772f1d4149b9' for key
  'default_firewall_groups.PRIMARY'")

  802cc07da18040609dc5772f1d4149b9 is the uuid of the project/tenant in
  the above exception.

  
  Opensearch link:
  
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(_source),filters:!(),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22line%20269,%20in%20test_create_show_delete_firewall_group%22'),sort:!())

  [1]: https://paste.opendev.org/show/bIwAbuJ88F8IPdTCJjYN/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2061883/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2028795] Re: Restarting OVS with DVR creates a network loop

2024-04-16 Thread Jakub Libosvar
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2028795

Title:
  Restarting OVS with DVR creates a network loop

Status in neutron:
  Fix Released

Bug description:
  Restarting the OVS agent with DVR enabled creates a network loop
  between the external network and the tunneling network for a very
  short period of time. This causes big problems when two agents are
  restarted at the same time.

  Steps to reproduce:
  1) Have ml2/ovs with DVR enabled
  2) Have a VM with a FIP on compute node A
  3) Have a gw port for snat traffic on network node B
  4) ping the FIP with -i 0.1 option to send icmp request every 0.1 seconds
  5) restart OVS agents on both compute node A and network node B at the
  same time

  Now the replies for the FIP traffic get dropped on compute node A for
  about 3-5 minutes, because the loop causes the local OVS on compute
  node A to learn that the GW port MAC is on the tunneling interface.
  All reply traffic uses that MAC in its destination field, and the
  NORMAL OVS action no longer floods such traffic; per its FDB entry it
  forwards it to the patch port between br-int and br-tun, where it is
  dropped until the FDB entry expires.
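
  One way to confirm the mis-learned entry during the outage (an
  illustrative diagnostic only, not part of the reproduction) is to
  check which port OVS learned the gateway MAC on via the standard
  ovs-appctl fdb/show command:

  import subprocess

  def learned_port(bridge, mac):
      # During the loop described above, the gateway MAC shows up on
      # the br-int/br-tun patch port instead of the external side.
      out = subprocess.run(["ovs-appctl", "fdb/show", bridge],
                           capture_output=True, text=True,
                           check=True).stdout
      for line in out.splitlines()[1:]:  # skip the column header
          port, _vlan, entry_mac, _age = line.split()
          if entry_mac.lower() == mac.lower():
              return port
      return None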

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2028795/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863113] Re: [RFE] Introduce new testing framework for Neutron - OVN integration - a.k.a George

2024-04-16 Thread Jakub Libosvar
** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863113

Title:
  [RFE] Introduce new testing framework for Neutron - OVN integration -
  a.k.a George

Status in neutron:
  Won't Fix

Bug description:
  Currently there is a testing framework in the Neutron tree called
  fullstack that has proven very useful over its lifetime; it has
  discovered multiple issues that were not revealed by any other
  testing suite.

  With networking-ovn, there is a new POC of a similar tool, where
  multiple environments can run on a single host in parallel,
  simulating a multi-node network and injecting failures. The tool uses
  containers managed by podman to isolate Neutron processes;
  essentially, each container represents one node in the cluster. The
  host network is used for the underlying networking between containers
  via podman networks, which in practice use linux bridges on the
  hypervisor.

  There is already a WIP patch [1] sent to upstream gerrit to prove its
  functionality on Ubuntu boxes.

  The goal of this RFE is to deliver the framework to the Neutron tree;
  later we can expand the test coverage or copy tests from the
  fullstack suite, as the two have a lot in common.

  [1] https://review.opendev.org/#/c/696926/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1863113/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2038234] Re: PortBindingChassisEvent matches all port types

2024-04-16 Thread Jakub Libosvar
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2038234

Title:
  PortBindingChassisEvent matches all port types

Status in neutron:
  Fix Released

Bug description:
  This is a regression introduced by
  https://opendev.org/openstack/neutron/commit/6890204765c5de1a91284b9b0b6bf0565673f53f
  which introduced match_fn() but does not check the port type.
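
  A hedged sketch of the missing guard (class and attribute names are
  assumptions based on the usual OVN IDL event pattern, not the actual
  patch):

  from ovsdbapp.backend.ovs_idl import event as row_event

  class PortBindingChassisEvent(row_event.RowEvent):
      # Assumed: the concrete event declares which Port_Binding types
      # it actually cares about, e.g. chassis-redirect ports.
      port_types = ("chassisredirect",)

      def match_fn(self, event, row, old=None):
          # The regression reacted to every Port_Binding row; filtering
          # on row.type restores the intended scope.
          return row.type in self.port_types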

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2038234/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2061813] [NEW] format error on glance architecture docs

2024-04-16 Thread frankming
Public bug reported:

While reading the glance documentation
(https://docs.openstack.org/glance/latest/contributor/architecture.html),
I noticed a small formatting error in the description of the glance
components. Details are in the attachment.

In the description, a hyphen separates each component name from its
description; component names are bold and descriptions are normal text.
However, some component descriptions are rendered bold too. After
digging into the source, the RST itself seems fine, so I would like to
know the real cause of this behaviour and how we can fix it.

A screenshot of the problem is attached.

** Affects: glance
 Importance: Undecided
 Status: New

** Attachment added: "aaa.png"
   https://bugs.launchpad.net/bugs/2061813/+attachment/5766083/+files/aaa.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2061813

Title:
  format error on glance architecture docs

Status in Glance:
  New

Bug description:
  While reading the glance documentation
  (https://docs.openstack.org/glance/latest/contributor/architecture.html),
  I noticed a small formatting error in the description of the glance
  components. Details are in the attachment.

  In the description, a hyphen separates each component name from its
  description; component names are bold and descriptions are normal
  text. However, some component descriptions are rendered bold too.
  After digging into the source, the RST itself seems fine, so I would
  like to know the real cause of this behaviour and how we can fix it.

  A screenshot of the problem is attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2061813/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2060974] Re: neutron-dhcp-agent attempts to read pid.haproxy but can't

2024-04-16 Thread Bernard Cafarelli
Thanks for the update and confirmation; the links will come in handy if
people stumble on this LP.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2060974

Title:
  neutron-dhcp-agent attempts to read pid.haproxy but can't

Status in neutron:
  Invalid

Bug description:
  Hi,

  From neutron-dhcp-agent.log, I can see it's trying to access:

  /var/lib/neutron/external/pids/*.pid.haproxy

  These files used to have the following unix permissions (at least in
  Debian 11, aka Bullseye):

  -rw-r--r--

  However, in Debian 12 (aka Bookworm), for some reason, they are now:

  -rw-r-

  and therefore the agent does not have the necessary rights to read
  these files.

  Note that in devstack these pid files are owned by the stack user, so
  this is not an issue there. That is not the case with the Debian
  packages, where haproxy writes the pid files as root:root while
  neutron-dhcp-agent runs as neutron:neutron and therefore cannot read
  them.

  One possibility would be reading the PIDs through privsep.

  Another fix would be to understand why the PID files aren't world
  readable. At this point, I can't tell why.
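
  A hedged sketch of the privsep option (this mirrors the usual
  oslo.privsep setup pattern, not neutron's actual privileged module):

  from oslo_privsep import capabilities as caps
  from oslo_privsep import priv_context

  default = priv_context.PrivContext(
      "neutron",
      cfg_section="privsep",
      pypath=__name__ + ".default",
      capabilities=[caps.CAP_DAC_READ_SEARCH],
  )

  @default.entrypoint
  def read_haproxy_pid(path):
      # Runs in the privileged helper, so a root-owned pid file that is
      # not world readable can still be read while the agent itself
      # keeps running as neutron:neutron.
      with open(path) as f:
          return int(f.read().strip())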

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2060974/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp