[Yahoo-eng-team] [Bug 1566622] Re: live migration fails with xenapi virt driver and SRs with old-style naming convention

2023-10-16 Thread Takashi Kajinami
I don't understand what the reason for changing the affected project could be.
Please describe it if the change was appropriate and intentional.

** Project changed: ilh-facebook => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1566622

Title:
  live migration fails with xenapi virt driver and SRs with old-style
  naming convention

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  version: commit ce5a2fb419f999bec0fb2c67413387c8b67a691a

  1. create a boot-from-volume instance prior to deploying commit 
5bd222e8d854ca7f03ee6936454ee57e0d6e1a78
  2. upgrade nova to commit 5bd222e8d854ca7f03ee6936454ee57e0d6e1a78
  3. live-migrate instance
  4. observe live-migrate action fail

  based on my analysis of logs and code:
  1. destination uses new-style SR naming convention in sr_uuid_map.
  2. source tries to use new-style SR naming convention in talking to XenAPI 
(in nova.virt.xenapi.vmops.py:VMOps.live_migrate() -> 
_call_live_migrate_command())
  3. XenAPI throws a XenAPI.Failure exception ("Got exception UUID_INVALID")
because it only knows the SR by the old-style naming convention

  example destination nova-compute, source nova-compute, and xenapi logs
  from a live-migrate request to follow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1566622/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566622] [NEW] live migration fails with xenapi virt driver and SRs with old-style naming convention

2023-10-16 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

version: commit ce5a2fb419f999bec0fb2c67413387c8b67a691a

1. create a boot-from-volume instance prior to deploying commit 
5bd222e8d854ca7f03ee6936454ee57e0d6e1a78
2. upgrade nova to commit 5bd222e8d854ca7f03ee6936454ee57e0d6e1a78
3. live-migrate instance
4. observe live-migrate action fail

based on my analysis of logs and code:
1. destination uses new-style SR naming convention in sr_uuid_map.
2. source tries to use new-style SR naming convention in talking to XenAPI (in 
nova.virt.xenapi.vmops.py:VMOps.live_migrate() -> _call_live_migrate_command())
3. XenAPI throws a XenAPI.Failure exception ("Got exception UUID_INVALID")
because it only knows the SR by the old-style naming convention (see the
sketch below)
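
For illustration only, here is a minimal sketch of the kind of lookup the
analysis above implies, assuming a XenAPI.Session object and a hypothetical
old-style name label for the SR; this is not nova's code and not a proposed
fix:

import XenAPI

def find_sr(session, new_style_uuid, old_style_label):
    """Resolve an SR the host may only know under its old-style name."""
    try:
        # This is the call that fails with XenAPI.Failure / UUID_INVALID
        # when the host only knows the SR by the old-style convention.
        return session.xenapi.SR.get_by_uuid(new_style_uuid)
    except XenAPI.Failure:
        refs = session.xenapi.SR.get_by_name_label(old_style_label)
        return refs[0] if refs else None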

example destination nova-compute, source nova-compute, and xenapi logs
from a live-migrate request to follow.

** Affects: nova
 Importance: High
 Status: Confirmed


** Tags: live-migration xenserver
-- 
live migration fails with xenapi virt driver and SRs with old-style naming 
convention
https://bugs.launchpad.net/bugs/1566622
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566622] Re: live migration fails with xenapi virt driver and SRs with old-style naming convention

2023-10-16 Thread Dat Tong Ngoc
** Project changed: nova => ilh-facebook

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1566622

Title:
  live migration fails with xenapi virt driver and SRs with old-style
  naming convention

Status in ILH-Facebook:
  Confirmed

Bug description:
  version: commit ce5a2fb419f999bec0fb2c67413387c8b67a691a

  1. create a boot-from-volume instance prior to deploying commit 
5bd222e8d854ca7f03ee6936454ee57e0d6e1a78
  2. upgrade nova to commit 5bd222e8d854ca7f03ee6936454ee57e0d6e1a78
  3. live-migrate instance
  4. observe live-migrate action fail

  based on my analysis of logs and code:
  1. destination uses new-style SR naming convention in sr_uuid_map.
  2. source tries to use new-style SR naming convention in talking to XenAPI 
(in nova.virt.xenapi.vmops.py:VMOps.live_migrate() -> 
_call_live_migrate_command())
  3. XenAPI throws a XenAPI.Failure exception ("Got exception UUID_INVALID")
because it only knows the SR by the old-style naming convention

  example destination nova-compute, source nova-compute, and xenapi logs
  from a live-migrate request to follow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ilh-facebook/+bug/1566622/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2039381] Re: Regarding Nova's inability to delete the Cinder volume for creating virtual machines (version Y)

2023-10-16 Thread sam
** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2039381

Title:
  Regarding Nova's inability to delete the Cinder volume for creating
  virtual machines (version Y)

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
   
  The relevant error logs are shown in the image, but the volume can be
deleted with the openstack CLI. The specific commands are as follows.

  CLI:

  source /etc/keystone/admin-openrc.sh  (load admin credentials)
  openstack volume set --detached 191e555c-3947-4928-be46-9f09e2190877  (volume ID)
  openstack volume delete 191e555c-3947-4928-be46-9f09e2190877  (volume ID)

  It seems that Nova is unable to call the Cinder API to detach or delete
  the volume, but I am not an expert, so I don't know whether this is a
  bug.

  
  This bug tracker is for documentation errors. Please use the following as a
  template and remove or add fields as needed. Convert [ ] into [x] to check a
  box:

  - [ ] This documentation is inaccurate in this way: __
  - [ ] This is a documentation addition request.
  - [ ] I have a fix for the documentation that I can paste below, including
    example input and output.

  If you have a troubleshooting or support question, use the following
  resources:

  - Mailing list: https://lists.openstack.org
  - IRC: the #openstack channel on OFTC

  ---
  Release: 25.2.2.dev1 on 2019-10-08 11:20:05
  SHA: fd0d336ab5be71917ef9bd94dda51774a697eca8
  Source: https://opendev.org/openstack/nova/src/doc/source/install/index.rst
  URL: https://docs.openstack.org/nova/yoga/install/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/2039381/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1983863] Re: Can't log within tpool.execute

2023-10-16 Thread melanie witt
Setting this to Fix Released because the fix landed on the master branch
which is continuously released.

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1983863

Title:
  Can't log within tpool.execute

Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.log:
  Fix Released

Bug description:
  There is a bug in eventlet where logging within a native thread can
  lead to a deadlock situation:
  https://github.com/eventlet/eventlet/issues/432

  When they encounter this issue, some OpenStack projects that use
  oslo.log, e.g. Cinder, work around it by removing any logging within
  native threads.

  There is actually a better approach.  The Swift team came up with a
  solution a long time ago, and it would be great if oslo.log could use
  this workaround automatically:
  
https://opendev.org/openstack/swift/commit/69c715c505cf9e5df29dc1dff2fa1a4847471cb6
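
  A minimal sketch of the pattern in question (illustrative only, not code
  from nova or oslo.log): a greenthread hands blocking work to a native
  thread with eventlet's tpool, and that work logs.

  import logging

  import eventlet
  from eventlet import tpool

  eventlet.monkey_patch()
  LOG = logging.getLogger(__name__)

  def blocking_io():
      # Runs in a native OS thread via tpool.execute(); the logger takes
      # logging's module-level lock here, outside the eventlet hub, which
      # is where the deadlock described above can bite.
      LOG.debug("inside a native thread")
      return 42

  result = tpool.execute(blocking_io)
  LOG.info("result: %s", result)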

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1983863/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2039463] [NEW] live migration jobs failing missing lxml

2023-10-16 Thread Dan Smith
Public bug reported:

Our jobs that run the evacuate post hook are failing due to not being
able to run the ansible virt module because of a missing lxml library:

2023-10-16 14:38:57.818847 | TASK [run-evacuate-hook : Register running domains 
on subnode]
2023-10-16 14:38:58.598524 | controller -> 172.99.67.184 | ERROR
2023-10-16 14:38:58.598912 | controller -> 172.99.67.184 | {
2023-10-16 14:38:58.598981 | controller -> 172.99.67.184 |   "msg": "The `lxml` 
module is not importable. Check the requirements."
2023-10-16 14:38:58.599046 | controller -> 172.99.67.184 | }

Not sure why this is coming up now, but it's likely related to the
recent switch to global venv for our services and some other dep change
that no longer gets us this on the host for free.
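
A quick way to reproduce the check the ansible virt module is complaining
about is to try the import with the same interpreter the job uses (a trivial
sketch, not part of the job itself):

try:
    import lxml.etree  # noqa: F401
except ImportError as exc:
    raise SystemExit("lxml is not importable here: %s" % exc)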

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2039463

Title:
  live migration jobs failing missing lxml

Status in OpenStack Compute (nova):
  New

Bug description:
  Our jobs that run the evacuate post hook are failing due to not being
  able to run the ansible virt module because of a missing lxml library:

  2023-10-16 14:38:57.818847 | TASK [run-evacuate-hook : Register running 
domains on subnode]
  2023-10-16 14:38:58.598524 | controller -> 172.99.67.184 | ERROR
  2023-10-16 14:38:58.598912 | controller -> 172.99.67.184 | {
  2023-10-16 14:38:58.598981 | controller -> 172.99.67.184 |   "msg": "The 
`lxml` module is not importable. Check the requirements."
  2023-10-16 14:38:58.599046 | controller -> 172.99.67.184 | }

  Not sure why this is coming up now, but it's likely related to the
  recent switch to global venv for our services and some other dep
  change that no longer gets us this on the host for free.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2039463/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2039464] [NEW] disallowed by policy error when user try to create_port with fixed_Ips

2023-10-16 Thread Satish Patel
Public bug reported:

OS: Ubuntu 22.04
Openstack Release: Zed 
Deployment tool: Kolla-ansible
Neutron Plugin: OVN 


I have set up an RBAC policy on my external network, and here is the
policy.yaml file:

"create_port:fixed_ips": "rule:context_is_advsvc or rule:network_owner or 
rule:admin_only or rule:shared"
"create_port:fixed_ips:ip_address": "rule:context_is_advsvc or 
rule:network_owner or rule:admin_only or rule:shared"
"create_port:fixed_ips:subnet_id": "rule:context_is_advsvc or 
rule:network_owner or rule:admin_only or rule:shared"

I have RBAC set up on the following network to allow a specific project to
access it.

# openstack network show public-network-948
+----------------------------+----------------------------------------+
| Field                      | Value                                  |
+----------------------------+----------------------------------------+
| admin_state_up             | UP                                     |
| availability_zone_hints    |                                        |
| availability_zones         |                                        |
| created_at                 | 2023-09-01T20:31:36Z                   |
| description                |                                        |
| dns_domain                 |                                        |
| id                         | 5aacb586-c234-449e-a209-45fc63c8de26   |
| ipv4_address_scope         | None                                   |
| ipv6_address_scope         | None                                   |
| is_default                 | False                                  |
| is_vlan_transparent        | None                                   |
| mtu                        | 1500                                   |
| name                       | public-network-948                     |
| port_security_enabled      | True                                   |
| project_id                 | 1ed68ab792854dc99c1b2d31bf90019b       |
| provider:network_type      | None                                   |
| provider:physical_network  | None                                   |
| provider:segmentation_id   | None                                   |
| qos_policy_id              | None                                   |
| revision_number            | 9                                      |
| router:external            | External                               |
| segments                   | None                                   |
| shared                     | True                                   |
| status                     | ACTIVE                                 |
| subnets                    | d36886a2-99d3-4e2b-93ed-9e3cfabf5817,  |
|                            | dba7a427-dccb-4a5a-a8e0-23fcda64666d   |
| tags                       |                                        |
| tenant_id                  | 1ed68ab792854dc99c1b2d31bf90019b       |
| updated_at                 | 2023-10-15T18:13:52Z                   |
+----------------------------+----------------------------------------+

When a normal user tries to create a port, they get the following error:

# openstack port create --network public-network-1 --fixed-ip 
subnet=dba7a427-dccb-4a5a-a8e0-23fcda64666d,ip-address=204.247.186.133 test1
ForbiddenException: 403: Client Error for url: 
http://192.168.18.100:9696/v2.0/ports, (rule:create_port and 
(rule:create_port:fixed_ips and (rule:create_port:fixed_ips:subnet_id and 
rule:create_port:fixed_ips:ip_address))) is disallowed by policy
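
The same request expressed with the openstacksdk, which makes explicit which
attributes the create_port:fixed_ips* rules gate (a sketch only; the cloud
name is a placeholder and the IDs are taken from the output above):

import openstack

# Connect with the normal (non-admin) project credentials.
conn = openstack.connect(cloud="mycloud")

# Passing fixed_ips with subnet_id and ip_address is what triggers the
# create_port:fixed_ips, :subnet_id and :ip_address policy checks.
port = conn.network.create_port(
    network_id="5aacb586-c234-449e-a209-45fc63c8de26",
    fixed_ips=[{
        "subnet_id": "dba7a427-dccb-4a5a-a8e0-23fcda64666d",
        "ip_address": "204.247.186.133",
    }],
    name="test1",
)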


openstack CLI debug output: https://pastebin.com/act1n7cv


Reference bugs:
https://bugs.launchpad.net/neutron/+bug/1808112
https://bugs.launchpad.net/neutron/+bug/1833455

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.

[Yahoo-eng-team] [Bug 1998789] Re: [SRU] PooledLDAPHandler.result3 does not release pool connection back when an exception is raised

2023-10-16 Thread Corey Bryant
Thanks Mustafa. For the SRU, this will be included for victoria+ in
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/2039176. Let's
target this bug for focal/ussuri.

** Changed in: keystone (Ubuntu Lunar)
   Status: Fix Released => Fix Committed

** Changed in: cloud-archive/antelope
   Status: Fix Released => Fix Committed

** Changed in: keystone (Ubuntu Jammy)
   Status: New => Fix Committed

** Changed in: cloud-archive/victoria
   Status: New => Fix Committed

** Changed in: cloud-archive/wallaby
   Status: New => Fix Committed

** Changed in: cloud-archive/xena
   Status: New => Fix Committed

** Changed in: cloud-archive/yoga
   Status: New => Fix Committed

** Changed in: cloud-archive/zed
   Status: New => Fix Committed

** Changed in: cloud-archive/ussuri
   Status: New => Triaged

** Changed in: keystone (Ubuntu)
   Status: New => Fix Released

** Changed in: cloud-archive
   Status: New => Fix Released

** Changed in: keystone (Ubuntu Focal)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1998789

Title:
  [SRU] PooledLDAPHandler.result3 does not release pool connection back
  when an exception is raised

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive antelope series:
  Fix Committed
Status in Ubuntu Cloud Archive ussuri series:
  Triaged
Status in Ubuntu Cloud Archive victoria series:
  Fix Committed
Status in Ubuntu Cloud Archive wallaby series:
  Fix Committed
Status in Ubuntu Cloud Archive xena series:
  Fix Committed
Status in Ubuntu Cloud Archive yoga series:
  Fix Committed
Status in Ubuntu Cloud Archive zed series:
  Fix Committed
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Ubuntu:
  Fix Released
Status in keystone source package in Focal:
  Triaged
Status in keystone source package in Jammy:
  Fix Committed
Status in keystone source package in Lunar:
  Fix Committed

Bug description:
  [Impact]

  This SRU is a backport of
  https://review.opendev.org/c/openstack/keystone/+/866723 to the
  respective Ubuntu and UCA releases. The patch is merged to all respective
  upstream branches (master & stable/[u,v,w,x,y,z]).

  This SRU intends to fix a denial-of-service bug that happens when
  keystone uses pooled ldap connections. In pooled ldap connection mode,
  keystone borrows a connection from the pool, performs the LDAP operation and
  releases it back to the pool. But if an exception or error happens
  while the LDAP connection is still borrowed, Keystone fails to release
  the connection back to the pool, hogging it forever. If this happens
  for all the pooled connections, the connection pool will be exhausted
  and Keystone will no longer be able to perform LDAP operations.

  The fix corrects this behavior by allowing the connection to be released
  back to the pool even if an exception/error happens during the LDAP
  operation.
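
  A minimal sketch of the release-on-error pattern described above
  (illustrative only; pool.acquire()/pool.release() are hypothetical names,
  not keystone's or ldappool's actual API):

  import ldap  # python-ldap

  def pooled_search(pool, base_dn, search_filter):
      conn = pool.acquire()  # borrow a connection from the pool
      try:
          return conn.search_s(base_dn, ldap.SCOPE_SUBTREE, search_filter)
      finally:
          # Released even if search_s() raises (e.g. ldap.TIMEOUT), so
          # failed operations cannot exhaust the pool.
          pool.release(conn)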

  [Test Case]

  - Deploy an LDAP server of your choice
  - Fill it with enough data that the search takes more than
`pool_connection_timeout` seconds
  - Define a keystone domain using the LDAP driver with the following options:

  [ldap]
  use_pool = True
  page_size = 100
  pool_connection_timeout = 3
  pool_retry_max = 3
  pool_size = 10

  - Point the domain to the LDAP server
  - Try to log in to the OpenStack dashboard, or do anything that uses
the LDAP user
  - Observe /var/log/apache2/keystone_error.log; it should contain
ldap.TIMEOUT() stack traces followed by `ldappool.MaxConnectionReachedError`
stack traces

  To confirm the fix, repeat the scenario and observe that the
  "/var/log/apache2/keystone_error.log" does not contain
  `ldappool.MaxConnectionReachedError` stack traces and LDAP operation
  in motion is successful (e.g. OpenStack Dashboard login)

  [Regression Potential]
  The patch is quite trivial and should not affect any deployment in a negative 
way. The LDAP pool functionality can be disabled by setting "use_pool=False" in 
case of any regression.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1998789/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1998789] Re: [SRU] PooledLDAPHandler.result3 does not release pool connection back when an exception is raised

2023-10-16 Thread Edward Hope-Morley
** Changed in: cloud-archive/yoga
   Status: Fix Released => New

** Changed in: cloud-archive/zed
   Status: Fix Released => New

** Also affects: cloud-archive/antelope
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/antelope
   Status: New => Fix Released

** Also affects: keystone (Ubuntu Lunar)
   Importance: Undecided
   Status: New

** Changed in: keystone (Ubuntu Lunar)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1998789

Title:
  [SRU] PooledLDAPHandler.result3 does not release pool connection back
  when an exception is raised

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive antelope series:
  Fix Released
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  New
Status in Ubuntu Cloud Archive wallaby series:
  New
Status in Ubuntu Cloud Archive xena series:
  New
Status in Ubuntu Cloud Archive yoga series:
  New
Status in Ubuntu Cloud Archive zed series:
  New
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Ubuntu:
  New
Status in keystone source package in Focal:
  New
Status in keystone source package in Jammy:
  New
Status in keystone source package in Lunar:
  Fix Released

Bug description:
  [Impact]

  This SRU is a backport of
  https://review.opendev.org/c/openstack/keystone/+/866723 to the
  respective Ubuntu and UCA releases. The patch is merged to all respective
  upstream branches (master & stable/[u,v,w,x,y,z]).

  This SRU intends to fix a denial-of-service bug that happens when
  keystone uses pooled ldap connections. In pooled ldap connection mode,
  keystone borrows a connection from the pool, performs the LDAP operation and
  releases it back to the pool. But if an exception or error happens
  while the LDAP connection is still borrowed, Keystone fails to release
  the connection back to the pool, hogging it forever. If this happens
  for all the pooled connections, the connection pool will be exhausted
  and Keystone will no longer be able to perform LDAP operations.

  The fix corrects this behavior by allowing the connection to be released
  back to the pool even if an exception/error happens during the LDAP
  operation.

  [Test Case]

  - Deploy an LDAP server of your choice
  - Fill it with enough data that the search takes more than
`pool_connection_timeout` seconds
  - Define a keystone domain using the LDAP driver with the following options:

  [ldap]
  use_pool = True
  page_size = 100
  pool_connection_timeout = 3
  pool_retry_max = 3
  pool_size = 10

  - Point the domain to the LDAP server
  - Try to log in to the OpenStack dashboard, or do anything that uses
the LDAP user
  - Observe /var/log/apache2/keystone_error.log; it should contain
ldap.TIMEOUT() stack traces followed by `ldappool.MaxConnectionReachedError`
stack traces

  To confirm the fix, repeat the scenario and observe that the
  "/var/log/apache2/keystone_error.log" does not contain
  `ldappool.MaxConnectionReachedError` stack traces and LDAP operation
  in motion is successful (e.g. OpenStack Dashboard login)

  [Regression Potential]
  The patch is quite trivial and should not affect any deployment in a negative 
way. The LDAP pool functionality can be disabled by setting "use_pool=False" in 
case of any regression.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1998789/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1980214] Re: Django4: TemplateTagTests.test_site_branding_tag test failure in Debian unstable

2023-10-16 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/851262
Committed: 
https://opendev.org/openstack/horizon/commit/b893bcdee32a640f148e1682485da849f0058f31
Submitter: "Zuul (22348)"
Branch: master

commit b893bcdee32a640f148e1682485da849f0058f31
Author: Akihiro Motoki 
Date:   Thu Jul 28 04:29:58 2022 +0900

Make site_branding tag work with Django 4.0

A test for site_branding tag starts to fail with Django 4.0.
It seems to happen as settings.SITE_BRANDING is _("Horizon") and
a translation marker _() is no longer evaluated during rendering.

As a solution, this commit changes the implementation of
site_branding tag to use "simple_tag" method
as django.template.Library.simple_tag() [1] seems to handle
an i18n-ed string properly.

[1] 
https://docs.djangoproject.com/en/4.0/howto/custom-template-tags/#simple-tags

Closes-Bug: #1980214
Change-Id: I6fdfffbeef2b405da21289d37722e3f068e27fea
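
For reference, the simple_tag approach the commit describes looks roughly
like this (a sketch, not the actual horizon implementation):

from django import template
from django.conf import settings

register = template.Library()

@register.simple_tag
def site_branding():
    # simple_tag coerces a lazy translation object (__proxy__) to a plain
    # string while rendering, avoiding the TypeError shown in this bug.
    return settings.SITE_BRANDING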


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1980214

Title:
  Django4: TemplateTagTests.test_site_branding_tag test failure in
  Debian unstable

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Hi,

  Building Horizon (yoga) in Debian Unstable resulted in failure with
  TemplateTagTests.test_site_branding_tag. Here's the build log:

  ___ TemplateTagTests.test_site_branding_tag 

  [gw9] linux -- Python 3.9.13 /usr/bin/python3.9

  self =
  

  def test_site_branding_tag(self):
  """Test if site_branding tag renders the correct setting."""
  >   rendered_str = self.render_template_tag("site_branding", "branding")

  horizon/test/unit/templatetags/test_templatetags.py:58: 
  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ 
  horizon/test/unit/templatetags/test_templatetags.py:47: in render_template_tag
  return self.render_template(tag_call, tag_require)
  horizon/test/unit/templatetags/test_templatetags.py:54: in render_template
  return template.render(Context(context))
  /usr/lib/python3/dist-packages/django/template/base.py:175: in render
  return self._render(context)
  /usr/lib/python3/dist-packages/django/test/utils.py:111: in 
instrumented_test_render
  return self.nodelist.render(context)
  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ 

  self = [, 
, ]
  context = [{'True': True, 'False': False, 'None': None}, {}]

  def render(self, context):
  >   return SafeString("".join([node.render_annotated(context) for node in 
self]))
  E   TypeError: sequence item 2: expected str instance, __proxy__ found

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1980214/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2032770] Re: [OVN] port creation with --enable-uplink-status-propagation does not work with OVN mechanism driver

2023-10-16 Thread Mustafa Kemal Gilor
** Also affects: neutron (Ubuntu Focal)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2032770

Title:
  [OVN] port creation with --enable-uplink-status-propagation does not
  work with OVN mechanism driver

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive antelope series:
  New
Status in Ubuntu Cloud Archive ussuri series:
  In Progress
Status in Ubuntu Cloud Archive wallaby series:
  New
Status in Ubuntu Cloud Archive xena series:
  New
Status in Ubuntu Cloud Archive yoga series:
  New
Status in Ubuntu Cloud Archive zed series:
  New
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New
Status in neutron source package in Focal:
  New
Status in neutron source package in Jammy:
  New
Status in neutron source package in Lunar:
  New

Bug description:
  The port "uplink_status_propagation" feature does not work when OVN is
  used as the mechanism driver. The reproducer below works fine with
  openvswitch as the mechanism driver, but not with OVN:

  openstack port create --binding-profile trusted=true --enable-uplink-
  status-propagation --net private --vnic-type direct test-sriov-bond-
  enable-uplink-status-propagation-vm-1-port-1

  The command fails with the following error when OVN is the mech
  driver:

  BadRequestException: 400: Client Error for url:
  https://10.5.3.81:9696/v2.0/ports, Unrecognized attribute(s)
  'propagate_uplink_status'

  With ML2/OVS, the port creation command above succeeds without any
  errors.

  As for the ml2_conf, "uplink_status_propagation" is listed in the
  extension drivers:

  [ml2]
  extension_drivers=port_security,dns_domain_ports,uplink_status_propagation
  type_drivers = geneve,gre,vlan,flat,local
  tenant_network_types = geneve,gre,vlan,flat,local
  mechanism_drivers = ovn,sriovnicswitch
  /*...*/

  I also found the following document which shows the feature gap
  between ML2/OVS and OVN, but the uplink_status_propagation is not
  listed: https://docs.openstack.org/neutron/latest/ovn/gaps.html#id9 ,
  maybe this page can be updated as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2032770/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2039417] [NEW] [master][stable/2023.2][functional] test_maintenance.TestMaintenance tests fails randomly

2023-10-16 Thread yatin
Public bug reported:

The functional test fails randomly as follows:
ft1.3:
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_maintenance.TestMaintenance.test_port
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 178, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_maintenance.py",
 line 306, in test_port
neutron_net = self._create_network('network1')
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_maintenance.py",
 line 68, in _create_network
return self.deserialize(self.fmt, res)['network']
KeyError: 'network'

neutron server log when it fails:-
2023-10-06 08:16:33.966 37713 DEBUG neutron.db.ovn_revision_numbers_db [None 
req-34e699fe-0fdc-4373-8b91-5b7dd1ac60cb - 46f70361-ba71-4bd0-9769-3573fd227c4b 
- - - -] create_initial_revision uuid=6ab7bd53-277b-4133-ac32-52b1d0c90f78, 
type=security_groups, rev=-1 create_initial_revision 
/home/zuul/src/opendev.org/openstack/neutron/neutron/db/ovn_revision_numbers_db.py:108
2023-10-06 08:16:33.973 37713 ERROR ovsdbapp.backend.ovs_idl.transaction [-] 
OVSDB Error: The transaction failed because the IDL has been configured to 
require a database lock but didn't get it yet or has already lost it
2023-10-06 08:16:33.974 37713 ERROR ovsdbapp.backend.ovs_idl.transaction [None 
req-34e699fe-0fdc-4373-8b91-5b7dd1ac60cb - 46f70361-ba71-4bd0-9769-3573fd227c4b 
- - - -] Traceback (most recent call last):
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/ovsdbapp/backend/ovs_idl/connection.py",
 line 118, in run
txn.results.put(txn.do_commit())
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
 line 123, in do_commit
raise RuntimeError(msg)
RuntimeError: OVSDB Error: The transaction failed because the IDL has been 
configured to require a database lock but didn't get it yet or has already lost 
it

2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager [None 
req-34e699fe-0fdc-4373-8b91-5b7dd1ac60cb - 46f70361-ba71-4bd0-9769-3573fd227c4b 
- - - -] Error during notification for 
neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver._create_security_group-11049860
 security_group, after_create: RuntimeError: OVSDB Error: The transaction 
failed because the IDL has been configured to require a database lock but 
didn't get it yet or has already lost it
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager Traceback 
(most recent call last):
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/neutron_lib/callbacks/manager.py",
 line 181, in _notify_loop
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager 
callback(resource, event, trigger, payload=payload)
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 409, in _create_security_group
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager 
self._ovn_client.create_security_group(context,
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 2328, in create_security_group
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager with 
self._nb_idl.transaction(check_error=True) as txn:
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3.10/contextlib.py", line 142, in __exit__
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager 
next(self.gen)
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py",
 line 272, in transaction
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager with 
super(OvsdbNbOvnIdl, self).transaction(*args, **kwargs) as t:
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3.10/contextlib.py", line 142, in __exit__
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager 
next(self.gen)
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/ovsdbapp/api.py",
 line 114, in transaction
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager with 

[Yahoo-eng-team] [Bug 2032770] Re: [OVN] port creation with --enable-uplink-status-propagation does not work with OVN mechanism driver

2023-10-16 Thread Mustafa Kemal Gilor
** No longer affects: cloud-archive/bobcat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2032770

Title:
  [OVN] port creation with --enable-uplink-status-propagation does not
  work with OVN mechanism driver

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive antelope series:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive wallaby series:
  New
Status in Ubuntu Cloud Archive xena series:
  New
Status in Ubuntu Cloud Archive yoga series:
  New
Status in Ubuntu Cloud Archive zed series:
  New
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New
Status in neutron source package in Jammy:
  New
Status in neutron source package in Lunar:
  New

Bug description:
  The port "uplink_status_propagation" feature does not work when OVN is
  used as the mechanism driver. The reproducer below works fine with
  openvswitch as the mechanism driver, but not with OVN:

  openstack port create --binding-profile trusted=true --enable-uplink-
  status-propagation --net private --vnic-type direct test-sriov-bond-
  enable-uplink-status-propagation-vm-1-port-1

  The command fails with the following error when OVN is the mech
  driver:

  BadRequestException: 400: Client Error for url:
  https://10.5.3.81:9696/v2.0/ports, Unrecognized attribute(s)
  'propagate_uplink_status'

  With ML2/OVS, the port creation command above succeeds without any
  errors.

  As for the ml2_conf, "uplink_status_propagation" is listed in the
  extension drivers:

  [ml2]
  extension_drivers=port_security,dns_domain_ports,uplink_status_propagation
  type_drivers = geneve,gre,vlan,flat,local
  tenant_network_types = geneve,gre,vlan,flat,local
  mechanism_drivers = ovn,sriovnicswitch
  /*...*/

  I also found the following document which shows the feature gap
  between ML2/OVS and OVN, but the uplink_status_propagation is not
  listed: https://docs.openstack.org/neutron/latest/ovn/gaps.html#id9 ,
  maybe this page can be updated as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2032770/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp