[Yahoo-eng-team] [Bug 2062573] [NEW] pysendfile library is unmaintained

2024-04-19 Thread Takashi Kajinami
Public bug reported:

The pysendfile library[1] was added as an optional dependency for zero-
copy image upload[2], but the library has had no release for 10 years.

We should consider replacing it with os.sendfile, or removing the
feature, rather than keeping the unmaintained library as a dependency.

[1] https://pypi.org/project/pysendfile/
[2] https://review.opendev.org/c/openstack/glance/+/3863
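
As a rough illustration, os.sendfile (in the standard library since
Python 3.3) exposes the same call signature pysendfile does, so a
switch could look like the sketch below (the fallback policy here is
illustrative, not Glance's actual code):

    import os

    try:
        # Prefer the maintained standard-library implementation.
        sendfile = os.sendfile
    except AttributeError:
        # os.sendfile is POSIX-only; fall back to pysendfile if it is
        # installed, otherwise disable zero-copy upload.
        try:
            from sendfile import sendfile
        except ImportError:
            sendfile = None

Both implementations take (out_fd, in_fd, offset, count) and return
the number of bytes sent.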

** Affects: glance
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2062573

Title:
  pysendfile library is unmaintained

Status in Glance:
  In Progress

Bug description:
  The pysendfile library[1] was added as an optional dependency for zero-
  copy image upload[2], but the library has had no release for 10 years.

  We should consider replacing it with os.sendfile, or removing the
  feature, rather than keeping the unmaintained library as a dependency.

  [1] https://pypi.org/project/pysendfile/
  [2] https://review.opendev.org/c/openstack/glance/+/3863

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2062573/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2062559] [NEW] deep container (swift) folders not working properly

2024-04-19 Thread Martin Oravec
Public bug reported:

I have Swift containers with a folder structure deeper than 2 levels.
When browsing these containers via the Horizon dashboard, "%2F" appears
instead of "/" in the address bar once I browse deeper than the second
directory level. No objects are found until the address is manually
rewritten with slashes.
Also, when I try to create a deep folder via Horizon, folders with
multiple "%2F" in their names are created instead of the proper deep
folder structure.
I started experiencing this behavior after updating from Yoga to Zed
(26.5.2) / Antelope (27.4.2).
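
For illustration, the symptom matches "/" not being marked as a safe
character during percent-encoding, so every additional encoding pass
escapes it again (this is a sketch of the behavior, not Horizon's
actual code):

    from urllib.parse import quote

    path = "level1/level2/level3/"
    print(quote(path, safe=""))    # level1%2Flevel2%2Flevel3%2F
    print(quote(path, safe="/"))   # level1/level2/level3/ (expected)
    # A second encoding pass then escapes the percent signs themselves:
    print(quote(quote(path, safe=""), safe=""))  # level1%252Flevel2...

Once the encoded path is fed back into the next request, the "%2F"
segments are treated as literal characters of the object name instead
of path separators.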

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2062559

Title:
  deep container (swift) folders not working properly

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I have Swift containers with a folder structure deeper than 2 levels.
  When browsing these containers via the Horizon dashboard, "%2F" appears
  instead of "/" in the address bar once I browse deeper than the second
  directory level. No objects are found until the address is manually
  rewritten with slashes.
  Also, when I try to create a deep folder via Horizon, folders with
  multiple "%2F" in their names are created instead of the proper deep
  folder structure.
  I started experiencing this behavior after updating from Yoga to Zed
  (26.5.2) / Antelope (27.4.2).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2062559/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2062127] Re: Rebuild from BFV Instance Snapshot fails

2024-04-19 Thread Rajat Dhasmana
Adding the error trace here since the paste could expire

Apr 18 09:29:40 dev-compute1 nova-compute[2499881]: 2024-04-18 09:29:40.553 
2499881 INFO nova.compute.manager [None 
req-3dfd2905-9dbe-4a2d-ac7c-e0a7328c39f3 4841276dcdbe4ab096ef60b1744c4fa9 
f2e52a5c5d1c4ca1b51274619b517e0e - - default default] [instance: 
43dc3e18-5ca7-4df7-8cac-b5d96eba6bfd] Rebuilding instance
Apr 18 09:29:45 dev-compute1 nova-compute[2499881]: 2024-04-18 09:29:45.986 
2499881 INFO nova.virt.libvirt.driver [None 
req-3dfd2905-9dbe-4a2d-ac7c-e0a7328c39f3 4841276dcdbe4ab096ef60b1744c4fa9 
f2e52a5c5d1c4ca1b51274619b517e0e - - default default] [instance: 
43dc3e18-5ca7-4df7-8cac-b5d96eba6bfd] Instance shutdown successfully after 5 
seconds.
Apr 18 09:29:45 dev-compute1 nova-compute[2499881]: 2024-04-18 09:29:45.994 
2499881 INFO nova.virt.libvirt.driver [-] [instance: 
43dc3e18-5ca7-4df7-8cac-b5d96eba6bfd] Instance destroyed successfully.
Apr 18 09:29:46 dev-compute1 nova-compute[2499881]: 2024-04-18 09:29:46.004 
2499881 INFO nova.virt.libvirt.driver [-] [instance: 
43dc3e18-5ca7-4df7-8cac-b5d96eba6bfd] Instance destroyed successfully.
Apr 18 09:29:46 dev-compute1 nova-compute[2499881]: 2024-04-18 09:29:46.220 
2499881 INFO os_vif [None req-3dfd2905-9dbe-4a2d-ac7c-e0a7328c39f3 
4841276dcdbe4ab096ef60b1744c4fa9 f2e52a5c5d1c4ca1b51274619b517e0e - - default 
default] Successfully unplugged vif 
VIFBridge(active=True,address=fa:16:3e:ea:b9:d7,bridge_name='qbrc6b8410e-fb',has_traffic_filtering=True,id=c6b8410e-fbdc-45b6-b3b3-6dd5515226c5,network=Network(579804e9-8b7c-4402-b23b-02a89c31284d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6b8410e-fb')
Apr 18 09:29:46 dev-compute1 nova-compute[2499881]: 2024-04-18 09:29:46.266 
2499881 INFO nova.virt.libvirt.driver [None 
req-3dfd2905-9dbe-4a2d-ac7c-e0a7328c39f3 4841276dcdbe4ab096ef60b1744c4fa9 
f2e52a5c5d1c4ca1b51274619b517e0e - - default default] [instance: 
43dc3e18-5ca7-4df7-8cac-b5d96eba6bfd] Deleting instance files 
/var/lib/nova/instances/43dc3e18-5ca7-4df7-8cac-b5d96eba6bfd_del
Apr 18 09:29:46 dev-compute1 nova-compute[2499881]: 2024-04-18 09:29:46.268 
2499881 INFO nova.virt.libvirt.driver [None 
req-3dfd2905-9dbe-4a2d-ac7c-e0a7328c39f3 4841276dcdbe4ab096ef60b1744c4fa9 
f2e52a5c5d1c4ca1b51274619b517e0e - - default default] [instance: 
43dc3e18-5ca7-4df7-8cac-b5d96eba6bfd] Deletion of 
/var/lib/nova/instances/43dc3e18-5ca7-4df7-8cac-b5d96eba6bfd_del complete
Apr 18 09:29:46 dev-compute1 nova-compute[2499881]: 2024-04-18 09:29:46.432 
2499881 WARNING nova.virt.libvirt.driver [None 
req-3dfd2905-9dbe-4a2d-ac7c-e0a7328c39f3 4841276dcdbe4ab096ef60b1744c4fa9 
f2e52a5c5d1c4ca1b51274619b517e0e - - default default] [instance: 
43dc3e18-5ca7-4df7-8cac-b5d96eba6bfd] During detach_volume, instance 
disappeared.: nova.exception.InstanceNotFound: Instance 
43dc3e18-5ca7-4df7-8cac-b5d96eba6bfd could not be found.
Apr 18 09:29:47 dev-compute1 nova-compute[2499881]: 2024-04-18 09:29:47.520 
2499881 WARNING nova.compute.manager [None 
req-3dfd2905-9dbe-4a2d-ac7c-e0a7328c39f3 4841276dcdbe4ab096ef60b1744c4fa9 
f2e52a5c5d1c4ca1b51274619b517e0e - - default default] [instance: 
43dc3e18-5ca7-4df7-8cac-b5d96eba6bfd] Timeout waiting for 
['volume-reimaged-d4b22ad0-dcc7-485f-bab7-b4d82eb61987'] for instance with 
vm_state active and task_state rebuilding. Event states are: 
volume-reimaged-d4b22ad0-dcc7-485f-bab7-b4d82eb61987: timed out after 0.00 
seconds: eventlet.timeout.Timeout: 0 seconds
Apr 18 09:29:47 dev-compute1 nova-compute[2499881]: 
/openstack/venvs/nova-27.3.0/lib/python3.10/site-packages/oslo_serialization/jsonutils.py:180:
 UserWarning: Cannot convert  to 
primitive, will raise ValueError instead of warning in version 3.0
Apr 18 09:29:47 dev-compute1 nova-compute[2499881]:   warnings.warn("Cannot 
convert %r to primitive, will raise ValueError "
Apr 18 09:29:48 dev-compute1 nova-compute[2499881]: 2024-04-18 09:29:48.523 
2499881 INFO nova.compute.manager [None 
req-3dfd2905-9dbe-4a2d-ac7c-e0a7328c39f3 4841276dcdbe4ab096ef60b1744c4fa9 
f2e52a5c5d1c4ca1b51274619b517e0e - - default default] [instance: 
43dc3e18-5ca7-4df7-8cac-b5d96eba6bfd] Successfully reverted task state from 
rebuilding on failure for instance.
Apr 18 09:29:48 dev-compute1 nova-compute[2499881]: 2024-04-18 09:29:48.537 
2499881 ERROR oslo_messaging.rpc.server [None 
req-3dfd2905-9dbe-4a2d-ac7c-e0a7328c39f3 4841276dcdbe4ab096ef60b1744c4fa9 
f2e52a5c5d1c4ca1b51274619b517e0e - - default default] Exception during message 
handling: ValueError: Circular reference detected
2024-04-18 09:29:48.537 
2499881 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2024-04-18 09:29:48.537 
2499881 ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-27.3.0/lib/python3.10/site-packages/nova/compute/utils.py",
 line 1439, in decorated_function

[Yahoo-eng-team] [Bug 2062127] [NEW] Rebuild from BFV Instance Snapshot fails

2024-04-19 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Issue: Rebuild from a Booted From Volume instance snapshot fails

Timeout waiting for ['volume-
reimaged-d4b22ad0-dcc7-485f-bab7-b4d82eb61987'] for instance with
vm_state active and task_state rebuilding. Event states are: volume-
reimaged-d4b22ad0-dcc7-485f-bab7-b4d82eb61987: timed out after 0.00
seconds: eventlet.timeout.Timeout: 0 seconds

Expected state: the minimum timeout is 20 seconds per GB, so the wait
should never expire after 0.00 seconds (see the sketch after the error
logs below).
Analysis:
 I followed the steps below:

 1. Create a Booted From Volume instance
https://paste.openstack.org/show/bWBSIO7Mr9OIHJna9wk7/

 2. Create a snapshot of the Booted From Volume instance
https://paste.openstack.org/show/bJVPXDYB45FsRbTIhel5/

 3. Rebuild from the Booted From Volume instance snapshot
https://paste.openstack.org/show/bHau3AXs789hpYSnLFvW/
 
Status: The rebuild fails and throws the error logs below:
https://paste.openstack.org/show/bRsJ69NmWzzDVbFiaWW5/
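
The "timed out after 0.00 seconds" in the trace suggests the per-GB
timeout is being multiplied by a volume size of 0. A hedged sketch of
the expected computation (the names follow the report's wording, not
nova's exact code):

    REIMAGE_TIMEOUT_PER_GB = 20  # seconds per GB, the reported minimum

    def reimage_wait_timeout(volume_size_gb):
        # A size of 0 GB (e.g. not yet populated when the event wait
        # starts) must not produce a 0-second wait.
        return max(1, volume_size_gb) * REIMAGE_TIMEOUT_PER_GB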

** Affects: nova
 Importance: Medium
 Assignee: Rajat Dhasmana (whoami-rajat)
 Status: In Progress


** Tags: bfv cinder nova rebuild
-- 
Rebuild from BFV Instance Snapshot fails
https://bugs.launchpad.net/bugs/2062127
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2062536] [NEW] [ovn] agent_health_check does not work for ovn agents

2024-04-19 Thread Liu Xie
Public bug reported:


When "debug" is set to True, some logs show "found 0 active agents" after
the agent_health_check. 
It seems that the agent_health_check mechanism does not work for ovn agents.
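
An illustrative guess at the mechanism (a sketch, not neutron's actual
code): OVN "agents" are synthesized from OVN southbound Chassis rows
rather than stored in neutron's agents table, so a health check that
only counts database-backed agents would find none of them:

    def agent_health_check(plugin, context):
        # With ML2/OVN this DB-backed query can come back empty even
        # while the chassis are alive in the OVN southbound.
        agents = plugin.get_agents(context, filters={"alive": [True]})
        print("found %d active agents" % len(agents))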

** Affects: neutron
 Importance: Undecided
 Assignee: Liu Xie (liushy)
 Status: New


** Tags: ovn

** Changed in: neutron
 Assignee: (unassigned) => Liu Xie (liushy)

** Tags added: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2062536

Title:
  [ovn] agent_health_check does not work for ovn agents

Status in neutron:
  New

Bug description:
  
  When "debug" is set to True, some logs show "found 0 active agents" after
  the agent_health_check. 
  It seems that the agent_health_check mechanism does not work for ovn agents.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2062536/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2062523] [NEW] evacuation: running VM shown as stopped after compute service starts on the first host

2024-04-19 Thread Amit Uniyal
Public bug reported:

Description
===
After evacuation, when the original host/node becomes serviceable again
(i.e. nova is running properly on the host), the VM status in the nova
DB automatically changes from 'ACTIVE' to 'SHUTOFF'.

This is because on initialization nova checks for all evacuated
instances on the original server-host, deletes the local copy from the
host, and also updates the status in the DB (see the sketch below).
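
A hedged sketch of that startup cleanup, reconstructed from the
behaviour described above (the names are illustrative, not nova's
exact code):

    def destroy_evacuated_instances(host, local_instances, migrations):
        for inst in local_instances:
            evacuated = any(m.instance_uuid == inst.uuid
                            and m.source_compute == host
                            and m.status == 'done'
                            for m in migrations)
            if evacuated:
                # Removing the stale local guest is expected; the bug
                # is that the instance's DB state also flips to
                # SHUTOFF although it is running on the new host.
                inst.destroy_local_guest()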

Steps to reproduce (reproduced 100%)


- create a VM (you need a multinode setup to run an evacuation)
- get the server-host
openstack server list --long

- on the server-host, stop the compute service
systemctl stop devstack@n-cpu

- force down the compute service on the server-host
openstack compute service set --down vm1 nova-compute

- stop the VM
openstack server stop vm

- check the VM power state - it is stuck at powering-off
openstack server list --long

- open the api/cpu logs on both nodes
- evacuate the VM
 openstack server evacuate --host=<new-host> --os-compute-api-version
2.29 vm

- the server moves to the new host and becomes active

- on the original server-host, start the compute service (systemctl)

- check the VM status
openstack server list --long


Expected result
===
After the service starts on the original host, it should not affect
the VM in any way.

Actual result
=
openstack server list shows:

VM went to SHUTOFF, task_state=None, power_state=NOSTATE, and it stays
on the expected new host.

The VM can still be used, though; I can log in to the server (tried
with virsh).

Environment
===

Nova: current master or future 2024.2

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: db evacuate

** Tags added: evacuate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2062523

Title:
  evacuation: running VM shown as stopped after compute service starts
  on the first host

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  After evacuation, when the original host/node becomes serviceable
  again (i.e. nova is running properly on the host), the VM status in
  the nova DB automatically changes from 'ACTIVE' to 'SHUTOFF'.

  This is because on initialization nova checks for all evacuated
  instances on the original server-host, deletes the local copy from
  the host, and also updates the status in the DB.

  Steps to reproduce (reproduced 100%)
  

  - create a VM (you need a multinode setup to run an evacuation)
  - get the server-host
  openstack server list --long

  - on the server-host, stop the compute service
  systemctl stop devstack@n-cpu

  - force down the compute service on the server-host
  openstack compute service set --down vm1 nova-compute

  - stop the VM
  openstack server stop vm

  - check the VM power state - it is stuck at powering-off
  openstack server list --long

  - open the api/cpu logs on both nodes
  - evacuate the VM
   openstack server evacuate --host=<new-host> --os-compute-api-version
  2.29 vm

  - the server moves to the new host and becomes active

  - on the original server-host, start the compute service (systemctl)

  - check the VM status
  openstack server list --long

  
  Expected result
  ===
  After the service starts on the original host, it should not affect
  the VM in any way.

  Actual result
  =
  openstack server list shows:

  VM went to SHUTOFF, task_state=None, power_state=NOSTATE, and it
  stays on the expected new host.

  The VM can still be used, though; I can log in to the server (tried
  with virsh).

  Environment
  ===

  Nova: current master or future 2024.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2062523/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2062511] [NEW] [ovn] ovn metadata agent occasionally reported down when SB connections fail

2024-04-19 Thread Liu Xie
Public bug reported:

If the metadata agent fails twice to write to the OVN SB within
agent_down_time, an alert is triggered indicating that the agent is
down, even though the SB is merely snapshotting and quickly recovers
afterwards.

This is because "SbGlobalUpdateEvent" is event-driven and does not
retry after "_update_chassis" fails.

** Affects: neutron
 Importance: Undecided
 Assignee: Liu Xie (liushy)
 Status: New


** Tags: ovn

** Changed in: neutron
 Assignee: (unassigned) => Liu Xie (liushy)

** Description changed:

  If the metadata agent fails twice to write to the OVN SB within
  agent_down_time, an alert is triggered indicating that the agent is
  down, even though the SB is merely snapshotting and quickly recovers
  afterwards.

  This is because "SbGlobalUpdateEvent" is event-driven and does not
- retry after the SB write fails.
+ retry after "_update_chassis" fails.

** Tags added: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2062511

Title:
  [ovn] ovn metadata agent occasionally reported down when SB
  connections fail

Status in neutron:
  New

Bug description:
  If the metadata agent fails twice to write to the OVN SB within
  agent_down_time, an alert is triggered indicating that the agent is
  down, even though the SB is merely snapshotting and quickly recovers
  afterwards.

  This is because "SbGlobalUpdateEvent" is event-driven and does not
  retry after "_update_chassis" fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2062511/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp