[Yahoo-eng-team] [Bug 1999519] Re: Typo in French translation concerning admin state

2022-12-14 Thread Vishal Manchanda
This issue needs to be fixed in https://translate.openstack.org/
That's why I marked it as invalid for Horizon.
If you would like to know more about how the translation process works, please
refer to https://docs.openstack.org/i18n/latest/

** Project changed: horizon => openstack-i18n

** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1999519

Title:
  Typo in French translation concerning admin state

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in openstack i18n:
  New

Bug description:
  In Horizon the French translation reads: Etat administateur
  instead of Etat administrateur (an 'r' is missing).

  cf
  
https://opendev.org/openstack/horizon/src/branch/master/openstack_dashboard/locale/fr/LC_MESSAGES/django.po#L498

  
  --- openstack_dashboard/locale/fr/LC_MESSAGES/django.po.ori   2022-12-13 
14:02:08.865183595 +0100
  +++ openstack_dashboard/locale/fr/LC_MESSAGES/django.po   2022-12-13 
14:00:29.410016035 +0100
  @@ -495,7 +495,7 @@
   msgstr "Mot de Passe Administrateur"
   
   msgid "Admin State"
  -msgstr "État Administateur"
  +msgstr "État Administrateur"
   
   msgid "Admin State ="
   msgstr "État Admin ="

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1999519/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999125] Re: cinder retype volume raise exception with shutoff vm

2022-12-14 Thread Sofia Enriquez
Hi Sang Tran, hope this message finds you well.

From the logs, it looks like a Nova problem.
`
ERROR oslo_messaging.rpc.server novaclient.exceptions.Conflict: Cannot 
'swap_volume' instance a14b7344-3054-4d9a-9357-c0adc1b00650 while it is in 
vm_state stopped (HTTP 409) (Request-ID: 
req-7913c7fb-51d0-4a2a-8463-d8a9883dea0b)
`

Question: are you trying to migrate in-use volumes (attached volumes)?
Just for the sake of completeness, I need to mention that migration of
‘in-use’ volumes will not succeed for any backend when they are attached to an
instance in any of these states: SHUTOFF, SUSPENDED, or SOFT-DELETED [1].
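
For reference, a quick way to check the relevant states before retrying
(a sketch using the standard openstack CLI; the IDs are placeholders and
column names may vary with the client version):

# Is the volume attached, and to which instance?
openstack volume show <volume-id> -c status -c attachments
# What vm_state is the instance in? 'stopped' is what makes nova reject
# the swap_volume call with HTTP 409 during the generic volume migration.
openstack server show <instance-id> -c status -c OS-EXT-STS:vm_state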

Adding Nova team for more feedback.

Thanks, 
Sofia

[1] https://docs.openstack.org/cinder/latest/contributor/migration.html

** Tags added: migration retype

** Changed in: cinder
   Importance: Undecided => Medium

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: cinder
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1999125

Title:
  cinder retype volume raise exception with shutoff vm

Status in Cinder:
  Incomplete
Status in OpenStack Compute (nova):
  New

Bug description:
  Environment:
  - openstack victoria version
  - iSCSI storage using Netapp driver

  Steps to reproduce:
  1. Shut off the VM and wait until it is completely stopped
  2. Run the command cinder retype --migration-policy on-demand  

  3. The task will fail with the error log below

  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager 
[req-aea12cc1-b7f5-4c38-85d7-1c9094a1006a d3802c30ff814feaa3197d29e8a3a731 
2c97f9ded4c843279d2de2c4a5c837
  01 - - -] Failed to copy volume f81a905d-05ea-4534-9b3b-a6762adf87f3 to 
099ce4e2-8539-43dc-a2fa-8feca2845a3d: novaclient.exceptions.Conflict: Cannot 
'swap_volume' instance a14b7344-3054-4d9a-9357-c0adc1b00650 while it is in 
vm_state stopped (HTTP 409) (Request-ID: 
req-7913c7fb-51d0-4a2a-8463-d8a9883dea0b)
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager Traceback (most 
recent call last):
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager   File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/manager.py", 
line 2292, in _migrate_volume_generic
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager 
nova_api.update_server_volume(ctxt, instance_uuid,
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager   File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/compute/nova.py", line 
186, in update_server_volume
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager 
nova.volumes.update_server_volume(server_id,
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager   File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/novaclient/api_versions.py", 
line 393, in substitution
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager return 
methods[-1].func(obj, *args, **kwargs)
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager   File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/novaclient/v2/volumes.py", 
line 124, in update_server_volume
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager return 
self._update("/servers/%s/os-volume_attachments/%s" %
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager   File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/novaclient/base.py", line 380, 
in _update
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager resp, body = 
self.api.client.put(url, body=body)
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager   File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/keystoneauth1/adapter.py", 
line 404, in put
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager return 
self.request(url, 'PUT', **kwargs)
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager   File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/novaclient/client.py", line 
78, in request
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager raise 
exceptions.from_response(resp, body, url, method)
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager 
novaclient.exceptions.Conflict: Cannot 'swap_volume' instance 
a14b7344-3054-4d9a-9357-c0adc1b00650 whileit is in vm_state stopped (HTTP 409) 
(Request-ID: req-7913c7fb-51d0-4a2a-8463-d8a9883dea0b)
  2022-12-07 16:49:29.381 966 ERROR cinder.volume.manager
  2022-12-07 16:49:29.408 966 ERROR oslo_messaging.rpc.server 
[req-aea12cc1-b7f5-4c38-85d7-1c9094a1006a d3802c30ff814feaa3197d29e8a3a731 
2c97f9ded4c843279d2de2c4a5c83701 - - -] Exception during message handling: 
novaclient.exceptions.Conflict: Cannot 'swap_volume' instance 
a14b7344-3054-4d9a-9357-c0adc1b00650 while it is in vm_state stopped (HTTP 409) 
(Request-ID: req-7913c7fb-51d0-4a2a-8463-d8a9883dea0b)
  2022-12-07 16:49:29.408 966 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2022-12-07 16:49:29.408 966 ERROR oslo_messaging.rpc.server   Fi

[Yahoo-eng-team] [Bug 1974070] Re: Ironic builds fail when landing on a cleaning node, it doesn't try to reschedule

2022-12-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/864773
Committed: 
https://opendev.org/openstack/nova/commit/3c022e968375c1b2eadf3c2dd7190b9434c6d4c1
Submitter: "Zuul (22348)"
Branch: master

commit 3c022e968375c1b2eadf3c2dd7190b9434c6d4c1
Author: John Garbutt 
Date:   Wed Nov 16 17:12:40 2022 +

Ironic nodes with instance reserved in placement

Currently, when you delete an ironic instance, we trigger
an undeploy in ironic and we release our allocation in placement.
We do this well before the ironic node is actually available.

We have attempted to fix this by marking unavailable nodes
as reserved in placement. This works great until you try
and re-image lots of nodes.

It turns out that ironic nodes that are waiting for their automatic
clean to finish are returned as valid allocation candidates
for quite some time. Eventually we mark them as reserved.

This patch takes a strange approach: if we mark all nodes as
reserved as soon as the instance lands, we close the race.
That is, when the allocation is removed the node is still
unavailable until the next update of placement is done and
notices that the node has become available. That may or may
not be after automatic cleaning. The trade-off is
that when you don't have automatic cleaning, we wait a bit
longer to notice the node is available again.

Note, this is also useful when a broken Ironic node is
marked as in-maintenance while it is in use by a nova
instance. In a similar way, we mark the node as reserved
immediately, rather than first waiting for the instance to be
deleted before reserving the resources in Placement.

Closes-Bug: #1974070
Change-Id: Iab92124b5776a799c7f90d07281d28fcf191c8fe


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1974070

Title:
  Ironic builds fail when landing on a cleaning node, it doesn't try to
  reschedule

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In a happy world, placement reserved gets updated when a node is not
  available any more, so the scheduler doesn't pick that one and everyone
  is happy.
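
  For context, the "reserved" value lives on the ironic node's resource
  provider in placement; it can be inspected like this (a sketch, assuming
  the osc-placement OSC plugin is installed and using placeholder values):

  # Find the resource provider for the ironic node (its name is the node UUID)
  openstack resource provider list --name <ironic-node-uuid>
  # Show its inventory; the 'reserved' column is what gets flipped when the
  # node is considered unavailable
  openstack resource provider inventory list <resource-provider-uuid>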

  However, as is fairly well known, it takes a while for Nova to notice
  if a node has been marked as in maintenance or if it has started
  cleaning due to the instance now having been deleted, and you can
  still reach a node in a bad state.

  This actually fails hard when setting the instance uuid, as expected here:
  
https://github.com/openstack/nova/blob/4939318649650b60dd07d161b80909e70d0e093e/nova/virt/ironic/driver.py#L378

  You get a conflict error, as the ironic node is in a transitioning
  state (i.e. it's not actually available any more).

  When people are busy rebuilding large numbers of nodes, they tend to
  hit this problem; even when only building when you know there are
  available nodes, you sometimes pick the ones you just deleted.

  In an ideal world this would trigger a re-schedule, a bit like when you
  hit errors in the resource tracker such as ComputeResourcesUnavailable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1974070/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999674] [NEW] nova compute service does not reset instance with task_state in rebooting_hard

2022-12-14 Thread Pierre-Samuel LE STANG
Public bug reported:

Description
===
When a user asks for a hard reboot of a running instance while nova-compute is
unavailable (service stopped or host down), it can happen under certain
conditions that the instance stays in the rebooting_hard task_state after
nova-compute starts again.

The condition to hit this issue is a rabbitmq message-ttl on queued
messages that is lower than the time needed to get nova-compute up
again.


Steps to reproduce
==

Prerequisites:
* Set a low message-ttl (let's say 60 seconds) in your rabbitmq (see the sketch below)
* Have a running instance on a host
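
One way to set such a TTL (a sketch only; the vhost, policy name and how you
run rabbitmqctl depend on your deployment, e.g. kolla runs rabbitmq in a
container):

# Apply a 60 second message TTL to all queues in the default vhost
rabbitmqctl set_policy -p / --apply-to queues short-ttl ".*" '{"message-ttl":60000}'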

First case is having a failure on nova-compute service
1/ stop nova compute service on host
2/ ask for a reboot hard: openstack server reboot --hard 
3/ wait 60 seconds
4/ start nova compute service
5/ check instance task_state and status 

Second case is having a failure on the host
1/ hard shutdown the host (let's say a power supply issue)
2/ ask for a reboot hard: openstack server reboot --hard 
3/ wait 60 seconds
4/ restart the host
5/ check instance task_state and status 


Expected result
===
We expect nova-compute to be able to reset the state to active, since the
message was lost, so that the user can take other actions on the instance.

Actual result
=
The instance is stuck in the rebooting_hard task_state and the user is blocked.
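
Until this is addressed, one possible manual recovery is to reset the server
state so the user can retry (a sketch only; this requires admin credentials
and does not replay the lost reboot message):

# Clear the stuck task_state by resetting the server back to ACTIVE
openstack server set --state active <server-id>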

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1999674

Title:
  nova compute service does not reset instance with task_state in
  rebooting_hard

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When a user asks for a hard reboot of a running instance while nova-compute
is unavailable (service stopped or host down), it can happen under certain
conditions that the instance stays in the rebooting_hard task_state after
nova-compute starts again.

  The condition to hit this issue is a rabbitmq message-ttl on queued
  messages that is lower than the time needed to get nova-compute up
  again.

  
  Steps to reproduce
  ==

  Prerequisites:
  * Set a low message-ttl (let's say 60 seconds) in your rabbitmq 
  * Have a running instance on a host

  First case is having a failure on nova-compute service
  1/ stop nova compute service on host
  2/ ask for a reboot hard: openstack server reboot --hard 
  3/ wait 60 seconds
  4/ start nova compute service
  5/ check instance task_state and status 

  Second case is having a failure on the host
  1/ hard shutdown the host (let's say a power supply issue)
  2/ ask for a reboot hard: openstack server reboot --hard 
  3/ wait 60 seconds
  4/ restart the host
  5/ check instance task_state and status 

  
  Expected result
  ===
  We expect nova-compute to be able to reset the state to active, since the
  message was lost, so that the user can take other actions on the instance.

  Actual result
  =
  The instance is stuck in the rebooting_hard task_state and the user is
  blocked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1999674/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999678] [NEW] Static route can get stuck in the router snat namespace

2022-12-14 Thread Anton Kurbatov
Public bug reported:

I ran into a problem where a static route just gets stuck in the snat
namespace, even after removing all static routes from a distributed router
with HA enabled.
Here is a simple demo from my devstack setup:

[root@node0 ~]# openstack network create private
[root@node0 ~]# openstack subnet create private --network private 
--subnet-range 192.168.10.0/24 --dhcp --gateway 192.168.10.1
[root@node0 ~]# openstack router create r1 --external-gateway public 
--distributed --ha
[root@node0 ~]# openstack router add subnet r1 private
[root@node0 ~]# openstack router set r1 --route 
destination=8.8.8.0/24,gateway=192.168.10.100 --route 
destination=8.8.8.0/24,gateway=192.168.10.200

After the multipath route was added, the snat-ns routes look like this:

[root@node0 ~]# ip netns exec snat-dcbec74b-2003-4447-8854-524d918260ac ip r
default via 10.136.16.1 dev qg-94c43336-56 proto keepalived
8.8.8.0/24 via 192.168.10.200 dev sg-dcf4a20b-8a proto keepalived
8.8.8.0/24 via 192.168.10.100 dev sg-dcf4a20b-8a proto keepalived
8.8.8.0/24 via 192.168.10.100 dev sg-dcf4a20b-8a proto static
10.136.16.0/20 dev qg-94c43336-56 proto kernel scope link src 10.136.17.171
169.254.0.0/24 dev ha-11b5b7d3-4e proto kernel scope link src 169.254.0.21
169.254.192.0/18 dev ha-11b5b7d3-4e proto kernel scope link src 169.254.195.228
192.168.10.0/24 dev sg-dcf4a20b-8a proto kernel scope link src 192.168.10.228
[root@node0 ~]#

Note that there is only one 'static' route added by neutron and no multipath
route, plus two 'proto keepalived' routes that were added by the keepalived
process.
Now delete all routes and check the routes inside the snat-ns; the static
route is still there:

[root@node0 ~]# openstack router set r1 --no-route
[root@node0 ~]# ip netns exec snat-dcbec74b-2003-4447-8854-524d918260ac ip r
default via 10.136.16.1 dev qg-94c43336-56 proto keepalived
8.8.8.0/24 via 192.168.10.100 dev sg-dcf4a20b-8a proto static
10.136.16.0/20 dev qg-94c43336-56 proto kernel scope link src 10.136.17.171
169.254.0.0/24 dev ha-11b5b7d3-4e proto kernel scope link src 169.254.0.21
169.254.192.0/18 dev ha-11b5b7d3-4e proto kernel scope link src 169.254.195.228
192.168.10.0/24 dev sg-dcf4a20b-8a proto kernel scope link src 192.168.10.228
[root@node0 ~]#
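
As a manual workaround (a sketch only, reusing the namespace and device names
from the demo above; neutron itself should remove the route, which is what the
fix in progress addresses):

# Delete the leftover static route from the snat namespace
ip netns exec snat-dcbec74b-2003-4447-8854-524d918260ac \
    ip route del 8.8.8.0/24 via 192.168.10.100 dev sg-dcf4a20b-8a proto static
# Confirm it is gone
ip netns exec snat-dcbec74b-2003-4447-8854-524d918260ac ip route show 8.8.8.0/24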

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999678

Title:
  Static route can get stuck in the router snat namespace

Status in neutron:
  In Progress

Bug description:
  I ran into a problem where a static route just gets stuck in the snat
namespace, even after removing all static routes from a distributed router
with HA enabled.
  Here is a simple demo from my devstack setup:

  [root@node0 ~]# openstack network create private
  [root@node0 ~]# openstack subnet create private --network private 
--subnet-range 192.168.10.0/24 --dhcp --gateway 192.168.10.1
  [root@node0 ~]# openstack router create r1 --external-gateway public 
--distributed --ha
  [root@node0 ~]# openstack router add subnet r1 private
  [root@node0 ~]# openstack router set r1 --route 
destination=8.8.8.0/24,gateway=192.168.10.100 --route 
destination=8.8.8.0/24,gateway=192.168.10.200

  After the multipath route was added, the snat-ns routes look like this:

  [root@node0 ~]# ip netns exec snat-dcbec74b-2003-4447-8854-524d918260ac ip r
  default via 10.136.16.1 dev qg-94c43336-56 proto keepalived
  8.8.8.0/24 via 192.168.10.200 dev sg-dcf4a20b-8a proto keepalived
  8.8.8.0/24 via 192.168.10.100 dev sg-dcf4a20b-8a proto keepalived
  8.8.8.0/24 via 192.168.10.100 dev sg-dcf4a20b-8a proto static
  10.136.16.0/20 dev qg-94c43336-56 proto kernel scope link src 10.136.17.171
  169.254.0.0/24 dev ha-11b5b7d3-4e proto kernel scope link src 169.254.0.21
  169.254.192.0/18 dev ha-11b5b7d3-4e proto kernel scope link src 
169.254.195.228
  192.168.10.0/24 dev sg-dcf4a20b-8a proto kernel scope link src 192.168.10.228
  [root@node0 ~]#

  Note that there is only one 'static' route added by neutron and no multipath
route, plus two 'proto keepalived' routes that were added by the keepalived
process.
  Now delete all routes and check the routes inside the snat-ns; the static
route is still there:

  [root@node0 ~]# openstack router set r1 --no-route
  [root@node0 ~]# ip netns exec snat-dcbec74b-2003-4447-8854-524d918260ac ip r
  default via 10.136.16.1 dev qg-94c43336-56 proto keepalived
  8.8.8.0/24 via 192.168.10.100 dev sg-dcf4a20b-8a proto static
  10.136.16.0/20 dev qg-94c43336-56 proto kernel scope link src 10.136.17.171
  169.254.0.0/24 dev ha-11b5b7d3-4e proto kernel scope link src 169.254.0.21
  169.254.192.0/18 dev ha-11b5b7d3-4e proto kernel scope link src 
169.254.195.228
  192.168.10.0/24 dev sg-dcf4a20b-8a proto kernel scope link src 192.168.10.228
  [root@node0 ~]#

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bu

[Yahoo-eng-team] [Bug 1999677] [NEW] Defunct nodes are reported as happy in network agent list

2022-12-14 Thread Giuseppe Petralia
Public bug reported:

When decommissioning a node from a cloud using Neutron and OVN, the Chassis is 
not removed from OVN SB db and also it always shows as happy in "openstack 
network agent list"
which is a bit weird and the operator would expect to have that as XXX in the 
agent list

This is more for the upstream neutron but adding the charm for
visibility.
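
Until neutron/OVN handles this automatically, stale chassis are usually
cleaned up by hand; a sketch (assuming access to the OVN southbound database;
the chassis name is a placeholder):

# List chassis currently registered in the OVN SB DB
ovn-sbctl show
# Remove the entry for the decommissioned node
ovn-sbctl chassis-del <chassis-name-or-uuid>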

** Affects: charm-neutron-api
 Importance: Undecided
 Status: New

** Affects: networking-ovn
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: networking-ovn
   Importance: Undecided
   Status: New

** Description changed:

- When decommissioning a node from the cloud, the Chassis is not removed from 
OVN SB db and also it always shows as happy in "openstack network agent list" 
- which is a bit weird and the operator would expect to have that as XXX in the 
agent list 
+ When decommissioning a node from a cloud using Neutron and OVN, the Chassis 
is not removed from OVN SB db and also it always shows as happy in "openstack 
network agent list"
+ which is a bit weird and the operator would expect to have that as XXX in the 
agent list
  
  This is more for the upstream neutron but adding the charm for
  visibility.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999677

Title:
  Defunct nodes are reported as happy in network agent list

Status in OpenStack Neutron API Charm:
  New
Status in networking-ovn:
  New
Status in neutron:
  New

Bug description:
  When decommissioning a node from a cloud using Neutron and OVN, the Chassis 
is not removed from OVN SB db and also it always shows as happy in "openstack 
network agent list"
  which is a bit weird and the operator would expect to have that as XXX in the 
agent list

  This is more for the upstream neutron but adding the charm for
  visibility.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-api/+bug/1999677/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1928678] Re: Horizon side bar and data-table menus are not seeing with long lists

2022-12-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/791525
Committed: 
https://opendev.org/openstack/horizon/commit/ba1e7ddc9b0b6a604c4f8f52d64e7f6c57142e52
Submitter: "Zuul (22348)"
Branch: master

commit ba1e7ddc9b0b6a604c4f8f52d64e7f6c57142e52
Author: Rafael Weingärtner 
Date:   Fri May 14 15:48:41 2021 -0300

Enable floating search bar

Depending on the size of the datatable, sometimes the search
bar is "hidden" due to the user scrolling. To make the
interface more user-friendly, it is desirable that both
the search bar and the sidebar are always displayed. Therefore,
this patch introduces changes to always pin the search bar
and the sidebar at the top of the page.

Closes-Bug: #1928678
Change-Id: I9186a4fa1dd2a16f75464ff3bb1c0c9b76a12cc7


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1928678

Title:
  Horizon side bar and data-table menus are not seeing with long lists

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Depending on the size of the datatable, sometimes the search bar is
  "hidden" due to the user scrolling. To make the interface more user-
  friendly, it is desirable that both the search bar and the sidebar
  are always displayed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1928678/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999680] [NEW] Fix remove machine-id

2022-12-14 Thread l33tname
Public bug reported:

Right now --machine-id removes the /etc/machine-id file.
But apparently that's not what should happen: the file should
only be truncated.

https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1508766

Either fix it here or provide systemd-firstboot with Ubuntu.
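
To illustrate the difference being asked for (a sketch; per machine-id(5), an
empty /etc/machine-id is the documented way to prepare an image so that a
fresh ID is generated on the next boot):

# What the report says happens today: the file is removed
rm -f /etc/machine-id
# What the report asks for instead: keep the file but empty it
truncate -s 0 /etc/machine-id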

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1999680

Title:
  Fix remove machine-id

Status in cloud-init:
  New

Bug description:
  Right now --machine-id removes the /etc/machine-id file.
  But apparently that's not what should happen: the file should
  only be truncated.

  https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1508766

  Either fix it here or provide systemd-firstboot with Ubuntu.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1999680/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999705] [NEW] neutron-ovs-grenade-dvr-multinode job failing in check queue

2022-12-14 Thread Brian Haley
Public bug reported:

The neutron-ovs-grenade-dvr-multinode job started failing in the check
queue sometime today.

Typical trace coming from the compute1 node:

[...]
+ inc/python:setup_package:445 :   safe_chown -R stack 
/opt/stack/old/neutron/neutron.egg-info
+ functions-common:safe_chown:2340 :   _safe_permission_operation chown 
-R stack /opt/stack/old/neutron/neutron.egg-info
+ functions-common:_safe_permission_operation:2188 :   sudo chown -R stack 
/opt/stack/old/neutron/neutron.egg-info
+ inc/python:_setup_package_with_constraints_edit:390 :   use_library_from_git 
/opt/stack/old/neutron
+ inc/python:use_library_from_git:246  :   local name=/opt/stack/old/neutron
+ inc/python:use_library_from_git:247  :   local enabled=1
+ inc/python:use_library_from_git:248  :   [[ 
cinder,devstack,glance,grenade,keystone,neutron,nova,placement,requirements,swift,tempest
 = \A\L\L ]]
+ inc/python:use_library_from_git:248  :   [[ 
,cinder,devstack,glance,grenade,keystone,neutron,nova,placement,requirements,swift,tempest,
 =~ ,/opt/stack/old/neutron, ]]
+ inc/python:use_library_from_git:249  :   return 1
+ lib/neutron-legacy:install_mutnauq:480   :   [[ ovn == \o\v\n ]]
+ lib/neutron-legacy:install_mutnauq:481   :   install_ovn
+ lib/neutron_plugins/ovn_agent:install_ovn:363 :   echo 'Installing OVN and 
dependent packages'
Installing OVN and dependent packages
+ lib/neutron_plugins/ovn_agent:install_ovn:366 :   ovn_sanity_check
+ lib/neutron_plugins/ovn_agent:ovn_sanity_check:350 :   is_service_enabled 
q-agt neutron-agt
+ functions-common:is_service_enabled:2095 :   return 0
+ lib/neutron_plugins/ovn_agent:ovn_sanity_check:351 :   die 351 'The 
q-agt/neutron-agt service must be disabled with OVN.'
+ functions-common:die:264 :   local exitcode=0
[Call Trace]
./stack.sh:926:stack_install_service
/opt/stack/old/devstack/lib/stack:32:install_neutron
/opt/stack/old/devstack/lib/neutron:704:install_mutnauq
/opt/stack/old/devstack/lib/neutron-legacy:481:install_ovn
/opt/stack/old/devstack/lib/neutron_plugins/ovn_agent:366:ovn_sanity_check
/opt/stack/old/devstack/lib/neutron_plugins/ovn_agent:351:die
[ERROR] /opt/stack/old/devstack/lib/neutron_plugins/ovn_agent:351 The 
q-agt/neutron-agt service must be disabled with OVN.
Error on exit


Here's one example:

https://zuul.opendev.org/t/openstack/build/a511fc140ad444bb8df77c670815663f


Looking through open changes, this is the oldest one that seems to fail:

https://review.opendev.org/c/openstack/neutron/+/866635
Started at 2022-12-14 06:37:25


I am guessing this is the culprit:

https://review.opendev.org/c/openstack/grenade/+/862475

I'll create a revert of that since it's EOD here.
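
For context, the sanity check that fails above only enforces that the legacy
OVS agent is not enabled together with OVN; in a devstack local.conf that is
normally expressed as (a sketch of the condition being checked, not the
grenade fix):

# OVN is the backend, so the legacy OVS agent services must be off
disable_service q-agt neutron-agt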

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999705

Title:
  neutron-ovs-grenade-dvr-multinode job failing in check queue

Status in neutron:
  Confirmed

Bug description:
  The neutron-ovs-grenade-dvr-multinode job started failing in the check
  queue sometime today.

  Typical trace coming from the compute1 node:

  [...]
  + inc/python:setup_package:445 :   safe_chown -R stack 
/opt/stack/old/neutron/neutron.egg-info
  + functions-common:safe_chown:2340 :   _safe_permission_operation 
chown -R stack /opt/stack/old/neutron/neutron.egg-info
  + functions-common:_safe_permission_operation:2188 :   sudo chown -R stack 
/opt/stack/old/neutron/neutron.egg-info
  + inc/python:_setup_package_with_constraints_edit:390 :   
use_library_from_git /opt/stack/old/neutron
  + inc/python:use_library_from_git:246  :   local 
name=/opt/stack/old/neutron
  + inc/python:use_library_from_git:247  :   local enabled=1
  + inc/python:use_library_from_git:248  :   [[ 
cinder,devstack,glance,grenade,keystone,neutron,nova,placement,requirements,swift,tempest
 = \A\L\L ]]
  + inc/python:use_library_from_git:248  :   [[ 
,cinder,devstack,glance,grenade,keystone,neutron,nova,placement,requirements,swift,tempest,
 =~ ,/opt/stack/old/neutron, ]]
  + inc/python:use_library_from_git:249  :   return 1
  + lib/neutron-legacy:install_mutnauq:480   :   [[ ovn == \o\v\n ]]
  + lib/neutron-legacy:install_mutnauq:481   :   install_ovn
  + lib/neutron_plugins/ovn_agent:install_ovn:363 :   echo 'Installing OVN and 
dependent packages'
  Installing OVN and dependent packages
  + lib/neutron_plugins/ovn_agent:install_ovn:366 :   ovn_sanity_check
  + lib/neutron_plugins/ovn_agent:ovn_sanity_check:350 :   is_service_enabled 
q-agt neutron-agt
  + functions-common:is_service_enabled:2095 :   return 0
  + lib/neutron_plugins/ovn_agent:ovn_sanity_check:351 :   die 351 'The 
q-agt/neutron-agt service must be disabled with OVN.'
  + functions-common:die:264 :   loc