[Yahoo-eng-team] [Bug 1976366] Re: OVN: Metadata is not provisioned for a network with DHCP turned off

2022-08-18 Thread Rodolfo Alonso
Hello Max:

If none of the subnets of a network have DHCP enabled, the metadata
port of this network will have no IP assigned (exactly the OVN
metadata code you are pointing to [1]). That means we can't provide
the metadata via the metadata IP; you need to use a config drive in
this case [2].

Regards.

[1]https://github.com/openstack/neutron/blob/09207ba731bfc859c2f6d175588a8bfe09be01db/neutron/agent/ovn/metadata/agent.py#L431-L437
[2]https://docs.openstack.org/nova/latest/admin/config-drive.html
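For illustration, forcing Nova to always build a config drive is usually a one-line change on the compute nodes (the option below is documented in the Nova config-drive guide; treat its placement as an assumption for your deployment):

```ini
# /etc/nova/nova.conf (compute nodes): always attach a config drive so
# instances can read metadata without relying on the metadata IP.
[DEFAULT]
force_config_drive = true
```

A single instance can also request one at boot time with `openstack server create --config-drive True ...`.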


** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1976366

Title:
  OVN: Metadata is not provisioned for a network with DHCP turned off

Status in neutron:
  Invalid

Bug description:
  May be related to bug #1918914

  Openstack version: Ussuri

  I have a very simple deployment with a single flat provider network
  (not sure it matters) with a subnet with DHCP turned off (instances
  are created only with this network).

  In this case neutron-ovn-metadata-agent does not provision the
  metadata service for this network:

  ---
  root@eq-os1:~# ovn-sbctl find Port_Binding type=localport
  _uuid   : b6329cbe-e80f-48a3-921d-e1031afd85d8
  chassis : []
  datapath: 097732e0-85d1-4744-a9c6-bafa0d861700
  encap   : []
  external_ids: {"neutron:cidrs"="", 
"neutron:device_id"=ovnmeta-81954d74-51e6-4598-b6b6-3da3832f20df, 
"neutron:device_owner"="network:dhcp", 
"neutron:network_name"=neutron-81954d74-51e6-4598-b6b6-3da3832f20df, 
"neutron:port_name"="", "neutron:project_id"=f11221fbfbb844209cd49c7ca3a12a00, 
"neutron:revision_number"="1", "neutron:security_group_ids"=""}
  gateway_chassis : []
  ha_chassis_group: []
  logical_port: "a557f47a-dae7-4150-96c2-71abbf48b84b"
  mac : ["fa:16:3e:06:ed:9b"]
  nat_addresses   : []
  options : {requested-chassis=""}
  parent_port : []
  tag : []
  tunnel_key  : 2
  type: localport
  virtual_parent  : []
  root@eq-os1:~#
  ---

  The reason is that external_ids:neutron:cidrs is empty so
  provision_datapath() ignores this network in this case:

  --- neutron/agent/ovn/metadata/agent.py ---
  # If there's no metadata port or it doesn't have a MAC or IP
  # addresses, then tear the namespace down if needed. This might happen
  # when there are no subnets yet created so metadata port doesn't have
  # an IP address.
  if not (port and port.mac and
          port.external_ids.get(ovn_const.OVN_CIDRS_EXT_ID_KEY, None)):
      LOG.debug("There is no metadata port for network %s or it has no "
                "MAC or IP addresses configured, tearing the namespace "
                "down if needed", net_name)
      self.teardown_datapath(datapath, net_name)
      return
  ---

  When DHCP is enabled, neutron:cidrs is not empty and metadata is
  properly provisioned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1976366/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1986947] [NEW] cc_grub_dpkg: race condition on dpkg-set-selections lock

2022-08-18 Thread Chad Smith
Public bug reported:

Intermittent failure found during jenkins test runs:

We probably want a couple of quick retries if the lock is held.
https://github.com/canonical/cloud-init/blob/main/cloudinit/config/cc_grub_dpkg.py#L155


'2022-08-18 06:54:21,464 - handlers.py[DEBUG]: start: '
 'modules-config/config-grub-dpkg: running config-grub-dpkg with frequency '
 'once-per-instance\n'
 '2022-08-18 06:54:21,464 - util.py[DEBUG]: Writing to '
 
'/var/lib/cloud/instances/32493477-976f-924a-b363-1ddcabc724d9/sem/config_grub_dpkg
 '
 '- wb: [644] 25 bytes\n'
 '2022-08-18 06:54:21,464 - helpers.py[DEBUG]: Running config-grub-dpkg using '
 'lock ()\n"
 "2022-08-18 06:54:21,464 - subp.py[DEBUG]: Running command ['grub-probe', "
 "'-t', 'disk', '/boot'] with allowed return codes [0] (shell=False, "
 'capture=True)\n'
 "2022-08-18 06:54:21,710 - subp.py[DEBUG]: Running command ['udevadm', "
 "'info', '--root', '--query=symlink', '/dev/sda'] with allowed return codes "
 '[0] (shell=False, capture=True)\n'
 '2022-08-18 06:54:21,713 - cc_grub_dpkg.py[DEBUG]: considering these device '
 'symlinks: '
 
'/dev/disk/azure/root,/dev/disk/by-id/scsi-36002248026fe88a16909c01e11d36751,/dev/disk/by-id/wwn-0x6002248026fe88a16909c01e11d36751,/dev/disk/by-path/acpi-VMBUS:01-scsi-0:0:0:0,/dev/disk/cloud/azure_root\n'
 '2022-08-18 06:54:21,714 - cc_grub_dpkg.py[DEBUG]: filtered to these '
 'disk/by-id symlinks: '
 
'/dev/disk/by-id/scsi-36002248026fe88a16909c01e11d36751,/dev/disk/by-id/wwn-0x6002248026fe88a16909c01e11d36751\n'
 '2022-08-18 06:54:21,714 - cc_grub_dpkg.py[DEBUG]: selected '
 '/dev/disk/by-id/scsi-36002248026fe88a16909c01e11d36751\n'
 '2022-08-18 06:54:21,714 - cc_grub_dpkg.py[DEBUG]: Setting grub '
 'debconf-set-selections with '
 "'/dev/disk/by-id/scsi-36002248026fe88a16909c01e11d36751','false'\n"
 '2022-08-18 06:54:21,714 - subp.py[DEBUG]: Running command '
 "['debconf-set-selections'] with allowed return codes [0] (shell=False, "
 'capture=True)\n'
 '2022-08-18 06:54:21,817 - util.py[WARNING]: Failed to run '
 'debconf-set-selections for grub-dpkg\n'
 '2022-08-18 06:54:21,823 - util.py[DEBUG]: Failed to run '
 'debconf-set-selections for grub-dpkg\n'
 'Traceback (most recent call last):\n'
 '  File "/usr/lib/python3/dist-packages/cloudinit/config/cc_grub_dpkg.py", '
 'line 155, in handle\n'
 'subp.subp(["debconf-set-selections"], dconf_sel)\n'
 '  File "/usr/lib/python3/dist-packages/cloudinit/subp.py", line 336, in '
 'subp\n'
 'stdout=out, stderr=err, exit_code=rc, cmd=args\n'
 'cloudinit.subp.ProcessExecutionError: Unexpected error while running '
 'command.\n'
 "Command: ['debconf-set-selections']\n"
 'Exit code: 1\n'
 'Reason: -\n'
 'Stdout: \n'
 'Stderr: debconf: DbDriver "config": /var/cache/debconf/config.dat is locked '
 'by another process: Resource temporarily unavailable\n'
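A minimal sketch of the retry idea, assuming a hypothetical helper wrapped around the existing subp call (the names here are illustrative, not cloud-init API):

```python
import time


def run_with_retries(cmd_fn, attempts=3, delay=0.1):
    """Call cmd_fn, retrying briefly when debconf's database is locked.

    cmd_fn is any zero-argument callable that raises on failure, e.g. a
    lambda wrapping subp.subp(["debconf-set-selections"], dconf_sel).
    """
    for attempt in range(1, attempts + 1):
        try:
            return cmd_fn()
        except Exception as exc:
            # Retry only the transient "is locked by another process"
            # failure; re-raise anything else, and the final attempt.
            if attempt == attempts or "locked" not in str(exc):
                raise
            time.sleep(delay)
```

A short backoff like this would usually be enough for another dpkg/debconf process to release `/var/cache/debconf/config.dat`.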

** Affects: cloud-init
 Importance: Medium
 Status: Triaged


** Tags: bitesize

** Changed in: cloud-init
   Importance: Undecided => Medium

** Tags added: bitesize

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1986947

Title:
  cc_grub_dpkg: race condition on dpkg-set-selections lock

Status in cloud-init:
  Triaged

Bug description:
  Intermittent failure found during jenkins test runs:

  We probably want a couple of quick retries if the lock is held.
  
https://github.com/canonical/cloud-init/blob/main/cloudinit/config/cc_grub_dpkg.py#L155

  

[Yahoo-eng-team] [Bug 1965140] Re: Eventlet fails when starting network agents

2022-08-18 Thread Rodolfo Alonso
*** This bug is a duplicate of bug 1934937 ***
https://bugs.launchpad.net/bugs/1934937

This is a duplicate of [1] (also check [2] and [3]). For non-WSGI
services, the "oslo_messaging_rabbit.heartbeat_in_pthread" option should
be "False".

[1]https://bugs.launchpad.net/oslo.messaging/+bug/1934937
[2]https://bugs.launchpad.net/tripleo/+bug/1984076
[3]https://bugzilla.redhat.com/show_bug.cgi?id=2115383
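For reference, the setting lives in the RabbitMQ options section of the agent's configuration file (a sketch; verify the default for your oslo.messaging release):

```ini
# /etc/neutron/neutron.conf for the non-WSGI agents (L3, DHCP,
# metadata, linuxbridge): keep the RabbitMQ heartbeat in a green
# thread, since eventlet cannot switch into a native pthread.
[oslo_messaging_rabbit]
heartbeat_in_pthread = false
```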


** Bug watch added: Red Hat Bugzilla #2115383
   https://bugzilla.redhat.com/show_bug.cgi?id=2115383

** This bug has been marked a duplicate of bug 1934937
   Heartbeat in pthreads in nova-wallaby crashes with greenlet error

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1965140

Title:
  Eventlet fails when starting network agents

Status in neutron:
  Incomplete

Bug description:
  I have a two nodes openstack setup, where one node (w3) runs all
  controller services (keystone, glance, placement, nova, neutron,
  horizon, cinder) as well as nova-compute and cinder-volume and the
  second (w6) runs nova-compute and linuxbridge agent.

  All network agents on w3 are dead
  [root@w3 ~]# openstack network agent list
  +--------------------------------------+--------------------+---------------+-------------------+-------+-------+---------------------------+
  | ID                                   | Agent Type         | Host          | Availability Zone | Alive | State | Binary                    |
  +--------------------------------------+--------------------+---------------+-------------------+-------+-------+---------------------------+
  | 330269b7-b73c-4207-abc7-21f1a2972b7b | Linux bridge agent | w6.int.lunarc | None              | :-)   | UP    | neutron-linuxbridge-agent |
  | 83d16241-8a3a-42b0-beda-87246d945dc1 | L3 agent           | w3.int.lunarc | nova              | XXX   | UP    | neutron-l3-agent          |
  | a52ab60f-d893-491d-a43e-823a0d482810 | Linux bridge agent | w3.int.lunarc | None              | XXX   | UP    | neutron-linuxbridge-agent |
  | abd75644-d895-41ae-94fa-6c4351cbc4bf | Metadata agent     | w3.int.lunarc | None              | XXX   | UP    | neutron-metadata-agent    |
  | c05c65bc-779e-4fe5-a19e-350c44900be4 | DHCP agent         | w3.int.lunarc | nova              | XXX   | UP    | neutron-dhcp-agent        |
  +--------------------------------------+--------------------+---------------+-------------------+-------+-------+---------------------------+

  I cannot start them anymore. I tried restarting each agent alone,
  restarting all OpenStack daemons on w3, and even restarting the whole
  node, but nothing seems to help; I always get the same issue and the
  same trace as shown below.

  I could not find any useful info in the logs, but systemd does report an 
issue with eventlet/greenlet:
  [root@w3 ~]# journalctl -fu neutron-linuxbridge-agent
  -- Logs begin at Wed 2022-03-16 04:32:31 EDT. --
  Mar 16 09:14:09 w3.int.lunarc sudo[37085]:  neutron : TTY=unknown ; PWD=/ ; 
USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf 
privsep-helper --config-file /usr/share/neutron/neutron-dist.conf --config-file 
/etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/linuxbridge_agent.ini --config-dir 
/etc/neutron/conf.d/neutron-linuxbridge-agent --privsep_context 
neutron.privileged.default --privsep_sock_path /tmp/tmp93tzwqg3/privsep.sock
  Mar 16 09:14:12 w3.int.lunarc sudo[37107]:  neutron : TTY=unknown ; PWD=/ ; 
USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf 
privsep-helper --config-file /usr/share/neutron/neutron-dist.conf --config-file 
/etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/linuxbridge_agent.ini --config-dir 
/etc/neutron/conf.d/neutron-linuxbridge-agent --privsep_context 
neutron.privileged.link_cmd --privsep_sock_path /tmp/tmp81iy5eni/privsep.sock
  Mar 16 09:14:15 w3.int.lunarc neutron-linuxbridge-agent[37073]: Traceback 
(most recent call last):
  Mar 16 09:14:15 w3.int.lunarc neutron-linuxbridge-agent[37073]:   File 
"/usr/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 476, in 
fire_timers
  Mar 16 09:14:15 w3.int.lunarc neutron-linuxbridge-agent[37073]: timer()
  Mar 16 09:14:15 w3.int.lunarc neutron-linuxbridge-agent[37073]:   File 
"/usr/lib/python3.6/site-packages/eventlet/hubs/timer.py", line 59, in __call__
  Mar 16 09:14:15 w3.int.lunarc neutron-linuxbridge-agent[37073]: cb(*args, 
**kw)
  Mar 16 09:14:15 w3.int.lunarc neutron-linuxbridge-agent[37073]:   File 
"/usr/lib/python3.6/site-packages/eventlet/semaphore.py", line 152, in 
_do_acquire
  Mar 16 09:14:15 w3.int.lunarc neutron-linuxbridge-agent[37073]: 
waiter.switch()
  Mar 16 09:14:15 w3.int.lunarc neutron-linuxbridge-agent[37073]: 
greenlet.error: cannot switch to a different thread

  I am running OpenStack Xena on CentOS Stream 8 freshly installed. Here are 
other details:
  [r

[Yahoo-eng-team] [Bug 1986969] [NEW] Manually assigning --device and --device-owner to a port does NOT bind the port immediately

2022-08-18 Thread Rodolfo Alonso
Public bug reported:

This could be considered a documentation bug.

When a VM is created (so there is a device ID), a user can create a port, set
the port's device_id to the VM ID and set device_owner="compute:nova". That
makes the port visible when executing:
  $ openstack port list --server serverID


The port is not bound, of course. But when the VM is hard rebooted, the port
is assigned and bound to this VM.

There is another related issue, from the administrator's point of view. A user
can assign (by mistake or coincidence) the VM ID of another project as the
device ID. The non-admin user can't see the other project's VM. But the
administrator, when executing the previous command, will see a VM in one
project with a port from another. This scenario:
* Is difficult to reproduce: the non-admin user must guess the VM ID of
another project without having access to it.
* Affects only the admin view, since the admin can access both projects.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1986969

Title:
  Manually assigning --device and --device-owner to a port does NOT bind
  the port immediately

Status in neutron:
  New

Bug description:
  This could be considered a documentation bug.

  When a VM is created (so there is a device ID), a user can create a port,
set the port's device_id to the VM ID and set device_owner="compute:nova".
That makes the port visible when executing:
    $ openstack port list --server serverID


  The port is not bound, of course. But when the VM is hard rebooted, the
port is assigned and bound to this VM.

  There is another related issue, from the administrator's point of view. A
user can assign (by mistake or coincidence) the VM ID of another project as
the device ID. The non-admin user can't see the other project's VM. But the
administrator, when executing the previous command, will see a VM in one
project with a port from another. This scenario:
  * Is difficult to reproduce: the non-admin user must guess the VM ID of
another project without having access to it.
  * Affects only the admin view, since the admin can access both projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1986969/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1986977] [NEW] [ovn-octavia-provider] HealthMonitor affecting several LBs

2022-08-18 Thread Fernando Royo
Public bug reported:

HealthMonitor over a pool that share a member (over a different port)
with another pool or indeed over a different LB is updating all pools
involving the member notified on the ServiceMonitorUpdateEvent. In
particular the information mixed is on NBDB, on the field external_ids.

E.g

A member 10.0.0.100 on LB1 over pool1 (port 80)
Same member 10.0.0.100 on LB2 over pool2 (port 22)
HM over pool1

if HM notifies an offline status for member over pool 80, both LBs are
including the information about the member in ERROR state on the
external_ids.

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

** Summary changed:

- HealthMonitor affecting several LBs
+ [ovn-octavia-provider] HealthMonitor affecting several LBs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1986977

Title:
  [ovn-octavia-provider] HealthMonitor affecting several LBs

Status in neutron:
  In Progress

Bug description:
  A HealthMonitor over a pool that shares a member (on a different port)
  with another pool, or even with a pool on a different LB, updates all
  the pools involving the member notified in the
  ServiceMonitorUpdateEvent. In particular, the information gets mixed
  in the NB DB, in the external_ids field.

  E.g.:

  A member 10.0.0.100 in LB1, in pool1 (port 80)
  The same member 10.0.0.100 in LB2, in pool2 (port 22)
  An HM over pool1

  If the HM notifies an offline status for the member on port 80, both
  LBs include the information about the member in ERROR state in their
  external_ids.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1986977/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1987005] [NEW] ipv6_ready referenced before assignment

2022-08-18 Thread Jerry Cheng
Public bug reported:

cloud-init crashes due to referencing ipv6_ready before assignment.
cloud-init version: 22.2.2-1.ph3

traceback in cloudinit/sources/DataSourceVMware.py:

[2022-08-15 17:38:14] 2022-08-15 17:38:14,682 - util.py[WARNING]: failed
stage init

[2022-08-15 17:38:14] failed run of stage init

[2022-08-15 17:38:14]


[2022-08-15 17:38:14] Traceback (most recent call last):

[2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-
packages/cloudinit/cmd/main.py", line 740, in status_wrapper

[2022-08-15 17:38:14] ret = functor(name, args)

[2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-
packages/cloudinit/cmd/main.py", line 429, in main_init

[2022-08-15 17:38:14] init.setup_datasource()

[2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-
packages/cloudinit/stages.py", line 468, in setup_datasource

[2022-08-15 17:38:14]
self.datasource.setup(is_new_instance=self.is_new_instance())

[2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-
packages/cloudinit/sources/DataSourceVMware.py", line 340, in setup

[2022-08-15 17:38:14] host_info = wait_on_network(self.metadata)

[2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-
packages/cloudinit/sources/DataSourceVMware.py", line 963, in
wait_on_network

[2022-08-15 17:38:14] ipv6_ready,

[2022-08-15 17:38:14] UnboundLocalError: local variable 'ipv6_ready'
referenced before assignment


There is an issue in the source code: under certain conditions,
ipv6_ready may be referenced in LOG.debug() before assignment if
wait_on_ipv6 is False. The same issue can happen for ipv4_ready if
wait_on_ipv4 is False.


host_info = None
while host_info is None:
    # This loop + sleep results in two logs every second while waiting
    # for either ipv4 or ipv6 up. Do we really need to log each iteration
    # or can we log once and log on successful exit?
    host_info = get_host_info()

    network = host_info.get("network") or {}
    interfaces = network.get("interfaces") or {}
    by_ipv4 = interfaces.get("by-ipv4") or {}
    by_ipv6 = interfaces.get("by-ipv6") or {}

    if wait_on_ipv4:
        ipv4_ready = len(by_ipv4) > 0 if by_ipv4 else False
        if not ipv4_ready:
            host_info = None

    if wait_on_ipv6:
        ipv6_ready = len(by_ipv6) > 0 if by_ipv6 else False
        if not ipv6_ready:
            host_info = None

    if host_info is None:
        LOG.debug(
            "waiting on network: wait4=%s, ready4=%s, wait6=%s, ready6=%s",
            wait_on_ipv4,
            ipv4_ready,
            wait_on_ipv6,
            ipv6_ready,
        )
        time.sleep(1)
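One possible fix, sketched here as a self-contained function: initialize both flags before the loop so the debug log can never hit an unbound name (passing get_host_info in as a parameter is an assumption for illustration; the real function's signature differs):

```python
import logging
import time

LOG = logging.getLogger(__name__)


def wait_on_network(wait_on_ipv4, wait_on_ipv6, get_host_info):
    """Poll get_host_info() until the requested address families are up."""
    # Initialize both flags up front: a family we are not waiting on is
    # trivially "ready", and LOG.debug() below can never see an unbound
    # name regardless of which wait_* flags are set.
    ipv4_ready = not wait_on_ipv4
    ipv6_ready = not wait_on_ipv6

    host_info = None
    while host_info is None:
        host_info = get_host_info()
        network = host_info.get("network") or {}
        interfaces = network.get("interfaces") or {}
        by_ipv4 = interfaces.get("by-ipv4") or {}
        by_ipv6 = interfaces.get("by-ipv6") or {}

        if wait_on_ipv4:
            ipv4_ready = len(by_ipv4) > 0
            if not ipv4_ready:
                host_info = None

        if wait_on_ipv6:
            ipv6_ready = len(by_ipv6) > 0
            if not ipv6_ready:
                host_info = None

        if host_info is None:
            LOG.debug(
                "waiting on network: wait4=%s, ready4=%s, wait6=%s, ready6=%s",
                wait_on_ipv4, ipv4_ready, wait_on_ipv6, ipv6_ready)
            time.sleep(1)
    return host_info
```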

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1987005

Title:
  ipv6_ready referenced before assignment

Status in cloud-init:
  New

Bug description:
  cloud-init crashes due to referencing ipv6_ready before assignment.
  cloud-init version: 22.2.2-1.ph3

  traceback in cloudinit/sources/DataSourceVMware.py:

  [2022-08-15 17:38:14] 2022-08-15 17:38:14,682 - util.py[WARNING]:
  failed stage init

  [2022-08-15 17:38:14] failed run of stage init

  [2022-08-15 17:38:14]
  

  [2022-08-15 17:38:14] Traceback (most recent call last):

  [2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-
  packages/cloudinit/cmd/main.py", line 740, in status_wrapper

  [2022-08-15 17:38:14] ret = functor(name, args)

  [2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-
  packages/cloudinit/cmd/main.py", line 429, in main_init

  [2022-08-15 17:38:14] init.setup_datasource()

  [2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-
  packages/cloudinit/stages.py", line 468, in setup_datasource

  [2022-08-15 17:38:14]
  self.datasource.setup(is_new_instance=self.is_new_instance())

  [2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-
  packages/cloudinit/sources/DataSourceVMware.py", line 340, in setup

  [2022-08-15 17:38:14] host_info = wait_on_network(self.metadata)

  [2022-08-15 17:38:14]   File "/usr/lib/python3.7/site-
  packages/cloudinit/sources/DataSourceVMware.py", line 963, in
  wait_on_network

  [2022-08-15 17:38:14] ipv6_ready,

  [2022-08-15 17:38:14] UnboundLocalError: local variable 'ipv6_ready'
  referenced before assignment


  There is an issue in the source code: under certain conditions,
  ipv6_ready may be referenced in LOG.debug() before assignment if
  wait_on_ipv6 is False. The same issue can happen for ipv4_ready if
  wait_on_ipv4 is False.



[Yahoo-eng-team] [Bug 1848471] Re: No infiniband support in netplan renderer

2022-08-18 Thread Launchpad Bug Tracker
This bug was fixed in the package netplan.io - 0.105-0ubuntu1

---
netplan.io (0.105-0ubuntu1) kinetic; urgency=medium

  * New upstream release: 0.105
- Add support for VXLAN tunnels (#288), LP: #1764716
- Add support for VRF devices (#285), LP: #1773522
- Add support for InfiniBand (IPoIB) (#283), LP: #1848471
- Allow key configuration for GRE tunnels (#274), LP: #1966476
- Allow setting the regulatory domain (#281), LP: #1951586
- Documentation improvements & restructuring (#287)
- Add meson build system (#268)
- Add abigail ABI compatibility checker (#269)
- Update of Fedora RPM spec (#264)
- CI improvements (#265, #282)
- Netplan `set` uses the consolidated libnetplan YAML parser (#254)
- Refactor ConfigManager to use the libnetplan YAML parser (#255)
- New `netplan_netdef_get_filepath` API (#275)
- Improve NetworkManager device management logic (#276), LP: #1951653
Bug fixes:
- Fix `apply` netdev rename/create race condition (#260), LP: #1962095
- Fix `try` timeout (#271), LP: #1967084
- Fix infinite timeouts in ovs-vsctl (#266), Closes: #1000137
- Fix offload options using tristate setting (#270), LP: #1956264
- Fix rendering of NetworkManager passthrough WPA (#279), LP: #1972800
- Fix CLI crash on LibNetplanException (#286)
- Fix NetworkManager internal DHCP client lease lookup (#284), LP: #1979674
  * Update symbols file for 0.105
  * d/patches/: Drop patches, applied upstream
  * d/p/autopkgtest-fixes.patch: Refresh
  * d/control: bump Standards-Version, no changes needed
  * d/control, d/tests/control: suggest/add iw for setting a regulatory domain
  * d/control: merge with Debian, dropping deprecated versioned depends
  * d/control: Update Vcs-* tags for Ubuntu
  * d/watch: sync with Debian
  * d/u/metadata: sync with Debian
  * d/tests: partially merge with Debian

 -- Lukas Märdian   Thu, 18 Aug 2022 14:53:33 +0200

** Changed in: netplan.io (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1848471

Title:
  No infiniband support in netplan renderer

Status in cloud-init:
  Triaged
Status in MAAS:
  Triaged
Status in netplan:
  In Progress
Status in netplan.io package in Ubuntu:
  Fix Released

Bug description:
  Infiniband support has been added to both the sysconfig and eni
  renderers (at time of writing, eni support is unmerged from
  https://code.launchpad.net/~darren-birkett/cloud-init/+git/cloud-
  init/+merge/373759).

  We need to add support to the netplan renderer too, but at this time
  netplan itself does not support the long form HWADDR that IB devices
  have. Once that is added, we should add support for infiniband devices
  in netplan too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1848471/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1951586] Re: Need option to specify wifi regulatory domain

2022-08-18 Thread Launchpad Bug Tracker
This bug was fixed in the package netplan.io - 0.105-0ubuntu1

---
netplan.io (0.105-0ubuntu1) kinetic; urgency=medium

  * New upstream release: 0.105
- Add support for VXLAN tunnels (#288), LP: #1764716
- Add support for VRF devices (#285), LP: #1773522
- Add support for InfiniBand (IPoIB) (#283), LP: #1848471
- Allow key configuration for GRE tunnels (#274), LP: #1966476
- Allow setting the regulatory domain (#281), LP: #1951586
- Documentation improvements & restructuring (#287)
- Add meson build system (#268)
- Add abigail ABI compatibility checker (#269)
- Update of Fedora RPM spec (#264)
- CI improvements (#265, #282)
- Netplan `set` uses the consolidated libnetplan YAML parser (#254)
- Refactor ConfigManager to use the libnetplan YAML parser (#255)
- New `netplan_netdef_get_filepath` API (#275)
- Improve NetworkManager device management logic (#276), LP: #1951653
Bug fixes:
- Fix `apply` netdev rename/create race condition (#260), LP: #1962095
- Fix `try` timeout (#271), LP: #1967084
- Fix infinite timeouts in ovs-vsctl (#266), Closes: #1000137
- Fix offload options using tristate setting (#270), LP: #1956264
- Fix rendering of NetworkManager passthrough WPA (#279), LP: #1972800
- Fix CLI crash on LibNetplanException (#286)
- Fix NetworkManager internal DHCP client lease lookup (#284), LP: #1979674
  * Update symbols file for 0.105
  * d/patches/: Drop patches, applied upstream
  * d/p/autopkgtest-fixes.patch: Refresh
  * d/control: bump Standards-Version, no changes needed
  * d/control, d/tests/control: suggest/add iw for setting a regulatory domain
  * d/control: merge with Debian, dropping deprecated versioned depends
  * d/control: Update Vcs-* tags for Ubuntu
  * d/watch: sync with Debian
  * d/u/metadata: sync with Debian
  * d/tests: partially merge with Debian

 -- Lukas Märdian   Thu, 18 Aug 2022 14:53:33 +0200

** Changed in: netplan.io (Ubuntu Kinetic)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1951586

Title:
  Need option to specify wifi regulatory domain

Status in cloud-init:
  Invalid
Status in netplan:
  Fix Committed
Status in NetworkManager:
  New
Status in netplan.io package in Ubuntu:
  Fix Released
Status in network-manager package in Ubuntu:
  Incomplete
Status in netplan.io source package in Jammy:
  Triaged
Status in network-manager source package in Jammy:
  Incomplete
Status in netplan.io source package in Kinetic:
  Fix Released
Status in network-manager source package in Kinetic:
  Incomplete

Bug description:
  It would be nice if netplan offered an option to specify the wifi
  regulatory domain (country code).

  
  For devices such as the Raspberry Pi you are currently advertising that users 
can simply setup Ubuntu Server headless by putting the wifi configuration 
details in cloudinit/netplan's "network-config" on the FAT partition of the SD 
card: 
https://ubuntu.com/tutorials/how-to-install-ubuntu-on-your-raspberry-pi#3-wifi-or-ethernet
  But an option to set the wifi country code there does not seem to exist, so 
may not work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1951586/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1846820] Related fix merged to nova (master)

2022-08-18 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/852900
Committed: 
https://opendev.org/openstack/nova/commit/c178d9360665c219cbcc71c9f37b9e6e3055a5e5
Submitter: "Zuul (22348)"
Branch:master

commit c178d9360665c219cbcc71c9f37b9e6e3055a5e5
Author: Dan Smith 
Date:   Thu Aug 11 09:50:30 2022 -0700

Unify placement client singleton implementations

We have many places where we implement singleton behavior for the
placement client. This unifies them into a single place and
implementation. Not only does this DRY things up, but may cause us
to initialize it fewer times and also allows for emitting a common
set of error messages about expected failures for better
troubleshooting.

Change-Id: Iab8a791f64323f996e1d6e6d5a7e7a7c34eb4fb3
Related-Bug: #1846820
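The unified singleton described in the commit message can be sketched as a module-level, lock-guarded accessor (SchedulerReportClient is stubbed here as a stand-in; the real class lives in nova.scheduler.client.report, and the accessor name is illustrative):

```python
import threading


class SchedulerReportClient:
    """Stand-in for nova.scheduler.client.report.SchedulerReportClient."""

    def __init__(self):
        # In Nova this would build a keystoneauth adapter for placement,
        # which is exactly the expensive step worth doing only once.
        pass


_placement_client = None
_placement_lock = threading.Lock()


def report_client_singleton():
    """Return the one shared placement client, creating it on first use.

    Double-checked locking keeps the common path lock-free while still
    guaranteeing a single instance under concurrent first calls.
    """
    global _placement_client
    if _placement_client is None:
        with _placement_lock:
            if _placement_client is None:
                _placement_client = SchedulerReportClient()
    return _placement_client
```

Centralizing construction like this also gives one obvious place to catch and log the keystone/placement errors (such as the 504 in this bug) instead of letting them escape from each caller.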


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1846820

Title:
  nova-conductor may crash during deploy due to haproxy-keystone 504

Status in kolla-ansible:
  Triaged
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  keystone was busy (behind haproxy)

  nova-conductor:
  2019-10-04 15:39:17.103 6 CRITICAL nova [-] Unhandled error: GatewayTimeout: 
Gateway Timeout (HTTP 504)
  2019-10-04 15:39:17.103 6 ERROR nova Traceback (most recent call last):
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/bin/nova-conductor", line 10, in 
  2019-10-04 15:39:17.103 6 ERROR nova sys.exit(main())
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/cmd/conductor.py", line 
44, in main
  2019-10-04 15:39:17.103 6 ERROR nova topic=rpcapi.RPC_TOPIC)
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/service.py", line 257, in 
create
  2019-10-04 15:39:17.103 6 ERROR nova 
periodic_interval_max=periodic_interval_max)
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/service.py", line 129, in 
__init__
  2019-10-04 15:39:17.103 6 ERROR nova self.manager = 
manager_class(host=self.host, *args, **kwargs)
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/conductor/manager.py", 
line 117, in __init__
  2019-10-04 15:39:17.103 6 ERROR nova self.compute_task_mgr = 
ComputeTaskManager()
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/conductor/manager.py", 
line 243, in __init__
  2019-10-04 15:39:17.103 6 ERROR nova self.report_client = 
report.SchedulerReportClient()
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/scheduler/client/report.py",
 line 186, in __init__
  2019-10-04 15:39:17.103 6 ERROR nova self._client = self._create_client()
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/scheduler/client/report.py",
 line 229, in _create_client
  2019-10-04 15:39:17.103 6 ERROR nova client = self._adapter or 
utils.get_sdk_adapter('placement')
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/utils.py", line 1039, in 
get_sdk_adapter
  2019-10-04 15:39:17.103 6 ERROR nova return getattr(conn, service_type)
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/openstack/service_description.py",
 line 92, in __get__
  2019-10-04 15:39:17.103 6 ERROR nova endpoint = 
proxy_mod.Proxy.get_endpoint(proxy)
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/keystoneauth1/adapter.py", 
line 282, in get_endpoint
  2019-10-04 15:39:17.103 6 ERROR nova return 
self.session.get_endpoint(auth or self.auth, **kwargs)
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/keystoneauth1/session.py", 
line 1200, in get_endpoint
  2019-10-04 15:39:17.103 6 ERROR nova return auth.get_endpoint(self, 
**kwargs)
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/keystoneauth1/identity/base.py",
 line 380, in get_endpoint
  2019-10-04 15:39:17.103 6 ERROR nova 
allow_version_hack=allow_version_hack, **kwargs)
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/keystoneauth1/identity/base.py",
 line 271, in get_endpoint_data
  2019-10-04 15:39:17.103 6 ERROR nova service_catalog = 
self.get_access(session).service_catalog
  2019-10-04 15:39:17.103 6 ERROR nova   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/keystoneauth1/identity/base.py",
 line 134, in get_access
  2019-10-04 15:39:17.103 6 ERROR nova self.auth_ref = 
self.get_auth_ref(session