[Yahoo-eng-team] [Bug 2014712] [NEW] cloud-init azure

2023-03-31 Thread Victor Berisha
Public bug reported:

My Linux machine is running in Azure. Below is the message I see when I
log in to the device.


# A new feature in cloud-init identified possible datasources for
# this system as:
#   ['Ec2', 'None']
# However, the datasource used was: Azure
#
# In the future, cloud-init will only attempt to use datasources that
# are identified or specifically configured.
# For more information see
#   https://bugs.launchpad.net/bugs/1669675
#
# If you are seeing this message, please file a bug against
# cloud-init at
#   https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid
# Make sure to include the cloud provider your instance is
# running on.
#
# After you have filed a bug, you can disable this warning by launching
# your instance with the cloud-config below, or putting that content
# into /etc/cloud/cloud.cfg.d/99-warnings.cfg
#
# #cloud-config
# warnings:
#   dsid_missing_source: off
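
For reference, a minimal sketch of a drop-in system config that should both
pin the datasource and silence the warning; the warnings key is taken from the
message above, datasource_list is the documented way to restrict detection,
and the file name below is only an example:

  # /etc/cloud/cloud.cfg.d/99-azure-datasource.cfg  (file name is arbitrary)
  # Only consider the Azure datasource, so detection no longer guesses Ec2/None.
  datasource_list: [ Azure ]

  # Silence the dsid warning printed at login (same content as suggested above).
  warnings:
    dsid_missing_source: off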

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: dsid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/2014712

Title:
  cloud-init azure

Status in cloud-init:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/2014712/+subscriptions




[Yahoo-eng-team] [Bug 2014226] [NEW] cloud-init crashes with IPv6 routes

2023-03-31 Thread Michael Camilli
Public bug reported:


I have static routes specified for two networks, and during cloud-init an
error occurs when it tries to use NETMASK1.

  # Network 2
  eth1:
    addresses: # List of IP[v4,v6] addresses to assign to this interface
      - 2001:db8:abcd:abce:fe::1000/96

    routes: # List of static routes for this interface
      - to: 2001:db8:abcd:abce:fe::0/96
        via: 2001:db8:abcd:bbce:fe::2

  # Network 3
  eth2:
    addresses: # List of IP[v4,v6] addresses to assign to this interface
      - 2001:db8:abcd:abcf:fe::1000/96

    routes: # List of static routes for this interface
      - to: 2001:db8:abcd:abcf:fe::0/96
        via: 2001:db8:abcd:bbcf:fe::2

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 761, in status_wrapper
    ret = functor(name, args)
  File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 433, in main_init
    init.apply_network_config(bring_up=bring_up_interfaces)
  File "/usr/lib/python3.6/site-packages/cloudinit/stages.py", line 926, in apply_network_config
    netcfg, bring_up=bring_up
  File "/usr/lib/python3.6/site-packages/cloudinit/distros/__init__.py", line 233, in apply_network_config
    self._write_network_state(network_state)
  File "/usr/lib/python3.6/site-packages/cloudinit/distros/__init__.py", line 129, in _write_network_state
    renderer.render_network_state(network_state)
  File "/usr/lib/python3.6/site-packages/cloudinit/net/sysconfig.py", line 1011, in render_network_state
    base_sysconf_dir, network_state, self.flavor, templates=templates
  File "/usr/lib/python3.6/site-packages/cloudinit/net/sysconfig.py", line 1002, in _render_sysconfig
    contents[cpath] = iface_cfg.routes.to_string(proto)
  File "/usr/lib/python3.6/site-packages/cloudinit/net/sysconfig.py", line 199, in to_string
    netmask_value = str(self._conf["NETMASK" + index])
KeyError: 'NETMASK1'

Additional Info:
1. Using KVM on a private server
2. The configuration above triggers the issue. Note that in the documentation
I could only find an example of an IPv4 route, so it would help to also
document an IPv6 route example, along the lines of the sketch below.
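
A rough sketch of what such an IPv6 documentation example could look like
(network config v2, reusing the documentation prefixes from this report; on
the affected sysconfig renderer this is exactly the kind of config that
currently triggers the KeyError above):

  version: 2
  ethernets:
    eth1:
      addresses:
        - 2001:db8:abcd:abce:fe::1000/96
      routes:
        - to: 2001:db8:abcd:abce:fe::0/96
          via: 2001:db8:abcd:bbce:fe::2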

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "cloud-init.tar.gz"
   
https://bugs.launchpad.net/bugs/2014226/+attachment/5659716/+files/cloud-init.tar.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/2014226

Title:
  cloud-init crashes with IPv6 routes

Status in cloud-init:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/clo

[Yahoo-eng-team] [Bug 1967268] Re: glance install with `USE_VENV=True` fails

2023-03-31 Thread Dr. Jens Harbott
Not sure why this was reopened.

** Changed in: devstack
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1967268

Title:
  glance install with `USE_VENV=True` fails

Status in devstack:
  Won't Fix
Status in Glance:
  New

Bug description:
  On a fresh Fedora 35 system, `devstack` installation fails with `USE_VENV=True`.
  The current blocking bug is with `glance`:
  ```
  cp: cannot stat '/usr/local/etc/glance/rootwrap.*': No such file or directory
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1967268/+subscriptions




[Yahoo-eng-team] [Bug 2013540] [NEW] [RFE] Add support for Napatech LinkVirt SmartNICs

2023-03-31 Thread Danylo Vodopianov
Public bug reported:

Napatech SmartNICs can offload several computationally intensive tasks
from the hypervisor, such as packet switching, QoS enforcement, and V(x)LAN
tunnel encapsulation/decapsulation. Upstream and out-of-tree OVS
implementations can leverage these offloads when using DPDK via DPDK port
representors
(https://docs.openvswitch.org/en/latest/topics/dpdk/phy/#representors).

Nova and os-vif currently support kernel-based VF representors, but not the
DPDK VF representors, which use a vhost-user socket. This spec seeks to
address that gap.

This is related to: https://review.opendev.org/c/openstack/nova-
specs/+/859290

At the PTG meeting on Thursday the 30th, we discussed the blueprint
mentioned above.

These are the meeting notes:
(dvo-plv) Blueprint: "Add support for Napatech LinkVirt SmartNICs" review
https://review.opendev.org/c/openstack/nova-specs/+/859290

1. the base feature is supported in vanilla OVS
2. additional features which require non-vanilla OVS are not targeted here
3. targets ml2/ovs and ml2/ovn
4. why make it different from the already existing hw-offloaded OVS?
5. we need to know if a port is a hw-offloaded OVS port or a hw-offloaded OVS DPDK port
6. would eliminate the need for a special os-vif plugin that's currently used
   n-lib: https://review.opendev.org/c/openstack/neutron-lib/+/483530

7. POC code is for ml2/ovs; ml2/ovn would be implemented later
   https://review.opendev.org/c/openstack/neutron/+/869510
   https://review.opendev.org/c/openstack/os-vif/+/859574
   https://review.opendev.org/c/openstack/nova/+/859577
   https://review.opendev.org/c/openstack/neutron-lib/+/859573
   https://review.opendev.org/c/openstack/nova-specs/+/859290

8. how can we test this in upstream/community CI?
   (action) a 3rd-party CI system is needed, and Napatech is open to providing this;
   it needs to be maintained too, not just that the code works when merged, but that it keeps working in the future
   documentation for 3rd-party CI providers:
   https://docs.openstack.org/infra/openstackci/third_party_ci.html
   https://docs.opendev.org/opendev/system-config/latest/third_party.html

9. (action) track this in a neutron RFE bug (likely spec-less) and discuss it in the neutron-drivers meeting
   neutron drivers: please see the POC patches
   Napatech folks: please attend to answer questions (and create a one- or two-paragraph RFE in https://bugs.launchpad.net/neutron/ for tracking purposes)
   https://meetings.opendev.org/#Neutron_drivers_Meeting
   https://bugs.launchpad.net/neutron/

** Affects: neutron
 Importance: Undecided
 Assignee: Danylo Vodopianov (dvoplv)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Danylo Vodopianov (dvoplv)

** Summary changed:

- Add support for Napatech LinkVirt SmartNICs
+ [RFE] Add support for Napatech LinkVirt SmartNICs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2013540

Title:
  [RFE] Add support for Napatech LinkVirt SmartNICs

Status in neutron:
  In Progress


[Yahoo-eng-team] [Bug 2013541] [NEW] [Ubuntu 22.04.2] Not able to Install RHEL9 ISO on Existing Ubuntu OS system with secure boot enabled in BIOS

2023-03-31 Thread Shubhakar Gowda P S
Private bug reported:

On a Dell EMC 15G PowerEdge system, when trying to install the RHEL9 ISO on
an existing Ubuntu system with Secure Boot enabled in the BIOS, the RHEL9 ISO
fails to boot with the error "Verification Failed: (0x1A) Security
violation".

Steps to Reproduce: -

1. Enable secure boot in BIOS.
2. Install Ubuntu 22.04.2 OS and boot to OS.
3. Reboot.
4. Try Installing RHEL9 DVD ISO.
5. Observed Security Violation Message on the screen.

Expected Results: -

RHEL9 DVD ISO Should Install successfully on Existing Ubuntu 22.04.2
system without showing any error on the screen.

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Information type changed from Public to Private

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/2013541

Title:
  [Ubuntu 22.04.2] Not able to Install RHEL9 ISO on Existing Ubuntu OS
  system with secure boot enabled in BIOS

Status in cloud-init:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/2013541/+subscriptions




[Yahoo-eng-team] [Bug 1951296] Re: OVN db sync script fails with OVN schema that has label column in ACL table

2023-03-31 Thread Edward Hope-Morley
** Description changed:

+ [Impact]
+ Backport fix to Focal/Ussuri so that neutron-ovn-db-sync-util does not trip up when it finds ovn ACL table entries with a "label" column that does not exist in neutron db.
+ 
+ [Test Plan]
+  * Deploy Openstack Ussuri
+  * Create a network with security groups
+  * Create an instance using this network so that ports get tied to SGs
+  * Go to neutron-api unit (neutron-server) and do the following
+  * cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.no_keystone_authtoken
+  * remove "auth_section = keystone_authtoken" in the [nova] section of neutron.conf.no_keystone_authtoken
+  * run 'neutron-ovn-db-sync-util --config-file /etc/neutron/neutron.conf.no_keystone_authtoken --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --ovn-neutron_sync_mode repair'
+  * the above should not produce any errors like the following:
+ 
+ RuntimeError: ACL ... already exists
+ 
+ [Regression Potential]
+ there is no regression potential expected with this patch.
+ 
+ --
+ 
  OVN introduced a new column in the ACL table. The column name is label, and
  when running the db-sync script, we compare the ACLs generated by the OVN mech
  driver from the Neutron DB with the actual ACLs in the OVN DB. Because of
  the new label column, every ACL looks like a new one, since the column
  differs from what Neutron generated. Thus the script attempts to create a
  new ACL that already exists.
  
- b'Traceback (most recent call last):'
- b'  File "/usr/local/lib/python3.6/site-packages/neutron/tests/base.py", 
line 181, in func'
- b'return f(self, *args, **kwargs)'
- b'  File "/usr/local/lib/python3.6/site-packages/neutron/tests/base.py", 
line 181, in func'
- b'return f(self, *args, **kwargs)'
- b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/tests/functional/test_ovn_db_sync.py",
 line 1547, in test_ovn_nb_sync_repair'
- b"self._test_ovn_nb_sync_helper('repair')"
- b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/tests/functional/test_ovn_db_sync.py",
 line 1543, in _test_ovn_nb_sync_helper'
- b'self._sync_resources(mode)'
- b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/tests/functional/test_ovn_db_sync.py",
 line 1523, in _sync_resources'
- b'nb_synchronizer.do_sync()'
- b'  File "/home/cloud-user/networking-ovn/networking_ovn/ovn_db_sync.py", 
line 104, in do_sync'
- b'self.sync_acls(ctx)'
- b'  File "/home/cloud-user/networking-ovn/networking_ovn/ovn_db_sync.py", 
line 288, in sync_acls'
- b'txn.add(self.ovn_api.pg_acl_add(**acla))'
- b'  File "/usr/lib64/python3.6/contextlib.py", line 88, in __exit__'
- b'next(self.gen)'
- b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/ovsdb/impl_idl_ovn.py", line 
230, in transaction'
- b'yield t'
- b'  File "/usr/lib64/python3.6/contextlib.py", line 88, in __exit__'
- b'next(self.gen)'
- b'  File "/usr/local/lib/python3.6/site-packages/ovsdbapp/api.py", line 
110, in transaction'
- b'del self._nested_txns_map[cur_thread_id]'
- b'  File "/usr/local/lib/python3.6/site-packages/ovsdbapp/api.py", line 
61, in __exit__'
- b'self.result = self.commit()'
- b'  File 
"/usr/local/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
 line 65, in commit'
- b'raise result.ex'
- b'  File 
"/usr/local/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py",
 line 131, in run'
- b'txn.results.put(txn.do_commit())'
- b'  File 
"/usr/local/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
 line 93, in do_commit'
- b'command.run_idl(txn)'
- b'  File 
"/usr/local/lib/python3.6/site-packages/ovsdbapp/schema/ovn_northbound/commands.py",
 line 124, in run_idl'
- b'self.direction, self.priority, self.match))'
- b'RuntimeError: ACL (from-lport, 1001, inport == @neutron_pg_drop && ip) 
already exists'
+ b'Traceback (most recent call last):'
+ b'  File "/usr/local/lib/python3.6/site-packages/neutron/tests/base.py", 
line 181, in func'
+ b'return f(self, *args, **kwargs)'
+ b'  File "/usr/local/lib/python3.6/site-packages/neutron/tests/base.py", 
line 181, in func'
+ b'return f(self, *args, **kwargs)'
+ b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/tests/functional/test_ovn_db_sync.py",
 line 1547, in test_ovn_nb_sync_repair'
+ b"self._test_ovn_nb_sync_helper('repair')"
+ b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/tests/functional/test_ovn_db_sync.py",
 line 1543, in _test_ovn_nb_sync_helper'
+ b'self._sync_resources(mode)'
+ b'  File 
"/home/cloud-user/networking-ovn/networking_ovn/tests/functional/test_ovn_db_sync.py",
 line 1523, in _sync_resources'
+ b'nb_synchronizer.do_sync()'
+ b'  File "/home/cloud-user/networking-ovn/networking

[Yahoo-eng-team] [Bug 2013473] [NEW] default_catalog.templates is outdated

2023-03-31 Thread Takashi Kajinami
Public bug reported:

It seems the catalog template file is horribly outdated and contains the
following problems:

 - keystone v2 was removed long ago
 - cinder no longer provides the v2 API; the v3 API should be used
 - cinder and nova no longer require tenant_id templates in the URL; tenant_id templates prevent API access with domain/system-scoped tokens
 - the telemetry endpoint was removed
 - placement is now required by nova
 - the ec2 API was split out from nova and is now an independent, optional service

** Affects: keystone
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2013473

Title:
  default_catalog.templates is outdated

Status in OpenStack Identity (keystone):
  In Progress


To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2013473/+subscriptions




[Yahoo-eng-team] [Bug 1517839] Re: Make CONF.set_override with parameter enforce_type=True by default

2023-03-31 Thread Gregory Thiemonge
Abandoned after re-enabling the Octavia launchpad.

** Changed in: octavia
   Status: New => Invalid

** Tags added: auto-abandon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1517839

Title:
  Make CONF.set_override with parameter enforce_type=True by default

Status in Cinder:
  In Progress
Status in cloudkitty:
  Fix Released
Status in Designate:
  Fix Released
Status in OpenStack Backup/Restore and DR (Freezer):
  Fix Committed
Status in Glance:
  Invalid
Status in OpenStack Heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in Karbor:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in kolla:
  Expired
Status in Magnum:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Fix Released
Status in octavia:
  Invalid
Status in oslo.config:
  Fix Released
Status in oslo.messaging:
  Fix Released
Status in Quark: Money Reinvented:
  New
Status in Rally:
  Fix Released
Status in senlin:
  Fix Released
Status in tacker:
  Fix Released
Status in tempest:
  Fix Released
Status in watcher:
  Fix Released

Bug description:
  1. Problems:
     oslo_config provides the method CONF.set_override [1], which developers usually use to change a config option's value in tests. That's convenient.
     By default the parameter enforce_type=False, so it doesn't check the type or value of the override. If enforce_type=True is set, it will check the override's type and value. In production code (runtime code), oslo_config always checks a config option's value.
     In short, we test and run code in different ways, so there's a gap: a config option with a wrong type or an invalid value can pass tests when enforce_type=False in consuming projects. That means some invalid or wrong tests are in our code base.

     [1]
  https://github.com/openstack/oslo.config/blob/master/oslo_config/cfg.py#L2173

  2. Proposal
     1) Fix violations when enforce_type=True in each project.

     2) Make the method CONF.set_override use enforce_type=True by default
     in oslo_config.

   You can find more details and comments in
   https://etherpad.openstack.org/p/enforce_type_true_by_default

  3. How to find violations in your projects.

     1. Run tox -e py27

     2. then modify oslo.config with enforce_type=True
    cd .tox/py27/lib64/python2.7/site-packages/oslo_config
    edit cfg.py with enforce_type=True

  -def set_override(self, name, override, group=None, enforce_type=False):
  +def set_override(self, name, override, group=None, enforce_type=True):

    3. Run tox -e py27 again, you will find violations.

  
  The current state is that oslo.config makes enforce_type True by default
  and deprecates this parameter; it will be removed in the future. The current
  work is to remove the usage of enforce_type in consuming projects. We can
  list its usage in
  http://codesearch.openstack.org/?q=enforce_type&i=nope&files=&repos=

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1517839/+subscriptions




[Yahoo-eng-team] [Bug 1615502] Re: LBAAS - housekeeping service does not clean up stale amphora VMs

2023-03-31 Thread Gregory Thiemonge
Abandoned after re-enabling the Octavia launchpad.

** Changed in: octavia
   Status: In Progress => Invalid

** Changed in: octavia
 Assignee: Ravikumar (ravikumar-vallabhu) => (unassigned)

** Tags added: auto-abandon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615502

Title:
  LBAAS - housekeeping service does not clean up stale amphora VMs

Status in neutron:
  Invalid
Status in octavia:
  Invalid

Bug description:
  1. Initially there were no spare VMs since "spare_amphora_pool_size = 0".

   [house_keeping]
   # Pool size for the spare pool
   spare_amphora_pool_size = 0

  
  stack@hlm:~/scratch/ansible/next/hos/ansible$ nova list --all
  WARNING: Option "--all_tenants" is deprecated; use "--all-tenants"; this option will be removed in novaclient 3.3.0.
  +--------------------------------------+------+----------------------------------+--------+------------+-------------+------------+
  | ID                                   | Name | Tenant ID                        | Status | Task State | Power State | Networks   |
  +--------------------------------------+------+----------------------------------+--------+------------+-------------+------------+
  | 91eef324-0c51-4b91-8a54-e16abdb64e55 | vm1  | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     | n1=4.5.6.5 |
  | 7d85921c-e7d9-4b70-9023-0478c66b7e7c | vm2  | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     | n1=4.5.6.6 |
  +--------------------------------------+------+----------------------------------+--------+------------+-------------+------------+

  2. Change the spare pool size to 1 and restart the Octavia-housekeeping
  service. A spare Amphora VM gets created as below.

  stack@hlm:~/scratch/ansible/next/hos/ansible$ nova list --all
  WARNING: Option "--all_tenants" is deprecated; use "--all-tenants"; this option will be removed in novaclient 3.3.0.
  +--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+
  | ID                                   | Name                                         | Tenant ID                        | Status | Task State | Power State | Networks                      |
  +--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+
  | 6a1101cd-d9d3-4c8e-aa1d-0790f7f4ac8b | amphora-18f4d90f-fe6e-4085-851e-7571cba0c65a | a5e6e87d402847e7b4210e035a0fceec | ACTIVE | -          | Running     | OCTAVIA-MGMT-NET=100.74.25.13 |
  | 91eef324-0c51-4b91-8a54-e16abdb64e55 | vm1                                          | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     | n1=4.5.6.5                    |
  | 7d85921c-e7d9-4b70-9023-0478c66b7e7c | vm2                                          | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     | n1=4.5.6.6                    |
  +--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+

  3. Now change the spare pool size to 0 and restart the Octavia-housekeeping
  service. The spare Amphora VM does not get deleted.

  stack@hlm:~/scratch/ansible/next/hos/ansible$ nova list --all
  WARNING: Option "--all_tenants" is deprecated; use "--all-tenants"; this option will be removed in novaclient 3.3.0.
  +--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+
  | ID                                   | Name                                         | Tenant ID                        | Status | Task State | Power State | Networks                      |
  +--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+
  | 6a1101cd-d9d3-4c8e-aa1d-0790f7f4ac8b | amphora-18f4d90f-fe6e-4085-851e-7571cba0c65a | a5e6e87d402847e7b4210e035a0fceec | ACTIVE | -          | Running     | OCTAVIA-MGMT-NET=100.74.25.13 |
  | 91eef324-0c51-4b91-8a54-e16abdb64e55 | vm1                                          | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running

[Yahoo-eng-team] [Bug 2013467] [NEW] Ansible run_user ignored when distro install method is used

2023-03-31 Thread Ondřej Caletka
Public bug reported:

When the Ansible module is used with `install_method: distro`, the value of
the `run_user:` attribute is ignored and the whole Ansible playbook runs as
root.

The problem lies somewhere around [class
AnsiblePullDistro](https://github.com/canonical/cloud-init/blob/main/cloudinit/config/cc_ansible.py#L150),
which does not set `self.run_user`, unlike [class
AnsiblePullPip](https://github.com/canonical/cloud-init/blob/main/cloudinit/config/cc_ansible.py#L116).
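
For context, a minimal cloud-config sketch of the setup being described (key
names follow my reading of the cc_ansible schema, and the repository URL is a
hypothetical placeholder); per the report, the pip install method honours
run_user, while with distro the pull runs as root:

  #cloud-config
  ansible:
    install_method: distro   # run_user is ignored here; "pip" honours it
    run_user: ansible        # user expected to run the ansible-pull (assumed to exist)
    pull:
      url: "https://example.com/playbooks.git"   # hypothetical repo
      playbook_name: site.yml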

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/2013467

Title:
  Ansible run_user ignored when distro install method is used

Status in cloud-init:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/2013467/+subscriptions




[Yahoo-eng-team] [Bug 1548774] Re: LBaas V2: operating_status of 'dead' member is always online with Healthmonitor

2023-03-31 Thread Gregory Thiemonge
Abandoned after re-enabling the Octavia launchpad.

** Changed in: octavia
   Status: In Progress => Invalid

** Changed in: octavia
 Assignee: Carlos Goncalves (cgoncalves) => (unassigned)

** Tags added: auto-abandon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548774

Title:
  LBaas V2: operating_status of 'dead' member is always online with
  Healthmonitor

Status in neutron:
  Won't Fix
Status in octavia:
  Invalid
Status in senlin:
  New

Bug description:
  Expectation:
  The LBaaS v2 healthmonitor should update the status of a "bad" member just as
  it does with v1. However, the operating_status of pool members does not
  change, whether the member is healthy or not.

  ENV:
  My devstack runs on a single Ubuntu 14.04 node and uses master branch code,
  MySQL and RabbitMQ. The tenant name is 'demo' and the username is 'demo'. I am
  using private-subnet for the loadbalancer and the member VM, with the octavia
  provider.

  Steps to reproduce:
  Create a VM from the cirros-0.3.4-x86_64-uec image and create one member
  accordingly in a loadbalancer pool with a healthmonitor. Then curl to get the
  statuses of the loadbalancer; the member status is online. Then nova stop the
  member's VM and curl again and again. The member's operating_status stays
  'online' instead of 'error'.

  Below is the curl response. There is no difference before and after the pool
  member VM turns into SHUTOFF, since no status change ever happens.

  {"statuses": {"loadbalancer": {"name": "", "listeners": [{"pools":
  [{"name": "", "provisioning_status": "ACTIVE", "healthmonitor":
  {"type": "PING", "id": "cb41b4e4-7008-479f-a6d9-4751ac7a1ee4", "name":
  "", "provisioning_status": "ACTIVE"}, "members": [{"name": "",
  "provisioning_status": "ACTIVE", "address": "10.0.0.13",
  "protocol_port": 80, "id": "6d682536-e9fe-4456-ad24-df8521857ee0",
  "operating_status": "ONLINE"}], "id":
  "eaef79a9-d5e0-4582-b45b-cd460beea4fc", "operating_status":
  "ONLINE"}], "name": "", "id": "4e3a7d98-3ab9-4a39-b915-a9651fcada65",
  "operating_status": "ONLINE", "provisioning_status": "ACTIVE"}], "id":
  "ef45be96-15e0-42d9-af34-34608dafdb6c", "operating_status": "ONLINE",
  "provisioning_status": "ACTIVE"}}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548774/+subscriptions




[Yahoo-eng-team] [Bug 1670585] Re: lbaas-agent: 'ascii' codec can't encode characters

2023-03-31 Thread Gregory Thiemonge
Abandoned after re-enabling the Octavia launchpad.

** Changed in: octavia
   Status: New => Invalid

** Tags added: auto-abandon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1670585

Title:
  lbaas-agent: 'ascii' codec can't encode characters

Status in neutron:
  Invalid
Status in octavia:
  Invalid

Bug description:
  version: liberty

  1) When Chinese characters are used as the loadbalancer name, the following error occurs:
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
[req-4a3f6b62-c449-4d88-82d1-96b8b96c7307 18295a4db5364daaa9f27e1169b96926 
65fe786567a341829aa05751b2b7360f - - -] Create listener 
75fef462-fe18-46a3-9722-6db2cf0be8ea failed on device driver haproxy_ns
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
Traceback (most recent call last):
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
300, in create_listener
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
driver.listener.create(listener)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 405, in create
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
self.driver.loadbalancer.refresh(listener.loadbalancer)e
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 369, in refresh
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager if 
(not self.driver.deploy_instance(loadbalancer) and
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 254, in 
inner
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
return f(*args, **kwargs)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 174, in deploy_instance
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
self.create(loadbalancer)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 202, in create
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
self._spawn(loadbalancer)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 352, in _spawn
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
haproxy_base_dir)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/jinja_cfg.py",
 line 90, in save_config
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
utils.replace_file(conf_path, config_str)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 192, in 
replace_file
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
tmp_file.write(data)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager   File 
"/usr/lib64/python2.7/socket.py", line 316, in write
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
data = str(data) # XXX Should really reject non-string non-buffers
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager 
UnicodeEncodeError: 'ascii' codec can't encode characters in position 20-43: 
ordinal not in range(128)
  2017-03-07 14:29:00.599 37381 ERROR neutron_lbaas.agent.agent_manager

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1670585/+subscriptions




[Yahoo-eng-team] [Bug 1827746] Re: Port detach fails when compute host is unreachable

2023-03-31 Thread Gregory Thiemonge
Abandoned after re-enabling the Octavia launchpad.

** Changed in: octavia
   Status: New => Invalid

** Tags added: auto-abandon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1827746

Title:
  Port detach fails when compute host is unreachable

Status in OpenStack Compute (nova):
  Confirmed
Status in octavia:
  Invalid

Bug description:
  When a compute host is unreachable, a port detach for a VM on that
  host will not complete until the host is reachable again. In some
  cases, this may last for an extended period or even indefinitely (for
  example, a host is powered down for hardware maintenance, and possibly
  needs to be removed from the fleet entirely). This is problematic for
  multiple reasons:

  1) The port should not be deleted in this state (it can be, but for reasons 
outside the scope of this bug, that is not recommended). Thus, the quota cannot 
be reclaimed by the project.
  2) The port cannot be reassigned to another VM. This means that for projects 
that rely heavily on maintaining a published IP (or possibly even a published 
port ID), there is no way to proceed. For example, if Octavia wanted to allow 
failing over from one VM to another in a VM down event (as would happen if the 
host was powered off) without using AAP, it would be unable to do so, leading 
to an extended downtime.

  Nova will supposedly clean up such resources after the host has been
  powered up, but that could take hours or possibly never happen. So,
  there should be a way to force the port to detach regardless of
  ability to reach the compute host, and simply allow the cleanup to
  happen on that host in the future (if possible) but immediately
  release the port for delete or rebinding.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1827746/+subscriptions

