[Yahoo-eng-team] [Bug 1869853] [NEW] flavor is filtered only in current page

2020-03-31 Thread HYSong
Public bug reported:

I want to search for a flavor in the dashboard, but the filter is only
applied to the current page of results. This makes it hard to tell whether
the flavor exists at all in Horizon.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1869853

Title:
  flavor is filtered only in current page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I want to search for a flavor in the dashboard, but the filter is only
  applied to the current page of results. This makes it hard to tell
  whether the flavor exists at all in Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1869853/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869862] [NEW] neutron-tempest-plugin-designate-scenario fails frequently with image service doesn't have supported version

2020-03-31 Thread Lajos Katona
Public bug reported:

neutron-tempest-plugin-designate-scenario job fails frequently with the 
following error:
...
2020-03-30 18:49:44.170062 | controller | + /opt/stack/neutron-tempest-plugin/devstack/functions.sh:overridden_upload_image:256 :   openstack --os-cloud=devstack-admin --os-region-name=RegionOne image create ubuntu-16.04-server-cloudimg-amd64-disk1 --property hw_rng_model=virtio --public --container-format=bare --disk-format qcow2
2020-03-30 18:49:46.242923 | controller | Failed to contact the endpoint at http://10.209.38.120/image for discovery. Fallback to using that endpoint as the base url.
2020-03-30 18:49:46.247351 | controller | Failed to contact the endpoint at http://10.209.38.120/image for discovery. Fallback to using that endpoint as the base url.
2020-03-30 18:49:46.247894 | controller | The image service for devstack-admin:RegionOne exists but does not have any supported versions.
2020-03-30 18:49:46.384047 | controller | + /opt/stack/neutron-tempest-plugin/devstack/customize_image.sh:upload_image:1 :   exit_trap
...

Example is here:
https://94d5d118ec3db75721c2-a00e37315b6784119b950c4b112ef30c.ssl.cf2.rackcdn.com/711610/13/check/neutron-tempest-plugin-designate-scenario/b23bb46/job-output.txt

Logstash query:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22The%20image%20service%20for%20devstack-admin%3ARegionOne%20exists%20but%20does%20not%20have%20any%20supported%20versions.%5C%22

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1869862

Title:
  neutron-tempest-plugin-designate-scenario fails frequently with image
  service doesn't have supported version

Status in neutron:
  New

Bug description:
  neutron-tempest-plugin-designate-scenario job fails frequently with the 
following error:
  ...
  2020-03-30 18:49:44.170062 | controller | + /opt/stack/neutron-tempest-plugin/devstack/functions.sh:overridden_upload_image:256 :   openstack --os-cloud=devstack-admin --os-region-name=RegionOne image create ubuntu-16.04-server-cloudimg-amd64-disk1 --property hw_rng_model=virtio --public --container-format=bare --disk-format qcow2
  2020-03-30 18:49:46.242923 | controller | Failed to contact the endpoint at http://10.209.38.120/image for discovery. Fallback to using that endpoint as the base url.
  2020-03-30 18:49:46.247351 | controller | Failed to contact the endpoint at http://10.209.38.120/image for discovery. Fallback to using that endpoint as the base url.
  2020-03-30 18:49:46.247894 | controller | The image service for devstack-admin:RegionOne exists but does not have any supported versions.
  2020-03-30 18:49:46.384047 | controller | + /opt/stack/neutron-tempest-plugin/devstack/customize_image.sh:upload_image:1 :   exit_trap
  ...

  Example is here:
  
https://94d5d118ec3db75721c2-a00e37315b6784119b950c4b112ef30c.ssl.cf2.rackcdn.com/711610/13/check/neutron-tempest-plugin-designate-scenario/b23bb46/job-output.txt

  Logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22The%20image%20service%20for%20devstack-admin%3ARegionOne%20exists%20but%20does%20not%20have%20any%20supported%20versions.%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1869862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869877] [NEW] Segment doesn't exist in network info

2020-03-31 Thread Maciej Jozefczyk
Public bug reported:

Each neutron network has at least one segment.

While the network has only one segment, the 'segments' key is not added
to the info returned by the API; instead the segment's provider fields
are merged into the network dict.

Example:
(Pdb++) pp context.current
{'admin_state_up': True,
 'availability_zone_hints': [],
 'availability_zones': [],
 'created_at': '2020-03-25T09:04:26Z',
 'description': 'test',
 'dns_domain': '',
 'id': '7ec01be9-1bdc-409b-8bc7-047f337a9722',
 'ipv4_address_scope': None,
 'ipv6_address_scope': None,
 'is_default': True,
 'l2_adjacency': True,
 'mtu': 1500,
 'name': 'public',
 'port_security_enabled': True,
 'project_id': '5b69f4bc9cba4b1ab38f434785e27db8',
 'revision_number': 57,
 'router:external': True,
 'provider:network_type': 'flat',
 'provider:physical_network': 'public',
 'provider:segmentation_id': None,
 'shared': False,
 'status': 'ACTIVE',
 'subnets': ['40863f03-6b9a-4543-a9cb-ad122dfcde5d',
 'e5ae108b-a04b-4f23-84ff-e89db3222772'],
 'tags': [],
 'tenant_id': '5b69f4bc9cba4b1ab38f434785e27db8',
 'updated_at': '2020-03-25T13:55:38Z',
 'vlan_transparent': None}


When the network has more than one segment defined, the network info looks
as follows and the 'segments' key is present:

(Pdb++) pp context.current
{'admin_state_up': True,
 'availability_zone_hints': [],
 'availability_zones': [],
 'created_at': '2020-03-25T09:04:26Z',
 'description': 'test',
 'dns_domain': '',
 'id': '7ec01be9-1bdc-409b-8bc7-047f337a9722',
 'ipv4_address_scope': None,
 'ipv6_address_scope': None,
 'is_default': True,
 'l2_adjacency': True,
 'mtu': 1500,
 'name': 'public',
 'port_security_enabled': True,
 'project_id': '5b69f4bc9cba4b1ab38f434785e27db8',
 'revision_number': 57,
 'router:external': True,
 'segments': [{'provider:network_type': 'flat',
   'provider:physical_network': 'public',
   'provider:segmentation_id': None},
  {'provider:network_type': 'flat',
   'provider:physical_network': 'public2',
   'provider:segmentation_id': None}],
 'shared': False,
 'status': 'ACTIVE',
 'subnets': ['40863f03-6b9a-4543-a9cb-ad122dfcde5d',
 'e5ae108b-a04b-4f23-84ff-e89db3222772'],
 'tags': [],
 'tenant_id': '5b69f4bc9cba4b1ab38f434785e27db8',
 'updated_at': '2020-03-25T13:55:38Z',
 'vlan_transparent': None}


We should make this behavior consistent and add 'segments' to the keys() in
all cases.
Each segment should also include its 'id' - it is required for OVN to set up
localnet ports.
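
A rough sketch of the normalization this asks for (hypothetical helper, not
the actual ML2 code): build the 'segments' list from the merged provider:*
fields for single-segment networks and attach each segment's 'id'.

  # Hypothetical helper - a sketch of the proposed behaviour, not the
  # actual Neutron/ML2 implementation.
  PROVIDER_KEYS = ('provider:network_type',
                   'provider:physical_network',
                   'provider:segmentation_id')

  def normalize_segments(network, db_segments):
      """Return a copy of the network dict that always has a 'segments' key.

      network: dict as currently seen in context.current.
      db_segments: list of segment dicts from the DB, each carrying an 'id'.
      """
      net = dict(network)
      if 'segments' not in net:
          # Single-segment network: the provider:* attributes are merged
          # into the network itself, so rebuild a one-element list.
          net['segments'] = [{key: net.get(key) for key in PROVIDER_KEYS}]
      # Attach each segment's 'id' (needed e.g. by OVN for localnet ports).
      for segment, db_segment in zip(net['segments'], db_segments):
          segment.setdefault('id', db_segment['id'])
      return net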

** Affects: neutron
 Importance: Undecided
 Status: Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1869877

Title:
  Segment doesn't exist in network info

Status in neutron:
  Triaged

Bug description:
  Each neutron network has at least one segment.

  While the network has only one segment, the 'segments' key is not added
  to the info returned by the API; instead the segment's provider fields
  are merged into the network dict.

  Example:
  (Pdb++) pp context.current
  {'admin_state_up': True,
   'availability_zone_hints': [],
   'availability_zones': [],
   'created_at': '2020-03-25T09:04:26Z',
   'description': 'test',
   'dns_domain': '',
   'id': '7ec01be9-1bdc-409b-8bc7-047f337a9722',
   'ipv4_address_scope': None,
   'ipv6_address_scope': None,
   'is_default': True,
   'l2_adjacency': True,
   'mtu': 1500,
   'name': 'public',
   'port_security_enabled': True,
   'project_id': '5b69f4bc9cba4b1ab38f434785e27db8',
   'revision_number': 57,
   'router:external': True,
   'provider:network_type': 'flat',
   'provider:physical_network': 'public',
   'provider:segmentation_id': None,
   'shared': False,
   'status': 'ACTIVE',
   'subnets': ['40863f03-6b9a-4543-a9cb-ad122dfcde5d',
   'e5ae108b-a04b-4f23-84ff-e89db3222772'],
   'tags': [],
   'tenant_id': '5b69f4bc9cba4b1ab38f434785e27db8',
   'updated_at': '2020-03-25T13:55:38Z',
   'vlan_transparent': None}

  
  When the network has more than one segment defined, the network info
  looks as follows and the 'segments' key is present:

  (Pdb++) pp context.current
  {'admin_state_up': True,
   'availability_zone_hints': [],
   'availability_zones': [],
   'created_at': '2020-03-25T09:04:26Z',
   'description': 'test',
   'dns_domain': '',
   'id': '7ec01be9-1bdc-409b-8bc7-047f337a9722',
   'ipv4_address_scope': None,
   'ipv6_address_scope': None,
   'is_default': True,
   'l2_adjacency': True,
   'mtu': 1500,
   'name': 'public',
   'port_security_enabled': True,
   'project_id': '5b69f4bc9cba4b1ab38f434785e27db8',
   'revision_number': 57,
   'router:external': True,
   'segments': [{'provider:network_type': 'flat',
                 'provider:physical_network': 'public',
                 'provider:segmentation_id': None},
                {'provider:network_type': 'flat',
                 'provider:physical_network': 'public2',
                 'provider:segmentation_id': None}],
   'shared': False,
   

[Yahoo-eng-team] [Bug 1863021] Re: eventlet monkey patch results in assert len(_active) == 1 AssertionError

2020-03-31 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/716058
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=90146b62c765ad1b8be1ffec1799cba9f3994c2d
Submitter: Zuul
Branch:master

commit 90146b62c765ad1b8be1ffec1799cba9f3994c2d
Author: Corey Bryant 
Date:   Mon Mar 30 15:14:15 2020 -0400

Monkey patch original current_thread _active

Monkey patch the original current_thread to use the up-to-date _active
global variable. This solution is based on that documented at:
https://github.com/eventlet/eventlet/issues/592

Change-Id: I95a8d8cf02a0cb923418c0b5655442b8d7bc6b08
Closes-Bug: #1863021
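
For reference, the approach described in the eventlet issue boils down to
roughly the following (a minimal sketch of the technique, not the exact
change that was merged):

  # Sketch of the workaround from https://github.com/eventlet/eventlet/issues/592:
  # after eventlet patches threading, point the *original* (unpatched)
  # current_thread at the patched module's _active table so that the
  # _after_fork() bookkeeping stays consistent.
  import eventlet
  eventlet.monkey_patch()

  import __original_module_threading as orig_threading  # saved by eventlet
  import threading  # now the green (patched) threading module

  orig_threading.current_thread.__globals__['_active'] = threading._active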


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1863021

Title:
  eventlet monkey patch results in assert len(_active) == 1
  AssertionError

Status in Glance:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  This appears to be the same issue documented here:
  https://github.com/eventlet/eventlet/issues/592

  However, I seem to hit this only with Python 3.8. Basically, nova and
  glance services fail with:

   Exception ignored in: 
   Traceback (most recent call last):
     File "/usr/lib/python3.8/threading.py", line 1454, in _after_fork
   assert len(_active) == 1
   AssertionError:
   Exception ignored in: 
   Traceback (most recent call last):
     File "/usr/lib/python3.8/threading.py", line 1454, in _after_fork
   assert len(_active) == 1
   AssertionError:

  Patching nova/monkey_patch.py with the following appears to fix this:

  diff --git a/nova/monkey_patch.py b/nova/monkey_patch.py
  index a07ff91dac..bb7252c643 100644
  --- a/nova/monkey_patch.py
  +++ b/nova/monkey_patch.py
  @@ -59,6 +59,9 @@ def _monkey_patch():
   else:
   eventlet.monkey_patch()

  +import __original_module_threading
  +import threading
  +__original_module_threading.current_thread.__globals__['_active'] = threading._active
   # NOTE(rpodolyaka): import oslo_service first, so that it makes eventlet
   # hub use a monotonic clock to avoid issues with drifts of system time (see

  Similar patches to glance/cmd/api.py, glance/cmd/scrubber.py and
  glance/cmd/registry.py appear to fix it for glance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1863021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869887] [NEW] L3 DVR ARP population gets incorrect MAC address in some cases

2020-03-31 Thread Slawek Kaplonski
Public bug reported:

The L3 DVR router sets permanent ARP entries in the qrouter namespace for all
ports plugged into the subnets which are connected to the router.
In most cases that is fine, but because it uses the MAC address defined in the
Neutron DB for that (which is fine in general), it may cause connectivity
problems under specific conditions.

It happens for example with Octavia, as Octavia creates unbound ports just to
allocate an IP address for the VIP in Neutron's DB. Octavia then sets this IP
address in the allowed_address_pairs of other ports which are plugged into the
Amphora VMs.
But in the DVR case such an IP address is populated in the ARP cache with the
MAC address of its own (unbound) port, which does not work when the IP is
actually configured as an additional address on an interface with a different
MAC.

Octavia is just the most commonly known example of such a use case, but we
know that there are other users who do something similar with keepalived on
their instances.

Since this additional port is always "unbound", and "unbound" means that such
a port is basically just an entry in the Neutron DB, I think there is no need
to add it to the ARP cache. Only bound ports should be set there.
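
A minimal sketch of the kind of filtering proposed here (hypothetical helper
name, not the actual L3 agent code):

  # Sketch only - hypothetical helper, not the real neutron DVR code.
  # Idea: when populating permanent ARP entries in the qrouter namespace,
  # skip ports that are not bound to any host.
  def ports_for_arp_population(ports):
      """Yield only the ports whose MAC/IP should get a static ARP entry."""
      for port in ports:
          bound = (port.get('binding:host_id') and
                   port.get('binding:vif_type', 'unbound') != 'unbound')
          if not bound:
              # Unbound port (e.g. an Octavia VIP port created only to
              # reserve an IP): its Neutron MAC may never appear on the
              # wire, so a permanent ARP entry would be wrong.
              continue
          yield port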

** Affects: neutron
 Importance: Undecided
 Assignee: Slawek Kaplonski (slaweq)
 Status: In Progress


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1869887

Title:
  L3 DVR ARP population gets incorrect MAC address in some cases

Status in neutron:
  In Progress

Bug description:
  The L3 DVR router sets permanent ARP entries in the qrouter namespace for
  all ports plugged into the subnets which are connected to the router.
  In most cases that is fine, but because it uses the MAC address defined in
  the Neutron DB for that (which is fine in general), it may cause
  connectivity problems under specific conditions.

  It happens for example with Octavia, as Octavia creates unbound ports just
  to allocate an IP address for the VIP in Neutron's DB. Octavia then sets
  this IP address in the allowed_address_pairs of other ports which are
  plugged into the Amphora VMs.
  But in the DVR case such an IP address is populated in the ARP cache with
  the MAC address of its own (unbound) port, which does not work when the IP
  is actually configured as an additional address on an interface with a
  different MAC.

  Octavia is just the most commonly known example of such a use case, but
  we know that there are other users who do something similar with
  keepalived on their instances.

  Since this additional port is always "unbound", and "unbound" means that
  such a port is basically just an entry in the Neutron DB, I think there is
  no need to add it to the ARP cache. Only bound ports should be set there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1869887/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1734204] Re: Insufficient free host memory pages available to allocate guest RAM with Open vSwitch DPDK in Newton

2020-03-31 Thread Corey Bryant
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

** Changed in: cloud-archive
   Status: New => Invalid

** Changed in: cloud-archive/queens
   Status: New => Triaged

** Changed in: nova (Ubuntu)
   Status: New => Invalid

** Changed in: nova (Ubuntu Bionic)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Bionic)
   Status: New => Triaged

** Changed in: cloud-archive/queens
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1734204

Title:
   Insufficient free host memory pages available to allocate guest RAM
  with Open vSwitch DPDK in Newton

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Invalid
Status in nova source package in Bionic:
  Triaged

Bug description:
  When spawning an instance and scheduling it onto a compute node which still 
has sufficient pCPUs for the instance and also sufficient free huge pages for 
the instance memory, nova returns:
  Raw

  [stack@undercloud-4 ~]$ nova show 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc
  (...)
  | fault| {"message": "Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc. Last exception: internal error: process exited while connecting to monitor: 2017-11-23T19:53:20.311446Z qemu-kvm: -chardev pty,id=cha", "code": 500, "details": "  File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line 492, in build_instances |
  |  | filter_properties, instances[0].uuid) |
  |  |   File \"/usr/lib/python2.7/site-packages/nova/scheduler/utils.py\", line 184, in populate_retry |
  |  | raise exception.MaxRetriesExceeded(reason=msg) |
  |  | ", "created": "2017-11-23T19:53:22Z"} |
  (...)

  And /var/log/nova/nova-compute.log on the compute node gives the following 
ERROR message:
  Raw

  2017-11-23 19:53:21.021 153615 ERROR nova.compute.manager [req-2ad59cdf-4901-4df1-8bd7-ebaea20b9361 5d1785ee87294a6fad5e2b91cc20 8c307c08d2234b339c504bfdd896c13e - - -] [instance: 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc] Instance failed to spawn
  2017-11-23 19:53:21.021 153615 ERROR nova.compute.manager [instance: 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc] Traceback (most recent call last):
  2017-11-23 19:53:21.021 153615 ERROR nova.compute.manager [instance: 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2087, in _build_resources
  2017-11-23 19:53:21.021 153615 ERROR nova.compute.manager [instance: 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc] yield resources
  2017-11-23 19:53:21.021 153615 ERROR nova.compute.manager [instance: 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1928, in _build_and_run_instance
  2017-11-23 19:53:21.021 153615 ERROR nova.compute.manager [instance: 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc] block_device_info=block_device_info)
  2017-11-23 19:53:21.021 153615 ERROR nova.compute.manager [instance: 1b72e7

[Yahoo-eng-team] [Bug 1869155] Re: Cloud-init uses macaddress keyword on s390x where MAC addresses are not necessarily stable/unique across reboots

2020-03-31 Thread Dan Watkins
cloud-init isn't the source of this configuration, so I'm marking our
task Incomplete and adding subiquity (which I believe generates this
config).

(Regardless, I've responded to a couple of things below for background.)

> I think with the right tooling (ip, ifconfig, ethtool or even the
network-manager UI) you can even change MAC addresses today on other
platforms.

There's a substantial difference between people being able to opt into
changing MACs and the platform not providing stable MACs.  In the former
case, we can tell people to stop doing it, or tell them other manual
steps that they can perform when they are manually changing their MAC.
In the latter case, we don't have that option. ;)

> Nowadays interface names are based on their underlying physical
device/address (here in this case '600' or to be precise '0600' -
leading '0' are removed), which makes the interface and it's name
already quite unique - since it is not possible to have two devices (in
one system) with the exact same address.

This may be true for Z, but it isn't for cloud instances because the
"physical" device can move between PCI ports on reboot, for example, or
be named differently based on the order in which the kernel detects each
interface.  In these cases, using the MAC address is a lot more reliable
than using the physical address.

** Changed in: cloud-init
   Status: New => Incomplete

** Also affects: subiquity
   Importance: Undecided
   Status: New

** Summary changed:

- Cloud-init uses macaddress keyword on s390x where MAC addresses are not 
necessarily stable/unique across reboots
+ When installing with subiquity, the generated network config uses the 
macaddress keyword on s390x (where MAC addresses are not necessarily stable 
across reboots)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1869155

Title:
  When installing with subiquity, the generated network config uses the
  macaddress keyword on s390x (where MAC addresses are not necessarily
  stable across reboots)

Status in cloud-init:
  Incomplete
Status in subiquity:
  New
Status in Ubuntu on IBM z Systems:
  New

Bug description:
  While performing a subiquity focal installation on an s390x LPAR (where the
  LPAR is connected to a VLAN trunk) I saw a section like this:
      match:
          macaddress: 02:28:0b:00:00:53
  So the macaddress keyword is used, but on several s390x machine generations
  MAC addresses are not necessarily stable and unique across reboots.
  (z14 GA2 and newer systems meanwhile have a modified firmware that ensures
  that MAC addresses are stable and unique across reboots, but for z14 GA1
  and older systems, incl. the z13 that I used, this is not the case - and a
  backport of the firmware modification is very unlikely.)

  The configuration that I found is this:

  $ cat /etc/netplan/50-cloud-init.yaml
  # This file is generated from information provided by the datasource. Changes
  # to it will not persist across an instance reboot. To disable cloud-init's
  # network configuration capabilities, write a file
  # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
  # network: {config: disabled}
  network:
      ethernets:
          enc600:
              addresses:
              - 10.245.236.26/24
              gateway4: 10.245.236.1
              match:
                  macaddress: 02:28:0b:00:00:53
              nameservers:
                  addresses:
                  - 10.245.236.1
              set-name: enc600
      version: 2

  (This is a spin-off of ticket LP 1868246.)

  It's understood that the initial idea for the MAC addresses was to have a
  unique identifier, but I think that with the right tooling (ip, ifconfig,
  ethtool or even the network-manager UI) you can even change MAC addresses
  today on other platforms.

  Nowadays interface names are based on their underlying physical
  device/address (here in this case '600', or to be precise '0600' -
  leading '0's are removed), which makes the interface and its name
  already quite unique - since it is not possible to have two devices
  (in one system) with the exact same address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1869155/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1696308] Re: list revoked tokens API returns 500 when pki_setup is not run

2020-03-31 Thread Vishakha Agarwal
This bug seems invalid due to [1]: the token revocation list API is
deprecated and now only returns 410 Gone.

[1] https://review.opendev.org/#/c/672334
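
For illustration, with a current keystone the same client call should now
fail with 410 Gone rather than 500 (a sketch using the same client as in the
report; the endpoint and credentials are placeholders):

  from keystoneauth1 import exceptions as ks_exc
  from keystoneauth1.identity import v3
  from keystoneauth1 import session
  from keystoneclient.v3 import client

  # Placeholder endpoint/credentials - substitute your own environment.
  auth = v3.Password(auth_url='http://keystone.example.com/identity/v3',
                     user_id='USER_ID', password='PASSWORD',
                     project_id='PROJECT_ID')
  keystone = client.Client(session=session.Session(auth=auth))

  try:
      keystone.tokens.get_revoked()
  except ks_exc.HttpError as exc:
      # Deprecated API: expect 410 Gone instead of the old 500.
      print('revocation list unavailable, HTTP %s' % exc.http_status)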

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1696308

Title:
  list revoked tokens API returns 500 when pki_setup is not run

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  list revoked tokens API returns 500 InternalServerError

  The documentation [1] says that the API should return the list of expired
  PKI tokens, signed using the Cryptographic Message Syntax (CMS), but I am
  using the UUID token format.

  [1] https://developer.openstack.org/api-ref/identity/v3/?expanded
  =list-revoked-tokens-detail#list-revoked-tokens

  
  Sample program:

  from keystoneauth1.identity import v3
  from keystoneauth1 import session
  from keystoneclient.v3 import client

  auth = v3.Password(auth_url='http:///identity/v3',
                     user_id=,
                     password=,
                     project_id=)
  sess = session.Session(auth=auth)
  keystone = client.Client(session=sess)

  a = keystone.tokens.get_revoked()

  
  The API which is getting used is below:

  GET http:///identity/v3/auth/tokens/OS-PKI/revoked
   
   
  Curl command:
  $ curl -g -i -X GET http://10.232.48.201/identity/v3/auth/tokens/OS-PKI/revoked -H "X-Auth-Token: eb8fc9de9d154c6daa6b26a14d7c4e0f"
  HTTP/1.1 500 Internal Server Error
  Date: Wed, 07 Jun 2017 05:51:14 GMT
  Server: Apache/2.4.18 (Ubuntu)
  Vary: X-Auth-Token
  Content-Type: application/json
  Content-Length: 143
  x-openstack-request-id: req-a6517dc2-08ac-4d62-8d21-c3405159e1f3
  Connection: close

  {"error": {"message": "An unexpected error prevented the server from
  fulfilling your request.", "code": 500, "title": "Internal Server
  Error"}}


  command prompt traceback:

  Traceback (most recent call last):
    File "3_keystoneclient_program.py", line 12, in 
      a = keystone.tokens.get_revoked()
    File "/usr/local/lib/python2.7/dist-packages/positional/__init__.py", line 101, in inner
      return wrapped(*args, **kwargs)
    File "/opt/stack/python-keystoneclient/keystoneclient/v3/tokens.py", line 62, in get_revoked
      resp, body = self._client.get(path)
    File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 223, in get
      return self.request(url, 'GET', **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 382, in request
      resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 148, in request
      return self.session.request(url, method, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/positional/__init__.py", line 101, in inner
      return wrapped(*args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py", line 655, in request
      raise exceptions.from_response(resp, method, url)
  keystoneauth1.exceptions.http.InternalServerError: An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-7004583f-3556-4b38-877a-b7669b3df3f8)

  
  Keystone logs:

  
  2017-06-07 11:07:13.262 DEBUG keystone.middleware.auth [req-78ad2fdd-6a2d-4489-96c0-98c7373b3eb2 None None] Authenticating user token from (pid=9498) process_request /usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py:401
  2017-06-07 11:07:13.270 DEBUG keystone.middleware.auth [req-44f7294f-8430-48d3-b9a6-4f531544c893 None None] RBAC: auth_context: {'is_delegated_auth': False, 'access_token_id': None, 'user_id': u'3ad182b5723d4e88b97ea7a52bf50cea', 'roles': [u'admin'], 'user_domain_id': u'default', 'consumer_id': None, 'trustee_id': None, 'is_domain': False, 'is_admin_project': True, 'trustor_id': None, 'token': , 'project_id': u'c76af8728a56496fb67c6ace6e78657d', 'trust_id': None, 'project_domain_id': u'default'} from (pid=9498) fill_context /opt/stack/keystone/keystone/middleware/auth.py:239
  2017-06-07 11:07:13.271 INFO keystone.common.wsgi [req-44f7294f-8430-48d3-b9a6-4f531544c893 None None] GET http://10.232.48.201/identity/v3/auth/tokens/OS-PKI/revoked
  2017-06-07 11:07:13.271 DEBUG keystone.common.authorization [req-44f7294f-8430-48d3-b9a6-4f531544c893 None None] RBAC: Authorizing identity:revocation_list() from (pid=9498) _build_policy_check_credentials /opt/stack/keystone/keystone/common/authorization.py:136
  2017-06-07 11:07:13.272 DEBUG keystone.policy.backends.rules [req-44f7294f-8430-48d3-b9a6-4f531544c893 None None] enforce identity:revocation_list: {'is_delegated_auth': False, 'access_token_id': None, 'user_id': u'3ad182b5723d4e88b97ea7a52bf50cea',

[Yahoo-eng-team] [Bug 1869430] Re: cloud-init persists in running state on Kali in AWS

2020-03-31 Thread Ryan Harper
Excellent news!  Thanks for following up with Kali upstream.  I'm
closing this issue as invalid for cloud-init.  If you find out after the
Kali changes there are still issues, either re-open this bug or file a
new one if the issue/behavior is different.

Thanks!

** Changed in: cloud-init
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1869430

Title:
  cloud-init persists in running state on Kali in AWS

Status in cloud-init:
  Invalid

Bug description:
  Hello,

  We're trying to customize published Kali AMIs using packer & cloud-init.
  The entire process works with Ubuntu, CentOS, and Amazon Linux 2 targets,
  but seemingly breaks with Kali. We've tried it with both the 2020.01 and
  2019.03 releases.

  We're also experiencing a long timeout for ec2 data source:

  root@kali:~# cloud-init status --long
  status: running
  time: Fri, 27 Mar 2020 20:06:54 +
  detail:
  DataSourceEc2Local

  root@kali:~# cloud-init analyze blame
  -- Boot Record 01 --
   51.20500s (init-local/search-Ec2Local)
   00.91700s (init-network/config-users-groups)
   00.67200s (init-network/config-growpart)
   00.27400s (init-network/config-resizefs)
   00.24800s (init-network/config-ssh)
   00.00600s (init-network/consume-user-data)
   00.00300s (init-network/check-cache)

  Attached is the log tarball produced by cloud-init. We'd appreciate
  any hints as to what may be happening. It's worth noting that these
  targets are starting in a VPC without direct connection to the outside
  world, but there's a squid proxy available for web traffic. We have
  relevant parts set up to use that proxy.

  
  Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1869430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869342] Re: OVNMechanismDriver _ovn_client is a read-only property

2020-03-31 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/715375
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=ba00b06ae446f25d09b350b08f95a06f569b3758
Submitter: Zuul
Branch:master

commit ba00b06ae446f25d09b350b08f95a06f569b3758
Author: Rodolfo Alonso Hernandez 
Date:   Fri Mar 27 09:54:58 2020 +

mech_driver.OVNMechanismDriver "_ovn_client" is a read-only property

mech_driver.OVNMechanismDriver "_ovn_client" is not a class member but
a read-only property and can't be assigned.

Change-Id: I6fdd9d929e75a6092e0a874b8ffcf283c64b076a
Closes-Bug: #1869342


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1869342

Title:
  OVNMechanismDriver _ovn_client is a read-only property

Status in neutron:
  Fix Released

Bug description:
  OVNMechanismDriver "_ovn_client" is a read-only property and can't be 
assigned in "ovn_client" property:
  
https://github.com/openstack/neutron/blob/805fb5c970c8b761ce7f4877052ffef74b524e41/neutron/cmd/ovn/neutron_ovn_db_sync_util.py#L58
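
  A generic illustration of the underlying Python behaviour (not the neutron
  classes themselves): assigning to a property that defines no setter raises
  AttributeError.

    # Generic example of the failure mode - not the actual neutron code.
    class Driver(object):
        @property
        def _ovn_client(self):
            # Read-only: no setter is defined.
            return self.__dict__.get('_client')

    drv = Driver()
    try:
        drv._ovn_client = object()   # what the sync util effectively tried
    except AttributeError as exc:
        print('cannot assign: %s' % exc)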

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1869342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869396] Re: os-ips API policy is allowed for everyone even policy defaults is admin_or_owner

2020-03-31 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/715496
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=58701be6159fff33f1800f6047361176b74d512b
Submitter: Zuul
Branch:master

commit 58701be6159fff33f1800f6047361176b74d512b
Author: Ghanshyam Mann 
Date:   Fri Mar 27 07:43:16 2020 -0500

Fix os-ips policy to be admin_or_owner

os-ips API policy is default to admin_or_owner[1] but API
is allowed for everyone.

We can see the test trying with other project context can access the API
- https://review.opendev.org/#/c/715477

This is because API does not pass the server project_id in policy target[2]
and if no target is passed then, policy.py add the default targets which is
nothing but context.project_id (allow for everyone who try to access)[3]

This commit fix this policy by passing the server's project_id in policy
target.

Closes-bug: #1869396
[1] 
https://github.com/openstack/nova/blob/eaf08c0b7b8250408e5d10c6471f2e3155cc0edb/nova/policies/ips.py#L27

Change-Id: Ie7bcb6537f90813cc5b23d69c886037d25b15a42


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1869396

Title:
  os-ips API policy is allowed for everyone even policy defaults is
  admin_or_owner

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The os-ips API policy defaults to admin_or_owner [1], but the API is
  allowed for everyone.

  We can see that a test using another project's context can access the API:
  - https://review.opendev.org/#/c/715477/

  This is because the API does not pass the server's project_id in the
  policy target:
  - https://github.com/openstack/nova/blob/96f6622316993fb41f4c5f37852d4c879c9716a5/nova/api/openstack/compute/ips.py#L41

  and if no target is passed, policy.py adds a default target which is
  nothing but context.project_id (so anyone who tries to access is allowed):
  - https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

  [1]
  - https://github.com/openstack/nova/blob/eaf08c0b7b8250408e5d10c6471f2e3155cc0edb/nova/policies/ips.py#L27
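
  A toy illustration of that defaulting behaviour and of the fix
  (self-contained sketch, not nova.policy itself):

    # Toy enforcement - illustrates the defaulting problem, not nova's
    # real policy code.
    def can(caller_project_id, rule_check, target=None):
        """If no target is given, default it to the caller's own project,
        which makes an 'admin_or_owner' rule pass for any caller."""
        if target is None:
            target = {'project_id': caller_project_id}   # the problem
        return rule_check(caller_project_id, target)

    def admin_or_owner(caller_project_id, target):
        return caller_project_id == target['project_id']

    server = {'project_id': 'owner-project'}

    # Caller from another project, no explicit target: wrongly allowed.
    print(can('other-project', admin_or_owner))                      # True
    # Fix: pass the server's project_id as the policy target.
    print(can('other-project', admin_or_owner,
              target={'project_id': server['project_id']}))          # False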

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1869396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1734204] Re: Insufficient free host memory pages available to allocate guest RAM with Open vSwitch DPDK in Newton

2020-03-31 Thread Corey Bryant
For the Ubuntu SRU, the stable/queens upstream backport is failing
tests. Additionally the fix for
https://bugs.launchpad.net/nova/+bug/1810977 would need backporting to
stable/queens. These would both need to land upstream in stable/queens
first and we would need capability to test both fixes and ensure no
regressions. For now, marking as won't fix. Suggestion is to upgrade to
rocky+ where this is already fixed.

** Changed in: cloud-archive/queens
   Status: In Progress => Won't Fix

** Changed in: nova (Ubuntu Bionic)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1734204

Title:
   Insufficient free host memory pages available to allocate guest RAM
  with Open vSwitch DPDK in Newton

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive queens series:
  Won't Fix
Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Invalid
Status in nova source package in Bionic:
  Won't Fix

Bug description:
  When spawning an instance and scheduling it onto a compute node which still 
has sufficient pCPUs for the instance and also sufficient free huge pages for 
the instance memory, nova returns:
  Raw

  [stack@undercloud-4 ~]$ nova show 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc
  (...)
  | fault| {"message": "Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc. Last exception: internal error: process exited while connecting to monitor: 2017-11-23T19:53:20.311446Z qemu-kvm: -chardev pty,id=cha", "code": 500, "details": "  File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line 492, in build_instances |
  |  | filter_properties, instances[0].uuid) |
  |  |   File \"/usr/lib/python2.7/site-packages/nova/scheduler/utils.py\", line 184, in populate_retry |
  |  | raise exception.MaxRetriesExceeded(reason=msg) |
  |  | ", "created": "2017-11-23T19:53:22Z"} |
  (...)

  And /var/log/nova/nova-compute.log on the compute node gives the following 
ERROR message:
  Raw

  2017-11-23 19:53:21.021 153615 ERROR nova.compute.manager [req-2ad59cdf-4901-4df1-8bd7-ebaea20b9361 5d1785ee87294a6fad5e2b91cc20 8c307c08d2234b339c504bfdd896c13e - - -] [instance: 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc] Instance failed to spawn
  2017-11-23 19:53:21.021 153615 ERROR nova.compute.manager [instance: 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc] Traceback (most recent call last):
  2017-11-23 19:53:21.021 153615 ERROR nova.compute.manager [instance: 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2087, in _build_resources
  2017-11-23 19:53:21.021 153615 ERROR nova.compute.manager [instance: 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc] yield resources
  2017-11-23 19:53:21.021 153615 ERROR nova.compute.manager [instance: 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1928, in _build_and_run_instance
  2017-11-23 19:53:21.021 153615 ERROR nova.compute.manager [instance: 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc] block_device_info=block_device_info)
  2017-11-23 19:53:21.021 153615 ERROR nova.compute.manager [instance: 1b72e7a1-c298-4c92-8d2c-0a9fe886e9bc]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2674, in spawn
  2017-11-23 19

[Yahoo-eng-team] [Bug 1869034] Re: Postgresql periodic job failing every day

2020-03-31 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/715011
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=79e8230e39c5ba30ace3afe3d9f1cc15463ea9ad
Submitter: Zuul
Branch:master

commit 79e8230e39c5ba30ace3afe3d9f1cc15463ea9ad
Author: Rodolfo Alonso Hernandez 
Date:   Wed Mar 25 17:35:15 2020 +

Filter subnet by segment ID or None

In "_query_filter_by_fixed_ips_segment", the subnet query should be
filtered by segment ID if exists, or None otherwise.

The "segment_id" field (from the "subnet" DB register) is a string.
As reported in the related bug, PostgreSQL does not accept to compare
this field with a boolean value ("false"). This patch avoids the
previous situation where the DB WHERE statement was trying to compare
a string and a boolean:

  operator does not exist: character varying = boolean
  LINE 5: WHERE anon_1.subnets_segment_id = false
  No operator matches the given name and argument type(s). You might \
need to add explicit type casts.

Change-Id: I1ff29eb45c6663885c2b8a126a3669e75b920c98
Closes-Bug: #1869034
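
The essence of the fix, sketched with plain SQLAlchemy (simplified model and
helper, not the exact neutron change): compare the string column either to
the segment ID or to NULL, never to a boolean.

  # Simplified sketch - generic SQLAlchemy, not the neutron code itself.
  from sqlalchemy import Column, String, select
  from sqlalchemy.orm import declarative_base

  Base = declarative_base()

  class Subnet(Base):
      __tablename__ = 'subnets'
      id = Column(String(36), primary_key=True)
      segment_id = Column(String(36), nullable=True)

  def filter_by_segment(query, segment_id):
      # The failing query rendered "WHERE subnets_segment_id = false" when
      # there was no segment, which PostgreSQL rejects
      # ("character varying = boolean"). Filter by the ID or by NULL instead.
      if segment_id:
          return query.where(Subnet.segment_id == segment_id)
      return query.where(Subnet.segment_id.is_(None))

  print(filter_by_segment(select(Subnet), None))
  # ... WHERE subnets.segment_id IS NULL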


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1869034

Title:
  Postgresql periodic job failing every day

Status in neutron:
  Fix Released

Bug description:
  Periodic job "neutron-tempest-postgres-full" is failing every day since 
18.03.2020.
  Always same tests are red:
  
tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_slaac_subnet_with_ports
  
tempest.api.network.test_networks.NetworksIpV6TestAttrs.test_create_delete_stateless_subnet_with_ports

  Errors like
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_95c/periodic/opendev.org/openstack/neutron/master
  /neutron-tempest-postgres-full/95cb0e8/testr_results.html

  It may be that https://review.opendev.org/#/c/709444/ is the culprit of
  that issue, but we need to verify that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1869034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869967] [NEW] subiquity->cloud-init generates netplan yaml telling user not to edit it

2020-03-31 Thread Steve Langasek
Public bug reported:

As seen in , users who install with subiquity end up with a
/etc/cloud/cloud.cfg.d/50-curtin-networking.cfg that persists on the
target system, plus an /etc/netplan/50-cloud-init.yaml that tells users
not to edit it without taking steps to disable cloud-init.

I don't think this is what we want.  I think a subiquity install should
unambiguously treat cloud-init as a one-shot at installation, and leave
the user afterwards with config files that can be directly edited
without fear of cloud-init interfering; and the yaml files generated by
cloud-init on subiquity installs should therefore also not include this
scary language:

# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}

But we need to figure out how to fix this between subiquity and cloud-
init.

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Affects: subiquity
 Importance: Undecided
 Status: New

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1869967

Title:
  subiquity->cloud-init generates netplan yaml telling user not to edit
  it

Status in cloud-init:
  New
Status in subiquity:
  New

Bug description:
  As seen in , users who install with subiquity end up
  with a /etc/cloud/cloud.cfg.d/50-curtin-networking.cfg that persists
  on the target system, plus an /etc/netplan/50-cloud-init.yaml that
  tells users not to edit it without taking steps to disable cloud-init.

  I don't think this is what we want.  I think a subiquity install
  should unambiguously treat cloud-init as a one-shot at installation,
  and leave the user afterwards with config files that can be directly
  edited without fear of cloud-init interfering; and the yaml files
  generated by cloud-init on subiquity installs should therefore also
  not include this scary language:

  # This file is generated from information provided by the datasource.  Changes
  # to it will not persist across an instance reboot.  To disable cloud-init's
  # network configuration capabilities, write a file
  # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
  # network: {config: disabled}

  But we need to figure out how to fix this between subiquity and cloud-
  init.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1869967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869155] Re: When installing with subiquity, the generated network config uses the macaddress keyword on s390x (where MAC addresses are not necessarily stable across reboots)

2020-03-31 Thread Michael Hudson-Doyle
Pretty sure it's initramfs-tools that is putting the mac addresses in
the netplan. That probably needs to grow a little platform-dependent
behaviour around this.

** Also affects: initramfs-tools (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1869155

Title:
  When installing with subiquity, the generated network config uses the
  macaddress keyword on s390x (where MAC addresses are not necessarily
  stable across reboots)

Status in cloud-init:
  Incomplete
Status in subiquity:
  New
Status in Ubuntu on IBM z Systems:
  New
Status in initramfs-tools package in Ubuntu:
  New

Bug description:
  While performing a subiquity focal installation on an s390x LPAR (where the
  LPAR is connected to a VLAN trunk) I saw a section like this:
      match:
          macaddress: 02:28:0b:00:00:53
  So the macaddress keyword is used, but on several s390x machine generations
  MAC addresses are not necessarily stable and unique across reboots.
  (z14 GA2 and newer systems meanwhile have a modified firmware that ensures
  that MAC addresses are stable and unique across reboots, but for z14 GA1
  and older systems, incl. the z13 that I used, this is not the case - and a
  backport of the firmware modification is very unlikely.)

  The configuration that I found is this:

  $ cat /etc/netplan/50-cloud-init.yaml
  # This file is generated from information provided by the datasource. Changes
  # to it will not persist across an instance reboot. To disable cloud-init's
  # network configuration capabilities, write a file
  # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
  # network: {config: disabled}
  network:
      ethernets:
          enc600:
              addresses:
              - 10.245.236.26/24
              gateway4: 10.245.236.1
              match:
                  macaddress: 02:28:0b:00:00:53
              nameservers:
                  addresses:
                  - 10.245.236.1
              set-name: enc600
      version: 2

  (This is a spin-off of ticket LP 1868246.)

  It's understood that the initial idea for the MAC addresses was to have a
  unique identifier, but I think that with the right tooling (ip, ifconfig,
  ethtool or even the network-manager UI) you can even change MAC addresses
  today on other platforms.

  Nowadays interface names are based on their underlying physical
  device/address (here in this case '600', or to be precise '0600' -
  leading '0's are removed), which makes the interface and its name
  already quite unique - since it is not possible to have two devices
  (in one system) with the exact same address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1869155/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1870002] [NEW] The operating_status value of loadbalancer is abnormal

2020-03-31 Thread yanhua wei
Public bug reported:

Summary of problems:
1. One loadbalancer contains multiple pools and listeners; as long as the
operating_status of a single pool is ERROR, the operating_status of the
loadbalancer is ERROR.
2. The operating_status of a listener is inconsistent with that of its pools
and of the loadbalancer.

1. Loadbalancer contains multiple pools and listeners:

openstack loadbalancer show a6c134fa-eb05-47e3-b760-ae5ca7117996
+-+--+
| Field   | Value|
+-+--+
| admin_state_up  | True |
| created_at  | 2020-03-23T03:36:15  |
| description |  |
| flavor_id   | a3de1882-8ace-4df7-9979-ce11153f912c |
| id  | a6c134fa-eb05-47e3-b760-ae5ca7117996 |
| listeners   | c0759d56-4eb1-4404-a805-ff196c8fa9ad |
| | f6b8eee5-2a0a-4ff3-b339-be9489511f1c |
| | 0fbf23f7-5b3d-48d8-b417-e7b770fb949f |
| | 4b067982-2cb2-47cc-856b-ab65307f2ba5 |
| name| gengjie-lvs  |
| operating_status| ERROR|
| pools   | 3ba5de47-3276-4687-aa27-9344d348cdda |
| | 407eaff9-b90e-4cde-a254-04f3047b270f |
| | 73edd2f9-78ea-4cd6-a20f-d02664dd4654 |
| | bf07f027-9793-44e4-b307-495b3273a1ae |
| | d479dba7-a7d2-4631-8eb0-0300800708a2 |
| project_id  | bb0cfc0da1cd46f09ac3c3ed9781c6a8 |
| provider| amphora  |
| provisioning_status | ACTIVE   |
| updated_at  | 2020-04-01T02:07:43  |
| vip_address | 192.168.0.170|
| vip_network_id  | 3d22ec75-5b4e-43d7-86bd-480d07c0784b |
| vip_port_id | 518304bc-41d3-4ac6-bc5a-328c5c2a0674 |
| vip_qos_policy_id   | None |
| vip_subnet_id   | 2f55d6f6-ba8b-4390-8679-9338f94afe3e |
+-+--+
2. As long as the operating_status of a pool is ERROR, the operating_status
of the loadbalancer is ERROR

openstack loadbalancer pool show 3ba5de47-3276-4687-aa27-9344d348cdda
+--+--+
| Field| Value|
+--+--+
| admin_state_up   | True |
| created_at   | 2020-03-27T08:32:41  |
| description  |  |
| healthmonitor_id | d6a78953-a5a5-49dd-b780-e28c6bf9f16e |
| id   | 3ba5de47-3276-4687-aa27-9344d348cdda |
| lb_algorithm | LEAST_CONNECTIONS|
| listeners| c0759d56-4eb1-4404-a805-ff196c8fa9ad |
| loadbalancers| a6c134fa-eb05-47e3-b760-ae5ca7117996 |
| members  | 5cff4fa5-39c0-4f8b-8c9c-bfb53ea7d028 |
| name | ysy-test-01  |
| operating_status | ERROR|
| project_id   | bb0cfc0da1cd46f09ac3c3ed9781c6a8 |
| protocol | HTTP |
| provisioning_status  | ACTIVE   |
| session_persistence  | None |
| updated_at   | 2020-03-31T11:56:30  |
| tls_container_ref| None |
| ca_tls_container_ref | None |
| crl_container_ref| None |
| tls_enabled  | False|
+--+--+
openstack loadbalancer pool show 407eaff9-b90e-4cde-a254-04f3047b270f
+--+--+
| Field| Value|
+--+--+
| admin_state_up   | True |
| created_at   | 2020-03-30T07:18:09  |
| description  |  |
| healthmonitor_id | 348905c1-313b-42ff-8a77-dc298ea239c2 |
| id   | 407eaff9-b90e-4cde-a254-04f3047b270f |
| lb_algorithm | ROUND_ROBIN  |
| listeners| f6b8eee5-2a0a-4ff3-b339-be9489511f1c |
| loadbalancers| a6c134fa-eb05-47e3-b760-ae5ca7117996 |
| members  |  |
| name | geng |
| operating_status | ONLINE   |
| project_id   | bb0cfc0da1cd46f09ac3c3ed978

[Yahoo-eng-team] [Bug 1870002] [NEW] The operating_status value of loadbalancer is abnormal

2020-03-31 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Summary of problems:
1. One loadbalancer contains multiple pools and listeners; as long as the
operating_status of a single pool is ERROR, the operating_status of the
loadbalancer is ERROR.
2. The operating_status of a listener is inconsistent with that of its pools
and of the loadbalancer.

1. Loadbalancer contains multiple pools and listeners:

openstack loadbalancer show a6c134fa-eb05-47e3-b760-ae5ca7117996
+-+--+
| Field   | Value|
+-+--+
| admin_state_up  | True |
| created_at  | 2020-03-23T03:36:15  |
| description |  |
| flavor_id   | a3de1882-8ace-4df7-9979-ce11153f912c |
| id  | a6c134fa-eb05-47e3-b760-ae5ca7117996 |
| listeners   | c0759d56-4eb1-4404-a805-ff196c8fa9ad |
| | f6b8eee5-2a0a-4ff3-b339-be9489511f1c |
| | 0fbf23f7-5b3d-48d8-b417-e7b770fb949f |
| | 4b067982-2cb2-47cc-856b-ab65307f2ba5 |
| name| gengjie-lvs  |
| operating_status| ERROR|
| pools   | 3ba5de47-3276-4687-aa27-9344d348cdda |
| | 407eaff9-b90e-4cde-a254-04f3047b270f |
| | 73edd2f9-78ea-4cd6-a20f-d02664dd4654 |
| | bf07f027-9793-44e4-b307-495b3273a1ae |
| | d479dba7-a7d2-4631-8eb0-0300800708a2 |
| project_id  | bb0cfc0da1cd46f09ac3c3ed9781c6a8 |
| provider| amphora  |
| provisioning_status | ACTIVE   |
| updated_at  | 2020-04-01T02:07:43  |
| vip_address | 192.168.0.170|
| vip_network_id  | 3d22ec75-5b4e-43d7-86bd-480d07c0784b |
| vip_port_id | 518304bc-41d3-4ac6-bc5a-328c5c2a0674 |
| vip_qos_policy_id   | None |
| vip_subnet_id   | 2f55d6f6-ba8b-4390-8679-9338f94afe3e |
+-+--+
2. As long as the operating_status of a pool is ERROR, the operating_status
of the loadbalancer is ERROR

openstack loadbalancer pool show 3ba5de47-3276-4687-aa27-9344d348cdda
+--+--+
| Field| Value|
+--+--+
| admin_state_up   | True |
| created_at   | 2020-03-27T08:32:41  |
| description  |  |
| healthmonitor_id | d6a78953-a5a5-49dd-b780-e28c6bf9f16e |
| id   | 3ba5de47-3276-4687-aa27-9344d348cdda |
| lb_algorithm | LEAST_CONNECTIONS|
| listeners| c0759d56-4eb1-4404-a805-ff196c8fa9ad |
| loadbalancers| a6c134fa-eb05-47e3-b760-ae5ca7117996 |
| members  | 5cff4fa5-39c0-4f8b-8c9c-bfb53ea7d028 |
| name | ysy-test-01  |
| operating_status | ERROR|
| project_id   | bb0cfc0da1cd46f09ac3c3ed9781c6a8 |
| protocol | HTTP |
| provisioning_status  | ACTIVE   |
| session_persistence  | None |
| updated_at   | 2020-03-31T11:56:30  |
| tls_container_ref| None |
| ca_tls_container_ref | None |
| crl_container_ref| None |
| tls_enabled  | False|
+--+--+
openstack loadbalancer pool show 407eaff9-b90e-4cde-a254-04f3047b270f
+--+--+
| Field| Value|
+--+--+
| admin_state_up   | True |
| created_at   | 2020-03-30T07:18:09  |
| description  |  |
| healthmonitor_id | 348905c1-313b-42ff-8a77-dc298ea239c2 |
| id   | 407eaff9-b90e-4cde-a254-04f3047b270f |
| lb_algorithm | ROUND_ROBIN  |
| listeners| f6b8eee5-2a0a-4ff3-b339-be9489511f1c |
| loadbalancers| a6c134fa-eb05-47e3-b760-ae5ca7117996 |
| members  |  |
| name | geng |
| operating_status | ONLINE   |
| project_id   | bb0cfc

[Yahoo-eng-team] [Bug 1870002] Re: The operating_status value of loadbalancer is abnormal

2020-03-31 Thread yuanshuo
** Project changed: octavia => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1870002

Title:
  The operating_status value of loadbalancer is abnormal

Status in neutron:
  New

Bug description:
  Summary of problems:
  1. One loadbalancer contains multiple pools and listeners; as long as the
  operating_status of a single pool is ERROR, the operating_status of the
  loadbalancer is ERROR.
  2. The operating_status of a listener is inconsistent with that of its
  pools and of the loadbalancer.

  1. Loadbalancer contains multiple pools and listeners:

  openstack loadbalancer show a6c134fa-eb05-47e3-b760-ae5ca7117996
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | created_at  | 2020-03-23T03:36:15  |
  | description |  |
  | flavor_id   | a3de1882-8ace-4df7-9979-ce11153f912c |
  | id  | a6c134fa-eb05-47e3-b760-ae5ca7117996 |
  | listeners   | c0759d56-4eb1-4404-a805-ff196c8fa9ad |
  | | f6b8eee5-2a0a-4ff3-b339-be9489511f1c |
  | | 0fbf23f7-5b3d-48d8-b417-e7b770fb949f |
  | | 4b067982-2cb2-47cc-856b-ab65307f2ba5 |
  | name| gengjie-lvs  |
  | operating_status| ERROR|
  | pools   | 3ba5de47-3276-4687-aa27-9344d348cdda |
  | | 407eaff9-b90e-4cde-a254-04f3047b270f |
  | | 73edd2f9-78ea-4cd6-a20f-d02664dd4654 |
  | | bf07f027-9793-44e4-b307-495b3273a1ae |
  | | d479dba7-a7d2-4631-8eb0-0300800708a2 |
  | project_id  | bb0cfc0da1cd46f09ac3c3ed9781c6a8 |
  | provider| amphora  |
  | provisioning_status | ACTIVE   |
  | updated_at  | 2020-04-01T02:07:43  |
  | vip_address | 192.168.0.170|
  | vip_network_id  | 3d22ec75-5b4e-43d7-86bd-480d07c0784b |
  | vip_port_id | 518304bc-41d3-4ac6-bc5a-328c5c2a0674 |
  | vip_qos_policy_id   | None |
  | vip_subnet_id   | 2f55d6f6-ba8b-4390-8679-9338f94afe3e |
  +-+--+
  2. As long as the operating_status of a pool is ERROR, the operating_status
  of the loadbalancer is ERROR

  openstack loadbalancer pool show 3ba5de47-3276-4687-aa27-9344d348cdda
  +--+--+
  | Field| Value|
  +--+--+
  | admin_state_up   | True |
  | created_at   | 2020-03-27T08:32:41  |
  | description  |  |
  | healthmonitor_id | d6a78953-a5a5-49dd-b780-e28c6bf9f16e |
  | id   | 3ba5de47-3276-4687-aa27-9344d348cdda |
  | lb_algorithm | LEAST_CONNECTIONS|
  | listeners| c0759d56-4eb1-4404-a805-ff196c8fa9ad |
  | loadbalancers| a6c134fa-eb05-47e3-b760-ae5ca7117996 |
  | members  | 5cff4fa5-39c0-4f8b-8c9c-bfb53ea7d028 |
  | name | ysy-test-01  |
  | operating_status | ERROR|
  | project_id   | bb0cfc0da1cd46f09ac3c3ed9781c6a8 |
  | protocol | HTTP |
  | provisioning_status  | ACTIVE   |
  | session_persistence  | None |
  | updated_at   | 2020-03-31T11:56:30  |
  | tls_container_ref| None |
  | ca_tls_container_ref | None |
  | crl_container_ref| None |
  | tls_enabled  | False|
  +--+--+
  openstack loadbalancer pool show 407eaff9-b90e-4cde-a254-04f3047b270f
  +--+--+
  | Field| Value|
  +--+--+
  | admin_state_up   | True |
  | created_at   | 2020-03-30T07:18:09  |
  | description  |  |
  | healthmonitor_id | 348905c1-313b-42ff-8a77-dc298ea239c2 |
  | id   | 407eaff9-b90e-4cde-a254-04f3047b270f |
  | lb_

[Yahoo-eng-team] [Bug 1870002] Re: The operating_status value of loadbalancer is abnormal

2020-03-31 Thread Michael Johnson
Octavia tracks bugs and RFEs in the new OpenStack Storyboard and not launchpad.
https://storyboard.openstack.org/#!/project/openstack/octavia
Please open your bug in Storyboard for the Octavia team.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1870002

Title:
  The operating_status value of loadbalancer is abnormal

Status in neutron:
  Invalid

Bug description:
  Summary of problems:
  1. One loadbalancer contains multiple pools and listeners; as long as the
  operating_status of a single pool is ERROR, the operating_status of the
  loadbalancer is ERROR.
  2. The operating_status of a listener is inconsistent with that of its
  pools and of the loadbalancer.

  1. Loadbalancer contains multiple pools and listeners:

  openstack loadbalancer show a6c134fa-eb05-47e3-b760-ae5ca7117996
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | created_at  | 2020-03-23T03:36:15  |
  | description |  |
  | flavor_id   | a3de1882-8ace-4df7-9979-ce11153f912c |
  | id  | a6c134fa-eb05-47e3-b760-ae5ca7117996 |
  | listeners   | c0759d56-4eb1-4404-a805-ff196c8fa9ad |
  | | f6b8eee5-2a0a-4ff3-b339-be9489511f1c |
  | | 0fbf23f7-5b3d-48d8-b417-e7b770fb949f |
  | | 4b067982-2cb2-47cc-856b-ab65307f2ba5 |
  | name| gengjie-lvs  |
  | operating_status| ERROR|
  | pools   | 3ba5de47-3276-4687-aa27-9344d348cdda |
  | | 407eaff9-b90e-4cde-a254-04f3047b270f |
  | | 73edd2f9-78ea-4cd6-a20f-d02664dd4654 |
  | | bf07f027-9793-44e4-b307-495b3273a1ae |
  | | d479dba7-a7d2-4631-8eb0-0300800708a2 |
  | project_id  | bb0cfc0da1cd46f09ac3c3ed9781c6a8 |
  | provider| amphora  |
  | provisioning_status | ACTIVE   |
  | updated_at  | 2020-04-01T02:07:43  |
  | vip_address | 192.168.0.170|
  | vip_network_id  | 3d22ec75-5b4e-43d7-86bd-480d07c0784b |
  | vip_port_id | 518304bc-41d3-4ac6-bc5a-328c5c2a0674 |
  | vip_qos_policy_id   | None |
  | vip_subnet_id   | 2f55d6f6-ba8b-4390-8679-9338f94afe3e |
  +-+--+
  2. As long as the operating_status of a pool is ERROR, the operating_status
  of the loadbalancer is ERROR

  openstack loadbalancer pool show 3ba5de47-3276-4687-aa27-9344d348cdda
  +--+--+
  | Field| Value|
  +--+--+
  | admin_state_up   | True |
  | created_at   | 2020-03-27T08:32:41  |
  | description  |  |
  | healthmonitor_id | d6a78953-a5a5-49dd-b780-e28c6bf9f16e |
  | id   | 3ba5de47-3276-4687-aa27-9344d348cdda |
  | lb_algorithm | LEAST_CONNECTIONS|
  | listeners| c0759d56-4eb1-4404-a805-ff196c8fa9ad |
  | loadbalancers| a6c134fa-eb05-47e3-b760-ae5ca7117996 |
  | members  | 5cff4fa5-39c0-4f8b-8c9c-bfb53ea7d028 |
  | name | ysy-test-01  |
  | operating_status | ERROR|
  | project_id   | bb0cfc0da1cd46f09ac3c3ed9781c6a8 |
  | protocol | HTTP |
  | provisioning_status  | ACTIVE   |
  | session_persistence  | None |
  | updated_at   | 2020-03-31T11:56:30  |
  | tls_container_ref| None |
  | ca_tls_container_ref | None |
  | crl_container_ref| None |
  | tls_enabled  | False|
  +--+--+
  openstack loadbalancer pool show 407eaff9-b90e-4cde-a254-04f3047b270f
  +--+--+
  | Field| Value|
  +--+--+
  | admin_state_up   | True |
  | created_at   | 2020-03-30T07:18:09