[Yahoo-eng-team] [Bug 1861749] Re: Instance is under rebuilding status always when use 'arm64' architecture image

2020-03-12 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/711363
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=8dada6d0f66f6c53a78725fb54f6a8e5d36fb127
Submitter: Zuul
Branch: master

commit 8dada6d0f66f6c53a78725fb54f6a8e5d36fb127
Author: ericxiett 
Date:   Thu Mar 5 07:49:38 2020 +0800

Catch exception when use invalid architecture of image

Currently, when attempting to rebuild an instance with an image that has an
invalid architecture, the API raises a 500 error and leaves the instance stuck
in the REBUILDING task state. This patch validates the image's architecture
before updating the instance's task_state, catches
exception.InvalidArchitectureName, and returns HTTPBadRequest.

Change-Id: I25eff0271c856a8d3e83867b448e1dec6f6732ab
Closes-Bug: #1861749
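
A minimal sketch of the API-layer pattern the commit describes, assuming
nova's exception.InvalidArchitectureName and webob are available; the helper
name and the use of fields.Architecture.canonicalize() are illustrative
assumptions, not the actual patch:

    import webob.exc

    from nova import exception
    from nova.objects import fields


    def _validate_image_architecture(image):
        """Hypothetical helper: reject an image whose hw_architecture is
        invalid before the instance's task_state is set to REBUILDING."""
        arch = image.get('properties', {}).get('hw_architecture')
        if arch is None:
            return
        try:
            fields.Architecture.canonicalize(arch)
        except exception.InvalidArchitectureName as exc:
            # Surface a 400 instead of letting a ValueError bubble up as a 500.
            raise webob.exc.HTTPBadRequest(explanation=str(exc))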


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1861749

Title:
  Instance is under rebuilding status always when use 'arm64'
  architecture image

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===========
  Got 'Unexpected API Error' when using a new image with the property
  'hw_architecture=arm64' to rebuild an instance.
  The instance's status then stays in 'REBUILD' indefinitely.

  Steps to reproduce
  ==================
  1. On x86 env, boot instance 'test'
  2. Set new image with  'hw_architecture=arm64'
  openstack image set --property hw_architecture=arm64 cirros-test
  3. Use the image to rebuild instance
  openstack server rebuild --image cirros-test test

  Expected result
  ===============
  The API returns a BadRequest and the instance's status rolls back to ACTIVE.

  Actual result
  =============
  The API returns 'Unexpected API Error' and the instance's status stays in REBUILD.

  Environment
  ===========
  $ git log
  commit ee6af34437069a23284f4521330057a95f86f9b7 (HEAD -> stable/rocky, 
origin/stable/rocky)
  Author: Luigi Toscano 
  Date:   Wed Dec 18 00:28:15 2019 +0100

  Zuul v3: use devstack-plugin-nfs-tempest-full

  ... and replace its legacy ancestor.

  Change-Id: Ifd4387a02b3103e1258e146e63c73be1ad10030c
  (cherry picked from commit e7e39b8c2e20f5d7b5e70020f0e42541dc772e68)
  (cherry picked from commit e82e1704caa1c2baea29f05e8d426337e8de7a3c)
  (cherry picked from commit 99aa8ebc12949f9bba76f22e877b07d02791bf5b)

  Logs & Configs
  ==============
  # openstack server rebuild --image cirros-test sjt-test-1
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-383bbffc-5a85-40e7-86ff-ac7c8d563dfa)

  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi 
[req-383bbffc-5a85-40e7-86ff-ac7c8d563dfa 40e7b8c3d59943e08a52acd24fe30652 
d13f1690c08d41ac854d720ea510a710 - default default] Unexpected exception in API 
method: ValueError: Architecture name 'arm64' is not valid
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi Traceback (most 
recent call last):
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi   File 
"/var/lib/openstack/local/lib/python2.7/site-packages/nova/api/openstack/wsgi.py",
 line 801, in wrapped
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi return f(*args, 
**kwargs)
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi   File 
"/var/lib/openstack/local/lib/python2.7/site-packages/nova/api/validation/__init__.py",
 line 110, in wrapper
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi   File 
"/var/lib/openstack/local/lib/python2.7/site-packages/nova/api/validation/__init__.py",
 line 110, in wrapper
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi   File 
"/var/lib/openstack/local/lib/python2.7/site-packages/nova/api/validation/__init__.py",
 line 110, in wrapper
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi   File 
"/var/lib/openstack/local/lib/python2.7/site-packages/nova/api/validation/__init__.py",
 line 110, in wrapper
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi   File 
"/var/lib/openstack/local/lib/python2.7/site-packages/nova/api/validation/__init__.py",
 line 110, in wrapper
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi   File 
"/var/lib/openstack/local/lib/python2.7/site-packages/nova/api/validation/__init__.py",
 line 110, in wrapper
  2020-02-04 00:39:04.761 1 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
  2020

[Yahoo-eng-team] [Bug 1867214] [NEW] MTU too large error presented on create but not update

2020-03-12 Thread Nate Johnston
Public bug reported:

If an MTU is supplied when creating a network, it is rejected if it is above
global_physnet_mtu. If an MTU is supplied when updating a network, it is not
rejected even if the value is too large. When global_physnet_mtu is 1500 I can
easily set an MTU of 9000 or even beyond through an update. This is not valid.
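
A sketch of the kind of symmetric bound one would expect network update to
enforce as well; the helper name is illustrative and this is not neutron's
actual code, it only assumes neutron_lib.exceptions.InvalidInput:

    from neutron_lib import exceptions as n_exc


    def validate_network_mtu(requested_mtu, global_physnet_mtu):
        """Illustrative: apply the same MTU ceiling on update as on create."""
        if requested_mtu and requested_mtu > global_physnet_mtu:
            raise n_exc.InvalidInput(
                error_message="Requested MTU %d is larger than "
                              "global_physnet_mtu %d"
                              % (requested_mtu, global_physnet_mtu))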

~~~
(overcloud) [stack@undercloud-0 ~]$ openstack network show private1
+---------------------------+----------------------------------------+
| Field                     | Value                                  |
+---------------------------+----------------------------------------+
| admin_state_up            | UP                                     |
| availability_zone_hints   |                                        |
| availability_zones        | nova                                   |
| created_at                | 2020-03-09T15:55:38Z                   |
| description               |                                        |
| dns_domain                | None                                   |
| id                        | bffac18a-ceaa-4eeb-9a19-800de150def5   |
| ipv4_address_scope        | None                                   |
| ipv6_address_scope        | None                                   |
| is_default                | False                                  |
| is_vlan_transparent       | None                                   |
| mtu                       | 1500                                   |
| name                      | private1                               |
| port_security_enabled     | True                                   |
| project_id                | d69c1c6601c741deaa205fa1a7e9c632       |
| provider:network_type     | vlan                                   |
| provider:physical_network | tenant                                 |
| provider:segmentation_id  | 106                                    |
| qos_policy_id             | None                                   |
| revision_number           | 8                                      |
| router:external           | External                               |
| segments                  | None                                   |
| shared                    | True                                   |
| status                    | ACTIVE                                 |
| subnets                   | 51fc6508-313f-41c4-839c-bcbe2fa8795d,  |
|                           | 7b6fcbe1-b064-4660-b04a-e433ab18ba73   |
| tags                      |                                        |
| updated_at                | 2020-03-09T15:56:41Z                   |
+---------------------------+----------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ openstack network set private1 --mtu 9000
(overcloud) [stack@undercloud-0 ~]$ openstack network set private1 --mtu 9500
(overcloud) [stack@undercloud-0 ~]$ openstack network show private1

+---------------------------+----------------------------------------+
| Field                     | Value                                  |
+---------------------------+----------------------------------------+
| admin_state_up            | UP                                     |
| availability_zone_hints   |                                        |
| availability_zones        | nova                                   |
| created_at                | 2020-03-09T15:55:38Z                   |
| description               |                                        |
| dns_domain                | None                                   |
| id                        | bffac18a-ceaa-4eeb-9a19-800de

[Yahoo-eng-team] [Bug 1867205] [NEW] Reload functionality is broken in Ussuri

2020-03-12 Thread Abhishek Kekane
Public bug reported:

Reloading the config files doesn't work upon sending a SIGHUP signal to the
glance-api service's parent process; instead the parent terminates and the
child processes are left orphaned.
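
A generic sketch of the SIGHUP reload pattern being described, not glance's
actual implementation: the parent should re-read its configuration (and
respawn workers) rather than terminate and orphan its children.

    import signal


    class ParentServer(object):
        """Illustrative parent process that reloads on SIGHUP."""

        def __init__(self, load_config):
            self._load_config = load_config
            signal.signal(signal.SIGHUP, self._on_sighup)

        def _on_sighup(self, signum, frame):
            # Re-read configuration; a real server would also restart its
            # worker processes gracefully here instead of exiting.
            self._load_config()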

** Affects: glance
 Importance: High
 Status: New

** Changed in: glance
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1867205

Title:
  Reload functionality is broken in Ussuri

Status in Glance:
  New

Bug description:
  Reloading the config files doesn't work upon sending a SIGHUP signal to the
  glance-api service's parent process; instead the parent terminates and the
  child processes are left orphaned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1867205/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1862343] Re: compiled messages not shipped in packaging resulting in missing translations

2020-03-12 Thread Corey Bryant
This bug was fixed in the package horizon - 
3:18.0.1~git2020021409.bb959361b-0ubuntu3~cloud0
---

 horizon (3:18.0.1~git2020021409.bb959361b-0ubuntu3~cloud0) bionic-ussuri; 
urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 horizon (3:18.0.1~git2020021409.bb959361b-0ubuntu3) focal; urgency=medium
 .
   * d/rules: Force regeneration of SOURCES.txt to ensure that generated
 django{js}.mo files are included in binary packages, resolving issues
 with translations (LP: #1862343).


** Changed in: cloud-archive
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1862343

Title:
  compiled messages not shipped in packaging resulting in missing
  translations

Status in OpenStack openstack-dashboard charm:
  Invalid
Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive rocky series:
  New
Status in Ubuntu Cloud Archive stein series:
  New
Status in Ubuntu Cloud Archive train series:
  New
Status in Ubuntu Cloud Archive ussuri series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in horizon package in Ubuntu:
  Fix Released
Status in horizon source package in Eoan:
  New
Status in horizon source package in Focal:
  Fix Released

Bug description:
  I changed the language in GUI to French but interface stays mostly
  English. Just a few strings are displayed in French, e.g.:

  - "Password" ("Mot de passe") on the login screen,
  - units "GB", "TB" as "Gio" and "Tio" in Compute Overview,
  - "New password" ("Noveau mot de passe") in User Settings.

  All other strings are in English.

  See screenshots attached.

  This is the Stein on Ubuntu Bionic deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1862343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1867202] [NEW] Directory nonexistent(root home) error in ec2 instance userdata

2020-03-12 Thread Shay Keren
Public bug reported:

AWS EC2 instance
Ubuntu

Error while trying to create a file under the root user's home directory.
This error occurs if the userdata interpreter is #!/bin/sh; it works fine if I
run the script manually.

See attached logs for more details.

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "cloud-init.tar.gz"
   
https://bugs.launchpad.net/bugs/1867202/+attachment/5336288/+files/cloud-init.tar.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1867202

Title:
  Directory nonexistent(root home) error in ec2 instance userdata

Status in cloud-init:
  New

Bug description:
  AWS EC2 instance
  Ubuntu

  Error while trying to create a file under the root user's home directory.
  This error occurs if the userdata interpreter is #!/bin/sh; it works fine if
  I run the script manually.

  See attached logs for more details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1867202/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863058] Re: Arm64 CI for Nova

2020-03-12 Thread Stephen Finucane
Awesome! Could you bring this up on openstack-discuss [1]? It's far more
likely to get eyes (and volunteers to help with issues you'll have)
there.

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-
discuss

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1863058

Title:
  Arm64 CI for Nova

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Linaro has donated a cluster for OpenStack CI on Arm64.
  Now the cluster is ready:
  https://opendev.org/openstack/project-config/src/branch/master/nodepool/nl03.openstack.org.yaml#L414

  We'd like to setup CI for Nova first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1863058/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864279] Re: Unable to attach more than 6 scsi volumes

2020-03-12 Thread Stephen Finucane
Looks like it's been fixed on RHEL 7.7 too [1]. If you're on a different
OS, I'd suggest opening a bug against the libvirt component for same and
requesting a backport. I don't think there's much to do here from a nova
perspective.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1741782

** Bug watch added: Red Hat Bugzilla #1741782
   https://bugzilla.redhat.com/show_bug.cgi?id=1741782

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1864279

Title:
  Unable to attach more than 6 scsi volumes

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  A SCSI volume with unit number 7 cannot be attached because of this
  libvirt check:
  https://github.com/libvirt/libvirt/blob/89237d534f0fe950d06a2081089154160c6c2224/src/conf/domain_conf.c#L4796

  Nova automatically increases the volume unit number by 1, and when I attach
  the 7th volume to a VM I get this error:
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver 
[req-156a4725-279d-4173-9f11-85125e4a3e47] [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f] Failed to attach volume at mountpoint: 
/dev/sdh: libvirt.libvirtError: Requested operation is not valid: Domain 
already contains a disk with that address
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f] Traceback (most recent call last):
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f]   File 
"/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 1810, in 
attach_volume
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f] guest.attach_device(conf, 
persistent=True, live=live)
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f]   File 
"/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 305, in 
attach_device
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f] 
self._domain.attachDeviceFlags(device_xml, flags=flags)
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f]   File 
"/usr/lib/python3/dist-packages/eventlet/tpool.py", line 190, in doit
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f]   File 
"/usr/lib/python3/dist-packages/eventlet/tpool.py", line 148, in proxy_call
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f] rv = execute(f, *args, **kwargs)
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f]   File 
"/usr/lib/python3/dist-packages/eventlet/tpool.py", line 129, in execute
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f] six.reraise(c, e, tb)
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f]   File 
"/usr/lib/python3/dist-packages/six.py", line 693, in reraise
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f] raise value
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f]   File 
"/usr/lib/python3/dist-packages/eventlet/tpool.py", line 83, in tworker
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f] rv = meth(*args, **kwargs)
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f]   File 
"/usr/lib/python3/dist-packages/libvirt.py", line 605, in attachDeviceFlags
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f] if ret == -1: raise libvirtError 
('virDomainAttachDeviceFlags() failed', dom=self)
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f] libvirt.libvirtError: Requested operation 
is not valid: Domain already contains a disk with that address
  2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 
3532baf6-a0a4-4a81-84f9-3622c713435f]

  After patching the libvirt driver to skip unit 7, I can attach more than 6
  volumes.
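
  A sketch of the workaround described above; the helper name is hypothetical
  and this is not nova's actual code, it only illustrates skipping the unit
  number that libvirt rejects on the SCSI controller:

      def next_free_scsi_unit(used_units):
          """Pick the lowest free unit number on a SCSI controller,
          skipping unit 7, which libvirt rejects for disks."""
          unit = 0
          while unit in used_units or unit == 7:
              unit += 1
          return unit


      # With the root disk at unit 0 and six volumes at units 1-6, the next
      # volume gets unit 8 instead of the rejected unit 7.
      assert next_free_scsi_unit({0, 1, 2, 3, 4, 5, 6}) == 8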

  
  ii  nova-compute  2:20.0.0-0ubuntu1~cloud0
  ii  nova-compute-kvm  2:20.0.0-0ubuntu1~cloud0
  ii  nova-compute-libvirt  2:20.0.0-0ubuntu1~cloud0
  ii 

[Yahoo-eng-team] [Bug 1867197] [NEW] ec2: add support for rendering public IP addresses in network config

2020-03-12 Thread Chad Smith
Public bug reported:

cloud-init 20.1 currently renders only local ip address information in
the network configuration emitted for attached NICs.

Ec2 instances can be launched and configured with multiple public IP
addresses

https://aws.amazon.com/blogs/aws/multiple-ip-addresses-for-ec2
-instances-in-a-virtual-private-cloud/


Ec2 IMDS network data exposes public IPv4 addresses in the following metadata 
keys:
*  network.interfaces.macs..public-ipv4s =  "18.218.219.181"
* and an association of public ip to the private IPs allocated via the key
 "ipv4-associations": {
"18.218.219.181": "172.31.44.13"
  },


Cloud-init should render the known public IPv4 address configuration on the
node to avoid network round trips for local services that may connect to the
VM's known public IP.
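
A sketch of how the IMDS keys above could be mapped for one NIC; the metadata
shape and helper name are illustrative assumptions, not cloud-init's actual
datasource code:

    def public_ipv4s_for_nic(macs_metadata, mac):
        """Illustrative: pair each public IPv4 on a NIC with the private
        address it is associated to, using EC2 IMDS-style metadata."""
        nic_md = macs_metadata.get(mac, {})
        public_ips = nic_md.get('public-ipv4s', [])
        if isinstance(public_ips, str):
            public_ips = [public_ips]
        associations = nic_md.get('ipv4-associations', {})
        return [(pub, associations.get(pub)) for pub in public_ips]


    # Example shaped like the metadata quoted in the report:
    macs = {'0a:00:00:00:00:01': {
        'public-ipv4s': '18.218.219.181',
        'ipv4-associations': {'18.218.219.181': '172.31.44.13'}}}
    print(public_ipv4s_for_nic(macs, '0a:00:00:00:00:01'))
    # -> [('18.218.219.181', '172.31.44.13')]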

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1867197

Title:
  ec2: add support for rendering public IP addresses in network config

Status in cloud-init:
  New

Bug description:
  cloud-init 20.1 currently renders only local ip address information in
  the network configuration emitted for attached NICs.

  Ec2 instances can be launched and configured with multiple public IP
  addresses

  https://aws.amazon.com/blogs/aws/multiple-ip-addresses-for-ec2
  -instances-in-a-virtual-private-cloud/

  
  Ec2 IMDS network data exposes public IPv4 addresses in the following metadata 
keys:
  *  network.interfaces.macs..public-ipv4s =  "18.218.219.181"
  * and an association of public ip to the private IPs allocated via the key
   "ipv4-associations": {
  "18.218.219.181": "172.31.44.13"
    },

  
  Cloud-init should render the known public IPv4 address configuration on the
  node to avoid network round trips for local services that may connect to the
  VM's known public IP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1867197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863757] Re: Insufficient memory for guest pages when using NUMA

2020-03-12 Thread Stephen Finucane
*** This bug is a duplicate of bug 1734204 ***
https://bugs.launchpad.net/bugs/1734204

Yes, this has been resolved since Stein as noted at 1734204.
Unfortunately Queens is in Extended Maintenance and we no longer release
new versions, so this is not likely to be fixed there.

** This bug has been marked a duplicate of bug 1734204
Insufficient free host memory pages available to allocate guest RAM with 
Open vSwitch DPDK in Newton

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1863757

Title:
  Insufficient memory for guest pages when using NUMA

Status in OpenStack Compute (nova):
  New

Bug description:
  This is a Queens / Bionic openstack deploy.

  Compute nodes are using hugepages for nova instances (reserved at boot
  time):

  root@compute1:~# cat /proc/meminfo | grep -i huge
  AnonHugePages: 0 kB
  ShmemHugePages:0 kB
  HugePages_Total: 332
  HugePages_Free:  184
  HugePages_Rsvd:0
  HugePages_Surp:0
  Hugepagesize:1048576 kB

  There are two numa nodes, as follows:

  root@compute1:~# lscpu | grep -i numa
  NUMA node(s):2
  NUMA node0 CPU(s):   0-19,40-59
  NUMA node1 CPU(s):   20-39,60-79

  Compute nodes are using DPDK, and memory for it has been reserved with
  the following directive:

  reserved-huge-pages: "node:0,size:1GB,count:8;node:1,size:1GB,count:8"

  A number of instances have already been created on node "compute1",
  until the point that current memory usage is as follows:

  root@compute1:~# cat /sys/devices/system/node/node*/meminfo  | grep -i huge
  Node 0 AnonHugePages: 0 kB
  Node 0 ShmemHugePages:0 kB
  Node 0 HugePages_Total:   166
  Node 0 HugePages_Free: 26
  Node 0 HugePages_Surp:  0
  Node 1 AnonHugePages: 0 kB
  Node 1 ShmemHugePages:0 kB
  Node 1 HugePages_Total:   166
  Node 1 HugePages_Free:158
  Node 1 HugePages_Surp:  0

  Problem:

  When a new instance is created (8 cores and 32gb ram), nova tries to
  schedule it on numa node 0 and fails with "Insufficient free host
  memory pages available to allocate guest RAM", even though there is
  enough memory available on numa node 1.

  This behavior has been seen by other users here as well (although the
  solution on that bug seems to be more of a coincidence than a proper
  solution -- it was then classified as not a bug, which I don't believe is
  the case):

  https://bugzilla.redhat.com/show_bug.cgi?id=1517004

  Flavor being used has nothing special except a property for
  hw:mem_page_size='large'.

  The instance is being forced to be created on "zone1::compute1"; otherwise
  there is no pinning of CPUs or other resources. Forcing the VM onto node0
  appears to be nova's own decision when instantiating it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1863757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864422] Re: can the instance supports online keys updates?

2020-03-12 Thread Stephen Finucane
To the best of my knowledge, this is not currently supported. We only
support it for rebuild [1], which is a destructive operation.

[1] https://specs.openstack.org/openstack/nova-
specs/specs/queens/implemented/rebuild-keypair-reset.html

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1864422

Title:
  can the instance supports online keys updates?

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
    As a tenant, the private key of a keypair may be lost, and the user needs
    to update the keys without interrupting the workload or stopping the
    instance.

    Do you have any suggestions or ideas?

    Thank you very much.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1864422/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864160] Re: mete_date shows region information

2020-03-12 Thread Stephen Finucane
Please bring questions and support requests to either the openstack-discuss
mailing list or the #openstack-nova IRC channel.

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Status: Opinion => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1864160

Title:
  mete_date shows region information

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  I want to show region_name in meta_data. Do you have any suggestions or
  ideas?

  In the instance, executing 'curl
  http://169.254.169.254/openstack/2013-04-04/meta_data.json' should display
  the information of the current instance's region (region_name). Do you have
  any good ideas?

Thank you very much.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1864160/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1866288] Re: tox pep8 fails on ubuntu 18.04.3

2020-03-12 Thread Stephen Finucane
Yup, Rocky was tested on Xenial (16.04), not Bionic (18.04). Bionic
doesn't provide a suitable Python interpreter for this older code. This
is expected.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1866288

Title:
  tox pep8 fails on ubuntu 18.04.3

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  pep8 checking fails for rocky branch on ubuntu 18.04.3

  root@mgt02:~/src/nova# tox -epep8 -vvv
    removing /root/src/nova/.tox/log
  using tox.ini: /root/src/nova/tox.ini
  using tox-3.1.0 from /usr/local/lib/python2.7/dist-packages/tox/__init__.pyc
  skipping sdist step
  pep8 start: getenv /root/src/nova/.tox/shared
  pep8 recreate: /root/src/nova/.tox/shared
  ERROR: InterpreterNotFound: python3.5
  pep8 finish: getenv after 0.00 seconds
  
  ______________________________________ summary ______________________________________
  ERROR:  pep8: InterpreterNotFound: python3.5

  
  root@mgt02:~/src/nova# uname -a
  Linux mgt02 4.15.0-88-generic #88-Ubuntu SMP Tue Feb 11 20:11:34 UTC 2020 
x86_64 x86_64 x86_64 GNU/Linux

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1866288/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1861460] Re: cloud-init should parse initramfs rendered netplan if present

2020-03-12 Thread Frank Heimes
** Also affects: ubuntu-z-systems
   Importance: Undecided
   Status: New

** Changed in: ubuntu-z-systems
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1861460

Title:
  cloud-init should parse initramfs rendered netplan if present

Status in cloud-init:
  In Progress
Status in Ubuntu on IBM z Systems:
  In Progress

Bug description:
  initramfs-tools used to only execute klibc based networking with some
  resolvconf hooks.

  In recent releases, it has been greatly improved to use
  isc-dhcp-client instead of klibc, support vlan= key (like in
  dracut-network), bring up Z devices using chzdev, and generate netplan
  yaml from all of the above.

  Above improvements were driven in part by Oracle Cloud and in part by
  Subiquity netbooting on Z.

  Thus these days, instead of trying to reparse the klibc files in
  /run/net-*, cloud-init should simply import the /run/netplan/$device.yaml
  files as the networking information provided via ip=* on the command line.
  I do not currently see cloud-init doing that in e.g.
  /cloudinit/net/cmdline.py
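
  A sketch of the suggested behaviour, assuming PyYAML is available; this is
  only an illustration of importing /run/netplan/*.yaml, not cloud-init's
  actual cmdline.py code:

      import glob

      import yaml


      def read_initramfs_netplan(run_dir='/run/netplan'):
          """Illustrative: merge any netplan YAML rendered by the initramfs
          instead of reparsing klibc-style /run/net-* files."""
          merged = {}
          for path in sorted(glob.glob('%s/*.yaml' % run_dir)):
              with open(path) as fh:
                  data = yaml.safe_load(fh) or {}
              merged.update(data.get('network', {}))
          return {'network': merged} if merged else None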

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1861460/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1867151] [NEW] Can't run tests on CentOS 8 (missing py3 dependencies)

2020-03-12 Thread Paride Legovini
Public bug reported:

The cloud-init test environment can't be set up on CentOS 8 due to
missing Python 3 dependencies. The missing libraries are at least the
following:

 - contextlib2
 - httpretty
 - unittest2

Of these unittest2 is available in the PowerTools repository as
python3-unittest2, the other two are not. The PowerTools repository is
currently added to the test system by the test helper scripts.

For this reason the unit testing and CI testing is currently disabled on
CentOS 8.

** Affects: cloud-init
 Importance: Undecided
 Status: Triaged

** Changed in: cloud-init
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1867151

Title:
  Can't run tests on CentOS 8 (missing py3 dependencies)

Status in cloud-init:
  Triaged

Bug description:
  The cloud-init test environment can't be set up on CentOS 8 due to
  missing Python 3 dependencies. The missing libraries are at least the
  following:

   - contextlib2
   - httpretty
   - unittest2

  Of these unittest2 is available in the PowerTools repository as
  python3-unittest2, the other two are not. The PowerTools repository is
  currently added to the test system by the test helper scripts.

  For this reason the unit testing and CI testing is currently disabled
  on CentOS 8.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1867151/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1855708] Re: Reload tests broken in Py3

2020-03-12 Thread Erno Kuvaja
** Also affects: glance/train
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1855708

Title:
  Reload tests broken in Py3

Status in Glance:
  Triaged
Status in Glance train series:
  New

Bug description:
  Looks like our reload is also broken in Py3, which has gone unnoticed as
  the test breakage was blamed on ssl.

  The test fails with a timeout at the worker change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1855708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1867119] Re: [security] Add allowed-address-pair 0.0.0.0/0 to one port will open all others' protocol under same security group

2020-03-12 Thread LIU Yulong
** This bug is no longer a duplicate of bug 1793029
   adding 0.0.0.0/0 address pair to a port  bypasses all other vm security 
groups

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1867119

Title:
  [security] Add allowed-address-pair 0.0.0.0/0 to one port will open
  all others' protocol under same security group

Status in neutron:
  In Progress

Bug description:
  [security] Add allowed-address-pair 0.0.0.0/0 to one port will open
  all others' protocol under same security group

  When allowed-address-pair 0.0.0.0/0 is added to one port, it unexpectedly
  opens all protocols on all other ports under the same security group. First
  found in stable/queens, but also confirmed on the master branch.
  IPv6 has the same problem!

  Devstack test config:
  [DEFAULT]
  [l2pop]
  [ml2]
  type_drivers = flat,gre,vlan,vxlan
  tenant_network_types = vxlan
  extension_drivers = port_security,qos
  mechanism_drivers = openvswitch,l2population

  [ml2_type_vxlan]
  vni_ranges = 1:1

  [securitygroup]
  firewall_driver = openvswitch
  [ovs]
  local_ip = 10.0.5.10

  [agent]
  tunnel_types = vxlan
  l2_population = True
  arp_responder = True
  enable_distributed_routing = True
  extensions = qos

  
  Step to reproduce:
  1. Assuming you have following VMs
  | 24231705-ee79-4643-ae5a-9f0f7ff8f8ba | dvr-ha-vm-2  | ACTIVE | dvr-ha=192.168.30.44, 172.16.12.220  | cirros | nano   |
  | 4865d216-9f95-40bf-a6b4-221e3af06798 | dvr-ha-vm-1  | ACTIVE | dvr-ha=192.168.30.64, 172.16.13.52   | cirros | nano   |

  $ nova interface-list 4865d216-9f95-40bf-a6b4-221e3af06798
  +------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
  | Port State | Port ID                              | Net ID                               | IP addresses  | MAC Addr          | Tag |
  +------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
  | ACTIVE     | b333b1ca-bb9a-41fd-a878-b524ffbc6d7a | a9e82560-f1ac-4909-9afa-686b57df62fa | 192.168.30.64 | fa:16:3e:12:66:05 | -   |
  +------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
  $ nova interface-list 24231705-ee79-4643-ae5a-9f0f7ff8f8ba
  +------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
  | Port State | Port ID                              | Net ID                               | IP addresses  | MAC Addr          | Tag |
  +------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
  | ACTIVE     | 93197f48-3fe4-47f4-9d15-ba8728c00409 | a9e82560-f1ac-4909-9afa-686b57df62fa | 192.168.30.44 | fa:16:3e:14:ff:f1 | -   |
  +------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+

  2. Security group rules
  $ openstack security group rule list 535018b5-7038-46f2-8f0e-2a6e193788aa --long|grep ingress
  | 01015261-0ca3-49ad-b033-bc2036a58e26 | tcp  | IPv4 | 0.0.0.0/0 | 22:22 | ingress | None                                 |
  | 36441851-7bd2-4680-be43-2f8119b65040 | icmp | IPv4 | 0.0.0.0/0 |       | ingress | None                                 |
  | 8326f59e-cf26-4372-9913-30c71c036a2e | None | IPv6 | ::/0      |       | ingress | 535018b5-7038-46f2-8f0e-2a6e193788aa |
  | e47c6731-a0f7-42aa-8125-a9810e7b5a17 | None | IPv4 | 0.0.0.0/0 |       | ingress | 535018b5-7038-46f2-8f0e-2a6e193788aa |

  3. Start a nc test server in dvr-ha-vm-2
  # nc -l -p 8000

  4. Try to curl that dvr-ha-vm-2 port 8000 in the outside world
  $ curl http://172.16.12.220:8000/index.html
  curl: (7) Failed connect to 172.16.12.220:8000; Connection timed out

  5. Add allowed address pair 0.0.0.0/0 to dvr-ha-vm-1
  openstack port set  --allowed-address ip-address=0.0.0.0/0  
b333b1ca-bb9a-41fd-a878-b524ffbc6d7a

  6. Try to curl that dvr-ha-vm-2 port 8000 again
  It is connected!!!

  # nc -l -p 8000 
  GET /index.html HTTP/1.1
  User-Agent: curl/7.29.0
  Host: 172.16.12.220:8000
  Accept: */*

  asdfasdf
  asdfasdf

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1867119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1867122] [NEW] Unnecessary network flapping while update floatingip without port or fixed ip changed

2020-03-12 Thread Taoyunxiang
Public bug reported:

When a FIP is updated without its port or fixed IP changing (for example,
updating its description via the neutron API, or associating the same
floating IP and fixed IP more than once), the old NAT entry is deleted and a
new one is added [0]. This makes the network flap.

Because the FIP and fixed IP are not changed, there is no need to update the
NBDB table, and the network flap should be avoided.



[0]https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L1089
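
A sketch of the guard suggested above; the function and field names are
illustrative assumptions, not the actual ovn_client.py code:

    def fip_needs_nat_update(old_fip, new_fip):
        """Illustrative: only touch the OVN NB NAT entry when the floating
        IP's association actually changed; a description-only update should
        not delete and re-add the NAT rule."""
        relevant = ('port_id', 'fixed_ip_address', 'floating_ip_address')
        return any(old_fip.get(k) != new_fip.get(k) for k in relevant)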

** Affects: neutron
 Importance: Undecided
 Assignee: Taoyunxiang (taoyunxiang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Taoyunxiang (taoyunxiang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1867122

Title:
  Unnecessary network flapping while update floatingip without port or
  fixed ip changed

Status in neutron:
  New

Bug description:
  When a FIP is updated without its port or fixed IP changing (for example,
  updating its description via the neutron API, or associating the same
  floating IP and fixed IP more than once), the old NAT entry is deleted and a
  new one is added [0]. This makes the network flap.

  Because the FIP and fixed IP are not changed, there is no need to update the
  NBDB table, and the network flap should be avoided.



  
  
[0]https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L1089

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1867122/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1867119] Re: [security] Add allowed-address-pair 0.0.0.0/0 to one port will open all others' protocol under same security group

2020-03-12 Thread Slawek Kaplonski
*** This bug is a duplicate of bug 1793029 ***
https://bugs.launchpad.net/bugs/1793029

** This bug has been marked a duplicate of bug 1793029
   adding 0.0.0.0/0 address pair to a port  bypasses all other vm security 
groups

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1867119

Title:
  [security] Add allowed-address-pair 0.0.0.0/0 to one port will open
  all others' protocol under same security group

Status in neutron:
  In Progress

Bug description:
  [security] Add allowed-address-pair 0.0.0.0/0 to one port will open
  all others' protocol under same security group

  When allowed-address-pair 0.0.0.0/0 is added to one port, it unexpectedly
  opens all protocols on all other ports under the same security group. First
  found in stable/queens, but also confirmed on the master branch.
  IPv6 has the same problem!

  Devstack test config:
  [DEFAULT]
  [l2pop]
  [ml2]
  type_drivers = flat,gre,vlan,vxlan
  tenant_network_types = vxlan
  extension_drivers = port_security,qos
  mechanism_drivers = openvswitch,l2population

  [ml2_type_vxlan]
  vni_ranges = 1:1

  [securitygroup]
  firewall_driver = openvswitch
  [ovs]
  local_ip = 10.0.5.10

  [agent]
  tunnel_types = vxlan
  l2_population = True
  arp_responder = True
  enable_distributed_routing = True
  extensions = qos

  
  Step to reproduce:
  1. Assuming you have following VMs
  | 24231705-ee79-4643-ae5a-9f0f7ff8f8ba | dvr-ha-vm-2  | ACTIVE | dvr-ha=192.168.30.44, 172.16.12.220  | cirros | nano   |
  | 4865d216-9f95-40bf-a6b4-221e3af06798 | dvr-ha-vm-1  | ACTIVE | dvr-ha=192.168.30.64, 172.16.13.52   | cirros | nano   |

  $ nova interface-list 4865d216-9f95-40bf-a6b4-221e3af06798
  +------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
  | Port State | Port ID                              | Net ID                               | IP addresses  | MAC Addr          | Tag |
  +------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
  | ACTIVE     | b333b1ca-bb9a-41fd-a878-b524ffbc6d7a | a9e82560-f1ac-4909-9afa-686b57df62fa | 192.168.30.64 | fa:16:3e:12:66:05 | -   |
  +------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
  $ nova interface-list 24231705-ee79-4643-ae5a-9f0f7ff8f8ba
  +------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
  | Port State | Port ID                              | Net ID                               | IP addresses  | MAC Addr          | Tag |
  +------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
  | ACTIVE     | 93197f48-3fe4-47f4-9d15-ba8728c00409 | a9e82560-f1ac-4909-9afa-686b57df62fa | 192.168.30.44 | fa:16:3e:14:ff:f1 | -   |
  +------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+

  2. Security group rules
  $ openstack security group rule list 535018b5-7038-46f2-8f0e-2a6e193788aa --long|grep ingress
  | 01015261-0ca3-49ad-b033-bc2036a58e26 | tcp  | IPv4 | 0.0.0.0/0 | 22:22 | ingress | None                                 |
  | 36441851-7bd2-4680-be43-2f8119b65040 | icmp | IPv4 | 0.0.0.0/0 |       | ingress | None                                 |
  | 8326f59e-cf26-4372-9913-30c71c036a2e | None | IPv6 | ::/0      |       | ingress | 535018b5-7038-46f2-8f0e-2a6e193788aa |
  | e47c6731-a0f7-42aa-8125-a9810e7b5a17 | None | IPv4 | 0.0.0.0/0 |       | ingress | 535018b5-7038-46f2-8f0e-2a6e193788aa |

  3. Start a nc test server in dvr-ha-vm-2
  # nc -l -p 8000

  4. Try to curl that dvr-ha-vm-2 port 8000 in the outside world
  $ curl http://172.16.12.220:8000/index.html
  curl: (7) Failed connect to 172.16.12.220:8000; Connection timed out

  5. Add allowed address pair 0.0.0.0/0 to dvr-ha-vm-1
  openstack port set  --allowed-address ip-address=0.0.0.0/0  
b333b1ca-bb9a-41fd-a878-b524ffbc6d7a

  6. Try to curl that dvr-ha-vm-2 port 8000 again
  It is connected!!!

  # nc -l -p 8000 
  GET /index.html HTTP/1.1
  User-Agent: curl/7.29.0
  Host: 172.16.12.220:8000
  Accept: */*

  asdfasdf
  asdfasdf

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1867119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1867119] [NEW] [security] Add allowed-address-pair 0.0.0.0/0 to one port will open all others' protocol under same security group

2020-03-12 Thread LIU Yulong
Public bug reported:

[security] Add allowed-address-pair 0.0.0.0/0 to one port will open all
others' protocol under same security group

When allowed-address-pair 0.0.0.0/0 is added to one port, it unexpectedly opens
all protocols on all other ports under the same security group. First found in
stable/queens, but also confirmed on the master branch.
IPv6 has the same problem!

Devstack test config:
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,gre,vlan,vxlan
tenant_network_types = vxlan
extension_drivers = port_security,qos
mechanism_drivers = openvswitch,l2population

[ml2_type_vxlan]
vni_ranges = 1:1

[securitygroup]
firewall_driver = openvswitch
[ovs]
local_ip = 10.0.5.10

[agent]
tunnel_types = vxlan
l2_population = True
arp_responder = True
enable_distributed_routing = True
extensions = qos


Step to reproduce:
1. Assuming you have following VMs
| 24231705-ee79-4643-ae5a-9f0f7ff8f8ba | dvr-ha-vm-2  | ACTIVE | dvr-ha=192.168.30.44, 172.16.12.220  | cirros | nano   |
| 4865d216-9f95-40bf-a6b4-221e3af06798 | dvr-ha-vm-1  | ACTIVE | dvr-ha=192.168.30.64, 172.16.13.52   | cirros | nano   |

$ nova interface-list 4865d216-9f95-40bf-a6b4-221e3af06798
+------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
| Port State | Port ID                              | Net ID                               | IP addresses  | MAC Addr          | Tag |
+------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
| ACTIVE     | b333b1ca-bb9a-41fd-a878-b524ffbc6d7a | a9e82560-f1ac-4909-9afa-686b57df62fa | 192.168.30.64 | fa:16:3e:12:66:05 | -   |
+------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
$ nova interface-list 24231705-ee79-4643-ae5a-9f0f7ff8f8ba
+------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
| Port State | Port ID                              | Net ID                               | IP addresses  | MAC Addr          | Tag |
+------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+
| ACTIVE     | 93197f48-3fe4-47f4-9d15-ba8728c00409 | a9e82560-f1ac-4909-9afa-686b57df62fa | 192.168.30.44 | fa:16:3e:14:ff:f1 | -   |
+------------+--------------------------------------+--------------------------------------+---------------+-------------------+-----+

2. Security group rules
$ openstack security group rule list 535018b5-7038-46f2-8f0e-2a6e193788aa --long|grep ingress
| 01015261-0ca3-49ad-b033-bc2036a58e26 | tcp  | IPv4 | 0.0.0.0/0 | 22:22 | ingress | None                                 |
| 36441851-7bd2-4680-be43-2f8119b65040 | icmp | IPv4 | 0.0.0.0/0 |       | ingress | None                                 |
| 8326f59e-cf26-4372-9913-30c71c036a2e | None | IPv6 | ::/0      |       | ingress | 535018b5-7038-46f2-8f0e-2a6e193788aa |
| e47c6731-a0f7-42aa-8125-a9810e7b5a17 | None | IPv4 | 0.0.0.0/0 |       | ingress | 535018b5-7038-46f2-8f0e-2a6e193788aa |

3. Start a nc test server in dvr-ha-vm-2
# nc -l -p 8000

4. Try to curl that dvr-ha-vm-2 port 8000 in the outside world
$ curl http://172.16.12.220:8000/index.html
curl: (7) Failed connect to 172.16.12.220:8000; Connection timed out

5. Add allowed address pair 0.0.0.0/0 to dvr-ha-vm-1
openstack port set  --allowed-address ip-address=0.0.0.0/0  
b333b1ca-bb9a-41fd-a878-b524ffbc6d7a

6. Try to curl that dvr-ha-vm-2 port 8000 again
It is connected!!!

# nc -l -p 8000 
GET /index.html HTTP/1.1
User-Agent: curl/7.29.0
Host: 172.16.12.220:8000
Accept: */*

asdfasdf
asdfasdf

** Affects: neutron
 Importance: Critical
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1867119

Title:
  [security] Add allowed-address-pair 0.0.0.0/0 to one port will open
  all others' protocol under same security group

Status in neutron:
  Confirmed

Bug description:
  [security] Add allowed-address-pair 0.0.0.0/0 to one port will open
  all others' protocol under same security group

  When allowed-address-pair 0.0.0.0/0 is added to one port, it unexpectedly
  opens all protocols on all other ports under the same security group. First
  found in stable/queens, but also confirmed on the master branch.
  IPv6 has the same problem!

  Devstack test config:
  [DEFAULT]
  [l2pop]
  [ml2]
  type_drivers = flat,gre,vlan,vxlan
  tenant_network_types = vxlan
  extension_drivers = port_security,qos
  mechanism_drivers = openvswitch,l2population

  [ml2_type_vxlan]
  vni_ranges = 1:1

  [securitygroup]
  firewall_driver = openvswitch
  [ovs]
  local_ip = 10.0.5.10

  [agent]
  tunnel_types = vxlan
  l2_popula

[Yahoo-eng-team] [Bug 1867101] [NEW] Deployment has security group with empty tenant id

2020-03-12 Thread LIU Yulong
Public bug reported:

ENV: devstack, master

$ openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID                                   | Name    | Description            | Project                          | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| 2661ae74-f946-4ef9-b676-fe9be4274c1b | default | Default security group |                                  | []   |
| 535018b5-7038-46f2-8f0e-2a6e193788aa | default | Default security group | ae05deb7a16c485986c3666b65f71c71 | []   |
| c5d1b354-9896-4e2c-aeab-67c8cd20a489 | default | Default security group | 972c01461f1b4441b8d8648691bb89ff | []   |
+--------------------------------------+---------+------------------------+----------------------------------+------+

The empty project id (tenant id) causes some mechanism drivers, such as ODL,
to fail to initialize and to log errors.

** Affects: neutron
 Importance: Low
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1867101

Title:
  Deployment has security group with empty tenant id

Status in neutron:
  New

Bug description:
  ENV: devstack, master

  $ openstack security group list
  
  +--------------------------------------+---------+------------------------+----------------------------------+------+
  | ID                                   | Name    | Description            | Project                          | Tags |
  +--------------------------------------+---------+------------------------+----------------------------------+------+
  | 2661ae74-f946-4ef9-b676-fe9be4274c1b | default | Default security group |                                  | []   |
  | 535018b5-7038-46f2-8f0e-2a6e193788aa | default | Default security group | ae05deb7a16c485986c3666b65f71c71 | []   |
  | c5d1b354-9896-4e2c-aeab-67c8cd20a489 | default | Default security group | 972c01461f1b4441b8d8648691bb89ff | []   |
  +--------------------------------------+---------+------------------------+----------------------------------+------+

  The empty project id (tenant id) causes some mechanism drivers, such as ODL,
  to fail to initialize and to log errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1867101/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp