[Yahoo-eng-team] [Bug 1878583] Re: Unable to createImage/snapshot paused volume backed instances

2020-07-20 Thread melanie witt
** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Also affects: nova/ussuri
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/ussuri
   Importance: Undecided => Low

** Changed in: nova/ussuri
   Status: New => Fix Committed

** Changed in: nova/ussuri
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1878583

Title:
  Unable to createImage/snapshot paused volume backed instances

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  New
Status in OpenStack Compute (nova) rocky series:
  New
Status in OpenStack Compute (nova) stein series:
  New
Status in OpenStack Compute (nova) train series:
  New
Status in OpenStack Compute (nova) ussuri series:
  Fix Committed

Bug description:
  Description
  ===
  Unable to createImage/snapshot paused volume backed instances.

  Steps to reproduce
  ===
  - Pause a volume backed instance
  - Attempt to snapshot the instance using the createImage API

  Expected result
  ===
  A snapshot image is successfully created, as is the case for paused instances
that are not volume backed.

  Actual result
  ===
  n-api returns the following error:

  {'code': 409, 'message': "Cannot 'createImage' instance
  bc5a7ae4-fca9-4d83-b1b8-5534f51a9404 while it is in vm_state paused"}
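
  A minimal sketch of where the guard lives, assuming the fix follows nova's
  usual check_instance_state pattern (the decorator is real nova API plumbing;
  the exact state list below is my assumption, not the merged patch):

  ```
  from nova.compute import vm_states
  from nova.compute.api import check_instance_state

  @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED,
                                  vm_states.PAUSED, vm_states.SUSPENDED])
  def snapshot_volume_backed(self, context, instance, name, **kwargs):
      # create volume snapshots and register the image in glance
      ...
  ```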

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 master

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 N/A

  3. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 N/A

  4. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

  Logs & Configs
  ===

  As above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1878583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1867075] Re: Arm64: Instance with Configure Drive attach volume failed

2020-07-20 Thread melanie witt
** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Changed in: nova/train
   Importance: Undecided => Low

** Changed in: nova/train
   Status: New => Fix Committed

** Changed in: nova/train
 Assignee: (unassigned) => sean mooney (sean-k-mooney)

** Changed in: nova/stein
   Importance: Undecided => Low

** Changed in: nova/stein
   Status: New => Fix Committed

** Changed in: nova/stein
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

** Changed in: nova/rocky
   Importance: Undecided => Low

** Changed in: nova/rocky
   Status: New => Fix Committed

** Changed in: nova/rocky
 Assignee: (unassigned) => Elod Illes (elod-illes)

** Changed in: nova/queens
   Importance: Undecided => Low

** Changed in: nova/queens
   Status: New => In Progress

** Changed in: nova/queens
 Assignee: (unassigned) => sean mooney (sean-k-mooney)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1867075

Title:
  Arm64: Instance with Configure Drive attach volume failed

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  In Progress
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in OpenStack Compute (nova) stein series:
  Fix Committed
Status in OpenStack Compute (nova) train series:
  Fix Committed

Bug description:
  Arm64.

  Image: cirros-0.5.1
  hw_cdrom_bus='scsi', hw_disk_bus='scsi', hw_machine_type='virt',
hw_rng_model='virtio', hw_scsi_model='virtio-scsi',
os_command_line='console=ttyAMA0'

  Boot a vm.
  Create a volume: openstack volume create --size 1 test

  Attach:
  openstack server add volume cirros-test test

  Error:
  DEBUG nova.virt.libvirt.guest [None req-8dfbf677-50bb-42be-869f-52c9ac638d59 admin admin] attach device xml:
  [the disk XML was stripped by the mail archive; only the volume serial b9abb789-1c55-4210-ab5c-78b0e3619405 survives]

  libvirtError: Requested operation is not valid: Domain already contains a disk with that address
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6] Traceback (most recent call last):
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]   File "/opt/stack/nova/nova/virt/block_device.py", line 599, in _volume_attach
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]     device_type=self['device_type'], encryption=encryption)
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1731, in attach_volume
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]     conf = self._get_volume_config(connection_info, disk_info)
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]   File "/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]     self.force_reraise()
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]   File "/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]     six.reraise(self.type_, self.value, self.tb)
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]   File "/usr/local/lib/python3.6/dist-packages/six.py", line 703, in reraise
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]     raise value
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1731, in attach_volume
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]     conf = self._get_volume_config(connection_info, disk_info)
  ERROR nova.virt.block_device [instance:
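
  The error itself is libvirt refusing a second disk at an address already
  occupied by the config drive. A small illustration of the needed behaviour
  (a purely hypothetical helper, not nova's code): pick the next free SCSI
  unit instead of reusing the config drive's.

  ```
  def next_free_scsi_unit(used_units):
      """Return the lowest SCSI unit number not already taken on the bus."""
      unit = 0
      while unit in used_units:
          unit += 1
      return unit

  # config drive already sits at unit 0, so the volume should get unit 1
  assert next_free_scsi_unit({0}) == 1
  ```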

[Yahoo-eng-team] [Bug 1888315] Re: MAAS does not set type=VLAN for CentOS & RHEL

2020-07-20 Thread Lee Trager
MAAS passes the network configuration to cloud-init, which performs the
actual network configuration. The cloud-init-el-stable repo is a yum/dnf
repository set up by Canonical so we can push out fixes for bugs like
this. Right now the MAAS CentOS/RHEL images use cloud-init from
CentOS/RHEL, as the version they provide has no known bugs.

Adding cloud-init but keeping this assigned to MAAS as well for
tracking.

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: maas
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1888315

Title:
  MAAS does not set type=VLAN for CentOS & RHEL

Status in cloud-init:
  New
Status in MAAS:
  Triaged

Bug description:
  With MAAS v2.6 and v2.7, when creating a VLAN in CentOS/RHEL 7 & 8
  images (before OS is deployed), the interface's VLAN configuration is
  incorrect, and as a result it does not come up.

  The definition in /etc/cloud/cloud.cfg.d/50-curtin-networking.cfg is
  defined correctly, for example:

- id: bond0.100
  mtu: 1500
  name: bond0.100
  subnets:
  - type: manual
  type: vlan
  vlan_id: 100
  vlan_link: bond0

  (In this case it's a bond, but it's the same for a plain interface like
  eth0 or ens3)

  However, the network configuration file
  /etc/sysconfig/network-scripts/ifcfg-bond0.100 has an incorrect TYPE entry:

  # Created by cloud-init on instance boot automatically, do not edit.
  #
  BOOTPROTO=none
  DEVICE=bond0.100
  MTU=1500
  ONBOOT=yes
  PHYSDEV=bond0
  TYPE=Ethernet
  USERCTL=no
  VLAN=yes

  Removing the TYPE=Ethernet entry or changing to TYPE=VLAN fixes the
  problem.
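
  A minimal sketch of the rendering rule implied above (render_ifcfg() is a
  hypothetical helper, not cloud-init's actual renderer; the key names are
  the ones from the ifcfg file shown):

  ```
  def render_ifcfg(iface):
      cfg = {'BOOTPROTO': 'none', 'DEVICE': iface['name'],
             'ONBOOT': 'yes', 'USERCTL': 'no'}
      if iface.get('type') == 'vlan':
          # TYPE=Ethernet is what breaks the VLAN; drop it or set TYPE=VLAN
          cfg['TYPE'] = 'VLAN'
          cfg['VLAN'] = 'yes'
          cfg['PHYSDEV'] = iface['vlan_link']
      return '\n'.join('%s=%s' % item for item in sorted(cfg.items()))

  print(render_ifcfg({'name': 'bond0.100', 'type': 'vlan',
                      'vlan_link': 'bond0'}))
  ```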

  I don't know if this is a problem with MAAS or cloud-init in
  /etc/yum.repos.d/cloud-init-el-stable.repo. If this is a known issue
  with cloud-init in those EPEL repos, is there a workaround?

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1888315/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1888298] Re: cc_timezone fails on Ubuntu Bionic and Xenial minimal

2020-07-20 Thread Ryan Harper
https://cloud-images.ubuntu.com/daily/server/minimal/daily/focal/current/focal-minimal-cloudimg-amd64.manifest

Has tzdata ...

This looks like a cloudimg issue, not cloud-init.

https://cloud-images.ubuntu.com/daily/server/minimal/daily/bionic/current/bionic-minimal-cloudimg-amd64.manifest


** Also affects: cloud-images
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1888298

Title:
  cc_timezone fails on Ubuntu Bionic and Xenial minimal

Status in cloud-images:
  New
Status in cloud-init:
  Incomplete

Bug description:
  Summary
  ===
  On Ubuntu Bionic and Xenial minimal images, there is no tzdata package. As a 
result, when cloud-init tries to set the timezone it will fail and produce a 
stack trace.

  Expected Result
  ===
  No trace and no failure of the cloud-config.service :)

  Actual result
  ===
  2020-07-20 18:13:22,515 - util.py[DEBUG]: Running module timezone () failed
    File "/usr/lib/python3/dist-packages/cloudinit/config/cc_timezone.py", line 47, in handle
  cloud.distro.set_timezone(timezone)
    File "/usr/lib/python3/dist-packages/cloudinit/distros/debian.py", line 165, in set_timezone
  distros.set_etc_timezone(tz=tz, tz_file=self._find_tz_file(tz))
  OSError: Invalid timezone America/Vancouver, no file found at /usr/share/zoneinfo/America/Vancouver

  Steps to reproduce
  ===
  $ wget 
https://cloud-images.ubuntu.com/daily/server/minimal/releases/bionic/release/ubuntu-18.04-minimal-cloudimg-amd64.img
  $ multipass launch file:///$(pwd)/ubuntu-18.04-minimal-cloudimg-amd64.img 
--name=bionic-minimal
  $ multipass exec bionic-minimal -- sudo systemctl list-units --failed 
--no-legend
  # note that cloud-config.service fails
  $ multipass exec bionic-minimal -- sudo cat /var/log/cloud-init.log | grep 
timezone

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-images/+bug/1888298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1888298] [NEW] cc_timezone fails on Ubuntu Bionic and Xenial minimal

2020-07-20 Thread Joshua Powers
Public bug reported:

Summary
===
On Ubuntu Bionic and Xenial minimal images, there is no tzdata package. As a 
result, when cloud-init tries to set the timezone it will fail and produce a 
stack trace.

Expected Result
===
No trace and no failure of the cloud-config.service :)

Actual result
===
2020-07-20 18:13:22,515 - util.py[DEBUG]: Running module timezone () failed
  File "/usr/lib/python3/dist-packages/cloudinit/config/cc_timezone.py", line 47, in handle
cloud.distro.set_timezone(timezone)
  File "/usr/lib/python3/dist-packages/cloudinit/distros/debian.py", line 165, in set_timezone
distros.set_etc_timezone(tz=tz, tz_file=self._find_tz_file(tz))
OSError: Invalid timezone America/Vancouver, no file found at /usr/share/zoneinfo/America/Vancouver

Steps to reproduce
===
$ wget 
https://cloud-images.ubuntu.com/daily/server/minimal/releases/bionic/release/ubuntu-18.04-minimal-cloudimg-amd64.img
$ multipass launch file:///$(pwd)/ubuntu-18.04-minimal-cloudimg-amd64.img 
--name=bionic-minimal
$ multipass exec bionic-minimal -- sudo systemctl list-units --failed 
--no-legend
# note that cloud-config.service fails
$ multipass exec bionic-minimal -- sudo cat /var/log/cloud-init.log | grep 
timezone
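
One way to "fail gracefully" here would be a guard in front of the real call;
a minimal sketch, assuming a hypothetical wrapper around the cloud-init
helpers named in the traceback:

```
import os

def set_timezone_if_possible(distro, tz, log=print):
    tz_file = os.path.join('/usr/share/zoneinfo', tz)
    if not os.path.exists(tz_file):
        # minimal images ship without tzdata; skip instead of raising OSError
        log('tzdata not installed, skipping timezone %s' % tz)
        return False
    distro.set_timezone(tz)
    return True
```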

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Description changed:

  Summary
  ===
  On Ubuntu Bionic and Xenial minimal images, there is no tzdata package. As a 
result, when cloud-init tries to set the timezone it will fail and produce a 
stack trace.
  
  Expected Result
  ===
- Options: a) we depend on tzdata b) we do not try to set timezone if tzdata is 
not available c) fail gracefully d) other?
+ No trace and no failure of the cloud-config.service :)
  
  Actual result
  ===
  2020-07-20 18:13:22,515 - util.py[DEBUG]: Running module timezone () failed
-   File "/usr/lib/python3/dist-packages/cloudinit/config/cc_timezone.py", line 
47, in handle
- cloud.distro.set_timezone(timezone)
-   File "/usr/lib/python3/dist-packages/cloudinit/distros/debian.py", line 
165, in set_timezone
- distros.set_etc_timezone(tz=tz, tz_file=self._find_tz_file(tz))
+   File "/usr/lib/python3/dist-packages/cloudinit/config/cc_timezone.py", line 
47, in handle
+ cloud.distro.set_timezone(timezone)
+   File "/usr/lib/python3/dist-packages/cloudinit/distros/debian.py", line 
165, in set_timezone
+ distros.set_etc_timezone(tz=tz, tz_file=self._find_tz_file(tz))
  OSError: Invalid timezone America/Vancouver, no file found at 
/usr/share/zoneinfo/America/Vancouver
  
  Steps to reproduce
  ===
  $ wget 
https://cloud-images.ubuntu.com/daily/server/minimal/releases/bionic/release/ubuntu-18.04-minimal-cloudimg-amd64.img
  $ multipass launch file:///$(pwd)/ubuntu-18.04-minimal-cloudimg-amd64.img 
--name=bionic-minimal
  $ multipass exec bionic-minimal -- sudo systemctl list-units --failed 
--no-legend
  # note that cloud-config.service fails
  $ multipass exec bionic-minimal -- sudo cat /var/log/cloud-init.log | grep 
timezone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1888298

Title:
  cc_timezone fails on Ubuntu Bionic and Xenial minimal

Status in cloud-init:
  New

Bug description:
  Summary
  ===
  On Ubuntu Bionic and Xenial minimal images, there is no tzdata package. As a 
result, when cloud-init tries to set the timezone it will fail and produce a 
stack trace.

  Expected Result
  ===
  No trace and no failure of the cloud-config.service :)

  Actual result
  ===
  2020-07-20 18:13:22,515 - util.py[DEBUG]: Running module timezone () failed
    File "/usr/lib/python3/dist-packages/cloudinit/config/cc_timezone.py", line 47, in handle
  cloud.distro.set_timezone(timezone)
    File "/usr/lib/python3/dist-packages/cloudinit/distros/debian.py", line 165, in set_timezone
  distros.set_etc_timezone(tz=tz, tz_file=self._find_tz_file(tz))
  OSError: Invalid timezone America/Vancouver, no file found at /usr/share/zoneinfo/America/Vancouver

  Steps to reproduce
  ===
  $ wget 
https://cloud-images.ubuntu.com/daily/server/minimal/releases/bionic/release/ubuntu-18.04-minimal-cloudimg-amd64.img
  $ multipass launch file:///$(pwd)/ubuntu-18.04-minimal-cloudimg-amd64.img 
--name=bionic-minimal
  $ multipass exec bionic-minimal -- sudo systemctl list-units --failed 
--no-legend
  # note that cloud-config.service fails
  $ multipass exec bionic-minimal -- sudo cat /var/log/cloud-init.log | grep 
timezone

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1888298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1888168] Re: Change default num_retries for glance to 3

2020-07-20 Thread melanie witt
Here's where we default the cinder retries to 3:

https://github.com/openstack/nova/blob/b7161fe9b92f0045e97c300a80e58d32b6f49be1/nova/conf/cinder.py#L72-L73

and neutron retries are also defaulted to 3:

https://github.com/openstack/nova/blob/b7161fe9b92f0045e97c300a80e58d32b6f49be1/nova/conf/neutron.py#L97-L98

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/ussuri
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1888168

Title:
  Change default num_retries for glance to 3

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) queens series:
  New
Status in OpenStack Compute (nova) rocky series:
  New
Status in OpenStack Compute (nova) stein series:
  New
Status in OpenStack Compute (nova) train series:
  New
Status in OpenStack Compute (nova) ussuri series:
  New

Bug description:
  Currently, nova has a parameter to set the retry count for glance,
  num_retries.
  The default value of the option is 0, which means the request is sent only
  once.
  On the other hand, the clients of other components, cinder and neutron,
  have had their default value changed to 3.

  https://review.opendev.org/#/c/712226/
  https://review.opendev.org/#/c/736026/

  For glance, we should align the default value to 3.
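
  A hedged sketch of what the aligned option could look like, using the same
  oslo.config pattern as the cinder/neutron options linked in the comment
  above (the option name is real; the surrounding list is illustrative):

  ```
  from oslo_config import cfg

  glance_opts = [
      cfg.IntOpt('num_retries',
                 default=3,  # previously 0, i.e. a single attempt
                 min=0,
                 help='Number of retries for glance operations.'),
  ]
  ```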

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1888168/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1888258] [NEW] [neutron-tempest-plugin] greendns query has no attribute "_compute_expiration"

2020-07-20 Thread Rodolfo Alonso
Public bug reported:

Some tests are failing consistently with the following error:
http://paste.openstack.org/show/796134/

"AttributeError: module 'dns.query' has no attribute
'_compute_expiration'"


Error logs: 
https://8a0f799a619e7f365667-2de10bdd194d323966e80d1fe3d10503.ssl.cf1.rackcdn.com/741957/2/check/neutron-tempest-plugin-designate-scenario/db75334/testr_results.html
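
My working theory (an assumption, not confirmed here): dns.query._compute_expiration
is a private dnspython helper that eventlet's greendns wrapper relies on, and it
was removed in the dnspython 2.0 rewrite. A quick diagnostic sketch:

```
import dns.query

if hasattr(dns.query, '_compute_expiration'):
    print('dnspython < 2.0: greendns wrapper should work')
else:
    print('dnspython >= 2.0: expect the AttributeError above')
```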

** Affects: neutron
 Importance: High
 Status: New

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1888258

Title:
  [neutron-tempest-plugin] greendns query has no attribute
  "_compute_expiration"

Status in neutron:
  New

Bug description:
  Some tests are failing consistently with the following error:
  http://paste.openstack.org/show/796134/

  "AttributeError: module 'dns.query' has no attribute
  '_compute_expiration'"

  
  Error logs: 
https://8a0f799a619e7f365667-2de10bdd194d323966e80d1fe3d10503.ssl.cf1.rackcdn.com/741957/2/check/neutron-tempest-plugin-designate-scenario/db75334/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1888258/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1888256] [NEW] Neutron starts radvd and messes up the routing table when ipv6_ra_mode is not set and ipv6_address_mode=slaac

2020-07-20 Thread Peter
Public bug reported:

Hello!

I would like to report a possible bug.
We are currently using Rocky with Ubuntu 18.04.
We use custom ansible for deployment.

We have a setup where the upstream core Cisco Nexus DC switches answer
with RAs. This works fine with a network we have had for years
(upgraded from kilo).

Now we made a new region, with new network nodes, etc., and IPv6 does
not work as in the old region.

In the new region, we had this subnet:

[PROD][root(cc1:0)] <~> openstack subnet show Flat1-subnet-v6
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | 2001:738:0:527::2-2001:738:0:527:ffff:ffff:ffff:ffff |
| cidr              | 2001:738:0:527::/64                                  |
| created_at        | 2020-07-01T22:59:53Z                                 |
| description       |                                                      |
| dns_nameservers   |                                                      |
| enable_dhcp       | True                                                 |
| gateway_ip        | 2001:738:0:527::1                                    |
| host_routes       |                                                      |
| id                | a5a9991c-62f3-4f46-b1ef-e293dc0fb781                 |
| ip_version        | 6                                                    |
| ipv6_address_mode | slaac                                                |
| ipv6_ra_mode      | None                                                 |
| name              | Flat1-subnet-v6                                      |
| network_id        | fa55bfc7-ab42-4d97-987e-645cca7a0601                 |
| project_id        | b48a9319a66e45f3b04cc8bb70e3113c                     |
| revision_number   | 0                                                    |
| segment_id        | None                                                 |
| service_types     |                                                      |
| subnetpool_id     | None                                                 |
| tags              |                                                      |
| updated_at        | 2020-07-01T22:59:53Z                                 |
+-------------------+------------------------------------------------------+

As you can see, the address mode is SLAAC and the RA mode is None.

Checking from the network node, we see the qrouter:

[PROD][root(net1:0)] <~> ip netns exec qrouter-4ffa4f55-95aa-4ce1-b4f8-8bbb2f9d53e1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
35: ha-5dfb8647-f7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:1c:4d:8d brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.3/18 brd 169.254.255.255 scope global ha-5dfb8647-f7
       valid_lft forever preferred_lft forever
    inet 169.254.0.162/24 scope global ha-5dfb8647-f7
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe1c:4d8d/64 scope link
       valid_lft forever preferred_lft forever
36: qr-a6d7ceab-80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:a1:7e:69 brd ff:ff:ff:ff:ff:ff
    inet 193.224.218.251/24 scope global qr-a6d7ceab-80
       valid_lft forever preferred_lft forever
    inet6 2001:738:0:527:f816:3eff:fea1:7e69/64 scope global nodad
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fea1:7e69/64 scope link nodad
       valid_lft forever preferred_lft forever

If I check the running processes on our net1 node, I get this:

[PROD][root(net1:0)] <~> ps aux | grep radvd | grep 4ffa4f55-95aa-4ce1-b4f8-8bbb2f9d53e1
neutron  32540  0.0  0.0  19604  2372 ?        Ss   júl02   0:05 radvd -C /var/lib/neutron/ra/4ffa4f55-95aa-4ce1-b4f8-8bbb2f9d53e1.radvd.conf -p /var/lib/neutron/external/pids/4ffa4f55-95aa-4ce1-b4f8-8bbb2f9d53e1.pid.radvd -m syslog -u neutron

The specific radvd config:
[PROD][root(net1:0)] <~> cat /var/lib/neutron/ra/4ffa4f55-95aa-4ce1-b4f8-8bbb2f9d53e1.radvd.conf
interface qr-a6d7ceab-80
{
   AdvSendAdvert on;
   MinRtrAdvInterval 30;
   MaxRtrAdvInterval 100;
   AdvLinkMTU 1500;
};
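
A minimal sketch of the gating I would expect (the function is hypothetical,
not neutron's code): the L3 agent should only spawn radvd for subnets whose
RAs neutron itself owns, i.e. where ipv6_ra_mode is set; with
ipv6_ra_mode=None and ipv6_address_mode=slaac the upstream router is supposed
to send the RAs.

```
def subnet_needs_radvd(subnet):
    return subnet['ip_version'] == 6 and subnet.get('ipv6_ra_mode') is not None

flat1_v6 = {'ip_version': 6, 'ipv6_ra_mode': None, 'ipv6_address_mode': 'slaac'}
assert not subnet_needs_radvd(flat1_v6)  # no radvd expected for this subnet
```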

If I spin up an instance, I see this:

debian@test:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:71:ca:8d brd ff:ff:ff:ff:ff:ff
    inet 193.224.218.9/24 brd

[Yahoo-eng-team] [Bug 1888237] [NEW] nova-next job fails as novnc service fails with TypeError: _wrap_socket() argument 1 must be _socket.socket, not GreenSSLSocket

2020-07-20 Thread Balazs Gibizer
Public bug reported:

Since the 18th of July the nova-next job fails [1] with the following stack
trace in the n-novnc-cell1 service [2]:

Jul 20 10:44:42.419916 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: DEBUG nova.console.rfb.authvencrypt [None req-4c318998-3a74-4929-beab-b774a72844cf None None] Server accepted the requested sub-auth type {{(pid=23498) security_handshake /opt/stack/nova/nova/console/rfb/authvencrypt.py:126}}
Jul 20 10:44:42.425350 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: INFO nova.console.websocketproxy [None req-4c318998-3a74-4929-beab-b774a72844cf None None] handler exception: _wrap_socket() argument 1 must be _socket.socket, not GreenSSLSocket
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: DEBUG nova.console.websocketproxy [None req-4c318998-3a74-4929-beab-b774a72844cf None None] exception {{(pid=23498) vmsg /usr/local/lib/python3.6/dist-packages/websockify/websockifyserver.py:634}}
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy Traceback (most recent call last):
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy   File "/usr/local/lib/python3.6/dist-packages/websockify/websockifyserver.py", line 691, in top_new_client
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy     client = self.do_handshake(startsock, address)
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy   File "/usr/local/lib/python3.6/dist-packages/websockify/websockifyserver.py", line 619, in do_handshake
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy     self.RequestHandlerClass(retsock, address, self)
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy   File "/opt/stack/nova/nova/console/websocketproxy.py", line 100, in __init__
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy     websockify.ProxyRequestHandler.__init__(self, *args, **kwargs)
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy   File "/usr/local/lib/python3.6/dist-packages/websockify/websockifyserver.py", line 99, in __init__
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy     SimpleHTTPRequestHandler.__init__(self, req, addr, server)
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy   File "/usr/lib/python3.6/socketserver.py", line 724, in __init__
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy     self.handle()
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy   File "/usr/local/lib/python3.6/dist-packages/websockify/websockifyserver.py", line 315, in handle
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy     SimpleHTTPRequestHandler.handle(self)
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy   File "/usr/lib/python3.6/http/server.py", line 418, in handle
Jul 20 10:44:42.435706 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy     self.handle_one_request()
Jul 20 10:44:42.437262 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy   File "/usr/local/lib/python3.6/dist-packages/websockify/websocketserver.py", line 47, in handle_one_request
Jul 20 10:44:42.437262 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy     super(WebSocketRequestHandlerMixIn, self).handle_one_request()
Jul 20 10:44:42.437262 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy   File "/usr/lib/python3.6/http/server.py", line 406, in handle_one_request
Jul 20 10:44:42.437262 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy     method()
Jul 20 10:44:42.437262 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy   File "/usr/local/lib/python3.6/dist-packages/websockify/websocketserver.py", line 60, in _websocket_do_GET
Jul 20 10:44:42.437262 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy     self.handle_upgrade()
Jul 20 10:44:42.437262 ubuntu-bionic-rax-ord-0018472470 nova-novncproxy[22231]: ERROR nova.console.websocketproxy   File "/usr/local/lib/python3.6/dist-packages/websockify/websockifyserver.py", line 221, in handle_upgrade


[Yahoo-eng-team] [Bug 1662324] Re: linux bridge agent disables ipv6 before adding an ipv6 address

2020-07-20 Thread Corey Bryant
This bug was fixed in the package neutron - 2:8.4.0-0ubuntu7.5~cloud0
---

 neutron (2:8.4.0-0ubuntu7.5~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 neutron (2:8.4.0-0ubuntu7.5) xenial; urgency=medium
 .
   * d/p/0001-Fix-linuxbridge-agent-startup-issue-with-IPv6.patch
 - Ensure network enable_ipv6 when using linuxbridge (LP: #1662324)


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662324

Title:
  linux bridge agent disables ipv6 before adding an ipv6 address

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Fix Released

Bug description:
  [Impact]
  When using linuxbridge and after creating network & interface to ext-net, 
disable_ipv6 is 1. then linuxbridge-agent doesn't add ipv6 properly to newly 
created bridge.

  [Test Case]

  1. deploy basic mitaka env
  2. create external network (ext-net)
  3. create ipv6 network and interface to ext-net
  4. check if the related bridge has an ipv6 address
  - no ipv6 originally
  or
  - cat /proc/sys/net/ipv6/conf/[BRIDGE]/disable_ipv6

  after this commit, I was able to see the ipv6 address properly.
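
  The intent of the fix, as a minimal sketch (the helper name is hypothetical,
  not neutron's actual code; the sysctl path is the one from the test case
  above):

  ```
  def ensure_bridge_ipv6_enabled(bridge):
      path = '/proc/sys/net/ipv6/conf/%s/disable_ipv6' % bridge
      with open(path, 'r+') as f:
          if f.read().strip() == '1':
              f.seek(0)
              f.write('0')  # 0 = IPv6 enabled, so addresses can be added again
  ```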

  [Regression]
  This has been patched in newer releases of neutron for a while, so the
regression potential of the backport should be fairly low. You need to restart
neutron-linuxbridge-agent after applying the fix, and a short downtime could be
needed.
  This patch could cause bridge-related issues: the bridge could lose its child
interfaces' information, or wrong information could be assigned to the bridge
or an interface, and there could be issues with deleting interfaces that belong
to the bridge. The risk is the same for ipv4 and ipv6.

  [Others]

  -- original description --

  Summary:
  
  I have a dual-stack NIC with only an IPv6 SLAAC and link local address 
plumbed. This is the designated provider network nic. When I create a network 
and then a subnet, the linux bridge agent first disables IPv6 on the bridge and 
then tries to add the IPv6 address from the NIC to the bridge. Since IPv6 was 
disabled on the bridge, this fails with 'RTNETLINK answers: Permission denied'. 
My intent was to create an IPv4 subnet over this interface with floating IPv4 
addresses for assignment to VMs via this command:
    openstack subnet create --network provider \
  --allocation-pool start=10.54.204.200,end=10.54.204.217 \
  --dns-nameserver 69.252.80.80 --dns-nameserver 69.252.81.81 \
  --gateway 10.54.204.129 --subnet-range 10.54.204.128/25 provider

  I don't know why the agent is disabling IPv6 (I wish it wouldn't),
  that's probably the problem. However, if the agent knows to disable
  IPv6 it should also know not to try to add an IPv6 address.

  Details:
  
  Version: Newton on CentOS 7.3 minimal (CentOS-7-x86_64-Minimal-1611.iso) as 
per these instructions: http://docs.openstack.org/newton/install-guide-rdo/

  Seemingly relevant section of /var/log/neutron/linuxbridge-agent.log:
  2017-02-06 15:09:20.863 1551 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect [req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Skipping ARP spoofing rules for port 'tap3679987e-ce' because it has port security disabled
  2017-02-06 15:09:20.863 1551 DEBUG neutron.agent.linux.utils [req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', '-o', 'link', 'show', 'tap3679987e-ce'] create_process /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.870 1551 DEBUG neutron.agent.linux.utils [req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.871 1551 DEBUG neutron.agent.linux.utils [req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 'addr', 'show', 'eno1', 'scope', 'global'] create_process /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.878 1551 DEBUG neutron.agent.linux.utils [req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06 15:09:20.879 1551 DEBUG neutron.agent.linux.utils [req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Running command: ['ip', 'route', 'list', 'dev', 'eno1', 'scope', 'global'] create_process /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
  2017-02-06 15:09:20.885 1551 DEBUG neutron.agent.linux.utils [req-4917c507-369e-4a36-a381-e8b287cbc988 - - - - -] Exit code: 0 execute /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:146
  2017-02-06

[Yahoo-eng-team] [Bug 1888213] Re: [FT] neutron.tests.functional.agent.linux.test_iptables.IptablesManagerNonRootTestCase test cases always failing

2020-07-20 Thread Rodolfo Alonso
ModuleNotFoundError is a child class of ImportError, so that should be fine
then.
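
A quick demonstration of why it is fine (importlib.metadata only exists since
Python 3.8, so the import below fails on 3.6, but an "except ImportError"
still catches it):

```
assert issubclass(ModuleNotFoundError, ImportError)

try:
    import importlib.metadata  # raises ModuleNotFoundError on Python < 3.8
except ImportError as exc:
    print('caught as ImportError:', type(exc).__name__)
```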

** No longer affects: python-stevedore

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1888213

Title:
  [FT]
  
neutron.tests.functional.agent.linux.test_iptables.IptablesManagerNonRootTestCase
  test cases always failing

Status in neutron:
  New

Bug description:
  Test cases failing:
  - test_binary_name
  - test_binary_name_eventlet_spawn

  Those tests are blocking the CI.

  Logs:
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9c1/735802/3/check
  /neutron-functional/9c1f55b/testr_results.html

  Snippet: http://paste.openstack.org/show/796114/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1888213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1888213] Re: [FT] neutron.tests.functional.agent.linux.test_iptables.IptablesManagerNonRootTestCase test cases always failing

2020-07-20 Thread Rodolfo Alonso
Ok, this is the error:

>>> import importlib.metadata
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'importlib.metadata'
>>>

But [1] is only catching ImportError [2]

[2] https://github.com/openstack/stevedore/commit/d5297167e08468c75d2477f15004df61cf98e57e#diff-11dca3f8d632f130fa82203beb6dd8cdR29


** Also affects: stevedore
   Importance: Undecided
   Status: New

** No longer affects: stevedore

** Also affects: python-stevedore
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1888213

Title:
  [FT]
  
neutron.tests.functional.agent.linux.test_iptables.IptablesManagerNonRootTestCase
  test cases always failing

Status in neutron:
  New

Bug description:
  Test cases failing:
  - test_binary_name
  - test_binary_name_eventlet_spawn

  Those tests are blocking the CI.

  Logs:
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9c1/735802/3/check
  /neutron-functional/9c1f55b/testr_results.html

  Snippet: http://paste.openstack.org/show/796114/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1888213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1888213] [NEW] [FT] neutron.tests.functional.agent.linux.test_iptables.IptablesManagerNonRootTestCase test cases always failing

2020-07-20 Thread Rodolfo Alonso
Public bug reported:

Test cases failing:
- test_binary_name
- test_binary_name_eventlet_spawn

Those tests are blocking the CI.

Logs:
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9c1/735802/3/check
/neutron-functional/9c1f55b/testr_results.html

Snippet: http://paste.openstack.org/show/796114/

** Affects: neutron
 Importance: High
 Status: New

** Changed in: neutron
   Importance: Undecided => High

** Description changed:

  Test cases failing:
  - test_binary_name
  - test_binary_name_eventlet_spawn
+ 
+ Those tests are blocking the CI.
  
  Logs:
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9c1/735802/3/check
  /neutron-functional/9c1f55b/testr_results.html
  
  Snippet: http://paste.openstack.org/show/796114/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1888213

Title:
  [FT]
  
neutron.tests.functional.agent.linux.test_iptables.IptablesManagerNonRootTestCase
  test cases always failing

Status in neutron:
  New

Bug description:
  Test cases failing:
  - test_binary_name
  - test_binary_name_eventlet_spawn

  Those tests are blocking the CI.

  Logs:
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9c1/735802/3/check
  /neutron-functional/9c1f55b/testr_results.html

  Snippet: http://paste.openstack.org/show/796114/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1888213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1837635] Re: HA router state change from "standby" to "master" should be delayed

2020-07-20 Thread Chris MacNaughton
This seems to be in the released Neutron at Stein, so I'm marking it as
Fix-Released

** Changed in: cloud-archive
   Status: New => Invalid

** Changed in: cloud-archive/stein
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1837635

Title:
  HA router state change from "standby" to "master" should be delayed

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive queens series:
  Fix Committed
Status in Ubuntu Cloud Archive rocky series:
  Fix Committed
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Currently, when an HA state change occurs, the agent executes a series
  of actions [1]: it updates the metadata proxy, updates the prefix
  delegation, executes the L3 extensions' "ha_state_change" methods, updates
  the radvd status and notifies the server.

  When, in a system with more than two routers (one in "active" mode and
  the others in "standby"), a switch-over is done, the "keepalived"
  process [2] in each "standby" server will set the virtual IP on the HA
  interface and advertise it. In case another router's HA interface has
  the same priority (by default in Neutron, the HA instances of the same
  router ID will have the same priority, 50) but a higher IP [3], the HA
  interface of this instance will have the VIPs and routes deleted and
  will become "standby" again. E.g.: [4]

  In some cases, we have detected that when the master controller is
  rebooted, the change from "standby" to "master" of the other two
  servers is detected, but the change from "master" to "standby" of the
  server with lower IP (as commented before) is not registered by the
  server, because the Neutron server is still not accessible (the master
  controller was rebooted). This status change, sometimes, is lost. This
  is the situation when both "standby" servers become "master" but the
  "master"-"standby" transition of one of them is lost.

  1) INITIAL STATUS
  (overcloud) [stack@undercloud-0 ~]$ neutron l3-agent-list-hosting-router router
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  +--------------------------------------+--------------------------+----------------+-------+----------+
  | id                                   | host                     | admin_state_up | alive | ha_state |
  +--------------------------------------+--------------------------+----------------+-------+----------+
  | 4056cd8e-e062-4f45-bc83-d3eb51905ff5 | controller-0.localdomain | True           | :-)   | standby  |
  | 527d6a6c-8d2e-4796-bbd0-8b41cf365743 | controller-2.localdomain | True           | :-)   | standby  |
  | edbdfc1c-3505-4891-8d00-f3a6308bb1de | controller-1.localdomain | True           | :-)   | active   |
  +--------------------------------------+--------------------------+----------------+-------+----------+

  2) CONTROLLER 1 REBOOTED
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  +--------------------------------------+--------------------------+----------------+-------+----------+
  | id                                   | host                     | admin_state_up | alive | ha_state |
  +--------------------------------------+--------------------------+----------------+-------+----------+
  | 4056cd8e-e062-4f45-bc83-d3eb51905ff5 | controller-0.localdomain | True           | :-)   | active   |
  | 527d6a6c-8d2e-4796-bbd0-8b41cf365743 | controller-2.localdomain | True           | :-)   | active   |
  | edbdfc1c-3505-4891-8d00-f3a6308bb1de | controller-1.localdomain | True           | :-)   | standby  |
  +--------------------------------------+--------------------------+----------------+-------+----------+

  
  The aim of this bug is to make this problem public and to propose a patch
  that delays the transition from "standby" to "master", letting keepalived,
  among all the instances running on the HA servers, decide which one of
  them is the "master".

  
  [1] 
https://github.com/openstack/neutron/blob/stable/stein/neutron/agent/l3/ha.py#L115-L134
  [2] https://www.keepalived.org/
  [3] This method is used by keepalived to define which router is predominant 
and must be master.
  [4] http://paste.openstack.org/show/754760/
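
  A hedged sketch of the proposed delay (the class, names and the 10 s value
  are my assumptions, not the merged patch): hold back the standby->master
  report long enough for keepalived to elect a single master, and cancel it
  if the instance falls back to standby in the meantime.

  ```
  import threading

  TRANSITION_DELAY = 10  # seconds, illustrative

  class StateReporter(object):
      def __init__(self, notify):
          self._notify = notify
          self._pending = {}

      def state_change(self, router_id, state):
          timer = self._pending.pop(router_id, None)
          if timer:
              timer.cancel()  # supersede a not-yet-sent transition
          if state == 'master':
              t = threading.Timer(TRANSITION_DELAY, self._notify,
                                  args=(router_id, state))
              self._pending[router_id] = t
              t.start()
          else:
              self._notify(router_id, state)
  ```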

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1837635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1888121] Re: L3 agent fails to update routers with onlink gateway

2020-07-20 Thread LIU Yulong
*** This bug is a duplicate of bug 1861674 ***
https://bugs.launchpad.net/bugs/1861674

** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Status: Incomplete => Confirmed

** This bug has been marked a duplicate of bug 1861674
   Gateway which is not in subnet CIDR is unsupported in ha router

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1888121

Title:
  L3 agent fails to update routers with onlink gateway

Status in neutron:
  Confirmed

Bug description:
  If a router uses an external gateway network with an "onlink" gateway
  (= a gateway not in the subnet range), the L3 agent fails to process the
  router update.

  * Versions: currently seen on Train; since the code did not change, I
  think Ussuri is affected too.

  * How to reproduce:

  # Create external network
  openstack network create public --external
  # Create the associated subnet with a gateway not in the subnet range.
  # This kind of gateway should be handled as an "onlink" route.
  openstack subnet create --network public --subnet-range 192.168.144.0/24 --gateway 192.168.0.1

  # Create router and set external gateway
  openstack router create external
  openstack router set --external-gateway public external

  # Check l3 agent logs
  http://paste.openstack.org/show/796084/

  
  * Current fix:

  During gateway setup here
  https://github.com/openstack/neutron/blob/stable/train/neutron/agent/linux/ip_lib.py#L604,
  adding the 'onlink' flag allows pyroute2 to successfully add the onlink
  default gateway:

  ```
  def add_gateway(self, gateway, metric=None, table=None, scope='global',
                  flags=[]):
      kwargs = {'flags': ['onlink']}
      self.add_route(None, via=gateway, table=table, metric=metric,
                     scope=scope, **kwargs)
  ```

  Result: http://paste.openstack.org/show/796085/

  
  About the patch, I don't really know the consequences of adding the onlink
  flag to standard gateways. Maybe we could add a check "if gateway not in
  subnet cidr, then onlink"; otherwise this would impact all existing routers.
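
  That check could be done with the stdlib alone; a minimal sketch
  (illustrative, not the neutron patch):

  ```
  import ipaddress

  def gateway_route_flags(gateway, cidr):
      if ipaddress.ip_address(gateway) not in ipaddress.ip_network(cidr):
          return ['onlink']
      return []

  assert gateway_route_flags('192.168.0.1', '192.168.144.0/24') == ['onlink']
  assert gateway_route_flags('192.168.144.1', '192.168.144.0/24') == []
  ```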

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1888121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1784342] Re: AttributeError: 'Subnet' object has no attribute '_obj_network_id'

2020-07-20 Thread James Page
*** This bug is a duplicate of bug 1839658 ***
https://bugs.launchpad.net/bugs/1839658

Ah - this behaviour was enforced @ train

see bug 1839658


** This bug has been marked a duplicate of bug 1839658
   "subnet" register in the DB can have network_id=NULL

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1784342

Title:
  AttributeError: 'Subnet' object has no attribute '_obj_network_id'

Status in neutron:
  Confirmed
Status in neutron package in Ubuntu:
  Confirmed

Bug description:
  Running rally caused subnets to be created without a network_id
  causing this AttributeError.

  OpenStack Queens RDO packages
  [root@controller1 ~]# rpm -qa | grep -i neutron
  python-neutron-12.0.2-1.el7.noarch
  openstack-neutron-12.0.2-1.el7.noarch
  python2-neutron-dynamic-routing-12.0.1-1.el7.noarch
  python2-neutron-lib-1.13.0-1.el7.noarch
  openstack-neutron-dynamic-routing-common-12.0.1-1.el7.noarch
  python2-neutronclient-6.7.0-1.el7.noarch
  openstack-neutron-bgp-dragent-12.0.1-1.el7.noarch
  openstack-neutron-common-12.0.2-1.el7.noarch
  openstack-neutron-ml2-12.0.2-1.el7.noarch

  
  MariaDB [neutron]> select project_id, id, name, network_id, cidr from subnets where network_id is null;

  +----------------------------------+--------------------------------------+---------------------------+------------+-------------+
  | project_id                       | id                                   | name                      | network_id | cidr        |
  +----------------------------------+--------------------------------------+---------------------------+------------+-------------+
  | b80468629bc5410ca2c53a7cfbf002b3 | 7a23c72b-3df8-4641-a494-af7642563c8e | s_rally_1e4bebf1_1s3IN6mo | NULL       | 1.9.13.0/24 |
  | b80468629bc5410ca2c53a7cfbf002b3 | f7a57946-4814-477a-9649-cc475fb4e7b2 | s_rally_1e4bebf1_qWSFSMs9 | NULL       | 1.5.20.0/24 |
  +----------------------------------+--------------------------------------+---------------------------+------------+-------------+
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation [req-c921b9fb-499b-41c1-9103-93e71a70820c b6b96932bbef41fdbf957c2dc01776aa 050c556faa5944a8953126c867313770 - default default] GET failed.: AttributeError: 'Subnet' object has no attribute '_obj_network_id'
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation Traceback (most recent call last):
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/pecan/core.py", line 678, in __call__
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     self.invoke_controller(controller, args, kwargs, state)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/pecan/core.py", line 569, in invoke_controller
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     result = controller(*args, **kwargs)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 91, in wrapped
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     setattr(e, '_RETRY_EXCEEDED', True)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     six.reraise(self.type_, self.value, self.tb)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/neutron/db/api.py", line 87, in wrapped
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     return f(*args, **kwargs)
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/oslo_db/api.py", line 147, in wrapper
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     ectxt.value = e.inner_exc
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2018-07-30 10:35:13.351 42618 ERROR neutron.pecan_wsgi.hooks.translation     six.reraise(self.type_, self.value, self.tb)