[Yahoo-eng-team] [Bug 1870723] [NEW] vm cannot be created after a large number of vms are removed from the same node

2020-04-03 Thread ZhouHeng
Public bug reported:

vm cannot be created after a large number of virtual machines are
removed from the same node

1. Create a security group that has a remote security group rule.
2. Create 50 VMs on the same node (e.g. node01), all using the security group created
in step 1.
3. Delete all VMs created in step 2.
4. Create some VMs on node01.
5. All VM creation fails. The error is: Build of instance ... aborted: Failed to
allocate the network(s), ...


By observing the database while the VM was being created, it was found that the
port's L2 provisioning block (the 'L2' entry in the provisioning_blocks table)
had not been removed.
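
A quick way to confirm leftover L2 provisioning blocks is to query the Neutron
database directly. A minimal diagnostic sketch (assumptions: a MySQL backend,
the usual table names ports/provisioningblocks, and a connection URL adjusted
to your environment):

import sqlalchemy as sa

# Sketch only: list ports that still carry an 'L2' provisioning block.
engine = sa.create_engine("mysql+pymysql://neutron:PASSWORD@controller/neutron")
with engine.connect() as conn:
    rows = conn.execute(sa.text(
        "SELECT p.id, pb.entity "
        "FROM ports p "
        "JOIN provisioningblocks pb ON pb.standard_attr_id = p.standard_attr_id "
        "WHERE pb.entity = 'L2'"))
    for port_id, entity in rows:
        print(port_id, entity)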

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api-ref

** Summary changed:

- vm cannot be created after a large number of virtual machines are removed 
from the same node
+ vm cannot be created after a large number of vms are removed from the same 
node

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1870723

Title:
  vm cannot be created after a large number of vms are removed from the
  same node

Status in neutron:
  New

Bug description:
  vm cannot be created after a large number of virtual machines are
  removed from the same node

  1. Create a security group that has a remote security group rule.
  2. Create 50 VMs on the same node (e.g. node01), all using the security group created
in step 1.
  3. Delete all VMs created in step 2.
  4. Create some VMs on node01.
  5. All VM creation fails. The error is: Build of instance ... aborted: Failed to
allocate the network(s), ...

  
  By observing the database while the VM was being created, it was found that the
port's L2 provisioning block (the 'L2' entry in the provisioning_blocks table)
had not been removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1870723/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863707] Re: [neutron-tempest-plugin] test_trunk.TrunkTestInheritJSONBase.test_add_subport fails if unordered

2020-04-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/708305
Committed: 
https://git.openstack.org/cgit/openstack/neutron-tempest-plugin/commit/?id=167a5784ca42ddb225726f0b792c855a4efafc98
Submitter: Zuul
Branch: master

commit 167a5784ca42ddb225726f0b792c855a4efafc98
Author: Cédric Ollivier 
Date:   Tue Feb 18 07:42:30 2020 +0100

Protect vs unordered results in TrunkTestInheritJSONBase

Closes-Bug: #1863707

Change-Id: If99de32925da9f79ceacdccc86c5727e466347c0
Signed-off-by: Cédric Ollivier 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863707

Title:
  [neutron-tempest-plugin]
  test_trunk.TrunkTestInheritJSONBase.test_add_subport fails if
  unordered

Status in neutron:
  Fix Released

Bug description:
  Tested vs OpenStack master + OVN:
  Traceback (most recent call last):
File 
"/usr/lib/python3.8/site-packages/neutron_tempest_plugin/api/test_trunk.py", 
line 238, in test_add_subport
  self.assertEqual(expected_subports, trunk['sub_ports'])
File "/usr/lib/python3.8/site-packages/testtools/testcase.py", line 411, in 
assertEqual
  self.assertThat(observed, matcher, message)
File "/usr/lib/python3.8/site-packages/testtools/testcase.py", line 498, in 
assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = [{'port_id': '6eb624a9-368c-472c-a3ce-26608e77d2e7',
'segmentation_id': 3,
'segmentation_type': 'vlan'},
   {'port_id': '7a06845b-f379-4143-90f7-e664a8a602ec',
'segmentation_id': 2,
'segmentation_type': 'vlan'}]
  actual= [{'port_id': '7a06845b-f379-4143-90f7-e664a8a602ec',
'segmentation_id': 2,
'segmentation_type': 'vlan'},
   {'port_id': '6eb624a9-368c-472c-a3ce-26608e77d2e7',
'segmentation_id': 3,
'segmentation_type': 'vlan'}]

  A straightforward proposal would be:
  diff --git a/neutron_tempest_plugin/api/test_trunk.py 
b/neutron_tempest_plugin/api/test_trunk.py
  index 823a95d..bc8ec82 100644
  --- a/neutron_tempest_plugin/api/test_trunk.py
  +++ b/neutron_tempest_plugin/api/test_trunk.py
  @@ -235,7 +235,8 @@ class TrunkTestInheritJSONBase(TrunkTestJSONBase):
 'segmentation_id': segmentation_id2}]
   
   # Validate that subport got segmentation details from the network
  -self.assertEqual(expected_subports, trunk['sub_ports'])
  +self.assertIn(expected_subports[0], trunk['sub_ports'])
  +self.assertIn(expected_subports[1], trunk['sub_ports'])
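
  An alternative that keeps a single, order-insensitive assertion (a sketch, not
  the merged fix; assertCountEqual comes from Python 3's unittest.TestCase, which
  testtools test cases inherit from):

  # Inside test_add_subport:
  self.assertCountEqual(expected_subports, trunk['sub_ports'])

  # or, equivalently, compare after sorting on a stable key:
  by_port = lambda sp: sp['port_id']
  self.assertEqual(sorted(expected_subports, key=by_port),
                   sorted(trunk['sub_ports'], key=by_port))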

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1863707/+subscriptions



[Yahoo-eng-team] [Bug 1859832] Re: L3 HA connectivity to GW port can be broken after reboot of backup node

2020-04-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/707406
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=c52029c39aa824a67095fbbf9e59eff769d92587
Submitter: Zuul
Branch: master

commit c52029c39aa824a67095fbbf9e59eff769d92587
Author: LIU Yulong 
Date:   Thu Oct 31 19:06:37 2019 +0800

Do not link up HA router gateway in backup node

The L3 agent sets its router devices link up by default.
For HA routers, the gateway device is plugged
on all scheduled hosts. When the gateway device is
up on the backup node, it will send out IPv6-related
packets (MLDv2) according to some kernel config.
This causes the physical fabric to think that the
gateway MAC is now working on the backup node, and
finally the master node's L3 traffic will be broken.

This patch sets the backup gateway device link down
by default. When VRRP sets the master state on
one host, the L3 agent state-change procedure
links the gateway device up.

Closes-Bug: #1859832
Change-Id: I8dca2c1a2f8cb467cfb44420f0eea54ca0932b05


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1859832

Title:
  L3 HA connectivity to GW port can be broken after reboot of backup
  node

Status in neutron:
  Fix Released

Bug description:
  When a neutron router is on some network node in backup state (another network
node is "active" for this router), and that network node is rebooted, it
may happen that connectivity to the router's gateway port gets broken.
  It can happen due to a race between the L3 agent and the OVS agent, and it is
easier to reproduce when you have many routers in backup state on that node.
  I was testing it with 10 routers, all in backup state. In that case 1 or 2
routers had broken connectivity after the reboot of the host.

  It is like that because when the L3 agent adds an interface to the router, it
checks whether there is any IPv6 link-local address on the interface and, if
there is, it flushes those IPv6 addresses and adds them to the keepalived config.
That way keepalived can manage such IPs like any other IP address on this interface.
  But the problem is that when an IPv6 address is removed from the interface, it
sends MLDv2 packets to unsubscribe from the multicast group. And if those packets
go out of the host, e.g. to a ToR switch, the switch will learn that the MAC
address of the gw port is on the wrong host (the rebooted one instead of the one
where the router is in master state).

  Those MLDv2 packets aren't sent to the wire for every router but only for some
of them, due to a race.
  Basically, a new qg-XXX port is created in br-int by the L3 agent with
DEAD_VLAN_TAG (4095) and then both agents, L3 and OVS, configure it. If the L3
agent flushes the IPv6 addresses from this interface BEFORE the OVS agent sets
the proper tag (local_vlan_id) for the port, then all is fine because the MLDv2
packets are dropped. But if the L3 agent flushes them AFTER the tag is changed,
then the MLDv2 packets are sent to the wire and break ingress connectivity.
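
  The merged fix quoted above takes the approach of keeping the backup gateway
  device link-down until VRRP promotes the node. A rough standalone sketch of
  that idea using pyroute2 (hypothetical helper, not the actual L3 agent code;
  it would have to run inside the qrouter namespace with the real qg- device
  name):

  from pyroute2 import IPRoute

  def set_gateway_link(device_name, master):
      """Bring the HA gateway device up only when this node is master."""
      ip = IPRoute()
      try:
          idx = ip.link_lookup(ifname=device_name)[0]
          ip.link("set", index=idx, state="up" if master else "down")
      finally:
          ip.close()

  # e.g. on a VRRP transition to master:
  # set_gateway_link("qg-1234abcd-56", master=True)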

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1859832/+subscriptions



[Yahoo-eng-team] [Bug 1864027] Re: [OVN] DHCP doesn't work while instance has disabled port security

2020-04-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/708852
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=3d3b61f8792277b303e10bce51512d9a73ef187e
Submitter: Zuul
Branch: master

commit 3d3b61f8792277b303e10bce51512d9a73ef187e
Author: Maciej Józefczyk 
Date:   Thu Feb 20 11:27:13 2020 +

Revert "[OVN] Set 'unknown' address properly when port sec is disabled"

We can now revert this patch, because the main cause has already been
fixed in core OVN [1]. With that fix the ARP responder flows are not
installed in the LS pipeline when the LSP has port security disabled and
an 'unknown' address is set in the addresses column.
This makes MAC spoofing possible.


[1] https://patchwork.ozlabs.org/patch/1258152/


This reverts commit 03b87ad963d5d8165a92e5c7c284c1517333dd00.



Change-Id: Ie4c87d325b671348e133d62818d99af147d50ca2
Closes-Bug: #1864027


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864027

Title:
  [OVN] DHCP doesn't work while instance has disabled port security

Status in neutron:
  Fix Released

Bug description:
  While an instance has port security disabled, it is not able to reach the DHCP service.
  Looks like the change [1] introduced this regression.

  Port has [unknown] address set:
  
  root@mjozefcz-ovn-train-lb:~# ovn-nbctl list logical_switch_port 
a09a1ac7-62ad-46ad-b802-c4abf65dcf70
  _uuid   : 32a741bc-a185-4291-8b36-dc9c387bb662
  addresses   : [unknown]
  dhcpv4_options  : 7c94ec89-3144-4920-b624-193d968c637a
  dhcpv6_options  : []
  dynamic_addresses   : []
  enabled : true
  external_ids: {"neutron:cidrs"="10.2.1.134/24", 
"neutron:device_id"="9f4a705f-b438-4da1-975d-1a0cdf81e124", 
"neutron:device_owner"="compute:nova", 
"neutron:network_name"=neutron-cd1ee69d-06b6-4502-ba26-e1280fd66ad9, 
"neutron:port_fip"="172.24.4.132", "neutron:port_name"="", 
"neutron:project_id"="98b165bfeeca4efd84724f3118d84f6f", 
"neutron:revision_number"="4", "neutron:security_group_ids"=""}
  ha_chassis_group: []
  name: "a09a1ac7-62ad-46ad-b802-c4abf65dcf70"
  options : {requested-chassis=mjozefcz-ovn-train-lb}
  parent_name : []
  port_security   : []
  tag : []
  tag_request : []
  type: ""
  up  : true

  
  ovn-controller doesn't respond for DHCP requests.

  
  It was caught by failing OVN Provider driver tempest test:
  
octavia_tempest_plugin.tests.scenario.v2.test_traffic_ops.TrafficOperationsScenarioTest


  
  [1] https://review.opendev.org/#/c/702249/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864027/+subscriptions



[Yahoo-eng-team] [Bug 1869929] Re: RuntimeError: maximum recursion depth exceeded while calling a Python object

2020-04-03 Thread Tobias Urdin
I think this isn't a bug but was related to SELinux. The issue happened
when I upgraded nova on our compute node. So I
removed the @db.select_db_reader_mode decorator usage in
nova/objects/service.py to make it start.

I then proceeded to upgrade Neutron and Ceilometer on the compute nodes.
Neutron requires the following SELinux packages to be updated in order
for it to work:

libselinux libselinux-python libselinux-utils selinux-policy selinux-
policy-targeted

After upgrading those packages, neutron and ceilometer, I didn't bother testing again.
I have now removed the comments (restoring the decorators), restarted nova-compute, and it worked.

This is the install log:

Mar 31 17:22:07 Installed: 1:python2-nova-20.1.1-1.el7.noarch
Mar 31 17:22:08 Updated: 1:openstack-nova-common-20.1.1-1.el7.noarch
Mar 31 17:22:09 Updated: 1:openstack-nova-compute-20.1.1-1.el7.noarch
Mar 31 17:22:09 Erased: python-dogpile-cache-0.6.2-1.el7.noarch
Mar 31 17:22:11 Erased: 1:python-nova-18.2.3-1.el7.noarch
Mar 31 17:22:11 Erased: python-dogpile-core-0.4.1-2.el7.noarch
Apr 01 11:49:46 Updated: python2-os-traits-0.16.0-1.el7.noarch
Apr 01 11:55:16 Installed: python2-os-ken-0.4.1-1.el7.noarch
Apr 01 11:55:17 Updated: python2-neutron-lib-1.29.1-1.el7.noarch
Apr 01 11:55:17 Updated: python2-pyroute2-0.5.6-1.el7.noarch
Apr 01 11:55:19 Installed: 1:python2-neutron-15.0.2-1.el7.noarch
Apr 01 11:55:20 Updated: 1:openstack-neutron-common-15.0.2-1.el7.noarch
Apr 01 11:55:21 Updated: 1:openstack-neutron-openvswitch-15.0.2-1.el7.noarch
Apr 01 11:55:22 Updated: 1:openstack-neutron-15.0.2-1.el7.noarch
Apr 01 11:55:25 Erased: 1:python-neutron-13.0.6-1.el7.noarch
Apr 01 11:55:44 Installed: python2-zaqarclient-1.12.0-1.el7.noarch
Apr 01 11:55:45 Installed: 1:python2-ceilometer-13.1.0-1.el7.noarch
Apr 01 11:55:46 Updated: 1:openstack-ceilometer-common-13.1.0-1.el7.noarch
Apr 01 11:55:46 Updated: 1:openstack-ceilometer-polling-13.1.0-1.el7.noarch
Apr 01 11:55:48 Erased: 1:python-ceilometer-11.0.1-1.el7.noarch

The possibility that any of the additional packages installed after nova-compute
fixed it is very low.

The only thing I did manually except for that was to upgrade the SELinux
packages mentioned above because that's required by Neutron.

** Changed in: nova
   Status: New => Invalid

** Changed in: oslo.config
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1869929

Title:
  RuntimeError: maximum recursion depth exceeded while calling a Python
  object

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.config:
  Invalid

Bug description:
  When testing upgrading nova packages from Rocky to Train the following
  issue occurs:

  versions:
  oslo.config 6.11.2
  oslo.concurrency 3.30.0
  oslo.versionedobjects 1.36.1
  oslo.db 5.0.2
  oslo.config 6.11.2
  oslo.cache 1.37.0

  It happens here
https://github.com/openstack/oslo.db/blob/5.0.2/oslo_db/api.py#L304
  where register_opts is called for options.database_opts

  This cmp operation:
  https://github.com/openstack/oslo.config/blob/6.11.2/oslo_config/cfg.py#L363

  If I edit the above cmp operation and add print statements before it like this:
  if opt.dest in opts:
      print('left: %s' % str(opts[opt.dest]['opt'].name))
      print('right: %s' % str(opt.name))
      if opts[opt.dest]['opt'] != opt:
          raise DuplicateOptError(opt.name)

  It stops here:
  $ nova-compute --help
  left: sqlite_synchronous
  right: sqlite_synchronous
  Traceback (most recent call last):
  same exception
  RuntimeError: maximum recursion depth exceeded while calling a Python object

  
  /usr/bin/nova-compute --help
  Traceback (most recent call last):
File "/usr/bin/nova-compute", line 6, in 
  from nova.cmd.compute import main
File "/usr/lib/python2.7/site-packages/nova/cmd/compute.py", line 29, in 

  from nova.compute import rpcapi as compute_rpcapi
File "/usr/lib/python2.7/site-packages/nova/compute/rpcapi.py", line 30, in 

  from nova.objects import service as service_obj
File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 170, 
in 
  base.NovaObjectDictCompat):
File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 351, 
in Service
  def _db_service_get_by_compute_host(context, host, use_slave=False):
File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 91, in 
select_db_reader_mode
  return IMPL.select_db_reader_mode(f)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File "/usr/lib/python2.7/site-packages/oslo_db/concurrency.py", line 72, in 
__getattr__
  return getattr(self._api, key)
File 
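
  The trace above is cut off, but the repeated frames suggest the classic
  __getattr__ self-recursion pattern: if the attribute that __getattr__ itself
  relies on (_api in oslo_db.concurrency) is missing, looking it up re-enters
  __getattr__ until the recursion limit is hit. A minimal standalone
  illustration of that pattern (hypothetical class, not the oslo.db code):

  class Wrapper(object):
      def __init__(self, api=None):
          if api is not None:
              self._api = api              # if this assignment never happens...

      def __getattr__(self, key):
          return getattr(self._api, key)   # ...this lookup of _api recurses

  w = Wrapper()
  try:
      w.select_db_reader_mode
  except RuntimeError as exc:              # RecursionError (a RuntimeError) on py3
      print(exc)                           # maximum recursion depth exceeded ...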

[Yahoo-eng-team] [Bug 1869967] Re: subiquity->cloud-init generates netplan yaml telling user not to edit it

2020-04-03 Thread Paride Legovini
I'm not marking this Fix Released for subiquity as the change has not
been released in all the subiquity channels yet.

** Changed in: cloud-init
   Status: New => Invalid

** Changed in: subiquity
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1869967

Title:
  subiquity->cloud-init generates netplan yaml telling user not to edit
  it

Status in cloud-init:
  Invalid
Status in subiquity:
  Fix Committed

Bug description:
  As seen in , users who install with subiquity end up
  with a /etc/cloud/cloud.cfg.d/50-curtin-networking.cfg that persists
  on the target system, plus an /etc/netplan/50-cloud-init.yaml that
  tells users not to edit it without taking steps to disable cloud-init.

  I don't think this is what we want.  I think a subiquity install
  should unambiguously treat cloud-init as a one-shot at installation,
  and leave the user afterwards with config files that can be directly
  edited without fear of cloud-init interfering; and the yaml files
  generated by cloud-init on subiquity installs should therefore also
  not include this scary language:

  # This file is generated from information provided by the datasource.  Changes
  # to it will not persist across an instance reboot.  To disable cloud-init's
  # network configuration capabilities, write a file
  # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
  # network: {config: disabled}

  But we need to figure out how to fix this between subiquity and cloud-
  init.
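
  For reference, the opt-out that the quoted header describes boils down to
  writing a single file; a minimal sketch (path and content taken from the
  header above; run as root on the installed system):

  from pathlib import Path

  cfg = Path("/etc/cloud/cloud.cfg.d/99-disable-network-config.cfg")
  cfg.write_text("network: {config: disabled}\n")
  print("wrote", cfg)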

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1869967/+subscriptions



[Yahoo-eng-team] [Bug 1870569] [NEW] Unable to create network without default network_segment_range

2020-04-03 Thread Joseph Richard
Public bug reported:

When using network_segment_range, it should be possible to specify
all ranges manually, without any default ranges being required. Until
recently this was the case; however, a recent commit [1] broke that
functionality. Note that the offending change has also been merged [2] into
stable/train.

This line [3] assumes that there will be a default network segment
range. It should be changed to check whether there is a default range and,
if not, return an empty set, allowing a segment to be selected from the
shared and project ranges (a rough sketch of this guard follows the links below).


[1] 
https://opendev.org/openstack/neutron/commit/046672247de56bad950e8267a57bd26205f354a0
[2] 
https://opendev.org/openstack/neutron/commit/bbe401aaf9bfdd77e1d43d547b2cdb436b1440c8
[3] 
https://opendev.org/openstack/neutron/src/branch/master/neutron/objects/network_segment_range.py#L197
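
A rough sketch of the guard suggested above (hypothetical, self-contained names;
not the actual code in network_segment_range.py):

def segments_in_default_range(ranges, network_type):
    """ranges: list of dicts like
    {'default': bool, 'network_type': str, 'minimum': int, 'maximum': int}."""
    default = next((r for r in ranges
                    if r['default'] and r['network_type'] == network_type), None)
    if default is None:         # no default range configured: nothing to reserve
        return set()
    return set(range(default['minimum'], default['maximum'] + 1))

print(segments_in_default_range([], 'vlan'))  # -> set(), instead of blowing up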

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1870569

Title:
  Unable to create network without default network_segment_range

Status in neutron:
  New

Bug description:
  When using network_segment_range, it should be possible to specify
  all ranges manually, without any default ranges being required. Until
  recently this was the case; however, a recent commit [1] broke that
  functionality. Note that the offending change has also been merged [2] into
  stable/train.

  This line [3] assumes that there will be a default network segment
  range. It should be changed to check whether there is a default range and,
  if not, return an empty set, allowing a segment to be selected from the
  shared and project ranges.

  
  [1] 
https://opendev.org/openstack/neutron/commit/046672247de56bad950e8267a57bd26205f354a0
  [2] 
https://opendev.org/openstack/neutron/commit/bbe401aaf9bfdd77e1d43d547b2cdb436b1440c8
  [3] 
https://opendev.org/openstack/neutron/src/branch/master/neutron/objects/network_segment_range.py#L197

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1870569/+subscriptions



[Yahoo-eng-team] [Bug 1868531] Re: nova manage placement doesn't support registration per Cell.

2020-04-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/714459
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1a39ed9005306b0d3f42480b1fedf36b0b7834ff
Submitter: Zuul
Branch: master

commit 1a39ed9005306b0d3f42480b1fedf36b0b7834ff
Author: hackertron 
Date:   Mon Mar 23 15:21:17 2020 +0100

Support for nova-manage placement heal_allocations --cell

Closes-bug: #1868531

Change-Id: I98b3280583a6d12461d8aa52e5714d7606b84369


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1868531

Title:
  nova manage placement doesn't support registration per Cell.

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  nova-manage allows allocations to be registered (healed). However, it doesn't
  support doing this per cell.

  To fix this we need to support: nova-manage placement
  heal_allocations --cell

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1868531/+subscriptions



[Yahoo-eng-team] [Bug 1870558] [NEW] Server's host not changed but actually on dest node after live-migrating

2020-04-03 Thread Eric Xie
Public bug reported:

Description
===
The instance had been migrating for more than two hours. Then it got the error
'Unauthorized'.
The host in the output of the CLI `openstack server show` was still the old one.
But the instance had already been running on the dest node.

Steps to reproduce
==
1. Create one instance with large mem
2. Run some application which consumes mem, like `memtester`
3. Execute live-migrate

Expected result
===
Roll back the instance to the old node, or update the instance's host to the dest node

Actual result
=
The instance is on the dest node but the host is the src node in the DB

Environment
===
$ git log -1
commit ee6af34437069a23284f4521330057a95f86f9b7 (HEAD -> stable/rocky, 
origin/stable/rocky)
Author: Luigi Toscano 
Date:   Wed Dec 18 00:28:15 2019 +0100

Zuul v3: use devstack-plugin-nfs-tempest-full

... and replace its legacy ancestor.

Change-Id: Ifd4387a02b3103e1258e146e63c73be1ad10030c
(cherry picked from commit e7e39b8c2e20f5d7b5e70020f0e42541dc772e68)
(cherry picked from commit e82e1704caa1c2baea29f05e8d426337e8de7a3c)
(cherry picked from commit 99aa8ebc12949f9bba76f22e877b07d02791bf5b)

Logs & Configs
==
2020-04-02 21:08:32,890.890 6358 INFO nova.virt.libvirt.driver 
[req-b8d694f5-f60a-4866-bcd2-c107b2caa809 bdb83637364c4db4ba1a01f6ea879ff1 
496db91424
254a85a4130a26801447c9 - default default] [instance: 
8e76d7a1-e7f4-4476-94b3-724db6bfd467] Migration running for 30 secs, memory 80% 
remaining; (byt
es processed=3503551373, remaining=27653689344, total=34364792832)
2020-04-02 23:08:05,165.165 6358 INFO nova.virt.libvirt.driver 
[req-f22d9bee-9c1f-47a6-a2d5-3611f5b2529c bdb83637364c4db4ba1a01f6ea879ff1 
496db91424254a85a4130a26801447c9 - default default] [instance: 
8e76d7a1-e7f4-4476-94b3-724db6bfd467] Migration operation has completed
2020-04-02 23:08:05,166.166 6358 INFO nova.compute.manager 
[req-f22d9bee-9c1f-47a6-a2d5-3611f5b2529c bdb83637364c4db4ba1a01f6ea879ff1 
496db91424254a85a4130a26801447c9 - default default] [instance: 
8e76d7a1-e7f4-4476-94b3-724db6bfd467] _post_live_migration() is started..
2020-04-02 23:08:05,535.535 6358 WARNING nova.virt.libvirt.driver 
[req-f22d9bee-9c1f-47a6-a2d5-3611f5b2529c bdb83637364c4db4ba1a01f6ea879ff1 
496db91424254a85a4130a26801447c9 - default default] [instance: 
8e76d7a1-e7f4-4476-94b3-724db6bfd467] Error monitoring migration: The request 
you have made requires authentication. (HTTP 401): Unauthorized: The request 
you have made requires authentication. (HTTP 401)
2020-04-02 23:08:05,537.537 6358 ERROR nova.compute.manager [instance: 
8e76d7a1-e7f4-4476-94b3-724db6bfd467] Unauthorized: The request you have made 
requires authentication. (HTTP 401)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1870558

Title:
  Server's host not changed but actually on dest node after live-
  migrating

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  The instance had been migrating for more than two hours. Then it got the error
'Unauthorized'.
  The host in the output of the CLI `openstack server show` was still the old one.
  But the instance had already been running on the dest node.

  Steps to reproduce
  ==
  1. Create one instance with large mem
  2. Run some application which consumes mem, like `memtester`
  3. Execute live-migrate

  Expected result
  ===
  Roll back the instance to the old node, or update the instance's host to the dest node

  Actual result
  =
  The instance is on the dest node but the host is the src node in the DB

  Environment
  ===
  $ git log -1
  commit ee6af34437069a23284f4521330057a95f86f9b7 (HEAD -> stable/rocky, 
origin/stable/rocky)
  Author: Luigi Toscano 
  Date:   Wed Dec 18 00:28:15 2019 +0100

  Zuul v3: use devstack-plugin-nfs-tempest-full

  ... and replace its legacy ancestor.

  Change-Id: Ifd4387a02b3103e1258e146e63c73be1ad10030c
  (cherry picked from commit e7e39b8c2e20f5d7b5e70020f0e42541dc772e68)
  (cherry picked from commit e82e1704caa1c2baea29f05e8d426337e8de7a3c)
  (cherry picked from commit 99aa8ebc12949f9bba76f22e877b07d02791bf5b)

  Logs & Configs
  ==
  2020-04-02 21:08:32,890.890 6358 INFO nova.virt.libvirt.driver 
[req-b8d694f5-f60a-4866-bcd2-c107b2caa809 bdb83637364c4db4ba1a01f6ea879ff1 
496db91424
  254a85a4130a26801447c9 - default default] [instance: 
8e76d7a1-e7f4-4476-94b3-724db6bfd467] Migration running for 30 secs, memory 80% 
remaining; (byt
  es processed=3503551373, remaining=27653689344, total=34364792832)
  2020-04-02 23:08:05,165.165 6358 INFO nova.virt.libvirt.driver 
[req-f22d9bee-9c1f-47a6-a2d5-3611f5b2529c bdb83637364c4db4ba1a01f6ea879ff1 
496db91424254a85a4130a26801447c9 - default default] [instance: 
8e76d7a1-e7f4-4476-94b3-724db6bfd467] Migration 

[Yahoo-eng-team] [Bug 1869887] Re: L3 DVR ARP population gets incorrect MAC address in some cases

2020-04-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/716302
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=eb775458c6da57426703289c7b969caddb83d677
Submitter: Zuul
Branch: master

commit eb775458c6da57426703289c7b969caddb83d677
Author: Slawek Kaplonski 
Date:   Tue Mar 31 05:33:06 2020 +0200

[DVR] Don't populate unbound ports in router's ARP cache

When users run keepalived on their instances, they often create an
additional port in Neutron just to allocate an IP address, which is
then used as the VIP in keepalived and configured in the
allowed_address_pairs of the other ports plugged into the instances
running keepalived.
This is e.g. Octavia's use case.

This, together with DVR, caused problems with connectivity to such a VIP,
as it was populated in the router's ARP cache with the MAC address from
the Neutron DB.

As this port isn't bound, it is only a Neutron DB entry, so there is no
need to set it in the router's ARP cache.
This patch does exactly that, filtering such "unbound" and
"binding_failed" ports from the list.

Change-Id: Ia885ce00dbb5f2968859e8d0850bc511016f0846
Closes-Bug: #1869887


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1869887

Title:
  L3 DVR ARP population gets incorrect MAC address in some cases

Status in neutron:
  Fix Released

Bug description:
  The L3 DVR router sets permanent ARP entries in the qrouter namespace for all
ports plugged into the subnets which are connected to the router.
  In most cases that's fine, but as it uses the MAC address defined in the
Neutron DB for that (which is fine in general), it may cause connectivity
problems under specific conditions.

  It happens for example with Octavia, as Octavia creates unbound ports just to
allocate an IP address for the VIP in Neutron's DB. Octavia then sets this
IP address in the allowed_address_pairs of the other ports which are plugged
into the amphora VMs.
  But in the DVR case such an IP address is populated in the ARP cache with the
MAC address of its own (unbound) port, which doesn't work when the IP is
configured as an additional address on an interface with a different MAC.

  Octavia is only the most commonly known example of such a use case, but
  we know that there are other users who are doing something similar
  with keepalived on their instances.

  So as this additional port is always "unbound", and "unbound" means
  that such a port is basically just an entry in the Neutron DB, I think
  there is no need to set it in the ARP cache. Only bound ports should be
  set there.
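
  A rough sketch of that filtering (hypothetical helper operating on Neutron
  port dicts; the merged patch does the equivalent inside the DVR ARP
  population code):

  # vif_types a port carries in the Neutron API when binding never succeeded.
  SKIP_VIF_TYPES = ('unbound', 'binding_failed')

  def ports_for_arp_cache(ports):
      """Keep only bound ports; each port is a dict with 'binding:vif_type'."""
      return [p for p in ports
              if p.get('binding:vif_type') not in SKIP_VIF_TYPES]

  print(ports_for_arp_cache([{'id': 'p1', 'binding:vif_type': 'unbound'},
                             {'id': 'p2', 'binding:vif_type': 'ovs'}]))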

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1869887/+subscriptions



[Yahoo-eng-team] [Bug 1870488] [NEW] server password API policy is allowed for everyone even policy defaults is admin_or_owner

2020-04-03 Thread Ghanshyam Mann
Public bug reported:

The server password API policy defaults to admin_or_owner [1], but the API is
allowed for everyone.

We can see from the test that a request using another project's context can access the API:
- https://review.opendev.org/#/c/717204

This is because the API does not pass the server's project_id in the policy target:
- 
https://github.com/openstack/nova/blob/e487b05f7e451af4f29699c3b34d9d2cc1b1205a/nova/api/openstack/compute/server_password.py#L34

and if no target is passed, policy.py adds the default target, which is nothing
but context.project_id, so any caller is allowed (a small standalone illustration
follows the links below):
- 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

[1]
- 
https://github.com/openstack/nova/blob/e487b05f7e451af4f29699c3b34d9d2cc1b1205a/nova/policies/server_password.py#L27
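
A tiny standalone illustration (hypothetical and heavily simplified, not nova's
policy code) of why the defaulted target lets every authenticated user through:

def is_owner(caller_project_id, target=None):
    if target is None:                        # what happens when no target is passed
        target = {'project_id': caller_project_id}
    return target['project_id'] == caller_project_id

# A caller from project "B" asking about a server owned by project "A":
print(is_owner('B'))                          # True  -> the check always passes
print(is_owner('B', {'project_id': 'A'}))     # False -> the intended owner check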

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1870488

Title:
  server password API policy is allowed for everyone even policy
  defaults is admin_or_owner

Status in OpenStack Compute (nova):
  New

Bug description:
  The server password API policy defaults to admin_or_owner [1], but the API is
  allowed for everyone.

  We can see from the test that a request using another project's context can access the API:
  - https://review.opendev.org/#/c/717204

  This is because the API does not pass the server's project_id in the policy target:
  - 
https://github.com/openstack/nova/blob/e487b05f7e451af4f29699c3b34d9d2cc1b1205a/nova/api/openstack/compute/server_password.py#L34

  and if no target is passed, policy.py adds the default target, which is
nothing but context.project_id, so any caller is allowed:
  - 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

  [1]
  - 
https://github.com/openstack/nova/blob/e487b05f7e451af4f29699c3b34d9d2cc1b1205a/nova/policies/server_password.py#L27

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1870488/+subscriptions



[Yahoo-eng-team] [Bug 1870484] [NEW] server metadata API policy is allowed for everyone even policy defaults is admin_or_owner

2020-04-03 Thread Ghanshyam Mann
Public bug reported:

The server metadata API policy defaults to admin_or_owner [1], but the API is
allowed for everyone.

We can see from the test that a request using another project's context can access the API:
- https://review.opendev.org/#/c/717182/

This is because the API does not pass the server's project_id in the policy target:
- 
https://github.com/openstack/nova/blob/e487b05f7e451af4f29699c3b34d9d2cc1b1205a/nova/api/openstack/compute/server_metadata.py#L123

and if no target is passed, policy.py adds the default target, which is nothing
but context.project_id, so any caller is allowed:
- 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

[1]
- 
https://github.com/openstack/nova/blob/e487b05f7e451af4f29699c3b34d9d2cc1b1205a/nova/policies/server_metadata.py#L27

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: policy

** Tags added: policy

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1870484

Title:
  server metadata API policy is allowed for everyone even policy
  defaults is admin_or_owner

Status in OpenStack Compute (nova):
  New

Bug description:
  The server metadata API policy defaults to admin_or_owner [1], but the API is
  allowed for everyone.

  We can see from the test that a request using another project's context can access the API:
  - https://review.opendev.org/#/c/717182/

  This is because the API does not pass the server's project_id in the policy target:
  - 
https://github.com/openstack/nova/blob/e487b05f7e451af4f29699c3b34d9d2cc1b1205a/nova/api/openstack/compute/server_metadata.py#L123

  and if no target is passed, policy.py adds the default target, which is
nothing but context.project_id, so any caller is allowed:
  - 
https://github.com/openstack/nova/blob/c16315165ce307c605cf4b608b2df3aa06f46982/nova/policy.py#L191

  [1]
  - 
https://github.com/openstack/nova/blob/e487b05f7e451af4f29699c3b34d9d2cc1b1205a/nova/policies/server_metadata.py#L27

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1870484/+subscriptions
