[Yahoo-eng-team] [Bug 1843867] Re: horizon is not activated in default devstack

2019-11-13 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1843867

Title:
  horizon is not activated in default devstack

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  - [x] I have a fix to the document that I can paste below, including the
  correction.

  In the local_conf example
  (https://docs.openstack.org/horizon/latest/contributor/ref/local_conf.html)
  in the DevStack docs, it says Horizon is enabled by default, but it is
  not.

  You have to enable Horizon explicitly.

  So I have been enabling Horizon with this:
  ENABLED_SERVICES=rabbit,mysql,key,horizon

  I am not sure whether all of the services listed above are needed, but
  I am sure that if you want to use Horizon, you must enable it
  explicitly.
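
  A minimal local.conf sketch of the workaround (assuming the standard
  DevStack enable_service syntax; ADMIN_PASSWORD is illustrative):

    [[local|localrc]]
    ADMIN_PASSWORD=secret
    # Horizon is not enabled by default here, so turn it on explicitly:
    enable_service horizon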

  Best regards,
  Fatih

  ---
  Release:  on 2019-04-22 11:33:08
  SHA: 5fd5b4c8938e76be3850c05bae7ab7c68e0a7467
  Source: 
https://opendev.org/openstack/horizon/src/doc/source/contributor/ref/local_conf.rst
  URL: https://docs.openstack.org/horizon/latest/contributor/ref/local_conf.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1843867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815403] Re: “Suspending” an instance cannot release its occupied resources, but “shelving” can.

2019-11-13 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/663590
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=3badb674f6604d3beca4ba557939d4fbc07f6178
Submitter: Zuul
Branch: master

commit 3badb674f6604d3beca4ba557939d4fbc07f6178
Author: Sharat Sharma 
Date:   Thu Jun 6 06:31:27 2019 -0400

"SUSPENDED" description changed in server_concepts guide and API REF

The description of the "SUSPENDED" server status was misleading. Rewording
it to make it more accurate.

Change-Id: Ie93b3b38c2000f7e9caa3ca89dea4ec04ed15067
Closes-Bug: #1815403


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815403

Title:
   “Suspending” an instance cannot release its occupied resources, but
  “shelving” can.

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I was reading the server concepts doc:
  https://developer.openstack.org/api-guide/compute/server_concepts.html

  The description of the "SUSPENDED" server status reads:
  """The server is suspended, either by request or necessity. This status 
appears for only the following hypervisors: XenServer/XCP, KVM, and ESXi. 
Administrative users may suspend a server if it is infrequently used or to 
perform system maintenance. When you suspend a server, its state is stored on 
disk, all memory is written to disk, and the server is stopped. Suspending a 
server is similar to placing a device in hibernation; memory and vCPUs become 
available to create other servers."""

  However, after I suspend an active instance (named VM_1 on compute
  node CN_A), the resource allocation of compute node CN_A does not
  change, either on the OpenStack dashboard or in the database.

  On the other hand, if I shelve this instance, the occupied resources
  (vCPU and memory) will be released.
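
  For reference, the two operations being compared, as a quick openstack
  CLI sketch (the server name is a placeholder):

    openstack server suspend my-server   # state written to disk; host
                                         # vCPU/RAM allocation is kept
    openstack server shelve my-server    # instance can be offloaded and
                                         # its vCPU/RAM freed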

  Perhaps the description of the "SUSPENDED" server status should be
  modified as follows:
  """The server is suspended, either by request or necessity. This status 
appears for only the following hypervisors: XenServer/XCP, KVM, and ESXi. 
Administrative users may suspend a server if it is infrequently used or to 
perform system maintenance. When you suspend a server, its state is stored on 
disk, all memory is written to disk, and the server is stopped. Suspending a 
server is similar to placing a device in hibernation and its occupied resource 
will not be freed (kept for its next running). If an instance is infrequently 
used and the occupied resource needs to be freed to create other servers, it 
should be shelved."""

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1815403/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1852516] [NEW] [VPNaaS]: tempest gate failed

2019-11-13 Thread Dongcan Ye
Public bug reported:

Recently, the neutron-vpnaas-tempest gate has been failing in these
scenario tests:
neutron_vpnaas.tests.tempest.scenario.test_vpnaas.Vpnaas6in4
neutron_vpnaas.tests.tempest.scenario.test_vpnaas.Vpnaas6in6
neutron_vpnaas.tests.tempest.scenario.test_vpnaas.Vpnaas4in4

See logs [1][2]:
[1] 
https://d0f89e65b04aff25943d-bfab26b1456f69293167016566bc.ssl.cf5.rackcdn.com/693965/1/check/neutron-vpnaas-tempest/e501f2f/testr_results.html.gz
[2] 
https://736c534ce2f78bb48419-4edda77aff6f00cc876f5cc0df654845.ssl.cf2.rackcdn.com/691216/1/check/neutron-vpnaas-tempest/d673ea7/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1852516

Title:
  [VPNaaS]: tempest gate failed

Status in neutron:
  New

Bug description:
  Recently, the neutron-vpnaas-tempest gate has been failing in these
  scenario tests:
  neutron_vpnaas.tests.tempest.scenario.test_vpnaas.Vpnaas6in4
  neutron_vpnaas.tests.tempest.scenario.test_vpnaas.Vpnaas6in6
  neutron_vpnaas.tests.tempest.scenario.test_vpnaas.Vpnaas4in4

  See logs [1][2]:
  [1] 
https://d0f89e65b04aff25943d-bfab26b1456f69293167016566bc.ssl.cf5.rackcdn.com/693965/1/check/neutron-vpnaas-tempest/e501f2f/testr_results.html.gz
  [2] 
https://736c534ce2f78bb48419-4edda77aff6f00cc876f5cc0df654845.ssl.cf2.rackcdn.com/691216/1/check/neutron-vpnaas-tempest/d673ea7/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1852516/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1823370] Re: Evacuations are not restricted to the source cell during scheduling

2019-11-13 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/650429
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=95df2a239c32f2ee5d00f06a59a9e91b59f3aca5
Submitter: Zuul
Branch: master

commit 95df2a239c32f2ee5d00f06a59a9e91b59f3aca5
Author: Matt Riedemann 
Date:   Fri Apr 5 15:36:00 2019 -0400

Restrict RequestSpec to cell when evacuating

When evacuating a server in a multi-cell environment
we need to restrict the scheduling request during
evacuate to the cell in which the instance already exists
since we don't support cross-cell evacuate.

This fixes the issue by restricting the RequestSpec to
the instance's current cell during evacuate in the same
way we do during unshelve.

Note that this should also improve performance when
rebuilding a server with a new image since we will only
look for the ComputeNode from the targeted cell rather
than iterate all enabled cells during scheduling.

Change-Id: I497180fb81fd966d1d3d4b54ac66d2609347583e
Closes-Bug: #1823370


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1823370

Title:
  Evacuations are not restricted to the source cell during scheduling

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed
Status in OpenStack Compute (nova) stein series:
  Confirmed

Bug description:
  During most move operations we restrict the request spec to the cell
  the instance is in before calling the scheduler:

  unshelve:
  
https://github.com/openstack/nova/blob/a6963fa6858289d048e4d27ce8e61637cd023f4c/nova/conductor/manager.py#L822

  cold migrate:
  
https://github.com/openstack/nova/blob/a6963fa6858289d048e4d27ce8e61637cd023f4c/nova/conductor/tasks/migrate.py#L163

  live migrate:
  
https://github.com/openstack/nova/blob/a6963fa6858289d048e4d27ce8e61637cd023f4c/nova/conductor/tasks/live_migrate.py#L354

  But for some reason we don't do that during evacuate (or rebuild to
  the same host with forced hosts/nodes when the image changes - which
  in that rebuild case means the scheduler is getting nodes from all
  cells just to find the one we are forcing):

  
https://github.com/openstack/nova/blob/a6963fa6858289d048e4d27ce8e61637cd023f4c/nova/conductor/manager.py#L1011

  I'm not sure how this would fail, but if the scheduler did pick a host
  in another cell things would surely fail because evacuate won't work
  across cells (the instance data is in the source cell db).
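
  A short sketch of that approach (mirroring the unshelve path linked
  above; the object names are from nova's versioned-objects API):

    # in conductor, before calling the scheduler for an evacuate:
    im = objects.InstanceMapping.get_by_instance_uuid(
        context, instance.uuid)
    request_spec.requested_destination = objects.Destination(
        cell=im.cell_mapping)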

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1823370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1852511] [NEW] Filter Scheduler in nova documents includes obsolete filters

2019-11-13 Thread Albert Braden
Public bug reported:


This bug tracker is for errors with the documentation; use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [X] This doc is inaccurate in this way:
RetryFilter was deprecated in Queens
RamFilter and CoreFilter were deprecated in Pike

There may be others; I'm not sure I have the complete list of deprecated
filters. Will you please remove the deprecated filters from the Rocky
documents, or else change the document to specify that they are
deprecated?

- [ ] This is a doc addition request.
- [X] I have a fix to the document that I can paste below including example: 
input and output. 

Change: CoreFilter - filters based on CPU core utilization. It passes hosts 
with sufficient number of CPU cores.
To: CoreFilter - Deprecated since Pike; please do not use.

Change: RamFilter - filters hosts by their RAM. Only hosts with sufficient RAM 
to host the instance are passed.
To: RamFilter - Deprecated since Pike; please do not use.

Change: RetryFilter - filters hosts that have been attempted for scheduling.
Only passes hosts that have not been previously attempted.
To: RetryFilter - Deprecated since Queens; please do not use.
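
A hedged nova.conf sketch of the operator-side takeaway (the filter list
is illustrative, not a recommended set; RAM/CPU filtering is handled by
placement in these releases):

    [filter_scheduler]
    # RetryFilter, RamFilter and CoreFilter are deprecated; leave them out.
    enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter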


If you have a troubleshooting or support issue, use the following  resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 18.2.4.dev20 on 2019-11-04 20:25
SHA: a90fe1951200ebd27fe74788c0a96c01104ac2cf
Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/user/filter-scheduler.rst
URL: https://docs.openstack.org/nova/rocky/user/filter-scheduler.html

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1852511

Title:
  Filter Scheduler in nova documents includes obsolete filters

Status in OpenStack Compute (nova):
  New

Bug description:

  This bug tracker is for errors with the documentation; use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [X] This doc is inaccurate in this way:
  RetryFilter was deprecated in Queens
  RamFilter and CoreFilter were deprecated in Pike

  There may be others; I'm not sure I have the complete list of
  deprecated filters. Will you please remove the deprecated filters from
  the Rocky documents, or else change the document to specify that they
  are deprecated?

  - [ ] This is a doc addition request.
  - [X] I have a fix to the document that I can paste below including example: 
input and output. 

  Change: CoreFilter - filters based on CPU core utilization. It passes hosts 
with sufficient number of CPU cores.
  To: CoreFilter - Deprecated since Pike; please do not use.

  Change: RamFilter - filters hosts by their RAM. Only hosts with sufficient 
RAM to host the instance are passed.
  To: RamFilter - Deprecated since Pike; please do not use.

  Change: RetryFilter - filters hosts that have been attempted for scheduling.
  Only passes hosts that have not been previously attempted.
  To: RetryFilter - Deprecated since Queens; please do not use.

  
  If you have a troubleshooting or support issue, use the following  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 18.2.4.dev20 on 2019-11-04 20:25
  SHA: a90fe1951200ebd27fe74788c0a96c01104ac2cf
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/user/filter-scheduler.rst
  URL: https://docs.openstack.org/nova/rocky/user/filter-scheduler.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1852511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1808902] Re: Support matrix for abort migration is missing

2019-11-13 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/625781
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=70e5c214f06729eda06df4201349c135bec88107
Submitter: Zuul
Branch: master

commit 70e5c214f06729eda06df4201349c135bec88107
Author: Kevin_Zheng 
Date:   Tue Dec 18 12:00:53 2018 +0800

Add support matrix for Delete (Abort) on-going live migration

The info about Delete (Abort) on-going live migration is missing
from the support matrix; it could be useful for users to consider
using this feature.

This patch adds it.

Change-Id: I2f917627fa451d20b1fd1ff35025481a4e525084
Closes-Bug: #1808902


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1808902

Title:
  Support matrix for abort migration is missing

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The support matrix entry for Delete (Abort) migration is missing;
  this could be very useful info for users considering this feature,
  so we should add it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1808902/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1852465] Re: ExternalNetworkAttachForbidden should result in BuildAbortException, not reschedule

2019-11-13 Thread Matt Riedemann
This goes back a long time but I'll just target the non-extended-
maintenance branches.

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
   Status: New => Confirmed

** Changed in: nova/train
   Status: New => Confirmed

** Changed in: nova/rocky
   Importance: Undecided => Low

** Changed in: nova/stein
   Status: New => Confirmed

** Changed in: nova/train
   Importance: Undecided => Low

** Changed in: nova/stein
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1852465

Title:
  ExternalNetworkAttachForbidden should result in BuildAbortException,
  not reschedule

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) rocky series:
  Confirmed
Status in OpenStack Compute (nova) stein series:
  Confirmed
Status in OpenStack Compute (nova) train series:
  Confirmed

Bug description:
  I saw this in a CI run where creating a server on an external network
  as a non-admin user failed and was rescheduled. Here are the failures
  from the compute logs:

  
https://zuul.opendev.org/t/openstack/build/540a9fc0dbc64abb92d3f3e513573307/log/controller/logs/screen-n-cpu.txt.gz#28774

  
https://zuul.opendev.org/t/openstack/build/540a9fc0dbc64abb92d3f3e513573307/log/compute1/logs/screen-n-cpu.txt.gz#37415

  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [None req-41117798-8a4e-469f-bfbb-8bdfdea1a83f demo demo] [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd] Instance failed to spawn: nova.exception.ExternalNetworkAttachForbidden: It is not allowed to create an interface on external network 29715f6f-24ab-49b7-abff-60d3f97596a0
  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd] Traceback (most recent call last):
  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/opt/stack/nova/nova/compute/manager.py", line 2659, in _build_resources
  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     yield resources
  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/opt/stack/nova/nova/compute/manager.py", line 2433, in _build_and_run_instance
  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     block_device_info=block_device_info)
  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3467, in spawn
  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     mdevs=mdevs)
  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6019, in _get_guest_xml
  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     network_info_str = str(network_info)
  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/opt/stack/nova/nova/network/model.py", line 601, in __str__
  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     return self._sync_wrapper(fn, *args, **kwargs)
  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/opt/stack/nova/nova/network/model.py", line 584, in _sync_wrapper
  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     self.wait()
  Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/opt/stack/nova/nova/network/model.py", line 616, in wait
  Nov 13 15:35:44.312293

[Yahoo-eng-team] [Bug 1852504] [NEW] DHCP reserved ports that were unscheduled are advertised as DNS servers

2019-11-13 Thread Arjun Baindur
Public bug reported:

We have 2 DHCP servers per network. After network outages, and when
hosts come back online, the number of ACTIVE DHCP servers grows. This
happened again after more outages, with some networks having up to 9-10+
DHCP ports, many in ACTIVE state, despite neutron-server's neutron.conf
having only dhcp_agents_per_network = 2.

It turns out these are "reserved_dhcp_port" as indicated by the
device_id.

As you can see here:
https://github.com/openstack/neutron/blob/master/neutron/db/agentschedulers_db.py#L399

When a network is rescheduled to a new DHCP agent, the old port is not
deleted, nor is its status marked as DOWN. All that happens is that the
port is marked as reserved and updated.

However, VMs on the network now get all of the network's DHCP ports
advertised as internal DNS servers; in our case, several stale entries
in /etc/resolv.conf. The problem is that some of these DHCP agents have
been unscheduled, so those DNS servers don't actually exist. Also, in
the VMs, entries beyond the first 3 are not queried.

Here is resolv.conf on a VM:

[root@arjunpmk-master ~]# vim /etc/resolv.conf

# Generated by NetworkManager
search mpt1.pf9.io
nameserver 10.128.144.16
nameserver 10.128.144.23
nameserver 10.128.144.15
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 10.128.144.7
nameserver 10.128.144.4
nameserver 10.128.144.8
nameserver 10.128.144.9
nameserver 10.128.144.17
nameserver 10.128.144.12
nameserver 10.128.144.45
nameserver 10.128.144.46
nameserver 10.128.144.51


Here you can see all the DHCP ports for the network of this VM:

[root@df-us-mpt1-kvm arjun(admin)]# openstack port list --network ead88ed3-f1e0-4498-8c1e-6d091083ae33 --device-owner network:dhcp

(All ports have an empty Name and a fixed IP on subnet
9757ae4a-ccfb-49b0-a9cc-53b8664631a6; those columns are condensed below.)

+--------------------------------------+-------------------+---------------+--------+
| ID                                   | MAC Address       | Fixed IP      | Status |
+--------------------------------------+-------------------+---------------+--------+
| 02ff0f4c-f39d-4207-90b4-2a69585f4c8a | fa:16:3e:a9:36:82 | 10.128.144.16 | ACTIVE |
| 0b612f86-ad06-4bce-a333-bc18f3e9e7b1 | fa:16:3e:bb:d8:3d | 10.128.144.23 | DOWN   |
| 402338ac-2ca6-4312-a2df-a306fc589f10 | fa:16:3e:a3:a8:57 | 10.128.144.15 | ACTIVE |
| 5d2edc73-4eff-44c0-8993-125636973384 | fa:16:3e:6c:cd:2b | 10.128.144.7  | ACTIVE |
| 78241da3-9674-479a-8b45-a580c7f8b117 | fa:16:3e:d0:9d:ef | 10.128.144.4  | ACTIVE |
| 7b41bf47-d4d4-434a-b704-4c67182ffcaa | fa:16:3e:4c:cf:54 | 10.128.144.8  | ACTIVE |
| 96897190-1aa8-4c17-a7d1-c3744f1bf962 | fa:16:3e:e8:55:29 | 10.128.144.45 | ACTIVE |
| af87dde6-fb46-4516-9569-e46496398b64 | fa:16:3e:0e:61:14 | 10.128.144.9  | ACTIVE |
| c2a2112d-c6ef-4411-a415-1a453d74a838 | fa:16:3e:d0:39:67 | 10.128.144.46 | DOWN   |
| c8298fbd-06e7-4488-a3e1-874e9341d4cf | fa:16:3e:d6:3c:ac | 10.128.144.51 | DOWN   |
| d6f0206f-ae3c-4ebf-95cb-104dad786724 | fa:16:3e:ab:ab:22 | 10.128.144.17 | ACTIVE |
| e2be0f98--4645-b58a-435e5513a4d3     | fa:16:3e:b4:ba:c0 | 10.128.144.12 | DOWN   |
+--------------------------------------+-------------------+---------------+--------+


If I view the first DNS server in the VM's resolv.conf (10.128.144.16), you
can see its status is ACTIVE, but it is actually a reserved port. The same is
true of the 2nd nameserver entry. Luckily the 3rd entry is valid, but this
causes timeouts, and all DNS lookups take 10 seconds since the first two
fail. VMs on other networks aren't so lucky, where all 3 nameservers are
reserved.


Expectation: Only DHCP ports that are actually scheduled (not reserved)
should be advertised as DNS nameservers. I don't know if this means marking
the port as DOWN, or deleting the port when unscheduled.

Maybe the status also needs to be updated here?
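
A minimal sketch of that idea (hypothetical, not a merged patch): where
agentschedulers_db converts the port into a reserved DHCP port, also flip
its status so it stops being advertised as a nameserver. The constant and
method names below are the existing neutron-lib/core-plugin ones:

    from neutron_lib import constants

    # when unscheduling: reserve the port *and* mark it DOWN
    port_update = {
        'device_id': constants.DEVICE_ID_RESERVED_DHCP_PORT,
        'status': constants.PORT_STATUS_DOWN,
    }
    self.plugin.update_port(context, port['id'], {'port': port_update})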

[Yahoo-eng-team] [Bug 1795816] Re: neutron_dynamic_routing Bgp floatingip_update KeyError: 'last_known_router_id'

2019-11-13 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/665328
Committed: 
https://git.openstack.org/cgit/openstack/neutron-dynamic-routing/commit/?id=4780fe548b86eba7f64a57ccf2d000958c238253
Submitter: Zuul
Branch: master

commit 4780fe548b86eba7f64a57ccf2d000958c238253
Author: Benoît Knecht 
Date:   Fri Jun 14 10:12:18 2019 +0200

bgp: Gracefully handle missing last_known_router_id

When a server is deleted before its floating IP has been disassociated,
the notification doesn't contain a `last_known_router_id` key, which
results in a `KeyError` exception being thrown.

This commit gracefully handles this situation by setting
`last_router_id` to `None` when `last_known_router_id` is missing.

Change-Id: If127a33cec7ce6c4d264a191df37c30decab4daa
Closes-Bug: #1795816


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1795816

Title:
  neutron_dynamic_routing Bgp floatingip_update KeyError:
  'last_known_router_id'

Status in neutron:
  Fix Released

Bug description:
  Hi, every time I run the tempest test:
  
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops

  a python exception appears in the neutron server.log:

  2018-10-02 16:13:01.380 11754 ERROR neutron_lib.callbacks.manager 
[req-3fe9d833-84d2-41ed-bae3-90f02b1425f4 c3f8dd04c65b44adbda2389ef5aa8f87 
38908f51f8c740b18e71fc62352076fb - default default] Error during notification 
for 
neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin.floatingip_update_callback-13778
 floatingip, after_update: KeyError: 'last_known_router_id'
  2018-10-02 16:13:01.380 11754 ERROR neutron_lib.callbacks.manager Traceback 
(most recent call last):
  2018-10-02 16:13:01.380 11754 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py", line 177, 
in _notify_loop
  2018-10-02 16:13:01.380 11754 ERROR neutron_lib.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  2018-10-02 16:13:01.380 11754 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron_dynamic_routing/services/bgp/bgp_plugin.py",
 line 236, in floatingip_update_callback
  2018-10-02 16:13:01.380 11754 ERROR neutron_lib.callbacks.manager 
last_router_id = kwargs['last_known_router_id']
  2018-10-02 16:13:01.380 11754 ERROR neutron_lib.callbacks.manager KeyError: 
'last_known_router_id'
  2018-10-02 16:13:01.380 11754 ERROR neutron_lib.callbacks.manager

  It seems that replacing the line:
  233 last_router_id = kwargs['last_known_router_id']

  in /usr/lib/python2.7/site-
  packages/neutron_dynamic_routing/services/bgp/bgp_plugin.py with the line
  last_router_id = kwargs.get('last_known_router_id') solves the
  problem.

  This problem is found in Pike and Queens releases of OpenStack.
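
  The merged fix (per the commit message above) amounts to tolerating the
  missing key:

      # neutron_dynamic_routing/services/bgp/bgp_plugin.py,
      # in floatingip_update_callback:
      last_router_id = kwargs.get('last_known_router_id')  # None if absent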

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1795816/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1852491] [NEW] usr_lib_exec path is wrong on FreeBSD

2019-11-13 Thread Igor Galić
Public bug reported:

The FreeBSD Distro class does not override `usr_lib_exec`, so it defaults to
`/usr/lib`, which is wrong on FreeBSD.
The correct path (prefix) is usr_lib_exec = '/usr/local/lib'.
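
A one-line sketch of the suggested fix (the module path and base class
are assumptions based on cloud-init's distro layout):

    # cloudinit/distros/freebsd.py
    class Distro(distros.Distro):
        usr_lib_exec = '/usr/local/lib'  # instead of the /usr/lib default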

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1852491

Title:
  usr_lib_exec path is wrong on FreeBSD

Status in cloud-init:
  New

Bug description:
  The FreeBSD Distro class does not override `usr_lib_exec`, so it defaults to
  `/usr/lib`, which is wrong on FreeBSD.
  The correct path (prefix) is usr_lib_exec = '/usr/local/lib'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1852491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1852468] Re: network router:external value is non-boolean (Internal) which causes server create failure

2019-11-13 Thread Matt Riedemann
Never mind about the value; it's the openstack CLI that is translating
the false value to "Internal":

https://github.com/openstack/python-openstackclient/blob/d17a1c8039807cdac29e77eb5f0724d181bdd831/openstackclient/network/v2/network.py#L33
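
Paraphrasing what that CLI code does (the helper name here is
illustrative, not copied from the source):

    def _format_router_external(is_external):
        # the Network API returns a boolean; the CLI renders it as text
        return 'External' if is_external else 'Internal'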

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1852468

Title:
  network router:external value is non-boolean (Internal) which causes
  server create failure

Status in neutron:
  Invalid

Bug description:
  I noticed this here:

  https://review.opendev.org/#/c/693248/

  Nova has this post-test script:

  
https://zuul.opendev.org/t/openstack/build/540a9fc0dbc64abb92d3f3e513573307/console#4/0/0/controller

  Which creates an internal private network:

  + /opt/stack/nova/gate/post_test_hook.sh:main:120 :   openstack network create net0 --provider-network-type vlan --provider-physical-network public --provider-segment 100 --project demo
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | UP                                   |
  | availability_zone_hints   |                                      |
  | availability_zones        |                                      |
  | created_at                | 2019-11-13T15:35:09Z                 |
  | description               |                                      |
  | dns_domain                | None                                 |
  | id                        | c0c449d7-267d-42f3-86a1-b27b472edf65 |
  | ipv4_address_scope        | None                                 |
  | ipv6_address_scope        | None                                 |
  | is_default                | False                                |
  | is_vlan_transparent       | None                                 |
  | location                  | cloud='', project.domain_id=, project.domain_name=, project.id='5e899e65a6fc46ba9e8bf4210c4a6a2e', project.name=, region_name='RegionOne', zone= |
  | mtu                       | 1450                                 |
  | name                      | net0                                 |
  | port_security_enabled     | True                                 |
  | project_id                | 5e899e65a6fc46ba9e8bf4210c4a6a2e     |
  | provider:network_type     | vlan                                 |
  | provider:physical_network | public                               |
  | provider:segmentation_id  | 100                                  |
[Yahoo-eng-team] [Bug 1852468] [NEW] network router:external value is non-boolean (Internal) which causes server create failure

2019-11-13 Thread Matt Riedemann
Public bug reported:

I noticed this here:

https://review.opendev.org/#/c/693248/

Nova has this post-test script:

https://zuul.opendev.org/t/openstack/build/540a9fc0dbc64abb92d3f3e513573307/console#4/0/0/controller

Which creates an internal private network:

+ /opt/stack/nova/gate/post_test_hook.sh:main:120 :   openstack network create net0 --provider-network-type vlan --provider-physical-network public --provider-segment 100 --project demo
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2019-11-13T15:35:09Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | c0c449d7-267d-42f3-86a1-b27b472edf65 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| is_vlan_transparent       | None                                 |
| location                  | cloud='', project.domain_id=, project.domain_name=, project.id='5e899e65a6fc46ba9e8bf4210c4a6a2e', project.name=, region_name='RegionOne', zone= |
| mtu                       | 1450                                 |
| name                      | net0                                 |
| port_security_enabled     | True                                 |
| project_id                | 5e899e65a6fc46ba9e8bf4210c4a6a2e     |
| provider:network_type     | vlan                                 |
| provider:physical_network | public                               |
| provider:segmentation_id  | 100                                  |
| qos_policy_id             | None                                 |
| revision_number           | 1                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
[Yahoo-eng-team] [Bug 1852465] [NEW] ExternalNetworkAttachForbidden should result in BuildAbortException, not reschedule

2019-11-13 Thread Matt Riedemann
Public bug reported:

I saw this in a CI run where creating a server on an external network as
a non-admin user failed and was rescheduled. Here are the failures from
the compute logs:

https://zuul.opendev.org/t/openstack/build/540a9fc0dbc64abb92d3f3e513573307/log/controller/logs/screen-n-cpu.txt.gz#28774

https://zuul.opendev.org/t/openstack/build/540a9fc0dbc64abb92d3f3e513573307/log/compute1/logs/screen-n-cpu.txt.gz#37415

Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [None req-41117798-8a4e-469f-bfbb-8bdfdea1a83f demo demo] [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd] Instance failed to spawn: nova.exception.ExternalNetworkAttachForbidden: It is not allowed to create an interface on external network 29715f6f-24ab-49b7-abff-60d3f97596a0
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd] Traceback (most recent call last):
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/opt/stack/nova/nova/compute/manager.py", line 2659, in _build_resources
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     yield resources
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/opt/stack/nova/nova/compute/manager.py", line 2433, in _build_and_run_instance
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     block_device_info=block_device_info)
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3467, in spawn
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     mdevs=mdevs)
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6019, in _get_guest_xml
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     network_info_str = str(network_info)
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/opt/stack/nova/nova/network/model.py", line 601, in __str__
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     return self._sync_wrapper(fn, *args, **kwargs)
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/opt/stack/nova/nova/network/model.py", line 584, in _sync_wrapper
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     self.wait()
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/opt/stack/nova/nova/network/model.py", line 616, in wait
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     self[:] = self._gt.wait()
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/usr/local/lib/python3.6/dist-packages/eventlet/greenthread.py", line 181, in wait
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     return self._exit_event.wait()
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File "/usr/local/lib/python3.6/dist-packages/eventlet/event.py", line 132, in wait
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]     current.throw(*self._exc)
Nov 13 15:35:44.312293 ubuntu-bionic-rax-ord-0012778423 nova-compute[26791]: ERROR nova.compute.manager [instance: be6eb09a-f0d9-4c04-9c55-29230a253fbd]   File

[Yahoo-eng-team] [Bug 1852458] [NEW] "create" instance action not created when instance is buried in cell0

2019-11-13 Thread Matt Riedemann
Public bug reported:

Before cell0 was introduced the API would create the "create" instance
action for each instance in the nova cell database before casting off to
conductor to do scheduling:

https://github.com/openstack/nova/blob/mitaka-eol/nova/compute/api.py#L1180

Note that conductor failed to "complete" the action with a failure
event:

https://github.com/openstack/nova/blob/mitaka-eol/nova/conductor/manager.py#L374

But at least the action was created.

Since then, with cell0, if scheduling fails the instance is buried in
the cell0 database but no instance action is created. To illustrate, I
disabled the single nova-compute service on my devstack host and created
a server which failed with NoValidHost:

$ openstack server show build-fail1 -f value -c fault
{u'message': u'No valid host was found. ', u'code': 500, u'created': 
u'2019-11-13T15:57:13Z'}

When listing instance actions I expected to see a "create" action but
there were none:

$ nova instance-action-list 008a7d52-dd83-4f52-a720-b3cfcc498259
+--------+------------+---------+------------+------------+
| Action | Request_ID | Message | Start_Time | Updated_At |
+--------+------------+---------+------------+------------+
+--------+------------+---------+------------+------------+

This is because the "create" action is only created when the instance is
scheduled to a specific cell:

https://github.com/openstack/nova/blob/20.0.0/nova/conductor/manager.py#L1460

Solution:

The ComputeTaskManager._bury_in_cell0 method should also create a
"create" action in cell0 like it does for the instance BDMs and tags.

This goes back to Ocata: https://review.opendev.org/#/c/319379/
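
A sketch of that solution (hypothetical patch; the object API and the
CREATE constant are nova's existing instance-actions machinery):

    # in ComputeTaskManager._bury_in_cell0, next to the BDM/tag writes:
    with nova_context.target_cell(context, cell0) as cctxt:
        objects.InstanceAction.action_start(
            cctxt, instance.uuid, instance_actions.CREATE,
            want_result=False)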

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged

** Affects: nova/ocata
 Importance: Low
 Status: Triaged

** Affects: nova/pike
 Importance: Low
 Status: Triaged

** Affects: nova/queens
 Importance: Low
 Status: Triaged

** Affects: nova/rocky
 Importance: Low
 Status: Triaged

** Affects: nova/stein
 Importance: Low
 Status: Triaged

** Affects: nova/train
 Importance: Low
 Status: Triaged


** Tags: cells regression

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1852458

Title:
  "create" instance action not created when instance is buried in cell0

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) ocata series:
  Triaged
Status in OpenStack Compute (nova) pike series:
  Triaged
Status in OpenStack Compute (nova) queens series:
  Triaged
Status in OpenStack Compute (nova) rocky series:
  Triaged
Status in OpenStack Compute (nova) stein series:
  Triaged
Status in OpenStack Compute (nova) train series:
  Triaged

Bug description:
  Before cell0 was introduced the API would create the "create" instance
  action for each instance in the nova cell database before casting off
  to conductor to do scheduling:

  https://github.com/openstack/nova/blob/mitaka-eol/nova/compute/api.py#L1180

  Note that conductor failed to "complete" the action with a failure
  event:

  https://github.com/openstack/nova/blob/mitaka-eol/nova/conductor/manager.py#L374

  But at least the action was created.

  Since then, with cell0, if scheduling fails the instance is buried in
  the cell0 database but no instance action is created. To illustrate, I
  disabled the single nova-compute service on my devstack host and
  created a server which failed with NoValidHost:

  $ openstack server show build-fail1 -f value -c fault
  {u'message': u'No valid host was found. ', u'code': 500, u'created': 
u'2019-11-13T15:57:13Z'}

  When listing instance actions I expected to see a "create" action but
  there were none:

  $ nova instance-action-list 008a7d52-dd83-4f52-a720-b3cfcc498259
  +--------+------------+---------+------------+------------+
  | Action | Request_ID | Message | Start_Time | Updated_At |
  +--------+------------+---------+------------+------------+
  +--------+------------+---------+------------+------------+

  This is because the "create" action is only created when the instance
  is scheduled to a specific cell:

  https://github.com/openstack/nova/blob/20.0.0/nova/conductor/manager.py#L1460

  Solution:

  The ComputeTaskManager._bury_in_cell0 method should also create a
  "create" action in cell0 like it does for the instance BDMs and tags.

  This goes back to Ocata: 

[Yahoo-eng-team] [Bug 1852461] [NEW] Broken links in config-drive docs on RTD

2019-11-13 Thread Zane Bitter
Public bug reported:

On the page
https://cloudinit.readthedocs.io/en/latest/topics/datasources/configdrive.html
the two links to the "config drive extension" and "introduction" are
broken because of reorganisation of OpenStack docs.

Admin documentation (for enabling the config drive in Nova) is here: 
http://docs.openstack.org/trunk/openstack-compute/admin/content/config-drive.html

User documentation is here:
https://cloudinit.readthedocs.io/en/latest/topics/datasources/configdrive.html

Note that currently the "config drive extension" link points to the user
documentation, and the "introduction" link points to the admin
documentation, which is backwards.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1852461

Title:
  Broken links in config-drive docs on RTD

Status in cloud-init:
  New

Bug description:
  On the page
  https://cloudinit.readthedocs.io/en/latest/topics/datasources/configdrive.html
  the two links to the "config drive extension" and "introduction" are
  broken because of reorganisation of OpenStack docs.

  Admin documentation (for enabling the config drive in Nova) is here: 
  
http://docs.openstack.org/trunk/openstack-compute/admin/content/config-drive.html

  User documentation is here:
  https://cloudinit.readthedocs.io/en/latest/topics/datasources/configdrive.html

  Note that currently the "config drive extension" link points to the
  user documentation, and the "introduction" link points to the admin
  documentation, which is backwards.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1852461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1852456] [NEW] doc: list of modules is no longer present

2019-11-13 Thread Ryan Harper
Public bug reported:

The list of modules has disappeared from the documentation sidebar.

- version 19.3: https://cloudinit.readthedocs.io/en/19.3/topics/modules.html
- version 19.2: https://cloudinit.readthedocs.io/en/19.2/topics/modules.html

In 19.2, the sidebar has an entry for each module, and it's a lot easier
to find and navigate to the appropriate module.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1852456

Title:
  doc: list of modules is no longer present

Status in cloud-init:
  New

Bug description:
  The list of modules has disappeared from the documentation sidebar.

  - version 19.3: https://cloudinit.readthedocs.io/en/19.3/topics/modules.html
  - version 19.2: https://cloudinit.readthedocs.io/en/19.2/topics/modules.html

  In 19.2, the sidebar has an entry for each module, and it's a lot
  easier to find and navigate to the appropriate module.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1852456/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1852447] [NEW] FWaaS: adding a router port to fwg and removing it leaves the fwg active

2019-11-13 Thread Dr. Jens Harbott
Public bug reported:

Steps to reproduce:

- Create a router
- Optionally create a new firewall group (issue also happens when using the 
default FWG)
- Add a subnet to the router
- Add the router port to the firewall group
- Verify that the status of the firewall group changes from INACTIVE to ACTIVE
- Remove the subnet from the router again

Actual result:

The firewall group has an empty ports list but still has status ACTIVE.

Expected result:

The firewall group has an empty ports list and status INACTIVE.

Tested with devstack on current master. This may be related to
https://bugs.launchpad.net/neutron/+bug/1845300 but that one seems to
happen only sporadically, and also the tempest test actually explicitly
removes the router ports from the fwg.
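
A condensed CLI sketch of the reproduction (resource names are
placeholders; requires the FWaaS v2 OSC plugin):

    openstack router add subnet router1 subnet1
    PORT_ID=$(openstack port list --router router1 -f value -c ID)
    openstack firewall group create --name fwg1 --port $PORT_ID
    openstack firewall group show fwg1    # Status: ACTIVE
    openstack router remove subnet router1 subnet1
    openstack firewall group show fwg1    # Ports empty, Status still ACTIVE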

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

** Tags added: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1852447

Title:
  FWaaS: adding a router port to fwg and removing it leaves the fwg
  active

Status in neutron:
  New

Bug description:
  Steps to reproduce:

  - Create a router
  - Optionally create a new firewall group (issue also happens when using the 
default FWG)
  - Add a subnet to the router
  - Add the router port to the firewall group
  - Verify that the status of the firewall group changes from INACTIVE to ACTIVE
  - Remove the subnet from the router again

  Actual result:

  The firewall group has an empty ports list but still has status
  ACTIVE.

  Expected result:

  The firewall group has an empty ports list and status INACTIVE.

  Tested with devstack on current master. This may be related to
  https://bugs.launchpad.net/neutron/+bug/1845300 but that one seems to
  happen only sporadically, and also the tempest test actually explicitly
  removes the router ports from the fwg.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1852447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1852446] [NEW] Hypervisors in nova - no subpage details for ironic

2019-11-13 Thread Matt Riedemann
Public bug reported:

- [x] This is a doc addition request.

The admin configuration for hypervisors does not have a subpage with
details about configuring nova with the ironic compute driver. There are
at least a few things that could go into a page like that:

* Summary of what it does and how it interacts with the ironic service
as a 'hypervisor'. Some of that information is available in the ironic
docs, e.g.:

https://docs.openstack.org/ironic/latest/install/get_started.html?highlight=nova#interaction-with-openstack-components

Since it comes up from time to time, I would also mention that the
ironic driver is the only one in nova where the compute_nodes table
record is 1:M with the compute services table record for the given host,
meaning a nova-compute service can manage multiple ComputeNodes, and the
ComputeNodes for the ironic driver managed compute service uses the
ironic node uuid for the compute node hypervisor_hostname (nodename) and
uuid fields. And ironic node : compute node : instance are 1:1:1. This
is more contributor/reference information but it's worth mentioning
somewhere since it's kind of tribal knowledge in nova.

* Ironic-specific configuration:
https://docs.openstack.org/nova/latest/configuration/config.html#ironic

- This could also include things like configuring baremetal flavors:
https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html

- Running multiple nova-computes in HA mode managing the same set of
nodes:

https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/ironic-multiple-compute-hosts.html

* Scaling and performance issues. Some of this is discussed in this
mailing list thread:

http://lists.openstack.org/pipermail/openstack-discuss/2019-November/thread.html#10655

- Partitioning schemes: https://specs.openstack.org/openstack/nova-specs/specs/stein/implemented/ironic-conductor-groups.html

* Known limitations / missing features, e.g. move operations
(migrate/resize).

---
Release:  on 2018-09-04 18:11:45
SHA: 8a71962e0149fa9ad7f66c17849bf69df3e78d33
Source: 
https://opendev.org/openstack/nova/src/doc/source/admin/configuration/hypervisors.rst
URL: https://docs.openstack.org/nova/latest/admin/configuration/hypervisors.html

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc ironic

** Description changed:

  - [x] This is a doc addition request.
  
  The admin configuration for hypervisors does not have a subpage with
  details about configuring nova with the ironic compute driver. There are
  at least a few things that could go into a page like that:
  
  * Summary of what it does and how it interacts with the ironic service
  as a 'hypervisor'. Some of that information is available in the ironic
  docs, e.g.:
  
  
https://docs.openstack.org/ironic/latest/install/get_started.html?highlight=nova
  #interaction-with-openstack-components
  
  Since it comes up from time to time, I would also mention that the
- ironic driver is the only one in nova would the compute_nodes table
+ ironic driver is the only one in nova where the compute_nodes table
  record is 1:M with the compute services table record for the given host,
  meaning a nova-compute service can manage multiple ComputeNodes, and the
  ComputeNodes for the ironic driver managed compute service uses the
  ironic node uuid for the compute node hypervisor_hostname (nodename) and
  uuid fields. And ironic node : compute node : instance are 1:1:1. This
  is more contributor/reference information but it's worth mentioning
  somewhere since it's kind of tribal knowledge in nova.
  
  * Ironic-specific configuration:
  https://docs.openstack.org/nova/latest/configuration/config.html#ironic
  
  - This could also include things like configuring baremetal flavors:
  https://docs.openstack.org/ironic/latest/install/configure-nova-
  flavors.html
  
  - Running multiple nova-computes in HA mode managing the same set of
  nodes:
  
  https://specs.openstack.org/openstack/nova-
  specs/specs/newton/implemented/ironic-multiple-compute-hosts.html
  
  * Scaling and performance issues. Some of this is discussed in this
  mailing list thread:
  
  http://lists.openstack.org/pipermail/openstack-
  discuss/2019-November/thread.html#10655
  
  - Partitioning schemes: https://specs.openstack.org/openstack/nova-
  specs/specs/stein/implemented/ironic-conductor-groups.html
  
  * Known limitations / missing features, e.g. move operations
  (migrate/resize).
  
  ---
  Release:  on 2018-09-04 18:11:45
  SHA: 8a71962e0149fa9ad7f66c17849bf69df3e78d33
  Source: 
https://opendev.org/openstack/nova/src/doc/source/admin/configuration/hypervisors.rst
  URL: 
https://docs.openstack.org/nova/latest/admin/configuration/hypervisors.html

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).

[Yahoo-eng-team] [Bug 1852442] [NEW] ds-identify uses the /sys filesystem which is linux specific and non-portable

2019-11-13 Thread Igor Galić
Public bug reported:

On FreeBSD, /sys is a convenience link to /usr/src/sys, the kernel source
code.

Especially for the DMI_ info, we should probably look into using
`dmidecode` which at least on FreeBSD we already pull in as dependency:

root@container-host-02:~ # pkg info -r dmidecode
dmidecode-3.2:
py36-cloud-init-19.2


The only problem I see is that many names don't map nicely.
For instance, sys_vendor is (either bios-vendor or) system-manufacturer.
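
A hedged sketch of what the dmidecode-based lookup could look like (the
-s keywords are standard dmidecode ones; the mapping itself is the open
question raised above):

    # instead of reading /sys/class/dmi/id/sys_vendor on FreeBSD:
    dmidecode -s system-manufacturer
    # instead of /sys/class/dmi/id/product_name:
    dmidecode -s system-product-name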

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1852442

Title:
  ds-identify uses the /sys filesystem which is linux specific and non-
  portable

Status in cloud-init:
  New

Bug description:
  On FreeBSD, /sys is a convenience link to /usr/src/sys, the kernel
  source code.

  Especially for the DMI_ info, we should probably look into using
  `dmidecode` which at least on FreeBSD we already pull in as
  dependency:

  root@container-host-02:~ # pkg info -r dmidecode
  dmidecode-3.2:
py36-cloud-init-19.2

  
  The only problem I see is that many names don't map nicely.
  For instance, sys_vendor is (either bios-vendor or) system-manufacturer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1852442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1852437] Re: Allow ability to disable individual CPU features via `cpu_model_extra_flags`

2019-11-13 Thread Kashyap Chamarthy
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1852437

Title:
  Allow ability to disable individual CPU features via
  `cpu_model_extra_flags`

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  What?
  -

  When using a custom CPU model, Nova currently allows enabling
  individual CPU flags/features via the config attribute,
  `cpu_model_extra_flags`:

  [libvirt]
  cpu_mode=custom
  cpu_model=IvyBridge
  cpu_model_extra_flags="pcid,ssbd, md-clear"

  The above only lets you enable the CPU features.  This RFE is to also
  allow _disabling_ individual CPU features.

  
  Why?
  ---

  A couple of reasons:

  - An Operator wants to generate a baseline CPU config (that facilitates
    live migration) across their Compute node pool.  However, a certain
    CPU flag is causing an intolerable performance issue for their
    guest workloads.  If the Operator isolated the problem to _that_
    specific CPU flag, then they would like to disable the flag.

- More importantly, a specific CPU flag might trigger a CPU
  vulnerability.  In such a case, the mitigation for it could be to
  simply _disable_ the offending CPU flag.

  Allowing disabling of individual CPU flags via Nova would enable the
  above use cases.

  
  How?
  

  By allowing the notion of '+' / '-' to indicate whether to enable or
  disable a given CPU flag.

  E.g. if you specify the below in 'nova.conf' (on the Compute nodes):

  [libvirt]
  cpu_mode=custom
  cpu_model=IvyBridge
  cpu_model_extra_flags="+pcid,-mtrr,ssbd"

  Then, when you start an instance, Nova should generate the below XML:

    <cpu match='exact'>
      <model fallback='forbid'>IvyBridge</model>
      <vendor>Intel</vendor>
      <feature policy='require' name='pcid'/>
      <feature policy='disable' name='mtrr'/>
      <feature policy='require' name='ssbd'/>
    </cpu>

  
  Note that the requirement to specify '+' / '-' for individual flags
  should be optional.  If neither is specified, then we should assume '+',
  and enable the feature (as shown above for the 'ssbd' flag).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1852437/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp