[Yahoo-eng-team] [Bug 1821963] Re: Rally test delete-subnets fails at higher concurrency

2019-05-28 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1821963

Title:
  Rally test delete-subnets fails at higher concurrency

Status in neutron:
  Expired

Bug description:
  We have been noticing that the rally test delete-subnets fails when
  concurrency is higher. We first noticed issues with a gbpservice-based
  deployment on neutron stable/newton, and we have tried to triage this
  further with the community reference implementation from stable/ocata,
  stable/queens, and master before submitting this defect.

  We notice failures at a concurrency level of 6 and higher with older
  community reference implementation releases. It is better with newer
  releases and with the reference implementation master (where we notice
  failures at a concurrency of 17 or higher).

  Sometimes we have noticed tracebacks from the IP allocation cleanup with a
  FlushError, but the failure consistently seems to be triggered by a DELETE
  for a subnet that has already been deleted. For some reason rally is
  prompted to send a second DELETE for the subnet, resulting in a 404 or 409.

  It is not clear if this is a rally issue or if the retry is prompted by
  the high latency seen on the DELETE.
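
  To illustrate that hypothesis, here is a minimal sketch (not rally or
  neutronclient code; the endpoint, timeout and retry values are made up) of
  how a client-side timeout plus a blind retry can turn one slow DELETE into
  a second request that returns 404:

    import requests

    def delete_subnet(base_url, subnet_id, timeout=10, retries=1):
        # Sketch: if the first DELETE succeeds server-side but takes longer
        # than `timeout` seconds (10-22 s in the log excerpts below), the
        # retry hits an already-deleted subnet and gets back a 404.
        url = "%s/v2.0/subnets/%s" % (base_url, subnet_id)
        for _attempt in range(retries + 1):
            try:
                resp = requests.delete(url, timeout=timeout)
                return resp.status_code  # 204 on success, 404 if already gone
            except requests.exceptions.Timeout:
                continue  # slow server: the client retries blindly
        return None

  Whether rally (or the client library it uses) actually retries like this,
  and whether it treats the resulting 404 as a failure, is exactly the open
  question here.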

  Notes below:

  1) Subnet deletes showing high latency as well as a few 404s.
  2) Sample traceback we notice in our gbpservice implementation, which I
  have also seen on community stable/ocata.

  1)
  Mar 26 12:21:21 ubuntu neutron-server[2832]: INFO neutron.wsgi [None 
req-0b1112a5-be40-4b46-839e-4942328a0280 c_rally_2c3dc3ea_v5nfLkdh 
c_rally_2c3dc3ea_dCdKhYP8] 10.30.10.147 "DELETE 
/v2.0/subnets/e02dd6bc-94cd-40e8-941d-1d8d4c2640a5 HTTP/1.1" status: 404  len: 
342 time: 13.3713560
  Mar 26 12:21:21 ubuntu neutron-server[2832]: INFO neutron.wsgi [None 
req-36220d55-ec76-4fb7-8721-75638ce9e84b c_rally_2c3dc3ea_v5nfLkdh 
c_rally_2c3dc3ea_dCdKhYP8] 10.30.10.147 "DELETE 
/v2.0/subnets/e02dd6bc-94cd-40e8-941d-1d8d4c2640a5 HTTP/1.1" status: 204  len: 
173 time: 10.7947230
  Mar 26 12:21:22 ubuntu neutron-server[2832]: INFO neutron.wsgi [None 
req-8e40b03f-1c1f-4e99-a707-b02b95e95e8f c_rally_2c3dc3ea_v5nfLkdh 
c_rally_2c3dc3ea_BWCrXuCS] 10.30.10.147 "DELETE 
/v2.0/subnets/ab5c7118-59c2-4367-83da-8fb1ccf3d33f HTTP/1.1" status: 404  len: 
342 time: 14.2172780
  Mar 26 12:21:24 ubuntu neutron-server[2832]: INFO neutron.wsgi [None 
req-050fef8e-4342-4dce-a63e-ba14e4ee8890 c_rally_2c3dc3ea_v5nfLkdh 
c_rally_2c3dc3ea_c0s9ZE8D] 10.30.10.147 "DELETE 
/v2.0/subnets/b86002a0-731a-474b-b9df-159107b8212e HTTP/1.1" status: 204  len: 
173 time: 14.7142339
  Mar 26 12:21:24 ubuntu neutron-server[2832]: INFO neutron.wsgi [None 
req-189e4af7-8419-4894-833f-fabbd1d0ad14 c_rally_2c3dc3ea_v5nfLkdh 
c_rally_2c3dc3ea_eFa0pM1j] 10.30.10.147 "DELETE 
/v2.0/subnets/27c6e490-61af-4ea0-8723-99b1efdec072 HTTP/1.1" status: 204  len: 
173 time: 15.0186520
  Mar 26 12:21:26 ubuntu neutron-server[2832]: INFO neutron.wsgi [None 
req-0f7af610-724c-4ac2-bba9-11ef60e07b25 c_rally_2c3dc3ea_v5nfLkdh 
c_rally_2c3dc3ea_BWCrXuCS] 10.30.10.147 "DELETE 
/v2.0/subnets/ab5c7118-59c2-4367-83da-8fb1ccf3d33f HTTP/1.1" status: 204  len: 
173 time: 15.3438430
  Mar 26 12:21:27 ubuntu neutron-server[2832]: INFO neutron.wsgi [None 
req-0e945b06-7d79-4981-8cca-f42419f7d5f2 c_rally_2c3dc3ea_v5nfLkdh 
c_rally_2c3dc3ea_FaZYbsDZ] 10.30.10.147 "DELETE 
/v2.0/subnets/1cc7929d-f151-4dc3-a7de-5b7c1ad731e0 HTTP/1.1" status: 204  len: 
173 time: 17.7165470
  Mar 26 12:21:28 ubuntu neutron-server[2832]: INFO neutron.wsgi [None 
req-f101d5ee-54c5-4919-b5e7-5fd570ffc252 c_rally_2c3dc3ea_v5nfLkdh 
c_rally_2c3dc3ea_4MdtgRt1] 10.30.10.147 "DELETE 
/v2.0/subnets/70513aed-2ab3-4ae5-8e70-15180a662a2c HTTP/1.1" status: 204  len: 
173 time: 17.8223970
  Mar 26 12:21:28 ubuntu neutron-server[2832]: INFO neutron.wsgi [None 
req-623419d7-b9b7-4c83-aa62-1723b2a3964e c_rally_2c3dc3ea_v5nfLkdh 
c_rally_2c3dc3ea_LwIpTIRg] 10.30.10.147 "DELETE 
/v2.0/subnets/951b111d-49ce-4e75-ae4f-7359bc11f9da HTTP/1.1" status: 204  len: 
173 time: 19.8612812
  Mar 26 12:21:30 ubuntu neutron-server[2832]: INFO neutron.wsgi [None 
req-631dab60-c4a3-4a40-8a1e-f972180c4195 c_rally_2c3dc3ea_v5nfLkdh 
c_rally_2c3dc3ea_Th4lV5RJ] 10.30.10.147 "DELETE 
/v2.0/subnets/008a7d25-0792-4829-b799-fa22f8bbfd0a HTTP/1.1" status: 204  len: 
173 time: 20.2435060
  Mar 26 12:21:30 ubuntu neutron-server[2832]: INFO neutron.wsgi [None 
req-9f6273e7-98ef-45d3-a543-f58614381ea6 c_rally_2c3dc3ea_v5nfLkdh 
c_rally_2c3dc3ea_LQdIj5w1] 10.30.10.147 "DELETE 
/v2.0/subnets/b9c0d9b5-8f9d-4875-a11c-f9146d7dc7c8 HTTP/1.1" status: 204  len: 
173 time: 22.4103460
  Mar 26 12:21:31 ubuntu neutron-server[2832]: INFO neutron.wsgi [None 
req-12779a8f-ccde-445b-90bc-908efe5310cb c_rally_2c3dc3ea_v5nfLkdh 
c_rally_2c3dc3ea_sVGS0nRm] 

[Yahoo-eng-team] [Bug 1821926] Re: Nova Cloud Controller - very slow response to API calls

2019-05-28 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1821926

Title:
  Nova Cloud Controller - very slow response to API calls

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Description:

  - OpenStack Rocky cloud + Bionic

  - The deployment is good; all the services look good, Nova included.

  Nova takes ages to respond to any API call.

  - A simple 'openstack endpoint list' takes around 1 sec to show the
  output [1].

  - 'openstack host list' takes 1+ minute to provide an output [2], and the
  Nova services appear to be down even though they are up & running.

  
  [1] - https://pastebin.canonical.com/p/Mv9CsjkXR6/
  [2] - https://pastebin.canonical.com/p/HVXsT5bSPf/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1821926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1829000] Re: live migration (block-migrate) may fail if instance image is deleted in glance

2019-05-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/659054
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c1782bacd8461bdd8c833792864e61228fa451f1
Submitter: Zuul
Branch:master

commit c1782bacd8461bdd8c833792864e61228fa451f1
Author: Alexandre Arents 
Date:   Tue May 14 11:37:12 2019 +

Fix live-migration when glance image deleted

When block live-migration is run on an instance with
a deleted glance image, image.cache() is called
without specifying the instance disk size parameter,
preventing the resize of the disk on the target host.

Change-Id: Id0f05bb1275cc816d98b662820e02eae25dc57a3
Closes-Bug: #1829000


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1829000

Title:
  live migration (block-migrate) may fail if the instance image is deleted
  in glance

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  New
Status in OpenStack Compute (nova) rocky series:
  New
Status in OpenStack Compute (nova) stein series:
  New

Bug description:
  Description
  ===
  When we run live block migration on an instance with a deleted glance image,
  it may fail with the following logs:

  -- nova-compute-log: --
  2019-05-10 11:06:27.417 248758 ERROR nova.virt.libvirt.driver 
[req-b28b9aca-9135-4258-93a6-a802e6192c60 f7929cd1d8994661b88aff12977c8b9e 
54f4d231201b4944a5fa4587a09bda28 - - -] [instance: 
84601bd4-a6ee-4e00-a5bc-f7c80def7ec5] Migration operation has aborted
  2019-05-10 11:06:27.566 248758 ERROR nova.virt.libvirt.driver 
[req-b28b9aca-9135-4258-93a6-a802e6192c60 f7929cd1d8994661b88aff12977c8b9e 
54f4d231201b4944a5fa4587a09bda28 - - -] [instance: 
84601bd4-a6ee-4e00-a5bc-f7c80def7ec5] Live Migration failure: internal error: 
info migration reply was missing return status

  -- on target host /var/log/libvirt/qemu/instance-xxx.log: --
  /build/qemu-nffE1h/qemu-2.x/nbd.c:nbd_trip():L1310: From: 2416967680, Len: 
65536, Size: 2361393152, Offset: 0
  /build/qemu-nffE1h/qemu-2.x/nbd.c:nbd_trip():L1311: requested operation past 
EOF--bad client?
  /build/qemu-nffE1h/qemu-2.x/nbd.c:nbd_trip():L1310: From: 3624927232, Len: 
589824, Size: 2361393152, Offset: 0

  It seems that the pre_live_migration task does not set up the target
  instance disk correctly (see the sketch after this list):
  - because the glance image no longer exists, it falls back to the remote
    host copy method.
  - in this context, image.cache() is called without the instance disk size
    parameter.
  - as a consequence the instance disk is not resized to the correct size and
    remains at the size of its backing file, so the disk is too small and the
    libvirt live migration fails.
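
  A minimal sketch of the behaviour described above; the function names and
  signature are illustrative, not nova's actual image backend API:

    def cache_disk(fetch_func, target_path, size=None):
        # Sketch: fetch/copy a disk onto the target host, then optionally
        # resize it. When `size` is omitted (the deleted-image fallback path
        # described above), the resize step is skipped and the disk keeps
        # the size of its source/backing file.
        fetch_func(target_path)           # e.g. copy the disk from the source host
        if size is not None:
            resize_to(target_path, size)  # grow to the flavor's root disk size

    def resize_to(path, size_bytes):
        pass  # placeholder for a qemu-img/libvirt resize call

    # Buggy fallback call (no size) vs. fixed call (flavor disk size passed):
    # cache_disk(copy_from_source, '/path/to/instance/disk')
    # cache_disk(copy_from_source, '/path/to/instance/disk',
    #            size=flavor_root_gb * 1024 ** 3)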

  
  Steps to reproduce
  ==
  * Spawn a qcow2 instance from a glance image whose size is much smaller
    than the flavor's disk size.
  * Generate some user data in the instance.
  * Delete the glance image.
  * Run a live block migration.

  Environment
  ===
  Issue observed in Newton, still present in master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1829000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1800848] Re: does not recognize cloned KVM VM as new instance

2019-05-28 Thread Bug Watch Updater
** Changed in: cloud-init (Suse)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1800848

Title:
  does not recognize cloned KVM VM as new instance

Status in cloud-init:
  New
Status in cloud-init package in Suse:
  Fix Released

Bug description:
  On a SLES 15 KVM VM on Proxmox VE, cloud-init 18.2-3.13 from the
  module/repo Public-Cloud-Module_15-0 fails to recognize a newly cloned VM
  as a new instance when the clone comes from a template on which cloud-init
  had already been run for test purposes. The VM is using the NoCloud
  datasource.

  As a result the network configuration is not applied, which leads to
  duplicate IP addresses when the VM is started:

  % tail -f /var/log/cloud-init.log | grep -v util.py
  2018-10-31 13:55:27,370 - __init__.py[INFO]: 
/var/lib/cloud/data/previous-hostname differs from /etc/hostname, assuming user 
maintained hostname.
  2018-10-31 13:57:37,431 - main.py[DEBUG]: No kernel command line url found.
  2018-10-31 13:57:37,431 - main.py[DEBUG]: Closing stdin.
  2018-10-31 13:57:37,454 - main.py[DEBUG]: Checking to see if files that we 
need already exist from a previous run that would allow us to stop early.
  2018-10-31 13:57:37,455 - main.py[DEBUG]: Execution continuing, no previous 
run detected that would allow us to stop early.
  2018-10-31 13:57:37,455 - handlers.py[DEBUG]: start: 
init-network/check-cache: attempting to read from cache [trust]
  2018-10-31 13:57:37,463 - stages.py[DEBUG]: restored from cache with run 
check: DataSourceNoCloudNet [seed=/dev/sr0] [dsmode=net]
  2018-10-31 13:57:37,464 - handlers.py[DEBUG]: finish: 
init-network/check-cache: SUCCESS: restored from cache with run check: 
DataSourceNoCloudNet [seed=/dev/sr0][dsmode=net]
  2018-10-31 13:57:37,488 - stages.py[DEBUG]: previous iid found to be 
414842fe12da6f1078eca77443e6ab84592299ba
  2018-10-31 13:57:37,493 - main.py[DEBUG]: [net] init will now be targeting 
instance id: 414842fe12da6f1078eca77443e6ab84592299ba. new=False
  2018-10-31 13:57:37,514 - stages.py[DEBUG]: applying net config names for 
{'version': 1, 'config': [{'type': 'physical', 'name': 'eth0', 'mac_address': 
'76:61:cf:d5:65:b2', 'subnets': [{'type': 'static', 'address': '10.0.88.32', 
'netmask': '255.255.0.0', 'gateway': '10.0.0.4'}, {'type': 'static', 'address': 
'auto'}]}, {'type': 'nameserver', 'address': ['10.0.0.4'], 'search': 
['qs.de']}]}
  2018-10-31 13:57:37,515 - stages.py[DEBUG]: Using distro class 
  2018-10-31 13:57:37,529 - __init__.py[DEBUG]: no work necessary for renaming 
of [['76:61:cf:d5:65:b2', 'eth0', 'virtio_net', '0x0001']]
  2018-10-31 13:57:37,530 - stages.py[DEBUG]: not a new instance. network 
config is not applied.

  The network configuration obviously had changed, however.
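
  The "previous iid found" and "new=False" lines in the log above are the
  crux. A minimal sketch of the decision cloud-init makes (illustrative only;
  the path and names are assumptions, not cloud-init's actual code):

    import os

    def is_new_instance(current_iid, cloud_dir='/var/lib/cloud'):
        # Sketch: the boot counts as a "new instance" only when the
        # datasource's instance id differs from the one cached on disk.
        # A clone that keeps the same NoCloud seed keeps the same instance
        # id, so per-instance steps such as applying network config are
        # skipped (new=False, as in the log above).
        cache = os.path.join(cloud_dir, 'data', 'instance-id')
        previous_iid = None
        if os.path.exists(cache):
            with open(cache) as f:
                previous_iid = f.read().strip()
        return previous_iid != current_iid

  The usual workaround is therefore to give each clone a distinct instance id
  in its NoCloud seed (or to clear the cached cloud-init state before
  templating), so that this comparison sees a change.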

  Commands used for cloning (using a self-made shell script)

  qm shutdown 1032 ; qm clone 1032 2200 --name slesmaster ; qm template 2200
  qm clone 2200 2201 --name sles1 ; qm set 2201 --ipconfig0 
ip=10.0.88.201/8,gw=10.0.0.4 ; qm start 2201
  qm clone 2200 2202 --name sles2 ; qm set 2202 --ipconfig0 
ip=10.0.88.202/8,gw=10.0.0.4 ; qm start 2202

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1800848/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830800] Re: Compute API in nova - server group "policy" field is a string rather than an object

2019-05-28 Thread Takashi NATSUME
The API reference is generated from the master branch only, so the stable
branches do not have to be fixed.

** Changed in: nova
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

** Changed in: nova/rocky
   Status: Triaged => Won't Fix

** Changed in: nova/stein
   Status: Triaged => Won't Fix

** Changed in: nova
   Status: Triaged => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830800

Title:
  Compute API in nova - server group "policy" field is a string rather
  than an object

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) rocky series:
  Won't Fix
Status in OpenStack Compute (nova) stein series:
  Won't Fix

Bug description:
  - [x] This doc is inaccurate in this way:

  The server group policy field added in v2.64 is a string but the API
  reference says the parameter is an object.

  
  https://github.com/openstack/nova/blob/37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586/api-ref/source/parameters.yaml#L5368

  
https://github.com/openstack/nova/blob/37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586/nova/api/openstack/compute/schemas/server_groups.py#L60
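
  For reference, a hedged illustration (values are made up) of a server group
  create body at microversion 2.64 or later, where "policy" is a plain string
  and the separate "rules" field is the object-valued one:

    # Illustrative request body for POST /os-server-groups with
    # microversion >= 2.64.
    server_group_request = {
        "server_group": {
            "name": "example-group",
            "policy": "anti-affinity",            # a string, not an object
            "rules": {"max_server_per_host": 3},  # optional, object-valued
        }
    }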

  ---
  Release: 19.1.0.dev441 on 2019-03-26 18:09:01
  SHA: 37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586
  Source: https://opendev.org/openstack/nova/src/api-ref/source/index.rst
  URL: https://developer.openstack.org/api-ref/compute/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1830800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1824571] Re: l3agent can't create router if there are multiple external networks

2019-05-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/661509
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=0b3f5f429d2e495eb78d78d46186092ac735e0d5
Submitter: Zuul
Branch:master

commit 0b3f5f429d2e495eb78d78d46186092ac735e0d5
Author: Miguel Lavalle 
Date:   Sun May 26 19:15:25 2019 -0500

Support multiple external networks in L3 agent

Change [1] removed the deprecated option external_network_bridge. Per
commit message in change [2], "l3 agent can handle any networks by
setting the neutron parameter external_network_bridge and
gateway_external_network_id to empty". So the consequence of [1] was to
introduce a regression whereby multiple external networks are not
supported by the L3 agent anymore.

This change proposes a new simplified rule. If
gateway_external_network_id is defined, that is the network that the L3
agent will use. If not and multiple external networks exist, the L3
agent will handle any of them.

[1] https://review.opendev.org/#/c/567369/
[2] https://review.opendev.org/#/c/59359

Change-Id: Idd766bd069eda85ab6876a78b8b050ee5ab66cf6
Closes-Bug: #1824571


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1824571

Title:
  l3agent can't create router if there are multiple external networks

Status in neutron:
  Fix Released

Bug description:
  When there is more than one external network, the l3 agent is unable to
  create routers, failing with the following error:

  2019-04-12 17:33:18.844 103 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2019-04-12 17:33:18.844 103 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/l3/agent.py",
 line 701, in _process_routers_if_compatible
  2019-04-12 17:33:18.844 103 ERROR neutron.agent.l3.agent 
self._process_router_if_compatible(router)
  2019-04-12 17:33:18.844 103 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/l3/agent.py",
 line 548, in _process_router_if_compatible
  2019-04-12 17:33:18.844 103 ERROR neutron.agent.l3.agent target_ex_net_id 
= self._fetch_external_net_id()
  2019-04-12 17:33:18.844 103 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/agent/l3/agent.py",
 line 376, in _fetch_external_net_id
  2019-04-12 17:33:18.844 103 ERROR neutron.agent.l3.agent raise 
Exception(msg)
  2019-04-12 17:33:18.844 103 ERROR neutron.agent.l3.agent Exception: The 
'gateway_external_network_id' option must be configured for this agent as 
Neutron has more than one external network.

  It happens in the DVR scenario on both dvr and dvr_snat agents, and it
  started after upgrading from Rocky to Stein; before the upgrade it worked
  fine. gateway_external_network_id is not set in my config, because I want
  the l3 agent to be able to use multiple external networks.
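
  A minimal sketch of the simplified selection rule described in the commit
  message of the fix above (illustrative names, not the actual agent code):

    def external_network_for_router(configured_gateway_net_id,
                                    router_gateway_net_id):
        # If gateway_external_network_id is configured, the agent only
        # handles that network; otherwise it handles whichever external
        # network the router's gateway points at, even when several
        # external networks exist.
        if configured_gateway_net_id:
            return configured_gateway_net_id
        return router_gateway_net_id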

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1824571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830800] Re: Compute API in nova - server group "policy" field is a string rather than an object

2019-05-28 Thread Matt Riedemann
** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
   Status: New => Triaged

** Changed in: nova/stein
   Status: New => Triaged

** Changed in: nova/rocky
   Importance: Undecided => Medium

** Changed in: nova/stein
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830800

Title:
  Compute API in nova - server group "policy" field is a string rather
  than an object

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) rocky series:
  Triaged
Status in OpenStack Compute (nova) stein series:
  Triaged

Bug description:
  - [x] This doc is inaccurate in this way:

  The server group policy field added in v2.64 is a string but the API
  reference says the parameter is an object.

  
  https://github.com/openstack/nova/blob/37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586/api-ref/source/parameters.yaml#L5368

  
https://github.com/openstack/nova/blob/37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586/nova/api/openstack/compute/schemas/server_groups.py#L60

  ---
  Release: 19.1.0.dev441 on 2019-03-26 18:09:01
  SHA: 37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586
  Source: https://opendev.org/openstack/nova/src/api-ref/source/index.rst
  URL: https://developer.openstack.org/api-ref/compute/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1830800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830800] [NEW] Compute API in nova - server group "policy" field is a string rather than an object

2019-05-28 Thread Matt Riedemann
Public bug reported:

- [x] This doc is inaccurate in this way:

The server group policy field added in v2.64 is a string but the API
reference says the parameter is an object.

https://github.com/openstack/nova/blob/37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586/api-ref/source/parameters.yaml#L5368

https://github.com/openstack/nova/blob/37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586/nova/api/openstack/compute/schemas/server_groups.py#L60

---
Release: 19.1.0.dev441 on 2019-03-26 18:09:01
SHA: 37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586
Source: https://opendev.org/openstack/nova/src/api-ref/source/index.rst
URL: https://developer.openstack.org/api-ref/compute/

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830800

Title:
  Compute API in nova - server group "policy" field is a string rather
  than an object

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  - [x] This doc is inaccurate in this way:

  The server group policy field added in v2.64 is a string but the API
  reference says the parameter is an object.

  
  https://github.com/openstack/nova/blob/37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586/api-ref/source/parameters.yaml#L5368

  
https://github.com/openstack/nova/blob/37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586/nova/api/openstack/compute/schemas/server_groups.py#L60

  ---
  Release: 19.1.0.dev441 on 2019-03-26 18:09:01
  SHA: 37ccd7ec3a0f0b8f636d8dd82e6e929f462f6586
  Source: https://opendev.org/openstack/nova/src/api-ref/source/index.rst
  URL: https://developer.openstack.org/api-ref/compute/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1830800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1807697] Re: RFE: Token returns Project's tag properties

2019-05-28 Thread Colleen Murphy
Okay, in that case I'll close this bug for now. If you get stuck or have
questions, feel free to reopen this or contact us in #openstack-keystone
or on the openstack-discuss mailing list.

** Changed in: keystone
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1807697

Title:
  RFE: Token returns Project's tag properties

Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  
  From an operator perspective, there are many situations where you need to
  add an ACL for each project. Currently, keystone and OpenStack policies do
  not seem to have any fine-grained APIs for project-specific privilege
  control.

  Specifically, if we want to restrict some network resources per project,
  we have to assign neutron's rbac_policy, which makes it possible to map a
  specific project to network resources, rather than using oslo.policy.

  I found that if we could handle a project's extra properties in the policy
  code, developers could check those custom properties in their own ACL
  logic added via oslo.policy. There is already enough of the required code
  in the keystone codebase for returning a token with the project's extra
  properties; IMHO it can be added without major changes.

  Thanks in advance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1807697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830669] Re: Telemetry service cannot get the right "operator" of nova resources from notifications

2019-05-28 Thread Matt Riedemann
Sounds like this is solved with versioned notifications so is not a nova
bug.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830669

Title:
  Telemetry service cannot get the right "operator" of nova resources from
  notifications

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  A telemetry service (like ceilometer) can only get the instance user id
  from the notification, but in some situations we want to get the
  "operator" of the resource. One solution is to get the operator's user id
  from the context in the notification. Does someone have any idea about
  this problem?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1830669/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830708] Re: rc_fields.py file is missing in the master branch, which is only present in stable stein

2019-05-28 Thread Matt Riedemann
I don't understand what you're saying about this being a bug. The code
was removed in master, so why is that a bug? Do you have proprietary
code that depended on it? If so, you'll have to adjust your code on
master.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830708

Title:
  rc_fields.py file is missing in the master branch, which is only
  present in stable stein

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The rc_fields.py file is missing in the master branch; it is only
  present in stable/stein.

  The above-mentioned file is only present in the stable/stein branch, but
  it has not been added to the master branch.

  Stable Stein:
  https://github.com/openstack/nova/blob/stable/stein/nova/rc_fields.py

  Master:
  https://github.com/openstack/nova/blob/master/nova/rc_fields.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1830708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830763] [NEW] Debug neutron-tempest-plugin-dvr-multinode-scenario failures

2019-05-28 Thread Miguel Lavalle
Public bug reported:

This bug is meant to track the activities to debug the neutron-tempest-
plugin-dvr-multinode-scenario job. We start by trying to isolate failures in
this test case:
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22test_connectivity_through_2_routers%5C%22%20AND%20build_status:%5C%22FAILURE%5C%22%20AND%20build_branch:%5C%22master%5C%22%20AND%20build_name:%5C%22neutron-tempest-plugin-dvr-multinode-scenario%5C%22%20AND%20project:%5C%22openstack%2Fneutron%5C%22

** Affects: neutron
 Importance: High
 Assignee: Miguel Lavalle (minsel)
 Status: Confirmed

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Miguel Lavalle (minsel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1830763

Title:
  Debug neutron-tempest-plugin-dvr-multinode-scenario failures

Status in neutron:
  Confirmed

Bug description:
  This bug is meant to track the activities to debug the neutron-tempest-
  plugin-dvr-multinode-scenario job. We start by trying to isolate failures
  in this test case:
  http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22test_connectivity_through_2_routers%5C%22%20AND%20build_status:%5C%22FAILURE%5C%22%20AND%20build_branch:%5C%22master%5C%22%20AND%20build_name:%5C%22neutron-tempest-plugin-dvr-multinode-scenario%5C%22%20AND%20project:%5C%22openstack%2Fneutron%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1830763/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830295] Re: devstack py3 get_link_devices() KeyError: 'index'

2019-05-28 Thread iain MacDonnell
Reinstalling oslo.privsep seems to have "fixed" it on the 16.04 box too.
Still don't understand why - must have been some bad cache or something.
Will put this down to "gremlins" unless it resurfaces :/

** Changed in: neutron
   Status: New => Invalid

** Changed in: oslo.privsep
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1830295

Title:
  devstack py3 get_link_devices() KeyError: 'index'

Status in neutron:
  Invalid
Status in oslo.privsep:
  Invalid

Bug description:
  devstack master with py3. The openvswitch agent has suddenly stopped
  working, with no change in config or environment (other than rebuilding
  devstack). Stack trace below. For some reason (yet undetermined),
  privileged.get_link_devices() now seems to be returning byte strings
  instead of strings as the dict keys:

  >>> from neutron.privileged.agent.linux import ip_lib as privileged
  >>> privileged.get_link_devices(None)[0].keys() 
  dict_keys([b'index', b'family', b'__align', b'header', b'flags', b'ifi_type', 
b'event', b'change', b'attrs'])
  >>> 
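
  The KeyError: 'index' in the bug title follows directly from those
  byte-string keys; a quick illustration:

    # Keys come back as bytes (as in the interpreter session above), so a
    # plain str lookup misses:
    device = {b'index': 2, b'attrs': []}
    try:
        device['index']             # what the agent effectively does
    except KeyError as err:
        print('KeyError:', err)     # -> KeyError: 'index'
    print(device[b'index'])         # -> 2, only the bytes key matches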

  
  From agent startup:

  neutron-openvswitch-agent[42936]: ERROR neutron Traceback (most recent call 
last):
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/usr/local/bin/neutron-openvswitch-agent", line 10, in 
  neutron-openvswitch-agent[42936]: ERROR neutron sys.exit(main())
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/opt/stack/neutron/neutron/cmd/eventlet/plugins/ovs_neutron_agent.py", line 
20, in main
  neutron-openvswitch-agent[42936]: ERROR neutron agent_main.main()
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/main.py", 
line 47, in main
  neutron-openvswitch-agent[42936]: ERROR neutron mod.main()
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py",
 line 35, in main
  neutron-openvswitch-agent[42936]: ERROR neutron 
'neutron.plugins.ml2.drivers.openvswitch.agent.'
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/usr/local/lib/python3.6/dist-packages/os_ken/base/app_manager.py", line 375, 
in run_apps
  neutron-openvswitch-agent[42936]: ERROR neutron hub.joinall(services)  
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/usr/local/lib/python3.6/dist-packages/os_ken/lib/hub.py", line 102, in joinall
  neutron-openvswitch-agent[42936]: ERROR neutron t.wait()
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/usr/local/lib/python3.6/dist-packages/eventlet/greenthread.py", line 180, in 
wait
  neutron-openvswitch-agent[42936]: ERROR neutron return 
self._exit_event.wait()
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/usr/local/lib/python3.6/dist-packages/eventlet/event.py", line 132, in wait
  neutron-openvswitch-agent[42936]: ERROR neutron current.throw(*self._exc)
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/usr/local/lib/python3.6/dist-packages/eventlet/greenthread.py", line 219, in 
main
  neutron-openvswitch-agent[42936]: ERROR neutron result = function(*args, 
**kwargs)
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/usr/local/lib/python3.6/dist-packages/os_ken/lib/hub.py", line 64, in _launch
  neutron-openvswitch-agent[42936]: ERROR neutron raise e
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/usr/local/lib/python3.6/dist-packages/os_ken/lib/hub.py", line 59, in _launch
  neutron-openvswitch-agent[42936]: ERROR neutron return func(*args, 
**kwargs)
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_oskenapp.py",
 line 43, in agent_main_wrapper
  neutron-openvswitch-agent[42936]: ERROR neutron LOG.exception("Agent main 
thread died of an exception")
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  neutron-openvswitch-agent[42936]: ERROR neutron self.force_reraise() 
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/usr/local/lib/python3.6/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  neutron-openvswitch-agent[42936]: ERROR neutron six.reraise(self.type_, 
self.value, self.tb)
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/usr/local/lib/python3.6/dist-packages/six.py", line 693, in reraise
  neutron-openvswitch-agent[42936]: ERROR neutron raise value
  neutron-openvswitch-agent[42936]: ERROR neutron   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_oskenapp.py",
 line 40, in agent_main_wrapper
  neutron-openvswitch-agent[42936]: ERROR neutron 
ovs_agent.main(bridge_classes)
  

[Yahoo-eng-team] [Bug 1830747] Re: Error 500 trying to migrate an instance after wrong request_spec

2019-05-28 Thread Matt Riedemann
This might explain what's happening during a cold migration.

Conductor creates a legacy filter_properties dict here:

https://github.com/openstack/nova/blob/stable/rocky/nova/conductor/tasks/migrate.py#L172

If the spec has an instance_group it will call here:

https://github.com/openstack/nova/blob/stable/rocky/nova/objects/request_spec.py#L397

and _to_legacy_group_info sets these values in the filter_properties
dict:

return {'group_updated': True,
'group_hosts': set(self.instance_group.hosts),
'group_policies': set([self.instance_group.policy]),
'group_members': set(self.instance_group.members)}

Note there is no group_uuid.

Those filter_properties are passed to the prep_resize method on the dest
compute:

https://github.com/openstack/nova/blob/stable/rocky/nova/conductor/tasks/migrate.py#L304

zigo said he hit this:

https://github.com/openstack/nova/blob/stable/rocky/nova/compute/manager.py#L4272

(10:03:07 AM) zigo: 2019-05-28 15:02:35.534 30706 ERROR
nova.compute.manager [instance: ae6f8afe-9c64-4aaf-90e8-be8175fee8e4]
nova.exception.UnableToMigrateToSelf: Unable to migrate instance
(ae6f8afe-9c64-4aaf-90e8-be8175fee8e4) to current host
(clint1-compute-5.infomaniak.ch).

which will trigger a reschedule here:

https://github.com/openstack/nova/blob/stable/rocky/nova/compute/manager.py#L4348

The _reschedule_resize_or_reraise method will setup the parameters for
the resize_instance compute task RPC API (conductor) method:

https://github.com/openstack/nova/blob/stable/rocky/nova/compute/manager.py#L4378-L4379

Note that in Rocky the RequestSpec is not passed back to conductor on
the reschedule, only the filter_properties:

https://github.com/openstack/nova/blob/stable/rocky/nova/compute/manager.py#L1452

We only started passing the RequestSpec from compute to conductor on
reschedule starting in Stein: https://review.opendev.org/#/c/582417/

Without the request spec we get here in conductor:

https://github.com/openstack/nova/blob/stable/rocky/nova/conductor/manager.py#L307

Note that we pass in the filter_properties but no instance_group to
RequestSpec.from_components.

And because there is no instance_group but there are filter_properties,
we call _populate_group_info here:

https://github.com/openstack/nova/blob/stable/rocky/nova/objects/request_spec.py#L442

Which means we get into this block that sets the
RequestSpec.instance_group with no uuid:

https://github.com/openstack/nova/blob/stable/rocky/nova/objects/request_spec.py#L228

Then we eventually RPC cast off to prep_resize on the next host to try
for the cold migration and save the request_spec changes here:

https://github.com/openstack/nova/blob/stable/rocky/nova/conductor/manager.py#L356

Which is how later attempts to use that request spec to migrate the
instance blow up when loading it from the DB because
spec.instance_group.uuid is not set.
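
A minimal sketch (not nova code; the class and function names are
illustrative) of how rebuilding a spec from the legacy filter_properties
loses the group uuid in the way described above:

    legacy_filter_props = {
        'group_updated': True,
        'group_hosts': {'host-a'},
        'group_policies': {'anti-affinity'},
        'group_members': {'instance-1'},
        # note: no 'group_uuid' key -- that is the crux of the bug
    }

    class Group(object):
        def __init__(self, hosts, policy, members, uuid):
            self.hosts, self.policy = hosts, policy
            self.members, self.uuid = members, uuid

    def populate_group_info(filter_props):
        # Rebuild a group object from the legacy dict; since the dict
        # carries no uuid, the rebuilt object has uuid=None, and a spec
        # saved with it blows up later when .uuid is accessed.
        return Group(
            hosts=list(filter_props.get('group_hosts', [])),
            policy=next(iter(filter_props.get('group_policies', [])), None),
            members=list(filter_props.get('group_members', [])),
            uuid=filter_props.get('group_uuid'),  # always None here
        )

    group = populate_group_info(legacy_filter_props)
    assert group.uuid is None  # the saved request spec is incomplete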

** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830747

Title:
  Error 500 trying to migrate an instance after wrong request_spec

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  New
Status in OpenStack Compute (nova) pike series:
  New
Status in OpenStack Compute (nova) queens series:
  New
Status in OpenStack Compute (nova) rocky series:
  New
Status in OpenStack Compute (nova) stein series:
  New

Bug description:
  We started an instance last Wednesday, and the compute node where it ran
  failed (maybe a hardware issue?). Since the networking looked wrong (i.e.
  missing network interfaces), I tried to migrate the instance.

  According to Matt, it looked like the request_spec entry for the
  instance is wrong:

   my guess is something like this happened: 1. create server in a 
group, 2. cold migrate the server which fails on host A and does a reschedule 
to host B which maybe also fails (would be good to know if previous cold 
migration attempts failed with reschedules), 3. try to cold migrate again which 
fails with the instance_group.uuid thing
   the reschedule might be the key b/c like i said conductor has to 
rebuild a request spec and i think that's probably where we're doing a partial 
build of the request spec but missing the group uuid

  Here's what I had in my novaapidb:

  {
"nova_object.name": "RequestSpec",
"nova_object.version": "1.11",
"nova_object.data": {
  "ignore_hosts": null,
  "requested_destination": null,

[Yahoo-eng-team] [Bug 1830759] [NEW] Can't create volume using a volume as a source

2019-05-28 Thread Vadym Markov
Public bug reported:

Steps to reproduce:

1. Create a volume of a non-default type, named Vol1
2. Create a volume named Vol2 using Vol1 as the source volume

Expected result:
Volume is created

Actual result:
Volume isn't created. Error "Error: Unable to create volume."

CLI allows this operation

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1830759

Title:
  Can't create volume using a volume as a source

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:

  1. Create a volume of a non-default type, named Vol1
  2. Create a volume named Vol2 using Vol1 as the source volume

  Expected result:
  Volume is created

  Actual result:
  Volume isn't created. Error "Error: Unable to create volume."

  CLI allows this operation

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1830759/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1817253] Re: when volume creation fails, the message is Success; I think changing the success message to an info message is much better.

2019-05-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/638586
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=09da336e9cec7295a9269fd1446a24f4ca94a742
Submitter: Zuul
Branch:master

commit 09da336e9cec7295a9269fd1446a24f4ca94a742
Author: pengyuesheng 
Date:   Fri Feb 22 13:59:57 2019 +0800

Correct the prompt message when creating a volume in the image panel

Even when creating a volume fails, the message type was success.
It is better to change the message type to info as a volume creation
might fail when the message is shown.

Change-Id: I1606745a83b9db3e9cf8b5edebd6b1b3e4f8
Closes-Bug: #1817253


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1817253

Title:
  when volume creation fails, the message is Success; I think changing the
  success message to an info message is much better.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When volume creation fails, the message is still Success;
  I think changing the success message to an info message would be much
  better.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1817253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830747] [NEW] Error 500 trying to migrate an instance after wrong request_spec

2019-05-28 Thread Thomas Goirand
Public bug reported:

We started an instance last Wednesday, and the compute node where it ran
failed (maybe a hardware issue?). Since the networking looked wrong (i.e.
missing network interfaces), I tried to migrate the instance.

According to Matt, it looked like the request_spec entry for the
instance is wrong:

 my guess is something like this happened: 1. create server in a 
group, 2. cold migrate the server which fails on host A and does a reschedule 
to host B which maybe also fails (would be good to know if previous cold 
migration attempts failed with reschedules), 3. try to cold migrate again which 
fails with the instance_group.uuid thing
 the reschedule might be the key b/c like i said conductor has to 
rebuild a request spec and i think that's probably where we're doing a partial 
build of the request spec but missing the group uuid

Here's what I had in my novaapidb:

{
  "nova_object.name": "RequestSpec",
  "nova_object.version": "1.11",
  "nova_object.data": {
"ignore_hosts": null,
"requested_destination": null,
"instance_uuid": "2098b550-c749-460a-a44e-5932535993a9",
"num_instances": 1,
"image": {
  "nova_object.name": "ImageMeta",
  "nova_object.version": "1.8",
  "nova_object.data": {
"min_disk": 40,
"disk_format": "raw",
"min_ram": 0,
"container_format": "bare",
"properties": {
  "nova_object.name": "ImageMetaProps",
  "nova_object.version": "1.20",
  "nova_object.data": {},
  "nova_object.namespace": "nova"
}
  },
  "nova_object.namespace": "nova",
  "nova_object.changes": [
"properties",
"min_ram",
"container_format",
"disk_format",
"min_disk"
  ]
},
"availability_zone": "AZ3",
"flavor": {
  "nova_object.name": "Flavor",
  "nova_object.version": "1.2",
  "nova_object.data": {
"id": 28,
"name": "cpu2-ram6-disk40",
"is_public": true,
"rxtx_factor": 1,
"deleted_at": null,
"root_gb": 40,
"vcpus": 2,
"memory_mb": 6144,
"disabled": false,
"extra_specs": {},
"updated_at": null,
"flavorid": "e29f3ee9-3f07-46d2-b2e2-efa4950edc95",
"deleted": false,
"swap": 0,
"description": null,
"created_at": "2019-02-07T07:48:21Z",
"vcpu_weight": 0,
"ephemeral_gb": 0
  },
  "nova_object.namespace": "nova"
},
"force_hosts": null,
"retry": null,
"instance_group": {
  "nova_object.name": "InstanceGroup",
  "nova_object.version": "1.11",
  "nova_object.data": {
"members": null,
"hosts": null,
"policy": "anti-affinity"
  },
  "nova_object.namespace": "nova",
  "nova_object.changes": [
"policy",
"members",
"hosts"
  ]
},
"scheduler_hints": {
  "group": [
"295c99ea-2db6-469a-877f-454a3903a8d8"
  ]
},
"limits": {
  "nova_object.name": "SchedulerLimits",
  "nova_object.version": "1.0",
  "nova_object.data": {
"disk_gb": null,
"numa_topology": null,
"memory_mb": null,
"vcpu": null
  },
  "nova_object.namespace": "nova",
  "nova_object.changes": [
"disk_gb",
"vcpu",
"memory_mb",
"numa_topology"
  ]
},
"force_nodes": null,
"project_id": "1bf4dbb3d2c746658f462bf8e59ec6be",
"user_id": "255cca4584c24b16a684e3e8322b436b",
"numa_topology": null,
"is_bfv": false,
"pci_requests": {
  "nova_object.name": "InstancePCIRequests",
  "nova_object.version": "1.1",
  "nova_object.data": {
"instance_uuid": "2098b550-c749-460a-a44e-5932535993a9",
"requests": []
  },
  "nova_object.namespace": "nova"
}
  },
  "nova_object.namespace": "nova",
  "nova_object.changes": [
"ignore_hosts",
"requested_destination",
"num_instances",
"image",
"availability_zone",
"instance_uuid",
"flavor",
"scheduler_hints",
"pci_requests",
"instance_group",
"limits",
"project_id",
"user_id",
"numa_topology",
"is_bfv",
"retry"
  ]
}

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830747

Title:
  Error 500 trying to migrate an instance after wrong request_spec

Status in OpenStack Compute (nova):
  New

Bug description:
  We started an instance last Wednesday, and the compute node where it ran
  failed (maybe a hardware issue?). Since the networking looked wrong (i.e.
  missing network interfaces), I tried to migrate the instance.

  According to Matt, it looked like the request_spec entry for the
  instance is wrong:

   my guess is something like this happened: 1. create server in a 
group, 2. cold 

[Yahoo-eng-team] [Bug 1830739] [NEW] user-data in CloudSigma VM's metadata field "cloudinit-user-data" fails to configure eth1

2019-05-28 Thread Aki Ketolainen
Public bug reported:

1. Tell us your cloud provider

   CloudSigma

2. Any appropriate cloud-init configuration you can provide us

   network: {version: 1, config: {type: physical, name: eth1, subnets:
{type: static, address: 10.1.1.101/24}}}

https://cloudinit.readthedocs.io/en/latest/topics/datasources/cloudsigma.html
says that "By default cloud-config format is expected there and the
#cloud-config header could be omitted."
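
For comparison, and as an assumption about what the version-1 network-config
schema expects (not a confirmed root cause): the "config" key in a version-1
network config is normally a list of entries rather than a single mapping.
Written here as the equivalent Python structure:

    # Illustrative version-1 network config where "config" is a list of
    # entries; the snippet in the report above nests a single mapping
    # directly under "config" instead.
    network_config_v1 = {
        "version": 1,
        "config": [
            {
                "type": "physical",
                "name": "eth1",
                "subnets": [
                    {"type": "static", "address": "10.1.1.101/24"},
                ],
            },
        ],
    }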

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "cloud-init.tar.gz"
   
https://bugs.launchpad.net/bugs/1830739/+attachment/5267155/+files/cloud-init.tar.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1830739

Title:
  user-data in CloudSigma VM's metadata field "cloudinit-user-data"
  fails to configure eth1

Status in cloud-init:
  New

Bug description:
  1. Tell us your cloud provider

     CloudSigma

  2. Any appropriate cloud-init configuration you can provide us

     network: {version: 1, config: {type: physical, name: eth1, subnets:
  {type: static, address: 10.1.1.101/24}}}

  https://cloudinit.readthedocs.io/en/latest/topics/datasources/cloudsigma.html
  says that "By default cloud-config format is expected there and the
  #cloud-config header could be omitted."

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1830739/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830708] [NEW] rc_fields.py file is missing in the master branch, which is only present in stable stein

2019-05-28 Thread Punith Kenchappa
Public bug reported:

The rc_fields.py file is missing in the master branch; it is only
present in stable/stein.

The above-mentioned file is only present in the stable/stein branch, but it
has not been added to the master branch.

Stable Stein:
https://github.com/openstack/nova/blob/stable/stein/nova/rc_fields.py

Master: https://github.com/openstack/nova/blob/master/nova/rc_fields.py

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830708

Title:
  rc_fields.py file is missing in the master branch, which is only
  present in stable stein

Status in OpenStack Compute (nova):
  New

Bug description:
  The rc_fields.py file is missing in the master branch; it is only
  present in stable/stein.

  The above-mentioned file is only present in the stable/stein branch, but
  it has not been added to the master branch.

  Stable Stein:
  https://github.com/openstack/nova/blob/stable/stein/nova/rc_fields.py

  Master:
  https://github.com/openstack/nova/blob/master/nova/rc_fields.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1830708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1826120] Re: When the network and qos policy are not selected, creating an RBAC policy shows an error.

2019-05-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/655341
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=4856a7e52006b1de69f02ee7c9c10084c6b23ca8
Submitter: Zuul
Branch:master

commit 4856a7e52006b1de69f02ee7c9c10084c6b23ca8
Author: pengyuesheng 
Date:   Wed Apr 24 15:14:07 2019 +0800

Check if network_id and qos_policy_id is empty

On create RBAC policy form,
if network_id and qos_policy_id is empty,
service will report an error.
This patch check if network_id and qos_policy_id is empty,
before submit form

Change-Id: I9f44900a5ad2dd3be3266b6757ae81c6c2f3e202
Closes-Bug: #1826120


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1826120

Title:
  When the network and qos policy are not selected, creating an RBAC policy
  shows an error.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When the network and qos policy are not selected, creating an RBAC policy
  shows an error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1826120/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1826495] Re: Deleting the description information when editing a volume group shows an error

2019-05-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/655826
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=cca464fa95bca5164d3932182862c2ec569a37a7
Submitter: Zuul
Branch:master

commit cca464fa95bca5164d3932182862c2ec569a37a7
Author: pengyuesheng 
Date:   Fri Apr 26 11:14:24 2019 +0800

Allow deletion of description information when editing a volume group

Change-Id: Id78b39d7eae8b00c1ed8febccddfd6b4717e8ea1
Closes-Bug: #1826495


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1826495

Title:
  Deleting the description information when editing a volume group shows an
  error

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When the description information is deleted while editing a volume group,
  an error is shown.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1826495/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830679] [NEW] Security groups RBAC cause a major performance degradation

2019-05-28 Thread Adit Sarfaty
Public bug reported:

On stable/Stein & Train, with a setup of about 6000 security groups belonging
to different tenants:
Using the admin user, getting all security groups with GET
/v2.0/security-groups HTTP/1.1 takes about 70 seconds.
Using the credentials of one of the tenants, who has only 1 security group,
takes about 800 seconds.

Looking at the mysql DB logs reveals lots of RBAC-related queries during those
800 seconds.
Reverting the RBAC patch https://review.opendev.org/#/c/635311/ , which is a
partial fix of https://bugs.launchpad.net/neutron/+bug/1817119 , solved the
issue completely.
Now it takes less than a second to get the security groups of a tenant.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1830679

Title:
  Security groups RBAC cause a major performance degradation

Status in neutron:
  New

Bug description:
  On stable/Stein & Train, with a setup of about 6000 security groups
  belonging to different tenants:
  Using the admin user, getting all security groups with GET
  /v2.0/security-groups HTTP/1.1 takes about 70 seconds.
  Using the credentials of one of the tenants, who has only 1 security group,
  takes about 800 seconds.

  Looking at the mysql DB logs reveals lots of RBAC-related queries during
  those 800 seconds.
  Reverting the RBAC patch https://review.opendev.org/#/c/635311/ , which is
  a partial fix of https://bugs.launchpad.net/neutron/+bug/1817119 , solved
  the issue completely.
  Now it takes less than a second to get the security groups of a tenant.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1830679/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp