[Yahoo-eng-team] [Bug 1665548] [NEW] nova fails to evacuate due to wrong onSharedStorage parameter

2017-02-16 Thread int32bit
Public bug reported:

The Nova API removed the onSharedStorage parameter as of microversion 2.14,
but many of our users still call the API with versions below 2.14. We found
that evacuating a server via novaclient then fails 100% of the time; the error
log is as follows:

2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher 
[req-b34b79d6-20ed-4488-b8d5-98f4007eb12e 681041a7364d4852930021d009c8dc2b 
bafaf53fac4346b9bcd6a77cf964b7af - - -] Exception during message handling: 
Invalid state of instance files on shared storage
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 138, 
in _dispatch_and_reply
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, 
in _dispatch
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 127, 
in _do_dispatch
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 150, in 
inner
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher return 
func(*args, **kwargs)
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 110, in wrapped
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher payload)
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher 
self.force_reraise()
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 89, in wrapped
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 359, in 
decorated_function
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance=instance)
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher 
self.force_reraise()
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 328, in 
decorated_function
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 409, in 
decorated_function
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 387, in 
decorated_function
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher 
self.force_reraise()
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2017-02-16 20:22:48.106 6144 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 375, in 
decorated_functi
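
To illustrate the microversion split behind this failure, here is a minimal
client-side sketch (endpoint, token and defaults are placeholders, and this is
not the novaclient code itself): the evacuate action body carries
onSharedStorage only below microversion 2.14.

    import requests

    COMPUTE_URL = "http://controller:8774/v2.1"   # placeholder endpoint
    TOKEN = "<keystone token>"                    # placeholder token


    def evacuate(server_id, host, microversion="2.13", on_shared_storage=False):
        """POST an evacuate action, shaping the body for the microversion."""
        headers = {
            "X-Auth-Token": TOKEN,
            "X-OpenStack-Nova-API-Version": microversion,
        }
        body = {"evacuate": {"host": host}}
        if tuple(int(p) for p in microversion.split(".")) < (2, 14):
            # Below 2.14 the caller must say whether the instance sits on
            # shared storage; from 2.14 on the parameter is rejected.
            body["evacuate"]["onSharedStorage"] = on_shared_storage
        return requests.post("%s/servers/%s/action" % (COMPUTE_URL, server_id),
                             json=body, headers=headers)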

[Yahoo-eng-team] [Bug 1665539] [NEW] Jasmine unit tests view broken by profiler

2017-02-16 Thread Richard Jones
Public bug reported:

The new profiler dashboard defines an angular constant in the
_scripts.html page; this is not reflected in the jasmine.html page,
which causes injection to fail during testing when run through the
/jasmine URL.

** Affects: horizon
 Importance: High
 Status: Triaged


** Tags: ocata-backport-potential

** Changed in: horizon
   Status: New => Triaged

** Changed in: horizon
   Importance: Undecided => High

** Changed in: horizon
Milestone: None => pike-1

** Tags added: ocata-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1665539

Title:
  Jasmine unit tests view broken by profiler

Status in OpenStack Dashboard (Horizon):
  Triaged

Bug description:
  The new profiler dashboard defines an angular constant in the
  _scripts.html page; this is not reflected in the jasmine.html page,
  which causes injection to fail during testing when run through the
  /jasmine URL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1665539/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658877] Re: live migration failed with XenServer as hypervisor

2017-02-16 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => Fix Released

** Changed in: nova/ocata
   Importance: Undecided => High

** Changed in: nova/ocata
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Tags removed: ocata-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1658877

Title:
  live migration failed with XenServer as hypervisor

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Fix Released

Bug description:
  I used devstack to deploy a multi-compute-node test environment with XenServer.
  Then I executed the command "nova live-migration --block-migrate admin-vm5 ComputeNode3"
  and got the errors below:

  ===
  2017-01-23 07:18:11.243 ERROR nova.virt.xenapi.vmops 
[req-6e4f8d0b-ea2f-4a69-bcd8-98d5f94e8ab0 admin admin] Migrate Send failed
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops Traceback (most recent 
call last):
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops   File 
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 2396, in live_migrate
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops "VM.migrate_send", 
vm_ref, migrate_data)
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops   File 
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 2361, in 
_call_live_migrate_command
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops vdi_map, vif_map, 
options)
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops   File 
"/usr/local/lib/python2.7/dist-packages/os_xenapi/client/session.py", line 200, 
in call_xenapi
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops return 
session.xenapi_request(method, args)
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops   File 
"/usr/local/lib/python2.7/dist-packages/os_xenapi/client/XenAPI.py", line 130, 
in xenapi_request
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops result = 
_parse_result(getattr(self, methodname)(*full_params))
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops   File 
"/usr/local/lib/python2.7/dist-packages/os_xenapi/client/XenAPI.py", line 212, 
in _parse_result
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops raise 
Failure(result['ErrorDescription'])
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops Failure: 
['VIF_NOT_IN_MAP', 'OpaqueRef:b0636c87-539f-59f6-8fef-8c15c6d58665']
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops

  
  
  2017-01-23 07:18:11.355 ERROR nova.compute.manager 
[req-6e4f8d0b-ea2f-4a69-bcd8-98d5f94e8ab0 admin admin] [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917] Live migration failed.
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917] Traceback (most recent call last):
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917]   File 
"/opt/stack/nova/nova/compute/manager.py", line 5368, in _do_live_migration
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917] block_migration, migrate_data)
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917]   File 
"/opt/stack/nova/nova/virt/xenapi/driver.py", line 520, in live_migration
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917] recover_method, block_migration, 
migrate_data)
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917]   File 
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 2414, in live_migrate
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917] recover_method(context, instance, 
destination_hostname)
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917] self.force_reraise()
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917] six.reraise(self.type_, self.value, 
self.tb)
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917]   File 
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 2400, in live_migrate
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917]   

[Yahoo-eng-team] [Bug 1662626] Re: live-migrate left in migrating as domain not found

2017-02-16 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Tags removed: ocata-rc-potential
** Tags added: ocata-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1662626

Title:
  live-migrate left in migrating as domain not found

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  New

Bug description:
  A live-migration stress test was working fine when suddenly a VM
  stopped migrating. It failed with this error:

  ERROR nova.virt.libvirt.driver [req-df91ac40-820f-4aa9-945b-
  b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c
  669472610b194bfa9bf03f50f86d725a - - -] [instance: 62034d78-3144-4efd-
  9c2c-8a792aed3d6b] Error from libvirt during undefine. Code=42
  Error=Domain not found: no domain with matching uuid '62034d78-3144
  -4efd-9c2c-8a792aed3d6b' (instance-0431)

  The full stack trace:

  2017-02-05 02:33:41.787 19770 INFO nova.virt.libvirt.driver 
[req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 
669472610b194bfa9bf03f50f86d725a - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] Migration running for 240 secs, memory 9% 
remaining; (bytes processed=15198240264, remaining=1680875520, 
total=17314955264)
  2017-02-05 02:33:45.795 19770 INFO nova.compute.manager 
[req-abff9c69-5f82-4ed6-af8a-fd1dc81a72a6 - - - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] VM Paused (Lifecycle Event)
  2017-02-05 02:33:45.870 19770 INFO nova.compute.manager 
[req-abff9c69-5f82-4ed6-af8a-fd1dc81a72a6 - - - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] During sync_power_state the instance has 
a pending task (migrating). Skip.
  2017-02-05 02:33:45.883 19770 INFO nova.virt.libvirt.driver 
[req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 
669472610b194bfa9bf03f50f86d725a - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] Migration operation has completed
  2017-02-05 02:33:45.884 19770 INFO nova.compute.manager 
[req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 
669472610b194bfa9bf03f50f86d725a - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] _post_live_migration() is started..
  2017-02-05 02:33:46.156 19770 INFO os_vif 
[req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 
669472610b194bfa9bf03f50f86d725a - - -] Successfully unplugged vif 
VIFBridge(active=True,address=fa:16:3e:a2:90:55,bridge_name='brq476ab6ba-b3',has_traffic_filtering=True,id=98d476b3-0ead-4adb-ad54-1dff63edcd65,network=Network(476ab6ba-b32e-409e-9711-9412e8475ea0),plugin='linux_bridge',port_profile=,preserve_on_delete=True,vif_name='tap98d476b3-0e')
  2017-02-05 02:33:46.189 19770 INFO nova.virt.libvirt.driver 
[req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 
669472610b194bfa9bf03f50f86d725a - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] Deleting instance files 
/var/lib/nova/instances/62034d78-3144-4efd-9c2c-8a792aed3d6b_del
  2017-02-05 02:33:46.195 19770 INFO nova.virt.libvirt.driver 
[req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 
669472610b194bfa9bf03f50f86d725a - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] Deletion of 
/var/lib/nova/instances/62034d78-3144-4efd-9c2c-8a792aed3d6b_del complete

  2017-02-05 02:33:46.334 19770 ERROR nova.virt.libvirt.driver [req-
  df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c
  669472610b194bfa9bf03f50f86d725a - - -] [instance: 62034d78-3144-4efd-
  9c2c-8a792aed3d6b] Error from libvirt during undefine. Code=42
  Error=Domain not found: no domain with matching uuid '62034d78-3144
  -4efd-9c2c-8a792aed3d6b' (instance-0431)

  2017-02-05 02:33:46.363 19770 WARNING nova.virt.libvirt.driver 
[req-df91ac40-820f-4aa9-945b-b2fce73461f8 29c0371e35f84fdaa033f2dbfe2c042c 
669472610b194bfa9bf03f50f86d725a - - -] [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] Error monitoring migration: Domain not 
found: no domain with matching uuid '62034d78-3144-4efd-9c2c-8a792aed3d6b' 
(instance-0431)
  2017-02-05 02:33:46.363 19770 ERROR nova.virt.libvirt.driver [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] Traceback (most recent call last):
  2017-02-05 02:33:46.363 19770 ERROR nova.virt.libvirt.driver [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b]   File 
"/openstack/venvs/nova-14.0.4/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 6345, in _live_migration
  2017-02-05 02:33:46.363 19770 ERROR nova.virt.libvirt.driver [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b] finish_event, disk_paths)
  2017-02-05 02:33:46.363 19770 ERROR nova.virt.libvirt.driver [instance: 
62034d78-3144-4efd-9c2c-8a792aed3d6b]   File 
"/openstack/venvs/nova-14.0.4/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 6255, in _live_migration_monitor
  2017-02-05 0
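
The failure above suggests the kind of tolerance needed here: treating
libvirt's "domain not found" (error code 42, VIR_ERR_NO_DOMAIN) as non-fatal
when the domain was already removed by the completed migration. A minimal
sketch of that pattern, not the actual nova fix:

    import libvirt


    def undefine_if_present(conn, instance_name):
        """Undefine a domain, tolerating it already being gone."""
        try:
            dom = conn.lookupByName(instance_name)
            dom.undefine()
        except libvirt.libvirtError as exc:
            # VIR_ERR_NO_DOMAIN (code 42) means the domain vanished, e.g.
            # already cleaned up once the live migration finished.
            if exc.get_error_code() != libvirt.VIR_ERR_NO_DOMAIN:
                raise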

[Yahoo-eng-team] [Bug 1567181] Re: Request release for networking-fujitsu for stable/mitaka

2017-02-16 Thread Yushiro FURUKAWA
** Changed in: networking-fujitsu
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567181

Title:
  Request release for networking-fujitsu for stable/mitaka

Status in networking-fujitsu:
  Won't Fix
Status in neutron:
  Won't Fix

Bug description:
  Please release stable/mitaka branch of networking-fujitsu.

  tag: 2.0.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-fujitsu/+bug/1567181/+subscriptions



[Yahoo-eng-team] [Bug 1608980] Re: Remove MANIFEST.in as it is not explicitly needed by PBR

2017-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/385881
Committed: https://git.openstack.org/cgit/openstack/glance/commit/?id=155f6c9456d4a0c878bc1209cf91fadc3899a211
Submitter: Jenkins
Branch: master

commit 155f6c9456d4a0c878bc1209cf91fadc3899a211
Author: pawnesh.kumar 
Date:   Thu Oct 13 15:27:28 2016 +0530

Drop MANIFEST.in - it's not needed by pbr

Glance already uses PBR:

    setuptools.setup(
        setup_requires=['pbr>=1.8'],
        pbr=True)

This patch removes the `MANIFEST.in` file, as pbr generates a
sensible manifest from git files and some standard files,
which removes the need for an explicit `MANIFEST.in` file.

Change-Id: Ib2ec595e5a56279ca985abd1ba06c232a5daeaeb
Closes-Bug: #1608980


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608980

Title:
  Remove MANIFEST.in as it is not explicitly needed by PBR

Status in anvil:
  Invalid
Status in craton:
  Fix Released
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in gce-api:
  Fix Released
Status in Glance:
  Fix Released
Status in Karbor:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Kosmos:
  Fix Released
Status in Magnum:
  Fix Released
Status in masakari:
  Fix Released
Status in networking-midonet:
  New
Status in networking-odl:
  New
Status in neutron:
  Fix Released
Status in Neutron LBaaS Dashboard:
  Confirmed
Status in octavia:
  Fix Released
Status in os-vif:
  Fix Released
Status in python-searchlightclient:
  In Progress
Status in OpenStack Search (Searchlight):
  Fix Released
Status in Solum:
  Fix Released
Status in Swift Authentication:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in Tricircle:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in watcher:
  Fix Released
Status in Zun:
  Fix Released

Bug description:
  PBR does not explicitly require MANIFEST.in, so it can be removed.

  
  Snippet from: http://docs.openstack.org/infra/manual/developers.html

  Manifest

  Just like AUTHORS and ChangeLog, why keep a list of files you wish to
  include when you can find many of these in git. MANIFEST.in generation
  ensures almost all files stored in git, with the exception of
  .gitignore, .gitreview and .pyc files, are automatically included in
  your distribution. In addition, the generated AUTHORS and ChangeLog
  files are also included. In many cases, this removes the need for an
  explicit ‘MANIFEST.in’ file

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1608980/+subscriptions



[Yahoo-eng-team] [Bug 1617282] Re: functional gate failed with git clone timeout on fetching ovs from github

2017-02-16 Thread Ihar Hrachyshka
The ovs fix we need for functional job stability:
https://mail.openvswitch.org/pipermail/ovs-git/2016-March/017804.html

** Also affects: openvswitch (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1617282

Title:
  functional gate failed with git clone timeout on fetching ovs from
  github

Status in neutron:
  Confirmed
Status in openvswitch package in Ubuntu:
  New

Bug description:
  http://logs.openstack.org/68/351368/23/check/gate-neutron-dsvm-
  functional/0d68031/console.html

  2016-08-25 10:06:34.915685 | fatal: unable to access 
'https://github.com/openvswitch/ovs.git/': Failed to connect to github.com port 
443: Connection timed out
  2016-08-25 10:06:34.920456 | + functions-common:git_timed:603   :   
[[ 128 -ne 124 ]]
  2016-08-25 10:06:34.921769 | + functions-common:git_timed:604   :   
die 604 'git call failed: [git clone' https://github.com/openvswitch/ovs.git 
'/opt/stack/new/ovs]'
  2016-08-25 10:06:34.922982 | + functions-common:die:186 :   
local exitcode=0
  2016-08-25 10:06:34.924373 | + functions-common:die:187 :   
set +o xtrace
  2016-08-25 10:06:34.924404 | [Call Trace]
  2016-08-25 10:06:34.924430 | 
/opt/stack/new/neutron/neutron/tests/contrib/gate_hook.sh:53:compile_ovs
  2016-08-25 10:06:34.924447 | 
/opt/stack/new/neutron/devstack/lib/ovs:57:git_timed
  2016-08-25 10:06:34.924463 | /opt/stack/new/devstack/functions-common:604:die
  2016-08-25 10:06:34.926689 | [ERROR] 
/opt/stack/new/devstack/functions-common:604 git call failed: [git clone 
https://github.com/openvswitch/ovs.git /opt/stack/new/ovs]

  I guess we should stop pulling OVS from GitHub. Instead, we could use the
  Xenial platform, which already provides ovs == 2.5 from .deb packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1617282/+subscriptions



[Yahoo-eng-team] [Bug 1665487] Re: Live migration tests sometimes timeout waiting for instance to be ACTIVE: "KVM: entry failed, hardware error 0x0"

2017-02-16 Thread Matt Riedemann
The fix is here: https://review.openstack.org/#/c/435154/

** Also affects: openstack-gate
   Importance: Undecided
   Status: New

** Changed in: openstack-gate
   Status: New => In Progress

** Changed in: openstack-gate
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1665487

Title:
  Live migration tests sometimes timeout waiting for instance to be
  ACTIVE: "KVM: entry failed, hardware error 0x0"

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack-Gate:
  In Progress

Bug description:
  I've seen this a few times, but start tracking from here:

  http://logs.openstack.org/50/435050/1/check/gate-tempest-dsvm-
  multinode-live-migration-ubuntu-
  xenial/49adcde/console.html#_2017-02-16_22_25_43_645525

  2017-02-16 22:25:43.645525 | 2017-02-16 22:25:43.645 | 
tempest.api.compute.admin.test_live_migration.LiveBlockMigrationTestJSON.test_live_block_migration[id-1dce86b8-eb04-4c03-a9d8-9c1dc3ee0c7b]
  2017-02-16 22:25:43.646587 | 2017-02-16 22:25:43.646 | 
---
  2017-02-16 22:25:43.647903 | 2017-02-16 22:25:43.647 | 
  2017-02-16 22:25:43.649473 | 2017-02-16 22:25:43.649 | Captured traceback:
  2017-02-16 22:25:43.651373 | 2017-02-16 22:25:43.650 | ~~~
  2017-02-16 22:25:43.656802 | 2017-02-16 22:25:43.656 | Traceback (most 
recent call last):
  2017-02-16 22:25:43.659061 | 2017-02-16 22:25:43.658 |   File 
"tempest/api/compute/admin/test_live_migration.py", line 122, in 
test_live_block_migration
  2017-02-16 22:25:43.661006 | 2017-02-16 22:25:43.660 | 
self._test_live_migration()
  2017-02-16 22:25:43.671547 | 2017-02-16 22:25:43.671 |   File 
"tempest/api/compute/admin/test_live_migration.py", line 97, in 
_test_live_migration
  2017-02-16 22:25:43.672889 | 2017-02-16 22:25:43.672 | 
volume_backed=volume_backed)['id']
  2017-02-16 22:25:43.674405 | 2017-02-16 22:25:43.673 |   File 
"tempest/api/compute/base.py", line 232, in create_test_server
  2017-02-16 22:25:43.676559 | 2017-02-16 22:25:43.676 | **kwargs)
  2017-02-16 22:25:43.681637 | 2017-02-16 22:25:43.680 |   File 
"tempest/common/compute.py", line 182, in create_test_server
  2017-02-16 22:25:43.683300 | 2017-02-16 22:25:43.682 | server['id'])
  2017-02-16 22:25:43.684766 | 2017-02-16 22:25:43.684 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  2017-02-16 22:25:43.686210 | 2017-02-16 22:25:43.685 | 
self.force_reraise()
  2017-02-16 22:25:43.687896 | 2017-02-16 22:25:43.687 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  2017-02-16 22:25:43.689493 | 2017-02-16 22:25:43.689 | 
six.reraise(self.type_, self.value, self.tb)
  2017-02-16 22:25:43.691459 | 2017-02-16 22:25:43.691 |   File 
"tempest/common/compute.py", line 164, in create_test_server
  2017-02-16 22:25:43.693741 | 2017-02-16 22:25:43.692 | 
clients.servers_client, server['id'], wait_until)
  2017-02-16 22:25:43.695699 | 2017-02-16 22:25:43.695 |   File 
"tempest/common/waiters.py", line 96, in wait_for_server_status
  2017-02-16 22:25:43.697692 | 2017-02-16 22:25:43.697 | raise 
lib_exc.TimeoutException(message)
  2017-02-16 22:25:43.699378 | 2017-02-16 22:25:43.698 | 
tempest.lib.exceptions.TimeoutException: Request timed out
  2017-02-16 22:25:43.701086 | 2017-02-16 22:25:43.700 | Details: 
(LiveBlockMigrationTestJSON:test_live_block_migration) Server 
0ee93807-d206-4ddf-878c-efd1dd2eab3c failed to reach ACTIVE status and task 
state "None" within the required time (196 s). Current status: BUILD. Current 
task state: spawning.

  I was looking in the n-cpu logs in the subnode for 0ee93807-d206-4ddf-
  878c-efd1dd2eab3c and found the last thing we see during the server
  create is here:

  http://logs.openstack.org/50/435050/1/check/gate-tempest-dsvm-
  multinode-live-migration-ubuntu-
  xenial/49adcde/logs/subnode-2/screen-n-cpu.txt.gz#_2017-02-16_22_22_11_143

  2017-02-16 22:22:11.143 14954 DEBUG nova.virt.libvirt.driver [req-
  025bf4d2-e5ba-4236-a334-f9eb98105ada tempest-
  LiveBlockMigrationTestJSON-793129345 tempest-
  LiveBlockMigrationTestJSON-793129345] [instance: 0ee93807-d206-4ddf-
  878c-efd1dd2eab3c] Instance is running spawn
  /opt/stack/new/nova/nova/virt/libvirt/driver.py:2689

  That's after we've created the domain:

  
https://github.com/openstack/nova/blob/15.0.0.0rc1/nova/virt/libvirt/driver.py#L2689

  After that the driver is waiting for the power_state to go to RUNNING.

  I see th

[Yahoo-eng-team] [Bug 1665487] [NEW] Live migration tests sometimes timeout waiting for instance to be ACTIVE: "KVM: entry failed, hardware error 0x0"

2017-02-16 Thread Matt Riedemann
Public bug reported:

I've seen this a few times, but start tracking from here:

http://logs.openstack.org/50/435050/1/check/gate-tempest-dsvm-multinode-
live-migration-ubuntu-
xenial/49adcde/console.html#_2017-02-16_22_25_43_645525

2017-02-16 22:25:43.645525 | 2017-02-16 22:25:43.645 | 
tempest.api.compute.admin.test_live_migration.LiveBlockMigrationTestJSON.test_live_block_migration[id-1dce86b8-eb04-4c03-a9d8-9c1dc3ee0c7b]
2017-02-16 22:25:43.646587 | 2017-02-16 22:25:43.646 | 
---
2017-02-16 22:25:43.647903 | 2017-02-16 22:25:43.647 | 
2017-02-16 22:25:43.649473 | 2017-02-16 22:25:43.649 | Captured traceback:
2017-02-16 22:25:43.651373 | 2017-02-16 22:25:43.650 | ~~~
2017-02-16 22:25:43.656802 | 2017-02-16 22:25:43.656 | Traceback (most 
recent call last):
2017-02-16 22:25:43.659061 | 2017-02-16 22:25:43.658 |   File 
"tempest/api/compute/admin/test_live_migration.py", line 122, in 
test_live_block_migration
2017-02-16 22:25:43.661006 | 2017-02-16 22:25:43.660 | 
self._test_live_migration()
2017-02-16 22:25:43.671547 | 2017-02-16 22:25:43.671 |   File 
"tempest/api/compute/admin/test_live_migration.py", line 97, in 
_test_live_migration
2017-02-16 22:25:43.672889 | 2017-02-16 22:25:43.672 | 
volume_backed=volume_backed)['id']
2017-02-16 22:25:43.674405 | 2017-02-16 22:25:43.673 |   File 
"tempest/api/compute/base.py", line 232, in create_test_server
2017-02-16 22:25:43.676559 | 2017-02-16 22:25:43.676 | **kwargs)
2017-02-16 22:25:43.681637 | 2017-02-16 22:25:43.680 |   File 
"tempest/common/compute.py", line 182, in create_test_server
2017-02-16 22:25:43.683300 | 2017-02-16 22:25:43.682 | server['id'])
2017-02-16 22:25:43.684766 | 2017-02-16 22:25:43.684 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
2017-02-16 22:25:43.686210 | 2017-02-16 22:25:43.685 | 
self.force_reraise()
2017-02-16 22:25:43.687896 | 2017-02-16 22:25:43.687 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
2017-02-16 22:25:43.689493 | 2017-02-16 22:25:43.689 | 
six.reraise(self.type_, self.value, self.tb)
2017-02-16 22:25:43.691459 | 2017-02-16 22:25:43.691 |   File 
"tempest/common/compute.py", line 164, in create_test_server
2017-02-16 22:25:43.693741 | 2017-02-16 22:25:43.692 | 
clients.servers_client, server['id'], wait_until)
2017-02-16 22:25:43.695699 | 2017-02-16 22:25:43.695 |   File 
"tempest/common/waiters.py", line 96, in wait_for_server_status
2017-02-16 22:25:43.697692 | 2017-02-16 22:25:43.697 | raise 
lib_exc.TimeoutException(message)
2017-02-16 22:25:43.699378 | 2017-02-16 22:25:43.698 | 
tempest.lib.exceptions.TimeoutException: Request timed out
2017-02-16 22:25:43.701086 | 2017-02-16 22:25:43.700 | Details: 
(LiveBlockMigrationTestJSON:test_live_block_migration) Server 
0ee93807-d206-4ddf-878c-efd1dd2eab3c failed to reach ACTIVE status and task 
state "None" within the required time (196 s). Current status: BUILD. Current 
task state: spawning.

I was looking in the n-cpu logs in the subnode for 0ee93807-d206-4ddf-
878c-efd1dd2eab3c and found the last thing we see during the server
create is here:

http://logs.openstack.org/50/435050/1/check/gate-tempest-dsvm-multinode-
live-migration-ubuntu-
xenial/49adcde/logs/subnode-2/screen-n-cpu.txt.gz#_2017-02-16_22_22_11_143

2017-02-16 22:22:11.143 14954 DEBUG nova.virt.libvirt.driver [req-
025bf4d2-e5ba-4236-a334-f9eb98105ada tempest-
LiveBlockMigrationTestJSON-793129345 tempest-
LiveBlockMigrationTestJSON-793129345] [instance: 0ee93807-d206-4ddf-
878c-efd1dd2eab3c] Instance is running spawn
/opt/stack/new/nova/nova/virt/libvirt/driver.py:2689

That's after we've created the domain:

https://github.com/openstack/nova/blob/15.0.0.0rc1/nova/virt/libvirt/driver.py#L2689

After that the driver is waiting for the power_state to go to RUNNING.

I see that shortly after that log message we get a libvirt event saying
the instance is started:

http://logs.openstack.org/50/435050/1/check/gate-tempest-dsvm-multinode-
live-migration-ubuntu-
xenial/49adcde/logs/subnode-2/screen-n-cpu.txt.gz#_2017-02-16_22_22_11_214

And then right after that it's paused:

http://logs.openstack.org/50/435050/1/check/gate-tempest-dsvm-multinode-
live-migration-ubuntu-
xenial/49adcde/logs/subnode-2/screen-n-cpu.txt.gz#_2017-02-16_22_22_11_263

Looking in the QEMU logs for that instance:

http://logs.openstack.org/50/435050/1/check/gate-tempest-dsvm-multinode-
live-migration-ubuntu-
xenial/49adcde/logs/subnode-2/libvirt/qemu/instance-0001.txt.gz

I see this, which is odd:

KVM: entry failed, hardware error 0x0

In the libvirtd logs I see:

http://logs.openstack.org/50/435050/1/check

[Yahoo-eng-team] [Bug 1663163] Re: Improper prompt when update existed resource class

2017-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/431392
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=ff1133c8ba649d6bfe2af6abc68bf42ebb68831e
Submitter: Jenkins
Branch: master

commit ff1133c8ba649d6bfe2af6abc68bf42ebb68831e
Author: ericxiett 
Date:   Thu Feb 9 16:55:11 2017 +0800

Fix improper prompt when updating an RC with an existing one's name.

When updating a resource class with the name of an existing one,
the exception message was formatted with the updated resource
class's name rather than the name of the class that already exists.
This patch formats the message with the existing class's name and
updates the tests accordingly.

Change-Id: I78ae8d872748de243d74b9954ce634fccf5e7310
Closes-Bug: #1663163


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1663163

Title:
  Improper prompt when update existed resource class

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  When I updated the resource class 'CUSTOM_A' with the name 'CUSTOM_B', where
  the resource class 'CUSTOM_B' already exists, the message returned by the
  Placement API was 'Resource class already exists: CUSTOM_A'.
  But it should be 'CUSTOM_B' that already exists.

  Steps to reproduce
  ==
  * POST http://**IP**/placement/resource_classes
  {
  "name": "CUSTOM_A"
  }
  * POST http://**IP**/placement/resource_classes
  {
  "name": "CUSTOM_B"
  }
  * PUT http://172.23.28.30/placement/resource_classes/CUSTOM_A
  {
  "name": "CUSTOM_B"
  }
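
  The same steps as a python-requests sketch, for convenience (endpoint and
  token are placeholders; the resource-class API needs placement microversion
  1.2 or later, and renaming via PUT with a body was only allowed in those
  early microversions):

      import requests

      PLACEMENT = "http://**IP**/placement"             # placeholder
      HEADERS = {"X-Auth-Token": "<admin token>",       # placeholder
                 "OpenStack-API-Version": "placement 1.2"}

      # Create the two custom resource classes.
      for name in ("CUSTOM_A", "CUSTOM_B"):
          requests.post(PLACEMENT + "/resource_classes",
                        json={"name": name}, headers=HEADERS)

      # Rename CUSTOM_A to the name of the already-existing CUSTOM_B.
      resp = requests.put(PLACEMENT + "/resource_classes/CUSTOM_A",
                          json={"name": "CUSTOM_B"}, headers=HEADERS)
      print(resp.status_code)  # expect 409
      print(resp.json())       # conflict text should name CUSTOM_B, not CUSTOM_A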

  Expected result
  ===
  Response:
  {
"errors": [
  {
"status": 409,
"request_id": "req-111941ae-839c-4e3e-92fb-eb76a692567c",
"detail": "There was a conflict when trying to complete your 
request.\n\n Resource class already exists: CUSTOM_B  ",
"title": "Conflict"
  }
]
  }

  Actual result
  =
  {
"errors": [
  {
"status": 409,
"request_id": "req-111941ae-839c-4e3e-92fb-eb76a692567c",
"detail": "There was a conflict when trying to complete your 
request.\n\n Resource class already exists: CUSTOM_A  ",
"title": "Conflict"
  }
]
  }

  Environment
  ===
  1. nova version
  [root@controller nova]# git log
  commit 50d402821be7476eb58ccd791c50d8ed801e85eb
  Author: Matt Riedemann 
  Date:   Wed Feb 8 10:23:14 2017 -0500

  Consider startup scenario in _get_compute_nodes_in_db

  2. Which hypervisor did you use?
  devstack + libvirt + kvm

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1663163/+subscriptions



[Yahoo-eng-team] [Bug 1664509] Re: api-ref: POST /servers does not note that bdm:device_name is ignored by libvirt driver

2017-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/433575
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=748639822093ef71090a3e3c277c60f3d770ded7
Submitter: Jenkins
Branch: master

commit 748639822093ef71090a3e3c277c60f3d770ded7
Author: Balazs Gibizer 
Date:   Tue Feb 14 11:27:38 2017 +0100

api-ref: note that boot ignores bdm:device_name

The volume_attach case was documented properly but the nova boot
case missed the note after I76a7cfd995db6c04f7af48ff8c9acdd55750ed76
was merged.

Change-Id: I1aa0518e60e349ad625ac366f7748ba35806c829
Closes-Bug: #1664509


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1664509

Title:
  api-ref: POST /servers does not note that bdm:device_name is ignored
  by libvirt driver

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The API doc properly states that the device is ignored at volume_attach
  [1] in the case of the libvirt driver, but the same note is missing from the
  server create bdm:device_name parameter. However, the libvirt driver
  ignores the device names in both cases [3].

  
[1]https://developer.openstack.org/api-ref/compute/?expanded=create-server-detail,attach-a-volume-to-an-instance-detail#attach-a-volume-to-an-instance
  
[2]https://developer.openstack.org/api-ref/compute/?expanded=create-server-detail,attach-a-volume-to-an-instance-detail#create-server
  [3]https://review.openstack.org/#/c/189632/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1664509/+subscriptions



[Yahoo-eng-team] [Bug 1608980] Re: Remove MANIFEST.in as it is not explicitly needed by PBR

2017-02-16 Thread Nikhil Komawar
** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
   Importance: Undecided => Low

** Changed in: glance
   Status: New => In Progress

** Changed in: glance
Milestone: None => pike-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608980

Title:
  Remove MANIFEST.in as it is not explicitly needed by PBR

Status in anvil:
  Invalid
Status in craton:
  Fix Released
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in gce-api:
  Fix Released
Status in Glance:
  In Progress
Status in Karbor:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Kosmos:
  Fix Released
Status in Magnum:
  Fix Released
Status in masakari:
  Fix Released
Status in networking-midonet:
  New
Status in networking-odl:
  New
Status in neutron:
  Fix Released
Status in Neutron LBaaS Dashboard:
  Confirmed
Status in octavia:
  Fix Released
Status in os-vif:
  Fix Released
Status in python-searchlightclient:
  In Progress
Status in OpenStack Search (Searchlight):
  Fix Released
Status in Solum:
  Fix Released
Status in Swift Authentication:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in Tricircle:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in watcher:
  Fix Released
Status in Zun:
  Fix Released

Bug description:
  PBR does not explicitly require MANIFEST.in, so it can be removed.

  
  Snippet from: http://docs.openstack.org/infra/manual/developers.html

  Manifest

  Just like AUTHORS and ChangeLog, why keep a list of files you wish to
  include when you can find many of these in git. MANIFEST.in generation
  ensures almost all files stored in git, with the exception of
  .gitignore, .gitreview and .pyc files, are automatically included in
  your distribution. In addition, the generated AUTHORS and ChangeLog
  files are also included. In many cases, this removes the need for an
  explicit ‘MANIFEST.in’ file

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1608980/+subscriptions



[Yahoo-eng-team] [Bug 1229445] Re: db type could not be determined

2017-02-16 Thread Maciej Szankin
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1229445

Title:
  db type could not be determined

Status in Ironic:
  Fix Released
Status in OpenStack Compute (nova):
  New
Status in Testrepository:
  Triaged

Bug description:
  In the openstack/python-novaclient project, running the tests in the py27 env
  and then in the py33 env stops with the following error:

  db type could not be determined

  But if you run "tox -e py33" first and then "tox -e py27", it works fine with
  no error.

  workaround:
  remove the .testrepository/times.dbm file, then the py33 tests run fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1229445/+subscriptions



[Yahoo-eng-team] [Bug 1665441] [NEW] cloudinit/net/sysconfig.py does not parse network_data.json correctly

2017-02-16 Thread Lars Kellogg-Stedman
Public bug reported:

In cloudinit/net/sysconfig.py, we see:

    elif len(iface_subnets) > 1:
        for i, iface_subnet in enumerate(iface_subnets,
                                         start=len(iface.children)):
            iface_sub_cfg = iface_cfg.copy()
            iface_sub_cfg.name = "%s:%s" % (iface_name, i)
            iface.children.append(iface_sub_cfg)
            cls._render_subnet(iface_sub_cfg, route_cfg, iface_subnet)

The code 'start=len(iface.children)' fails because at this point, iface
is simply a dict, and has no 'children' attribute.
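
A plausible correction, assuming the intent was to count the children already
attached to the rendered config object (iface_cfg) rather than the raw dict
parsed from network_data.json; shown as a sketch, not the merged fix:

    elif len(iface_subnets) > 1:
        # 'iface' is a plain dict from the parsed network config and has no
        # .children; the accumulated alias configs live on iface_cfg instead.
        for i, iface_subnet in enumerate(iface_subnets,
                                         start=len(iface_cfg.children)):
            iface_sub_cfg = iface_cfg.copy()
            iface_sub_cfg.name = "%s:%s" % (iface_name, i)
            iface_cfg.children.append(iface_sub_cfg)
            cls._render_subnet(iface_sub_cfg, route_cfg, iface_subnet)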

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1665441

Title:
  cloudinit/net/sysconfig.py does not parse network_data.json correctly

Status in cloud-init:
  New

Bug description:
  In cloudinit/net/sysconfig.py, we see:

      elif len(iface_subnets) > 1:
          for i, iface_subnet in enumerate(iface_subnets,
                                           start=len(iface.children)):
              iface_sub_cfg = iface_cfg.copy()
              iface_sub_cfg.name = "%s:%s" % (iface_name, i)
              iface.children.append(iface_sub_cfg)
              cls._render_subnet(iface_sub_cfg, route_cfg, iface_subnet)

  The code 'start=len(iface.children)' fails because at this point,
  iface is simply a dict, and has no 'children' attribute.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1665441/+subscriptions



[Yahoo-eng-team] [Bug 1658877] Re: live migration failed with XenServer as hypervisor

2017-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/424428
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=4cd32645fb26d39a900433c4c1dfecaac1767522
Submitter: Jenkins
Branch: master

commit 4cd32645fb26d39a900433c4c1dfecaac1767522
Author: Huan Xie 
Date:   Sun Jan 22 03:08:40 2017 -0800

Fix live migrate with XenServer

Live migration with XenServer as hypervisor failed with xapi
errors "VIF_NOT_IN_MAP". There are two reasons for this
problem:

(1) Before XS7.0, XenServer supported VM live migration without
setting vif_ref and network_ref explicitly if the destination
host had the same network, but since XS7.0 this is no longer
supported and we must supply the vif_ref to network_ref mapping.

(2) In nova, XenServer introduced an interim network to keep
OVS from updating the wrong port in neutron (see bug 1268955);
the interim network also helps support the neutron security
group (linux bridge) driver, as we cannot connect the VIF to the
linux bridge directly via XAPI.

To achieve this, we will add {src_vif_ref: dest_network_ref}
mapping information, in pre_live_migration, we first create
interim network in destination host and store
{neutron_vif_uuid: dest_network_ref} in migrate_data, then in
source host, before live_migration, we will calculate the
{src_vif_ref: dest_network_ref} and set it as parameters to
xapi when calling VM.migrate_send. Also, we will handle the
case where the destination host is running older code that
doesn't have this new src_vif_ref mapping, like live migrating
from an Ocata compute node to a Newton compute node.

Closes-bug: 1658877

Change-Id: If0fb5d764011521916fbbe15224f524a220052f3
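
A hedged sketch of the mapping the commit describes: building
{src_vif_ref: dest_network_ref} for the VM.migrate_send call seen in the
traceback. How each VIF record is matched to its destination network is left
to a caller-supplied lookup, since that keying is an assumption here.

    def build_vif_map(session, vm_ref, dest_network_ref_for):
        """Return {source VIF ref: destination network ref} for migrate_send.

        dest_network_ref_for is a callable mapping a source VIF record to the
        interim/destination network ref prepared during pre_live_migration
        (a placeholder for however the real code keys that lookup).
        """
        vif_map = {}
        for vif_ref in session.call_xenapi("VM.get_VIFs", vm_ref):
            vif_rec = session.call_xenapi("VIF.get_record", vif_ref)
            vif_map[vif_ref] = dest_network_ref_for(vif_rec)
        return vif_map

    # The resulting vif_map is then passed alongside vdi_map and options to
    # session.call_xenapi("VM.migrate_send", ...), as in the traceback above.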


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1658877

Title:
  live migration failed with XenServer as hypervisor

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I used devstack to deploy a multi-compute-node test environment with XenServer.
  Then I executed the command "nova live-migration --block-migrate admin-vm5 ComputeNode3"
  and got the errors below:

  ===
  2017-01-23 07:18:11.243 ERROR nova.virt.xenapi.vmops 
[req-6e4f8d0b-ea2f-4a69-bcd8-98d5f94e8ab0 admin admin] Migrate Send failed
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops Traceback (most recent 
call last):
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops   File 
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 2396, in live_migrate
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops "VM.migrate_send", 
vm_ref, migrate_data)
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops   File 
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 2361, in 
_call_live_migrate_command
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops vdi_map, vif_map, 
options)
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops   File 
"/usr/local/lib/python2.7/dist-packages/os_xenapi/client/session.py", line 200, 
in call_xenapi
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops return 
session.xenapi_request(method, args)
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops   File 
"/usr/local/lib/python2.7/dist-packages/os_xenapi/client/XenAPI.py", line 130, 
in xenapi_request
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops result = 
_parse_result(getattr(self, methodname)(*full_params))
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops   File 
"/usr/local/lib/python2.7/dist-packages/os_xenapi/client/XenAPI.py", line 212, 
in _parse_result
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops raise 
Failure(result['ErrorDescription'])
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops Failure: 
['VIF_NOT_IN_MAP', 'OpaqueRef:b0636c87-539f-59f6-8fef-8c15c6d58665']
  2017-01-23 07:18:11.243 TRACE nova.virt.xenapi.vmops

  
  
  2017-01-23 07:18:11.355 ERROR nova.compute.manager 
[req-6e4f8d0b-ea2f-4a69-bcd8-98d5f94e8ab0 admin admin] [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917] Live migration failed.
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917] Traceback (most recent call last):
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917]   File 
"/opt/stack/nova/nova/compute/manager.py", line 5368, in _do_live_migration
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917] block_migration, migrate_data)
  2017-01-23 07:18:11.355 TRACE nova.compute.manager [instance: 
b539c9fd-6f29-472b-908c-5c0146c31917]   File 
"/opt/stack/nova/nova/virt/xenapi/driver.py", line 520, in live_migration
  2017-01-23 07:18:11.355 TRACE nova.compute.

[Yahoo-eng-team] [Bug 1665145] Re: enable defaults for 'nova-manage cell_v2 update-cell'

2017-02-16 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => In Progress

** Changed in: nova/ocata
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova/ocata
   Importance: Undecided => High

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
 Assignee: Matt Riedemann (mriedem) => Corey Bryant (corey.bryant)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1665145

Title:
  enable defaults for 'nova-manage cell_v2 update-cell'

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  Currently you have to specify all of the args for 'nova-manage cell_v2
  update-cell' (--cell_uuid, --name, --transport-url and
  --database_connection).  Otherwise you'll hit this:
  http://paste.ubuntu.com/24003055/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1665145/+subscriptions



[Yahoo-eng-team] [Bug 1664913] Re: cell_v2 update_cell command don't allow default value

2017-02-16 Thread Matt Riedemann
*** This bug is a duplicate of bug 1665145 ***
https://bugs.launchpad.net/bugs/1665145

** This bug has been marked a duplicate of bug 1665145
   enable defaults for 'nova-manage cell_v2 update-cell'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1664913

Title:
  cell_v2 update_cell command don't allow default value

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  On a devstack built 5 days ago, the parameter should be optional but is now
  required:

  jichen@ubuntu1604:~$ nova-manage cell_v2 update_cell --cell_uuid 
b674a67e-a4a3-414a-9507-f08407686d37
  Updates the properties of a cell by the given uuid.

  If the cell is not found by uuid, this command will return an exit
  code of 1. If the properties cannot be set, this will return 2.
  Otherwise, the exit code will be 0.

  NOTE: Updating the transport_url or database_connection fields on
  a running system will NOT result in all nodes immediately using the
  new values. Use caution when changing these values.

  
  Traceback (most recent call last):
File "/opt/stack/nova/nova/cmd/manage.py", line 1598, in main
  fn, fn_args, fn_kwargs = cmd_common.get_action_fn()
File "/opt/stack/nova/nova/cmd/common.py", line 160, in get_action_fn
  _("Missing arguments: %s") % ", ".join(missing))
  Invalid: Missing arguments: name, transport_url, db_connection

  usage: nova-manage cell_v2 update_cell [-h] --cell_uuid 
 [--name ]
 [--transport-url ]
 [--database_connection ]

  optional arguments:
-h, --helpshow this help message and exit
--cell_uuid 
  The uuid of the cell to update.
--name  Set the cell name.
--transport-url 
  Set the cell transport_url. NOTE that running nodes
  will not see the change until restart!
--database_connection 
  Set the cell database_connection. NOTE that running
  nodes will not see the change until restart!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1664913/+subscriptions



[Yahoo-eng-team] [Bug 1665145] Re: enable defaults for 'nova-manage cell_v2 update-cell'

2017-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/434533
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=b276e8410c91c32b6ccb229104dd87b8167d
Submitter: Jenkins
Branch: master

commit b276e8410c91c32b6ccb229104dd87b8167d
Author: Corey Bryant 
Date:   Wed Feb 15 16:43:50 2017 -0500

Enable defaults for cell_v2 update_cell command

Initialize optional parameters for update_cell() to None and
enable getting the transport_url and db_connection from
nova.conf if not specified as arguments.

Change-Id: Ib20cfeb7b17dba06f9f2db5eca1fa194d2795767
Closes-Bug: 1665145
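
The defaulting pattern this describes, sketched for illustration (CONF is
nova's global config object, with [DEFAULT] transport_url and [database]
connection; this is not the actual patch):

    def update_cell(ctxt, cell_uuid, name=None, transport_url=None,
                    db_connection=None):
        # Fall back to the values already configured in nova.conf when the
        # operator omits --transport-url / --database_connection.
        transport_url = transport_url or CONF.transport_url
        db_connection = db_connection or CONF.database.connection
        # ...then look up the cell mapping by cell_uuid and apply the values.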


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1665145

Title:
  enable defaults for 'nova-manage cell_v2 update-cell'

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  Currently you have to specify all of the args for 'nova-manage cell_v2
  update-cell' (--cell_uuid, --name, --transport-url and
  --database_connection).  Otherwise you'll hit this:
  http://paste.ubuntu.com/24003055/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1665145/+subscriptions



[Yahoo-eng-team] [Bug 1661360] Re: tempest test fails with "Instance not found" error

2017-02-16 Thread Emilien Macchi
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661360

Title:
  tempest test fails with "Instance not found" error

Status in OpenStack Compute (nova):
  In Progress
Status in tripleo:
  Fix Released

Bug description:
  Running OpenStack services from master, when we try to run the tempest test
  tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops
  (among others), it always fails with the message "u'message': u'Instance
  bf33af04-6b55-4835-bb17-02484c196f13 could not be found.'" (full log at
  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-integration-4-scenario001-tempest-centos-7/b29f35b/console.html)

  According to the sequence in the log, this is what happens:

  1. tempest creates an instance:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_291997

  2. nova server returns instance bf33af04-6b55-4835-bb17-02484c196f13
  so it seems it has been properly created:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292483

  3. tempest tries to get the status of the instance right after creating it,
  and the nova server returns 404, instance not found:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292565

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292845

  At that time following messages are found in nova log:

  2017-02-02 12:58:10.823 7439 DEBUG nova.compute.api 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] [instance: 
bf33af04-6b55-4835-bb17-02484c196f13] Fetching instance by UUID get 
/usr/lib/python2.7/site-packages/nova/compute/api.py:2312
  2017-02-02 12:58:10.879 7439 INFO nova.api.openstack.wsgi 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] HTTP exception thrown: 
Instance bf33af04-6b55-4835-bb17-02484c196f13 could not be found.
  2017-02-02 12:58:10.880 7439 DEBUG nova.api.openstack.wsgi 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] Returning 404 to user: 
Instance bf33af04-6b55-4835-bb17-02484c196f13 could not be found. __call__ 
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1039

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-centos-7/b29f35b/logs/nova/nova-
  api.txt.gz#_2017-02-02_12_58_10_879

  4. Then tempest starts cleaning up the environment, deleting the security
  group, etc.

  We are hitting this with nova from commit
  f40467b0eb2b58a369d24a0e832df1ace6c400c3

  Tempest starts cleaning up the security group.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653517] Re: Move ovsdb nested transaction from trunk code to ovs_lib

2017-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/416647
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=acfbd2d490700d616d8050f4ea6a4565bc72fc52
Submitter: Jenkins
Branch: master

commit acfbd2d490700d616d8050f4ea6a4565bc72fc52
Author: Jakub Libosvar 
Date:   Mon Jan 2 09:59:35 2017 -0500

Move ovsdb_nested transaction to ovs_lib

The patch introduces a new abstract method to the API abstract class. The
method is supposed to return a new Transaction object. Each API object is
capable of storing one nested transaction, which is returned by the context
manager in case a transaction already exists.

As no OpenStack projects inherit directly from the API abstract class, it
is safe to make the new create_transaction() method abstract.
The only projects that currently use the ovsdb API are networking-ovn,
dragonflow and networking-l2gw. OVN and Dragonflow use only the IDL
implementation, and L2GW copies the code of the API abstract class.

Closes-bug 1653517

Change-Id: I55dd417cae7ebbe0668ba5606949ce4ab045d251
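
For illustration, the nested-transaction behaviour described in the commit
message can be sketched roughly as follows. This is a minimal, self-contained
sketch of the pattern only, not the actual ovs_lib code; the class and method
names other than create_transaction() are assumptions.

    import contextlib

    class Transaction(object):
        """Collects commands and commits them in one shot."""
        def __init__(self):
            self.commands = []

        def add(self, command):
            self.commands.append(command)

        def commit(self):
            print('committing %d command(s)' % len(self.commands))

    class API(object):
        """Hands out one shared transaction to nested context managers."""
        def __init__(self):
            self._nested_txn = None

        def create_transaction(self):
            # The new abstract method from the patch: return a fresh Transaction.
            return Transaction()

        @contextlib.contextmanager
        def transaction(self):
            if self._nested_txn is not None:
                # A transaction is already open: reuse it instead of nesting.
                yield self._nested_txn
                return
            self._nested_txn = self.create_transaction()
            try:
                yield self._nested_txn
                self._nested_txn.commit()
            finally:
                self._nested_txn = None

    api = API()
    with api.transaction() as outer:
        outer.add('add-port br-int p1')
        with api.transaction() as inner:  # nested: same Transaction object
            inner.add('set-port-tag p1 42')
    # prints: committing 2 command(s)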


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1653517

Title:
  Move ovsdb nested transaction from trunk code to ovs_lib

Status in neutron:
  Fix Released

Bug description:
  Get rid of
  
https://github.com/openstack/neutron/blob/0092198b235af55581381010abcf327a0d39f0b7/neutron/services/trunk/drivers/openvswitch/agent/trunk_manager.py#L96

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1653517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1665263] Re: instance.delete notification is missing for unscheduled instance

2017-02-16 Thread Matt Riedemann
** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1665263

Title:
  instance.delete notification is missing for unscheduled instance

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) ocata series:
  New

Bug description:
  Description
  ===
  It seems that the Move instance creation to conductor commit [1] changed
when and how the instance.delete notification is emitted for an unscheduled
instance. Unfortunately the legacy notification doesn't have test coverage,
and the versioned notification coverage is still under review [2] for this
case.

  Before [1], the instance.delete for an unscheduled instance was emitted
from here [3]. But after [1], the execution of the same delete operation
takes a new path [4] and never reaches [3].
  Before [1], the new test coverage in [2] was passing, but now that [1] is
merged, test_create_server_error fails because the instance.delete
notification is not emitted.

  [1] https://review.openstack.org/#/c/319379
  [2] https://review.openstack.org/#/c/410297
  [3] https://review.openstack.org/#/c/410297/9/nova/compute/api.py@1860
  [4] https://review.openstack.org/#/c/319379/84/nova/compute/api.py@1790

  
  Steps to reproduce
  ==

  Run the nova functional test in patch [2] before and after commit [1].
  The test_create_server_error will pass before and fail after commit
  [1] due to missing instance.delete notification.

  
  Environment
  ===

  Nova functional test env based on commit
  f9d7b383a7cb12b6cd3e6117daf69b08620bf40f

  Logs & Configs
  ==

  http://logs.openstack.org/97/410297/9/check/gate-nova-tox-functional-
  ubuntu-xenial/5875492/console.html#_2017-02-15_16_21_06_668774

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1665263/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1665370] [NEW] api db_sync fails on upgrade due to duplicate migration files

2017-02-16 Thread Steven Hardy
Public bug reported:

I'm testing upgrades on TripleO and hit this problem; a discussion with
owalsh indicates it may be a nova bug related to backport migration
numbering:

14:44 < owalsh> shardy: it's a nova bug, 028 needs to be renamed to 
021_build_requests_instance_mediumtext.py in ocata & 
master

This is the error:

"ScriptError: You can only have one Python script per version, but you
have: /usr/lib/python2.7/site-
packages/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/021_build_requests_instance_mediumtext.py
and /usr/lib/python2.7/site-
packages/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/021_placeholder.py"],
"warnings": []}

I'm testing this version:

[root@overcloud-controller-0 ~]# rpm -qf 
/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/021_placeholder.py
python-nova-15.0.0-0.20170215034806.bdeb05d.el7.centos.noarch

So upgrading to:
https://github.com/openstack/nova/commit/bdeb05dfb0f727654ac0b0bae14341fd87b5cbb7

From stable/newton commit:
https://github.com/openstack/nova/commit/c6743ca709d45334cf25332aa834f86a9d91f1a5

[root@overcloud-controller-0 ~]# rpm -qa --last | grep python-nova
python-nova-15.0.0-0.20170215034806.bdeb05d.el7.centos.noarch Wed 15 Feb 2017 
07:02:12 PM UTC
python-nova-14.0.4-0.20170117154931.c6743ca.el7.centos.noarch Mon 23 Jan 2017 
01:45:50 PM UTC
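
For what it's worth, a quick way to spot this kind of collision before
running db_sync is to scan the versions directory for scripts that share a
numeric prefix. This is only a hedged helper sketch (the directory path is
the one from the error above; adjust as needed):

    import collections
    import os

    VERSIONS_DIR = ('/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/'
                    'api_migrations/migrate_repo/versions')

    # Group migration scripts by their three-digit version prefix.
    by_version = collections.defaultdict(list)
    for name in os.listdir(VERSIONS_DIR):
        if name.endswith('.py') and name[:3].isdigit():
            by_version[name[:3]].append(name)

    # Report any version number that has more than one script.
    for version, files in sorted(by_version.items()):
        if len(files) > 1:
            print('duplicate version %s: %s' % (version, ', '.join(files)))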

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: Critical
 Assignee: Steven Hardy (shardy)
 Status: Triaged

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
Milestone: None => ocata-rc1

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => Critical

** Changed in: tripleo
 Assignee: (unassigned) => Steven Hardy (shardy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1665370

Title:
  api db_sync fails on upgrade due to duplicate migration files

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  Triaged

Bug description:
  I'm testing upgrades on TripleO and hit this problem; a discussion with
  owalsh indicates it may be a nova bug related to backport migration
  numbering:

  14:44 < owalsh> shardy: it's a nova bug, 028 needs to be renamed to 
021_build_requests_instance_mediumtext.py in ocata & 
  master

  This is the error:

  "ScriptError: You can only have one Python script per version, but you
  have: /usr/lib/python2.7/site-
  
packages/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/021_build_requests_instance_mediumtext.py
  and /usr/lib/python2.7/site-
  
packages/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/021_placeholder.py"],
  "warnings": []}

  I'm testing this version:

  [root@overcloud-controller-0 ~]# rpm -qf 
/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api_migrations/migrate_repo/versions/021_placeholder.py
  python-nova-15.0.0-0.20170215034806.bdeb05d.el7.centos.noarch

  So upgrading to:
  
https://github.com/openstack/nova/commit/bdeb05dfb0f727654ac0b0bae14341fd87b5cbb7

  From stable/newton commit:
  
https://github.com/openstack/nova/commit/c6743ca709d45334cf25332aa834f86a9d91f1a5

  [root@overcloud-controller-0 ~]# rpm -qa --last | grep python-nova
  python-nova-15.0.0-0.20170215034806.bdeb05d.el7.centos.noarch Wed 15 Feb 2017 
07:02:12 PM UTC
  python-nova-14.0.4-0.20170117154931.c6743ca.el7.centos.noarch Mon 23 Jan 2017 
01:45:50 PM UTC

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1665370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1665366] [NEW] [RFE] Add --key-name option to 'nova rebuild'

2017-02-16 Thread George Shuklin
Public bug reported:

Currently there is no way to change the key-name associated with an
instance. This has some justification, as the key may be downloaded only at
build time and later changes would be ignored by the instance.

But this is not the case for the rebuild command. If a tenant wants to
rebuild an instance, they may want to change the key used to access that
instance.

The main reason for using the 'rebuild' command instead of 'delete/create'
often lies in preserving network settings - fixed IPs, MAC addresses,
associated floating IPs. Normally a user wants to keep the same ssh key as
at creation time, but occasionally they may want to replace it.

Right now there is no such option.

TL;DR: Please add a --key-name option to the nova rebuild command (and API).

Thanks.
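
As a sketch of what the requested option could look like from
python-novaclient (purely hypothetical: the key_name parameter below is
exactly what this RFE asks for and does not exist today; the authenticated
session setup is assumed):

    from novaclient import client as nova_client

    def rebuild_with_new_key(session, server_id, image_id, key_name):
        # 'session' is an authenticated keystoneauth1 session (assumed).
        nova = nova_client.Client('2', session=session)
        server = nova.servers.get(server_id)
        # NOTE: key_name is the proposed new parameter, not existing API.
        return nova.servers.rebuild(server, image_id, key_name=key_name)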

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1665366

Title:
  [RFE] Add --key-name option to 'nova rebuild'

Status in OpenStack Compute (nova):
  New

Bug description:
  Currently there is no way to change the key-name associated with an
  instance. This has some justification, as the key may be downloaded only
  at build time and later changes would be ignored by the instance.

  But this is not the case for the rebuild command. If a tenant wants to
  rebuild an instance, they may want to change the key used to access that
  instance.

  The main reason for using the 'rebuild' command instead of 'delete/create'
  often lies in preserving network settings - fixed IPs, MAC addresses,
  associated floating IPs. Normally a user wants to keep the same ssh key
  as at creation time, but occasionally they may want to replace it.

  Right now there is no such option.

  TL;DR: Please add a --key-name option to the nova rebuild command (and
  API).

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1665366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2017-02-16 Thread Neil Jerram
I don't see value in this for networking-calico.

** No longer affects: networking-calico

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in Cinder:
  Won't Fix
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in Karbor:
  Fix Released
Status in kolla:
  Fix Released
Status in kuryr:
  Fix Released
Status in kuryr-libnetwork:
  Fix Released
Status in Mistral:
  Fix Released
Status in networking-midonet:
  Fix Released
Status in networking-ovn:
  Fix Released
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in osprofiler:
  Fix Released
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  Won't Fix
Status in tacker:
  In Progress
Status in watcher:
  Fix Released

Bug description:
  OpenStack common has a wrapper for generating UUIDs.

  We should use only that function when generating UUIDs, for
  consistency.
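
  For reference, the wrapper now lives in oslo_utils.uuidutils; a minimal
  before/after sketch:

      from oslo_utils import uuidutils

      # Preferred: the common wrapper (returns a string UUID).
      new_id = uuidutils.generate_uuid()
      assert uuidutils.is_uuid_like(new_id)

      # Instead of calling the stdlib directly:
      # import uuid
      # new_id = str(uuid.uuid4())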

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1082248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1662109] Re: tempest scenario test_qos fails intermittently

2017-02-16 Thread Jakub Libosvar
It seems we still suffer from this issue:
http://logs.openstack.org/47/416647/5/check/gate-tempest-dsvm-neutron-
dvr-multinode-scenario-ubuntu-xenial-
nv/330522c/logs/testr_results.html.gz

** Changed in: neutron
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662109

Title:
  tempest scenario test_qos fails intermittently

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/67/418867/7/check/gate-tempest-dsvm-neutron-
  dvr-multinode-scenario-ubuntu-xenial-
  nv/b705e56/logs/testr_results.html.gz

  e-r-q:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22line%20189%2C%20in%20test_qos%5C%22%20AND%20build_name
  %3Agate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-
  nv%20AND%20build_branch%3Amaster%20AND%20tags%3Aconsole

  11 hits in last 24 hours

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1662109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608980] Re: Remove MANIFEST.in as it is not explicitly needed by PBR

2017-02-16 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/386513
Committed: 
https://git.openstack.org/cgit/openstack/swauth/commit/?id=6573269e379d65bbd68326124d193eca4e690b5e
Submitter: Jenkins
Branch: master

commit 6573269e379d65bbd68326124d193eca4e690b5e
Author: nizam 
Date:   Fri Oct 14 15:15:02 2016 +0530

Drop MANIFEST.in - it's not needed by pbr

swauth already uses PBR:

    setuptools.setup(
        setup_requires=['pbr>=1.8'],
        pbr=True)

This patch removes the `MANIFEST.in` file, as pbr generates a sensible
manifest from git-tracked files and some standard files, removing the
need for an explicit `MANIFEST.in`.

Change-Id: Idb30c13b6c75129e07e46cbdd75a4aa92dcb5858
Closes-Bug: #1608980


** Changed in: swauth
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608980

Title:
  Remove MANIFEST.in as it is not explicitly needed by PBR

Status in anvil:
  Invalid
Status in craton:
  Fix Released
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in gce-api:
  Fix Released
Status in Karbor:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Kosmos:
  Fix Released
Status in Magnum:
  Fix Released
Status in masakari:
  Fix Released
Status in networking-midonet:
  New
Status in networking-odl:
  New
Status in neutron:
  Fix Released
Status in Neutron LBaaS Dashboard:
  Confirmed
Status in octavia:
  Fix Released
Status in os-vif:
  Fix Released
Status in python-searchlightclient:
  In Progress
Status in OpenStack Search (Searchlight):
  Fix Released
Status in Solum:
  Fix Released
Status in Swift Authentication:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in Tricircle:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in watcher:
  Fix Released
Status in Zun:
  Fix Released

Bug description:
  PBR does not explicitly require MANIFEST.in, so it can be removed.

  
  Snippet from: http://docs.openstack.org/infra/manual/developers.html

  Manifest

  Just like AUTHORS and ChangeLog, why keep a list of files you wish to
  include when you can find many of these in git. MANIFEST.in generation
  ensures almost all files stored in git, with the exception of
  .gitignore, .gitreview and .pyc files, are automatically included in
  your distribution. In addition, the generated AUTHORS and ChangeLog
  files are also included. In many cases, this removes the need for an
  explicit ‘MANIFEST.in’ file

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1608980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1665330] [NEW] FWaaS (reduce the associated floating IP count to zero): the firewall rules in the qrouter namespace are not cleaned up after the firewall is deleted

2017-02-16 Thread wujun
Public bug reported:

environment: Mitaka

In DVR mode:
1. create the router, network, VM and firewall
2. bind the firewall to the router, then associate the floating IP with the VM
3. disassociate the floating IP
4. delete the firewall

The firewall rules are still present in the qrouter namespace.

** Affects: neutron
 Importance: Undecided
 Assignee: wujun (wujun)
 Status: New


** Tags: fwaas

** Tags added: fwaas

** Changed in: neutron
 Assignee: (unassigned) => wujun (wujun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1665330

Title:
  FWaaS (reduce the associated floating IP count to zero): the firewall
  rules in the qrouter namespace are not cleaned up after the firewall is
  deleted.

Status in neutron:
  New

Bug description:
  environment: Mitaka

  In DVR mode:
  1. create the router, network, VM and firewall
  2. bind the firewall to the router, then associate the floating IP with the VM
  3. disassociate the floating IP
  4. delete the firewall

  The firewall rules are still present in the qrouter namespace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1665330/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1665326] [NEW] FWaaS: firewall rules are applied twice for a router when the firewall is updated

2017-02-16 Thread wujun
Public bug reported:

environment: Mitaka

The agent applies the firewall rules twice for the same router when the
firewall is updated.

The code is in the agent update_firewall():
...
    router_ids = self._get_router_ids_for_fw(context, firewall)
    if router_ids or firewall['router_ids']:
        router_info_list = self._get_router_info_list_for_tenant(
            router_ids + firewall['router_ids'],
            firewall['tenant_id'])
...

But "router_ids" contains the same routers as "firewall['router_ids']", so the concatenation passes each router twice.
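
One possible direction, sketched below with illustrative names only (this is
not the actual neutron-fwaas fix), is to de-duplicate the router ids before
the per-router processing:

    # Hedged sketch: merge the computed router ids with firewall['router_ids'],
    # dropping duplicates while keeping a deterministic order.
    def routers_to_update(router_ids, firewall):
        seen = set()
        merged = []
        for rid in list(router_ids) + list(firewall.get('router_ids', [])):
            if rid not in seen:
                seen.add(rid)
                merged.append(rid)
        return merged

    # Both inputs name router 'r1', but it is returned only once.
    print(routers_to_update(['r1'], {'router_ids': ['r1', 'r2']}))  # ['r1', 'r2']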

** Affects: neutron
 Importance: Undecided
 Assignee: wujun (wujun)
 Status: New


** Tags: fwaas

** Tags added: fwaas

** Changed in: neutron
 Assignee: (unassigned) => wujun (wujun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1665326

Title:
  FWaaS: firewall rules are applied twice for a router when the firewall
  is updated.

Status in neutron:
  New

Bug description:
  environment: Mitaka

  The agent applies the firewall rules twice for the same router when the
  firewall is updated.

  The code is in the agent update_firewall():
  ...
      router_ids = self._get_router_ids_for_fw(context, firewall)
      if router_ids or firewall['router_ids']:
          router_info_list = self._get_router_info_list_for_tenant(
              router_ids + firewall['router_ids'],
              firewall['tenant_id'])
  ...

  But "router_ids" contains the same routers as "firewall['router_ids']", so the concatenation passes each router twice.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1665326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1665323] [NEW] FWaaS: when the firewall name is updated, the neutron-l3-agent is called to reload the firewall rules

2017-02-16 Thread wujun
Public bug reported:

environment: Mitaka

When the firewall name is updated, the neutron-l3-agent is called to
reload the firewall rules.

Updating only the firewall name should just update the DB; there is no need
to call the agent.
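
A hedged sketch of the idea (field names are illustrative, this is not the
actual neutron-fwaas code): compare the old and new firewall dicts and skip
the agent RPC when only cosmetic fields changed.

    # Fields whose change should not trigger a rule reload on the agent.
    COSMETIC_FIELDS = {'name', 'description'}

    def needs_agent_update(old_fw, new_fw):
        changed = {k for k in new_fw if old_fw.get(k) != new_fw.get(k)}
        return bool(changed - COSMETIC_FIELDS)

    # Only the name changed, so no RPC to the l3 agent would be needed.
    print(needs_agent_update({'name': 'fw1', 'admin_state_up': True},
                             {'name': 'fw2', 'admin_state_up': True}))  # False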

** Affects: neutron
 Importance: Undecided
 Assignee: wujun (wujun)
 Status: New


** Tags: fwaas

** Tags added: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1665323

Title:
  FWaaS: when the firewall name is updated, the neutron-l3-agent is
  called to reload the firewall rules.

Status in neutron:
  New

Bug description:
  environment: Mitaka

  When the firewall name is updated, the neutron-l3-agent is called to
  reload the firewall rules.

  Updating only the firewall name should just update the DB; there is no
  need to call the agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1665323/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1665318] [NEW] FWaaS (bind a firewall to a legacy router without a qrouter namespace): iptables rules do not take effect after the qrouter namespace is created

2017-02-16 Thread wujun
Public bug reported:

environment: Mitaka

1. create a legacy router (do not add an interface or set a gateway), so
there is no qrouter namespace
2. bind a firewall to the router
3. add an interface or set a gateway for the router; the qrouter namespace
is created now

The problem is that there is no firewall rule in the qrouter namespace.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

** Tags added: fwaas

** Summary changed:

- Fwaas(bind a firewall to a legacy router that without qrouter 
namespace):iptables rules do not tabke effect  after the qrouter namespace 
created
+ Fwaas(bind a firewall to a legacy router that without qrouter 
namespace):iptables rules do not tabke effect  after the qrouter namespace 
created.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1665318

Title:
  Fwaas(bind a firewall to a legacy router that without qrouter
  namespace):iptables rules do not tabke effect  after the qrouter
  namespace created.

Status in neutron:
  New

Bug description:
  environment: Mitaka

  1. create a legacy router (do not add an interface or set a gateway), so
  there is no qrouter namespace
  2. bind a firewall to the router
  3. add an interface or set a gateway for the router; the qrouter
  namespace is created now

  The problem is that there is no firewall rule in the qrouter namespace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1665318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1665300] [NEW] [FWaaS v2] Unused attribute 'firewall_policy_id' on rule resource

2017-02-16 Thread Édouard Thuleau
Public bug reported:

In the FWaaS v2 API extension, an attribute named 'firewall_policy_id'
was declared on the 'rule' resource [1], but it is not set or used anywhere
in the code. It is probably a copy/paste from the v1 API.

[1] https://github.com/openstack/neutron-
fwaas/blob/master/neutron_fwaas/extensions/firewall.py#L259

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1665300

Title:
  [FWaaS v2] Unused attribute 'firewall_policy_id' on rule resource

Status in neutron:
  New

Bug description:
  In the FWaaS v2 API extension, an attribute named 'firewall_policy_id'
  was declared on the 'rule' resource [1], but it is not set or used
  anywhere in the code. It is probably a copy/paste from the v1 API.

  [1] https://github.com/openstack/neutron-
  fwaas/blob/master/neutron_fwaas/extensions/firewall.py#L259

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1665300/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1665282] [NEW] To prioritize port QoS policies over network QoS policies

2017-02-16 Thread Rodolfo Alonso
Public bug reported:

Two types of QoS policies can apply to a port:
- Port QoS policies: set directly on the port.
- Network QoS policies: those applied on the port's network.

If both are applied, the port QoS policy must prevail over the network QoS
policy (if they differ) [1].

To decide whether a rule must be applied to a port, a check is made in
neutron/objects/qos/rule:QoSRule.should_apply_to_port. The logic
currently implemented doesn't reflect what is documented in [1].

The expected result of
test_should_apply_to_port_with_compute_port_and_net_policy must be
False.

[1] https://docs.openstack.org/mitaka/networking-guide/config-qos.html
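
A minimal sketch of the precedence described in [1] (illustrative only; the
field names below are assumptions, not the actual QosRule code):

    def should_apply_to_port(rule_policy_id, port):
        port_policy_id = port.get('qos_policy_id')
        net_policy_id = port.get('qos_network_policy_id')
        if port_policy_id:
            # The port policy wins; network-level rules apply only if they
            # belong to that very same policy.
            return rule_policy_id == port_policy_id
        return rule_policy_id == net_policy_id

    # A compute port with its own policy: a network-policy rule must not apply.
    print(should_apply_to_port('net-pol',
                               {'qos_policy_id': 'port-pol',
                                'qos_network_policy_id': 'net-pol'}))  # False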

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: In Progress


** Tags: qos

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Tags added: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1665282

Title:
  To prioritize port QoS policies over network QoS policies

Status in neutron:
  In Progress

Bug description:
  Two types of QoS policies can apply to a port:
  - Port QoS policies: set directly on the port.
  - Network QoS policies: those applied on the port's network.

  If both are applied, the port QoS policy must prevail over the network
  QoS policy (if they differ) [1].

  To decide whether a rule must be applied to a port, a check is made in
  neutron/objects/qos/rule:QoSRule.should_apply_to_port. The logic
  currently implemented doesn't reflect what is documented in [1].

  The expected result of
  test_should_apply_to_port_with_compute_port_and_net_policy must be
  False.

  [1] https://docs.openstack.org/mitaka/networking-guide/config-qos.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1665282/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1665263] [NEW] instance.delete notification is missing for unscheduled instance

2017-02-16 Thread Balazs Gibizer
Public bug reported:

Description
===
It seems that the Move instance creation to conductor commit [1] changed
when and how the instance.delete notification is emitted for an unscheduled
instance. Unfortunately the legacy notification doesn't have test coverage,
and the versioned notification coverage is still under review [2] for this
case.

Before [1], the instance.delete for an unscheduled instance was emitted from
here [3]. But after [1], the execution of the same delete operation takes a
new path [4] and never reaches [3].
Before [1], the new test coverage in [2] was passing, but now that [1] is
merged, test_create_server_error fails because the instance.delete
notification is not emitted.

[1] https://review.openstack.org/#/c/319379
[2] https://review.openstack.org/#/c/410297
[3] https://review.openstack.org/#/c/410297/9/nova/compute/api.py@1860
[4] https://review.openstack.org/#/c/319379/84/nova/compute/api.py@1790


Steps to reproduce
==

Run the nova functional test in patch [2] before and after commit [1].
The test_create_server_error will pass before and fail after commit [1]
due to missing instance.delete notification.


Environment
===

Nova functional test env based on commit
f9d7b383a7cb12b6cd3e6117daf69b08620bf40f

Logs & Configs
==

http://logs.openstack.org/97/410297/9/check/gate-nova-tox-functional-
ubuntu-xenial/5875492/console.html#_2017-02-15_16_21_06_668774
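
For context, the kind of legacy-style notification that goes missing here
can be emitted with oslo.messaging roughly like this (a minimal sketch; the
transport URL, publisher_id and payload fields are illustrative, not nova's
actual helper):

    from oslo_config import cfg
    import oslo_messaging

    # Notification transport and notifier, set up the way an OpenStack
    # service typically does it (URL and topic here are just examples).
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
    notifier = oslo_messaging.Notifier(
        transport, publisher_id='compute.devstack', driver='messaging',
        topics=['notifications'])

    # The legacy-style event that is no longer emitted for an unscheduled
    # instance after [1]. Payload keys are illustrative only.
    notifier.info({}, 'instance.delete.end',
                  {'instance_id': '00000000-0000-0000-0000-000000000000',
                   'state': 'deleted'})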

** Affects: nova
 Importance: Undecided
 Assignee: Balazs Gibizer (balazs-gibizer)
 Status: New


** Tags: ocata-backport-potential

** Changed in: nova
 Assignee: (unassigned) => Balazs Gibizer (balazs-gibizer)

** Tags added: ocata-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1665263

Title:
  instance.delete notification is missing for unscheduled instance

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  It seems that the Move instance creation to conductor commit [1] changed
when and how the instance.delete notification is emitted for an unscheduled
instance. Unfortunately the legacy notification doesn't have test coverage,
and the versioned notification coverage is still under review [2] for this
case.

  Before [1], the instance.delete for an unscheduled instance was emitted
from here [3]. But after [1], the execution of the same delete operation
takes a new path [4] and never reaches [3].
  Before [1], the new test coverage in [2] was passing, but now that [1] is
merged, test_create_server_error fails because the instance.delete
notification is not emitted.

  [1] https://review.openstack.org/#/c/319379
  [2] https://review.openstack.org/#/c/410297
  [3] https://review.openstack.org/#/c/410297/9/nova/compute/api.py@1860
  [4] https://review.openstack.org/#/c/319379/84/nova/compute/api.py@1790

  
  Steps to reproduce
  ==

  Run the nova functional test in patch [2] before and after commit [1].
  The test_create_server_error will pass before and fail after commit
  [1] due to missing instance.delete notification.

  
  Environment
  ===

  Nova functional test env based on commit
  f9d7b383a7cb12b6cd3e6117daf69b08620bf40f

  Logs & Configs
  ==

  http://logs.openstack.org/97/410297/9/check/gate-nova-tox-functional-
  ubuntu-xenial/5875492/console.html#_2017-02-15_16_21_06_668774

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1665263/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp