[Yahoo-eng-team] [Bug 1490990] Re: acceptance: neutron fails to start server service

2015-09-03 Thread Ihar Hrachyshka
No, it's not a packaging issue, it's a neutron bug.

Since
https://review.openstack.org/#/c/202207/23/neutron/services/provider_configuration.py,
neutron presumes that config-dir points to /etc/neutron. But oslo.config
allows multiple config-dir options to be passed. For example, Delorean uses
this to let users configure their services by dropping a custom .conf file
into a pre-designated service dir, instead of modifying the stock
neutron.conf and other files:
https://github.com/openstack-packages/neutron/blob/rpm-master/neutron-server.service#L8

So neutron now fails to locate the service conf file, because config-dir
points to the last config-dir argument rather than /etc/neutron.
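
To illustrate the overwrite (a minimal sketch using plain argparse to
mirror the behavior described above; option names are illustrative,
this is not neutron or oslo.config code):

    import argparse
    import os

    parser = argparse.ArgumentParser()
    # action='store' keeps only the last occurrence -- the behavior
    # described above for the single-valued config-dir option:
    parser.add_argument('--config-dir', action='store')
    args = parser.parse_args(['--config-dir', '/etc/neutron',
                              '--config-dir', '/etc/neutron/conf.d'])

    # Code that assumes config-dir is /etc/neutron now builds a wrong path:
    print(os.path.join(args.config_dir, 'neutron.conf'))
    # -> /etc/neutron/conf.d/neutron.conf, which does not exist

    # Tracking every occurrence instead would preserve both directories:
    parser2 = argparse.ArgumentParser()
    parser2.add_argument('--config-dir', action='append', dest='config_dirs')
    args2 = parser2.parse_args(['--config-dir', '/etc/neutron',
                                '--config-dir', '/etc/neutron/conf.d'])
    print(args2.config_dirs)  # ['/etc/neutron', '/etc/neutron/conf.d']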

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490990

Title:
  acceptance: neutron fails to start server service

Status in neutron:
  New
Status in puppet-neutron:
  In Progress

Bug description:
  This is a new error that started happening very recently, using RDO
  Liberty packaging:

  With the current state of the beaker manifests, we have this error:
  No providers specified for 'LOADBALANCER' service, exiting

  Source: http://logs.openstack.org/50/216950/5/check/gate-puppet-neutron-puppet-beaker-rspec-dsvm-centos7/9e7e510/logs/neutron/server.txt.gz#_2015-09-01_12_40_22_734

  That means neutron-server can't start correctly.

  This is probably a misconfiguration in our manifests or a packaging
  issue in Neutron, because we don't have the issue in Trusty jobs.

  RDO packaging version: 7.0.0.0b3-dev606

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490990] Re: acceptance: neutron fails to start server service

2015-09-03 Thread Ihar Hrachyshka
I guess another project involved here is oslo.config, since it provides
no way to get all config-dirs passed to an executable; instead it
rewrites the single stored value on each --config-dir occurrence:

https://github.com/openstack/oslo.config/blob/master/oslo_config/cfg.py#L1276

I believe oslo.config should provide a way to get the ordered list of
all config dirs and files.
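
Something like this hypothetical usage (not an existing oslo.config
API; an ordered config_dirs list is what I am proposing):

    # Hypothetical: CONF.config_dirs would hold every --config-dir
    # occurrence, in command-line order, instead of only the last one.
    for config_dir in CONF.config_dirs:
        scan_service_providers(config_dir)  # hypothetical consumer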

** Also affects: oslo.config
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490990

Title:
  acceptance: neutron fails to start server service

Status in neutron:
  New
Status in oslo.config:
  New
Status in puppet-neutron:
  In Progress

Bug description:
  This is a new error that started happening very recently, using RDO
  Liberty packaging:

  With the current state of the beaker manifests, we have this error:
  No providers specified for 'LOADBALANCER' service, exiting

  Source: http://logs.openstack.org/50/216950/5/check/gate-puppet-neutron-puppet-beaker-rspec-dsvm-centos7/9e7e510/logs/neutron/server.txt.gz#_2015-09-01_12_40_22_734

  That means neutron-server can't start correctly.

  This is probably a misconfiguration in our manifests or a packaging
  issue in Neutron, because we don't have the issue in Trusty jobs.

  RDO packaging version: 7.0.0.0b3-dev606

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491758] [NEW] Unable to delete instance

2015-09-03 Thread Ioana-Madalina Patrichi
Public bug reported:

Openstack version: Kilo

I am trying to delete an instance that was initially stuck in a Paused
and then an Error state. I gave up trying to bring it back up; however,
now I am unable to delete it from OpenStack.

I have taken the following steps:
1. I initially tried to delete the instance directly from the OpenStack
Dashboard while it was in an error state. The operation was reported as
successful; however, the instance was not removed.
2. I tried resetting the state of the instance to Active:
 $ nova reset-state --active 27d8f8d0-efd5-42bd-9c56-4ddd159833d1
3. I deleted the instance using the nova-api:
 $ nova delete 27d8f8d0-efd5-42bd-9c56-4ddd159833d1
 Request to delete server 27d8f8d0-efd5-42bd-9c56-4ddd159833d1 has been
accepted.

In addition, the Fault section of the instance on the OpenStack
Dashboard displays the following:

Message: Cannot call obj_load_attr on orphaned Instance object
Code:    500

None of these steps has been successful. I know that I could delete the
instance directly from the database, but I would like to address this
issue properly.
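
For illustration, a rough, self-contained model (not nova's actual
classes) of why the dashboard reports that fault: lazy-loading an
attribute needs a request context, and an Instance object detached from
one cannot fetch missing fields.

    class OrphanedObjectError(Exception):
        pass

    class Instance(object):
        def __init__(self, context=None, **fields):
            self._context = context
            self._fields = fields

        def obj_load_attr(self, name):
            if self._context is None:
                raise OrphanedObjectError(
                    "Cannot call obj_load_attr on orphaned Instance object")
            self._fields[name] = 'loaded-from-db'  # nova would query the DB

        def __getattr__(self, name):
            if name.startswith('_'):
                raise AttributeError(name)
            if name not in self._fields:
                self.obj_load_attr(name)
            return self._fields[name]

    inst = Instance(context=None, uuid='27d8f8d0-...')
    print(inst.uuid)    # already present, no load needed
    try:
        inst.flavor     # missing, triggers the lazy load
    except OrphanedObjectError as e:
        print(e)        # surfaced by the API as Code 500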

Logs from nova-api:
2015-09-03 11:06:31.364 2574 DEBUG nova.api.openstack.wsgi 
[req-a663fee3-b720-4d9e-bc7d-646d4d80922b ba5daa262bb947079f9d2fc54f5e9234 
d819429055d4416bbfc3d693b1571388 - - -] Action: 'action', calling method: 
>, body: {"os-getConsoleOutput": {"length": null}} 
_process_stack /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:780
2015-09-03 11:06:31.365 2574 DEBUG nova.compute.api 
[req-a663fee3-b720-4d9e-bc7d-646d4d80922b ba5daa262bb947079f9d2fc54f5e9234 
d819429055d4416bbfc3d693b1571388 - - -] [instance: 
a23088f2-444f-4de4-89e0-593e5502be41] Fetching instance by UUID get 
/usr/lib/python2.7/dist-packages/nova/compute/api.py:1911
2015-09-03 11:06:31.522 2574 INFO nova.osapi_compute.wsgi.server 
[req-a663fee3-b720-4d9e-bc7d-646d4d80922b ba5daa262bb947079f9d2fc54f5e9234 
d819429055d4416bbfc3d693b1571388 - - -] 10.83.100.0 "POST 
/v2/d819429055d4416bbfc3d693b1571388/servers/a23088f2-444f-4de4-89e0-593e5502be41/action
 HTTP/1.1" status: 200 len: 8215 time: 0.2391620
2015-09-03 11:12:54.877 2586 DEBUG keystoneclient.session [-] REQ: curl -g -i 
-X GET http://10.83.100.0:35357/v3/auth/tokens -H "X-Subject-Token: 
{SHA1}605ad7f8b06f1a6319321f83741e7dfa6a7b7418" -H "User-Agent: 
python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}4887951ff0677461e8630e305ace2b3194f3477c" _http_log_request 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:195
2015-09-03 11:12:54.960 2586 DEBUG keystoneclient.session [-] RESP: [200] 
content-length: 7110 x-subject-token: 
{SHA1}605ad7f8b06f1a6319321f83741e7dfa6a7b7418 vary: X-Auth-Token connection: 
keep-alive date: Thu, 03 Sep 2015 10:12:54 GMT content-type: application/json 
x-distribution: Ubuntu
RESP BODY: {"token": {"methods": ["password", "token"], "roles": [{"id": 
"9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}, {"id": 
"8d4178ea39e04db68e9d30c1105a2bd8", "name": "admin"}], "expires_at": 
"2015-09-06T10:12:54.00Z", "project": {"domain": {"id": "default", "name": 
"Default"}, "id": "6560b54895bf4966bf332659f1c32b32", "name": "admin"}, 
"catalog": "", "extras": {}, "user": {"domain": {"id": "default", 
"name": "Default"}, "id": "a5296228ddd6417eb8b63201fc258a6f", "name": "admin"}, 
"audit_ids": ["E1HDBQ7sQgudJje8VvW7rg"], "issued_at": 
"2015-09-03T10:12:54.864888"}}
 _http_log_response 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:224
2015-09-03 11:12:54.968 2586 DEBUG nova.api.openstack.wsgi 
[req-622c5c35-d42c-4aa4-bf69-7cdae5f74ac5 a5296228ddd6417eb8b63201fc258a6f 
6560b54895bf4966bf332659f1c32b32 - - -] Calling method '>' _process_stack 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:783
2015-09-03 11:12:54.969 2586 DEBUG nova.compute.api 
[req-622c5c35-d42c-4aa4-bf69-7cdae5f74ac5 a5296228ddd6417eb8b63201fc258a6f 
6560b54895bf4966bf332659f1c32b32 - - -] [instance: 
27d8f8d0-efd5-42bd-9c56-4ddd159833d1] Fetching instance by UUID get 
/usr/lib/python2.7/dist-packages/nova/compute/api.py:1911
2015-09-03 11:12:55.023 2586 DEBUG nova.objects.instance 
[req-622c5c35-d42c-4aa4-bf69-7cdae5f74ac5 a5296228ddd6417eb8b63201fc258a6f 
6560b54895bf4966bf332659f1c32b32 - - -] Lazy-loading `flavor' on Instance uuid 
27d8f8d0-efd5-42bd-9c56-4ddd159833d1 obj_load_attr 
/usr/lib/python2.7/dist-packages/nova/objects/instance.py:995
2015-09-03 11:12:55.076 2586 DEBUG nova.objects.instance 
[req-622c5c35-d42c-4aa4-bf69-7cdae5f74ac5 a5296228ddd6417eb8b63201fc258a6f 
6560b54895bf4966bf332659f1c32b32 - - -] Lazy-loading `fault' on Instance uuid 
27d8f8d0-efd5-42bd-9c56-4ddd159833d1 obj_load_attr 
/usr/lib/python2.7/dist-packages/nova/objects/instance.py:995
2015-09-03 11:12:55.083 2586 DEBUG keystoneclient.session 
[req-622c5c35-d42c-4aa4-bf69-7cdae5f74ac5 a5296228ddd6417eb8b63201fc258a6f 
6560b54895bf4966bf332659f1c32b32 - - -] REQ: curl -g -i -X GET 
http://10.83.100.0:9696/v2.0/port

[Yahoo-eng-team] [Bug 1487570] Re: test_list_servers_by_admin_with_all_tenants fails with InstanceNotFound trying to lazy-load flavor

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487570

Title:
  test_list_servers_by_admin_with_all_tenants fails with
  InstanceNotFound trying to lazy-load flavor

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://logs.openstack.org/70/215170/1/check/gate-tempest-dsvm-nova-v21-full/3fdc0d6/console.html#_2015-08-21_16_04_53_513

  2015-08-21 16:04:53.514 | Captured traceback:
  2015-08-21 16:04:53.514 | ~~~
  2015-08-21 16:04:53.514 | Traceback (most recent call last):
  2015-08-21 16:04:53.514 |   File 
"tempest/api/compute/admin/test_servers.py", line 81, in 
test_list_servers_by_admin_with_all_tenants
  2015-08-21 16:04:53.514 | body = 
self.client.list_servers(detail=True, **params)
  2015-08-21 16:04:53.514 |   File 
"tempest/services/compute/json/servers_client.py", line 159, in list_servers
  2015-08-21 16:04:53.514 | resp, body = self.get(url)
  2015-08-21 16:04:53.514 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 271, in get
  2015-08-21 16:04:53.514 | return self.request('GET', url, 
extra_headers, headers)
  2015-08-21 16:04:53.514 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 643, in request
  2015-08-21 16:04:53.515 | resp, resp_body)
  2015-08-21 16:04:53.515 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 754, in _error_checker
  2015-08-21 16:04:53.515 | raise exceptions.ServerFault(resp_body, 
message=message)
  2015-08-21 16:04:53.515 | tempest_lib.exceptions.ServerFault: Got server 
fault
  2015-08-21 16:04:53.515 | Details: Unexpected API Error. Please report 
this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  2015-08-21 16:04:53.515 | 

  There is a trace in the n-api logs when trying to lazy-load a flavor
  on an instance:

  http://logs.openstack.org/70/215170/1/check/gate-tempest-dsvm-nova-v21-full/3fdc0d6/logs/screen-n-api.txt.gz?level=TRACE#_2015-08-21_15_39_06_148

  2015-08-21 15:39:06.148 ERROR nova.api.openstack.extensions 
[req-5eca1fa9-7948-4a3d-bc80-7e84441bb74e 
tempest-ServersAdminTestJSON-973647583 tempest-ServersAdminTestJSON-692147874] 
Unexpected exception in API method
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 264, in detail
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions servers 
= self._get_servers(req, is_detail=True)
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 389, in 
_get_servers
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions 
response = self._view_builder.detail(req, instance_list)
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/views/servers.py", line 126, in 
detail
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions return 
self._list_view(self.show, request, instances, coll_name)
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/views/servers.py", line 138, in 
_list_view
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions 
server_list = [func(request, server)["server"] for server in servers]
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/views/servers.py", line 266, in 
show
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions 
"flavor": self._get_flavor(request, instance),
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/views/servers.py", line 198, in 
_get_flavor
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions 
instance_type = instance.get_flavor()
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/objects/instance.py", line 890, in get_flavor
  2015-08-21 15:39:06.148 22838 ERROR nova.api.openstack.extensions retu

[Yahoo-eng-team] [Bug 1481220] Re: Cannot boot from volume-backed instance snapshot

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1481220

Title:
  Cannot boot from volume-backed instance snapshot

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Since https://review.openstack.org/#/c/188789/ was merged, Nova fails
  to boot an instance from a volume-backed instance snapshot.

  Steps to reproduce against DevStack:
  1. Boot an instance from a volume:
  nova boot inst --block-device source=image,dest=volume,size=1,bootindex=0,id= --flavor m1.nano

  2. Create a volume-backed snapshot:
  nova image-create inst volback

  3. Boot a new instance from the snapshot:
  nova boot inst1 --image volback --flavor m1.nano

  Expected result: the new instance attributes.
  Actual result:
  ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500)

  n-api log:
     File "/opt/stack/nova/nova/compute/api.py", line 1488, in create
   check_server_group_quota=check_server_group_quota)
     File "/opt/stack/nova/nova/compute/api.py", line 1097, in _create_instance
   auto_disk_config, reservation_id, max_count)
     File "/opt/stack/nova/nova/compute/api.py", line 854, in 
_validate_and_build_base_options
   image_meta = objects.ImageMeta.from_dict(boot_meta)
     File "/opt/stack/nova/nova/objects/image_meta.py", line 91, in from_dict
   return cls(**image_meta)
     File "/opt/stack/nova/nova/objects/base.py", line 188, in __init__
   setattr(self, key, kwargs[key])
     File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
70, in setter
   field_value = field.coerce(self, name, value)
     File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/fields.py", line 
184, in coerce
   return self._null(obj, attr)
     File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/fields.py", line 
162, in _null
   raise ValueError(_("Field `%s' cannot be None") % attr)
   ValueError: Field `container_format' cannot be None
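
  The coercion failure can be reproduced in isolation; a minimal sketch
  (illustrative classes, not nova's or oslo.versionedobjects' code) of a
  non-nullable field rejecting the None that volume-backed snapshot
  images carry for container_format:

    class StringField(object):
        def __init__(self, nullable=False):
            self.nullable = nullable

        def coerce(self, name, value):
            if value is None:
                if self.nullable:
                    return None
                raise ValueError("Field `%s' cannot be None" % name)
            return str(value)

    field = StringField(nullable=False)
    image_meta = {'container_format': None}  # typical for these snapshots
    try:
        field.coerce('container_format', image_meta['container_format'])
    except ValueError as e:
        print(e)  # Field `container_format' cannot be None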

  !!! This blocks gating of stackforge/ec2-api project !!!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1481220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481164] Re: Invalid root device name for volume-backed instances with libvirt

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1481164

Title:
  Invalid root device name for volume-backed instances with libvirt

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Since https://review.openstack.org/#/c/189632/ was merged, the root
  device name of volume-backed instances is /dev/vdb with libvirt.

  Steps to reproduce against DevStack:
  1. Boot an instance:
  nova boot --block-device source=image,dest=volume,size=1,bootindex=0,id=xxx --flavor m1.nano inst
  where xxx is an image id.

  2. Look at the device name:
  openstack volume list

  Expected value: /dev/vda
  Actual value: /dev/vdb

  Inside guest OS the volume is displayed as /dev/vda.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1481164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480129] Re: nova rbd driver features are hard-coded, it should be readable from ceph.conf

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480129

Title:
  nova rbd driver features are hard-coded, it should be readable from
  ceph.conf

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In the nova rbd driver, the rbd features are hard-coded:

  rbd.RBD().clone(src_client.ioctx,
                  image.encode('utf-8'),
                  snapshot.encode('utf-8'),
                  dest_client.ioctx,
                  dest_name,
                  features=rbd.RBD_FEATURE_LAYERING)

  
  As the code above shows, RBD_FEATURE_LAYERING is passed directly,
  which restricts users to that single hard-coded feature.

  We should provide a fix that allows users to opt in to upcoming
  features that have not yet become default: users would specify the
  features in ceph.conf, and nova would read the feature information
  from ceph.conf.

  The fix should be something like: read rbd_default_features from
  ceph.conf for the rbd features configuration, falling back to
  layering if nothing is found.
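
  A hedged sketch of that approach, assuming the python rados/rbd
  bindings (not the merged patch; exact option handling may differ):

    import rados
    import rbd

    def get_rbd_features(conffile='/etc/ceph/ceph.conf'):
        # Read rbd_default_features from ceph.conf; fall back to
        # layering when the option is not set.
        client = rados.Rados(conffile=conffile)
        try:
            value = client.conf_get('rbd_default_features')
        finally:
            client.shutdown()
        if value:
            return int(value)
        return rbd.RBD_FEATURE_LAYERING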

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480129/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479869] Re: Creating a server fails with an error about checksum field

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1479869

Title:
  Creating a server fails with an error about checksum field

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  As seen here for example: http://logs.openstack.org/18/205018/2/gate/gate-heat-dsvm-functional-orig-mysql/aa761d5/

  It gets the error: Caught error: Field `checksum' cannot be None

  On logstash it's been happening for some hours:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVmFsdWVFcnJvcjogRmllbGQgYGNoZWNrc3VtJyBjYW5ub3QgYmUgTm9uZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI4NjQwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzgyNzEwNjEyMTJ9

  From bug https://bugs.launchpad.net/cinder/+bug/1308058 it seems that
  checksum ought to be able to be None, but obviously it's not all the
  time, so I suppose there is a race condition somewhere. We ought to
  get a better error if it's transient.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1479869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481043] Re: nova.tests.unit.cmd.test_baseproxy.BaseProxyTestCase.test_proxy randomly fails

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1481043

Title:
  nova.tests.unit.cmd.test_baseproxy.BaseProxyTestCase.test_proxy
  randomly fails

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://logs.openstack.org/59/208559/1/check/gate-nova-python27/61df9b7/console.html#_2015-08-03_17_31_39_585

  2015-08-03 17:31:39.585 | {2} 
nova.tests.unit.cmd.test_baseproxy.BaseProxyTestCase.test_proxy [0.059688s] ... 
FAILED
  2015-08-03 17:31:39.585 | 
  2015-08-03 17:31:39.585 | Captured traceback:
  2015-08-03 17:31:39.585 | ~~~
  2015-08-03 17:31:39.586 | Traceback (most recent call last):
  2015-08-03 17:31:39.586 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
  2015-08-03 17:31:39.586 | return func(*args, **keywargs)
  2015-08-03 17:31:39.586 |   File "nova/tests/unit/cmd/test_baseproxy.py", 
line 63, in test_proxy
  2015-08-03 17:31:39.586 | 
RequestHandlerClass=websocketproxy.NovaProxyRequestHandler)
  2015-08-03 17:31:39.586 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 948, in assert_called_once_with
  2015-08-03 17:31:39.586 | return self.assert_called_with(*args, 
**kwargs)
  2015-08-03 17:31:39.586 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 937, in assert_called_with
  2015-08-03 17:31:39.586 | 
six.raise_from(AssertionError(_error_message(cause)), cause)
  2015-08-03 17:31:39.586 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/six.py",
 line 692, in raise_from
  2015-08-03 17:31:39.587 | raise value
  2015-08-03 17:31:39.587 | AssertionError: Expected call: 
__init__(RequestHandlerClass=, cert='self.pem', 
daemon=False, file_only=True, key=None, listen_host='0.0.0.0', 
listen_port='6080', record=False, source_is_ipv6=False, ssl_only=False, 
traffic=False, verbose=False, web='/usr/share/spice-html5')
  2015-08-03 17:31:39.587 | Actual call: 
__init__(RequestHandlerClass=, cert='self.pem', 
daemon=False, file_only=True, key=None, listen_host='0.0.0.0', 
listen_port='6080', record=False, source_is_ipv6=False, ssl_only=False, 
traffic=True, verbose=True, web='/usr/share/spice-html5')

  Looks like this change introduced it:
  https://review.openstack.org/#/c/204723/

  It's also running with websockify 0.7.0 now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1481043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482230] Re: LibvirtConnTestCase.test_clean_shutdown_first_time segfaults

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482230

Title:
  LibvirtConnTestCase.test_clean_shutdown_first_time segfaults

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://logs.openstack.org/77/202777/13/gate/gate-nova-python27/af2fc54/console.html#_2015-08-06_12_39_20_984

  note later:

  {5}
  
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_clean_shutdown_first_time
  [] ... inprogress

  2015-08-06 12:39:46.248 | Traceback (most recent call last):
  2015-08-06 12:39:46.248 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/bin/subunit-trace", line 
11, in 
  2015-08-06 12:39:46.248 | sys.exit(main())
  2015-08-06 12:39:46.249 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/os_testr/subunit_trace.py",
 line 328, in main
  2015-08-06 12:39:46.249 | print_summary(sys.stdout, elapsed_time)
  2015-08-06 12:39:46.249 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/os_testr/subunit_trace.py",
 line 268, in print_summary
  2015-08-06 12:39:46.250 | num, time = worker_stats(w)
  2015-08-06 12:39:46.250 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/os_testr/subunit_trace.py",
 line 242, in worker_stats
  2015-08-06 12:39:46.250 | delta = tests[-1]['timestamps'][1] - 
tests[0]['timestamps'][0]
  2015-08-06 12:39:46.251 | TypeError: unsupported operand type(s) for -: 
'NoneType' and 'datetime.datetime'
  2015-08-06 12:39:46.251 | ERROR: InvocationError: '/bin/bash 
tools/pretty_tox.sh '

  Looks like this just started happening:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwibm92YS50ZXN0cy51bml0LnZpcnQubGlidmlydC50ZXN0X2RyaXZlci5MaWJ2aXJ0Q29ublRlc3RDYXNlLnRlc3RfY2xlYW5fc2h1dGRvd25fZmlyc3RfdGltZSBbXSAuLi4gaW5wcm9ncmVzc1wiIEFORCBidWlsZF9uYW1lOlwiZ2F0ZS1ub3ZhLXB5dGhvbjI3XCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mzg4NjkxNjUxNzgsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1482230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478702] Re: Unable to clear device ID for port 'None'

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1478702

Title:
  Unable to clear device ID for port 'None'

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I'm seeing this trace in an ironic job but it shows up in other jobs
  as well:

  http://logs.openstack.org/75/190675/2/check/gate-tempest-dsvm-ironic-pxe_ssh-full-nv/2c65f3f/logs/screen-n-cpu.txt.gz#_2015-07-26_00_36_47_257

  2015-07-26 00:36:47.257 ERROR nova.network.neutronv2.api 
[req-57d4e9e6-adf1-4774-a27a-63d096fe48e6 tempest-ServersTestJSON-1332826451 
tempest-ServersTestJSON-2014105270] Unable to clear device ID for port 'None'
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api Traceback 
(most recent call last):
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
"/opt/stack/new/nova/nova/network/neutronv2/api.py", line 365, in _unbind_ports
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
port_client.update_port(port_id, port_req_body)
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
102, in with_params
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api ret = 
self.function(instance, *args, **kwargs)
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
549, in update_port
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api return 
self.put(self.port_path % (port), body=body)
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
302, in put
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
headers=headers, params=params)
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
270, in retry_request
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
headers=headers, params=params)
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
211, in do_request
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
self._handle_fault_response(status_code, replybody)
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
185, in _handle_fault_response
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
exception_handler_v20(status_code, des_error_body)
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api   File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 83, 
in exception_handler_v20
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
message=message)
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
NeutronClientException: 404 Not Found
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api The resource 
could not be found.
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api
  2015-07-26 00:36:47.257 20871 ERROR nova.network.neutronv2.api 

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVW5hYmxlIHRvIGNsZWFyIGRldmljZSBJRCBmb3IgcG9ydCAnTm9uZSdcIiBBTkQgdGFnczpcInNjcmVlbi1uLWNwdS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzODAzMTMwMzgzNX0=

  This affects master and stable/kilo, since we added the
  preserve-pre-existing-ports support to the neutron v2 API in nova.

  My guess is this happens in the deallocate_for_instance call and the
  port_id in the requested_networks dict is None, but we don't filter
  those out properly.
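
  A sketch of that guess (illustrative, not the merged fix): skip
  entries without a port_id before calling update_port.

    def _unbind_ports_filtered(port_client, requested_networks):
        for request in requested_networks:
            port_id = request.get('port_id')
            if not port_id:
                # Nothing was pre-created for this request; skipping
                # avoids update_port(None, ...) and the resulting 404.
                continue
            port_client.update_port(port_id, {'port': {'device_id': ''}})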

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1478702/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479780] Re: test_describe_instances_with_filters_tags is non deterministic

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1479780

Title:
  test_describe_instances_with_filters_tags is non deterministic

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  test_describe_instances_with_filters_tags is often failing in the
  gate, through some non-determinism in the test or the ec2 code itself.
  However, because all comparisons are done with assertJsonEqual,
  exactly what the data structures look like at failure time isn't easy
  to deduce.

  The following patch https://review.openstack.org/#/c/207403/ will dump
  more details about what's gone wrong, then hopefully we can get to the
  bottom of it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1479780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475663] Re: Incorrect behaviour of method _check_instance_exists

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475663

Title:
  Incorrect behaviour of method _check_instance_exists

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This method must check instance existence within the CURRENT token,
  but it currently checks instance existence in ANY token. This happens
  because the token_only parameter was missing from the sqlalchemy
  query.
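
  A rough illustration with a hypothetical SQLAlchemy model (names are
  illustrative, not nova's schema): without the token filter, the
  existence check matches rows belonging to any token.

    def check_instance_exists(session, model, instance_uuid, token_id):
        query = session.query(model).filter_by(instance_uuid=instance_uuid)
        # The reported bug: this filter was omitted, so an instance
        # recorded under ANY token satisfied the check instead of only
        # the CURRENT one.
        query = query.filter_by(token_id=token_id)
        return session.query(query.exists()).scalar()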

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475663/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471953] Re: DiskBus enum object field is missing 'uml' and 'lxc' types

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471953

Title:
  DiskBus enum object field is missing 'uml' and 'lxc' types

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The nova.objects.fields.DiskBus enum field type added here:

  https://review.openstack.org/#/c/76234/

  is missing the 'uml' and 'lxc' disk bus types checked here:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/blockinfo.py#n112

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1471953/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470562] Re: 'in use' error when Nova volume encryptors format cinder volumes

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470562

Title:
  'in use' error when Nova volume encryptors format cinder volumes

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Tempest scenario TestEncryptedCinderVolumes has been silently skipped when 
run with NetApp iSCSI cinder volume drivers because they did not set the 
'encrypted' key in the connection_info['data'] dict in their 
initialize_connection methods. Change
https://review.openstack.org/#/c/193673/ - which sets the encrypted
flag generically, in the VolumeManager's initialize_connection, on the
basis of the volume.encryption_key_id value - causes this test to
actually run its encryption providers and exposes a problem in
LuksEncryptor._format_volume() for these iSCSI volumes.

  In TestEncryptedCinderVolumes we get the following exception:

  2015-06-29 06:27:18.866 ERROR nova.virt.libvirt.driver 
[req-7124728f-64b7-4951-806b-10901bb2f6b9 TestEncryptedCinderVolumes-1766245790 
TestEncryptedCinderVolumes-1100414243] [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c] Failed to attach volume at mountpoint: 
/dev/vdb
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c] Traceback (most recent call last):
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1082, in attach_volume
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c] encryptor.attach_volume(context, 
**encryption)
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c]   File 
"/opt/stack/new/nova/nova/volume/encryptors/luks.py", line 113, in attach_volume
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c] self._format_volume(passphrase, 
**kwargs)
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c]   File 
"/opt/stack/new/nova/nova/volume/encryptors/luks.py", line 78, in _format_volume
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c] check_exit_code=True, 
run_as_root=True)
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c]   File 
"/opt/stack/new/nova/nova/utils.py", line 229, in execute
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c] return processutils.execute(*cmd, 
**kwargs)
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 
260, in execute
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c] cmd=sanitized_cmd)
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c] ProcessExecutionError: Unexpected error 
while running command.
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf cryptsetup --batch-mode luksFormat --key-file=- 
--cipher aes-xts-plain64 --key-size 512 /dev/sdh
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c] Exit code: 5
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c] Stdout: u''
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c] Stderr: u'Cannot format device /dev/sdh 
which is still in use.\n'
  2015-06-29 06:27:18.866 12852 ERROR nova.virt.libvirt.driver [instance: 
4a5dd4fd-78f8-42ea-8736-406ccb178d7c] 

  This is on master; the corresponding code is:
  
https://github.com/openstack/nova/blob/master/nova/volume/encryptors/luks.py#L78

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474074] Re: PciDeviceList is not versioned properly in liberty and kilo

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474074

Title:
  PciDeviceList is not versioned properly in liberty and kilo

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The following commit:

  https://review.openstack.org/#/c/140289/4/nova/objects/pci_device.py

  missed bumping the PciDeviceList version.

  We should do it now (master @ 4bfb094) and backport it to stable
  Kilo as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1474074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471751] Re: VMware: unable to delete VM with attached volumes

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471751

Title:
  VMware: unable to delete VM with attached volumes

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When the ESX host running the VMs was deleted or removed, the
  following exception occurred when the instance was deleted via
  OpenStack:

  2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher 
ManagedObjectNotFound: The object has already been deleted or has not been 
completely created
  2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher Cause: 
Server raised fault: 'The object has already been deleted or has not been 
completely created'
  2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher Faults: 
[ManagedObjectNotFound]
  2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher Details: 
{'obj': 'vm-2073'}
  2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher
  2015-07-02 21:33:00.653 21297 ERROR oslo.messaging._drivers.common [-] 
Returning exception The object has already been deleted or has not been 
completely created
  Cause: Server raised fault: 'The object has already been deleted or has not 
been completely created'
  Faults: [ManagedObjectNotFound]
  Details: {'obj': 'vm-2073'} to caller

  This prevents the instance from being deleted in OpenStack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1471751/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475202] Re: Snapshot deleting of attached volume fails with remotefs volume drivers

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475202

Title:
  Snapshot deleting of attached volume fails with remotefs volume
  drivers

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  cinder create --image-id 3dc83685-ed82-444c-8863-1e962eb33de8 1  # ID of cirros image

  nova boot qwe --flavor m1.tiny --block-device id=d62c5786-1d13-46bb-be13-3b110c144de7,source=volume,dest=volume,type=disk,bootindex=0

  cinder snapshot-create --force=True 46b22595-31b0-41ca-8214-8ad6b81a06b6

  cinder snapshot-delete 43fb72a4-963f-45f7-8b42-89e7c2cbd720

  
  Then check the nova-compute log:

  2015-07-16 08:44:26.841 ERROR nova.virt.libvirt.driver 
[req-f92f3dd2-1bef-4c2c-8208-54d765592985 nova service] Error occurred during 
volume_snapshot_delete, sending error status to Cinder.
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver Traceback (most 
recent call last):
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2004, in 
volume_snapshot_delete
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver 
self._volume_snapshot_delete(context, instance, volume_id,
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1939, in 
_volume_snapshot_delete
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver dev = 
guest.get_block_device(rebase_disk)
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 302, in rebase
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver self._disk, 
base, self.REBASE_DEFAULT_BANDWIDTH, flags=flags)
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver result = 
proxy_call(self._autowrap, f, *args, **kwargs)
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver rv = 
execute(f, *args, **kwargs)
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver 
six.reraise(c, e, tb)
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver rv = 
meth(*args, **kwargs)
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
"/usr/lib/python2.7/site-packages/libvirt.py", line 865, in blockRebase
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver if ret == 
-1: raise libvirtError ('virDomainBlockRebase() failed', dom=self)
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver libvirtError: 
invalid argument: flag VIR_DOMAIN_BLOCK_REBASE_RELATIVE is valid only with 
non-null base

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475202/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462295] Re: xenapi needs pre live-migration plugin to check/fake pv driver version in xenstore

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462295

Title:
  xenapi needs pre live-migration plugin to check/fake pv driver version
  in xenstore

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  XenServer relies on the guest (domU) to report the presence of PV
  drivers in the guest image back to dom0 through xenstore for various
  actions like live migration.

  It is possible for users to disable the xen agent that reports this
  info, thereby causing failures in live migration. In cases where PV
  drivers are running, it is safe to fake the presence of this
  information in xenstore; XAPI simply reads it to ascertain the
  presence of PV drivers.

  Since it is common for users to disable the agent, we need a way to
  ensure that if PV tools are present (we can check this through the
  presence of PV devices like vifs) we can carry out a live migration.
  We can easily do this by faking the driver version in xenstore for
  the instance prior to starting live migration.

  In cases where this info is not present in xenstore, xapi will simply fail 
the migration attempt with
  VM_MISSING_PV_DRIVERS error.

  2014-04-24 20:47:36.938 24870 TRACE nova.virt.xenapi.vmops Failure:
  ['VM_MISSING_PV_DRIVERS', 'OpaqueRef:ef49f129-691d-
  4e18-7a09-8dae8928aa7a']
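
  A rough sketch of the proposed workaround (illustrative xenstore keys
  and a hypothetical write_xenstore helper; not the actual dom0 plugin):

    def fake_pv_driver_version(write_xenstore, domid, major='6', minor='2'):
        # write_xenstore is assumed to wrap "xenstore-write <path> <value>"
        # run in dom0; the PVAddons keys mimic what the in-guest agent
        # would normally publish.
        base = '/local/domain/%s/attr/PVAddons' % domid
        write_xenstore(base + '/MajorVersion', major)
        write_xenstore(base + '/MinorVersion', minor)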

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1462295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467581] Re: Concurrent interface attachment corrupts info cache

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467581

Title:
  Concurrent interface attachment corrupts info cache

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Concurrently attaching multiple network interfaces to a single
  instance can often result in corruption of the instance's information
  cache in Nova. The result is that some network interfaces may be
  missing from 'nova list', and silently fail to detach when 'nova
  interface-detach' is run. The ports are listed in 'nova interface-
  list', however, and can be seen in 'neutron port-list'.

  Initially seen on CentOS7 running Juno. Reproduced on Ubuntu 14.04
  running devstack (master branch).

  This issue is similar (possibly identical) to bug 1326183, and the
  steps to reproduce it are similar also.

  1) Devstack with trunk with the following local.conf:
  disable_service n-net
  enable_service q-svc
  enable_service q-agt
  enable_service q-dhcp
  enable_service q-meta
  RECLONE=yes
  # and other options as set in the trunk's local

  2) Create few networks:
  $> neutron net-create testnet1
  $> neutron net-create testnet2
  $> neutron net-create testnet3
  $> neutron subnet-create testnet1 192.168.1.0/24
  $> neutron subnet-create testnet2 192.168.2.0/24
  $> neutron subnet-create testnet3 192.168.3.0/24

  3) Create a testvm in testnet1:
  $> nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64-uec --nic 
net-id=`neutron net-list | grep testnet1 | cut -f 2 -d ' '` testvm

  4) Run the following shell script to attach and detach interfaces for this vm 
in the remaining two networks in a loop until we run into the issue at hand:
  -
  #! /bin/bash
  c=1
  netid1=`neutron net-list | grep testnet2 | cut -f 2 -d ' '`
  netid2=`neutron net-list | grep testnet3 | cut -f 2 -d ' '`
  while [ $c -gt 0 ]
  do
 echo "Round: " $c
 echo -n "Attaching two interfaces concurrently... "
 nova interface-attach --net-id $netid1 testvm &
 nova interface-attach --net-id $netid2 testvm &
 wait
 echo "Done"
 echo "Sleeping until both those show up in nova show"
 waittime=0
 while [ $waittime -lt 60 ]
 do
 count=`nova show testvm | grep testnet | wc -l`
 if [ $count -eq 3 ]
 then
 break
 fi
 sleep 2
 (( waittime+=2 ))
 done
 echo "Waited for " $waittime " seconds"
 if [ $waittime -ge 60 ]
 then
echo "bad case"
exit 1
 fi
 echo "Detaching both... "
 nova interface-list testvm | grep $netid1 | awk '{print "deleting ",$4; 
system("nova interface-detach testvm "$4 " ; sleep 2");}'
 nova interface-list testvm | grep $netid2 | awk '{print "deleting ",$4; 
system("nova interface-detach testvm "$4 " ; sleep 2");}'
 echo "Done; check interfaces are gone in a minute."
 waittime=0
 while [ $waittime -lt 60 ]
 do
 count=`nova interface-list testvm | wc -l`
 echo "line count: " $count
 if [ $count -eq 5 ]
 then
 break
 fi
 sleep 2
 (( waittime+=2 ))
 done
 if [ $waittime -ge 60 ]
 then
echo "failed to detach interfaces - raise another bug!"
exit 1
 fi
 echo "Interfaces are gone"
 (( c-- ))
  done
  -

  Eventually the test will stop with a failure ("bad case") and the
  interface remaining either from testnet2 or testnet3 can not be
  detached at all.

  For me, eventually is every time.

  Based on my analysis of the source code, the concurrent requests cause
  corruption of the instance network info cache. Each takes a copy of
  the info cache at the start of the request processing, which contains
  only the initial network. Each request thread then allocates a network
  port and adds it to the network info. This info object is then saved
  back to the DB. In each case, the info contains the initial network
  and the network that has been added by that thread. Therefore, the
  last thread to save wins, and the other network is lost.

  I have a patch that appears to fix the issue, by refreshing the info
  cache whilst holding the refresh-cache- lock. However, I'm not
  intimately familiar with the nova networking code so would appreciate
  more experienced eyes on it. I will submit the change to gerrit for
  analysis and comments.
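
  A toy, deterministic illustration of the lost update described above
  (no nova code involved): both requests snapshot the same cache before
  either saves, so the last save wins.

    cache = {'ports': ['port-initial']}

    snapshot_a = list(cache['ports'])   # request A copies the info cache
    snapshot_b = list(cache['ports'])   # request B copies it concurrently

    snapshot_a.append('port-a')
    cache['ports'] = snapshot_a         # A saves: initial + port-a

    snapshot_b.append('port-b')
    cache['ports'] = snapshot_b         # B saves last: port-a is lost

    print(cache['ports'])               # ['port-initial', 'port-b']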

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463044] Re: Hyper-V: the driver fails to initialize on Windows Server 2008 R2

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463044

Title:
  Hyper-V: the driver fails to initialize on Windows Server 2008 R2

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The Hyper-V driver uses the Microsoft\Windows\SMB WMI namespace in
  order to handle SMB shares. The issue is that this namespace is not
  available on Windows versions prior to Windows Server 2012.

  For this reason, the Hyper-V driver fails to initialize on Windows
  Server 2008 R2.

  Trace: http://paste.openstack.org/show/271422/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1463044/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469006] Re: Live migration fails with a booted from volume instance

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1469006

Title:
  Live migration fails with a booted from volume  instance

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This error always occurs in a non-shared-storage environment, even if
  the instance is booted from a volume.

  # nova live-migration c5993bdb-c230-48b4-ba42-70c680372640 dest_host

  ERROR (BadRequest): src_host is not on shared storage: Live migration
  can not be used without shared storage. (HTTP 400) (Request-ID: req-
  22dc7adb-fd13-40fd-b7da-10c2851df528)

  We should allow users to execute live migration for instances booted
  from a volume.
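
  A sketch of the requested behavior (not nova's merged check): only
  require shared storage when the instance actually has local disks.

    def check_can_live_migrate(has_shared_storage, is_volume_backed):
        if is_volume_backed:
            # All disks live on volumes; nothing on local storage needs
            # copying, so shared storage should not be required.
            return
        if not has_shared_storage:
            raise ValueError('Live migration can not be used without '
                             'shared storage.')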

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1469006/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461678] Re: nova error handling causes glance to keep unlinked files open, wasting space

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461678

Title:
  nova error handling causes glance to keep unlinked files open, wasting
  space

Status in OpenStack Compute (nova):
  Fix Released
Status in python-glanceclient:
  Fix Released

Bug description:
  When creating larger glance images (like a 10GB CentOS7 image), if we
  run into a situation where we run out of room on the destination
  device, we cannot recover the space from glance. glance-api will have
  open unlinked files, so a TONNE of space is unavailable until we
  restart glance-api.

  Nova will try to reschedule the instance 3 times, so you should see this in nova-conductor.log:
  u'RescheduledException: Build of instance 98ca2c0d-44b2-48a6-b1af-55f4b2db73c1 was re-scheduled: [Errno 28] No space left on device\n']

  The problem is this code in
  nova.image.glance.GlanceImageService.download():

      if data is None:
          return image_chunks
      else:
          try:
              for chunk in image_chunks:
                  data.write(chunk)
          finally:
              if close_file:
                  data.close()

  image_chunks is an iterator.  If we take an exception (like we can't
  write the file because the filesystem is full) then we will stop
  iterating over the chunks.  If we don't iterate over all the chunks
  then glance will keep the file open.
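
  A minimal sketch of a defensive fix along those lines (illustrative,
  not necessarily the patch that merged): drain the iterator before
  re-raising, so glance-api finishes the read side and closes its file
  handle.

      try:
          for chunk in image_chunks:
              data.write(chunk)
      except Exception:
          # Consume the remaining chunks so glance releases the
          # (possibly unlinked) image file.
          for _chunk in image_chunks:
              pass
          raise
      finally:
          if close_file:
              data.close()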

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447675] Re: directory listing of the service No-VNC

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447675

Title:
  directory listing of the service No-VNC

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added to
  the bug as attachments.

  Reported via private E-mail from Anass ANNOUR:

  I found a directory listing of the service No-VNC.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461065] Re: Security groups may break

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461065

Title:
  Security groups may break

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  Commit
  
https://github.com/openstack/nova/commit/171e5f8b127610d93a230a6f692d8fd5ea0d0301
  converted instance dicts to objects. There are cases in the security
  group code where these should still be dicts, which causes security
  group updates to break.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447380] Re: wrong cinder.conf.sample generation: missing directives for keystone_authtoken (at least)

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447380

Title:
  wrong cinder.conf.sample generation: missing directives for
  keystone_authtoken (at least)

Status in Cinder:
  Fix Released
Status in Cinder kilo series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Hi,

  When building the Debian Kilo RC1 package of Cinder, I'm generating
  the cinder.conf.sample file using (the same way tox would do):

  tools/config/generate_sample.sh -b . -p cinder -o etc/cinder

  Unfortunately, this resulted in a broken cinder.conf.sample, with at
  least the keystone_authtoken directives missing. It stops at
  "#hash_algorithms = md5" and everything after that point is missing:
  auth_host, auth_port, auth_protocol, identity_uri, admin_token,
  admin_user, admin_password and admin_tenant_name are all absent from
  the configuration file. "patrickeast" on IRC gave me a file (which I
  suppose was generated using devstack) from latest trunk, and it seems
  to have the exact same issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1447380/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454512] Re: Device for other volume is deleted unexpected during volume detach when iscsi multipath is used

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1454512

Title:
  Device for other volume is deleted unexpected during volume detach
  when iscsi multipath is used

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We found this issue while testing volume detachment when iSCSI
  multipath is used. When the same iSCSI portal and iqn are shared by
  multiple LUNs, devices belonging to other volumes may be deleted
  unexpectedly. This is found both in Kilo and in the latest code.

  For example, the devices under /dev/disk/by-path may look like below
  when LUN 23 and LUN 231 are from the same storage system and the same
  iSCSI portal and iqn are used:

  ls /dev/disk/by-path
  ip-192.168.3.50:3260-iscsi--lun-23
  ip-192.168.3.50:3260-iscsi--lun-231
  ip-192.168.3.51:3260-iscsi--lun-23
  ip-192.168.3.51:3260-iscsi--lun-231

  When we try to detach the volume corresponding to LUN 23 from the
  host, the devices belonging to LUN 231 are also deleted, which may
  make that data unavailable.

  Why this happen? After digging into the nova code, below is the clue:

  nova/virt/libvirt/volume.py
  770 def _delete_mpath(self, iscsi_properties, multipath_device, ips_iqns):
  771     entries = self._get_iscsi_devices()
  772     # Loop through ips_iqns to construct all paths
  773     iqn_luns = []
  774     for ip, iqn in ips_iqns:
  775         iqn_lun = '%s-lun-%s' % (iqn,
  776                                  iscsi_properties.get('target_lun', 0))
  777         iqn_luns.append(iqn_lun)
  778     for dev in ['/dev/disk/by-path/%s' % dev for dev in entries]:
  779         for iqn_lun in iqn_luns:
  780             if iqn_lun in dev:  ==> This is incorrect; the device for LUN 231 will make this True.
  781                 self._delete_device(dev)
  782
  783     self._rescan_multipath()

  Due to the incorrect logic in line 780, detaching LUN xx will delete
  devices for other LUNs starting with xx, such as xxy and xxz. We could
  use dev.endswith(iqn_lun) to avoid it, as the snippet below shows.
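
  A small illustration of the difference (device names abbreviated as
  above, with the iqn elided):

      devs = ['ip-192.168.3.50:3260-iscsi--lun-23',
              'ip-192.168.3.50:3260-iscsi--lun-231']
      iqn_lun = '-lun-23'
      print([d for d in devs if iqn_lun in d])         # matches both LUNs
      print([d for d in devs if d.endswith(iqn_lun)])  # matches only LUN 23
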
  ===
  stack@openstack-performance:~/tina/nova_iscsi_mp/nova$ git log -1
  commit f4504f3575b35ec14390b4b678e441fcf953f47b
  Merge: 3f21f60 5fbd852
  Author: Jenkins 
  Date: Tue May 12 22:46:43 2015 +

  Merge "Remove db layer hard-code permission checks for
  network_get_all_by_host"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1454512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447079] Re: Libvirt KVM: Huge pages need to be mapped shared to allow vhostuser access

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447079

Title:
  Libvirt KVM: Huge pages need to be mapped shared to allow vhostuser
  access

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  By default, when a KVM guest is set up using huge pages, the pages are
  mapped using the MAP_PRIVATE flag to mmap().

  The vhostuser VIF backend is designed to allow an external process to
  provide the QEMU network driver functionality. For some usecases of
  vhostuser, this requires that the external process be able to access
  the QEMU guest's memory pages directly. This is not possible when the
  hugepages are mapped with MAP_PRIVATE - they must use MAP_SHARED
  instead.

  The result is that current Nova hugepages config doesn't work too well
  with vhostuser VIF backend.

  The fix to this is to tell libvirt to use a shared mapping for huge
  pages

  http://libvirt.org/formatdomain.html#elementsCPU

  eg:

  <domain>
    ...
    <cpu>
      ...
      <numa>
        <cell id='0' cpus='0-1' memory='1048576' unit='KiB' memAccess='shared'/>
      </numa>
      ...
    </cpu>
    ...
  </domain>

  notice the addition of the memAccess attribute.

  There is no serious downside to using shared mappings, so Nova might
  as well just unconditionally request them all the time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429220] Re: libvirt does not ensure live migration will eventually complete (or abort)

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429220

Title:
  libvirt does not ensure live migration will eventually complete (or abort)

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Currently the libvirt driver's approach to live migration is best
  characterized as "launch & pray". It starts the live migration
  operation and then just unconditionally waits for it to finish. It
  never makes any attempt to tune its behaviour (for example changing
  max downtime), nor does it look at the data transfer statistics to
  check if it is making any progress, nor does it have any overall
  timeout.

  It is not uncommon for guests to have workloads that will preclude
  live migration from completing. Basically they can be dirtying guest
  RAM (or block devices) faster than the network is able to transfer it
  to the destination host. In such a case Nova will just leave the
  migration running, burning up host CPU cycles and trashing network
  bandwidth until the end of the universe.

  There are many features exposed by libvirt that Nova could be using
  to do a better job, but the question is obviously which features, and
  how they should be used. Fortunately Nova is not the first project
  to come across this problem. The oVirt data center mgmt project has
  the exact same problem. So rather than trying to invent some new logic
  for Nova, we should, as an immediate bug fix task, just copy the oVirt
  logic from VDSM

  https://github.com/oVirt/vdsm/blob/master/vdsm/virt/migration.py#L430

  If we get this out to users and then get real world feedback on how it
  operates, we will have an idea of how/where to focus future ongoing
  efforts.
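
  For illustration, a minimal watchdog in the spirit of that VDSM logic
  (the threshold and helper shape are made up for this sketch; they are
  not nova's eventual implementation), using the libvirt-python job API:

      import time
      import libvirt

      def watch_migration(dom, stall_timeout=150):
          last_remaining = None
          stalled_since = time.time()
          while True:
              info = dom.jobInfo()
              if info[0] == libvirt.VIR_DOMAIN_JOB_NONE:
                  return  # job finished (completed, failed or aborted)
              remaining = info[5]  # dataRemaining, in bytes
              if last_remaining is None or remaining < last_remaining:
                  last_remaining = remaining
                  stalled_since = time.time()  # forward progress made
              elif time.time() - stalled_since > stall_timeout:
                  dom.abortJob()  # stuck: stop burning CPU and bandwidth
                  return
              time.sleep(5)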

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1429220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288039] Re: live-migration cinder boot volume target_lun id incorrect

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288039

Title:
  live-migration cinder boot volume target_lun id incorrect

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When nova runs its _post_live_migration cleanup on the source host, the
  block_device_mapping has incorrect data.

  I can reproduce this 100% of the time with a cinder iSCSI backend,
  such as 3PAR.

  This is a fresh install on 2 new servers with no attached storage from Cinder and no VMs.
  I create a cinder volume from an image.
  I create a VM booted from that Cinder volume. That vm shows up on host1 with a LUN id of 0.
  I live migrate that vm. The vm moves to host2 and has a LUN id of 0. The LUN on host1 is now gone.

  I create another cinder volume from an image.
  I create another VM booted from the 2nd cinder volume. The vm shows up on host1 with a LUN id of 0.
  I live migrate that vm. The VM moves to host2 and has a LUN id of 1.
  _post_live_migration is called on host1 to clean up, and gets failures, because it's asking cinder to delete the volume on host1 with a target_lun id of 1, which doesn't exist. It's supposed to be asking cinder to detach LUN 0.

  First migrate
  HOST2
  2014-03-04 19:02:07.870 WARNING nova.compute.manager [req-24521cb1-8719-4bc5-b488-73a4980d7110 admin admin] pre_live_migrate: {'block_device_mapping': [{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 'connection_info': {u'driver_volume_type': u'iscsi', 'serial': u'83fb6f13-905e-45f8-a465-508cb343b721', u'data': {u'target_discovered': True, u'qos_specs': None, u'target_iqn': u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': u'10.10.120.253:3260', u'target_lun': 0, u'access_mode': u'rw'}}, 'disk_bus': u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}
  HOST1
  2014-03-04 19:02:16.775 WARNING nova.compute.manager [-] _post_live_migration: block_device_info {'block_device_mapping': [{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 'connection_info': {u'driver_volume_type': u'iscsi', u'serial': u'83fb6f13-905e-45f8-a465-508cb343b721', u'data': {u'target_discovered': True, u'qos_specs': None, u'target_iqn': u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': u'10.10.120.253:3260', u'target_lun': 0, u'access_mode': u'rw'}}, 'disk_bus': u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}



  Second Migration
  This is in _post_live_migration on host1. It calls libvirt's driver.py post_live_migration with the volume information returned from the new volume on host2, hence the target_lun = 1. It should be calling libvirt's driver.py to clean up the original volume on the source host, which has a target_lun = 0.
  2014-03-04 19:24:51.626 WARNING nova.compute.manager [-] _post_live_migration: block_device_info {'block_device_mapping': [{'guest_format': None, 'boot_index': 0, 'mount_device': u'vda', 'connection_info': {u'driver_volume_type': u'iscsi', u'serial': u'f0087595-804d-4bdb-9bad-0da2166313ea', u'data': {u'target_discovered': True, u'qos_specs': None, u'target_iqn': u'iqn.2000-05.com.3pardata:20810002ac00383d', u'target_portal': u'10.10.120.253:3260', u'target_lun': 1, u'access_mode': u'rw'}}, 'disk_bus': u'virtio', 'device_type': u'disk', 'delete_on_termination': False}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381425] Re: nova cells, force-delete VM throws error even if VM gets deleted

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381425

Title:
  nova cells, force-delete VM throws error even if VM gets deleted

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In a nova cells environment, when trying to force-delete an instance which is in the 'active' state, the instance gets deleted successfully, but the nova-cells service in the compute cell (n-cell-child) throws the following error:
  InvalidRequestError: Object '' is already attached to session '75' (this is '79')

  Reproduction steps:
  1) Create instance.
  2) Wait until instance becomes 'active'.
  3) Try to force-delete the instance.
  $ nova force-delete 

  Found this error in nova-cells service in compute cell (n-cell-child
  service):

  2014-10-15 01:59:36.742 ERROR nova.cells.messaging [req-7c1615ad-491d-4af8-88d7-ff83563ef429 admin admin] Error processing message locally: Object '' is already attached to session '75' (this is '79')
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging Traceback (most recent call last):
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/opt/stack/nova/nova/cells/messaging.py", line 199, in _process_locally
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     resp_value = self.msg_runner._process_message_locally(self)
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/opt/stack/nova/nova/cells/messaging.py", line 1293, in _process_message_locally
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     return fn(message, **message.method_kwargs)
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/opt/stack/nova/nova/cells/messaging.py", line 698, in run_compute_api_method
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     return fn(message.ctxt, *args, **method_info['method_kwargs'])
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/opt/stack/nova/nova/compute/api.py", line 219, in wrapped
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     return func(self, context, target, *args, **kwargs)
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/opt/stack/nova/nova/compute/api.py", line 209, in inner
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     return function(self, context, instance, *args, **kwargs)
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/opt/stack/nova/nova/compute/api.py", line 190, in inner
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     return f(self, context, instance, *args, **kw)
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/opt/stack/nova/nova/compute/api.py", line 1836, in force_delete
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     self._delete_instance(context, instance)
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/opt/stack/nova/nova/compute/api.py", line 1790, in _delete_instance
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     task_state=task_states.DELETING)
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/opt/stack/nova/nova/compute/api.py", line 1622, in _delete
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     quotas.rollback()
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82, in __exit__
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     six.reraise(self.type_, self.value, self.tb)
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/opt/stack/nova/nova/compute/api.py", line 1550, in _delete
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     instance.save()
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/opt/stack/nova/nova/db/sqlalchemy/models.py", line 52, in save
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     super(NovaBase, self).save(session=session)
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/models.py", line 47, in save
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     session.add(self)
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1399, in add
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     self._save_or_update_state(state)
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1411, in _save_or_update_state
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     self._save_or_update_impl(state)
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1667, in _save_or_update_impl
  2014-10-15 01:59:36.742 TRACE nova.cells.messaging     self._update_impl(state

[Yahoo-eng-team] [Bug 1240373] Re: VMware: Sparse glance vmdk's size property is mistaken for capacity

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240373

Title:
  VMware: Sparse glance vmdk's size property is mistaken for capacity

Status in OpenStack Compute (nova):
  Fix Released
Status in VMwareAPI-Team:
  Confirmed

Bug description:
  Scenario:

  a sparse vmdk whose file size is 800MB and whose capacity is 4GB is uploaded to glance without specifying the size property.
  (glance uses the file's size for the size property in this case)

  nova boot said image with flavor tiny (root disk size of 1GB).

  Result:
  The vmwareapi driver fails to spawn the VM because the ESX server throws a fault when asked to 'grow' the disk from 4GB to 1GB (the driver thinks it is an attempt to grow from 800MB to 1GB).

  Relevant hostd.log on ESX host:
  2013-10-15T17:02:24.509Z [35BDDB90 verbose 'Default'
  opID=HB-host-22@3170-d82e35d0-80] ha-license-manager:Validate -> Valid
  evaluation detected for "VMware ESX Server 5.0" (lastError=2,
  desc.IsValid:Yes)
  2013-10-15T17:02:25.129Z [FFBE3D20 info 'Vimsvc.TaskManager'
  opID=a3057d82-8e] Task Created :
  haTask--vim.VirtualDiskManager.extendVirtualDisk-526626761


  2013-10-15T17:02:25.158Z [35740B90 warning 'Vdisksvc' opID=a3057d82-8e]
  New capacity (2097152) is not greater than original capacity (8388608).

  I am still not exactly sure if this is considered user error on glance
  import, a glance shortcoming of not introspecting the vmdk, or a bug
  in the compute driver. Regardless, this bug is to track any potential
  defensive code we can add to the driver to better handle this
  scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240373/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1390111] Re: when downloading glance image to vmware datastore, it cost too much time

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1390111

Title:
  when downloading glance image to vmware datastore,  it cost too much
  time

Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.vmware:
  Confirmed

Bug description:
  When creating a vm with the VMware driver, nova-compute first downloads
  the image from glance to the datastore. This is very slow, because
  nova-compute interacts with vCenter to put the file, not with the ESX
  host directly.

  We compared performance: interacting with ESX directly is at least six
  times faster.

  Fix: first we get ESX cookies from vCenter, then we use those cookies
  to put files to the ESX datastore directly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1390111/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384379] Re: versions resource uses host_url which may be incorrect

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384379

Title:
  versions resource uses host_url which may be incorrect

Status in Ceilometer:
  In Progress
Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance icehouse series:
  Triaged
Status in Glance juno series:
  Triaged
Status in heat:
  Triaged
Status in Ironic:
  In Progress
Status in Manila:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in Trove:
  In Progress

Bug description:
  The versions resource constructs the links by using host_url, but the
  glance api endpoint may be behind a proxy or ssl terminator. This
  means that host_url may be incorrect. It should have a config option
  to override host_url like the other services do when constructing
  versions links.
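
  A hedged illustration of the kind of override being asked for. The
  option name below is hypothetical for glance; nova's equivalent is
  osapi_compute_link_prefix:

      [DEFAULT]
      # Base URL that version links should advertise, regardless of the
      # Host header seen behind the proxy or SSL terminator.
      public_endpoint = https://glance.example.com:9292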

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1384379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436693] Re: Unable to delete incomplete instance

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1436693

Title:
  Unable to delete incomplete instance

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When instance creation finishes incompletely for any reason, nova can't delete the instance.
  When an instance has no related record in instance_info_caches, "nova delete" leaves it in an error state.

  The case we found:

  1) Create instance.
  $ nova boot --flavor 2 --image 17f667e4-b932-4a6c-a01c-478b69c0f8bd test1

  2) Some error occurred (maybe caused by a network problem).
  $ nova list
  +--------------------------------------+-------+--------+------------+-------------+----------+
  | ID                                   | Name  | Status | Task State | Power State | Networks |
  +--------------------------------------+-------+--------+------------+-------------+----------+
  | 2f677075-9d9c-4e1b-b483-74d79b31af26 | test1 | ERROR  |            | NOSTATE     |          |
  +--------------------------------------+-------+--------+------------+-------------+----------+

  3) Try deleting the instance; it fails.
  $ nova delete 2f677075-9d9c-4e1b-b483-74d79b31af26
  Request to delete server 2f677075-9d9c-4e1b-b483-74d79b31af26 has been accepted.

  $ nova list
  +--------------------------------------+-------+--------+------------+-------------+----------+
  | ID                                   | Name  | Status | Task State | Power State | Networks |
  +--------------------------------------+-------+--------+------------+-------------+----------+
  | 2f677075-9d9c-4e1b-b483-74d79b31af26 | test1 | ERROR  | -          | NOSTATE     |          |
  +--------------------------------------+-------+--------+------------+-------------+----------+

  $ nova show 2f677075-9d9c-4e1b-b483-74d79b31af26
  +------------------------------+------------------------------------------------------------------------------------------------------------+
  | Property                     | Value                                                                                                      |
  +------------------------------+------------------------------------------------------------------------------------------------------------+
  | OS-DCF:diskConfig            | MANUAL                                                                                                     |
  | OS-EXT-AZ:availability_zone  | nova                                                                                                       |
  | OS-EXT-STS:power_state       | 0                                                                                                          |
  | OS-EXT-STS:task_state        | -                                                                                                          |
  | OS-EXT-STS:vm_state          | error                                                                                                      |
  | OS-SRV-USG:launched_at       | 2015-03-23T08:51:44.00                                                                                     |
  | OS-SRV-USG:terminated_at     | -                                                                                                          |
  | accessIPv4                   |                                                                                                            |
  | accessIPv6                   |                                                                                                            |
  | config_drive                 | True                                                                                                       |
  | created                      | 2015-03-23T08:51:14Z                                                                                       |
  | fault                        | {"message": "'NoneType' object has no attribute 'delete'", "code": 500, "created": "2015-03-23T08:58:23Z"} |
  | flavor                       | m1.small (2)                                                                                               |
  | hostId                       | 1c3addc04df95d1f561280fe73da48c9f6c26526cc7d5cffc5cb6df0                                                   |
  | id                           | 2f677075-9d9c-4e1b-b483-74d79b31af26                                                                       |
  | image                        | cirros-0.3.2-x86_64-uec (17f667e4-b932-4a6c-a01c-478b69c0f8bd)                                             |
  | key_name

[Yahoo-eng-team] [Bug 1387543] Re: [OSSA 2015-015] Resize/delete combo allows to overload nova-compute (CVE-2015-3241)

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387543

Title:
  [OSSA 2015-015] Resize/delete combo allows to overload nova-compute
  (CVE-2015-3241)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Committed
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  If a user creates an instance, resizes it to a larger flavor and then
  deletes that instance, the migration process does not stop. This
  allows the user to repeat the operation many times, overloading the
  affected compute nodes beyond the user's quota.

  Affected installations: the most drastic effect happens with
  'raw-disk' instances without live migration, where the whole raw disk
  (the full size of the flavor) is copied during migration.

  If the user deletes the instance, the rsync/scp is not terminated and
  keeps the disk backing file open regardless of its removal by
  nova-compute.

  Because rsync/scp of large disks is rather slow, this gives a
  malicious user enough time to repeat the operation a few hundred
  times, causing disk space depletion on compute nodes, a huge impact
  on the management network and so on.

  Proposed solution: abort the migration (kill rsync/scp) as soon as
  the instance is deleted.

  Affected installations: Havana, Icehouse, probably Juno (not tested).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1387543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436568] Re: Ironic Nova driver makes two calls to delete a node

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1436568

Title:
  Ironic Nova driver makes two calls to delete a node

Status in Ironic:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When deleting an instance, Nova sets the provision state to DELETED
  and then, when that completes (the node is in CLEANING, CLEANFAIL, or
  NOSTATE/AVAILABLE), makes another call to remove the instance UUID.
  The instance UUID should be cleared out when Ironic clears out
  node.instance_info, and Nova should delete the instance as soon as
  the node is in one of the states above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1436568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442749] Re: Bandwidth usage object not created after db update

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442749

Title:
  Bandwidth usage object not created after db update

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When using objects.BandwidthUsage().create(), it updates the
  database and then tries to populate the object with the returned
  result. But db.bw_usage_update does not return a result, so the
  object creation fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442749/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370250] Re: Can not set volume attributes at instance launch by EC2 API

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370250

Title:
  Can not set volume attributes at instance launch by EC2 API

Status in ec2-api:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  AWS allows to change block device attributes (such as volume size,
  delete on termination behavior, existence) at instance launch.

  For example, image xxx has devices:
  vda, size 10, delete on termination
  vdb, size 100, delete on termination
  vdc, size 100, delete on termination
  We can run an instance by
  euca-run-instances ... xxx -b /dev/vda=:20 -b /dev/vdb=::false -b /dev/vdc=none
  to get the instance with devices:
  vda, size 20, delete on termination
  vdb, size 100, not delete on termination

  With Nova, we currently get:
  $ euca-run-instances --instance-type m1.nano -b /dev/vda=::true ami-000a
  euca-run-instances: error (InvalidBDMFormat): Block Device Mapping is Invalid: Unrecognized legacy format.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ec2-api/+bug/1370250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301279] Re: Changing node's properties in Ironic after node is deployed will count as available resources in Nova

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301279

Title:
  Changing node's properties in Ironic after node is deployed will count
  as available resources in Nova

Status in Ironic:
  Confirmed
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If you increase the properties of a node which was already deployed,
  the difference will show up in nova as available resources. For
  example, a node with properties/memory_mb=512 was deployed, and n-cpu
  is showing:

  2014-04-02 10:37:26.514 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 0
  2014-04-02 10:37:26.514 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 0
  2014-04-02 10:37:26.514 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0

  Now if we update that to properties/memory_mb=1024, the difference
  will be shown in nova as available resources:

  2014-04-02 10:40:48.266 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 512
  2014-04-02 10:40:48.266 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 0
  2014-04-02 10:40:48.266 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0

  LOGs: http://paste.openstack.org/show/74806/

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1301279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423772] Re: During live-migration Nova expects identical IQN from attached volume(s)

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423772

Title:
  During live-migration Nova expects identical IQN from attached
  volume(s)

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When attempting to do a live-migration on an instance with one or more
  attached volumes, Nova expects that the IQN will be exactly the same
  when it attaches the volume(s) to the new host. This conflicts with
  Cinder settings such as "hp3par_iscsi_ips", which allow for
  multiple IPs for the purpose of load balancing.

  Example:
  An instance on Host A has a volume attached at "/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2".
  An attempt is made to migrate the instance to Host B.
  Cinder sends the request to attach the volume to the new host.
  Cinder gives the new host "/dev/disk/by-path/ip-10.10.120.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2".
  Nova looks for the volume on the new host at the old location "/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2", as the sketch below illustrates.

  The following error appears in n-cpu in this case:

  2015-02-19 17:09:05.574 ERROR nova.virt.libvirt.driver [-] [instance: b6fa616f-4e78-42b1-a747-9d081a4701df] Live Migration failure: Failed to open file '/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2': No such file or directory
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 115, in wait
      listener.cb(fileno)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 212, in main
      result = function(*args, **kwargs)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5426, in _live_migration
      recover_method(context, instance, dest, block_migration)
    File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5393, in _live_migration
      CONF.libvirt.live_migration_bandwidth)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
      result = proxy_call(self._autowrap, f, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in proxy_call
      rv = execute(f, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
      six.reraise(c, e, tb)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
      rv = meth(*args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1582, in migrateToURI2
      if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self)
  libvirtError: Failed to open file '/dev/disk/by-path/ip-10.10.220.244:3260-iscsi-iqn.2000-05.com.3pardata:22210002ac002a13-lun-2': No such file or directory
  Removing descriptor: 3

  
  When looking at the nova DB, this is the state of block_device_mapping prior to the migration attempt:

  mysql> select * from block_device_mapping where instance_uuid='b6fa616f-4e78-42b1-a747-9d081a4701df' and deleted=0;
  | created_at | updated_at | deleted_at | id | device_name | delete_on_termination | snapshot_id | volume_id | volume_size | no_device | connection_info

[Yahoo-eng-team] [Bug 1392527] Re: Deleting instance while resize instance is running leads to unuseable compute nodes (CVE-2015-3280)

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1392527

Title:
  Deleting instance while resize instance is running leads to unuseable
  compute nodes (CVE-2015-3280)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  In Progress
Status in OpenStack Compute (nova) kilo series:
  In Progress
Status in OpenStack Security Advisory:
  Fix Committed

Bug description:
  Steps to reproduce:
  1) Create a new instance, waiting until its status goes to ACTIVE
  2) Call the resize API
  3) Delete the instance immediately after the task_state is “resize_migrated” or the vm_state is “resized”
  4) Repeat 1 through 3 in a loop

  I kept the attached program running for 4 hours; all instances
  created were deleted (nova list returns an empty list), but I noticed
  that instance directories with the name “<instance_uuid>_resize” are
  not deleted from the instance path of the compute nodes (mainly on
  the source compute nodes where the instance was running before the
  resize). If I keep this program running for a couple more hours
  (depending on the number of compute nodes), it completely uses up the
  entire disk of the compute nodes (based on the disk_allocation_ratio
  parameter value). Later, the nova scheduler doesn't select these
  compute nodes for launching new vms and starts reporting the error
  "No valid hosts found".

  Note: Even the periodic tasks doesn't cleanup these orphan instance
  directories from the instance path.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1392527/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369854] Re: min_disk and size information wrong when creating an instance image

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369854

Title:
  min_disk and size information wrong when creating an instance image

Status in Glance:
  Confirmed
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If I launch an instance from an image that has min_disk set to 2GB,
  then create an image from the instance, the min_disk attribute is not
  set correctly on the image (it's set to 0). Instead of being set as an
  attribute, it exists as a property instead. This can cause issues when
  other services rely on min_disk to make decisions about size (e.g. bug
  1368600 for Horizon, not sure if Cinder is affected too).

  This doesn't seem to happen when creating a basic instance from an
  instance, only when there is also a volume involved.

  I'm not sure if this is a nova or glance issue so I'm opening a task
  on both for now.

  Steps to reproduce
  
  (done on a recent devstack)

  1. Create a new image with e.g. min_disk 2GB
  $ glance image-create --name mindisk2gb --disk-format qcow2 --container-format bare --location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-i386-disk.img --min-disk 2
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | None |
  | container_format | bare |
  | created_at   | 2014-09-16T12:35:33  |
  | deleted  | False|
  | deleted_at   | None |
  | disk_format  | qcow2|
  | id   | 9bbed3a5-00bf-4901-855e-556ca02e7fb7 |
  | is_public| False|
  | min_disk | 2|  <-- min_disk set as expected
  | min_ram  | 0|
  | name | mindisk2gb   |
  | owner| eb6acc49df4b4390a99c722ded526284 |
  | protected| False|
  | size | 9159168  |  <-- size looks ok
  | status   | active   |
  | updated_at   | 2014-09-16T12:35:37  |
  | virtual_size | None |
  +--+--+

  2. Boot an instance from this image, backed by a volume (I believe this matches what happens when launching with "Image (creates a new volume)" in Horizon)
  $ nova boot --flavor 4115a835-04f0-4457-b93c-1817bc44812c --block-device device='/dev/vda',source='image',dest='volume',id='9bbed3a5-00bf-4901-855e-556ca02e7fb7',size=2,bootindex=0 my_instance
  
  +------------------------------+--------------------------------------------------+
  | Property                     | Value                                            |
  +------------------------------+--------------------------------------------------+
  | OS-DCF:diskConfig            | MANUAL                                           |
  | OS-EXT-AZ:availability_zone  | nova                                             |
  | OS-EXT-STS:power_state       | 0                                                |
  | OS-EXT-STS:task_state        | -                                                |
  | OS-EXT-STS:vm_state          | building                                         |
  | OS-SRV-USG:launched_at       | -                                                |
  | OS-SRV-USG:terminated_at     | -                                                |
  | accessIPv4                   |                                                  |
  | accessIPv6                   |                                                  |
  | adminPass                    | Uo2DL64GaY74                                     |
  | config_drive                 |                                                  |
  | created                      | 2014-09-16T12:55:30Z                             |
  | flavor                       | m1.lowmem (4115a835-04f0-4457-b93c-1817bc44812c) |
  | hostId                       |                                                  |
  | id                           | 3e70fd2e-1dc6-4001-941e-9496a9514882             |
  | image                        | Attempt to boot from volume - no image supplied  |
  | key_name

[Yahoo-eng-team] [Bug 1384392] Re: Snapshot volume backed VM does not handle image metadata correctly

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384392

Title:
  Snapshot volume backed VM does not handle image metadata correctly

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Nova Juno

  The instance snapshot of volume backed instances does not handle image
  metadata the same way that the regular instance snapshot path does.

  nova/compute/api.py's snapshot path builds the Glance image
  metadata using nova/compute/utils.py's get_image_metadata, which gets
  metadata from the VM's base image, includes metadata from the
  instance's system metadata, and excludes properties specified in
  CONF.non_inheritable_image_properties.

  The volume backed snapshot path,
  http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n1472
  simply gets the image properties from the base image; it does not
  include properties from the instance's system metadata and doesn't
  honor the CONF.non_inheritable_image_properties option.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384392/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284202] Re: nova --debug secgroup-list --all-tenant 1 does not show all tenant when Neutron is used

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284202

Title:
  nova --debug secgroup-list --all-tenant 1 does not show all tenant
  when Neutron is used

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  As admin, I cannot list all neutron security groups with nova.

  # neutron security-group-list
  
  +--------------------------------------+--------------------+-----------------+
  | id                                   | name               | description     |
  +--------------------------------------+--------------------+-----------------+
  | 0060bb2a-a685-4445-b1a5-89d0c4f5f226 | default            | default         |
  | 00c42289-69fb-4d0b-a69b-cce89b89fefb | default            | default         |
  | 00e73187-cfb7-4362-86e5-f2310ace5266 | default            | default         |
  | 0149af4c-4521-4e89-8e26-49dbff96494b | default            | default         |
  | 039b91b1-daf6-4e8c-a815-c137f3a56975 | default            | default         |
  | 03c1640e-a715-4b4b-b69e-ab02c247c72b | default            | default         |
  | 0679924c-70a8-499b-88f5-663c520bf6d1 | sec_grp--3193851   | desc--722700139 |
  | 0a223200-56a0-4eb9-a933-013211082be7 | default            | default         |
  | 0a4fcf8b-dc8c-408a-994e-c563345e6e20 | default            | default         |
  | 0b97a8f7-0582-4814-b914-b0571ddd4746 | default            | default         |
  | 0db18431-a54a-4b59-913a-85a9542bcb3c | default            | default         |
  | 1024926f-40ee-4fdf-9637-a14d6aed1d66 | default            | default         |
  | 119d588f-9ce8-4dbe-a711-3bd2de3327a4 | default            | default         |
  | 13d49793-dd7c-4504-bbe6-96ff0ffac1d9 | default            | default         |
  | 161a0b3c-334e-4a4f-a411-db70eb6ab26d | sec_grp--876102254 | desc--49581304  |
  | 18645f71-76c5-440a-9198-27a406f5635e | default            | default         |
  | 18cbadeb-687a-4001-8b4b-c41900947ecb | default            | default         |
  | 18d5badf-7e3e-4ba5-a172-eb7f60b3fe49 | default            | default         |
  | 1a846b20-ec19-427f-aca6-2bb3fae50f2d | default            | default         |
  | 1da062a2-33a2-41d4-be25-c061b1593b31 | default            | default         |
  | 2252c82d-8624-4ed3-9d4e-c533848be734 | default            | default         |
  | 238a40e8-d7a1-4def-a9b4-0a9a475ec97a | default            | default         |
  | 26aadab6-6e31-4871-937f-9ab367f970c5 | default            | default         |
  | 26e08be5-7c5d-4e20-b7d5-dd95fe481edd | default            | default         |
  | 26eaef82-b627-4ed8-ac71-c80718c5d3f7 | default            | default         |
  | 28acea35-c0b4-4735-ae06-2f9e643d084b | default            | default         |
  | 305a4649-fb63-4daa-9548-f4d62ff53f20 | default            | default         |
  | 348f7291-223f-4fad-babd-1a1bfa1bde87 | default            | default         |
  | 386b0d4a-f561-480d-96e9-0ffe9810f999 | default            | default         |
  | 3b3a0261-461d-4e95-836f-bb3c7610c6f1 | default            | default         |
  | 3fb562dd-891f-4619-af12-8faa53372d35 | default            | default         |
  | 4027da48-0e5d-4a65-abfd-102345b14b30 | default            | default         |
  | 411f79ea-754a-4ce2-a678-35f20f5de532 | default            | default         |
  | 42243252-4f8a-464d-8002-c1e6c9c8592b | default            | default         |
  | 45283e79-5698-4852-bfe3-20e84ec61e4f | default            | default         |
  | 498987d7-1a4c-4a81-af45-5704142fbd7b | default            | default         |
  | 4b05cc35-f9e6-493f-a520-b2e883cb2305 | default            | default         |
  | 4b3bd3ca-fa60-4d78-a207-78e421aeee78 | default            | default         |
  | 4c0c1393-70a3-46d0-a748-792e00297e7d | default            | default         |
  | 4c5f102d-0509-4ef8-8132-12d4bba8a536 | default            | default         |
  | 4de94b11-5bbf-41ad-ab6f-8291c14df5c3 | default            | default         |
  | 4ea81807-6627-4182-aa21-3feb1315b79c | default            | default         |
  | 4f3efebe-2a22-4c9a-8d65-dd0859c84979 | default            | default         |
  | 543cfe02-457f-4399-887f-3c1f2307c2e3 | default            | default         |
  | 5462f168-324f-44e3-a81e-0f3444099d19 | default            | default         |
  | 554c7ecd-7876-4f80-9096-41b0f1e7498b | default            | default         |
  | 55b20025-f031-4ab3-83a5-ae84e7d162b3 | default            | default         |
  | 56523131-0929-4b7c-a647-d19452579e54 | default            | default         |
  | 571b707b-718d-428f-8e9f-40815be756a5 | default            | default         |
  | 589ba3

[Yahoo-eng-team] [Bug 1253991] Re: image cache clean-up should also remove unused ephemeral backing files

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1253991

Title:
  image cache clean-up should also remove unused ephemeral backing files

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Currently ephemeral backing files are never removed from the system.
  However, changes to flavor definitions and supported os_types may make
  them obsolete.

  The image clean-up code should recognise at least obsolete unused
  files (i.e. size doesn't match any flavor, os_type doesn't match a
  defined mkfs command, not used by any instance) and remove them.

  It would be nice if it also had an option to remove files that are not
  obsolete but have been unused for some period.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1253991/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297375] Re: All nova apis relying on Instance.save(expected_*_state) for safety contain a race condition

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297375

Title:
  All nova apis relying on Instance.save(expected_*_state) for safety
  contain a race condition

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Take, for example, resize_instance(). In manager.py, we assert that
  the instance is in RESIZE_PREP state with:

instance.save(expected_task_state=task_states.RESIZE_PREP)

  This should mean that the first resize will succeed, and any
  subsequent will fail. However, the underlying db implementation does
  not lock the instance during the update, and therefore doesn't
  guarantee this.

  Specifically, _instance_update() in db/sqlalchemy/api.py starts a
  session, and reads task_state from the instance. However, it does not
  use a 'select ... for update', meaning the row is not locked. 2
  concurrent calls to this method can both read the same state, then
  race to the update. The last writer will win. Without 'select ... for
  update', the db transaction is only ensuring that all writes are
  atomic, not reads with dependent writes.

  SQLAlchemy seems to support select ... for update, as do MySQL and
  PostgreSQL, although MySQL will fall back to whole table locks for
  non-InnoDB tables, which would likely be a significant performance
  hit.
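
  A minimal sketch of the locking pattern in SQLAlchemy (Instance and
  the helper are illustrative, not nova's actual code):

      def save_with_expected_state(session, uuid, expected, new_state):
          inst = (session.query(Instance)
                         .filter_by(uuid=uuid)
                         .with_for_update()  # emits SELECT ... FOR UPDATE
                         .one())
          if inst.task_state != expected:
              raise ValueError('unexpected task_state: %s' % inst.task_state)
          inst.task_state = new_state
          session.commit()  # the row lock is held until here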

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1297375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488635] Re: xen: resize assumes an ephemeral was migrated if the new flavor has one

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488635

Title:
  xen: resize assumes an ephemeral was migrated if the new flavor has
  one

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When resizing from a flavor with no ephemeral disk to a flavor which
  includes one, an error is raised due to a migrated ephemeral disk not
  being found. The migration code needs to consider the original flavor
  when looking for migrated ephemeral disks, as sketched below.

  
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]     import_root=import_root)
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]   File "/opt/rackstack/rackstack.348.13/nova/lib/python2.7/site-packages/nova/virt/xenapi/vm_utils.py", line 2532, in import_all_migrated_disks
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]     eph_vdis = _import_migrate_ephemeral_disks(session, instance)
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]   File "/opt/rackstack/rackstack.348.13/nova/lib/python2.7/site-packages/nova/virt/xenapi/vm_utils.py", line 2554, in _import_migrate_ephemeral_disks
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]     vdi_label)
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]   File "/opt/rackstack/rackstack.348.13/nova/lib/python2.7/site-packages/nova/virt/xenapi/vm_utils.py", line 2566, in _import_migrated_vhds
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]     sr_path=get_sr_path(session), uuid_stack=_make_uuid_stack())
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]   File "/opt/rackstack/rackstack.348.13/nova/lib/python2.7/site-packages/nova/virt/xenapi/client/session.py", line 227, in call_plugin_serialized
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]     rv = self.call_plugin(plugin, fn, params)
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]   File "/opt/rackstack/rackstack.348.13/nova/lib/python2.7/site-packages/nova/virt/xenapi/client/session.py", line 223, in call_plugin
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]     self.host_ref, plugin, fn, args)
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]   File "/opt/rackstack/rackstack.348.13/nova/lib/python2.7/site-packages/nova/virt/xenapi/client/session.py", line 297, in _unwrap_plugin_exceptions
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]     return func(*args, **kwargs)
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]   File "/opt/rackstack/rackstack.348.13/nova/lib/python2.7/site-packages/XenAPI.py", line 229, in __call__
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]     return self.__send(self.__name, args)
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]   File "/opt/rackstack/rackstack.348.13/nova/lib/python2.7/site-packages/XenAPI.py", line 133, in xenapi_request
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]     result = _parse_result(getattr(self, methodname)(*full_params))
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]   File "/opt/rackstack/rackstack.348.13/nova/lib/python2.7/site-packages/XenAPI.py", line 203, in _parse_result
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d]     raise Failure(result['ErrorDescription'])
  2015-08-20 14:38:07.744 23025 ERROR nova.utils [instance: 38a0406d-1ba0f514764d] Failure: ['XENAPI_PLUGIN_FAILURE', 'move_vhds_into_sr', 'OSError', "[Errno 2] No such file or directory: '/images/instance38a0406d-1ba0f514764d_ephemeral_1'"]
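
  A minimal sketch of the suggested check (Instance.get_flavor('old')
  returns the pre-resize flavor; treat the snippet as illustrative, not
  the merged fix):

    old_flavor = instance.get_flavor('old')
    if old_flavor.ephemeral_gb:
        eph_vdis = _import_migrate_ephemeral_disks(session, instance)
    else:
        # Nothing was migrated, so don't go looking for it.
        eph_vdis = []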

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1488635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490011] Re: cells: test_create_ebs_image_and_check_boot fails intermittently

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1490011

Title:
  cells: test_create_ebs_image_and_check_boot fails intermittently

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  2015-08-28 18:41:12.128 | tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_create_ebs_image_and_check_boot[compute,id-36c34c67-7b54-4b59-b188-02a2f458a63b,image,volume]
  2015-08-28 18:41:12.128 | 
  2015-08-28 18:41:12.128 | Captured traceback:
  2015-08-28 18:41:12.128 | ~~~
  2015-08-28 18:41:12.128 |     Traceback (most recent call last):
  2015-08-28 18:41:12.128 |       File "tempest/test.py", line 126, in wrapper
  2015-08-28 18:41:12.128 |         return f(self, *func_args, **func_kwargs)
  2015-08-28 18:41:12.128 |       File "tempest/scenario/test_volume_boot_pattern.py", line 194, in test_create_ebs_image_and_check_boot
  2015-08-28 18:41:12.128 |         instance = self.create_server(image=image['id'])
  2015-08-28 18:41:12.129 |       File "tempest/scenario/manager.py", line 177, in create_server
  2015-08-28 18:41:12.129 |         **create_kwargs)
  2015-08-28 18:41:12.129 |       File "tempest/services/compute/json/servers_client.py", line 86, in create_server
  2015-08-28 18:41:12.129 |         resp, body = self.post('servers', post_body)
  2015-08-28 18:41:12.129 |       File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 256, in post
  2015-08-28 18:41:12.129 |         return self.request('POST', url, extra_headers, headers, body)
  2015-08-28 18:41:12.129 |       File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 643, in request
  2015-08-28 18:41:12.129 |         resp, resp_body)
  2015-08-28 18:41:12.129 |       File "/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py", line 700, in _error_checker
  2015-08-28 18:41:12.129 |         raise exceptions.BadRequest(resp_body)
  2015-08-28 18:41:12.129 |     tempest_lib.exceptions.BadRequest: Bad request
  2015-08-28 18:41:12.130 |     Details: {u'code': 400, u'message': u'Block Device Mapping is Invalid: Boot sequence for the instance and image/block device mapping combination is not valid.'}

  as seen at http://logs.openstack.org/96/216696/6/check/gate-tempest-
  dsvm-cells/5304a42/console.html

  The cause appears to be a race when processing messages to
  create_or_update a bdm in the parent cell. Rather than one message
  creating and the next updating, both messages create records because
  of how quickly they are sent and processed. One solution is to send a
  create message and then an update. A follow-up enhancement would add
  a unique constraint so that two identical bdms can't be created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1490011/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488696] Re: Nova compute *.percent metrics are always 0

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488696

Title:
  Nova compute *.percent metrics are always 0

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  It seems that with the recent update to the Nova virt metrics
  framework in Liberty, the 'percent' related metrics are now always 0
  in the database. After tracing through the code, I determined that
  the virt_driver and resource tracker were still behaving properly and
  that compute_driver::get_host_cpu_stats works as expected.

  It seems that the root cause is:

  
https://github.com/openstack/nova/blob/master/nova/objects/monitor_metric.py#L29

  This shows that the metric value is expected to be an integer in the
  nova object, but the percentage metrics are all floating point and
  range in value over [0, 1] -- e.g., 17.5% has historically been
  represented as 0.175 by the monitor framework. This causes the
  percentage values to collapse to 0, as shown below in the snippet
  from `select metrics from compute_nodes`:

  "cpu.user.percent", "value": 0,
  "cpu.percent", "value": 0,

  ...so on and so forth.  By the time the metrics get to this spot in
  the resource tracker:

  
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L366

  ...the '*.percent' values are all 0.

  I'm not sure if the intended behavior here was to only support
  integer-style values.  If so, we probably need to do some "multiply by
  100" logic when putting them into the MonitorMetric object and then
  divide by 100 (we'll lose precision, though) when we convert back to
  the values stored in the compute_nodes.metrics column, otherwise we
  will break backwards compatibility in terms of what folks were
  expecting to find in the DB.
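
  A minimal round-trip sketch of that "multiply by 100" idea (the
  precision loss mentioned above shows up in the truncation):

    def to_metric_value(fraction):
        # e.g. 0.175 -> 18 (17 if truncated instead of rounded)
        return int(round(fraction * 100))

    def from_metric_value(value):
        # e.g. 18 -> 0.18; the original 0.175 cannot be recovered
        return value / 100.0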

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1488696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491256] Re: VMware: nova log files overflow

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491256

Title:
  VMware: nova log files overflow

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The commit https://review.openstack.org/#q,bcc002809894f39c84dca5c46034468ed0469c2b,n,z
  removed the suds logging level. This causes the log files to overflow
  with data. An example is http://208.91.1.172/logs/178652/3/1437935064/
  where the nova compute log file is 56M while all other files are a
  few K. This makes debugging and troubleshooting terribly difficult.
  It also has upgrade impact for people going to L from any other
  version, and that is a degradation.
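
  For reference, what was removed amounts to capping the suds logger;
  a sketch with the standard library (not the exact reverted code):

    import logging

    # suds logs full SOAP request/response bodies at DEBUG; pinning it
    # to INFO keeps nova-compute logs a sane size.
    logging.getLogger('suds').setLevel(logging.INFO)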

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1491256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487350] Re: wrong exception msg of param backlog check

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487350

Title:
  wrong exception msg of param backlog check

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In file nova/wsgi.py, line 128:

    Line 128: if backlog < 1:
    Line 129:     raise exception.InvalidInput(
    Line 130:         reason='The backlog must be more than 1')

  The reason 'The backlog must be more than 1' is wrong, because the
  condition is [if backlog < 1:].

  Line 130 should therefore change from 'The backlog must be more than
  1' to 'The backlog must be more than 0'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487350/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487522] Re: Objects: obj_reset_changes signature doesn't match

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487522

Title:
  Objects: obj_reset_changes signature doesn't match

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If an object contains a Flavor object within it and obj_reset_changes
  is called with recursive=True, it fails with the following error.
  This is because Flavor.obj_reset_changes is missing the recursive
  param in its signature. The Instance object is also missing this
  parameter in its method.

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "nova/tests/unit/objects/test_request_spec.py", line 284, in test_save
      req_obj.obj_reset_changes(recursive=True)
    File "nova/objects/base.py", line 224, in obj_reset_changes
      value.obj_reset_changes(recursive=True)
  TypeError: obj_reset_changes() got an unexpected keyword argument 'recursive'
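
  A minimal sketch of the expected signature, matching the
  oslo.versionedobjects base class (treat as illustrative):

    def obj_reset_changes(self, fields=None, recursive=False):
        super(Flavor, self).obj_reset_changes(fields=fields,
                                              recursive=recursive)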

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487300] Re: Misaligned partitions for ephemeral drives (Xenapi)

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487300

Title:
  Misaligned partitions for ephemeral drives (Xenapi)

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Currently the code creates a misaligned partition on the ephemeral
  drive during disk creation. This happens because the code attempts to
  create the partition at sector 0, which then defaults to sector 1.
  When replicating this setup by hand using parted (as the code does),
  it alerts you to the misalignment:

  test@testbox:~# parted --script /dev/xvde -- mkpart primary 0 -0
  Warning: The resulting partition is not properly aligned for best performance.

  This results in the possibility of significantly slower read speeds
  against the disk, and some impact against the write speeds.  The
  relevant code responsible for the ephemeral partition size
  configuration can be found at:

  
https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vm_utils.py#L1043-L1057

  And the actual parted command utilizing this can be found here:

  
https://github.com/openstack/nova/blob/master/nova/virt/xenapi/vm_utils.py#L992-L1006

  I have already committed a simple change for this awaiting review --
  Change ID: I2b33659d66ce5ba8a361386277c5fee47ddcde94

  https://review.openstack.org/#/c/203323/

  Testing for this bug can be accomplished by creating a VM with an
  ephemeral drive and monitoring r/w speeds with the default
  partitioning and then with custom partitioning (2048 start sector)
  using dd/hdparm, etc.
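
  For comparison, starting the partition at sector 2048 satisfies
  parted (same command shape as above; the device name is just an
  example):

    test@testbox:~# parted --script /dev/xvde -- mkpart primary 2048s -0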

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487300/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471239] Re: nova service-delete 11-3 returns 500 error

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471239

Title:
  nova service-delete 11-3 returns 500 error

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  nova service-delete 11-3
  ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500) (Request-ID: 
req-931741d5-4013-477e-93cc-c0c90b302ffe)

  the logs show we need to handle the DBError exception

  
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack     result = conn.execute(querycontext.statement, self._params)
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack   File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 662, in execute
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack     params)
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack   File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 761, in _execute_clauseelement
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack     compiled_sql, distilled_params
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack   File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 874, in _execute_context
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack     context)
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/compat/handle_error.py", line 125, in _handle_dbapi_exception
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack     six.reraise(type(newraise), newraise, sys.exc_info()[2])
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/compat/handle_error.py", line 102, in _handle_dbapi_exception
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack     per_fn = fn(ctx)
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/exc_filters.py", line 323, in handler
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack     context.is_disconnect)
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/exc_filters.py", line 278, in _raise_for_remaining_DBAPIError
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack     raise exception.DBError(error)
  2015-07-03 09:54:01.177 11631 TRACE nova.api.openstack DBError: (DataError) ibm_db_dbi::DataError: Statement Execute Failed: [IBM][CLI Driver] CLI0112E  Error in assignment. SQLSTATE=22005 SQLCODE=-9 'SELECT services.deleted_at AS services_deleted_at, services.deleted AS services_deleted, services.created_at AS services_created_at, services.updated_at AS services_updated_at, services.id AS services_id, services.host AS services_host, services."binary" AS services_binary, services.topic AS services_topic, services.report_count AS services_report_count, services.disabled AS services_disabled, services.disabled_reason AS services_disabled_reason \nFROM services \nWHERE services.deleted = ? AND services.id = ? FETCH FIRST 1 ROWS ONLY' (0, '11-3')
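
  A minimal sketch of the kind of guard the API layer needs (the
  oslo.db import is real; the handler shape is illustrative, not the
  merged fix):

    from oslo_db import exception as db_exc

    try:
        self.host_api.service_delete(context, id)
    except (exception.ServiceNotFound, db_exc.DBError):
        # '11-3' is not a valid integer id; treat malformed ids the
        # same as unknown ones instead of bubbling up a 500.
        raise webob.exc.HTTPNotFound()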

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1471239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482191] Re: Error message is too generic when boot with volume creation fails with exceeded quota

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482191

Title:
  Error message is too generic when boot with volume creation fails with
  exceeded quota

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The message is "InvalidBDM: Block Device Mapping is Invalid."
  Exceeding quota is a problem that should be solved by the user, not
  by a cloud admin. So the message should be one that tells the user
  the error was caused by quota.

  Example:

  A user has 2 GB of volume quota.

  ubuntu@dev-1:~/devstack$ cinder quota-usage b926f718a7d345cb9f57b4e4dc98f674
  +-----------------------+--------+----------+-------+
  |          Type         | In_use | Reserved | Limit |
  +-----------------------+--------+----------+-------+
  |    backup_gigabytes   |   0    |    0     |  1000 |
  |        backups        |   0    |    0     |   10  |
  |       gigabytes       |   0    |    0     |   2   |
  | gigabytes_lvmdriver-1 |   0    |    0     |   2   |
  |  per_volume_gigabytes |   0    |    0     |   -1  |
  |       snapshots       |   0    |    0     |   10  |
  | snapshots_lvmdriver-1 |   0    |    0     |   -1  |
  |        volumes        |   0    |    0     |   10  |
  |  volumes_lvmdriver-1  |   0    |    0     |   -1  |
  +-----------------------+--------+----------+-------+

  Then, create an instance with a 3 GB volume:

  ubuntu@dev-1:~/devstack$ nova boot --flavor m1.tiny --block-device source=image,id=535ec234-5522-4cdc-598-4050a62d5135,dest=volume,size=3,bootindex=0 vb
  +--------------------------------------+-------------------------------------------------+
  | Property                             | Value                                           |
  +--------------------------------------+-------------------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                                          |
  | OS-EXT-AZ:availability_zone          | nova                                            |
  | OS-EXT-STS:power_state               | 0                                               |
  | OS-EXT-STS:task_state                | scheduling                                      |
  | OS-EXT-STS:vm_state                  | building                                        |
  | OS-SRV-USG:launched_at               | -                                               |
  | OS-SRV-USG:terminated_at             | -                                               |
  | accessIPv4                           |                                                 |
  | accessIPv6                           |                                                 |
  | adminPass                            | znMzRMuCJx99                                    |
  | config_drive                         |                                                 |
  | created                              | 2015-08-06T11:34:53Z                            |
  | flavor                               | m1.tiny (1)                                     |
  | hostId                               |                                                 |
  | id                                   | 5934a4f5-4d89-465b-956b-f53b2ebdad43            |
  | image                                | Attempt to boot from volume - no image supplied |
  | key_name                             | -                                               |
  | metadata                             | {}                                              |
  | name                                 | vb                                              |
  | os-extended-volumes:volumes_attached | []                                              |
  | progress                             | 0                                               |
  | security_groups                      | default                                         |
  | status                               | BUILD                                           |
  | tenant_id                            | b926f718a7d345cb9f57b4e4dc98f674                |
  | updated                              | 2015-08-06T11:34:54Z                            |
  | user_id                              | 386255bb3f05410dae7856633f0e611e                |
  +--------------------------------------+-------------------------------------------------+

  It fails with the message "Failure prepping block device". It is
  difficult to tell from that message that quota is the cause of the
  error.

  ubuntu@dev-1:~/devstack$ nova show vb
  
+--+---+
  | Property 

[Yahoo-eng-team] [Bug 1481812] Re: nova servers pagination does not work with changes-since and deleted marker

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1481812

Title:
  nova servers pagination does not work with changes-since and deleted
  marker

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  instance test1 - test6, where test2 and test5 has been deleted:

  # nova list
  
+--+---+-++-+--+
  | ID   | Name  | Status  | Task State | Power 
State | Networks |
  
+--+---+-++-+--+
  | 7e12d6a0-126f-44d0-b566-15cd5e4ab82e | test1 | SHUTOFF | -  | 
Shutdown| private=10.0.0.3 |
  | 8b35f7fb-65d0-4fc3-ac22-390743c695db | test3 | ACTIVE  | -  | 
Running | private=10.0.0.5 |
  | 2ab70dfe-2983-4886-a930-7deb15279763 | test4 | ACTIVE  | -  | 
Running | private=10.0.0.6 |
  | 489e22cf-5e22-43a4-8c46-438f62d66e59 | test6 | ACTIVE  | -  | 
Running | private=10.0.0.8 |
  
+--+---+-++-+--+

  # Get all instances with changes-since=2015-01-01 :
  # curl -s -H "X-Auth-Token:92ecba357e5b49f88a21cedfa63bf36e" 'http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers?changes-since=2015-01-01'
  {"servers": [
    {"id": "489e22cf-5e22-43a4-8c46-438f62d66e59", "links": [{"href": "http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/489e22cf-5e22-43a4-8c46-438f62d66e59", "rel": "self"}, {"href": "http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/489e22cf-5e22-43a4-8c46-438f62d66e59", "rel": "bookmark"}], "name": "test6"},
    {"id": "9bda60eb-6ff7-4b84-b081-0120b62155a3", "links": [{"href": "http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/9bda60eb-6ff7-4b84-b081-0120b62155a3", "rel": "self"}, {"href": "http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/9bda60eb-6ff7-4b84-b081-0120b62155a3", "rel": "bookmark"}], "name": "test5"},
    {"id": "2ab70dfe-2983-4886-a930-7deb15279763", "links": [{"href": "http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/2ab70dfe-2983-4886-a930-7deb15279763", "rel": "self"}, {"href": "http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/2ab70dfe-2983-4886-a930-7deb15279763", "rel": "bookmark"}], "name": "test4"},
    {"id": "8b35f7fb-65d0-4fc3-ac22-390743c695db", "links": [{"href": "http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/8b35f7fb-65d0-4fc3-ac22-390743c695db", "rel": "self"}, {"href": "http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/8b35f7fb-65d0-4fc3-ac22-390743c695db", "rel": "bookmark"}], "name": "test3"},
    {"id": "18d9ffbb-e1d4-4218-bb66-f792aab4e091", "links": [{"href": "http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/18d9ffbb-e1d4-4218-bb66-f792aab4e091", "rel": "self"}, {"href": "http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/18d9ffbb-e1d4-4218-bb66-f792aab4e091", "rel": "bookmark"}], "name": "test2"},
    {"id": "7e12d6a0-126f-44d0-b566-15cd5e4ab82e", "links": [{"href": "http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/7e12d6a0-126f-44d0-b566-15cd5e4ab82e", "rel": "self"}, {"href": "http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/7e12d6a0-126f-44d0-b566-15cd5e4ab82e", "rel": "bookmark"}], "name": "test1"}]}

  # query the instances in chunks of 2 with changes-since and limit=2

  # curl -s -H "X-Auth-Token:92ecba357e5b49f88a21cedfa63bf36e" 'http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers?changes-since=2015-01-01&limit=2'
  {"servers_links": [{"href": "http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers?changes-since=2015-01-01&limit=2&marker=9bda60eb-6ff7-4b84-b081-0120b62155a3", "rel": "next"}], "servers": [
    {"id": "489e22cf-5e22-43a4-8c46-438f62d66e59", "links": [{"href": "http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/489e22cf-5e22-43a4-8c46-438f62d66e59", "rel": "self"}, {"href": "http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/489e22cf-5e22-43a4-8c46-438f62d66e59", "rel": "bookmark"}], "name": "test6"},
    {"id": "9bda60eb-6ff7-4b84-b081-0120b62155a3", "links": [{"href": "http://10.10.180.210:8774/v2/30d2b54aa8f64bc2a1577c992c16271a/servers/9bda60eb-6ff7-4b84-b081-0120b62155a3", "rel": "self"}, {"href": "http://10.10.180.210:8774/30d2b54aa8f64bc2a1577c992c16271a/servers/9bda60eb-6ff7-4b84-b081-0120b62155a3", "rel": "bookmark"}], "name": "test5"}]}

  => returns instances test6 and test5 (deleted)

  # use instance test5 (del

[Yahoo-eng-team] [Bug 1482699] Re: glance requests from nova fail if there are too many endpoints in the service catalog

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482699

Title:
  glance requests from nova fail if there are too many endpoints in the
  service catalog

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  Nova sends the entire serialized service catalog in the http header to
  glance requests:

  https://github.com/openstack/nova/blob/icehouse-eol/nova/image/glance.py#L136

  If you have a lot of endpoints in your service catalog this can make
  glance fail with "400 Header Line TooLong".

  Per bknudson: "Any service using the auth_token middleware has no use
  for the x-service-catalog header. All that auth_token middleware uses
  is x-auth-token. The auth_token middleware will actually strip the x
  -service-catalog from the request before it sends the request on to
  the rest of the pipeline, so the application will never see it."

  If glance needs the service catalog it will get it from keystone when
  it auths the tokens, so nova shouldn't be sending this.
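
  A minimal sketch of the change being argued for (the header name is
  real; the surrounding code is illustrative): just stop forwarding the
  catalog.

    headers = {'x-auth-token': getattr(context, 'auth_token', None)}
    # Deliberately no 'x-service-catalog': auth_token middleware strips
    # it anyway, and large catalogs blow the header-line length limit.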

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1482699/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480009] Re: version show request does not consider CONF.osapi_compute_link_prefix for building links

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480009

Title:
  version show request does not consider CONF.osapi_compute_link_prefix
  for building links

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  version show request for both v2 and v2.1 (/v2 and /v2.1) does not
  consider CONF.osapi_compute_link_prefix when building the links'
  href.

  Currently the version view's build_version uses the base url directly
  for building the href for links -
  https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/views/versions.py#L70

  The above should prepare the href using
  CONF.osapi_compute_link_prefix, as is done for other links.
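
  A minimal sketch of the fix (helper name is illustrative; nova's
  other view builders already do the equivalent):

    from six.moves.urllib import parse as urlparse

    def _with_link_prefix(base_url):
        prefix = CONF.osapi_compute_link_prefix
        if not prefix:
            return base_url
        base = urlparse.urlsplit(base_url)
        pre = urlparse.urlsplit(prefix)
        return urlparse.urlunsplit((pre.scheme or base.scheme,
                                    pre.netloc or base.netloc,
                                    base.path, base.query, base.fragment))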

  This was caught by newly added functional test failures -

  http://logs.openstack.org/39/201439/4/check/gate-nova-tox-functional/ff44cb5/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480009/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480441] Re: Live migration doesn't retry on migration pre-check failure

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480441

Title:
  Live migration doesn't retry on migration pre-check failure

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When live migrating an instance, it is supposed to retry some
  (configurable) number of times. It only retries if the host
  compatibility and migration pre-checks raise nova.exception.Invalid,
  though:

  
https://github.com/openstack/nova/blob/master/nova/conductor/tasks/live_migrate.py#L167-L174

  If, for instance, a destination hypervisor has run out of disk space
  it will not raise an Invalid subclass, but rather
  MigrationPreCheckError, which causes the retry loop to short-circuit.
  Nova should instead retry as long as either Invalid or
  MigrationPreCheckError is raised.
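
  A minimal sketch of the widened retry condition (method names as in
  the loop linked above; treat as illustrative rather than the merged
  fix):

    try:
        self._check_compatible_with_source_hypervisor(host)
        self._call_livem_checks_on_host(host)
    except (exception.Invalid, exception.MigrationPreCheckError) as e:
        LOG.debug("Skipping host %s: %s", host, e)
        continue   # keep trying the remaining candidate hosts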

  This can be tricky to reproduce because it only occurs if a host
  raises MigrationPreCheckError before a valid host is found, so it's
  dependent upon the order in which the scheduler supplies possible
  destinations to the conductor. In theory, though, it can be reproduced
  by bringing up a number of hypervisors, exhausting the disk on one --
  ideally the one that the scheduler will return first -- and then
  attempting a live migration. It will fail with something like:

  $ nova live-migration --block-migrate stpierre-test-1
  ERROR (BadRequest): Migration pre-check error: Unable to migrate
  f44296dd-ffa6-4ec0-8256-c311d025d46c: Disk of instance is too large
  (available on destination host:-38654705664 < need:1073741824)
  (HTTP 400) (Request-ID: req-9951691a-c63c-4888-bec5-30a072dfe727)

  Even when there are valid hosts to migrate to.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479941] Re: cells: deleting an instance with cell_name set that doesn't exist in the cell fails

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1479941

Title:
  cells: deleting an instance with cell_name set that doesn't exist in
  the cell fails

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging [req-1ce8e3f7-a14f-4687-891c-a1541dfdce41 3742 391232 - - -] Error processing message locally
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging Traceback (most recent call last):
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging   File "/opt/rackstack/rackstack.301.55/nova/lib/python2.7/site-packages/nova/cells/messaging.py", line 200, in _process_locally
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging     resp_value = self.msg_runner._process_message_locally(self)
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging   File "/opt/rackstack/rackstack.301.55/nova/lib/python2.7/site-packages/nova/cells/messaging.py", line 1296, in _process_message_locally
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging     return fn(message, **message.method_kwargs)
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging   File "/opt/rackstack/rackstack.301.55/nova/lib/python2.7/site-packages/nova/cells/messaging.py", line 1078, in instance_destroy_at_top
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging     instance=instance)
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging   File "/opt/rackstack/rackstack.301.55/nova/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in __exit__
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging     six.reraise(self.type_, self.value, self.tb)
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging   File "/opt/rackstack/rackstack.301.55/nova/lib/python2.7/site-packages/nova/cells/messaging.py", line 1065, in instance_destroy_at_top
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging     instance.destroy()
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging   File "/opt/rackstack/rackstack.301.55/nova/lib/python2.7/site-packages/nova/objects/base.py", line 116, in wrapper
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging     return fn(self, *args, **kwargs)
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging   File "/opt/rackstack/rackstack.301.55/nova/lib/python2.7/site-packages/nova/objects/instance.py", line 651, in destroy
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging     reason='already destroyed')
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging ObjectActionError: Object action destroy failed because: already destroyed
  2015-07-28 19:29:33.153 23754 ERROR nova.cells.messaging

  The exception being raised is
  
http://git.openstack.org/cgit/openstack/nova/tree/nova/objects/instance.py?id=a74f07a92356187d3455e5a9e2418fb1ea697f96#n614

  What's happening is that the instance does not exist in the cell, so
  instance.destroy is being called on an instance object instantiated
  like "objects.Instance(context=ctxt, uuid=instance.uuid)", meaning
  instance.id is not set. So we should probably try to look up the
  instance when this failure occurs in instance_destroy_at_top.
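
  A minimal sketch of that fallback (illustrative shape for
  instance_destroy_at_top, not the merged fix):

    try:
        instance.destroy()
    except exception.ObjectActionError:
        # The stub object has no id; re-fetch the full instance first.
        try:
            instance = objects.Instance.get_by_uuid(ctxt, instance.uuid)
            instance.destroy()
        except exception.InstanceNotFound:
            pass   # already gone at the top, nothing left to do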

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1479941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479124] Re: Scheduler doesn't respect tracks_instance_changes in all cases

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1479124

Title:
  Scheduler doesn't respect tracks_instance_changes in all cases

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This commit introduces instance tracking in the scheduler, with an
  option to disable it for performance.
  
https://github.com/openstack/nova/commit/82cc056fb7e1b081a733797ed27550343cbaf44c

  However, _add_instance_info is not guarded by the config option, yet
  it causes just as much performance havoc as the initial load:
  https://github.com/openstack/nova/commit/82cc056fb7e1b081a733797ed27550343cbaf44c#diff-978b9f8734365934eaf8fbb01f11a7d7R554

  This should be guarded by the config.
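
  A minimal sketch of the missing guard (option name as introduced by
  the commit above; treat as illustrative):

    def _add_instance_info(self, context, instance):
        if not CONF.scheduler_tracks_instance_changes:
            return
        # ... existing per-instance bookkeeping ...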

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1479124/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487025] Re: network-changed external_instance_event fails and traces hard if InstanceInfoCacheNotFound

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487025

Title:
  network-changed external_instance_event fails and traces hard if
  InstanceInfoCacheNotFound

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I saw this in a failed jenkins run with neutron:

  http://logs.openstack.org/82/200382/6/check/gate-tempest-dsvm-neutron-
  full/cd9bfaa/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-08-20_12_36_06_873

  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     executor_callback))
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     executor_callback)
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/exception.py", line 89, in wrapped
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     payload)
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in __exit__
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/exception.py", line 72, in wrapped
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     return f(self, context, *args, **kw)
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 6310, in external_instance_event
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     self.network_api.get_instance_nw_info(context, instance)
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/network/base_api.py", line 244, in get_instance_nw_info
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     result = self._get_instance_nw_info(context, instance, **kwargs)
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/network/neutronv2/api.py", line 873, in _get_instance_nw_info
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     compute_utils.refresh_info_cache_for_instance(context, instance)
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/utils.py", line 356, in refresh_info_cache_for_instance
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     instance.info_cache.refresh()
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 195, in wrapper
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     ctxt, self, fn.__name__, args, kwargs)
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/conductor/rpcapi.py", line 248, in object_action
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     objmethod=objmethod, args=args, kwargs=kwargs)
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     retry=self.retry)
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     timeout=timeout, retry=retry)
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 431, in send
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher     retry=retry)
  2015-08-20 12:36:06.873 9235 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 422, in _sen
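
  A minimal sketch of a defensive refresh (exception name from
  nova.exception; treat as illustrative):

    try:
        instance.info_cache.refresh()
    except exception.InstanceInfoCacheNotFound:
        # The cache row can vanish mid-event (e.g. instance being
        # deleted); log instead of tracing hard.
        LOG.debug("Info cache for %s disappeared; skipping refresh",
                  instance.uuid)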

[Yahoo-eng-team] [Bug 1476931] Re: monkey_patch broken

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1476931

Title:
  monkey_patch broken

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  With monkey_patch enabled, nova patches the specified modules with a
  decorator to trace all function calls. But it is broken by some minor
  bugs:

  =bug 1
  2015-07-21 09:52:16.707 TRACE nova   File "/opt/stack/nova/nova/api/ec2/__init__.py", line 349, in __init__
  2015-07-21 09:52:16.707 TRACE nova     self.controller = importutils.import_object(controller)
  2015-07-21 09:52:16.707 TRACE nova   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 38, in import_object
  2015-07-21 09:52:16.707 TRACE nova     return import_class(import_str)(*args, **kwargs)
  2015-07-21 09:52:16.707 TRACE nova   File "/opt/stack/nova/nova/notifications.py", line 89, in wrapped_func
  2015-07-21 09:52:16.707 TRACE nova     method = notifier.getattr(CONF.default_notification_level.lower(),
  2015-07-21 09:52:16.707 TRACE nova AttributeError: '_SubNotifier' object has no attribute 'getattr'
  2015-07-21 09:52:16.707 TRACE nova

  =bug 2==
  2015-07-21 09:56:51.836 TRACE nova   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 38, in import_object
  2015-07-21 09:56:51.836 TRACE nova     return import_class(import_str)(*args, **kwargs)
  2015-07-21 09:56:51.836 TRACE nova   File "/opt/stack/nova/nova/notifications.py", line 91, in wrapped_func
  2015-07-21 09:56:51.836 TRACE nova     method(ctxt, name, body)
  2015-07-21 09:56:51.836 TRACE nova   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/notify/notifier.py", line 228, in info
  2015-07-21 09:56:51.836 TRACE nova     self._notify(ctxt, event_type, payload, 'INFO')
  2015-07-21 09:56:51.836 TRACE nova   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/notify/notifier.py", line 309, in _notify
  2015-07-21 09:56:51.836 TRACE nova     super(_SubNotifier, self)._notify(ctxt, event_type, payload, priority)
  2015-07-21 09:56:51.836 TRACE nova   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/notify/notifier.py", line 168, in _notify
  2015-07-21 09:56:51.836 TRACE nova     ctxt = self._serializer.serialize_context(ctxt)
  2015-07-21 09:56:51.836 TRACE nova   File "/opt/stack/nova/nova/rpc.py", line 114, in serialize_context
  2015-07-21 09:56:51.836 TRACE nova     return context.to_dict()
  2015-07-21 09:56:51.836 TRACE nova AttributeError: 'NoneType' object has no attribute 'to_dict'
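
  For bug 1 the likely one-line fix is to use the builtin getattr()
  instead of a non-existent method on the notifier (a sketch, not the
  merged patch); bug 2 then surfaces because this wrapped notify path
  is invoked with a None context:

    # _SubNotifier has no .getattr(); use the builtin.
    method = getattr(notifier, CONF.default_notification_level.lower())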

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1476931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476490] Re: wrong expected code for os-resetState action

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1476490

Title:
  wrong expected code for os-resetState  action

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The os-resetState action never returns a 400 (BadRequest) error, so
  400 needs to be removed from its list of expected response codes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1476490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483486] Re: Ironic: get_available_resource doesn't report numa_topology

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483486

Title:
  Ironic: get_available_resource doesn't report numa_topology

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  in update_available_resource of RT we have

  # TODO(berrange): remove this once all virt drivers are updated
  # to report topology
  if "numa_topology" not in resources:
  resources["numa_topology"] = None

  By searching the code, all other hypervisors report numa_topology in
  get_available_resource except the ironic driver.

  This can be improved.
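
  A minimal sketch of the improvement (method body abridged; helper
  names are illustrative):

    def get_available_resource(self, nodename):
        resources = self._node_resource(self._get_node(nodename))
        # Bare metal exposes no NUMA topology to report yet, but setting
        # the key lets the resource tracker drop its compatibility shim.
        resources['numa_topology'] = None
        return resources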

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482594] Re: __getitem__ is not returning any value

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482594

Title:
  __getitem__ is not returning any value

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There are 2 cases where the __getitem__ method in the fake_volume
  module does not return any value. A get method like this is expected
  to return a result.
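
  A minimal sketch of the missing return (attribute names are
  illustrative):

    def __getitem__(self, key):
        # Previously the lookup result was dropped; __getitem__ must
        # return it to the caller.
        return self.vol[key]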

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1482594/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479532] Re: Ironic driver needs to handle nodes in CLEANWAIT state

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1479532

Title:
  Ironic driver needs to handle nodes in CLEANWAIT state

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Ironic recently added a new state 'CLEANWAIT' [1] so nodes that
  previously had their provision state as CLEANING will now be in either
  CLEANING or CLEANWAIT.

  The ironic driver in nova needs to be updated to know about this new
  CLEANWAIT state. In particular, when destroying an instance, from
  nova's perspective, the instance has been removed when a node is in
  CLEANWAIT state.

  [1] Ic2bc4f147f68947f53d341fda5e0c8d7b594a553
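
  A minimal sketch of the state handling (constant names follow
  nova.virt.ironic.ironic_states; treat as illustrative):

    from nova.virt.ironic import ironic_states

    _CLEAN_STATES = (ironic_states.CLEANING, ironic_states.CLEANWAIT)

    def _instance_is_gone(node):
        # For destroy(), a node that has moved on to cleaning no
        # longer hosts the instance.
        return node.provision_state in _CLEAN_STATES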

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1479532/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475353] Re: _get_host_sysinfo_serial_os fails with different exceptions if the machine-id file is not present or if it is empty

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475353

Title:
  _get_host_sysinfo_serial_os fails with different exceptions if the
  machine-id file is not present or if it is empty

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When the sysinfo_serial config parameter for the libvirt driver is
  set to 'os', the defined behavior is:
  - return the value found in the /etc/machine-id file, or
  - return an error if the file is not present.
  There is an additional scenario where the file is present but empty
  (https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1413293); in
  that case we want nova to behave as if the file were missing.
  At the moment a missing file and an empty file raise different
  exceptions:
  - IOError for a missing file
  - IndexError for an empty file
  Both of these errors are too general and don't give much help in
  debugging the issue.

  Please note that we do not want to fix the issue about a missing/empty
  machine-id file, that is something related to a bad OS
  installation/configuration or a bad image, the proposed fix is just to
  keep nova behaves consistently and giving back to the user a more
  clear and precise error.
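
  A minimal sketch of the consistent behaviour being proposed (the
  exception type is illustrative):

    def _get_host_sysinfo_serial_os():
        try:
            with open('/etc/machine-id') as f:
                machine_id = f.read().strip()
        except IOError:
            machine_id = None
        if not machine_id:
            # Missing and empty files now fail the same way.
            raise exception.NovaException(
                'Unable to get host UUID: /etc/machine-id '
                'is missing or empty')
        return machine_id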

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475353/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477159] Re: VMware: volume's vmdk uuid exists in instance's vmx file even after volume detach

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1477159

Title:
  VMware: volume's vmdk uuid exists in instance's vmx file even after
  volume detach

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Steps to reproduce:

  * Attach volume with ID = foo to a nova instance
  * Verify the instance VM's vmx file on the ESX host (should contain entry volume-foo = )
  * Detach the volume
  * Verify the instance VM's vmx file on the ESX host (still contains entry volume-foo = )

  Expected behavior:

  After volume detach, the vmx file shouldn't contain any
  volume-related entry.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1477159/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481078] Re: auto_disk_config image property incorrectly treated as boolean during rescue mode boot

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1481078

Title:
  auto_disk_config image property incorrectly treated as boolean during
  rescue mode boot

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Introduced in Kilo release here:
  
https://github.com/openstack/nova/blame/90e1eacee8da05bed2b061b8df5fc4fbf5057bb2/nova/virt/xenapi/vmops.py#L707

  The auto_disk_config value is a string on the image, but is being used
  as if it were a boolean value.  As a result, even an auto_disk_config
  value of "False" on the image will result in nova attempting to resize
  the root disk.
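
  A minimal sketch of the correct handling (dict-style image metadata
  assumed; bool_from_string is real oslo_utils API):

    from oslo_utils import strutils

    raw = image_meta.get('properties', {}).get('auto_disk_config', 'false')
    auto_disk_config = strutils.bool_from_string(raw)
    # "False" now parses to False instead of being truthy as a string.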

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1481078/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483993] Re: Fix three typos on nova/pci directory

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483993

Title:
  Fix three typos on nova/pci directory

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Fix three typos on nova/pci directory

  QuicAssist=> QuickAssist
  comptue => compute
  trackes=>tracks

  I was looking at the following report and found them:
  https://bugs.launchpad.net/openstack-manuals/+bug/1381017

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475752] Re: standards_filters don't--update doc

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475752

Title:
  standards_filters don't--update doc

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In doc/source/filter_scheduler.rst there is a reference to
  "standard_filters". These haven't existed for some time. Please update
  the document with the correct terminology.

  Reference:
  nova master as of ee61c076b6772f26ceb84941e397085d11af7d18

  The default is actually all_filters and will be fixed in a changeset
  shortly.

  Webref:
  http://docs.openstack.org/developer/nova/devref/filter_scheduler.html
  as of 2015-07-17

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475752/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486041] Re: IntegrityError: NOT NULL constraint failed: tags.resource_id during instance_tag_set

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486041

Title:
  IntegrityError: NOT NULL constraint failed: tags.resource_id during
  instance_tag_set

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Steps to reproduce:

   1. create instance
   2. set ['tag1', 'tag2'] to the instance
   3. set ['tag1'] to the instance

  Expected result:
  the tags are set successfully.

  Actual result:

  2015-08-18 16:26:36,601 ERROR [oslo_db.sqlalchemy.exc_filters] DBAPIError 
exception wrapped from (sqlite3.IntegrityError) NOT NULL constraint failed: 
tags.resource_id [SQL: u'INSERT INTO tags DEFAULT VALUES']
  Traceback (most recent call last):
File 
"/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1139, in _execute_context
  context)
File 
"/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
 line 450, in do_execute
  cursor.execute(statement, parameters)
  IntegrityError: NOT NULL constraint failed: tags.resource_id

  It happens because the 'to_add' list in the 'instance_tag_set' method
  is empty in the second case: step 3 must delete one tag and create
  zero tags. So to fix the bug we must check the 'to_add' list, as
  sketched below.
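
  A minimal sketch of that check, with illustrative model/session names
  rather than the exact nova db API code:

    def _add_tags(session, models, resource_id, to_add):
        # Without this guard an empty bulk insert is emitted, which
        # SQLite renders as "INSERT INTO tags DEFAULT VALUES" and
        # rejects with the NOT NULL constraint error shown above.
        if not to_add:
            return
        session.execute(
            models.Tag.__table__.insert(),
            [{'resource_id': resource_id, 'tag': t} for t in to_add])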

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1486041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479361] Re: missing _ in API documentation for hw properties

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1479361

Title:
  missing _ in API documentation for hw properties

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There is a missing _ in the API documentation for the cpu_max_threads,
  cpu_max_cores and cpu_max_sockets extra specs.

  According to
  http://docs.openstack.org/developer/nova/api/nova.virt.hardware.html,
  these are spelt cpu_maxthreads and so on. According to the code, it is
  cpu_max_threads (as it is in the spec,
  http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented
  /virt-driver-vcpu-topology.html).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1479361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476530] Re: Iscsi session still connect after detach volume from paused instance

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1476530

Title:
  Iscsi session still connect after detach volume from paused instance

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  My test environment are lvm/ISCSI,  libvirt/QEMU

  How to reproduce:
  1. create instance
  2 .create volume
  3. attach volume to instance
  4. pause instance
  5. detach volume from instance

  Nova will not disconnect the volume. You can run the following command
  to verify:
  > sudo iscsiadm -m node --rescan
  It will display the session that was built in the previous steps.
  You can also see that the device still exists in /sys/block.

  This happens because nova scans all block devices defined in the XML
  of every guest, then disconnects only those iSCSI devices that exist
  in /dev/disk/by-path but are not defined in any guest. A paused
  instance's XML definition still contains the device we want to remove,
  so nova never disconnects this volume.

  There are two workarounds:
  1. Log out of the iSCSI connection manually (sudo iscsiadm -m node -T Target --logout)
  2. Reattach the same volume.

  But we still need to handle this bug for paused instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1476530/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482019] Re: resource leak when launching pci instance on host that don't have enough pci resources

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482019

Title:
  resource leak when launching pci instance on host that don't have
  enough pci resources

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I specified a host to boot an instance with a PCI request, but the
  host has no PCI devices, like below:

   nova boot --image cirros-0.3.2-x86_64-disk \
     --nic net-id=d9eee163-f148-4244-92c5-ffda7d9db06a \
     --flavor chenrui_f --availability ::devstack chenrui_pci

  An exception is raised from self.pci_stats.apply_requests in
  HostState.consume_from_instance.

  https://github.com/openstack/nova/blob/master/nova/pci/stats.py#L234

  But at this point, part of the compute resources (ram, disk, vcpus and
  so on) has already been consumed, and there is no revert logic to
  release them when the exception is raised. I think it's a resource
  leak; a sketch of the missing revert follows below.
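
  A sketch of the revert logic meant here, with illustrative attribute
  names rather than nova's actual HostState code:

    class HostState(object):
        def __init__(self, free_ram_mb, vcpus_total, pci_stats):
            self.free_ram_mb = free_ram_mb
            self.vcpus_total = vcpus_total
            self.vcpus_used = 0
            self.pci_stats = pci_stats

        def consume_from_instance(self, instance):
            # The "cheap" resources are consumed first, as today.
            self.free_ram_mb -= instance['memory_mb']
            self.vcpus_used += instance['vcpus']
            try:
                # May raise if the host can't satisfy the PCI request.
                self.pci_stats.apply_requests(
                    instance.get('pci_requests') or [])
            except Exception:
                # Revert the partial consumption so failed schedules
                # don't drain the host's reported resources.
                self.free_ram_mb += instance['memory_mb']
                self.vcpus_used -= instance['vcpus']
                raise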

  I booted 12 instances; in the nova-scheduler.log excerpt below you can
  see the reported resources steadily decreasing. In the end I had to
  restart nova-scheduler, or I couldn't boot any more instances.

  stack@devstack:/opt/stack/logs$ tailf screen-n-sch.log | fgrep 'Selected host: WeighedHost'
  2015-05-11 15:54:45.735 DEBUG nova.scheduler.filter_scheduler 
[req-11dcc5ee-586a-472f-afa0-260c282676e3 admin admin] Selected host: 
WeighedHost [host: (devstack, devstack) ram:14509 disk:76800 io_ops:0 
instances:2, weight: 0.965914386526] _schedule 
/opt/stack/nova/nova/scheduler/filter_scheduler.py:158
  2015-05-11 15:54:53.620 DEBUG nova.scheduler.filter_scheduler 
[req-a88af594-2633-4527-8d8b-4db8feef7489 admin admin] Selected host: 
WeighedHost [host: (devstack, devstack) ram:13997 disk:75776 io_ops:0 
instances:3, weight: 0.931828773051] _schedule 
/opt/stack/nova/nova/scheduler/filter_scheduler.py:158
  2015-05-11 15:54:58.849 DEBUG nova.scheduler.filter_scheduler 
[req-8a79ad56-eb1b-4bc8-8573-d387bfc38184 admin admin] Selected host: 
WeighedHost [host: (devstack, devstack) ram:13485 disk:74752 io_ops:0 
instances:4, weight: 0.897743159577] _schedule 
/opt/stack/nova/nova/scheduler/filter_scheduler.py:158
  2015-05-11 15:55:05.956 DEBUG nova.scheduler.filter_scheduler 
[req-e2a3577a-e739-406b-957a-3bc8fc16a7d8 admin admin] Selected host: 
WeighedHost [host: (devstack, devstack) ram:12973 disk:73728 io_ops:0 
instances:5, weight: 0.863657546102] _schedule 
/opt/stack/nova/nova/scheduler/filter_scheduler.py:158
  2015-05-11 15:55:10.868 DEBUG nova.scheduler.filter_scheduler 
[req-6f943265-dfc7-473a-a9df-3e078c7abb08 admin admin] Selected host: 
WeighedHost [host: (devstack, devstack) ram:12461 disk:72704 io_ops:0 
instances:6, weight: 0.829571932628] _schedule 
/opt/stack/nova/nova/scheduler/filter_scheduler.py:158
  2015-05-11 15:55:43.500 DEBUG nova.scheduler.filter_scheduler 
[req-e171dcfd-373e-4ff9-b7de-e8d8d977b727 admin admin] Selected host: 
WeighedHost [host: (devstack, devstack) ram:11949 disk:71680 io_ops:0 
instances:7, weight: 0.795486319153] _schedule 
/opt/stack/nova/nova/scheduler/filter_scheduler.py:158
  2015-05-11 15:55:55.551 DEBUG nova.scheduler.filter_scheduler 
[req-522f9d71-35ed-44bb-b308-d3f78374c24e admin admin] Selected host: 
WeighedHost [host: (devstack, devstack) ram:11437 disk:70656 io_ops:0 
instances:8, weight: 0.761400705679] _schedule 
/opt/stack/nova/nova/scheduler/filter_scheduler.py:158
  2015-05-11 15:56:13.723 DEBUG nova.scheduler.filter_scheduler 
[req-106cccfb-4778-4eb7-90d8-a97d4a62de8c admin admin] Selected host: 
WeighedHost [host: (devstack, devstack) ram:10925 disk:69632 io_ops:0 
instances:9, weight: 0.727315092204] _schedule 
/opt/stack/nova/nova/scheduler/filter_scheduler.py:158
  2015-05-11 15:57:43.972 DEBUG nova.scheduler.filter_scheduler 
[req-c054d26e-ca44-4375-991c-531418791806 admin admin] Selected host: 
WeighedHost [host: (devstack, devstack) ram:10413 disk:68608 io_ops:0 
instances:10, weight: 0.69322947873] _schedule 
/opt/stack/nova/nova/scheduler/filter_scheduler.py:158
  2015-05-11 15:57:54.557 DEBUG nova.scheduler.filter_scheduler 
[req-92684590-df86-4c0e-a359-6f661ee0cd23 admin admin] Selected host: 
WeighedHost [host: (devstack, devstack) ram:9901 disk:67584 io_ops:0 
instances:11, weight: 0.659143865255] _schedule 
/opt/stack/nova/nova/scheduler/filter_scheduler.py:158
  2015-05-11 15:58:24.918 DEBUG nova.scheduler.filter_scheduler 
[req-eb7443d4-8617-4986-8d33-c8f44646d769 admin admin] Selected host: 
WeighedHost [host: (devstack, devstack) ram:9389 disk:66560 io_ops:0 
instances:12, weight: 0.625058251781] _schedule 
/opt/stack/nova/nova/scheduler/filter_scheduler.py:158
  2015-05-11 15:59:53.188 DEBUG nova.scheduler.filter_scheduler 
[req-416e2d3b-a601-463b-948e-c6fe27341398 admin admin] Selected host: 

[Yahoo-eng-team] [Bug 1482816] Re: Nova should not return a 500 on floating ip create with ipv6

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482816

Title:
  Nova should not return a 500 on floating ip create with ipv6

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Neutron does not support creating or using floating ips on an ipv6
  network. When nova gets a create floating ip call and does the
  passthrough call to neutron, it will get a 400 response from neutron
  saying there is no ipv4 subnet available. This goes unhandled by nova
  and causes a 500 response. Since this is an expected error, nova
  should catch it and return a 400 to the user for the same reason, as
  sketched below.
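
  A sketch of the handling being asked for (illustrative names, not
  nova's actual API code):

    import webob.exc
    from neutronclient.common import exceptions as neutron_exc

    def create_floating_ip(client, network_id):
        try:
            return client.create_floatingip(
                {'floatingip': {'floating_network_id': network_id}})
        except neutron_exc.BadRequest as e:
            # Neutron's 400 (e.g. no IPv4 subnet on the network) is an
            # expected error; surface it instead of an unhandled 500.
            raise webob.exc.HTTPBadRequest(explanation=str(e))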

  For example logs with the unhandled exception, see:

  http://logs.openstack.org/47/210647/2/check/gate-tempest-dsvm-neutron-
  full/ea565de/logs/screen-n-api.txt.gz?level=TRACE

  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack Traceback (most recent 
call last):
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/__init__.py", line 128, in __call__
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack return 
req.get_response(self.application)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in 
call_application
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack return 
resp(environ, start_response)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack return 
self.func(req, *args, **kwargs)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 434, in __call__
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack response = 
req.get_response(self._app)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in 
call_application
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack return 
resp(environ, start_response)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack return 
resp(environ, start_response)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in 
__call__
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack response = 
self.app(environ, start_response)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack return 
resp(environ, start_response)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack return 
self.func(req, *args, **kwargs)
  2015-08-08 01:55:33.923 20521 ERROR nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.p

[Yahoo-eng-team] [Bug 1481271] Re: formely is not correct

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1481271

Title:
  formely is not correct

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
   nova/doc/ext/support_matrix.py
  This document says Virtuozzo was "formely" named Parallels;
  "formely" is not correct (it should be "formerly").

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1481271/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485631] Re: CPU/RAM overcommit treated differently by "normal" and "NUMA topology" case

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485631

Title:
  CPU/RAM overcommit treated differently by "normal" and "NUMA topology"
  case

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Currently in the NUMA topology case (so multi-node guest, dedicated
  CPUs, hugepages in the guest, etc.) a single guest is not allowed to
  consume more CPU/RAM than the host actually has in total regardless of
  the specified overcommit ratio.  In other words, the overcommit ratio
  only applies when the host resources are being used by multiple
  guests.  A given host resource can only be used once by any particular
  guest.

  So as an example, if the host has 2 pCPUs in total for guests, a
  single guest instance is not allowed to use more than 2CPUs but you
  might be able to have 16 such instances running. (Assuming default CPU
  overcommit ratio.)

  However, this is not true when the NUMA topology is not involved.  In
  that case a host with 2 pCPUs would allow a guest with 3 vCPUs to be
  spawned.

  We should pick one behaviour as "correct" and adjust the other one to
  match.  Given that the NUMA topology case was discussed more recently,
  it seems reasonable to select it as the "correct" behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1485631/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485082] Re: Host Manager should use monitor metric objects

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485082

Title:
  Host Manager should use monitor metric objects

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Per bug 1468012 we changed the monitor metric reporting to use
  versioned Monitor Metric objects instead of plain old dictionaries.

  This bug is being filed to address a code refactoring needed inside
  nova/scheduler/host_manager.py: the _update_metrics_from_compute_node
  method should use the monitor metric object instead of the current
  non-object implementation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1485082/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483613] Re: It may be possible to request (un)pinning of CPUs not in the NUMA cpuset

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483613

Title:
  It may be possible to request (un)pinning of CPUs not in the NUMA
  cpuset

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There's already a check to ensure pinned CPUs are unpinned and vice
  versa, but none to ensure the CPUs are in the known set. This could
  lead to an invalid system state and emergent bugs.
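
  A sketch of the missing validation (illustrative names, not the actual
  NUMA cell code):

    def pin_cpus(cpuset, pinned_cpus, cpus):
        cpus = set(cpus)
        if not cpus <= cpuset:
            # Reject CPUs the cell doesn't know about instead of
            # silently recording an impossible pinning.
            raise ValueError('CPUs %s are not part of cpuset %s'
                             % (sorted(cpus - cpuset), sorted(cpuset)))
        if cpus & pinned_cpus:
            raise ValueError('some requested CPUs are already pinned')
        return pinned_cpus | cpus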

  I noticed this via code inspection during Liberty. I don't know if
  it's possible to hit externally but it seems like a potential bug.
  John Garbutt encouraged me to open this for advertising.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475411] Re: During post_live_migration the nova libvirt driver assumes that the destination connection info is the same as the source, which is not always true

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475411

Title:
  During post_live_migration the nova libvirt driver assumes that the
  destination connection info is the same as the source, which is not
  always true

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The post_live_migration step for Nova libvirt driver is currently
  making a bad assumption about the source and destination connector
  information. The destination connection info may be different from the
  source which ends up causing LUNs to be left dangling on the source as
  the BDM has overridden the connection info with that of the
  destination.

  Code section where this problem is occuring:

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6036

  At line 6038 the potentially wrong connection info will be passed to
  _disconnect_volume which then ends up not finding the proper LUNs to
  remove (and potentially removes the LUNs for a different volume
  instead).

  By adding debug logging after line 6036 and then comparing that to the
  connection info of the source host (by making a call to Cinder's
  initialize_connection API) you can see that the connection info does
  not match:

  http://paste.openstack.org/show/TjBHyPhidRuLlrxuGktz/

  Version of nova being used:

  commit 35375133398d862a61334783c1e7a90b95f34cdb
  Merge: 83623dd b2c5542
  Author: Jenkins 
  Date:   Thu Jul 16 02:01:05 2015 +

  Merge "Port crypto to Python 3"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470154] Re: List objects should use obj_relationships

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470154

Title:
  List objects should use obj_relationships

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Nova's List-based objects have something called child_versions, which
  is a naive mapping between the list object's version and the version
  of its content objects. This was created
  before we generalized the work in obj_relationships, which normal
  objects now use. The list-based objects still use child_versions,
  which means we need a separate test and separate developer behaviors
  when updating these.

  For consistency, we should replace child_versions on all the list
  objects with obj_relationships, remove the list-specific test in
  test_objects.py, and make sure that the generalized tests properly
  cover list objects and relationships between list and non-list
  objects.
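
  For illustration, the two mappings look roughly like this (the version
  numbers are made up):

    # List-specific form: list version -> version of the 'objects' items
    child_versions = {
        '1.0': '1.0',
        '1.1': '1.1',
    }

    # Generalized form used by other objects: field name -> ordered
    # (object version, field version) pairs
    obj_relationships = {
        'objects': [('1.0', '1.0'), ('1.1', '1.1')],
    }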

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483287] Re: test_models_sync() will be broken on upcoming Alembic versions

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483287

Title:
  test_models_sync() will be broken on upcoming Alembic versions

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  test_models_sync() is currently making assumptions that won't hold in
  the upcoming Alembic releases (0.7.7 and 0.8.0 respectively).
  Unless we fix it now, it's going to break the gate when those releases
  of Alembic are cut.

  Mike Bayer's comment in the original patch:

  
https://review.openstack.org/#/c/192760/14/nova/tests/unit/db/test_migrations.py,cm

  ML thread:

  http://lists.openstack.org/pipermail/openstack-
  dev/2015-August/071638.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483287/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471978] Re: test_relationships() uses subobject version instead of relationship version

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471978

Title:
  test_relationships() uses subobject version instead of relationship
  version

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Part of test_relationships() is spent building the subobject tree of
  each nova object
  
(http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/unit/objects/test_objects.py#n1274).
  In _build_tree(), a tree is built with all objects in the nova
  registry, and then it finds the version of each subobject that this
  object should be holding (only down 1 level). This version that it
  should be holding needs to be determined by obj_relationships (or
  child_versions until https://bugs.launchpad.net/nova/+bug/1470154 is
  fixed).

  In _build_tree(), the versions it should be holding is determined by
  sub_obj_class.VERSION instead of what is in obj_relationships. This
  causes the static tree used in test_relationships to be testing
  against the most recent version of the subobjects instead of testing
  against the subobject version held in obj_relationships.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1471978/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480222] Re: hw:mem_page_size=2MB|1GB unsupported

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480222

Title:
  hw:mem_page_size=2MB|1GB unsupported

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The spec "Virt driver large pages allocation for guest RAM" (
  http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented
  /virt-driver-large-pages.html ) was marked as complete for Kilo.

  However, the option to request the standard hugepage sizes 2MB and 1GB
  is not supported.

  The flavor extra spec key hw:mem_page_size=2MB|1GB is not supported.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480222/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468696] Re: Move cold migration from conductor's manager to separate task

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1468696

Title:
  Move cold migration from conductor's manager to separate task

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  To increase the manageability of the cold-migration/resize process and
  make the code clearer, a new task similar to the live-migration one
  should be created in the conductor. While moving the logic, a class
  hierarchy for tasks should be created; it allows all future task
  classes to share a similar interface, as sketched below.
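
  A sketch of the kind of hierarchy meant here (class and method names
  are illustrative, not the actual conductor code):

    import abc

    import six

    @six.add_metaclass(abc.ABCMeta)
    class TaskBase(object):
        """Common interface for conductor tasks."""

        def execute(self):
            try:
                return self._execute()
            except Exception:
                self.rollback()
                raise

        @abc.abstractmethod
        def _execute(self):
            """Do the actual work of the task."""

        def rollback(self):
            """Undo partial work; a no-op by default."""

    class ColdMigrateTask(TaskBase):
        def __init__(self, context, instance, flavor):
            self.context = context
            self.instance = instance
            self.flavor = flavor

        def _execute(self):
            # The cold-migration/resize logic would move here from
            # the conductor manager.
            pass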

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1468696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487070] Re: Only CPU compute monitors are loaded by compute.monitors extension loader

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487070

Title:
  Only CPU compute monitors are loaded by compute.monitors extension
  loader

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There is a TODO in the extension loader for compute monitors:

  # TODO(jaypipes): Right now, we only have CPU monitors, so we don't
  # need to check if the plugin is a CPU monitor or not. Once non-CPU
  # monitors are added, change this to check either the base class or
  # the set of metric names returned to ensure only a single CPU
  # monitor is loaded at any one time.

  We need a mechanism to load other types of compute monitors than CPU.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474079] Re: Cross-site web socket connections fail on Origin and Host header mismatch

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474079

Title:
  Cross-site web socket connections fail on Origin and Host header
  mismatch

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The Kilo web socket proxy implementation for Nova consoles added an
  Origin header validation to ensure the Origin hostname matches the
  hostname from the Host header.  This was a result of the following XSS
  security bug:  https://bugs.launchpad.net/nova/+bug/1409142
  (CVE-2015-0259)

  In other words, this requires that the web UI being used (Horizon, or
  whatever) have a URL hostname which is the same as the hostname by
  which the console proxy is accessed.  This is a safe assumption for
  Horizon.  However, we have a use case where our (custom) UI runs at a
  different URL than does the console proxies, and thus we need to allow
  cross-site web socket connections.  The patch for 1409142
  (https://github.secureserver.net/cloudplatform/els-
  nova/commit/fdb73a2d445971c6158a80692c6f74094fd4193a) breaks this
  functionality for us.

  We would like some way to enable controlled cross-site web socket
  connections to the console proxy services, maybe via a nova config
  parameter providing a list of allowed origin hosts, as sketched below.
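
  A sketch of such a check (the whitelist parameter is hypothetical):

    from six.moves.urllib import parse as urlparse

    def origin_allowed(origin_header, host_header, allowed_origins=()):
        # Accept the Origin if its hostname matches the Host header,
        # as today, or if an operator has whitelisted it.
        origin_host = urlparse.urlparse(origin_header).hostname
        expected_host = host_header.split(':')[0]
        return (origin_host == expected_host
                or origin_host in allowed_origins)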

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1474079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480226] Re: SAWarning: The IN-predicate on tags.tag was invoked with an empty sequence

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480226

Title:
  SAWarning: The IN-predicate on tags.tag was invoked with an empty
  sequence

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When the 'to_delete' list of instance tags in db method
  instance_tag_set() is empty, warnings are printed in the nova logs:

  SAWarning: The IN-predicate on "tags.tag" was invoked with an empty
  sequence. This results in a contradiction, which nonetheless can be
  expensive to evaluate. Consider alternative strategies for improved
  performance.

  The fix is to not query the DB in that case.
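
  A sketch of that guard (illustrative model/session names):

    def _delete_tags(session, models, resource_id, to_delete):
        if not to_delete:
            return  # skip the query; an empty IN() triggers the warning
        session.query(models.Tag).\
            filter_by(resource_id=resource_id).\
            filter(models.Tag.tag.in_(to_delete)).\
            delete(synchronize_session=False)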

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471216] Re: Rebuild detaches block devices when instance is still powered on

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471216

Title:
  Rebuild detaches block devices when instance is still powered on

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Because rebuild detaches block devices while the instance is still
  powered on, data written to attached volumes can be lost if it hasn't
  been fsynced yet.

  We can prevent this by allowing instance to shut down gracefully
  before detaching block devices during rebuild.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1471216/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468513] Re: hacking check needed for using greenthread.spawn()

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1468513

Title:
  hacking check needed for using greenthread.spawn()

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Given https://review.openstack.org/#/c/183144/ and bug 1404268, we
  should have a hacking check such that anytime someone tries to use
  greenthread.spawn() in code it's a pep8 failure - they should be using
  nova.utils.spawn() instead.

  nova-specific hacking checks go here:

  http://git.openstack.org/cgit/openstack/nova/tree/nova/hacking/checks.py
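
  A sketch of such a check, following the pattern of the existing ones
  (the N340 code is made up):

    import re

    _spawn_re = re.compile(r'greenthread\.spawn(_n)?\(')

    def check_greenthread_spawns(logical_line):
        """N340 - use nova.utils.spawn() instead of greenthread.spawn()."""
        if _spawn_re.search(logical_line):
            yield (0, 'N340: Use nova.utils.spawn() rather than '
                      'greenthread.spawn()')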

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1468513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468431] Re: Nova delayed instance lifecycle events issue

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1468431

Title:
  Nova delayed instance lifecycle events issue

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The instance lifecycle events can be delayed, thus not reflecting the
  current instance power state.

  Some drivers may power off/on the instance during operations such as
  rescue or resize. If the event is handled by the manager after the
  operation finishes and the instance task state is set to "None", the
  manager can attempt to call the stop API, even if the instance is
  currently active.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1468431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478108] Re: Live migration should throttle itself

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1478108

Title:
  Live migration should throttle itself

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Nova will accept an unbounded number of live migrations for a single
  host, which will result in timeouts and failures (at least for
  libvirt). Since live migrations are seriously IO intensive, allowing
  this to be unlimited is just never going to be the right thing to do,
  especially when we have functions in our own client to live migrate
  all instances to other hosts (nova host-evacuate-live).

  We recently added a build semaphore to allow capping the number of
  parallel builds being attempted on a compute host for a similar
  reason. This should be the same sort of thing for live migration.
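
  A sketch of the throttling idea (the limit and names are
  illustrative):

    import eventlet

    MAX_CONCURRENT_LIVE_MIGRATIONS = 1
    _live_migration_semaphore = eventlet.semaphore.Semaphore(
        MAX_CONCURRENT_LIVE_MIGRATIONS)

    def live_migrate(instance, dest, do_live_migration):
        # Extra requests queue up here instead of all running at once
        # and starving each other of IO bandwidth.
        with _live_migration_semaphore:
            do_live_migration(instance, dest)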

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1478108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468212] Re: Instance action event for live-migration is missing

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1468212

Title:
  Instance action event for live-migration is missing

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We have the instance action and action event for most of the instance
  operations, except live-migration. In the current master code, when
  we do live-migration, the instance action is recorded, but the action
  event for live-migration is lost.

  Version: Master

  Bug Details:

  Migrate the server:

  root@controller:/var/log/nova# nova list
  +--------------------------------------+------+-----------+------------+-------------+---------------------------+
  | ID                                   | Name | Status    | Task State | Power State | Networks                  |
  +--------------------------------------+------+-----------+------------+-------------+---------------------------+
  | f4070134-f9f0-4314-951c-80b9c7e80499 | test | MIGRATING | migrating  | Running     | internal-net=10.10.10.187 |
  +--------------------------------------+------+-----------+------------+-------------+---------------------------+
  root@controller:/var/log/nova# nova list
  +--------------------------------------+------+--------+------------+-------------+---------------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks                  |
  +--------------------------------------+------+--------+------------+-------------+---------------------------+
  | f4070134-f9f0-4314-951c-80b9c7e80499 | test | ACTIVE | -          | Running     | internal-net=10.10.10.187 |
  +--------------------------------------+------+--------+------------+-------------+---------------------------+

  After live-migration, the instance action has been recorded but the action event is missing:
  root@controller:/var/log/nova# nova instance-action-list f4070134-f9f0-4314-951c-80b9c7e80499
  +----------------+------------------------------------------+---------+------------------------+
  | Action         | Request_ID                               | Message | Start_Time             |
  +----------------+------------------------------------------+---------+------------------------+
  | create         | req-789a1956-11a0-4a1b-9063-7adf0ed51f3b | -       | 2015-06-24T07:57:02.00 |
  | live-migration | req-e76f2a5e-79f8-4879-8e41-249ea574aeff | -       | 2015-06-24T08:20:40.00 |
  +----------------+------------------------------------------+---------+------------------------+
  root@controller:/var/log/nova# nova instance-action f4070134-f9f0-4314-951c-80b9c7e80499 req-e76f2a5e-79f8-4879-8e41-249ea574aeff
  +---------------+------------------------------------------+
  | Property      | Value                                    |
  +---------------+------------------------------------------+
  | action        | live-migration                           |
  | events        | []                                       |
  | instance_uuid | f4070134-f9f0-4314-951c-80b9c7e80499     |
  | message       | -                                        |
  | project_id    | 522eda8d23124b25bf03fe44f1986b74         |
  | request_id    | req-e76f2a5e-79f8-4879-8e41-249ea574aeff |
  | start_time    | 2015-06-24T08:20:40.00                   |
  | user_id       | 3917d63e5a2943319fdaebd80fb8b4f2         |
  +---------------+------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1468212/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467916] Re: Hyper-V: get free SCSI controller slot issue on V1

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467916

Title:
  Hyper-V: get free SCSI controller slot issue on V1

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The method retrieving a free SCSI controller slot gets all the related
  disk resources, checking their address using the AddressOnParent
  attribute.

  The issue is that this WMI object attribute is not available in the V1
  virtualization namespace, so the method raises an AttributeError when
  there are disks connected to the SCSI controller. For this reason,
  attaching a second volume will fail.

  This bug affects Windows Server 2008 R2 and Windows Server 2012 when
  using the V1 namespace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467916/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466780] Re: nova libvirt pinning not reflected in VirtCPUTopology

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1466780

Title:
  nova libvirt pinning not reflected in VirtCPUTopology

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Using a CPU policy of dedicated ('hw:cpu_policy=dedicated') results in
  vCPUs being pinned to pCPUs, per the original blueprint:

  http://specs.openstack.org/openstack/nova-
  specs/specs/kilo/implemented/virt-driver-cpu-pinning.html

  When scheduling instance with this extra spec, it would be expected
  that the 'VirtCPUToplogy' object used by 'InstanceNumaCell' objects
  (which are in turn used by an 'InstanceNumaTopology' object) should
  bear some reflection on the actual configuration. For example, a VM
  booted with four vCPUs and the 'dedicated' CPU policy should have NUMA
  topologies similar to one of the below:

  VirtCPUTopology(cores=4,sockets=1,threads=1)
  VirtCPUTopology(cores=2,sockets=1,threads=2)
  VirtCPUTopology(cores=1,sockets=2,threads=2)
  ...

  In summary, cores * sockets * threads = vCPUs. However, this does not
  appear to happen.

  ---

  # Testing Configuration

  Testing was conducted on a single-node, Fedora 21-based
  (3.17.8-300.fc21.x86_64) OpenStack instance (built with devstack). The
  system is a dual-socket, 10 core, HT-enabled system (2 sockets * 10
  cores * 2 threads = 40 "pCPUs". 0-9,20-29 = node0, 10-19,30-39 =
  node1). Two flavors were used:

  openstack flavor create --ram 4096 --disk 20 --vcpus 10 demo.no-pinning

  openstack flavor create --ram 4096 --disk 20 --vcpus 10 demo.pinning
  nova flavor-key demo.pinning set hw:cpu_policy=dedicated hw:cpu_threads_policy=separate

  # Results

  Results vary - however, we have seen very random assignments like so:

  For a three vCPU instance:

  (Pdb) p instance.numa_topology.cells[0].cpu_topology
  VirtCPUTopology(cores=10,sockets=1,threads=1)

  For a four vCPU instance:

  VirtCPUTopology(cores=2,sockets=1,threads=2)

  For a ten vCPU instance:

  VirtCPUTopology(cores=7,sockets=1,threads=2)

  The actual underlying libvirt XML is correct, however:

  For example, for a three vCPU instance:

  [libvirt guest XML snippet stripped by the mailing list archive; only
  the memory value "3072" survives]

  UPDATE(23/06/15): The random assignments aren't actually random
  (thankfully). They correspond to the number of free cores in the
  system. The reason they change is that the number of free cores
  changes (as pinned CPUs deplete resources). However, I still don't
  think this is correct/logical.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1466780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467145] Re: Socket related unit tests fail on FreeBSD

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467145

Title:
  Socket related unit tests fail on FreeBSD

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Due to the different behavior of SO_REUSEADDR on Linux and BSD, some
  unit tests fail, e.g.:

  nova.tests.unit.test_wsgi.TestWSGIServer.test_socket_options_for_simple_server
  --

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/test_wsgi.py", line 140, in 
test_socket_options_for_simple_server
  socket.SO_REUSEADDR))
File 
"/usr/home/novel/code/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/usr/home/novel/code/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 1 != 4
  

  Captured pythonlogging:
  ~~~
  2015-06-20 17:32:52,230 INFO [nova.wsgi] test_socket_options listening on 
127.0.0.1:60566
  

  Similar (or I'd say the same) problem was reported and fixed for OS X:
  https://bugs.launchpad.net/nova/+bug/1436895

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467451] Re: Hyper-V: fail to detach virtual hard disks

2015-09-03 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467451

Title:
  Hyper-V: fail to detach virtual hard disks

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The Nova Hyper-V driver fails to detach virtual hard disks when using
  the virtualization v1 WMI namespace.

  The reason is that it looks up the attached resource using the wrong
  resource object connection attribute, and therefore cannot find it.

  This affects Windows Server 2008 as well as Windows Server 2012 when
  the old namespace is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

