[Yahoo-eng-team] [Bug 1365854] [NEW] schedule fail on live migration

2014-09-05 Thread warewang
Public bug reported:

I live-migrated a VM to another host, but the scheduler selected a host
whose hypervisor_type differs from the source host's. After three
retries, the conductor raised an InvalidHypervisorType exception, even
though my cluster does contain hosts with the same hypervisor_type as
the source. The scheduler should therefore prefer hosts whose
hypervisor_type matches the source host's during live migration, and a
hypervisor filter that takes effect on live migration should be added
to the scheduler.
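
The proposed check can be sketched as follows; this is a hypothetical
illustration of the matching logic using plain dicts, not nova's actual
HostState or scheduler-filter API:

```python
# Hypothetical sketch of the proposed check: given the source host's
# hypervisor_type, keep only candidate hosts with the same type. The host
# dicts here are illustrative, not nova's real scheduler data structures.

def filter_by_hypervisor_type(source_type, candidate_hosts):
    """Return only the hosts whose hypervisor_type matches the source."""
    return [h for h in candidate_hosts
            if h.get('hypervisor_type') == source_type]

hosts = [
    {'name': 'node1', 'hypervisor_type': 'QEMU'},
    {'name': 'node2', 'hypervisor_type': 'xen'},
    {'name': 'node3', 'hypervisor_type': 'QEMU'},
]
matching = filter_by_hypervisor_type('QEMU', hosts)
```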

** Affects: nova
 Importance: Undecided
 Assignee: warewang (wangguangcai)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => warewang (wangguangcai)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1365854

Title:
  schedule fail on live migration

Status in OpenStack Compute (Nova):
  New

Bug description:
  I live-migrated a VM to another host, but the scheduler selected a
  host whose hypervisor_type differs from the source host's. After
  three retries, the conductor raised an InvalidHypervisorType
  exception, even though my cluster does contain hosts with the same
  hypervisor_type as the source. The scheduler should therefore prefer
  hosts whose hypervisor_type matches the source host's during live
  migration, and a hypervisor filter that takes effect on live
  migration should be added to the scheduler.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1365854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303865] Re: mandatory fields are not enforced in launch stack

2014-09-05 Thread Thierry Carrez
** Changed in: heat
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1303865

Title:
  mandatory fields are not enforced in launch stack

Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  - go to the Create Stack screen, enter the following valid Heat template:
  heat_template_version: 2013-05-23
  description: >
A single stack with a keypair.

  parameters:
key_name:
  type: string
  default: heat_key3
key_save:
  type: string
  default: false

  resources:
KeyPair:
  type: OS::Nova::KeyPair
  properties:
name: { get_param: key_name }
save_private_key: { get_param: key_save }

  outputs:
PublicKey:
  value: { get_attr: [KeyPair, public_key] }
PrivateKey:
  value: { get_attr: [KeyPair, private_key] }

   - delete the value of one or both of the fields (key_name and/or key_save)
  => you will get a message saying "Error: Stack creation failed."

  In horizon.log you will get:
  2014-04-07 14:49:23,055 7116 DEBUG heatclient.common.http 
  HTTP/1.1 400 Bad Request
  date: Mon, 07 Apr 2014 14:49:23 GMT
  content-length: 301
  content-type: application/json; charset=UTF-8

  {"explanation": "The server could not comply with the request since it
  is either malformed or otherwise incorrect.", "code": 400, "error":
  {"message": "Property error : KeyPair: save_private_key \"\" is not a
  valid boolean", "traceback": null, "type": "StackValidationFailed"},
  "title": "Bad Request"}

  If either of the two fields is mandatory, this should be enforced in
  the form, both with a message and with an asterisk right next to the
  field.
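
The server-side rejection comes from strict boolean parsing. A minimal
sketch of that behaviour, mirroring the error message in the traceback
rather than Heat's actual validation code:

```python
# Illustrative strict boolean parser: an empty string is neither true nor
# false, so it is rejected, producing the '"" is not a valid boolean'
# error seen above. Not Heat's real implementation.
TRUE_STRINGS = ('1', 't', 'true', 'on', 'y', 'yes')
FALSE_STRINGS = ('0', 'f', 'false', 'off', 'n', 'no')

def parse_bool(value):
    v = str(value).strip().lower()
    if v in TRUE_STRINGS:
        return True
    if v in FALSE_STRINGS:
        return False
    raise ValueError('"%s" is not a valid boolean' % value)
```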

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1303865/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365061] Re: Warn against sorting requirements

2014-09-05 Thread Thierry Carrez
** Changed in: designate
   Status: Fix Committed => Fix Released

** Changed in: designate
Milestone: juno-3 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365061

Title:
  Warn against sorting requirements

Status in Cinder:
  Fix Committed
Status in Designate:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Identity  (Keystone) Middleware:
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Python client library for Keystone:
  In Progress
Status in OpenStack Object Storage (Swift):
  Fix Committed

Bug description:
  Contrary to bug 1285478, requirements files should not be sorted
  alphabetically. Given that requirements files can contain comments,
  I'd suggest a header in all requirements files along the lines of:

  # The order of packages is significant, because pip processes them in the 
order
  # of appearance. Changing the order has an impact on the overall integration
  # process, which may cause wedges in the gate later.

  This is the result of a mailing list discussion (thanks, Sean!):

  http://www.mail-archive.com/openstack-d...@lists.openstack.org/msg33927.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1365061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1342274] Re: auth_token middleware in keystoneclient is deprecated

2014-09-05 Thread Thierry Carrez
** Changed in: heat
   Status: Fix Committed => Fix Released

** Changed in: sahara
   Status: Fix Committed => Fix Released

** Changed in: sahara
Milestone: juno-3 => None

** Changed in: designate
   Status: Fix Committed => Fix Released

** Changed in: designate
Milestone: juno-3 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342274

Title:
  auth_token middleware in keystoneclient is deprecated

Status in OpenStack Key Management (Barbican):
  Fix Released
Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Fix Released
Status in Designate:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Keystone:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in OpenStack Object Storage (Swift):
  Fix Released
Status in Openstack Database (Trove):
  Fix Released
Status in OpenStack Messaging and Notifications Service (Zaqar):
  Fix Released

Bug description:
  
  The auth_token middleware in keystoneclient is deprecated and will only get 
security updates. Projects should use the auth_token middleware in 
keystonemiddleware.
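
The switch is typically a one-line change in each service's paste
pipeline configuration; the section name and paths below are
illustrative:

```ini
[filter:authtoken]
# Before (deprecated location):
#   paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# After (maintained package):
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
```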

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1342274/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279611] Re: urlparse is incompatible for python 3

2014-09-05 Thread Thierry Carrez
** Changed in: trove
Milestone: juno-3 => None

** Changed in: trove
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1279611

Title:
   urlparse is incompatible for python 3

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Neutron:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Storyboard database creator:
  Fix Committed
Status in OpenStack Object Storage (Swift):
  New
Status in Tempest:
  In Progress
Status in Openstack Database (Trove):
  Fix Released
Status in Tuskar:
  In Progress
Status in OpenStack Messaging and Notifications Service (Zaqar):
  Fix Released
Status in Zuul: A project gating system:
  Fix Committed

Bug description:
  import urlparse

  should be changed to:
  import six.moves.urllib.parse as urlparse

  for Python 3 compatibility.
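
The compatible import works on both interpreters because six.moves
dispatches to urlparse on Python 2 and urllib.parse on Python 3. A small
demonstration, with a stdlib fallback added only so the snippet also runs
where six isn't installed:

```python
# Suggested import from the report; the except branch is the Python 3
# stdlib equivalent, included so this snippet runs without six.
try:
    import six.moves.urllib.parse as urlparse
except ImportError:
    import urllib.parse as urlparse

parts = urlparse.urlparse('http://example.com:8080/v2/servers?limit=10')
```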

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1279611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2014-09-05 Thread Thierry Carrez
** Changed in: trove
Milestone: juno-3 => None

** Changed in: designate
Milestone: juno-3 => None

** Changed in: designate
   Status: Fix Committed => Fix Released

** Changed in: sahara
Milestone: juno-3 => None

** Changed in: sahara
   Status: Fix Committed => Fix Released

** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Key Management (Barbican):
  Confirmed
Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  Fix Released
Status in Designate:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Triaged
Status in Python client library for Neutron:
  In Progress
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Openstack Database (Trove):
  Fix Committed
Status in Web Services Made Easy:
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.
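
A typical shape of such a failure, as a hypothetical example rather than
code from any of the listed projects: comparing a serialized dict against
a literal string only passes for one hash seed, while serializing with
sorted keys (or comparing parsed structures) is seed-independent:

```python
import json

data = {'name': 'vm1', 'status': 'ACTIVE', 'id': '42'}

# Fragile: before dicts became insertion-ordered, the key order of
# repr(data) depended on PYTHONHASHSEED, so comparing it against a
# literal string failed for most seeds.
# Robust: serialize with sorted keys, or compare parsed structures.
canonical = json.dumps(data, sort_keys=True)
```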

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365887] [NEW] Metadata filtering is broken

2014-09-05 Thread Rushi Agrawal
Public bug reported:

When I make this change

http://paste.openstack.org/show/106339/

to the unit tests, the test fails: the key3=a key-value pair is
matched, but it shouldn't be.

Also, there is no test case for filter values other than the empty
filter.

The problem is that _match_any() does not check whether pattern_list is
actually a list or just a string.

The fix is to normalize it first, e.g. "pattern_list = [pattern_list]
if isinstance(pattern_list, str) else pattern_list".

Unit tests must also be added.

** Affects: nova
 Importance: Undecided
 Assignee: Rushi Agrawal (rushiagr)
 Status: New


** Tags: low-hanging-fruit

** Description changed:

- When I make this change to unit tests, the test fails. key3=a key-value
- pair is matched, but it shouldn't be.
+ When I make this change
+ 
+ http://paste.openstack.org/show/106339/
+ 
+ to unit tests, the test fails. key3=a key-value pair is matched, but it
+ shouldn't be.
  
  Also, there is no test case for different values of filter, except empty
  filter.
  
  The problem is , the def _match_any() is not checking if pattern_list is
  actually a list, or just a string.
  
  The solution will be to add a condition "pattern_list = [pattern_list]
  if isinstance(pattern_list, str)"
  
  Unit tests also must be added

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1365887

Title:
  Metadata filtering is broken

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I make this change

  http://paste.openstack.org/show/106339/

  to the unit tests, the test fails: the key3=a key-value pair is
  matched, but it shouldn't be.

  Also, there is no test case for filter values other than the empty
  filter.

  The problem is that _match_any() does not check whether pattern_list
  is actually a list or just a string.

  The fix is to normalize it first, e.g. "pattern_list = [pattern_list]
  if isinstance(pattern_list, str) else pattern_list".

  Unit tests must also be added.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1365887/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365918] [NEW] UT test_security_groups_rpc.TestSecurityGroupAgentEnhancedRpcWithIptables fails non-deterministically with MismatchError

2014-09-05 Thread Miguel Angel Ajo
Public bug reported:


It's only affecting py26/py27 neutron UTs, not tempest.

http://logs.openstack.org/55/105855/21/check/gate-neutron-python26/f408f8a/console.html#_2014-09-05_05_23_47_990

A fairly broad logstash query that may also match other failures, but mostly finds this one:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTWlzbWF0Y2hFcnJvcjogOCAhPSA0XCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOlsiYnVpbGRfY2hhbmdlIiwibWVzc2FnZSJdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wOC0wMVQwMDowMDowMCswMDowMCIsInRvIjoiMjAxNC0wOS0wNVQwNzozNjo0NyswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDA5OTA1NDcyMTEwLCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9


2014-09-05 05:23:47.988 | FAIL: 
neutron.tests.unit.test_security_groups_rpc.TestSecurityGroupAgentEnhancedRpcWithIptables.test_prepare_remove_port
2014-09-05 05:23:47.988 | tags: worker-3
2014-09-05 05:23:47.988 | 
--
2014-09-05 05:23:47.988 | Traceback (most recent call last):
2014-09-05 05:23:47.988 | _StringException: Empty attachments:
2014-09-05 05:23:47.988 |   pythonlogging:'neutron.api.extensions'
2014-09-05 05:23:47.988 |   stderr
2014-09-05 05:23:47.989 |   stdout
2014-09-05 05:23:47.989 | 
2014-09-05 05:23:47.989 | pythonlogging:'': {{{
2014-09-05 05:23:47.989 | 2014-09-05 05:23:48,051 INFO 
[neutron.agent.securitygroups_rpc] Preparing filters for devices ['tap_port1']
2014-09-05 05:23:47.989 | 2014-09-05 05:23:48,053 INFO 
[neutron.openstack.common.lockutils] Created lock path: 
/tmp/tmp.PvSiI1ldrI/tmpLQG7MG/tmpl7ODUk/lock
2014-09-05 05:23:47.989 | 2014-09-05 05:23:48,054 INFO 
[neutron.agent.securitygroups_rpc] Remove device filter for ['tap_port1']
2014-09-05 05:23:47.989 | }}}
2014-09-05 05:23:47.989 | 
2014-09-05 05:23:47.989 | Traceback (most recent call last):
2014-09-05 05:23:47.989 |   File 
"neutron/tests/unit/test_security_groups_rpc.py", line 2296, in 
test_prepare_remove_port
2014-09-05 05:23:47.989 | self._verify_mock_calls()
2014-09-05 05:23:47.989 |   File 
"neutron/tests/unit/test_security_groups_rpc.py", line 2140, in 
_verify_mock_calls
2014-09-05 05:23:47.990 | self.iptables_execute.call_count)
2014-09-05 05:23:47.990 |   File 
"/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py",
 line 348, in assertEqual
2014-09-05 05:23:47.990 | self.assertThat(observed, matcher, message)
2014-09-05 05:23:47.990 |   File 
"/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py",
 line 433, in assertThat
2014-09-05 05:23:47.990 | raise mismatch_error
2014-09-05 05:23:47.990 | MismatchError: 8 != 4
2014-09-05 05:23:47.990 |

** Affects: neutron
 Importance: Undecided
 Assignee: Miguel Angel Ajo (mangelajo)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365918

Title:
  UT
  test_security_groups_rpc.TestSecurityGroupAgentEnhancedRpcWithIptables
  fails non-deterministically with MismatchError

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  
  It's only affecting py26/py27 neutron UTs, not tempest.

  http://logs.openstack.org/55/105855/21/check/gate-neutron-python26/f408f8a/console.html#_2014-09-05_05_23_47_990

  A fairly broad logstash query that may also match other failures, but
  mostly finds this one:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTWlzbWF0Y2hFcnJvcjogOCAhPSA0XCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOlsiYnVpbGRfY2hhbmdlIiwibWVzc2FnZSJdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wOC0wMVQwMDowMDowMCswMDowMCIsInRvIjoiMjAxNC0wOS0wNVQwNzozNjo0NyswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDA5OTA1NDcyMTEwLCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  
  2014-09-05 05:23:47.988 | FAIL: 
neutron.tests.unit.test_security_groups_rpc.TestSecurityGroupAgentEnhancedRpcWithIptables.test_prepare_remove_port
  2014-09-05 05:23:47.988 | tags: worker-3
  2014-09-05 05:23:47.988 | 
--
  2014-09-05 05:23:47.988 | Traceback (most recent call last):
  2014-09-05 05:23:47.988 | _StringException: Empty attachments:
  2014-09-05 05:23:47.988 |   pythonlogging:'neutron.api.extensions'
  2014-09-05 05:23:47.988 |   stderr
  2014-09-05 05:23:47.989 |   stdout
  2014-09-05 05:23:47.989 | 
  2014-09-05 05:23:47.989 | pythonlogging:'': {{{
  2014-09-05 05:23:47.989 | 2014-09-05 05:23:48,051 INFO 
[neutron.agent.securitygroups_rpc] Preparing filters for devices ['tap_port1']
  2014-09-05 05:23:47.989 | 2014-09-05 05:23:48,053 INFO 
[neutron.openstack.common.lockutils] Created lock path: 
/tmp/tmp.PvSiI1ldrI/tmpLQG7MG/tmpl7ODUk/lock
  2014-09-05 05:23:47.989 | 2014-09-05 05:23:48,05

[Yahoo-eng-team] [Bug 1346424] Re: Baremetal node id not supplied to driver

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346424

Title:
  Baremetal node id not supplied to driver

Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  A random overcloud baremetal node fails to boot during
  check-tripleo-overcloud-f20. Occurs intermittently.

  Full logs:

  
http://logs.openstack.org/26/105326/4/check-tripleo/check-tripleo-overcloud-f20/9292247/
  
http://logs.openstack.org/81/106381/2/check-tripleo/check-tripleo-overcloud-f20/ca8a59b/
  
http://logs.openstack.org/08/106908/2/check-tripleo/check-tripleo-overcloud-f20/e9894ca/

  
  Seed's nova-compute log shows this exception:

  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 ERROR oslo.messaging.rpc.dispatcher 
[req-9f090bea-a974-4f3c-ab06-ebd2b7a5c9e6 ] Exception during message handling: 
Baremetal node id not supplied to driver for 
'e13f2660-b72d-4a97-afac-64ff0eecc448'
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent 
call last):
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 133, in _dispatch_and_reply
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher incoming.message))
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 176, in _dispatch
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 122, in _do_dispatch
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/exception.py", line 88, 
in wrapped
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher payload)
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/exception.py", line 71, 
in wrapped
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher return f(self, 
context, *args, **kw)
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py", 
line 291, in decorated_function
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher pass
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py", 
line 277, in decorated_function
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher return function(self, 
context, *args, **kwargs)
  Jul 21

[Yahoo-eng-team] [Bug 1347795] Re: All baremetal instance going to ERROR state

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347795

Title:
  All baremetal instance going to ERROR state

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  As of approx. 1300 UTC all tripleo CI is failing to bring up instances.

  Looks like the commit that caused it is
  https://review.openstack.org/#/c/71557/

  Just waiting for some CI runs to finish to confirm.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1347795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1354499] Re: boot from volume fails when upgrading using cinder v1 API

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1354499

Title:
  boot from volume fails when upgrading using cinder v1 API

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In c5402ef4fc509047d513a715a1c14e9b4ba9674f we added support for the
  new Cinder v2 API.

  When a user who was previously using the Cinder v1 API (which would
  have been required) updates to the new code, the immediate defaults
  cause the Cinder v2 API to be chosen, because cinder_catalog_info now
  defaults to 'volumev2:cinder:publicURL'. A user relying on the
  previous v1 default of 'volume:cinder:publicURL' would therefore find
  their configuration broken.

  Given the new deprecation code hasn't been released yet, I think we
  need to wait at least one release before making this change to our
  cinder_catalog_info default value.
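
In the meantime, affected deployments could pin the v1 lookup explicitly
in nova.conf; the option name comes from the description, and the value
(format service_type:service_name:endpoint_type, with 'volume' as the v1
service type) is an assumption for illustration:

```ini
[DEFAULT]
# Pin the Cinder v1 catalog entry explicitly instead of relying on the
# changed default.
cinder_catalog_info = volume:cinder:publicURL
```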

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1354499/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311778] Re: Unit tests fail with MessagingTimeout errors

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1311778

Title:
  Unit tests fail with MessagingTimeout errors

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  There is an issue that is causing unit tests to fail with the
  following error:

  MessagingTimeout: No reply on topic conductor
  MessagingTimeout: No reply on topic scheduler

  2014-04-23 13:45:52.017 | Traceback (most recent call last):
  2014-04-23 13:45:52.017 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 133, in _dispatch_and_reply
  2014-04-23 13:45:52.017 | incoming.message))
  2014-04-23 13:45:52.017 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 176, in _dispatch
  2014-04-23 13:45:52.017 | return self._do_dispatch(endpoint, method, 
ctxt, args)
  2014-04-23 13:45:52.017 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 122, in _do_dispatch
  2014-04-23 13:45:52.017 | result = getattr(endpoint, method)(ctxt, 
**new_args)
  2014-04-23 13:45:52.018 |   File "nova/conductor/manager.py", line 798, in 
build_instances
  2014-04-23 13:45:52.018 | legacy_bdm_in_spec=legacy_bdm)
  2014-04-23 13:51:50.628 |   File "nlibvir:  error : internal error could not 
initialize domain event timer
  2014-04-23 13:54:57.953 | ova/scheduler/rpcapi.py", line 120, in run_instance
  2014-04-23 13:54:57.953 | cctxt.cast(ctxt, 'run_instance', **msg_kwargs)
  2014-04-23 13:54:57.953 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/client.py",
 line 150, in call
  2014-04-23 13:54:57.953 | wait_for_reply=True, timeout=timeout)
  2014-04-23 13:54:57.953 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/transport.py",
 line 90, in _send
  2014-04-23 13:54:57.953 | timeout=timeout)
  2014-04-23 13:54:57.954 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/_drivers/impl_fake.py",
 line 166, in send
  2014-04-23 13:54:57.954 | return self._send(target, ctxt, message, 
wait_for_reply, timeout)
  2014-04-23 13:54:57.954 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/_drivers/impl_fake.py",
 line 161, in _send
  2014-04-23 13:54:57.954 | 'No reply on topic %s' % target.topic)
  2014-04-23 13:54:57.954 | MessagingTimeout: No reply on topic scheduler

  

  2014-04-23 13:45:52.008 | Traceback (most recent call last):
  2014-04-23 13:45:52.008 |   File "nova/api/openstack/__init__.py", line 125, 
in __call__
  2014-04-23 13:45:52.008 | return req.get_response(self.application)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/request.py",
 line 1320, in send
  2014-04-23 13:45:52.009 | application, catch_exc_info=False)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/request.py",
 line 1284, in call_application
  2014-04-23 13:45:52.009 | app_iter = application(self.environ, 
start_response)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.009 | return resp(environ, start_response)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.010 | return resp(environ, start_response)
  2014-04-23 13:45:52.010 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.010 | return resp(environ, start_response)
  2014-04-23 13:45:52.010 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.010 | return resp(environ, start_response)
  2014-04-23 13:45:52.010 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/routes/middleware.py",
 line 131, in __call__
  2014-04-23 13:45:52.010 | response = self.app(environ, start_response)
  2014-04-23 13:45:52.011 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 li

[Yahoo-eng-team] [Bug 1361840] Re: nova boot fails with rbd backend

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361840

Title:
  nova boot fails with rbd backend

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Booting a VM in a plain devstack setup with ceph enabled, I get an
  error like:

  libvirtError: internal error: process exited while connecting to
  monitor: qemu-system-x86_64: -drive file=rbd:vmz/27dcd57f-948f-410c-
  830f-
  
48d8fda0d968_disk.config:id=cindy:key=AQA00PxTiFa0MBAAQ9Uq9IVtBwl/pD8Fd9MWZw==:auth_supported=cephx\;none:mon_host=192.168.122.76\:6789,if=none,id
  =drive-ide0-1-1,readonly=on,format=raw,cache=writeback: error reading
  header from 27dcd57f-948f-410c-830f-48d8fda0d968_disk.config

  even though config_drive is set to false.

  This seems to be related to https://review.openstack.org/#/c/112014/;
  if I revert ecce888c469c62374a3cc43e3cede11d8aa1e799, everything works
  fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361840/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360719] Re: unit test test_killed_worker_recover taking 160 seconds

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360719

Title:
  unit test test_killed_worker_recover taking 160 seconds

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  nova patch https://review.openstack.org/#/c/104099/ caused the
  following unit tests to take 160 seconds:

  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_killed_worker_recover
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITestV3.test_killed_worker_recover

  
  This is because Server.wait() now waits for all workers to finish, but 
test_killed_worker_recover doesn't attempt to kill the workers like some of the 
other tests in MultiprocessWSGITest do.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1360719/+subscriptions



[Yahoo-eng-team] [Bug 1358818] Re: extra_specs string check breaks backward compatibility

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358818

Title:
  extra_specs string check breaks backward compatibility

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  We've found that while with Icehouse we were able to specify
  extra_specs values as ints or floats, in Juno the command fails unless
  we make these values strings by quoting them. This breaks backward
  compatibility.

  compare Icehouse:

  curl -k -i -X POST 
http://127.0.0.1:8774/v2/982607a6a1134514abac252fc25384ad/flavors/1/os-extra_specs
 -H "X-Auth-Token: *" -H "Accept: application/json" -H "Content-Type: 
application/json" -d 
'{"extra_specs":{"powervm:proc_units":"0.2","powervm:processor_compatibility":"default","powervm:min_proc_units":"0.1","powervm:max_proc_units":"0.5","powervm:min_vcpu":1,"powervm:max_vcpu":5,"powervm:min_mem":1024,"powervm:max_mem":4096,"powervm:availability_priority":127,"powervm:dedicated_proc":"false","powervm:uncapped":"true","powervm:shared_weight":128}}';
 echo
  HTTP/1.1 200 OK
  Content-Type: application/json
  Content-Length: 385
  X-Compute-Request-Id: req-9132922d-c703-4573-9822-9ca7a6bf7b0d
  Date: Thu, 14 Aug 2014 18:25:02 GMT

  {"extra_specs": {"powervm:processor_compatibility": "default",
  "powervm:max_proc_units": "0.5", "powervm:shared_weight": 128,
  "powervm:min_mem": 1024, "powervm:max_mem": 4096, "powervm:uncapped":
  "true", "powervm:proc_units": "0.2", "powervm:dedicated_proc":
  "false", "powervm:max_vcpu": 5, "powervm:availability_priority": 127,
  "powervm:min_proc_units": "0.1", "powervm:min_vcpu": 1}}

  
  to Juno:

  curl -k -i -X POST 
http://127.0.0.1:8774/v2/be2ffade1e0b4bed83619e00482317d1/flavors/1/os-extra_specs
 -H "X-Auth-Token: *" -H "Accept: application/json" -H "Content-Type: 
application/json" -d 
'{"extra_specs":{"powervm:proc_units":"0.2","powervm:processor_compatibility":"default","powervm:min_proc_units":"0.1","powervm:max_proc_units":"0.5","powervm:min_vcpu":1,"powervm:max_vcpu":5,"powervm:min_mem":1024,"powervm:max_mem":4096,"powervm:availability_priority":127,"powervm:dedicated_proc":"false","powervm:uncapped":"true","powervm:shared_weight":128}}';
 echo
  HTTP/1.1 400 Bad Request
  Content-Length: 88
  Content-Type: application/json; charset=UTF-8
  Date: Thu, 14 Aug 2014 18:25:46 GMT

  {"badRequest": {"message": "extra_specs value is not a string or
  unicode", "code": 400}}

  
  if I modify the data sent so that everything is a string, it will work for 
Juno:

  curl -k -i -X POST 
http://127.0.0.1:8774/v2/be2ffade1e0b4bed83619e00482317d1/flavors/1/os-extra_specs
 -H "X-Auth-Token: *" -H "Accept: application/json" -H "Content-Type: 
application/json" -d 
'{"extra_specs":{"powervm:proc_units":"0.2","powervm:processor_compatibility":"default","powervm:min_proc_units":"0.1","powervm:max_proc_units":"0.5","powervm:min_vcpu":"1","powervm:max_vcpu":"5","powervm:min_mem":"1024","powervm:max_mem":"4096","powervm:availability_priority":"127","powervm:dedicated_proc":"false","powervm:uncapped":"true","powervm:shared_weight":"128"}}';
 echo
  HTTP/1.1 200 OK
  Content-Type: application/json
  Content-Length: 397
  Date: Thu, 14 Aug 2014 18:26:27 GMT

  {"extra_specs": {"powervm:processor_compatibility": "default",
  "powervm:max_proc_units": "0.5", "powervm:shared_weight": "128",
  "powervm:min_mem": "1024", "powervm:max_mem": "4096",
  "powervm:uncapped": "true", "powervm:proc_units": "0.2",
  "powervm:dedicated_proc": "false", "powervm:max_vcpu": "5",
  "powervm:availability_priority": "127", "powervm:min_proc_units":
  "0.1", "powervm:min_vcpu": "1"}}
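
  Until the regression is addressed server-side, a client can coerce the
  values itself before building the request body. A minimal sketch (the
  helper name is ours, not part of any client library):

  ```python
  def stringify_extra_specs(extra_specs):
      """Return a copy of an extra_specs dict with every value converted
      to a string, matching what the Juno API validation accepts."""
      return {key: str(value) for key, value in extra_specs.items()}


  # Ints and floats become the quoted forms the Juno API requires.
  payload = {"extra_specs": stringify_extra_specs({
      "powervm:min_vcpu": 1,
      "powervm:shared_weight": 128,
      "powervm:proc_units": "0.2",
  })}
  ```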

  
  The API change guidelines 
(https://wiki.openstack.org/wiki/APIChangeGuidelines) describe as "generally 
not acceptable": "A change such that a request which was successful before now 
results in an error response (unless the success reported previously was hiding 
an existing error condition)". That is exactly what this is.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358818/+subscriptions



[Yahoo-eng-team] [Bug 1360656] Re: Objects remotable decorator fails to properly handle ListOfObjects field if it is in the updates dict

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360656

Title:
  Objects remotable decorator fails to properly handle ListOfObjects
  field if it is in the updates dict

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Since this change https://review.openstack.org/#/c/98607/, if the
  conductor sends back  a field of type ListOfObjects field in the
  updates dictionary after a remotable decorator has called the
  object_action RPC method, restoring them into objects will fail since
  they will already be 'hydrated' but the field's from_primitive logic
won't know how to deal with that.
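
  The failure mode can be sketched generically: a from_primitive-style
  restore that assumes every entry is a primitive breaks on
  already-hydrated entries, while a defensive version tolerates both.
  FakeObj and the helper below are illustrative, not nova's actual
  object code:

  ```python
  class FakeObj(object):
      """Stand-in for a versioned-object class with primitive hydration."""

      @classmethod
      def obj_from_primitive(cls, primitive):
          obj = cls()
          obj.value = primitive["value"]
          return obj


  def restore_list_of_objects(values):
      """Hydrate a ListOfObjects-style field, tolerating entries that the
      conductor already sent back as objects rather than primitives."""
      return [v if isinstance(v, FakeObj) else FakeObj.obj_from_primitive(v)
              for v in values]
  ```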

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1360656/+subscriptions



[Yahoo-eng-team] [Bug 1358881] Re: jsonschema 2.3.0 -> 2.4.0 upgrade breaking nova.tests.test_api_validation tests

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358881

Title:
  jsonschema 2.3.0 -> 2.4.0 upgrade breaking
  nova.tests.test_api_validation tests

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The following two failures appeared after upgrading jsonschema to
  2.4.0. Downgrading to 2.3.0 returned the tests to passing.

  ==
  FAIL: 
nova.tests.test_api_validation.TcpUdpPortTestCase.test_validate_tcp_udp_port_fails
  --
  Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "/home/dev/Desktop/nova-test/nova/tests/test_api_validation.py", line 
602, in test_validate_tcp_udp_port_fails
  expected_detail=detail)
File "/home/dev/Desktop/nova-test/nova/tests/test_api_validation.py", line 
31, in check_validation_error
  self.assertEqual(ex.kwargs, expected_kwargs)
File 
"/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 321, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 406, in assertThat
  raise mismatch_error
  MismatchError: !=:
  reference = {'code': 400,
   'detail': u'Invalid input for field/attribute foo. Value: 65536. 65536 is 
greater than the maximum of 65535'}
  actual= {'code': 400,
   'detail': 'Invalid input for field/attribute foo. Value: 65536. 65536.0 is 
greater than the maximum of 65535'}

  
  ==
  FAIL: 
nova.tests.test_api_validation.IntegerRangeTestCase.test_validate_integer_range_fails
  --
  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  INFO [migrate.versioning.api] 215 -> 216... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 216 -> 217... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 217 -> 218... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 218 -> 219... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 219 -> 220... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 220 -> 221... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 221 -> 222... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 222 -> 223... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 223 -> 224... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 224 -> 225... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 225 -> 226... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 226 -> 227... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 227 -> 228... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 228 -> 229... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 229 -> 230... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 230 -> 231... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 231 -> 232... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 232 -> 233... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 233 -> 234... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 234 -> 235... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 235 -> 236... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 236 -> 237... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 237 -> 238... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 238 -> 239... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 239 -> 240... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 240 -> 241... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 241 -> 242... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 242 -> 243... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 243 -> 244... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 244 -> 245... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 245 -> 246... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 246 -> 247... 
  INFO [migrate.versioning.api] done
  IN

[Yahoo-eng-team] [Bug 1356449] Re: VMware: resize operations fail

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356449

Title:
  VMware: resize operations fail

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The driver function get_host_ip_addr() is needed by the
  resource_tracker; if it is missing, the resize operation fails. This
  is a regression caused by the deprecation of the ESX driver in commit
  1deb31f85a8f5d1e261b2cf1eddc537a5da7f60b

  We need to bring back get_host_ip_addr() and return the IP address of
  the vCenter server
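
  As a sketch, the restored method only needs to hand the resource
  tracker the vCenter address the driver was configured with. The class
  and attribute names here are illustrative, not the exact nova code:

  ```python
  class VCDriverSketch(object):
      """Illustrative stand-in for the VMware VC driver."""

      def __init__(self, host_ip):
          # In nova this would come from configuration
          # (the vCenter server address).
          self._host_ip = host_ip

      def get_host_ip_addr(self):
          # Called by the resource tracker during resize; without this
          # override the base driver raises NotImplementedError.
          return self._host_ip
  ```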

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356449/+subscriptions



[Yahoo-eng-team] [Bug 1354664] Re: Image cache aging: change to objects causes invalid data to be passed

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1354664

Title:
  Image cache aging: change to objects causes invalid data to be passed

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  An extra entry of invalid data is passed to the image cache aging:
  USED: {'': (2, 0, ['instance-000a', 'instance-000a']), 
'7ee53435-b6d5-4c15-bce4-2f3dfac96ffd': (1, 0, ['instance-000a'])}
  This is due to the change to objects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1354664/+subscriptions



[Yahoo-eng-team] [Bug 1356051] Re: Cannot load 'instance' in the base class - problem in floating-ip-list

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356051

Title:
  Cannot load 'instance' in the base class - problem in floating-ip-list

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I tried the following on VMware using the VMwareVCDriver with nova-
  network:

  1. Create an instance

  2. Create a floating IP: $ nova floating-ip-create

  3. Associate a floating IP with the instance: $ nova floating-ip-
  associate test1 10.131.254.249

  4. Attempt a list of the floating IPs:
  $ nova floating-ip-list
  ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500) (Request-ID: 
req-dcb17077-c670-4e2a-8a34-715a8afc5f33)

  
  It failed and printed out the following messages in n-api logs:

  2014-08-12 13:54:29.578 ERROR nova.api.openstack 
[req-86d8f466-cfae-42ac-8340-9eac36d6fc71 demo demo] Caught error: Cannot load 
'instance' in the base class
  2014-08-12 13:54:29.578 TRACE nova.api.openstack Traceback (most recent call 
last):
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/__init__.py", line 124, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2014-08-12 13:54:29.578 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2014-08-12 13:54:29.578 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
565, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return self._app(env, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 908, in __call__
  2014-08-12 13:54:29.578 TRACE nova.api.openstack content_type, body, 
accept)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 974, in _process_stack
  2014-08-12 13:54:29.578 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/wsgi.py", line 1058, in dispatch
  2014-08-12 13:54:29.578 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/compute/contrib/floating_ips.py", line 146, 
in index
  2014-08-12 13:54:29.578 TRACE nova.api.openstack 
self._normalize_ip(floating_ip)
  2014-08-12 13:54:29.578 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/api/openstack/compute/contrib/floating_ips.py", line 117, 
in _normalize_ip
  2014-08-12 13:54:29.578 TRACE nova.api.openstack

[Yahoo-eng-team] [Bug 1352595] Re: nova boot fails when using rbd backend

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352595

Title:
  nova boot fails when using rbd backend

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Trace ends with:

  TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3]  
File "/opt/stack/nova/nova/virt/libvirt/rbd.py", line 238, in exists
  TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3]   
  except rbd.ImageNotFound:
  TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3] 
AttributeError: 'module' object has no attribute 'ImageNotFound'

  It looks like the above module tries to do an "import rbd" and ends up
  importing itself instead of the global library module.

  A quick fix would be renaming the file to rbd2.py and changing the
  references in driver.py and imagebackend.py, but maybe there is a
  better solution?
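
  On Python 2, the self-import happens because implicit relative imports
  make "import rbd" inside nova/virt/libvirt/rbd.py resolve to the file
  itself. One common remedy, shown here as a hedged sketch rather than
  the patch that actually landed, is to force absolute imports at the
  top of the module:

  ```python
  # Sketch of nova/virt/libvirt/rbd.py with the shadowing fix.
  from __future__ import absolute_import  # "import rbd" now skips this file

  try:
      import rbd  # the real librbd binding (may be absent in a dev env)
  except ImportError:
      rbd = None


  def volume_exists(ioctx, name):
      """Check whether an RBD image exists; rbd.ImageNotFound is only a
      valid attribute once `rbd` refers to the real binding."""
      if rbd is None:
          raise RuntimeError("librbd python binding is not installed")
      try:
          rbd.Image(ioctx, name)
      except rbd.ImageNotFound:
          return False
      return True
  ```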

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352595/+subscriptions



[Yahoo-eng-team] [Bug 1355882] Re: get_floating_ip_pools for neutron v2 API inconsistent with nova network API

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355882

Title:
  get_floating_ip_pools for neutron v2 API inconsistent with nova
  network API

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Commit e00bdd7aa8c1ac9f1ae5057eb2f774f34a631845 changed
  get_floating_ip_pools so that it now returns a list of names
  rather than a list whose elements are in the form {'name':
  'pool_name'}.

  The implementation of this method in nova.network.neutron_v2.api has
  not been adjusted, causing
  
tempest.api.compute.floating_ips.test_list_floating_ips.FloatingIPDetailsTestJSON
  to always fail with neutron

  The fix is straightforward.
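
  The shape change, and the straightforward adjustment the neutron
  implementation needs, can be sketched as follows (function and
  variable names are ours):

  ```python
  def pools_to_names(pools):
      """Convert the pre-e00bdd7 format, a list of {'name': ...} dicts,
      into the new format: a flat list of pool names."""
      return [pool["name"] for pool in pools]
  ```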

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355882/+subscriptions



[Yahoo-eng-team] [Bug 1355875] Re: VMware: ESX deprecation break VC driver

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355875

Title:
  VMware: ESX deprecation break VC driver

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The ESX deprecation
  
https://github.com/openstack/nova/commit/1deb31f85a8f5d1e261b2cf1eddc537a5da7f60b
  breaks devstack:

  2014-08-12 07:53:45.453 ERROR nova.openstack.common.threadgroup [-] 
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/openstack/common/threadgroup.py", line 125, in wait
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup x.wait()
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/openstack/common/threadgroup.py", line 47, in wait
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup return 
self.thread.wait()
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in wait
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup return 
self._exit_event.wait()
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup return 
hubs.get_hub().switch()
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup return 
self.greenlet.switch()
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in main
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup result = 
function(*args, **kwargs)
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/openstack/common/service.py", line 490, in run_service
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup 
service.start()
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/service.py", line 164, in start
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/compute/manager.py", line 1058, in init_host
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup 
self.driver.init_host(host=self.host)
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/virt/driver.py", line 150, in init_host
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup raise 
NotImplementedError()
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup 
NotImplementedError
  2014-08-12 07:53:45.453 TRACE nova.openstack.common.threadgroup 
  nicira@Ubuntu1404Server:/opt/stack/nova$

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355875/+subscriptions



[Yahoo-eng-team] [Bug 1348720] Re: Missing index for expire_reservations

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348720

Title:
  Missing index for expire_reservations

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  While investigating some database performance problems, we discovered
  that there is no index on deleted for the reservations table. When
  this table gets large, the expire_reservations code will do a full
  table scan and take multiple seconds to complete. Because the expiration
  runs as a periodic task, it can slow down the master database significantly
  and cause nova or cinder to become extremely slow.

  > EXPLAIN UPDATE reservations SET updated_at=updated_at, 
deleted_at='2014-07-24 22:26:17', deleted=id WHERE reservations.deleted = 0 AND 
reservations.expire < '2014-07-24 22:26:11';
  
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
  | id | select_type | table        | type  | possible_keys | key     | key_len | ref  | rows   | Extra                        |
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
  |  1 | SIMPLE      | reservations | index | NULL          | PRIMARY | 4       | NULL | 950366 | Using where; Using temporary |
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+

  An index on (deleted, expire) would be the most efficient.
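
  As a sketch of the fix, the DDL below creates such a composite index;
  in nova or cinder this would live in a database migration, and the
  index name is illustrative. Demonstrated here against an in-memory
  SQLite table:

  ```python
  import sqlite3

  # Composite index covering the WHERE clause of expire_reservations
  # (deleted = 0 AND expire < now), so the UPDATE no longer full-scans.
  DDL = ("CREATE INDEX reservations_deleted_expire_idx "
         "ON reservations (deleted, expire)")

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE reservations ("
               "id INTEGER PRIMARY KEY, deleted INTEGER, expire TIMESTAMP)")
  conn.execute(DDL)
  ```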

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1348720/+subscriptions



[Yahoo-eng-team] [Bug 1335076] Re: Exception raised by attach interface is problematic

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1335076

Title:
  Exception raised by attach interface is problematic

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The exception raised is inappropriate: its message is just the instance
  object rather than a coherent description of the failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1335076/+subscriptions



[Yahoo-eng-team] [Bug 1336127] Re: The volumes will be deleted when creating a virtual machine fails with the parameter delete_on_termination set to true, which causes rescheduling to fail

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1336127

Title:
  The volumes will be deleted when creating a virtual machine fails
  with the parameter delete_on_termination set to true, which causes
  rescheduling to fail

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  
  When creating a virtual machine from a specified volume, or from an image
  plus a user volume, if the first creation attempt fails while the parameter
  delete_on_termination is set to "true", the specified volume or the user
  volume is deleted, which causes the subsequent rescheduling to fail.
  for example:
  1. upload an image
  | 62aa6627-0a07-4ab4-a99f-2d99110db03e | cirros-0.3.2-x86_64-uec | ACTIVE
  2.create a boot volume by the above image
  cinder create --image-id 62aa6627-0a07-4ab4-a99f-2d99110db03e 
--availability-zone nova 1
  | b821313a-9edb-474f-abb0-585a211589a6 | available | None | 1 | None | true | 
|
  3. create a virtual machine
  nova boot --flavor m1.tiny --nic net-id=28216e1d-f1c2-463b-8ae2-330a87e800d2 
tralon_disk1 --block-device-mapping 
vda=b821313a-9edb-474f-abb0-585a211589a6::1:1
  ERROR (BadRequest): Block Device Mapping is Invalid: failed to get volume 
b821313a-9edb-474f-abb0-585a211589a6. (HTTP 400) (Request-ID: 
req-486f7ab5-dc08-404e-8d4c-ac570d4f4aa1)
  4. use the "cinder list" to find that the volume 
b821313a-9edb-474f-abb0-585a211589a6 has been deleted
  +++--+--+-+--+-+
  | ID | Status | Name | Size | Volume Type | Bootable | Attached to |
  +++--+--+-+--+-+
  +++--+--+-+--+-+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1336127/+subscriptions



[Yahoo-eng-team] [Bug 1348661] Re: nova.tests.api.ec2.test_cloud.CloudTestCase.test_terminate_instances_two_instances race fails with UnexpectedDeletingTaskStateError

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348661

Title:
  
nova.tests.api.ec2.test_cloud.CloudTestCase.test_terminate_instances_two_instances
  race fails with UnexpectedDeletingTaskStateError

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This was being masked by bug 1311778 due to the MessagingTimeout
  failure, but there are more specific errors.

  http://logs.openstack.org/79/108879/1/gate/gate-nova-
  python26/283e967/console.html#_2014-07-24_08_14_12_631

  2014-07-24 08:14:12.631 | FAIL: 
nova.tests.api.ec2.test_cloud.CloudTestCase.test_terminate_instances_two_instances
  2014-07-24 08:14:12.631 | tags: worker-4
  2014-07-24 08:14:12.631 | 
--
  2014-07-24 08:14:12.631 | Empty attachments:
  2014-07-24 08:14:12.631 |   pythonlogging:'boto'
  2014-07-24 08:14:12.631 |   stderr
  2014-07-24 08:14:12.631 |   stdout
  2014-07-24 08:14:12.631 | 
  2014-07-24 08:14:12.631 | pythonlogging:'': {{{
  2014-07-24 08:14:12.631 | INFO [nova.network.driver] Loading network driver 
'nova.network.linux_net'
  2014-07-24 08:14:12.632 | AUDIT [nova.service] Starting conductor node 
(version 2014.2)
  2014-07-24 08:14:12.632 | INFO [nova.virt.driver] Loading compute driver 
'nova.virt.fake.FakeDriver'
  2014-07-24 08:14:12.632 | AUDIT [nova.service] Starting compute node (version 
2014.2)
  2014-07-24 08:14:12.632 | AUDIT [nova.compute.resource_tracker] Auditing 
locally available compute resources
  2014-07-24 08:14:12.632 | AUDIT [nova.compute.resource_tracker] Free ram 
(MB): 7680
  2014-07-24 08:14:12.632 | AUDIT [nova.compute.resource_tracker] Free disk 
(GB): 1028
  2014-07-24 08:14:12.632 | AUDIT [nova.compute.resource_tracker] Free VCPUS: 1
  2014-07-24 08:14:12.632 | AUDIT [nova.compute.resource_tracker] PCI stats: []
  2014-07-24 08:14:12.632 | INFO [nova.compute.resource_tracker] 
Compute_service record created for 093d0c3802bf440db8f3f839963027c4:fake-mini
  2014-07-24 08:14:12.632 | AUDIT [nova.service] Starting scheduler node 
(version 2014.2)
  2014-07-24 08:14:12.632 | INFO [nova.network.driver] Loading network driver 
'nova.network.linux_net'
  2014-07-24 08:14:12.633 | AUDIT [nova.service] Starting network node (version 
2014.2)
  2014-07-24 08:14:12.633 | AUDIT [nova.service] Starting consoleauth node 
(version 2014.2)
  2014-07-24 08:14:12.633 | AUDIT [nova.compute.manager] Starting instance...
  2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] Attempting claim: 
memory 2048 MB, disk 20 GB
  2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] Total memory: 8192 MB, 
used: 512.00 MB
  2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] memory limit not 
specified, defaulting to unlimited
  2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] Total disk: 1028 GB, 
used: 0.00 GB
  2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] disk limit not 
specified, defaulting to unlimited
  2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] Claim successful
  2014-07-24 08:14:12.633 | AUDIT [nova.compute.manager] Starting instance...
  2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] Attempting claim: 
memory 2048 MB, disk 20 GB
  2014-07-24 08:14:12.633 | AUDIT [nova.compute.claims] Total memory: 8192 MB, 
used: 2560.00 MB
  2014-07-24 08:14:12.634 | AUDIT [nova.compute.claims] memory limit not 
specified, defaulting to unlimited
  2014-07-24 08:14:12.634 | AUDIT [nova.compute.claims] Total disk: 1028 GB, 
used: 20.00 GB
  2014-07-24 08:14:12.634 | AUDIT [nova.compute.claims] disk limit not 
specified, defaulting to unlimited
  2014-07-24 08:14:12.634 | AUDIT [nova.compute.claims] Claim successful
  2014-07-24 08:14:12.634 | WARNING [nova.compute.manager] Instance is not 
stopped. Calling the stop API.
  2014-07-24 08:14:12.634 | ERROR [nova.compute.manager] error during stop() in 
sync_power_state.
  2014-07-24 08:14:12.634 | Traceback (most recent call last):
  2014-07-24 08:14:12.634 |   File "nova/compute/manager.py", line 5551, in 
_sync_instance_power_state
  2014-07-24 08:14:12.634 | self.compute_api.force_stop(context, 
db_instance)
  2014-07-24 08:14:12.634 |   File "nova/compute/api.py", line 1767, in 
force_stop
  2014-07-24 08:14:12.634 | self.compute_rpcapi.stop_instance(context, 
instance, do_cast=do_cast)
  2014-07-24 08:14:12.635 |   File "nova/compute/rpcapi.py", line 908, in 
stop_instance
  2014-07-24 08:14:12.635 | return rpc_method(ctxt, 'stop_instance', 
instance=instance)
  2014-07-24 08:14:12.635 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/client.py",
 line 150, in call
  2014-07-24 08:14:12.635 | wait_for_reply=True, timeout=timeout)
  2014-07-

[Yahoo-eng-team] [Bug 1347156] Re: deleting floating-ip in nova-network does not free quota

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347156

Title:
  deleting floating-ip in nova-network does not free quota

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  It seems that when you allocate a floating IP in a tenant with nova-
  network, its quota is never returned after calling 'nova floating-ip-
  delete', even though 'nova floating-ip-list' shows it gone. This
  behavior applies to each tenant individually. The gate tests are
  passing because they all run with tenant isolation, but the problem
  shows up in the nightly run without tenant isolation:

  http://logs.openstack.org/periodic-qa/periodic-tempest-dsvm-full-non-
  isolated-master/2bc5ead/console.html
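The symptom is consistent with usage being incremented on allocation but never decremented on delete. A toy model of the accounting makes the gap visible; QuotaUsage here is a deliberately simplified illustration, not nova's actual quota engine:

```python
class QuotaUsage:
    """Toy per-tenant floating IP quota counter (illustration only)."""

    def __init__(self, limit):
        self.limit = limit
        self.in_use = 0

    def allocate(self):
        # Allocation checks the limit and bumps usage.
        if self.in_use >= self.limit:
            raise RuntimeError("FloatingIpLimitExceeded")
        self.in_use += 1

    def deallocate(self):
        # The step the bug says was skipped: decrement usage when the
        # floating IP record is deleted, so the tenant regains quota.
        self.in_use = max(0, self.in_use - 1)
```

If deallocate() is never called from the delete path, in_use only ever grows and the tenant eventually hits the limit despite owning no floating IPs.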

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347156/+subscriptions



[Yahoo-eng-team] [Bug 1336080] Re: deleting instance doesn't update scheduler immediately

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1336080

Title:
  deleting instance doesn't update scheduler immediately

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Previously, deleting an instance would update the scheduler's view of
  resources fairly quickly. There is now a delay after deleting an
  instance before the scheduler makes the resources available again.
  This appears to be because the delete code path used to call the
  resource tracker to update the compute_node record, but this no
  longer happens.
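The effect can be pictured with a toy model of the bookkeeping; ResourceTracker here is a simplified stand-in for illustration, not nova's class:

```python
class ResourceTracker:
    """Toy stand-in for nova's compute resource tracker (illustration only)."""

    def __init__(self, total_mb):
        self.free_mb = total_mb
        self.claims = {}  # instance_id -> claimed memory in MB

    def claim(self, instance_id, mb):
        # Boot path: reserve memory for the new instance.
        if mb > self.free_mb:
            raise RuntimeError("not enough free memory")
        self.free_mb -= mb
        self.claims[instance_id] = mb

    def release(self, instance_id):
        # The crux of the bug: if this is only reached from a periodic
        # audit instead of the instance-delete path, the scheduler keeps
        # seeing the memory as used until the next audit fires.
        self.free_mb += self.claims.pop(instance_id, 0)
```

Calling release() directly from the delete path restores the old behavior of capacity becoming schedulable again right away.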

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1336080/+subscriptions



[Yahoo-eng-team] [Bug 1327218] Re: Volume detach failure because of invalid bdm.connection_info

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327218

Title:
  Volume detach failure because of invalid bdm.connection_info

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Example of this here:

  http://logs.openstack.org/33/97233/1/check/check-grenade-
  dsvm/f7b8a11/logs/old/screen-n-cpu.txt.gz?level=TRACE#_2014-06-02_14_13_51_125

     File "/opt/stack/old/nova/nova/compute/manager.py", line 4153, in 
_detach_volume
   connection_info = jsonutils.loads(bdm.connection_info)
     File "/opt/stack/old/nova/nova/openstack/common/jsonutils.py", line 164, 
in loads
   return json.loads(s)
     File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
   return _default_decoder.decode(s)
     File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
   obj, end = self.raw_decode(s, idx=_w(s, 0).end())
   TypeError: expected string or buffer

  and this was in grenade with stable/icehouse nova commit 7431cb9

  There's nothing unusual about the test which triggers this - it simply
  attaches a volume to an instance, waits for it to show up in the
  instance, and then tries to detach it.

  logstash query for this:

    message:"Exception during message handling" AND message:"expected
  string or buffer" AND message:"connection_info =
  jsonutils.loads(bdm.connection_info)" AND tags:"screen-n-cpu.txt"

  but it seems to be very rare
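The trace shows json.loads being handed a bdm.connection_info that was never populated (None). A defensive parse along these lines avoids the TypeError; load_connection_info is a hypothetical helper sketched for illustration, not nova's actual fix:

```python
import json


def load_connection_info(raw):
    """Hypothetical defensive parse of a BDM's connection_info column.

    connection_info is a JSON string in the normal case, but can be
    None (or empty) if the attach flow failed before the record was
    fully populated; json.loads(None) then raises the
    "expected string or buffer" TypeError seen in the trace above.
    """
    if not raw:
        # Nothing was ever stored for this attachment.
        return {}
    return json.loads(raw)
```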

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327218/+subscriptions



[Yahoo-eng-team] [Bug 1342919] Re: instances rescheduled after building network info do not update the MAC

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342919

Title:
  instances rescheduled after building network info do not update the
  MAC

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This is weird - Ironic has used the mac from a different node (which
  quite naturally leads to failures to boot!)

  nova list | grep spawn
  | 6c364f0f-d4a0-44eb-ae37-e012bbdd368c | ci-overcloud-NovaCompute3-zmkjp5aa6vgf | BUILD | spawning | NOSTATE | ctlplane=10.10.16.137 |

  nova show 6c364f0f-d4a0-44eb-ae37-e012bbdd368c | grep hyperv
  | OS-EXT-SRV-ATTR:hypervisor_hostname | b07295ee-1c09-484c-9447-10b9efee340c |

  neutron port-list | grep 137
  | 272f2413-0309-4e8b-9a6d-9cb6fdbe978d | | 78:e7:d1:23:90:0d | {"subnet_id": "a6ddb35e-305e-40f1-9450-7befc8e1af47", "ip_address": "10.10.16.137"} |

  ironic node-show b07295ee-1c09-484c-9447-10b9efee340c | grep wait
  | provision_state | wait call-back |

  ironic port-list | grep 78:e7:d1:23:90:0d  # from neutron
  | 33ab97c0-3de9-458a-afb7-8252a981b37a | 78:e7:d1:23:90:0d |

  ironic port-show 33ab97c0-3de9-458a-afb7-8252a981
  +------------+-----------------------------------------------------------+
  | Property   | Value                                                     |
  +------------+-----------------------------------------------------------+
  | node_uuid  | 69dc8c40-dd79-4ed6-83a9-374dcb18c39b                      |  # Ruh-roh, wrong node!
  | uuid       | 33ab97c0-3de9-458a-afb7-8252a981b37a                      |
  | extra      | {u'vif_port_id': u'aad5ee6b-52a3-4f8b-8029-7b8f40e7b54e'} |
  | created_at | 2014-07-08T23:09:16+00:00                                 |
  | updated_at | 2014-07-16T01:23:23+00:00                                 |
  | address    | 78:e7:d1:23:90:0d                                         |
  +------------+-----------------------------------------------------------+

  
  ironic port-list | grep 78:e7:d1:23:9b:1d  # This is the MAC my hardware list says the node should have
  | caba5b36-f518-43f2-84ed-0bc516cc89df | 78:e7:d1:23:9b:1d |
  # ironic port-show caba5b36-f518-43f2-84ed-0bc516cc
  +------------+-----------------------------------------------------------+
  | Property   | Value                                                     |
  +------------+-----------------------------------------------------------+
  | node_uuid  | b07295ee-1c09-484c-9447-10b9efee340c                      |  # and tada, right node
  | uuid       | caba5b36-f518-43f2-84ed-0bc516cc89df                      |
  | extra      | {u'vif_port_id': u'272f2413-0309-4e8b-9a6d-9cb6fdbe978d'} |
  | created_at | 2014-07-08T23:08:26+00:00                                 |
  | updated_at | 2014-07-16T19:07:56+00:00                                 |
  | address    | 78:e7:d1:23:9b:1d                                         |
  +------------+-----------------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1342919/+subscriptions



[Yahoo-eng-team] [Bug 1334164] Re: nova error migrating VMs with floating ips: 'FixedIP' object has no attribute '_sa_instance_state'

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334164

Title:
  nova error migrating VMs with floating ips: 'FixedIP' object has no
  attribute '_sa_instance_state'

Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in Fuel for OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack:
  Fix Committed
Status in Mirantis OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack 5.1.x series:
  Fix Committed
Status in Mirantis OpenStack 6.0.x series:
  In Progress
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Seeing this in conductor logs when migrating a VM with a floating IP
  assigned:

  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 1019, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 527, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_api.migrate_instance_start(context, instance, migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 94, in wrapped
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
func(self, context, *args, **kwargs)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 543, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_rpcapi.migrate_instance_start(context, **args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/rpcapi.py", line 350, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
floating_addresses=floating_addresses)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 150, in 
call
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in 
_send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
409, in send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
402, in _send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher raise 
result
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'FixedIP' object has no attribute '_sa_instance_state'

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1334164/+subscriptions



[Yahoo-eng-team] [Bug 1348642] Re: Rebuild does not work with cells

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348642

Title:
  Rebuild does not work with cells

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The rebuild command will not work with cells.  The command is dropped
  at the api layer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348642/+subscriptions



[Yahoo-eng-team] [Bug 1352102] Re: users are unable to create ports on provider networks

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352102

Title:
  users are unable to create ports on provider networks

Status in OpenStack Compute (Nova):
  Fix Released
Status in “nova” package in Ubuntu:
  New

Bug description:
  After commit da66d50010d5b1ba1d7fc9c3d59d81b6c01bb0b0, my users are
  unable to boot VMs attached to provider networks. This is a serious
  regression for me, as we mostly use provider networks.

  The bug which originated the commit:
  https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1284718

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352102/+subscriptions



[Yahoo-eng-team] [Bug 1330503] Re: Restarting destination compute manager during resize migration can cause instance data loss

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1330503

Title:
  Restarting destination compute manager during resize migration can
  cause instance data loss

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  During compute manager startup init_host is called. One of the
  functions there is to delete instance data that doesn't belong to this
  host i.e. _destroy_evacuated_instances. But this function only checks
  if the local instance belongs to the host or not. It doesn't check the
  task_state or vm_state.

  If at this time a resize migration is taking place and the destination
  compute manager is restarted it might destroy the resizing instance.
  Alternatively, if the resize has completed (vm_state = RESIZED) but
  has not been confirmed/reverted, then a restart of the source compute
  manager might destroy the original instance.

  A similar bug concerning just the migrating state is outlined here:
  https://bugs.launchpad.net/nova/+bug/1319797 and a fix is proposed
  here: https://review.openstack.org/#/c/93903

  It was intended to have that fix deal with resize migrating instances
  as well as those just in the migrating state but as pointed out in a
  review comment this solution will work for migrating but a fix for
  resize would require further changes so I have raised this bug to
  highlight that.
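  The risk can be sketched as the filter an init_host-style cleanup would need before destroying local data. This is a hypothetical illustration, not the proposed patch; the state names follow nova's task_state/vm_state values:

```python
# Task states in which destroying the local copy risks data loss.
RESIZE_TASK_STATES = {"resize_migrating", "resize_migrated", "resize_finish"}


def instances_safe_to_destroy(instances, my_host):
    """Hypothetical filter for startup cleanup (illustration only).

    Each instance is a dict with "host", "task_state", and "vm_state"
    keys, standing in for nova's instance objects.
    """
    safe = []
    for inst in instances:
        if inst["host"] == my_host:
            continue  # instance still belongs to this host; keep it
        if inst["task_state"] in RESIZE_TASK_STATES:
            continue  # resize in flight; local disks may be the only copy
        if inst["vm_state"] == "resized":
            continue  # unconfirmed resize; the source copy may still be needed
        safe.append(inst)
    return safe
```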

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1330503/+subscriptions



[Yahoo-eng-team] [Bug 1313655] Re: tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_idempotent_instances[gate, smoke] fails

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1313655

Title:
  
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_idempotent_instances[gate,smoke]
  fails

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_idempotent_instances[gate,smoke]
  fails with the following log:

  2014-04-28 09:46:28.416 | 
--
  2014-04-28 09:46:28.417 | _StringException: Empty attachments:
  2014-04-28 09:46:28.417 |   pythonlogging:''
  2014-04-28 09:46:28.417 |   stderr
  2014-04-28 09:46:28.417 |   stdout
  2014-04-28 09:46:28.417 | 
  2014-04-28 09:46:28.417 | Traceback (most recent call last):
  2014-04-28 09:46:28.417 |   File 
"tempest/thirdparty/boto/test_ec2_instance_run.py", line 115, in 
test_run_idempotent_instances
  2014-04-28 09:46:28.417 | self.assertEqual(reservation_1.id, 
reservation_1a.id)
  2014-04-28 09:46:28.417 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 321, in 
assertEqual
  2014-04-28 09:46:28.417 | self.assertThat(observed, matcher, message)
  2014-04-28 09:46:28.417 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 406, in 
assertThat
  2014-04-28 09:46:28.418 | raise mismatch_error
  2014-04-28 09:46:28.418 | MismatchError: u'r-fbmcu1r7' != u'r-10ay052f'

  full logs are here: 
  
http://logs.openstack.org/43/90643/2/check/check-tempest-dsvm-neutron-pg/6426a54/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1313655/+subscriptions



[Yahoo-eng-team] [Bug 1321220] Re: [EC2] StartInstance response missing instanceset info

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321220

Title:
  [EC2] StartInstance response missing instanceset info

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The StartInstances response elements are shown below (tags were
  stripped by the list archive; structure restored from the surviving
  values):

  <StartInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
    <requestId>req-5970ccd7-c763-456c-89f0-5b55ea18880b</requestId>
    <return>true</return>
  </StartInstancesResponse>

  But as per the AWS API reference doc, the response elements should be as below:
  ==
  <StartInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2014-02-01/">
    <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
    <instancesSet>
      <item>
        <instanceId>i-10a64379</instanceId>
        <currentState>
          <code>0</code>
          <name>pending</name>
        </currentState>
        <previousState>
          <code>80</code>
          <name>stopped</name>
        </previousState>
      </item>
    </instancesSet>
  </StartInstancesResponse>
  ===

  Here, the <instancesSet> information is missing from the response
  elements.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321220/+subscriptions



[Yahoo-eng-team] [Bug 1346637] Re: VMware: remove ESX driver for juno

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346637

Title:
  VMware: remove ESX driver for juno

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The ESX driver was deprecated in Icehouse and should be removed in
  Juno. This bug is for the removal of the ESX virt driver in nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1346637/+subscriptions



[Yahoo-eng-team] [Bug 1331092] Re: FlatDHCP manager will hand out networks from other tenants

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1331092

Title:
  FlatDHCP manager will hand out networks from other tenants

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If FlatDhcpManager is used to create specific networks per tenant, a
  tenant will get all networks by default instead of just his or her
  assigned network. Due to context elevation, the network manager
  doesn't properly ensure that the network is owned by the tenant
  before it creates a NIC.

  nova network-create --interface eth0 --bridge-interface br100 --project-id  --fixed-range 100.0.0.0/24 foonet
  nova network-create --interface eth1 --bridge-interface br200 --project-id  --fixed-range 100.0.0.0/24 barnet

  An instance created inside the foo tenant will get an interface on
  both foonet and barnet.
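  The missing ownership check amounts to filtering candidate networks by project before plugging a NIC. networks_for_project below is a hypothetical illustration (treating a network with project_id None as shared), not nova-network's actual code:

```python
def networks_for_project(all_networks, project_id):
    """Hypothetical sketch: restrict a tenant to its own networks.

    Each network is a dict with "label" and "project_id" keys. A
    project_id of None stands for a shared/unassigned network, which
    stays visible to everyone; networks owned by other tenants are
    filtered out instead of being handed to every instance.
    """
    return [net for net in all_networks
            if net.get("project_id") in (project_id, None)]
```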

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1331092/+subscriptions



[Yahoo-eng-team] [Bug 1321239] Re: [EC2] StopInstance response missing instanceset info

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321239

Title:
  [EC2] StopInstance response missing instanceset info

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The StopInstances response elements are shown below (tags were
  stripped by the list archive; structure restored from the surviving
  values).

  Sample request to stop the specified instance:
  ===
  https://ec2.amazonaws.com/?Action=StopInstances
  &InstanceId.1=i-10a64379
  &AUTHPARAMS
  ==

  Response elements are:
  ==
  <StopInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
    <requestId>req-30edb813-5802-4fa2-8a83-9dbcb751264e</requestId>
    <return>true</return>
  </StopInstancesResponse>

  But as per the AWS API reference doc, the response elements should be as below:
  ==
  <StopInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2014-02-01/">
    <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
    <instancesSet>
      <item>
        <instanceId>i-10a64379</instanceId>
        <currentState>
          <code>64</code>
          <name>stopping</name>
        </currentState>
        <previousState>
          <code>16</code>
          <name>running</name>
        </previousState>
      </item>
    </instancesSet>
  </StopInstancesResponse>
  ===

  The <instancesSet> information is missing from the response elements.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1310817] Re: VMware: concurrent access error when deleting snapshot

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310817

Title:
  VMware: concurrent access error when deleting snapshot

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The VMware Minesweeper CI is occasionally seeing an error when
  deleting snapshots. The error is:

  Traceback (most recent call last):
File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
133, in _dispatch_and_reply
  incoming.message))
File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
176, in _dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)
File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
122, in _do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)
File "/opt/stack/nova/nova/exception.py", line 88, in wrapped
  payload)
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/exception.py", line 71, in wrapped
  return f(self, context, *args, **kw)
File "/opt/stack/nova/nova/compute/manager.py", line 280, in 
decorated_function
  pass
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/compute/manager.py", line 266, in 
decorated_function
  return function(self, context, *args, **kwargs)
File "/opt/stack/nova/nova/compute/manager.py", line 309, in 
decorated_function
  e, sys.exc_info())
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/compute/manager.py", line 296, in 
decorated_function
  return function(self, context, *args, **kwargs)
File "/opt/stack/nova/nova/compute/manager.py", line 2692, in 
backup_instance
  task_states.IMAGE_BACKUP)
File "/opt/stack/nova/nova/compute/manager.py", line 2758, in 
_snapshot_instance
  update_task_state)
File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 645, in snapshot
  _vmops.snapshot(context, instance, name, update_task_state)
File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 873, in snapshot
  self._delete_vm_snapshot(instance, vm_ref, snapshot)
File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 780, in 
_delete_vm_snapshot
  self._session._wait_for_task(delete_snapshot_task)
File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 948, in 
_wait_for_task
  ret_val = done.wait()
File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 116, 
in wait
  return hubs.get_hub().switch()
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 
187, in switch
  return self.greenlet.switch()
  VMwareDriverException: A general system error occurred: concurrent access

  Full logs for an affected run can be found here:
  http://10.148.255.241/logs/85961/2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1310817/+subscriptions



[Yahoo-eng-team] [Bug 1322025] Re: [EC2] Terminateinstance returns incorrect current state name

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1322025

Title:
  [EC2] Terminateinstance returns incorrect current state name

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  TerminateInstance returns the same name for currentState and
  previousState. In the sample response below, both the currentState
  name and the previousState name are "running"; ideally the
  currentState name should be "terminated".
  ==
  <TerminateInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
    <requestId>req-c15f5c7d-2551-4a08-b8b8-255462a09592</requestId>
    <instancesSet>
      <item>
        <instanceId>i-0001</instanceId>
        <currentState>
          <code>16</code>
          <name>running</name>
        </currentState>
        <previousState>
          <code>16</code>
          <name>running</name>
        </previousState>
      </item>
    </instancesSet>
  </TerminateInstancesResponse>
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1322025/+subscriptions



[Yahoo-eng-team] [Bug 1304593] Re: VMware: waste of disk datastore when root disk size of instance is 0

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304593

Title:
  VMware: waste of disk datastore when root disk size of instance is 0

Status in OpenStack Compute (Nova):
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  When an instance has a root disk size of 0, an extra image is created
  on the datastore (uuid.0.vmdk, identical to uuid.vmdk). This only
  happens for linked-clone images and wastes space on the datastore;
  the cached original image could be used instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1304593/+subscriptions



[Yahoo-eng-team] [Bug 1305399] Re: Cannot unshelve instance with user volumes

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1305399

Title:
  Cannot unshelve instance with user volumes

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Steps to reproduce:

  1. boot an instance with user volume: nova boot --flavor 11 --image
  cirros --block-device-mapping /dev/vdb=a6118113-bce9-4e0f-89ce-
  d2aecb0148f8 test_vm1

  2. shelve the instance: nova shelve 958b6615-1a02-46a7-a0cf-
  a4b253f1b9de

  3. unshelve the instance: nova unshelve 958b6615-1a02-46a7-a0cf-
  a4b253f1b9de

  the instance will be in task_state of "unshelving", and the error
  message in log file is:

  [-] Exception during message handling: Invalid volume: status must be 
'available'
  Traceback (most recent call last):
File "/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 
133, in _dispatch_and_reply
  incoming.message))
File "/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 
176, in _dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)
File "/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 
122, in _do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)
File "/opt/stack/nova/nova/exception.py", line 88, in wrapped
  payload)
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/exception.py", line 71, in wrapped
  return f(self, context, *args, **kw)
File "/opt/stack/nova/nova/compute/manager.py", line 243, in 
decorated_function
  pass
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/compute/manager.py", line 229, in 
decorated_function
  return function(self, context, *args, **kwargs)
File "/opt/stack/nova/nova/compute/manager.py", line 294, in 
decorated_function
  function(self, context, *args, **kwargs)
File "/opt/stack/nova/nova/compute/manager.py", line 271, in 
decorated_function
  e, sys.exc_info())
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/compute/manager.py", line 258, in 
decorated_function
  return function(self, context, *args, **kwargs)
File "/opt/stack/nova/nova/compute/manager.py", line 3593, in 
unshelve_instance
  do_unshelve_instance()
File "/opt/stack/nova/nova/openstack/common/lockutils.py", line 249, in 
inner
  return f(*args, **kwargs)
File "/opt/stack/nova/nova/compute/manager.py", line 3592, in 
do_unshelve_instance
  filter_properties, node)
File "/opt/stack/nova/nova/compute/manager.py", line 3617, in 
_unshelve_instance
  block_device_info = self._prep_block_device(context, instance, bdms)
File "/opt/stack/nova/nova/compute/manager.py", line 1463, in 
_prep_block_device
  instance=instance)
File "/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/nova/nova/compute/manager.py", line 1442, in 
_prep_block_device
  self.driver, self._await_block_device_map_created) +
File "/opt/stack/nova/nova/virt/block_device.py", line 364, in 
attach_block_devices
  map(_log_and_attach, block_device_mapping)
File "/opt/stack/nova/nova/virt/block_device.py", line 362, in 
_log_and_attach
  bdm.attach(*attach_args, **attach_kwargs)
File "/opt/stack/nova/nova/virt/block_device.py", line 44, in wrapped
  ret_val = method(obj, context, *args, **kwargs)
File "/opt/stack/nova/nova/virt/block_device.py", line 218, in attach
  volume_api.check_attach(context, volume, instance=instance)
File "/opt/stack/nova/nova/volume/cinder.py", line 229, in check_attach
  raise exception.InvalidVolume(reason=msg)
  InvalidVolume: Invalid volume: status must be 'available'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1305399/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1265839] Re: duplicate index on block_device_mapping ('instance_uuid', 'device_name')

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265839

Title:
  duplicate index on block_device_mapping ('instance_uuid',
  'device_name')

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Due to an upgrade issue in Havana DB migration 186, there is now a
  duplicate index on the block_device_mapping ('instance_uuid',
  'device_name') columns in MySQL (this does not affect PostgreSQL).

  DROP TABLE IF EXISTS `block_device_mapping`;
  /*!40101 SET @saved_cs_client = @@character_set_client */;
  /*!40101 SET character_set_client = utf8 */;
  CREATE TABLE `block_device_mapping` (
`created_at` datetime DEFAULT NULL,
`updated_at` datetime DEFAULT NULL,
`deleted_at` datetime DEFAULT NULL,
`id` int(11) NOT NULL AUTO_INCREMENT,
`device_name` varchar(255) DEFAULT NULL,
`delete_on_termination` tinyint(1) DEFAULT NULL,
`snapshot_id` varchar(36) DEFAULT NULL,
`volume_id` varchar(36) DEFAULT NULL,
`volume_size` int(11) DEFAULT NULL,
`no_device` tinyint(1) DEFAULT NULL,
`connection_info` mediumtext,
`instance_uuid` varchar(36) DEFAULT NULL,
`deleted` int(11) DEFAULT NULL,
`source_type` varchar(255) DEFAULT NULL,
`destination_type` varchar(255) DEFAULT NULL,
`guest_format` varchar(255) DEFAULT NULL,
`device_type` varchar(255) DEFAULT NULL,
`disk_bus` varchar(255) DEFAULT NULL,
`boot_index` int(11) DEFAULT NULL,
`image_id` varchar(36) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `snapshot_id` (`snapshot_id`),
KEY `volume_id` (`volume_id`),
KEY `block_device_mapping_instance_uuid_idx` (`instance_uuid`),
KEY `block_device_mapping_instance_uuid_device_name_idx` 
(`instance_uuid`,`device_name`),
KEY `block_device_mapping_instance_uuid_virtual_name_device_name_idx` 
(`instance_uuid`,`device_name`),
KEY `block_device_mapping_instance_uuid_volume_id_idx` 
(`instance_uuid`,`volume_id`),
CONSTRAINT `block_device_mapping_instance_uuid_fkey` FOREIGN KEY 
(`instance_uuid`) REFERENCES `instances` (`uuid`)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
  /*!40101 SET character_set_client = @saved_cs_client */;

  
  *** We should drop the 
block_device_mapping_instance_uuid_virtual_name_device_name_idx index for MySQL 
in IceHouse.
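  The proposed fix can be sketched as plain DDL (a hedged illustration,
  not the actual Icehouse migration script):

  ```sql
  -- MySQL only; PostgreSQL never created the duplicate index.
  -- The remaining block_device_mapping_instance_uuid_device_name_idx
  -- covers the same (instance_uuid, device_name) columns.
  ALTER TABLE block_device_mapping
    DROP INDEX block_device_mapping_instance_uuid_virtual_name_device_name_idx;
  ```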

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1265839/+subscriptions



[Yahoo-eng-team] [Bug 1262450] Re: Nova doesn't update vnc listen address during migration with libvirt

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262450

Title:
  Nova doesn't update vnc listen address during migration with libvirt

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Nova should update VNC listen address in libvirt.xml to the
  destination node's vncserver_listen setting on completing migration.
  Without that, the only way to make VMs accessible over VNC after
  migration is to set vncserver_listen to 0.0.0.0 as recommended in:

  http://docs.openstack.org/havana/config-reference/content/configuring-
  openstack-compute-basics.html#section_configuring-compute-migrations

  which is a suboptimal solution from a security standpoint.
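
  The intended behaviour can be sketched as follows (this is not
  Nova's actual code; only the element paths follow the libvirt domain
  XML format): on completing the migration, rewrite the VNC listen
  address in the domain XML to the destination host's vncserver_listen
  value.

  ```python
  # Sketch: update the VNC listen address in a libvirt domain XML.
  import xml.etree.ElementTree as ET

  def update_vnc_listen(domain_xml, new_listen):
      root = ET.fromstring(domain_xml)
      for graphics in root.findall("./devices/graphics[@type='vnc']"):
          # Both the legacy attribute and the <listen> child element
          # carry the address in libvirt domain XML.
          graphics.set('listen', new_listen)
          listen_el = graphics.find('listen')
          if listen_el is not None:
              listen_el.set('address', new_listen)
      return ET.tostring(root, encoding='unicode')

  xml = """<domain><devices>
    <graphics type='vnc' listen='192.168.0.5'>
      <listen type='address' address='192.168.0.5'/>
    </graphics>
  </devices></domain>"""
  print(update_vnc_listen(xml, '10.0.0.7'))
  ```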

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1262450/+subscriptions



[Yahoo-eng-team] [Bug 1254727] Re: Increase min required libvirt to 0.9.11 to allow use of libvirt python on PyPI

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254727

Title:
  Increase min required libvirt to 0.9.11 to allow use of libvirt python
  on PyPI

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Based on the data in this wiki

  https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix

  and discussions on the dev & operator mailing lists

http://lists.openstack.org/pipermail/openstack-dev/2013-November/019767.html

http://lists.openstack.org/pipermail/openstack-operators/2013-November/003748.html

  We are able to increase the min required libvirt to 0.9.11

  This will allow us to switch to using the (soon to be released)
  standalone libvirt python binding from PyPI, as well as removing some
  old compat code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1254727/+subscriptions



[Yahoo-eng-team] [Bug 1254972] Re: Volume transition from attaching -> attached is lost, leading to unattached and undettachable volume

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254972

Title:
  Volume transition from attaching -> attached is lost, leading to
  unattached and undettachable volume

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJmaWxlbmFtZTpjb25zb2xlLmh0bWwgQU5EIG1lc3NhZ2U6XCJWb2x1bWUgc3RhdHVzIG11c3QgYmUgYXZhaWxhYmxlIG9yIGVycm9yLCBidXQgY3VycmVudCBzdGF0dXMgaXM6IGF0dGFjaGluZ1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzg1NDQyOTg1OTc0fQ==

  Not super common, but this caused at least two failures in merges
  today.

  From console.html:

  2013-11-26 04:33:12.467 | 
==
  2013-11-26 04:33:12.467 | FAIL: tearDownClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON)
  2013-11-26 04:33:12.467 | tearDownClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON)
  2013-11-26 04:33:12.467 | 
--
  2013-11-26 04:33:12.467 | _StringException: Traceback (most recent call last):
  2013-11-26 04:33:12.468 |   File 
"tempest/api/compute/servers/test_server_rescue.py", line 85, in tearDownClass
  2013-11-26 04:33:12.468 | 
client.delete_volume(str(cls.volume_to_detach['id']).strip())
  2013-11-26 04:33:12.468 |   File 
"tempest/services/compute/json/volumes_extensions_client.py", line 84, in 
delete_volume
  2013-11-26 04:33:12.468 | return self.delete("os-volumes/%s" % 
str(volume_id))
  2013-11-26 04:33:12.468 |   File "tempest/common/rest_client.py", line 308, 
in delete
  2013-11-26 04:33:12.469 | return self.request('DELETE', url, headers)
  2013-11-26 04:33:12.469 |   File "tempest/common/rest_client.py", line 436, 
in request
  2013-11-26 04:33:12.469 | resp, resp_body)
  2013-11-26 04:33:12.469 |   File "tempest/common/rest_client.py", line 486, 
in _error_checker
  2013-11-26 04:33:12.469 | raise exceptions.BadRequest(resp_body)
  2013-11-26 04:33:12.469 | BadRequest: Bad request
  2013-11-26 04:33:12.470 | Details: {u'badRequest': {u'message': u'Invalid 
input received: Invalid volume: Volume status must be available or error, but 
current status is: attaching', u'code': 400}}
  2013-11-26 04:33:12.470 | 
  2013-11-26 04:33:12.470 | 

  Unfortunately I can't find more in the cinder logs because tempest
  doesn't log the volume id.
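
  A workaround used by test code for this kind of race can be sketched
  as follows (illustrative only; the polling helper is hypothetical,
  not tempest's actual code): wait for the volume to leave the
  transient "attaching" state before attempting the delete.

  ```python
  # Sketch: poll a status callable until it reaches a terminal state.
  import time

  def wait_for_status(get_status, targets=('available', 'error'),
                      timeout=60, interval=1.0):
      # Returns True once the status is in `targets`, False on timeout.
      deadline = time.time() + timeout
      while time.time() < deadline:
          if get_status() in targets:
              return True
          time.sleep(interval)
      return False
  ```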

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1254972/+subscriptions



[Yahoo-eng-team] [Bug 1226351] Re: Make RBD Usable for Ephemeral Storage

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226351

Title:
  Make RBD Usable for Ephemeral Storage

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Currently in Havana development, RBD as ephemeral storage has serious
  stability and performance issues that make the Ceph cluster a
  bottleneck when using an image as a source.

  Nova currently has to communicate with the external service Glance,
  which has to talk to the separate Ceph storage backend to fetch path
  information. The entire image is then downloaded to local disk, and
  then imported from local disk into RBD. This is a stability concern,
  especially with large images, if the instance is to be created
  successfully.

  This can be eliminated by instead having Nova's RBD image backend utility
  communicate directly with the Ceph backend to do a copy-on-write of the image.
  Not only does this greatly improve stability, but performance is drastically
  improved by not having to do a full copy of the image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1226351/+subscriptions



[Yahoo-eng-team] [Bug 1285908] Re: IOError: [Errno 2] No such file or directory: '/opt/stack/data/nova/instances/UUID/libvirt.xml'

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1285908

Title:
  IOError: [Errno 2] No such file or directory:
  '/opt/stack/data/nova/instances/UUID/libvirt.xml'

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  2014-02-27 22:17:30.887 26111 ERROR oslo.messaging.rpc.dispatcher [-] 
Exception during message handling: [Errno 2] No such file or directory: 
'/opt/stack/data/nova/instances/32ce67c4-8444-4f3a-8fda-58c8e303efdc/libvirt.xml'
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 133, in 
_dispatch_and_reply
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 176, in 
_dispatch
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 122, in 
_do_dispatch
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 88, in wrapped
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 71, in wrapped
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 243, in decorated_function
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher pass
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 229, in decorated_function
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 294, in decorated_function
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher 
function(self, context, *args, **kwargs)
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 271, in decorated_function
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher e, 
sys.exc_info())
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 258, in decorated_function
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2076, in start_instance
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher 
self._power_on(context, instance)
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2064, in _power_on
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher 
block_device_info)
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2093, in power_on
  2014-02-27 22:17:30.887 26111 TRACE oslo.messaging.rpc.dispatcher 
self._hard_reboot(context, insta

[Yahoo-eng-team] [Bug 1291364] Re: _destroy_evacuated_instances fails randomly with high number of instances

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291364

Title:
  _destroy_evacuated_instances fails randomly with high number of
  instances

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In our production environment (2013.2.1), we're facing a random error
  thrown while starting nova-compute on Hyper-V nodes.

  The following exception is thrown while calling
  '_destroy_evacuated_instances':

  16:30:58.802 7248 ERROR nova.openstack.common.threadgroup [-] 'NoneType' 
object is not iterable
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  (...)
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup   File 
"C:\Python27\lib\site-packages\nova\compute\manager.py", line 532, in 
_get_instances_on_driver
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup 
name_map = dict((instance['name'], instance) for instance in instances)
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup 
TypeError: 'NoneType' object is not iterable

  Full trace: http://paste.openstack.org/show/73243/

  Our first guess is that this problem is related to the number of
  instances in our deployment (~3000); they are all fetched in order to
  check for evacuated instances (as Hyper-V does not implement
  "list_instance_uuids").

  In the case of KVM this error does not happen, as it uses a smarter
  method to get this list based on the UUIDs of the instances.

  Although this is being reported using Hyper-V, the problem could
  occur in any other driver not implementing "list_instance_uuids"
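
  The failure and a defensive fix can be paraphrased in a few lines
  (a sketch, not Nova's actual patch): building the name map with
  dict((i['name'], i) for i in instances) raises TypeError when the
  driver returns None instead of an empty list.

  ```python
  # Sketch: tolerate a None result from the driver when mapping
  # instance names, instead of iterating over None.
  def build_name_map(instances):
      # `instances or []` treats None as an empty instance list.
      return dict((inst['name'], inst) for inst in instances or [])

  print(build_name_map(None))  # {} instead of TypeError
  print(build_name_map([{'name': 'instance-0001'}]))
  ```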

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291364/+subscriptions



[Yahoo-eng-team] [Bug 1282643] Re: block/live migration doesn't work with LVM as libvirt storage

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1282643

Title:
  block/live migration doesn't work with LVM as libvirt storage

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  ## What we did:

  We were trying to use block migration in a setup that uses LVM as
  libvirt storage:

  nova live-migrate --block-migrate  

  ## Current Result:

  Nothing happens, no migration, but in libvirtd.log of the destination
  hypervisor we saw:

     error : virNetClientProgramDispatchError:175 : Failed to open file
  '/dev/instances/instance-015f_disk': No such file or directory

  The /dev/instances/instance-015f_disk volume is the root disk of our
  instance.

  ## What we found:

  After a bit of digging through the Nova code, we saw that Nova on the
  destination host actually fails to create the instance's resources.
  This should have been done as part of the pre_live_migration RPC
  call, but that call does not receive any disks in its disk_info
  argument
  
(https://github.com/openstack/nova/blob/stable/havana/nova/virt/libvirt/driver.py#L4132)
  except the config disk. We found that this is because LVM disks
  (e.g. the root disk) are skipped by the
  driver.get_instance_disk_info method, specifically by this line
  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4585-L4587,
  which skips any disk that is not a file, assuming it must be block
  storage. That is not true, because LVM disks are created with a
  "block" disk type
  
(https://github.com/openstack/nova/blob/stable/havana/nova/virt/libvirt/imagebackend.py#L358);
  snippets from the libvirt.xml below:

  (libvirt.xml disk element snippets omitted: the XML markup was
  stripped by the mail archive)
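
  The faulty filter described above can be paraphrased as follows
  (a sketch of the reported logic, not Nova's actual code; the
  disk_info entries are simplified to dicts with a 'type' key):

  ```python
  # Sketch: which disks should be copied during block migration.
  def disks_to_copy(disk_infos):
      # Buggy behaviour: only 'file' disks were kept, so LVM-backed
      # root disks (type 'block') were silently skipped.
      # Fixed sketch: keep local block devices too; still skip disks
      # served over the network (e.g. Cinder volume attachments).
      return [d for d in disk_infos if d['type'] in ('file', 'block')]

  disks = [{'type': 'file'}, {'type': 'block'}, {'type': 'network'}]
  print(disks_to_copy(disks))  # keeps the file and block disks
  ```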
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1282643/+subscriptions



[Yahoo-eng-team] [Bug 1302238] Re: throw exception if no configured affinity filter

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1302238

Title:
  throw exception if no configured affinity filter

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  After https://blueprints.launchpad.net/nova/+spec/instance-group-api-
  extension, nova has the feature of creating instance groups with an
  affinity or anti-affinity policy and creating VM instances within an
  affinity/anti-affinity group.

  If ServerGroupAffinityFilter and ServerGroupAntiAffinityFilter are
  not enabled, the instance group will not be able to leverage
  affinity/anti-affinity.

  Take the following case:
  1) Create a group with affinity
  2) Create two VMs with this group
  3) The result is that those two VMs were not created on the same host.

  We should throw an exception if a server group is used with no
  configured affinity filter
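
  The proposed validation can be sketched as follows (the filter names
  are the real Nova scheduler filters; the function itself is
  illustrative, not the merged fix):

  ```python
  # Sketch: refuse a server-group boot when the matching scheduler
  # filter is not in the enabled filter list.
  REQUIRED_FILTER = {
      'affinity': 'ServerGroupAffinityFilter',
      'anti-affinity': 'ServerGroupAntiAffinityFilter',
  }

  def validate_group_policy(policy, enabled_filters):
      needed = REQUIRED_FILTER[policy]
      if needed not in enabled_filters:
          raise ValueError(
              '%s policy requires the %s scheduler filter'
              % (policy, needed))

  validate_group_policy('affinity',
                        ['RamFilter', 'ServerGroupAffinityFilter'])
  ```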

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1302238/+subscriptions



[Yahoo-eng-team] [Bug 1240043] Re: get_server_diagnostics must define a hypervisor-independent API

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240043

Title:
  get_server_diagnostics must define a hypervisor-independent API

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  get_server_diagnostics currently returns an unrestricted dictionary, which is 
only lightly documented in a few places, e.g.:
  
http://docs.openstack.org/grizzly/openstack-compute/admin/content/configuring-openstack-compute-basics.html

  That documentation shows explicit differences between libvirt and
  XenAPI.

  There are moves to test + enforce the return values, and suggestions
  that Ceilometer may be interested in consuming the output, therefore
  we need an API which is explicitly defined and not depend on
  hypervisor-specific behaviour.
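
  The direction argued for above can be illustrated with a typed
  diagnostics object (field names here are assumptions for the sketch,
  not Nova's final API): drivers fill a fixed set of fields instead of
  returning a free-form, hypervisor-specific dict.

  ```python
  # Sketch: a hypervisor-independent diagnostics object.
  class ServerDiagnostics(object):
      def __init__(self, cpu_time_ns=0, memory_used_kb=0,
                   nic_rx_bytes=0, nic_tx_bytes=0,
                   disk_read_bytes=0, disk_write_bytes=0):
          self.cpu_time_ns = cpu_time_ns
          self.memory_used_kb = memory_used_kb
          self.nic_rx_bytes = nic_rx_bytes
          self.nic_tx_bytes = nic_tx_bytes
          self.disk_read_bytes = disk_read_bytes
          self.disk_write_bytes = disk_write_bytes

      def to_dict(self):
          # Serialization stays uniform across libvirt, XenAPI, etc.
          return dict(vars(self))

  diag = ServerDiagnostics(cpu_time_ns=17000, memory_used_kb=524288)
  print(diag.to_dict()['memory_used_kb'])  # 524288
  ```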

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240043/+subscriptions



[Yahoo-eng-team] [Bug 1361611] Re: console/virt stop returning arbitrary dicts in driver API

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361611

Title:
  console/virt stop returning arbitrary dicts in driver API

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  There is a general desire to stop returning and passing arbitrary
  dicts in the virt driver API. For this report we would like to create
  typed objects for consoles that drivers will use to return values to
  the compute manager.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361611/+subscriptions



[Yahoo-eng-team] [Bug 1241866] Re: Nova-compute does not clean up Logical Volume on resize/migrate

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1241866

Title:
  Nova-compute does not clean up Logical Volume on resize/migrate

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Noticed this starting in RC1 of Havana (replicated with the release
  build of Havana), with the following in nova.conf:

  libvirt_images_type=lvm
  libvirt_images_volume_group=hv1_vg

  I am having to manually remove old Logical Volumes after a resize or
  migrate has completed in the Horizon UI.

  Before resize:
  [root@hv1 ~]# lvscan
    ACTIVE '/dev/hv1_vg/volume-9714c3de-b255-46c3-bdf6-234b99309d17' [15.00 GiB] inherit
    ACTIVE '/dev/hv1_vg/instance-0047_disk' [20.00 GiB] inherit

  After resize:
  [root@hv3 ~]# lvscan
    ACTIVE '/dev/hv3_vg/instance-0047_disk' [80.00 GiB] inherit

  On the source host, instance-0047_disk still exists:
  [root@hv1 ~]# lvscan
    ACTIVE '/dev/hv1_vg/volume-9714c3de-b255-46c3-bdf6-234b99309d17' [15.00 GiB] inherit
    ACTIVE '/dev/hv1_vg/instance-0047_disk' [20.00 GiB] inherit

  In addition, if an instance is migrated or resized to a hypervisor
  with a stale Logical Volume left over (with that same instance disk
  ID), it will simply re-use the existing volume and not grow or shrink
  it to the requested size.

  Debug logs of the instance going to the destination HV:
  http://pastebin.mozilla.org/3287928 and from the source HV:
  http://pastebin.mozilla.org/3287949

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1241866/+subscriptions



[Yahoo-eng-team] [Bug 1296519] Re: finish migration should handle exception to revert instance info

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296519

Title:
  finish migration should handle exception to revert instance info

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When an instance is resized, finish_migration is called at the end to
  create the new instance and destroy the old one. If the driver layer
  hits a problem while creating the new instance, the instance is set to
  the ERROR state.

  We are able to use `reset-state --active` to reset the instance and
  use it, but the instance information is left set to the new instance
  and is not reverted to the old one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296519/+subscriptions



[Yahoo-eng-team] [Bug 1361097] Re: Compute exception text never present when max sched attempt reached

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361097

Title:
  Compute exception text never present when max sched attempt reached

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When scheduling VMs and the retry logic kicks in, the failed compute
  exception text is saved to be displayed for triaging purposes in the
  conductor/scheduler logs.  When the conductor tries to display the
  exception text when the maximum scheduling attempts have been reached,
  the exception always shows 'None' for the exception text.

  Snippet from scheduler_utils.py:

      msg = (_('Exceeded max scheduling attempts %(max_attempts)d '
               'for instance %(instance_uuid)s. '
               'Last exception: %(exc)s.')
             % {'max_attempts': max_attempts,
                'instance_uuid': instance_uuid,
                'exc': exc})

  That is, 'exc' is erroneously ALWAYS None in this case.
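  A minimal sketch of the intended behaviour, assuming the retry
  information is threaded through filter_properties as a dict (the key
  names below are illustrative):

  ```python
  def record_failure(filter_properties, exc):
      # On each failed attempt, append the exception text so the final
      # "Exceeded max scheduling attempts" message can report it.
      retry = filter_properties.setdefault('retry', {'num_attempts': 0,
                                                     'exc': []})
      retry['exc'].append(str(exc))
      retry['num_attempts'] += 1

  def last_exception_text(filter_properties):
      # Return the most recent failure text, or None if none recorded;
      # the bug is that the real code path always ended up with None.
      retry = filter_properties.get('retry') or {}
      excs = retry.get('exc') or []
      return excs[-1] if excs else None
  ```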

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361097/+subscriptions



[Yahoo-eng-team] [Bug 1291741] Re: VMWare: Resize action does not change disk

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291741

Title:
  VMWare: Resize action does not change disk

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In nova/virt/vmwareapi/vmops.py:

      def finish_migration(self, context, migration, instance, disk_info,
                           network_info, image_meta, resize_instance=False,
                           block_device_info=None, power_on=True):
          """Completes a resize, turning on the migrated instance."""
          if resize_instance:
              client_factory = self._session._get_vim().client.factory
              vm_ref = vm_util.get_vm_ref(self._session, instance)
              vm_resize_spec = vm_util.get_vm_resize_spec(client_factory,
                                                          instance)
              reconfig_task = self._session._call_method(
                  self._session._get_vim(),
                  "ReconfigVM_Task", vm_ref,
                  spec=vm_resize_spec)
              ...

  finish_migration uses vm_util.get_vm_resize_spec() to get the resize
  parameters.

  But in nova/virt/vmwareapi/vm_util.py:

      def get_vm_resize_spec(client_factory, instance):
          """Provides updates for a VM spec."""
          resize_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
          resize_spec.numCPUs = int(instance['vcpus'])
          resize_spec.memoryMB = int(instance['memory_mb'])
          return resize_spec

  get_vm_resize_spec() does not set the disk size for the resize.
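  A hedged sketch of the direction a fix could take: grow the root disk
  via a VirtualDeviceConfigSpec with an 'edit' operation, following the
  vSphere API pattern. The disk_device parameter and its wiring are
  assumptions for illustration, not Nova's actual change.

  ```python
  class _Spec(object):
      # Stand-in for suds-generated spec objects; a plain attribute bag.
      pass

  def get_vm_resize_spec(client_factory, instance, disk_device=None):
      # Build the CPU/RAM resize spec as before, and optionally also
      # resize the root disk (hypothetical extension).
      resize_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
      resize_spec.numCPUs = int(instance['vcpus'])
      resize_spec.memoryMB = int(instance['memory_mb'])
      if disk_device is not None:
          disk_spec = client_factory.create('ns0:VirtualDeviceConfigSpec')
          disk_spec.operation = 'edit'
          # vSphere expresses disk capacity in KB.
          disk_device.capacityInKB = int(instance['root_gb']) * 1024 * 1024
          disk_spec.device = disk_device
          resize_spec.deviceChange = [disk_spec]
      return resize_spec
  ```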

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291741/+subscriptions



[Yahoo-eng-team] [Bug 1362129] Re: For rbd image backend, disk IO rate limiting isn't supported

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362129

Title:
  For rbd image backend, disk IO rate limiting isn't supported

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When using rbd as the disk backend (images_type=rbd in nova.conf),
  disk I/O tuning does not work as described at
  https://wiki.openstack.org/wiki/InstanceResourceQuota
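  As a sketch of the direction such a fix takes: for network-backed
  disks like rbd, the limits have to be rendered into libvirt's
  <iotune> element in the domain disk XML. The element and key names
  follow libvirt's domain XML schema; their use here is illustrative.

  ```python
  def build_iotune_xml(quota):
      # Render an <iotune> element from quota extra specs; only
      # recognised, non-zero keys are emitted.
      keys = ('total_bytes_sec', 'read_bytes_sec', 'write_bytes_sec',
              'total_iops_sec', 'read_iops_sec', 'write_iops_sec')
      lines = ['<iotune>']
      for key in keys:
          if quota.get(key):
              lines.append('  <%s>%d</%s>' % (key, quota[key], key))
      lines.append('</iotune>')
      return '\n'.join(lines)
  ```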

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362129/+subscriptions



[Yahoo-eng-team] [Bug 1358380] Re: rebuild API doesn't handle OnsetFile*LimitExceeded quota errors

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358380

Title:
  rebuild API doesn't handle OnsetFile*LimitExceeded quota errors

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I noticed this while reviewing
  https://review.openstack.org/#/c/102103/ for bug 1298131. The three
  OnsetFile*LimitExceeded exceptions from
  nova.compute.api.API._check_injected_file_quota are not handled in the
  os-compute rebuild APIs (v2 or v3), and there is no specific unit
  testing for those exceptions in the _check_injected_file_quota method
  either.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358380/+subscriptions



[Yahoo-eng-team] [Bug 1358795] Re: instance.create.end notification may not be sent if the instance is deleted during boot

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358795

Title:
  instance.create.end notification may not be sent if the instance is
  deleted during boot

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If an instance is deleted at a point during the virt driver.spawn()
  method that doesn't raise an exception, or while the power state is
  being updated, then the instance.save() which sets the final power
  state, vm_state, task_state, and launched_at will raise
  InstanceNotFound or UnexpectedDeletingTaskStateError and cause the
  final create.end notification to be skipped.  This could have
  implications for billing/usage in a deployment.
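  The shape of a fix can be sketched as follows: still emit the
  notification when the final save finds the instance already gone.
  The callables and the LookupError stand-in for InstanceNotFound are
  illustrative, not Nova's actual code.

  ```python
  def finalize_spawn(save_instance, notify):
      # Emit create.end even when the final save discovers the instance
      # was deleted mid-boot, so billing/usage records stay complete.
      try:
          save_instance()
      except LookupError:            # stand-in for InstanceNotFound
          notify('instance.create.end')
          raise
      notify('instance.create.end')
  ```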

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358795/+subscriptions



[Yahoo-eng-team] [Bug 1360113] Re: V2 hypervisors Unit Test tests hypervisors API as non Admin

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360113

Title:
  V2 hypervisors Unit Test tests hypervisors API as non Admin

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The hypervisors APIs are admin APIs, and unit tests should test them
  accordingly. But all of the V2 hypervisors unit tests
  (https://github.com/openstack/nova/blob/master/nova/tests/api/openstack/compute/contrib/test_hypervisors.py)
  test them as a non-admin API.

  The issue is in fake_policy.py, where the V2 hypervisors API is not
  marked with the admin role, unlike V3:
  https://github.com/openstack/nova/blob/master/nova/tests/fake_policy.py#L221

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1360113/+subscriptions



[Yahoo-eng-team] [Bug 1358702] Re: Hyper-V unit test fails on Windows due to path separator inconsistency: nova.tests.virt.hyperv.test_pathutils.PathUtilsTestCase.test_lookup_config_drive_path

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358702

Title:
  Hyper-V unit test fails on Windows due to path separator
  inconsistency:
  
nova.tests.virt.hyperv.test_pathutils.PathUtilsTestCase.test_lookup_config_drive_path

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The following test fails due to mismatching in the path separator.

  FAIL: 
nova.tests.virt.hyperv.test_pathutils.PathUtilsTestCase.test_lookup_config_drive_path
  --
  _StringException: Empty attachments:
pythonlogging:''

  Traceback (most recent call last):
    File "C:\OpenStack\nova\nova\tests\virt\hyperv\test_pathutils.py", line 48, in test_lookup_configdrive_path
      format_ext)
    File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\testtools\testcase.py", line 321, in assertEqual
      self.assertThat(observed, matcher, message)
    File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\testtools\testcase.py", line 406, in assertThat
      raise mismatch_error
  MismatchError: !=:
  reference = 'C:/fake_instance_dir\\configdrive.vhd'
  actual    = 'C:/fake_instance_dir/configdrive.vhd'
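  One way to make the comparison OS-agnostic is to normalize both paths
  before comparing; ntpath reproduces Windows path semantics on any
  platform, so the approach can be demonstrated outside Windows too
  (this is a sketch of the technique, not the actual patch):

  ```python
  import ntpath

  def assert_same_windows_path(expected, actual):
      # ntpath.normpath collapses '/' and '\\' to the same separator,
      # so mixed-separator paths compare equal under Windows semantics.
      assert ntpath.normpath(expected) == ntpath.normpath(actual)

  assert_same_windows_path('C:/fake_instance_dir\\configdrive.vhd',
                           'C:/fake_instance_dir/configdrive.vhd')
  ```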

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358702/+subscriptions



[Yahoo-eng-team] [Bug 1360022] Re: min_ram and min_disk is ignored when boot from volume

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360022

Title:
  min_ram and min_disk is ignored when boot from volume

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When booting from a volume that was created from an image, the
  original image's min_ram and min_disk attributes are ignored.

  The cause is that _check_requested_image() in compute/api.py skips
  the check when the source is a volume.
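  The check that should apply in both cases can be sketched like this
  (names and the flat dict shapes are illustrative; the real code works
  on Nova's image-meta and flavor objects):

  ```python
  def check_requested_image(image_meta, flavor):
      # Validate the flavor against the image minimums. The same check
      # should run even when image_meta comes from a volume's stashed
      # image properties, which is what the bug says gets skipped.
      if image_meta.get('min_ram', 0) > flavor['memory_mb']:
          raise ValueError('flavor memory is below the image min_ram')
      if image_meta.get('min_disk', 0) > flavor['root_gb']:
          raise ValueError('flavor disk is below the image min_disk')
  ```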

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1360022/+subscriptions



[Yahoo-eng-team] [Bug 1356983] Re: Need to document installing graphviz from distro for tox -e docs run

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356983

Title:
  Need to document installing graphviz from distro for tox -e docs run

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This commit:

  
https://github.com/openstack/nova/commit/a507d42cf5d9912c2b3622e84afb8b7d3278595b

  Made warnings be treated as errors when building docs; therefore,
  people are going to need to get used to running 'tox -e docs'
  locally. However, you need the graphviz package for that to work,
  which comes from the distro, so we need to document that in the
  development environment setup doc here:

  http://docs.openstack.org/developer/nova/devref/development.environment.html#linux-systems

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356983/+subscriptions



[Yahoo-eng-team] [Bug 1356815] Re: Nova hacking check for jsonutils used invalid number

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356815

Title:
  Nova hacking check for jsonutils used invalid number

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  It should have been 324 and not 324

  commit 243879f5c51fc45f03491bcb78765945ddf76be8 was bad

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356815/+subscriptions



[Yahoo-eng-team] [Bug 1359475] Re: AttributeError: 'module' object has no attribute 'VIR_MIGRATE_LIVE'

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359475

Title:
  AttributeError: 'module' object has no attribute 'VIR_MIGRATE_LIVE'

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This commit added some new default flags for migration in the libvirt
  driver:

  
https://github.com/openstack/nova/commit/26504d71ceaecf22f135d8321769db801290c405

  However those new flags weren't added to
  nova.tests.virt.libvirt.fakelibvirt, so if you're not running with a
  real libvirt you're going to get this:

  FAIL: 
nova.tests.virt.libvirt.test_driver.LibvirtConnTestCase.test_live_migration_uses_migrateToURI_without_dest_listen_addrs
  tags: worker-1
  --
  Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  INFO [nova.network.driver] Loading network driver 'nova.network.linux_net'
  INFO [nova.virt.driver] Loading compute driver 'nova.virt.fake.FakeDriver'
  WARNING [nova.virt.libvirt.firewall] Libvirt module could not be loaded. 
NWFilterFirewall will not work correctly.
  ERROR [nova.virt.libvirt.driver] Live Migration failure: 'module' object has 
no attribute 'VIR_MIGRATE_LIVE'
  }}}

  traceback-1: {{{
  Traceback (most recent call last):
    File "/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/fixtures/fixture.py", line 112, in cleanUp
      return self._cleanups(raise_errors=raise_first)
    File "/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/fixtures/callmany.py", line 88, in __call__
      reraise(error[0], error[1], error[2])
    File "/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/fixtures/callmany.py", line 82, in __call__
      cleanup(*args, **kwargs)
    File "/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 286, in VerifyAll
      mock_obj._Verify()
    File "/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 506, in _Verify
      raise ExpectedMethodCallsError(self._expected_calls_queue)
  ExpectedMethodCallsError: Verify: Expected methods never called:
    0.  Stub for Domain.migrateToURI() -> None.__call__('qemu+tcp://dest/system', , None, 0) -> None
  }}}

  Traceback (most recent call last):
    File "nova/tests/virt/libvirt/test_driver.py", line 4607, in test_live_migration_uses_migrateToURI_without_dest_listen_addrs
      migrate_data=migrate_data)
    File "/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 393, in assertRaises
      self.assertThat(our_callable, matcher)
    File "/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 404, in assertThat
      mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
    File "/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 454, in _matchHelper
      mismatch = matcher.match(matchee)
    File "/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 108, in match
      mismatch = self.exception_matcher.match(exc_info)
    File "/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py", line 62, in match
      mismatch = matcher.match(matchee)
    File "/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 385, in match
      reraise(*matchee)
    File "/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py", line 101, in match
      result = matchee()
    File "/home/jenkins/workspace/osee-nova-merge/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 902, in __call__
      return self._callable_object(*self._args, **self._kwargs)
    File "nova/virt/libvirt/driver.py", line 4798, in _live_migration
      recover_method(context, instance, dest, block_migration)
    File "nova/openstack/common/excutils.py", line 82, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File "nova/virt/libvirt/driver.py", line 4764, in _live_migration
      flagvals = [getattr(libvirt, x.strip()) for x in flaglist]
  AttributeError: 'module' object has no attribute 'VIR_MIGRATE_LIVE'

  That is in basically anything that goes through the _live_migrate
  method.
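  The shape of the fix is to mirror the new constants in fakelibvirt so
  the driver's getattr(libvirt, flag_name) lookups succeed without real
  bindings. A sketch (the numeric values match libvirt's
  virDomainMigrateFlags enum, but treat the selection as illustrative):

  ```python
  # Constants a fake libvirt module must define for the driver's
  # getattr-based flag lookups to work in unit tests.
  VIR_MIGRATE_LIVE = 1
  VIR_MIGRATE_PEER2PEER = 2
  VIR_MIGRATE_UNDEFINE_SOURCE = 16

  def flags_from_config(flag_string):
      # Mirrors how the driver builds the numeric flag value from the
      # comma-separated live_migration_flag config option.
      total = 0
      for name in flag_string.split(','):
          total |= globals()[name.strip()]
      return total
  ```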

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1359475/+subscriptions


[Yahoo-eng-team] [Bug 1359835] Re: select_destinations should send start/end notifications

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359835

Title:
  select_destinations should send start/end notifications

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In the filter scheduler, schedule_run_instance sends notifications,
  but select_destinations does not.

  This is inconsistent, and we should send start/end notifications from
  both code paths.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1359835/+subscriptions



[Yahoo-eng-team] [Bug 1359002] Re: comments misspelled in type_filter

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359002

Title:
  comments misspelled in type_filter

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  the comments of nova.scheduler.filters.type_filter.py:

  class TypeAffinityFilter(filters.BaseHostFilter):
  ...
  def host_passes(self, host_state, filter_properties):
  """Dynamically limits hosts to one instance type

  Return False if host has any instance types other then the requested
  type. Return True if all instance types match or if host is empty.
  """
  ...

  
  'other then' in the next-to-last line should be 'other than'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1359002/+subscriptions



[Yahoo-eng-team] [Bug 1357152] Re: nova.utils.TIME_UNITS['Day'] is only 84400 sec

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357152

Title:
  nova.utils.TIME_UNITS['Day'] is only 84400 sec

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Full day is 86400 sec

  diff --git a/nova/utils.py b/nova/utils.py
  index 65d99aa..0b9afe4 100644
  --- a/nova/utils.py
  +++ b/nova/utils.py
  @@ -90,7 +90,7 @@ TIME_UNITS = {
   'SECOND': 1,
   'MINUTE': 60,
   'HOUR': 3600,
  -'DAY': 84400
  +'DAY': 86400
   }

  
  Based on code from master branch 2014-08-14 HEAD commit 
374e9766c20c9f83dbd8139aa9d95a66b5da7295
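  The corrected value follows directly from the unit arithmetic:

  ```python
  # 24 hours x 60 minutes x 60 seconds = 86400, not 84400.
  TIME_UNITS = {
      'SECOND': 1,
      'MINUTE': 60,
      'HOUR': 3600,
      'DAY': 24 * 60 * 60,
  }
  ```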

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357152/+subscriptions



[Yahoo-eng-team] [Bug 1354448] Re: The Hyper-V driver should raise a InstanceFaultRollback in case of resize down requests

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1354448

Title:
  The Hyper-V driver should raise a InstanceFaultRollback in case of
  resize down requests

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The Hyper-V driver does not support resizing down and currently
  raises an exception if the user attempts to do so, causing the
  instance to go into the ERROR state.

  The driver should use the recently introduced instance fault
  exception "exception.InstanceFaultRollback" instead, which leaves the
  instance in the ACTIVE state as expected.
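  The mechanism can be sketched as follows; the exception class is a
  stand-in for nova.exception.InstanceFaultRollback and the function
  body is illustrative, not the driver's actual code:

  ```python
  class InstanceFaultRollback(Exception):
      # Wraps the real error so the compute manager can record a fault
      # but roll the instance back to ACTIVE instead of leaving ERROR.
      def __init__(self, inner_exception=None):
          super(InstanceFaultRollback, self).__init__(str(inner_exception))
          self.inner_exception = inner_exception

  def migrate_disk_and_power_off(flavor_root_gb, current_root_gb):
      # Sketch of the driver-side check: refuse resize-down, but wrap
      # the error so the instance does not end up in ERROR state.
      if flavor_root_gb < current_root_gb:
          raise InstanceFaultRollback(
              ValueError('Hyper-V does not support resizing down'))
  ```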

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1354448/+subscriptions



[Yahoo-eng-team] [Bug 1353029] Re: Trusted Filter does not work when Mt. Wilson returns non-ISO formatted dates.

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1353029

Title:
  Trusted Filter does not work when Mt. Wilson returns non-ISO formatted
  dates.

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The trusted_filter assumes that all OAT attestation servers will
  return `vtime` in ISO 8601 format. This is not the case with the Mt.
  Wilson attestation server, and there is no way to configure Mt.
  Wilson to ship an appropriately formatted `vtime`, which prevents
  trusted hosts from appearing in the available host list for instances
  booted from a flavor with the trust extra-spec.

  It is a pretty trivial change to allow locale-formatted dates in the
  trusted_filter as well.
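  The fallback parsing can be sketched as trying ISO 8601 first and
  then a second format; the exact non-ISO format Mt. Wilson emits is an
  assumption here (a ctime-style string is used for illustration):

  ```python
  from datetime import datetime

  def parse_vtime(vtime):
      # Try ISO 8601 first, then fall back to a ctime-style format;
      # which fallback format the attestation server uses is an
      # assumption for this sketch.
      for fmt in ('%Y-%m-%dT%H:%M:%S', '%a %b %d %H:%M:%S %Y'):
          try:
              return datetime.strptime(vtime, fmt)
          except ValueError:
              continue
      raise ValueError('unrecognized vtime format: %s' % vtime)
  ```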

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1353029/+subscriptions



[Yahoo-eng-team] [Bug 1356687] Re: hacking check for jsonutils produces pep8 traceback

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356687

Title:
  hacking check for jsonutils produces pep8 traceback

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  the new jsonutils hacking check produces a pep8 traceback because it
  returns a set (column offset and error text) instead of an iterable
  (as logical line checks, like this check, should).

  commit 243879f5c51fc45f03491bcb78765945ddf76be8
  Change-Id: I86ed6cd3316dd4da5e1b10b36a3ddba3739316d3

  = 8< = TEST CASE = 8< =
  $ echo 'foo = json.dumps(bar)' >nova/foobar.py
  $ flake8 -vv nova/foobar.py
  local configuration: in /home/dev/Desktop/nova-test
    ignore = E121,E122,E123,E124,E125,E126,E127,E128,E129,E131,E251,H405,H803,H904
    exclude = .venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build,tools
  checking nova/foobar.py
  foo = json.dumps(bar)
  Traceback (most recent call last):
    File "/home/dev/Desktop/nova-test/.venv/bin/flake8", line 9, in <module>
      load_entry_point('flake8==2.1.0', 'console_scripts', 'flake8')()
    File "/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/flake8/main.py", line 32, in main
      report = flake8_style.check_files()
    File "/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/pep8.py", line 1672, in check_files
      runner(path)
    File "/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/flake8/engine.py", line 73, in input_file
      return fchecker.check_all(expected=expected, line_offset=line_offset)
    File "/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/pep8.py", line 1436, in check_all
      self.check_logical()
    File "/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/pep8.py", line 1338, in check_logical
      for offset, text in self.run_check(check, argument_names) or ():
  TypeError: 'int' object is not iterable
  = 8< = TEST CASE = 8< =

  = 8< = PATCH = 8< =
  diff --git a/nova/hacking/checks.py b/nova/hacking/checks.py
  index a1dd614..7fe7412 100644
  --- a/nova/hacking/checks.py
  +++ b/nova/hacking/checks.py
  @@ -300,7 +300,7 @@ def use_jsonutils(logical_line, filename):
       for f in json_funcs:
           pos = logical_line.find('json.%s' % f)
           if pos != -1:
  -            return (pos, msg % {'fun': f})
  +            yield (pos, msg % {'fun': f})
   
   
   def factory(register):
  = 8< = PATCH = 8< =

  it's late, so tomorrow, if there hasn't been any activity on this,
  then i'll submit a patch for review.
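  The failure mode can be reproduced without flake8: pep8 iterates the
  check's result and unpacks (offset, text) pairs, so a returned tuple
  gets iterated element by element and the integer offset fails to
  unpack. A minimal sketch (message text is illustrative):

  ```python
  def check_return(logical_line):
      # Buggy form: the (offset, text) tuple is the function's single
      # return value, not a sequence of pairs.
      return (0, "use jsonutils")

  def check_yield(logical_line):
      # Fixed form: a generator yielding (offset, text) pairs, which
      # is what pep8 expects from a logical-line check.
      yield (0, "use jsonutils")

  def run_check(check, line):
      # Simplified version of pep8's dispatch loop: iterate the result
      # and unpack each item into (offset, text).
      return [(offset, text) for offset, text in check(line) or ()]
  ```

  With check_return, iterating the tuple yields the int 0 first, and
  unpacking it raises the TypeError seen in the traceback; check_yield
  behaves as intended.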

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356687/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356058] Re: Various extensions don't respect content header when returning a 202

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356058

Title:
  Various extensions don't respect content header when returning a 202

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Various nova extensions commands return a text response ("202 Accepted
  [..]") even when provided with an "Accept: application/json" header.
  For other 202 responses, either an empty body or a JSON-formatted
  response is standard. The implementation should be consistent with
  other 202s from Nova and other OpenStack services.

  This seems to be due to returning a webob exception instead of a response. 
The affected extensions are:
  $ grep HTTPAccepted nova/api/openstack/compute/contrib/*.py
  nova/api/openstack/compute/contrib/cloudpipe_update.py:return 
webob.exc.HTTPAccepted()
  nova/api/openstack/compute/contrib/fixed_ips.py:return 
webob.exc.HTTPAccepted()
  nova/api/openstack/compute/contrib/networks_associate.py:return 
exc.HTTPAccepted()
  nova/api/openstack/compute/contrib/networks_associate.py:return 
exc.HTTPAccepted()
  nova/api/openstack/compute/contrib/networks_associate.py:return 
exc.HTTPAccepted()
  nova/api/openstack/compute/contrib/os_networks.py:return 
exc.HTTPAccepted()
  nova/api/openstack/compute/contrib/os_networks.py:return 
exc.HTTPAccepted()
  nova/api/openstack/compute/contrib/os_tenant_networks.py:response 
= exc.HTTPAccepted()
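  A stdlib-only sketch of the desired behavior: choose the 202 body format
  from the client's Accept header instead of emitting webob's canned text
  body. The helper name accepted_response is hypothetical, not nova's API:

  ```python
  import json

  def accepted_response(accept_header, payload=None):
      """Return (status, content_type, body) for a 202 reply.

      webob.exc.HTTPAccepted(), by contrast, always emits its default
      text body regardless of the Accept header.
      """
      if accept_header == "application/json":
          body = json.dumps(payload) if payload is not None else ""
          return 202, "application/json", body
      return 202, "text/plain", "202 Accepted"

  print(accepted_response("application/json"))
  # prints: (202, 'application/json', '')
  ```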

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356058/+subscriptions



[Yahoo-eng-team] [Bug 1352732] Re: live migration fails:CPU feature `erms' specified more than once

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352732

Title:
  live migration fails:CPU feature `erms' specified more than once

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When I live migrate VMs from one node to another (both of the same
  kind), the nova service at the destination node throws the exception
  below. My physical compute nodes both run Ubuntu 14.04 LTS and
  OpenStack git stable/icehouse.

  (I notice there are similar bug reports such as
  https://bugs.launchpad.net/nova/+bug/1303536, but the code throwing the
  exception does not seem to be the same.)

  /usr/local/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py
  4229 # Compare CPU
  4230 source_cpu_info = src_compute_info['cpu_info']
  4231 self._compare_cpu(source_cpu_info)

  Refer to http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult
  2014-08-05 14:20:11.027 914 ERROR oslo.messaging.rpc.dispatcher [-] Exception 
during message handling: XML error: CPU feature `erms' specified more than once
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
133, in _dispatch_and_reply
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
176, in _dispatch
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
122, in _do_dispatch
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw) 
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/compute/manager.py", line 303, in 
decorated_function
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher e, 
sys.exc_info())
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/compute/manager.py", line 290, in 
decorated_function
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/compute/manager.py", line 4440, in 
check_can_live_migrate_destination
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher 
block_migration, disk_over_commit)
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4231, in check_can_live_migrate_destination
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher 
self._compare_cpu(source_cpu_info)
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4373, in _compare_cpu
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher 
LOG.error(m, {'ret': ret, 'u': u})
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
  2014-08-05 14:20:11.027 914 TRACE oslo.messaging.rpc.dispatcher 
six.re
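  The actual nova fix may differ, but the libvirt error says the generated
  <cpu> XML repeats a feature name. A sketch of order-preserving
  deduplication applied to the feature list before building that XML:

  ```python
  def dedup_features(features):
      """Drop repeated CPU feature names while preserving their order."""
      seen = set()
      unique = []
      for name in features:
          if name not in seen:
              seen.add(name)
              unique.append(name)
      return unique

  # 'erms' appearing twice is what libvirt rejects with
  # "CPU feature `erms' specified more than once".
  print(dedup_features(["erms", "smep", "erms", "avx"]))
  # prints: ['erms', 'smep', 'avx']
  ```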

[Yahoo-eng-team] [Bug 1352659] Re: race in server show api

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352659

Title:
  race in server show api

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Because of the instance object's lazy loading, it's possible to get
  into situations where the API code is halfway through assembling data
  to return to the client when the instance disappears underneath it. We
  really need to ensure everything we will need is retrieved up front so
  we have a consistent snapshot view of the instance.

  [req-5ca39eb3-c1d2-433b-8dac-1bf5f338ce1f 
ServersAdminNegativeV3Test-1453501114 ServersAdminNegativeV3Test-364813115] 
Unexpected exception in API method
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions Traceback 
(most recent call last):
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/extensions.py", line 473, in wrapped
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/plugins/v3/servers.py", line 
410, in show
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions return 
self._view_builder.show(req, instance)
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/views/servers.py", line 268, in 
show
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions 
_inst_fault = self._get_fault(request, instance)
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/views/servers.py", line 214, in 
_get_fault
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions fault = 
instance.fault
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/objects/base.py", line 67, in getter
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions 
self.obj_load_attr(name)
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/objects/instance.py", line 520, in obj_load_attr
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions 
expected_attrs=[attrname])
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/objects/base.py", line 153, in wrapper
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions result 
= fn(cls, context, *args, **kwargs)
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/objects/instance.py", line 310, in get_by_uuid
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions 
use_slave=use_slave)
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/db/api.py", line 676, in instance_get_by_uuid
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions 
columns_to_join, use_slave=use_slave)
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 167, in wrapper
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 1715, in 
instance_get_by_uuid
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions 
columns_to_join=columns_to_join, use_slave=use_slave)
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 1727, in 
_instance_get_by_uuid
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions raise 
exception.InstanceNotFound(instance_id=uuid)
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions 
InstanceNotFound: Instance fcff276a-d410-4760-9b98-4014024b1353 could not be 
found.
  2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions 

  http://logs.openstack.org/periodic-qa/periodic-tempest-dsvm-
  nova-v3-full-master/a278802/logs/screen-n-api.txt
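  The race in the traceback can be reproduced with a stub object whose
  lazy-loaded attribute re-reads a row that may be gone. The classes and
  the catch-and-default pattern below are illustrative, not nova's code:

  ```python
  class InstanceNotFound(Exception):
      pass

  class Instance:
      """Stub: accessing .fault triggers a fresh DB read, mimicking
      the lazy load in nova/objects/instance.py."""
      def __init__(self, deleted=False):
          self._deleted = deleted

      @property
      def fault(self):
          if self._deleted:
              raise InstanceNotFound("row vanished mid-request")
          return None

  def get_fault(instance):
      # Defensive pattern: a mid-request disappearance means "no fault
      # to report", not a 500 for the whole show() call.
      try:
          return instance.fault
      except InstanceNotFound:
          return None

  print(get_fault(Instance(deleted=True)))  # prints: None
  ```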

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352659/+subscriptions



[Yahoo-eng-team] [Bug 1352768] Re: virt: error in log when log exception in guestfs.py

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352768

Title:
  virt: error in log when log exception in guestfs.py

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The code review https://review.openstack.org/#/c/104262/ introduces an
  error in the log because of the line at 137:
  + LOG.info(_LI("Unable to force TCG mode, libguestfs too old?"),
  + ex)

  Error is:

  Traceback (most recent call last):
File "/usr/lib/python2.7/logging/__init__.py", line 851, in emit
  msg = self.format(record)
File "/opt/stack/nova/nova/openstack/common/log.py", line 685, in format
  return logging.StreamHandler.format(self, record)
File "/usr/lib/python2.7/logging/__init__.py", line 724, in format
  return fmt.format(record)
File "/opt/stack/nova/nova/openstack/common/log.py", line 649, in format
  return logging.Formatter.format(self, record)
File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
  record.message = record.getMessage()
File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
  msg = msg % self.args
  TypeError: not all arguments converted during string formatting
  Logged from file guestfs.py, line 139

  To fix this issue, we just need to add a %s placeholder to the format string.
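  The broken and fixed calls can be shown with stdlib logging alone; the
  logger name is arbitrary:

  ```python
  import logging

  logging.basicConfig(level=logging.INFO)
  LOG = logging.getLogger("guestfs-demo")
  ex = RuntimeError("libguestfs too old")

  # Broken form from the review: the extra positional argument has no
  # matching placeholder, so record.getMessage() raises
  # "TypeError: not all arguments converted during string formatting":
  #     LOG.info("Unable to force TCG mode, libguestfs too old?", ex)

  # Fixed form: give the argument a %s placeholder.
  LOG.info("Unable to force TCG mode, libguestfs too old? %s", ex)
  ```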

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352768/+subscriptions



[Yahoo-eng-team] [Bug 1352141] Re: uncaught exception in floating ip creation when pool is not found with neutron backend

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352141

Title:
  uncaught exception in floating ip creation when pool is not found with
  neutron backend

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  2014-07-31 07:14:07.574 ERROR nova.api.openstack 
[req-98b42d5d-575f-4209-b711-4cb2ae8ef14e FloatingIPsNegativeTestXML-1138647745 
FloatingIPsNegativeTestXML-1764448859] Caught error: Floating ip pool not found.
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack Traceback (most recent 
call last):
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/__init__.py", line 125, in __call__
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 663, in __call__
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack return 
self.app(env, start_response)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 917, in __call__
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack content_type, 
body, accept)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 983, in _process_stack
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 1070, in dispatch
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/compute/contrib/floating_ips.py", line 
158, in create
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack address = 
self.network_api.allocate_floating_ip(context, pool)
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/network/neutronv2/api.py", line 927, in 
allocate_floating_ip
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/network/neutronv2/api.py", line 927, in 
allocate_floating_ip
  2014-07-31 07:14:07.574 28614 TRACE nova.api.openstack pool_id = 
self._get_floating_ip_pool_id_by_name_or_id(client, pool)
  2014-07-31 07:14:07.574 28614 TRACE
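  A sketch of the fix direction: translate the backend's "pool not found"
  error into a clean 404 at the API layer instead of letting it escape as
  an uncaught 500. The exception classes here are stand-ins for nova's
  FloatingIpPoolNotFound and webob.exc.HTTPNotFound:

  ```python
  class FloatingIpPoolNotFound(Exception):
      """Stand-in for the nova exception in the traceback."""

  class HTTPNotFound(Exception):
      """Stand-in for webob.exc.HTTPNotFound."""

  def create_floating_ip(allocate, pool):
      # Catch the domain error at the API boundary and re-raise it as
      # the HTTP error the client should actually see.
      try:
          return allocate(pool)
      except FloatingIpPoolNotFound as e:
          raise HTTPNotFound(str(e))

  def fake_allocate(pool):
      raise FloatingIpPoolNotFound("Floating ip pool not found.")

  try:
      create_floating_ip(fake_allocate, "missing-pool")
  except HTTPNotFound as e:
      print("404:", e)   # prints: 404: Floating ip pool not found.
  ```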

[Yahoo-eng-team] [Bug 1351810] Re: Move _is_mapping logic to more central place

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1351810

Title:
  Move _is_mapping logic to more central place

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This bug is a follow-up to Nikola Dipanov's comment in 
https://review.openstack.org/#/c/109834/2/nova/compute/manager.py.
  The logic to identify volumes is currently a nested function in 
_default_block_device_names, named _is_mapping.  It should be moved to a more 
general place so others could utilize it.
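  The shape of the requested refactor, with a placeholder predicate body
  (not nova's exact _is_mapping logic):

  ```python
  # Promoted to module level so any caller can import it.
  def is_mapping(bdm):
      """Placeholder volume-mapping test on a block-device-mapping dict."""
      return bdm.get('source_type') in ('volume', 'snapshot', 'image', 'blank')

  def default_block_device_names(bdms):
      # Before the refactor this predicate was nested in here, which made
      # it unreachable from other code paths that need the same test.
      return [b for b in bdms if is_mapping(b)]

  print(default_block_device_names([{'source_type': 'volume'},
                                    {'source_type': 'lvm'}]))
  # prints: [{'source_type': 'volume'}]
  ```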

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1351810/+subscriptions



[Yahoo-eng-team] [Bug 1352428] Re: HyperV "Shutting Down" state is not mapped

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352428

Title:
  HyperV "Shutting Down" state is not mapped

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The method which gets VM-related information can fail if the VM is in an
  intermediate state such as "Shutting down".
  The reason is that some of the Hyper-V specific VM states are not defined
  as possible states.

  This will result in a KeyError, as shown below:

  http://paste.openstack.org/show/90015/
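  A minimal sketch of the fix idea, mapping raw hypervisor states through a
  table that covers intermediate states too. The numeric constants and state
  names below are illustrative, not the real Hyper-V enumeration:

  ```python
  HYPERV_POWER_STATE = {
      2: 'RUNNING',
      3: 'SHUTDOWN',
      4: 'SHUTTING_DOWN',   # the intermediate state missing from the map
      32768: 'PAUSED',
  }

  def map_power_state(raw):
      # .get() with a fallback avoids the KeyError while the VM sits in a
      # transient state the mapping does not (yet) know about.
      return HYPERV_POWER_STATE.get(raw, 'NOSTATE')

  print(map_power_state(4), map_power_state(12345))
  # prints: SHUTTING_DOWN NOSTATE
  ```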

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352428/+subscriptions



[Yahoo-eng-team] [Bug 1348204] Re: test_encrypted_cinder_volumes_cryptsetup times out waiting for volume to be available

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348204

Title:
  test_encrypted_cinder_volumes_cryptsetup times out waiting for volume
  to be available

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  http://logs.openstack.org/15/109115/1/check/check-tempest-dsvm-
  full/168a5dd/console.html#_2014-07-24_01_07_09_115

  2014-07-24 01:07:09.116 | 
tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup[compute,image,volume]
  2014-07-24 01:07:09.116 | 

  2014-07-24 01:07:09.116 | 
  2014-07-24 01:07:09.116 | Captured traceback:
  2014-07-24 01:07:09.117 | ~~~
  2014-07-24 01:07:09.117 | Traceback (most recent call last):
  2014-07-24 01:07:09.117 |   File "tempest/test.py", line 128, in wrapper
  2014-07-24 01:07:09.117 | return f(self, *func_args, **func_kwargs)
  2014-07-24 01:07:09.117 |   File 
"tempest/scenario/test_encrypted_cinder_volumes.py", line 63, in 
test_encrypted_cinder_volumes_cryptsetup
  2014-07-24 01:07:09.117 | self.attach_detach_volume()
  2014-07-24 01:07:09.117 |   File 
"tempest/scenario/test_encrypted_cinder_volumes.py", line 49, in 
attach_detach_volume
  2014-07-24 01:07:09.117 | self.nova_volume_detach()
  2014-07-24 01:07:09.117 |   File "tempest/scenario/manager.py", line 757, 
in nova_volume_detach
  2014-07-24 01:07:09.117 | self._wait_for_volume_status('available')
  2014-07-24 01:07:09.117 |   File "tempest/scenario/manager.py", line 710, 
in _wait_for_volume_status
  2014-07-24 01:07:09.117 | self.volume_client.volumes, self.volume.id, 
status)
  2014-07-24 01:07:09.118 |   File "tempest/scenario/manager.py", line 230, 
in status_timeout
  2014-07-24 01:07:09.118 | not_found_exception=not_found_exception)
  2014-07-24 01:07:09.118 |   File "tempest/scenario/manager.py", line 296, 
in _status_timeout
  2014-07-24 01:07:09.118 | raise exceptions.TimeoutException(message)
  2014-07-24 01:07:09.118 | TimeoutException: Request timed out
  2014-07-24 01:07:09.118 | Details: Timed out waiting for thing 
4ef6a14a-3fce-417f-aa13-5aab1789436e to become available

  I've actually been seeing this out of tree in our internal CI also but
  thought it was just us or our slow VMs, this is the first I've seen it
  upstream.

  From the traceback in the console log, it looks like the volume does
  get to available status because it doesn't get out of that state when
  tempest is trying to delete the volume on tear down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1348204/+subscriptions



[Yahoo-eng-team] [Bug 1348584] Re: KeyError in nova.compute.api.API.external_instance_event

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348584

Title:
  KeyError in nova.compute.api.API.external_instance_event

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The fix for bug 1333654 ensured that events for instances without a host
  are not accepted.
  However, instances without a host are still being passed to the compute
  API layer.

  This is likely to result in KeyErrors such as the one found here:
  http://logs.openstack.org/51/109451/2/check/check-tempest-dsvm-
  neutron-full/ad70f74/logs/screen-n-api.txt.gz#_2014-07-25_01_41_48_068

  The fix for this bug should be straightforward.
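  One straightforward shape for that fix: drop hostless instances before
  building the per-host dispatch map. The data shapes here are simplified
  stand-ins for nova's objects:

  ```python
  def group_events_by_host(instances_events):
      """Build a host -> [(instance, event)] map, skipping hostless rows."""
      by_host = {}
      for instance, event in instances_events:
          host = instance.get('host')
          if not host:
              # A hostless instance cannot be dispatched; skipping it here
              # prevents the KeyError deeper in the compute API layer.
              continue
          by_host.setdefault(host, []).append((instance, event))
      return by_host

  events = [({'uuid': 'a', 'host': 'compute1'}, 'network-changed'),
            ({'uuid': 'b', 'host': None}, 'network-changed')]
  print(sorted(group_events_by_host(events)))   # prints: ['compute1']
  ```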

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1348584/+subscriptions



[Yahoo-eng-team] [Bug 1350268] Re: allocate_fixed_ip should cleanup with correct param

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350268

Title:
  allocate_fixed_ip should cleanup with correct param

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In nova-network, when allocate_fixed_ip fails for some unknown reason,
  it adds

  cleanup.append(fip.disassociate)

  to undo the work it did while handling the exception.
  But the function signature in objects/fixed_ips.py is:
  def disassociate(self, context):

  so the cleanup function will not be executed correctly:

  try:
  f()
  except Exception:
  LOG.warn(_('Error cleaning up fixed ip allocation. '
 'Manual cleanup may be required.'),
   exc_info=True)
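  One way to make that cleanup callable with no arguments is to bind the
  context when registering it, e.g. with functools.partial. The FixedIP
  stub below only mirrors the signature from the report:

  ```python
  import functools

  class FixedIP:
      """Stub mirroring the objects/fixed_ips.py signature above."""
      def __init__(self):
          self.associated = True

      def disassociate(self, context):
          self.associated = False

  fip = FixedIP()
  context = object()          # stand-in request context
  cleanup = []

  # Broken: cleanup.append(fip.disassociate) -- calling it later as f()
  # raises TypeError because the required context argument is missing.
  # Fixed: bind the context at registration time.
  cleanup.append(functools.partial(fip.disassociate, context))

  for f in cleanup:
      f()                     # executes correctly now

  print(fip.associated)       # prints: False
  ```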

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1350268/+subscriptions



[Yahoo-eng-team] [Bug 1349933] Re: xenapi: Do not retry snapshot uploads on glance 500

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1349933

Title:
  xenapi: Do not retry snapshot uploads on glance 500

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If Glance returns a 500 response on an initial attempt to upload
  a snapshot image, it will set the image status to KILLED/DELETED.
  Any retry attempts will fail with a 409 response.
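  The retry policy this implies can be sketched over simulated per-attempt
  HTTP statuses; which statuses count as retryable is an illustrative
  choice here, not Glance's actual contract:

  ```python
  RETRYABLE_STATUSES = {503}   # illustrative policy only

  def upload_snapshot(attempts):
      """Walk simulated per-attempt statuses; stop on non-retryable errors.

      A 500 that left the image KILLED/DELETED makes every retry a
      guaranteed 409, so retrying after it is pointless.
      """
      for status in attempts:
          if status < 400:
              return 'uploaded'
          if status not in RETRYABLE_STATUSES:
              return 'aborted on %d' % status
      return 'gave up'

  print(upload_snapshot([500, 409]))   # prints: aborted on 500
  print(upload_snapshot([503, 200]))   # prints: uploaded
  ```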

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1349933/+subscriptions



[Yahoo-eng-team] [Bug 1348288] Re: Resource tracker should report virt driver stats

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348288

Title:
  Resource tracker should report virt driver stats

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  sha1 Nova at: 106fb458c7ac3cc17bb42d1b83ec3f4fa8284e71
  sha1 ironic at: 036c79e38f994121022a69a0bc76917e0048fd63

  The ironic driver passes stats to nova's resource tracker in
  get_available_resources(). Sometimes these appear to get through to
  the database without modification, sometimes they seem to be replaced
  entirely by other stats generated by the resource tracker. The correct
  behaviour should be to combine the two.

  As an example, the following query on the compute_nodes table in
  nova's database shows the contents for a tripleo system (all nodes are
  ironic):

  mysql> select hypervisor_hostname, stats from compute_nodes;
  
+--+-+
  | hypervisor_hostname  | stats

   |
  
+--+-+
  | 4e014e26-2f90-4a91-a6f0-c1978df88369 | {"num_task_None": 1, "io_workload": 
0, "num_instances": 1, "num_vm_active": 1, "num_vcpus_used": 24, 
"num_os_type_None": 1, "num_proj_505908300744403496b2e64b06606529": 1} |
  | fadb50bf-26ec-420c-a13f-f182e38569d6 | {"num_task_None": 1, "io_workload": 
0, "num_instances": 1, "num_vm_active": 1, "num_vcpus_used": 24, 
"num_os_type_None": 1, "num_proj_505908300744403496b2e64b06606529": 1} |
  | ffe5a5bf-7151-468c-b9bb-980477e5f736 | {"num_task_None": 1, "io_workload": 
0, "num_instances": 1, "num_vm_active": 1, "num_vcpus_used": 24, 
"num_os_type_None": 1, "num_proj_505908300744403496b2e64b06606529": 1} |
  | 752966ea-17f8-4d6d-87a4-03c91cb65354 | {"num_task_None": 1, "io_workload": 
0, "num_instances": 1, "num_vm_active": 1, "num_vcpus_used": 24, 
"num_os_type_None": 1, "num_proj_505908300744403496b2e64b06606529": 1} |
  | f2f0ecb1-6234-4975-808f-a17534c9ae6c | {"num_task_None": 1, "io_workload": 
0, "num_instances": 1, "num_vm_active": 1, "num_vcpus_used": 24, 
"num_os_type_None": 1, "num_proj_505908300744403496b2e64b06606529": 1} |
  | 9adf4551-24f0-43a7-9267-a20cfa309137 | {"cpu_arch": "amd64", 
"ironic_driver": "ironic.nova.virt.ironic.driver.IronicDriver"} 
  |
  | 1bd13fc5-4938-4781-9680-ad1e0ccec77c | {"num_task_None": 1, "io_workload": 
0, "num_instances": 1, "num_vm_active": 1, "num_vcpus_used": 24, 
"num_os_type_None": 1, "num_proj_505908300744403496b2e64b06606529": 1} |
  | 88a39f5d-6174-47c9-9817-13d08bf2e079 | {"num_task_None": 1, "io_workload": 
0, "num_instances": 1, "num_vm_active": 1, "num_vcpus_used": 24, 
"num_os_type_None": 1, "num_proj_505908300744403496b2e64b06606529": 1} |
  | ec6b5dc6-de38-4e23-a967-b87c10da37e3 | {"num_task_None": 1, "io_workload": 
0, "num_instances": 1, "num_vm_active": 1, "num_vcpus_used": 24, 
"num_os_type_None": 1, "num_proj_505908300744403496b2e64b06606529": 1} |
  | ac52fd79-e0b9-4749-b794-590d5c181b4a | {"num_task_None": 1, "io_workload": 
0, "num_instances": 1, "num_vm_active": 1, "num_vcpus_used": 24, 
"num_os_type_None": 1, "num_proj_505908300744403496b2e64b06606529": 1} |
  | a1b81342-ed57-4310-8d5b-a2aa48718f1f | {"num_task_None": 1, "io_workload": 
0, "num_instances": 1, "num_vm_active": 1, "num_vcpus_used": 24, 
"num_os_type_None": 1, "num_proj_505908300744403496b2e64b06606529": 1} |
  | 0588e463-748a-4248-9110-6e18988cfa4e | {"num_task_None": 1, "io_workload": 
0, "num_instances": 1, "num_vm_active": 1, "num_vcpus_used": 24, 
"num_os_type_None": 1, "num_proj_505908300744403496b2e64b06606529": 1} |
  | 8f73d8dc-5d8c-47b0-a866-b829edc3667f | {"num_task_None": 1, "io_workload": 
0, "num_instances": 1, "num_vm_active": 1, "num_vcpus_used": 24, 
"num_os_type_None": 1, "num_proj_505908300744403496b2e64b06606529": 1} |
  | bac38b1d-f7f9-4770-9195-ff204a0c05c3 | {"cpu_arch": "amd64", 
"ironic_driver": "ironic.nova.virt.ironic.driver.IronicDriver"} 
  |
  | 62cc33f7-701b-47f6-8f50-3f7c1ca0f0a3 | {"num_task_None": 1, "io_workload": 
0, "num_instances": 1, "num_vm_active": 1, "num_vcpus_used": 24, 
"num_os_type_None": 1, "num_proj_505908300744403496b

[Yahoo-eng-team] [Bug 1351127] Re: Exception in string format operation

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1351127

Title:
  Exception in string format operation

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  "tox -e docs" shows the following error

  2014-07-31 22:20:04.961 23265 ERROR nova.exception 
[req-ad8a37c6-85ae-4218-b8e6-d15eda5a1553 None] Exception in string format 
operation
  2014-07-31 22:20:04.961 23265 TRACE nova.exception Traceback (most recent 
call last):
  2014-07-31 22:20:04.961 23265 TRACE nova.exception   File 
"/Users/dims/openstack/nova/nova/exception.py", line 118, in __init__
  2014-07-31 22:20:04.961 23265 TRACE nova.exception message = self.msg_fmt 
% kwargs
  2014-07-31 22:20:04.961 23265 TRACE nova.exception KeyError: u'flavor_id'
  2014-07-31 22:20:04.961 23265 TRACE nova.exception 
  2014-07-31 22:20:04.962 23265 ERROR nova.exception 
[req-ad8a37c6-85ae-4218-b8e6-d15eda5a1553 None] reason: 
  2014-07-31 22:20:04.962 23265 ERROR nova.exception 
[req-ad8a37c6-85ae-4218-b8e6-d15eda5a1553 None] code: 404
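The failure can be modeled minimally as follows (illustrative only; this mirrors the shape of nova.exception.NovaException's __init__, not its exact code):

```python
# Minimal, illustrative model of the bug: the real NovaException formats
# self.msg_fmt % kwargs in __init__; passing kwargs that lack a key the
# format string needs (here flavor_id) raises KeyError, which Nova logs as
# "Exception in string format operation" before falling back.
class FakeNovaException(Exception):
    msg_fmt = "Flavor %(flavor_id)s could not be found."

    def __init__(self, **kwargs):
        try:
            message = self.msg_fmt % kwargs
        except KeyError:
            # Fallback path taken when callers pass e.g. reason=/code=
            # instead of the expected flavor_id, as in the log above.
            message = self.msg_fmt
        super().__init__(message)

ok = FakeNovaException(flavor_id="42")        # formats cleanly
bad = FakeNovaException(reason="", code=404)  # hits the KeyError fallback
```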

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1351127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350800] Re: nova v3 api boot failed, return error 500 instead of 404

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350800

Title:
  nova v3 api boot failed, return error 500 instead of 404

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When booting an instance without specifying the networking, the
  nova-api v2 and v3 behavior differs:

  
  [tagett@stack-01 devstack]$ nova --os-compute-api-version 2 boot vm2 --flavor 
1 --image 44c37b90-0ec3-460a-bdf2-bd8bb98c9fdf
  ERROR (BadRequest): Multiple possible networks found, use a Network ID to be 
more specific. (HTTP 400) (Request-ID: req-dbbf32e4-7eda-421c-8b3b-2ae697769077)
  [tagett@stack-01 devstack]$ nova --os-compute-api-version 3 boot vm2 --flavor 
1 --image 44c37b90-0ec3-460a-bdf2-bd8bb98c9fdf
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500)

  nova-api should report a BadRequest with a correct error message
  instead of 'Unexpected API Error'
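A hedged sketch of the expected v3 behavior (the names here are illustrative, not Nova's actual classes): catch the known ambiguity error and translate it to a 400 with the same message v2 returns, instead of leaking a 500.

```python
# Illustrative only: a stand-in for the "multiple possible networks"
# condition, translated to HTTP 400 at the API layer as v2 already does.
class NetworkAmbiguous(Exception):
    """Stand-in for the 'multiple possible networks' condition."""

def boot_server(create_instance):
    try:
        create_instance()
        return 202, None
    except NetworkAmbiguous:
        return 400, ("Multiple possible networks found, "
                     "use a Network ID to be more specific.")

def ambiguous():
    raise NetworkAmbiguous()

status, message = boot_server(ambiguous)
```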

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1350800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348244] Re: debug log messages need to be unicode

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348244

Title:
  debug log messages need to be unicode

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Fix Committed

Bug description:
  Debug logs should be:

  LOG.debug("message")  should be LOG.debug(u"message")

  Before the translation of debug log messages was removed, the
  translation was returning unicode.   Now that they are no longer
  translated they need to be explicitly marked as unicode.

  This was confirmed by discussion with dhellman.   See
  2014-07-23T13:48:23 in this log http://eavesdrop.openstack.org/irclogs
  /%23openstack-oslo/%23openstack-oslo.2014-07-23.log

  The problem was discovered when an exception was used as replacement
  text in a debug log message:

 LOG.debug("Failed to mount image %(ex)s)", {'ex': e})

  In particular it was discovered as part of enabling lazy translation,
  where the exception message is replaced with an object that does not
  support str().   Note that this would also fail without lazy enabled,
  if a translation for the exception message was provided that was
  unicode.
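The Python 2 behavior behind this can be modeled explicitly (illustrative; the real class is oslo.i18n's Message). Under Python 2, a byte-string format calls str() on each argument while a unicode format calls unicode(); that split is modeled here as an explicit flag so the sketch also runs on Python 3:

```python
# LazyMessage stands in for oslo.i18n Message: refuses str(), allows
# translate(). interpolate() models the two formatting paths explicitly.
class LazyMessage:
    def __init__(self, text):
        self._text = text

    def __str__(self):
        raise UnicodeError("Message objects do not support str(); "
                           "use unicode() or translate() instead.")

    def translate(self):
        return self._text

def interpolate(fmt, args, lazy_aware):
    if lazy_aware:  # the u"..." path: lazy messages are translated first
        return fmt % {k: v.translate() if isinstance(v, LazyMessage) else v
                      for k, v in args.items()}
    # the plain "..." path: str() is called and raises for lazy messages
    return fmt % {k: str(v) for k, v in args.items()}

err = LazyMessage(u"No space left on device")
try:
    interpolate("Failed to mount image %(ex)s", {'ex': err}, lazy_aware=False)
    raised = False
except UnicodeError:
    raised = True  # the failure LOG.debug("...") triggers
ok = interpolate(u"Failed to mount image %(ex)s", {'ex': err}, lazy_aware=True)
```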

  
  Example trace: 

   Traceback (most recent call last):
File "nova/tests/virt/disk/test_api.py", line 78, in 
test_can_resize_need_fs_type_specified
  self.assertFalse(api.is_image_partitionless(imgfile, use_cow=True))
File "nova/virt/disk/api.py", line 208, in is_image_partitionless
  fs.setup()
File "nova/virt/disk/vfs/localfs.py", line 80, in setup
  LOG.debug("Failed to mount image %(ex)s)", {'ex': e})
File "/usr/lib/python2.7/logging/__init__.py", line 1412, in debug
  self.logger.debug(msg, *args, **kwargs)
File "/usr/lib/python2.7/logging/__init__.py", line 1128, in debug
  self._log(DEBUG, msg, args, **kwargs)
File "/usr/lib/python2.7/logging/__init__.py", line 1258, in _log
  self.handle(record)
File "/usr/lib/python2.7/logging/__init__.py", line 1268, in handle
  self.callHandlers(record)
File "/usr/lib/python2.7/logging/__init__.py", line 1308, in callHandlers
  hdlr.handle(record)
File "nova/test.py", line 212, in handle
  self.format(record)
File "/usr/lib/python2.7/logging/__init__.py", line 723, in format
  return fmt.format(record)
File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
  record.message = record.getMessage()
File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
  msg = msg % self.args
File 
"/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/oslo/i18n/_message.py",
 line 167, in __str__
  raise UnicodeError(msg)
  UnicodeError: Message objects do not support str() because they may contain 
non-ascii characters. Please use unicode() or translate() instead.
  ==
  FAIL: nova.tests.virt.disk.test_api.APITestCase.test_resize2fs_e2fsck_fails
  tags: worker-3

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1348244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350751] Re: Nova responses unexpected error messages when fail to create flavor

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350751

Title:
  Nova responses unexpected error messages when fail to create flavor

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The response message is unexpected (not easy to understand) when the
  requested resource exceeds the limit during flavor creation. Following
  are examples:

  1. requested ram exceeded limitation:
  Run "nova --debug flavor-create ram_test 10 99 20 1", the response 
info is:
  RESP BODY: {"badRequest": {"message": "Invalid input received: memory_mb must 
be <= 2147483647", "code": 400}}

  2. requested disk exceeded limitation:
  nova --debug flavor-create ram_test 10 1024 200 1
  RESP BODY: {"badRequest": {"message": "Invalid input received: root_gb must 
be <= 2147483647", "code": 400}}

  I think "memory_mb" and "root_gb" in the above response messages are
  confusing; "ram" and "disk" would be clearer to the user.
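An illustrative sketch of the suggested fix (not Nova's actual code): map the internal model field names in the validation message to the names the API exposes before returning the 400 response.

```python
# The mapping and helper below are hypothetical, for illustration only.
API_FIELD_NAMES = {"memory_mb": "ram", "root_gb": "disk"}

def user_facing(message):
    # Rewrite internal field names into the user-facing API vocabulary.
    for internal, public in API_FIELD_NAMES.items():
        message = message.replace(internal, public)
    return message

resp = user_facing("Invalid input received: memory_mb must be <= 2147483647")
```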

  Hope for your comments. Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1350751/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1351350] Re: Warnings and Errors in Document generation

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1351350

Title:
  Warnings and Errors in Document generation

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Just pick any recent docs build and you will see a ton of issues:

  Example from:
  
http://logs.openstack.org/46/46/1/check/gate-nova-docs/4f3e8c4/console.html

  2014-08-01 03:40:18.805 | 
/home/jenkins/workspace/gate-nova-docs/nova/api/openstack/compute/contrib/hosts.py:docstring
 of nova.api.openstack.compute.contrib.hosts.HostController.index:3: ERROR: 
Unexpected indentation.
  2014-08-01 03:40:18.806 | 
/home/jenkins/workspace/gate-nova-docs/nova/api/openstack/compute/contrib/hosts.py:docstring
 of nova.api.openstack.compute.contrib.hosts.HostController.index:5: WARNING: 
Block quote ends without a blank line; unexpected unindent.
  2014-08-01 03:40:18.806 | 
/home/jenkins/workspace/gate-nova-docs/nova/api/openstack/compute/plugins/v3/hosts.py:docstring
 of nova.api.openstack.compute.plugins.v3.hosts.HostController.index:6: 
WARNING: Block quote ends without a blank line; unexpected unindent.
  2014-08-01 03:40:18.806 | 
/home/jenkins/workspace/gate-nova-docs/nova/compute/resource_tracker.py:docstring
 of nova.compute.resource_tracker.ResourceTracker.resize_claim:7: ERROR: 
Unexpected indentation.
  2014-08-01 03:40:18.807 | 
/home/jenkins/workspace/gate-nova-docs/nova/compute/resource_tracker.py:docstring
 of nova.compute.resource_tracker.ResourceTracker.resize_claim:8: WARNING: 
Block quote ends without a blank line; unexpected unindent.
  2014-08-01 03:40:18.847 | 
/home/jenkins/workspace/gate-nova-docs/nova/db/sqlalchemy/api.py:docstring of 
nova.db.sqlalchemy.api.instance_get_all_by_filters:23: WARNING: Definition list 
ends without a blank line; unexpected unindent.
  2014-08-01 03:40:18.847 | 
/home/jenkins/workspace/gate-nova-docs/nova/db/sqlalchemy/api.py:docstring of 
nova.db.sqlalchemy.api.instance_get_all_by_filters:24: WARNING: Definition list 
ends without a blank line; unexpected unindent.
  2014-08-01 03:40:18.847 | 
/home/jenkins/workspace/gate-nova-docs/nova/db/sqlalchemy/api.py:docstring of 
nova.db.sqlalchemy.api.instance_get_all_by_filters:31: ERROR: Unexpected 
indentation.
  2014-08-01 03:40:18.848 | 
/home/jenkins/workspace/gate-nova-docs/nova/db/sqlalchemy/utils.py:docstring of 
nova.db.sqlalchemy.utils.create_shadow_table:6: ERROR: Unexpected indentation.
  2014-08-01 03:40:18.848 | 
/home/jenkins/workspace/gate-nova-docs/nova/hooks.py:docstring of 
nova.hooks:15: WARNING: Inline emphasis start-string without end-string.
  2014-08-01 03:40:18.849 | 
/home/jenkins/workspace/gate-nova-docs/nova/hooks.py:docstring of 
nova.hooks:15: WARNING: Inline strong start-string without end-string.
  2014-08-01 03:40:18.849 | 
/home/jenkins/workspace/gate-nova-docs/nova/hooks.py:docstring of 
nova.hooks:18: WARNING: Inline emphasis start-string without end-string.
  2014-08-01 03:40:18.849 | 
/home/jenkins/workspace/gate-nova-docs/nova/hooks.py:docstring of 
nova.hooks:18: WARNING: Inline strong start-string without end-string.
  2014-08-01 03:40:18.850 | 
/home/jenkins/workspace/gate-nova-docs/nova/hooks.py:docstring of 
nova.hooks:23: WARNING: Inline emphasis start-string without end-string.
  2014-08-01 03:40:18.850 | 
/home/jenkins/workspace/gate-nova-docs/nova/hooks.py:docstring of 
nova.hooks:23: WARNING: Inline strong start-string without end-string.
  2014-08-01 03:40:18.850 | :0: WARNING: Inline emphasis start-string 
without end-string.
  2014-08-01 03:40:18.851 | :0: WARNING: Inline strong start-string 
without end-string.
  2014-08-01 03:40:18.851 | 
/home/jenkins/workspace/gate-nova-docs/nova/image/api.py:docstring of 
nova.image.api.API.get_all:8: WARNING: Inline strong start-string without 
end-string.
  2014-08-01 03:40:18.852 | 
/home/jenkins/workspace/gate-nova-docs/nova/keymgr/key_mgr.py:docstring of 
nova.keymgr.key_mgr.KeyManager.copy_key:9: WARNING: Definition list ends 
without a blank line; unexpected unindent.
  2014-08-01 03:40:18.852 | 
/home/jenkins/workspace/gate-nova-docs/nova/notifications.py:docstring of 
nova.notifications.info_from_instance:6: WARNING: Field list ends without a 
blank line; unexpected unindent.
  2014-08-01 03:40:18.852 | 
/home/jenkins/workspace/gate-nova-docs/nova/objects/base.py:docstring of 
nova.objects.base.NovaObject.obj_make_compatible:10: ERROR: Unexpected 
indentation.
  2014-08-01 03:40:18.853 | 
/home/jenkins/workspace/gate-nova-docs/nova/objects/base.py:docstring of 
nova.objects.base.NovaObject.obj_make_compatible:11: WARNING: Block quote ends 
without a blank line; unexpected unindent.
  2014-08-01 03:40:18.853 | 
/home/jenkins/workspace/gate-nova-docs/nova/objects/instance.py:docstring of 
nova.objects

[Yahoo-eng-team] [Bug 1349268] Re: OverLimit: VolumeLimitExceeded: Maximum number of volumes allowed (10) exceeded

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1349268

Title:
  OverLimit: VolumeLimitExceeded: Maximum number of volumes allowed (10)
  exceeded

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The instance goes to ERROR when booting from volume if the volume
  quota is insufficient, and there is not even a useful error message
  shown to the user. Following is the related nova-compute.log:

  2014-07-27 17:56:19.372 17060 ERROR nova.compute.manager 
[req-4e876b97-be8a-486b-98e2-7d707266755d 98fa3fd418914a9288b5560e1bb6944e 
5254621adfd949a9a3b975f68119e269] [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc] Instance failed block device setup
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc] Traceback (most recent call last):
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1690, in 
_prep_block_device
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc] self.driver, 
self._await_block_device_map_created))
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 363, in 
attach_block_devices
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc] map(_log_and_attach, 
block_device_mapping)
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 361, in 
_log_and_attach
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc] bdm.attach(*attach_args, 
**attach_kwargs)
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 311, in 
attach
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc] '', '', image_id=self.image_id)
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc]   File 
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 303, in create
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc] item = 
cinderclient(context).volumes.create(size, **kwargs)
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc]   File 
"/usr/lib/python2.7/dist-packages/cinderclient/v1/volumes.py", line 187, in 
create
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc] return self._create('/volumes', body, 
'volume')
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc]   File 
"/usr/lib/python2.7/dist-packages/cinderclient/base.py", line 153, in _create
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc] resp, body = 
self.api.client.post(url, body=body)
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc]   File 
"/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 209, in post
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc] return self._cs_request(url, 'POST', 
**kwargs)
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc]   File 
"/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 173, in 
_cs_request
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc] **kwargs)
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc]   File 
"/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 156, in request
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc] raise exceptions.from_response(resp, 
body)
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-bb28-f0a96a7ed7bc] OverLimit: VolumeLimitExceeded: Maximum 
number of volumes allowed (10) exceeded (HTTP 413) (Request-ID: 
req-07dcc4c4-182f-4d73-b054-806f31cb7e71)
  2014-07-27 17:56:19.372 17060 TRACE nova.compute.manager [instance: 
2a124872-3332-4f54-

[Yahoo-eng-team] [Bug 1351020] Re: FloatingIP fails to load from database when not associated

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1351020

Title:
  FloatingIP fails to load from database when not associated

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  A FloatingIP may not be associated with a FixedIP, which causes its
  fixed_ip field in the database model to be None. Currently,
  FloatingIP's _from_db_object() method always assumes it is non-None
  and thus tries to load a FixedIP from None, which fails.
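A simplified sketch of the fix (illustrative; the real code lives in Nova's FloatingIP._from_db_object): hydrate fixed_ip only when the database row actually carries one, instead of unconditionally loading from None.

```python
# All names here are hypothetical stand-ins for the Nova object machinery.
def from_db_object(db_row, load_fixed_ip):
    floating_ip = {"address": db_row["address"]}
    db_fixed = db_row.get("fixed_ip")
    # The guard: only load a FixedIP when one is present in the row.
    floating_ip["fixed_ip"] = (load_fixed_ip(db_fixed)
                               if db_fixed is not None else None)
    return floating_ip

def loader(row):
    assert row is not None, "must never be called with None"
    return {"address": row["address"]}

unassociated = from_db_object({"address": "10.0.0.5", "fixed_ip": None}, loader)
associated = from_db_object(
    {"address": "10.0.0.5", "fixed_ip": {"address": "192.168.0.7"}}, loader)
```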

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1351020/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347777] Re: The compute_driver option description does not include the Hyper-V driver

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347777

Title:
  The compute_driver option description does not include the Hyper-V
  driver

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The description of the option "compute_driver" should include
  hyperv.HyperVDriver along with the other supported drivers

  
https://github.com/openstack/nova/blob/aa018a718654b5f868c1226a6db7630751613d92/nova/virt/driver.py#L35-L38

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347778] Re: raising Maximum number of ports exceeded is wrong

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347778

Title:
  raising Maximum number of ports exceeded is wrong

Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Neutron:
  Fix Released

Bug description:
  When neutron API in nova calls create_port(), it looks for exceptions.
  Any 409 is turned into 'Maximum number of ports exceeded'. This is a
  horrible assumption. Neutron can return 409s for more than just this
  reason.

  Another case where neutron returns a 409 is this:

  2014-07-22 18:10:27.583 26577 INFO neutron.api.v2.resource 
[req-b7267ae5-bafa-4c34-8e25-9c0fca96ad2d None] create failed (client error):
   Unable to complete operation for network 
----. The mac address XX:XX:XX:XX:XX:XX is in 
use.

  This can occur when the request to create a port includes the mac
  address to use (as happens w/ baremetal/ironic in nova) and neutron
  for some reason still has things assigned with that mac.

  This is the offending code:

   174 except neutron_client_exc.NeutronClientException as e:
   175 # NOTE(mriedem): OverQuota in neutron is a 409
   176 if e.status_code == 409:
   177 LOG.warning(_('Neutron error: quota exceeded'))
   178 raise exception.PortLimitExceeded()

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347778/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1349680] Re: nova v3 api raise a exceptions.NotImplementedError instead of error message

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1349680

Title:
  nova v3 api raise a  exceptions.NotImplementedError instead of error
  message

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The nova v3 diagnostics API raises a NotImplementedError, but it is
  not caught by the API layer.

  [tagett@stack-01 devstack]$ nova --os-compute-api-version 3  diagnostics f1
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500)

  The libvirt driver doesn't implement get_instance_diagnostics; it
  raises a NotImplementedError, but the v3 API doesn't handle it and
  returns HTTP 500 when it should return 501.
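A sketch of the expected handling (framework-free and illustrative): catch the driver's NotImplementedError at the API layer and return 501 Not Implemented instead of letting it surface as a 500 Unexpected API Error.

```python
# Hypothetical endpoint shape; the real handler is a webob-based controller.
def diagnostics_endpoint(get_diagnostics, server_id):
    try:
        return 200, get_diagnostics(server_id)
    except NotImplementedError:
        return 501, "Diagnostics are not supported by this driver."

def libvirt_style_driver(server_id):
    # Mimics a driver that has no get_instance_diagnostics implementation.
    raise NotImplementedError()

status, body = diagnostics_endpoint(libvirt_style_driver, "f1")
```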

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1349680/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348368] Re: ERROR(s) and WARNING(s) during tox -e docs

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348368

Title:
  ERROR(s) and WARNING(s) during tox -e docs

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  ERROR(s):

  dims@dims-mac:~/openstack/nova$ grep ERROR ~/junk/docs.log | sort | uniq -c
 2 /Users/dims/openstack/nova/nova/compute/manager.py:docstring of 
nova.compute.manager.ComputeVirtAPI.wait_for_instance_event:24: ERROR: 
Unexpected indentation.
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:100: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:110: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:135: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:138: ERROR: Unknown interpreted text 
role "paramref".
 4 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:143: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:156: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:190: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:228: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:233: ERROR: Unknown interpreted text 
role "paramref".
 4 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:255: ERROR: Unknown interpreted text 
role "paramref".
 6 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:265: ERROR: Unknown interpreted text 
role "paramref".
 4 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:282: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:293: ERROR: Unknown interpreted text 
role "paramref".
 4 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:299: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:307: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:318: ERROR: Unknown interpreted text 
role "paramref".
 4 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:322: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:360: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:389: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:432: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:446: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:452: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:513: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:517: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sqlalchemy/models.py:docstring of 
nova.db.sqlalchemy.models.relationship:546: ERROR: Unknown interpreted text 
role "paramref".
 2 /Users/dims/openstack/nova/nova/db/sq

[Yahoo-eng-team] [Bug 1347499] Re: block-device source=blank, dest=volume is allowed as a combination, but won't work

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347499

Title:
  block-device source=blank,dest=volume is allowed as a combination, but
  won't work

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  This is a spin-off of https://bugs.launchpad.net/nova/+bug/1347028

  As per the example given there -  currently source=blank,
  destination=volume will not work. We should either make it create an
  empty volume and attach it, or disallow it in the API.
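The second option could look like the following API-side validation (field names follow the block-device-mapping vocabulary, but this is not Nova's actual validator):

```python
# Illustrative sketch: reject the unsupported combination at the API layer
# instead of letting the boot fail later.
def validate_bdm(source_type, destination_type):
    if source_type == "blank" and destination_type == "volume":
        raise ValueError("source_type=blank with destination_type=volume "
                         "is not supported")
    return True

try:
    validate_bdm("blank", "volume")
    rejected = False
except ValueError:
    rejected = True
```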

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347499/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346866] Re: EndpointNotFound when deleting volume backended instance

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346866

Title:
  EndpointNotFound when deleting volume backended instance

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When booting a volume-backed instance, volume creation may fail, and
  the following error occurs when deleting the instance:

  2014-07-22 11:19:15.305 14601 ERROR nova.compute.manager [-] [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815] Failed to complete a deletion
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815] Traceback (most recent call last):
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 845, in _init_instance
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]     self._delete_instance(context, instance, bdms)
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File "/usr/lib/python2.7/dist-packages/nova/hooks.py", line 103, in inner
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]     rv = f(*args, **kwargs)
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2220, in _delete_instance
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]     user_id=user_id)
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]     six.reraise(self.type_, self.value, self.tb)
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2190, in _delete_instance
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]     self._shutdown_instance(context, db_inst, bdms)
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2136, in _shutdown_instance
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]     connector)
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 174, in wrapper
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]     res = method(self, ctx, volume_id, *args, **kwargs)
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 276, in terminate_connection
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]     return cinderclient(context).volumes.terminate_connection(volume_id,
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 92, in cinderclient
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]     endpoint_type=endpoint_type)
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File "/usr/lib/python2.7/dist-packages/cinderclient/service_catalog.py", line 80, in url_for
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]     raise cinderclient.exceptions.EndpointNotFound()
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815] EndpointNotFound
  2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: dc0d5ad1-caa7-4160-9f6f-158ca1193815]
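  The traceback shows the whole deletion aborting because the cinder
  endpoint lookup fails during volume cleanup. A minimal sketch of a
  more tolerant cleanup path (hypothetical names, not the actual nova
  code or fix):

```python
class EndpointNotFound(Exception):
    """Stand-in for cinderclient.exceptions.EndpointNotFound."""

def terminate_connection(volume_id):
    # Simulates the failing cinderclient call in the traceback above.
    raise EndpointNotFound()

def shutdown_instance(volume_ids):
    """Best-effort cleanup: skip volumes whose endpoint lookup fails
    instead of aborting the whole instance deletion."""
    processed = []
    for vol in volume_ids:
        try:
            terminate_connection(vol)
        except EndpointNotFound:
            # Endpoint could not be resolved; continue with deletion
            # rather than leaving the instance undeletable.
            pass
        processed.append(vol)
    return processed
```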

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1346866/+subscriptions


[Yahoo-eng-team] [Bug 1347028] Re: block_device mapping identifies ephemeral disks incorrectly

2014-09-05 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347028

Title:
  block_device mapping identifies ephemeral disks incorrectly

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Ephemeral drives are destination == local, but the new bdm code bases
  this on the source instead, which leads to improper errors:

  $ nova boot --flavor m1.tiny --block-device 
source=blank,dest=volume,bus=virtio,size=1,bootindex=0 test
  ERROR (BadRequest): Ephemeral disks requested are larger than the instance 
type allows. (HTTP 400) (Request-ID: req-53247c8e-d14e-43e2-b01e-85b49f520e61)

  The code is here:

  
https://github.com/openstack/nova/blob/106fb458c7ac3cc17bb42d1b83ec3f4fa8284e71/nova/block_device.py#L411

  This should be checking destination_type == 'local' instead of
  source_type.
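  A sketch of the check described above (illustrative only, not the
  actual nova source): an ephemeral disk is one whose destination is
  local storage, regardless of where its contents come from.

```python
def is_ephemeral(bdm):
    # Buggy behaviour per the report: keying on source_type makes
    # source=blank,dest=volume look ephemeral and trip the instance-type
    # size check.
    #   return bdm.get('source_type') == 'blank'
    # Behaviour suggested in the report: key on destination_type.
    return bdm.get('destination_type') == 'local'
```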

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347028/+subscriptions


