[Yahoo-eng-team] [Bug 1641413] [NEW] Unnecessary db traffic when constructing instance object from db info

2016-11-13 Thread Hans Lindgren
Public bug reported:

When an instance object is constructed from db info, a call to
_from_db_object() is made. In many situations this results in one or
more unnecessary db calls because of the way instance extras are
handled.

This occurs when the following two conditions both apply: (1) the
'expected_attrs' parameter contains one of the affected instance_extras
fields (see the list and sketch below), and (2) the corresponding value
in the provided db_instance data is either missing or None.

The affected instance extras fields are:
 - numa_topology
 - pci_requests
 - device_metadata
 - vcpu_model
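
Roughly how the affected path behaves, sketched with condensed naming
(an illustration of the pattern, not the exact Nova code):

    from nova import db

    def _load_extra_field(context, instance, db_inst, field):
        # The instance row was fetched with its 'extra' columns joined
        # in, but when the joined value is missing or NULL the code
        # falls back to a second, per-field db call -- even though the
        # first query already established there is nothing more to get.
        extra = db_inst.get('extra') or {}
        value = extra.get(field)
        if value is None:
            value = db.instance_extra_get_by_instance_uuid(
                context, db_inst['uuid'], columns=[field])
        return value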

** Affects: nova
 Importance: Medium
 Assignee: Hans Lindgren (hanlind)
 Status: New


** Tags: unified-objects

https://bugs.launchpad.net/bugs/1641413



[Yahoo-eng-team] [Bug 1528041] [NEW] Inefficient use of db calls to get instance rules in virt/firewall.py

2015-12-20 Thread Hans Lindgren
Public bug reported:

When getting instance rules in virt/firewall.py, a for loop queries the
db once for each security group in a list of security groups that
itself comes from a separate query. See:

https://github.com/openstack/nova/blob/47e5199f67949f3cbd73114f4f45591cbc01bdd5/nova/virt/firewall.py#L349

This can be made much more efficient by querying all rules in a single
db query joined by instance; a rough before/after sketch follows.
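
Sketched (the single-query helper name is hypothetical; the point is
pushing the join into one db round trip):

    from nova import db

    # Before: one db round trip per security group (N+1 queries).
    def _rules_per_group(context, security_groups):
        rules = []
        for group in security_groups:
            rules.extend(db.security_group_rule_get_by_security_group(
                context, group['id']))
        return rules

    # After: a single query returning all rules for the instance's
    # groups at once. 'security_group_rule_get_by_instance' is
    # illustrative naming for such a joined query.
    def _rules_joined(context, instance):
        return db.security_group_rule_get_by_instance(
            context, instance['uuid'])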

** Affects: nova
 Importance: Medium
 Assignee: Hans Lindgren (hanlind)
 Status: New


** Tags: db security-groups

https://bugs.launchpad.net/bugs/1528041



[Yahoo-eng-team] [Bug 1519017] Re: nova keypair-list generates ERROR (ClientException): Unexpected API Error.

2015-11-23 Thread Hans Lindgren
** Changed in: nova
   Status: New => Invalid

https://bugs.launchpad.net/bugs/1519017

Title:
  nova keypair-list generates ERROR (ClientException): Unexpected API
  Error.

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  On executing nova keypair-list, the following error gets generated:

  nova keypair-list
  ERROR (ClientException): Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  (HTTP 500) (Request-ID: req-5ff60d4d-67e3-4f2d-9912-fd4dcb833165)




[Yahoo-eng-team] [Bug 1514360] [NEW] Deleting a rebuilding instance tries to set it to ERROR

2015-11-09 Thread Hans Lindgren
Public bug reported:

As can be seen in the logs[1], this happens quite frequently in tempest
runs. Although it does not cause any real errors, it fills the logs
with stack traces and unnecessarily saves an instance fault in the db
just before the instance itself is deleted.

[1]
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22Setting%20instance%20vm_state%20to%20ERROR%5C%22%20AND%20message:%5C%22Expected:%20%7B'task_state':%20%5Bu'rebuilding'%5D%7D.%20Actual:%20%7B'task_state':%20u'deleting'%7D%5C%22&from=86400s
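
The race, roughly (a condensed illustration of the guarded save behind
the log signature above, not the exact compute manager code):

    from nova.compute import task_states
    from nova import exception

    def continue_rebuild(instance):
        try:
            # A concurrent delete has already set task_state='deleting',
            # so this guarded save raises UnexpectedTaskStateError, and
            # the surrounding error handling then sets the instance to
            # ERROR and records a fault for an instance that is about
            # to be deleted anyway.
            instance.save(expected_task_state=[task_states.REBUILDING])
        except exception.UnexpectedTaskStateError:
            raise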

** Affects: nova
 Importance: Low
 Assignee: Hans Lindgren (hanlind)
 Status: New

https://bugs.launchpad.net/bugs/1514360



[Yahoo-eng-team] [Bug 1486541] [NEW] Using cells, local instance deletes incorrectly use legacy bdms instead of objects

2015-08-19 Thread Hans Lindgren
Public bug reported:

The instance delete code paths were changed to use new-world bdm objects
in commit f5071bd1ac00ed68102d37c8025d36df6777cd9e.

However, the cells code still uses the legacy format for local delete
operations, which is clearly wrong: code that gets called in the parent
class in nova/compute/api.py uses dot-notation attribute access and
calls bdm.destroy() as well.
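
An illustrative contrast between the two formats (not the actual cells
code):

    from nova import objects

    # Legacy bdm: a plain dict, so attribute access and destroy() fail.
    legacy_bdm = {'id': 1, 'volume_id': 'vol-1'}
    legacy_bdm['volume_id']     # dict-style access is all it supports
    # legacy_bdm.volume_id      # AttributeError
    # legacy_bdm.destroy()      # AttributeError

    # New-world bdm object: what the parent-class delete path expects.
    bdm = objects.BlockDeviceMapping(volume_id='vol-1')
    bdm.volume_id               # dot-notation works
    # bdm.destroy() persists the delete, which a dict cannot do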

** Affects: nova
 Importance: Medium
 Assignee: Hans Lindgren (hanlind)
 Status: In Progress


** Tags: cells

https://bugs.launchpad.net/bugs/1486541



[Yahoo-eng-team] [Bug 1454846] [NEW] When prep_resize() issues a retry it calls conductor resize_instance() with flavor as a primitive

2015-05-13 Thread Hans Lindgren
Public bug reported:

The server-side method this ends up calling, migrate_server(), has been
changed to take a flavor object, so this only works for as long as the
compat code in migrate_server() is still in place. When the conductor
compute task rpcapi version gets a major bump and the compat code is
removed, retries will start to fail unless this is fixed first.
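
A hedged sketch of the compat shim involved (condensed signature,
illustrative of the conductor-side code):

    from nova import objects

    def migrate_server(context, instance, flavor):
        # Accept a dict primitive from old callers and upgrade it to an
        # object. When the rpcapi major version is bumped this branch is
        # removed, and any caller still sending a primitive -- like the
        # prep_resize retry path -- starts to fail.
        if flavor and not isinstance(flavor, objects.Flavor):
            flavor = objects.Flavor.get_by_id(context, flavor['id'])
        return flavor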

** Affects: nova
 Importance: Undecided
 Assignee: Hans Lindgren (hanlind)
 Status: New

** Summary changed:

- When prep_resize() issues a retry it calls conductor migrate_server() with flavor as a primitive
+ When prep_resize() issues a retry it calls conductor resize_instance() with flavor as a primitive

** Description changed:

- Since conductor migrate_server() has been changed to take a flavor
- object, this only works for as long as the compat code in
- migrate_server() is still in place. When conductor compute task rpcapi
- version is major bumped and the compat code is removed, retries will
- start to fail if this is not fixed first.
+ Since the server method this ends up calling, migrate_server() has been
+ changed to take a flavor object, this only works for as long as the
+ compat code in migrate_server() is still in place. When conductor
+ compute task rpcapi version is major bumped and the compat code is
+ removed, retries will start to fail if this is not fixed first.

https://bugs.launchpad.net/bugs/1454846



[Yahoo-eng-team] [Bug 1448075] [NEW] Recent compute RPC API version bump missed out on security group client side

2015-04-24 Thread Hans Lindgren
Public bug reported:

Because the compute and security group client-side RPC APIs both share
the same target, they need to be bumped together, as was done
previously in 6ac1a84614dc6611591cb1f1ec8cce737972d069 and
6b238a5c9fcef0e62cefbaf3483645f51554667b.

In fact, having two different client-side RPC APIs for the same target
is of little value and, to avoid future mistakes, they should really be
merged into one.

The impact of this bug is that all security group related calls will
start to fail in an upgrade scenario.
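
Schematically (illustrative targets and version numbers, not the real
ones): both client classes build requests against the same topic, so
their version caps have to move in lockstep:

    import oslo_messaging as messaging

    # Two client-side APIs addressing one server. If the compute client
    # is bumped to 4.0 while the security group client keeps sending
    # 3.x-capped messages, security group calls break against services
    # that only accept 4.x during a rolling upgrade.
    compute_target = messaging.Target(topic='compute', version='4.0')
    secgroup_target = messaging.Target(topic='compute', version='3.35')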

** Affects: nova
 Importance: Undecided
 Assignee: Hans Lindgren (hanlind)
 Status: New


** Tags: kilo-rc-potential

https://bugs.launchpad.net/bugs/1448075



[Yahoo-eng-team] [Bug 1438189] [NEW] ComputeNode and Service objects out of sync with db schema

2015-03-30 Thread Hans Lindgren
Public bug reported:

A recent commit [1] removed the relationship between the compute_node
and service tables and also made the compute_node.service_id column
nullable. These changes were never replicated to the object
counterparts.

[1] 551be2c52a29cb2755de4825a3fcb2c8f7d7b3f1
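
What "in sync" would look like, sketched (a fragment mirroring the
object-field pattern, not the full object definition):

    from nova.objects import base, fields

    class ComputeNode(base.NovaObject):
        fields = {
            # The schema now allows NULL here, so the object field must
            # be nullable too, or loading a row with service_id=NULL
            # fails; the back-reference to Service goes away along with
            # the dropped table relationship.
            'service_id': fields.IntegerField(nullable=True),
        }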

** Affects: nova
 Importance: Medium
 Assignee: Hans Lindgren (hanlind)
 Status: New


** Tags: unified-objects

https://bugs.launchpad.net/bugs/1438189



[Yahoo-eng-team] [Bug 1435803] [NEW] Some compute manager tests take excessively long time to complete due to conductor timeouts

2015-03-24 Thread Hans Lindgren
Public bug reported:

Some compute manager tests that exercise the exception behavior of
methods, while passing a somewhat real instance parameter when doing
so, take a very long time to complete. This happens when the method
being tested has the @revert_task_state decorator, because the
decorator will try to update the instance using a conductor call when
there is no conductor service listening.

By setting the conductor use_local flag for those tests I am able to
reduce the total test time by four full minutes when run locally.
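
The tweak, sketched (conductor.use_local existed at the time; it has
since been deprecated and removed):

    from nova import test

    class ComputeManagerExceptionTestCase(test.TestCase):
        def setUp(self):
            super(ComputeManagerExceptionTestCase, self).setUp()
            # Run conductor calls in-process instead of over rpc, so the
            # decorator's instance update doesn't block for the 60s rpc
            # timeout waiting on a conductor service that isn't running.
            self.flags(use_local=True, group='conductor')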

** Affects: nova
 Importance: Low
 Assignee: Hans Lindgren (hanlind)
 Status: Incomplete


** Tags: testing

https://bugs.launchpad.net/bugs/1435803



[Yahoo-eng-team] [Bug 1381414] [NEW] Unit test failure "AssertionError: Expected to be called once. Called 2 times." in test_get_port_vnic_info_3

2014-10-15 Thread Hans Lindgren
Public bug reported:

This looks to be due to tests test_get_port_vnic_info_2 and 3 sharing
some code and is easily reproduced by running these two tests alone with
no concurrency.

./run_tests.sh --concurrency 1 test_get_port_vnic_info_2
test_get_port_vnic_info_3

The above always results in:

Traceback (most recent call last):
  File "/home/hans/nova/nova/tests/network/test_neutronv2.py", line 2615, in test_get_port_vnic_info_3
    self._test_get_port_vnic_info()
  File "/home/hans/nova/.venv/local/lib/python2.7/site-packages/mock.py", line 1201, in patched
    return func(*args, **keywargs)
  File "/home/hans/nova/nova/tests/network/test_neutronv2.py", line 2607, in _test_get_port_vnic_info
    fields=['binding:vnic_type', 'network_id'])
  File "/home/hans/nova/.venv/local/lib/python2.7/site-packages/mock.py", line 845, in assert_called_once_with
    raise AssertionError(msg)
AssertionError: Expected to be called once. Called 2 times.
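
A generic illustration of the failure mode (names are hypothetical, not
the actual neutronv2 test code): call counts on a mock shared between
test paths accumulate, so the second assertion sees two calls:

    import mock

    shared_client = mock.Mock()   # state shared between the two tests

    def check_vnic_info():
        shared_client.show_port('port-1')
        shared_client.show_port.assert_called_once_with('port-1')

    check_vnic_info()   # passes
    check_vnic_info()   # AssertionError: Called 2 times.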

** Affects: nova
 Importance: Undecided
 Assignee: Hans Lindgren (hanlind)
 Status: New


** Tags: testing

https://bugs.launchpad.net/bugs/1381414



[Yahoo-eng-team] [Bug 1371587] [NEW] MessagingTimeout errors in unit tests

2014-09-19 Thread Hans Lindgren
Public bug reported:

These can be seen all over the unit test logs. At least some of them
are caused by tests failing to mock calls to the conductor api method
build_instances(), which spawns new threads to handle such builds. The
timeouts happen when the calls to the scheduler get no reply within the
configured rpc timeout of 60 secs.

This is not actually causing any test failures but makes debugging
harder since errors show up randomly in logs.

A typical error looks like this:

Traceback (most recent call last):
  File "nova/conductor/manager.py", line 614, in build_instances
    request_spec, filter_properties)
  File "nova/scheduler/client/__init__.py", line 49, in select_destinations
    context, request_spec, filter_properties)
  File "nova/scheduler/client/__init__.py", line 35, in __run_method
    return getattr(self.instance, __name)(*args, **kwargs)
  File "nova/scheduler/client/query.py", line 34, in select_destinations
    context, request_spec, filter_properties)
  File "nova/scheduler/rpcapi.py", line 107, in select_destinations
    request_spec=request_spec, filter_properties=filter_properties)
  File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 152, in call
    retry=self.retry)
  File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in _send
    timeout=timeout, retry=retry)
  File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_fake.py", line 194, in send
    return self._send(target, ctxt, message, wait_for_reply, timeout)
  File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_fake.py", line 186, in _send
    'No reply on topic %s' % target.topic)
MessagingTimeout: No reply on topic scheduler
WARNING [nova.scheduler.driver] Setting instance to ERROR state.

This is then followed by an attempt to set the instance to ERROR state,
which fails because the instance does not exist in the database.

Traceback (most recent call last):
  File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 455, in fire_timers
    timer()
  File "/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 58, in __call__
    cb(*args, **kw)
  File "nova/utils.py", line 949, in wrapper
    return func(*args, **kwargs)
  File "nova/conductor/manager.py", line 618, in build_instances
    instance.uuid, request_spec)
  File "nova/scheduler/driver.py", line 67, in handle_schedule_error
    'task_state': None})
  File "nova/db/api.py", line 746, in instance_update_and_get_original
    columns_to_join=columns_to_join)
  File "nova/db/sqlalchemy/api.py", line 143, in wrapper
    return f(*args, **kwargs)
  File "nova/db/sqlalchemy/api.py", line 2282, in instance_update_and_get_original
    columns_to_join=columns_to_join)
  File "nova/db/sqlalchemy/api.py", line 2320, in _instance_update
    columns_to_join=columns_to_join)
  File "nova/db/sqlalchemy/api.py", line 1713, in _instance_get_by_uuid
    raise exception.InstanceNotFound(instance_id=uuid)
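
A hedged sketch of the missing mock (the patch target and test helper
names are illustrative; the exact path depends on how the test reaches
conductor):

    import mock

    # Inside an affected test case class:
    @mock.patch('nova.conductor.api.ComputeTaskAPI.build_instances')
    def test_build_failure(self, mock_build_instances):
        # With the conductor call stubbed out, no greenthread is spawned
        # to sit on the fake rpc driver waiting for a scheduler reply
        # that never arrives.
        self._trigger_build()   # hypothetical test helper
        self.assertTrue(mock_build_instances.called)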

** Affects: nova
 Importance: Undecided
 Assignee: Hans Lindgren (hanlind)
 Status: New


** Tags: testing

** Description changed:

  These can be seen all over the unit test logs. At least some of them are
  caused by tests failing to mock calls to conductor api method
- create_instances(), which is spawning new threads to handle such
- creations. The timeouts happen when calls to scheduler gets no reply
- within the configured rpc timeout of 60 secs.
+ build_instances(), which is spawning new threads to handle such builds.
+ The timeouts happen when calls to scheduler gets no reply within the
+ configured rpc timeout of 60 secs.
  
  This is not actually causing any test failures but makes debugging
  harder since errors show up randomly in logs.
  
  A typical error looks like this:
  
  Traceback (most recent call last):
-   File "nova/conductor/manager.py", line 614, in build_instances
- request_spec, filter_properties)
-   File "nova/scheduler/client/__init__.py", line 49, in select_destinations
- context, request_spec, filter_properties)
-   File "nova/scheduler/client/__init__.py", line 35, in __run_method
- return getattr(self.instance, __name)(*args, **kwargs)
-   File "nova/scheduler/client/query.py", line 34, in select_destinations
- context, request_spec, filter_properties)
-   File "nova/scheduler/rpcapi.py", line 107, in select_destinations
- request_spec=re

[Yahoo-eng-team] [Bug 1371566] [NEW] Async conductor tasks should not raise exceptions

2014-09-19 Thread Hans Lindgren
Public bug reported:

The conductor API uses cast or spawn_n to start async tasks such as
rebuild_instance() and unshelve_instance(). Since no caller is waiting
for a response, there is no reason for them to raise exceptions if
anything goes wrong.
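
Illustratively (a sketch of the proposed behavior, not existing code):
for a cast-style entry point, log and swallow instead of raising:

    import logging

    LOG = logging.getLogger(__name__)

    def unshelve_instance(context, instance):
        try:
            _do_unshelve(context, instance)   # hypothetical worker
        except Exception:
            # Nothing is waiting on this cast/spawn_n result, so raising
            # only produces an unhandled error; record it and move on.
            LOG.exception('Unshelve failed for instance %s', instance.uuid)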

** Affects: nova
 Importance: Undecided
 Assignee: Hans Lindgren (hanlind)
 Status: New


** Tags: conductor

https://bugs.launchpad.net/bugs/1371566



[Yahoo-eng-team] [Bug 1368404] [NEW] Uncaught 'libvirtError: Domain not found' errors during destroy

2014-09-11 Thread Hans Lindgren
.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] six.reraise(c, e, tb)
2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] rv = meth(*args, **kwargs)
2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
"/usr/lib/python2.7/dist-packages/libvirt.py", line 1068, in info
2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] if ret is None: raise libvirtError 
('virDomainGetInfo() failed', dom=self)
2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] libvirtError: Domain not found: no domain 
with matching uuid '525f4f95-f631-4fbb-a884-20c37711fb0d' (instance-0097)
2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]
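
The usual guard for this class of failure, sketched (a hedged
illustration; the actual Nova fix may be structured differently):

    import libvirt

    def destroy_ignoring_gone(dom):
        try:
            dom.destroy()
        except libvirt.libvirtError as e:
            # The domain vanishing mid-destroy is the desired end state,
            # not an error; anything else still propagates.
            if e.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
                return
            raise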

** Affects: nova
 Importance: High
 Assignee: Hans Lindgren (hanlind)
 Status: In Progress


** Tags: libvirt

https://bugs.launchpad.net/bugs/1368404

Title:
  Uncaught 'libvirtError: Domain not found' errors during destroy

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Some uncaught libvirt errors may result in instances being set to
  ERROR state and are causing sporadic gate failures. This can happen
  for any of the code paths that use _destroy(). Here is a recent
  example of a failed resize:

  [req-06dd4908-382e-455e-854e-e4d42a4bf62b TestServerAdvancedOps-724416891 
TestServerAdvancedOps-711228572] [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] Setting instance vm_state to ERROR
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] Traceback (most recent call last):
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 5902, in 
_error_out_instance_on_exception
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] yield
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3658, in resize_instance
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] timeout, retry_interval)
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5468, in 
migrate_disk_and_power_off
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] self.power_off(instance, timeout, 
retry_interval)
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2400, in power_off
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] self._destroy(instance)
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 998, in _destroy
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] timer.start(interval=0.5).wait()
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] return hubs.get_hub().switch()
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 293, in 
switch
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] return self.greenlet.switch()
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d]   File 
"/opt/stack/new/nova/nova/openstack/common/loopingcall.py", line 81, in _inner
  2014-09-05 01:08:37.123 26984 TRACE nova.compute.manager [instance: 
525f4f95-f631-4fbb-a884-20c37711fb0d] s

[Yahoo-eng-team] [Bug 1282858] Re: InstanceInfoCacheNotFound while cleanup running deleted instances

2014-07-15 Thread Hans Lindgren
That is correct, thanks.

** Changed in: nova
   Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1282858

Title:
  InstanceInfoCacheNotFound while cleanup running deleted instances

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Steps to reproduce:
  1. create an instance
  2. stop nova-compute and wait for it to show as XXX in `nova-manage service list`
  3. delete the instance
  You also need to set these two config options in nova.conf:
  running_deleted_instance_poll_interval=60
  running_deleted_instance_action = reap

  2014-02-21 10:57:14.915 DEBUG nova.network.api 
[req-60f769f1-0a53-4f0b-817f-a04dee2ab1af None None] Updating cache with info: 
[] from (pid=13440) update_instance_cache_with_nw_info 
/opt/stack/nova/nova/network/api.py:70
  2014-02-21 10:57:14.920 ERROR nova.network.api 
[req-60f769f1-0a53-4f0b-817f-a04dee2ab1af None None] [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66] Failed storing info cache
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66] Traceback (most recent call last):
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66]   File 
"/opt/stack/nova/nova/network/api.py", line 81, in 
update_instance_cache_with_nw_info
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66] ic.save(update_cells=update_cells)
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66]   File 
"/opt/stack/nova/nova/objects/base.py", line 151, in wrapper
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66] return fn(self, ctxt, *args, **kwargs)
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66]   File 
"/opt/stack/nova/nova/objects/instance_info_cache.py", line 91, in save
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66] {'network_info': nw_info_json})
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66]   File "/opt/stack/nova/nova/db/api.py", 
line 864, in instance_info_cache_update
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66] return 
IMPL.instance_info_cache_update(context, instance_uuid, values)
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66]   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 128, in wrapper
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66] return f(*args, **kwargs)
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66]   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 2308, in 
instance_info_cache_update
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66] instance_uuid=instance_uuid)
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66] InstanceInfoCacheNotFound: Info cache for 
instance d150ab27-3a6a-4003-ac42-51a7c56ece66 could not be found.
  2014-02-21 10:57:14.920 TRACE nova.network.api [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66] 
  2014-02-21 10:57:16.724 INFO nova.virt.libvirt.driver [-] [instance: 
d150ab27-3a6a-4003-ac42-51a7c56ece66] Instance destroyed successfully.




[Yahoo-eng-team] [Bug 1260265] Re: BaremetalHostManager cannot distinguish baremetal hosts from other hosts

2014-06-03 Thread Hans Lindgren
** Also affects: ironic
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1260265

Title:
  BaremetalHostManager cannot distinguish baremetal hosts from other
  hosts

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  In Progress
Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  BaremetalHostManager could distinguish baremetal hosts by checking
  whether "baremetal_driver" exists in capabilities. It can no longer do
  so, because capabilities are not reported to the scheduler and
  BaremetalHostManager always receives empty capabilities. As a result,
  BaremetalHostManager just does the same thing as the original
  HostManager.




[Yahoo-eng-team] [Bug 1317804] [NEW] InstanceActionEvent traceback parameter not serializable

2014-05-09 Thread Hans Lindgren
Public bug reported:

The change to use InstanceActionEvent objects in
compute.utils.EventReporter changed the order in which things are done.
Before, traceback info was converted to a string before being sent to
the conductor. Now, since the object method being used remotes itself,
the order becomes the opposite and any captured traceback is sent as
is, resulting in errors during messaging.

See
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVmFsdWVFcnJvcjogQ2lyY3VsYXIgcmVmZXJlbmNlIGRldGVjdGVkXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjkwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTk2MjYzMjYwODZ9
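
The fix pattern, sketched (hedged; the helper naming is illustrative):
stringify the traceback before it crosses the rpc boundary, since a
live traceback object cannot be serialized:

    import traceback

    def serialize_tb(exc_tb):
        # A raw traceback holds frames, locals and circular links, which
        # is what trips the messaging serializer ("ValueError: Circular
        # reference detected"); its text form is safe to send.
        return ''.join(traceback.format_tb(exc_tb))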

** Affects: nova
 Importance: Critical
 Assignee: Hans Lindgren (hanlind)
 Status: New

https://bugs.launchpad.net/bugs/1317804



[Yahoo-eng-team] [Bug 1316074] [NEW] nova network dhcpbridge has direct DB access

2014-05-05 Thread Hans Lindgren
Public bug reported:

nova-network is currently broken due to direct DB access in dhcpbridge.

This issue was found using a multihost devstack setup where the non-
controller node has an empty sql connection string.

** Affects: nova
 Importance: High
 Assignee: Hans Lindgren (hanlind)
 Status: New


** Tags: db icehouse-backport-potential network

https://bugs.launchpad.net/bugs/1316074



[Yahoo-eng-team] [Bug 1154522] Re: NovaException's message variable should be private

2014-03-20 Thread Hans Lindgren
https://github.com/openstack/nova/commit/70569ae344bceb3794713abc5cb2c82e9671c37d
fixed this by renaming NovaException's message attribute to msg_fmt,
which makes it clear that this isn't the message itself but rather a
format string used to construct the message.
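
Condensed, the renamed attribute works like this (a fragment mirroring
the pattern, not the full class):

    class NovaException(Exception):
        msg_fmt = "An unknown exception occurred."

        def __init__(self, message=None, **kwargs):
            if not message:
                # msg_fmt is only a template; the usable message exists
                # after interpolating the keyword arguments.
                message = self.msg_fmt % kwargs
            super(NovaException, self).__init__(message)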

** Changed in: nova
   Status: In Progress => Invalid

** Changed in: nova
     Assignee: Hans Lindgren (hanlind) => (unassigned)

https://bugs.launchpad.net/bugs/1154522

Title:
  NovaException's message variable should be private

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Different exceptions define message as a parameterized string that
  cannot be used directly. The complete message, with variables replaced
  by their real values, is only available through the use of str() or
  unicode() on the exception instance itself.

  As such, NovaException's message variable should be renamed _message
  to indicate that it is private.

  See bug 1154117 for an example of this.




[Yahoo-eng-team] [Bug 1284345] [NEW] Some network API methods unnecessarily trigger multiple get_instance_nw_info() calls

2014-02-24 Thread Hans Lindgren
Public bug reported:

Network manager methods add_fixed_ip_to_instance() and
remove_fixed_ip_from_instance() both return updated nw_info models. The
corresponding network API methods, however, return nothing, which has
the following effect:

Both API methods have the @refresh_cache decorator, which tries to
update the instance info cache from the decorated method's return
value. In the absence of a return value, it makes an extra rpc call to
fetch the missing nw_info model. By changing the two API methods to
return the models they in fact already receive, these extra calls can
be avoided altogether (sketched below).

In addition, having the API methods return updated nw_info models makes
further improvement possible: in the compute manager, calls to these
methods are immediately followed by calls to get updated nw_info.
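
A hedged sketch of the decorator's logic, condensed from the
description above (not the exact nova/network/api.py code;
update_instance_cache_with_nw_info is the module-level helper in that
file):

    import functools

    def refresh_cache(func):
        @functools.wraps(func)
        def wrapper(self, context, instance, *args, **kwargs):
            nw_info = func(self, context, instance, *args, **kwargs)
            if nw_info is None:
                # The extra rpc round trip this report is about: only
                # taken because the API method discarded the model the
                # manager already returned.
                nw_info = self._get_instance_nw_info(context, instance)
            update_instance_cache_with_nw_info(self, context, instance,
                                               nw_info)
            return nw_info
        return wrapper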

** Affects: nova
 Importance: Medium
 Assignee: Hans Lindgren (hanlind)
 Status: New


** Tags: network

https://bugs.launchpad.net/bugs/1284345



[Yahoo-eng-team] [Bug 1277422] [NEW] Quotas change incorrectly when a resizing instance is deleted

2014-02-07 Thread Hans Lindgren
Public bug reported:

When deleting a resizing instance that has not yet finished resizing,
quotas should be adjusted for the old flavor type. Instead, values from
the new flavor are incorrectly used.

This was originally reported and fixed in bug 1099729 but has since
resurfaced with the move to objects (commit
dce64683291ba2cdb5e6617e01ccc2909254acb4). This was made possible by a
prior change (commit a56f0b33069b919ebb24c4afdcc6b6c31592c98e) that
accidentally removed the test put in place to guard against this error
ever happening again.
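
The decision the delete path has to make, sketched with hypothetical
helper naming (a rough illustration, not the actual quota code):

    from nova.compute import task_states

    RESIZE_STATES = (task_states.RESIZE_PREP,
                     task_states.RESIZE_MIGRATING,
                     task_states.RESIZE_MIGRATED,
                     task_states.RESIZE_FINISH)

    def flavor_id_for_delete_quota(instance, migration):
        # While a resize is in flight the instance row may already point
        # at the new flavor, but the quota actually committed is still
        # the old flavor's; deltas on delete must use the old one.
        if instance.task_state in RESIZE_STATES:
            return migration.old_instance_type_id
        return instance.instance_type_id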

** Affects: nova
 Importance: Undecided
 Assignee: Hans Lindgren (hanlind)
 Status: New

https://bugs.launchpad.net/bugs/1277422



[Yahoo-eng-team] [Bug 1270675] Re: 'Unshelve' doesn't record instances' host-attributes into DB

2014-01-21 Thread Hans Lindgren
*** This bug is a duplicate of bug 1237868 ***
https://bugs.launchpad.net/bugs/1237868

I guess this is Havana or earlier?

This bug is already fixed in Icehouse but as far as I can see that fix
hasn't been backported.

** This bug has been marked a duplicate of bug 1237868
   Fail to suspend a unshelved server

https://bugs.launchpad.net/bugs/1270675

Title:
  'Unshelve' doesn't record instances' host-attributes into DB

Status in OpenStack Compute (Nova):
  New

Bug description:
  When a VM has been shelve-offloaded, it won't belong to any host, and
  the 'host' attribute will be 'None' if you execute 'nova show $id'.
  That's correct.

  But 'unshelve' doesn't record the 'host' attribute even after the
  instance has spawned on a host. Therefore, if you operate on it later,
  nova-api will raise an exception because it can't find the VM's host.

  Visit here for more information:
  http://paste.openstack.org/show/61521/




[Yahoo-eng-team] [Bug 1257545] Re: Unshelving an offloaded instance doesn't set host, hypervisor_name

2013-12-04 Thread Hans Lindgren
*** This bug is a duplicate of bug 1237868 ***
https://bugs.launchpad.net/bugs/1237868

** This bug has been marked a duplicate of bug 1237868
   Fail to suspend a unshelved server

https://bugs.launchpad.net/bugs/1257545

Title:
  Unshelving an offloaded instance doesn't set host, hypervisor_name

Status in OpenStack Compute (Nova):
  New

Bug description:
  When you unshelve an instance that has been offloaded it doesn't set:

  OS-EXT-SRV-ATTR:host
  OS-EXT-SRV-ATTR:hypervisor_hostname




[Yahoo-eng-team] [Bug 1180670] Re: XenAPIVMTestCase.test_instance_snapshot_fails_with_no_primary_vdi sometimes fails

2013-10-17 Thread Hans Lindgren
Just happened on the gate at
http://logs.openstack.org/12/52312/1/check/gate-nova-python27/571e82b/console.html

A quick search with logstash confirms that this has started to happen
frequently over the last couple of days.
Logstash query: @message:"FAIL:
nova.tests.virt.xenapi.test_xenapi.XenAPIVMTestCase.test_instance_snapshot_fails_with_no_primary_vdi"

** Changed in: nova
   Status: Invalid => Confirmed

https://bugs.launchpad.net/bugs/1180670

Title:
  XenAPIVMTestCase.test_instance_snapshot_fails_with_no_primary_vdi
  sometimes fails

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Sometimes
  XenAPIVMTestCase.test_instance_snapshot_fails_with_no_primary_vdi
  fails. It always seems to succeed on its own, but ocassionally fails
  on a full test run.

  ==
  FAIL: 
nova.tests.test_xenapi.XenAPIVMTestCase.test_instance_snapshot_fails_with_no_primary_vdi
  tags: worker-3
  --
  Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  Loading network driver 'nova.network.linux_net'
  Loading network driver 'nova.network.linux_net'
  Fast cloning is only supported on default local SR of type ext. SR on this 
system was found to be of type lvm. Ignoring the cow flag.
  No agent build found for xen/linux/x86-64
  Instance agent version: 1.0
  }}}

  Traceback (most recent call last):
File "nova/tests/test_xenapi.py", line 501, in 
test_instance_snapshot_fails_with_no_primary_vdi
  lambda *args, **kwargs: None)
  MismatchError: > returned None
  ==




[Yahoo-eng-team] [Bug 1197922] Re: _sync_instance_power_state throws KeyError for shutdown instance

2013-09-20 Thread Hans Lindgren
*** This bug is a duplicate of bug 1195849 ***
https://bugs.launchpad.net/bugs/1195849

** This bug has been marked a duplicate of bug 1195849
   _sync_power_state fails when instance is powered off

https://bugs.launchpad.net/bugs/1197922

Title:
  _sync_instance_power_state throws KeyError for shutdown instance

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  Steps to reproduce in DevStack:

  1) Create an instance:  nova boot .
  2) Shut it down with virsh:   sudo virsh shutdown n
  3) Wait for the compute manager to run the periodic power state sync
  (may take up to a minute).
  4) The compute manager triggers an AttributeError on task_state as it
  tries to call stop() for the instance:

  
  2013-07-04 18:08:55.895 WARNING nova.compute.manager [-] [instance: 
799f934d-2d05-455b-bf03-51b8c506e973] Instance shutdown by itself. Calling the 
stop API.
  2013-07-04 18:08:55.896 DEBUG nova.openstack.common.rpc.amqp [-] Making 
synchronous call on conductor ... from (pid=1367) multicall 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:586
  2013-07-04 18:08:55.896 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 
ad7195d5326b4854b08aceb3fb7cc72c from (pid=1367) multicall 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:589
  2013-07-04 18:08:55.897 DEBUG nova.openstack.common.rpc.amqp [-] UNIQUE_ID is 
e2fef3b087734925afa50c0b68c03dad. from (pid=1367) _add_unique_id 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:337
  2013-07-04 18:08:56.056 DEBUG amqp [-] Closed channel #1 from (pid=1367) 
_do_close /usr/local/lib/python2.7/dist-packages/amqp/channel.py:88
  2013-07-04 18:08:56.057 DEBUG amqp [-] using channel_id: 1 from (pid=1367) 
__init__ /usr/local/lib/python2.7/dist-packages/amqp/channel.py:70
  2013-07-04 18:08:56.058 DEBUG amqp [-] Channel open from (pid=1367) _open_ok 
/usr/local/lib/python2.7/dist-packages/amqp/channel.py:420
  2013-07-04 18:08:56.058 ERROR nova.compute.manager [-] [instance: 
799f934d-2d05-455b-bf03-51b8c506e973] error during stop() in sync_power_state.
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973] Traceback (most recent call last):
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973]   File 
"/opt/stack/nova/nova/compute/manager.py", line 4095, in 
_sync_instance_power_state
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973] 
self.conductor_api.compute_stop(context, db_instance)
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973]   File 
"/opt/stack/nova/nova/conductor/api.py", line 332, in compute_stop
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973] return 
self._manager.compute_stop(context, instance, do_cast)
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973]   File 
"/opt/stack/nova/nova/conductor/rpcapi.py", line 460, in compute_stop
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973] return self.call(context, msg, 
version='1.43')
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973]   File 
"/opt/stack/nova/nova/openstack/common/rpc/proxy.py", line 125, in call
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973] result = rpc.call(context, 
real_topic, msg, timeout)
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973]   File 
"/opt/stack/nova/nova/openstack/common/rpc/__init__.py", line 140, in call
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973] return _get_impl().call(CONF, 
context, topic, msg, timeout)
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973]   File 
"/opt/stack/nova/nova/openstack/common/rpc/impl_kombu.py", line 798, in call
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973] rpc_amqp.get_connection_pool(conf, 
Connection))
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973]   File 
"/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 615, in call
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973] rv = list(rv)
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e973]   File 
"/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 564, in __iter__
  2013-07-04 18:08:56.058 TRACE nova.compute.manager [instance: 
799f934d-2d05-455b-bf03-51b8c506e97

[Yahoo-eng-team] [Bug 1129354] Re: sqlalchemy _instance_update sets extra_specs for original instance type.

2013-04-10 Thread Hans Lindgren
Looks like this was fixed together with
https://github.com/openstack/nova/commit/6f47035605e471562a3c7de593a272cf1b5a3a86.

** Changed in: nova
   Status: Confirmed => Invalid

** Tags removed: baremetal

https://bugs.launchpad.net/bugs/1129354

Title:
  sqlalchemy _instance_update sets extra_specs for original instance
  type.

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Bare metal seems to require that instance['extra_specs'] contains a
  copy of the extra specs for the instance type of the instance.
  However, instance_update() can change the instance type ID.  The
  _instance_update() method in sqlalchemy/api.py only seems to pull and
  set instance['extra_specs'] for the OLD instance_type if
  instance_type_id is changing (as can happen with resize()).  I think
  we need to pull the extra specs at the end of this method if the
  instance_type_id changes.




[Yahoo-eng-team] [Bug 1145768] Re: nova host-update doesn't work

2013-03-07 Thread Hans Lindgren
The error was introduced by a change to python-novaclient,
https://review.openstack.org/#/c/18578/, which according to its commit
message is meant to "update hosts and services API according to changes
on nova". It references a change in Nova that was abandoned.

** Changed in: python-novaclient
   Importance: Undecided => High

** Changed in: python-novaclient
   Status: New => Triaged

** Changed in: nova
   Status: New => Invalid

https://bugs.launchpad.net/bugs/1145768

Title:
  nova host-update doesn't work

Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Nova:
  Triaged

Bug description:
  From latest devstack:

  '$nova --debug  host-update hostname'

  http://paste.openstack.org/show/32772/

  Not clear if the bug is in python-novaclient or in nova.  But
  presumably at one point the client worked, and then the API changed?

