[Yahoo-eng-team] [Bug 1382630] [NEW] access_ip_* not updated on reschedules when they should be

2014-10-17 Thread Chris Behrens
Public bug reported:

For virt drivers that require networks to be reallocated on nova
reschedules, the access_ip_v[4|6] fields on Instance are not updated.

This bug was introduced when the new build_instances path was added.
This new path updates access_ip_* before the instance goes ACTIVE, and
only when they are not already set. The old path updated the
access_ip_* fields only when the instance went ACTIVE.
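
A minimal sketch of the fix direction (the helper and call site are
illustrative, not nova's exact code; access_ip_v4/v6 are real Instance
fields): refresh the access IPs from the freshly (re)allocated network
info whenever the instance goes ACTIVE, instead of only when unset.

    # Sketch only: 'network_info' is nova's network model. Overwrite
    # unconditionally, because a reschedule may have reallocated ports.
    def _update_access_ips(instance, network_info):
        for vif in network_info:
            for ip in vif.fixed_ips():
                if ip['version'] == 4:
                    instance.access_ip_v4 = ip['address']
                elif ip['version'] == 6:
                    instance.access_ip_v6 = ip['address']
        instance.save()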

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382630

Title:
  access_ip_* not updated on reschedules when they should be

Status in OpenStack Compute (Nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1382630/+subscriptions


[Yahoo-eng-team] [Bug 1342919] Re: instances rescheduled after building network info do not update the MAC

2014-08-04 Thread Chris Behrens
I'm proposing a new virt driver method for nova that allows the virt
driver to say whether reschedules should deallocate networks. Once the
nova side is confirmed, we'll add the method to ironic's virt driver.
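
A sketch of the proposed hook (the default is assumed to be False so
existing drivers keep their current reschedule behavior; ironic opts in
because its MACs are tied to a specific node):

    # In the base virt driver:
    class ComputeDriver(object):
        def deallocate_networks_on_reschedule(self, instance):
            """Does the driver want networks deallocated on reschedule?"""
            return False

    # In ironic's nova driver:
    class IronicDriver(ComputeDriver):
        def deallocate_networks_on_reschedule(self, instance):
            return True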

** Also affects: ironic
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: Robert Collins (lifeless) => Chris Behrens (cbehrens)

** Changed in: ironic
 Assignee: (unassigned) => Chris Behrens (cbehrens)

** Changed in: ironic
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1342919

Title:
  instances rescheduled after building network info do not update the
  MAC

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  This is weird - Ironic has used the mac from a different node (which
  quite naturally leads to failures to boot!)

  nova list | grep spawn
  | 6c364f0f-d4a0-44eb-ae37-e012bbdd368c | ci-overcloud-NovaCompute3-zmkjp5aa6vgf | BUILD | spawning | NOSTATE | ctlplane=10.10.16.137 |

  nova show 6c364f0f-d4a0-44eb-ae37-e012bbdd368c | grep hyperv
  | OS-EXT-SRV-ATTR:hypervisor_hostname | b07295ee-1c09-484c-9447-10b9efee340c |

  neutron port-list | grep 137
  | 272f2413-0309-4e8b-9a6d-9cb6fdbe978d | | 78:e7:d1:23:90:0d | {"subnet_id": "a6ddb35e-305e-40f1-9450-7befc8e1af47", "ip_address": "10.10.16.137"} |

  ironic node-show b07295ee-1c09-484c-9447-10b9efee340c | grep wait
  | provision_state | wait call-back |

  ironic port-list | grep 78:e7:d1:23:90:0d  # from neutron
  | 33ab97c0-3de9-458a-afb7-8252a981b37a | 78:e7:d1:23:90:0d |

  ironic port-show 33ab97c0-3de9-458a-afb7-8252a981
  +------------+-----------------------------------------------------------+
  | Property   | Value                                                     |
  +------------+-----------------------------------------------------------+
  | node_uuid  | 69dc8c40-dd79-4ed6-83a9-374dcb18c39b                      |  # Ruh-roh, wrong node!
  | uuid       | 33ab97c0-3de9-458a-afb7-8252a981b37a                      |
  | extra      | {u'vif_port_id': u'aad5ee6b-52a3-4f8b-8029-7b8f40e7b54e'} |
  | created_at | 2014-07-08T23:09:16+00:00                                 |
  | updated_at | 2014-07-16T01:23:23+00:00                                 |
  | address    | 78:e7:d1:23:90:0d                                         |
  +------------+-----------------------------------------------------------+

  ironic port-list | grep 78:e7:d1:23:9b:1d  # This is the MAC my hardware list says the node should have
  | caba5b36-f518-43f2-84ed-0bc516cc89df | 78:e7:d1:23:9b:1d |

  ironic port-show caba5b36-f518-43f2-84ed-0bc516cc
  +------------+-----------------------------------------------------------+
  | Property   | Value                                                     |
  +------------+-----------------------------------------------------------+
  | node_uuid  | b07295ee-1c09-484c-9447-10b9efee340c                      |  # and tada, right node
  | uuid       | caba5b36-f518-43f2-84ed-0bc516cc89df                      |
  | extra      | {u'vif_port_id': u'272f2413-0309-4e8b-9a6d-9cb6fdbe978d'} |
  | created_at | 2014-07-08T23:08:26+00:00                                 |
  | updated_at | 2014-07-16T19:07:56+00:00                                 |
  | address    | 78:e7:d1:23:9b:1d                                         |
  +------------+-----------------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1342919/+subscriptions


[Yahoo-eng-team] [Bug 1352510] [NEW] Delete and re-add of same node to compute_nodes table is broken

2014-08-04 Thread Chris Behrens
Public bug reported:

When a compute node is deleted (or marked deleted) in the DB and another
compute node is re-added with the same name, things break.

This is because the resource tracker caches the compute node object/dict
and uses the 'id' to update the record. When this happens,
rt.update_available_resource() will raise a ComputeHostNotFound. This
ends up short-circuiting the full run of the update_available_resource()
periodic task.

This mostly applies when using a virt driver where a nova-compute
manages more than 1 "hypervisor".
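
A hedged sketch of one possible fix (method and attribute names are
approximate, not nova's exact code): drop the cached record and recreate
it when the update by cached 'id' hits ComputeHostNotFound, rather than
letting the whole periodic task abort.

    # Sketch: tolerate a deleted-and-recreated compute_nodes row.
    def _update_compute_node_record(self, context, values):
        try:
            self.compute_node = self.conductor_api.compute_node_update(
                context, self.compute_node, values)
        except exception.ComputeHostNotFound:
            # The cached 'id' points at a (soft-)deleted row; forget it
            # and create a fresh record for this hypervisor_hostname.
            self.compute_node = None
            self._create_compute_node_record(context, values)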

** Affects: nova
 Importance: Medium
 Assignee: Chris Behrens (cbehrens)
 Status: In Progress

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Chris Behrens (cbehrens)

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352510

Title:
  Delete and re-add of same node to compute_nodes table is broken

Status in OpenStack Compute (Nova):
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352510/+subscriptions


[Yahoo-eng-team] [Bug 1347778] Re: raising Maximum number of ports exceeded is wrong

2014-07-23 Thread Chris Behrens
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347778

Title:
  raising Maximum number of ports exceeded is wrong

Status in OpenStack Compute (Nova):
  Triaged
Status in Python client library for Neutron:
  Confirmed

Bug description:
  When neutron API in nova calls create_port(), it looks for exceptions.
  Any 409 is turned into 'Maximum number of ports exceeded'. This is a
  horrible assumption. Neutron can return 409s for more than just this
  reason.

  Another case where neutron returns a 409 is this:

  2014-07-22 18:10:27.583 26577 INFO neutron.api.v2.resource [req-b7267ae5-bafa-4c34-8e25-9c0fca96ad2d None] create failed (client error): Unable to complete operation for network ----. The mac address XX:XX:XX:XX:XX:XX is in use.

  This can occur when the request to create a port includes the mac
  address to use (as happens w/ baremetal/ironic in nova) and neutron
  for some reason still has things assigned with that mac.

  This is the offending code:

   174  except neutron_client_exc.NeutronClientException as e:
   175      # NOTE(mriedem): OverQuota in neutron is a 409
   176      if e.status_code == 409:
   177          LOG.warning(_('Neutron error: quota exceeded'))
   178          raise exception.PortLimitExceeded()

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347778/+subscriptions


[Yahoo-eng-team] [Bug 1347778] [NEW] raising Maximum number of ports exceeded is wrong

2014-07-23 Thread Chris Behrens
Public bug reported:

When neutron API in nova calls create_port(), it looks for exceptions.
Any 409 is turned into 'Maximum number of ports exceeded'. This is a
horrible assumption. Neutron can return 409s for more than just this
reason.

Another case where neutron returns a 409 is this:

2014-07-22 18:10:27.583 26577 INFO neutron.api.v2.resource [req-b7267ae5-bafa-4c34-8e25-9c0fca96ad2d None] create failed (client error): Unable to complete operation for network ----. The mac address XX:XX:XX:XX:XX:XX is in use.

This can occur when the request to create a port includes the mac
address to use (as happens w/ baremetal/ironic in nova) and neutron for
some reason still has things assigned with that mac.

This is the offending code:

 174  except neutron_client_exc.NeutronClientException as e:
 175      # NOTE(mriedem): OverQuota in neutron is a 409
 176      if e.status_code == 409:
 177          LOG.warning(_('Neutron error: quota exceeded'))
 178          raise exception.PortLimitExceeded()
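
A hedged sketch of a more careful mapping (the 'quota' string check is
illustrative; LOG, _ and the exception classes are the same ones used in
the snippet above):

    def _create_port(client, port_req_body):
        # Sketch: only translate genuine over-quota 409s; let any other
        # conflict (e.g. "mac address ... is in use") propagate as-is.
        try:
            return client.create_port(port_req_body)
        except neutron_client_exc.NeutronClientException as e:
            if e.status_code == 409 and 'quota' in str(e).lower():
                LOG.warning(_('Neutron error: quota exceeded'))
                raise exception.PortLimitExceeded()
            raise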

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347778

Title:
  raising Maximum number of ports exceeded is wrong

Status in OpenStack Compute (Nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347778/+subscriptions


[Yahoo-eng-team] [Bug 1336377] [NEW] random nova unit test failure for test_shelve_offload

2014-07-01 Thread Chris Behrens
Public bug reported:


http://logs.openstack.org/02/94402/18/check/gate-nova-python26/b20aa1d/testr_results.html.gz

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1336377

Title:
  random nova unit test failure for test_shelve_offload

Status in OpenStack Compute (Nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1336377/+subscriptions


[Yahoo-eng-team] [Bug 1200591] Re: virt xenapi driver does not retry upload_image on a socket error

2014-04-18 Thread Chris Behrens
The retry code does not check for this:

2014-04-19 00:37:39.354 13204 ERROR nova.compute.manager [req-e7e92354-6e42-4955-9519-08a65872372d  ] [instance: 7a2b7c97-f793-4666-888d-430dXXX] Error: [Errno 104] Connection reset by peer

in xenapi/client/session.py:

def _is_retryable_exception(self, exc, fn):
    _type, method, error = exc.details[:3]
    if error == 'RetryableError':
        LOG.debug(_("RetryableError, so retrying %(fn)s"), {'fn': fn},
                  exc_info=True)
        return True
    elif "signal" in method:
        LOG.debug(_("Error due to a signal, retrying %(fn)s"), {'fn': fn},
                  exc_info=True)
        return True
    else:
        return False
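
A hedged sketch of extending that check to also cover socket-level
failures (errno 104 is ECONNRESET; LOG and _ as in the snippet above):

    import errno
    import socket

    def _is_retryable_exception(self, exc, fn):
        # Connection resets never reach the XenAPI checks below, since
        # socket.error carries no 'details' attribute.
        if isinstance(exc, socket.error):
            if exc.errno == errno.ECONNRESET:
                LOG.debug(_("Socket error, so retrying %(fn)s"),
                          {'fn': fn}, exc_info=True)
                return True
            return False
        _type, method, error = exc.details[:3]
        if error == 'RetryableError':
            return True
        return "signal" in method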

** Changed in: nova
   Status: Invalid => Triaged

** Changed in: nova
 Assignee: Sridevi Koushik (sridevik) => (unassigned)

** Changed in: nova
   Importance: Low => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1200591

Title:
  virt xenapi driver does not retry upload_image on a socket error

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  As of now, only XenAPI failures are retried in the upload_image method
  (up to glance_num_retries times).
  We need to retry on "error: [Errno 104] Connection reset by peer".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1200591/+subscriptions


[Yahoo-eng-team] [Bug 1296414] [NEW] quotas not updated when periodic tasks or startup finish deletes

2014-03-23 Thread Chris Behrens
Public bug reported:

There are a couple of cases in the compute manager where we don't pass
reservations to _delete_instance().  For example, one of them is
cleaning up when we see a delete that is stuck in DELETING.

The only place we ever update quotas as part of delete should be when
the instance DB record is removed. If something is stuck in DELETING, it
means that the quota was not updated.  We should make sure we're always
updating the quota when the instance DB record is removed.

(Soft delete kinda throws a wrench in this, hrmph, because I think you
want soft deleted instances to not count against quotas -- yet their DB
records will still exist.)
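
A hedged sketch of the direction suggested here (the helper name is
hypothetical; QUOTAS.reserve/rollback are nova's quota engine calls):
build the missing reservations at cleanup time and hand them to
_delete_instance(), so the commit happens when the DB record is removed.

    def _complete_stuck_delete(self, context, instance):
        # Reserve the decrement now; _delete_instance() commits it when
        # the instance DB record is actually destroyed.
        reservations = QUOTAS.reserve(context,
                                      project_id=instance.project_id,
                                      instances=-1,
                                      cores=-instance.vcpus,
                                      ram=-instance.memory_mb)
        try:
            self._delete_instance(context, instance, reservations)
        except Exception:
            QUOTAS.rollback(context, reservations)
            raise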

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  There are a couple of cases in the compute manager where we don't pass
  reservations to _delete_instance().  For example, one of them is
  cleaning up when we see a delete that is stuck in DELETING.
  
  The only place we ever update quotas as part of delete should be when
  the instance DB record is removed. If something is stuck in DELETING, it
  means that the quota was not updated.  We should make sure we're always
  updating the quota when the instance DB record is removed.
+ 
+ (Soft delete kinda throws a wrench in this, hrmph, because I think you
+ want soft deleted instances to not count against quotas -- yet their DB
+ records will still exist.)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296414

Title:
  quotas not updated when periodic tasks or startup finish deletes

Status in OpenStack Compute (Nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296414/+subscriptions


[Yahoo-eng-team] [Bug 1212028] Re: SchedulerManager._expire_reservations doesn't need to always be running

2013-09-04 Thread Chris Behrens
This bug is not valid.  The task needs to run periodically to expire any
reservations that should be expired ahead of QUOTAS.reserve() calls.
The only alternative is to expire reservations explicitly before every
single QUOTAS.reserve()... which is not as performant.

This task doesn't necessarily belong in nova-scheduler.  It really could
easily be moved to nova-conductor or nova-compute or something... but it
needs to run somewhere!
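
For reference, the task itself is tiny (a sketch; it just delegates to
the quota engine):

    @periodic_task.periodic_task
    def _expire_reservations(self, context):
        # Release reservations whose 'expire' time has passed, so that
        # leaked, uncommitted reservations don't pin quota forever.
        QUOTAS.expire(context)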

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1212028

Title:
  SchedulerManager._expire_reservations doesn't need to always be
  running

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The nova-scheduler log is filled with:

  2013-08-13 23:30:56.219 730 DEBUG nova.openstack.common.loopingcall [-] Dynamic looping call sleeping for 60.00 seconds _inner /opt/stack/nova/nova/openstack/common/loopingcall.py:132
  2013-08-13 23:31:56.228 730 DEBUG nova.openstack.common.periodic_task [-] Running periodic task SchedulerManager._expire_reservations run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:176

  This periodic task is to help the quota system work, but if the
  scheduler hasn't done anything for the past few minutes there is no
  reason to keep running this task.  Instead it can be restarted upon
  the next scheduling event.

  The main benefit from fixing this will be cleaner nova-scheduler log
  files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1212028/+subscriptions


[Yahoo-eng-team] [Bug 1211702] Re: Server list filtered by tenant_id doesn't work

2013-08-15 Thread Chris Behrens
If you're an admin, you must be explicit in saying "I want to reach
outside of my project id" by using all_tenants=1.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1211702

Title:
  Server list filtered by tenant_id doesn't work

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When an admin user tries to get the list of servers filtered by
  tenant_id, the filter is overwritten by the admin's tenant_id, so the
  specified filter is not used and the list just reports the admin
  user's servers.

  The problem is that in the servers API, 'project_id' is set equal to
  context.project_id:

  if 'all_tenants' not in search_opts:
      if context.project_id:
          search_opts['project_id'] = context.project_id

  Then in nova/compute/api.py we have a filter_mapping that maps the
  tenant_id filter to the project_id filter:

  # search_option to filter_name mapping.
  filter_mapping = {
      'image': 'image_ref',
      'name': 'display_name',
      'tenant_id': 'project_id',
      'flavor': _remap_flavor_filter,
      'fixed_ip': _remap_fixed_ip_filter}

  Because of the mapping, the tenant_id filter is mapped to project_id,
  and its value in the dictionary is then overwritten by the project_id
  filter defined at the API layer.

  My solution is basically to avoid adding a filter if the same filter
  was specified in the original request. Before adding a value to
  search_opts, we check whether the key we are about to add appears as a
  value in the filter_mapping dict; if so, we force the filter only when
  the user did not request the same filter in the original call.

  In my opinion we should also move all the logic for creating and
  managing the search_opts dictionary into the API code, so that there
  is a single point where the dictionary is created, and let the compute
  API code just return results based on the filters passed.
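
  A hedged sketch of the proposed guard (using the real filter_mapping
  shown above; the check itself is the suggestion, not existing code):

  if 'all_tenants' not in search_opts:
      # Don't force project_id if the caller already requested a
      # filter that maps to it (e.g. tenant_id).
      already_requested = any(filter_mapping.get(key) == 'project_id'
                              for key in search_opts)
      if context.project_id and not already_requested:
          search_opts['project_id'] = context.project_id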

  Steps to reproduce:
  - create an instance (inst1) for the tenant_id X
  - with an admin user call the servers/detail?tenant_id=X
  - verify that the instance inst1 is not returned. (Note that if the admin 
user has some instances the call will get them back)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1211702/+subscriptions


[Yahoo-eng-team] [Bug 1179395] Re: Powervm driver does not support IBM i

2013-06-21 Thread Chris Behrens
** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1179395

Title:
  Powervm driver does not support IBM i

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Rationale:
    Currently OpenStack does not support IBM i in the powervm driver.
    Although the underlying virtualization mechanism of IBM i is the
    same as that of the currently supported PowerLinux in PowerVM, IBM i
    instances cannot be deployed correctly.

  Overview:
    To support IBM i in powervm, the implementation will be based on the
    current powervm implementation, which includes IVM-managed systems,
    single-VIOS support and logical volume support. It introduces a
    distinction in the image metadata between IBM i images and
    PowerLinux/AIX images. PowerVM will determine the OS type from the
    image metadata and generate the partition accordingly.

  Use Cases:
    1. Create a new IBM i instance based on a file-based IBM i image.
    2. Start an IBM i instance.
    3. Stop an IBM i instance.
    4. Create a snapshot of an IBM i instance.
    5. Delete an IBM i instance.

  Design:
    The changes for IBM i support are within the powervm driver of Nova
    and may include:
    1. Specify partition property lpar_env for IBM i instances.
    2. Specify partition property ipl_source as b for IBM i instances.
    3. Specify LPA_PROFILE_ATTRIBUTES load_source_slot and console_slot
       for IBM i instances.
    4. Specify partition keylock position as norm while powering on
       IBM i instances.
    5. Check lpar_started status for IBM i instances according to
       refcode and running state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1179395/+subscriptions


[Yahoo-eng-team] [Bug 1186401] Re: instance dict is being abused to pass config_drive_id to libvirt on build

2013-05-31 Thread Chris Behrens
sigh.  click too fast on status and you get undesired results.  heh.

** Changed in: nova
 Assignee: (unassigned) => Chris Behrens (cbehrens)

** Changed in: nova
   Status: Triaged => Won't Fix

** Changed in: nova
   Status: Won't Fix => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1186401

Title:
  instance dict is being abused to pass config_drive_id to libvirt on
  build

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  compute/api create() shoves 'config_drive_id' into instance
  properties... which get passed down during a build... but that's not a
  column in the DB.

  It looks like it actually makes it all the way into libvirt because we
  don't use the results from any instance_update() in between, which is
  bad because the instance we pass has old task_state, etc.

  Also: This will not work when we switch to passing objects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1186401/+subscriptions


[Yahoo-eng-team] [Bug 1153827] Re: soft delete values in InstanceSystemMetadata on instance delete

2013-05-30 Thread Chris Behrens
Re-opening this bug as it is not actually fixed in h-1.  The previous
fix needed to be reverted due to bug 1185190.

** Changed in: nova
   Status: Fix Released => Triaged

** Changed in: nova
 Assignee: Joe Gordon (jogo) => (unassigned)

** Changed in: nova
Milestone: havana-1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1153827

Title:
  soft delete values in InstanceSystemMetadata on instance delete

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  Soft delete values in InstanceSystemMetadata on instance delete.

  Currently InstanceSystemMetadata is used in
  notify_usage_exists_deleted_instance.  Non-deleted
  InstanceSystemMetadata is currently used (with deleted instances).

  
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L755

  
  In order to fix this, the read_deleted flag needs to work for joined
  tables.
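
  A hedged SQLAlchemy sketch of what that might look like (model_query
  and the models are from nova's db layer; the join filtering is the
  suggestion, not existing code):

    from sqlalchemy import or_
    from sqlalchemy.orm import contains_eager

    def instance_get(context, uuid, read_deleted='no'):
        query = (model_query(context, models.Instance,
                             read_deleted=read_deleted)
                 .outerjoin(models.InstanceSystemMetadata)
                 .options(contains_eager('system_metadata'))
                 .filter(models.Instance.uuid == uuid))
        if read_deleted == 'no':
            # Apply the soft-delete filter to the joined table too,
            # while keeping instances that have no metadata rows at all.
            query = query.filter(
                or_(models.InstanceSystemMetadata.deleted == 0,
                    models.InstanceSystemMetadata.id == None))
        return query.first()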

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1153827/+subscriptions


[Yahoo-eng-team] [Bug 1181412] Re: nova unit tests pass with LOG.*() calls that should generate KeyError

2013-05-17 Thread Chris Behrens
Sigh.  This appears to be how the python logging module works.

http://docs.python.org/2/library/logging.html

Under logger.debug they have an example use of 'extra'... and then below
it states:

"If you choose to use these attributes in logged messages, you need to
exercise some care. In the above example, for instance, the Formatter
has been set up with a format string which expects ‘clientip’ and ‘user’
in the attribute dictionary of the LogRecord. If these are missing, the
message will not be logged because a string formatting exception will
occur. So in this case, you always need to pass the extra dictionary
with these keys."

:(
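
A minimal standalone reproduction of that behavior:

    import logging

    handler = logging.StreamHandler()
    # The format string demands a custom 'clientip' attribute.
    handler.setFormatter(logging.Formatter('%(clientip)s %(message)s'))
    logger = logging.getLogger('demo')
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)

    logger.debug('first', extra={'clientip': '10.0.0.1'})  # logged fine
    # The next call hits a KeyError during formatting: the record is
    # dropped and a traceback goes to stderr, but no exception is
    # raised -- so a test suite sails right past it.
    logger.debug('second')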


** Changed in: nova
   Status: New => Invalid

** Changed in: oslo
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1181412

Title:
  nova unit tests pass with LOG.*() calls that should generate KeyError

Status in OpenStack Compute (Nova):
  Invalid
Status in Oslo - a Library of Common OpenStack Code:
  Invalid

Bug description:
  I just found a broken LOG.debug() message in the cells/scheduler which
  references a non-existent keyword arg.  Tests run fine.  I've inserted
  a different completely bogus LOG.debug() message and it also passes.
  It seems that LOG.debug()s are being turned into no-ops.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1181412/+subscriptions


[Yahoo-eng-team] [Bug 1157330] Re: error: 'is not a valid node managed by this compute host.' under load

2013-03-19 Thread Chris Behrens
Not sure if this is truly a bug or not.   I can't seem to reproduce it
on a fresh compute_nodes table.  However, if I have a compute_nodes
entry leftover... from switching from XenAPI -> fake... I see this
problem just trying to build *1* instance.  The problem is that the
'nodename' changes.  The fake driver advertises a 'fake-mini' nodename.

This may help if you switch from kvm -> fake:

http://paste.openstack.org/show/34095/

That should make the fake driver use the same nodename as was probably
used for kvm.   But ultimately, I'm not sure just switching the driver
on a host should "just work".  XenAPI's nodename is the host in dom0.
kvm's nodename is the same as the nova-compute hostname.  There are all
sorts of things that may need to be cleaned up when swapping the driver.

I would try repeating this with an empty compute_nodes table and see if
this is still a bug.
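
If you're switching from kvm to fake, the workaround is roughly this
shape (a sketch, not the paste's exact content):

    import socket

    # Make the fake driver report the compute host's own name as its
    # nodename (as the kvm/libvirt driver does) instead of 'fake-mini'.
    class SameHostFakeDriver(FakeDriver):
        def get_available_nodes(self, refresh=False):
            return [socket.gethostname()]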

** Changed in: nova
   Status: Confirmed => Won't Fix

** Changed in: nova
   Status: Won't Fix => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1157330

Title:
  error: 'is not a valid node managed by this compute host.' under load

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  Using Devstack (Grizzly-3) and nova  SHA1
  5aec4001eb30197f88467e0519203a52c3acd431.

  And setting 'compute_driver = nova.virt.fake.FakeDriver' in nova.conf

  
  vagrant@precise64:~$ nova image-list
  +--------------------------------------+---------------------------------+--------+--------+
  | ID                                   | Name                            | Status | Server |
  +--------------------------------------+---------------------------------+--------+--------+
  | 90e59cc9-503f-46e8-873e-13cde01b1252 | cirros-0.3.1-x86_64-uec         | ACTIVE |        |
  | 8097952f-6ef8-440b-bdf4-dfac34746396 | cirros-0.3.1-x86_64-uec-kernel  | ACTIVE |        |
  | 23828452-dc20-4e4e-a505-86abc77a4cc3 | cirros-0.3.1-x86_64-uec-ramdisk | ACTIVE |        |
  +--------------------------------------+---------------------------------+--------+--------+

  When trying to run 100 vms  as admin user using

  $ nova boot --image 90e59cc9-503f-46e8-873e-13cde01b1252 --flavor
  m1.nano --num-instances 100 test

  over 10% fail with the following error:

  {u'message': u'NovaException', u'code': 500, u'details': u'precise64
  is not a valid node managed by this compute host.

  
  http://paste.openstack.org/show/34069/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1157330/+subscriptions


[Yahoo-eng-team] [Bug 1142901] Re: keypairs aren't propagated with cells

2013-03-03 Thread Chris Behrens
Ah, key data is sent with each instance already... so we really don't
need this in child cells.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1142901

Title:
  keypairs aren't propagated with cells

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Keypairs don't work with cells as they are not synced down to child
  cells.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1142901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1130997] Re: host_power_action doesn't work in cells

2013-02-25 Thread Chris Behrens
Turns out this is not a bug.  There's a bit of trickery in cells_api's
version of HostAPI(): methods not overridden there fall back to api.py's
HostAPI() implementations, and the rpcapi class is swapped out for one
that proxies via cells to the correct cell and manager.
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1130997

Title:
  host_power_action doesn't work in cells

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  nova.compute.api.HostAPI.host_power_action needs to be added to
  nova.compute.cells_api

  At the moment if you have a host in a different cell to your api
  server, you can't call host_power_action on it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1130997/+subscriptions


[Yahoo-eng-team] [Blueprint scaling-zones] Improvements for Scaling Zones

2011-12-21 Thread Chris Behrens
Blueprint changed by Chris Behrens:

Assignee: (none) => Nova Scaling Team

-- 
Improvements for Scaling Zones
https://blueprints.launchpad.net/nova/+spec/scaling-zones


[Yahoo-eng-team] [Blueprint scaling-zones] Improvements for Scaling Zones

2011-12-21 Thread Chris Behrens
Blueprint changed by Chris Behrens:

Drafter: Chris Behrens => Nova Scaling Team

-- 
Improvements for Scaling Zones
https://blueprints.launchpad.net/nova/+spec/scaling-zones

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp