[Yahoo-eng-team] [Bug 1407643] [NEW] Setting network bandwidth quota in extra_specs causes a VM creation to fail in devstack

2015-01-05 Thread Phil Day
Public bug reported:

https://blueprints.launchpad.net/nova/+spec/quota-instance-resource
added a number of resource management capabilities via extra_specs for
libvirt - but at least one of these causes VMs to fail on devstack with
Neutron (so I'm guessing that they aren't covered in Tempest?)


On a devstack system with Neutron Networking:

nova flavor-key m1.tiny set quota:vif_inbound_average=1024
ubuntu@devstack-forced-shutdown:/mnt/devstack$ nova boot --image 02985e98-a163-4ce9-afb8-098c41c6573c --flavor 1 phil.limit
+--------------------------------------+----------------------------------------------------------------+
| Property                             | Value                                                          |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                         |
| OS-EXT-AZ:availability_zone          | nova                                                           |
| OS-EXT-SRV-ATTR:host                 | -                                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
| OS-EXT-SRV-ATTR:instance_name        | instance-0003                                                  |
| OS-EXT-STS:power_state               | 0                                                              |
| OS-EXT-STS:task_state                | scheduling                                                     |
| OS-EXT-STS:vm_state                  | building                                                       |
| OS-SRV-USG:launched_at               | -                                                              |
| OS-SRV-USG:terminated_at             | -                                                              |
| accessIPv4                           |                                                                |
| accessIPv6                           |                                                                |
| adminPass                            | 3SrCw22q8Prz                                                   |
| config_drive                         |                                                                |
| created                              | 2015-01-05T11:21:10Z                                           |
| flavor                               | m1.tiny (1)                                                    |
| hostId                               |                                                                |
| id                                   | 72c953c8-9bd3-4e94-8fbb-db54f77509b7                           |
| image                                | cirros-0.3.2-x86_64-uec (02985e98-a163-4ce9-afb8-098c41c6573c) |
| key_name                             | -                                                              |
| metadata                             | {}                                                             |
| name                                 | phil.limit                                                     |
| os-extended-volumes:volumes_attached | []                                                             |
| progress                             | 0                                                              |
| security_groups                      | default                                                        |
| status                               | BUILD                                                          |
| tenant_id                            | 0c1ece771f3f43958d010dfbfba52b83                               |
| updated                              | 2015-01-05T11:21:10Z                                           |
| user_id                              | b311b86b7fe2424e89508d1af73260ec                               |
+--------------------------------------+----------------------------------------------------------------+

ubuntu@devstack-forced-shutdown:/mnt/devstack$ nova list
+--------------------------------------+------------+--------+------------+-------------+-------------------+
| ID                                   | Name       | Status | Task State | Power State | Networks          |
+--------------------------------------+------------+--------+------------+-------------+-------------------+
| 72c953c8-9bd3-4e94-8fbb-db54f77509b7 | phil.limit | ERROR  | -          | NOSTATE     | public=172.24.4.4 |
+--------------------------------------+------------+--------+------------+-------------+-------------------+

Stack trace on nova-compute shows:

2015-01-05 11:46:46.598 ERROR nova.virt.libvirt.driver [-] Error launching a defined domain with XML: <domain type='qemu'>
  <name>instance-0004</name>
  <uuid>07fe9d59-cfe0-4937-b6c1-aead2cd2e23d</uuid>
  <metadata>
    <nova:instance

[Yahoo-eng-team] [Bug 1401121] [NEW] SimpleCIDRAffinityFilter should accept missing host_ip

2014-12-10 Thread Phil Day
Public bug reported:

The SimpleCIDRAffinityFilter uses the host_ip value from host_state,
but the ironic driver doesn't provide this value. Whilst it doesn't
make sense to use this filter with Ironic nodes, it should be possible
to have it configured in a system that uses both the ironic driver and
another virt driver.

** Affects: nova
 Importance: Undecided
 Assignee: Phil Day (philip-day)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Phil Day (philip-day)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401121

Title:
  SimpleCIDRAffinityFilter should accept missing host_ip

Status in OpenStack Compute (Nova):
  New

Bug description:
  The SimpleCIDRAffinityFilter uses the host_ip value from host_state,
  but the ironic driver doesn't provide this value. Whilst it doesn't
  make sense to use this filter with Ironic nodes, it should be possible
  to have it configured in a system that uses both the ironic driver and
  another virt driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1401121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384637] [NEW] Shelve_offload should support clean_shutdown

2014-10-23 Thread Phil Day
Public bug reported:

Change https://review.openstack.org/#/c/112988/ added clean_shutdown
support for shelve(), but instances booted from volumes are shelved via
shelve_offload() - so that should also support clean_shutdown

** Affects: nova
 Importance: Undecided
 Assignee: Phil Day (philip-day)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Phil Day (philip-day)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384637

Title:
  Shelve_offload should support clean_shutdown

Status in OpenStack Compute (Nova):
  New

Bug description:
  Change https://review.openstack.org/#/c/112988/ added clean_shutdown
  support for shelve(), but instances booted from volumes are shelved
  via shelve_offload() - so that should also support clean_shutdown

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384637/+subscriptions



[Yahoo-eng-team] [Bug 1007116] Re: nova should support showing 'DELETED' servers

2014-09-18 Thread Phil Day
I don't think this is a valid bug. Admins can already see deleted
instances by including deleted=True in the search options. Non-admins
shouldn't be able to see deleted instances.

** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1007116

Title:
  nova should support showing 'DELETED' servers

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Nova supports showing (HTTP GET) on deleted images and flavors. Trying
  to show a deleted server currently fails however:

  
  [root@nova1 ~]# nova delete 4e38efa4-6980-44b0-8774-3a28de88e22f
  [root@nova1 ~]# nova show 4e38efa4-6980-44b0-8774-3a28de88e22f
  ERROR: No server with a name or ID of '4e38efa4-6980-44b0-8774-3a28de88e22f' 
exists.

  
  It would seem for consistency that we should follow the model we do with 
images and flavors and allow 'DELETED' records that still exist in the database 
to be shown. See example of showing deleted image below:

  
  [root@nova1 ~]# nova image-show 01705a39-4deb-402c-a651-e6e8bbef83ef
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | created  | 2012-05-31T20:39:36Z                 |
  | id       | 01705a39-4deb-402c-a651-e6e8bbef83ef |
  | minDisk  | 0                                    |
  | minRam   | 0                                    |
  | name     | foo                                  |
  | progress | 0                                    |
  | status   | DELETED                              |
  | updated  | 2012-05-31T20:39:54Z                 |
  +----------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1007116/+subscriptions



[Yahoo-eng-team] [Bug 1370578] [NEW] Ironic Hostmanager does not pass hypervisor_type to filters

2014-09-17 Thread Phil Day
Public bug reported:

The Ironic HostManager does not include the compute node hypervisor
values such as type, version, and hostname.

Including these values, which are included by the normal HostManager, is
needed to allow the capabilities filter to work in a combined Ironic /
virt Nova deployment.

** Affects: nova
 Importance: Undecided
 Assignee: Phil Day (philip-day)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Phil Day (philip-day)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370578

Title:
  Ironic Hostmanager does not pass hypervisor_type to filters

Status in OpenStack Compute (Nova):
  New

Bug description:
  The Ironic HostManager does not include the compute node hypervisor
  values such as type, version, and hostname.

  Including these values, which are included by the normal HostManager,
  is needed to allow the capabilities filter to work in a combined
  Ironic / virt Nova deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370578/+subscriptions



[Yahoo-eng-team] [Bug 1368176] [NEW] Scheduler hints broken in V2.1 API

2014-09-11 Thread Phil Day
Public bug reported:

The v2.1 scheduler_hints plugin is broken, as it tries to extract the
hints from the request_body rather than the server_dict:

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/plugins/v3/scheduler_hints.py#L42-L45

The hints value in the request body is another layer down, in the
'server' element.

There are no V2.1 unit tests for the scheduler hints, so this is not
picked up by testing at the moment.
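To illustrate the one-level-down problem, a sketch (key names follow the bug description; this is not nova's actual plugin code):

```python
def extract_hints(request_body):
    # The broken version effectively looked here:
    #     request_body.get('os:scheduler_hints')
    # but in the incoming body the hints sit one layer down,
    # inside the 'server' element.
    server_dict = request_body.get('server', {})
    return server_dict.get('os:scheduler_hints', {})
```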

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368176

Title:
  Scheduler hints broken in V2.1 API

Status in OpenStack Compute (Nova):
  New

Bug description:
  The v2.1 scheduler_hints plugin is broken, as it tries to extract the
  hints from the request_body rather than the server_dict:

  https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/plugins/v3/scheduler_hints.py#L42-L45

  The hints value in the request body is another layer down, in the
  'server' element.

  There are no V2.1 unit tests for the scheduler hints, so this is not
  picked up by testing at the moment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368176/+subscriptions



[Yahoo-eng-team] [Bug 1317100] Re: User data should be redacted when logging request spec

2014-07-09 Thread Phil Day
The change to drive provisioning from conductor now avoids the code path
that was logging user data, so the immediate concern has gone away.  Not
carrying user data in the instance object would still be a useful
optimization - but that's a different task.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1317100

Title:
  User data should be redacted when logging request spec

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The filter scheduler has a Debug level log for the request spec, which
  includes in the instance properties the base64 encoded user_data.

  Since this may be used by the user to pass credentials into the VM
  this field should be redacted in the log entry.

  User data  is an opaque data blob as far as Nova is concerned (and
  hence of no practical use for debugging).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1317100/+subscriptions



[Yahoo-eng-team] [Bug 1337991] [NEW] absolute limits API doesn't take user quotas into account

2014-07-04 Thread Phil Day
Public bug reported:

The limits API always returns the per tenant limits and not any per-user
limits that may exist.

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/limits.py#L94-95

The call to get_project_quotas should be replaced with a call to
get_user_quotas.

I suspect this just got missed when per-user project quotas were
introduced.
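A sketch of the intended lookup order (the helper and the toy driver are illustrative; nova's real code calls its quota driver through the QUOTAS engine):

```python
def get_limits(quotas, context, project_id, user_id=None):
    # Per-user overrides, when they exist, are what the caller
    # actually has available, so prefer them over project quotas.
    if user_id is not None:
        return quotas.get_user_quotas(context, project_id, user_id)
    return quotas.get_project_quotas(context, project_id)

class FakeQuotaDriver:
    """Toy driver used only to demonstrate the helper."""
    def get_project_quotas(self, context, project_id):
        return {'instances': 10}
    def get_user_quotas(self, context, project_id, user_id):
        return {'instances': 5}   # a per-user override
```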

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337991

Title:
  absolute limits API doesn't take user quotas into account

Status in OpenStack Compute (Nova):
  New

Bug description:
  The limits API always returns the per tenant limits and not any per-
  user limits that may exist.

  
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/limits.py#L94-95

  The call to get_project_quotas should be replaced with a call to
  get_user_quotas.

  I suspect this just got missed when per-user project quotas were
  introduced.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337991/+subscriptions



[Yahoo-eng-team] [Bug 1336767] [NEW] Instance disappeared during wait for destroy should be a warning not an error

2014-07-02 Thread Phil Day
Public bug reported:

Currently Nova logs an error if a libvirt domain disappears while
waiting for it to be destroyed, but the code actually treats this
(correctly) as a recoverable situation since the end result is the
required one. Hence this should be logged as a warning, not an error.

This may help with some of the gate failures:
https://bugs.launchpad.net/nova/+bug/1300279
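As a sketch of the proposed change (illustrative only, not nova's code): the disappearance is reported at warning level and treated as success.

```python
import logging

LOG = logging.getLogger('nova.virt.libvirt.driver')

def wait_for_destroy(lookup_domain, instance_name):
    """lookup_domain is assumed to raise LookupError once the
    domain is gone (a stand-in for libvirt's lookup error)."""
    try:
        lookup_domain(instance_name)
    except LookupError:
        # The domain vanishing is the end state we wanted, so this
        # is a recoverable situation: warn, don't error.
        LOG.warning("Instance %s disappeared while waiting for it "
                    "to be destroyed; treating as destroyed.",
                    instance_name)
        return True
    return False
```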

** Affects: nova
 Importance: Undecided
 Assignee: Phil Day (philip-day)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Phil Day (philip-day)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1336767

Title:
  Instance disappeared during wait for destroy should be a warning not
  an error

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently Nova logs an error if a libvirt domain disappears while
  waiting for it to be destroyed, but the code actually treats this
  (correctly) as a recoverable situation since the end result is the
  required one. Hence this should be logged as a warning, not an
  error.

  This may help with some of the gate failures:
  https://bugs.launchpad.net/nova/+bug/1300279

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1336767/+subscriptions



[Yahoo-eng-team] [Bug 1336829] [NEW] Action and action events not soft deleted

2014-07-02 Thread Phil Day
Public bug reported:

Entries in the instance_actions and instance_actions_events tables should
be soft deleted as part of the associated instance deletion.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1336829

Title:
  Action and action events not soft deleted

Status in OpenStack Compute (Nova):
  New

Bug description:
  Entries in the instance_actions and instance_actions_events tables
  should be soft deleted as part of the associated instance deletion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1336829/+subscriptions



[Yahoo-eng-team] [Bug 1327145] [NEW] rescue_instance RPC has reverted to passing a dict

2014-06-06 Thread Phil Day
Public bug reported:

Change https://review.openstack.org/#/c/62314/ mistakenly made the
conversion of the instance object in compute/rpcapi.py rescue_image()
unconditional on the RPC version. From 3.9 onwards this should be
passed as an object.

** Affects: nova
 Importance: Medium
 Assignee: Phil Day (philip-day)
 Status: New

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: (unassigned) => Phil Day (philip-day)

** Changed in: nova
 Milestone: None => juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327145

Title:
  rescue_instance RPC has reverted to passing a dict

Status in OpenStack Compute (Nova):
  New

Bug description:
  Change https://review.openstack.org/#/c/62314/ mistakenly made the
  conversion of the instance object in compute/rpcapi.py
  rescue_image() unconditional on the RPC version. From 3.9 onwards
  this should be passed as an object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327145/+subscriptions



[Yahoo-eng-team] [Bug 1319263] Re: ICE-HOUSE-GA:Nova conductor and Nova scheduler not stable when 200 compute connected to the controller.

2014-05-23 Thread Phil Day
Marking as Invalid since this is a MySQL mis-configuration error by the
user rather than a Nova bug.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1319263

Title:
  ICE-HOUSE-GA:Nova conductor and Nova scheduler not stable when 200
  compute connected to the  controller.

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Unable to boot more than ~117 VMs.
  The VMs remain in Build/Scheduling state and do not goto active state.

  | f367e216-decf-43c0-a5fe-26af0796235b | vm_one1-f367e216-decf-43c0-a5fe-26af0796235b | BUILD  | scheduling | NOSTATE |  |
  | f374c766-c715-4d39-97da-5baa47bd16b5 | vm_one1-f374c766-c715-4d39-97da-5baa47bd16b5 | BUILD  | scheduling | NOSTATE |  |
  | f391ce75-20fc-4bc1-93b0-0ff716eff044 | vm_one1-f391ce75-20fc-4bc1-93b0-0ff716eff044 | BUILD  | scheduling | NOSTATE |  |
  | f3e06663-b7b8-4933-a499-88d64dede80a | vm_one1-f3e06663-b7b8-4933-a499-88d64dede80a | BUILD  | scheduling | NOSTATE |  |
  | f3fb6f81-c1b7-4f8a-a27a-062a5af775de | vm_one1-f3fb6f81-c1b7-4f8a-a27a-062a5af775de | BUILD  | scheduling | NOSTATE |  |
  | f442ea70-fd60-42b1-aa61-782a1c5a21d9 | vm_one1-f442ea70-fd60-42b1-aa61-782a1c5a21d9 | BUILD  | scheduling | NOSTATE |  |
  | f4a2adae-fa84-4004-bfb8-40664a1fe29c | vm_one1-f4a2adae-fa84-4004-bfb8-40664a1fe29c | BUILD  | scheduling | NOSTATE |  |

  
  Here is what I am doing:
  I am creating VMs in a batch of 5 and providing a delay of 90 secs for each 
of the batch operation.

  However I am seeing this issue only when I connect around 200 compute
  nodes to controller. If I reduce the number of Compute nodes to around
  80 I m able to spawn more than 1000 VMs.

  So on the outset it looks like, the number of compute nodes connected
  to controller is playing a role in how many VMs can booted.

  Also pls. note that all the VMs being booted are part of single
  tenant.

  Below are the details of the setup on which the issue was observed:
  1. Server Config: 32CPU,32GB RAM,Ubuntu 12.04 LTS Server.
  2. Number of Compute nodes connected to Controller: 200.
  Following error msgs are being observed in the log files:

  found that inside the log it says "Operational error / Too many
  connections".

  "nova.servicegroup.drivers.db [-] model server went away"
  "2014-05-13 18:22:13.035 16245 ERROR nova.servicegroup.drivers.db [-] Recovered model server connection!"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1319263/+subscriptions



[Yahoo-eng-team] [Bug 1317100] [NEW] User data should be redacted when logging request spec

2014-05-07 Thread Phil Day
Public bug reported:

The filter scheduler has a Debug level log for the request spec, which
includes in the instance properties the base64 encoded user_data.

Since this may be used by the user to pass credentials into the VM this
field should be redacted in the log entry.

User data  is an opaque data blob as far as Nova is concerned (and hence
of no practical use for debugging).

** Affects: nova
 Importance: Undecided
 Assignee: Phil Day (philip-day)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Phil Day (philip-day)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1317100

Title:
  User data should be redacted when logging request spec

Status in OpenStack Compute (Nova):
  New

Bug description:
  The filter scheduler has a Debug level log for the request spec, which
  includes in the instance properties the base64 encoded user_data.

  Since this may be used by the user to pass credentials into the VM
  this field should be redacted in the log entry.

  User data  is an opaque data blob as far as Nova is concerned (and
  hence of no practical use for debugging).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1317100/+subscriptions



[Yahoo-eng-team] [Bug 1308544] [NEW] libvirt: Trying to delete a non-existing vif raises an exception

2014-04-16 Thread Phil Day
 nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File "/mnt/stack/nova/nova/virt/libvirt/vif.py", line 783, in unplug
2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]     self.unplug_ovs(instance, vif)
2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File "/mnt/stack/nova/nova/virt/libvirt/vif.py", line 667, in unplug_ovs
2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]     self.unplug_ovs_hybrid(instance, vif)
2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File "/mnt/stack/nova/nova/virt/libvirt/vif.py", line 661, in unplug_ovs_hybrid
2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]     v2_name)
2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File "/mnt/stack/nova/nova/network/linux_net.py", line 1318, in delete_ovs_vif_port
2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]     _ovs_vsctl(['del-port', bridge, dev])
2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File "/mnt/stack/nova/nova/network/linux_net.py", line 1302, in _ovs_vsctl
2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]     raise exception.AgentError(method=full_args)
2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] AgentError: Error during following call to agent: ['ovs-vsctl', '--timeout=120', 'del-port', 'br-int', u'qvo67a96e96-10']

** Affects: nova
 Importance: Undecided
 Assignee: Phil Day (philip-day)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = Phil Day (philip-day)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308544

Title:
  libvirt: Trying to delete a non-existing vif raises an exception

Status in OpenStack Compute (Nova):
  New

Bug description:
  If an instance fails during its network creation (for example if the
  network-vif-plugged event doesn't arrive in time) a subsequent delete
  will also fail when it tries to delete the vif, leaving the instance
  in an Error(deleting) state.

  This can be avoided by including the --if-exists option to the
  ovs-vsctl command.
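A sketch of the argument list with the suggested flag (the real call goes through nova's _ovs_vsctl / rootwrap path; this just shows where --if-exists fits in ovs-vsctl's syntax):

```python
def delete_ovs_vif_port_args(bridge, dev, timeout=120):
    # '--if-exists' makes del-port a no-op when the port was never
    # created, instead of exiting non-zero and raising AgentError.
    # Per-command options go after the '--' separator, before the
    # command name.
    return ['ovs-vsctl', '--timeout=%d' % timeout,
            '--', '--if-exists', 'del-port', bridge, dev]
```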

  Example of stack trace:

   2014-04-16 12:28:51.949 AUDIT nova.compute.manager [req-af72c100-5d9b-44f6-b941-3d72529b3401 demo demo] [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] Terminating instance
  2014-04-16 12:28:52.309 ERROR nova.virt.libvirt.driver [-] [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] During wait destroy, instance disappeared.
  2014-04-16 12:28:52.407 ERROR nova.network.linux_net [req-af72c100-5d9b-44f6-b941-3d72529b3401 demo demo] Unable to execute ['ovs-vsctl', '--timeout=120', 'del-port', 'br-int', u'qvo67a96e96-10']. Exception: Unexpected error while running command.
  Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl --timeout=120 del-port br-int qvo67a96e96-10
  Exit code: 1
  Stdout: ''
  Stderr: 'ovs-vsctl: no port named qvo67a96e96-10\n'
  2014-04-16 12:28:52.573 ERROR nova.compute.manager [req-af72c100-5d9b-44f6-b941-3d72529b3401 demo demo] [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] Setting instance vm_state to ERROR
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f] Traceback (most recent call last):
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File "/mnt/stack/nova/nova/compute/manager.py", line 2261, in do_terminate_instance
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]     self._delete_instance(context, instance, bdms, quotas)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File "/mnt/stack/nova/nova/hooks.py", line 103, in inner
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]     rv = f(*args, **kwargs)
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File "/mnt/stack/nova/nova/compute/manager.py", line 2231, in _delete_instance
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]     quotas.rollback()
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]   File "/mnt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-04-16 12:28:52.573 TRACE nova.compute.manager [instance: 3b7ac090-1ada-4beb-9e56-1ba3a6445e1f]     six.reraise(self.type_, self.value, self.tb

[Yahoo-eng-team] [Bug 1291515] [NEW] Recent Change to default state_path can silently break existing systems

2014-03-12 Thread Phil Day
Public bug reported:

The change to the default value of state_path introduced by
I94502bcfac8b372271acd0dbc1710c0e3009b8e1 should be reconsidered, for
the reasons set out in my -1 review of the same, which seem to have
been skipped when the change was accepted.

As implemented the change will break any existing systems that are using
the default value of state_path with no warning period, which goes beyond
the scope of change for UpgradeImpact.

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: High
 Assignee: Phil Day (philip-day)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Phil Day (philip-day)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291515

Title:
  Recent Change to default state_path can silently  break existing
  systems

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The change to the default value of state_path introduced by
  I94502bcfac8b372271acd0dbc1710c0e3009b8e1 should be reconsidered, for
  the reasons set out in my -1 review of the same, which seem to have
  been skipped when the change was accepted.

  As implemented the change will break any existing systems that are using
  the default value of state_path with no warning period, which goes beyond
  the scope of change for UpgradeImpact.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1291515/+subscriptions



[Yahoo-eng-team] [Bug 1284741] [NEW] EC2 metadata service doesn't account for request forwarding when using neutron metadata-proxy

2014-02-25 Thread Phil Day
Public bug reported:

When an EC2 metadata request is received via the neutron metadata proxy
Nova assumes that the X-Forwarded-For item in the header is the address
of the instance:

https://github.com/openstack/nova/blob/master/nova/api/metadata/handler.py#L149

In fact depending on the network path this could be a comma-separated
list of addresses, only the first element of which is the address of
the instance.

The correct handling should be something like:

remote_address = req.headers.get('X-Forwarded-For').split(',')[0]
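Expanded into a small sketch (the helper name is illustrative, not nova's code):

```python
def instance_address(headers):
    # X-Forwarded-For may be a comma-separated chain such as
    # '10.1.2.3, 172.16.0.9' when the request has crossed several
    # proxies; only the first entry is the address of the instance
    # that originated the request.
    forwarded = headers.get('X-Forwarded-For')
    if not forwarded:
        return None
    return forwarded.split(',')[0].strip()
```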

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284741

Title:
  EC2 metadata service doesn't account for request forwarding when using
  neutron metadata-proxy

Status in OpenStack Compute (Nova):
  New

Bug description:
  When an EC2 metadata request is received via the neutron metadata
  proxy Nova assumes that the X-Forwarded-For item in the header is the
  address of the instance:

  
https://github.com/openstack/nova/blob/master/nova/api/metadata/handler.py#L149

  In fact depending on the network path this could be a comma-separated
  list of addresses, only the first element of which is the address
  of the instance.

  The correct handling should be something like:

  remote_address = req.headers.get('X-Forwarded-For').split(',')[0]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1284741/+subscriptions



[Yahoo-eng-team] [Bug 1277204] Re: notifications no longer available in Nova

2014-02-12 Thread Phil Day
** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: oslo.messaging
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277204

Title:
  notifications no longer available in Nova

Status in OpenStack Compute (Nova):
  Invalid
Status in Messaging API for OpenStack:
  Invalid

Bug description:
  The recent change to oslo messaging seems to have removed the ability
  to specify a list of topics for notifications.   This is a critical
  feature for systems which provide multiple message streams for billing
  and monitoring.

  
  To reproduce:

  1) Create a devstack system

  2) Add the following lines to the [DEFAULT] section of nova.conf:
  notification_driver = nova.openstack.common.notifier.rpc_notifier
  notification_topics = notifications,monitor
  notify_on_state_change = vm_and_task_state
  notify_on_any_change = True
  instance_usage_audit = True
  instance_usage_audit_period = hour

  3) Restart all the n-* services

  4) Look at the info queues in rabbit
  sudo rabbitmqctl list_queues | grep info
  notifications.info  15

  5) Create an instance
  ubuntu@devstack-net-cache:/mnt/devstack$ nova boot --image 
cirros-0.3.1-x86_64-uec --flavor 1 phil2

  6) Look at the info queues in rabbit
  sudo rabbitmqctl list_queues | grep info
  notifications.info  17


  Messages are being added to the notifications queue, but not to the
  monitor queue

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1277204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277204] [NEW] notification topics no longer configurable

2014-02-06 Thread Phil Day
Public bug reported:

The recent change to oslo messaging seems to have removed the ability to
specify a list of topics for notifications.   This is a critical feature
for systems which provide multiple message streams for billing and
monitoring.


To reproduce:

1) Create a devstack system

2) Add the following lines to the [DEFAULT] section of nova.conf:
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_topics = notifications,monitor
notify_on_state_change = vm_and_task_state
notify_on_any_change = True
instance_usage_audit = True
instance_usage_audit_period = hour

3) Restart all the n-* services

4) Look at the info queues in rabbit
sudo rabbitmqctl list_queues | grep info
notifications.info  15

5) Create an instance
ubuntu@devstack-net-cache:/mnt/devstack$ nova boot --image 
cirros-0.3.1-x86_64-uec --flavor 1 phil2

6) Look at the info queues in rabbit
sudo rabbitmqctl list_queues | grep info
notifications.info  17


Messages are being added to the notifications queue, but not to the
monitor queue
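A plain-Python sketch (no oslo dependency; names are illustrative) of what `notification_topics = notifications,monitor` is expected to do: each notification is published once per configured topic, so both `<topic>.info` queues should grow together.

```python
# Illustrative fan-out: publish one payload to every configured topic,
# keyed as "<topic>.<priority>" the way the rabbit queues are named.
def publish(queues, topics, priority, payload):
    for topic in topics:
        queues.setdefault('%s.%s' % (topic, priority), []).append(payload)

queues = {}
publish(queues, ['notifications', 'monitor'], 'info',
        {'event_type': 'compute.instance.create.end'})
```

The bug is that after the oslo change only the first topic's queue grows.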

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277204

Title:
  notification topics no longer configurable

Status in OpenStack Compute (Nova):
  New

Bug description:
  The recent change to oslo messaging seems to have removed the ability
  to specify a list of topics for notifications.   This is a critical
  feature for systems which provide multiple message streams for billing
  and monitoring.

  
  To reproduce:

  1) Create a devstack system

  2) Add the following lines to the [DEFAULT] section of nova.conf:
  notification_driver = nova.openstack.common.notifier.rpc_notifier
  notification_topics = notifications,monitor
  notify_on_state_change = vm_and_task_state
  notify_on_any_change = True
  instance_usage_audit = True
  instance_usage_audit_period = hour

  3) Restart all the n-* services

  4) Look at the info queues in rabbit
  sudo rabbitmqctl list_queues | grep info
  notifications.info  15

  5) Create an instance
  ubuntu@devstack-net-cache:/mnt/devstack$ nova boot --image 
cirros-0.3.1-x86_64-uec --flavor 1 phil2

  6) Look at the info queues in rabbit
  sudo rabbitmqctl list_queues | grep info
  notifications.info  17


  Messages are being added to the notifications queue, but not to the
  monitor queue

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1277204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276203] [NEW] Period task interval config values need to be consistent

2014-02-04 Thread Phil Day
Public bug reported:

Currently we have a mix of “==0” and “<=0” being used inside periodic
tasks to decide whether to skip the task altogether.   We also have the
“spacing=” option in the periodic_task decorator to determine how often
to call the task, but in this case ==0 means “call at the default
interval” and <0 means “never call”. It would be nice to make these
consistent, so that all tasks can use the spacing option rather than keep
their own check on when (and if) they need to run.

However, in order to do this cleanly without breaking anyone who is
currently using “0” to mean “don’t run”, we need to:
-   Change the default values that are currently 0 to -1
-   Log a deprecation warning for the use of “*_interval=0”

and then leave this in place until after Juno before making the change.
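An illustrative sketch (not Nova's actual periodic_task code) of the consistent semantics proposed: a positive spacing runs every `spacing` seconds, zero runs at the default interval, and a negative spacing (the new -1 default) disables the task entirely.

```python
# Assumed default spacing, in seconds (value is illustrative).
DEFAULT_INTERVAL = 60

def effective_interval(spacing):
    """Map a spacing value to a call interval, or None for 'never run'."""
    if spacing < 0:
        return None                # task disabled
    if spacing == 0:
        return DEFAULT_INTERVAL    # fall back to the default interval
    return spacing                 # explicit interval in seconds
```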

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276203

Title:
  Period task interval config values need to be consistent

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently we have a mix of “==0” and “<=0” being used inside periodic
  tasks to decide whether to skip the task altogether.   We also have the
  “spacing=” option in the periodic_task decorator to determine how
  often to call the task, but in this case ==0 means “call at the default
  interval” and <0 means “never call”. It would be nice to make
  these consistent, so that all tasks can use the spacing option rather
  than keep their own check on when (and if) they need to run.

  However, in order to do this cleanly without breaking anyone who is
  currently using “0” to mean “don’t run”, we need to:
  - Change the default values that are currently 0 to -1
  - Log a deprecation warning for the use of “*_interval=0”

  and then leave this in place until after Juno before making the change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1276203/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274627] [NEW] Volume attach/detach should be blocked during some operations

2014-01-30 Thread Phil Day
Public bug reported:

Currently volume attach, detach, and swap check on vm_state but not
task_state.  This means that, for example, volume attach is allowed
during a reboot, rebuild, or migration.

  As with other operations, the check should be against a task state of
  None.
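A hedged sketch of the kind of guard the report asks for (an illustrative helper, not Nova's actual state-checking decorator): the operation should require `task_state` to be None as well as an acceptable `vm_state`.

```python
# Illustrative guard: allow volume attach only when no other task
# (reboot, rebuild, migration, ...) is in flight on the instance.
def can_attach_volume(vm_state, task_state):
    # the vm_state whitelist here is an assumption for illustration
    return vm_state in ('active', 'paused', 'stopped') and task_state is None
```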

** Affects: nova
 Importance: Undecided
 Assignee: Phil Day (philip-day)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Phil Day (philip-day)

** Changed in: nova
Milestone: None => icehouse-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274627

Title:
  Volume attach/detach should be blocked during some operations

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently volume attach, detach, and swap check on vm_state but not
  task_state.  This means that, for example, volume attach is allowed
  during a reboot, rebuild, or migration.

  As with other operations, the check should be against a task state of
  None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273266] [NEW] Error message is malformed when removing a non-existent security group from an instance

2014-01-27 Thread Phil Day
Public bug reported:

Trying to remove a security group from an instance which is not actually 
associated with the instance produces the following:
 
---
$nova remove-secgroup 71069945-5bea-4d53-b6ab-9026bfeebba4 phil

ERROR: [u'Security group %(security_group_name)s not assocaited with the
instance %(instance)s', {u'instance': u'71069945-5bea-4d53-b6ab-
9026bfeebba4', u'security_group_name': u'phil'}] (HTTP 404) (Request-ID:
req-a334b53d-e7cc-482c-9f1f-7bc61b8367e0)

---

The variables are not being populated correctly, and there is a typo:  
assocaited
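The garbled ERROR above is the raw message template printed alongside its argument dict instead of being interpolated. A minimal sketch of the intended %-formatting (with the spelling also fixed):

```python
# Interpolate the named arguments into the template the way the API
# layer should have before returning the 404 body.
template = ('Security group %(security_group_name)s not associated '
            'with the instance %(instance)s')
args = {'instance': '71069945-5bea-4d53-b6ab-9026bfeebba4',
        'security_group_name': 'phil'}
message = template % args
```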

** Affects: nova
 Importance: Low
 Assignee: Phil Day (philip-day)
 Status: New

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
 Assignee: (unassigned) => Phil Day (philip-day)

** Changed in: nova
Milestone: None => icehouse-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273266

Title:
  Error message is malformed when removing a non-existent security group
  from an instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  Trying to remove a security group from an instance which is not actually 
associated with the instance produces the following:
   
  ---
  $nova remove-secgroup 71069945-5bea-4d53-b6ab-9026bfeebba4 phil

  ERROR: [u'Security group %(security_group_name)s not assocaited with
  the instance %(instance)s', {u'instance': u'71069945-5bea-4d53-b6ab-
  9026bfeebba4', u'security_group_name': u'phil'}] (HTTP 404) (Request-
  ID: req-a334b53d-e7cc-482c-9f1f-7bc61b8367e0)

  ---

  The variables are not being populated correctly, and there is a typo:
   assocaited

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1196924] Re: Stop and Delete operations should give the Guest a chance to shutdown

2014-01-22 Thread Phil Day
The associated change was reverted as it extended the duration of the
gate too much

** Changed in: nova
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1196924

Title:
  Stop and Delete operations should give the Guest a chance to shutdown

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Currently in libvirt, stop and delete operations simply destroy the
  underlying VM. Some guest OSes do not react well to this type of
  power failure, and it would be better if these operations followed the
  same approach as a soft_reboot and gave the guest a chance to shut down
  gracefully.   Even where the VM is being deleted, it may be booted from a
  volume which will be reused on another server.
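An illustrative sketch of the proposed behaviour (hypothetical helpers, not the libvirt driver code): try a graceful shutdown first, and fall back to the old hard destroy only after a timeout. `FakeGuest` here is a stand-in for a libvirt domain, purely for demonstration.

```python
import time

class FakeGuest:
    """Stand-in for a libvirt domain, purely for illustration."""
    def __init__(self, stops_after=2):
        self._polls_left = stops_after
        self.destroyed = False
    def shutdown(self):
        pass  # would send an ACPI-style soft shutdown request
    def is_running(self):
        self._polls_left -= 1
        return self._polls_left > 0
    def destroy(self):
        self.destroyed = True  # hard power-off

def clean_shutdown(guest, timeout=5, check_interval=0.01):
    """Soft-shutdown the guest; hard-destroy only if it ignores us."""
    guest.shutdown()
    deadline = time.time() + timeout
    while time.time() < deadline:
        if not guest.is_running():
            return True   # guest powered off cleanly
        time.sleep(check_interval)
    guest.destroy()       # fall back to the old hard destroy
    return False
```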

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1196924/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258603] [NEW] tests still fail when libvirt not installed

2013-12-06 Thread Phil Day
Public bug reported:

https://review.openstack.org/#/c/60052/1 fixed a problem with the fake
libvirt driver by adding default arguments to the Connection class, but the
tests still fail sometimes because the default value for uri is only
accepted if a global in the fake driver is true:

 allow_default_uri_connection

Tests only fail sometimes, so I guess this global is affected by other
tests.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258603

Title:
  tests still fail when libvirt not installed

Status in OpenStack Compute (Nova):
  New

Bug description:
  https://review.openstack.org/#/c/60052/1 fixed a problem with the fake
  libvirt driver by adding default arguments to the Connection class, but
  the tests still fail sometimes because the default value for uri is
  only accepted if a global in the fake driver is true:

   allow_default_uri_connection

  Tests only fail sometimes, so I guess this global is affected by other
  tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1258603/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1238536] Re: POST with empty body results in 411 Error

2013-10-23 Thread Phil Day
Looks like the problem is specific to our network

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1238536

Title:
  POST with empty body results in 411 Error

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Some API commands don't need a body - for example allocating a
  floating IP.   However making a request without a body results in a
  411 error:

  curl -i https://compute.systestb.hpcloud.net/v2/21240759398822/os-floating-ips \
-H "Content-Type: application/xml" -H "Accept: application/xml" \
-H "X-Auth-Token: xxx" -X POST
  HTTP/1.1 411 Length Required
  nnCoection: close
  Content-Length: 284

  Fault Name: HttpRequestReceiveError
  Error Type: Default
  Description: Http request received failed
  Root Cause Code: -19013
  Root Cause : HTTP Transport: Couldn't determine the content length
  Binding State: CLIENT_CONNECTION_ESTABLISHED
  Service: null
  Endpoint: null

  
  Passing an Empty body works:
  curl -i https://compute.systestb.hpcloud.net/v2/21240759398822/os-floating-ips \
-H "Content-Type: application/xml" -H "Accept: application/xml" \
-H "X-Auth-Token: xxx" -X POST -d ''
  HTTP/1.1 200 OK
  Content-Length: 164
  Content-Type: application/xml; charset=UTF-8
  Date: Fri, 31 May 2013 11:13:26 GMT
  X-Compute-Request-Id: req-cc2ce740-6114-4820-8717-113ea1796142

  <?xml version='1.0' encoding='UTF-8'?>
  <floating_ip instance_id="None" ip="15.184.42.154" fixed_ip="None" 
id="3f9ce21c-d192-4478-8dd1-f7eb68d70133" pool="Ext-Net"/>

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1238536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1235351] Re: Metadata proxy queries the DB for each request

2013-10-04 Thread Phil Day
Actually, I'm not sure that caching the mapping would be safe in all
cases.   The caching period would have to be so short, to avoid the risk
of the port and address being re-assigned, that it probably wouldn't buy
that much.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1235351

Title:
  Metadata proxy queries the DB for each request

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The metadata agent has to add the Nova instance_id (i.e. the port
  device_id) to the query headers, which it does via a DB lookup.

  However, it is normal for a Nova VM to make a number of metadata
  queries in quick succession, especially during start-up, as cloud-init
  makes a separate request for each metadata item.    The
  network-port-device_id mapping is very stable over a short period of
  time, so this data could be cached to improve performance.
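To make the trade-off discussed (and ultimately rejected) in this thread concrete, a sketch of the caching idea as a simple TTL cache for the port-to-instance_id mapping (names are illustrative). The TTL is the whole problem: long enough to save DB round-trips, and a re-assigned port or address may be served stale.

```python
import time

class TTLCache:
    """Minimal TTL cache; entries expire `ttl_seconds` after insertion."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._data = {}

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self._data.get(key)
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]
        return None  # missing or expired

    def put(self, key, value, now=None):
        now = time.time() if now is None else now
        self._data[key] = (value, now)
```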

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1235351/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1190360] Re: Deleting an instance in the API layer does not send usage notifications

2013-06-24 Thread Phil Day
Agree this is covered by that fix - I think it was a problem in
stacktach that made me think it wasn't

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1190360

Title:
  Deleting an instance in the API layer does not send usage
  notifications

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  If the compute API layer decides to delete an instance itself rather
  than send the request to the compute host, for example because there
  is no host (not yet scheduled) or the host is down then the
  compute.delete.start and compute.delete.end notification messages are
  not send - which can be confusing for Billing and monitoring systems
  listening on the notification queue.

  Currently only an update message for the task state being set to
  Deleting  is generated.

  Although the API can't generate the usage data that the compute host
  would include it could at least send delete.start and delete.end
  messages.

  It could also, or maybe instead, set the vm_state to DELETED and
  generate the update event for that.
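An illustrative sketch (not the actual Nova fix) of the symmetry the report asks for: the API-layer "local delete" path should emit the same delete.start/delete.end pair the compute host would.

```python
# Hypothetical local-delete path: bracket the API-layer cleanup with
# the notifications billing/monitoring systems expect.
def local_delete(instance, notify):
    notify('compute.instance.delete.start', instance)
    # ... API-layer cleanup: DB records, quota, network deallocation ...
    notify('compute.instance.delete.end', instance)

events = []
local_delete({'uuid': 'fake-uuid'}, lambda event, payload: events.append(event))
```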

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1190360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1157923] Re: localfs file-injection does not `mkdir -p`

2013-06-19 Thread Phil Day
Thanks Bill for offering to pick this up - I agree that this should be
fixed.

** Changed in: nova
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1157923

Title:
  localfs file-injection does not `mkdir -p`

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  
  The VFS documentation for file-injection states that:
Append @content to the end of the file identified
  by @path, creating the file if it does not already
  exist

  https://github.com/openstack/nova/blob/master/nova/virt/disk/vfs/api.py#L85

  This wording is ambiguous as to whether non-existent parent
  directories will be created by file-injection.

  
  Say that one tries to inject a file to /nonexistent/foo

  - guestfs creates the directory /nonexistent, and injects.
  - localfs fails with a no such file error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1157923/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1161472] Re: Nova-Quantum SecurityGroup API should enforce unique group names

2013-06-19 Thread Phil Day
The example you used was passing in a security group ID (which will
always be unique), but the create call allows either a name or an ID to
be passed in.  In the case of a name being used, the bug is still
valid.

** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1161472

Title:
   Nova-Quantum SecurityGroup API should enforce unique group names

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  The API calls create instance boot and add Instance to Security Group
  accept a security group name rather than an ID.

  In Nova Security Group Names are constrained to be unique.

  In Quantum Security Group Names are not constrained to be unique - so
  if two groups are created with the same name it becomes impossible to
  add instances to them via the Nova API.

  
  To maintain backwards compatibility with Nova Security Groups, and to avoid 
issues during Instance Creation or when adding Instances to a Security Group, 
the NovaQuantumSecurityGroupAPI should enforce uniqueness of Group Names.  This 
will provide consistency for users of the Nova API (it will still be possible 
to break this model by creating SecGroups with non-unique names in Quantum).

  
  The longer term solution would be for the Nova API to work with SecurityGroup 
IDs (which are always unique) rather than Names.

  Forcing Quantum (which is already using uuids for Security groups) to
  also impose unique names to satisfy Nova does not feel like a good
  fix.
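A hedged sketch (hypothetical helper) of why unique names matter for the name-or-ID lookup described above: an ID always resolves, but a name is only usable if exactly one group carries it.

```python
# Illustrative resolver: exact ID match wins; otherwise the name must
# match exactly one group, or the lookup is ambiguous.
def resolve_group(groups, name_or_id):
    by_id = [g for g in groups if g['id'] == name_or_id]
    if by_id:
        return by_id[0]
    by_name = [g for g in groups if g['name'] == name_or_id]
    if len(by_name) != 1:
        raise ValueError('Security group name %r matches %d groups'
                         % (name_or_id, len(by_name)))
    return by_name[0]
```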

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1161472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1161473] Re: API to add instance to a SecGroup should take ID not Name

2013-06-19 Thread Phil Day
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1161473

Title:
  API to add instance to a SecGroup should take ID not Name

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The API calls create instance boot and add Instance to Security Group
  accept a security group name rather than an ID.

  In Quantum Security Group Names are not constrained to be unique - so
  if two groups are created with the same name it becomes impossible to
  add instances to them via this API.

  I think the right way to fix this is to change the Nova API to take
  security group IDs instead of names.

  
  Note there are two related bugs which can be fixed independently in advance 
of this API change:

  
  - Make NovaQuantumSecurityGroupAPI validate that SecGroup names are unique 
within a project when it creates them  (fixes the issue for users of just the 
Nova API, and makes the behavior consistent with Nova Security Groups)
  https://bugs.launchpad.net/nova/+bug/1161472

  
  - Nova should pass back a more meaningful message if Quantum finds multiple 
SecGroups with the same name
  https://bugs.launchpad.net/nova/+bug/1161467

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1161473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1185367] Re: AZ should be validated during instance create at the api layer

2013-06-03 Thread Phil Day
Marking this back to New because I simply don't agree with the above
analysis.

Personally I doubt that the intention of the force-host capability was to
allow an invalid AZ name by design - but if there really is a need to
be able to specify an invalid AZ name when forcing to a particular host
(a privileged operation), then it would be possible to make the API
validation skip the check for a valid AZ if the name contains a ":".

That is not a reason to reject making a significant improvement to the
common use case.

** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1185367

Title:
  AZ should be validated during instance create at the api layer

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently the compute API layer does not do any validation of the
  availability zone, so a request like:

  nova boot --availability_zone foobar

  will be accepted, sent to the scheduler, and the instance will go to
  error.

  There is already code in compute/api.py  which processes  the
  availability_zone  value

  def _handle_availability_zone(availability_zone)
  ...

  
  So it seems like there should be some basic validation added to check that 
the zone exists and is available
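A hedged sketch of the validation suggested (a hypothetical helper, not the actual compute/api.py code). Per the follow-up comment in this thread, the privileged "az:host" force-host form could be exempted from the check.

```python
# Illustrative API-layer check: reject unknown AZ names up front
# instead of letting the scheduler fail the instance into ERROR.
def validate_az(availability_zone, known_zones):
    if availability_zone is None or ':' in availability_zone:
        return  # unset, or privileged force-host form: skip validation
    if availability_zone not in known_zones:
        raise ValueError('Invalid availability zone: %s' % availability_zone)
```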

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1185367/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1111498] Re: instance.update notifications don't always identify the service

2013-03-14 Thread Phil Day
https://review.openstack.org/#/c/20919/

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1111498

Title:
  instance.update notifications don't always identify the service

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  instance_update in conductor/manager.py doesn't identify the service
  when calling notifications.send_update(), resulting in messages on the
  notification queue having a service value of None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1111498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp