[Yahoo-eng-team] [Bug 1277029] [NEW] cannot distinguish error from no port in ovs_lib list operation

2014-02-06 Thread Akihiro Motoki
Public bug reported:

The list operations in ovs_lib return an empty list even if a RuntimeError
occurs. As a result, when a RuntimeError occurs, a caller thinks all OVS
ports have disappeared.
ovs-vsctl sometimes fails (mostly raising an alarm error?).
ovs_lib should provide a way to distinguish these two situations.

List operations in ovs_lib:
- get_vif_port_set (used by OVS agent and ryu agent)
- get_vif_ports (used by NEC agent, OVS cleanup)
- get_bridge (used by OVS agent, OVS cleanup)

It affects all agents using the above list operations.

In the OVS agent and NEC agent it triggers unexpected port
deletions.

In OVS cleanup there is no negative effect; it just does nothing in this
case.
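
One possible shape of a fix, sketched against the ovs_lib method names
above (a hedged sketch, not the actual patch): let the RuntimeError
propagate instead of swallowing it into an empty result, so callers can
tell a failed ovs-vsctl run from an empty bridge:

    # Sketch of an OVSBridge method; get_port_name_list() and db_get_map()
    # are existing ovs_lib helpers.
    def get_vif_port_set(self):
        edge_ports = set()
        # get_port_name_list() runs "ovs-vsctl list-ports"; if it raises
        # RuntimeError, let it propagate instead of returning an empty
        # set that the agent reads as "all ports are gone".
        port_names = self.get_port_name_list()
        for name in port_names:
            external_ids = self.db_get_map('Interface', name, 'external_ids')
            if 'iface-id' in external_ids and 'attached-mac' in external_ids:
                edge_ports.add(external_ids['iface-id'])
        return edge_ports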

** Affects: neutron
 Importance: Medium
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: nec ovs ryu

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1277029

Title:
  cannot distinguish error from no port in ovs_lib list operation

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The list operations in ovs_lib return an empty list even if a
  RuntimeError occurs. As a result, when a RuntimeError occurs, a caller
  thinks all OVS ports have disappeared.
  ovs-vsctl sometimes fails (mostly raising an alarm error?).
  ovs_lib should provide a way to distinguish these two situations.

  List operations in ovs_lib:
  - get_vif_port_set (used by OVS agent and ryu agent)
  - get_vif_ports (used by NEC agent, OVS cleanup)
  - get_bridge (used by OVS agent, OVS cleanup)

  It affects all agents using the above list operations.

  In the OVS agent and NEC agent it triggers unexpected port
  deletions.

  In OVS cleanup there is no negative effect; it just does nothing in
  this case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1277029/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277027] [NEW] test_admin_delete_servers_of_others failure due to unexpected task state

2014-02-06 Thread Steven Hardy
Public bug reported:

I couldn't find an existing bug for this, apologies if it's a dupe,
looks like a nova bug:


https://review.openstack.org/#/c/70717/
http://logs.openstack.org/17/70717/1/gate/gate-tempest-dsvm-full/9adaf90/console.html

2014-02-06 08:04:16.350 | 2014-02-06 07:48:25,729 Response Body: {"server":
{"status": "ERROR", "os-access-ips:access_ip_v6": "", "updated":
"2014-02-06T07:48:25Z", "os-access-ips:access_ip_v4": "", "addresses": {},
"links": [{"href":
"http://127.0.0.1:8774/v3/servers/1cbd2e01-d2f3-45a0-baee-1d16df3dc7fd",
"rel": "self"}, {"href":
"http://127.0.0.1:8774/servers/1cbd2e01-d2f3-45a0-baee-1d16df3dc7fd", "rel":
"bookmark"}], "os-extended-status:task_state": null, "key_name": null,
"image": {"id": "7e649e07-3cea-4d95-90b2-2bbea7fce698", "links": [{"href":
"http://23.253.79.233:9292/images/7e649e07-3cea-4d95-90b2-2bbea7fce698",
"rel": "bookmark"}]}, "os-pci:pci_devices": [],
"os-extended-availability-zone:availability_zone": "nova",
"os-extended-status:power_state": 0, "os-config-drive:config_drive": "",
"host_id": "10f0dc42e72572ed6d30e8dc32b41edc1d41a3dacda6571c5aeabe6e",
"flavor": {"id": "42", "links": [{"href": "http://127.0.0.1:8774/flavors/42",
"rel": "bookmark"}]}, "id": "1cbd2e01-d2f3-45a0-baee-1d16df3dc7fd",
"security_groups": [{"name": "default"}], "user_id":
"f2262ed0a64c43359867456cfbccc153", "name":
"ServersAdminV3Test-instance-1705049062", "created": "2014-02-06T07:48:21Z",
"tenant_id": "8e932e11471a469e85a30195b2198f63",
"os-extended-status:vm_state": "error", "os-server-usage:launched_at": null,
"os-extended-volumes:volumes_attached": [], "os-server-usage:terminated_at":
null, "os-extended-status:locked_by": null, "fault": {"message": "No valid
host was found. ", "code": 500, "created": "2014-02-06T07:48:25Z"},
"metadata": {}}}
2014-02-06 08:04:16.350 | }}}
2014-02-06 08:04:16.350 | 
2014-02-06 08:04:16.350 | Traceback (most recent call last):
2014-02-06 08:04:16.350 |   File "tempest/api/compute/v3/admin/test_servers.py", line 85, in test_admin_delete_servers_of_others
2014-02-06 08:04:16.350 |     self.servers_client.wait_for_server_termination(server['id'])
2014-02-06 08:04:16.351 |   File "tempest/services/compute/v3/json/servers_client.py", line 179, in wait_for_server_termination
2014-02-06 08:04:16.351 |     raise exceptions.BuildErrorException(server_id=server_id)
2014-02-06 08:04:16.351 | BuildErrorException: Server 1cbd2e01-d2f3-45a0-baee-1d16df3dc7fd failed to build and is in ERROR status
2014-02-06 08:04:16.351 | 
2014-02-06 08:04:16.351 | 
2014-02-06 08:04:16.351 | ======================================================================
2014-02-06 08:04:16.351 | FAIL: process-returncode
2014-02-06 08:04:16.351 | process-returncode
2014-02-06 08:04:16.351 | ----------------------------------------------------------------------
2014-02-06 08:04:16.352 | _StringException: Binary content:
2014-02-06 08:04:16.352 |   traceback (text/plain; charset=utf8)
2014-02-06 08:04:16.352 | 
2014-02-06 08:04:16.352 | 
2014-02-06 08:04:16.352 | ----------------------------------------------------------------------
2014-02-06 08:04:16.352 | Ran 2101 tests in 2350.793s
2014-02-06 08:04:16.353 | 
2014-02-06 08:04:16.353 | FAILED (failures=2, skipped=130)
2014-02-06 08:04:16.353 | ERROR: InvocationError: '/bin/bash 
tools/pretty_tox.sh 
(?!.*\\[.*\\bslow\\b.*\\])(^tempest\\.(api|scenario|thirdparty|cli)) 
--concurrency=2'
2014-02-06 08:04:16.354 | ___ summary 

2014-02-06 08:04:16.354 | ERROR:   full: commands failed
2014-02-06 08:04:16.463 | Checking logs...
2014-02-06 08:04:16.562 | Log File: n-net
2014-02-06 08:04:16.562 | 2014-02-06 07:34:40.598 30086 ERROR 
oslo.messaging._executors.base [-] Exception during message handling
2014-02-06 08:04:16.562 | 
2014-02-06 08:04:16.563 | 2014-02-06 07:34:40.601 30086 ERROR 
oslo.messaging._drivers.common [-] Returning exception Instance 
fcfcbace-c4dd-4214-957f-b01e0b47fcf4 could not be found.
2014-02-06 08:04:16.563 | 
2014-02-06 08:04:16.563 | 2014-02-06 07:34:40.601 30086 ERROR oslo.messaging._drivers.common [-] ['Traceback (most recent call last):\n', '  File "/opt/stack/new/oslo.messaging/oslo/messaging/_executors/base.py", line 36, in _dispatch\n    incoming.reply(self.callback(incoming.ctxt, incoming.message))\n', '  File "/opt/stack/new/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 134, in __call__\n    return self._dispatch(endpoint, method, ctxt, args)\n', '  File "/opt/stack/new/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 104, in _dispatch\n    result = getattr(endpoint, method)(ctxt, **new_args)\n', '  File "/opt/stack/new/nova/nova/network/floating_ips.py", line 117, in allocate_for_instance\n    **kwargs)\n', '  File "/opt/stack/new/nova/nova/network/manager.py", line 521, in allocate_for_instance\n    host)\n', '  File "/opt/stack/new/oslo.messaging/oslo/messaging/rpc/server.py", line 153, in inner\n    return func(*args, **kwargs)\n', '  File "/opt/stack/new/nova/nova/network/manager.py", line 579, in get_instance_nw_info\n

[Yahoo-eng-team] [Bug 1277051] [NEW] neutron run_test.sh has unused flag --hide-elapsed

2014-02-06 Thread Sascha Peilicke
Public bug reported:

The flag is unused and should be dropped.

** Affects: neutron
 Importance: Undecided
 Assignee: Sascha Peilicke (saschpe)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1277051

Title:
  neutron run_test.sh has unused flag --hide-elapsed

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The flag is unused and should be dropped.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1277051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277054] [NEW] Poll rescued instances fails with key error

2014-02-06 Thread John Garbutt
Public bug reported:

After an instance has been in the rescue state for some time, a periodic
task (_poll_rescued_instances) triggers to unrescue the instance:

File "nova/notifications.py", in info_from_instance
  instance_type = flavors.extract_flavor(instance_ref)
File "nova/compute/flavors.py", in extract_flavor
  instance_type[key] = type_fn(sys_meta[type_key])
KeyError: 'instance_type_memory_mb'

This then continues to happen on every run of the periodic task, and
starts to fill up the DB with instance faults.
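
A minimal standalone illustration of the failure mode (names simplified,
not the nova code): extract_flavor rebuilds the flavor from
system_metadata keys prefixed with "instance_type_", and plain indexing
on a missing key raises KeyError:

    # Rescued instances can end up with incomplete system_metadata.
    sys_meta = {'instance_type_id': '6'}  # 'instance_type_memory_mb' absent
    prefix = 'instance_type_'
    instance_type = {}
    for key, type_fn in (('id', int), ('memory_mb', int)):
        # The second iteration raises KeyError: 'instance_type_memory_mb',
        # which the periodic task does not handle.
        instance_type[key] = type_fn(sys_meta[prefix + key])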

** Affects: nova
 Importance: Medium
 Assignee: John Garbutt (johngarbutt)
 Status: In Progress


** Tags: compute

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277054

Title:
  Poll rescued instances fails with key error

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  After an instance has been in the rescue state for some time, a periodic
  task (_poll_rescued_instances) triggers to unrescue the instance:

  File "nova/notifications.py", in info_from_instance
    instance_type = flavors.extract_flavor(instance_ref)
  File "nova/compute/flavors.py", in extract_flavor
    instance_type[key] = type_fn(sys_meta[type_key])
  KeyError: 'instance_type_memory_mb'

  This then continues to happen on every run of the periodic task, and
  starts to fill up the DB with instance faults.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1277054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1206345] Re: V3 Api does not provide a means of getting certificates

2014-02-06 Thread Dolph Mathews
** Also affects: python-keystoneclient
   Importance: Undecided
   Status: New

** Changed in: python-keystoneclient
   Status: New => Triaged

** Changed in: python-keystoneclient
   Importance: Undecided => Wishlist

** Changed in: keystone
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1206345

Title:
  V3 Api does not provide a means of getting certificates

Status in OpenStack Identity (Keystone):
  Invalid
Status in Python client library for Keystone:
  Triaged

Bug description:
  Token signing and ca certificates are accessible at
  /v2/certificates/{signing,ca}; however, there is no way to attain these
  certificates using only the v3 API. This prevents the v3 API from being
  deployed on its own.
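
  For reference, a hedged example of the v2-only endpoints referred to
  above (the host, port and exact /v2.0 prefix are assumptions):

    import requests

    base = 'http://keystone.example.com:5000/v2.0'  # placeholder endpoint
    signing_cert = requests.get(base + '/certificates/signing').text
    ca_cert = requests.get(base + '/certificates/ca').text
    # No equivalent paths exist under /v3, per this report.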

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1206345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277068] [NEW] Multiple db calls in some quota methods

2014-02-06 Thread Liyingjun
Public bug reported:

In nova/quota.py, db.quota_get_all_by_project is called twice, in
limit_check() (https://github.com/openstack/nova/blob/master/nova/quota.py#L356)
and in reserve() (https://github.com/openstack/nova/blob/master/nova/quota.py#L424).

db.quota_get_all_by_project_and_user is called twice in method
get_settable_quotas():
https://github.com/openstack/nova/blob/master/nova/quota.py#L272
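
A minimal sketch of the obvious fix (the project_quotas keyword is
hypothetical, not necessarily the merged patch): issue the query once and
hand the rows to every helper that previously re-queried:

    def limit_check(self, context, resources, values, project_id=None):
        # One db round trip instead of two...
        project_quotas = db.quota_get_all_by_project(context, project_id)
        # ...with the rows passed down to the quota-resolution helper.
        return self._get_quotas(context, resources, values.keys(),
                                project_quotas=project_quotas)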

** Affects: nova
 Importance: Undecided
 Assignee: Liyingjun (liyingjun)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Liyingjun (liyingjun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277068

Title:
  Multiple db calls in some quota methods

Status in OpenStack Compute (Nova):
  New

Bug description:
  In nova/quota.py, db.quota_get_all_by_project is called twice, in
  limit_check() (https://github.com/openstack/nova/blob/master/nova/quota.py#L356)
  and in reserve() (https://github.com/openstack/nova/blob/master/nova/quota.py#L424).

  db.quota_get_all_by_project_and_user is called twice in method
  get_settable_quotas():
  https://github.com/openstack/nova/blob/master/nova/quota.py#L272

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1277068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276694] Re: Openstack services should support SIGHUP signal

2014-02-06 Thread Bogdan Dobrelya
Glance is affected, see
https://github.com/openstack/glance/blob/master/glance/openstack/common/service.py

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1276694

Title:
  Openstack services should support SIGHUP signal

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Invalid

Bug description:
  In order to more effectively manage unlinked-but-open (lsof +L1)
  log file descriptors without restarting the services, the SIGHUP signal
  should be accepted by every OpenStack service.

  That would allow, e.g., logrotate jobs to gracefully HUP services after
  their log files are rotated. The only option we have for now is to
  force a service restart, quite a poor option from the service
  availability point of view.

  Note: according to http://en.wikipedia.org/wiki/Unix_signal, SIGHUP:
  "... Many daemons will reload their configuration files and reopen
  their logfiles instead of exiting when receiving this signal."
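
  A minimal sketch of the expected handler (the reload/reopen helpers are
  hypothetical placeholders, not oslo code):

    import signal

    def _reload_config():       # hypothetical helper
        pass

    def _reopen_log_files():    # hypothetical helper
        pass

    def _sighup(signum, frame):
        # Reload configuration and reopen rotated log files instead of
        # exiting, so a logrotate postrotate HUP is safe.
        _reload_config()
        _reopen_log_files()

    signal.signal(signal.SIGHUP, _sighup)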

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1276694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276694] Re: Openstack services should support SIGHUP signal

2014-02-06 Thread Bogdan Dobrelya
murano-api, murano-conductor are affected, see

https://github.com/stackforge/murano-api/blob/master/muranoapi/openstack/common/service.py

https://github.com/stackforge/murano-conductor/blob/master/muranoconductor/openstack/common/service.py


** Also affects: murano
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1276694

Title:
  Openstack services should support SIGHUP signal

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Murano Project:
  New
Status in Oslo - a Library of Common OpenStack Code:
  Invalid

Bug description:
  In order to more effectively manage unlinked-but-open (lsof +L1)
  log file descriptors without restarting the services, the SIGHUP signal
  should be accepted by every OpenStack service.

  That would allow, e.g., logrotate jobs to gracefully HUP services after
  their log files are rotated. The only option we have for now is to
  force a service restart, quite a poor option from the service
  availability point of view.

  Note: according to http://en.wikipedia.org/wiki/Unix_signal, SIGHUP:
  "... Many daemons will reload their configuration files and reopen
  their logfiles instead of exiting when receiving this signal."

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1276694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277103] [NEW] Filter always returns last created volume as result irrespective of search string until user re-navigates to the volume page

2014-02-06 Thread Erasmo Isotton
Public bug reported:

Filter always returns last created volume as result irrespective of
search string until user re-navigates to the volume page.

Steps to reproduce the problem:
1) Add some volumes (test1, test2, test3, blabla and test4)
2) Using the volumes filter, filter by "blabla"
3) The result is 2 items, "blabla" and "test4", instead of only "blabla".

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1277103

Title:
  Filter always returns last created volume as result irrespective of
  search string until user re-navigates to the volume page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Filter always returns last created volume as result irrespective of
  search string until user re-navigates to the volume page.

  Steps to reproduce the problem:
  1) Add some volumes (test1, test2, test3, blabla and test4)
  2) Using the volumes filter, filter by "blabla"
  3) The result is 2 items, "blabla" and "test4", instead of only "blabla".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1277103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277108] [NEW] Quota fields differ between Update Default Quotas dialog and Quota tab of Create Project dialog in the portal

2014-02-06 Thread Erasmo Isotton
Public bug reported:

The list of values for which quotas may be set differs between the
Update Default Quotas dialog (Admin -> Defaults) and the Quota tab of
the Create Project dialog (Admin -> Projects -> Create Project) in
the Horizon portal. The fields "Injected File Path Bytes" and "Key Pairs"
appear only in the Update Default Quotas dialog. The fields "Security
Groups", "Security Group Rules", "Floating IPs", "Networks", "Ports",
"Routers", and "Subnets" appear only in the Quota tab of Create Project.
I believe these lists should be the same, or possibly the Create Project
list should be a subset of the Update Default Quotas list.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1277108

Title:
  Quota fields differ between Update Default Quotas dialog and Quota tab
  of Create Project dialog in the portal

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The list of values for which quotas may be set differs between the
  Update Default Quotas dialog (Admin -> Defaults) and the Quota tab
  of the Create Project dialog (Admin -> Projects -> Create Project)
  in the Horizon portal. The fields "Injected File Path Bytes" and "Key
  Pairs" appear only in the Update Default Quotas dialog. The fields
  "Security Groups", "Security Group Rules", "Floating IPs", "Networks",
  "Ports", "Routers", and "Subnets" appear only in the Quota tab of
  Create Project. I believe these lists should be the same, or possibly
  the Create Project list should be a subset of the Update Default
  Quotas list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1277108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1275771] Re: Notifications do not work: AttributeError: 'RequestContext' object has no attribute 'iteritems'

2014-02-06 Thread Mark McLoughlin
Julien figured out this was a bug in Nova.

We still want to make it easier in oslo.messaging to catch this issue,
though.
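
A minimal reproduction of the type mix-up (a simplified stand-in, not
nova's context class): on Python 2, six.iteritems(d) calls d.iteritems(),
which only dicts have:

    import six

    class RequestContext(object):       # simplified stand-in
        def to_dict(self):
            return {'user_id': 'u1'}

    ctxt = RequestContext()
    list(six.iteritems(ctxt.to_dict()))  # fine: iterates the dict
    list(six.iteritems(ctxt))            # AttributeError: no 'iteritems'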

** Changed in: oslo.messaging
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1275771

Title:
  Notifications do not work: AttributeError: 'RequestContext' object has
  no attribute 'iteritems'

Status in OpenStack Compute (Nova):
  In Progress
Status in Messaging API for OpenStack:
  Invalid

Bug description:
  When enabling notification with notification_driver = messaging, I get
  the following:

  
  2014-02-03 14:20:41.152 ERROR oslo.messaging.notify._impl_messaging [-] Could 
not send notification to notifications. Payload={'priority': 'INFO', 
'_unique_id': 'da748b32fd144c25adc45ba5b393339d', 'event_type': 
'compute.instance.create.end', 'timestamp': '2014-02-03 14:20:41.151419', 
'publisher_id': 'compute.devstack', 'payload': {'node': u'devstack', 
'state_description': '', 'ramdisk_id': u'37ad58df-c587-4bed-9062-9428ca14eaf0', 
'created_at': '2014-02-03 14:20:33+00:00', 'access_ip_v6': None, 'disk_gb': 0, 
'availability_zone': u'nova', 'terminated_at': '', 'ephemeral_gb': 0, 
'instance_type_id': 6, 'instance_flavor_id': '42', 'image_name': 
u'cirros-0.3.1-x86_64-uec', 'host': u'devstack', 'fixed_ips': 
[FixedIP({'version': 4, 'floating_ips': [], 'label': u'private', 'meta': {}, 
'address': u'10.0.0.2', 'type': u'fixed'})], 'user_id': 
u'6bcbc8f54d65473c9a0c4a55f64fb580', 'message': u'Success', 'deleted_at': '', 
'reservation_id': u'r-jycyyveh', 'image_ref_url': u'http://162.209.87.220:9
 292/images/7b8d712a-fb31-43b8-8a05-a74d70fd8a11', 'memory_mb': 64, 'root_gb': 
0, 'display_name': u'dwq', 'instance_type': 'm1.nano', 'tenant_id': 
u'cda1741ff4ef47f48fb3d9d76e302add', 'access_ip_v4': None, 'hostname': u'dwq', 
'vcpus': 1, 'instance_id': '272c2ec6-bb98-4e84-9377-84c63c7a9ce9', 'kernel_id': 
u'5d1a6130-0e6a-4155-9c05-0174a654da68', 'state': u'active', 'image_meta': 
{u'kernel_id': u'5d1a6130-0e6a-4155-9c05-0174a654da68', u'container_format': 
u'ami', u'min_ram': u'0', u'ramdisk_id': 
u'37ad58df-c587-4bed-9062-9428ca14eaf0', u'disk_format': u'ami', u'min_disk': 
u'0', u'base_image_ref': u'7b8d712a-fb31-43b8-8a05-a74d70fd8a11'}, 
'architecture': None, 'os_type': None, 'launched_at': 
'2014-02-03T14:20:41.070490', 'metadata': {}}, 'message_id': 
'03b2985a-6bcd-44ff-8303-29618d3c2b01'}
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging Traceback (most recent call last):
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging   File "/opt/stack/oslo.messaging/oslo/messaging/notify/_impl_messaging.py", line 47, in notify
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging     version=self.version)
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging   File "/opt/stack/oslo.messaging/oslo/messaging/transport.py", line 93, in _send_notification
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging     self._driver.send_notification(target, ctxt, message, version)
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging   File "/opt/stack/oslo.messaging/oslo/messaging/_drivers/amqpdriver.py", line 393, in send_notification
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging     return self._send(target, ctxt, message, envelope=(version == 2.0))
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging   File "/opt/stack/oslo.messaging/oslo/messaging/_drivers/amqpdriver.py", line 362, in _send
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging     rpc_amqp.pack_context(msg, context)
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging   File "/opt/stack/oslo.messaging/oslo/messaging/_drivers/amqp.py", line 299, in pack_context
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging     context_d = six.iteritems(context.to_dict())
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging   File "/usr/local/lib/python2.7/dist-packages/six.py", line 484, in iteritems
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging     return iter(getattr(d, _iteritems)(**kw))
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging AttributeError: 'RequestContext' object has no attribute 'iteritems'
  2014-02-03 14:20:41.152 TRACE oslo.messaging.notify._impl_messaging

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1275771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274121] Re: test_rescued_vm_add_remove_security_group frequently failing

2014-02-06 Thread Attila Fazekas
** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
   Importance: Undecided => Critical

** Changed in: tempest
 Assignee: (unassigned) => Attila Fazekas (afazekas)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274121

Title:
  test_rescued_vm_add_remove_security_group frequently failing

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  New

Bug description:
  This test is not gating with neutron at the moment, but I see it is
  failing very frequently, it is part of the full tempest run, but not
  part of the smoke runs.

  The tempest exception and request is attached.

  Exception:
  NotFound: Object not found
  Details: <itemNotFound code="404" xmlns="http://docs.openstack.org/compute/api/v1.1"><message>instance_id a3123898-504e-466d-83a8-4e40a6d6e96f could not be found as device id on any ports</message></itemNotFound>

  
  n-api:
  2014-01-28 03:35:09.605 DEBUG nova.api.openstack.wsgi [req-7faf904f-a376-4d00-b86e-eedf33fd1d3c ServerRescueTestXML-tempest-660237916-user ServerRescueTestXML-tempest-660237916-tenant] Action: 'action', body: <?xml version="1.0" encoding="UTF-8"?>
  <addSecurityGroup xmlns="http://docs.openstack.org/compute/api/v1.1" name="sg-tempest-801012576"/> _process_stack /opt/stack/new/nova/nova/api/openstack/wsgi.py:953
  2014-01-28 03:35:09.605 DEBUG nova.api.openstack.wsgi [req-7faf904f-a376-4d00-b86e-eedf33fd1d3c ServerRescueTestXML-tempest-660237916-user ServerRescueTestXML-tempest-660237916-tenant] Calling method <bound method SecurityGroupActionController._addSecurityGroup of <nova.api.openstack.compute.contrib.security_groups.SecurityGroupActionController object at 0x373ce10>> _process_stack /opt/stack/new/nova/nova/api/openstack/wsgi.py:954
  2014-01-28 03:35:09.656 27500 DEBUG neutronclient.client [-]
  REQ: curl -i "http://192.168.1.50:9696/v2.0/security-groups.json?fields=id&name=sg-tempest-801012576" -X GET -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient"
   http_log_req /opt/stack/new/python-neutronclient/neutronclient/common/utils.py:176
  2014-01-28 03:35:09.672 27500 DEBUG neutronclient.client [-] RESP:{'date': 'Tue, 28 Jan 2014 03:35:09 GMT', 'status': '200', 'content-length': '69', 'content-type': 'application/json; charset=UTF-8', 'content-location': 'http://192.168.1.50:9696/v2.0/security-groups.json?fields=id&name=sg-tempest-801012576'} {"security_groups": [{"id": "0e43fe78-0dba-415b-b87e-f3a37d0f7cf9"}]}
   http_log_resp /opt/stack/new/python-neutronclient/neutronclient/common/utils.py:182
  2014-01-28 03:35:09.672 27500 DEBUG neutronclient.client [-]
  REQ: curl -i "http://192.168.1.50:9696/v2.0/ports.json?device_id=a3123898-504e-466d-83a8-4e40a6d6e96f" -X GET -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient"
   http_log_req /opt/stack/new/python-neutronclient/neutronclient/common/utils.py:176
  2014-01-28 03:35:09.689 27500 DEBUG neutronclient.client [-] RESP:{'date': 'Tue, 28 Jan 2014 03:35:09 GMT', 'status': '200', 'content-length': '13', 'content-type': 'application/json; charset=UTF-8', 'content-location': 'http://192.168.1.50:9696/v2.0/ports.json?device_id=a3123898-504e-466d-83a8-4e40a6d6e96f'} {"ports": []}
   http_log_resp /opt/stack/new/python-neutronclient/neutronclient/common/utils.py:182
  2014-01-28 03:35:09.690 INFO nova.api.openstack.wsgi 
[req-7faf904f-a376-4d00-b86e-eedf33fd1d3c 
ServerRescueTestXML-tempest-660237916-user 
ServerRescueTestXML-tempest-660237916-tenant] HTTP exception thrown: 
instance_id a3123898-504e-466d-83a8-4e40a6d6e96f could not be found as device 
id on any ports
  2014-01-28 03:35:09.701 DEBUG nova.api.openstack.wsgi 
[req-7faf904f-a376-4d00-b86e-eedf33fd1d3c 
ServerRescueTestXML-tempest-660237916-user 
ServerRescueTestXML-tempest-660237916-tenant] Returning 404 to user: 
instance_id a3123898-504e-466d-83a8-4e40a6d6e96f could not be found as device 
id on any ports __call__ /opt/stack/new/nova/nova/api/openstack/wsgi.py:1218
  2014-01-28 03:35:09.702 INFO nova.osapi_compute.wsgi.server 
[req-7faf904f-a376-4d00-b86e-eedf33fd1d3c 
ServerRescueTestXML-tempest-660237916-user 
ServerRescueTestXML-tempest-660237916-tenant] 192.168.1.50 POST 
/v2/f025dc90f1294227ae5dc4cd0d780c53/servers/a3123898-504e-466d-83a8-4e40a6d6e96f/action
 HTTP/1.1 status: 404 len: 416 time: 0.1011932
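
  For context, a hedged sketch of the lookup that produces this 404
  (the function name is illustrative; neutronclient's list_ports is real):
  nova filters ports by device_id, and a rescued instance can come back
  with an empty list:

    # `neutron` is a neutronclient.v2_0.client.Client authenticated elsewhere.
    def ports_for_instance(neutron, server_id):
        # While an instance is in RESCUE its ports may not carry this
        # device_id, so the filtered list can legitimately be empty,
        # which the API then surfaces as the 404 above.
        return neutron.list_ports(device_id=server_id).get('ports', [])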

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1274121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277204] [NEW] notification topics no longer configurable

2014-02-06 Thread Phil Day
Public bug reported:

The recent change to oslo.messaging seems to have removed the ability to
specify a list of topics for notifications. This is a critical feature
for systems which provide multiple message streams for billing and
monitoring.


To reproduce:

1) Create a devstack system

2) Add the following lines to the [DEFAULT] section of nova.conf:
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_topics = notifications,monitor
notify_on_state_change = vm_and_task_state
notify_on_any_change = True
instance_usage_audit = True
instance_usage_audit_period = hour

3) Restart all the n-* services

4) Look at the info queues in rabbit
sudo rabbitmqctl list_queues | grep info
notifications.info  15

5) Create an instance
ubuntu@devstack-net-cache:/mnt/devstack$ nova boot --image 
cirros-0.3.1-x86_64-uec --flavor 1 phil2

6) Look at the info queues in rabbit
sudo rabbitmqctl list_queues | grep info
notifications.info  17


Messages are being added to the notifications queue, but not to the
monitor queue
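
For reference, a minimal runnable sketch of the expected fan-out (not the
oslo-incubator code itself): one publish per configured topic, so both
queues should grow:

    notification_topics = ['notifications', 'monitor']
    priority = 'info'
    # The rpc notifier is expected to publish once per topic, i.e. to
    # both notifications.info and monitor.info.
    queues = ['%s.%s' % (topic, priority) for topic in notification_topics]
    assert queues == ['notifications.info', 'monitor.info']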

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277204

Title:
  notification topics no longer configurable

Status in OpenStack Compute (Nova):
  New

Bug description:
  The recent change to oslo.messaging seems to have removed the ability
  to specify a list of topics for notifications. This is a critical
  feature for systems which provide multiple message streams for billing
  and monitoring.

  
  To reproduce:

  1) Create a devstack system

  2) Add the following lines to the [DEFAULT] section of nova.conf:
  notification_driver = nova.openstack.common.notifier.rpc_notifier
  notification_topics = notifications,monitor
  notify_on_state_change = vm_and_task_state
  notify_on_any_change = True
  instance_usage_audit = True
  instance_usage_audit_period = hour

  3) Restart all the n-* services

  4) Look at the info queues in rabbit
  sudo rabbitmqctl list_queues | grep info
  notifications.info  15

  5) Create an instance
  ubuntu@devstack-net-cache:/mnt/devstack$ nova boot --image 
cirros-0.3.1-x86_64-uec --flavor 1 phil2

  6) Look at the info queues in rabbit
  sudo rabbitmqctl list_queues | grep info
  notifications.info  17


  Messages are being added to the notifications queue, but not to the
  monitor queue

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1277204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277217] [NEW] Cisco plugins should use common network type consts

2014-02-06 Thread Henry Gessau
Public bug reported:

The Cisco plugin and ML2 mechanism driver were not covered by commit
4cdccd69a45aec19d547c10f29f61359b69ad6c1.
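
A minimal sketch of the kind of change implied, assuming the shared
constants module that commit introduced:

    from neutron.plugins.common import constants as p_const

    # Compare against the common constant instead of a plugin-local
    # string such as 'vlan'.
    def is_vlan(network_type):
        return network_type == p_const.TYPE_VLAN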

** Affects: neutron
 Importance: Undecided
 Status: In Progress


** Tags: cisco

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1277217

Title:
  Cisco plugins should use common network type consts

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The Cisco plugin and ML2 mechanism driver were not covered by commit
  4cdccd69a45aec19d547c10f29f61359b69ad6c1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1277217/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277222] [NEW] Reorganize code tree for multiple cisco ML2 mech drivers

2014-02-06 Thread Henry Gessau
Public bug reported:

Currently there is one ML2 driver for cisco nexus in

neutron/plugins/ml2/drivers/cisco

It needs to go down a level so other cisco drivers can live alongside
it:

neutron/plugins/ml2/drivers/cisco/apic
neutron/plugins/ml2/drivers/cisco/nexus
neutron/plugins/ml2/drivers/cisco/ucs
neutron/plugins/ml2/drivers/cisco/...

** Affects: neutron
 Importance: Undecided
 Status: In Progress


** Tags: cisco

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1277222

Title:
  Reorganize code tree for multiple cisco ML2 mech drivers

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Currently there is one ML2 driver for cisco nexus in

  neutron/plugins/ml2/drivers/cisco

  It needs to go down a level so other cisco drivers can live alongside
  it:

  neutron/plugins/ml2/drivers/cisco/apic
  neutron/plugins/ml2/drivers/cisco/nexus
  neutron/plugins/ml2/drivers/cisco/ucs
  neutron/plugins/ml2/drivers/cisco/...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1277222/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277230] [NEW] Overlapping availability zones don't display properly

2014-02-06 Thread Vish Ishaya
Public bug reported:

If you create two overlapping availability zones, the get availability
zones command returns odd information.

Repro:

$ nova --os-username admin --os-tenant-name admin --os-password secrete 
aggregate-create foo foo
++--+---+---+--+
| Id | Name | Availability Zone | Hosts | Metadata |
++--+---+---+--+
| 2  | foo  | foo   |   |  |
++--+---+---+--+

$ nova --os-username admin --os-tenant-name admin --os-password secrete 
aggregate-add-host 2 node001-cont001
Aggregate 2 has been successfully updated.
++--+---+--++
| Id | Name | Availability Zone | Hosts| Metadata   
|
++--+---+--++
| 2  | foo  | foo   | [u'node001-cont001'] | {u'availability_zone': 
u'foo'} |
++--+---+--++

$ nova availability-zone-list
+-+---+
| Name| Status|
+-+---+
| zone001 | available |
| zone001,foo | available |
+-+---+

Expected:
+-+---+
| Name| Status|
+-+---+
| zone001 | available |
| foo | available |
+-+---+

The admin view is a little more useful:

$ nova --os-username admin --os-tenant-name admin --os-password secrete 
availability-zone-list
+---++
| Name  | Status |
+---++
| internal  | available  |
...
| zone001   | available  |
| |- node005-cont001||
| | |- nova-compute | enabled :-) 2014-02-06T19:18:31.00 |
| |- node007-cont001||
| | |- nova-compute | enabled :-) 2014-02-06T19:18:27.00 |
| |- node009-cont001||
| | |- nova-compute | enabled :-) 2014-02-06T19:18:25.00 |
| |- node003-cont001||
| | |- nova-compute | enabled :-) 2014-02-06T19:18:27.00 |
| zone001,foo   | available  |
| |- node001-cont001||
| | |- nova-compute | enabled :-) 2014-02-06T19:18:27.00 |
+---++

but it could easily show two copies of the node that is in two zones:

+---++
| Name  | Status |
+---++
| internal  | available  |
...
| zone001   | available  |
| |- node005-cont001||
| | |- nova-compute | enabled :-) 2014-02-06T19:18:31.00 |
| |- node007-cont001||
| | |- nova-compute | enabled :-) 2014-02-06T19:18:27.00 |
| |- node009-cont001||
| | |- nova-compute | enabled :-) 2014-02-06T19:18:25.00 |
| |- node003-cont001||
| | |- nova-compute | enabled :-) 2014-02-06T19:18:27.00 |
| |- node001-cont001||
| | |- nova-compute | enabled :-) 2014-02-06T19:18:27.00 |
| foo   | available  |
| |- node001-cont001||
| | |- nova-compute | enabled :-) 2014-02-06T19:18:27.00 |
+---++

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277230

Title:
  Overlapping availability zones don't display properly

Status in OpenStack Compute (Nova):
  New

Bug description:
  If you create two overlapping availability zones, the get availability
  zones command returns odd information.

  Repro:

  $ nova --os-username admin --os-tenant-name admin --os-password secrete 
aggregate-create foo foo
  ++--+---+---+--+
  | Id | Name | Availability Zone | Hosts | Metadata |
  ++--+---+---+--+
  | 2  | foo  | foo   |   |  |
  ++--+---+---+--+

  $ nova --os-username admin 

[Yahoo-eng-team] [Bug 1277204] Re: notification topics no longer configurable

2014-02-06 Thread Chuck Short
** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277204

Title:
  notification topics no longer configurable

Status in OpenStack Compute (Nova):
  New
Status in Messaging API for OpenStack:
  New

Bug description:
  The recent change to oslo messaging seems to have removed the ability
  to specify a list of topics for notifications.   This is a critical
  feature for systems which provide multiple message streams for billing
  and monitoring.

  
  To reproduce:

  1) Create a devstack system

  2) Add the following lines to the [DEFAULT] section of nova.conf:
  notification_driver = nova.openstack.common.notifier.rpc_notifier
  notification_topics = notifications,monitor
  notify_on_state_change = vm_and_task_state
  notify_on_any_change = True
  instance_usage_audit = True
  instance_usage_audit_period = hour

  3) Restart all the n-* services

  4) Look at the info queues in rabbit
  sudo rabbitmqctl list_queues | grep info
  notifications.info  15

  5) Create an instance
  ubuntu@devstack-net-cache:/mnt/devstack$ nova boot --image 
cirros-0.3.1-x86_64-uec --flavor 1 phil2

  6) Look at the info queues in rabbit
  sudo rabbitmqctl list_queues | grep info
  notifications.info  17


  Messages are being added to the notifications queue, but not to the
  monitor queue

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1277204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274772] Re: libvirt.txt in gate is full of error messages

2014-02-06 Thread Chuck Short
** Also affects: libvirt (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274772

Title:
  libvirt.txt in gate is full of error messages

Status in OpenStack Compute (Nova):
  New
Status in “libvirt” package in Ubuntu:
  New

Bug description:
  http://logs.openstack.org/51/63551/6/gate/gate-tempest-dsvm-postgres-
  full/4860441/logs/libvirtd.txt.gz is full of errors such as:

  
  2014-01-30 22:40:04.255+: 9228: error : virNetDevGetIndex:656 : Unable to 
get index for interface vnet0: No such device

  2014-01-30 22:13:14.464+: 9227: error : virExecWithHook:327 :
  Cannot find 'pm-is-supported' in path: No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274121] Re: test_rescued_vm_add_remove_security_group frequently failing

2014-02-06 Thread Attila Fazekas
** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274121

Title:
  test_rescued_vm_add_remove_security_group frequently failing

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  In Progress

Bug description:
  This test is not gating with neutron at the moment, but I see it is
  failing very frequently, it is part of the full tempest run, but not
  part of the smoke runs.

  The tempest exception and request is attached.

  Exception:
  NotFound: Object not found
  Details: <itemNotFound code="404" xmlns="http://docs.openstack.org/compute/api/v1.1"><message>instance_id a3123898-504e-466d-83a8-4e40a6d6e96f could not be found as device id on any ports</message></itemNotFound>

  
  n-api:
  2014-01-28 03:35:09.605 DEBUG nova.api.openstack.wsgi [req-7faf904f-a376-4d00-b86e-eedf33fd1d3c ServerRescueTestXML-tempest-660237916-user ServerRescueTestXML-tempest-660237916-tenant] Action: 'action', body: <?xml version="1.0" encoding="UTF-8"?>
  <addSecurityGroup xmlns="http://docs.openstack.org/compute/api/v1.1" name="sg-tempest-801012576"/> _process_stack /opt/stack/new/nova/nova/api/openstack/wsgi.py:953
  2014-01-28 03:35:09.605 DEBUG nova.api.openstack.wsgi [req-7faf904f-a376-4d00-b86e-eedf33fd1d3c ServerRescueTestXML-tempest-660237916-user ServerRescueTestXML-tempest-660237916-tenant] Calling method <bound method SecurityGroupActionController._addSecurityGroup of <nova.api.openstack.compute.contrib.security_groups.SecurityGroupActionController object at 0x373ce10>> _process_stack /opt/stack/new/nova/nova/api/openstack/wsgi.py:954
  2014-01-28 03:35:09.656 27500 DEBUG neutronclient.client [-]
  REQ: curl -i "http://192.168.1.50:9696/v2.0/security-groups.json?fields=id&name=sg-tempest-801012576" -X GET -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient"
   http_log_req /opt/stack/new/python-neutronclient/neutronclient/common/utils.py:176
  2014-01-28 03:35:09.672 27500 DEBUG neutronclient.client [-] RESP:{'date': 'Tue, 28 Jan 2014 03:35:09 GMT', 'status': '200', 'content-length': '69', 'content-type': 'application/json; charset=UTF-8', 'content-location': 'http://192.168.1.50:9696/v2.0/security-groups.json?fields=id&name=sg-tempest-801012576'} {"security_groups": [{"id": "0e43fe78-0dba-415b-b87e-f3a37d0f7cf9"}]}
   http_log_resp /opt/stack/new/python-neutronclient/neutronclient/common/utils.py:182
  2014-01-28 03:35:09.672 27500 DEBUG neutronclient.client [-]
  REQ: curl -i "http://192.168.1.50:9696/v2.0/ports.json?device_id=a3123898-504e-466d-83a8-4e40a6d6e96f" -X GET -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient"
   http_log_req /opt/stack/new/python-neutronclient/neutronclient/common/utils.py:176
  2014-01-28 03:35:09.689 27500 DEBUG neutronclient.client [-] RESP:{'date': 'Tue, 28 Jan 2014 03:35:09 GMT', 'status': '200', 'content-length': '13', 'content-type': 'application/json; charset=UTF-8', 'content-location': 'http://192.168.1.50:9696/v2.0/ports.json?device_id=a3123898-504e-466d-83a8-4e40a6d6e96f'} {"ports": []}
   http_log_resp /opt/stack/new/python-neutronclient/neutronclient/common/utils.py:182
  2014-01-28 03:35:09.690 INFO nova.api.openstack.wsgi 
[req-7faf904f-a376-4d00-b86e-eedf33fd1d3c 
ServerRescueTestXML-tempest-660237916-user 
ServerRescueTestXML-tempest-660237916-tenant] HTTP exception thrown: 
instance_id a3123898-504e-466d-83a8-4e40a6d6e96f could not be found as device 
id on any ports
  2014-01-28 03:35:09.701 DEBUG nova.api.openstack.wsgi 
[req-7faf904f-a376-4d00-b86e-eedf33fd1d3c 
ServerRescueTestXML-tempest-660237916-user 
ServerRescueTestXML-tempest-660237916-tenant] Returning 404 to user: 
instance_id a3123898-504e-466d-83a8-4e40a6d6e96f could not be found as device 
id on any ports __call__ /opt/stack/new/nova/nova/api/openstack/wsgi.py:1218
  2014-01-28 03:35:09.702 INFO nova.osapi_compute.wsgi.server 
[req-7faf904f-a376-4d00-b86e-eedf33fd1d3c 
ServerRescueTestXML-tempest-660237916-user 
ServerRescueTestXML-tempest-660237916-tenant] 192.168.1.50 POST 
/v2/f025dc90f1294227ae5dc4cd0d780c53/servers/a3123898-504e-466d-83a8-4e40a6d6e96f/action
 HTTP/1.1 status: 404 len: 416 time: 0.1011932

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1274121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277240] [NEW] Instance Image Name cell shows different value

2014-02-06 Thread Cindy Lu
Public bug reported:

The Image Name cell shows different values between the Project and Admin
views: in Project, "no image" is replaced with a dash; in Admin, it is
replaced with "(Not Found)".
== How to reproduce ==

Go to Instances tab in Project and Admin

In Project:
If you Create Instance, boot from volume and go back to the table, the
Image Name column will say "(Not found)". However, if you refresh, it is
replaced by "-".

In Admin:
It shows as "(Not Found)"

See attached image

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: 020614 - inconsistent terms.png
   
https://bugs.launchpad.net/bugs/1277240/+attachment/3971684/+files/020614%20-%20inconsistent%20terms.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1277240

Title:
  Instance Image Name cell shows different value

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Image Name cell shows different values between the Project and
  Admin views: in Project, "no image" is replaced with a dash; in Admin,
  it is replaced with "(Not Found)".

  == How to reproduce ==

  Go to Instances tab in Project and Admin

  In Project:
  If you Create Instance, boot from volume and go back to the table, the
  Image Name column will say "(Not found)". However, if you refresh, it
  is replaced by "-".

  In Admin:
  It shows as "(Not Found)"

  See attached image

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1277240/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277257] [NEW] Resize Instance doesn't work

2014-02-06 Thread Cindy Lu
Public bug reported:

If I try to size up a flavor (m1.xlarge), it gives me the message
"Success: Preparing instance  for resize", but nothing happens.

Please see the attached image.

The top image shows a successful resize. The second shows the
success message, but no resize actually occurs.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: 020614 - resize instance.png
   
https://bugs.launchpad.net/bugs/1277257/+attachment/3971719/+files/020614%20-%20resize%20instance.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1277257

Title:
  Resize Instance doesn't work

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If I try to size up a flavor (m1.xlarge), it gives me the message
  "Success: Preparing instance  for resize", but nothing happens.

  Please see the attached image.

  The top image shows a successful resize. The second shows the
  success message, but no resize actually occurs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1277257/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1206345] Re: V3 Api does not provide a means of getting certificates

2014-02-06 Thread Jamie Lennox
** Changed in: keystone
   Status: Invalid => Fix Committed

** Changed in: keystone
 Assignee: (unassigned) => Jamie Lennox (jamielennox)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1206345

Title:
  V3 Api does not provide a means of getting certificates

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Python client library for Keystone:
  Triaged

Bug description:
  Token signing and ca certificates are accessible at
  /v2/certificates/{signing,ca}; however, there is no way to attain these
  certificates using only the v3 API. This prevents the v3 API from being
  deployed on its own.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1206345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277298] [NEW] Deleting users takes a long time if there are many tokens

2014-02-06 Thread Clint Byrum
Public bug reported:

This is because we still have to filter results of an indexed query that
may return _millions_ of rows:

mysql> EXPLAIN SELECT token.id AS token_id FROM token WHERE token.valid = 1 AND token.expires < '2014-02-05 01:28:07.725059' AND token.user_id = '356d68464dc2478992427864dca4ce6a'\G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: token
 type: ref
possible_keys: ix_token_expires,ix_token_valid,ix_token_expires_valid
  key: ix_token_valid
  key_len: 1
  ref: const
 rows: 697205
Extra: Using where
1 row in set (0.01 sec)


Adding an index on user_id makes this quite a bit faster:

mysql> EXPLAIN SELECT token.id AS token_id FROM token WHERE token.valid = 1 AND token.expires < '2014-02-05 01:28:07.725059' AND token.user_id = '356d68464dc2478992427864dca4ce6a'\G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: token
 type: ref
possible_keys: ix_token_expires,ix_token_valid,ix_token_expires_valid,ix_user_id
  key: ix_user_id
  key_len: 195
  ref: const
 rows: 89
Extra: Using where
1 row in set (0.00 sec)

Note that memory usage will still go very high if the user being deleted
has a lot of tokens, because the ORM selects the full rows when all it
needs is the id. So there are really two bugs here, but the select is
slow even if you just select the id.
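
A minimal sketch of the first fix as a sqlalchemy-migrate style migration
(the index name is hypothetical):

    from sqlalchemy import Index, MetaData, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        token = Table('token', meta, autoload=True)
        # Index the column the delete-user path filters on, so MySQL can
        # use it instead of scanning every valid token row.
        Index('ix_token_user_id', token.c.user_id).create(migrate_engine)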

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1277298

Title:
  Deleting users takes a long time if there are many tokens

Status in OpenStack Identity (Keystone):
  New

Bug description:
  This is because we still have to filter results of an indexed query
  that may return _millions_ of rows:

  mysql> EXPLAIN SELECT token.id AS token_id FROM token WHERE token.valid = 1 AND token.expires < '2014-02-05 01:28:07.725059' AND token.user_id = '356d68464dc2478992427864dca4ce6a'\G
  *** 1. row ***
 id: 1
select_type: SIMPLE
  table: token
   type: ref
  possible_keys: ix_token_expires,ix_token_valid,ix_token_expires_valid
key: ix_token_valid
key_len: 1
ref: const
   rows: 697205
  Extra: Using where
  1 row in set (0.01 sec)

  
  Adding an index on user_id makes this quite a bit faster:

  mysql> EXPLAIN SELECT token.id AS token_id FROM token WHERE token.valid = 1 AND token.expires < '2014-02-05 01:28:07.725059' AND token.user_id = '356d68464dc2478992427864dca4ce6a'\G
  *** 1. row ***
 id: 1
select_type: SIMPLE
  table: token
   type: ref
  possible_keys: 
ix_token_expires,ix_token_valid,ix_token_expires_valid,ix_user_id
key: ix_user_id
key_len: 195
ref: const
   rows: 89
  Extra: Using where
  1 row in set (0.00 sec)

  Note that memory usage will still go very high if the user being
  deleted has a lot of tokens, because the ORM selects the full rows when
  all it needs is the id. So there are really two bugs here, but the
  select is slow even if you just select the id.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1277298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277302] [NEW] required field asterisk missing

2014-02-06 Thread Cindy Lu
Public bug reported:

In Launch Instance, when you select from the drop-down menu Instance
Boot Source, it will dynamically generate some more form elements
(Image Name, Instance Snapshot, Volume Name, Volume Snapshot, Volume),
but these required fields are missing the required-field asterisk.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1277302

Title:
  required field asterisk missing

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Launch Instance, when you select from the drop-down menu Instance
  Boot Source, it will dynamically generate some more form elements
  (Image Name, Instance Snapshot, Volume Name, Volume Snapshot, Volume),
  but these required fields are missing the required-field asterisk.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1277302/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277316] [NEW] Disconnecting a volume with multipath generates excessive multipath calls

2014-02-06 Thread Sam Morrison
Public bug reported:

I have a compute node with 20 volumes attached using iscsi and multipath.
Each multipath device has 4 iscsi devices.

When I disconnect a volume it generates 779 multipath -ll calls.


iscsiadm -m node --rescan
iscsiadm -m session --rescan
multipath -r

multipath -ll /dev/sdch
multipath -ll /dev/sdcg
multipath -ll /dev/sdcf
multipath -ll /dev/sdce
multipath -ll /dev/sdcd
multipath -ll /dev/sdcc
multipath -ll /dev/sdcb
multipath -ll /dev/sdca
multipath -ll /dev/sdbz
multipath -ll /dev/sdby
multipath -ll /dev/sdbx
multipath -ll /dev/sdbw
multipath -ll /dev/sdbv
multipath -ll /dev/sdbu
multipath -ll /dev/sdbt
multipath -ll /dev/sdbs
multipath -ll /dev/sdbr
multipath -ll /dev/sdbq
multipath -ll /dev/sdbp
multipath -ll /dev/sdbo
multipath -ll /dev/sdbn
multipath -ll /dev/sdbm
multipath -ll /dev/sdbl
multipath -ll /dev/sdbk
multipath -ll /dev/sdbj
multipath -ll /dev/sdbi
multipath -ll /dev/sdbh
multipath -ll /dev/sdbg
multipath -ll /dev/sdbf
multipath -ll /dev/sdbe
multipath -ll /dev/sdbd
multipath -ll /dev/sdbc
multipath -ll /dev/sdbb
multipath -ll /dev/sdba

... and so on, 779 times in total
cp /dev/stdin /sys/block/sdcd/device/delete
cp /dev/stdin /sys/block/sdcc/device/delete
cp /dev/stdin /sys/block/sdcb/device/delete
cp /dev/stdin /sys/block/sdca/device/delete
multipath -r




** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277316

Title:
  Disconnecting a volume with multipath generates excessive multipath
  calls

Status in OpenStack Compute (Nova):
  New

Bug description:
  I have a compute node with 20 volumes attached using iscsi and multipath.
  Each multipath device has 4 iscsi devices.

  When I disconnect a volume it generates 779 multipath -ll calls.

  
  iscsiadm -m node --rescan
  iscsiadm -m session --rescan
  multipath -r

  multipath -ll /dev/sdch
  multipath -ll /dev/sdcg
  multipath -ll /dev/sdcf
  multipath -ll /dev/sdce
  multipath -ll /dev/sdcd
  multipath -ll /dev/sdcc
  multipath -ll /dev/sdcb
  multipath -ll /dev/sdca
  multipath -ll /dev/sdbz
  multipath -ll /dev/sdby
  multipath -ll /dev/sdbx
  multipath -ll /dev/sdbw
  multipath -ll /dev/sdbv
  multipath -ll /dev/sdbu
  multipath -ll /dev/sdbt
  multipath -ll /dev/sdbs
  multipath -ll /dev/sdbr
  multipath -ll /dev/sdbq
  multipath -ll /dev/sdbp
  multipath -ll /dev/sdbo
  multipath -ll /dev/sdbn
  multipath -ll /dev/sdbm
  multipath -ll /dev/sdbl
  multipath -ll /dev/sdbk
  multipath -ll /dev/sdbj
  multipath -ll /dev/sdbi
  multipath -ll /dev/sdbh
  multipath -ll /dev/sdbg
  multipath -ll /dev/sdbf
  multipath -ll /dev/sdbe
  multipath -ll /dev/sdbd
  multipath -ll /dev/sdbc
  multipath -ll /dev/sdbb
  multipath -ll /dev/sdba
  
  ... and so on, 779 times in total
  cp /dev/stdin /sys/block/sdcd/device/delete
  cp /dev/stdin /sys/block/sdcc/device/delete
  cp /dev/stdin /sys/block/sdcb/device/delete
  cp /dev/stdin /sys/block/sdca/device/delete
  multipath -r

  
  

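  A minimal sketch of one way to avoid the repeated calls: run
  multipath -ll once per rescan and scan its cached output for each
  device, instead of shelling out once per device path (the helper
  below is hypothetical, not nova's actual code):

    import subprocess

    _topology_cache = None

    def multipath_topology(refresh=False):
        """Return cached `multipath -ll` output; refresh on demand."""
        global _topology_cache
        if _topology_cache is None or refresh:
            _topology_cache = subprocess.check_output(
                ['multipath', '-ll'], universal_newlines=True)
        return _topology_cache

    def find_multipath_line(device):
        # Look the device up in the cached topology rather than
        # invoking `multipath -ll /dev/sdX` once per device.
        for line in multipath_topology().splitlines():
            if device in line:
                return line
        return None
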
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1277316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277325] [NEW] tutorial doesn't use new dashboard loading mechanism

2014-02-06 Thread David Lyle
Public bug reported:

The tutorial in the Horizon docs uses the old method for adding new
dashboards rather than the newer method of putting a file in the
enabled directory.

** Affects: horizon
 Importance: Low
 Assignee: David Lyle (david-lyle)
 Status: New

** Changed in: horizon
   Importance: Undecided = Low

** Changed in: horizon
 Assignee: (unassigned) = David Lyle (david-lyle)

** Changed in: horizon
Milestone: None = icehouse-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1277325

Title:
  tutorial doesn't use new dashboard loading mechanism

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The tutorial in the Horizon docs uses the old method for adding new
  dashboards rather than the newer method of putting a file in the
  enabled directory.
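
  For reference, a minimal sketch of the newer mechanism the tutorial
  should describe: a pluggable settings file dropped into
  openstack_dashboard/enabled/ (the dashboard name here is
  hypothetical):

    # openstack_dashboard/enabled/_50_mydashboard.py
    # The slug of the dashboard to register with Horizon.
    DASHBOARD = 'mydashboard'
    # Django apps to add to settings.INSTALLED_APPS.
    ADD_INSTALLED_APPS = ['openstack_dashboard.dashboards.mydashboard']
    # Set to True to skip loading this file entirely.
    DISABLED = False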

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1277325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266962] Re: Remove set_time_override in timeutils

2014-02-06 Thread Zhongyue Luo
** Changed in: cinder
   Status: Invalid = New

** Changed in: cinder
 Assignee: (unassigned) = Zhongyue Luo (zyluo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266962

Title:
  Remove set_time_override in timeutils

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Cinder:
  New
Status in Gantt:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  Triaged
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in Manila:
  New
Status in OpenStack Message Queuing Service (Marconi):
  Fix Committed
Status in OpenStack Compute (Nova):
  Triaged
Status in Oslo - a Library of Common OpenStack Code:
  Triaged
Status in Messaging API for OpenStack:
  Fix Released
Status in Python client library for Keystone:
  Fix Released
Status in Python client library for Nova:
  Fix Committed
Status in Tuskar:
  Fix Released

Bug description:
  set_time_override was written as a helper function to mock utcnow in
  unittests.

  However, we now use mock or fixtures to patch our objects, so
  set_time_override has become obsolete.

  We should first remove all usage of set_time_override from downstream
  projects before deleting it from oslo.
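
  For reference, a minimal sketch of the replacement pattern, assuming
  the oslo-incubator timeutils module path (it varies by project):

    import datetime

    import mock

    from openstack.common import timeutils  # path varies by project

    def test_time_sensitive_code():
        fixed = datetime.datetime(2014, 2, 6, 12, 0, 0)
        # Instead of timeutils.set_time_override(fixed):
        with mock.patch.object(timeutils, 'utcnow', return_value=fixed):
            assert timeutils.utcnow() == fixed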

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1266962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271806] Re: unable to run tests due to missing deps in the virtual env

2014-02-06 Thread Zhi Yan Liu
** Changed in: neutron
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1271806

Title:
  unable to run tests due to missing deps in the virtual env

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  On both my Ubuntu box and my Mac, I've been unable to run the glance
  tests since this evening due to a missing dependency, specifically a
  version of psutil between 0.6 and 1.0. The archive only has 1.1 and
  up. Here are the logs:

  Downloading/unpacking psutil>=0.6.1,<1.0 (from -r 
/Users/mfischer/code/glance/test-requirements.txt (line 19))

http://tarballs.openstack.org/oslo.messaging/oslo.messaging-1.2.0a11.tar.gz#egg=oslo.messaging-1.2.0a11
 uses an insecure transport scheme (http). Consider using https if 
tarballs.openstack.org has it available
Could not find a version that satisfies the requirement psutil>=0.6.1,<1.0 
(from -r /Users/mfischer/code/glance/test-requirements.txt (line 19)) (from 
versions: 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.2.0, 1.2.1)
Some externally hosted files were ignored (use --allow-external to allow).
  Cleaning up...
  No distributions matching the version for psutil>=0.6.1,<1.0 (from -r 
/Users/mfischer/code/glance/test-requirements.txt (line 19))
  Storing debug log for failure in 
/var/folders/d2/qr0r7fc10j35_lwkz9wwmtxcgp/T/tmpBIMPmg

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1271806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277324] [NEW] tutorial doesn't use new dashboard loading mechanism

2014-02-06 Thread David Lyle
Public bug reported:

The tutorial in the Horizon docs uses the old method for adding new
dashboards rather than the newer method of putting a file in  the
enabled directory.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1277324

Title:
  tutorial doesn't use new dashboard loading mechanism

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The tutorial in the Horizon docs uses the old method for adding new
  dashboards rather than the newer method of putting a file in  the
  enabled directory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1277324/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277339] [NEW] snapshot-failed-when-dependent-volume-exist

2014-02-06 Thread John Haan
Public bug reported:

When we use 'HpSanISCSIDriver', there is an issue.

1. Create volume
2. Create snapshot from volume
3. Create new volume from snapshot
4. Try to delete snapshot.

In this workflow, the snapshot ends up in the 'error_deleting' status.

The error message looks like this:

<gauche version="1.0">

  <response description="Volume delete operation failed: ''The snapshot
  'snapshot-d357590a-47f7-4785-bd6b-debce5bd9a2a' cannot be deleted
  because it is a clone point.  When multiple volumes depend on a
  snapshot, that snapshot is a clone point.  To delete the clone point,
  you must first delete all but one of the volumes that depend on
  it.''" name="CliqOperationFailed" processingTime="330"
  result="80001010"/>

</gauche>

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277339

Title:
  snapshot-failed-when-dependent-volume-exist

Status in OpenStack Compute (Nova):
  New

Bug description:
  When we use 'HpSanISCSIDriver', there is an issue.

  1. Create volume
  2. Create snapshot from volume
  3. Create new volume from snapshot
  4. Try to delete snapshot.

  In this workflow, the snapshot ends up in the 'error_deleting' status.

  The error message looks like this:

  <gauche version="1.0">

    <response description="Volume delete operation failed: ''The snapshot
    'snapshot-d357590a-47f7-4785-bd6b-debce5bd9a2a' cannot be deleted
    because it is a clone point.  When multiple volumes depend on a
    snapshot, that snapshot is a clone point.  To delete the clone point,
    you must first delete all but one of the volumes that depend on
    it.''" name="CliqOperationFailed" processingTime="330"
    result="80001010"/>

  </gauche>
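
  For reference, a hedged sketch of how a caller could recognize this
  response and surface a clean failure instead of leaving the snapshot
  stuck in 'error_deleting' (the helper is hypothetical, not the actual
  HP driver code):

    from xml.etree import ElementTree

    def is_clone_point_failure(cliq_xml):
        # True when CLIQ refused the delete because the snapshot is a
        # clone point, per the response shown above.
        root = ElementTree.fromstring(cliq_xml)
        resp = root.find('response')
        return (resp is not None
                and resp.get('name') == 'CliqOperationFailed'
                and 'clone point' in (resp.get('description') or ''))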

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1277339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp