[Openstack] l3-agent iptables-restore: line 23 failed

2013-06-01 Thread Martin Mailand
Hi List,

if I add my router's gateway to an external network, I get an error in
the l3-agent.log about a failure in iptables-restore.
As far as I know iptables-restore gets its input on stdin, so how can I
see the iptables rules which fail to apply?
How can I debug this further?
The full log is attached.
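So far the only thing I can think of is to dump what is currently loaded
inside the router's namespace and compare, e.g.:

ip netns exec qrouter-ac1a85c9-d5e1-4976-a16b-14ccdac49c17 iptables-save

but that only shows the rules that did apply, not the input that failed.
I guess I could also temporarily add a LOG.debug of the generated lines in
quantum/agent/linux/iptables_manager.py before it runs iptables-restore.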

-martin

Command:
root@controller:~# quantum router-gateway-set
ac1a85c9-d5e1-4976-a16b-14ccdac49c17 61bf1c06-aea7-4966-9718-2be029abc18d
Set gateway for router ac1a85c9-d5e1-4976-a16b-14ccdac49c17
root@controller:~#

Log:

2013-06-01 16:07:35 DEBUG [quantum.agent.linux.utils] Running
command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf',
'ip', 'netns', 'exec', 'qrouter-ac1a85c9-d5e1-4976-a16b-14ccdac49c17',
'iptables-restore']
2013-06-01 16:07:35 DEBUG [quantum.agent.linux.utils]
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf',
'ip', 'netns', 'exec', 'qrouter-ac1a85c9-d5e1-4976-a16b-14ccdac49c17',
'iptables-restore']
Exit code: 1
Stdout: ''
Stderr: 'iptables-restore: line 23 failed\n'


quantum router-show ac1a85c9-d5e1-4976-a16b-14ccdac49c17
+-----------------------+---------------------------------------------------------+
| Field                 | Value                                                   |
+-----------------------+---------------------------------------------------------+
| admin_state_up        | True                                                    |
| external_gateway_info | {"network_id": "61bf1c06-aea7-4966-9718-2be029abc18d"}  |
| id                    | ac1a85c9-d5e1-4976-a16b-14ccdac49c17                    |
| name                  | router1                                                 |
| routes                |                                                         |
| status                | ACTIVE                                                  |
| tenant_id             | b5e5af3504964760ad51c4980d30f89a                        |
+-----------------------+---------------------------------------------------------+


quantum net-show 61bf1c06-aea7-4966-9718-2be029abc18d
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| 61bf1c06-aea7-4966-9718-2be029abc18d |
| name  | ext_net  |
| provider:network_type | gre  |
| provider:physical_network |  |
| provider:segmentation_id  | 2|
| router:external   | True |
| shared| False|
| status| ACTIVE   |
| subnets   | ccde4243-5857-4ee2-957e-a11304366f85 |
| tenant_id | 43b2bbbf5daf4badb15d67d87ed2f3dc |
+---+--+
2013-06-01 16:07:03 DEBUG [quantum.openstack.common.rpc.amqp] Making synchronous call on q-plugin ...
2013-06-01 16:07:03 DEBUG [quantum.openstack.common.rpc.amqp] MSG_ID is 5eded77d48f9461aa029b6dfe3a72a2f
2013-06-01 16:07:03 DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is a97579c5bb304cf2b8e99099bfdfeca6.
2013-06-01 16:07:07 DEBUG [quantum.openstack.common.rpc.amqp] Making synchronous call on q-plugin ...
2013-06-01 16:07:07 DEBUG [quantum.openstack.common.rpc.amqp] MSG_ID is cc0672ba1cc749059d6c8190a18d4721
2013-06-01 16:07:07 DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is 7534a2f4def14829b35186215e2aa146.
2013-06-01 16:07:11 DEBUG [quantum.openstack.common.rpc.amqp] Making synchronous call on q-plugin ...
2013-06-01 16:07:11 DEBUG [quantum.openstack.common.rpc.amqp] MSG_ID is 2d5cde249edd4905868c401bb5075e60
2013-06-01 16:07:11 DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is 5d0d2d2554cf4ae1b4d2485b181fe9ad.
2013-06-01 16:07:15 DEBUG [quantum.openstack.common.rpc.amqp] Making synchronous call on q-plugin ...
2013-06-01 16:07:15 DEBUG [quantum.openstack.common.rpc.amqp] MSG_ID is e456789e89c843b8b098549f0a6dffd8
2013-06-01 16:07:15 DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is cd6d6fe271534ea4b0bed159238b9aa9.
2013-06-01 16:07:19 DEBUG [quantum.openstack.common.rpc.amqp] Making synchronous call on q-plugin ...
2013-06-01 16:07:19 DEBUG [quantum.openstack.common.rpc.amqp] MSG_ID is 99f0c003e361442baa9acd04f1f22cd2
2013-06-01 16:07:19 DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is 076d0a178ddc4b97af53ad4bd979c9bc.
2013-06-01 16:07:23 DEBUG [quantum.openstack.common.rpc.amqp] Making synchronous call on q-plugin ...
2013-06-01 16:07:23 DEBUG [quantum.openstack.common.rpc.amqp] MSG_ID is 822f072b44874a7b93cd2d222574343f
2013-06-01 16:07:23 DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is 7513ed78f63c43a8950d6837856bcad8.
2013-06-01 16:07:27 DEBUG [quantum.openstack.common.periodic_task] Running periodic task L3NATAgentWithStateReport._sync_routers_task
2013-06-01 16:07:27 DEBUG [qu

Re: [Openstack] [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

now everything is working, many thanks for your help, great work.

-martin

On 30.05.2013 23:24, Josh Durgin wrote:
>> I have two more things.
>> 1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated and
>> I should update my configuration to the new path. What is the new path?
> 
> cinder.volume.drivers.rbd.RBDDriver
> 
>> 2. I have show_image_direct_url=True in glance-api.conf, but the
>> volumes are not clones of the originals in the images pool.
> 
> Set glance_api_version=2 in cinder.conf. The default was changed in
> Grizzly.
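
For anyone who finds this thread later, the whole thing boils down to
roughly these settings (based on Josh's answers above):

cinder.conf:
volume_driver=cinder.volume.drivers.rbd.RBDDriver
glance_api_version=2

glance-api.conf:
show_image_direct_url=True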



Re: [Openstack] [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

that's working.

I have two more things.
1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated and I
should update my configuration to the new path. What is the new path?

2. I have show_image_direct_url=True in glance-api.conf, but the
volumes are not clones of the originals in the images pool.

That's what I did.

root@controller:~/vm_images# !1228
glance add name="Precise Server" is_public=true container_format=ovf
disk_format=raw < ./precise-server-cloudimg-amd64-disk1.raw
Added new image with ID: 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8
root@controller:~/vm_images# rbd -p images -l ls
NAME                                        SIZE PARENT FMT PROT LOCK
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8       2048M          2
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8@snap  2048M          2  yes
root@controller:~/vm_images# cinder create --image-id 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8 --display-name volcli1 10
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2013-05-30T21:08:16.506094      |
| display_description |                 None                 |
|     display_name    |               volcli1                |
|          id         | 34838911-6613-4140-93e0-e1565054a2d3 |
|       image_id      | 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8 |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
root@controller:~/vm_images# cinder list
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
|                  ID                  |    Status   | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
| 34838911-6613-4140-93e0-e1565054a2d3 | downloading |   volcli1    |  10  |     None    |  false   |             |
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
root@controller:~/vm_images# rbd -p volumes -l ls
NAME   SIZE PARENT FMT PROT LOCK
volume-34838911-6613-4140-93e0-e1565054a2d3  10240M  2

root@controller:~/vm_images#
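
As far as I can tell these are full copies rather than clones - the PARENT
column above is empty, and (if I understand the rbd tooling right) a clone
should show the image snapshot as its parent, e.g. via:

root@controller:~/vm_images# rbd info volumes/volume-34838911-6613-4140-93e0-e1565054a2d3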

-martin
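
PS: For the record, Josh's reply below boils down to one of these two
nova.conf settings on the compute nodes (internalURL is just my guess at
the endpoint type that matches the internal address):

cinder_catalog_info=volume:cinder:internalURL

or, pinning the endpoint explicitly:

cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s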

On 30.05.2013 22:56, Josh Durgin wrote:
> On 05/30/2013 01:50 PM, Martin Mailand wrote:
>> Hi Josh,
>>
>> I found the problem: nova-compute tries to connect to the publicurl
>> (xxx.xxx.240.10) of the keystone endpoints, and this ip is not reachable
>> from the management network.
>> I thought the internalurl is the one which is used for the internal
>> communication of the openstack components, and the publicurl is the ip
>> for the "customers" of the cluster?
>> Am I wrong here?
> 
> I'd expect that too, but it's determined in nova by the
> cinder_catalog_info option, which defaults to volume:cinder:publicURL.
> 
> You can also override it explicitly with
> cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s
> in your nova.conf.
> 
> Josh
> 
>> -martin
>>
>> On 30.05.2013 22:22, Martin Mailand wrote:
>>> Hi Josh,
>>>
>>> On 30.05.2013 21:17, Josh Durgin wrote:
>>>> It's trying to talk to the cinder api, and failing to connect at all.
>>>> Perhaps there's a firewall preventing that on the compute host, or
>>>> it's trying to use the wrong endpoint for cinder (check the keystone
>>>> service and endpoint tables for the volume service).
>>>
>>> the keystone endpoint looks like this:
>>>
>>> | dd21ed74a9ac4744b2ea498609f0a86e | RegionOne |
>>> http://xxx.xxx.240.10:8776/v1/$(tenant_id)s |
>>> http://192.168.192.2:8776/v1/$(tenant_id)s |
>>> http://192.168.192.2:8776/v1/$(tenant_id)s |
>>> 5ad684c5a0154c13b54283b01744181b
>>>
>>> where 192.168.192.2 is the IP from the controller node.
>>>
>>> And from the compute node a telnet 192.168.192.2 8776 is working.
>>>
>>> -martin
> 



Re: [Openstack] [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

I found the problem: nova-compute tries to connect to the publicurl
(xxx.xxx.240.10) of the keystone endpoints, and this ip is not reachable
from the management network.
I thought the internalurl is the one which is used for the internal
communication of the openstack components, and the publicurl is the ip
for the "customers" of the cluster?
Am I wrong here?

-martin

On 30.05.2013 22:22, Martin Mailand wrote:
> Hi Josh,
> 
> On 30.05.2013 21:17, Josh Durgin wrote:
>> It's trying to talk to the cinder api, and failing to connect at all.
>> Perhaps there's a firewall preventing that on the compute host, or
>> it's trying to use the wrong endpoint for cinder (check the keystone
>> service and endpoint tables for the volume service).
> 
> the keystone endpoint looks like this:
> 
> | dd21ed74a9ac4744b2ea498609f0a86e | RegionOne |
> http://xxx.xxx.240.10:8776/v1/$(tenant_id)s |
> http://192.168.192.2:8776/v1/$(tenant_id)s |
> http://192.168.192.2:8776/v1/$(tenant_id)s |
> 5ad684c5a0154c13b54283b01744181b
> 
> where 192.168.192.2 is the IP from the controller node.
> 
> And from the compute node a telnet 192.168.192.2 8776 is working.
> 
> -martin
> 



Re: [Openstack] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

On 30.05.2013 21:17, Josh Durgin wrote:
> It's trying to talk to the cinder api, and failing to connect at all.
> Perhaps there's a firewall preventing that on the compute host, or
> it's trying to use the wrong endpoint for cinder (check the keystone
> service and endpoint tables for the volume service).

the keystone endpoint looks like this:

id          dd21ed74a9ac4744b2ea498609f0a86e
region      RegionOne
publicurl   http://xxx.xxx.240.10:8776/v1/$(tenant_id)s
internalurl http://192.168.192.2:8776/v1/$(tenant_id)s
adminurl    http://192.168.192.2:8776/v1/$(tenant_id)s
service_id  5ad684c5a0154c13b54283b01744181b

where 192.168.192.2 is the IP from the controller node.

And from the compute node a telnet 192.168.192.2 8776 is working.

-martin



Re: [Openstack] [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Weiguo,

my answers are inline.

-martin

On 30.05.2013 21:20, w sun wrote:
> I would suggest, on the nova compute host (particularly if you have
> separate compute nodes):
>
> (1) make sure "rbd ls -l -p " works and /etc/ceph/ceph.conf is
> readable by user nova!!
yes to both
> (2) make sure you can start up a regular ephemeral instance on the
> same nova node (ie, nova-compute is working correctly)
an ephemeral instance is working
> (3) if you are using cephx, make sure libvirt secret is set up correct
> per instruction at ceph.com
I do not use cephx
> (4) look at /var/lib/nova/instance/x/libvirt.xml and check that the
> disk file is pointing to the rbd volume
For an ephemeral instance the folder is created; for a volume-based
instance the folder is not created.

> (5) If all of the above look fine and you still can't boot from the
> volume, the last thing to try is manually starting a kvm session with
> the volume, similar to the command below. At least this will tell you
> whether your qemu has the correct rbd enablement.
>
>   /usr/bin/kvm -m 2048 -drive
> file=rbd:ceph-openstack-volumes/volume-3f964f79-febe-4251-b2ba-ac9423af419f,index=0,if=none,id=drive-virtio-disk0
> -boot c -net nic -net user -nographic  -vnc :1000 -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>
If I start kvm by hand it is working.
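
For completeness, this is roughly how I ran check (1) as the nova user
(the pool is called volumes in my setup):

sudo -u nova rbd ls -l -p volumes
sudo -u nova test -r /etc/ceph/ceph.conf && echo "ceph.conf is readable"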

> --weiguo
>
> > Date: Thu, 30 May 2013 16:37:40 +0200
> > From: mar...@tuxadero.com
> > To: ceph-us...@ceph.com
> > CC: openstack@lists.launchpad.net
> > Subject: [ceph-users] Openstack with Ceph, boot from volume
> >
> > Hi Josh,
> >
> > I am trying to use ceph with openstack (grizzly), I have a multi
> host setup.
> > I followed the instruction
> http://ceph.com/docs/master/rbd/rbd-openstack/.
> > Glance is working without a problem.
> > With cinder I can create and delete volumes without a problem.
> >
> > But I cannot boot from volumes.
> > It doesn't matter if I use horizon or the cli, the vm goes to the
> > error state.
> >
> > From the nova-compute.log I get this.
> >
> > 2013-05-30 16:08:45.224 ERROR nova.compute.manager
> > [req-5679ddfe-79e3-4adb-b220-915f4a38b532
> > 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
> > [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
> > device setup
> > .
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError: [Errno 101]
> > ENETUNREACH
> >
> > What is nova trying to reach? How could I debug this further?
> >
> > Full Log included.
> >
> > -martin
> >
> > Log:
> >
> > ceph --version
> > ceph version 0.61 (237f3f1e8d8c3b85666529860285dcdffdeda4c5)
> >
> > root@compute1:~# dpkg -l|grep -e ceph-common -e cinder
> > ii ceph-common 0.61-1precise
> > common utilities to mount and interact with a ceph storage
> > cluster
> > ii python-cinderclient 1:1.0.3-0ubuntu1~cloud0
> > python bindings to the OpenStack Volume API
> >
> >
> > nova-compute.log
> >
> > 2013-05-30 16:08:45.224 ERROR nova.compute.manager
> > [req-5679ddfe-79e3-4adb-b220-915f4a38b532
> > 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
> > [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
> > device setup
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] Traceback (most recent call last):
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1071,
> > in _prep_block_device
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] return
> > self._setup_block_device_mapping(context, instance, bdms)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 721, in
> > _setup_block_device_mapping
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] volume =
> > self.volume_api.get(context, bdm['volume_id'])
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 193,
> in get
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc]
> > self._reraise_translated_volume_exception(volume_id)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 190,
> in get
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] item =
> > cinderclien

[Openstack] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

I am trying to use ceph with openstack (grizzly); I have a multi host setup.
I followed the instructions at http://ceph.com/docs/master/rbd/rbd-openstack/.
Glance is working without a problem.
With cinder I can create and delete volumes without a problem.

But I cannot boot from volumes.
It doesn't matter if I use horizon or the cli, the vm goes to the error state.

From the nova-compute.log I get this.

2013-05-30 16:08:45.224 ERROR nova.compute.manager
[req-5679ddfe-79e3-4adb-b220-915f4a38b532
8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
[instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
device setup
.
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError: [Errno 101]
ENETUNREACH

What is nova trying to reach? How could I debug this further?

Full Log included.
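
The only lead I have so far is to double-check which cinder endpoint nova
is actually talking to, e.g.:

keystone endpoint-list | grep 8776

and then test those URLs from the compute node (cinder listens on 8776).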

-martin

Log:

ceph --version
ceph version 0.61 (237f3f1e8d8c3b85666529860285dcdffdeda4c5)

root@compute1:~# dpkg -l|grep -e ceph-common -e cinder
ii  ceph-common  0.61-1precise
   common utilities to mount and interact with a ceph storage
cluster
ii  python-cinderclient  1:1.0.3-0ubuntu1~cloud0
   python bindings to the OpenStack Volume API


nova-compute.log

2013-05-30 16:08:45.224 ERROR nova.compute.manager
[req-5679ddfe-79e3-4adb-b220-915f4a38b532
8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
[instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
device setup
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] Traceback (most recent call last):
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1071,
in _prep_block_device
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] return
self._setup_block_device_mapping(context, instance, bdms)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 721, in
_setup_block_device_mapping
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] volume =
self.volume_api.get(context, bdm['volume_id'])
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 193, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]
self._reraise_translated_volume_exception(volume_id)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 190, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] item =
cinderclient(context).volumes.get(volume_id)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/v1/volumes.py", line 180,
in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] return self._get("/volumes/%s"
% volume_id, "volume")
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/base.py", line 141, in _get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] resp, body =
self.api.client.get(url)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 185, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] return self._cs_request(url,
'GET', **kwargs)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 153, in
_cs_request
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] **kwargs)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 123, in
request
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] **kwargs)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request
20