Re: [Openstack] [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin

On 05/30/2013 02:50 PM, Martin Mailand wrote:

Hi Josh,

now everything is working, many thanks for your help, great work.


Great! I added those settings to 
http://ceph.com/docs/master/rbd/rbd-openstack/ so it's easier to figure 
out in the future.



-martin

On 30.05.2013 23:24, Josh Durgin wrote:

I have two more things.
1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
update your configuration to the new path. What is the new path?


cinder.volume.drivers.rbd.RBDDriver


2. I have show_image_direct_url=True in glance-api.conf, but the
volumes are not clones of the original images in the images pool.


Set glance_api_version=2 in cinder.conf. The default was changed in
Grizzly.





Re: [Openstack] [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

now everything is working, many thanks for your help, great work.

-martin

On 30.05.2013 23:24, Josh Durgin wrote:
>> I have two more things.
>> 1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
>> update your configuration to the new path. What is the new path?
> 
> cinder.volume.drivers.rbd.RBDDriver
> 
>> 2. I have show_image_direct_url=True in glance-api.conf, but the
>> volumes are not clones of the original images in the images pool.
> 
> Set glance_api_version=2 in cinder.conf. The default was changed in
> Grizzly.



Re: [Openstack] [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin

On 05/30/2013 02:18 PM, Martin Mailand wrote:

Hi Josh,

that's working.

I have two more things.
1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
update your configuration to the new path. What is the new path?


cinder.volume.drivers.rbd.RBDDriver


2. I have show_image_direct_url=True in glance-api.conf, but the
volumes are not clones of the original images in the images pool.


Set glance_api_version=2 in cinder.conf. The default was changed in
Grizzly.
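
For reference, a minimal sketch of the relevant settings on the cinder/glance
side (pool names taken from your listing below; everything else left at its
defaults):

  # cinder.conf
  volume_driver=cinder.volume.drivers.rbd.RBDDriver
  rbd_pool=volumes
  glance_api_version=2

  # glance-api.conf
  show_image_direct_url=True

With those in place and the services restarted, cinder should create volumes
as copy-on-write clones of the image's protected snapshot instead of
downloading the image into a new rbd.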


That's what I did.

root@controller:~/vm_images# !1228
glance add name="Precise Server" is_public=true container_format=ovf
disk_format=raw < ./precise-server-cloudimg-amd64-disk1.raw
Added new image with ID: 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8
root@controller:~/vm_images# rbd -p images -l ls
NAME                                       SIZE PARENT FMT PROT LOCK
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8      2048M          2
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8@snap 2048M          2 yes
root@controller:~/vm_images# cinder create --image-id
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8 --display-name volcli1 10
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2013-05-30T21:08:16.506094      |
| display_description |                 None                 |
|     display_name    |               volcli1                |
|          id         | 34838911-6613-4140-93e0-e1565054a2d3 |
|       image_id      | 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8 |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
root@controller:~/vm_images# cinder list
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
|                  ID                  |    Status   | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
| 34838911-6613-4140-93e0-e1565054a2d3 | downloading |   volcli1    |  10  |     None    |  false   |             |
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
root@controller:~/vm_images# rbd -p volumes -l ls
NAME   SIZE PARENT FMT PROT LOCK
volume-34838911-6613-4140-93e0-e1565054a2d3  10240M  2

root@controller:~/vm_images#

-martin

On 30.05.2013 22:56, Josh Durgin wrote:

On 05/30/2013 01:50 PM, Martin Mailand wrote:

Hi Josh,

I found the problem: nova-compute tries to connect to the publicURL
(xxx.xxx.240.10) of the keystone endpoints, and this IP is not reachable from
the management network.
I thought the internalURL is the one used for the internal
communication of the OpenStack components, and the publicURL is the IP
for "customers" of the cluster?
Am I wrong here?


I'd expect that too, but it's determined in nova by the
cinder_catalog_info option, which defaults to volume:cinder:publicURL.

You can also override it explicitly with
cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s
in your nova.conf.

Josh


-martin

On 30.05.2013 22:22, Martin Mailand wrote:

Hi Josh,

On 30.05.2013 21:17, Josh Durgin wrote:

It's trying to talk to the cinder api, and failing to connect at all.
Perhaps there's a firewall preventing that on the compute host, or
it's trying to use the wrong endpoint for cinder (check the keystone
service and endpoint tables for the volume service).


the keystone endpoint looks like this:

| dd21ed74a9ac4744b2ea498609f0a86e | RegionOne |
http://xxx.xxx.240.10:8776/v1/$(tenant_id)s |
http://192.168.192.2:8776/v1/$(tenant_id)s |
http://192.168.192.2:8776/v1/$(tenant_id)s |
5ad684c5a0154c13b54283b01744181b

where 192.168.192.2 is the IP from the controller node.

And from the compute node a telnet 192.168.192.2 8776 is working.

-martin
___
ceph-users mailing list
ceph-us...@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com








Re: [Openstack] [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

that's working.

I have two more things.
1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
update your configuration to the new path. What is the new path?

2. I have show_image_direct_url=True in glance-api.conf, but the
volumes are not clones of the original images in the images pool.

That's what I did.

root@controller:~/vm_images# !1228
glance add name="Precise Server" is_public=true container_format=ovf
disk_format=raw < ./precise-server-cloudimg-amd64-disk1.raw
Added new image with ID: 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8
root@controller:~/vm_images# rbd -p images -l ls
NAME                                       SIZE PARENT FMT PROT LOCK
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8      2048M          2
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8@snap 2048M          2 yes
root@controller:~/vm_images# cinder create --image-id
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8 --display-name volcli1 10
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |  2013-05-30T21:08:16.506094          |
| display_description |                 None                 |
|     display_name    |               volcli1                |
|          id         | 34838911-6613-4140-93e0-e1565054a2d3 |
|       image_id      | 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8 |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
root@controller:~/vm_images# cinder list
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
|                  ID                  |    Status   | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
| 34838911-6613-4140-93e0-e1565054a2d3 | downloading |   volcli1    |  10  |     None    |  false   |             |
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
root@controller:~/vm_images# rbd -p volumes -l ls
NAME   SIZE PARENT FMT PROT LOCK
volume-34838911-6613-4140-93e0-e1565054a2d3  10240M  2

root@controller:~/vm_images#
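
(A quick sanity check, assuming I am reading the rbd tooling right: a cloned
volume should report a parent pointing back at the image snapshot, e.g.

  rbd info volumes/volume-34838911-6613-4140-93e0-e1565054a2d3 | grep parent
  # a clone would show something like:  parent: images/6fbf4dfd-adce-470b-87fe-9b6ddb3993c8@snap

The empty PARENT column above suggests the volume was downloaded rather than
cloned.)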

-martin

On 30.05.2013 22:56, Josh Durgin wrote:
> On 05/30/2013 01:50 PM, Martin Mailand wrote:
>> Hi Josh,
>>
>> I found the problem: nova-compute tries to connect to the publicURL
>> (xxx.xxx.240.10) of the keystone endpoints, and this IP is not reachable from
>> the management network.
>> I thought the internalURL is the one used for the internal
>> communication of the OpenStack components, and the publicURL is the IP
>> for "customers" of the cluster?
>> Am I wrong here?
> 
> I'd expect that too, but it's determined in nova by the
> cinder_catalog_info option, which defaults to volume:cinder:publicURL.
> 
> You can also override it explicitly with
> cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s
> in your nova.conf.
> 
> Josh
> 
>> -martin
>>
>> On 30.05.2013 22:22, Martin Mailand wrote:
>>> Hi Josh,
>>>
>>> On 30.05.2013 21:17, Josh Durgin wrote:
 It's trying to talk to the cinder api, and failing to connect at all.
 Perhaps there's a firewall preventing that on the compute host, or
 it's trying to use the wrong endpoint for cinder (check the keystone
 service and endpoint tables for the volume service).
>>>
>>> the keystone endpoint looks like this:
>>>
>>> | dd21ed74a9ac4744b2ea498609f0a86e | RegionOne |
>>> http://xxx.xxx.240.10:8776/v1/$(tenant_id)s |
>>> http://192.168.192.2:8776/v1/$(tenant_id)s |
>>> http://192.168.192.2:8776/v1/$(tenant_id)s |
>>> 5ad684c5a0154c13b54283b01744181b
>>>
>>> where 192.168.192.2 is the IP from the controller node.
>>>
>>> And from the compute node a telnet 192.168.192.2 8776 is working.
>>>
>>> -martin
>>> ___
>>> ceph-users mailing list
>>> ceph-us...@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
> 



Re: [Openstack] [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin

On 05/30/2013 01:50 PM, Martin Mailand wrote:

Hi Josh,

I found the problem: nova-compute tries to connect to the publicURL
(xxx.xxx.240.10) of the keystone endpoints, and this IP is not reachable from
the management network.
I thought the internalURL is the one used for the internal
communication of the OpenStack components, and the publicURL is the IP
for "customers" of the cluster?
Am I wrong here?


I'd expect that too, but it's determined in nova by the 
cinder_catalog_info option, which defaults to volume:cinder:publicURL.


You can also override it explicitly with
cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s
in your nova.conf.
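
A minimal sketch of the two alternatives in nova.conf (the URL is simply the
internal endpoint from your catalog; adjust to your setup):

  # option 1: make nova pick the internal endpoint out of the keystone catalog
  cinder_catalog_info=volume:cinder:internalURL

  # option 2: bypass the catalog and hard-code the endpoint
  cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s

Either one should stop nova-compute from trying the unreachable publicURL.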

Josh


-martin

On 30.05.2013 22:22, Martin Mailand wrote:

Hi Josh,

On 30.05.2013 21:17, Josh Durgin wrote:

It's trying to talk to the cinder api, and failing to connect at all.
Perhaps there's a firewall preventing that on the compute host, or
it's trying to use the wrong endpoint for cinder (check the keystone
service and endpoint tables for the volume service).


the keystone endpoint looks like this:

| dd21ed74a9ac4744b2ea498609f0a86e | RegionOne |
http://xxx.xxx.240.10:8776/v1/$(tenant_id)s |
http://192.168.192.2:8776/v1/$(tenant_id)s |
http://192.168.192.2:8776/v1/$(tenant_id)s |
5ad684c5a0154c13b54283b01744181b

where 192.168.192.2 is the IP from the controller node.

And from the compute node a telnet 192.168.192.2 8776 is working.

-martin
___
ceph-users mailing list
ceph-us...@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com






Re: [Openstack] [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

I found the problem: nova-compute tries to connect to the publicURL
(xxx.xxx.240.10) of the keystone endpoints, and this IP is not reachable from
the management network.
I thought the internalURL is the one used for the internal
communication of the OpenStack components, and the publicURL is the IP
for "customers" of the cluster?
Am I wrong here?
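
(For what it's worth, this is roughly how I checked which URL nova gets out of
the service catalog, assuming the grizzly-era keystone CLI:

  keystone endpoint-list
  keystone catalog --service volume

The publicURL for the volume service is the xxx.xxx.240.10 address, which only
exists on the public network, not on the management network.)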

-martin

On 30.05.2013 22:22, Martin Mailand wrote:
> Hi Josh,
> 
> On 30.05.2013 21:17, Josh Durgin wrote:
>> It's trying to talk to the cinder api, and failing to connect at all.
>> Perhaps there's a firewall preventing that on the compute host, or
>> it's trying to use the wrong endpoint for cinder (check the keystone
>> service and endpoint tables for the volume service).
> 
> the keystone endpoint looks like this:
> 
> | dd21ed74a9ac4744b2ea498609f0a86e | RegionOne |
> http://xxx.xxx.240.10:8776/v1/$(tenant_id)s |
> http://192.168.192.2:8776/v1/$(tenant_id)s |
> http://192.168.192.2:8776/v1/$(tenant_id)s |
> 5ad684c5a0154c13b54283b01744181b
> 
> where 192.168.192.2 is the IP from the controller node.
> 
> And from the compute node a telnet 192.168.192.2 8776 is working.
> 
> -martin
> ___
> ceph-users mailing list
> ceph-us...@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 



Re: [Openstack] [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Weiguo,

my answers are inline.

-martin

On 30.05.2013 21:20, w sun wrote:
> I would suggest, on the nova compute host (particularly if you have
> separate compute nodes), that you:
>
> (1) make sure "rbd ls -l -p <pool>" works and /etc/ceph/ceph.conf is
> readable by user nova!!
yes to both
> (2) make sure you can start up a regular ephemeral instance on the
> same nova node (ie, nova-compute is working correctly)
an ephemeral instance is working
> (3) if you are using cephx, make sure the libvirt secret is set up correctly
> per the instructions at ceph.com
I do not use cephx
> (4) look at /var/lib/nova/instance/x/libvirt.xml and check that the
> disk file is pointing to the rbd volume
For an ephemeral instance the folder is created; for a volume-based
instance the folder is not created.

> (5) If all of the above look fine and you still can't perform nova boot
> with the volume, you can try one last thing: manually start up a kvm
> session with the volume, similar to below. At least this will tell you
> whether your qemu has the correct rbd enablement.
>
>   /usr/bin/kvm -m 2048 -drive
> file=rbd:ceph-openstack-volumes/volume-3f964f79-febe-4251-b2ba-ac9423af419f,index=0,if=none,id=drive-virtio-disk0
> -boot c -net nic -net user -nographic  -vnc :1000 -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>
If I start kvm by hand it is working.
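
(Not relevant for my setup since I run without cephx, but for completeness,
point (3) boils down to roughly the following, per the ceph.com docs; the
client name and UUID here are illustrative:

  cat > secret.xml <<EOF
  <secret ephemeral='no' private='no'>
    <usage type='ceph'>
      <name>client.volumes secret</name>
    </usage>
  </secret>
  EOF
  virsh secret-define --file secret.xml          # prints the secret's UUID
  virsh secret-set-value --secret <uuid> --base64 $(ceph auth get-key client.volumes)

plus rbd_user=volumes and rbd_secret_uuid=<uuid> in cinder.conf, so libvirt can
authenticate when attaching the volume.)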

> --weiguo
>
> > Date: Thu, 30 May 2013 16:37:40 +0200
> > From: mar...@tuxadero.com
> > To: ceph-us...@ceph.com
> > CC: openstack@lists.launchpad.net
> > Subject: [ceph-users] Openstack with Ceph, boot from volume
> >
> > Hi Josh,
> >
> > I am trying to use ceph with openstack (grizzly), I have a multi
> host setup.
> > I followed the instruction
> http://ceph.com/docs/master/rbd/rbd-openstack/.
> > Glance is working without a problem.
> > With cinder I can create and delete volumes without a problem.
> >
> > But I cannot boot from volumes.
> > It doesn't matter if I use horizon or the CLI, the VM goes to the error
> > state.
> >
> > From the nova-compute.log I get this.
> >
> > 2013-05-30 16:08:45.224 ERROR nova.compute.manager
> > [req-5679ddfe-79e3-4adb-b220-915f4a38b532
> > 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
> > [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
> > device setup
> > .
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError: [Errno 101]
> > ENETUNREACH
> >
> > What is nova trying to reach? How could I debug that further?
> >
> > Full Log included.
> >
> > -martin
> >
> > Log:
> >
> > ceph --version
> > ceph version 0.61 (237f3f1e8d8c3b85666529860285dcdffdeda4c5)
> >
> > root@compute1:~# dpkg -l|grep -e ceph-common -e cinder
> > ii ceph-common 0.61-1precise
> > common utilities to mount and interact with a ceph storage
> > cluster
> > ii python-cinderclient 1:1.0.3-0ubuntu1~cloud0
> > python bindings to the OpenStack Volume API
> >
> >
> > nova-compute.log
> >
> > 2013-05-30 16:08:45.224 ERROR nova.compute.manager
> > [req-5679ddfe-79e3-4adb-b220-915f4a38b532
> > 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
> > [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
> > device setup
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] Traceback (most recent call last):
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1071,
> > in _prep_block_device
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] return
> > self._setup_block_device_mapping(context, instance, bdms)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 721, in
> > _setup_block_device_mapping
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] volume =
> > self.volume_api.get(context, bdm['volume_id'])
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 193,
> in get
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc]
> > self._reraise_translated_volume_exception(volume_id)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 190,
> in get
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] item =
> > cinderclien