Re: [Openstack] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin

On 05/30/2013 07:37 AM, Martin Mailand wrote:

Hi Josh,

I am trying to use ceph with openstack (grizzly) in a multi-host setup.
I followed the instructions at http://ceph.com/docs/master/rbd/rbd-openstack/.
Glance is working without a problem.
With cinder I can create and delete volumes without a problem.

But I cannot boot from volumes.
It doesn't matter whether I use horizon or the CLI, the VM goes to the error state.

From the nova-compute.log I get this:

2013-05-30 16:08:45.224 ERROR nova.compute.manager
[req-5679ddfe-79e3-4adb-b220-915f4a38b532
8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
[instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
device setup
.
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError: [Errno 101]
ENETUNREACH

What is nova trying to reach? How can I debug this further?


It's trying to talk to the cinder api, and failing to connect at all.
Perhaps there's a firewall preventing that on the compute host, or
it's trying to use the wrong endpoint for cinder (check the keystone
service and endpoint tables for the volume service).
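
For example, with the keystone CLI of that era, something like this shows
which endpoints nova will find in the catalog (adjust for your client
version):

keystone service-list | grep volume
keystone endpoint-list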

Josh


Full Log included.

-martin

Log:

ceph --version
ceph version 0.61 (237f3f1e8d8c3b85666529860285dcdffdeda4c5)

root@compute1:~# dpkg -l|grep -e ceph-common -e cinder
ii  ceph-common  0.61-1precise
common utilities to mount and interact with a ceph storage
cluster
ii  python-cinderclient  1:1.0.3-0ubuntu1~cloud0
python bindings to the OpenStack Volume API


nova-compute.log

2013-05-30 16:08:45.224 ERROR nova.compute.manager
[req-5679ddfe-79e3-4adb-b220-915f4a38b532
8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
[instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
device setup
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] Traceback (most recent call last):
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1071,
in _prep_block_device
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] return
self._setup_block_device_mapping(context, instance, bdms)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 721, in
_setup_block_device_mapping
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] volume =
self.volume_api.get(context, bdm['volume_id'])
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
/usr/lib/python2.7/dist-packages/nova/volume/cinder.py, line 193, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]
self._reraise_translated_volume_exception(volume_id)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
/usr/lib/python2.7/dist-packages/nova/volume/cinder.py, line 190, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] item =
cinderclient(context).volumes.get(volume_id)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
/usr/lib/python2.7/dist-packages/cinderclient/v1/volumes.py, line 180,
in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] return self._get(/volumes/%s
% volume_id, volume)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
/usr/lib/python2.7/dist-packages/cinderclient/base.py, line 141, in _get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] resp, body =
self.api.client.get(url)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
/usr/lib/python2.7/dist-packages/cinderclient/client.py, line 185, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] return self._cs_request(url,
'GET', **kwargs)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
/usr/lib/python2.7/dist-packages/cinderclient/client.py, line 153, in
_cs_request
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] **kwargs)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
/usr/lib/python2.7/dist-packages/cinderclient/client.py, line 123, in
request
2013-05-30 

Re: [Openstack] [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin

On 05/30/2013 01:50 PM, Martin Mailand wrote:

Hi Josh,

I found the problem: nova-compute tries to connect to the publicurl
(xxx.xxx.240.10) of the keystone endpoint, and this IP is not reachable from
the management network.
I thought the internalurl is the one used for the internal
communication of the openstack components, and the publicurl is the IP
for customers of the cluster?
Am I wrong here?


I'd expect that too, but it's determined in nova by the 
cinder_catalog_info option, which defaults to volume:cinder:publicURL.


You can also override it explicitly with
cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s
in your nova.conf.
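
For example, either of these in nova.conf should do it (the endpoint type
name follows the catalog convention; the IP is the internal one from your
endpoint table):

# use the internal endpoint from the keystone catalog
cinder_catalog_info=volume:cinder:internalURL

# or bypass the catalog entirely
cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s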

Josh


-martin

On 30.05.2013 22:22, Martin Mailand wrote:

Hi Josh,

On 30.05.2013 21:17, Josh Durgin wrote:

It's trying to talk to the cinder api, and failing to connect at all.
Perhaps there's a firewall preventing that on the compute host, or
it's trying to use the wrong endpoint for cinder (check the keystone
service and endpoint tables for the volume service).


the keystone endpoint looks like this:

id:          dd21ed74a9ac4744b2ea498609f0a86e
region:      RegionOne
publicurl:   http://xxx.xxx.240.10:8776/v1/$(tenant_id)s
internalurl: http://192.168.192.2:8776/v1/$(tenant_id)s
adminurl:    http://192.168.192.2:8776/v1/$(tenant_id)s
service_id:  5ad684c5a0154c13b54283b01744181b

where 192.168.192.2 is the IP from the controller node.

And from the compute node a telnet 192.168.192.2 8776 is working.

-martin
___
ceph-users mailing list
ceph-us...@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin

On 05/30/2013 02:18 PM, Martin Mailand wrote:

Hi Josh,

that's working.

I have two more things.
1. "The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
update your configuration to the new path." What is the new path?


cinder.volume.drivers.rbd.RBDDriver


2. I have show_image_direct_url=True in glance-api.conf, but the
volumes are not clones of the original images in the images pool.


Set glance_api_version=2 in cinder.conf. The default was changed in
Grizzly.
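
For reference, the relevant cinder.conf settings would then look roughly like
this (the pool name is just the one used in the rbd-openstack guide):

volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
glance_api_version=2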


That's what I did.

root@controller:~/vm_images# !1228
glance add name="Precise Server" is_public=true container_format=ovf
disk_format=raw < ./precise-server-cloudimg-amd64-disk1.raw
Added new image with ID: 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8
root@controller:~/vm_images# rbd -p images -l ls
NAME                                        SIZE PARENT FMT PROT LOCK
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8       2048M          2
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8@snap  2048M          2  yes
root@controller:~/vm_images# cinder create --image-id
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8 --display-name volcli1 10
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2013-05-30T21:08:16.506094      |
| display_description |                 None                 |
|     display_name    |               volcli1                |
|          id         | 34838911-6613-4140-93e0-e1565054a2d3 |
|       image_id      | 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8 |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
root@controller:~/vm_images# cinder list
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
|                  ID                  |    Status   | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
| 34838911-6613-4140-93e0-e1565054a2d3 | downloading |   volcli1    |  10  |     None    |  false   |             |
+--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
root@controller:~/vm_images# rbd -p volumes -l ls
NAME                                          SIZE PARENT FMT PROT LOCK
volume-34838911-6613-4140-93e0-e1565054a2d3  10240M         2

root@controller:~/vm_images#

-martin

On 30.05.2013 22:56, Josh Durgin wrote:

On 05/30/2013 01:50 PM, Martin Mailand wrote:

Hi Josh,

I found the problem, nova-compute tries to connect to the publicurl
(xxx.xxx.240.10) of the keytone endpoints, this ip is not reachable from
the management network.
I thought the internalurl is the one, which is used for the internal
communication of the openstack components and the publicurl is the ip
for customer of the cluster?
Am I wrong here?


I'd expect that too, but it's determined in nova by the
cinder_catalog_info option, which defaults to volume:cinder:publicURL.

You can also override it explicitly with
cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s
in your nova.conf.

Josh


-martin

On 30.05.2013 22:22, Martin Mailand wrote:

Hi Josh,

On 30.05.2013 21:17, Josh Durgin wrote:

It's trying to talk to the cinder api, and failing to connect at all.
Perhaps there's a firewall preventing that on the compute host, or
it's trying to use the wrong endpoint for cinder (check the keystone
service and endpoint tables for the volume service).


the keystone endpoint looks like this:

id:          dd21ed74a9ac4744b2ea498609f0a86e
region:      RegionOne
publicurl:   http://xxx.xxx.240.10:8776/v1/$(tenant_id)s
internalurl: http://192.168.192.2:8776/v1/$(tenant_id)s
adminurl:    http://192.168.192.2:8776/v1/$(tenant_id)s
service_id:  5ad684c5a0154c13b54283b01744181b

where 192.168.192.2 is the IP from the controller node.

And from the compute node a telnet 192.168.192.2 8776 is working.

-martin
___
ceph-users mailing list
ceph-us...@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com






___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin

On 05/30/2013 02:50 PM, Martin Mailand wrote:

Hi Josh,

now everything is working, many thanks for your help, great work.


Great! I added those settings to 
http://ceph.com/docs/master/rbd/rbd-openstack/ so it's easier to figure 
out in the future.



-martin

On 30.05.2013 23:24, Josh Durgin wrote:

I have two more things.
1. "The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
update your configuration to the new path." What is the new path?


cinder.volume.drivers.rbd.RBDDriver


2. I have show_image_direct_url=True in glance-api.conf, but the
volumes are not clones of the original images in the images pool.


Set glance_api_version=2 in cinder.conf. The default was changed in
Grizzly.



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Lost copy-on-write feature when create volume from image with using Ceph RBD

2013-04-14 Thread Josh Durgin

On 04/11/2013 11:08 PM, 陈雷 wrote:

Hi, Guys

Recently I'm testing the new version of OpenStack Grizzly. I'm using Ceph
RBD as the backend storage both for Glance and Cinder.

In the old Folsom version, with the parameter ‘show_image_direct_url=True’
set in glance-api.conf, creating a volume from an image finished almost
immediately, because it just created a snapshot/clone from Ceph's images
pool into the volumes pool.

But now I find that feature is lost: it always performs a convert process,
which downloads the whole image file from Ceph's 'images' pool, converts it
to RAW format, and then uploads it to Ceph's 'volumes' pool, no matter what
the original image format is. This takes much longer than it did on Folsom.


So is this a bug, or have I misconfigured something?


I'd say it's a regression that the default behavior changed, but
if you set glance_api_version=2 in your cinder.conf and restart
cinder-volume it will work.

Josh

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] nova ceph integration

2013-01-31 Thread Josh Durgin

Ceph has been officially production ready for block (rbd) and object
storage (radosgw) for a while. It's just the file system that isn't
ready yet:

http://ceph.com/docs/master/faq/#is-ceph-production-quality

Josh

On 01/31/2013 01:23 PM, Razique Mahroua wrote:

Speaking of which guys,
anything particular stability-wise regarding Ceph within OpenStack. It's
officially not production-ready, yet it's often that solution that comes out
when we are looking for data clustering.
GlusterFS...yah or nay?

Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel : +33 9 72 37 94 15


On 31 Jan. 2013 at 19:43, Sébastien Han (han.sebast...@gmail.com) wrote:


Try to have a look at the boot from volume feature. Basically the disk
base of your instance is an RBD volume from Ceph. Something will remain
in /var/lib/nova/instances, but it's only the kvm xml file.

http://ceph.com/docs/master/rbd/rbd-openstack/?highlight=openstack
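
Roughly like this (the volume ID and flavor are placeholders, and the exact
flags vary a bit between releases):

nova boot --flavor m1.small \
  --block_device_mapping vda=<volume-id>:::0 \
  my-instance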

Cheers!
--
Regards,
Sébastien Han.


On Thu, Jan 31, 2013 at 7:40 AM, Wolfgang Hennerbichler
wolfgang.hennerbich...@risc-software.at wrote:

Hi,

I'm sorry if this has been asked before. My question is: can I integrate ceph
into openstack's nova & cinder in a way that I don't need
/var/lib/nova/instances anymore? I'd like to have EVERYTHING in ceph,
starting from glance images to nova-disk-images and volumes (cinder). And
more important: Can this be done in a way so that my horizon-users can still
use horizon (including snapshotting) without the need of abusing the
command-line? :)
I've read that cinder and glance are supported, but I didn't find much
information on nova-disk-images and ceph.

thanks for a reply
wolfgang



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceph performance as volume image store?

2012-07-24 Thread Josh Durgin

On 07/23/2012 08:24 PM, Jonathan Proulx wrote:

Hi All,

I've been looking at Ceph as a storage back end.  I'm running a
research cluster and while people need to use it and want it 24x7 I
don't need as many nines as a commercial customer facing service does
so I think I'm OK with the current maturity level as far as that goes,
but I have less of a sense of how far along performance is.

My OpenStack deployment is 768 cores across 64 physical hosts which
I'd like to double in the next 12 months.  What it's used for is
widely varying and hard to classify some uses are hundreds of tiny
nodes others are looking to monopolize the biggest physical system
they can get.  I think most really heavy IO currently goes to our NAS
servers rather than through nova-volumes but that could change.

Anyone using ceph at that scale (or preferably larger)?  Does it keep
up if you keep throwing hardware at it?  My proof of concept ceph
cluster on crappy salvaged hardware has proved the concept to me but
has (unsurprisingly) crappy salvaged performance. Trying to get a
sense of what performance expectations I should have given decent
hardware before I decide if I should buy decent hardware for it...

Thanks,
-Jon


Hi Jon,

You might be interested in Jim Schutt's numbers on better hardware:

http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/7487

You'll probably get more response on the ceph mailing list though.

Josh

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceph + Live Migration

2012-07-24 Thread Josh Durgin

On 07/24/2012 01:04 PM, Mark Moseley wrote:

This is more of a sanity check than anything else:

Does the RBDDriver in Diablo support live migration?


Live migration has always been possible with RBD. Management layers
like libvirt or OpenStack may have bugs that make it fail. This sounds
like one of those bugs.


I was playing with this yesterday and couldn't get live migration to
succeed. The errors I was getting seem to trace back to the fact that
the RBDDriver doesn't override VolumeDriver's check_for_export call,
which just defaults to NotImplementedError. Looking at the latest
Folsom code, there's still no check_for_export call in the
RBDDriver, which makes me think that in my PoC install
(which I've switched between qcow2, iscsi and ceph repeatedly) I've
somehow made something unhappy.

2012-07-24 13:44:29 TRACE nova.compute.manager [instance:
79c8c14f-43bf-4eaf-af94-bc578c82f921] [u'Traceback (most recent call
last):\n', u'  File
/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py, line 253, in
_process_data\nrval = node_func(context=ctxt, **node_args)\n', u'
File /usr/lib/python2.7/dist-packages/nova/volume/manager.py, line
294, in check_for_export\nself.driver.check_for_export(context,
volume[\'id\'])\n', u'  File
/usr/lib/python2.7/dist-packages/nova/volume/driver.py, line 459, in
check_for_export\ntid =
self.db.volume_get_iscsi_target_num(context, volume_id)\n', u'  File
/usr/lib/python2.7/dist-packages/nova/db/api.py, line 983, in
volume_get_iscsi_target_num\nreturn
IMPL.volume_get_iscsi_target_num(context, volume_id)\n', u'  File
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line
102, in wrapper\nreturn f(*args, **kwargs)\n', u'  File
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line
2455, in volume_get_iscsi_target_num\nraise
exception.ISCSITargetNotFoundForVolume(volume_id=volume_id)\n',
u'ISCSITargetNotFoundForVolume: No target id found for volume
130.\n'].

What's interesting here is that this KVM instance is only running RBD
volumes, no iSCSI volumes in sight. Here's the drives as they're set
up by openstack in the kvm command:

-drive file=rbd:nova/volume-0082:id=rbd:key=...deleted my
key...==:auth_supported=cephx
none,if=none,id=drive-virtio-disk0,format=raw,cache=none -device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive file=rbd:nova/volume-0083:id=rbd:key=...deleted my
key...==:auth_supported=cephx
none,if=none,id=drive-virtio-disk1,format=raw,cache=none -device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1

But before banging my head against it more (and chastised after
banging my head against the fiemap issue, which turned out to be an
actual bug), I figured I'd see if it was even possible. In Sebastien
Han's (completely fantastic) Ceph+Openstack article, it doesn't sound
like he was able to get RBD-based migration working either. Again, I
don't mind debugging further but wanted to make sure I wasn't
debugging something that wasn't actually there. Incidentally, if I put
something like this in
/usr/lib/python2.7/dist-packages/nova/volume/manager.py in the
for-loop

if not volume[ 'iscsi_target' ]: continue

before the call to self.driver.check_for_export(), then live migration
*seems* to work, but obviously I might be setting up some ugly corner
case.


It should work, and if that workaround works, you could instead add

def check_for_export(self, context, volume_id):
    pass

to the RBDDriver. It looks like the check_for_export method was
added and relied upon without modifying all VolumeDriver subclasses, so
e.g. sheepdog would have the same problem.


If live migration isn't currently possible for RBD, anyone know if
it's in a roadmap?


If this is still a problem in trunk I'll make sure it's fixed before
Folsom is released.

Josh

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceph + Live Migration

2012-07-24 Thread Josh Durgin

On 07/24/2012 05:10 PM, Mark Moseley wrote:

It should work, and if that workaround works, you could instead add

def check_for_export(self, context, volume_id):
    pass


I'll try that out. That's a heck of a lot cleaner, plus I just picked
that if not volume[ 'iscsi_target' ] because it was the only
attribute I could find, but that was just for messing around. I wasn't
able to find any attribute of the volume object that said that it
was an RBD volume.

My real concern is that the check_for_export is somehow important and
I might be missing something else down the road. That said, I've been
able to migrate back and forth just fine after I put that initial hack
in.


All the export-related functions only matter for iscsi or similar
drivers that require a block device on the host. Since qemu talks to
rbd directly, there's no need for those methods to do anything. If
someone wanted to make a volume driver for the kernel rbd module, for 
example, then the export methods would make sense.
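
For comparison, the libvirt definition for such a volume is a network disk
along these lines (monitor hosts and auth omitted for brevity):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='nova/volume-0082'/>
  <target dev='vda' bus='virtio'/>
</disk>

There's no block device on the host at all, which is why the export methods
have nothing to do.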



to the RBDDriver. It looks like the check_for_export method was
added and relied upon without modifying all VolumeDriver subclasses, so
e.g. sheepdog would have the same problem.



If live migration isn't currently possible for RBD, anyone know if
it's in a roadmap?



If this is still a problem in trunk I'll make sure it's fixed before
Folsom is released.


Cool, thanks!

Incidentally (and I can open up a new thread if you like), I was also
going to post here about your quote in Sebastien's article:

<quote>
What’s missing is that OpenStack doesn’t yet have the ability to
initialize a volume from an image. You have to put an image on one
yourself before you can boot from it currently. This should be fixed
in the next version of OpenStack. Booting off of RBD is nice because
you can do live migration, although I haven’t tested that with
OpenStack, just with libvirt. For Folsom, we hope to have
copy-on-write cloning of images as well, so you can store images in
RBD with glance, and provision instances booting off cloned RBD
volumes in very little time.
</quote>

I just wanted to confirm I'm reading it right:

With the above features, we'll be able to use --block_device_mapping
vda mapped to an RBD volume *and* a glance-based image on the same
instance and it'll clone that image onto the RBD volume? Is that a
correct interpretation? That'd be indeed sweet.


I'm not sure that'll be the exact interface, but that's the idea.


Right now I'm either booting with an image, which forces me to use
qcow2 and a second RBD-based volume (with the benefit of immediately
running), or booting with both vda and vdb on RBD volumes (but then
have to install the vm from scratch, OS-wise). Of course, I might be
missing a beautiful, already-existing 3rd option that I'm just not
aware of :)


Another option is to do some custom hack with image files and
'rbd rm vol-foo && rbd import file vol-foo' on the backend so you don't
need to copy the data within a vm.

Josh

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Can't attach a RBD volume to an instance

2012-07-13 Thread Josh Durgin

On 07/13/2012 03:09 PM, Travis Rhoden wrote:

Was this problem ever resolved?  I'm having the identical issue right now
(perhaps because I am following Sebastien's guide [0]).  I found this
question raised on LP [1], but no helpful responses, and no linked bug.  I
can confirm that I can attach an RBD instance manually through virsh, no
problem. So it is not a libvirt thing.  It sure appears to be a faulty disk
attachment definition from OpenStack.


I didn't notice this launchpad question before, but I'll copy my answer
below for the mailing list archives:

Right now the RBD driver in OpenStack requires you to have your monitor 
addresses in /etc/ceph/ceph.conf. When you have them there, putting them 
in the libvirt xml is unnecessary. You do have to make sure that the 
unix user who ends up running qemu/kvm has read access to 
/etc/ceph/ceph.conf, and e.g. apparmor isn't preventing qemu/kvm from 
reading it.
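
A minimal /etc/ceph/ceph.conf on the compute host could look something like
this (monitor name and address are placeholders for your cluster):

[global]
    auth supported = cephx

[mon.a]
    host = mon-a
    mon addr = 192.168.0.10:6789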


This dependency on /etc/ceph/ceph.conf should be removed in the future.


I'm running the latest packages on Ubuntu 12.04 for OpenStack (essex) and
Ceph (argonaut).

I can confirm that when using nova volume-attach it does not matter what device
you pick: /dev/vdb, /dev/vdf, whatever. It always fails. Attaching /dev/vdb
manually works great.


If the answer above doesn't help, what error message do you get?
Is there any info in the libvirt logs for the instance?

Josh


[0]
http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/
[1] https://answers.launchpad.net/nova/+question/201366

  - Travis

On Fri, May 25, 2012 at 8:32 PM, Josh Durgin josh.dur...@inktank.com wrote:


On 05/25/2012 01:31 AM, Sébastien Han wrote:


Hi everyone,

I setup a ceph cluster and I use the RBD driver for nova-volume.
I can create volumes and snapshots but currently I can't attach them to
an instance.
Apparently the volume is detected as busy but it's not, no matter which
name I choose.
I tried from horizon and the command line, same issue.

$ nova --debug volume-attach de3743be-69be-4ec3-8de3-fc7df6903a97 12
/dev/vdc
connect: (localhost, 5000)
send: 'POST /v2.0/tokens HTTP/1.1\r\nHost:
localhost:5000\r\nContent-Length: 100\r\ncontent-type:
application/json\r\naccept-encoding: gzip, deflate\r\naccept:
application/json\r\nuser-agent: python-novaclient\r\n\r\n{auth:
{tenantName: admin, passwordCredentials: {username: admin,
password: admin}}}'
reply: 'HTTP/1.1 200 OK\r\n'
header: Content-Type: application/json
header: Vary: X-Auth-Token
header: Date: Fri, 25 May 2012 08:29:46 GMT
header: Transfer-Encoding: chunked
connect: (172.17.1.6, 8774)
send: u'GET
/v2/d1f5d27ccf594cdbb034c8a4123494e9/servers/de3743be-69be-4ec3-8de3-fc7df6903a97
HTTP/1.1\r\nHost: 172.17.1.6:8774\r\nx-auth-project-id: admin\r\nx-auth-token:
37ff6354b9114178889128175494c666\r\naccept-encoding: gzip,
deflate\r\naccept: application/json\r\nuser-agent:
python-novaclient\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: X-Compute-Request-Id: req-fc7686dd-7dff-4ea2-b2dd-be584dadb7c1
header: Content-Type: application/json
header: Content-Length: 1371
header: Date: Fri, 25 May 2012 08:29:46 GMT
send: u'POST
/v2/d1f5d27ccf594cdbb034c8a4123494e9/servers/de3743be-69be-4ec3-8de3-fc7df6903a97/os-volume_attachments
HTTP/1.1\r\nHost: 172.17.1.6:8774\r\nContent-Length: 60\r\nx-auth-project-id:
admin\r\naccept-encoding: gzip, deflate\r\naccept:
application/json\r\nx-auth-token:
37ff6354b9114178889128175494c666\r\nuser-agent:
python-novaclient\r\ncontent-type:
application/json\r\n\r\n{volumeAttachment: {device: /dev/vdc,
volumeId: 12}}'
reply: 'HTTP/1.1 200 OK\r\n'
header: X-Compute-Request-Id: req-4a22f603-4aad-4ae9-99a2-6a7bcacf974f
header: Content-Type: application/json
header: Content-Length: 48
header: Date: Fri, 25 May 2012 08:29:46 GMT
$ echo $?
0

See the logs here:
http://paste.openstack.org/show/18163/



Hi Seb,

It looks like the DeviceIsBusy exception is raised when libvirt
returns failure for any reason when attaching a volume. Try turning on
debug logging for libvirt and attaching again, and posting the logs
from libvirt.

You might need to upgrade libvirt - the version in Ubuntu 12.04 should
work, or upstream libvirt 0.9.12.

Josh


  If you need more information let me know :)

Thank you in advance.

Cheers.

~Seb










___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceph/OpenStack integration on Ubuntu precise: horribly broken, or am I doing something wrong?

2012-06-08 Thread Josh Durgin

Hi Florian,

There's an Ubuntu bug already:

https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/981130

librgw was not complete, and wasn't actually used by radosgw, so it was
dropped. The Ubuntu package just needs to be updated to remove the
dependency and rgw.py, like upstream did.

Josh

On 06/08/2012 08:24 AM, Florian Haas wrote:

Hi everyone,

apologies for the cross-post, and not sure if this is new information.
I did do a cursory check of both list archives and didn't find
anything pertinent, so here goes. Feel free to point me to an existing
thread if I'm merely regurgitating something that's already known.

Either I'm doing something terribly wrong, or the current state of
OpenStack/Ceph integration in Ubuntu precise is somewhat suboptimal.
At least as far as sticking to packages available in Ubuntu repos is
concerned.

Steps to reproduce:

1. On an installation using 12.04 with current updates, create a RADOS pool.
2. Configure glance to use rbd as its backend storage.
3. Attempt to upload an image.

glance add name="Ubuntu 12.04 cloudimg amd64" is_public=true
container_format=ovf disk_format=qcow2
precise-server-cloudimg-amd64-disk1.img
Uploading image 'Ubuntu 12.04 cloudimg amd64'
======================================[ 92%] 222.109521M/s, ETA  0h  0m  0s
Failed to add image. Got error:
Data supplied was not valid.
Details: 400 Bad Request

The server could not comply with the request since it is either
malformed or otherwise incorrect.

  Error uploading image: (NameError): global name 'rados' is not defined

Digging around in glance/store/rbd.py yields this:

try:
    import rados
    import rbd
except ImportError:
    pass

I will go so far as to say the error handling here could be improved;
however, the Swift store implementation seems to do the same.
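
A variant that at least records why the import failed would make the eventual
error less mysterious (just a sketch, not the actual glance code):

try:
    import rados
    import rbd
except ImportError as e:
    rados = rbd = None
    import logging
    logging.getLogger(__name__).warning(
        "python-ceph (rados/rbd) could not be imported; "
        "the RBD store will be unusable: %s", e)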

Now, in Ubuntu rados.py and rbd.py ship in the python-ceph package,
which is available in universe, but has an unresolvable dependency on
librgw1. librgw1 apparently had been in Ubuntu for quite a while, but
was dropped just before the Essex release:

https://launchpad.net/ubuntu/precise/amd64/radosgw

... and if I read
http://changelogs.ubuntu.com/changelogs/pool/main/c/ceph/ceph_0.41-1ubuntu2/changelog
correctly, then the rationale for it was to drop radosgw since
libfcgi is not in main and the code may not be suitable for LTS. (I
wonder why this wasn't factored out into a separate package then, as
was apparently the case for ceph-mds).

Does this really mean that radosgw functionality was dropped from
Ubuntu because it wasn't considered ready for main, and now it
completely breaks a package in universe that's essential for
Ceph/OpenStack integration? AFAICT the only thing that would be
unaffected by this would be nova-volume (now cinder) which rather than
using the Ceph Python bindings just calls out to the rados and rbd
binaries. But both the glance RBD store and the RADOS Swift and S3
frontends (via radosgw) would be affected.

This can of course all be fixed by using upstream packages from the
Ceph guys (thanks to Sébastien Han for pointing that out to me).

Anyone able to confirm or refute these findings? Should there be an
Ubuntu bug for this? If so, against what package?

Cheers,
Florian
--
To unsubscribe from this list: send the line unsubscribe ceph-devel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Can't attach a RBD volume to an instance

2012-05-25 Thread Josh Durgin

On 05/25/2012 01:31 AM, Sébastien Han wrote:

Hi everyone,

I setup a ceph cluster and I use the RBD driver for nova-volume.
I can create volumes and snapshots but currently I can't attach them to
an instance.
Apparently the volume is detected as busy but it's not, no matter which
name I choose.
I tried from horizon and the command line, same issue.

$ nova --debug volume-attach de3743be-69be-4ec3-8de3-fc7df6903a97 12
/dev/vdc
connect: (localhost, 5000)
send: 'POST /v2.0/tokens HTTP/1.1\r\nHost:
localhost:5000\r\nContent-Length: 100\r\ncontent-type:
application/json\r\naccept-encoding: gzip, deflate\r\naccept:
application/json\r\nuser-agent: python-novaclient\r\n\r\n{auth:
{tenantName: admin, passwordCredentials: {username: admin,
password: admin}}}'
reply: 'HTTP/1.1 200 OK\r\n'
header: Content-Type: application/json
header: Vary: X-Auth-Token
header: Date: Fri, 25 May 2012 08:29:46 GMT
header: Transfer-Encoding: chunked
connect: (172.17.1.6, 8774)
send: u'GET
/v2/d1f5d27ccf594cdbb034c8a4123494e9/servers/de3743be-69be-4ec3-8de3-fc7df6903a97
HTTP/1.1\r\nHost: 172.17.1.6:8774\r\nx-auth-project-id: admin\r\nx-auth-token:
37ff6354b9114178889128175494c666\r\naccept-encoding: gzip,
deflate\r\naccept: application/json\r\nuser-agent:
python-novaclient\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: X-Compute-Request-Id: req-fc7686dd-7dff-4ea2-b2dd-be584dadb7c1
header: Content-Type: application/json
header: Content-Length: 1371
header: Date: Fri, 25 May 2012 08:29:46 GMT
send: u'POST
/v2/d1f5d27ccf594cdbb034c8a4123494e9/servers/de3743be-69be-4ec3-8de3-fc7df6903a97/os-volume_attachments
HTTP/1.1\r\nHost: 172.17.1.6:8774\r\nContent-Length: 60\r\nx-auth-project-id:
admin\r\naccept-encoding: gzip, deflate\r\naccept:
application/json\r\nx-auth-token:
37ff6354b9114178889128175494c666\r\nuser-agent:
python-novaclient\r\ncontent-type:
application/json\r\n\r\n{volumeAttachment: {device: /dev/vdc,
volumeId: 12}}'
reply: 'HTTP/1.1 200 OK\r\n'
header: X-Compute-Request-Id: req-4a22f603-4aad-4ae9-99a2-6a7bcacf974f
header: Content-Type: application/json
header: Content-Length: 48
header: Date: Fri, 25 May 2012 08:29:46 GMT
$ echo $?
0

See the logs here:
http://paste.openstack.org/show/18163/


Hi Seb,

It looks like the DeviceIsBusy exception is raised when libvirt
returns failure for any reason when attaching a volume. Try turning on
debug logging for libvirt and attaching again, and posting the logs
from libvirt.
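
For example, something like this in /etc/libvirt/libvirtd.conf (then restart
libvirtd) enables verbose logging; option names may differ slightly between
libvirt versions:

log_level = 1
log_outputs = "1:file:/var/log/libvirt/libvirtd.log"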

You might need to upgrade libvirt - the version in Ubuntu 12.04 should
work, or upstream libvirt 0.9.12.

Josh


If you need more information let me know :)
Thank you in advance.

Cheers.

~Seb


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] boot from volume use cases

2012-03-22 Thread Josh Durgin

At the last couple volumes meetings, there was some discussion of the
desired behaviors of nova's boot from volume functionality. There were
a few use cases discussed, but there are probably plenty we didn't
think of, so it would be useful to get input from a larger audience.

What use cases for boot from volume are people interested in, and
which should nova support?

Here are a few use cases:

  1) I use an existing instance to manually create a bootable volume
 containing an OS there is no image for (and I cannot create an
 image). I boot the new OS from the volume I created.

  2) I create a bunch of new instances that have no ephemeral disks,
 but are all based on the same image. The data from the image is
 copied to a new volume for each new instance.

  3) I have an instance that has no ephemeral disks, and stores
 everything in a volume. I take periodic snapshots of the
 volume. Later, my original instance becomes corrupted, and I want
 to restore from the last known good snapshot I took. I boot a new
 instance from a new volume created by cloning from the last
 good snapshot.
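
For instance, case 3 might look roughly like this on the command line (the
exact interface is part of what's up for discussion, so treat the syntax as
illustrative only):

# periodic snapshots of the volume the instance runs from
nova volume-snapshot-create <volume-id>
# later: create a new volume from the last good snapshot...
nova volume-create --snapshot_id <snapshot-id> 10
# ...and boot a new instance from it
nova boot --flavor m1.small --block_device_mapping vda=<new-volume-id>:::0 restored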


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] lunr reference iSCSI target driver

2011-05-03 Thread Josh Durgin

On 05/02/2011 01:46 PM, Chuck Thier wrote:

This leads to another interesting question.  While our reference
implementation may not directly expose snapshot functionality, I imagine
other storage implementations could want to. I'm interested to hear what use
cases others would be interested in with snapshots.  The obvious ones are
things like creating a volume based on a snapshot, or rolling a volume back
to a previous snapshot.  I would like others' input here to shape what the
snapshot API might look like.


For RBD we only need the obvious ones:
- create/list/remove snapshots
- create volume from a snapshot
- rollback to a snapshot
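
For what it's worth, these map directly onto existing rbd operations, roughly
(syntax for illustration only; creating a volume from a snapshot would be a
copy on the rbd side):

rbd snap create --snap snap1 volume1     # create a snapshot
rbd snap ls volume1                      # list snapshots
rbd snap rollback --snap snap1 volume1   # roll back the volume
rbd snap rm --snap snap1 volume1         # remove a snapshot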

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp