Re: [Qemu-devel] [PATCH] ceph/rbd block driver for qemu-kvm (v8)

2010-12-06 Thread Kevin Wolf
Am 17.11.2010 22:42, schrieb Christian Brunner:
 Here is another update for the ceph storage driver. It includes changes
 for the annotations Stefan made last week and a few more things Sage
 discovered while looking over the driver again.
 
 I really hope that this time we are not only close, but have reached
 a quality that everyone is satisfied with. Of course, suggestions for
 further improvements are always welcome.
 
 Regards,
 Christian
 
 
 RBD is a block driver for the distributed file system Ceph
 (http://ceph.newdream.net/). The driver uses librados (which
 is part of the Ceph server) for direct access to the Ceph object
 store and runs entirely in userspace. (Yehuda also wrote a
 driver for the Linux kernel that exposes rbd volumes as block
 devices.)
 ---
  Makefile.objs     |    1 +
  block/rbd.c       | 1059 +++++++++++++++++++++++++++++++++++++++++++++++
  block/rbd_types.h |   71 +++++
  configure         |   31 ++
  4 files changed, 1162 insertions(+), 0 deletions(-)
  create mode 100644 block/rbd.c
  create mode 100644 block/rbd_types.h
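As a hedged usage sketch of the rbd:&lt;pool&gt;/&lt;image&gt; filename syntax the driver registers: "data" and "test.img" are the example names used later in this thread, and the actual qemu-img calls need a qemu-img built with rbd support plus a reachable Ceph cluster, so they are guarded here and skipped otherwise.

```shell
#!/bin/sh
# Sketch only: rbd:<pool>/<image> image specs for the new rbd driver.
# Pool "data" and image "test.img" are examples, not fixed names.
IMG="rbd:data/test.img"

# The real calls require qemu-img with rbd support and a running Ceph
# cluster whose monitors are listed in /etc/ceph/ceph.conf.
if command -v qemu-img >/dev/null 2>&1 && [ -e /etc/ceph/ceph.conf ]; then
    qemu-img create -f rbd "$IMG" 4G   # create a 4 GB image in pool "data"
    qemu-img info "$IMG"               # query it back through the driver
else
    echo "skipping: no qemu-img/ceph.conf here; example image spec is $IMG"
fi
```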

This lacks a Signed-off-by. Please merge Yehuda's fix for configure when
you resend the patch.

What's the easiest way to try it out? I tried to use vstart.sh and copy
the generated ceph.conf to /etc/ceph/ceph.conf so that qemu-img etc.
find the monitor address. However, that leads to a hang when I try 'rbd
list' or './qemu-img create -f rbd rbd:data/test.img 4G', so I seem to
be missing something.

The only thing I have achieved so far with my attempts to try it out
(and trying wrong things, of course) is that I stumbled over the
following segfault in librados:

Program received signal SIGSEGV, Segmentation fault.
Objecter::shutdown (this=0x0) at osdc/Objecter.cc:59
59        assert(client_lock.is_locked());  // otherwise event
cancellation is unsafe
(gdb) bt
#0  Objecter::shutdown (this=0x0) at osdc/Objecter.cc:59
#1  0x77ca5ce4 in RadosClient::shutdown (this=0xa58a90) at
librados.cc:392
#2  0x77ca8ccc in rados_deinitialize () at librados.cc:1770
#3  0x0043150c in rbd_create (filename=&lt;value optimized out&gt;,
options=&lt;value optimized out&gt;) at block/rbd.c:304
#4  0x00405f10 in img_create (argc=5, argv=0x7fffde80) at
qemu-img.c:409
#5  0x003c9f01eb1d in __libc_start_main () from /lib64/libc.so.6
#6  0x00403999 in _start ()

Kevin
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Qemu-devel] [PATCH] ceph/rbd block driver for qemu-kvm (v8)

2010-12-06 Thread Yehuda Sadeh Weinraub
On Mon, Dec 6, 2010 at 4:48 AM, Kevin Wolf kw...@redhat.com wrote:


 What's the easiest way to try it out? I tried to use vstart.sh and copy
 the generated ceph.conf to /etc/ceph/ceph.conf so that qemu-img etc.
 find the monitor address. However, that leads to a hang when I try 'rbd
 list' or './qemu-img create -f rbd rbd:data/test.img 4G', so I seem to
 be missing something.

What ceph version are you running? Is your system up? What's the 'ceph
-s' output?


 The only thing I have achieved so far with my attempts to try it out
 (and trying wrong things, of course) is that I stumbled over the
 following segfault in librados:

 Program received signal SIGSEGV, Segmentation fault.
 Objecter::shutdown (this=0x0) at osdc/Objecter.cc:59
 59        assert(client_lock.is_locked());  // otherwise event
 cancellation is unsafe
 (gdb) bt
 #0  Objecter::shutdown (this=0x0) at osdc/Objecter.cc:59
 #1  0x77ca5ce4 in RadosClient::shutdown (this=0xa58a90) at
 librados.cc:392
 #2  0x77ca8ccc in rados_deinitialize () at librados.cc:1770
 #3  0x0043150c in rbd_create (filename=&lt;value optimized out&gt;,
 options=&lt;value optimized out&gt;) at block/rbd.c:304
 #4  0x00405f10 in img_create (argc=5, argv=0x7fffde80) at
 qemu-img.c:409
 #5  0x003c9f01eb1d in __libc_start_main () from /lib64/libc.so.6
 #6  0x00403999 in _start ()


This was a bug in the librados C interface: it ignored errors during
initialization, and later, when it tried to clean up after an operation
failed (because init had failed), it crashed. I pushed a fix for that to
the ceph rc branch (and also to the unstable branch).

The question is still why it failed to initialize in the first place.
Were there any other messages printed? It could be that it still
couldn't find the monitors, or that it failed to authenticate for some
reason (if cephx was being used). You can try turning on logging for
several ceph modules by adding the following to the [global] section of
your ceph.conf:

debug ms = 1
debug rados = 20
debug monc = 10
debug objecter = 10

If everything seems ok and it still doesn't work you can try to run
the rbd utility:

  $ ./rbd create test.img --size=4096 -p data

and you can add '--debug-ms=1 --debug-rados=20 --debug-...' to the
command line too.

Let us know if you still have any problems.

Thanks,
Yehuda




Re: [Qemu-devel] [PATCH] ceph/rbd block driver for qemu-kvm (v8)

2010-12-06 Thread Christian Brunner
2010/12/6 Kevin Wolf kw...@redhat.com:

Hi Kevin,

 This lacks a Signed-off-by. Please merge Yehuda's fix for configure when
 you resend the patch.

I've sent an updated patch.

 What's the easiest way to try it out? I tried to use vstart.sh and copy
 the generated ceph.conf to /etc/ceph/ceph.conf so that qemu-img etc.
 find the monitor address. However, that leads to a hang when I try 'rbd
 list' or './qemu-img create -f rbd rbd:data/test.img 4G', so I seem to
 be missing something.

The simplest ceph.conf I can think of is the following:

[global]
auth supported = none

[mon]
mon data = /ceph/mon$id

[mon0]
host = {hostname}
mon addr = 127.0.0.1:6789

[osd]
osd data = /ceph/osd$id

[osd0]
host = {hostname}
btrfs devs = {devicename}


Replace {hostname} with your `hostname -s` and {devicename} with the
name of an empty volume. Create a directory for the monitor and a
mountpoint for the osd volume:

# mkdir -p /ceph/mon0
# mkdir -p /ceph/osd0

After you have created the ceph.conf file, you can create your Ceph
filesystem with the following command (attention: this will format the
configured volume):

# mkcephfs -c /etc/ceph/ceph.conf --mkbtrfs -a

Now you should be able to start Ceph (assuming you are using the Red Hat RPM):

# service ceph start

Check if ceph is running with `ceph -w` or `rados df`. `qemu-img
create -f rbd rbd:data/test.img 4G` should work now, too.
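The steps above can be consolidated into one hedged script. To keep the sketch runnable without root, it moves the data directories and ceph.conf under /tmp/ceph (adapt the paths, host name, and device to your setup); the cluster commands themselves only run when the ceph tools are actually installed.

```shell
#!/bin/sh
# Hedged sketch of the single-node setup described above.
# /tmp/ceph paths and /dev/sdb are assumptions, not fixed values.
set -e
HOST=$(hostname -s)
DEV=/dev/sdb                       # assumption: an empty volume for the osd

mkdir -p /tmp/ceph/mon0 /tmp/ceph/osd0

# Minimal ceph.conf, mirroring the example in this mail ($id escaped so
# it reaches the file literally).
cat > /tmp/ceph/ceph.conf <<EOF
[global]
	auth supported = none

[mon]
	mon data = /tmp/ceph/mon\$id

[mon0]
	host = $HOST
	mon addr = 127.0.0.1:6789

[osd]
	osd data = /tmp/ceph/osd\$id

[osd0]
	host = $HOST
	btrfs devs = $DEV
EOF

# The remaining steps need the ceph packages installed; skip quietly
# otherwise.
if command -v mkcephfs >/dev/null 2>&1; then
    mkcephfs -c /tmp/ceph/ceph.conf --mkbtrfs -a   # WARNING: formats $DEV
    service ceph start
    ceph -s                                        # sanity-check the cluster
    qemu-img create -f rbd rbd:data/test.img 4G
fi
```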

Regards
Christian