Hello thg,

in the last weeks we spent some time improving RBDSR, an RBD storage
repository for XenServer.
RBDSR can use RBD via FUSE, krbd and rbd-nbd.

Our improvements are based on
https://github.com/rposudnevskiy/RBDSR/tree/v2.0 and are currently
published at https://github.com/vico-research-and-consulting/RBDSR.

We are using it in rbd-nbd mode, because it minimizes kernel
dependencies while offering good flexibility and performance.
(http://docs.ceph.com/docs/master/man/8/rbd-nbd/)
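
For reference, attaching an image through rbd-nbd is a one-liner; the
pool/image names below are just placeholders:

# rbd-nbd map devel-pool/testvm.rbd
-> /dev/nbd0
# rbd-nbd list-mapped
# rbd-nbd unmap /dev/nbd0

Since librbd runs in userspace here, the only kernel dependency is the
generic nbd module.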

The implementation still seems to need lots of improvements, but our
current test results were promising in terms of stability and performance.
I am pretty sure that we will use this in production in a few weeks :-)

This might be an alternative for you if you have in-depth Xen
knowledge and Python programming skills.

Regards
Marc


Our setup will look like this:

  * XEN
      o 220 virtual machines
      o 9 XenServer 7.2 nodes (for licensing reasons)
      o 4 * 1 GBit LACP
  * Ceph
      o Luminous/12.2.5
      o Ubuntu 16.04
      o 5 OSD nodes (24 * 8 TB HDD OSDs, 48 * 1 TB SSD OSDs, Bluestore,
        6 GB cache per OSD; 192 GB RAM and 56 HT CPUs per node)
      o 3 Mons (64 GB RAM, 200 GB SSD, 4 visible CPUs)
      o 2 * 10 GBit SFP+, bonded with xmit_hash_policy layer3+4 for Ceph
        (a config sketch follows below)
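
The Ceph bond on the OSD nodes is plain ifupdown/ifenslave
configuration; a minimal sketch, with interface names and address as
examples:

# /etc/network/interfaces (Ubuntu 16.04)
auto bond0
iface bond0 inet static
    address 10.20.30.11
    netmask 255.255.255.0
    bond-slaves ens1f0 ens1f1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100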


On 2018-05-20 20:15, thg wrote:
> Hi all@list,
>
> my background: I have been doing Xen for 10++ years, many years with
> DRBD for high availability; for some time now I have preferred GlusterFS
> with FUSE as replicated storage, where I place the image files for the VMs.
>
> In my current project we started (successfully) with Xen/GlusterFS too,
> but the provider hosting our servers makes wide use of Ceph, so we
> decided to switch in order to get better support.
>
> Unfortunately I'm new to Ceph, but with the help of a technician we now
> have a 3-node Ceph cluster running that seems to work fine.
>
> Hardware:
> - Xeons, 24 Cores, 256 GB RAM,
>   2x 240 GB system-SSDs RAID1, 4x 1.92 TB data-SSDs (no RAID)
>
> Software we are using:
> - CentOS 7.5.1804
> - Kernel: 4.9.86-30.el7             @centos-virt-xen-48
> - Xen: 4.8.3-5.el7                  @centos-virt-xen-48
> - libvirt-xen: 4.1.0-2.xen48.el7    @centos-virt-xen-48
> - Ceph: 2:12.2.5-0.el7              @Ceph
>
>
> What is working:
> I've converted a VM image to an RBD device, mapped it, mounted it and
> can start it as a PV guest on the Xen hypervisor via xl create:
>
> # qemu-img convert -O rbd img/testvm.img rbd:devel-pool/testvm3.rbd
> # rbd ls -l devel-pool
> -> NAME                          SIZE PARENT FMT PROT LOCK
>    ...
>    testvm3.rbd                 16384M          2
> # rbd info devel-pool/testvm3.rbd
> -> rbd image 'testvm3.rbd':
>        size 16384 MB in 4096 objects
>        order 22 (4096 kB objects)
>        block_name_prefix: rbd_data.fac72ae8944a
>        format: 2
>        features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten
>        flags:
>        create_timestamp: Sun May 20 14:13:42 2018
> # qemu-img info rbd:devel-pool/testvm3.rbd
> -> image: rbd:devel-pool/testvm3.rbd
>    file format: raw
>    virtual size: 16G (17179869184 bytes)
>    disk size: unavailable
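
Side note: "disk size: unavailable" is expected, qemu-img cannot query
the allocated size through rbd. To see provisioned vs. actually used
space, use:

# rbd du devel-pool/testvm3.rbd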
>
> # rbd feature disable devel-pool/testvm3.rbd deep-flatten fast-diff
> object-map   (otherwise mapping does not work)
> # rbd info devel-pool/testvm3.rbd
> -> rbd image 'testvm3.rbd':
>        size 16384 MB in 4096 objects
>        order 22 (4096 kB objects)
>        block_name_prefix: rbd_data.acda2ae8944a
>        format: 2
>        features: layering, exclusive-lock
>        ...
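
Side note: for new images you can avoid the feature-disable step by
creating them with only the features krbd understands, e.g. (image name
is hypothetical):

# rbd create devel-pool/testvm4.rbd --size 16G --image-feature layering,exclusive-lock
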
> # rbd map devel-pool/testvm3.rbd
> -> /dev/rbd0
> # rbd showmapped
> -> id pool       image       snap device
>    0  devel-pool testvm3.rbd -    /dev/rbd0
> # fdisk -l /dev/rbd0
> -> Disk /dev/rbd0: 17.2 GB, 17179869184 bytes, 33554432 sectors
>    Units = sectors of 1 * 512 = 512 bytes
>    Sector size (logical/physical): 512 bytes / 512 bytes
>    ...
>         Device Boot      Start         End      Blocks   Id  System
>    /dev/rbd0p1   *        2048     2099199     1048576   83  Linux
>    /dev/rbd0p2         2099200    29362175    13631488   83  Linux
>    /dev/rbd0p3        29362176    33554431     2096128   82  Linux swap
>    ...
> # mount /dev/rbd0p2 /mnt
> # ll /mnt/
> -> ...
>    lrwxrwxrwx.  1 root root    7 Jan  2 23:42 bin -> usr/bin
>    drwxr-xr-x.  2 root root    6 Jan  2 23:42 boot
>    drwxr-xr-x.  2 root root    6 Jan  2 23:42 dev
>    drwxr-xr-x. 81 root root 8192 May  7 02:08 etc
>    drwxr-xr-x.  8 root root   98 Jan 29 02:19 home
>    ...
>    drwxr-xr-x. 19 root root  267 Jan  3 13:22 var
> # umount /dev/rbd0p2
>
> # cat testvm3.rbd0
> -> name = "testvm3"
>    ...
>    disk = [ "phy:/dev/rbd0,xvda,w" ]
>    ...
> # xl create -c testvm3.rbd0
> -> Parsing config from testvm3.rbd0
>    Using <class 'grub.GrubConf.Grub2ConfigFile'> to parse /grub2/grub.cfg
>    ...
>    Welcome to CentOS Linux 7 (Core)!
>    ...
>    CentOS Linux 7 (Core)
> Kernel 3.10.0-693.11.1.el7.centos.plus.x86_64 on an x86_64
>
>    testvm3 login:
>    ...
>
>
> But this is not really how it should work, because there is no static
> assignment of the RBD to the VM. As far as I understand, there is still
> no native Ceph support in Xen, even though it was announced in 2013, so
> the way to go is via libvirt?
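
libvirt is one way; xl itself can also attach RBD through the qdisk
(QEMU) backend, provided the qemu build has rbd support. We have not
verified this on CentOS, but the disk line would look roughly like:

disk = [ 'format=raw, vdev=xvda, access=rw, backendtype=qdisk, target=rbd:devel-pool/testvm3.rbd:id=libvirt' ]

Note that pygrub still needs local access to the disk contents, so this
mainly helps together with direct kernel boot (see below).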
>
>
> I was following this guide, to setup Ceph with libvirt:
> <http://docs.ceph.com/docs/master/rbd/libvirt/>:
>
> # ceph auth get-or-create client.libvirt mon 'profile rbd' osd 'profile
> rbd pool=devel-pool'
> -> [client.libvirt]
>        key = AQBThwFbGFRYFxxxxxxxxxxxxxxxxxxxxxxxxx==
> # ceph auth ls
> -> ...
>    client.libvirt
>        key: AQBThwFbGFRYFxxxxxxxxxxxxxxxxxxxxxxxxx==
>        caps: [mon] profile rbd
>        caps: [osd] profile rbd pool=devel-pool
>        ...
> # vi secret.xml
> ->
> <secret ephemeral='no' private='no'>
>         <usage type='ceph'>
>                 <name>client.libvirt secret</name>
>         </usage>
> </secret>
>
> # virsh secret-define --file secret.xml
> -> Secret 07f3a0fe-0000-1111-2222-333333333333 created
> # ceph auth get-key client.libvirt > client.libvirt.key
> # cat client.libvirt.key
> -> AQBThwFbGFRYFxxxxxxxxxxxxxxxxxxxxxxxxx==
> # virsh secret-set-value --secret 07f3a0fe-0000-1111-2222-333333333333
> --base64 $(cat client.libvirt.key)
> -> Secret value set
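
A quick sanity check that libvirt stored the key correctly:

# virsh secret-get-value 07f3a0fe-0000-1111-2222-333333333333
-> AQBThwFbGFRYFxxxxxxxxxxxxxxxxxxxxxxxxx==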
>
> # vi xml/testvm3.xml
> ->
> <domain type='xen'>
>   <name>testvm3</name>
>   ...
>   <devices>
>     <disk type='network' device='disk'>
>       <source protocol='rbd' name='devel-pool/testvm3.rbd'>
>         <host name="10.20.30.1" port="6789"/>
>         <host name="10.20.30.2" port="6789"/>
>         <host name="10.20.30.3" port="6789"/>
>       </source>
>       <auth username='libvirt'>
>         <secret type='ceph' uuid='07f3a0fe-0000-1111-2222-333333333333'/>
>       </auth>
>       <target dev='xvda' bus='xen'/>
>     </disk>
>     ...
>
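
Before defining the domain it is worth testing the rbd URI with the
libvirt credentials directly; qemu understands id= and conf= options in
the rbd spec (assuming /etc/ceph/ceph.conf points to a keyring that
contains the client.libvirt key):

# qemu-img info rbd:devel-pool/testvm3.rbd:id=libvirt:conf=/etc/ceph/ceph.conf

If this fails, the problem is authentication/connectivity, not Xen.
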
> # virsh define xml/testvm3.xml
> -> Domain testvm3 defined from xml/testvm3.xml
> # virsh start --console testvm3
> error: Failed to start domain testvm3
> error: internal error: libxenlight failed to create new domain 'testvm3'
>
>
> So "somthing" goes wrong:
>
> # cat /var/log/libvirt/libxl/libxl-driver.log
> -> ...
> 2018-05-20 15:28:15.270+0000: libxl:
> libxl_bootloader.c:634:bootloader_finished: bootloader failed - consult
> logfile /var/log/xen/bootloader.7.log
> 2018-05-20 15:28:15.270+0000: libxl:
> libxl_exec.c:118:libxl_report_child_exitstatus: bootloader [26640]
> exited with error status 1
> 2018-05-20 15:28:15.271+0000: libxl:
> libxl_create.c:1259:domcreate_rebuild_done: cannot (re-)build domain: -3
>
> # cat /var/log/xen/bootloader.7.log
> ->
> Traceback (most recent call last):
>   File "/usr/lib64/xen/bin/pygrub", line 896, in <module>
>     part_offs = get_partition_offsets(file)
>   File "/usr/lib64/xen/bin/pygrub", line 113, in get_partition_offsets
>     image_type = identify_disk_image(file)
>   File "/usr/lib64/xen/bin/pygrub", line 56, in identify_disk_image
>     fd = os.open(file, os.O_RDONLY)
> OSError: [Errno 2] No such file or directory:
> 'rbd:devel-pool/testvm3.rbd:id=libvirt:key=AQBThwFbGFRYFxxxxxxxxxxxxxxxxxxxxxxxxx==:auth_supported=cephx\\;none:mon_host=10.20.30.1\\:6789\\;10.20.30.2\\:6789\\;10.20.30.3\\:6789'
>
>
> So, as far as I read the logs, Xen does not find the RBD device, but I
> have no clue how to solve this :-(
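
The traceback actually gives it away: libxl hands the whole rbd:... URI
to pygrub, and pygrub can only open local files or block devices. One
workaround (untested here; paths are examples) is to skip the
bootloader entirely and boot the guest kernel directly from dom0:

<os>
  <type>linux</type>
  <kernel>/var/lib/xen/boot/vmlinuz-3.10.0-693.11.1.el7.centos.plus.x86_64</kernel>
  <initrd>/var/lib/xen/boot/initramfs-3.10.0-693.11.1.el7.centos.plus.x86_64.img</initrd>
  <cmdline>root=/dev/xvda2 ro</cmdline>
</os>

with kernel and initramfs copied out of the guest image beforehand.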
>
>
> Thanks a lot for your hints,
