Re: [ceph-users] Ceph - Xen accessing RBDs through libvirt

2018-05-24 Thread thg
Hi Eugen, hi all,

thank you very much for your answer!

>> So "somthing" goes wrong:
>>
>> # cat /var/log/libvirt/libxl/libxl-driver.log
>> -> ...
>> 2018-05-20 15:28:15.270+: libxl:
>> libxl_bootloader.c:634:bootloader_finished: bootloader failed - consult
>> logfile /var/log/xen/bootloader.7.log
>> 2018-05-20 15:28:15.270+: libxl:
>> libxl_exec.c:118:libxl_report_child_exitstatus: bootloader [26640]
>> exited with error status 1
>> 2018-05-20 15:28:15.271+: libxl:
>> libxl_create.c:1259:domcreate_rebuild_done: cannot (re-)build domain: -3
>>
>> # cat /var/log/xen/bootloader.7.log
>> ->
>> Traceback (most recent call last):
>>   File "/usr/lib64/xen/bin/pygrub", line 896, in 
>>     part_offs = get_partition_offsets(file)
>>   File "/usr/lib64/xen/bin/pygrub", line 113, in get_partition_offsets
>>     image_type = identify_disk_image(file)
>>   File "/usr/lib64/xen/bin/pygrub", line 56, in identify_disk_image
>>     fd = os.open(file, os.O_RDONLY)
>> OSError: [Errno 2] No such file or directory:
>> 'rbd:devel-pool/testvm3.rbd:id=libvirt:key=AQBThwFbGFRYFx==:auth_supported=cephx\\;none:mon_host=10.20.30.1\\:6789\\;10.20.30.2\\:6789\\;10.20.30.3\\:6789'
>>
> 
> we used to work with Xen hypervisors before we switched to KVM, all the
> VMs are within OpenStack. There was one thing we had to configure for
> Xen instances: the base image needed two image properties,
> "hypervisor_type = xen" and "kernel_id = " where the image for
> the kernel_id was uploaded from /usr/lib/grub2/x86_64-xen/grub.xen.
> For VMs independent of OpenStack we had to provide the kernel like this:
> 
> # kernel="/usr/lib/grub2/x86_64-xen/grub.xen"
> kernel="/usr/lib/grub2/i386-xen/grub.xen"
> 
> I'm not sure if this is all that's required in your environment but we
> managed to run Xen VMs with Ceph backend.

I don't think that this is the cause, because as far as I understand the
error, Xen does not even try to look for the kernel or whatever.

On the first access to its image, located on the RBD, it says "file not
found"; otherwise it would say something like "kernel not found".


So any other ideas?
-- 

kind regards,

thg
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph - Xen accessing RBDs through libvirt

2018-05-22 Thread Eugen Block

Hi,


So "somthing" goes wrong:

# cat /var/log/libvirt/libxl/libxl-driver.log
-> ...
2018-05-20 15:28:15.270+: libxl:
libxl_bootloader.c:634:bootloader_finished: bootloader failed - consult
logfile /var/log/xen/bootloader.7.log
2018-05-20 15:28:15.270+: libxl:
libxl_exec.c:118:libxl_report_child_exitstatus: bootloader [26640]
exited with error status 1
2018-05-20 15:28:15.271+: libxl:
libxl_create.c:1259:domcreate_rebuild_done: cannot (re-)build domain: -3

# cat /var/log/xen/bootloader.7.log
->
Traceback (most recent call last):
  File "/usr/lib64/xen/bin/pygrub", line 896, in 
part_offs = get_partition_offsets(file)
  File "/usr/lib64/xen/bin/pygrub", line 113, in get_partition_offsets
image_type = identify_disk_image(file)
  File "/usr/lib64/xen/bin/pygrub", line 56, in identify_disk_image
fd = os.open(file, os.O_RDONLY)
OSError: [Errno 2] No such file or directory:
'rbd:devel-pool/testvm3.rbd:id=libvirt:key=AQBThwFbGFRYFx==:auth_supported=cephx\\;none:mon_host=10.20.30.1\\:6789\\;10.20.30.2\\:6789\\;10.20.30.3\\:6789'


we used to work with Xen hypervisors before we switched to KVM, all  
the VMs are within OpenStack. There was one thing we had to configure  
for Xen instances: the base image needed two image properties,  
"hypervisor_type = xen" and "kernel_id = " where the image  
for the kernel_id was uploaded from /usr/lib/grub2/x86_64-xen/grub.xen.
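
The Glance side of it looked roughly like this (just a sketch from memory,
image names and the UUID are placeholders):

# openstack image create --disk-format aki --container-format aki \
--file /usr/lib/grub2/x86_64-xen/grub.xen grub-xen
# openstack image set --property hypervisor_type=xen \
--property kernel_id=<uuid of the grub-xen image> <base image>

The first command uploads the PV grub binary as a kernel image, the second
attaches it to the base image so the instances boot as Xen PV guests.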

For VMs independent of OpenStack we had to provide the kernel like this:

# kernel="/usr/lib/grub2/x86_64-xen/grub.xen"
kernel="/usr/lib/grub2/i386-xen/grub.xen"

I'm not sure if this is all that's required in your environment but we  
managed to run Xen VMs with Ceph backend.


Regards,
Eugen


Quoting thg:


Hi all@list,

my background: I've been doing Xen for 10++ years, many of them with DRBD
for high availability; for some time now I've preferred GlusterFS with
FUSE as replicated storage, where I place the image files for the VMs.

In my current project we started (successfully) with Xen/GlusterFS too,
but the provider hosting our servers makes wide use of Ceph, so we
decided to switch in order to get better support for it.

Unfortunately I'm new to Ceph, but with the help of a technician we now
have a 3-node Ceph cluster running, and it seems to work fine.

Hardware:
- Xeons, 24 Cores, 256 GB RAM,
  2x 240 GB system-SSDs RAID1, 4x 1.92 TB data-SSDs (no RAID)

Software we are using:
- CentOS 7.5.1804
- Kernel: 4.9.86-30.el7 @centos-virt-xen-48
- Xen: 4.8.3-5.el7  @centos-virt-xen-48
- libvirt-xen: 4.1.0-2.xen48.el7@centos-virt-xen-48
- Ceph: 2:12.2.5-0.el7  @Ceph


What is working:
I've converted a VM to an RBD device, mapped it, mounted it, and can start
it as a PV guest on the Xen hypervisor via xl create:

# qemu-img convert -O rbd img/testvm.img rbd:devel-pool/testvm3.rbd
# rbd ls -l devel-pool
-> NAME  SIZE PARENT FMT PROT LOCK
   ...
   testvm3.rbd 16384M  2
# rbd info devel-pool/testvm3.rbd
-> rbd image 'testvm3.rbd':
   size 16384 MB in 4096 objects
   order 22 (4096 kB objects)
   block_name_prefix: rbd_data.fac72ae8944a
   format: 2
   features: layering, exclusive-lock, object-map, fast-diff,
deep-flatten
   flags:
   create_timestamp: Sun May 20 14:13:42 2018
# qemu-img info rbd:devel-pool/testvm3.rbd
-> image: rbd:devel-pool/testvm3.rbd
   file format: raw
   virtual size: 16G (17179869184 bytes)
   disk size: unavailable

# rbd feature disable devel-pool/testvm2.rbd deep-flatten, fast-diff,
object-map (otherwise mapping does not work)
# rbd info devel-pool/testvm3.rbd
-> rbd image 'testvm3.rbd':
   size 16384 MB in 4096 objects
   order 22 (4096 kB objects)
   block_name_prefix: rbd_data.acda2ae8944a
   format: 2
   features: layering, exclusive-lock
   ...
# rbd map devel-pool/testvm3.rbd
-> /dev/rbd0
# rbd showmapped
-> id pool       image       snap device
   0  devel-pool testvm3.rbd -    /dev/rbd0
# fdisk -l /dev/rbd0
-> Disk /dev/rbd0: 17.2 GB, 17179869184 bytes, 33554432 sectors
   Units = sectors of 1 * 512 = 512 bytes
   Sector size (logical/physical): 512 bytes / 512 bytes
   ...
   Device Boot      Start        End    Blocks  Id  System
   /dev/rbd0p1  *     2048    2099199   1048576  83  Linux
   /dev/rbd0p2     2099200   29362175  13631488  83  Linux
   /dev/rbd0p3    29362176   33554431   2096128  82  Linux swap
   ...
# mount /dev/rbd0p2 /mnt
# ll /mnt/
-> ...
   lrwxrwxrwx.  1 root root7 Jan  2 23:42 bin -> usr/bin
   drwxr-xr-x.  2 root root6 Jan  2 23:42 boot
   drwxr-xr-x.  2 root root6 Jan  2 23:42 dev
   drwxr-xr-x. 81 root root 8192 May  7 02:08 etc
   drwxr-xr-x.  8 root root   98 Jan 29 02:19 home
   ...
   drwxr-xr-x. 19 root root  267 Jan  3 13:22 var
# umount /dev/rbd0p2

# cat testvm3.rbd0
-> name = "testvm3"
   ...
   disk = [ "phy:/dev/rbd0,xvda,w" ]

Re: [ceph-users] Ceph - Xen accessing RBDs through libvirt

2018-05-22 Thread thg
Hi Marc,

> in the last weeks we spent some time improving RBDSR, an RBD storage
> repository for XenServer.
> RBDSR is capable of using RBD via fuse, krbd and rbd-nbd.

I will have a look in this, thank you very much!

> I am pretty sure that we will use this in production in a few weeks :-)

Well, better would be "a few days", to get the dev environment up on
Ceph; it is still running on GlusterFS ;-)

> Probably this might be an alternative for you if you have in-depth Xen
> knowledge and Python programming skills.

We use Xen, not XenServer, thus the xl and not the xe tool stack. Python
would be "doable", but my Ceph knowledge is still quite limited.

So a (working) libvirt solution would be the preferred one, and according
to the documentation it should normally be supported ...
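
What I had in mind is the disk definition from the Ceph/libvirt docs,
roughly like this (just a sketch; the secret UUID is a placeholder for our
cephx key, and I don't know yet whether the libxl driver accepts it the
same way the qemu driver does):

<disk type='network' device='disk'>
  <source protocol='rbd' name='devel-pool/testvm3.rbd'>
    <host name='10.20.30.1' port='6789'/>
    <host name='10.20.30.2' port='6789'/>
    <host name='10.20.30.3' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='...'/>
  </auth>
  <target dev='xvda' bus='xen'/>
</disk>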

-- 

kind regards,

thg
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph - Xen accessing RBDs through libvirt

2018-05-22 Thread Marc Schöchlin
Hello thg,

in the last weeks we spent some time improving RBDSR, an RBD storage
repository for XenServer.
RBDSR is capable of using RBD via fuse, krbd and rbd-nbd.

Our improvements are based on
https://github.com/rposudnevskiy/RBDSR/tree/v2.0 and are currently
published at https://github.com/vico-research-and-consulting/RBDSR.

We are using it in rbd-nbd mode, because it minimizes kernel dependencies
while offering good flexibility and performance.
(http://docs.ceph.com/docs/master/man/8/rbd-nbd/)
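
In a nutshell it works like this (just a sketch, image name taken from your
example; librbd does the I/O in userspace, so unlike krbd it supports all
image features):

# rbd-nbd map devel-pool/testvm3.rbd
-> /dev/nbd0
# rbd-nbd unmap /dev/nbd0

The resulting /dev/nbdX device can then be handed to the VM like any other
block device.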

The implementation still seems to need lots of improvements, but our
current test results were promising from a stability and performance
point of view.
I am pretty sure that we will use this in production in a few weeks :-)

Probably this might be an alternative for you if you have in-depth Xen
knowledge and Python programming skills.

Regards
Marc


Our setup will look like this:

  * XEN
  o 220 virtual machines
  o 9 XenServer 7.2 nodes (because of licensing reasons)
  o 4 * 1 GBit LACP
  * Ceph
  o Luminous/12.2.5
  o Ubuntu 16.04
  o 5 OSD nodes (24 * 8 TB HDD OSDs, 48 * 1 TB SSD OSDs, Bluestore,
    6 GB cache per OSD)
  o per OSD node: 192 GB RAM, 56 HT CPUs
  o 3 MONs (64 GB RAM, 200 GB SSD, 4 visible CPUs)
  o 2 * 10 GBit, SFP+, bonded, xmit_hash_policy layer3+4 for Ceph


On 20.05.2018 at 20:15, thg wrote:
> Hi all@list,
>
> my background: I've been doing Xen for 10++ years, many of them with DRBD
> for high availability; for some time now I've preferred GlusterFS with
> FUSE as replicated storage, where I place the image files for the VMs.
>
> In my current project we started (successfully) with Xen/GlusterFS too,
> but the provider hosting our servers makes wide use of Ceph, so we
> decided to switch in order to get better support for it.
>
> Unfortunately I'm new to Ceph, but with the help of a technician we now
> have a 3-node Ceph cluster running, and it seems to work fine.
>
> Hardware:
> - Xeons, 24 Cores, 256 GB RAM,
>   2x 240 GB system-SSDs RAID1, 4x 1.92 TB data-SSDs (no RAID)
>
> Software we are using:
> - CentOS 7.5.1804
> - Kernel: 4.9.86-30.el7 @centos-virt-xen-48
> - Xen: 4.8.3-5.el7  @centos-virt-xen-48
> - libvirt-xen: 4.1.0-2.xen48.el7@centos-virt-xen-48
> - Ceph: 2:12.2.5-0.el7  @Ceph
>
>
> What is working:
> I've converted a VM to an RBD device, mapped it, mounted it, and can start
> it as a PV guest on the Xen hypervisor via xl create:
>
> # qemu-img convert -O rbd img/testvm.img rbd:devel-pool/testvm3.rbd
> # rbd ls -l devel-pool
> -> NAME  SIZE PARENT FMT PROT LOCK
>...
>testvm3.rbd 16384M  2
> # rbd info devel-pool/testvm3.rbd
> -> rbd image 'testvm3.rbd':
>size 16384 MB in 4096 objects
>order 22 (4096 kB objects)
>block_name_prefix: rbd_data.fac72ae8944a
>format: 2
>features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten
>flags:
>create_timestamp: Sun May 20 14:13:42 2018
> # qemu-img info rbd:devel-pool/testvm3.rbd
> -> image: rbd:devel-pool/testvm3.rbd
>file format: raw
>virtual size: 16G (17179869184 bytes)
>disk size: unavailable
>
> # rbd feature disable devel-pool/testvm2.rbd deep-flatten, fast-diff,
> object-map (otherwise mapping does not work)
> # rbd info devel-pool/testvm3.rbd
> -> rbd image 'testvm3.rbd':
>size 16384 MB in 4096 objects
>order 22 (4096 kB objects)
>block_name_prefix: rbd_data.acda2ae8944a
>format: 2
>features: layering, exclusive-lock
>...
> # rbd map devel-pool/testvm3.rbd
> -> /dev/rbd0
> # rbd showmapped
> -> id pool       image       snap device
>    0  devel-pool testvm3.rbd -    /dev/rbd0
> # fdisk -l /dev/rbd0
> -> Disk /dev/rbd0: 17.2 GB, 17179869184 bytes, 33554432 sectors
>Units = sectors of 1 * 512 = 512 bytes
>Sector size (logical/physical): 512 bytes / 512 bytes
>...
> Device Boot      Start        End    Blocks  Id  System
>    /dev/rbd0p1  *     2048    2099199   1048576  83  Linux
>    /dev/rbd0p2     2099200   29362175  13631488  83  Linux
>    /dev/rbd0p3    29362176   33554431   2096128  82  Linux swap
>...
> # mount /dev/rbd0p2 /mnt
> # ll /mnt/
> -> ...
>lrwxrwxrwx.  1 root root7 Jan  2 23:42 bin -> usr/bin
>drwxr-xr-x.  2 root root6 Jan  2 23:42 boot
>drwxr-xr-x.  2 root root6 Jan  2 23:42 dev
>drwxr-xr-x. 81 root root 8192 May  7 02:08 etc
>drwxr-xr-x.  8 root root   98 Jan 29 02:19 home
>...
>drwxr-xr-x. 19 root root  267 Jan  3 13:22 var
> # umount /dev/rbd0p2
>
> # cat testvm3.rbd0
> -> name = "testvm3"
>...
>disk = [ "phy:/dev/rbd0,xvda,w" ]
>...
> # xl create -c testvm3.rbd0
> -> Parsing config from vpngw1.rbd0
>Using  to parse /grub2/grub.cfg
>...
>Welcome to CentOS Linux 7 (Core)!
>...
>CentOS Linux 7 (Core)
> Kernel