Hi Eneko,

On 07/06/2016 08:27 AM, Eneko Lacunza wrote:
> Hi Alwin,
>
> In Proxmox Ceph client integration is done using librbd, not krbd. Stripe
> parameters can't be defined from Proxmox GUI.
If I understood correctly, this depends on whether you add a Ceph pool to
Proxmox with the KRBD option enabled or not. I know that these settings aren't
done through the GUI, as you need to create the image with the Ceph tools
manually.

> Why are you changing striping parameters?
We have some write-intensive tasks to perform, and striping can boost write
performance. To test this I would like to add such a disk to a VM.

> Cheers
> Eneko
>
> On 05/07/16 at 19:14, Alwin Antreich wrote:
>> Hi all,
>>
>> how can I create an image with the ceph striping feature and add the disk
>> to a VM?
>>
>> Thanks in advance.
>>
>> I can add an image via the rbd cli, but fail to activate it through
>> proxmox (timeout). I manually added the disk file to the VM config under
>> /etc/pve/qemu-server/. In proxmox the storage is added w/o KRBD.
>>
>> rbd -p rbd2 --image-features 3 --stripe-count 8 --stripe-unit 524288 \
>>     --size 4194304 --image-format 2 create vm-208014-disk-2
>>
>> http://docs.ceph.com/docs/hammer/man/8/rbd/
>>
>> ==syslog==
>> Jul 5 18:58:30 hermodr pvedaemon[1340]: worker exit
>> Jul 5 18:58:30 hermodr pvedaemon[3853]: worker 1340 finished
>> Jul 5 18:58:30 hermodr pvedaemon[3853]: starting 1 worker(s)
>> Jul 5 18:58:30 hermodr pvedaemon[3853]: worker 7328 started
>> Jul 5 18:59:01 hermodr pmxcfs[3327]: [status] notice: received log
>> Jul 5 18:59:05 hermodr pvestatd[3789]: status update time (300.209 seconds)
>> Jul 5 19:00:28 hermodr pveproxy[3535]: worker exit
>> Jul 5 19:00:28 hermodr pveproxy[30422]: worker 3535 finished
>> Jul 5 19:00:28 hermodr pveproxy[30422]: starting 1 worker(s)
>> Jul 5 19:00:28 hermodr pveproxy[30422]: worker 7581 started
>> Jul 5 19:02:37 hermodr pvedaemon[7328]: <root@pam> update VM 208014: -ide3
>> rbd2:vm-208014-disk-2
>> Jul 5 19:03:07 hermodr pveproxy[7581]: proxy detected vanished client
>> connection
>> Jul 5 19:04:05 hermodr pvestatd[3789]: status update time (300.220 seconds)
>>
>> ==packages on all cluster nodes==
>> proxmox-ve: 4.2-54 (running kernel: 4.4.10-1-pve)
>> pve-manager: 4.2-15 (running version: 4.2-15/6669ad2c)
>> pve-kernel-4.4.6-1-pve: 4.4.6-48
>> pve-kernel-4.2.6-1-pve: 4.2.6-36
>> pve-kernel-4.2.8-1-pve: 4.2.8-41
>> pve-kernel-4.4.10-1-pve: 4.4.10-54
>> lvm2: 2.02.116-pve2
>> corosync-pve: 2.3.5-2
>> libqb0: 1.0-1
>> pve-cluster: 4.0-42
>> qemu-server: 4.0-81
>> pve-firmware: 1.1-8
>> libpve-common-perl: 4.0-68
>> libpve-access-control: 4.0-16
>> libpve-storage-perl: 4.0-55
>> pve-libspice-server1: 0.12.5-2
>> vncterm: 1.2-1
>> pve-qemu-kvm: 2.5-19
>> pve-container: 1.0-68
>> pve-firewall: 2.0-29
>> pve-ha-manager: 1.0-32
>> ksm-control-daemon: 1.2-1
>> glusterfs-client: 3.5.2-2+deb8u2
>> lxc-pve: 1.1.5-7
>> lxcfs: 2.0.0-pve2
>> cgmanager: 0.39-pve1
>> criu: 1.6.0-1
>> zfsutils: 0.6.5-pve9~jessie
>> ceph: 0.94.7-1~bpo80+1
>>
>> Cheers,
>> Alwin
>> _______________________________________________
>> pve-user mailing list
>> [email protected]
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

Any hint on where to look next is welcome. Thanks.

Cheers,
Alwin
_______________________________________________
pve-user mailing list
[email protected]
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
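P.S. For anyone wondering how the striping parameters in the rbd create
command above interact: the mapping from an image offset to a RADOS object
can be sketched in a few lines. This is a minimal Python illustration of the
layout implied by --stripe-unit 524288 and --stripe-count 8, assuming the rbd
default object size of 4 MiB (order 22); it is not Ceph's actual code, just
the striping arithmetic as documented in the Ceph architecture docs.

```python
# Minimal sketch of RADOS-style striping: map a byte offset in an RBD image
# to (object number, offset within that object).
# Defaults mirror the rbd options above; object_size=4 MiB is an assumption
# (the rbd default, order 22), not something set in the command.

def map_offset(offset, stripe_unit=524288, stripe_count=8, object_size=4 << 20):
    su_per_object = object_size // stripe_unit       # stripe units one object holds
    stripe_width = stripe_unit * stripe_count        # bytes covered by one full stripe
    stripe_no = offset // stripe_width               # which stripe the offset is in
    stripe_pos = (offset % stripe_width) // stripe_unit  # object within the object set
    object_set = stripe_no // su_per_object          # each set spans stripe_count objects
    object_no = object_set * stripe_count + stripe_pos
    block_in_object = stripe_no % su_per_object
    off_in_object = block_in_object * stripe_unit + offset % stripe_unit
    return object_no, off_in_object

# Consecutive 512 KiB units land on different objects:
print(map_offset(0))           # -> (0, 0)
print(map_offset(524288))      # -> (1, 0)
print(map_offset(8 * 524288))  # -> (0, 524288)  second stripe wraps back
```

The point of the fan-out: with stripe_count=8, the first eight 512 KiB units
of a large sequential write hit eight different objects (and thus likely
different OSDs) in parallel, instead of filling one 4 MiB object at a time,
which is why striping can help write-intensive workloads.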
