Re: [ceph-users] Qemu RBD image usage

2019-12-09 Thread Marc Roos
virsh secret-define --file secret.xml
virsh secret-set-value --secret --base64 `ceph auth get-key client.rbd.vps 2>/dev/null`
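
For anyone landing here from a search, the two commands fit together roughly like this (a sketch: the secret XML follows the stock ceph/libvirt docs, the awk extraction of the UUID is illustrative, and client.rbd.vps is taken from the post):

    # secret.xml -- libvirt secret object tied to a Ceph usage name
    cat > secret.xml <<'EOF'
    <secret ephemeral='no' private='no'>
      <usage type='ceph'>
        <name>client.rbd.vps secret</name>
      </usage>
    </secret>
    EOF

    # Define the secret; virsh prints "Secret <uuid> created"
    UUID=$(virsh secret-define --file secret.xml | awk '{print $2}')

    # Load the Ceph key into the secret so it never appears in the domain XML
    virsh secret-set-value --secret "$UUID" \
      --base64 "$(ceph auth get-key client.rbd.vps 2>/dev/null)"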

[ceph-users] Qemu RBD image usage

2019-12-09 Thread Liu, Changcheng
Hi all, I want to attach another RBD image to the Qemu VM to be used as a disk. However, it always fails. The VM definition xml is attached. Could anyone tell me where I went wrong? || nstcc3@nstcloudcc3:~$ sudo virsh start ubuntu_18_04_mysql --console || error: Failed to start dom
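
A disk element along these lines is what the domain XML needs for an extra RBD disk (a sketch only; the pool/image, monitor host, auth user, and secret UUID below are placeholders, not values from the attached XML):

    # rbd-disk.xml -- extra RBD data disk; every name below is a placeholder
    cat > rbd-disk.xml <<'EOF'
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='REPLACE-WITH-LIBVIRT-SECRET-UUID'/>
      </auth>
      <source protocol='rbd' name='rbd/data-disk'>
        <host name='mon-host' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>
    EOF

    # Hot-attach to the running domain; add --config to keep it across restarts
    virsh attach-device ubuntu_18_04_mysql rbd-disk.xml --live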

[ceph-users] qemu/rbd: threads vs native, performance tuning

2018-09-27 Thread Elias Abacioglu
Hi, I was reading this thread: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-March/008486.html And I am trying to get better performance in my virtual machines. These are my RBD settings: "rbd_cache": "true", "rbd_cache_block_writes_upfront": "false", "rbd_cache_max_dirty":
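
For reference, these settings live under [client] in ceph.conf on the hypervisor; a minimal sketch with illustrative values (not tuning advice):

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [client]
    rbd cache = true
    rbd cache writethrough until flush = true
    # Sizes below are examples only; defaults are 32 MiB / 24 MiB / 16 MiB
    rbd cache size = 67108864
    rbd cache max dirty = 50331648
    rbd cache target dirty = 33554432
    EOF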

Re: [ceph-users] qemu-rbd and ceph striping

2016-10-20 Thread Jason Dillaman
On Thu, Oct 20, 2016 at 1:51 AM, Ahmed Mostafa wrote: > different OSDs PGs -- but more or less correct since the OSDs will process requests for a particular PG sequentially and not in parallel. -- Jason

Re: [ceph-users] qemu-rbd and ceph striping

2016-10-19 Thread Ahmed Mostafa
Does this also mean that stripe count can be thought of as the number of parallel writes to different objects at different OSDs? Thank you On Thursday, 20 October 2016, Jason Dillaman wrote: > librbd (used by QEMU to provide RBD-backed disks) uses librados and > provides the necessary handling

Re: [ceph-users] qemu-rbd and ceph striping

2016-10-19 Thread Jason Dillaman
librbd (used by QEMU to provide RBD-backed disks) uses librados and provides the necessary handling for striping across multiple backing objects. When you don't specify "fancy" striping options via "--stripe-count" and "--stripe-unit", it essentially defaults to stripe count of 1 and stripe unit of
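
A minimal sketch of the difference (image name and sizes are arbitrary). With the defaults, each 4 MiB object fills sequentially before the next one is touched; the options below instead round-robin 64 KiB units across 4 objects at a time:

    # Fancy striping: 64 KiB stripe unit, 4 objects per stripe set
    rbd create --size 10240 --stripe-unit 65536 --stripe-count 4 rbd/striped-image

    # "stripe unit" and "stripe count" show up in the image metadata
    rbd info rbd/striped-image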

[ceph-users] qemu-rbd and ceph striping

2016-10-19 Thread Ahmed Mostafa
Hello From the documentation I understand that clients that use librados must perform striping for themselves, but I do not understand how this can be if we have striping options in ceph? I mean I can create rbd images that have configuration for striping, count and unit size. So my question

Re: [ceph-users] Qemu+RBD recommended cache mode and AIO settings

2016-03-22 Thread Ilya Dryomov
On Tue, Mar 22, 2016 at 4:48 PM, Jason Dillaman wrote: >> Hi Jason, >> >> On 22/03/2016 14:12, Jason Dillaman wrote: >> > >> > We actually recommend that OpenStack be configured to use writeback cache >> > [1]. If the guest OS is properly issuing flush requests, the cache will >> still provi

Re: [ceph-users] Qemu+RBD recommended cache mode and AIO settings

2016-03-22 Thread Jason Dillaman
> Hi Jason, > > On 22/03/2016 14:12, Jason Dillaman wrote: > > > > We actually recommend that OpenStack be configured to use writeback cache > > [1]. If the guest OS is properly issuing flush requests, the cache will > > still provide crash-consistency. By default, the cache will automaticall

Re: [ceph-users] Qemu+RBD recommended cache mode and AIO settings

2016-03-22 Thread Loris Cuoghi
Hi Jason, On 22/03/2016 14:12, Jason Dillaman wrote: We actually recommend that OpenStack be configured to use writeback cache [1]. If the guest OS is properly issuing flush requests, the cache will still provide crash-consistency. By default, the cache will automatically start up in wr

Re: [ceph-users] Qemu+RBD recommended cache mode and AIO settings

2016-03-22 Thread Jason Dillaman
> > I've been looking on the internet regarding two settings which might > > influence > > performance with librbd. > > > > When attaching a disk with Qemu you can set a few things: > > - cache > > - aio > > > > The default for libvirt (in both CloudStack and OpenStack) for 'cache' is > > 'none'. I

Re: [ceph-users] Qemu+RBD recommended cache mode and AIO settings

2016-03-22 Thread Loris Cuoghi
Hi Wido, On 22/03/2016 13:52, Wido den Hollander wrote: Hi, I've been looking on the internet regarding two settings which might influence performance with librbd. When attaching a disk with Qemu you can set a few things: - cache - aio The default for libvirt (in both CloudStack and OpenSt

[ceph-users] Qemu+RBD recommended cache mode and AIO settings

2016-03-22 Thread Wido den Hollander
Hi, I've been looking on the internet regarding two settings which might influence performance with librbd. When attaching a disk with Qemu you can set a few things: - cache - aio The default for libvirt (in both CloudStack and OpenStack) for 'cache' is 'none'. Is that still the recommended value
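
For reference, both knobs sit on the disk's <driver> element in the libvirt domain XML; a fragment (the rest of the <disk> definition is omitted):

    <!-- cache='writeback' per the recommendation in this thread; io='threads'
         is the libvirt default, while io='native' generally pairs with cache='none' -->
    <driver name='qemu' type='raw' cache='writeback' io='threads'/>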

Re: [ceph-users] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-12 Thread Stefan Priebe - Profihost AG
lzer" > À: "Alex Crow" > Cc: ceph-users@lists.ceph.com > Envoyé: Samedi 12 Avril 2014 17:56:07 > Objet: Re: [ceph-users] qemu + rbd block driver with cache=writeback, is live > migration safe ? > > > Hello, > >> On Sat, 12 Apr 2014 16:26:40 +0

Re: [ceph-users] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-12 Thread Alexandre DERUMIER
is working, I'm interested) - Original Message - From: "Alex Crow" To: ceph-users@lists.ceph.com Sent: Saturday 12 April 2014 17:26:40 Subject: Re: [ceph-users] qemu + rbd block driver with cache=writeback, is live migration safe ? Hi. I've read in many places that you sh

Re: [ceph-users] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-12 Thread Alexandre DERUMIER
Thanks for the link reference! - Original Message - From: "Christian Balzer" To: "Alex Crow" Cc: ceph-users@lists.ceph.com Sent: Saturday 12 April 2014 17:56:07 Subject: Re: [ceph-users] qemu + rbd block driver with cache=writeback, is live migration safe ? Hello,

Re: [ceph-users] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-12 Thread Andrey Korolyov
Hello, AFAIK qemu calls bdrv_flush at the end of the migration process, so this is absolutely safe. Anyway, it's proven very well by our production systems too :) On Sat, Apr 12, 2014 at 7:01 PM, Alexandre DERUMIER wrote: > Hello, > > I know that qemu live migration with disks with cache=writeback is

Re: [ceph-users] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-12 Thread Christian Balzer
Hello, On Sat, 12 Apr 2014 16:26:40 +0100 Alex Crow wrote: > Hi. > > I've read in many places that you should never use writeback on any kind > of shared storage. Caching is better dealt with on the storage side > anyway as you have hopefully provided resilience there. In fact if your > SAN/

Re: [ceph-users] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-12 Thread Alex Crow
Hi. I've read in many places that you should never use writeback on any kind of shared storage. Caching is better dealt with on the storage side anyway as you have hopefully provided resilience there. In fact if your SAN/NAS is good enough it's supposed to be best to use "none" as the caching

[ceph-users] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-12 Thread Alexandre DERUMIER
Hello, I know that qemu live migration with disks with cache=writeback is not safe with storage like nfs, iscsi... Is it also true with rbd? If yes, is it possible to manually disable writeback online with qmp? Best Regards, Alexandre
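
For what it's worth, the effective cache flags of a running guest can at least be inspected over QMP through libvirt (a sketch; the domain name is a placeholder):

    # Each device in the reply carries a "cache" object; "writeback": true
    # means the device currently runs with a writeback-style cache
    virsh qemu-monitor-command vm-name --pretty '{"execute": "query-block"}'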

Re: [ceph-users] qemu-rbd

2014-03-17 Thread Sebastien Han
There is a RBD engine for FIO, have a look at http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
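
Modeled on the job file in that post, a minimal fio job for the rbd engine looks roughly like this (pool, image, and client name are placeholders; the image must exist before the run):

    cat > rbd.fio <<'EOF'
    # Drives I/O through librbd directly -- no VM or kernel mount involved.
    # clientname is the cephx identity (client.admin here); rbdname must be
    # a pre-created test image in the given pool.
    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=fio-test
    invalidate=0
    rw=randwrite
    bs=4k
    [rbd_iodepth32]
    iodepth=32
    EOF
    fio rbd.fio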

Re: [ceph-users] qemu-rbd

2014-03-11 Thread Kyle Bader
> I tried rbd-fuse and its throughput using fio is approx. 1/4 that of the > kernel client. > > Can you please let me know how to set up the RBD backend for FIO? I'm assuming > this RBD backend is also based on librbd? You will probably have to build fio from source since the rbd engine is new: htt
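
A sketch of that build; the rbd engine needs the librbd headers on the build host (e.g. librbd-dev on Debian/Ubuntu):

    git clone https://github.com/axboe/fio.git
    cd fio
    ./configure   # check that the configure summary reports the rbd engine as enabled
    make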

Re: [ceph-users] qemu-rbd

2014-03-11 Thread Sushma Gurram
On Tue, Mar 11, 2014 at 2:24 PM, Sushma Gurram wrote: > It seems good with master branch. Sorry about the confusion. > > On a side note, is it

Re: [ceph-users] qemu-rbd

2014-03-11 Thread Gregory Farnum
On Tue, Mar 11, 2014 at 2:24 PM, Sushma Gurram wrote: > It seems good with master branch. Sorry about the confusion. > > On a side note, is it possible to create/access the block device using librbd > and run fio on it? ...yes? librbd is the userspace library that QEMU is using to access it to b

Re: [ceph-users] qemu-rbd

2014-03-11 Thread Sushma Gurram
On Tue, Mar 11, 2014 at 1:38 PM, Sushma Gurram wrote: > Hi, > > > > I'm trying to follow the instructions for QEMU rbd installation at > http://ceph.com/docs/master/rbd/qemu-rbd/ > > > > I

Re: [ceph-users] qemu-rbd

2014-03-11 Thread Gregory Farnum
On Tue, Mar 11, 2014 at 1:38 PM, Sushma Gurram wrote: > Hi, > > > > I'm trying to follow the instructions for QEMU rbd installation at > http://ceph.com/docs/master/rbd/qemu-rbd/ > > > > I tried to write a raw qemu image to ceph cluster using the following > command > > qemu-img convert -f raw -O

[ceph-users] qemu-rbd

2014-03-11 Thread Sushma Gurram
Hi, I'm trying to follow the instructions for QEMU rbd installation at http://ceph.com/docs/master/rbd/qemu-rbd/ I tried to write a raw qemu image to the ceph cluster using the following command qemu-img convert -f raw -O raw ../linux-0.2.img rbd:data/linux The OSD seems to be working, but it seems to
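
Taking the command from the post, a quick way to check whether the write actually reached the cluster (pool and image names as in the post; monitors and keyring are picked up from /etc/ceph/ceph.conf):

    # Import the raw image into the 'data' pool as image 'linux'
    qemu-img convert -f raw -O raw ../linux-0.2.img rbd:data/linux

    # Verify it landed
    rbd ls data
    rbd info data/linux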