Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-18 Thread Adeel Nazir
> On 12.12.2014 12:48, Max Power wrote: >> It would be great to shrink the used space. Is there a way to achieve this? Or have I done something wrong? In a professional environment ...

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-18 Thread Josh Durgin
On 12/18/2014 10:49 AM, Travis Rhoden wrote: > One question re: discard support for kRBD -- does it matter which format the RBD is? Format 1 and Format 2 are okay, or just for Format 2? It shouldn't matter which format you use. Josh
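
For reference, a minimal sketch of creating an image in either format (pool "rbd", image name "test" and the size are placeholders, not taken from the thread):

    rbd create rbd/test --size 10240 --image-format 1   # format 1, the historical default
    rbd create rbd/test --size 10240 --image-format 2   # format 2, needed for layering/cloning
    rbd map rbd/test                                     # map with kRBD as usual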

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-18 Thread Travis Rhoden
One question re: discard support for kRBD -- does it matter which format the RBD is? Format 1 and Format 2 are okay, or just for Format 2? - Travis On Mon, Dec 15, 2014 at 8:58 AM, Max Power <mailli...@ferienwohnung-altenbeken.de> wrote: >> Ilya Dryomov wrote on 12 December 2014 at 18:00 ...

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-15 Thread Max Power
> Ilya Dryomov wrote on 12 December 2014 at 18:00: > Just a note, discard support went into 3.18, which was released a few days ago. I recently compiled 3.18 on Debian 7 and what can I say... it works perfectly well. The used space goes up and down again. So I think this wi...
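
For anyone following along, a minimal sketch of the two usual ways to reclaim space on a 3.18+ kernel once the image is mapped (device /dev/rbd0 and mountpoint /mnt/rbd are placeholders):

    mount -o discard /dev/rbd0 /mnt/rbd   # online discard on every delete
    fstrim -v /mnt/rbd                    # or trim in batches, e.g. from cron

Online discard adds a little overhead to every unlink, so a periodic fstrim is often the gentler option.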

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Wido den Hollander
On 12/12/2014 01:17 PM, Max Power wrote: >> Wido den Hollander wrote on 12 December 2014 at 12:53: >> It depends. Kernel RBD does not support discard/trim yet. Qemu does under certain situations and with special configuration. > Ah, thank you. So this is my problem. I use rbd w...
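
A quick way to check whether the kernel you are actually running exposes discard on a mapped image is the request queue limits in sysfs (device name rbd0 is a placeholder; a value of 0 means no discard support):

    cat /sys/block/rbd0/queue/discard_granularity
    cat /sys/block/rbd0/queue/discard_max_bytes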

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Robert Sander
On 12.12.2014 12:48, Max Power wrote: > It would be great to shrink the used space. Is there a way to achieve this? Or have I done something wrong? In a professional environment you may be able to live with filesystems that only grow. But on my small home-cluster this really is a problem. As Wi...

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Ilya Dryomov
On Fri, Dec 12, 2014 at 2:53 PM, Wido den Hollander wrote: > On 12/12/2014 12:48 PM, Max Power wrote: >> I am new to Ceph and am starting to discover its features. I used ext4 partitions (also mounted with -o discard) to place several OSDs on them. Then I created an erasure-coded pool in this c...

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Sebastien Han
Discard works with virtio-scsi controllers for disks in QEMU. Just use discard=unmap in the disk section (scsi disk). > On 12 Dec 2014, at 13:17, Max Power wrote: >> Wido den Hollander wrote on 12 December 2014 at 12:53: >> It depends. Kernel RBD does not support discard/tri...
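
As a rough example (image, IDs and cache mode are placeholders, not Sebastien's exact config), wiring that up on the QEMU command line looks roughly like this; in libvirt the equivalent is discard='unmap' on the disk's <driver> element:

    qemu-system-x86_64 ... \
        -device virtio-scsi-pci,id=scsi0 \
        -drive file=rbd:rbd/test,format=raw,if=none,id=drive0,cache=writeback,discard=unmap \
        -device scsi-hd,drive=drive0,bus=scsi0.0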

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Max Power
> Wido den Hollander wrote on 12 December 2014 at 12:53: > It depends. Kernel RBD does not support discard/trim yet. Qemu does under certain situations and with special configuration. Ah, thank you. So this is my problem. I use rbd with the kernel modules. I think I should port my...

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Wido den Hollander
On 12/12/2014 12:48 PM, Max Power wrote: > I am new to Ceph and am starting to discover its features. I used ext4 partitions (also mounted with -o discard) to place several OSDs on them. Then I created an erasure-coded pool in this cluster. On top of this there is the RADOS block device which holds...

[ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Max Power
I am new to Ceph and am starting to discover its features. I used ext4 partitions (also mounted with -o discard) to place several OSDs on them. Then I created an erasure-coded pool in this cluster. On top of this there is the RADOS block device, which also holds an ext4 filesystem (of course mounted with -...
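
A minimal sketch of the client-side steps being described, with placeholder names (pool "mypool", image "test", device /dev/rbd0, mountpoint /mnt/rbd); pool usage can be compared before and after deleting files:

    rbd create mypool/test --size 10240
    rbd map mypool/test
    mkfs.ext4 /dev/rbd0
    mount -o discard /dev/rbd0 /mnt/rbd
    ceph df        # check used space before and after writing/deleting data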