On 04/17/2014 02:39 AM, Somnath Roy wrote:
It seems discard support for kernel RBD is targeted for v0.80.
http://tracker.ceph.com/issues/190
True, but it will obviously take time before this hits the upstream
kernels and goes into distributions.
For RHEL 7 the krbd module from the Ceph extras repo might work. For Ubuntu
it's a matter of waiting for newer kernels to be backported to the LTS releases.
Wido
Thanks & Regards
Somnath
-----Original Message-----
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Christian Balzer
Sent: Wednesday, April 16, 2014 5:36 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] question on harvesting freed space
On Wed, 16 Apr 2014 13:12:15 -0500 John-Paul Robinson wrote:
So, having learned a bit about fstrim, I ran it on an SSD-backed file
system and it reported space freed. I ran it on an RBD-backed file
system and was told it's not implemented.
This is consistent with the test for FITRIM.
$ cat /sys/block/rbd3/queue/discard_max_bytes
0
This looks like you're using the kernelspace RBD interface.
And very sadly, trim/discard is not implemented in it, which is a bummer for
anybody running, for example, an HA NFS server with RBD as the backing storage.
Even sadder is the fact that this was last brought up a year or more ago.
Only the userspace (librbd) interface supports this; however, the client (KVM
being the prime example) of course needs to use a pseudo disk interface that ALSO
supports it. The standard virtio-blk does not, while the very slow IDE emulation
does, as does the speedier virtio-scsi (which, however, isn't configurable with
Ganeti, for example).
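For reference, with librbd under libvirt/KVM this is usually wired up by
attaching the disk to a virtio-scsi controller and setting discard='unmap'
on the driver. A sketch only: the pool/image name and device names below are
placeholders, and the auth/secret element is omitted:

```xml
<!-- Sketch of a libvirt disk definition backed by librbd, attached via
     virtio-scsi so the guest can issue discards. "libvirt-pool/guest-image"
     is a placeholder; auth/secret configuration is omitted for brevity. -->
<controller type='scsi' model='virtio-scsi'/>
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <source protocol='rbd' name='libvirt-pool/guest-image'/>
  <target dev='sda' bus='scsi'/>
</disk>
```

With discard='unmap', a TRIM issued by the guest is translated into an RBD
discard, so deleted blocks are actually released back to the pool.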
Regards,
Christian
On my SSD-backed device I get:
$ cat /sys/block/sda/queue/discard_max_bytes
2147450880
Is this just not needed by RBD or is cleanup handled in a different way?
I'm wondering what will happen to a thin-provisioned RBD image over time
on a file system with lots of file create/delete activity.
Will the storage in the Ceph pool stay allocated to this application
(the file system) in that case?
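One way to answer that empirically (a sketch; `mypool/myimage` is a
placeholder): `rbd diff` lists the allocated extents of an image as
"offset length type" lines, so summing the length column shows how much of a
thin-provisioned image is actually consumed, regardless of what the file
system inside it thinks:

```shell
# Sum the allocated extents of an RBD image to see real space usage.
# "mypool/myimage" is a placeholder; each "rbd diff" output line is:
#   <offset> <length> <type>
rbd diff mypool/myimage |
    awk '{ sum += $2 } END { printf "%.1f MiB allocated\n", sum/1024/1024 }'
```

Running this before and after deleting files (and trimming, where supported)
shows whether the pool actually got the space back.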
Thanks for any additional insights.
~jpr
On 04/15/2014 04:16 PM, John-Paul Robinson wrote:
Thanks for the insight.
Based on that I found the fstrim command for xfs file systems.
http://xfs.org/index.php/FITRIM/discard
Has anyone had experience using this command with RBD image backends?
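A quick pre-flight check can save a confusing error here (a sketch; the
device and mount point are placeholders): if discard_max_bytes is zero, the
block layer rejects FITRIM, so there is no point running fstrim on a file
system backed by that device:

```shell
# Check whether a block device advertises discard support before
# attempting fstrim on a file system that lives on it.
supports_discard() {
    # $1 is the sysfs path to the device's discard_max_bytes file
    max=$(cat "$1" 2>/dev/null || echo 0)
    [ "$max" -gt 0 ]
}

# Placeholder device (rbd3) and mount point (/mnt/rbd):
if supports_discard /sys/block/rbd3/queue/discard_max_bytes; then
    fstrim -v /mnt/rbd
else
    echo "device does not support discard; fstrim would fail"
fi
```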
~jpr
On 04/15/2014 02:00 PM, Kyle Bader wrote:
I'm assuming Ceph/RBD doesn't have any direct awareness of this
since the file system doesn't traditionally have a "give back blocks"
operation to the block device. Is there anything special RBD does
in this case that communicates the release of the Ceph storage
back to the pool?
VMs running a 3.2+ kernel (IIRC) can "give back blocks" by issuing TRIM.
http://wiki.qemu.org/Features/QED/Trim
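Inside the guest there are then two common ways to actually issue those
TRIMs once the virtual disk supports discard: mount with the discard option
for continuous online trimming, or run fstrim periodically in batches. A
sketch of both (device, mount point, and file system type are placeholders):

```
# /etc/fstab: continuous discard on every delete (can add latency)
/dev/sda1  /data  xfs  defaults,discard  0 2

# /etc/cron.weekly/fstrim: batched trim instead (often the cheaper option)
#!/bin/sh
fstrim -v /data
```

Batched fstrim is frequently preferred because online discard adds overhead
to every unlink, while a weekly pass reclaims the same space in one go.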
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Christian Balzer Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Fusion Communications
http://www.gol.com/
--
Wido den Hollander
42on B.V.
Phone: +31 (0)20 700 9902
Skype: contact42on