Having learned a bit about fstrim, I ran it on an SSD-backed file
system and it reported space freed. When I ran it on an RBD-backed file
system, it told me the operation is not supported.

This is consistent with the FITRIM support check:

$ cat /sys/block/rbd3/queue/discard_max_bytes
0

On my SSD backed device I get:

$ cat /sys/block/sda/queue/discard_max_bytes
2147450880
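In case it's useful to others on the list, here's a rough sketch of how I survey all block devices for this. The sysfs root is passed as a parameter only so the function is easy to exercise outside a live system; in practice you'd point it at /sys/block.

```shell
# check_discard: report, per device under the given sysfs-style root,
# whether discard_max_bytes is nonzero (i.e. discard/TRIM is advertised).
check_discard() {
  root=$1
  for f in "$root"/*/queue/discard_max_bytes; do
    [ -e "$f" ] || continue          # skip if the glob matched nothing
    dev=${f#"$root"/}                # strip the root prefix...
    dev=${dev%%/*}                   # ...leaving just the device name
    if [ "$(cat "$f")" -gt 0 ]; then
      echo "$dev: discard supported"
    else
      echo "$dev: discard not supported"
    fi
  done
}

# On a live system:
#   check_discard /sys/block
```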

Is this just not needed by RBD or is cleanup handled in a different way?

I'm wondering what will happen to a thin-provisioned RBD image over time
on a file system with lots of file create/delete activity.  Will the
storage in the Ceph pool stay allocated to this application (the file
system) in that case?
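One way I understand people estimate how much of a thin-provisioned image is actually allocated is to sum the extent lengths (the second column) of `rbd diff` output. A sketch, with the pool/image names below as placeholders:

```shell
# sum_rbd_usage: sum the extent lengths (column 2) of `rbd diff` output
# on stdin, printing the total allocated bytes for the image.
sum_rbd_usage() {
  awk '{ sum += $2 } END { printf "%d\n", sum }'
}

# Against a live cluster (pool "rbd", image "myimage" are placeholders):
#   rbd diff rbd/myimage | sum_rbd_usage
```

Comparing that total before and after an fstrim (once discard works) would show whether space actually returns to the pool.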

Thanks for any additional insights.

~jpr

On 04/15/2014 04:16 PM, John-Paul Robinson wrote:
> Thanks for the insight.
>
> Based on that I found the fstrim command for XFS file systems.
>
> http://xfs.org/index.php/FITRIM/discard
>
> Has anyone had experience using this command with RBD image backends?
>
> ~jpr
>
> On 04/15/2014 02:00 PM, Kyle Bader wrote:
>>> I'm assuming Ceph/RBD doesn't have any direct awareness of this since
>>> the file system doesn't traditionally have a "give back blocks"
>>> operation to the block device.  Is there anything special RBD does in
>>> this case that communicates the release of the Ceph storage back to the
>>> pool?
>> VMs running a 3.2+ kernel (iirc) can "give back blocks" by issuing TRIM.
>>
>> http://wiki.qemu.org/Features/QED/Trim
>>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
