It looks like he used 'rbd map' to map his volume. If so, then yes, just run
fstrim on the filesystem mounted on that device.
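For example, something along these lines should release the freed blocks back
to Ceph (mountpoints taken from the df output quoted below):

# fstrim -v /mnt/nfsroot/rbd0
# fstrim -v /mnt/nfsroot/rbd1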

If it's an instance with a Cinder volume, or a Nova ephemeral disk (on Ceph), then you
have to use virtio-scsi to be able to issue discards from inside the instance.
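Roughly, that means exposing the disk on a virtio-scsi bus and letting QEMU
pass discards through. A sketch, assuming an OpenStack setup (the image name
is a placeholder, adjust to your deployment):

# openstack image set --property hw_scsi_model=virtio-scsi \
      --property hw_disk_bus=scsi <image>

plus, in nova.conf on the compute nodes:

[libvirt]
hw_disk_discard = unmap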

________________________________
From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Jack 
<c...@jack.fr.eu.org>
Sent: Thursday, February 28, 2019 5:39 PM
To: solarflow99
Cc: Ceph Users
Subject: Re: [ceph-users] rbd space usage

Ha, that was your issue

RBD does not know that the space you freed at the filesystem level is now
available to be reclaimed

You have to trim your filesystem; see fstrim(8) as well as the discard
mount option
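For instance, an illustrative fstab entry with continuous discard (device and
mountpoint taken from the output quoted below; the ext4 filesystem type is an
assumption here):

/dev/rbd0  /mnt/nfsroot/rbd0  ext4  defaults,discard  0 0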

The related SCSI commands have to be passed all the way down the stack, so you
may need to check other levels as well (for instance, your hypervisor's
configuration)
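For a libvirt/QEMU guest, that usually means discard='unmap' on the disk's
<driver> element and a SCSI bus on the <target>; a rough sketch (the device
name is just an example):

    <driver name='qemu' type='raw' discard='unmap'/>
    <target dev='sda' bus='scsi'/>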

Regards,

On 02/28/2019 11:31 PM, solarflow99 wrote:
> yes, but:
>
> # rbd showmapped
> id pool image snap device
> 0  rbd  nfs1  -    /dev/rbd0
> 1  rbd  nfs2  -    /dev/rbd1
>
>
> # df -h
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/rbd0       8.0T  4.8T  3.3T  60% /mnt/nfsroot/rbd0
> /dev/rbd1       9.8T   34M  9.8T   1% /mnt/nfsroot/rbd1
>
>
> only 5T is taken up
>
>
> On Thu, Feb 28, 2019 at 2:26 PM Jack <c...@jack.fr.eu.org> wrote:
>
>> Aren't you using a 3-replica pool?
>>
>> (15745GB + 955GB + 1595M) * 3 ~= 51157G (there is overhead involved)
>>
>> Best regards,
>>
>> On 02/28/2019 11:09 PM, solarflow99 wrote:
>>> thanks, I still can't understand what's taking up all the space, 27.75% raw used
>>>
>>> On Thu, Feb 28, 2019 at 7:18 AM Mohamad Gebai <mge...@suse.de> wrote:
>>>
>>>> On 2/27/19 4:57 PM, Marc Roos wrote:
>>>>> They are 'thin provisioned', meaning if you create a 10GB rbd, it does
>>>>> not use 10GB at the start. (afaik)
>>>>
>>>> You can use 'rbd -p rbd du' to see how much space these images actually
>>>> use versus what is provisioned, and check whether that is consistent.
>>>>
>>>> Mohamad
>>>>
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: solarflow99 [mailto:solarflo...@gmail.com]
>>>>> Sent: 27 February 2019 22:55
>>>>> To: Ceph Users
>>>>> Subject: [ceph-users] rbd space usage
>>>>>
>>>>> Using ceph df, it looks as if RBD images can use the total free space
>>>>> available in the pool they belong to (8.54% used), yet I know they are
>>>>> created with a --size parameter and that's what determines the actual
>>>>> space. I can't understand the difference I'm seeing: only 5T is being
>>>>> used, but ceph df shows 51T:
>>>>>
>>>>>
>>>>> /dev/rbd0       8.0T  4.8T  3.3T  60% /mnt/nfsroot/rbd0
>>>>> /dev/rbd1       9.8T   34M  9.8T   1% /mnt/nfsroot/rbd1
>>>>>
>>>>>
>>>>>
>>>>> # ceph df
>>>>> GLOBAL:
>>>>>     SIZE     AVAIL     RAW USED     %RAW USED
>>>>>     180T      130T       51157G         27.75
>>>>> POOLS:
>>>>>     NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
>>>>>     rbd                 0      15745G     8.54      39999G        4043495
>>>>>     cephfs_data         1      0          0         39999G        0
>>>>>     cephfs_metadata     2      1962       0         39999G        20
>>>>>     spider_stage        9      1595M      0         39999G        47835
>>>>>     spider              10     955G       0.52      39999G        42541237

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
