Gregory Farnum – Thu, 21 January 2016 4:02
On Wed, Jan 20, 2016 at 6:40 PM, Francois Lafont wrote:
> Hi,
>
> On 19/01/2016 07:24, Adam Tygart wrote:
>> It appears that with --apparent-size, du adds the "size" of the
>> directories to the total as well. On most
On Thu, Jan 21, 2016 at 1:20 AM, HMLTH wrote:
On 21/01/2016 03:40, Francois Lafont wrote:
> Ah OK, interesting. I have tested it and noticed, however, that the size
> of a directory is not updated immediately. For instance, if I change the
> size of a regular file in a directory (of cephfs), the size of the
> directory doesn't change immediately
On 19/01/2016 05:19, Francois Lafont wrote:
> However, I still have a question. Since my previous message, additional
> data have been put into the cephfs and the values have changed, as you
> can see:
>
> ~# du -sh /mnt/cephfs/
> 1.2G /mnt/cephfs/
>
> ~# du --apparent-size -sh
Hi,
On 18/01/2016 05:00, Adam Tygart wrote:
> As I understand it:
I think you understand it well. ;)
> 4.2G is used by ceph (all replication, metadata, et al.); it is a sum of
> all the space "used" on the osds.
I confirm that.
> 958M is the actual space the data in cephfs is using (without replication).
It appears that with --apparent-size, du adds the "size" of the
directories to the total as well. On most filesystems this is the
block size, or the amount of metadata space the directory is using. On
CephFS, this size is fabricated to be the sum of the sizes of all
sub-files, i.e. a cheap/free 'du -sh'.
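Adam's point about directory "sizes" is easy to check on a local
filesystem: du --apparent-size (or du -sb) counts each directory's own
st_size on top of the files under it. A minimal sketch using a throwaway
temporary directory (not a path from this thread):

```shell
#!/bin/sh
# A directory's own st_size is included by du --apparent-size.  On local
# filesystems it is typically one block (e.g. 4096 bytes on ext4); on
# CephFS it is fabricated as the recursive size of everything beneath it.
d=$(mktemp -d)
printf 'hello' > "$d/f"   # a 5-byte file

stat -c '%s' "$d"         # the directory's own apparent size
du -sb "$d"               # 5 bytes + the directory's st_size
```

On ext4 the du total would typically be 4101 (4096 + 5); on CephFS the
directory's st_size would itself already reflect the 5 bytes, so summing it
with --apparent-size effectively double-counts the data.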
As I understand it:
4.2G is used by ceph (all replication, metadata, et al.); it is a sum of
all the space "used" on the osds.
958M is the actual space the data in cephfs is using (without replication).
3.8G means you have some sparse files in cephfs.
'ceph df detail' should return something close
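The sparse-file effect Adam mentions can be reproduced on any filesystem:
a file's apparent size (what --apparent-size sums) can be far larger than
the blocks actually allocated (what plain du sums). A small sketch with a
throwaway temp file:

```shell
#!/bin/sh
# Create a 100 MiB sparse file: the apparent size is 100M, but no data
# blocks are allocated, so plain du reports (almost) nothing.
f=$(mktemp)
truncate -s 100M "$f"

du -m "$f"                  # allocated size in MiB: ~0
du -m --apparent-size "$f"  # apparent size in MiB: 100
```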
On 18/01/2016 04:19, Francois Lafont wrote:
> ~# du -sh /mnt/cephfs
> 958M /mnt/cephfs
>
> ~# df -h /mnt/cephfs/
> Filesystem  Size  Used  Avail  Use%  Mounted on
> ceph-fuse    55T  4.2G    55T    1%  /mnt/cephfs
Even with the option --apparent-size, the sizes are
Hello,
Can someone explain to me the difference between the df and du commands
concerning the data used in my cephfs? And which is the correct value,
958M or 4.2G?

~# du -sh /mnt/cephfs
958M /mnt/cephfs

~# df -h /mnt/cephfs/
Filesystem  Size  Used  Avail  Use%  Mounted on
ceph-fuse    55T  4.2G    55T    1%  /mnt/cephfs
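The gap Francois asks about comes from what each tool measures: df asks
the filesystem for aggregate usage via statfs (on CephFS that is raw,
replicated usage across the OSDs), while du walks the tree and stats every
file. The contrast can be seen in miniature on any mount; the sketch below
uses the current directory rather than a path from the thread:

```shell
#!/bin/sh
# df reports what the filesystem claims via statfs(2); du derives usage
# by stat()ing every file.  On CephFS the two legitimately disagree,
# since statfs reflects raw OSD usage including replication.
mnt=$(stat -c '%m' .)   # mountpoint of the current directory (GNU stat)

df -h "$mnt"            # filesystem-level view
du -sh . 2>/dev/null    # tree-walk view of the current directory
```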