Igor,
I think I misunderstood the output of USED. The value reported is the
allocated size, which is why it is not always equal to 1.5*STORED.
For example, when writing a 4K file, Ceph may allocate 64K, which seems to use
more space, but if you write another 4K it can reuse the same blob (I
will validate this guess).
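That allocation guess can be sketched as a simple round-up rule (a minimal illustration only; `allocated` is a hypothetical helper, not a Ceph API, and 64K is bluestore's default min_alloc_size_hdd for spinners, as mentioned later in the thread):

```python
MIN_ALLOC = 64 * 1024  # bluestore min_alloc_size_hdd default for spinners

def allocated(size_bytes: int, min_alloc: int = MIN_ALLOC) -> int:
    """Round a logical write up to the allocation unit."""
    return ((size_bytes + min_alloc - 1) // min_alloc) * min_alloc

print(allocated(4 * 1024))                 # a 4K file still consumes 64K on disk
print(allocated(4 * 1024) // (4 * 1024))   # 16x amplification for this one write
```

Whether a second 4K write lands in the same 64K blob or allocates a new one is exactly the point to validate.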
Norman,
> POOL              ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
> default-fs-data0   9  374 TiB    1.48G  939 TiB  74.71    212 TiB
given the above numbers, the 'default-fs-data0' pool has an average object
size of around 256K (374 TiB / 1.48G objects). Are you sure that the absolute
majority of your objects in this pool are 4M?
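The average-object-size figure is just arithmetic on the quoted `ceph df` row (STORED = 374 TiB, OBJECTS = 1.48G); no Ceph API is involved:

```python
TIB = 2**40
stored = 374 * TIB    # STORED from ceph df
objects = 1.48e9      # OBJECTS from ceph df
avg = stored / objects
print(f"{avg / 1024:.0f} KiB")  # roughly 256K, far below 4M
```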
On 2020-09-08 19:30, norman kern wrote:
> Hi,
>
> I have changed most of pools from 3-replica to ec 4+2 in my cluster,
> when I use ceph df command to show
>
> the used capacity of the cluster:
>
[...]
>
> The USED = 3 * STORED in 3-replica mode is completely right, but for EC
> 4+2 pool
> From: norman
> Sent: 10 September 2020 08:34:42
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: The confusing output of ceph df command
>
> Anyone else met the same problem? Using EC instead of Replica is to save
> space, but now it's worse than replica...
all on our ceph fs pool.
Best regards,
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: norman
Sent: 10 September 2020 08:34:42
To: ceph-users@ceph.io
Subject: [ceph-users] Re: The confusing output of ceph df command
Anyone else met the same problem? Using EC instead of Replica is to save
space, but now it's worse than replica...
On 9/9/2020 7:30 AM, norman kern wrote:
Hi,
I have changed most of the pools from 3-replica to EC 4+2 in my cluster. When
I use the ceph df command to show
the used capacity of the cluster:
Igor,
Thanks for your reply. The object size is 4M and there are almost no
overwrites in the pool, so why does space loss happen in the pool?
I have another cluster with the same config whose USED is almost equal to
1.5*STORED; the difference between them is:
the cluster has different OSD sizes (12T and 8T).
Hi Norman,
not pretending to know the exact root cause, but IMO one working
hypothesis might be as follows:
Presuming spinners as backing devices for your OSDs, and hence a 64K
allocation unit (the bluestore min_alloc_size_hdd parameter):
1) 1.48G user objects result in 1.48G * 6 = 8.88G EC shards
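This hypothesis can be sketched numerically. A rough illustration, under strong simplifying assumptions: every object is exactly the average size computed above (~271 KiB), EC 4+2 splits each object into 4 data + 2 coding shards, and each shard is rounded up to the 64K allocation unit; real object-size distributions will of course differ:

```python
TIB = 2**40
MIN_ALLOC = 64 * 1024            # bluestore min_alloc_size_hdd
K, M = 4, 2                      # EC profile 4+2: 4 data + 2 coding shards
objects = 1.48e9                 # OBJECTS from ceph df
stored = 374 * TIB               # STORED from ceph df
avg_obj = stored / objects       # ~271 KiB average user object

def round_up(n, unit=MIN_ALLOC):
    return -(-n // unit) * unit

data_shard = round_up(avg_obj / K)   # each data shard lands on a 64K boundary
per_object = (K + M) * data_shard    # all 6 shards allocated per object
print(f"allocated ≈ {objects * per_object / TIB:.0f} TiB "
      f"vs ideal {K + M}/{K} * STORED = {(K + M) / K * stored / TIB:.0f} TiB")
```

Under these assumptions the allocation rounding alone already pushes USED far above the ideal 1.5*STORED, which is the kind of gap the thread is observing.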