Yes, there is an imbalance in the PGs assigned to the OSDs.
`ceph osd df` output snippet:
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
 0    hdd  5.45799   1.00000  5.5 TiB  3.6 TiB  3.6 TiB  9.7 MiB  4.6 GiB  1.9 TiB  65.94  1.31   13      up
 1    hdd  5.45799   1.00000  5.5 TiB  1.0 TiB  1.0 TiB  4.4 MiB  1.3 GiB  4.4 TiB  18.87  0.38    9      up
 2    hdd  5.45799   1.00000  5.5 TiB  1.5 TiB  1.5 TiB  4.0 MiB  1.9 GiB  3.9 TiB  28.30  0.56   10      up
 3    hdd  5.45799   1.00000  5.5 TiB  2.1 TiB  2.1 TiB  7.7 MiB  2.7 GiB  3.4 TiB  37.70  0.75   12      up
 4    hdd  5.45799   1.00000  5.5 TiB  4.1 TiB  4.1 TiB  5.8 MiB  5.2 GiB  1.3 TiB  75.27  1.50   20      up
 5    hdd  5.45799   1.00000  5.5 TiB  5.1 TiB  5.1 TiB  5.9 MiB  6.7 GiB  317 GiB  94.32  1.88   18      up
 6    hdd  5.45799   1.00000  5.5 TiB  1.5 TiB  1.5 TiB  5.2 MiB  2.0 GiB  3.9 TiB  28.32  0.56    9      up

MIN/MAX VAR: 0.19/1.88  STDDEV: 22.13
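
Since the fullest OSD (osd.5 at 94% used) is what caps the MAX AVAIL that Ceph reports for the pools, evening out the PG distribution should recover most of the "missing" capacity. A minimal sketch of one way to do that, assuming an Octopus cluster whose clients are all Luminous or newer (required for upmap mode):

  # allow upmap-based balancing (clients must be >= luminous)
  ceph osd set-require-min-compat-client luminous

  # enable the mgr balancer in upmap mode and let it move PGs
  ceph balancer mode upmap
  ceph balancer on

  # check progress and re-check the distribution afterwards
  ceph balancer status
  ceph osd df

With only 9-20 PGs per OSD the distribution will stay fairly lumpy even after balancing, so it may also be worth raising pg_num on the data pool (e.g. ceph osd pool set <pool> pg_num <n>, with the pool name and target left as placeholders here).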

On Sun, Oct 25, 2020 at 12:08 AM Stefan Kooman <ste...@bit.nl> wrote:

> On 2020-10-24 14:53, Amudhan P wrote:
> > Hi,
> >
> > I have created a test Ceph cluster with Ceph Octopus using cephadm.
> >
> > Cluster total RAW disk capacity is 262 TB, but it only allows the use of
> > 132 TB.
> > I have not set a quota for any of the pools. What could be the issue?
>
> Imbalance? What does ceph osd df show? How large is the standard deviation?
>
> Gr. Stefan