[ceph-users] Re: Ceph Stretch Cluster - df pool size (Max Avail)

2022-12-05 Thread Kilian Ries
Looks like we are still waiting for a merge here ... can anybody help out?
Really looking forward to seeing the fix merged ...


https://github.com/ceph/ceph/pull/47189

https://tracker.ceph.com/issues/56650


Thanks


From: Gregory Farnum
Sent: Thursday, 28 July 2022 17:01:34
To: Nicolas FONTAINE
Cc: Kilian Ries; ceph-users
Subject: Re: [ceph-users] Re: Ceph Stretch Cluster - df pool size (Max Avail)

https://tracker.ceph.com/issues/56650

There's a PR in progress to resolve this issue now. (Thanks, Prashant!)
-Greg

On Thu, Jul 28, 2022 at 7:52 AM Nicolas FONTAINE  wrote:
>
> Hello,
>
> We have exactly the same problem. Did you find an answer or should we
> open a bug report?
>
> Sincerely,
>
> Nicolas.

[ceph-users] Re: Ceph Stretch Cluster - df pool size (Max Avail)

2022-07-28 Thread Gregory Farnum
https://tracker.ceph.com/issues/56650

There's a PR in progress to resolve this issue now. (Thanks, Prashant!)
-Greg

On Thu, Jul 28, 2022 at 7:52 AM Nicolas FONTAINE  wrote:
>
> Hello,
>
> We have exactly the same problem. Did you find an answer or should we
> open a bug report?
>
> Sincerely,
>
> Nicolas.

[ceph-users] Re: Ceph Stretch Cluster - df pool size (Max Avail)

2022-07-28 Thread Nicolas FONTAINE

Hello,

We have exactly the same problem. Did you find an answer or should we 
open a bug report?


Sincerely,

Nicolas.


[ceph-users] Re: Ceph Stretch Cluster - df pool size (Max Avail)

2022-06-23 Thread Kilian Ries
Hi Joachim,


Yes, I assigned the stretch rule to the pool (4x replica / 2x min). The rule
says that two replicas should be placed in each datacenter.
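
For reference, a stretch rule that places two replicas in each datacenter usually looks roughly like the sketch below (decompiled CRUSH map syntax; the rule name and id are placeholders, not necessarily what this cluster uses):

rule stretch_rule {
    id 1
    type replicated
    step take default
    # select every datacenter under the root (here: site1 and site2)
    step choose firstn 0 type datacenter
    # then place two replicas on distinct hosts within each datacenter
    step chooseleaf firstn 2 type host
    step emit
}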


$ ceph osd tree
ID   CLASS  WEIGHTTYPE NAME   STATUS  REWEIGHT  PRI-AFF
 -1 62.87799  root default
-17 31.43900  datacenter site1
-15 31.43900  rack b7
 -3 10.48000  host host01
  0ssd   1.74699  osd.0   up   1.0  1.0
  1ssd   1.74699  osd.1   up   1.0  1.0
  2ssd   1.74699  osd.2   up   1.0  1.0
  3ssd   1.74699  osd.3   up   1.0  1.0
  4ssd   1.74699  osd.4   up   1.0  1.0
  5ssd   1.74699  osd.5   up   1.0  1.0
 -5 10.48000  host host02
  6ssd   1.74699  osd.6   up   1.0  1.0
  7ssd   1.74699  osd.7   up   1.0  1.0
  8ssd   1.74699  osd.8   up   1.0  1.0
  9ssd   1.74699  osd.9   up   1.0  1.0
 10ssd   1.74699  osd.10  up   1.0  1.0
 11ssd   1.74699  osd.11  up   1.0  1.0
 -7 10.48000  host host03
 12ssd   1.74699  osd.12  up   1.0  1.0
 13ssd   1.74699  osd.13  up   1.0  1.0
 14ssd   1.74699  osd.14  up   1.0  1.0
 15ssd   1.74699  osd.15  up   1.0  1.0
 16ssd   1.74699  osd.16  up   1.0  1.0
 17ssd   1.74699  osd.17  up   1.0  1.0
-18 31.43900  datacenter site2
-16 31.43900  rack h2
 -9 10.48000  host host04
 18ssd   1.74699  osd.18  up   1.0  1.0
 19ssd   1.74699  osd.19  up   1.0  1.0
 20ssd   1.74699  osd.20  up   1.0  1.0
 21ssd   1.74699  osd.21  up   1.0  1.0
 22ssd   1.74699  osd.22  up   1.0  1.0
 23ssd   1.74699  osd.23  up   1.0  1.0
-11 10.48000  host host05
 24ssd   1.74699  osd.24  up   1.0  1.0
 25ssd   1.74699  osd.25  up   1.0  1.0
 26ssd   1.74699  osd.26  up   1.0  1.0
 27ssd   1.74699  osd.27  up   1.0  1.0
 28ssd   1.74699  osd.28  up   1.0  1.0
 29ssd   1.74699  osd.29  up   1.0  1.0
-13 10.48000  host host06
 30ssd   1.74699  osd.30  up   1.0  1.0
 31ssd   1.74699  osd.31  up   1.0  1.0
 32ssd   1.74699  osd.32  up   1.0  1.0
 33ssd   1.74699  osd.33  up   1.0  1.0
 34ssd   1.74699  osd.34  up   1.0  1.0
 35ssd   1.74699  osd.35  up   1.0  1.0


So according to my calculation it should be:


(6x Nodes * 6x SSD * 1,8TB) / 4 = 16 TB
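
For comparison, a rough cross-check against the CRUSH weights above (a sketch only, assuming MAX AVAIL is simply raw capacity divided by the replica count, ignoring the full ratio and data imbalance):

36 OSDs * 1.74699 TiB           = ~62.9 TiB raw (matches the root weight of 62.87799)
62.9 TiB / 4 replicas           = ~15.7 TiB expected MAX AVAIL
MAX AVAIL reported by "ceph df" = ~7.5 TiB, i.e. roughly half of the expected value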


Could this be a bug in stretch mode, given that I am shown only half of the
available size?
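
For anyone trying to reproduce this, the rule actually applied to the pool and the resulting MAX AVAIL can be checked with something like the following (the pool name is a placeholder, and "stretch_rule" stands for whatever the stretch rule is called here):

$ ceph osd pool ls detail                    # shows size, min_size and crush_rule per pool
$ ceph osd pool get <poolname> crush_rule    # confirm the stretch rule is assigned
$ ceph osd crush rule dump stretch_rule      # inspect the steps of the rule
$ ceph df detail                             # compare MAX AVAIL against the expected ~15 TB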


Regards,

Kilian



From: Clyso GmbH - Ceph Foundation Member
Sent: Wednesday, 22 June 2022 18:20:59
To: Kilian Ries; ceph-users@ceph.io
Subject: Re: [ceph-users] Ceph Stretch Cluster - df pool size (Max Avail)

Hi Kilian,

We do not currently use this mode of Ceph clustering, but normally you
need to assign the CRUSH rule to the pool as well; otherwise it will
use rule 0 as the default.
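
A minimal sketch of that assignment (pool and rule names are placeholders):

$ ceph osd pool set <poolname> crush_rule stretch_rule
$ ceph osd pool get <poolname> crush_rule    # verify it is no longer the default rule (id 0)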

The following calculation for rule 0 would also roughly match:

(3 nodes * 6x SSD * 1,8 TB) / 4 = 8,1 TB

hope it helps, Joachim


___
Clyso GmbH - Ceph Foundation Member

On 22.06.22 at 18:09, Kilian Ries wrote:
> Hi,
>
>
> I'm running a Ceph stretch cluster with two datacenters. Each of the
> datacenters has 3x OSD nodes (6x in total) and 2x monitors. A third monitor
> is deployed as an arbiter node in a third datacenter.
>
>
> Each OSD node has 6x SSDs with 1,8 TB storage - that gives me a total of
> about 63 TB storage (6x nodes * 6x SSD * 1,8 TB = ~63 TB).
>
>
> In stretch mode my pool is configured with 4x replication - and as far as I
> understand this should give me a max pool storage size of ~15 TB (63 TB / 4 =
> 15,75 TB). But if I run "ceph df" it shows me only half of that, about ~7,5 TB.
>
>
>
> $ ceph df
>
> --- RAW STORAGE ---
>
> CLASSSIZE   AVAILUSED  RAW USED  %RAW USED
>
> ssd