[ceph-users] Re: Crush map & rule

2023-11-09 Thread David C.
(I wrote this freehand, test before applying)
If your goal is to have a replication of 3 within a row and to be able to
switch to the secondary row, then you need 2 rules and you change the crush
rule on the pool side:

rule primary_location {
(...)
   step take primary class ssd
   step chooseleaf firstn 0 type host
   step emit
}

rule secondary_loc {
(...)
   step take secondary class ssd
   step chooseleaf firstn 0 type host
   step emit
}
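
Then, to move a pool from one row to the other, you only repoint it at the
other rule (the pool name here is just an example):

ceph osd pool set mypool crush_rule secondary_loc
# and to fail back:
ceph osd pool set mypool crush_rule primary_location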

If the aim is to make a replica 2 across the 2 rows, one copy per row (not recommended):

rule row_repli {
(...)
  step take default class ssd
  step chooseleaf firstn 0 type row
  step emit
}
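
A pool using it would then run with size 2, one copy per row, for example
(pool name and PG count are placeholders):

ceph osd pool create mypool 128 128 replicated row_repli
ceph osd pool set mypool size 2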

If the aim is to distribute the replicas over the 2 rows (for example a 2*2
or 2*3 layout), something like this (the rule name is arbitrary):

rule both_rows {
(...)
   type replicated
   step take primary class ssd
   step chooseleaf firstn 2 type host
   step emit
   step take secondary class ssd
   step chooseleaf firstn 2 type host
   step emit
}
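
With firstn 2 in each take, a pool of size 4 gets 2 copies per row; for a
2*3 layout, use firstn 3 and size 6. To load any of these rules, the usual
round trip is:

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# edit crush.txt and add the rules above
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new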

As far as erasure coding is concerned, I really don't see what's reasonably
possible on this architecture.
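
At best, with only 3 hosts per row, a profile confined to one row cannot go
beyond k+m = 3, e.g. 2+1, which gains little over replication (a sketch
only, names are examples):

ceph osd erasure-code-profile set row_ec_21 k=2 m=1 \
    crush-root=primary crush-device-class=ssd crush-failure-domain=host
ceph osd pool create ecpool 128 128 erasure row_ec_21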


Best regards,

David CASIER

On Thu 9 Nov 2023 at 08:48, Albert Shih wrote:

> On 08/11/2023 at 19:29:19+0100, David C. wrote
> Hi David.
>
> >
> > What would be the number of replicas (in total and on each row) and their
> > distribution on the tree ?
>
> Well “inside” a row that would be 3 in replica mode.
>
> Between rows... well, two ;-)
>
> Besides understanding how to write a rule a little more complex than the
> example in the official documentation, there is another purpose: to try
> to have a procedure for changing the hardware.
>
> For example, if «row primary» holds only old bare-metal servers and I put
> some new servers into the ceph cluster, I want to copy everything
> from the “row primary” to “row secondary”.
>
> Regards
>
> >
> >
> > On Wed 8 Nov 2023 at 18:45, Albert Shih wrote:
> >
> > Hi everyone,
> >
> > I'm a total newbie with ceph, so sorry if I'm asking a stupid
> > question.
> >
> > I'm trying to understand how the crush map & rules work; my goal is to
> > have two groups of 3 servers, so I'm using the “row” bucket:
> >
> > ID   CLASS  WEIGHT    TYPE NAME                 STATUS  REWEIGHT  PRI-AFF
> >  -1         59.38367  root default
> > -15         59.38367      zone City
> > -17         29.69183          row primary
> >  -3          9.89728              host server1
> >   0    ssd   3.49309                  osd.0         up   1.0  1.0
> >   1    ssd   1.74660                  osd.1         up   1.0  1.0
> >   2    ssd   1.74660                  osd.2         up   1.0  1.0
> >   3    ssd   2.91100                  osd.3         up   1.0  1.0
> >  -5          9.89728              host server2
> >   4    ssd   1.74660                  osd.4         up   1.0  1.0
> >   5    ssd   1.74660                  osd.5         up   1.0  1.0
> >   6    ssd   2.91100                  osd.6         up   1.0  1.0
> >   7    ssd   3.49309                  osd.7         up   1.0  1.0
> >  -7          9.89728              host server3
> >   8    ssd   3.49309                  osd.8         up   1.0  1.0
> >   9    ssd   1.74660                  osd.9         up   1.0  1.0
> >  10    ssd   2.91100                  osd.10        up   1.0  1.0
> >  11    ssd   1.74660                  osd.11        up   1.0  1.0
> > -19         29.69183          row secondary
> >  -9          9.89728              host server4
> >  12    ssd   1.74660                  osd.12        up   1.0  1.0
> >  13    ssd   1.74660                  osd.13        up   1.0  1.0
> >  14    ssd   3.49309                  osd.14        up   1.0  1.0
> >  15    ssd   2.91100                  osd.15        up   1.0  1.0
> > -11          9.89728              host server5
> >  16    ssd   1.74660                  osd.16        up   1.0  1.0
> >  17    ssd   1.74660                  osd.17        up   1.0  1.0
> >  18    ssd   3.49309                  osd.18        up   1.0  1.0
> >  19    ssd   2.91100                  osd.19        up   1.0  1.0
> > -13          9.89728              host server6
> >  20    ssd   1.74660                  osd.20        up   1.0  1.0
> >  21    ssd   1.74660                  osd.21        up   1.0  1.0
> >  22    ssd   2.91100                  osd.22        up   1.0  1.0
> >
> > and I want to create some rules; first I'd like to have
> >
> >   a rule «replica» (over host) inside the «row» primary
> >   a rule «erasure» (over host) inside the «row» primary
> >
> > but also two crush rules between primary/secondary, meaning I'd like to
> > have a replica (with only 1 copy of course) of a pool from “row” primary
> > to secondary.

[ceph-users] Re: Crush map & rule

2023-11-08 Thread Albert Shih
On 08/11/2023 at 19:29:19+0100, David C. wrote
Hi David. 

> 
> What would be the number of replicas (in total and on each row) and their
> distribution on the tree ?

Well “inside” a row that would be 3 in replica mode.

Between rows... well, two ;-)

Besides understanding how to write a rule a little more complex than the
example in the official documentation, there is another purpose: to try
to have a procedure for changing the hardware.

For example, if «row primary» holds only old bare-metal servers and I put
some new servers into the ceph cluster, I want to copy everything
from the “row primary” to “row secondary”.
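
(What I imagine, to be confirmed: repoint the pool at a rule targeting the
secondary row and let Ceph backfill onto the new hardware; names here are
examples.)

ceph osd pool set mypool crush_rule secondary_loc
# watch progress until all PGs are active+clean:
ceph -s
ceph osd pool stats mypool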

Regards

> 
> 
On Wed 8 Nov 2023 at 18:45, Albert Shih wrote:
> 
> Hi everyone,
> 
> I'm a total newbie with ceph, so sorry if I'm asking a stupid question.
> 
> I'm trying to understand how the crush map & rules work; my goal is to have
> two groups of 3 servers, so I'm using the “row” bucket:
> 
> ID   CLASS  WEIGHT    TYPE NAME                 STATUS  REWEIGHT  PRI-AFF
>  -1         59.38367  root default
> -15         59.38367      zone City
> -17         29.69183          row primary
>  -3          9.89728              host server1
>   0    ssd   3.49309                  osd.0         up   1.0  1.0
>   1    ssd   1.74660                  osd.1         up   1.0  1.0
>   2    ssd   1.74660                  osd.2         up   1.0  1.0
>   3    ssd   2.91100                  osd.3         up   1.0  1.0
>  -5          9.89728              host server2
>   4    ssd   1.74660                  osd.4         up   1.0  1.0
>   5    ssd   1.74660                  osd.5         up   1.0  1.0
>   6    ssd   2.91100                  osd.6         up   1.0  1.0
>   7    ssd   3.49309                  osd.7         up   1.0  1.0
>  -7          9.89728              host server3
>   8    ssd   3.49309                  osd.8         up   1.0  1.0
>   9    ssd   1.74660                  osd.9         up   1.0  1.0
>  10    ssd   2.91100                  osd.10        up   1.0  1.0
>  11    ssd   1.74660                  osd.11        up   1.0  1.0
> -19         29.69183          row secondary
>  -9          9.89728              host server4
>  12    ssd   1.74660                  osd.12        up   1.0  1.0
>  13    ssd   1.74660                  osd.13        up   1.0  1.0
>  14    ssd   3.49309                  osd.14        up   1.0  1.0
>  15    ssd   2.91100                  osd.15        up   1.0  1.0
> -11          9.89728              host server5
>  16    ssd   1.74660                  osd.16        up   1.0  1.0
>  17    ssd   1.74660                  osd.17        up   1.0  1.0
>  18    ssd   3.49309                  osd.18        up   1.0  1.0
>  19    ssd   2.91100                  osd.19        up   1.0  1.0
> -13          9.89728              host server6
>  20    ssd   1.74660                  osd.20        up   1.0  1.0
>  21    ssd   1.74660                  osd.21        up   1.0  1.0
>  22    ssd   2.91100                  osd.22        up   1.0  1.0
> 
> and I want to create some rules; first I'd like to have
>
>   a rule «replica» (over host) inside the «row» primary
>   a rule «erasure» (over host) inside the «row» primary
>
> but also two crush rules between primary/secondary, meaning I'd like to
> have a replica (with only 1 copy of course) of a pool from “row” primary
> to secondary.
> 
> How can I achieve that ?
> 
> Regards
> 
> 
> 
> --
> Albert SHIH 🦫 🐸
> mer. 08 nov. 2023 18:37:54 CET
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
> 
-- 
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
jeu. 09 nov. 2023 08:39:41 CET
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Crush map & rule

2023-11-08 Thread David C.
Hi Albert,

What would be the number of replicas (in total and on each row) and their
distribution on the tree ?


On Wed 8 Nov 2023 at 18:45, Albert Shih wrote:

> Hi everyone,
>
> I'm a total newbie with ceph, so sorry if I'm asking a stupid question.
>
> I'm trying to understand how the crush map & rules work; my goal is to have
> two groups of 3 servers, so I'm using the “row” bucket:
>
> ID   CLASS  WEIGHTTYPE NAME STATUS  REWEIGHT  PRI-AFF
>  -1 59.38367  root default
> -15 59.38367  zone City
> -17 29.69183  row primary
>  -3  9.89728  host server1
>   0ssd   3.49309  osd.0 up   1.0  1.0
>   1ssd   1.74660  osd.1 up   1.0  1.0
>   2ssd   1.74660  osd.2 up   1.0  1.0
>   3ssd   2.91100  osd.3 up   1.0  1.0
>  -5  9.89728  host server2
>   4ssd   1.74660  osd.4 up   1.0  1.0
>   5ssd   1.74660  osd.5 up   1.0  1.0
>   6ssd   2.91100  osd.6 up   1.0  1.0
>   7ssd   3.49309  osd.7 up   1.0  1.0
>  -7  9.89728  host server3
>   8ssd   3.49309  osd.8 up   1.0  1.0
>   9ssd   1.74660  osd.9 up   1.0  1.0
>  10ssd   2.91100  osd.10up   1.0  1.0
>  11ssd   1.74660  osd.11up   1.0  1.0
> -19 29.69183  row secondary
>  -9  9.89728  host server4
>  12ssd   1.74660  osd.12up   1.0  1.0
>  13ssd   1.74660  osd.13up   1.0  1.0
>  14ssd   3.49309  osd.14up   1.0  1.0
>  15ssd   2.91100  osd.15up   1.0  1.0
> -11  9.89728  host server5
>  16ssd   1.74660  osd.16up   1.0  1.0
>  17ssd   1.74660  osd.17up   1.0  1.0
>  18ssd   3.49309  osd.18up   1.0  1.0
>  19ssd   2.91100  osd.19up   1.0  1.0
> -13  9.89728  host server6
>  20ssd   1.74660  osd.20up   1.0  1.0
>  21ssd   1.74660  osd.21up   1.0  1.0
>  22ssd   2.91100  osd.22up   1.0  1.0
>
> and I want to create some rules; first I'd like to have
>
>   a rule «replica» (over host) inside the «row» primary
>   a rule «erasure» (over host) inside the «row» primary
>
> but also two crush rules between primary/secondary, meaning I'd like to
> have a replica (with only 1 copy of course) of a pool from “row” primary
> to secondary.
>
> How can I achieve that ?
>
> Regards
>
>
>
> --
> Albert SHIH 🦫 🐸
> mer. 08 nov. 2023 18:37:54 CET
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io