On 06/12/2023 16:21, Frank Schilder wrote:
Hi,

the post linked in the previous message is a good source for different 
approaches.

To provide some first-hand experience: I operated a pool with a 6+2 EC profile on 4
hosts for a while (until we got more hosts), and the "subdivide a physical host into
2 crush buckets" approach worked best. I basically tried all the approaches
described in the linked post and they all had pitfalls.

The procedure is more or less as follows (a concrete example is sketched after the list):

- add a second (logical) host bucket for each physical host by suffixing the host name with "-B"
(ceph osd crush add-bucket <name> <type> <location>)
- move half the OSDs per host to this new host bucket (ceph osd crush move 
osd.ID host=HOSTNAME-B)
- make this location persist across restarts of the OSDs (ceph config set osd.ID
crush_location "host=HOSTNAME-B")
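
To make this concrete, here is a minimal sketch for a hypothetical host "ceph01"
carrying osd.0 through osd.7 (the hostname, the OSD IDs and the "root=default"
location are assumptions; adjust them to your own crush tree):

    # create the second logical host bucket next to the real one
    ceph osd crush add-bucket ceph01-B host root=default

    # move half the OSDs into the new bucket and pin them there across restarts
    for id in 4 5 6 7; do
        ceph osd crush move osd.$id host=ceph01-B
        ceph config set osd.$id crush_location "host=ceph01-B"
    done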

This will allow you to move OSDs back easily when you get more hosts and can afford the
recommended 1 shard per host. It will also show, with a simple
"ceph config dump | grep crush_location", which OSDs were moved and where. Best of all,
you don't have to fiddle around with crush maps and hope they do what you want: just use
failure domain host and you are good. No more than 2 host buckets per physical host means
no more than 2 shards per physical host with default placement rules.
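
To tie this together, a 6+2 pool created with failure domain host then lands at most
two shards on any physical machine. A minimal sketch (the profile name, pool name and
PG count are made up for illustration):

    ceph osd erasure-code-profile set ec62 k=6 m=2 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec62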

I was operating this set-up with min_size=6 and felt bad about it due to the reduced
maintainability (risk of data loss during maintenance). It's not great, really, but
sometimes there is no way around it. I was happy when I got the extra hosts.
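
For reference, lowering min_size to k is a single pool setting (pool name again made
up); the trade-off is that the pool keeps serving I/O with no surviving redundancy,
which is exactly the maintenance risk mentioned above:

    # default min_size for a 6+2 pool is k+1 = 7
    ceph osd pool set ecpool min_size 6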

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Curt <light...@gmail.com>
Sent: Wednesday, December 6, 2023 3:56 PM
To: Patrick Begou
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: EC Profiles & DR

Hi Patrick,

Yes, K and M are chunks, but the default crush map places one chunk per host,
which is probably the best way to do it, but I'm no expert. I'm not sure
why you would want a crush map with 2 chunks per host and min_size 4,
as it's just asking for trouble at some point, in my opinion. Anyway, if
you're interested in doing 2 chunks per host, take a look at this post;
it will give you an idea of the crush map setup:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/NB3M22GNAC7VNWW7YBVYTH6TBZOYLTWA/
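
For the curious, the approach in that thread boils down to a CRUSH rule along these
lines (a sketch only; the rule name and id are invented, and it assumes a root named
"default" with 4 hosts for a 6+2 profile):

    rule ec62_two_per_host {
        id 2
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 4 type host
        step chooseleaf indep 2 type osd
        step emit
    }

It picks 4 hosts and then 2 OSDs within each, giving the 8 placements a 6+2 profile needs.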

Regards,
Curt

Thanks all for these details, which clarify many things for me.

Rich, yes, I'm starting with 5 nodes and 4 HDDs per node to set up the first Ceph cluster in the laboratory, and my goal is to grow this cluster (maybe up to 10 nodes) and to add storage to the nodes (up to 12 OSDs per node). It is a starting point for capacity storage connected to my two clusters (400 cores + 256 cores).

Thanks Frank for these details; as a newbie I would never have thought of this strategy. In my mind, this is the best way to start with the first setup and move to a more standard configuration later. I have the whole template now; I just have to dive deeper into the details to build it.

Patrick