[ceph-users] Re: How to use hardware

2023-11-22 Thread Albert Shih
On 20/11/2023 at 09:24:41+, Frank Schilder wrote: Hi, Thanks everyone for your answers. > we are using something similar for ceph-fs. For a backup system your setup can work, depending on how you back up. While HDD pools have poor IOPS performance, they are very good for streaming
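
As a concrete illustration of the kind of HDD-backed backup layout being discussed (a minimal sketch only; the profile/pool names, k/m values and PG count below are placeholders, not from the thread):

    # Erasure-coded, HDD-only data pool aimed at large sequential backup writes
    ceph osd erasure-code-profile set backup_ec k=4 m=2 crush-device-class=hdd
    ceph osd pool create cephfs_backup_data 1024 1024 erasure backup_ec
    ceph osd pool set cephfs_backup_data allow_ec_overwrites true
    # Attach it as an additional data pool of an existing CephFS
    ceph fs add_data_pool cephfs cephfs_backup_data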

[ceph-users] Re: How to use hardware

2023-11-20 Thread Frank Schilder
Common motivations for this strategy include the lure of unit economics and RUs. Often ultra-dense servers can’t fill racks anyway due to power and weight limits. Here the osd_memory_target would have to be severely reduced
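
A hedged sketch of what "severely reduced" might look like in practice (the 2 GiB figure is only an example, not a recommendation from the thread):

    # Cap each OSD's memory target at 2 GiB instead of the 4 GiB default,
    # so 50 OSDs budget roughly 100 GiB of RAM instead of roughly 200 GiB
    ceph config set osd osd_memory_target 2147483648
    # Check the effective value on one daemon (osd.0 is just an example id)
    ceph config get osd.0 osd_memory_target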

[ceph-users] Re: How to use hardware

2023-11-18 Thread Anthony D'Atri
Common motivations for this strategy include the lure of unit economics and RUs. Often ultra-dense servers can’t fill racks anyway due to power and weight limits. Here the osd_memory_target would have to be severely reduced to avoid OOM-killing. Assuming the OSDs are top-load LFF HDDs with
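
Back-of-envelope arithmetic for why such dense nodes run into memory limits (the 256 GiB per-node RAM figure is an assumption for illustration, not from the thread):

    50 OSDs x 4 GiB default osd_memory_target = 200 GiB
    256 GiB node RAM - 200 GiB = 56 GiB left for the OS, other daemons and recovery spikes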

[ceph-users] Re: How to use hardware

2023-11-18 Thread David C.
Hello Albert, 5 vs 3 MON => you won't notice any difference. 5 vs 3 MGR => by default, only 1 will be active. On Sat 18 Nov 2023 at 09:28, Albert Shih wrote: > On 17/11/2023 at 11:23:49+0100, David C. wrote: > Hi, > 5 instead of 3 mon will allow you to limit the impact if you
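
For reference, a sketch of how those daemon counts translate into placement on a cephadm-managed cluster (assuming cephadm; the counts are the point here, not the exact syntax):

    ceph orch apply mon 3
    ceph orch apply mgr 3   # only one mgr is ever active; the other two are standbys
    ceph mgr stat           # shows the active mgr and the standbys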

[ceph-users] Re: How to use hardware

2023-11-18 Thread Albert Shih
On 17/11/2023 at 11:23:49+0100, David C. wrote: Hi, > 5 instead of 3 mon will allow you to limit the impact if you break a mon (for example, with the file system full) > 5 instead of 3 MDS, this makes sense if the workload can be distributed over several trees in your file system.
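
A minimal sketch of distributing the workload over several trees with multiple active MDS (the file system name "cephfs", the mount path and the pinned directory are assumptions):

    # Allow two active MDS ranks
    ceph fs set cephfs max_mds 2
    # Pin one subtree to rank 1 so the two ranks serve different directory trees
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects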

[ceph-users] Re: How to use hardware

2023-11-18 Thread Albert Shih
On 18/11/2023 at 02:31:22+0100, Simon Kepp wrote: Hi, > I know that your question is regarding the service servers, but may I ask why you are planning to place so many OSDs (300) on so few OSD hosts (6) (= 50 OSDs per node)? > This is possible to do, but sounds like the nodes were

[ceph-users] Re: How to use hardware

2023-11-17 Thread Simon Kepp
I know that your question is regarding the service servers, but may I ask why you are planning to place so many OSDs (300) on so few OSD hosts (6) (= 50 OSDs per node)? This is possible to do, but sounds like the nodes were designed for scale-up rather than a scale-out architecture like Ceph.
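
One way to see the constraint (a sketch; the EC profile below is only an example, while the 6 hosts / 50 OSDs per node are the numbers from the thread):

    # With the usual failure domain of "host", an EC profile needs k+m <= 6 here,
    # so k=4,m=2 fits while something like k=8,m=3 cannot be placed
    ceph osd erasure-code-profile set ec_4_2 k=4 m=2 crush-failure-domain=host
    # Losing a single host also takes out ~50 OSDs (1/6 of the cluster) at once,
    # which the remaining 5 hosts must absorb during recovery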

[ceph-users] Re: How to use hardware

2023-11-17 Thread David C.
Hi Albert, 5 instead of 3 mon will allow you to limit the impact if you break a mon (for example, with the file system full). 5 instead of 3 MDS, this makes sense if the workload can be distributed over several trees in your file system. Sometimes it can also make sense to have several FSs in
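
A hedged sketch of the "several FSs" option (the volume names are placeholders; "ceph fs volume create" also creates the pools and deploys MDS daemons on orchestrated clusters):

    ceph fs volume create prod
    ceph fs volume create backup
    ceph fs ls   # lists every file system and its data/metadata pools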