Dear listers,

My employer already has a production Ceph cluster running, but we need a second one, and I wanted to ask your opinion on the following setup. It is planned for 500 TB net capacity, expandable to 2 PB; I expect the number of OSD servers to double in the next 4 years. Erasure coding 3:2 will be used for the OSD pools. Usage will be file storage, RADOS block devices (RBD) and S3:
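
As a rough sanity check on those targets, here is a minimal Python sketch of the raw-capacity math. It assumes "3:2" means an erasure-coded profile with k=3 data and m=2 coding chunks (so 5/3 raw overhead) and an assumed ~15% free-space headroom for backfill/recovery; both factors are assumptions, not figures from the plan:

# Raw capacity needed for the stated net targets under EC k=3, m=2.
# The k/m interpretation and the ~15% headroom factor are assumptions.
K, M = 3, 2
overhead = (K + M) / K            # raw bytes stored per net byte = 5/3

for net_tb in (500, 2000):        # 500 TB now, 2 PB expansion target
    raw_tb = net_tb * overhead
    with_headroom = raw_tb / 0.85 # keep ~15% free for backfill/recovery
    print(f"net {net_tb:4d} TB -> raw {raw_tb:5.0f} TB "
          f"(~{with_headroom:5.0f} TB incl. free-space headroom)")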

5x OSD servers (12x 18 TB Toshiba MG09SCA18TE SAS spinning disks for data, 2x 512 GB Samsung PM9A1 M.2 NVMe SSD 0.55 DWPD for system, 1x AMD 7313P CPU with 16 cores @ 3 GHz, 256 GB RAM, LSI SAS 9500 HBA, Broadcom P425G network adapter with 4x 25 Gbit/s)
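
Back-of-the-envelope numbers for that layout (again just a sketch; the EC 3+2 reading and the default 4 GiB osd_memory_target per OSD are the assumptions here):

# Net capacity from the proposed 5 nodes and RAM headroom per node.
servers, osds_per_server, disk_tb = 5, 12, 18
raw_tb = servers * osds_per_server * disk_tb   # 1080 TB raw
net_tb = raw_tb * 3 / (3 + 2)                  # ~648 TB net at EC 3+2
print(f"raw {raw_tb} TB -> ~{net_tb:.0f} TB net before free-space headroom")

ram_gb = 256
osd_ram_gb = osds_per_server * 4               # default osd_memory_target is 4 GiB
print(f"~{osd_ram_gb} GB of {ram_gb} GB RAM for the OSD daemons, "
      f"{ram_gb - osd_ram_gb} GB left for page cache and recovery spikes")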

3x MON servers (1x 2 TB Samsung PM9A1 M.2 NVMe SSD 0.55 DWPD for system, 2x 1.6 TB Kioxia CD6-V SSD 3.0 DWPD for data, 2x Broadcom P210/N210 network adapters with 4x 10 Gbit/s, 1x AMD 7232P CPU with 8 cores @ 3.1 GHz, 64 GB RAM)

3x MDS servers (1x 2 TB Samsung PM9A1 M.2 NVMe SSD 0.55 DWPD for system, 2x 1.6 TB Kioxia CD6-V SSD 3.0 DWPD for data, 2x Broadcom P210/N210 network adapters with 4x 10 Gbit/s, 1x AMD 7313P CPU with 16 cores @ 3 GHz, 128 GB RAM)

The OSD servers will be connected on the "backend" via 2x 25 Gbit/s fibre interfaces to

2x Mikrotik CRS518-16XS-2XQ (interconnected via 100 Gbit/s for high availability)

For the "frontend" connection to servers/clients via 2x10 GBit we're looking into

3x Mikrotik CRS326-24S+2Q+RM (interconnected via 40 Gbit/s for high availability)

Especially for the "frontend" switches I'm looking for alternatives. Currently we use Huawei C6810-32T16A4Q-LI models with 2x33 LACP connections over 10 Gbit/s RJ45, but those switches blocked ports after a number of errors, which caused us some trouble. We'd like to avoid Cisco IOS and its clones in general and would prefer a decent web interface.

Any comments/recommendations?

Best regards,

Kai