Hi,

On 15/2/21 at 12:16, mj wrote:

Happy to report that we recently upgraded our three-host, 24-OSD cluster from HDD filestore to SSD BlueStore. After a few months of use, the drives' WEAR indicator is still at 1%, and cluster performance ("rados bench" etc.) has improved dramatically. So all in all: yes, we're happy Samsung PM883 ceph users. :-)

We currently have a "meshed" ceph setup, with the three hosts connected directly to each other over 10G ethernet, as described here:

https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Method_2_.28routed.29
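For reference, the routed method from that page boils down to something along these lines on each node (interface names and addresses below are only illustrative; see the wiki for the exact recipe):

    # /etc/network/interfaces fragment on one node (example only):
    # this node reaches peer .51 via ens19 and peer .52 via ens20,
    # with a /32 host route per directly-attached peer.
    auto ens19
    iface ens19 inet static
            address 10.15.15.50/24
            up   ip route add 10.15.15.51/32 dev ens19
            down ip route del 10.15.15.51/32 dev ens19

    auto ens20
    iface ens20 inet static
            address 10.15.15.50/24
            up   ip route add 10.15.15.52/32 dev ens20
            down ip route del 10.15.15.52/32 dev ens20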

As we would like to be able to add more storage hosts, we need to lose the meshed network setup.

My idea is to add two stacked 10G ethernet switches to the setup, so we can start using LACP-bonded networking across two physical switches; roughly as in the sketch below.
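Per node, the plan would be to replace the two point-to-point mesh NICs with an 802.3ad bond, one member cabled to each switch; something like this (interface names and address made up):

    auto bond0
    iface bond0 inet static
            address 10.15.15.50/24
            bond-slaves ens19 ens20
            bond-mode 802.3ad
            bond-miimon 100
            bond-xmit-hash-policy layer3+4

As far as I understand, an 802.3ad bond split across two physical switches only works if the two switches present themselves as a single LAG endpoint (stacking or MLAG), which is why the switch choice matters.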

Looking around, we can get refurbished Cisco Small Business 550X switches for around 1300 euro. We also noticed that MikroTik and TP-Link have some even nicer-priced 10G switches, but those all lack MLAG (bonding across two switches). :-(

Therefore I'm asking here: does anyone have suggestions on what to look at for nicely-priced, stackable 10G switches?

We would like to continue using ethernet, as we use it everywhere, and performance-wise we're happy with what we currently have.

Last December I wrote to MikroTik support, asking whether they will support stacking / LACP any time soon, and their answer was: probably in the second half of 2021.

So, anyone here with interesting insights to share for ceph 10G ethernet storage networking?

Do you really need MLAG (i.e. the full 2x10G of bandwidth)? If not, just use two simple switches (MikroTik, for example) and configure an active-passive bond in Proxmox, with the primary interface on all nodes pointing to the same switch.
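On each node that would be something like the fragment below (interface names and the address are just an example, adapt to your setup); bond-primary should point at the NIC that is cabled to the same switch on every node:

    auto bond0
    iface bond0 inet static
            address 10.15.15.50/24
            bond-slaves ens19 ens20
            bond-mode active-backup
            bond-primary ens19        # NIC cabled to the "default" switch on every node
            bond-miimon 100

You lose the aggregated 2x10G, but you keep redundancy across two switches, and the switches don't need to support stacking or MLAG at all.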

Cheers


--
Eneko Lacunza
Technical director
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO/
https://www.linkedin.com/company/37269706/