On Mon, 28 Jun 2021 22:35:36 +0300
mhnx wrote:
> To be clear.
> I have stacked switch and this is my configuration.
>
> Bonding cluster: (hash 3+4)
> Cluster nic1(10Gbe) -> Switch A
> Cluster nic2(10Gbe) -> Switch B
>
> Bonding public: (hash 3+4)
> Public nic1(10Gbe) -> Switch A
> Public nic2(10Gbe) -> Switch B
> I believe you could configure one Aggregator with 1Gbps
> ports and one Aggregator with 10Gbps ports, and have intelligent selection
> depending on whether you have 20/10/2/1Gbps available.
>
>
>
From: mhnx <with...@gmail.com>
Sent: 28 June 2021 18:46
To: Marc 'risson' Schmitt<mailto:ris...@cri.epita.fr>
Cc: Ceph Users<mailto:ceph-users@ceph.io>
Subject: [ceph-users] Re: Nic bonding (lacp) settings for ceph
Thanks for the answer.
I'm into ad_select bandwidth because we use OSD nodes as RGW gateways, VMs,
and different applications.
I have separate cluster (10+10Gbe) and public (10+10Gbe) networks.
I tested stable, bandwidth, and count. Results are clearly better with
bandwidth; count is the worst option.
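For reference, the ad_select and xmit_hash_policy values discussed here can be
inspected and changed through the bonding sysfs interface. A sketch, assuming
a bond named bond-cluster (the name is an example; adjust to your setup):

```shell
# Check the current aggregator selection policy and transmit hash policy:
cat /sys/class/net/bond-cluster/bonding/ad_select
cat /sys/class/net/bond-cluster/bonding/xmit_hash_policy

# Changing ad_select typically requires the bond to be down:
ip link set bond-cluster down
echo bandwidth > /sys/class/net/bond-cluster/bonding/ad_select
echo layer3+4 > /sys/class/net/bond-cluster/bonding/xmit_hash_policy
ip link set bond-cluster up
```

The same options can also be set persistently through your distribution's
network configuration (netplan, NetworkManager, ifupdown, etc.).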
Hi,
On Sat, 26 Jun 2021 16:47:19 +0300
mhnx wrote:
> I've changed ad_select to bandwidth and both NICs are in use now, but
> the layer2 hash prevents dual-NIC usage between two nodes (because
> layer2 hashes on MAC addresses only).
As I understand it, setting ad_select to bandwidth is only going to be
useful if
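To illustrate the layer2 vs layer3+4 point above, here is a simplified model
of the two transmit hash policies (illustrative only; the kernel's exact
formulas are in the bonding documentation, and slave indexes here are just
example numbers):

```python
# Simplified model of Linux bonding xmit_hash_policy slave selection.

def layer2_hash(src_mac: bytes, dst_mac: bytes, n_slaves: int) -> int:
    # layer2: derived from source/destination MAC only, so every packet
    # between the same two hosts maps to the same slave.
    return (src_mac[-1] ^ dst_mac[-1]) % n_slaves

def layer34_hash(src_ip: int, dst_ip: int, src_port: int, dst_port: int,
                 n_slaves: int) -> int:
    # layer3+4: mixes IPs and TCP/UDP ports, so different connections
    # between the same two hosts can land on different slaves.
    return ((src_port ^ dst_port) ^ ((src_ip ^ dst_ip) & 0xFFFF)) % n_slaves

# Two hosts, two slaves in the bond (example MACs/IPs):
mac_a, mac_b = bytes.fromhex("aabbccddee01"), bytes.fromhex("aabbccddee02")
ip_a, ip_b = 0x0A000001, 0x0A000002  # 10.0.0.1, 10.0.0.2

# layer2: every flow between the two hosts picks the same slave index.
flows_l2 = {layer2_hash(mac_a, mac_b, 2) for _ in range(100)}

# layer3+4: varying the ephemeral source port spreads flows across slaves.
flows_l34 = {layer34_hash(ip_a, ip_b, sport, 6800, 2)
             for sport in range(40000, 40100)}

print(flows_l2)   # one slave index only
print(flows_l34)  # both slave indexes
```

This is why, with layer2 hashing, node-to-node traffic never exceeds a single
10Gbe link even though the bond has two.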