> On Jun 15, 2021, at 10:26 AM, Andrew Walker-Brown <andrew_jbr...@hotmail.com> 
> wrote:
> 
> With an unstable link/port you could see the issues you describe.  Ping 
> doesn’t have a high enough packet rate for you to necessarily have a packet 
> in transit at the exact moment the port fails temporarily.  Iperf, on the 
> other hand, could certainly show the issue: its much higher packet rate makes 
> it far more likely to have packets in flight at the time of a link failure, 
> which combined with packet loss/retries gives poor throughput.
> 
> Depending on what you want to happen, there are a number of tuning options, 
> both on the switches and in Linux.  If you want the LAG to go down when any 
> link fails, then you should be able to configure this on the switches and/or 
> in Linux (e.g. minimum number of links = 2 if you have 2 links in the LAG).

Or ensure that the links are active/active.
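
For reference, a minimal sketch of the Linux side, assuming an 802.3ad (LACP) 
bond named bond0 and ifupdown-style configuration (the interface names here 
are made up):

    # /etc/network/interfaces (hypothetical member NICs)
    auto bond0
    iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-mode 802.3ad       # active/active LACP
        bond-miimon 100         # check member link state every 100 ms
        bond-min-links 2        # take the bond down unless both links are up

    # or adjust a running bond via sysfs:
    echo 2 > /sys/class/net/bond0/bonding/min_links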

Some of the trickiest situations I’ve encountered are when a bond is configured 
for active/backup, and there’s a latent issue with the backup link.  Active 
goes down, and the bond is horqued.

Another is when the backup link has CRC errors that only show up on the switch 
side, or when a configuration error causes packets sent over one of the links 
to be blackholed.
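
A couple of quick checks that can surface that kind of latent problem (bond0 
and the NIC name here are placeholders; the exact counters depend on the 
driver):

    cat /proc/net/bonding/bond0          # per-member MII status and Link Failure Count
    ethtool -S enp1s0f1 | grep -i err    # NIC error counters, driver-dependent

plus the port's CRC/input error counters on the switch side.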
> 
> 
> Flapping/unstable links are the worst kind of situation.  Ideally you’d pick 
> that up quickly from monitoring/alerts and either fix immediately or take the 
> link down until you can fix it.

This.
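
Even something as simple as alerting on the per-member "Link Failure Count" in 
/proc/net/bonding/<bond>, or on kernel carrier up/down messages, catches most 
of these quickly (a sketch; bond0 is a placeholder and log wording varies by 
driver):

    journalctl -k | grep -iE 'bond0|link is (up|down)'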

Flakiness on a cluster/replication network is one reason to favor not having 
one: it removes certain flappy situations, and OSDs are more likely to be 
either up for real or down hard.
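
For anyone weighing that trade-off: Ceph only uses a separate 
cluster/replication network if you configure one; leave cluster_network unset 
and replication traffic rides the public network.  A quick way to check on a 
recent release (sketch):

    ceph config get osd public_network
    ceph config get osd cluster_network   # empty output means no separate network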
