Hi,

I'm currently running the platform on the following versions and am facing an 
iSCSI timeout issue during network switch maintenance.

oVirt version: 4.5
oVirt Manager OS: Red Hat Enterprise Linux 9.0
oVirt node OS: Rocky Linux 9.4


Here's a drawing of the topology:

                   +---------------+
                   |    Storage    |
                   +---------------+
      192.168.11.2 |               | 192.168.11.1
                   |               | 10GbE Links
                   |               |
   +---------------+               +---------------+
   | Nexus 5k   #1 |               | Nexus 5k   #2 |
   +---------------+               +---------------+
      192.168.11.3 |               | 192.168.11.4
                   |               | 10GbE Links
                   |               |
                   +---------------+
                   |   oVirt 4.5   |
                   +---------------+

VLANx: 192.168.11.3/28
VLANx: 192.168.11.4/28

I've configured the oVirt nodes without bonding: one NIC is assigned to VLAN x 
with IP .3 and the other NIC is assigned to VLAN x with IP .4. I can see that 
only one NIC is actively carrying the iSCSI traffic (there is no iSCSI bond 
configured).
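For reference, this is how I've been checking the current state on a node 
(commands from iscsi-initiator-utils and device-mapper-multipath; the grep 
pattern is just what I find useful, not anything oVirt-specific):

    # list iSCSI sessions and which interface/portal each one uses
    iscsiadm -m session -P 3 | grep -E "Target:|Iface Name|Current Portal|Attached scsi disk"

    # show the multipath devices and which paths are active vs. enabled
    multipath -ll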

I would like both NICs to carry iSCSI traffic in active-active mode. What 
topology should I follow? From what I've read in blogs, LACP is not recommended 
for iSCSI traffic; please correct me if I'm wrong.
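For what it's worth, the direction I was considering (not yet tested) is plain 
dm-multipath with one iSCSI session per NIC and a round-robin path policy, so 
both links carry I/O without any LACP. A rough sketch of a local override; 
since oVirt/VDSM manages /etc/multipath.conf, I'd put it in a drop-in file, and 
the filename and the choice to override "defaults" rather than a specific 
device section are just my assumptions:

    # /etc/multipath/conf.d/active-active.conf  (example only)
    defaults {
        # put all paths in one path group and round-robin I/O across them
        path_grouping_policy    multibus
        path_selector           "round-robin 0"
    }

On the oVirt side, my understanding is that the supported way to get one 
session per NIC is a separate logical network per iSCSI VLAN/subnet combined 
under Data Center > iSCSI Multipathing, rather than bonding the NICs; please 
correct me if that's outdated.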

When the switch connected to the actively used NIC reboots, failing the iSCSI 
traffic over to the other NIC takes almost 30 seconds. Can I reduce this 
failover time from 30 seconds to something like 3 seconds? I didn't see this 
timeout behaviour on CentOS 8, but Rocky 9 behaves differently.
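These are the timers I've been looking at so far, in case they are the right 
knobs (semantics per the iscsid.conf and multipath.conf man pages; the values 
are only examples I intend to test, not recommendations):

    # /etc/iscsi/iscsid.conf  (applies to newly discovered nodes; existing
    # node records need "iscsiadm -m node ... --op update" or a re-login)
    node.conn[0].timeo.noop_out_interval = 5      # ping the target every 5s
    node.conn[0].timeo.noop_out_timeout = 5       # declare the connection dead after 5s with no reply
    node.session.timeo.replacement_timeout = 5    # fail I/O up to multipath quickly instead of queueing

    # /etc/multipath/conf.d/timeouts.conf  (example only)
    defaults {
        polling_interval    5        # how often multipathd checks path health
    }

My assumption is that the ~30 seconds I'm seeing is the iSCSI layer waiting out 
these timers before multipath gets a chance to switch paths, but I'd appreciate 
confirmation on which of these oVirt/VDSM already manages.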

Any suggestions or recommendations?