Hi,

The issue you are facing might be because the name of the network adapters is the same.

I wanted to emphasize the importance of the following step: https://docs.openvswitch.org/en/latest/intro/install/windows/#add-virtual-interfaces-vifs

This is something particularly important for Windows and
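In case it helps, adapters can be renamed so that each one is distinct before adding them to OVS. A minimal sketch using PowerShell's built-in Rename-NetAdapter cmdlet (the adapter names below are placeholders, not taken from your setup):

```powershell
# Rename two identically named adapters so OVS can tell them apart.
# "Ethernet 2" and the new name "ovs-if0" are illustrative placeholders.
Rename-NetAdapter -Name "Ethernet 2" -NewName "ovs-if0"
Get-NetAdapter | Format-Table Name, InterfaceDescription
```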
Thanks! Let me try this.
On Mon, Apr 17, 2023 at 1:44 PM Alin Serdean wrote:
> Hi,
>
> The issue you are facing might be because the name of the network adapters
> is the same.
>
> I wanted to emphasize the importance of the following step:
>
Do you have any ideas, or, if this is the wrong list, a recommendation for
another one?
On Thu, Apr 13, 2023 at 7:14 PM Abu Rasheda via discuss <
ovs-discuss@openvswitch.org> wrote:
> Hello!
>
> On Windows Server 2019, I compiled and loaded the OVS kernel module.
> Commands like ovs-vsctl &
Hello everyone,
We set up the openvswitch-switch-dpdk on a Debian server with Intel XL710
NIC.
We bound the XL710 NIC to the vfio-pci driver with the following command:
dpdk-devbind.py --bind=vfio-pci eth0
We created an OVS bridge named br0 with the netdev datapath and added eth0
to the bridge.
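For reference, the setup described above usually looks something like the following sketch. The PCI address 0000:02:00.0 and the port name dpdk-p0 are illustrative placeholders; a DPDK port is normally added by its PCI address via dpdk-devargs rather than by its kernel interface name, since the interface disappears once it is bound to vfio-pci:

```shell
# Bind the NIC to vfio-pci (PCI address is a placeholder).
dpdk-devbind.py --bind=vfio-pci 0000:02:00.0

# Create a userspace-datapath bridge and attach the DPDK port.
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
    options:dpdk-devargs=0000:02:00.0
```

These are configuration commands against a running ovs-vswitchd, so they only take effect on a host with OVS-DPDK installed.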
I would definitely recommend the Napatech LinkVirtualization SmartNICs. They
work great with OvS-DPDK with rte_flow and are lightning fast (line rate at
64-byte packets).
Best regards,
Justas Poderys, PhD
Product Architect
Napatech A/S
Tobaksvejen 23 A
DK-2860 Soeborg
Denmark
-----Original Message-----
"Plato, Michael" writes:
> Hi Paolo,
> I installed the patch for 2.17 on April 6th in our test environment and can
> confirm that it works. We haven't had any crashes since then. Many thanks for
> the quick solution!
>
Hi Michael,
Nice! That's helpful. Thanks for testing it.
Paolo
> Best
Hi Paolo,
I installed the patch for 2.17 on April 6th in our test environment and can
confirm that it works. We haven't had any crashes since then. Many thanks for
the quick solution!
Best regards
Michael
-----Original Message-----
From: Paolo Valerio
Sent: Monday, April 17, 2023
Lazuardi Nasution writes:
> Hi Paolo,
>
> I'm interested in your statement about "expired connections (but not yet
> reclaimed)". Do you think that shortening the conntrack timeout policy would
> help? Or should we make it larger so there will be fewer conntrack table
> updates and flush attempts?
>
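For anyone following along: conntrack timeouts can be tuned per zone through the CT_Timeout_Policy table, which ovs-vsctl exposes via the zone timeout-policy commands. A hedged sketch (the zone number and the value are illustrative, not a recommendation either way):

```shell
# Set a per-zone timeout policy on the userspace (netdev) datapath:
# shorten the TCP established timeout for conntrack zone 5 to 3600 s.
ovs-vsctl add-zone-tp netdev zone=5 tcp_established=3600

# Inspect the configured policies for that datapath.
ovs-vsctl list-zone-tp netdev
```

Whether shortening or lengthening helps depends on what is causing the pressure, so it is worth measuring the conntrack table occupancy before and after changing it.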