I assume by Intel cards you mean something that is running ixgbe? If so, and you are trying to use SR-IOV with OVS and VLANs running on top of the PF, it will fail. The issue is that OVS requires the ability to place the PF in promiscuous mode to support VLAN trunking, and the ixgbe driver prevents that when SR-IOV is enabled.
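A rough way to observe this, assuming the PF is enp5s0f0 (the interface named later in this thread) and that some VFs are enabled:

    # check how many VFs are currently enabled on the PF
    cat /sys/class/net/enp5s0f0/device/sriov_numvfs
    # setting promiscuous mode succeeds at the netdev level...
    ip link set enp5s0f0 promisc on
    # ...but with VFs enabled the ixgbe embedded switch keeps filtering by
    # destination MAC, so a capture on the PF still misses frames addressed
    # to MACs the NIC has not been told about
    tcpdump -nei enp5s0f0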

The "bridge fdb add" approach mentioned should work as long as ixgbe PF is used on a flat network.

- Alex

On 10/19/2015 07:33 PM, yujie wrote:
Hi Moshe Levi,
Sorry for replying to this message after such a long time; the testing environment was unavailable until now. I use Intel cards, but could only test on base Kilo with VLAN networks. Could it work?

On 2015/9/22 13:24, Moshe Levi wrote:
Hi Yujie,

There is a patch https://review.openstack.org/#/c/198736/ which I wrote to add the MAC of the normal instance to the SR-IOV embedded switch, so that the packet will go to the PF instead of going out on the wire. This is done by using the bridge tool with the command "bridge fdb add <mac> dev <interface>".
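For example, with placeholder values (the MAC and interface below are illustrative, not from a real deployment):

    # program the NIC's embedded switch to forward frames for the normal
    # VM's MAC to the PF instead of sending them out on the wire;
    # fa:16:3e:aa:bb:cc and enp5s0f0 are placeholders
    bridge fdb add fa:16:3e:aa:bb:cc dev enp5s0f0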

I was able to test it on a Mellanox ConnectX-3 card with both vlan and flat networks and it worked fine. I wasn't able to test it on any of the Intel cards, but I was told that it only works on flat networks; on a vlan network the Intel card drops the tagged packets and they do not go up to the VF.

What NIC are you using? Can you try using "bridge fdb add <mac> dev <interface>", where <mac> is the MAC of the normal VM and <interface> is the PF, and see if that resolves the issue?
Also, can you check it with both flat and vlan networks, e.g.:
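To verify (the MAC and interface are the same illustrative placeholders as above):

    # confirm the FDB entry was programmed on the PF
    bridge fdb show dev enp5s0f0 | grep fa:16:3e:aa:bb:cc
    # then repeat the gateway ping from the normal VM on a flat network,
    # and again on a vlan network, to compare the two cases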


-----Original Message-----
From: yujie [mailto:[email protected]]
Sent: Tuesday, September 22, 2015 6:28 AM
To: [email protected]
Subject: [openstack-dev] [neutron][sriov] SRIOV-VM could not work well with normal VM

Hi all,
I am using neutron kilo without DVR to create an SR-IOV instance, VM-A; it works well and can reach its gateway fine. But when I let the normal instance VM-B, which is on the same compute node as VM-A, ping its gateway, it fails. I captured packets on the network node and found that the gateway already sends the ARP reply toward VM-B, but the compute node where VM-B lives does not deliver the packet to VM-B.
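To narrow this down I can capture on the compute node's PF (enp5s0f0; the MAC below stands in for VM-B's real MAC):

    # check whether the ARP reply ever reaches the PF on the compute node
    tcpdump -nei enp5s0f0 arp and ether host fa:16:3e:aa:bb:cc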
If I delete VM-A and run "echo 0 > /sys/class/net/enp5s0f0/device/sriov_numvfs", the problem is solved.
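For reference, the full toggle looks like this (the VF count of 4 is arbitrary, just for illustration):

    # disable all VFs on the PF, which clears the embedded switch state
    echo 0 > /sys/class/net/enp5s0f0/device/sriov_numvfs
    # re-enable SR-IOV later with some number of VFs (4 is arbitrary)
    echo 4 > /sys/class/net/enp5s0f0/device/sriov_numvfs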

Is this the same issue as the bug "SR-IOV port doesn't reach OVS port on same compute node"?
https://bugs.launchpad.net/neutron/+bug/1492228
Any suggestions would be appreciated.

Thanks,
Yujie

