On Fri, Sep 18, 2015 at 04:32:05PM +0800, openvswitcher wrote:
> Why can VXLAN offloading NICs improve the bandwidth performance?
> Could you explain it to me from the source code?

Your card has at least TSO and checksum offloading, and they are very
likely enabled by default. So iperf can hand up to 64 KB of data to the
kernel at once; that buffer traverses the networking stack only once,
and the NIC then does all the heavy lifting of segmenting it into
MTU-sized packets and calculating the checksums. With VXLAN, that no
longer works, because a NIC without tunnel-aware offloads cannot see
into the encapsulated inner packet: the host CPU has to push MTU-sized
packets down the networking stack one at a time, calculate the
checksums, and so on, so the main CPU becomes the bottleneck. If you
have VXLAN offloading on the NIC and enable it, that work moves from
the main CPU back to the NIC.
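A quick way to check is ethtool, assuming a Linux host; "eth0" below is
just a placeholder for your physical interface, and the
tx-udp_tnl-segmentation feature only shows up if your kernel and driver
expose VXLAN (UDP tunnel) segmentation offload:

  # Show the relevant offload features (interface name is a placeholder):
  ethtool -k eth0 | grep -E 'tcp-segmentation-offload|tx-checksumming|tx-udp_tnl-segmentation'

  # If the driver supports it, enable UDP tunnel segmentation offload:
  ethtool -K eth0 tx-udp_tnl-segmentation on

If tx-udp_tnl-segmentation is missing or marked [fixed] off, the NIC
cannot offload the encapsulated traffic, and the CPU ends up doing the
segmentation and checksumming as described above.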
fbl

> > > Thanks.
> >
> > At 2015-09-09 21:09:26, "YaoJun" <seamanhans...@gmail.com> wrote:
> >
> > Try VXLAN offloading first if you have the proper NICs.
> >
> > On Wed, Sep 9, 2015 at 6:35 PM, openvswitcher <openvswitc...@163.com> wrote:
> I use the openvswitch VXLAN tunnel as the basic overlay service.
> But I find that the bandwidth between two virtual machines only reaches
> 4 Gbit/s when testing with iperf.
> So could anybody tell me how to improve it, or where the bottleneck is?
>
> Looking forward to your reply. Thank you!
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev