On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
> On 2017年05月19日 23:33, Stefan Hajnoczi wrote:
> > On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
> >> On 2017年05月18日 11:03, Wei Wang wrote:
> >>> On 05/17/2017 02:22 PM, Jason Wang wrote:
> >>>> On 2017年05月17日 14:16, Jason Wang wrote:
> >>>>> On 2017年05月16日 15:12, Wei Wang wrote:
> >>>>>>> Hi:
> >>>>>>>
> >>>>>>> Care to post the driver codes too?
> >>>>>>>
> >>>>>> OK. It may take some time to clean up the driver code before post
> >>>>>> it out. You can first have a check of the draft at the repo here:
> >>>>>> https://github.com/wei-w-wang/vhost-pci-driver
> >>>>>>
> >>>>>> Best,
> >>>>>> Wei
> >>>>> Interesting, looks like there's one copy on tx side. We used to
> >>>>> have zerocopy support for tun for VM2VM traffic. Could you please
> >>>>> try to compare it with your vhost-pci-net by:
> >>>>>
> >>> We can analyze from the whole data path - from VM1's network stack
> >>> to send packets -> VM2's network stack to receive packets. The
> >>> number of copies are actually the same for both.
> >> That's why I'm asking you to compare the performance. The only reason
> >> for vhost-pci is performance. You should prove it.
> > There is another reason for vhost-pci besides maximum performance:
> >
> > vhost-pci makes it possible for end-users to run networking or storage
> > appliances in compute clouds. Cloud providers do not allow end-users
> > to run custom vhost-user processes on the host so you need vhost-pci.
> >
> > Stefan
>
> Then it has non NFV use cases and the question goes back to the performance
> comparing between vhost-pci and zerocopy vhost_net. If it does not perform
> better, it was less interesting at least in this case.
>
What I can probably share now is the data we have comparing vhost-pci and vhost-user:
https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf

Right now I don't have an environment set up to add the vhost_net test.

Btw, do you have data comparing vhost_net vs. vhost-user?

Best,
Wei
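
A minimal sketch of the kind of VM2VM measurement being discussed follows; it is not code from this thread, and the port, chunk size, duration and the idea of addressing VM2 by IP are assumptions for illustration. The server side would run inside VM2 and the client side inside VM1, once over the zerocopy vhost_net path and once over the vhost-pci path, comparing the reported throughput.

#!/usr/bin/env python3
# Minimal VM2VM bulk-throughput probe (a sketch, not code from this thread).
# Run "--server" inside VM2, then "--client <VM2 IP>" inside VM1, once for
# each backend under test (e.g. zerocopy vhost_net, then vhost-pci).
# The port, chunk size and duration below are arbitrary placeholders.
import argparse
import socket
import time

PORT = 5001          # hypothetical test port
CHUNK = 64 * 1024    # bytes sent per sendall() call
DURATION = 10.0      # seconds the client transmits

def run_server() -> None:
    # Accept one connection, discard everything received, report throughput.
    with socket.create_server(("", PORT)) as srv:
        conn, peer = srv.accept()
        total = 0
        start = time.monotonic()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
        elapsed = time.monotonic() - start
        gbps = total * 8 / elapsed / 1e9
        print(f"received {total / 1e9:.2f} GB from {peer[0]} in {elapsed:.1f} s "
              f"-> {gbps:.2f} Gbit/s")

def run_client(host: str) -> None:
    # Send zero-filled chunks to the server for DURATION seconds.
    payload = b"\0" * CHUNK
    total = 0
    with socket.create_connection((host, PORT)) as sock:
        deadline = time.monotonic() + DURATION
        while time.monotonic() < deadline:
            sock.sendall(payload)
            total += len(payload)
    print(f"sent {total / 1e9:.2f} GB in {DURATION:.0f} s "
          f"-> {total * 8 / DURATION / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--server", action="store_true", help="run inside VM2")
    parser.add_argument("--client", metavar="VM2_IP", help="run inside VM1")
    args = parser.parse_args()
    if not args.server and not args.client:
        parser.error("use --server (in VM2) or --client VM2_IP (in VM1)")
    run_server() if args.server else run_client(args.client)

A bulk-throughput number alone would not settle the comparison; latency (request/response round trips) and host CPU utilization would also need to be measured for each backend.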