On Mon, 14 Nov 2016, Andrii Anisov wrote:
> > Could you define an unacceptable performance drop? Have you tried to
> > measure what the impact would be?
> 
> > I know it can be bad, depending on the class of protocols. I think that
> > if numbers were provided to demonstrate that bounce buffers (the swiotlb
> > in Linux) are too slow for a given use case
> 
> Unfortunately I could not come up with exact requirement numbers.
> Introducing another memcpy (which is what the bounce buffer approach
> does) for block or network IO would not only reduce the operation's
> performance but also increase the overall system load.
> Everything we do in our PV driver solutions aims at avoiding data
> copies inside the FE-BE pair, in order to increase performance and
> reduce latency and system load.
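
To make the cost you describe concrete: a bounce buffer path adds roughly
one extra copy per transfer, along these lines. This is a minimal sketch
only, assuming a trivial bump allocator over a preallocated pool that the
other side (backend or device) can access; it is not the actual Linux
swiotlb code and all names are made up.

/*
 * Minimal sketch of the extra copy a bounce buffer introduces on the
 * transmit path. Illustrative only; not the actual Linux swiotlb code.
 */
#include <stddef.h>
#include <string.h>

struct bounce_pool {
    void   *base;   /* memory the backend/device can actually reach */
    size_t  size;
    size_t  next;   /* trivial bump allocator, for illustration only */
};

/* Copy the caller's buffer into the pool and return the reachable copy. */
static void *bounce_map(struct bounce_pool *pool, const void *buf, size_t len)
{
    if (pool->next + len > pool->size)
        return NULL;                    /* pool exhausted */

    void *shadow = (char *)pool->base + pool->next;
    pool->next += len;

    memcpy(shadow, buf, len);           /* <-- the extra copy per I/O */
    return shadow;
}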
 
I think it might be worth running those numbers: you might be surprised
by how well a simple data copy protocol can perform, even on ARM.

For example, take a look at PVCalls, which is entirely based on data
copies:

http://marc.info/?l=xen-devel&m=147639616310487 
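
To give an idea of what "entirely based on data copies" means in
practice, here is a minimal sketch of a copy-based shared ring: the
frontend copies bytes in, the backend copies them out. This is
illustrative only and is not the actual PVCalls ring layout; all names
and sizes are made up.

/*
 * Minimal sketch of a copy-based shared ring. prod and cons are
 * free-running indexes; RING_SIZE must be a power of two.
 */
#include <stdint.h>

#define RING_SIZE 4096

struct data_ring {
    uint32_t prod;              /* written by the frontend */
    uint32_t cons;              /* written by the backend  */
    uint8_t  buf[RING_SIZE];    /* shared between the two  */
};

/* Frontend side: copy len bytes in; returns bytes actually copied. */
static uint32_t ring_write(struct data_ring *r, const uint8_t *data,
                           uint32_t len)
{
    uint32_t space = RING_SIZE - (r->prod - r->cons);
    if (len > space)
        len = space;
    for (uint32_t i = 0; i < len; i++)
        r->buf[(r->prod + i) & (RING_SIZE - 1)] = data[i];   /* copy #1 */
    /* a real protocol would add a write barrier and an event channel
     * notification here */
    r->prod += len;
    return len;
}

/* Backend side: copy up to len bytes out; returns bytes actually copied. */
static uint32_t ring_read(struct data_ring *r, uint8_t *data, uint32_t len)
{
    uint32_t avail = r->prod - r->cons;
    if (len > avail)
        len = avail;
    for (uint32_t i = 0; i < len; i++)
        data[i] = r->buf[(r->cons + i) & (RING_SIZE - 1)];   /* copy #2 */
    r->cons += len;
    return len;
}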


I have already shown that it performs better than netfront/netback on
x86 in this blog post:

https://blog.xenproject.org/2016/08/30/pv-calls-a-new-paravirtualized-protocol-for-posix-syscalls/


I have just run the numbers on ARM64 (APM m400) and it is still much
faster than netfront/netback. This is what I get by running iperf -c in
a VM and iperf -s in Dom0:

        PVCalls            Netfront/Netback
-P 1     9.9  Gbit/s        4.53 Gbit/s
-P 2    17.4  Gbit/s        5.57 Gbit/s
-P 4    24.36 Gbit/s        5.34 Gbit/s

PVCalls is significantly faster than netfront/netback, and unlike
netfront/netback it keeps scaling as the number of parallel connections
(-P) increases.
