We at Virtual Open Systems did some work and tested vhost-net on ARM
back in March.
The setup was based on:
- host kernel with our ioeventfd patches:
  http://www.spinics.net/lists/kvm-arm/msg08413.html

- QEMU with the aforementioned patches from Ying-Shiuan Pan:
  https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
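
For completeness, the guest side just needs vhost=on on a tap netdev.
A minimal sketch of such a QEMU command line (the vexpress-a15 machine
model, image names and tap device here are illustrative, not
necessarily what we ran):

  qemu-system-arm -enable-kvm -M vexpress-a15 -smp 2 -m 512 \
      -kernel zImage -append "console=ttyAMA0 root=/dev/vda" \
      -drive if=none,id=rootfs,file=rootfs.img,format=raw \
      -device virtio-blk-device,drive=rootfs \
      -netdev tap,id=net0,ifname=tap0,script=no,vhost=on \
      -device virtio-net-device,netdev=net0

With vhost=off on the -netdev line you get the plain userspace
virtio-net path that the vhost=off numbers below refer to.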

The testbed was an ARM Chromebook with an Exynos 5250, using a 1Gbps
USB3 Ethernet adapter connected to a 1Gbps switch. I can't find the
actual numbers, but I remember that with multiple streams the gain was
clearly visible. Note that it used the minimum required ioeventfd
implementation and not irqfd.

I guess it is feasible to put it all together and rebase it on top of
the recent irqfd work. One could achieve even better performance
thanks to irqfd.

I managed to replicate the setup with the old versions we used in March:

Single stream from another machine to the Chromebook with the 1Gbps
USB3 Ethernet adapter.
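On the receiving side iperf runs in server mode; assuming the same
port as the client commands below, that is simply:

iperf -s -p 5001
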
iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10
  to HOST:             858316 Kbits/sec
  to GUEST:            761563 Kbits/sec
  to GUEST vhost=off:  508150 Kbits/sec

10 parallel streams:
iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10
  to HOST:             842420 Kbits/sec
  to GUEST:            625144 Kbits/sec
  to GUEST vhost=off:  425276 Kbits/sec
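
That is a gain from vhost-net of roughly 50% over vhost=off for the
single stream (761563 / 508150 ≈ 1.50) and roughly 47% for 10 streams
(625144 / 425276 ≈ 1.47).
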
I have tested the same cases on a Hisilicon board (Cortex-A15 @ 1GHz)
with an integrated 1Gbps Ethernet adapter.

iperf -c <address> -P 1 -i 1 -p 5001 -f M -t 10
  to HOST:             906 Mbits/sec
  to GUEST:            562 Mbits/sec
  to GUEST vhost=off:  340 Mbits/sec

With 10 parallel streams, throughput improves by less than 10%:
iperf -c <address> -P 10 -i 1 -p 5001 -f M -t 10
  to HOST:             923 Mbits/sec
  to GUEST:            592 Mbits/sec
  to GUEST vhost=off:  364 Mbits/sec
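
Here the relative gain from vhost-net is even larger: 562 / 340 ≈ 1.65
(~65%) for the single stream and 592 / 364 ≈ 1.63 (~63%) for 10 streams.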

It's easy to see that vhost-net brings a great performance
improvement: around 50% on the Chromebook and over 60% on the
Hisilicon board.
That's pretty impressive for not even having irqfd. I guess we should
renew some effort to get these patches merged upstream.
