On 2014/8/13 19:25, Nikolay Nikolaev wrote:
> On Wed, Aug 13, 2014 at 12:10 PM, Nikolay Nikolaev
> <n.nikol...@virtualopensystems.com> wrote:
>> On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
>> <n.nikol...@virtualopensystems.com> wrote:
>>>
>>> Hello,
>>>
>>>
>>> On Tue, Aug 12, 2014 at 5:41 AM, Li Liu <john.li...@huawei.com> wrote:
>>>>
>>>> Hi all,
>>>>
>>>> Can anyone tell me the current status of vhost-net on kvm-arm?
>>>>
>>>> Half a year has passed since Isa Ansharullah asked this question:
>>>> http://www.spinics.net/lists/kvm-arm/msg08152.html
>>>>
>>>> I have found two patches which provide kvm-arm support for
>>>> eventfd and irqfd:
>>>>
>>>> 1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
>>>> http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
>>>>
>>>> 2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
>>>> https://patches.linaro.org/32261/
>>>>
>>>> And there's a rough patch from Ying-Shiuan Pan for qemu to support eventfd:
>>>>
>>>> [Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
>>>> https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>>>
>>>> But there are no comments on this patch, and I can find nothing about
>>>> qemu support for irqfd. Have I lost track?
>>>>
>>>> If nobody is working on it, we plan to complete virtio-mmio support
>>>> for irqfd and multiqueue.
>>>>
>>>>
>>>
>>> We at Virtual Open Systems did some work and tested vhost-net on ARM
>>> back in March.
>>> The setup was based on:
>>> - host kernel with our ioeventfd patches:
>>>   http://www.spinics.net/lists/kvm-arm/msg08413.html
>>> - qemu with the aforementioned patches from Ying-Shiuan Pan:
>>>   https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>>
>>> The testbed was an ARM Chromebook with an Exynos 5250, using a 1Gbps USB3
>>> Ethernet adapter connected to a 1Gbps switch. I can't find the actual
>>> numbers, but I remember that with multiple streams the gain was clearly
>>> visible. Note that it used the minimum required ioeventfd implementation
>>> and not irqfd.
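>>>
>>> For reference, a guest in such a setup would be launched with roughly
>>> the following command line (a sketch only; the machine type, kernel and
>>> interface names are illustrative, the relevant parts being vhost=on on
>>> the tap netdev and the virtio-mmio transport device):
>>>
>>> qemu-system-arm -M <machine> -kernel zImage \
>>>     -netdev tap,id=net0,ifname=tap0,script=no,vhost=on \
>>>     -device virtio-net-device,netdev=net0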
>>>
>>> I guess it is feasible that all of this can be put together and rebased
>>> on top of the recent irqfd work; one could then achieve even better
>>> performance thanks to irqfd.
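>>>
>>> To illustrate what the two mechanisms buy us, here is a minimal sketch
>>> against the KVM userspace API (the MMIO address and interrupt number
>>> below are made up for the example):
>>>
>>> #include <linux/kvm.h>
>>> #include <sys/eventfd.h>
>>> #include <sys/ioctl.h>
>>>
>>> /* vm_fd is the descriptor returned by KVM_CREATE_VM. */
>>> static void wire_vhost_eventfds(int vm_fd)
>>> {
>>>     /* ioeventfd: a guest write to the virtio-mmio QueueNotify
>>>      * register signals this eventfd inside the kernel, so a queue
>>>      * kick reaches the vhost worker without an exit to userspace. */
>>>     int kick_fd = eventfd(0, EFD_NONBLOCK);
>>>     struct kvm_ioeventfd kick = {
>>>         .addr = 0x0a003050, /* hypothetical QueueNotify address */
>>>         .len  = 4,
>>>         .fd   = kick_fd,
>>>     };
>>>     ioctl(vm_fd, KVM_IOEVENTFD, &kick);
>>>
>>>     /* irqfd: the reverse path is short-circuited too; vhost signals
>>>      * this eventfd and KVM injects the guest interrupt itself. */
>>>     int call_fd = eventfd(0, EFD_NONBLOCK);
>>>     struct kvm_irqfd call = {
>>>         .fd  = call_fd,
>>>         .gsi = 47, /* hypothetical guest interrupt line */
>>>     };
>>>     ioctl(vm_fd, KVM_IRQFD, &call);
>>> }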
>>>
>>
>> Managed to replicate the setup with the old versions we used in March:
>>
>> Single stream from another machine to the Chromebook with the 1Gbps USB3
>> Ethernet adapter:
>> iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10
>> to HOST: 858316 Kbits/sec
>> to GUEST: 761563 Kbits/sec
> to GUEST vhost=off: 508150 Kbits/sec
>>
>> 10 parallel streams
>> iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10
>> to HOST: 842420 Kbits/sec
>> to GUEST: 625144 Kbits/sec
> to GUEST vhost=off: 425276 Kbits/sec

I have tested the same cases on a Hisilicon board (Cortex-A15 @ 1GHz)
with an integrated 1Gbps Ethernet adapter.

iperf -c <address> -P 1 -i 1 -p 5001 -f M -t 10
to HOST: 906 Mbits/sec
to GUEST: 562 Mbits/sec
to GUEST vhost=off: 340 Mbits/sec

With 10 parallel streams, throughput improves by less than 10%:
iperf -c <address> -P 10 -i 1 -p 5001 -f M -t 10
to HOST: 923 Mbits/sec
to GUEST: 592 Mbits/sec
to GUEST vhost=off: 364 Mbits/sec

It's easy to see that vhost-net brings a great performance improvement,
over 50%.
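
(The only difference between the two guest cases is the vhost flag on the
tap netdev, e.g., with illustrative interface names:

  -netdev tap,id=net0,ifname=tap0,script=no,vhost=on \
  -device virtio-net-device,netdev=net0

versus the same line with vhost=off.)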

Li.

>>
>>>
>>> regards,
>>> Nikolay Nikolaev
>>> Virtual Open Systems