Ying-Shiuan Pan,
Your experiments with the Arndale Exynos-5250 board could help me greatly, and I
would really appreciate it if you could share the following information:
1. Which Linux kernel did you use for the host and for the guest?
2. Which Linux kernel patches did you use for KVM?
3. Which config files did you use for both the host and guest?
4. Which QEMU did you use?
5. Which QEMU patches did you use?
6. What is the exact command line you used for invoking the guest, with and
without vhost-net?
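
For what it's worth, here is roughly what I would expect the invocation to look
like on a PCI-less ARM guest, using the virtio-mmio transport; the kernel image,
DTB, machine type, and tap interface name below are placeholders for my own
guesses, not your actual setup:

```shell
# Guest networking via a tap device with vhost-net enabled (vhost=on).
# Paths, machine type, and ifname are placeholders, not a known-good config.
qemu-system-arm -M vexpress-a15 -cpu cortex-a15 -m 512 \
    -kernel zImage -dtb vexpress-v2p-ca15-tc1.dtb \
    -append "console=ttyAMA0 root=/dev/vda" \
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no,vhost=on \
    -device virtio-net-device,netdev=net0 \
    -nographic

# The same guest without vhost-net: drop vhost=on (or set vhost=off)
# on the -netdev line.
```

Please correct me where this differs from what you actually ran.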

Many thanks in advance!

Regards,
Barak



On Mon, Jan 13, 2014 at 5:47 AM, Ying-Shiuan Pan
<yingshiuan....@gmail.com> wrote:

> Hi, Barak,
>
> We've tried vhost-net in kvm-arm on the Arndale Exynos-5250 board (it
> requires some patches in QEMU and KVM, of course). It works (without irqfd
> support); however, the performance does not increase much. The throughput
> (iperf) of virtio-net and vhost-net is 93.5Mbps and 93.6Mbps respectively.
> I think these results are because both virtio-net and vhost-net almost
> reach the limit of the 100Mbps Ethernet.
>
> The good news is that we even ported vhost-net to our kvm-a9 hypervisor
> (refer:
> http://academic.odysci.com/article/1010113020064758/evaluation-of-a-server-grade-software-only-arm-hypervisor),
> and the throughput of vhost-net on that platform (with 1Gbps Ethernet)
> increased from 323Mbps to 435Mbps.
>
> --
> Ying-Shiuan Pan,
> H Div., CCMA, ITRI, TW
>
>
>
>
> 2014/1/13 Peter Maydell <peter.mayd...@linaro.org>
>
>> On 12 January 2014 21:49, Barak Wasserstrom <wba...@gmail.com> wrote:
>> > Thanks - I got virtio-net-device running now, but performance is
>> > terrible. When I look at the guest's ethernet interface features
>> > (ethtool -k eth0) I see that all offload features are disabled.
>> > I'm using a virtual tap on the host (tap0 bridged to eth3). On the
>> > tap I also see all offload features disabled, while on br0 and eth3
>> > I see the expected offload features.
>> > Can this explain the terrible performance I'm facing?
>> > If so, how can it be changed?
>> > If not, what else can cause such bad performance?
>> > Do you know whether vhost_net can be used on an ARM Cortex-A15
>> > host/guest, even though the guest doesn't support PCI & MSI-X?
>>
>> I have no idea, I'm afraid. I don't have enough time available to
>> investigate performance issues at the moment; if you find anything
>> specific you can submit patches...
>>
>> thanks
>> -- PMM
>>
>>
>
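
Regarding the disabled offloads mentioned above: on my host I have been
checking and toggling the tap device's features like this (tap0 is my
interface name; I'm not sure which of these features tap devices actually
honor, so treat this as what I tried rather than a known fix):

```shell
# Inspect the current offload feature state of the tap device (host side).
ethtool -k tap0

# Try to enable checksum, scatter-gather, and segmentation offloads.
# Some features may be reported as "fixed" and refuse to change on tap.
ethtool -K tap0 tx on sg on tso on gso on
```

Whether the guest's virtio-net interface then picks these up presumably
depends on the feature negotiation between the host backend and the guest
driver.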
