Ying-Shiuan Pan,
Thanks again - a few questions.
1. Could you address my question about tap offload features? In the guest I
can see that eth0 has all offload features disabled and that they cannot be
enabled. I suspect this is related to the tap configuration on the host (see
the ethtool commands below).
2. I can see that virtio-net notifies KVM on each received packet, even
though the guest implements NAPI. This causes many transitions between user
space and the hypervisor. Isn't there support for RX packet coalescing in
QEMU's virtio-net?
3. What are your best TX and RX iperf results today on Cortex-A15?
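
For reference, here is how I inspect the offload state on the host side
(tap0, br0 and eth3 are the device names from my setup; adjust to yours):
---------------
# Show the current offload settings of the tap and the physical NIC:
ethtool -k tap0
ethtool -k eth3
# Trying to force offloads on the tap may not stick, since QEMU
# programs tap offloads (via the TUNSETOFFLOAD ioctl) according to
# the features the guest driver negotiated:
ethtool -K tap0 tx on sg on tso on
---------------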

Regards,
Barak


On Wed, Jan 15, 2014 at 4:42 AM, Ying-Shiuan Pan
<yingshiuan....@gmail.com> wrote:

>
>
> ----
> Best Regards,
> 潘穎軒Ying-Shiuan Pan
>
>
> 2014/1/14 Barak Wasserstrom <wba...@gmail.com>
>
>> Ying-Shiuan Pan,
>> Thanks again - please see few questions below.
>>
>> Regards,
>> Barak
>>
>>
>> On Tue, Jan 14, 2014 at 5:37 AM, Ying-Shiuan Pan <
>> yingshiuan....@gmail.com> wrote:
>>
>>> Hi, Barak,
>>>
>>> I hope the following info helps you:
>>>
>>> 1.
>>> HOST:
>>> http://git.linaro.org/people/christoffer.dall/linux-kvm-arm.git
>>> branch: v3.10-arndale
>>> config: arch/arm/configs/exynos5_arndale_defconfig
>>> dtb: arch/arm/boot/dts/exynos5250-arndale.dtb
>>> rootfs: Ubuntu 13.10
>>>
>>> GUEST:
>>> Official 3.12
>>> config: arch/arm/configs/vexpress_defconfig with virtio devices enabled (see below)
>>> dtb: arch/arm/boot/dts/vexpress-v2p-ca15-tc1.dtb
>>> rootfs: Ubuntu 12.04
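>>>
>>> For reference, "with virtio devices enabled" means roughly the
>>> following on top of vexpress_defconfig (the exact option names depend
>>> on the kernel version):
>>> ---------------
>>> make vexpress_defconfig
>>> ./scripts/config -e VIRTIO -e VIRTIO_MMIO -e VIRTIO_NET -e VIRTIO_BLK
>>> make olddefconfig
>>> ---------------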
>>>
>>> 2.
>>> We are still developing it and will try to open-source it asap.
>>> The main purpose of that patch is to introduce ioeventfd support into kvm-arm.
>>>
>> [Barak] Do you have any estimate of when you can release these
>> patches?
>>
> Actually, no. I will discuss the release plan with my boss.
>
>> [Barak] Is this required for enabling vhost-net?
>>
> Yes. vhost-net relies on ioeventfd to receive kick requests from the
> front-end driver.
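>
> A quick way to sanity-check vhost-net on the host (assuming vhost-net
> is available; the device node and thread naming are standard):
> ---------------
> # QEMU needs this char device to set up vhost-net:
> ls -l /dev/vhost-net
> # load the module if vhost-net is built as one:
> modprobe vhost_net
> # with vhost=on in effect, a vhost worker kernel thread appears:
> ps -ef | grep '\[vhost-'
> ---------------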
>
>
>>
>>>
>>> 3. as mentioned in 1.
>>>
>>> 4. qemu-1.6.0
>>>
>>> 5. We ported part of the guest/host notifiers from virtio-pci to virtio-mmio.
>>>
>> [Barak] Any patches available for this?
>>
> I have not seen any, but somebody else might also be developing this.
>
>> [Barak] Is this required for enabling vhost-net?
>>
> Yes. Without those notifiers, you will see the error messages you
> mentioned below.
>
>>
>>
>>>
>>> 6. /usr/bin/qemu-system-arm -enable-kvm -kernel /root/nfs/zImage -m 128 \
>>>   --machine vexpress-a15 -cpu cortex-a15 \
>>>   -drive file=/root/nfs/guest-1G-precise-vm1.img,id=virtio-blk,if=none,cache=none \
>>>   -device virtio-blk-device,drive=virtio-blk \
>>>   -append "earlyprintk=ttyAMA0 console=ttyAMA0 root=/dev/vda rw ip=192.168.101.101::192.168.101.1:vm1:eth0:off --no-log" \
>>>   -dtb /root/nfs/vexpress-v2p-ca15-tc1.dtb --nographic \
>>>   -chardev socket,id=mon,path=/root/vm1.monitor,server,nowait \
>>>   -mon chardev=mon,id=monitor,mode=readline \
>>>   -device virtio-net-device,netdev=net0,mac="52:54:00:12:34:01" \
>>>   -netdev type=tap,id=net0,script=/root/nfs/net.sh,downscript=no,vhost=off
>>>
>> [Barak] Could you share "/root/nfs/net.sh" with me?
>>
> Sorry, I forgot that. Here it is:
> ---------------
> #!/bin/sh
> # QEMU invokes this script with the tap device name as $1.
> # Clear the tap's IP address (this also brings it up) and
> # attach it to the bridge:
> ifconfig $1 0.0.0.0
> brctl addif virbr0 $1
> ---------------
>
> virbr0 is a bridge created manually. The setup steps for virbr0 are:
> ---------------
> brctl addbr virbr0
> brctl addif virbr0 eth0
> ifconfig virbr0 [ETH0_IP]
> ifconfig eth0 0.0.0.0
> ---------------
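>
> You can then verify that eth0 is attached to the bridge with:
> ---------------
> brctl show virbr0
> ---------------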
>
>> [Barak] In the guest I can see that eth0 has all offload features disabled
>> and that they cannot be enabled. I suspect this is related to the tap
>> configuration on the host. Do you have any ideas?
>>
>>
>>>
>>> vhost-net can be turned on by changing the last parameter to vhost=on.
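>>> That is, the -netdev option from the command above becomes:
>>> ---------------
>>> -netdev type=tap,id=net0,script=/root/nfs/net.sh,downscript=no,vhost=on
>>> ---------------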
>>>
>> [Barak] When enabling vhost I get errors in QEMU; do you know what the
>> reason might be?
>> [Barak] qemu-system-arm: binding does not support guest notifiers
>> [Barak] qemu-system-arm: unable to start vhost net: 38: falling back on
>> userspace virtio
>>
> QEMU requires host/guest notifiers to set up vhost-net, but virtio-mmio
> does not support them yet.
> That's why you got those error messages.
>
>>
>>
>>>
>>>
>>> --
>>> Ying-Shiuan Pan,
>>> H Div., CCMA, ITRI, TW
>>>
>>>
>>> ----
>>> Best Regards,
>>> 潘穎軒Ying-Shiuan Pan
>>>
>>>
>>> 2014/1/13 Barak Wasserstrom <wba...@gmail.com>
>>>
>>>> Ying-Shiuan Pan,
>>>> Your experiments with the Arndale Exynos-5250 board can help me greatly,
>>>> and I would really appreciate it if you shared the following
>>>> information with me:
>>>> 1. Which Linux kernel did you use for the host and for the guest?
>>>> 2. Which Linux kernel patches did you use for KVM?
>>>> 3. Which config files did you use for both the host and guest?
>>>> 4. Which QEMU did you use?
>>>> 5. Which QEMU patches did you use?
>>>> 6. What is the exact command line you used for invoking the guest, with
>>>> and without vhost-net?
>>>>
>>>> Many thanks in advance!
>>>>
>>>> Regards,
>>>> Barak
>>>>
>>>>
>>>>
>>>> On Mon, Jan 13, 2014 at 5:47 AM, Ying-Shiuan Pan <
>>>> yingshiuan....@gmail.com> wrote:
>>>>
>>>>> Hi, Barak,
>>>>>
>>>>> We've tried vhost-net in kvm-arm on the Arndale Exynos-5250 board (it
>>>>> requires some patches in qemu and kvm, of course). It works (without
>>>>> irqfd support); however, the performance does not increase much. The
>>>>> throughput (iperf) of virtio-net and vhost-net is 93.5Mbps and
>>>>> 93.6Mbps respectively.
>>>>> I think these results are because both virtio-net and vhost-net almost
>>>>> reach the limit of 100Mbps Ethernet.
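>>>>>
>>>>> (Measured with a plain TCP iperf run along these lines; the exact
>>>>> options we used may have differed:)
>>>>> ---------------
>>>>> # on the server side (e.g. another machine on the bridge):
>>>>> iperf -s
>>>>> # in the guest, a 60-second TCP test:
>>>>> iperf -c <server-ip> -t 60
>>>>> ---------------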
>>>>>
>>>>> The good news is that we even ported vhost-net to our kvm-a9
>>>>> hypervisor (see:
>>>>> http://academic.odysci.com/article/1010113020064758/evaluation-of-a-server-grade-software-only-arm-hypervisor),
>>>>> and the throughput of vhost-net on that platform (with 1Gbps Ethernet)
>>>>> increased from 323Mbps to 435Mbps.
>>>>>
>>>>> --
>>>>> Ying-Shiuan Pan,
>>>>> H Div., CCMA, ITRI, TW
>>>>>
>>>>>
>>>>> ----
>>>>> Best Regards,
>>>>> 潘穎軒Ying-Shiuan Pan
>>>>>
>>>>>
>>>>> 2014/1/13 Peter Maydell <peter.mayd...@linaro.org>
>>>>>
>>>>>> On 12 January 2014 21:49, Barak Wasserstrom <wba...@gmail.com> wrote:
>>>>>> > Thanks - I got virtio-net-device running now, but performance is
>>>>>> terrible.
>>>>>> > When I look at the guest's ethernet interface features (ethtool -k
>>>>>> > eth0) I see all offload features are disabled.
>>>>>> > I'm using a virtual tap on the host (tap0 bridged to eth3).
>>>>>> > On the tap I also see all offload features are disabled, while on
>>>>>> > br0 and eth3 I see the expected offload features.
>>>>>> > Can this explain the terrible performance I'm facing?
>>>>>> > If so, how can this be changed?
>>>>>> > If not, what else can cause such bad performance?
>>>>>> > Do you know if vhost_net can be used on an ARM Cortex-A15
>>>>>> > host/guest, even though the guest doesn't support PCI & MSI-X?
>>>>>>
>>>>>> I have no idea, I'm afraid. I don't have enough time available to
>>>>>> investigate performance issues at the moment; if you find anything
>>>>>> specific you can submit patches...
>>>>>>
>>>>>> thanks
>>>>>> -- PMM
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
