On 16/12/2015 15:25, Vincenzo Maffione wrote:
>> vhost-net actually had better performance, so virtio-net dataplane
>> was never committed. As Michael mentioned, in practice on Linux you
>> use vhost, and non-Linux hypervisors do not use QEMU. :)
>
> Yes, I understand. However, another possibility [...]

On 16/12/2015 11:39, Vincenzo Maffione wrote:
> No problems.
>
> I have some additional (orthogonal) curiosities:
>
> 1) Assuming "hw/virtio/dataplane/vring.c" is what I think it is (VQ
> data structures directly accessible in the host virtual memory, with
> guest-physical-to-host-virtual mapping [...]
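
As a side note on what that curiosity hypothesizes: the benefit of such a
setup would be that the guest-physical addresses of the vring are translated
to host-virtual pointers once, so the hot path can dereference the ring
directly instead of doing an address lookup on every access. The sketch below
is only an illustration of that idea against a flat table of guest RAM
regions; it is not the actual vring.c code, and the names (guest_ram_region,
gpa_to_hva) are invented.

    /* Illustrative model only -- not QEMU's hw/virtio/dataplane/vring.c.
     * Translate a guest-physical address (gpa) into a host-virtual pointer
     * using a hypothetical table of mmap()ed guest RAM regions. */
    #include <stdint.h>
    #include <stddef.h>

    struct guest_ram_region {
        uint64_t gpa_base;   /* guest-physical base of the region */
        uint64_t len;        /* region length in bytes */
        void    *hva_base;   /* host-virtual address the region is mapped at */
    };

    /* Return a host pointer for [gpa, gpa + size), or NULL if the range
     * does not fit inside a single region. */
    void *gpa_to_hva(const struct guest_ram_region *regions, size_t nregions,
                     uint64_t gpa, uint64_t size)
    {
        for (size_t i = 0; i < nregions; i++) {
            const struct guest_ram_region *r = &regions[i];
            if (gpa >= r->gpa_base && gpa - r->gpa_base + size <= r->len) {
                return (uint8_t *)r->hva_base + (gpa - r->gpa_base);
            }
        }
        return NULL;
    }

Doing this translation once at vring setup time and caching the resulting
pointers is what makes "directly accessible in the host virtual memory" cheap
on the per-packet path.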

On 16/12/2015 10:28, Vincenzo Maffione wrote:
> Assuming my TX experiments with a disconnected backend (and with dynamic
> CPU frequency scaling disabled, etc.):
> 1) after patches 1 and 2, the virtio bottleneck jumps from ~1 Mpps to 1.910 Mpps.
> 2) after patches 1, 2 and 3, the virtio bottleneck jumps to 2.039 Mpps.
> So I see an improvement for patch 3, and I [...]
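
For context, those rates translate into a per-packet time budget simply by
taking the reciprocal: ~1 Mpps is about 1000 ns per packet, 1.910 Mpps about
524 ns, and 2.039 Mpps about 490 ns. A throwaway snippet to reproduce the
arithmetic (plain C, nothing QEMU-specific):

    #include <stdio.h>

    int main(void)
    {
        const double mpps[] = { 1.000, 1.910, 2.039 };
        for (int i = 0; i < 3; i++) {
            /* per-packet budget in nanoseconds = 1e9 / (rate in pps) */
            printf("%.3f Mpps -> %.0f ns/packet\n",
                   mpps[i], 1e9 / (mpps[i] * 1e6));
        }
        return 0;
    }

So patches 1 and 2 shave roughly 480 ns off the per-packet cost, and patch 3
another ~30 ns.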

On 15/12/2015 23:33, Vincenzo Maffione wrote:
> This patch slightly rewrites the code to reduce the number of accesses, since
> many of them seem unnecessary to me. After this reduction, the bottleneck
> jumps from 1 Mpps to 2 Mpps.

Very nice. Did you get new numbers with the rebase? That would [...]
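
The patch itself is not visible in this excerpt, so purely as an illustration
of the kind of access reduction being discussed, here is a self-contained
sketch (not QEMU code, all names invented) of one classic trick: keep a cached
"shadow" copy of the guest-written avail index, so the field shared with the
guest is re-read only when the cached value has been consumed, instead of on
every poll.

    #include <stdint.h>

    struct vring_avail {
        uint16_t flags;
        uint16_t idx;        /* incremented by the guest as it posts buffers */
        uint16_t ring[];
    };

    struct vq_state {
        struct vring_avail *avail;   /* mapped guest memory */
        uint16_t shadow_avail_idx;   /* last value of avail->idx we read */
        uint16_t last_avail_idx;     /* next entry the device will consume */
    };

    /* Return nonzero if at least one buffer is available.  The shared
     * avail->idx field is read only when the cached copy is exhausted. */
    int vq_has_work(struct vq_state *vq)
    {
        if (vq->last_avail_idx != vq->shadow_avail_idx) {
            return 1;                            /* answered from the cache */
        }
        vq->shadow_avail_idx = vq->avail->idx;   /* one read of guest memory */
        return vq->last_avail_idx != vq->shadow_avail_idx;
    }

Fewer reads and writes of guest-shared memory per packet is exactly the kind
of saving that moves a bottleneck from 1 Mpps toward 2 Mpps.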

Hi,
I am doing performance experiments to test how QEMU behaves when the
guest is transmitting (short) network packets at very high packet rates,
say over 1 Mpps.
I run a netmap application in the guest to generate high packet rates,
but this is not relevant to this discussion. The only important [...]