On 18.03.2013 10:53, Michael S. Tsirkin wrote:
> Do you see all interrupts going to the same CPU? If yes, is
> irqbalance running in the guest?

I had the same issue today. The problem is that IRQs are going to all vCPUs.
If smp_affinity is set to one CPU only, performance is OK. irqbalance is
running, but it [...]
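(A minimal sketch of the pinning workaround described above, run inside the
guest. The IRQ number 25 is hypothetical; check /proc/interrupts for the real
virtio line.)

  # per-CPU interrupt counts for the virtio NIC
  grep virtio /proc/interrupts
  # stop irqbalance so it does not rewrite the affinity mask again
  service irqbalance stop
  # pin the virtio IRQ to CPU0 only (hex CPU bitmask: 1 = CPU0)
  echo 1 > /proc/irq/25/smp_affinity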
> Cc: "Davide Guerri" , "Alexandre DERUMIER"
> , "Stefan Hajnoczi" ,
> qemu-devel@nongnu.org, "Jan Kiszka" , "Peter Lieven"
> , "Dietmar Maurer"
> Envoyé: Dimanche 17 Mars 2013 10:08:17
> Objet: Re: [Qemu-devel]
-
De: "Michael S. Tsirkin"
À: "Peter Lieven"
Cc: "Davide Guerri" , "Alexandre DERUMIER"
, "Stefan Hajnoczi" ,
qemu-devel@nongnu.org, "Jan Kiszka" , "Peter Lieven"
, "Dietmar Maurer"
Envoyé: Dimanche 17 Mars 2013 10:0
On 15.03.2013 00:04, Davide Guerri wrote:
> Yes, this is definitely an option :)
> Just for curiosity, what is the effect of "in-kernel irqchip"?

It emulates the irqchip in-kernel (in the KVM kernel module), which avoids
userspace exits to qemu. In your particular case I remember that it made all
IRQs [...]
Yes, this is definitely an option :)

Just for curiosity, what is the effect of "in-kernel irqchip"?
Is it possible to disable it on a "live" domain?

Cheers,
Davide

On 14/mar/2013, at 19:21, Peter Lieven wrote:
> On 14.03.2013 at 19:15, Davide Guerri wrote:
>> Of course I can do some test but a kernel upgrade is not an option here :(
> disabling the in-kernel irqchip (default since 1.2.0) should also help,
> maybe this is an option.
On 14.03.2013 at 19:15, Davide Guerri wrote:
> Of course I can do some test but a kernel upgrade is not an option here :(

Disabling the in-kernel irqchip (default since 1.2.0) should also help; maybe
this is an option.

Peter
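(A sketch of what that looks like on the qemu-kvm 1.2.0 command line. The
-machine option is the point here; the rest of the invocation is reused from
the original test case at the bottom of this thread.)

  ./x86_64-softmmu/qemu-system-x86_64 \
      -machine pc,kernel_irqchip=off \
      -smp sockets=1,cores=2 -m 512 \
      -hda debian-squeeze-netinst.raw \
      -netdev type=tap,id=net0,ifname=tap111i0,vhost=on \
      -device virtio-net-pci,netdev=net0,id=net0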
"Peter Lieven" , "Michael S. Tsirkin" ,
>> "Stefan Hajnoczi" , qemu-devel@nongnu.org, "Jan Kiszka"
>> , "Peter Lieven" , "Dietmar
>> Maurer"
>> Envoyé: Jeudi 14 Mars 2013 10:22:22
>> Objet: Re: [Qemu-devel] slow vi
ichael S. Tsirkin" ,
> "Stefan Hajnoczi" , qemu-devel@nongnu.org, "Jan Kiszka"
> , "Peter Lieven" , "Dietmar
> Maurer"
> Envoyé: Jeudi 14 Mars 2013 10:22:22
> Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
&g
>> I only tested with RHEL6.3 kernel on host.
>
> Can you check if there is a difference in interrupt delivery between those
> two? cat /proc/interrupts should be sufficient after some traffic has flown.

While trying to reproduce the bug, we just detected that it depends on the
hardware [...]
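(A minimal sketch of that check: snapshot the counters, push some bulk
traffic — the wget test from this thread will do — and see which CPU columns
grow.)

  grep virtio /proc/interrupts > before.txt
  # ... generate traffic ...
  grep virtio /proc/interrupts > after.txt
  diff before.txt after.txt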
Dietmar Maurer wrote:
>>> You can also reproduce the problem with RHEL6.2 as guest. But it seems
>>> RHEL 6.3 fixed it.
>>
>> RHEL6.2 on ubuntu host?
>
> I only tested with RHEL6.3 kernel on host.

Can you check if there is a difference in interrupt delivery between those
two? cat /proc/interrupts should be sufficient after some traffic has flown.
>> You can also reproduce the problem with RHEL6.2 as guest. But it seems
>> RHEL 6.3 fixed it.
>
> RHEL6.2 on ubuntu host?

I only tested with RHEL6.3 kernel on host.
> Tried with upstream qemu on rhel kernel and that's even a bit faster.
> So it's the ubuntu kernel. Vanilla 2.6.32 didn't have vhost at all, so maybe
> their vhost backport is broken in some way.

You can also reproduce the problem with RHEL6.2 as guest.
But it seems RHEL 6.3 fixed it.

There seem to be [...]
> The bug seems to be relevant only when vhost-net is used.
>
> Dietmar, do you see implications with normal virtio?

No, only with vhost=on.
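(For reference, the toggle in question is the vhost flag on the tap netdev; a
sketch based on the test command from the original report below.)

  -netdev type=tap,id=net0,ifname=tap111i0,vhost=off  # plain virtio-net path, unaffected
  -netdev type=tap,id=net0,ifname=tap111i0,vhost=on   # vhost-net in-kernel path, shows the regression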
Remark:
If I disable interrupts on CPU1-3 for virtio, the performance is OK again.

Now we need someone with deeper knowledge of the in-kernel irqchip and of
virtio/vhost driver development to say whether this is a regression in
qemu-kvm or a problem with the old virtio drivers if they receive the
interrupts [...]
It seems that with in-kernel irqchip the interrupts are distributed across
all vCPUs. Without in-kernel irqchip all interrupts are on CPU0. Maybe this
is related.

Without in-kernel irqchip:

            CPU0       CPU1       CPU2       CPU3
   0:         16          0          0          0   IO-APIC-edge  [...]
Dietmar Maurer wrote:
>> Dietmar, how is the speed if you specify --machine pc,kernel_irqchip=off
>> as cmdline option to qemu-kvm-1.2.0?
>
> I get full speed if I use that flag.

I also tried to reproduce it and can confirm your findings: host Ubuntu 12.04
LTS (kernel 3.2) with vanilla qemu [...]
> Dietmar, how is the speed if you specify --machine pc,kernel_irqchip=off as
> cmdline option to qemu-kvm-1.2.0?

I get full speed if I use that flag.
Jan Kiszka wrote:
> Meanwhile I quickly tried to reproduce but didn't succeed so far (>10 GBit
> between host and guest with vhost=on and 2 guest cores).
> However, I finally realized that we are talking about a pretty special host
> kernel which I don't have around. I guess this is better dealt with by Red
> Hat folks [...]
On 2012-11-06 10:46, Dietmar Maurer wrote:
>>> This obviously breaks vhost when using multiple cores.
>>
>> With "obviously" you mean you already have a clue why?
>>
>> I'll try to reproduce.
>
> No, sorry - just meant the performance regression is obvious (a factor of
> 20 to 40).

OK. Did you try [...]
>> This obviously breaks vhost when using multiple cores.
>
> With "obviously" you mean you already have a clue why?
>
> I'll try to reproduce.

No, sorry - just meant the performance regression is obvious (a factor of 20
to 40).
On 2012-11-06 10:01, Dietmar Maurer wrote:
> OK, bisect pointed me to this commit:
>
> # git bisect bad
> 7d37d351dffee60fc7048bbfd8573421f15eb724 is the first bad commit
> commit 7d37d351dffee60fc7048bbfd8573421f15eb724
> Author: Jan Kiszka
> Date:   Thu May 17 10:32:39 2012 -0300
>
>     virtio/[...]
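(A sketch of the bisect session behind that result. The good/bad endpoints
are assumptions — any known-good older qemu build works as the starting
point.)

  git bisect start
  git bisect bad  v1.2.0     # slow with vhost=on and multiple cores
  git bisect good v1.1.0     # assumed last known-good release
  # rebuild and rerun the wget/netperf test at each step, then mark:
  git bisect good            # or: git bisect bad
  git bisect reset           # once the first bad commit is printed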
Peter Lieven wrote on 06.11.2012:
>> got incredible bad performance - can't you reproduce the problem?
>
> I have seen a similar problem, but it seems to occur only with extremely
> old guest kernels. With Ubuntu 12.04 it seems not to be there. Can you try
> a recent guest kernel?

I know that it works with newer kernels. But we need [...]
> Is the network path you are downloading across reasonably idle, so that you
> get reproducible results between runs?

Also tested with netperf (to local host) now.

Results in short (throughput in Mbit/sec):

vhost=off,cores=1: 3982
vhost=off,cores=2: 3930
vhost=off,cores=4: 3912
vhost=on,cores=[...]
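(The runs above were "to local host"; a sketch of such a measurement. The
netserver address is an assumption — use the address of the receiving side.)

  netserver                                  # on the receiving end (the host)
  netperf -H 10.0.0.1 -t TCP_STREAM -l 30    # in the guest; reports 10^6 bits/sec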
Network performance with vhost=on is extremely bad if a guest uses multiple
cores:

HOST kernel: RHEL 2.6.32-279.11.1.el6
KVM 1.2.0
GuestOS: Debian Squeeze (amd64 or 686), CentOS 6.2

Test with something like this (install Debian Squeeze first):

./x86_64-softmmu/qemu-system-x86_64 -smp sockets=1,cores=2 -m 512 \
    -hda debian-squeeze-netinst.raw \
    -netdev type=tap,id=net0,ifname=tap111i0,vhost=on \
    -device virtio-net-pci,netdev=net0,id=net0

Downloading a larger file with wget inside the guest will show the problem.
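(A sketch of the download test described above; the URL is hypothetical — any
large file on a fast, otherwise idle link works.)

  # inside the guest, throughput is printed as the download runs
  wget -O /dev/null http://mirror.example.org/debian/large-file.iso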