Re: [PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-23 Thread Gleb Natapov
On Mon, Sep 23, 2013 at 03:44:21PM +0200, Paolo Bonzini wrote:
> On 23/09/2013 15:36, Paul Gortmaker wrote:
> >> > The change is not completely trivial; it splits a lock. Of course there
> >> > is no obvious problem, otherwise you wouldn't have sent it and I wouldn't
> >> > have acked it :), but that does not mean the chance of a problem is zero,
> >> > so why risk the stability of stable even a little if the patch does not
> >> > fix anything in stable?
> >> > 
> >> > I do not know how -rt development works or how it affects decisions about
> >> > stable acceptance, but why can't they carry the patch in their own tree
> >> > until they move to 3.12?
> > The -rt tree regularly carries mainline backports that are of interest
> > to -rt but perhaps not of interest to stable, so there is no problem
> > doing the same with content like this, if desired.
> 
> Perfect, I'll queue [v2 of] these patches for 3.12 then.
> 
Why 3.12 if it is not going to stable?

--
Gleb.


Re: [PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-23 Thread Paolo Bonzini
On 23/09/2013 16:59, Gleb Natapov wrote:
> > Perfect, I'll queue [v2 of] these patches for 3.12 then.
> 
> Why 3.12 if it is not going to stable?

Off-by-one. :)

Paolo


Re: [PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-23 Thread Paolo Bonzini
On 23/09/2013 15:36, Paul Gortmaker wrote:
>> > The change is not completely trivial; it splits a lock. Of course there
>> > is no obvious problem, otherwise you wouldn't have sent it and I wouldn't
>> > have acked it :), but that does not mean the chance of a problem is zero,
>> > so why risk the stability of stable even a little if the patch does not
>> > fix anything in stable?
>> > 
>> > I do not know how -rt development works or how it affects decisions about
>> > stable acceptance, but why can't they carry the patch in their own tree
>> > until they move to 3.12?
> The -rt tree regularly carries mainline backports that are of interest
> to -rt but perhaps not of interest to stable, so there is no problem
> doing the same with content like this, if desired.

Perfect, I'll queue [v2 of] these patches for 3.12 then.

Paolo


Re: [PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-23 Thread Paul Gortmaker
On 13-09-22 05:53 AM, Gleb Natapov wrote:
> On Sun, Sep 22, 2013 at 10:53:14AM +0200, Paolo Bonzini wrote:
>> On 22/09/2013 09:42, Gleb Natapov wrote:
>>> On Mon, Sep 16, 2013 at 04:06:10PM +0200, Paolo Bonzini wrote:
>>>> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
>>>> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
>>>> patch that shrunk the kvm_lock critical section so that the mmu_lock
>>>> critical section does not nest with it, but in the end there is no reason
>>>> for the vm_list to be protected by a raw spinlock.  Only manipulations
>>>> of kvm_usage_count and the consequent hardware_enable/disable operations
>>>> are not preemptable.
>>>>
>>>> This small series thus splits the kvm_lock in the "raw" part and the
>>>> "non-raw" part.
>>>>
>>>> Paul, could you please provide your Tested-by?
>>>>
>>> Reviewed-by: Gleb Natapov 
>>>
>>> But why should it go to stable?
>>
>> It is a regression from before the kvm_lock was made raw.  Secondarily,
> It was made raw in 2.6.39, and the commit message claims it was done for
> -rt's sake, so why was the regression noticed only now?
> 
>> it takes a much longer time before a patch hits -rt trees (can even be
>> as much as a year) and this patch does nothing on non-rt trees.  So
>> without putting it into stable it would get no actual coverage.
>>
> The change is not completely trivial; it splits a lock. Of course there is
> no obvious problem, otherwise you wouldn't have sent it and I wouldn't have
> acked it :), but that does not mean the chance of a problem is zero, so why
> risk the stability of stable even a little if the patch does not fix
> anything in stable?
> 
> I do not know how -rt development works or how it affects decisions about
> stable acceptance, but why can't they carry the patch in their own tree
> until they move to 3.12?

The -rt tree regularly carries mainline backports that are of interest
to -rt but perhaps not of interest to stable, so there is no problem
doing the same with content like this, if desired.

Thanks,
Paul.
--

> 
> --
>   Gleb.
> 


Re: [PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-22 Thread Jan Kiszka
On 2013-09-22 11:53, Gleb Natapov wrote:
> On Sun, Sep 22, 2013 at 10:53:14AM +0200, Paolo Bonzini wrote:
>> On 22/09/2013 09:42, Gleb Natapov wrote:
>>> On Mon, Sep 16, 2013 at 04:06:10PM +0200, Paolo Bonzini wrote:
>>>> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
>>>> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
>>>> patch that shrunk the kvm_lock critical section so that the mmu_lock
>>>> critical section does not nest with it, but in the end there is no reason
>>>> for the vm_list to be protected by a raw spinlock.  Only manipulations
>>>> of kvm_usage_count and the consequent hardware_enable/disable operations
>>>> are not preemptable.
>>>>
>>>> This small series thus splits the kvm_lock in the "raw" part and the
>>>> "non-raw" part.
>>>>
>>>> Paul, could you please provide your Tested-by?
>>>>
>>> Reviewed-by: Gleb Natapov 
>>>
>>> But why should it go to stable?
>>
>> It is a regression from before the kvm_lock was made raw.  Secondarily,
> It was made raw in 2.6.39, and the commit message claims it was done for
> -rt's sake, so why was the regression noticed only now?

Probably the path is stressed too infrequently. Just checked: the issue
was present from day #1, what a shame.

> 
>> it takes a much longer time before a patch hits -rt trees (can even be
>> as much as a year) and this patch does nothing on non-rt trees.  So
>> without putting it into stable it would get no actual coverage.
>>
> The change is not completely trivial; it splits a lock. Of course there is
> no obvious problem, otherwise you wouldn't have sent it and I wouldn't have
> acked it :), but that does not mean the chance of a problem is zero, so why
> risk the stability of stable even a little if the patch does not fix
> anything in stable?
> 
> I do not know how -rt development works or how it affects decisions about
> stable acceptance, but why can't they carry the patch in their own tree
> until they move to 3.12?

I think it would be fair to let stable -rt carry these. -rt requires
more specific patching anyway due to the waitqueue issue Paul reported.
But CC'ing Steven to obtain his view.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux


Re: [PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-22 Thread Gleb Natapov
On Sun, Sep 22, 2013 at 10:53:14AM +0200, Paolo Bonzini wrote:
> On 22/09/2013 09:42, Gleb Natapov wrote:
> > On Mon, Sep 16, 2013 at 04:06:10PM +0200, Paolo Bonzini wrote:
> >> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
> >> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
> >> patch that shrunk the kvm_lock critical section so that the mmu_lock
> >> critical section does not nest with it, but in the end there is no reason
> >> for the vm_list to be protected by a raw spinlock.  Only manipulations
> >> of kvm_usage_count and the consequent hardware_enable/disable operations
> >> are not preemptable.
> >>
> >> This small series thus splits the kvm_lock in the "raw" part and the
> >> "non-raw" part.
> >>
> >> Paul, could you please provide your Tested-by?
> >>
> > Reviewed-by: Gleb Natapov 
> > 
> > But why should it go to stable?
> 
> It is a regression from before the kvm_lock was made raw.  Secondarily,
It was made raw in 2.6.39, and the commit message claims it was done for
-rt's sake, so why was the regression noticed only now?

> it takes a much longer time before a patch hits -rt trees (can even be
> as much as a year) and this patch does nothing on non-rt trees.  So
> without putting it into stable it would get no actual coverage.
> 
The change is not completely trivial; it splits a lock. Of course there is
no obvious problem, otherwise you wouldn't have sent it and I wouldn't have
acked it :), but that does not mean the chance of a problem is zero, so why
risk the stability of stable even a little if the patch does not fix
anything in stable?

I do not know how -rt development works or how it affects decisions about
stable acceptance, but why can't they carry the patch in their own tree
until they move to 3.12?

--
Gleb.


Re: [PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-22 Thread Paolo Bonzini
On 22/09/2013 09:42, Gleb Natapov wrote:
> On Mon, Sep 16, 2013 at 04:06:10PM +0200, Paolo Bonzini wrote:
>> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
>> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
>> patch that shrunk the kvm_lock critical section so that the mmu_lock
>> critical section does not nest with it, but in the end there is no reason
>> for the vm_list to be protected by a raw spinlock.  Only manipulations
>> of kvm_usage_count and the consequent hardware_enable/disable operations
>> are not preemptable.
>>
>> This small series thus splits the kvm_lock in the "raw" part and the
>> "non-raw" part.
>>
>> Paul, could you please provide your Tested-by?
>>
> Reviewed-by: Gleb Natapov 
> 
> But why should it go to stable?

It is a regression from before the kvm_lock was made raw.  Secondarily,
it takes a much longer time before a patch hits -rt trees (can even be
as much as a year) and this patch does nothing on non-rt trees.  So
without putting it into stable it would get no actual coverage.

Paolo


Re: [PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-22 Thread Gleb Natapov
On Mon, Sep 16, 2013 at 04:06:10PM +0200, Paolo Bonzini wrote:
> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
> patch that shrunk the kvm_lock critical section so that the mmu_lock
> critical section does not nest with it, but in the end there is no reason
> for the vm_list to be protected by a raw spinlock.  Only manipulations
> of kvm_usage_count and the consequent hardware_enable/disable operations
> are not preemptable.
> 
> This small series thus splits the kvm_lock in the "raw" part and the
> "non-raw" part.
> 
> Paul, could you please provide your Tested-by?
> 
Reviewed-by: Gleb Natapov 

But why should it go to stable?

> Thanks,
> 
> Paolo
> 
> Paolo Bonzini (3):
>   KVM: cleanup (physical) CPU hotplug
>   KVM: protect kvm_usage_count with its own spinlock
>   KVM: Convert kvm_lock back to non-raw spinlock
> 
>  Documentation/virtual/kvm/locking.txt |  8 --
>  arch/x86/kvm/mmu.c|  4 +--
>  arch/x86/kvm/x86.c|  8 +++---
>  include/linux/kvm_host.h  |  2 +-
>  virt/kvm/kvm_main.c   | 51 ++-
>  5 files changed, 40 insertions(+), 33 deletions(-)
> 
> -- 
> 1.8.3.1

--
Gleb.


Re: [PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-21 Thread Michael S. Tsirkin
On Fri, Sep 20, 2013 at 08:04:19PM +0200, Jan Kiszka wrote:
> On 2013-09-20 19:51, Paul Gortmaker wrote:
> > [Re: [PATCH 0/3] KVM: Make kvm_lock non-raw] On 16/09/2013 (Mon 18:12) Paul 
> > Gortmaker wrote:
> > 
> >> On 13-09-16 10:06 AM, Paolo Bonzini wrote:
> >>> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
> >>> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
> >>> patch that shrunk the kvm_lock critical section so that the mmu_lock
> >>> critical section does not nest with it, but in the end there is no reason
> >>> for the vm_list to be protected by a raw spinlock.  Only manipulations
> >>> of kvm_usage_count and the consequent hardware_enable/disable operations
> >>> are not preemptable.
> >>>
> >>> This small series thus splits the kvm_lock in the "raw" part and the
> >>> "non-raw" part.
> >>>
> >>> Paul, could you please provide your Tested-by?
> >>
> >> Sure, I'll go back and see if I can find what triggered it in the
> >> original report, and give the patches a spin on 3.4.x-rt (and probably
> >> 3.10.x-rt, since that is where rt-current is presently).
> > 
> > Seems fine on 3.4-rt.  On 3.10.10-rt7 it looks like there are other
> > issues, probably not explicitly related to this patchset (see below).
> > 
> > Paul.
> > --
> > 
> > e1000e :00:19.0 eth1: removed PHC
> > assign device 0:0:19.0
> > pci :00:19.0: irq 43 for MSI/MSI-X
> > pci :00:19.0: irq 43 for MSI/MSI-X
> > pci :00:19.0: irq 43 for MSI/MSI-X
> > pci :00:19.0: irq 43 for MSI/MSI-X
> > BUG: sleeping function called from invalid context at 
> > /home/paul/git/linux-rt/kernel/rtmutex.c:659
> > in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0
> > 2 locks held by swapper/0/0:
> >  #0:  (rcu_read_lock){.+.+.+}, at: [] 
> > kvm_set_irq_inatomic+0x2a/0x4a0
> >  #1:  (rcu_read_lock){.+.+.+}, at: [] 
> > kvm_irq_delivery_to_apic_fast+0x60/0x3d0
> > irq event stamp: 6121390
> > hardirqs last  enabled at (6121389): [] 
> > restore_args+0x0/0x30
> > hardirqs last disabled at (6121390): [] 
> > common_interrupt+0x6a/0x6f
> > softirqs last  enabled at (0): [<  (null)>]   (null)
> > softirqs last disabled at (0): [<  (null)>]   (null)
> > Preemption disabled at:[] cpu_startup_entry+0x1ba/0x430
> > 
> > CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
> > Hardware name: Dell Inc. OptiPlex 990/0VNP2H, BIOS A17 03/14/2013
> >  8201c440 880223603cf0 819f177d 880223603d18
> >  810c90d3 880214a50110 0001 0001
> >  880223603d38 819f89a4 880214a50110 880214a50110
> > Call Trace:
> >[] dump_stack+0x19/0x1b
> >  [] __might_sleep+0x153/0x250
> >  [] rt_spin_lock+0x24/0x60
> >  [] __wake_up+0x36/0x70
> >  [] kvm_vcpu_kick+0x3b/0xd0
> 
> -rt lacks an atomic waitqueue for triggering VCPU wakeups on MSIs from
> assigned devices directly from the host IRQ handler. We need to disable
> this fast-path in -rt or introduce such an abstraction (I did this once
> over 2.6.33-rt).
> 
> IIRC, VFIO goes the slower path via a kernel thread unconditionally,
> thus cannot trigger this.

AFAIK VFIO just uses eventfds and these can
inject MSI interrupts directly from IRQ without going through a thread.
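
For context, a minimal userspace sketch of that wiring: an eventfd is
attached to a GSI with KVM_IRQFD, and a write to the fd then signals the
irqfd for injection. The helper name and the vmfd/gsi parameters are
assumptions for the sketch, not code from this thread.

#include <linux/kvm.h>
#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int wire_irqfd(int vmfd, unsigned int gsi)
{
	struct kvm_irqfd irqfd;
	int fd = eventfd(0, 0);

	if (fd < 0)
		return -1;

	memset(&irqfd, 0, sizeof(irqfd));
	irqfd.fd  = fd;		/* signal this fd to inject */
	irqfd.gsi = gsi;	/* guest interrupt to raise */
	if (ioctl(vmfd, KVM_IRQFD, &irqfd) < 0) {
		close(fd);
		return -1;
	}

	return fd;		/* write(fd, &(uint64_t){1}, 8) injects */
}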


> Only legacy device assignment is affected.
> 
> Jan
> 
> >  [] __apic_accept_irq+0x2b2/0x3a0
> >  [] kvm_apic_set_irq+0x27/0x30
> >  [] kvm_irq_delivery_to_apic_fast+0x1ae/0x3d0
> >  [] ? kvm_irq_delivery_to_apic_fast+0x60/0x3d0
> >  [] kvm_set_irq_inatomic+0x12b/0x4a0
> >  [] ? kvm_set_irq_inatomic+0x2a/0x4a0
> >  [] kvm_assigned_dev_msi+0x23/0x40
> >  [] handle_irq_event_percpu+0x88/0x3d0
> >  [] ? cpu_startup_entry+0x19c/0x430
> >  [] handle_irq_event+0x48/0x70
> >  [] handle_edge_irq+0x77/0x120
> >  [] handle_irq+0x1e/0x30
> >  [] do_IRQ+0x5a/0xd0
> >  [] common_interrupt+0x6f/0x6f
> >[] ? retint_restore_args+0xe/0xe
> >  [] ? cpu_startup_entry+0x19c/0x430
> >  [] ? cpu_startup_entry+0x158/0x430
> >  [] rest_init+0x137/0x140
> >  [] ? rest_init+0x5/0x140
> >  [] start_kernel+0x3af/0x3bc
> >  [] ? repair_env_string+0x5e/0x5e
> >  [] x86_64_start_reservations+0x2a/0x2c
> >  [] x86_64_start_kernel+0xcc/0xcf
> > 
> > =
> > [ INFO: inconsistent lock state ]
> > 

Re: [PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-20 Thread Paul Gortmaker
On 13-09-20 02:04 PM, Jan Kiszka wrote:
> On 2013-09-20 19:51, Paul Gortmaker wrote:
>> [Re: [PATCH 0/3] KVM: Make kvm_lock non-raw] On 16/09/2013 (Mon 18:12) Paul 
>> Gortmaker wrote:
>>
>>> On 13-09-16 10:06 AM, Paolo Bonzini wrote:
>>>> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
>>>> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
>>>> patch that shrunk the kvm_lock critical section so that the mmu_lock
>>>> critical section does not nest with it, but in the end there is no reason
>>>> for the vm_list to be protected by a raw spinlock.  Only manipulations
>>>> of kvm_usage_count and the consequent hardware_enable/disable operations
>>>> are not preemptable.
>>>>
>>>> This small series thus splits the kvm_lock in the "raw" part and the
>>>> "non-raw" part.
>>>>
>>>> Paul, could you please provide your Tested-by?
>>>
>>> Sure, I'll go back and see if I can find what triggered it in the
>>> original report, and give the patches a spin on 3.4.x-rt (and probably
>>> 3.10.x-rt, since that is where rt-current is presently).
>>
>> Seems fine on 3.4-rt.  On 3.10.10-rt7 it looks like there are other
>> issues, probably not explicitly related to this patchset (see below).
>>
>> Paul.
>> --
>>
>> e1000e :00:19.0 eth1: removed PHC
>> assign device 0:0:19.0
>> pci :00:19.0: irq 43 for MSI/MSI-X
>> pci :00:19.0: irq 43 for MSI/MSI-X
>> pci :00:19.0: irq 43 for MSI/MSI-X
>> pci :00:19.0: irq 43 for MSI/MSI-X
>> BUG: sleeping function called from invalid context at 
>> /home/paul/git/linux-rt/kernel/rtmutex.c:659
>> in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0
>> 2 locks held by swapper/0/0:
>>  #0:  (rcu_read_lock){.+.+.+}, at: [] 
>> kvm_set_irq_inatomic+0x2a/0x4a0
>>  #1:  (rcu_read_lock){.+.+.+}, at: [] 
>> kvm_irq_delivery_to_apic_fast+0x60/0x3d0
>> irq event stamp: 6121390
>> hardirqs last  enabled at (6121389): [] 
>> restore_args+0x0/0x30
>> hardirqs last disabled at (6121390): [] 
>> common_interrupt+0x6a/0x6f
>> softirqs last  enabled at (0): [<  (null)>]   (null)
>> softirqs last disabled at (0): [<  (null)>]   (null)
>> Preemption disabled at:[] cpu_startup_entry+0x1ba/0x430
>>
>> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
>> Hardware name: Dell Inc. OptiPlex 990/0VNP2H, BIOS A17 03/14/2013
>>  8201c440 880223603cf0 819f177d 880223603d18
>>  810c90d3 880214a50110 0001 0001
>>  880223603d38 819f89a4 880214a50110 880214a50110
>> Call Trace:
>>[] dump_stack+0x19/0x1b
>>  [] __might_sleep+0x153/0x250
>>  [] rt_spin_lock+0x24/0x60
>>  [] __wake_up+0x36/0x70
>>  [] kvm_vcpu_kick+0x3b/0xd0
> 
> -rt lacks an atomic waitqueue for triggering VCPU wakeups on MSIs from
> assigned devices directly from the host IRQ handler. We need to disable
> this fast-path in -rt or introduce such an abstraction (I did this once
> over 2.6.33-rt).

Ah, right -- the simple wait queue support (currently -rt specific)
would have to be used here.  It is on the todo list to get that moved
from -rt into mainline.
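
For the record, a minimal sketch of what the wakeup side could look like on
a simple waitqueue; the swait_* names below are assumptions for the sketch,
not a committed API.

#include <linux/swait.h>
#include <linux/types.h>

/* A simple waitqueue only takes a raw spinlock internally, so waking it
 * from hard IRQ context is legal even on -rt, unlike __wake_up() on a
 * regular waitqueue. */
static DECLARE_SWAIT_QUEUE_HEAD(vcpu_wq);

static void vcpu_kick_sketch(void)
{
	if (swait_active(&vcpu_wq))
		swake_up(&vcpu_wq);	/* hardirq-safe */
}

static int vcpu_block_sketch(bool (*runnable)(void))
{
	/* Wait side: sleeps until runnable(); woken by vcpu_kick_sketch(). */
	return swait_event_interruptible(vcpu_wq, runnable());
}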

Paul.
--

> 
> IIRC, VFIO goes the slower path via a kernel thread unconditionally,
> thus cannot trigger this. Only legacy device assignment is affected.
> 
> Jan
> 
>>  [] __apic_accept_irq+0x2b2/0x3a0
>>  [] kvm_apic_set_irq+0x27/0x30
>>  [] kvm_irq_delivery_to_apic_fast+0x1ae/0x3d0
>>  [] ? kvm_irq_delivery_to_apic_fast+0x60/0x3d0
>>  [] kvm_set_irq_inatomic+0x12b/0x4a0
>>  [] ? kvm_set_irq_inatomic+0x2a/0x4a0
>>  [] kvm_assigned_dev_msi+0x23/0x40
>>  [] handle_irq_event_percpu+0x88/0x3d0
>>  [] ? cpu_startup_entry+0x19c/0x430
>>  [] handle_irq_event+0x48/0x70
>>  [] handle_edge_irq+0x77/0x120
>>  [] handle_irq+0x1e/0x30
>>  [] do_IRQ+0x5a/0xd0
>>  [] common_interrupt+0x6f/0x6f
>>[] ? retint_restore_args+0xe/0xe
>>  [] ? cpu_startup_entry+0x19c/0x430
>>  [] ? cpu_startup_entry+0x158/0x430
>>  [] rest_init+0x137/0x140
>>  [] ? rest_init+0x5/0x140
>>  [] start_kernel+0x3af/0x3bc
>>  [] ? repair_env_string+0x5e/0x5e
>>  [] x86_64_start_reservations+0x2a/0x2c
>>  [] x86_64_start_kernel+0xcc/0xcf
>>
>> =
>> [ INFO: inconsistent lock state ]
>> 3.10.10-rt7 #2 Not tainted
>> ---

Re: [PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-20 Thread Jan Kiszka
On 2013-09-20 19:51, Paul Gortmaker wrote:
> [Re: [PATCH 0/3] KVM: Make kvm_lock non-raw] On 16/09/2013 (Mon 18:12) Paul 
> Gortmaker wrote:
> 
>> On 13-09-16 10:06 AM, Paolo Bonzini wrote:
>>> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
>>> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
>>> patch that shrunk the kvm_lock critical section so that the mmu_lock
>>> critical section does not nest with it, but in the end there is no reason
>>> for the vm_list to be protected by a raw spinlock.  Only manipulations
>>> of kvm_usage_count and the consequent hardware_enable/disable operations
>>> are not preemptable.
>>>
>>> This small series thus splits the kvm_lock in the "raw" part and the
>>> "non-raw" part.
>>>
>>> Paul, could you please provide your Tested-by?
>>
>> Sure, I'll go back and see if I can find what triggered it in the
>> original report, and give the patches a spin on 3.4.x-rt (and probably
>> 3.10.x-rt, since that is where rt-current is presently).
> 
> Seems fine on 3.4-rt.  On 3.10.10-rt7 it looks like there are other
> issues, probably not explicitly related to this patchset (see below).
> 
> Paul.
> --
> 
> e1000e :00:19.0 eth1: removed PHC
> assign device 0:0:19.0
> pci :00:19.0: irq 43 for MSI/MSI-X
> pci :00:19.0: irq 43 for MSI/MSI-X
> pci :00:19.0: irq 43 for MSI/MSI-X
> pci :00:19.0: irq 43 for MSI/MSI-X
> BUG: sleeping function called from invalid context at 
> /home/paul/git/linux-rt/kernel/rtmutex.c:659
> in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0
> 2 locks held by swapper/0/0:
>  #0:  (rcu_read_lock){.+.+.+}, at: [] 
> kvm_set_irq_inatomic+0x2a/0x4a0
>  #1:  (rcu_read_lock){.+.+.+}, at: [] 
> kvm_irq_delivery_to_apic_fast+0x60/0x3d0
> irq event stamp: 6121390
> hardirqs last  enabled at (6121389): [] 
> restore_args+0x0/0x30
> hardirqs last disabled at (6121390): [] 
> common_interrupt+0x6a/0x6f
> softirqs last  enabled at (0): [<  (null)>]   (null)
> softirqs last disabled at (0): [<  (null)>]   (null)
> Preemption disabled at:[] cpu_startup_entry+0x1ba/0x430
> 
> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
> Hardware name: Dell Inc. OptiPlex 990/0VNP2H, BIOS A17 03/14/2013
>  8201c440 880223603cf0 819f177d 880223603d18
>  810c90d3 880214a50110 0001 0001
>  880223603d38 819f89a4 880214a50110 880214a50110
> Call Trace:
>[] dump_stack+0x19/0x1b
>  [] __might_sleep+0x153/0x250
>  [] rt_spin_lock+0x24/0x60
>  [] __wake_up+0x36/0x70
>  [] kvm_vcpu_kick+0x3b/0xd0

-rt lacks an atomic waitqueue for triggering VCPU wakeups on MSIs from
assigned devices directly from the host IRQ handler. We need to disable
this fast-path in -rt or introduce such an abstraction (I did this once
over 2.6.33-rt).
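
Disabling the fast path could be as simple as making the atomic injection
helper bail out on -rt so the primary handler defers to its threaded half;
a hypothetical sketch only (the hook point and config symbol are
assumptions, not a tested change):

#include <linux/errno.h>
#include <linux/interrupt.h>

static int set_irq_inatomic_sketch(void)
{
#ifdef CONFIG_PREEMPT_RT_FULL
	/* Refuse atomic delivery: __wake_up() may sleep on -rt. */
	return -EWOULDBLOCK;
#else
	/* ... deliver the MSI directly, as today ... */
	return 0;
#endif
}

static irqreturn_t assigned_dev_msi_sketch(int irq, void *dev_id)
{
	/* -EWOULDBLOCK pushes the work to the (sleepable) irq thread. */
	return set_irq_inatomic_sketch() == -EWOULDBLOCK ?
		IRQ_WAKE_THREAD : IRQ_HANDLED;
}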

IIRC, VFIO goes the slower path via a kernel thread unconditionally,
thus cannot trigger this. Only legacy device assignment is affected.

Jan

>  [] __apic_accept_irq+0x2b2/0x3a0
>  [] kvm_apic_set_irq+0x27/0x30
>  [] kvm_irq_delivery_to_apic_fast+0x1ae/0x3d0
>  [] ? kvm_irq_delivery_to_apic_fast+0x60/0x3d0
>  [] kvm_set_irq_inatomic+0x12b/0x4a0
>  [] ? kvm_set_irq_inatomic+0x2a/0x4a0
>  [] kvm_assigned_dev_msi+0x23/0x40
>  [] handle_irq_event_percpu+0x88/0x3d0
>  [] ? cpu_startup_entry+0x19c/0x430
>  [] handle_irq_event+0x48/0x70
>  [] handle_edge_irq+0x77/0x120
>  [] handle_irq+0x1e/0x30
>  [] do_IRQ+0x5a/0xd0
>  [] common_interrupt+0x6f/0x6f
>[] ? retint_restore_args+0xe/0xe
>  [] ? cpu_startup_entry+0x19c/0x430
>  [] ? cpu_startup_entry+0x158/0x430
>  [] rest_init+0x137/0x140
>  [] ? rest_init+0x5/0x140
>  [] start_kernel+0x3af/0x3bc
>  [] ? repair_env_string+0x5e/0x5e
>  [] x86_64_start_reservations+0x2a/0x2c
>  [] x86_64_start_kernel+0xcc/0xcf
> 
> =
> [ INFO: inconsistent lock state ]
> 3.10.10-rt7 #2 Not tainted
> -
> inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
> swapper/0/0 [HC1[1]:SC0[0]:HE0:SE1] takes:
>  (&(&(&q->lock)->lock)->wait_lock){?.+.-.}, at: [] 
> rt_spin_lock_slowlock+0x48/0x370
> {HARDIRQ-ON-W} state was registered at:
>   [] __lock_acquire+0x69d/0x20e0
>   [] lock_acquire+0x9e/0x1f0
>   [] _raw_spin_lock+0x40/0x80
>   [] rt_spin_lock_slowlock+0x48/0x370
>   [] rt_spin_lock+0x2c/0x60
>   [] __wake_up+0x36/0x70
>   [] run_timer_softirq+0x1be/0x390
>   [] do_current_softirqs+0x239/0x5b0
>   [] run_ksoftirqd+0x38/0x60
>   [] smpboot

Re: [PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-20 Thread Jan Kiszka
On 2013-09-20 20:18, Paul Gortmaker wrote:
> On 13-09-20 02:04 PM, Jan Kiszka wrote:
>> On 2013-09-20 19:51, Paul Gortmaker wrote:
>>> [Re: [PATCH 0/3] KVM: Make kvm_lock non-raw] On 16/09/2013 (Mon 18:12) Paul 
>>> Gortmaker wrote:
>>>
>>>> On 13-09-16 10:06 AM, Paolo Bonzini wrote:
>>>>> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
>>>>> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
>>>>> patch that shrunk the kvm_lock critical section so that the mmu_lock
>>>>> critical section does not nest with it, but in the end there is no reason
>>>>> for the vm_list to be protected by a raw spinlock.  Only manipulations
>>>>> of kvm_usage_count and the consequent hardware_enable/disable operations
>>>>> are not preemptable.
>>>>>
>>>>> This small series thus splits the kvm_lock in the "raw" part and the
>>>>> "non-raw" part.
>>>>>
>>>>> Paul, could you please provide your Tested-by?
>>>>
>>>> Sure, I'll go back and see if I can find what triggered it in the
>>>> original report, and give the patches a spin on 3.4.x-rt (and probably
>>>> 3.10.x-rt, since that is where rt-current is presently).
>>>
>>> Seems fine on 3.4-rt.  On 3.10.10-rt7 it looks like there are other
>>> issues, probably not explicitly related to this patchset (see below).
>>>
>>> Paul.
>>> --
>>>
>>> e1000e :00:19.0 eth1: removed PHC
>>> assign device 0:0:19.0
>>> pci :00:19.0: irq 43 for MSI/MSI-X
>>> pci :00:19.0: irq 43 for MSI/MSI-X
>>> pci :00:19.0: irq 43 for MSI/MSI-X
>>> pci :00:19.0: irq 43 for MSI/MSI-X
>>> BUG: sleeping function called from invalid context at 
>>> /home/paul/git/linux-rt/kernel/rtmutex.c:659
>>> in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0
>>> 2 locks held by swapper/0/0:
>>>  #0:  (rcu_read_lock){.+.+.+}, at: [] 
>>> kvm_set_irq_inatomic+0x2a/0x4a0
>>>  #1:  (rcu_read_lock){.+.+.+}, at: [] 
>>> kvm_irq_delivery_to_apic_fast+0x60/0x3d0
>>> irq event stamp: 6121390
>>> hardirqs last  enabled at (6121389): [] 
>>> restore_args+0x0/0x30
>>> hardirqs last disabled at (6121390): [] 
>>> common_interrupt+0x6a/0x6f
>>> softirqs last  enabled at (0): [<  (null)>]   (null)
>>> softirqs last disabled at (0): [<  (null)>]   (null)
>>> Preemption disabled at:[] cpu_startup_entry+0x1ba/0x430
>>>
>>> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
>>> Hardware name: Dell Inc. OptiPlex 990/0VNP2H, BIOS A17 03/14/2013
>>>  8201c440 880223603cf0 819f177d 880223603d18
>>>  810c90d3 880214a50110 0001 0001
>>>  880223603d38 819f89a4 880214a50110 880214a50110
>>> Call Trace:
>>>[] dump_stack+0x19/0x1b
>>>  [] __might_sleep+0x153/0x250
>>>  [] rt_spin_lock+0x24/0x60
>>>  [] __wake_up+0x36/0x70
>>>  [] kvm_vcpu_kick+0x3b/0xd0
>>
>> -rt lacks an atomic waitqueue for triggering VCPU wakeups on MSIs from
>> assigned devices directly from the host IRQ handler. We need to disable
>> this fast-path in -rt or introduce such an abstraction (I did this once
>> over 2.6.33-rt).
> 
> Ah, right -- the simple wait queue support (currently -rt specific)
> would have to be used here.  It is on the todo list to get that moved
> from -rt into mainline.

Oh, it's there in -rt already - perfect! If there is a good reason for
upstream, kvm can switch of course.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux


Re: [PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-20 Thread Paul Gortmaker
[Re: [PATCH 0/3] KVM: Make kvm_lock non-raw] On 16/09/2013 (Mon 18:12) Paul 
Gortmaker wrote:

> On 13-09-16 10:06 AM, Paolo Bonzini wrote:
> > Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
> > mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
> > patch that shrunk the kvm_lock critical section so that the mmu_lock
> > critical section does not nest with it, but in the end there is no reason
> > for the vm_list to be protected by a raw spinlock.  Only manipulations
> > of kvm_usage_count and the consequent hardware_enable/disable operations
> > are not preemptable.
> > 
> > This small series thus splits the kvm_lock in the "raw" part and the
> > "non-raw" part.
> > 
> > Paul, could you please provide your Tested-by?
> 
> Sure, I'll go back and see if I can find what triggered it in the
> original report, and give the patches a spin on 3.4.x-rt (and probably
> 3.10.x-rt, since that is where rt-current is presently).

Seems fine on 3.4-rt.  On 3.10.10-rt7 it looks like there are other
issues, probably not explicitly related to this patchset (see below).

Paul.
--

e1000e :00:19.0 eth1: removed PHC
assign device 0:0:19.0
pci :00:19.0: irq 43 for MSI/MSI-X
pci :00:19.0: irq 43 for MSI/MSI-X
pci :00:19.0: irq 43 for MSI/MSI-X
pci :00:19.0: irq 43 for MSI/MSI-X
BUG: sleeping function called from invalid context at 
/home/paul/git/linux-rt/kernel/rtmutex.c:659
in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0
2 locks held by swapper/0/0:
 #0:  (rcu_read_lock){.+.+.+}, at: [] 
kvm_set_irq_inatomic+0x2a/0x4a0
 #1:  (rcu_read_lock){.+.+.+}, at: [] 
kvm_irq_delivery_to_apic_fast+0x60/0x3d0
irq event stamp: 6121390
hardirqs last  enabled at (6121389): [] restore_args+0x0/0x30
hardirqs last disabled at (6121390): [] 
common_interrupt+0x6a/0x6f
softirqs last  enabled at (0): [<  (null)>]   (null)
softirqs last disabled at (0): [<  (null)>]   (null)
Preemption disabled at:[] cpu_startup_entry+0x1ba/0x430

CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
Hardware name: Dell Inc. OptiPlex 990/0VNP2H, BIOS A17 03/14/2013
 8201c440 880223603cf0 819f177d 880223603d18
 810c90d3 880214a50110 0001 0001
 880223603d38 819f89a4 880214a50110 880214a50110
Call Trace:
   [] dump_stack+0x19/0x1b
 [] __might_sleep+0x153/0x250
 [] rt_spin_lock+0x24/0x60
 [] __wake_up+0x36/0x70
 [] kvm_vcpu_kick+0x3b/0xd0
 [] __apic_accept_irq+0x2b2/0x3a0
 [] kvm_apic_set_irq+0x27/0x30
 [] kvm_irq_delivery_to_apic_fast+0x1ae/0x3d0
 [] ? kvm_irq_delivery_to_apic_fast+0x60/0x3d0
 [] kvm_set_irq_inatomic+0x12b/0x4a0
 [] ? kvm_set_irq_inatomic+0x2a/0x4a0
 [] kvm_assigned_dev_msi+0x23/0x40
 [] handle_irq_event_percpu+0x88/0x3d0
 [] ? cpu_startup_entry+0x19c/0x430
 [] handle_irq_event+0x48/0x70
 [] handle_edge_irq+0x77/0x120
 [] handle_irq+0x1e/0x30
 [] do_IRQ+0x5a/0xd0
 [] common_interrupt+0x6f/0x6f
   [] ? retint_restore_args+0xe/0xe
 [] ? cpu_startup_entry+0x19c/0x430
 [] ? cpu_startup_entry+0x158/0x430
 [] rest_init+0x137/0x140
 [] ? rest_init+0x5/0x140
 [] start_kernel+0x3af/0x3bc
 [] ? repair_env_string+0x5e/0x5e
 [] x86_64_start_reservations+0x2a/0x2c
 [] x86_64_start_kernel+0xcc/0xcf

=
[ INFO: inconsistent lock state ]
3.10.10-rt7 #2 Not tainted
-
inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
swapper/0/0 [HC1[1]:SC0[0]:HE0:SE1] takes:
 (&(&(&q->lock)->lock)->wait_lock){?.+.-.}, at: [] 
rt_spin_lock_slowlock+0x48/0x370
{HARDIRQ-ON-W} state was registered at:
  [] __lock_acquire+0x69d/0x20e0
  [] lock_acquire+0x9e/0x1f0
  [] _raw_spin_lock+0x40/0x80
  [] rt_spin_lock_slowlock+0x48/0x370
  [] rt_spin_lock+0x2c/0x60
  [] __wake_up+0x36/0x70
  [] run_timer_softirq+0x1be/0x390
  [] do_current_softirqs+0x239/0x5b0
  [] run_ksoftirqd+0x38/0x60
  [] smpboot_thread_fn+0x22c/0x340
  [] kthread+0xcd/0xe0
  [] ret_from_fork+0x7c/0xb0
irq event stamp: 6121390
hardirqs last  enabled at (6121389): [] restore_args+0x0/0x30
hardirqs last disabled at (6121390): [] 
common_interrupt+0x6a/0x6f
softirqs last  enabled at (0): [<  (null)>]   (null)
softirqs last disabled at (0): [<  (null)>]   (null)

other info that might help us debug this:
 Possible unsafe locking scenario:

   CPU0
   
  lock(&(&(&q->lock)->lock)->wait_lock);
  
lock(&(&(&q->lock)->lock)->wait_lock);

 *** DEADLOCK ***

2 locks held by swapper/0/0:
 #0:  (rcu_read_lock){.+.+.+}, at: [] 
kvm_set_irq_inatomic+0x2a/0x4a0
 #1:  (rcu_read_lock){.+.+.+}, at: [] 
kvm_irq_delivery_to_apic_fast+0x60/0x3d0

stack backtrace:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.10-rt7 #2
Hardware name: Dell

Re: [PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-16 Thread Paul Gortmaker
On 13-09-16 10:06 AM, Paolo Bonzini wrote:
> Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
> mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
> patch that shrunk the kvm_lock critical section so that the mmu_lock
> critical section does not nest with it, but in the end there is no reason
> for the vm_list to be protected by a raw spinlock.  Only manipulations
> of kvm_usage_count and the consequent hardware_enable/disable operations
> are not preemptable.
> 
> This small series thus splits the kvm_lock in the "raw" part and the
> "non-raw" part.
> 
> Paul, could you please provide your Tested-by?

Sure, I'll go back and see if I can find what triggered it in the
original report, and give the patches a spin on 3.4.x-rt (and probably
3.10.x-rt, since that is where rt-current is presently).

Paul.
--

> 
> Thanks,
> 
> Paolo
> 
> Paolo Bonzini (3):
>   KVM: cleanup (physical) CPU hotplug
>   KVM: protect kvm_usage_count with its own spinlock
>   KVM: Convert kvm_lock back to non-raw spinlock
> 
>  Documentation/virtual/kvm/locking.txt |  8 --
>  arch/x86/kvm/mmu.c|  4 +--
>  arch/x86/kvm/x86.c|  8 +++---
>  include/linux/kvm_host.h  |  2 +-
>  virt/kvm/kvm_main.c   | 51 ++-
>  5 files changed, 40 insertions(+), 33 deletions(-)
> 


[PATCH 0/3] KVM: Make kvm_lock non-raw

2013-09-16 Thread Paolo Bonzini
Paul Gortmaker reported a BUG on preempt-rt kernels, due to taking the
mmu_lock within the raw kvm_lock in mmu_shrink_scan.  He provided a
patch that shrunk the kvm_lock critical section so that the mmu_lock
critical section does not nest with it, but in the end there is no reason
for the vm_list to be protected by a raw spinlock.  Only manipulations
of kvm_usage_count and the consequent hardware_enable/disable operations
are not preemptable.

This small series thus splits the kvm_lock in the "raw" part and the
"non-raw" part.

Paul, could you please provide your Tested-by?

Thanks,

Paolo

Paolo Bonzini (3):
  KVM: cleanup (physical) CPU hotplug
  KVM: protect kvm_usage_count with its own spinlock
  KVM: Convert kvm_lock back to non-raw spinlock

 Documentation/virtual/kvm/locking.txt |  8 --
 arch/x86/kvm/mmu.c|  4 +--
 arch/x86/kvm/x86.c|  8 +++---
 include/linux/kvm_host.h  |  2 +-
 virt/kvm/kvm_main.c   | 51 ++-
 5 files changed, 40 insertions(+), 33 deletions(-)

-- 
1.8.3.1
