On 08/13/2013 01:02 PM, Raghavendra K T wrote:
> * Ingo Molnar <mi...@kernel.org> [2013-08-13 18:55:52]:
>
>> Would be nice to have a delta fix patch against tip:x86/spinlocks, which 
>> I'll then backmerge into that series via rebasing it.
>>
> There was a namespace collision of PER_CPU lock_waiting variable when
> we have both Xen and KVM enabled. 
>
> Perhaps this week wasn't for me. I had run randconfig 100 times in a loop
> for the fix sent earlier :(. 
>
> Ingo, the delta patch below should fix it. IIRC, you will be folding this
> back into patch 14/14 itself; if not, please let me know.
> I have already run allnoconfig, allyesconfig, and randconfig with the patch
> below, but will test again. It should apply on top of tip:x86/spinlocks.
>
> ---8<---
> From: Raghavendra K T <raghavendra...@linux.vnet.ibm.com>
>
> Fix namespace collision for lock_waiting
>
> Signed-off-by: Raghavendra K T <raghavendra...@linux.vnet.ibm.com>
> ---
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index d442471..b8ef630 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -673,7 +673,7 @@ struct kvm_lock_waiting {
>  static cpumask_t waiting_cpus;
>  
>  /* Track spinlock on which a cpu is waiting */
> -static DEFINE_PER_CPU(struct kvm_lock_waiting, lock_waiting);
> +static DEFINE_PER_CPU(struct kvm_lock_waiting, klock_waiting);

Has static stopped meaning static?

    J

>  
>  static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>  {
> @@ -685,7 +685,7 @@ static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>       if (in_nmi())
>               return;
>  
> -     w = &__get_cpu_var(lock_waiting);
> +     w = &__get_cpu_var(klock_waiting);
>       cpu = smp_processor_id();
>       start = spin_time_start();
>  
> @@ -756,7 +756,7 @@ static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
>  
>       add_stats(RELEASED_SLOW, 1);
>       for_each_cpu(cpu, &waiting_cpus) {
> -             const struct kvm_lock_waiting *w = &per_cpu(lock_waiting, cpu);
> +             const struct kvm_lock_waiting *w = &per_cpu(klock_waiting, cpu);
>               if (ACCESS_ONCE(w->lock) == lock &&
>                   ACCESS_ONCE(w->want) == ticket) {
>                       add_stats(RELEASED_SLOW_KICKED, 1);
>
>

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization