Re: [PATCH RFC 2/2] KVM: thread creating a vcpu is the owner of that vcpu

2014-12-03 Thread Paolo Bonzini


On 25/11/2014 17:04, David Hildenbrand wrote:
> @@ -124,15 +124,6 @@ int vcpu_load(struct kvm_vcpu *vcpu)
>  
>   if (mutex_lock_killable(&vcpu->mutex))
>   return -EINTR;
> - if (unlikely(vcpu->pid != current->pids[PIDTYPE_PID].pid)) {
> - /* The thread running this VCPU changed. */
> - struct pid *oldpid = vcpu->pid;
> - struct pid *newpid = get_task_pid(current, PIDTYPE_PID);
> - rcu_assign_pointer(vcpu->pid, newpid);
> - if (oldpid)
> - synchronize_rcu();
> - put_pid(oldpid);
> - }

I think it would make more sense to do this only for the KVM_RUN ioctl.

Paolo
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH RFC 2/2] KVM: thread creating a vcpu is the owner of that vcpu

2014-11-25 Thread Christian Borntraeger
Am 25.11.2014 um 17:04 schrieb David Hildenbrand:
> Currently, we allow changing the PID of a VCPU. This PID is used to
> identify the thread to yield to if we want to yield to this specific
> VCPU.
> 
> In practice (e.g. QEMU), the thread creating and executing the VCPU remains
> always the same. Temporarily exchanging the PID (e.g. because an ioctl is
> triggered from a wrong thread) doesn't really make sense.
> 
> The PID is exchanged and a synchronize_rcu() is called. When the executing
> thread tries to run the VCPU again, another synchronize_rcu() happens.
> 
> If a yield to that VCPU is triggered while the PID of the wrong thread is active,
> the wrong thread might receive a yield, but this will most likely not
> help the executing thread at all. The executing thread won't have a higher
> priority after the wrong thread has finished with the ioctl. The wrong thread
> will even receive yields afterwards that were targeted to the executing vcpu,
> until the executing VCPU has replaced the PID on the next ioctl - doesn't feel
> correct to me.
> 
> This patch makes the creating thread the owning thread, and therefore the only
> valid yield candidate (especially because VCPU ioctls are - in theory - only
> valid when triggered from the owning thread - old user space versions may not
> stick to this rule). This should also speed up the initial start of all VCPUs,
> when the PID is assigned for the first time.
> 
> Should be backwards compatible - if there is any old user space version out
> there that doesn't stick to the creating == executing thread rule, yields will
> not work as intended.
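> 
> The ownership rule above can be sketched as a small userspace model (the
> struct and int pid are illustrative, not the kernel types): the creating
> thread's pid is recorded once at creation and is the only yield target,
> no matter which thread later issues ioctls.
> 
> ```c
> #include <assert.h>
> #include <stdio.h>
> 
> struct vcpu {
> 	int owner_pid;	/* set once at creation, never exchanged */
> };
> 
> static void vcpu_init(struct vcpu *vcpu, int creator_pid)
> {
> 	/* was: vcpu->pid = NULL, assigned later in vcpu_load() */
> 	vcpu->owner_pid = creator_pid;
> }
> 
> /* The pid a directed yield (kvm_vcpu_yield_to) would target. */
> static int yield_target(const struct vcpu *vcpu)
> {
> 	return vcpu->owner_pid;
> }
> 
> int main(void)
> {
> 	struct vcpu vcpu;
> 
> 	vcpu_init(&vcpu, 100);	/* thread 100 creates the vcpu */
> 
> 	/* Even while thread 200 runs some other ioctl, the yield
> 	 * target stays the creator: no pid swap, no synchronize_rcu()
> 	 * round trips on either side. */
> 	assert(yield_target(&vcpu) == 100);
> 
> 	printf("yield target: %d\n", yield_target(&vcpu));
> 	return 0;
> }
> ```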
> 
> Signed-off-by: David Hildenbrand 

This change actually makes perfect sense to me:
- The runtime change logic was problematic (e.g. see commit 7103f60de8 "KVM:
avoid unnecessary synchronize_rcu" and the qemu fixes for s390 to bring all vCPU
ioctls into the right thread).
- It makes vcpu_load cheaper.
- It emphasizes what api.txt already states: "Only run vcpu ioctls from the same
thread that was used to create the vcpu."


Acked-by: Christian Borntraeger 

> ---
>  include/linux/kvm_host.h |  1 +
>  virt/kvm/kvm_main.c  | 18 ++----------------
>  2 files changed, 3 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index aa56894..f1fe655 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -245,6 +245,7 @@ struct kvm_vcpu {
>   int fpu_active;
>   int guest_fpu_loaded, guest_xcr0_loaded;
>   wait_queue_head_t wq;
> + /* the pid owning this vcpu - target for vcpu yields */
>   struct pid *pid;
>   int sigset_active;
>   sigset_t sigset;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 184f52e..4ba7810 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -124,15 +124,6 @@ int vcpu_load(struct kvm_vcpu *vcpu)
> 
>   if (mutex_lock_killable(&vcpu->mutex))
>   return -EINTR;
> - if (unlikely(vcpu->pid != current->pids[PIDTYPE_PID].pid)) {
> - /* The thread running this VCPU changed. */
> - struct pid *oldpid = vcpu->pid;
> - struct pid *newpid = get_task_pid(current, PIDTYPE_PID);
> - rcu_assign_pointer(vcpu->pid, newpid);
> - if (oldpid)
> - synchronize_rcu();
> - put_pid(oldpid);
> - }
>   cpu = get_cpu();
>   preempt_notifier_register(&vcpu->preempt_notifier);
>   kvm_arch_vcpu_load(vcpu, cpu);
> @@ -220,7 +211,7 @@ int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
>   vcpu->cpu = -1;
>   vcpu->kvm = kvm;
>   vcpu->vcpu_id = id;
> - vcpu->pid = NULL;
> + vcpu->pid = get_task_pid(current, PIDTYPE_PID);
>   init_waitqueue_head(&vcpu->wq);
>   kvm_async_pf_vcpu_init(vcpu);
> 
> @@ -1771,15 +1762,10 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_kick);
> 
>  int kvm_vcpu_yield_to(struct kvm_vcpu *target)
>  {
> - struct pid *pid;
>   struct task_struct *task = NULL;
>   int ret = 0;
> 
> - rcu_read_lock();
> - pid = rcu_dereference(target->pid);
> - if (pid)
> - task = get_pid_task(pid, PIDTYPE_PID);
> - rcu_read_unlock();
> + task = get_pid_task(target->pid, PIDTYPE_PID);
>   if (!task)
>   return ret;
>   ret = yield_to(task, 1);
> 
