On Wed, Jun 10, 2020 at 11:01:39AM +0200, Vitaly Kuznetsov wrote:
> Paolo Bonzini <pbonz...@redhat.com> writes:
> 
> > On 09/06/20 21:10, Vivek Goyal wrote:
> >> Hi Vitaly,
> >> 
> >> Have a question about page ready events. 
> >> 
> >> Now we deliver PAGE_NOT_PRESENT page faults only if the guest is not in
> >> kernel mode. So say the kernel tried to access a page and we halted the
> >> vCPU. When the page is available, we will inject the page_ready
> >> interrupt. At that time we don't seem to check whether page_not_present
> >> was injected or not.
> >> 
> >> IOW, we seem to deliver page_ready irrespective of whether
> >> PAGE_NOT_PRESENT was delivered or not. That means we will be sending
> >> page-present tokens to the guest. The guest will not have any state
> >> associated with such a token, will think that page_not_present has not
> >> been delivered yet, and will allocate an element in its hash table for
> >> a future page_not_present event. That leads to a memory leak, token
> >> conflicts, etc.
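
(For reference, the guest-side handling I'm worried about looks roughly
like the sketch below. This is a condensed, illustrative version of the
token hash table logic; the names and types are simplified stand-ins, not
the actual code in arch/x86/kernel/kvm.c.)

#include <linux/hashtable.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct apf_sleep_node {
	struct hlist_node link;
	u32 token;
	bool ready;		/* 'page ready' arrived before 'not present' */
};

static DEFINE_HASHTABLE(apf_sleepers, 8);
static DEFINE_SPINLOCK(apf_lock);

/* Guest receives a 'page ready' event for the given token. */
static void apf_task_wake(u32 token)
{
	struct apf_sleep_node *n;

	spin_lock(&apf_lock);
	hash_for_each_possible(apf_sleepers, n, link, token) {
		if (n->token == token) {
			/*
			 * Normal case: a sleeper exists for this token; it
			 * would be woken here and its node freed.
			 */
			hash_del(&n->link);
			kfree(n);
			goto out;
		}
	}

	/*
	 * No sleeper found: allocate a node to remember the wakeup so that a
	 * future 'not present' with the same token will not block.  If that
	 * 'not present' was in fact never injected, this node is never freed
	 * (memory leak) and the token can later collide with a fresh
	 * allocation (token conflict).
	 */
	n = kzalloc(sizeof(*n), GFP_ATOMIC);
	if (n) {
		n->token = token;
		n->ready = true;
		hash_add(apf_sleepers, &n->link, token);
	}
out:
	spin_unlock(&apf_lock);
}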
> >
> > Yes, and this is https://bugzilla.kernel.org/show_bug.cgi?id=208081
> > which I was looking at right today.
> >
> 
> The issue isn't related to the interrupt-based APF mechanism, right?
> 'Page ready' events are always injected (sooner or later). I'll take a
> look.
> 
> >> While setting up async pf, should we keep track of whether the
> >> associated page_not_present was delivered to the guest, and deliver
> >> page_ready accordingly?
> >
> > Yes, I think so.
> >
> 
> Something like this? (not even compile tested yet):
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 8e8fea13b6c7..68178d29d35c 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1661,7 +1661,7 @@ void kvm_make_scan_ioapic_request(struct kvm *kvm);
>  void kvm_make_scan_ioapic_request_mask(struct kvm *kvm,
>                                      unsigned long *vcpu_bitmap);
>  
> -void kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
> +bool kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
>                                    struct kvm_async_pf *work);
>  void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
>                                struct kvm_async_pf *work);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index c26dd1363151..e1e840df6b69 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10515,7 +10515,7 @@ bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu)
>       return kvm_arch_interrupt_allowed(vcpu);
>  }
>  
> -void kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
> +bool kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
>                                    struct kvm_async_pf *work)
>  {
>       struct x86_exception fault;
> @@ -10532,17 +10532,19 @@ void kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
>               fault.address = work->arch.token;
>               fault.async_page_fault = true;
>               kvm_inject_page_fault(vcpu, &fault);
> -     } else {
> -             /*
> -              * It is not possible to deliver a paravirtualized asynchronous
> -              * page fault, but putting the guest in an artificial halt state
> -              * can be beneficial nevertheless: if an interrupt arrives, we
> -              * can deliver it timely and perhaps the guest will schedule
> -              * another process.  When the instruction that triggered a page
> -              * fault is retried, hopefully the page will be ready in the host.
> -              */
> -             kvm_make_request(KVM_REQ_APF_HALT, vcpu);
> +             return true;
>       }
> +
> +     /*
> +      * It is not possible to deliver a paravirtualized asynchronous
> +      * page fault, but putting the guest in an artificial halt state
> +      * can be beneficial nevertheless: if an interrupt arrives, we
> +      * can deliver it timely and perhaps the guest will schedule
> +      * another process.  When the instruction that triggered a page
> +      * fault is retried, hopefully the page will be ready in the host.
> +      */
> +     kvm_make_request(KVM_REQ_APF_HALT, vcpu);
> +     return false;
>  }
>  
>  void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
> @@ -10559,7 +10561,8 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
>               kvm_del_async_pf_gfn(vcpu, work->arch.gfn);
>       trace_kvm_async_pf_ready(work->arch.token, work->cr2_or_gpa);
>  
> -     if (kvm_pv_async_pf_enabled(vcpu) &&
> +     if (work->notpresent_injected &&
> +         kvm_pv_async_pf_enabled(vcpu) &&
>           !apf_put_user_ready(vcpu, work->arch.token)) {
>               vcpu->arch.apf.pageready_pending = true;
>               kvm_apic_set_irq(vcpu, &irq, NULL);
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 802b9e2306f0..2456dc5338f8 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -206,6 +206,7 @@ struct kvm_async_pf {
>       unsigned long addr;
>       struct kvm_arch_async_pf arch;
>       bool   wakeup_all;
> +     bool notpresent_injected;
>  };
>  
>  void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu);
> diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
> index f1e07fae84e9..de28413abefd 100644
> --- a/virt/kvm/async_pf.c
> +++ b/virt/kvm/async_pf.c
> @@ -189,12 +189,14 @@ int kvm_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>               goto retry_sync;
>  
>       INIT_WORK(&work->work, async_pf_execute);
> -     if (!schedule_work(&work->work))
> -             goto retry_sync;
>  
>       list_add_tail(&work->queue, &vcpu->async_pf.queue);
>       vcpu->async_pf.queued++;
> -     kvm_arch_async_page_not_present(vcpu, work);
> +     work->notpresent_injected = kvm_arch_async_page_not_present(vcpu, work);
> +
> +     /* schedule_work() only fails for already queued works */
> +     schedule_work(&work->work);
> +
>       return 1;
>  retry_sync:

This label and the associated logic can go away, since we no longer expect
an error from schedule_work().
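
i.e. roughly (untested; only showing the tail of kvm_setup_async_pf(),
based on the hunk quoted above):

	INIT_WORK(&work->work, async_pf_execute);

	list_add_tail(&work->queue, &vcpu->async_pf.queue);
	vcpu->async_pf.queued++;
	work->notpresent_injected = kvm_arch_async_page_not_present(vcpu, work);

	/*
	 * schedule_work() only fails for an already queued work item, which
	 * a freshly allocated 'work' can never be, so no error path is
	 * needed here.
	 */
	schedule_work(&work->work);

	return 1;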

>       kvm_put_kvm(work->vcpu->kvm);
> @@ -216,6 +218,7 @@ int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu)
>               return -ENOMEM;
>  
>       work->wakeup_all = true;
> +     work->notpresent_injected = true;

This probably is not needed. We already have work->wakeup_all set, and a
wakeup-all event always has to be injected; there is no associated
page_not_present event. IMHO, it is cleaner not to set notpresent_injected
for wakeup-all events and instead check work->wakeup_all and always inject
those.
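
i.e. something like (an untested sketch of the alternative, adjusting the
hunk quoted above):

	if ((work->wakeup_all || work->notpresent_injected) &&
	    kvm_pv_async_pf_enabled(vcpu) &&
	    !apf_put_user_ready(vcpu, work->arch.token)) {
		vcpu->arch.apf.pageready_pending = true;
		kvm_apic_set_irq(vcpu, &irq, NULL);
	}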

Once you have the final patch, I will test it. I have now configured an
nvdimm device backed by a file on the host. If I drop the page cache on the
host, any page accessed in the guest triggers an async PF on the host, so I
can test this easily.

Thanks
Vivek
