On Fri, Aug 19, 2022, Kirill A. Shutemov wrote:
> On Fri, Jun 17, 2022 at 09:30:53PM +, Sean Christopherson wrote:
> > > @@ -4088,7 +4144,12 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> > > 		read_unlock(&vcpu->kvm->mmu_lock);
> > > 	else
> > > 		write_unlock(&vcpu->kvm->mmu_lock);
> > > -
On Wed, Jul 20, 2022 at 04:08:10PM -0700, Vishal Annapurve wrote:
> > Hmm, so a new slot->arch.page_attr array shouldn't be necessary, KVM can
> > instead update slot->arch.lpage_info on shared<->private conversions.
> > Detecting whether a given range is partially mapped could get nasty if
> > KVM defers tracking to the backing store, but if KVM
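The suggestion above is to fold private/shared tracking into the existing slot->arch.lpage_info bookkeeping rather than add a new page_attr array. A minimal sketch of that idea, with hypothetical names (the real lpage_info is per-level, per-memslot state):

```c
/* Hypothetical per-2M-block counter modeled on lpage_info: a nonzero
 * disallow_lpage forces 4K mappings for that block. */
struct lpage_info_sketch {
	int disallow_lpage;
};

/* Bump on a shared<->private conversion that leaves the block
 * partially converted ("mixed"). */
static void account_mixed(struct lpage_info_sketch *info)
{
	info->disallow_lpage++;
}

/* Drop once a later conversion makes the block uniform again. */
static void unaccount_mixed(struct lpage_info_sketch *info)
{
	info->disallow_lpage--;
}

static int hugepage_allowed(const struct lpage_info_sketch *info)
{
	return info->disallow_lpage == 0;
}
```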
On 7/8/2022 4:08 AM, Sean Christopherson wrote:
> On Fri, Jul 01, 2022, Xiaoyao Li wrote:
> > On 7/1/2022 6:21 AM, Michael Roth wrote:
> > > On Thu, Jun 30, 2022 at 12:14:13PM -0700, Vishal Annapurve wrote:
> > > > With transparent_hugepages=always setting I see issues with the
> > > > current implementation.
> > > >
> > > > Scenario:
> > > > 1) Guest accesses a gfn range 0x800-0xa00 as private
> > > > 2) Guest calls mapgpa to convert the range 0x84d-0x86e as shared
> > > ...
> > > Looks like with transparent huge
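Why the scenario above breaks hugepages: gfn 0x800-0xa00 is exactly one 512-page (2M) aligned block, and converting only the interior subrange 0x84d-0x86e to shared leaves that block with mixed private/shared attributes, so it can no longer be backed by a single 2M mapping. A sketch of the alignment arithmetic, with a hypothetical helper name:

```c
#include <stdbool.h>

/* 4KiB pages per 2MiB hugepage. */
#define PAGES_PER_2M 512ULL

/* Hypothetical helper: does converting gfns [start, end) leave the
 * 2M-aligned block containing 'start' only partially converted? */
static bool conversion_leaves_block_mixed(unsigned long long start,
					  unsigned long long end)
{
	unsigned long long block_start = start & ~(PAGES_PER_2M - 1);
	unsigned long long block_end = block_start + PAGES_PER_2M;

	/* Mixed unless the conversion covers the whole aligned block. */
	return start != block_start || end < block_end;
}
```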
> > > /*
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index afe18d70ece7..e18460e0d743 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -2899,6 +2899,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
> > > if (max_level
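The kvm_mmu_max_mapping_level() hunk above is truncated, but the change under discussion caps the mapping level when the range a large page would cover has mixed private/shared attributes. A hedged standalone sketch (the predicate is an illustrative stand-in, not KVM's real attribute lookup):

```c
#include <stdbool.h>

enum pg_level { PG_LEVEL_4K = 1, PG_LEVEL_2M = 2, PG_LEVEL_1G = 3 };

/* Hypothetical predicate: true if the range a mapping at 'level'
 * would cover has mixed private/shared attributes. For illustration,
 * every level above 4K is treated as mixed. */
static bool range_has_mixed_attrs(unsigned long long gfn, int level)
{
	(void)gfn;
	return level > PG_LEVEL_4K;
}

/* Walk down from the requested level until the covered range is
 * uniformly private or uniformly shared. */
static int cap_mapping_level(unsigned long long gfn, int max_level)
{
	while (max_level > PG_LEVEL_4K &&
	       range_has_mixed_attrs(gfn, max_level))
		max_level--;
	return max_level;
}
```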
On Fri, Jun 24, 2022 at 09:28:23AM +0530, Nikunj A. Dadhania wrote:
> On 5/19/2022 9:07 PM, Chao Peng wrote:
> > A page fault can carry the information of whether the access is private
> > or not for a KVM_MEM_PRIVATE memslot; this can be filled by architecture
> > code (like TDX code). To handle a page fault for such an access, KVM maps
> > the page only when this private property
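Per the patch description above, the fault carries a private flag and KVM maps the page only when that flag matches the gfn's current attribute; a mismatch means guest and host disagree, and userspace must convert the page first. A minimal sketch with hypothetical names:

```c
#include <stdbool.h>

/* Hypothetical fault descriptor: 'is_private' is filled by arch code
 * (e.g. TDX) from the hardware fault information. */
struct page_fault_sketch {
	unsigned long long gfn;
	bool is_private;
};

/* Stand-in for KVM's real lookup of the gfn's current attribute. */
static bool gfn_is_private(unsigned long long gfn)
{
	return gfn < 0x1000; /* illustrative policy only */
}

/* 0: map the page; 1: exit to userspace so it can convert the page,
 * because the access type disagrees with the gfn's current attribute. */
static int handle_fault(const struct page_fault_sketch *f)
{
	return f->is_private == gfn_is_private(f->gfn) ? 0 : 1;
}
```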
On Fri, Jun 17, 2022 at 09:30:53PM +, Sean Christopherson wrote:
> On Thu, May 19, 2022, Chao Peng wrote:
> > @@ -4028,8 +4081,11 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
> > 	if (!sp && kvm_test_request(KVM_REQ_MMU_FREE_OBSOLETE_ROOTS, vcpu))
> > 		return true;
> >
> > -	return fault->slot &&
> > -
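The truncated hunk above modifies is_page_fault_stale(), which decides whether a fault resolved against since-invalidated MMU state must be retried instead of installing a stale mapping. A minimal sketch of the sequence-counter idea, with hypothetical names:

```c
#include <stdbool.h>

/* Hypothetical names: a fault snapshots the MMU invalidation sequence
 * when it starts; if the sequence has advanced by the time the fault
 * is ready to install a mapping, the result is stale. */
struct mmu_state {
	unsigned long invalidate_seq;
};

struct fault_state {
	unsigned long seq_at_start;
};

/* True if an invalidation ran while the fault was in flight, so the
 * fault must be retried rather than completed. */
static bool fault_is_stale(const struct mmu_state *mmu,
			   const struct fault_state *fault)
{
	return mmu->invalidate_seq != fault->seq_at_start;
}
```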