Heads up, you may get this multiple times; our mail servers got "upgraded"
recently and are giving me trouble...
On Mon, Oct 12, 2020 at 03:59:35PM -0700, Ben Gardon wrote:
> On Tue, Sep 29, 2020 at 11:06 PM Sean Christopherson wrote:
> > > @@ -3691,7 +3690,13 @@ static int mmu_alloc_direct_ro
On Tue, Sep 29, 2020 at 11:06 PM Sean Christopherson wrote:
>
> On Fri, Sep 25, 2020 at 02:22:44PM -0700, Ben Gardon wrote:
> static u64 __read_mostly shadow_nx_mask;
> > @@ -3597,10 +3592,14 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
> > 	if (!VALID_PAGE(*root
On Wed, Sep 30, 2020 at 08:26:28AM +0200, Paolo Bonzini wrote:
> On 30/09/20 08:06, Sean Christopherson wrote:
> >> +static struct kvm_mmu_page *get_tdp_mmu_vcpu_root(struct kvm_vcpu *vcpu)
> >> +{
> >> +	struct kvm_mmu_page *root;
> >> +	union kvm_mmu_page_role role;
> >> +
> >> +	role = vcpu->
On 30/09/20 08:06, Sean Christopherson wrote:
>> +static struct kvm_mmu_page *alloc_tdp_mmu_root(struct kvm_vcpu *vcpu,
>> +					       union kvm_mmu_page_role role)
>> +{
>> +	struct kvm_mmu_page *new_root;
>> +	struct kvm_mmu_page *root;
>> +
>> +	new_root
On Fri, Sep 25, 2020 at 02:22:44PM -0700, Ben Gardon wrote:
static u64 __read_mostly shadow_nx_mask;
> @@ -3597,10 +3592,14 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
> 	if (!VALID_PAGE(*root_hpa))
> 		return;
>
> -	sp = to_shadow_page(*root_hpa
The TDP MMU must be able to allocate paging-structure root pages and track
the usage of those pages. Implement a system for root page allocation
similar to, but separate from, that of the x86 shadow paging implementation.
When future patches add synchronization model changes to allow for parallel
page faults ...