Re: [PATCH 3/4] KVM: MMU: reduce the size of mmu_page_path
On 25/03/2016 15:07, Xiao Guangrong wrote:
>>
>> @@ -2037,13 +2037,14 @@ static void mmu_pages_clear_parents(struct
>> mmu_page_path *parents)
>>  {
>>  	struct kvm_mmu_page *sp;
>>  	unsigned int level = 0;
>> +	unsigned int idx;
>>
>>  	do {
>> -		unsigned int idx = parents->idx[level];
>>  		sp = parents->parent[level];
>> -		if (!sp)
>> +		if (!sp || WARN_ON(level == PT64_ROOT_LEVEL-1))
>>  			return;
>>
>> +		idx = parents->idx[level];
>>  		WARN_ON(idx == INVALID_INDEX);
>>  		clear_unsync_child_bit(sp, idx);
>>  		level++;
>>
>
> Yes, exactly.
>
> [ actually, we can keep mmu_pages_clear_parents() unchanged ]

You cannot because ubsan would complain. :)

Paolo
Re: [PATCH 3/4] KVM: MMU: reduce the size of mmu_page_path
On 03/25/2016 09:56 PM, Paolo Bonzini wrote:
> On 25/03/2016 14:48, Xiao Guangrong wrote:
>>> This patch and the previous one are basically redoing commit
>>> 0a47cd85833e ("KVM: MMU: Fix ubsan warnings", 2016-03-04).  While you
>>> find your version easier to understand, I of course find mine easier.
>>>
>>> Rather than getting stuck in a ko fight, the solution is to stick with
>>> the code in KVM and add comments.  I'll give it a try...
>>
>> If you do not like this one, we can just make the .index is
>> [PT64_ROOT_LEVEL - 1] and keep the sentinel in .parents[], that little
>> change and nice code shape.
>
> I suppose you'd have something like this then:
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 70e95d097ef1..15e1735a2e3a 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -1980,7 +1980,7 @@ static bool kvm_sync_pages(struct kvm_vcpu *vcpu, gfn_t gfn,
>
>  struct mmu_page_path {
>  	struct kvm_mmu_page *parent[PT64_ROOT_LEVEL];
> -	unsigned int idx[PT64_ROOT_LEVEL];
> +	unsigned int idx[PT64_ROOT_LEVEL-1];
>  };
>
>  #define for_each_sp(pvec, sp, parents, i) \
> @@ -2037,13 +2037,14 @@ static void mmu_pages_clear_parents(struct mmu_page_path *parents)
>  {
>  	struct kvm_mmu_page *sp;
>  	unsigned int level = 0;
> +	unsigned int idx;
>
>  	do {
> -		unsigned int idx = parents->idx[level];
>  		sp = parents->parent[level];
> -		if (!sp)
> +		if (!sp || WARN_ON(level == PT64_ROOT_LEVEL-1))
>  			return;
>
> +		idx = parents->idx[level];
>  		WARN_ON(idx == INVALID_INDEX);
>  		clear_unsync_child_bit(sp, idx);
>  		level++;

Yes, exactly.

[ actually, we can keep mmu_pages_clear_parents() unchanged ]

> By making the arrays the same size, the effect of the sentinel seems
> clearer to me.  It doesn't seem worth 4 bytes (and strictly speaking
> those 4 bytes would be there anyway due to padding)...

The sentinel is NULL forever so it can not go to the inner loop anyway...

Okay, i am not strong opinion on it, it is not a big deal.  Let's happily
drop it if you really dislike it. :)
Re: [PATCH 3/4] KVM: MMU: reduce the size of mmu_page_path
On 25/03/2016 14:48, Xiao Guangrong wrote:
>>
>> This patch and the previous one are basically redoing commit
>> 0a47cd85833e ("KVM: MMU: Fix ubsan warnings", 2016-03-04).  While you
>> find your version easier to understand, I of course find mine easier.
>>
>> Rather than getting stuck in a ko fight, the solution is to stick with
>> the code in KVM and add comments.  I'll give it a try...
>
> If you do not like this one, we can just make the .index is
> [PT64_ROOT_LEVEL - 1] and keep the sentinel in .parents[], that little
> change and nice code shape.

I suppose you'd have something like this then:

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 70e95d097ef1..15e1735a2e3a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1980,7 +1980,7 @@ static bool kvm_sync_pages(struct kvm_vcpu *vcpu, gfn_t gfn,

 struct mmu_page_path {
 	struct kvm_mmu_page *parent[PT64_ROOT_LEVEL];
-	unsigned int idx[PT64_ROOT_LEVEL];
+	unsigned int idx[PT64_ROOT_LEVEL-1];
 };

 #define for_each_sp(pvec, sp, parents, i) \
@@ -2037,13 +2037,14 @@ static void mmu_pages_clear_parents(struct mmu_page_path *parents)
 {
 	struct kvm_mmu_page *sp;
 	unsigned int level = 0;
+	unsigned int idx;

 	do {
-		unsigned int idx = parents->idx[level];
 		sp = parents->parent[level];
-		if (!sp)
+		if (!sp || WARN_ON(level == PT64_ROOT_LEVEL-1))
 			return;

+		idx = parents->idx[level];
 		WARN_ON(idx == INVALID_INDEX);
 		clear_unsync_child_bit(sp, idx);
 		level++;

By making the arrays the same size, the effect of the sentinel seems
clearer to me.  It doesn't seem worth 4 bytes (and strictly speaking
those 4 bytes would be there anyway due to padding)...

Paolo
Re: [PATCH 3/4] KVM: MMU: reduce the size of mmu_page_path
On 03/25/2016 09:45 PM, Paolo Bonzini wrote:
> On 25/03/2016 14:19, Xiao Guangrong wrote:
>> Currently only PT64_ROOT_LEVEL - 1 levels are used, one additional entry
>> in .parent[] is used as a sentinel, the additional entry in .idx[] is
>> purely wasted
>>
>> This patch reduces its size and sets the sentinel on the upper level of
>> the place where we start from
>
> This patch and the previous one are basically redoing commit
> 0a47cd85833e ("KVM: MMU: Fix ubsan warnings", 2016-03-04).  While you
> find your version easier to understand, I of course find mine easier.
>
> Rather than getting stuck in a ko fight, the solution is to stick with
> the code in KVM and add comments.  I'll give it a try...

If you do not like this one, we can just make the .index is
[PT64_ROOT_LEVEL - 1] and keep the sentinel in .parents[], that little
change and nice code shape.
Re: [PATCH 3/4] KVM: MMU: reduce the size of mmu_page_path
On 25/03/2016 14:19, Xiao Guangrong wrote:
> Currently only PT64_ROOT_LEVEL - 1 levels are used, one additional entry
> in .parent[] is used as a sentinel, the additional entry in .idx[] is
> purely wasted
>
> This patch reduces its size and sets the sentinel on the upper level of
> the place where we start from

This patch and the previous one are basically redoing commit
0a47cd85833e ("KVM: MMU: Fix ubsan warnings", 2016-03-04).  While you
find your version easier to understand, I of course find mine easier.

Rather than getting stuck in a ko fight, the solution is to stick with
the code in KVM and add comments.  I'll give it a try...

Paolo
[PATCH 3/4] KVM: MMU: reduce the size of mmu_page_path
Currently only PT64_ROOT_LEVEL - 1 levels are used, one additional entry
in .parent[] is used as a sentinel, the additional entry in .idx[] is
purely wasted

This patch reduces its size and sets the sentinel on the upper level of
the place where we start from

Signed-off-by: Xiao Guangrong
---
 arch/x86/kvm/mmu.c | 32 ++++++++++++--------------------
 1 file changed, 12 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index e273144..c396e8b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1984,12 +1984,12 @@ static bool kvm_sync_pages(struct kvm_vcpu *vcpu, gfn_t gfn,
 }

 struct mmu_page_path {
-	struct kvm_mmu_page *parent[PT64_ROOT_LEVEL];
-	unsigned int idx[PT64_ROOT_LEVEL];
+	struct kvm_mmu_page *parent[PT64_ROOT_LEVEL - 1];
+	unsigned int idx[PT64_ROOT_LEVEL - 1];
 };

 #define for_each_sp(pvec, sp, parents, i) \
-		for (i = mmu_pages_first(&pvec, &parents); \
+		for (i = mmu_pages_next(&pvec, &parents, -1); \
 			i < pvec.nr && ({ sp = pvec.page[i].sp; 1;}); \
 			i = mmu_pages_next(&pvec, &parents, i))

@@ -2016,25 +2016,15 @@ static int mmu_pages_next(struct kvm_mmu_pages *pvec,
 	return n;
 }

-static int mmu_pages_first(struct kvm_mmu_pages *pvec,
-			   struct mmu_page_path *parents)
+static void
+mmu_pages_init(struct mmu_page_path *parents, struct kvm_mmu_page *parent)
 {
-	struct kvm_mmu_page *sp;
-	int level;
-
-	if (pvec->nr == 0)
-		return 0;
-
-	sp = pvec->page[0].sp;
-	level = sp->role.level;
-	WARN_ON(level == PT_PAGE_TABLE_LEVEL);
-
 	/*
-	 * Also set up a sentinel. Further entries in pvec are all
-	 * children of sp, so this element is never overwritten.
+	 * set up a sentinel. Further entries in pvec are all children of
+	 * sp, so this element is never overwritten.
 	 */
-	parents->parent[level - 1] = NULL;
-	return mmu_pages_next(pvec, parents, -1);
+	if (parent->role.level < PT64_ROOT_LEVEL)
+		parents->parent[parent->role.level - 1] = NULL;
 }

 static void mmu_pages_clear_parents(struct mmu_page_path *parents)
@@ -2051,7 +2041,7 @@ static void mmu_pages_clear_parents(struct mmu_page_path *parents)
 		WARN_ON(idx == INVALID_INDEX);
 		clear_unsync_child_bit(sp, idx);
 		level++;
-	} while (!sp->unsync_children);
+	} while (!sp->unsync_children && (level < PT64_ROOT_LEVEL - 1));
 }

 static void mmu_sync_children(struct kvm_vcpu *vcpu,
@@ -2064,6 +2054,7 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
 	LIST_HEAD(invalid_list);
 	bool flush = false;

+	mmu_pages_init(&parents, parent);
 	while (mmu_unsync_walk(parent, &pages)) {
 		bool protected = false;

@@ -2335,6 +2326,7 @@ static int mmu_zap_unsync_children(struct kvm *kvm,
 	if (parent->role.level == PT_PAGE_TABLE_LEVEL)
 		return 0;

+	mmu_pages_init(&parents, parent);
 	while (mmu_unsync_walk(parent, &pages)) {
 		struct kvm_mmu_page *sp;

-- 
1.8.3.1