[PATCH v2 1/7] KVM: MMU: correct the behavior of mmu_spte_update_no_track

2017-06-20 Thread guangrong.xiao
From: Xiao Guangrong. The current behavior of mmu_spte_update_no_track() does not match its _no_track() name, as the A/D bits are actually tracked and returned to the caller. This patch introduces a real _no_track() function that updates the spte regardless of the A/D bits, and renames the original …
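The distinction the patch draws can be illustrated with a minimal sketch (hypothetical `demo_*` names and simplified bit layout, not the real KVM implementation): the tracking variant reports whether the old spte had the Accessed/Dirty bits set, while a true _no_track() variant just stores the new value:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical simplified spte bits; real KVM uses the hardware EPT/paging layout. */
#define SPTE_ACCESSED (1ull << 5)
#define SPTE_DIRTY    (1ull << 6)

/* _no_track(): update the spte without inspecting the A/D bits at all. */
static void demo_spte_update_no_track(uint64_t *sptep, uint64_t new_spte)
{
	*sptep = new_spte;
}

/* Tracking variant: update and report whether the old A/D bits were set,
 * so the caller can mark the page accessed/dirty. */
static bool demo_spte_update(uint64_t *sptep, uint64_t new_spte)
{
	uint64_t old_spte = *sptep;

	*sptep = new_spte;
	return (old_spte & (SPTE_ACCESSED | SPTE_DIRTY)) != 0;
}
```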

[PATCH v2 5/7] KVM: MMU: allow dirty log without write protect

2017-06-20 Thread guangrong.xiao
From: Xiao Guangrong. A new flag, KVM_DIRTY_LOG_WITHOUT_WRITE_PROTECT, is introduced to indicate that userspace just wants a snapshot of the dirty bitmap. During live migration, after the snapshot of the dirty bitmap has been fetched from KVM, the guest memory can be write protected by calling …
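The core of "dirty log without write protect" can be sketched as follows (a hypothetical helper, not KVM's code): the snapshot atomically fetches and clears each dirty-bitmap word without touching the page tables, deferring write protection to a single later call:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Snapshot a dirty bitmap: atomically fetch each word and clear it,
 * without write protecting any spte. */
static void demo_snapshot_dirty(uint64_t *bitmap, uint64_t *snapshot,
				size_t words)
{
	for (size_t i = 0; i < words; i++)
		snapshot[i] = __atomic_exchange_n(&bitmap[i], 0,
						  __ATOMIC_RELAXED);
}
```

Atomic exchange is used so that pages dirtied concurrently with the snapshot land in either the snapshot or the next round, never lost.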

[PATCH v2 4/7] KVM: MMU: enable KVM_WRITE_PROTECT_ALL_MEM

2017-06-20 Thread guangrong.xiao
From: Xiao Guangrong. The functionality of write protection for all guest memory is ready; it is time to make it usable by userspace, which is indicated by KVM_CAP_X86_WRITE_PROTECT_ALL_MEM. Signed-off-by: Xiao Guangrong --- arch/x86/kvm/x86.c | 21 + …

[PATCH v2 6/7] KVM: MMU: clarify fast_pf_fix_direct_spte

2017-06-20 Thread guangrong.xiao
From: Xiao Guangrong. A writable spte cannot be locklessly fixed, so add a WARN_ON() to trigger a warning if something unexpected happens; this helps us track whether dirty logging for a writable spte is missed on the fast path. Signed-off-by: Xiao Guangrong --- arch/x86/kvm/mmu.c | 11 …
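The invariant can be sketched like this (hypothetical `demo_*` names, heavily simplified): the fast path restores the writable bit only via compare-and-exchange, and an already-writable spte should never reach it, which is what the WARN_ON() catches:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPTE_WRITABLE (1ull << 1)

/* Stand-in for the kernel's WARN_ON(): report and return the condition. */
static bool demo_warn(bool cond, const char *msg)
{
	if (cond)
		fprintf(stderr, "WARNING: %s\n", msg);
	return cond;
}

/* Locklessly make the spte writable; fail if it changed under us. */
static bool demo_fast_pf_fix(uint64_t *sptep, uint64_t expected)
{
	if (demo_warn((expected & SPTE_WRITABLE) != 0,
		      "writable spte cannot be locklessly fixed"))
		return false;

	return __atomic_compare_exchange_n(sptep, &expected,
					   expected | SPTE_WRITABLE,
					   false, __ATOMIC_SEQ_CST,
					   __ATOMIC_SEQ_CST);
}
```

The compare-and-exchange is what makes the fix safe without mmu-lock: if another CPU modified the spte in the meantime, the exchange fails and the fault is retried.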

[PATCH v2 2/7] KVM: MMU: introduce possible_writable_spte_bitmap

2017-06-20 Thread guangrong.xiao
From: Xiao Guangrong. It is used to track possible writable sptes on the shadow page: a bit is set to 1 for each spte that is already writable or can be locklessly made writable on the fast_page_fault path. A counter for the number of possible writable sptes is also introduced to …
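A per-shadow-page bitmap plus counter of this kind might look like the following sketch (hypothetical structure and names; the real patch hooks these updates into the spte set/clear paths):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-shadow-page state: 512 sptes per page on x86-64. */
struct demo_shadow_page {
	uint64_t possible_writable_spte_bitmap[8]; /* 512 bits */
	unsigned int possible_writable_sptes;      /* popcount cache */
};

static void demo_mark_possible_writable(struct demo_shadow_page *sp,
					unsigned int idx)
{
	uint64_t mask = 1ull << (idx & 63);

	if (!(sp->possible_writable_spte_bitmap[idx >> 6] & mask)) {
		sp->possible_writable_spte_bitmap[idx >> 6] |= mask;
		sp->possible_writable_sptes++;
	}
}

static void demo_clear_possible_writable(struct demo_shadow_page *sp,
					 unsigned int idx)
{
	uint64_t mask = 1ull << (idx & 63);

	if (sp->possible_writable_spte_bitmap[idx >> 6] & mask) {
		sp->possible_writable_spte_bitmap[idx >> 6] &= ~mask;
		sp->possible_writable_sptes--;
	}
}
```

The counter lets later patches skip whole shadow pages (counter == 0) instead of scanning 512 sptes each.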

[PATCH v2 3/7] KVM: MMU: introduce kvm_mmu_write_protect_all_pages

2017-06-20 Thread guangrong.xiao
From: Xiao Guangrong. The original idea is from Avi. kvm_mmu_write_protect_all_pages() is extremely fast at write protecting all the guest memory. Compared with the ordinary algorithm, which write protects last-level sptes one by one based on the rmap, it simply updates the generation number to …
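The generation-number trick can be sketched as follows (hypothetical, heavily simplified): bumping a global number logically write protects everything in O(1), and each shadow page is reconciled lazily the next time the MMU visits it:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static uint64_t demo_wp_all_gen; /* global write-protect generation */

struct demo_sp {
	uint64_t gen;          /* generation this page was last synced at */
	bool write_protected;  /* state of its last-level sptes */
};

/* O(1): "write protect all" by bumping the generation number. */
static void demo_write_protect_all(void)
{
	demo_wp_all_gen++;
}

/* Lazily reconcile one shadow page when the MMU next touches it. */
static void demo_sync_page(struct demo_sp *sp)
{
	if (sp->gen != demo_wp_all_gen) {
		sp->write_protected = true; /* really: clear W on its sptes */
		sp->gen = demo_wp_all_gen;
	}
}
```

This is why the operation is so much cheaper than the rmap walk: the per-page cost is paid only for pages the guest actually touches afterwards.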

[PATCH v2 0/7] KVM: MMU: fast write protect

2017-06-20 Thread guangrong.xiao
From: Xiao Guangrong. Changelog in v2: thanks to Paolo's review, this version disables write-protect-all if PML is supported. Background: the original idea of this patchset is from Avi, who raised it on the mailing list during my vMMU development some years ago. This patchset introduces …

[PATCH v2 7/7] KVM: MMU: stop using mmu_spte_get_lockless under mmu-lock

2017-06-20 Thread guangrong.xiao
From: Xiao Guangrong. mmu_spte_age() runs under the protection of mmu-lock, so there is no reason to use mmu_spte_get_lockless(). Signed-off-by: Xiao Guangrong --- arch/x86/kvm/mmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index …

[PATCH 3/7] KVM: MMU: introduce kvm_mmu_write_protect_all_pages

2017-05-03 Thread guangrong.xiao
From: Xiao Guangrong. The original idea is from Avi. kvm_mmu_write_protect_all_pages() is extremely fast at write protecting all the guest memory. Compared with the ordinary algorithm, which write protects last-level sptes one by one based on the rmap, it simply updates the generation number to …

[PATCH 6/7] KVM: MMU: clarify fast_pf_fix_direct_spte

2017-05-03 Thread guangrong.xiao
From: Xiao Guangrong. A writable spte cannot be locklessly fixed, so add a WARN_ON() to trigger a warning if something unexpected happens; this helps us track whether dirty logging for a writable spte is missed on the fast path. Signed-off-by: Xiao Guangrong --- arch/x86/kvm/mmu.c | 11 …

[PATCH 5/7] KVM: MMU: allow dirty log without write protect

2017-05-03 Thread guangrong.xiao
From: Xiao Guangrong. A new flag, KVM_DIRTY_LOG_WITHOUT_WRITE_PROTECT, is introduced to indicate that userspace just wants a snapshot of the dirty bitmap. During live migration, after the snapshot of the dirty bitmap has been fetched from KVM, the guest memory can be write protected by calling …

[PATCH 7/7] KVM: MMU: stop using mmu_spte_get_lockless under mmu-lock

2017-05-03 Thread guangrong.xiao
From: Xiao Guangrong. mmu_spte_age() runs under the protection of mmu-lock, so there is no reason to use mmu_spte_get_lockless(). Signed-off-by: Xiao Guangrong --- arch/x86/kvm/mmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index …

[PATCH 4/7] KVM: MMU: enable KVM_WRITE_PROTECT_ALL_MEM

2017-05-03 Thread guangrong.xiao
From: Xiao Guangrong. The functionality of write protection for all guest memory is ready; it is time to make it usable by userspace, which is indicated by KVM_CAP_X86_WRITE_PROTECT_ALL_MEM. Signed-off-by: Xiao Guangrong --- arch/x86/kvm/x86.c | 6 ++ include/uapi/linux/kvm.h | 2 …

[PATCH 1/7] KVM: MMU: correct the behavior of mmu_spte_update_no_track

2017-05-03 Thread guangrong.xiao
From: Xiao Guangrong. The current behavior of mmu_spte_update_no_track() does not match its _no_track() name, as the A/D bits are actually tracked and returned to the caller. This patch introduces a real _no_track() function that updates the spte regardless of the A/D bits, and renames the original …

[PATCH 2/7] KVM: MMU: introduce possible_writable_spte_bitmap

2017-05-03 Thread guangrong.xiao
From: Xiao Guangrong. It is used to track possible writable sptes on the shadow page: a bit is set to 1 for each spte that is already writable or can be locklessly made writable on the fast_page_fault path. A counter for the number of possible writable sptes is also introduced to …

[PATCH 0/7] KVM: MMU: fast write protect

2017-05-03 Thread guangrong.xiao
From: Xiao Guangrong. Background: the original idea of this patchset is from Avi, who raised it on the mailing list during my vMMU development some years ago. This patchset introduces an extremely fast way to write protect all the guest memory. Compared with the ordinary algorithm, which …

[PATCH 7/9] KVM: MMU: introduce kvm_zap_gfn_range()

2015-04-30 Thread guangrong.xiao
From: Xiao Guangrong. It is used to zap all the rmaps of the specified gfn range and will be used by a later patch. Signed-off-by: Xiao Guangrong --- arch/x86/kvm/mmu.c | 30 ++ arch/x86/kvm/mmu.h | 1 + 2 files changed, 31 insertions(+) diff --git …

[PATCH 8/9] KVM: MMU: fix MTRR update

2015-04-30 Thread guangrong.xiao
From: Xiao Guangrong. Currently, whenever guest MTRR registers are changed, kvm_mmu_reset_context is called to switch to the new root shadow page table; however, it is useless since: 1) the cache type is not cached in the shadow page's attributes, so the original root shadow page will be reused …

[PATCH 2/9] KVM: MMU: introduce slot_handle_level() and its helper

2015-04-30 Thread guangrong.xiao
From: Xiao Guangrong. There are several places walking all rmaps for the memslot, so introduce common functions to clean up the code. Signed-off-by: Xiao Guangrong --- arch/x86/kvm/mmu.c | 63 ++ 1 file changed, 63 insertions(+) diff --git …
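The cleanup pattern can be sketched as follows (hypothetical `demo_*` types and names, much simplified): one helper walks every rmap head of a memslot across the paging levels and applies a caller-supplied handler, so each call site only provides the handler:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define DEMO_LEVELS 3 /* e.g. 4K, 2M, 1G mappings on x86-64 */

struct demo_memslot {
	unsigned long *rmap[DEMO_LEVELS]; /* one rmap array per level */
	size_t nheads[DEMO_LEVELS];
};

typedef bool (*demo_rmap_handler)(unsigned long *rmap_head);

/* Walk every rmap head on every level; report whether any handler did work. */
static bool demo_slot_handle_level(struct demo_memslot *slot,
				   demo_rmap_handler fn)
{
	bool flush = false;

	for (int level = 0; level < DEMO_LEVELS; level++)
		for (size_t i = 0; i < slot->nheads[level]; i++)
			flush |= fn(&slot->rmap[level][i]);
	return flush;
}

/* Example handler: "zap" by clearing the head, report whether it was set. */
static bool demo_zap_rmapp(unsigned long *rmap_head)
{
	bool was_set = *rmap_head != 0;

	*rmap_head = 0;
	return was_set;
}
```

Each of the duplicated walkers mentioned in the changelog then collapses into one call such as `demo_slot_handle_level(slot, demo_zap_rmapp)`.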

[PATCH 4/9] KVM: MMU: introduce for_each_rmap_spte()

2015-04-30 Thread guangrong.xiao
From: Xiao Guangrong. It's used to walk all the sptes on the rmap and to clean up the code. Signed-off-by: Xiao Guangrong --- arch/x86/kvm/mmu.c | 63 +++- arch/x86/kvm/mmu_audit.c | 4 +-- 2 files changed, 26 insertions(+), 41 deletions(-) diff …
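An iterator macro of this shape might look like the sketch below (hypothetical: the rmap is modeled as a plain linked list of spte pointers, whereas real KVM packs one or many sptes behind the rmap head):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical simplified rmap: a singly linked list of spte pointers. */
struct demo_rmap_node {
	uint64_t *sptep;
	struct demo_rmap_node *next;
};

/* Visit every spte reachable from the rmap head. */
#define demo_for_each_rmap_spte(head, node, sptep)		\
	for ((node) = (head);					\
	     (node) && (((sptep) = (node)->sptep), 1);		\
	     (node) = (node)->next)
```

Call sites then shrink from open-coded walks to a single loop body, which is the cleanup the patch describes.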

[PATCH 6/9] KVM: MMU: introduce kvm_zap_rmapp

2015-04-30 Thread guangrong.xiao
From: Xiao Guangrong. Split kvm_unmap_rmapp and introduce kvm_zap_rmapp, which will be used in a later patch. Signed-off-by: Xiao Guangrong --- arch/x86/kvm/mmu.c | 20 1 file changed, 12 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c …

[PATCH 5/9] KVM: MMU: KVM: introduce for_each_slot_rmap

2015-04-30 Thread guangrong.xiao
From: Xiao Guangrong. It is used to clean up the common code between kvm_handle_hva_range and slot_handle_level; it will also be used by a later patch. Signed-off-by: Xiao Guangrong --- arch/x86/kvm/mmu.c | 144 - 1 file changed, 99 insertions(+), 45 …

[PATCH 0/9] KVM: MTRR fixes and some cleanups

2015-04-30 Thread guangrong.xiao
From: Xiao Guangrong. There are some MTRR bugs when a legacy IOMMU device is used on Intel's CPUs: - In the current code, whenever guest MTRR registers are changed, kvm_mmu_reset_context is called to switch to the new root shadow page table; however, it is useless since: 1) the cache type is not cached …

[PATCH 3/9] KVM: MMU: use slot_handle_level and its helper to clean up the code

2015-04-30 Thread guangrong.xiao
From: Xiao Guangrong. slot_handle_level and its helper functions are ready now; use them to clean up the code. Signed-off-by: Xiao Guangrong --- arch/x86/kvm/mmu.c | 129 - 1 file changed, 18 insertions(+), 111 deletions(-) diff --git …

[PATCH 9/9] KVM: x86: do not reset mmu if CR0.CD and CR0.NW are changed

2015-04-30 Thread guangrong.xiao
From: Xiao Guangrong. CR0.CD and CR0.NW are not used by the shadow page table, so there is no need to adjust the mmu when these two bits are changed. Signed-off-by: Xiao Guangrong --- arch/x86/kvm/x86.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c …

[PATCH 1/9] KVM: MMU: fix decoding cache type from MTRR

2015-04-30 Thread guangrong.xiao
From: Xiao Guangrong. There are some bugs in the current get_mtrr_type(): 1) bit 2 of mtrr_state->enabled corresponds to bit 11 of the IA32_MTRR_DEF_TYPE MSR, which completely controls MTRR enablement, meaning the other bits are ignored if it is cleared; 2) the fixed MTRR ranges are controlled by …
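The decoding in question can be sketched from the architectural IA32_MTRR_DEF_TYPE layout (bits 7:0 default type, bit 10 FE fixed-range enable, bit 11 E global enable, per the Intel SDM); the `demo_*` helper below is hypothetical, not KVM's get_mtrr_type():

```c
#include <assert.h>
#include <stdint.h>

/* IA32_MTRR_DEF_TYPE fields (Intel SDM). */
#define MTRR_DEF_TYPE_TYPE(msr) ((uint8_t)((msr) & 0xff)) /* default type */
#define MTRR_DEF_TYPE_FE(msr)   (((msr) >> 10) & 1)       /* fixed enable */
#define MTRR_DEF_TYPE_E(msr)    (((msr) >> 11) & 1)       /* MTRR enable */

#define MTRR_TYPE_UNCACHABLE 0
#define MTRR_TYPE_WRBACK     6

/* If E (bit 11) is clear, all memory is UC and the other fields are
 * ignored; this is the behavior the patch fixes get_mtrr_type() to honor. */
static uint8_t demo_default_mem_type(uint64_t def_type_msr)
{
	if (!MTRR_DEF_TYPE_E(def_type_msr))
		return MTRR_TYPE_UNCACHABLE;
	return MTRR_DEF_TYPE_TYPE(def_type_msr);
}
```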
