On Mon, 2015-03-30 at 10:39 +0530, Aneesh Kumar K.V wrote:
> This patch removes helpers that were used only once in the code.
> Limiting the page table walk variants helps ensure that we won't
> end up with code walking page tables with wrong assumptions.
>
> Signed-off-by: Aneesh Kumar K.V
A pte can be updated by other CPUs as part of multiple activities,
such as THP split, huge page collapse, and unmap. We need to make sure
we don't reload the pte value again and again for different checks.
Signed-off-by: Aneesh Kumar K.V
---
Note:
This is posted previously as part of
http://article.gman
This patch removes helpers that were used only once in the code.
Limiting the page table walk variants helps ensure that we won't
end up with code walking page tables with wrong assumptions.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/pgtable.h | 21 -
arch/powerpc
> -----Original Message-----
> From: Marcelo Tosatti [mailto:mtosa...@redhat.com]
> Sent: Saturday, March 28, 2015 3:30 AM
> To: Wu, Feng
> Cc: h...@zytor.com; t...@linutronix.de; mi...@redhat.com; x...@kernel.org;
> g...@kernel.org; pbonz...@redhat.com; dw...@infradead.org;
> j...@8bytes.org; al
There are two scenarios that require collapsing small sptes
into large sptes:
- dirty logging tracks sptes at 4K granularity, so large sptes are split;
  the large sptes will be reallocated on the destination machine, and the
  guest on the source machine will be destroyed when live migration
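The split/collapse life cycle around dirty logging can be sketched with a toy model (the struct, field names, and both helpers are illustrative only, not KVM's real data structures):

```c
#include <stdbool.h>
#include <stddef.h>

#define SPTES_PER_LARGE 512   /* 4K pages covered by one 2M mapping */

/* Toy model of one guest 2M region: either mapped by one large spte,
 * or by an array of 4K sptes with per-page dirty tracking. */
struct region {
    bool large;                   /* mapped by a single large spte? */
    bool dirty[SPTES_PER_LARGE];  /* per-4K dirty bits while logging */
};

/* Enabling dirty logging splits the large spte so that writes can be
 * tracked at 4K granularity (e.g. for live migration). */
static void start_dirty_logging(struct region *r)
{
    r->large = false;
    for (size_t i = 0; i < SPTES_PER_LARGE; i++)
        r->dirty[i] = false;
}

/* When logging stops, the small sptes can be collapsed back so the
 * region is again served by one large spte. */
static void stop_dirty_logging(struct region *r)
{
    r->large = true;
}
```

The point of the patch series is the second step: without an explicit collapse, the region would stay fragmented into 4K mappings even after logging ends.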
After the speed-up of cpuid_maxphyaddr(), it is cheap to call: instead of a
heavy enumeration of CPUID entries, it now returns a cached, pre-computed
value, and it is inlined. Caching its result separately has therefore become
unnecessary and can be removed.
Signed-off-by: Eugene Korenevsky
---
arch/x86/kvm/vmx.c | 14
cpuid_maxphyaddr(), which performs a lot of memory accesses, is called
extensively across KVM, especially in the nVMX code.
This patch adds a cached maxphyaddr value to vcpu.arch to reduce the pressure
on the CPU cache and to simplify the code of cpuid_maxphyaddr() callers. The
cached value is initialized in
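In outline, the change amounts to something like the following sketch (the structure layout, the helper names, and the 36-bit fallback are simplifications for illustration, not the actual KVM code):

```c
#include <stdint.h>

/* Simplified model: vcpu->arch grows a maxphyaddr field that is filled
 * once, so cpuid_maxphyaddr() becomes a trivial inlined field read
 * instead of an enumeration of CPUID entries on every call. */
struct kvm_vcpu_arch {
    uint8_t maxphyaddr;   /* cached physical address width, in bits */
};

struct kvm_vcpu {
    struct kvm_vcpu_arch arch;
};

/* Expensive path: the real code scans the guest's CPUID entries
 * (leaf 0x80000008); modeled here as a constant. */
static uint8_t enumerate_maxphyaddr(struct kvm_vcpu *vcpu)
{
    (void)vcpu;
    return 36; /* illustrative fallback value */
}

/* Run once, e.g. whenever the guest's CPUID is (re)configured. */
static void update_cached_maxphyaddr(struct kvm_vcpu *vcpu)
{
    vcpu->arch.maxphyaddr = enumerate_maxphyaddr(vcpu);
}

/* Hot path: now just an inlined load from vcpu->arch. */
static inline uint8_t cpuid_maxphyaddr(struct kvm_vcpu *vcpu)
{
    return vcpu->arch.maxphyaddr;
}
```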
On each VM-entry, the CPU should check the following VMCS fields for zero
bits beyond the physical address width:
- APIC-access address
- virtual-APIC address
- posted-interrupt descriptor address
This patch adds these checks as required by the Intel SDM.
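Each of the fields above holds a 4K-aligned physical address, so the check reduces to two tests. A hedged sketch (the helper name and the exact alignment handling are illustrative, not the patch's code):

```c
#include <stdint.h>

/* A VMCS physical-address field is valid only if no bits at or above
 * the physical address width are set and, for these APIC and
 * posted-interrupt pages, the address is 4K-aligned. */
static int vmcs_phys_addr_valid(uint64_t addr, int maxphyaddr)
{
    if (addr >> maxphyaddr)      /* bits beyond the address width */
        return 0;
    if (addr & 0xfffULL)         /* not 4K page-aligned */
        return 0;
    return 1;
}
```

On a CPU with a 36-bit physical address width, for example, an address with bit 40 set must cause the VM-entry check to fail.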
Signed-off-by: Eugene Korenevsky
---
arch/x86/kvm/vmx
There are several redundant definitions in processor-flags.h and emulator.c.
Slowly but surely, they will get mixed up, so removing the ones in emulator.c
seems like a reasonable move (unless I am missing something, e.g., kvm-kmod
considerations).
Nadav Amit (2):
KVM: x86: removing redundant eflags bi
Some constants are redefined in emulate.c. Avoid that:
s/SELECTOR_RPL_MASK/SEGMENT_RPL_MASK
s/SELECTOR_TI_MASK/SEGMENT_TI_MASK
No functional change.
Signed-off-by: Nadav Amit
---
arch/x86/include/asm/kvm_host.h | 3 ---
arch/x86/kvm/emulate.c | 6 +++---
arch/x86/kvm/vmx.c
The eflags bits are redefined (using other defines) in emulate.c.
Use the definitions from processor-flags.h, as some mess has already started.
No functional change.
Signed-off-by: Nadav Amit
---
arch/x86/include/asm/kvm_host.h | 2 -
arch/x86/kvm/emulate.c | 105 ++--