https://bugzilla.kernel.org/show_bug.cgi?id=65561
--- Comment #31 from Jidong Xiao jidong.x...@gmail.com ---
Hi, Paolo,
I added a sti instruction in kvm-unit-tests:x86/debug.c, like this:
asm volatile("pushf\n\t"
             "pop %%rax\n\t"
             "sti\n\t"
https://bugzilla.kernel.org/show_bug.cgi?id=65561
--- Comment #32 from Jatin Kumar jatin.iitde...@gmail.com ---
(In reply to Jidong Xiao from comment #29)
Okay, I will try the sti instruction.
Jatin, your title says that there is something wrong with the sti
instruction, but looking at
https://bugzilla.kernel.org/show_bug.cgi?id=65561
--- Comment #33 from Jidong Xiao jidong.x...@gmail.com ---
Jatin, thanks for the clarification. Are you using kgdb to single-step
kernel code? I am still wondering how you did single-step execution of
kernel-level code.
https://bugzilla.kernel.org/show_bug.cgi?id=65561
--- Comment #34 from Jatin Kumar jatin.iitde...@gmail.com ---
(In reply to Jidong Xiao from comment #33)
Jatin, thanks for the clarification. Are you using kgdb to single-step
kernel code? I am still wondering how you did single
On 24.05.14 10:21, Paul Mackerras wrote:
From: Alexey Kardashevskiy a...@ozlabs.ru
The dirty map that we construct for the KVM_GET_DIRTY_LOG ioctl has
one bit per system page (4K/64K). Currently, we only set one bit in
the map for each HPT entry with the Change bit set, even if the HPT is
for
On 24.05.14 10:22, Paul Mackerras wrote:
This adds workarounds for two hardware bugs in the POWER8 performance
monitor unit (PMU), both related to interrupt generation. The effect
of these bugs is that PMU interrupts can get lost, leading to tools
such as perf reporting fewer counts and
On Tue, May 20, 2014 at 05:55:38PM +0100, Marc Zyngier wrote:
In order to be able to use the DBG_MDSCR_* macros from the KVM code,
move the relevant definitions to the obvious include file.
Also move the debug_el enum to a portion of the file that is guarded
by #ifndef __ASSEMBLY__ in order
On Tue, May 20, 2014 at 05:55:39PM +0100, Marc Zyngier wrote:
Add handlers for all the AArch64 debug registers that are accessible
from EL0 or EL1. The trapping code keeps track of the state of the
debug registers, allowing for the switch code to implement a lazy
switching strategy.
On Tue, May 20, 2014 at 05:55:36PM +0100, Marc Zyngier wrote:
This patch series adds debug support, a key feature missing from the
KVM/arm64 port.
The main idea is to keep track of whether the debug registers are
dirty (changed by the guest) or not. In this case, perform the usual
On Tue, May 20, 2014 at 05:55:40PM +0100, Marc Zyngier wrote:
As we're about to trap a bunch of CP14 registers, let's rework
the CP15 handling so it can be generalized and work with multiple
tables.
Reviewed-by: Anup Patel anup.pa...@linaro.org
Signed-off-by: Marc Zyngier
On Tue, May 20, 2014 at 05:55:37PM +0100, Marc Zyngier wrote:
pm_fake doesn't quite describe what the handler does (ignoring writes
and returning 0 for reads).
As we're about to use it (a lot) in a different context, rename it
with an (admittedly cryptic) name that makes sense for all users.
On Tue, May 20, 2014 at 05:55:42PM +0100, Marc Zyngier wrote:
We now have multiple tables for the various system registers
we trap. Make sure we check the order of all of them, as it is
critical that we get the order right (been there, done that...).
Reviewed-by: Anup Patel
On Tue, May 20, 2014 at 05:55:41PM +0100, Marc Zyngier wrote:
An interesting feature of the CP14 encoding is that there is
an overlap between 32 and 64bit registers, meaning they cannot
live in the same table as we did for CP15.
Create separate tables for 64bit CP14 and CP15 registers, and
On Tue, May 20, 2014 at 05:55:43PM +0100, Marc Zyngier wrote:
Add handlers for all the AArch32 debug registers that are accessible
from EL0 or EL1. The code follows the same strategy as the AArch64
counterpart with regard to tracking the dirty state of the debug
registers.
Reviewed-by: Anup
On Tue, May 20, 2014 at 05:55:44PM +0100, Marc Zyngier wrote:
Implement switching of the debug registers. While the number
of registers is massive, CPUs usually don't implement them all
(A57 has 6 breakpoints and 4 watchpoints, which gives us a total
of 22 registers only).
Also, we only
On Tue, May 20, 2014 at 05:55:45PM +0100, Marc Zyngier wrote:
Enable trapping of the debug registers, preventing guests from
messing with the host state (and allowing guests to use the debug
infrastructure as well).
Reviewed-by: Anup Patel anup.pa...@linaro.org
Signed-off-by: Marc Zyngier
On Tue, May 20, 2014 at 06:06:03PM +0100, Marc Zyngier wrote:
In order to allow KVM to run on Cortex-A53 implementations, wire the
minimal support required.
Signed-off-by: Marc Zyngier marc.zyng...@arm.com
ack,
I've applied this to kvmarm/next.
-Christoffer
Hi Paolo and Gleb,
The following changes since commit 198c74f43f0f5473f99967aead30ddc622804bc1:
KVM: MMU: flush tlb out of mmu lock when write-protect the sptes (2014-04-23
17:49:52 -0300)
are available in the git repository at:
From: Anup Patel anup.pa...@linaro.org
Currently, we don't have an exit reason to notify user space about
a system-level event (e.g. system reset or shutdown) triggered
by the VCPU. This patch adds exit reason KVM_EXIT_SYSTEM_EVENT for
this purpose. We can also inform user space about the
From: Anup Patel anup.pa...@linaro.org
We need a common place to share PSCI related defines among ARM kernel,
ARM64 kernel, KVM ARM/ARM64 PSCI emulation, and user space.
We introduce uapi/linux/psci.h for this purpose. This newly added
header will be first used by KVM ARM/ARM64 in-kernel PSCI
From: Anup Patel anup.pa...@linaro.org
This patch adds emulation of PSCI v0.2 MIGRATE, MIGRATE_INFO_TYPE, and
MIGRATE_INFO_UP_CPU function calls for KVM ARM/ARM64.
KVM ARM/ARM64 being a hypervisor (and not a Trusted OS), we cannot provide
these functions, hence we emulate them in
From: Anup Patel anup.pa...@linaro.org
Currently, the in-kernel PSCI emulation provides PSCI v0.1 interface to
VCPUs. This patch extends current in-kernel PSCI emulation to provide
PSCI v0.2 interface to VCPUs.
By default, ARM/ARM64 KVM will always provide PSCI v0.1 interface for
keeping the ABI
From: Anup Patel anup.pa...@linaro.org
As-per PSCI v0.2, the source CPU provides physical address of
entry point and context id for starting a target CPU. Also,
if target CPU is already running then we should return ALREADY_ON.
Current emulation of CPU_ON function does not consider physical
From: Anup Patel anup.pa...@linaro.org
User space (i.e. QEMU or KVMTOOL) should be able to check whether KVM
ARM/ARM64 supports in-kernel PSCI v0.2 emulation. For this purpose, we
define KVM_CAP_ARM_PSCI_0_2 in KVM user space interface header.
Signed-off-by: Anup Patel anup.pa...@linaro.org
From: Anup Patel anup.pa...@linaro.org
We have in-kernel emulation of PSCI v0.2 in KVM ARM/ARM64. To provide
PSCI v0.2 interface to VCPUs, we have to enable KVM_ARM_VCPU_PSCI_0_2
feature when doing KVM_ARM_VCPU_INIT ioctl.
The patch updates documentation of KVM_ARM_VCPU_INIT ioctl to provide
From: Anup Patel anup.pa...@linaro.org
The PSCI v0.2 SYSTEM_OFF and SYSTEM_RESET functions are system-level
functions hence cannot be fully emulated by in-kernel PSCI emulation code.
To tackle this, we forward PSCI v0.2 SYSTEM_OFF and SYSTEM_RESET function
calls from vcpu to user space (i.e.
From: Anup Patel anup.pa...@linaro.org
We have PSCI v0.2 emulation available in KVM ARM/ARM64
hence advertise this to user space (i.e. QEMU or KVMTOOL)
via KVM_CHECK_EXTENSION ioctl.
Signed-off-by: Anup Patel anup.pa...@linaro.org
Signed-off-by: Pranavkumar Sawargaonkar pranavku...@linaro.org
From: Anup Patel anup.pa...@linaro.org
Currently, the kvm_psci_call() returns 'true' or 'false' based on whether
the PSCI function call was handled successfully or not. This does not help
us emulate system-level PSCI functions where the actual emulation work will
be done by user space (QEMU or
From: Anup Patel anup.pa...@linaro.org
This patch adds emulation of PSCI v0.2 CPU_SUSPEND function call for
KVM ARM/ARM64. This is a CPU-level function call which can suspend
the current CPU or current CPU cluster. We don't have VCPU clusters in
KVM so we only suspend the current VCPU.
The
From: Anup Patel anup.pa...@linaro.org
This patch adds emulation of PSCI v0.2 AFFINITY_INFO function call
for KVM ARM/ARM64. This is a VCPU-level function call which will be
used to determine the current state of a given affinity level.
Signed-off-by: Anup Patel anup.pa...@linaro.org
Signed-off-by:
From: Marc Zyngier marc.zyng...@arm.com
In order to allow KVM to run on Cortex-A53 implementations, wire the
minimal support required.
Signed-off-by: Marc Zyngier marc.zyng...@arm.com
Signed-off-by: Christoffer Dall christoffer.d...@linaro.org
---
arch/arm64/include/asm/cputype.h | 1 +
From: Ashwin Chaugule ashwin.chaug...@linaro.org
PSCIv0.2 adds a new function called AFFINITY_INFO, which
can be used to query if a specified CPU has actually gone
offline. Calling this function via cpu_kill ensures that
a CPU has quiesced after a call to cpu_die. This helps
prevent the CPU from
From: Ashwin Chaugule ashwin.chaug...@linaro.org
The PSCIv0.2 spec defines standard values of function IDs
and introduces a few new functions. Detect version of PSCI
and appropriately select the right PSCI functions.
Signed-off-by: Ashwin Chaugule ashwin.chaug...@linaro.org
Reviewed-by: Rob
From: Ashwin Chaugule ashwin.chaug...@linaro.org
The PSCI v0.2+ spec defines standard values for PSCI function IDs.
Add a new binding entry so that pre v0.2 implementations can
use DT entries for function IDs and v0.2+ implementations use
standard entries as defined by the PSCIv0.2 specification.
MOV CR/DR instructions ignore the mod field (in the ModR/M byte). As the SDM
states: "The 2 bits in the mod field are ignored." Accordingly, the second
operand of these instructions is always a general purpose register.
The current emulator implementation does not do so. If the mod bits do not
Another day, another CPL patch...
It turns out that the simple approach of getting CPL from SS.DPL
broke x86/taskswitch2.flat. To fix that, treat the CPL as
CS.RPL, or 3 for VM86 tasks, while loading segment descriptors
during task switches. This removes the hack where task
Not needed anymore now that the CPL is computed directly
by the task switch code.
Given the current form, looks OK to me.
Reviewed-by: Wei Huang huangwei.v...@gmail.com
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
arch/x86/include/asm/kvm_emulate.h | 1 -
arch/x86/kvm/x86.c
On Sat, May 24, 2014 at 1:12 PM, Wei Huang huangwei.v...@gmail.com wrote:
Table 7-1 of the SDM mentions a check that the code segment's
DPL must match the selector's RPL. This was not done by KVM,
fix it.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
arch/x86/kvm/emulate.c | 31
CS.RPL is not equal to the CPL in the few instructions between
setting CR0.PE and reloading CS. And CS.DPL is also not equal
to the CPL for conforming code segments.
Out of curiosity, could you elaborate on the problem with this
CPL gap window, such as whether it breaks any VMs or tests? From Linux
On Sun, May 25, 2014 at 12:00:48PM +0200, Alexander Graf wrote:
Please document in the function header that the return value is the number
of pages that are dirty. Alternatively rename the function.
OK.
	for (i = 0; i < memslot->npages; ++i) {
-		if (kvm_test_clear_dirty(kvm,
During task switch, all of CS.DPL, CS.RPL, SS.DPL must match (in addition
to all the other requirements) and will be the new CPL. So far this
worked by carefully setting the CS selector and flag before doing the
s/flag/EFLAGS/
task switch; however, this will not work once we get the CPL