| [] kvm_reset_vcpu+0x5c/0xac
| [] kvm_arch_vcpu_ioctl+0x3e4/0x490
| [] kvm_vcpu_ioctl+0x5b8/0x720
| [] do_vfs_ioctl+0x2f4/0x884
| [] SyS_ioctl+0x78/0x9c
| [] __sys_trace_return+0x0/0x4
Cc: # < v5.3 with 2a5f1b67ec57 backported
Signed-off-by: James Morse
---
arch/arm64/kvm/sys_regs.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
l.org # ${GITHASHHERE}: arm64: Add Cortex-A510 CPU part
definition
Cc: sta...@vger.kernel.org
Signed-off-by: James Morse
---
Changes since v1:
* Moved the SPSR_EL2 fixup into a helper called earlier
* Use final cap
* Dropped the IS_ENABLED() check
Documentation/arm64/silicon-errata.rst | 2 ++
arch/
rollback on SError to HYP")
Cc: sta...@vger.kernel.org
Signed-off-by: James Morse
---
It may be possible to remove both this patch, and the HVC handling code
in fixup_guest_exit(). This means KVM would always handle the exception
and the SError. This may result in unnecessary work if the guest
messing with ELR_EL2, IRQs don't
update this register so don't need to check.
Fixes: defe21f49bc9 ("KVM: arm64: Move PC rollback on SError to HYP")
Cc: sta...@vger.kernel.org
Reported-by: Steven Price
Signed-off-by: James Morse
---
arch/arm64/kvm/hyp/include/hyp/switch.
Signed-off-by: James Morse
---
arch/arm64/include/asm/cputype.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index 19b8441aa8f2..e8fdc10395b6 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm
https://git.gitlab.arm.com/linux-arm/linux-jm.git a510_errata/kvm_bits/v2
Thanks,
James
Anshuman Khandual (1):
arm64: Add Cortex-A510 CPU part definition
James Morse (3):
KVM: arm64: Avoid consuming a stale esr value when SError occur
KVM: arm64: Stop handle_exit() from handling HVC twice when an SError occurs
KVM: arm64: Workaround Cortex-A510's single-step and PAC trap errata
Hi Marc,
On 25/01/2022 18:36, Marc Zyngier wrote:
> On Tue, 25 Jan 2022 18:19:45 +,
> James Morse wrote:
>> On 25/01/2022 16:51, Marc Zyngier wrote:
>>> On Tue, 25 Jan 2022 15:38:03 +,
>>> James Morse wrote:
>>>>
>>>> Cortex-A510
Hi Marc,
On 25/01/2022 16:51, Marc Zyngier wrote:
> On Tue, 25 Jan 2022 15:38:03 +,
> James Morse wrote:
>>
>> Cortex-A510's erratum #2077057 causes SPSR_EL2 to be corrupted when
>> single-stepping authenticated ERET instructions. A single step is
>> exp
l.org # ${GITHASHHERE}: arm64: Add Cortex-A510 CPU part
definition
Cc: sta...@vger.kernel.org
Signed-off-by: James Morse
---
Documentation/arm64/silicon-errata.rst | 2 ++
arch/arm64/Kconfig | 16
arch/arm64/kernel/cpu_errata.c | 8
arch/arm64/kv
Hi Marc,
On 16/10/2021 14:50, Marc Zyngier wrote:
> On Fri, 15 Oct 2021 17:14:13 +0100,
> James Morse wrote:
>>
>> If the CPUs support HPDS2, and there is a DT description of PBHA values
>> that only affect performance, enable those bits for both TTBR0 and TTBR1.
>>
Hi Marc,
On 16/10/2021 14:27, Marc Zyngier wrote:
> On Fri, 15 Oct 2021 17:14:10 +0100,
> James Morse wrote:
>>
>> Page Based Hardware Attributes (PBHA, aka HPDS2) allow a page table entry
>> to specify up to four bits that can be used by the hardware for some
>>
PBHA isn't defined by the Arm CPU architecture, so may have surprising
side-effects.
Document what is, and what is not supported. List the arch code's
expectations regarding how PBHA behaves.
Signed-off-by: James Morse
---
Documentation/arm64/index.rst | 1 +
Documentation/arm6
value (5 -> 5, 4, 1), and check each of
these values is listed as only affecting performance. If so, the bits
of the original value (5) can be used by the guest at stage1. (by clearing
the bits from VTCR_EL2)
Signed-off-by: James Morse
---
I've checked the TRMs for the listed CPUs.
T
Add a pgprot_pbha() helper that modifies a pgprot_t to include a pbha
value. The value is checked against those that were listed as only
affecting performance.
Signed-off-by: James Morse
---
arch/arm64/include/asm/pgtable-hwdef.h | 1 +
arch/arm64/include/asm/pgtable.h | 12
inear region") used these, but only as an optimisation.
Only the necessary PBHA bits are enabled to reduce the risk of an
unsafe bit/value being used by accident.
Signed-off-by: James Morse
---
arch/arm64/Kconfig | 13 +
arch/arm64/include/asm/pgtable-hwdef.h | 4 ++
ow these hints to be used, add a way of describing which
values only have a performance impact, and which can only be
used if all mappings use the same PBHA value. This goes in the
cpus node binding, as it must be the same for all CPUs.
Signed-off-by: James Morse
---
.../devicetree/bindings/a
The cpus.yaml file describes the cpu nodes, not the cpus node.
Rename it to allow integration properties of all the cpus to be described
in the cpus node.
Signed-off-by: James Morse
---
Documentation/devicetree/bindings/arm/{cpus.yaml => cpu.yaml} | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
s.
Enable PBHA for stage2, where the configured value is zero. This has no
effect if PBHA isn't in use. On Cortex cores that have the 'stage2 wins'
behaviour, this disables whatever the guest may be doing. For any other
core with a sensible combination policy, it should be harmles
o
this. (do we need to?)
I don't have a platform that uses any of this, so I can't detect whether or not
the PBHA values were generated with the read/writes.
Thanks,
James Morse (7):
KVM: arm64: Detect and enable PBHA for stage2
dt-bindings: Rename the description of cpu node
Hi Steven, Catalin,
On 18/11/2020 16:01, Steven Price wrote:
> On 17/11/2020 16:07, Catalin Marinas wrote:
>> On Mon, Oct 26, 2020 at 03:57:27PM +, Steven Price wrote:
>>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>>> index 19aacc7d64de..38fe25310ca1 100644
>>> --- a/arch/arm64/
Hi Alex,
On 27/10/2020 17:26, Alexandru Elisei wrote:
> Stage 2 faults triggered by the profiling buffer attempting to write to
> memory are reported by the SPE hardware by asserting a buffer management
> event interrupt. Interrupts are by their nature asynchronous, which means
> that the guest mi
Hi Alex,
On 27/10/2020 17:26, Alexandru Elisei wrote:
> From: Sudeep Holla
>
> To configure the virtual SPE buffer management interrupt number, we use a
> VCPU kvm_device ioctl, encapsulating the KVM_ARM_VCPU_SPE_IRQ attribute
> within the KVM_ARM_VCPU_SPE_CTRL group.
>
> After configuring the
Hi Alex,
On 27/10/2020 17:26, Alexandru Elisei wrote:
> KVM SPE emulation depends on the configuration option KVM_ARM_SPE and on
> having hardware SPE support on all CPUs.
> The host driver must be
> compiled-in because we need the SPE interrupt to be enabled; it will be
> used to kick us out
Hi Alex,
On 27/10/2020 17:26, Alexandru Elisei wrote:
> Detect Statistical Profiling Extension (SPE) support using the cpufeatures
> framework. The presence of SPE is reported via the ARM64_SPE capability.
>
> The feature will be necessary for emulating SPE in KVM, because KVM needs
> that all CP
Hi Alex,
On 27/10/2020 17:26, Alexandru Elisei wrote:
> When a VCPU is created, the kvm_vcpu struct is initialized to zero in
> kvm_vm_ioctl_create_vcpu(). On VHE systems, the first time
> vcpu.arch.mdcr_el2 is loaded on hardware is in vcpu_load(), before it is
> set to a sensible value in kvm_arm
Hi Marc,
On 02/11/2020 19:16, Marc Zyngier wrote:
> The use of the AArch32-specific accessors have always been a bit
> annoying on 64bit, and it is time for a change.
>
> Let's move the AArch32 exception injection over to the AArch64 encoding,
> which requires us to split the two halves of FAR_EL
Hi Marc,
On 02/11/2020 19:16, Marc Zyngier wrote:
> Similarly to what has been done on the cp15 front, repaint the
> debug registers to use their AArch64 counterparts. This results
> in some simplification as we can remove the 32bit-specific
> accessors.
> diff --git a/arch/arm64/kvm/sys_regs.c b
Hi Marc,
On 02/11/2020 19:16, Marc Zyngier wrote:
> Since the very beginning of KVM/arm64, we represented the system
> register file using a dual view: on one side the AArch64 state, on the
> other a bizarre mapping of the AArch64 state onto the Aarch64
> registers.
Now that would be bizarre!
m
Hi Marc,
On 02/11/2020 19:16, Marc Zyngier wrote:
> Move all the cp15 registers over to their AArch64 counterpart.
> This requires the annotation of a few of them (such as the usual
> DFAR/IFAR vs FAR_EL1), and a new helper that generates mask/shift
> pairs for the various configurations.
> diff
Hi Marc,
On 27/10/2020 19:21, Marc Zyngier wrote:
>>> +static inline u32 __vcpu_read_cp15(const struct kvm_vcpu *vcpu, int reg)
>>> +{
>>> + return __vcpu_read_sys_reg(vcpu, reg / 2);
>>> +}
>> Doesn't this re-implement the issue 3204be4109ad fixed?
> I don't think it does. The issue existed
Hi Marc,
On 26/10/2020 13:34, Marc Zyngier wrote:
> Move the AArch64 exception injection code from EL1 to HYP, leaving
> only the ESR_EL1 updates to EL1. In order to come with the differences
(cope with the differences?)
> between VHE and nVHE, two set of system register accessors are provided.
Hi Marc,
On 26/10/2020 13:34, Marc Zyngier wrote:
> Similarly to what has been done for AArch64, move the AArch32 exception
> injection to HYP.
>
> In order to not use the regmap selection code at EL2, simplify the code
> populating the target mode's LR register by hardcoding the two possible
>
Hi Marc,
On 26/10/2020 13:34, Marc Zyngier wrote:
> Instead of handling the "PC rollback on SError during HVC" at EL1 (which
> requires disclosing PC to a potentially untrusted kernel), let's move
> this fixup to ... fixup_guest_exit(), which is where we do all fixups.
> diff --git a/arch/arm64/k
Hi Gavin,
[my mail client went a bit nuts - it thinks this got sent already, sorry if you
received
it twice!]
I only got so far through this, so may have focussed on the wrong things.
This patch has too many things going on. Please split it up.
I think the page-fault and page-present should be
Hi Gavin,
On 18/08/2020 02:13, Gavin Shan wrote:
> This defines the struct for ACPI APF table. The information included
> in this table will be used by guest kernel to retrieve SDEI event
> number, PPI number and its triggering properties:
>
>* @sdei_event: number of SDEI event used for page-
Hi Gavin,
On 18/08/2020 02:13, Gavin Shan wrote:
> This renames user_mem_abort() to kvm_handle_user_mem_abort(), and
> then exports it. The function will be used in asynchronous page fault
> to populate a page table entry once the corresponding page is populated
> from the backup device (e.g. swap
Hi Gavin,
I think this series would benefit from being in smaller pieces. I got lost in
patch 4 for
quite a while. Suggestion on where to do that in patch 4.
On 18/08/2020 02:13, Gavin Shan wrote:
> There are two stages of page fault. The guest kernel is responsible
> for handling stage one pag
ttle simpler than having an in-kernel SDEI dispatcher,
and has no additional state that would need migrating.
RFC - I haven't tested this. My question is why can't we do it?
CC: Gavin Shan
NAK-bait-for: Marc Zyngier
Signed-off-by: James Morse
---
arch/arm64/include/asm/kvm_emulate.h | 1 +
a
Hi Marc, Drew,
On 28/09/2020 12:52, Marc Zyngier wrote:
> On 2020-09-26 10:48, Andrew Jones wrote:
>> On Fri, Sep 25, 2020 at 05:01:02PM +0100, James Morse wrote:
>>> Commit 011e5f5bf529 ("arm64/cpufeature: Add remaining feature bits in
>>> ID_AA64PFR0 regist
Fixes: 011e5f5bf529 ("arm64/cpufeature: Add remaining feature bits in
ID_AA64PFR0 register")
Cc:
Cc: Anshuman Khandual
Signed-off-by: James Morse
---
I'll be back at rc1 with the minimal KVM support to ensure the traps
are enabled and handled silently.
---
arch/arm64/kvm/sys_r
Hi Pingfan,
On 12/08/2020 15:05, Pingfan Liu wrote:
> Both arm and arm64 kernel entry point have the following prerequisite:
> MMU = off, D-cache = off, I-cache = dont care.
>
> HVC_SOFT_RESTART call should meet this prerequisite before jumping to the
> new kernel.
I think you have this the wr
Hi Andrew,
On 11/08/2020 16:12, Andrew Scull wrote:
> On Wed, Aug 05, 2020 at 03:37:27PM +0100, James Morse wrote:
>> On 31/07/2020 11:20, Andrew Scull wrote:
>>> If there is a pending physical SError, we'd have to keep it pending so
>>> the host can consume it.
&
Hi Andrew,
On 11/08/2020 15:53, Andrew Scull wrote:
> On Wed, Aug 05, 2020 at 03:34:11PM +0100, James Morse wrote:
>> On 30/07/2020 23:31, Andrew Scull wrote:
>>> On Thu, Jul 30, 2020 at 04:18:23PM +0100, Andrew Scull wrote:
>>>> The ESB at the start of the ve
Hi Andrew,
On 31/07/2020 11:20, Andrew Scull wrote:
> On Fri, Jul 31, 2020 at 09:00:03AM +0100, Marc Zyngier wrote:
>> On 2020-07-30 23:31, Andrew Scull wrote:
>>> On Thu, Jul 30, 2020 at 04:18:23PM +0100, Andrew Scull wrote:
The ESB at the start of the vectors causes any SErrors to be
c
Hi Andrew,
On 30/07/2020 23:31, Andrew Scull wrote:
> On Thu, Jul 30, 2020 at 04:18:23PM +0100, Andrew Scull wrote:
>> The ESB at the start of the vectors causes any SErrors to be consumed to
>> DISR_EL1. If the exception came from the host and the ESB caught an
>> SError, it would not be noticed
Hi Andrew,
On 30/07/2020 16:18, Andrew Scull wrote:
> The ESB at the start of the vectors causes any SErrors to be consumed to
> DISR_EL1. If the exception came from the host and the ESB caught an
> SError, it would not be noticed until a guest exits and DISR_EL1 is
> checked. Further, the SError
be made to the PC.
Oops!
Reviewed-by: James Morse
Thanks,
James
> Fixes: ddb3d07cfe90 ("arm64: KVM: Inject a Virtual SError if it was pending")
> Signed-off-by: Andrew Scull
> ---
> arch/arm64/kvm/handle_exit.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 del
Hi Marc, Andrew,
On 06/07/2020 11:11, Marc Zyngier wrote:
> On 2020-07-06 10:52, Andrew Scull wrote:
>> HVC_SOFT_RESTART is given values for x0-2 that it should installed
>> before exiting to the new address so should not set x0 to stub HVC
>> success or failure code.
>> diff --git a/arch/arm64/k
Hi guys,
On 30/06/2020 09:36, Will Deacon wrote:
> On Tue, Jun 30, 2020 at 09:15:15AM +0100, Marc Zyngier wrote:
>> On 2020-06-29 22:33, Rob Herring wrote:
>>> On Cortex-A77 r0p0 and r1p0, a sequence of a non-cacheable or device
>>> load
>>> and a store exclusive or PAR_EL1 read can cause a deadlo
Hi guys,
On 24/06/2020 17:24, Catalin Marinas wrote:
> On Wed, Jun 24, 2020 at 03:59:35PM +0100, Steven Price wrote:
>> On 24/06/2020 15:21, Catalin Marinas wrote:
>>> On Wed, Jun 24, 2020 at 12:16:28PM +0100, Steven Price wrote:
On 23/06/2020 18:48, Catalin Marinas wrote:
> This causes p
Hi Steve,
On 17/06/2020 16:34, Steven Price wrote:
> On 17/06/2020 15:38, Catalin Marinas wrote:
>> On Wed, Jun 17, 2020 at 01:38:44PM +0100, Steven Price wrote:
>>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>>> index e3b9ee268823..040a7fffaa93 100644
>>> --- a/virt/kvm/arm/mmu.c
>>> ++
Finally, remove the target table. Merge the code that checks the
tables into kvm_reset_sys_regs() as there is now only one table.
Signed-off-by: James Morse
---
arch/arm64/include/asm/kvm_coproc.h | 7
arch/arm64/kvm/Makefile | 2 +-
arch/arm64/kvm/sys_regs.c
sys_regs
array, kvm_register_target_sys_reg_table() becomes
kvm_check_target_sys_reg_table(), which uses BUG_ON() in keeping
with the other callers in this file.
Signed-off-by: James Morse
---
arch/arm64/include/asm/kvm_coproc.h | 3 +--
arch/arm64/kvm/sys_regs.c| 16
The only entry in the genericv8_sys_regs arrays is for emulation of
ACTLR_EL1. As all targets emulate this in the same way, move it to
sys_reg_descs[].
Signed-off-by: James Morse
---
arch/arm64/kvm/sys_regs.c| 28 ++
arch/arm64/kvm/sys_regs_generic_v8.c | 30
that take
it.
Signed-off-by: James Morse
---
arch/arm64/kvm/sys_regs.c | 87 +++
1 file changed, 16 insertions(+), 71 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f8407cfa9032..14333005b476 100644
--- a/arch/arm64/kvm
Before emptying the target_table lists, and then removing their
infrastructure, add some tolerance to an empty list.
Instead of bugging-out on an empty list, pretend we already
reached the end in the two-list-walk.
Signed-off-by: James Morse
---
arch/arm64/kvm/sys_regs.c | 5 -
1 file
arm64/kill_target_table/v1
Thanks,
James Morse (5):
KVM: arm64: Drop the target_table[] indirection
KVM: arm64: Tolerate an empty target_table list
KVM: arm64: Move ACTLR_EL1 emulation to the sys_reg_descs array
KVM: arm64: Remove target_table from exit handlers
KVM: arm64: Remove the t
> an instruction which wasn't mapped in the EL2 translation tables. Using
> objdump showed the two functions as separate symbols in the .text section.
Bother. Looks like I didn't have CONFIG_ARM64_PSEUDO_NMI enabled when I went
looking for
these!
Acked-by: James Morse
Thanks,
p_va applied in __kvm_vcpu_run_nvhe.
So it is!
> kern_hyp_va is currently idempotent as it just masks and inserts the
> tag, but this could change in future and the second application is
> unnecessary.
Reviewed-by: James Morse
Thanks,
James
same array, as they are backed by the
* same system registers.
*/
-#ifdef CPU_BIG_ENDIAN
-#define CPx_OFFSET 1
-#else
-#define CPx_OFFSET 0
-#endif
+#define CPx_OFFSET IS_ENABLED(CONFIG_CPU_BIG_ENDIAN)
#define vcpu_cp14(v,r) ((v)->arch.ctxt.copro[(r) ^ CPx_OFFSET])
#define vcp
ACTLR_EL1 is a 64bit register while the 32bit ACTLR is obviously 32bit.
For 32bit software, the extra bits are accessible via ACTLR2... which
KVM doesn't emulate.
Suggested-by: Marc Zyngier
Signed-off-by: James Morse
---
arch/arm64/kvm/sys_regs_generic_v8.c | 10 ++
1 file change
oring this register. Keep the storage for this register
in sys_regs[] as this is how the value is exposed to user-space,
removing it would break migration.
Signed-off-by: James Morse
---
arch/arm64/kvm/hyp/sysreg-sr.c | 2 --
arch/arm64/kvm/sys_regs.c | 2 --
2 files changed, 4 deletions(-)
for the guest. This register
only affects execution at EL1, and the host's value is restored before
we return to host EL1.
Convert the 32bit register index back to the 64bit version.
Cc: sta...@vger.kernel.org
Suggested-by: Marc Zyngier
Signed-off-by: James Morse
---
arch/arm64/kvm/
, I'm not sure about 2&3.
Thanks,
James Morse (3):
KVM: arm64: Stop writing aarch32's CSSELR into ACTLR
KVM: arm64: Add emulation for 32bit guests accessing ACTLR2
KVM: arm64: Stop save/restoring ACTLR_EL1
arch/arm64/kvm/hyp/sysreg-sr.c | 2 --
arch/arm64/kv
Hi Marc,
On 28/05/2020 13:38, Marc Zyngier wrote:
> On 2020-05-28 13:36, Marc Zyngier wrote:
>> On 2020-05-26 17:18, James Morse wrote:
>>> KVM sets HCR_EL2.TACR (which it calls HCR_TAC) via HCR_GUEST_FLAGS.
>>> This means ACTLR* accesses from the guest are always tra
Hi Marc,
On 28/05/2020 09:57, Marc Zyngier wrote:
> On 2020-05-26 17:18, James Morse wrote:
>> access_csselr() uses the 32bit r->reg value to access the 64bit array,
>> so reads and write the wrong value. sys_regs[4], is ACTLR_EL1, which
>> is subsequently save/restored
Hi Marc,
On 22/04/2020 13:00, Marc Zyngier wrote:
> SPSR_EL1 being a VNCR-capable register with ARMv8.4-NV, move it to
> the sysregs array and update the accessors.
Reviewed-by: James Morse
Thanks,
James
Hi Marc,
On 22/04/2020 13:00, Marc Zyngier wrote:
> As we're about to move SPSR_EL1 into the VNCR page, we need to
> disassociate it from the rest of the 32bit cruft. Let's break
> the array into individual fields.
Reviewed-by: James Mors
Hi Marc,
On 22/04/2020 13:00, Marc Zyngier wrote:
> SP_EL1 being a VNCR-capable register with ARMv8.4-NV, move it to the
> system register array and update the accessors.
Reviewed-by: James Morse
Thanks,
James
Hi Marc,
On 22/04/2020 13:00, Marc Zyngier wrote:
> As ELR-EL1 is a VNCR-capable register with ARMv8.4-NV, let's move it to
> the sys_regs array and repaint the accessors. While we're at it, let's
> kill the now useless accessors used only on the fault injection path.
With the reset thing reported by Zenghui and Zengtao on the previous patch
fixed:
Reviewed-by: James Morse
(otherwise struct kvm_regs isn't userspace-only!)
Thanks,
James
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
erpart, so there is no semantic change here.
Reviewed-by: James Morse
Thanks,
James
al, int reg)
> +{
> + if (!vcpu->arch.sysregs_loaded_on_cpu)
> + goto memory_write;
> +
> + if (__vcpu_write_sys_reg_to_cpu(val, reg))
> + return;
> +
> +memory_write:
> __vcpu_sys_reg(vcpu, reg) =
ACTLR_EL1 is a 64bit register while the 32bit ACTLR is obviously 32bit.
For 32bit software, the extra bits are accessible via ACTLR2... which
KVM doesn't emulate.
Signed-off-by: James Morse
---
I'm not convinced this is endian safe, but it does match what
kvm_inject_undef32(
for the guest. This register
only affects execution at EL1, and the host's value is restored before
we return to host EL1.
Rename access_csselr() to access_csselr_el1(), to indicate it expects
the 64bit register index, and pass it CSSELR_EL1 from cp15_regs[].
Cc: sta...@vger.ker
m the host.
Stop save/restoring this register.
This also stops this register being affected by sysregs_loaded_on_cpu,
so we can provide 32 bit accessors that always use the in-memory copy.
Signed-off-by: James Morse
---
arch/arm64/kvm/hyp/sysreg-sr.c | 2 --
arch/arm64/kvm/sys_regs.c | 2
with VHE?
vcpu_cp15() modifies the in-memory copy, surely a vcpu_put() will clobber
everything it did, or fail to restore it when entering the guest.
Thanks,
James Morse (3):
KVM: arm64: Stop writing aarch32's CSSELR into ACTLR
KVM: arm64: Stop save/restoring ACTLR_EL1
KVM: arm64: Add em
> +#define S2_PMD_LEVEL 2
> +#define S2_PTE_LEVEL 3
Are these really just for stage2, would the stage1 definition be the same?
~
Digging into the VTCR_EL2.SL0 trickery, it does everything at pgd where there
are no block
mappings, and
Hi Alex,
On 12/05/2020 16:47, Alexandru Elisei wrote:
> On 5/12/20 12:17 PM, James Morse wrote:
>> On 11/05/2020 17:38, Alexandru Elisei wrote:
>>> On 4/22/20 1:00 PM, Marc Zyngier wrote:
>>>> From: Christoffer Dall
>>>>
>>>> As we are about
Hi Andrew,
On 07/05/2020 16:13, Andrew Scull wrote:
>> @@ -176,7 +177,7 @@ static void clear_stage2_pud_entry(struct kvm_s2_mmu
>> *mmu, pud_t *pud, phys_addr
>> pmd_t *pmd_table __maybe_unused = stage2_pmd_offset(kvm, pud, 0);
>> VM_BUG_ON(stage2_pud_huge(kvm, *pud));
>> stage2_pu
Hi Alex, Marc,
(just on this last_vcpu_ran thing...)
On 11/05/2020 17:38, Alexandru Elisei wrote:
> On 4/22/20 1:00 PM, Marc Zyngier wrote:
>> From: Christoffer Dall
>>
>> As we are about to reuse our stage 2 page table manipulation code for
>> shadow stage 2 page tables in the context of nested
nning in vEL2?
> so move the used_lrs
> field and change the prototypes and implementations of these functions to
> take the cpu_if parameter directly.
> No functional change.
Looks like no change!
Reviewed-by: James Morse
Thanks,
James
h the vtcr properties into
kvm_s2_mmu
that way you could drop the kvm backref, and only things that take vm-wide
locks would
need the kvm pointer. But I don't think it matters.
I think I get it. I can't see anything that should be the other vm/vcpu pointer.
Reviewed-by: James Morse
Hi guys,
On 23/04/2020 13:03, Marc Zyngier wrote:
> On 2020-04-23 12:35, James Morse wrote:
>> On 22/04/2020 17:18, Marc Zyngier wrote:
>>> From: Zenghui Yu
>>>
>>> It's likely that the vcpu fails to handle all virtual interrupts if
>>> userspa
Hi Zenghui,
On 23/04/2020 12:57, Zenghui Yu wrote:
> On 2020/4/23 19:35, James Morse wrote:
>> On 22/04/2020 17:18, Marc Zyngier wrote:
>>> From: Zenghui Yu
>>>
>>> It's likely that the vcpu fails to handle all virtual interrupts if
>>> userspa
^
[ 1742.386399] 0008e1bf1f80: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
ff
[ 1742.393645] 0008e1bf2000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
ff
[ 1742.400889]
==
[ 1742.408132] Disabling lock
eal solely with
> the virtual state. Note that the API differs from that of GICv3,
> where userspace exclusively uses ISPENDR to set the state. Too
> bad we can't reuse it.
Reviewed-by: James Morse
Thanks,
James
Hi Marc,
On 20/04/2020 11:03, Marc Zyngier wrote:
> On Fri, 17 Apr 2020 17:48:34 +0100
> James Morse wrote:
>> On 17/04/2020 13:41, Marc Zyngier wrote:
>>> On Fri, 17 Apr 2020 12:22:10 +0100 James Morse wrote:
>>>
>>>> On 17/04/2020 09:33, Marc Z
Hi Marc,
On 17/04/2020 13:41, Marc Zyngier wrote:
> On Fri, 17 Apr 2020 12:22:10 +0100 James Morse wrote:
>> On 17/04/2020 09:33, Marc Zyngier wrote:
>>> There is no point in accessing the HW when writing to any of the
>>> ISPENDR/ICPENDR registers from userspace, as o
Hi Marc,
On 17/04/2020 09:33, Marc Zyngier wrote:
> There is no point in accessing the HW when writing to any of the
> ISPENDR/ICPENDR registers from userspace, as only the guest should
> be allowed to change the HW state.
>
> Introduce new userspace-specific accessors that deal solely with
> the
eal solely with
> the virtual state.
>
> Reported-by: James Morse
Tested on both machines I've hit this on:
Tested-by: James Morse
and perhaps more useful:
Reviewed-by: James Morse
Thanks,
James
Hi Geng,
On 16/04/2020 13:07, gengdongjiu wrote:
> On 2020/4/14 20:18, James Morse wrote:
>> On 11/04/2020 13:17, Dongjiu Geng wrote:
>>> When the RAS Extension is implemented, b0b011000, 0b011100,
>>> 0b011101, 0b00, and 0b01, are not used and reserved
>
Hi Geng,
On 11/04/2020 13:17, Dongjiu Geng wrote:
> When the RAS Extension is implemented, b0b011000, 0b011100,
> 0b011101, 0b00, and 0b01, are not used and reserved
> to the DFSC[5:0] of ESR_ELx, but the code still checks these
> unused bits, so remove them.
They aren't unused: CPUs with
Hi Marc,
On 09/04/2020 09:27, Marc Zyngier wrote:
> On Wed, 8 Apr 2020 12:16:01 +0100
> James Morse wrote:
>> On 08/04/2020 11:07, Marc Zyngier wrote:
>>> I don't fully agree with the analysis, Remember we are looking at the
>>> state of the physical interrupt
Hi Marc,
On 08/04/2020 11:07, Marc Zyngier wrote:
> On Mon, 6 Apr 2020 16:03:55 +0100
> James Morse wrote:
>
>> kvm_arch_timer_get_input_level() needs to get the arch_timer_context for
>> a particular vcpu, and uses kvm_get_running_vcpu() to find it.
>>
>> kvm
sense for handling a device ioctl(),
so instead pass the vcpu through to kvm_arch_timer_get_input_level(). It's
not clear that an intid makes much sense without the paired vcpu.
Suggested-by: Andre Przywara
Signed-off-by: James Morse
---
include/kvm/arm_arch_timer.h | 2 +-
include/kvm/arm_v