Hi James,
On 2017/12/7 3:04, James Morse wrote:
> Hi gengdongjiu,
>
> On 06/12/17 10:26, gengdongjiu wrote:
>> On 2017/11/15 0:00, James Morse wrote:
+ * error has not been propagated
+ */
+ run->exit_reason = KVM_EXIT_EXCEPTION;
+ run->ex
On Tue, Oct 31, 2017 at 8:51 AM, Dave Martin wrote:
> This patch implements support for saving and restoring the SVE
> registers around signals.
>
> A fixed-size header struct sve_context is always included in the
> signal frame encoding the thread's vector length at the time of
> signal delivery,
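A minimal C sketch of such a fixed-size header (hypothetical struct and field names, modelled only on the description above — the authoritative layout is in the arm64 uapi sigcontext headers):

```c
#include <stdint.h>

/* Hypothetical sketch of a fixed-size signal-frame record that encodes
 * the thread's vector length at the time of signal delivery. */
struct sve_context_sketch {
	struct {
		uint32_t magic;	/* identifies this record in the frame */
		uint32_t size;	/* total size, including any payload */
	} head;
	uint16_t vl;		/* vector length in bytes */
	uint16_t reserved[3];	/* pad the header to a fixed 16 bytes */
};
```

Keeping the header fixed-size means a signal handler can always locate and parse it, even if the variable-length register payload that follows changes with the vector length.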
Hi gengdongjiu,
On 06/12/17 10:26, gengdongjiu wrote:
> On 2017/11/15 0:00, James Morse wrote:
>>> +* error has not been propagated
>>> +*/
>>> + run->exit_reason = KVM_EXIT_EXCEPTION;
>>> + run->ex.exception = ESR_ELx_EC_SERROR;
>>> + run->ex.
When CONFIG_UNMAP_KERNEL_AT_EL0 is set the SDEI entry point and the rest
of the kernel may be unmapped when we take an event. If this may be the
case, use an entry trampoline that can switch to the kernel page tables.
We can't use the provided PSTATE to determine whether to switch page
tables as w
SDEI needs to calculate an offset in the trampoline page too. Move
the extern char[] to sections.h.
This patch just moves code around.
Signed-off-by: James Morse
---
... there were more of these in v2 of KPTI ...
arch/arm64/include/asm/sections.h | 1 +
arch/arm64/mm/mmu.c | 2 -
Private SDE events are per-cpu, and need to be registered and enabled
on each CPU.
Hide this detail from the caller by adapting our {,un}register and
{en,dis}able calls to send an IPI to each CPU if the event is private.
CPU private events are unregistered when the CPU is powered-off, and
re-regi
The Software Delegated Exception Interface (SDEI) is an ARM standard
for registering callbacks from the platform firmware into the OS.
This is typically used to implement RAS notifications.
Such notifications enter the kernel at the registered entry-point
with the register values of the interrupte
Add __uaccess_{en,dis}able_hw_pan() helpers to set/clear the PSTATE.PAN
bit.
Signed-off-by: James Morse
---
arch/arm64/include/asm/uaccess.h | 12
arch/arm64/kernel/suspend.c | 4 ++--
2 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/uacce
SDEI inherits the 'use hvc' bit that is also used by PSCI. PSCI does all
its initialisation early; SDEI does its initialisation late.
Remove the __init annotation from acpi_psci_use_hvc().
Signed-off-by: James Morse
Acked-by: Catalin Marinas
---
The function name is unchanged as this bit is named 'PSCI_USE_HV
Today the arm64 arch code allocates an extra IRQ stack per-cpu. If we
also have SDEI and VMAP stacks we need two extra per-cpu VMAP stacks.
Move the VMAP stack allocation out to a helper in a new header file.
This avoids missing THREADINFO_GFP, or getting the all-important alignment
wrong.
Signed
The Software Delegated Exception Interface (SDEI) is an ARM standard
for registering callbacks from the platform firmware into the OS.
This is typically used to implement firmware notifications (such as
firmware-first RAS) or an IRQ that has been promoted to a
firmware-assisted NMI.
Add th
When a CPU enters an idle lower-power state or is powering off, we
need to mask SDE events so that no events can be delivered while we
are messing with the MMU as the registered entry points won't be valid.
If the system reboots, we want to unregister all events and mask the CPUs.
For kexec this a
SDEI defines a new ACPI table to indicate the presence of the interface.
The conduit is discovered in the same way as PSCI.
For ACPI we need to create the platform device ourselves as SDEI doesn't
have an entry in the DSDT.
The SDEI platform device should be created after ACPI has been initialise
KVM uses tpidr_el2 as its private vcpu register, which makes sense for
non-vhe world switch as only KVM can access this register. This means
vhe Linux has to use tpidr_el1, which KVM has to save/restore as part
of the host context.
If the SDEI handler code runs behind KVMs back, it mustn't access
Make tpidr_el2 a cpu-offset for per-cpu variables in the same way the
host uses tpidr_el1. This lets tpidr_el{1,2} have the same value, and
on VHE they can be the same register.
KVM calls hyp_panic() when anything unexpected happens. This may occur
while a guest owns the EL1 registers. KVM stashes
The Software Delegated Exception Interface (SDEI) is an ARM standard
for registering callbacks from the platform firmware into the OS.
This is typically used to implement RAS notifications, or from an
IRQ that has been promoted to a firmware-assisted NMI.
Add a new devicetree binding to describe t
Now that a VHE host uses tpidr_el2 for the cpu offset we no longer
need KVM to save/restore tpidr_el1. Move this from the 'common' code
into the non-vhe code. While we're at it, on VHE we don't need to
save the ELR or SPSR as kernel_entry in entry.S will have pushed these
onto the kernel stack, and
kvm_host_cpu_state is a per-cpu allocation made from kvm_arch_init()
used to store the host EL1 registers when KVM switches to a guest.
Make it easier for ASM to generate pointers into this per-cpu memory
by making it a static allocation.
Signed-off-by: James Morse
Acked-by: Christoffer Dall
--
Now that KVM uses tpidr_el2 in the same way as Linux's cpu_offset in
tpidr_el1, merge the two. This saves KVM from save/restoring tpidr_el1
on VHE hosts, and allows future code to blindly access per-cpu variables
without triggering world-switch.
Signed-off-by: James Morse
Reviewed-by: Christoffer
The Software Delegated Exception Interface (SDEI) is an ARM specification
for registering callbacks from the platform firmware into the OS.
This is intended to be used to implement firmware-first RAS notifications,
but also supports vendor-defined events and binding IRQs as events.
The document is
On Wed, Dec 06, 2017 at 05:09:49PM +, Julien Thierry wrote:
> When VHE is not present, KVM needs to save and restore PMSCR_EL1 when
> possible. If SPE is used by the host, the value of PMSCR_EL1 cannot be saved
> for the guest.
> If the host starts using SPE between two save+restore on the same vc
When VHE is not present, KVM needs to save and restore PMSCR_EL1 when
possible. If SPE is used by the host, the value of PMSCR_EL1 cannot be saved
for the guest.
If the host starts using SPE between two save+restore on the same vcpu,
restore will write the value of PMSCR_EL1 read during the first save
When deciding whether to invalidate FPSIMD state cached in the cpu,
the backend function sve_flush_cpu_state() attempts to dereference
__this_cpu_read(fpsimd_last_state). However, this is not safe:
there is no guarantee that this task_struct pointer is still valid,
because the task could have exit
Currently, loading of a task's fpsimd state into the CPU registers
is skipped if that task's state is already present in the registers
of that CPU.
However, the code relies on the struct fpsimd_state * (and by
extension struct task_struct *) to unambiguously identify a task.
There is a particular
This series contains a few fixes for known issues in the arm64 FPSIMD
and SVE implementation.
This supersedes the previous posting. [1]
Note that although patch 2 is not a fix, it provides refactoring that is
used by the fix in patch 3.
[1] [PATCH 0/3] arm64: SVE fixes for v4.15-rc1
http://lists
There is currently some duplicate logic to associate current's
FPSIMD context with the cpu when loading FPSIMD state into the cpu
regs.
Subsequent patches will update that logic, so in order to ensure it
only needs to be done in one place, this patch factors the relevant
code out into a new functi
On Sat, 2017-12-02 at 10:01 +0530, Sagar Arun Kamble wrote:
> There is no real need for the users of timecounters to define
> cyclecounter
> and timecounter variables separately. Since timecounter will always
> be
> based on cyclecounter, have cyclecounter struct as member of
> timecounter
> struct
On Wed, Dec 06, 2017 at 05:17:28PM +0300, Yury Norov wrote:
> On Wed, Dec 06, 2017 at 11:59:04AM +0100, Christoffer Dall wrote:
> > On Tue, Dec 05, 2017 at 06:24:46PM +0300, Yury Norov wrote:
> > > On Mon, Dec 04, 2017 at 09:05:05PM +0100, Christoffer Dall wrote:
> > > > From: Christoffer Dall
> >
On 06/12/17 15:18, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 06, 2017 at 02:57:29PM +, Marc Zyngier wrote:
>> On 06/12/17 14:48, Konrad Rzeszutek Wilk wrote:
>>> On Wed, Dec 06, 2017 at 02:38:25PM +, Marc Zyngier wrote:
We're playing a dangerous game with struct alt_instr, as we produ
On 01/12/17 15:19, Christoffer Dall wrote:
Hi Julien,
On Tue, Nov 14, 2017 at 04:42:13PM +, Julien Thierry wrote:
On 12/10/17 11:41, Christoffer Dall wrote:
The debug save/restore functions can be improved by using the has_vhe()
static key instead of the instruction alternative. Using t
On Wed, Dec 06, 2017 at 02:57:29PM +, Marc Zyngier wrote:
> On 06/12/17 14:48, Konrad Rzeszutek Wilk wrote:
> > On Wed, Dec 06, 2017 at 02:38:25PM +, Marc Zyngier wrote:
> >> We're playing a dangerous game with struct alt_instr, as we produce
> >> it using assembly tricks, but parse them us
On 06/12/17 14:48, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 06, 2017 at 02:38:25PM +, Marc Zyngier wrote:
>> We're playing a dangerous game with struct alt_instr, as we produce
>> it using assembly tricks, but parse them using the C structure.
>> We just assume that the respective alignments
On Wed, Dec 06, 2017 at 02:38:25PM +, Marc Zyngier wrote:
> We're playing a dangerous game with struct alt_instr, as we produce
> it using assembly tricks, but parse them using the C structure.
> We just assume that the respective alignments of the two will
> be the same.
>
> But as we add mor
On 06/12/17 14:17, Andre Przywara wrote:
> Hi,
>
> On 06/12/17 14:11, Andre Przywara wrote:
>> Hi,
>>
>> while trying to boot 4.15-rc1 on my Calxeda Midway I observed a crash
>> (see below). I can't look further into this today, but wanted to report
>> this anyway.
>>
>> Digging around a bit this
Both HYP io mappings call ioremap, followed by create_hyp_io_mappings.
Let's move the ioremap call into create_hyp_io_mappings itself, which
simplifies the code a bit and allows for further refactoring.
Signed-off-by: Marc Zyngier
---
arch/arm/include/asm/kvm_mmu.h | 3 ++-
arch/arm64/include
As we're about to change the way we map devices at HYP, we need
to move away from kern_hyp_va on an IO address.
One way of achieving this is to store the VAs in kvm_vgic_global_state,
and use that directly from the HYP code. This requires a small change
to create_hyp_io_mappings so that it can als
So far we have mapped our HYP IO (which is essentially the GICv2 control
registers) using the same method as for memory. It recently appeared
that this is a bit unsafe:
we compute the HYP VA using the kern_hyp_va helper, but that helper
is only designed to deal with kernel VAs coming from the linear map,
an
Add an encoder for the EXTR instruction, which also implements the ROR
variant (where Rn == Rm).
Signed-off-by: Marc Zyngier
---
arch/arm64/include/asm/insn.h | 6 ++
arch/arm64/kernel/insn.c | 32
2 files changed, 38 insertions(+)
diff --git a/arch/ar
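A user-space sketch of such an encoder (field layout taken from the Arm ARM's EXTR description; the function names are hypothetical, not the kernel's insn.c API):

```c
#include <stdint.h>

/* EXTR (64-bit) encoding: sf=1 | 00 | 100111 | N=1 | 0 | Rm | imms | Rn | Rd.
 * 'lsb' is the bit position the extraction starts at (the imms field). */
static uint32_t gen_extr64(unsigned rd, unsigned rn, unsigned rm, unsigned lsb)
{
	return 0x93C00000u | ((rm & 0x1f) << 16) | ((lsb & 0x3f) << 10) |
	       ((rn & 0x1f) << 5) | (rd & 0x1f);
}

/* ROR (immediate) is the alias EXTR Rd, Rn, Rn, #shift: extracting from
 * a register pair made of the same register is a rotate. */
static uint32_t gen_ror64(unsigned rd, unsigned rn, unsigned shift)
{
	return gen_extr64(rd, rn, rn, shift);
}
```

For example, `gen_ror64(0, 1, 4)` produces the encoding of `ror x0, x1, #4`.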
We're missing a way to generate the encoding of the N immediate,
which is only a single bit used in a number of instructions that take
an immediate.
Signed-off-by: Marc Zyngier
---
arch/arm64/include/asm/insn.h | 1 +
arch/arm64/kernel/insn.c | 4
2 files changed, 5 insertions(+)
d
The encoder for ADD/SUB (immediate) can only cope with 12bit
immediates, while there is an encoding for a 12bit immediate shifted
by 12 bits to the left.
Let's fix this small oversight by allowing the LSL_12 bit to be set.
Signed-off-by: Marc Zyngier
---
arch/arm64/kernel/insn.c | 18 ++
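The check the encoder needs can be sketched as follows (hypothetical helper, not the kernel's insn.c code):

```c
#include <stdint.h>

/* An ADD/SUB immediate is encodable either as a plain 12-bit value
 * (shift = 0) or as a 12-bit value shifted left by 12 (the LSL #12
 * form the fix above enables). */
static int addsub_imm_encode(uint64_t imm, unsigned *imm12, unsigned *sh)
{
	if ((imm & ~0xfffull) == 0) {
		*imm12 = (unsigned)imm;
		*sh = 0;
		return 1;
	}
	if ((imm & ~0xfff000ull) == 0) {
		*imm12 = (unsigned)(imm >> 12);
		*sh = 1;	/* sets the LSL_12 bit in the encoding */
		return 1;
	}
	return 0;	/* needs a register operand or a MOVZ/MOVK sequence */
}
```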
kvm_vgic_global_state is part of the read-only section, and is
usually accessed using a PC-relative address generation (adrp + add).
It is thus useless to use kern_hyp_va() on it, and actively problematic
if kern_hyp_va() becomes non-idempotent. On the other hand, there is
no way that the compiler
We lack a way to encode operations such as AND, ORR, EOR that take
an immediate value. Doing so is quite involved, and is all about
reverse engineering the decoding algorithm described in the
pseudocode function DecodeBitMasks().
Signed-off-by: Marc Zyngier
---
arch/arm64/include/asm/insn.h |
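To make the reverse engineering concrete, here is a user-space sketch of the *decoding* direction, following the DecodeBitMasks() pseudocode (the encoder is essentially a search that inverts this; names are hypothetical):

```c
#include <stdint.h>

/* Decode an AArch64 bitmask immediate (fields N:immr:imms) into the
 * 64-bit value it represents. Returns -1 for reserved encodings. */
static int decode_bit_masks(unsigned n, unsigned immr, unsigned imms,
			    uint64_t *out)
{
	unsigned combined = (n << 6) | (~imms & 0x3f);
	unsigned len, esize, levels, s, r, i;
	uint64_t welem, emask, pattern = 0;

	if (combined == 0)
		return -1;			/* reserved encoding */
	len = 31 - __builtin_clz(combined);	/* highest set bit of N:~imms */
	esize = 1u << len;			/* element size: 2..64 bits */
	levels = esize - 1;

	s = imms & levels;			/* number of ones, minus one */
	r = immr & levels;			/* rotate amount */
	if (s == levels)
		return -1;			/* all-ones element is reserved */

	emask = (esize == 64) ? ~0ull : (1ull << esize) - 1;
	welem = (1ull << (s + 1)) - 1;		/* s + 1 <= 63 here, so no UB */
	if (r)					/* rotate within the element */
		welem = ((welem >> r) | (welem << (esize - r))) & emask;

	for (i = 0; i < 64; i += esize)		/* replicate to fill 64 bits */
		pattern |= welem << i;

	*out = pattern;
	return 0;
}
```

For instance, N=0, immr=0, imms=0b111100 selects a 2-bit element with a single one, replicated to 0x5555555555555555.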
Now that we can dynamically compute the kernel/hyp VA mask, there
is no need for a feature flag to trigger the alternative patching.
Let's drop the flag and everything that depends on it.
Signed-off-by: Marc Zyngier
---
arch/arm64/include/asm/cpucaps.h | 2 +-
arch/arm64/kernel/cpufeature.c | 19
Update the documentation to reflect the new tricks we play on the
EL2 mappings...
Signed-off-by: Marc Zyngier
---
Documentation/arm64/memory.txt | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/Documentation/arm64/memory.txt b/Documentation/arm64/memory.txt
index 671bc
Displaying the HYP VA information is slightly counterproductive when
using VA randomization. Turn it into a debug feature only, and adjust
the last displayed value to reflect the top of RAM instead of ~0.
Signed-off-by: Marc Zyngier
---
virt/kvm/arm/mmu.c | 7 ---
1 file changed, 4 insertion
The main idea behind randomising the EL2 VA is that we usually have
a few spare bits between the most significant bit of the VA mask
and the most significant bit of the linear mapping.
Those bits are by definition a bunch of zeroes, and could be useful
to move things around a bit. Of course, the m
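As a back-of-the-envelope illustration of the idea (hypothetical helper, not the kernel's code): if the linear map needs addresses only up to some top and the VA mask covers va_bits, the bits in between are guaranteed zero and free for a random tag:

```c
#include <stdint.h>

/* Number of guaranteed-zero bits between the MSB of the linear map
 * and the MSB of the kernel/hyp VA mask -- the space available for
 * random tag bits, per the scheme described above. */
static unsigned hyp_va_spare_bits(unsigned va_bits, uint64_t linear_top)
{
	/* index of the most significant bit the linear map actually uses */
	unsigned tag_lsb = 64 - __builtin_clzll(linear_top - 1);

	return va_bits > tag_lsb ? va_bits - tag_lsb : 0;
}
```

For example, with a 48-bit VA space and 4GB of RAM in the linear map, 16 bits would be available for randomisation.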
So far, we're using a complicated sequence of alternatives to
patch the kernel/hyp VA mask on non-VHE, and NOP out the
masking altogether when on VHE.
The newly introduced dynamic patching gives us the opportunity
to simplify that code by patching a single instruction with
the correct mask (instea
asm-offsets.h contains a few DMA related definitions that have
the exact same names as the enum members they are derived from.
While this is not a problem so far, it will become an issue if
both asm-offsets.h and include/linux/dma-direction.h are pulled
in by the same file.
Let's sidestep the issu
We've so far relied on a patching infrastructure that only gave us
a single alternative, without any way to finely control what gets
patched. For a single feature, this is an all or nothing thing.
It would be interesting to have a more fine grained way of patching
the kernel though, where we could
asm-offsets.h contains a number of definitions that are not used
at all, and in some cases conflict with other definitions (such as
NSEC_PER_SEC).
Spring clean-up time.
Signed-off-by: Marc Zyngier
---
arch/arm64/kernel/asm-offsets.c | 5 -
1 file changed, 5 deletions(-)
diff --git a/arch/a
So far, we've been lucky enough that none of the include files
that asm-offsets.c requires do include asm-offsets.h. This is
about to change, and would introduce a nasty circular dependency...
Let's now guard the inclusion of asm-offsets.h so that it never
gets pulled from asm-offsets.c.
Signed-o
Whilst KVM benefits from the kernel randomisation when running VHE,
there is no randomisation whatsoever when the kernel is running at
EL1, as we directly use a fixed offset from the linear mapping.
This series proposes to randomise the offset by inserting a few random
bits between the MSB of the
We're playing a dangerous game with struct alt_instr, as we produce
it using assembly tricks, but parse them using the C structure.
We just assume that the respective alignments of the two will
be the same.
But as we add more fields to this structure, the alignment requirements
of the structure ma
On 06/12/17 14:11, Andre Przywara wrote:
> Hi,
>
> while trying to boot 4.15-rc1 on my Calxeda Midway I observed a crash
> (see below). I can't look further into this today, but wanted to report
> this anyway.
>
> Digging around a bit this is due to the VGIC not initializing properly
> due to GIC
Hi,
On 06/12/17 14:11, Andre Przywara wrote:
> Hi,
>
> while trying to boot 4.15-rc1 on my Calxeda Midway I observed a crash
> (see below). I can't look further into this today, but wanted to report
> this anyway.
>
> Digging around a bit this is due to the VGIC not initializing properly
> due t
On Wed, Dec 06, 2017 at 11:59:04AM +0100, Christoffer Dall wrote:
> On Tue, Dec 05, 2017 at 06:24:46PM +0300, Yury Norov wrote:
> > On Mon, Dec 04, 2017 at 09:05:05PM +0100, Christoffer Dall wrote:
> > > From: Christoffer Dall
> > >
> > > The VGIC can now support the life-cycle of mapped level-tr
Hi,
while trying to boot 4.15-rc1 on my Calxeda Midway I observed a crash
(see below). I can't look further into this today, but wanted to report
this anyway.
Digging around a bit this is due to the VGIC not initializing properly
due to GICC being advertised as just 4K, not 8K.
This can be worked
On Wed, Dec 06, 2017 at 11:53:00AM +0100, Christoffer Dall wrote:
> On Tue, Dec 05, 2017 at 12:31:51PM +, Dave Martin wrote:
> > On Tue, Dec 05, 2017 at 10:09:15AM +0100, Christoffer Dall wrote:
> > > On Fri, Dec 01, 2017 at 03:19:40PM +, Dave Martin wrote:
> > > > The HCR_EL2.TID3 flag nee
On Tue, Dec 05, 2017 at 06:24:46PM +0300, Yury Norov wrote:
> On Mon, Dec 04, 2017 at 09:05:05PM +0100, Christoffer Dall wrote:
> > From: Christoffer Dall
> >
> > The VGIC can now support the life-cycle of mapped level-triggered
> > interrupts, and we no longer have to read back the timer state o
On Tue, Dec 05, 2017 at 04:46:08PM +0300, Yury Norov wrote:
> On Mon, Dec 04, 2017 at 09:05:00PM +0100, Christoffer Dall wrote:
> > From: Christoffer Dall
> >
> > We are about to distinguish between userspace accesses and mmio traps
> > for a number of the mmio handlers. When the requester vcpu
On Tue, Dec 05, 2017 at 12:31:51PM +, Dave Martin wrote:
> On Tue, Dec 05, 2017 at 10:09:15AM +0100, Christoffer Dall wrote:
> > On Fri, Dec 01, 2017 at 03:19:40PM +, Dave Martin wrote:
> > > The HCR_EL2.TID3 flag needs to be set when trapping guest access to
> > > the CPU ID registers is r
On 2017/11/15 0:00, James Morse wrote:
>> + * error has not been propagated
>> + */
>> +run->exit_reason = KVM_EXIT_EXCEPTION;
>> +run->ex.exception = ESR_ELx_EC_SERROR;
>> +run->ex.error_code = KVM_SEI_SEV_RECOVERABLE;
>> +re
On 05/12/17 22:39, Yury Norov wrote:
> On Tue, Dec 05, 2017 at 04:47:46PM +, Marc Zyngier wrote:
>> On 05/12/17 15:03, Yury Norov wrote:
>>> On Mon, Dec 04, 2017 at 09:05:04PM +0100, Christoffer Dall wrote:
From: Christoffer Dall
For mapped IRQs (with the HW bit set in the LR) w