From: Christoffer Dall
Move vcpu_load() and vcpu_put() into the architecture specific
implementations of kvm_arch_vcpu_ioctl_set_regs().
Signed-off-by: Christoffer Dall
---
arch/mips/kvm/mips.c | 3 +++
arch/powerpc/kvm/book3s.c | 3 +++
arch/powerpc/kvm/booke.c | 3 +++
arch/s390/kvm/kv
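The resulting shape is the same in each architecture; a minimal sketch (the __set_regs() helper name is an assumption standing in for the pre-existing function body, not a literal hunk from the patch):

    int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
    {
            vcpu_load(vcpu);        /* previously done by the generic dispatcher */
            __set_regs(vcpu, regs); /* hypothetical helper: the original body of this function */
            vcpu_put(vcpu);
            return 0;
    }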
From: Christoffer Dall
Move vcpu_load() and vcpu_put() into the architecture specific
implementations of kvm_arch_vcpu_ioctl_set_guest_debug().
Reviewed-by: David Hildenbrand
Signed-off-by: Christoffer Dall
---
arch/arm64/kvm/guest.c | 15 ---
arch/powerpc/kvm/book3s.c | 2 ++
From: Christoffer Dall
Move vcpu_load() and vcpu_put() into the architecture specific
implementations of kvm_arch_vcpu_ioctl_set_fpu().
Reviewed-by: David Hildenbrand
Signed-off-by: Christoffer Dall
---
arch/s390/kvm/kvm-s390.c | 15 ---
arch/x86/kvm/x86.c | 8 ++--
vir
From: Christoffer Dall
Move the calls to vcpu_load() and vcpu_put() into the architecture
specific implementations of kvm_arch_vcpu_ioctl() which dispatches
further architecture-specific ioctls on to other functions.
Some architectures support asynchronous vcpu ioctls which cannot call
vcpu_loa
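A sketch of the resulting dispatcher shape (the ioctl and helper names below are illustrative; KVM_INTERRUPT is the asynchronous case on some architectures):

    long kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
    {
            struct kvm_vcpu *vcpu = filp->private_data;
            void __user *argp = (void __user *)arg;
            long r;

            if (ioctl == KVM_INTERRUPT)     /* async: must not sleep on the vcpu mutex */
                    return kvm_vcpu_ioctl_interrupt(vcpu, argp);

            vcpu_load(vcpu);
            switch (ioctl) {
            /* remaining, synchronous ioctls */
            default:
                    r = -EINVAL;
            }
            vcpu_put(vcpu);
            return r;
    }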
From: Christoffer Dall
Move vcpu_load() and vcpu_put() into the architecture specific
implementations of kvm_arch_vcpu_ioctl_get_fpu().
Reviewed-by: David Hildenbrand
Signed-off-by: Christoffer Dall
---
arch/s390/kvm/kvm-s390.c | 4
arch/x86/kvm/x86.c | 7 +--
virt/kvm/kvm_main
From: Christoffer Dall
Move vcpu_load() and vcpu_put() into the architecture specific
implementations of kvm_arch_vcpu_ioctl_get_mpstate().
Reviewed-by: David Hildenbrand
Signed-off-by: Christoffer Dall
---
arch/s390/kvm/kvm-s390.c | 11 +--
arch/x86/kvm/x86.c | 3 +++
virt/kvm
From: Christoffer Dall
Move vcpu_load() and vcpu_put() into the architecture specific
implementations of kvm_arch_vcpu_ioctl_get_sregs().
Signed-off-by: Christoffer Dall
---
arch/powerpc/kvm/book3s.c | 8 +++-
arch/powerpc/kvm/booke.c | 9 -
arch/s390/kvm/kvm-s390.c | 4
arc
From: Christoffer Dall
Moving the call to vcpu_load() in kvm_arch_vcpu_ioctl_run() to after
we've called kvm_vcpu_first_run_init() simplifies some of the vgic
handling, and there is also no need to do vcpu_load() for things such
as handling the immediate_exit flag.
Signed-off-by: Christoffer Dall
---
vi
From: Christoffer Dall
Calling vcpu_load() registers preempt notifiers for this vcpu and calls
kvm_arch_vcpu_load(). The latter will soon be doing a lot of heavy
lifting on arm/arm64 and will try to do things such as enabling the
virtual timer and setting us up to handle interrupts from the time
From: Christoffer Dall
Move vcpu_load() and vcpu_put() into the architecture specific
implementations of kvm_arch_vcpu_ioctl_translate().
Reviewed-by: David Hildenbrand
Signed-off-by: Christoffer Dall
---
arch/powerpc/kvm/booke.c | 2 ++
arch/x86/kvm/x86.c | 3 +++
virt/kvm/kvm_main.c
From: Christoffer Dall
Move vcpu_load() and vcpu_put() into the architecture specific
implementations of kvm_arch_vcpu_ioctl_get_regs().
Signed-off-by: Christoffer Dall
---
arch/mips/kvm/mips.c | 3 +++
arch/powerpc/kvm/book3s.c | 3 +++
arch/powerpc/kvm/booke.c | 3 +++
arch/s390/kvm/kv
From: Christoffer Dall
Move vcpu_load() and vcpu_put() into the architecture specific
implementations of kvm_arch_vcpu_ioctl_set_mpstate().
Reviewed-by: David Hildenbrand
Signed-off-by: Christoffer Dall
---
arch/s390/kvm/kvm-s390.c | 3 +++
arch/x86/kvm/x86.c | 14 +++---
virt/
From: Christoffer Dall
Move vcpu_load() and vcpu_put() into the architecture specific
implementations of kvm_arch_vcpu_ioctl_set_sregs().
Signed-off-by: Christoffer Dall
---
arch/powerpc/kvm/book3s.c | 8 +++-
arch/powerpc/kvm/booke.c | 15 +++
arch/s390/kvm/kvm-s390.c | 4
From: Christoffer Dall
In preparation for moving calls to vcpu_load() and vcpu_put() into the
architecture specific implementations of the KVM vcpu ioctls, move the
calls in the main kvm_vcpu_ioctl() dispatcher function to each case
of the ioctl select statement. This allows us to move the vcpu_
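Concretely, each case in the generic kvm_vcpu_ioctl() switch ends up bracketing only the arch call with vcpu_load()/vcpu_put(); a sketch of one case (not the literal hunk):

    case KVM_GET_REGS: {
            struct kvm_regs *kvm_regs = kzalloc(sizeof(*kvm_regs), GFP_KERNEL);

            if (!kvm_regs) {
                    r = -ENOMEM;
                    break;
            }
            vcpu_load(vcpu);
            r = kvm_arch_vcpu_ioctl_get_regs(vcpu, kvm_regs);
            vcpu_put(vcpu);
            if (!r && copy_to_user(argp, kvm_regs, sizeof(*kvm_regs)))
                    r = -EFAULT;
            kfree(kvm_regs);
            break;
    }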
From: Christoffer Dall
Some architectures may decide to do different things during
kvm_arch_vcpu_load depending on the ioctl being executed. For example,
arm64 is about to do significant work in vcpu load/put when running a
vcpu, but it's problematic to do this for any other vcpu ioctl than
KVM_
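One way to let the architecture make that distinction is to record the ioctl number on the vcpu before calling into kvm_arch_vcpu_load(); a sketch under that assumption (the field and parameter names are guesses, not necessarily the final interface):

    void vcpu_load(struct kvm_vcpu *vcpu, unsigned int ioctl)
    {
            int cpu = get_cpu();

            vcpu->ioctl = ioctl;    /* assumed field: lets kvm_arch_vcpu_load() see the caller */
            preempt_notifier_register(&vcpu->preempt_notifier);
            kvm_arch_vcpu_load(vcpu, cpu);
            put_cpu();
    }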
From: Christoffer Dall
Move vcpu_load() and vcpu_put() into the architecture specific
implementations of kvm_arch_vcpu_ioctl_run().
Signed-off-by: Christoffer Dall
---
arch/mips/kvm/mips.c | 3 +++
arch/powerpc/kvm/powerpc.c | 6 +-
arch/s390/kvm/kvm-s390.c | 10 --
arch/
From: Christoffer Dall
As we're about to call vcpu_load() from architecture-specific
implementations of the KVM vcpu ioctls, but yet we access data
structures protected by the vcpu->mutex in the generic code, factor
this logic out from vcpu_load().
x86 is the only architecture which calls vcpu_l
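After the factoring, the mutex handling stays in the generic dispatcher and vcpu_load() shrinks to the preempt-notifier/arch part; roughly (a sketch, not the literal patch):

    /* in kvm_vcpu_ioctl(), before dispatching: */
    if (mutex_lock_killable(&vcpu->mutex))
            return -EINTR;
    /* dispatch the ioctl, then: */
    mutex_unlock(&vcpu->mutex);

    /* vcpu_load() no longer takes the mutex: */
    void vcpu_load(struct kvm_vcpu *vcpu)
    {
            int cpu = get_cpu();

            preempt_notifier_register(&vcpu->preempt_notifier);
            kvm_arch_vcpu_load(vcpu, cpu);
            put_cpu();
    }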
Hi Will,
On 12/03/2017 07:35 AM, Shanker Donthineni wrote:
> Hi Will, thanks for your review comments.
>
> On 12/01/2017 05:24 AM, Will Deacon wrote:
>> On Mon, Nov 27, 2017 at 05:18:00PM -0600, Shanker Donthineni wrote:
>>> The ARM architecture defines the memory locations that are permitted
>>>
From: Christoffer Dall
This series is an alternative approach to Eric Auger's direct EOI setup
patches [1] in terms of the KVM VGIC support.
The idea is to maintain existing semantics for the VGIC for mapped
level-triggered IRQs and also support the timer using mapped IRQs with
the same VGIC sup
From: Christoffer Dall
The __this_cpu_read() and __this_cpu_write() functions already implement
checks for the required preemption levels when using
CONFIG_DEBUG_PREEMPT which gives you nice error messages and such.
Therefore there is no need to explicitly check this using a BUG_ON() in
the code
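In other words, the patch drops explicit checks of the form below and relies on the built-in ones (the per-cpu variable name is just for illustration):

    /* before */
    BUG_ON(preemptible());
    __this_cpu_write(kvm_arm_running_vcpu, vcpu);

    /* after: __this_cpu_write() already complains under CONFIG_DEBUG_PREEMPT */
    __this_cpu_write(kvm_arm_running_vcpu, vcpu);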
From: Christoffer Dall
The GIC sometimes need to sample the physical line of a mapped
interrupt. As we know this to be notoriously slow, provide a callback
function for devices (such as the timer) which can do this much faster
than talking to the distributor, for example by comparing a few
in-me
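Conceptually the device passes a callback when mapping the IRQ, and the VGIC invokes it instead of performing a distributor access; a sketch of what that could look like (the exact prototypes are assumptions):

    /* registered together with the mapped interrupt */
    int kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu, unsigned int host_irq,
                              u32 vintid, bool (*get_input_level)(int vintid));

    /* the timer's implementation can answer from in-memory state */
    static bool kvm_arch_timer_get_input_level(int vintid)
    {
            struct kvm_vcpu *vcpu = kvm_arm_get_running_vcpu();

            return kvm_timer_should_fire(vcpu_vtimer(vcpu));
    }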
From: Christoffer Dall
The VGIC can now support the life-cycle of mapped level-triggered
interrupts, and we no longer have to read back the timer state on every
exit from the VM if we had an asserted timer interrupt signal, because
the VGIC already knows if we hit the unlikely case where the gues
From: Christoffer Dall
We are about to distinguish between userspace accesses and mmio traps
for a number of the mmio handlers. When the requester vcpu is NULL, it
means we are handling a userspace access.
Factor out the functionality to get the requester vcpu into its own
function, mostly so we
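A minimal sketch of such a helper (the exact name is an assumption):

    static struct kvm_vcpu *vgic_get_mmio_requester_vcpu(void)
    {
            struct kvm_vcpu *vcpu;

            preempt_disable();
            vcpu = kvm_arm_get_running_vcpu();      /* NULL for userspace accesses */
            preempt_enable();

            return vcpu;
    }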
From: Christoffer Dall
We currently check if the VM has a userspace irqchip on every exit from
the VCPU, and if so, we do some work to ensure correct timer behavior.
This is unfortunate, as we could avoid doing any work entirely, if we
didn't have to support irqchip in userspace.
Realizing the u
From: Christoffer Dall
For mapped IRQs (with the HW bit set in the LR) we have to follow some
rules of the architecture. One of these rules is that the VM must not be
allowed to deactivate a virtual interrupt with the HW bit set unless the
physical interrupt is also active.
This works fine when inj
From: Christoffer Dall
Level-triggered mapped IRQs are special because we only observe rising
edges as input to the VGIC, and we don't set the EOI flag and therefore
are not told when the level goes down, so that we can re-queue a new
interrupt when the level goes up.
One way to solve this probl
From: Christoffer Dall
The timer was modeled after a strict idea of modelling an interrupt line
level in software, meaning that only transitions in the level needed to
be reported to the VGIC. This works well for the timer, because the
arch timer code is in complete control of the device and can
On Mon, Dec 04, 2017 at 12:11:22PM +0100, Gomonovych, Vasyl wrote:
> Hi Christoffer
>
> It is just syntax sugar of course
> and in mentioned function context it looks harmonically because it is
> in the end of function return statement.
> But in context of around source files it is looks not so ha
On Wed, Nov 29, 2017 at 04:13:14PM +0100, Andrew Jones wrote:
> On Mon, Nov 20, 2017 at 08:16:47PM +0100, Christoffer Dall wrote:
> > For mapped IRQs (with the HW bit set in the LR) we have to follow some
> > rules of the architecture. One of these rules is that the VM must not be
> > allowed to deact
On Fri, Dec 01, 2017 at 06:04:32PM +, Andre Przywara wrote:
> Hi,
>
> On 20/11/17 19:16, Christoffer Dall wrote:
> > We are about to distinguish between userspace accesses and mmio traps
> > for a number of the mmio handlers. When the requester vcpu is NULL, it
> > means we are handling a user
On Mon, Dec 04, 2017 at 05:27:10PM +, Ard Biesheuvel wrote:
> On 4 December 2017 at 17:18, Steve Capper wrote:
> > Hi Ard,
> >
> > On Mon, Dec 04, 2017 at 04:25:18PM +, Ard Biesheuvel wrote:
> >> On 4 December 2017 at 14:13, Steve Capper wrote:
> >> > Re-arrange the kernel memory map s.t.
On 4 December 2017 at 17:18, Steve Capper wrote:
> Hi Ard,
>
> On Mon, Dec 04, 2017 at 04:25:18PM +, Ard Biesheuvel wrote:
>> On 4 December 2017 at 14:13, Steve Capper wrote:
>> > Re-arrange the kernel memory map s.t. the kernel image resides in the
>> > bottom 514MB of memory.
>>
>> I guess
On Mon, Dec 04, 2017 at 05:18:09PM +, Steve Capper wrote:
> Hi Ard,
>
[...]
Hi Ard,
On Mon, Dec 04, 2017 at 04:25:18PM +, Ard Biesheuvel wrote:
> On 4 December 2017 at 14:13, Steve Capper wrote:
> > Re-arrange the kernel memory map s.t. the kernel image resides in the
> > bottom 514MB of memory.
>
> I guess this breaks KASLR entirely, no? Given that it adds an offset
On 4 December 2017 at 14:13, Steve Capper wrote:
> Re-arrange the kernel memory map s.t. the kernel image resides in the
> bottom 514MB of memory.
I guess this breaks KASLR entirely, no? Given that it adds an offset
in the range [0 ... sizeof(VMALLOC_SPACE) / 4].
In any case, it makes sense to k
On Mon, Dec 04, 2017 at 01:53:21PM +, Ard Biesheuvel wrote:
> On 1 December 2017 at 15:19, Dave Martin wrote:
> > When deciding whether to invalidate FPSIMD state cached in the cpu,
> > the backend function sve_flush_cpu_state() attempts to dereference
> > __this_cpu_read(fpsimd_last_state).
On 04/12/17 14:13, Steve Capper wrote:
In save_elrsr(.), we use the following technique to ascertain the
address of the vgic global state:
(kern_hyp_va(&kvm_vgic_global_state))->nr_lr
For arm, kern_hyp_va(va) == va, and this call effectively compiles out.
For arm64, this call can be spu
Apologies for sending the cover-letter twice; my connection dropped
during my initial attempt to send this pull request, and I thought
nothing came through, but apparently that wasn't the case.
Thanks,
-Christoffer
Add the option to use 52-bit VA support upon availability at boot. We
use the same KASAN_SHADOW_OFFSET for both 48 and 52 bit VA spaces as in
both cases the start and end of the KASAN shadow region are PGD aligned.
From ID_AA64MMFR2, we check the LVA field on very early boot and set the
VA size,
The kernel page table dumper assumes that the placement of VA regions is
constant and determined at compile time. As we are about to introduce
variable VA logic, we need to be able to determine certain regions at
boot time.
This patch adds logic to the kernel page table dumper s.t. these regions
c
In order to allow the kernel to select different virtual address sizes
on boot we need to "de-constify" VA_BITS. This patch introduces
vabits_actual, a variable which is defined at very early boot, and
VA_BITS is then re-defined to reference this variable.
Having VA_BITS variable can potentially b
This patch adjusts the alternative patching logic for kern_hyp_va to
take into account a change in virtual address space size on boot.
Because the instructions in the alternatives regions have to be fixed at
compile time, in order to make the logic depend on a dynamic VA size
the predicates have t
KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command
line argument and affects the codegen of the inline address sanitiser.
Essentially, for an example memory access:
*ptr1 = val;
The compiler will insert logic similar to the below:
shadowValue = *((ptr1 >> 3) + KASAN_SHADOW_OFFSET);
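Rearranged, that relation is also how the constant itself can be chosen (a sketch of the arithmetic, not the exact arm64 derivation):

    shadow(addr)        = (addr >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = KASAN_SHADOW_START - (lowest_covered_va >> 3)

So changing the offset simply relocates the shadow region, whose size stays 1/8th of the covered virtual address range.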
Put the direct linear map in the top half of the VA space and then the
kernel + everything else in the bottom half.
We need to adjust:
*) KASAN shadow region placement logic,
*) KASAN_SHADOW_OFFSET computation logic,
*) virt_to_phys, phys_to_virt checks
*) page table dumper
*) KVM hyp map fli
For systems that are not executing with VHE, we need to create page
tables for HYP/EL2 mode in order to access data from the kernel running
at EL1.
In addition to parts of the kernel address space being mapped to EL2, we
also need to make space for an identity mapping of the __hyp_idmap_text
area
update_mapping_prot assumes that it will be used on the VA for the
kernel .text section (via the check virt >= VMALLOC_START).
Recent kdump patches employ this function to modify the protection of
the direct linear mapping (which is strictly speaking outside of this
area), via mark_linear_text_ali
Re-arrange the kernel memory map s.t. the kernel image resides in the
bottom 514MB of memory, with the modules, fixed map and PCI IO space placed
above it. At the very bottom of the memory map we set aside a 2MB guard
region to prevent ambiguity with PTR_ERR/ERR_PTR.
Dynamically resizable objects suc
This patch series brings 52-bit kernel VA support to arm64; if supported
at boot time. A new kernel option CONFIG_ARM64_VA_BITS_48_52 is available
when configured with a 64KB PAGE_SIZE (as on ARMv8.2-LPA, 52-bit VAs are
only allowed when running with a 64KB granule).
Switching between 48 and 52-bi
The high_memory global variable is used by
cma_declare_contiguous(.) before it is defined.
We don't notice this as we compute __pa(high_memory - 1), and it looks
like we're processing a VA from the direct linear map.
This problem becomes apparent when we flip the kernel virtual address
space and
In save_elrsr(.), we use the following technique to ascertain the
address of the vgic global state:
(kern_hyp_va(&kvm_vgic_global_state))->nr_lr
For arm, kern_hyp_va(va) == va, and this call effectively compiles out.
For arm64, this call can be spurious as the address of kvm_vgic_global_s
We assume that the direct linear map ends at ~0 in the KVM HYP map
intersection checking code. This assumption will become invalid later on
for arm64 when the address space of the kernel is re-arranged.
This patch introduces a new constant PAGE_OFFSET_END for both arm and
arm64 and defines it to b
From: Andrew Jones
kvm_vgic_vcpu_destroy already gets called from kvm_vgic_destroy for
each vcpu, so we don't have to call it from kvm_arch_vcpu_free.
Additionally the other architectures set kvm->online_vcpus to zero
after freeing them. We might as well do that for ARM too.
Signed-off-by: Andr
From: Christoffer Dall
We are incorrectly rearranging 32-bit words inside a 64-bit typed value
for big endian systems, which would result in never marking a virtual
interrupt as inactive on big endian systems (assuming 32 or fewer LRs on
the hardware). Fix this by not doing any word order manipu
From: Alex Bennée
There is a fast-path of MMIO emulation inside hyp mode. The handling
of single-step is broadly the same as kvm_arm_handle_step_debug()
except we just set up ESR/HSR so handle_exit() does the correct thing
as we exit.
For the case of an emulated illegal access causing an SError w
From: Ard Biesheuvel
Since it is perfectly legal to run the kernel at EL1, it is not
actually an error if HYP mode is not available when attempting to
initialize KVM, given that KVM support cannot be built as a module.
So demote the kvm_err() to kvm_info(), which prevents the error from
appearing
From: Christoffer Dall
The timer optimization patches inadvertently changed the logic to always
load the timer state as if we have a vgic, even if we don't have a vgic.
Fix this by doing the usual irqchip_in_kernel() check and call the
appropriate load function.
Signed-off-by: Christoffer Dall
From: Marc Zyngier
vgic_set_owner acquires the irq lock without disabling interrupts,
resulting in a lockdep splat (an interrupt could fire and result
in the same lock being taken if the same virtual irq is to be
injected).
In practice, it is almost impossible to trigger this bug, but
better saf
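The standard cure for this class of splat is to take the lock with interrupts disabled; a generic sketch (not the literal hunk):

    unsigned long flags;

    spin_lock_irqsave(&irq->irq_lock, flags);
    irq->owner = owner;
    spin_unlock_irqrestore(&irq->irq_lock, flags);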
From: Kristina Martsenko
VTTBR_BADDR_MASK is used to sanity check the size and alignment of the
VTTBR address. It seems to currently be off by one, thereby only
allowing up to 47-bit addresses (instead of 48-bit) and also
insufficiently checking the alignment. This patch fixes it.
As an example,
From: Alex Bennée
After emulating instructions we may want to return to user-space to handle
single-step debugging. Introduce a helper function, which, if
single-step is enabled, sets the run structure for return and returns
true.
Signed-off-by: Alex Bennée
Reviewed-by: Julien Thierry
Signed-off-
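Roughly, the helper checks whether single-step is in effect and, if so, fills in the run structure for a debug exit; a sketch based on the description, not necessarily the exact code:

    static bool kvm_arm_handle_step_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
    {
            if (!(vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP))
                    return false;

            run->exit_reason = KVM_EXIT_DEBUG;
            run->debug.arch.hsr = ESR_ELx_EC_SOFTSTP_LOW << ESR_ELx_EC_SHIFT;
            return true;
    }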
From: Alex Bennée
If we are using guest debug to single-step the guest, we need to ensure
that we exit after emulating the instruction. This only affects
instructions completely emulated by the kernel. For instructions
emulated in userspace, we need to exit and return to complete the
emulation.
From: Alex Bennée
When an SError arrives during single-step both the SError and debug
exceptions may be pending when the step is completed, and the
architecture doesn't define the ordering of the two. This means that we
can observe an SError even though we've just completed a step, without
recei
From: Alex Bennée
The system state of KVM when using userspace emulation is not complete
until we return into KVM_RUN. To handle mmio related updates we wait
until they have been committed and then schedule our KVM_EXIT_DEBUG.
The kvm_arm_handle_step_debug() helper tells us if we need to return
From: Marc Zyngier
VTTBR_BADDR_MASK is used to sanity check the size and alignment of the
VTTBR address. It seems to currently be off by one, thereby only
allowing up to 39-bit addresses (instead of 40-bit) and also
insufficiently checking the alignment. This patch fixes it.
This patch is the 32
From: Marc Zyngier
The current pending table parsing code assumes that we keep the
previous read of the pending bits, but keep that variable in
the current block, making sure it is discarded on each loop.
We end up using whatever is on the stack. Who knows, it might
just be the right thing...
F
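Generically, the fix for this kind of bug is to hoist the variable above the loop so the previous read actually survives an iteration; an illustrative sketch with placeholder helpers (not the real code):

    /* broken: 'prev' only lives for one iteration and is read uninitialised */
    for (i = 0; i < nr; i++) {
            u32 prev;

            if (i && want_previous(i))
                    consume(prev);          /* reads whatever is on the stack */
            prev = read_bits(i);
    }

    /* fixed: declared once, so the value carries across iterations */
    u32 prev = 0;

    for (i = 0; i < nr; i++) {
            if (i && want_previous(i))
                    consume(prev);
            prev = read_bits(i);
    }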
From: Marc Zyngier
We miss a test against NULL after allocation.
Fixes: 6d03a68f8054 ("KVM: arm64: vgic-its: Turn device_id validation into
generic ID validation")
Cc: # 4.8
Reported-by: AKASHI Takahiro
Acked-by: Christoffer Dall
Signed-off-by: Marc Zyngier
Signed-off-by: Christoffer Dall
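The shape of the fix is the usual allocation-failure check (placeholder names, not the actual hunk):

    ptr = kzalloc(size, GFP_KERNEL);
    if (!ptr)
            return -ENOMEM;     /* bail out rather than dereferencing NULL later */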
From: Marc Zyngier
Before performing an unmap, let's check that what we have was
really mapped the first place.
Reviewed-by: Christoffer Dall
Signed-off-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
virt/kvm/arm/vgic/vgic-v4.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-
From: Marc Zyngier
Using the size of the structure we're allocating is a good idea
and avoids any surprise... In this case, we're happily confusing
kvm_kernel_irq_routing_entry and kvm_irq_routing_entry...
Fixes: 95b110ab9a09 ("KVM: arm/arm64: Enable irqchip routing")
Cc: # 4.8
Reported-by: AK
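Whichever of the two structures was used by mistake, sizing the allocation from the variable itself sidesteps the confusion; a generic sketch:

    /* easy to get wrong when two similarly named structures exist: */
    entries = kcalloc(n, sizeof(struct kvm_irq_routing_entry), GFP_KERNEL);

    /* safer: let the element type of 'entries' dictate the size */
    entries = kcalloc(n, sizeof(*entries), GFP_KERNEL);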
From: Marc Zyngier
The current pending table parsing code assumes that we keep the
previous read of the pending bits, but keep that variable in
the current block, making sure it is discarded on each loop.
We end up using whatever is on the stack. Who knows, it might
just be the right thing...
F
From: Andre Przywara
Commit f39d16cbabf9 ("KVM: arm/arm64: Guard kvm_vgic_map_is_active against
!vgic_initialized") introduced a check whether the VGIC has been
initialized before accessing the spinlock and the VGIC data structure.
However the vgic_get_irq() call in the variable declaration sneak
From: Christoffer Dall
After the timer optimization rework we accidentally end up calling
physical timer enable/disable functions on VHE systems, which is neither
needed nor correct, since the CNTHCTL_EL2 register format is
different when HCR_EL2.E2H is set.
The CNTHCTL_EL2 is initialized when C
From: Christoffer Dall
Hi Paolo and Radim,
Here's the first round of fixes for KVM/ARM for v4.15. This is a fairly large
set of fixes, partially because we spotted a handful of issues from running the
SMATCH static analysis on the code (thanks to AKASHI Takahiro).
In more details, this pull re
From: Christoffer Dall
Hi Paolo and Radim,
Here's the first round of fixes for KVM/ARM for v4.15. This is a fairly large
set of fixes, partially because we spotted a handful of issues from running the
SMATCH static analysis on the code (thanks to AKASHI Takahiro).
In more details, this pull re
On Mon, Dec 04, 2017 at 12:39:33PM +, Mark Rutland wrote:
> On Tue, Nov 28, 2017 at 04:07:26PM +0100, Andrew Jones wrote:
> > Hi Mark,
>
> Hi Drew,
>
> > On Mon, Nov 27, 2017 at 04:38:06PM +, Mark Rutland wrote:
> > > +Architecture overview
> > > +-
> > > +
> > > +The
On Tue, Nov 28, 2017 at 04:07:26PM +0100, Andrew Jones wrote:
> Hi Mark,
Hi Drew,
> On Mon, Nov 27, 2017 at 04:38:06PM +, Mark Rutland wrote:
> > +Architecture overview
> > +-
> > +
> > +The ARMv8.3 Pointer Authentication extension adds primitives that can be
> > +used to
Hi Christoffer
It is just syntactic sugar, of course,
and in the context of the mentioned function it looks harmonious because it is
at the end of the function's return statement.
But in the context of the surrounding source files it looks less harmonious because
the existing code uses the old approach.
And this old approach is on
On 03/12/17 23:04, Christoffer Dall wrote:
> From: Christoffer Dall
>
> We are incorrectly rearranging 32-bit words inside a 64-bit typed value
> for big endian systems, which would result in never marking a virtual
> interrupt as inactive on big endian systems (assuming 32 or fewer LRs on
> the
On Sun, Dec 03, 2017 at 08:50:26PM +0100, Christoffer Dall wrote:
> On Mon, Nov 13, 2017 at 06:54:02PM +0100, Andrew Jones wrote:
...
> > > + }
> > > +
> > > + vcpu_sys_reg(vcpu, reg) = val;
> > > +}
> > > +
> > > /*
> > > * Generic accessor for VM registers. Only called as long as HCR_TVM
> > >