On 27/05/19 18:56, Thomas Huth wrote:
> The FSF moved from the "Temple Place" to "51 Franklin Street" quite
> a while ago already, so we should not refer to the old address in
> the source code anymore. Anyway, instead of replacing it with the
> new address, let's rather add proper SPDX identifiers
Hi Alex,
On 6/4/19 12:31 AM, Alex Williamson wrote:
> On Sun, 26 May 2019 18:10:03 +0200
> Eric Auger wrote:
>
>> Add a new VFIO_PCI_DMA_FAULT_IRQ_INDEX index. This allows to
>> set/unset an eventfd that will be triggered when DMA translation
>> faults are detected at physical level when the nes
Hi Alex,
On 6/4/19 12:31 AM, Alex Williamson wrote:
> On Sun, 26 May 2019 18:10:01 +0200
> Eric Auger wrote:
>
>> This patch registers a fault handler which records faults in
>> a circular buffer and then signals an eventfd. This buffer is
>> exposed within the fault region.
>>
>> Signed-off-by:
On 2019-06-04 13:58:51 [+0100], Julien Grall wrote:
> Hi,
Hi,
> This is happening because vgic_v2_fold_lr_state() is expected
> to be called with interrupt disabled. However, some of the path
> (e.g eventfd) will take a spinlock.
>
> The spinlock is from the waitqueue, so using a raw_spin_lock ca
On Tue, 4 Jun 2019 11:52:18 +0100
Jean-Philippe Brucker wrote:
> On 03/06/2019 23:32, Alex Williamson wrote:
> > It doesn't seem to make much sense to include this patch without
> > also including "iommu: handle page response timeout". Was that one
> > lost? Dropped? Lives elsewhere?
>
> The
On Mon, 3 Jun 2019 16:31:45 -0600
Alex Williamson wrote:
> On Sun, 26 May 2019 18:09:39 +0200
> Eric Auger wrote:
>
> > From: Jean-Philippe Brucker
> >
> > Some IOMMU hardware features, for example PCI's PRI and Arm SMMU's
> > Stall, enable recoverable I/O page faults. Allow IOMMU drivers to
On Tue, 4 Jun 2019 14:53:26 +0100
Marc Zyngier wrote:
> That's to prevent the injection of an interrupt firing on the same CPU
> while we're saving the corresponding vcpu interrupt context, among other
> things (the whole guest exit path runs with interrupt disabled in order
> to avoid this kind
Now that we've taken isr_el1 out of the box, there are a few more places
we could use it. During __guest_exit() we need to consume any SError left
pending by the guest so it doesn't contaminate the host. With v8.2 we use
the ESB-instruction. For systems without v8.2, we use dsb+isb and unmask
SErro
On systems with v8.2 we switch the 'vaxorcism' of guest SError with an
alternative sequence that uses the ESB-instruction, then reads DISR_EL1.
This saves the unmasking and re-masking of asynchronous exceptions.
We do this after we've saved the guest registers and restored the
host's. Any SError t
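The two guest-exit sequences contrasted above could be sketched roughly as follows (an illustrative sketch, not the exact kernel source; register choice and instruction ordering are assumptions):

```
// Without the v8.2 RAS extensions: synchronize, briefly unmask SError so
// any SError left pending by the guest is taken now, then re-mask.
dsb sy
isb
msr daifclr, #4      // unmask SError (the A bit)
isb                  // a pending SError is taken here
msr daifset, #4      // re-mask

// With v8.2: the Error Synchronization Barrier defers any pending SError,
// whose syndrome can then be read via DISR_EL1 without ever unmasking.
esb
mrs x1, disr_el1     // nonzero syndrome if an SError was deferred
```

This is why the v8.2 path "saves the unmasking and re-masking of asynchronous exceptions": ESB turns the pending SError into a readable syndrome register instead of an exception that must be taken.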
Neoverse-N1 affected by #1349291 may report an Uncontained RAS Error
as Unrecoverable. The kernel's architecture code already considers
Unrecoverable errors as fatal, since without kernel-first support no
further error handling is possible.
Now that KVM attributes SError to the host/guest more precise
Previously we added a dsb to ensure that the host's writes have
finished before we read isr_el1 to see if any of
them caused an SError.
This only really matters if we have the v8.2 RAS extensions with its
poison tracking and containment reporting via SError's ESR value.
Bef
The EL2 vector hardening feature causes KVM to generate vectors for
each type of CPU present in the system. The generated sequences already
do some of the early guest-exit work (i.e. saving registers). To avoid
duplication the generated vectors branch to the original vector just
after the preamble.
SError that occur during world-switch's entry to the guest will be
accounted to the guest, as the exception is masked until we enter the
guest... but we want to attribute the SError as precisely as possible.
Reading DISR_EL1 before guest entry requires free registers, and using
ESB+DISR_EL1 to con
Hello!
v1? Yes: I intend to repost this with/without the last two patches
depending on whether anyone thinks they are needed, and should be considered
as part of this series, or separate.
This series started as a workaround for Neoverse-N1 #1349291, but has
become an improvement in RAS error acco
On 04/06/2019 14:53, Marc Zyngier wrote:
> On 04/06/2019 14:16, Steven Rostedt wrote:
>> On Tue, 4 Jun 2019 13:58:51 +0100
>> Julien Grall wrote:
>>
>>> This is happening because vgic_v2_fold_lr_state() is expected
>>> to be called with interrupt disabled. However, some of the path
>>> (e.g eventf
On 04/06/2019 14:16, Steven Rostedt wrote:
> On Tue, 4 Jun 2019 13:58:51 +0100
> Julien Grall wrote:
>
>> This is happening because vgic_v2_fold_lr_state() is expected
>> to be called with interrupt disabled. However, some of the path
>> (e.g eventfd) will take a spinlock.
>>
>> The spinlock is f
We currently have duplicated functionality for the mmu_memory_cache used
to pre-allocate memory for the page table manipulation code which cannot
allocate memory while holding spinlocks. This functionality is
duplicated across x86, arm/arm64, and mips.
There was recently a debate about modifying th
As we have moved the mmu memory cache definitions and functions to
common code, they are exported as symbols to the rest of the kernel.
Let's rename the functions and data types to have a kvm_ prefix to make
it clear where these functions belong and take this chance to rename
memory_cache to memcac
Now that we have a common infrastructure for doing MMU cache
allocations, use this for mips as well.
Signed-off-by: Christoffer Dall
---
arch/mips/include/asm/kvm_host.h | 15 ++---
arch/mips/include/asm/kvm_types.h | 6
arch/mips/kvm/mips.c | 2 +-
arch/mips/kvm/mmu.c
We are currently duplicating the mmu memory cache functionality quite
heavily between the architectures that support KVM. As a first step,
move the x86 implementation (which seems to have the most recently
maintained version of the mmu memory cache) to common code.
We introduce an arch-specific k
Now that we have a common mmu memcache implementation, we can reuse
this for arm and arm64.
The common implementation has a slightly different behavior when
allocating objects under high memory pressure; whereas the current
arm/arm64 implementation will give up and return -ENOMEM if the full
size
On Tue, 4 Jun 2019 13:58:51 +0100
Julien Grall wrote:
> This is happening because vgic_v2_fold_lr_state() is expected
> to be called with interrupt disabled. However, some of the path
> (e.g eventfd) will take a spinlock.
>
> The spinlock is from the waitqueue, so using a raw_spin_lock cannot
>
Hi,
While trying device passthrough on Linux-rt with KVM Arm, I had
the following splat.
[ 363.410141] 000: BUG: sleeping function called from invalid context at
kernel/locking/rtmutex.c:974
[ 363.410150] 000: in_atomic(): 0, irqs_disabled(): 128, pid: 2916, name:
qemu-system-aar
[ 363.41015
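The conflict behind this splat can be summarized as follows (illustrative pseudocode; the exact call chain is not shown in the snippet above):

```c
/* Illustrative pseudocode, not the exact call chain: code that runs
 * with interrupts disabled ends up signalling an eventfd.
 * eventfd_signal() takes the waitqueue's spinlock_t, and on PREEMPT_RT
 * a spinlock_t is a sleeping rt_mutex -- hence "BUG: sleeping function
 * called from invalid context" with irqs_disabled(): 128. */
local_irq_save(flags);           /* atomic context: may not sleep   */
vgic_v2_fold_lr_state(vcpu);     /* ... which eventually reaches ... */
    eventfd_signal(ctx, 1);      /* locks wqh->lock: rt_mutex on RT  */
local_irq_restore(flags);
```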
On 04/06/2019 12:12, Catalin Marinas wrote:
> On Tue, May 21, 2019 at 06:21:38PM +0100, Julien Grall wrote:
>> The only external user of fpsimd_save() and fpsimd_flush_cpu_state() is
>> the KVM FPSIMD code.
>>
>> A following patch will introduce a mechanism to acquire ownership of the
>> FPSIMD/SVE
On Tue, May 21, 2019 at 06:21:38PM +0100, Julien Grall wrote:
> The only external user of fpsimd_save() and fpsimd_flush_cpu_state() is
> the KVM FPSIMD code.
>
> A following patch will introduce a mechanism to acquire ownership of the
> FPSIMD/SVE context for performing context management operati
On 03/06/2019 23:32, Alex Williamson wrote:
> It doesn't seem to make much sense to include this patch without also
> including "iommu: handle page response timeout". Was that one lost?
> Dropped? Lives elsewhere?
The first 7 patches come from my sva/api branch, where I had forgotten
to add the
Hi Catalin,
On 6/3/19 10:21 PM, Catalin Marinas wrote:
On Mon, Jun 03, 2019 at 05:25:34PM +0100, Catalin Marinas wrote:
On Tue, May 21, 2019 at 06:21:39PM +0100, Julien Grall wrote:
Since a softirq is supposed to check may_use_simd() anyway before
attempting to use FPSIMD/SVE, there is limited
On Tue, Jun 04, 2019 at 10:13:19AM +0530, Viresh Kumar wrote:
> We currently get following compilation warning:
>
> arch/arm64/kvm/guest.c: In function 'set_sve_vls':
> arch/arm64/kvm/guest.c:262:18: warning: passing argument 1 of 'vq_present'
> from incompatible pointer type
> arch/arm64/kvm/gue
On Tue, Jun 04, 2019 at 03:01:53PM +0530, Viresh Kumar wrote:
> On 04-06-19, 10:26, Dave Martin wrote:
> > I'm in two minds about whether this is worth fixing, but if you want to
> > post a patch to remove the extra const (or convert vq_present() to a
> > macro), I'll take a look at it.
>
> This p
On Tue, Jun 04, 2019 at 11:23:01AM +0200, Andrew Jones wrote:
> On Mon, Jun 03, 2019 at 05:52:07PM +0100, Dave Martin wrote:
> > Since commit d26c25a9d19b ("arm64: KVM: Tighten guest core register
> > access from userspace"), KVM_{GET,SET}_ONE_REG rejects register IDs
> > that do not correspond to
On 04-06-19, 10:26, Dave Martin wrote:
> I'm in two minds about whether this is worth fixing, but if you want to
> post a patch to remove the extra const (or convert vq_present() to a
> macro), I'll take a look at it.
This patch already does what you are asking for (remove the extra
const), isn't
On Tue, Jun 04, 2019 at 02:25:45PM +0530, Viresh Kumar wrote:
> On 04-06-19, 09:43, Catalin Marinas wrote:
> > On Tue, Jun 04, 2019 at 10:13:19AM +0530, Viresh Kumar wrote:
> > > We currently get following compilation warning:
> > >
> > > arch/arm64/kvm/guest.c: In function 'set_sve_vls':
> > > ar
On Mon, Jun 03, 2019 at 05:52:07PM +0100, Dave Martin wrote:
> Since commit d26c25a9d19b ("arm64: KVM: Tighten guest core register
> access from userspace"), KVM_{GET,SET}_ONE_REG rejects register IDs
> that do not correspond to a single underlying architectural register.
>
> KVM_GET_REG_LIST was
On 04/06/2019 09:43, Catalin Marinas wrote:
> On Tue, Jun 04, 2019 at 10:13:19AM +0530, Viresh Kumar wrote:
>> We currently get following compilation warning:
>>
>> arch/arm64/kvm/guest.c: In function 'set_sve_vls':
>> arch/arm64/kvm/guest.c:262:18: warning: passing argument 1 of 'vq_present'
>> f
On 04-06-19, 09:43, Catalin Marinas wrote:
> On Tue, Jun 04, 2019 at 10:13:19AM +0530, Viresh Kumar wrote:
> > We currently get following compilation warning:
> >
> > arch/arm64/kvm/guest.c: In function 'set_sve_vls':
> > arch/arm64/kvm/guest.c:262:18: warning: passing argument 1 of 'vq_present'
On Tue, Jun 04, 2019 at 10:13:19AM +0530, Viresh Kumar wrote:
> We currently get following compilation warning:
>
> arch/arm64/kvm/guest.c: In function 'set_sve_vls':
> arch/arm64/kvm/guest.c:262:18: warning: passing argument 1 of 'vq_present'
> from incompatible pointer type
> arch/arm64/kvm/gue
On 04-06-19, 09:30, Marc Zyngier wrote:
> On 04/06/2019 05:43, Viresh Kumar wrote:
> > We currently get following compilation warning:
> >
> > arch/arm64/kvm/guest.c: In function 'set_sve_vls':
> > arch/arm64/kvm/guest.c:262:18: warning: passing argument 1 of 'vq_present'
> > from incompatible po
On 04/06/2019 05:43, Viresh Kumar wrote:
> We currently get following compilation warning:
>
> arch/arm64/kvm/guest.c: In function 'set_sve_vls':
> arch/arm64/kvm/guest.c:262:18: warning: passing argument 1 of 'vq_present'
> from incompatible pointer type
> arch/arm64/kvm/guest.c:212:13: note: ex