Hi Frederic,
Thanks for having a crack at this, but I'm pretty confused now so please
prepare for a bunch of silly questions!
On Tue, Oct 15, 2024 at 03:48:55PM +0200, Frederic Weisbecker wrote:
> Le Tue, Oct 08, 2024 at 11:54:35AM +0100, Will Deacon a écrit :
> > On Fri, Sep 27,
some CPUs that can execute only 64-bit tasks. Who is unwilling
to integrate what?
Will
kernel log (easiest way
> to trigger it seems to be trying to ssh into it, which fails).
Thanks for the report. I was able to reproduce this using QEMU and it
looks like the problem is because bpf_arch_text_copy() silently fails
to write to the read-only area as a result of patch_map() faulting and
the res
On Thu, Apr 18, 2024 at 12:53:26PM -0700, Sean Christopherson wrote:
> On Thu, Apr 18, 2024, Will Deacon wrote:
> > On Mon, Apr 15, 2024 at 10:03:51AM -0700, Sean Christopherson wrote:
> > > On Sat, Apr 13, 2024, Marc Zyngier wrote:
> > > > On Fri, 12 Apr 2024 15:54
On Mon, Apr 15, 2024 at 10:03:51AM -0700, Sean Christopherson wrote:
> On Sat, Apr 13, 2024, Marc Zyngier wrote:
> > On Fri, 12 Apr 2024 15:54:22 +0100, Sean Christopherson
> > wrote:
> > >
> > > On Fri, Apr 12, 2024, Marc Zyngier wrote:
> > > > On
end().
> - */
> - if (kvm_has_mte(kvm) && !page_mte_tagged(pfn_to_page(pfn)))
> - return false;
> -
> - /*
> - * We've moved a page around, probably through CoW, so let's treat
> - * it just like a translation fault and the map handler will c
a ring corruption.
>
> To fix, factor out the correct index access code from vhost_get_vq_desc.
> As a side benefit, we also validate the index on all paths now, which
> will hopefully help catch future errors earlier.
>
> Note: current code is inconsistent in how it handles errors:
On Tue, Mar 26, 2024 at 11:43:13AM +, Will Deacon wrote:
> On Tue, Mar 26, 2024 at 09:38:55AM +, Keir Fraser wrote:
> > On Tue, Mar 26, 2024 at 03:49:02AM -0400, Michael S. Tsirkin wrote:
> > > > Secondly, the debugging code is enhanced so that t
of
> the idx low byte, as I observed in the earlier log. Surely this is
> more than coincidence?
Yeah, I'd still really like to see the disassembly for both sides of the
protocol here. Gavin, is that something you're able to provide? Worst
case, the host and guest vmlinux objects would be a starting point.
Personally, I'd be fairly surprised if this was a hardware issue.
Will
On Tue, Mar 19, 2024 at 02:59:23PM +1000, Gavin Shan wrote:
> On 3/19/24 02:59, Will Deacon wrote:
> > On Thu, Mar 14, 2024 at 05:49:23PM +1000, Gavin Shan wrote:
> > > The issue is reported by Yihuang Yu who have 'netperf' test on
> > > NVidia's grace-gra
On Tue, Mar 19, 2024 at 03:36:31AM -0400, Michael S. Tsirkin wrote:
> On Mon, Mar 18, 2024 at 04:59:24PM +0000, Will Deacon wrote:
> > On Thu, Mar 14, 2024 at 05:49:23PM +1000, Gavin Shan wrote:
> > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> &
avail_idx_shadow);
Replacing a DMB with a DSB is _very_ unlikely to be the correct solution
here, especially when ordering accesses to coherent memory.
In practice, either the larger timing difference from the DSB or the fact
that you're going from a Store->Store barrier to a full barrier is what
makes things "work" for you. Have you tried, for example, a DMB SY
(e.g. via __smp_mb())?
We definitely shouldn't take changes like this without a proper
explanation of what is going on.
Will
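The ordering point above can be illustrated with a standalone sketch. This is not the virtio code under discussion: it uses C11 atomics (which the compiler maps to DMB-based barriers on arm64) and hypothetical names, just to show the acquire/release pairing that normally suffices for ordering accesses to coherent memory; a DSB additionally waits for completion and is almost never needed for plain memory-to-memory ordering.

```c
#include <stdatomic.h>

/*
 * Hedged message-passing sketch (hypothetical names, C11 atomics in
 * place of kernel barrier primitives).
 */
static int payload;
static atomic_int ready;

void publish(int value)
{
	payload = value;	/* plain store */
	/* store-release: orders the payload store before the flag store */
	atomic_store_explicit(&ready, 1, memory_order_release);
}

/* Returns the payload once the flag is observed, -1 if not yet ready. */
int try_consume(void)
{
	/* load-acquire: pairs with the release above */
	if (atomic_load_explicit(&ready, memory_order_acquire))
		return payload;
	return -1;
}
```

The pairing guarantees that a reader which observes `ready == 1` also observes the payload store, without any full (DSB-strength) barrier.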
On Wed, 11 Oct 2023 19:57:26 +0200, Luca Weiss wrote:
> From: Vladimir Lypak
>
> If the IOMMU has a power domain then some state will be lost in
> qcom_iommu_suspend and TZ will reset device if we don't call
> qcom_scm_restore_sec_cfg before accessing it again.
>
>
On Mon, Oct 30, 2023 at 09:00:53AM +0200, Mike Rapoport wrote:
> On Thu, Oct 26, 2023 at 11:24:39AM +0100, Will Deacon wrote:
> > On Thu, Oct 26, 2023 at 11:58:00AM +0300, Mike Rapoport wrote:
> > > On Mon, Oct 23, 2023 at 06:14:20PM +0100, Will Deacon wrote:
> > > >
On Thu, Oct 26, 2023 at 11:58:00AM +0300, Mike Rapoport wrote:
> On Mon, Oct 23, 2023 at 06:14:20PM +0100, Will Deacon wrote:
> > On Mon, Sep 18, 2023 at 10:29:46AM +0300, Mike Rapoport wrote:
> > > diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
> &
> -{
> - return __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END,
> - GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS,
> - NUMA_NO_NODE, __builtin_return_address(0));
> -}
It's slightly curious that we didn't clear the tag here, so it's nice that
it all happens magically with your series:
Acked-by: Will Deacon
Will
ge->flags & EXECMEM_KASAN_SHADOW;
> + unsigned long vm_flags = VM_FLUSH_RESET_PERMS;
> + bool fallback = !!fallback_start;
> + gfp_t gfp_flags = GFP_KERNEL;
> + void *p;
>
> - return __vmalloc_node_range(size, align, start, end,
> -GFP_KERNEL, pgprot, VM_FLUSH_RESET_PERMS,
> -NUMA_NO_NODE, __builtin_return_address(0));
> + if (PAGE_ALIGN(size) > (end - start))
> + return NULL;
> +
> + if (kasan)
> + vm_flags |= VM_DEFER_KMEMLEAK;
Hmm, I don't think we passed this before on arm64, should we have done?
Will
drop deprecated references and the rest attempt to stop
> direct bus clock abuses.
>
> [...]
Applied SMMU bindings fix to will (for-joerg/arm-smmu/fixes), thanks!
[04/14] dt-bindings: arm-smmu: Fix SDM630 clocks description
https://git.kernel.org/will/c/938ba2f252a5
Cheers,
--
W
f atomic
> + * operations to behave well together, please audit them carefully to ensure
> + * they all have forward progress. Many atomic operations may default to
> + * cmpxchg() loops which will not have good forward progress properties on
> + * LL/SC architectures.
> +
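The forward-progress point in the quoted comment can be sketched with C11 atomics (illustrative names, not kernel primitives): a cmpxchg() loop may retry indefinitely under contention on LL/SC machines, while a single atomic read-modify-write has stronger forward-progress properties.

```c
#include <stdatomic.h>

/* Increment via a cmpxchg() loop: each lost race forces a full retry
 * of the load-linked/store-conditional sequence. */
void inc_cmpxchg_loop(atomic_int *v)
{
	int old = atomic_load_explicit(v, memory_order_relaxed);

	while (!atomic_compare_exchange_weak_explicit(v, &old, old + 1,
						      memory_order_relaxed,
						      memory_order_relaxed))
		;
}

/* Increment via a single atomic RMW; on arm64 v8.1+ this can compile
 * to one LSE instruction with no retry loop at all. */
void inc_fetch_add(atomic_int *v)
{
	atomic_fetch_add_explicit(v, 1, memory_order_relaxed);
}
```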
> > > > > > CPU
> > > > > > binded context as it makes heavy use of per-cpu variables and
> > > > > > shouldn't
> > > > > > be invoked from preemptible context.
> > > > > >
> > > > >
> > > > > Do you have any further comments on this?
> > > > >
>
> Since there aren't any further comments, can you re-pick this feature for
> 5.13?
I'd still like Mark's Ack on this, as the approach you have taken doesn't
really sit with what he was suggesting.
I also don't understand how all the CPUs get initialised with your patch,
since the PMU driver will be initialised after SMP is up and running.
Will
On Fri, Apr 09, 2021 at 09:38:15PM +0200, Arnd Bergmann wrote:
> On Fri, Apr 9, 2021 at 6:56 PM Sven Peter wrote:
> > On Wed, Apr 7, 2021, at 12:44, Will Deacon wrote:
> > > On Sun, Mar 28, 2021 at 09:40:07AM +0200, Sven Peter wrote:
> > >
> > > > + cfg-&
On Thu, Apr 08, 2021 at 01:38:17PM -0500, Rob Herring wrote:
> On Thu, Apr 8, 2021 at 6:08 AM Mark Rutland wrote:
> > On Wed, Apr 07, 2021 at 01:44:37PM +0100, Will Deacon wrote:
> > > On Thu, Apr 01, 2021 at 02:45:21PM -0500, Rob Herring wrote:
> > > > On Wed,
On Wed, Mar 10, 2021 at 10:53:23AM +0530, Viresh Kumar wrote:
> Rename freq_scale to a less generic name, as it will get exported soon
> for modules. Since x86 already names its own implementation of this as
> arch_freq_scale, lets stick to that.
>
> Suggested-by: Will Deacon
the same
> > potential problems if called on compat pt_regs.
>
> I think this is a problem we created for ourselves back in commit:
>
> 15956689a0e60aa0 ("arm64: compat: Ensure upper 32 bits of x0 are zero on
> syscall return)
>
> AFAICT, the perf regs samples a
https://bit.ly/3x8LDhs
Will
just that, even the write needs to be atomic_store_explicit in order to
> avoid a data race.
https://wg21.link/P0690
was an attempt to address this, but I don't know if any of the ideas got
adopted in the end.
Will
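The C11 rule being referenced can be sketched as follows (hypothetical names, a minimal assumption-laden example): if another thread may read a variable concurrently, even the writer must use an atomic access, because a plain store racing with an `atomic_load_explicit()` is still a data race under the C memory model.

```c
#include <stdatomic.h>

static _Atomic unsigned long seq;

void writer_tick(void)
{
	/* relaxed is enough to avoid the data race itself;
	 * any ordering requirement is a separate question */
	atomic_store_explicit(&seq,
			      atomic_load_explicit(&seq,
						   memory_order_relaxed) + 1,
			      memory_order_relaxed);
}

unsigned long reader_peek(void)
{
	return atomic_load_explicit(&seq, memory_order_relaxed);
}
```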
On Thu, Apr 15, 2021 at 04:26:46PM +, Ali Saidi wrote:
>
> On Thu, 15 Apr 2021 16:02:29 +0100, Will Deacon wrote:
> > On Thu, Apr 15, 2021 at 02:25:52PM +, Ali Saidi wrote:
> > > While this code is executed with the wait_lock held, a reader can
> > > ac
On Thu, Apr 15, 2021 at 04:37:58PM +0100, Catalin Marinas wrote:
> On Thu, Apr 15, 2021 at 04:28:21PM +0100, Will Deacon wrote:
> > On Thu, Apr 15, 2021 at 05:03:58PM +0200, Peter Zijlstra wrote:
> > > diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
> &
acquire(&lock->cnts, _QW_WAITING,
> > _QW_LOCKED) != _QW_WAITING);
> > unlock:
> > arch_spin_unlock(&lock->wait_lock);
>
> This doesn't make sense, there is no such thing as a store-acquire. What
> you're
elaxed(&lock->cnts, _QW_WAITING,
> + atomic_cond_read_relaxed(&lock->cnts, VAL == _QW_WAITING);
> + } while (atomic_cmpxchg_acquire(&lock->cnts, _QW_WAITING,
> _QW_LOCKED) != _QW_WAITING);
Patch looks good, so with an updated message:
Acked-by: Will Deacon
Will
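The writer slow-path pattern in the quoted patch can be modelled standalone with C11 atomics in place of the kernel's atomic_t helpers (the `_QW_*` values are copied from the qrwlock discussion; everything else here is illustrative):

```c
#include <stdatomic.h>

#define _QW_WAITING	0x100	/* a writer is waiting */
#define _QW_LOCKED	0x0ff	/* a writer holds the lock */

void qwrite_lock_slowpath(atomic_uint *cnts)
{
	unsigned int expected;

	do {
		/* ~ atomic_cond_read_relaxed(): spin until only we wait */
		while (atomic_load_explicit(cnts, memory_order_relaxed) !=
		       _QW_WAITING)
			;
		expected = _QW_WAITING;
		/* acquire on success gives the lock-acquisition barrier;
		 * the relaxed wait loop above needs no ordering of its own */
	} while (!atomic_compare_exchange_strong_explicit(cnts, &expected,
							  _QW_LOCKED,
							  memory_order_acquire,
							  memory_order_relaxed));
}
```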
obably got better with atomics but we moved to
> qspinlocks eventually (the Juno board didn't have atomics).
>
> (leaving the rest of the text below for Will's convenience)
Yes, I think it was this thread:
https://lore.kernel.org/lkml/alpine.DEB.2.20.1707261548560.2186@nanos
but I don't think you can really fix such hardware by changing the locking
algorithm (although my proposed cpu_relax() hack was worryingly effective).
Will
Hi Linus,
Please pull these three arm64 fixes for -rc8; summary in the tag. We
don't have anything else on the horizon, although two of these issues
(the asm constraint and kprobes bugs) have been around for a while so
you never know.
Cheers,
Will
--->8
The following changes sinc
On Mon, 12 Apr 2021 17:41:01 +0800, Jisheng Zhang wrote:
> If instruction being single stepped caused a page fault, the kprobes
> is cancelled to let the page fault handler continue as a normal page
> fault. But the local irqflags are disabled so cpu will restore pstate
> with DAIF m
On Thu, Apr 08, 2021 at 04:06:23PM +0100, Mark Rutland wrote:
> On Thu, Apr 08, 2021 at 03:56:04PM +0100, Will Deacon wrote:
> > On Thu, Apr 08, 2021 at 03:37:23PM +0100, Vincenzo Frascino wrote:
> > > diff --git a/arch/arm64/kernel/entry-common.c
> > > b/arch/
d asynchronous
> tag check faults")
> Cc: Catalin Marinas
> Cc: Will Deacon
> Reported-by: Will Deacon
> Signed-off-by: Vincenzo Frascino
> ---
> arch/arm64/include/asm/mte.h | 8
> arch/arm64/kernel/entry-common.c | 6 ++
> arch/arm64/kernel
Hi Joerg,
There's hardly anything on the SMMU front for 5.13, but please pull
these regardless. Summary in the tag.
Cheers,
Will
--->8
The following changes since commit 1e28eed17697bcf343c6743f0028cc3b5dd88bf0:
Linux 5.12-rc3 (2021-03-14 14:41:02 -0700)
are available in
Hi John,
On Thu, Apr 08, 2021 at 01:55:02PM +0100, John Garry wrote:
> On 08/04/2021 10:01, Jonathan Cameron wrote:
> > On Wed, 7 Apr 2021 21:40:05 +0100
> > Will Deacon wrote:
> >
> > > On Wed, Apr 07, 2021 at 05:49:02PM +0800, Qi Liu wrote:
> > > >
constrain usable stack space), and for
> + * compiler/arch-specific stack alignment to remove the lower bits.
> + */
> +#define KSTACK_OFFSET_MAX(x) ((x) & 0x3FF)
> +
> +/*
> + * These macros must be used during syscall entry when interrupts and
> + * preempt are disabled, and after user registers have been stored to
> + * the stack.
> + */
This comment is out of date, as this is called from preemptible context on
arm64. Does that matter in terms of offset randomness?
Will
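The entropy arithmetic behind the macro quoted above can be checked with a back-of-envelope sketch (the mask value is from the patch; the alignment mask and helper are illustrative assumptions): `KSTACK_OFFSET_MAX()` keeps 10 bits of the random value, and a 16-byte-aligned SP then discards the low 4 bits, leaving randomness in offset bits [9:4], i.e. 6 bits.

```c
#define KSTACK_OFFSET_MAX(x)	((x) & 0x3FF)	/* keep 10 random bits */
#define SP_ALIGN_MASK		(~0xFUL)	/* 16-byte-aligned SP */

/* Hypothetical helper: the offset that survives masking + alignment. */
unsigned long effective_offset(unsigned long rnd)
{
	return KSTACK_OFFSET_MAX(rnd) & SP_ALIGN_MASK;
}
```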
1 +
> arch/arm64/kernel/Makefile | 5 +
> arch/arm64/kernel/syscall.c | 16 ++++
> 3 files changed, 22 insertions(+)
Acked-by: Will Deacon
Will
mp;aic_vipi_flag, cpu));
> +
> + /*
> + * The atomic_fetch_or_release() above must complete before the
> + * atomic_read_acquire() below to avoid racing aic_ipi_unmask().
> + */
(same here)
> + smp_mb__after_atomic();
eturn ioremap_np(offset, size) ?: ioremap(offset, size);
but however it's done, the logic looks good to me and thanks Hector for
updating this:
Acked-by: Will Deacon
Will
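The `?:` fallback pattern mentioned above can be illustrated with stubs: try a non-posted mapping first and fall back to a normal one when the architecture returns NULL. The two mapping functions here are hypothetical stand-ins, not the kernel's `ioremap()`/`ioremap_np()`.

```c
#include <stddef.h>

static char posted_window[16], nonposted_window[16];
static int have_nonposted;	/* pretend arch capability flag */

/* Stand-in for ioremap_np(): NULL when the arch has no support. */
static void *map_np(unsigned long off, size_t size)
{
	(void)off; (void)size;
	return have_nonposted ? (void *)nonposted_window : NULL;
}

/* Stand-in for plain ioremap(). */
static void *map_posted(unsigned long off, size_t size)
{
	(void)off; (void)size;
	return posted_window;
}

void *map_device(unsigned long off, size_t size)
{
	void *p = map_np(off, size);

	return p ? p : map_posted(off, size);	/* the `?:` fallback */
}
```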
g. translation faults, permission faults) or some really
> > unnecessary guest faults caused by BBM, CMOs for the first vcpu are
>
> I can't figure out what BBM means.
Oh, I know that one! BBM means "Break Before Make". Not to be confused with
DBM (Dirty Bit Management) or BFM (Bit Field Move).
Will
1 means smaller.
> thr_len - set threshold for statistics.
> thr_mode - set threshold mode. 0 means count when bigger than
>threshold, and 1 means smaller.
>
> Reviewed-by: Jonathan Cameron
Do you have a link to this review, please?
Will
[Moving Mark to To: since I'd like his view on this]
On Thu, Apr 01, 2021 at 02:45:21PM -0500, Rob Herring wrote:
> On Wed, Mar 31, 2021 at 11:01 AM Will Deacon wrote:
> >
> > On Tue, Mar 30, 2021 at 12:09:38PM -0500, Rob Herring wrote:
> > > On Tue, Mar 30, 2021 a
On Wed, 7 Apr 2021 16:44:48 +0800, Zenghui Yu wrote:
> Per SMMUv3 spec, there is no Size and Addr field in the PREFETCH_CONFIG
> command and they're not used by the driver. Remove them.
>
> We can add them back if we're going to use PREFETCH_ADDR in the future.
Applied
tations leaking into the page-table
code here; it doesn't feel so unlikely that future implementations of this
IP might have greater addressing capabilities, for example, and so I don't
see why the page-table code needs to police this.
> + cfg->pgsize_bitmap &= SZ_16K;
> + if (!cfg->pgsize_bitmap)
> + return NULL;
This is worrying (and again, I don't think this belongs here). How is this
thing supposed to work if the CPU is using 4k pages?
Will
c(sizeof(*stream), GFP_KERNEL);
> + if (!stream) {
> + ret = -ENOMEM;
> + goto error;
> + }
Just in case you missed it, a cocci bot noticed that you're using GFP_KERNEL
to allocate while holding a spinlock here:
https://lore.kernel.org/r/alpine.DEB.2.22.394.2104041724340.2958@hadrien
Will
On Thu, 1 Apr 2021 19:16:41 +0800, Qi Liu wrote:
> The initialization of value in function armv8pmu_read_hw_counter()
> and armv8pmu_read_counter() seem redundant, as they are soon updated.
> So, we can remove them.
Applied to will (for-next/perf), thanks!
[1/1] arm64: perf: Remove
On Wed, Mar 31, 2021 at 12:52:11PM -0500, Rob Herring wrote:
> On Wed, Mar 31, 2021 at 10:38 AM Will Deacon wrote:
> >
> > On Tue, Mar 30, 2021 at 04:08:11PM -0500, Rob Herring wrote:
> > > On Tue, Mar 30, 2021 at 12:09 PM Rob Herring wrote:
> > > > On Tue,
Hi Kees,
On Wed, Mar 31, 2021 at 01:54:52PM -0700, Kees Cook wrote:
> Hi Will (and Mark and Catalin),
>
> Can you take this via the arm64 tree for v5.13 please? Thomas has added
> his Reviewed-by, so it only leaves arm64's. :)
Sorry, these got mixed up in my inbox so I just rep
* The AAPCS mandates a 16-byte (i.e. 4-bit) aligned SP at
> + * function boundaries. We want at least 5 bits of entropy so we
> + * must randomize at least SP[8:4].
> + */
> + choose_random_kstack_offset(get_random_int() & 0x1FF);
Not sure about either of these new calls -- aren't we preemptible in
invoke_syscall()?
Will
quot;memory"); \
Using the "m" constraint here is dangerous if you don't actually evaluate it
inside the asm. For example, if the compiler decides to generate an
addressing mode relative to the stack but with writeback (autodecrement), then
the stack pointer will be off by 8 bytes. Can you use "o" instead?
Will
On Tue, Mar 30, 2021 at 12:09:38PM -0500, Rob Herring wrote:
> On Tue, Mar 30, 2021 at 10:31 AM Will Deacon wrote:
> >
> > On Wed, Mar 10, 2021 at 05:08:29PM -0700, Rob Herring wrote:
> > > From: Raphael Gault
> > >
> > > Keep track of event opened w
umentation/arm64/index.rst | 1 +
> .../arm64/perf_counter_user_access.rst| 60 +++
We already have Documentation/arm64/perf.rst so I think you can add this
in there as a new section.
Will
On Tue, Mar 30, 2021 at 04:08:11PM -0500, Rob Herring wrote:
> On Tue, Mar 30, 2021 at 12:09 PM Rob Herring wrote:
> > On Tue, Mar 30, 2021 at 10:31 AM Will Deacon wrote:
> > > The logic here feels like it
> > > could do with a bit of untangling.
> >
> > Yes
On Tue, Mar 30, 2021 at 10:35:21AM -0700, Daniel Walker wrote:
> On Mon, Mar 29, 2021 at 11:07:51AM +0100, Will Deacon wrote:
> > On Thu, Mar 25, 2021 at 12:59:56PM -0700, Daniel Walker wrote:
> > > On Thu, Mar 25, 2021 at 01:03:55PM +0100, Christophe Leroy wrote:
> > >
On Wed, Mar 31, 2021 at 05:22:18PM +0800, Jianlin Lv wrote:
> On Tue, Mar 30, 2021 at 5:31 PM Will Deacon wrote:
> >
> > On Tue, Mar 30, 2021 at 03:42:35PM +0800, Jianlin Lv wrote:
> > > A64_MOV is currently mapped to Add Instruction. Architecturally MOV
> > &
pose micro/arch events supported by this PMU */
> if ((hw_event_id > 0) && (hw_event_id < ARMV8_PMUV3_MAX_COMMON_EVENTS)
> && test_bit(hw_event_id, armpmu->pmceid_bitmap)) {
> @@ -1115,6 +1181,8 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char
> *name,
> cpu_pmu->filter_match = armv8pmu_filter_match;
>
> cpu_pmu->pmu.event_idx = armv8pmu_access_event_idx;
> + cpu_pmu->pmu.event_mapped = armv8pmu_event_mapped;
> + cpu_pmu->pmu.event_unmapped = armv8pmu_event_unmapped;
>
> cpu_pmu->name = name;
> cpu_pmu->map_event = map_event;
> @@ -1290,6 +1358,18 @@ void arch_perf_update_userpage(struct perf_event
> *event,
> userpg->cap_user_time = 0;
> userpg->cap_user_time_zero = 0;
> userpg->cap_user_time_short = 0;
> + userpg->cap_user_rdpmc = !!(event->hw.flags & ARMPMU_EL0_RD_CNTR) &&
> + (event->oncpu == smp_processor_id());
> +
> + if (userpg->cap_user_rdpmc) {
> + struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
> +
> + if (armv8pmu_event_is_64bit(event) &&
> + (armv8pmu_has_long_event(cpu_pmu) || (userpg->index == 32)))
The '32' here is the fake index for the cycle counter, right? I think that
was introduced in the previous patch, so let's add a #define for it.
Will
On Tue, Mar 30, 2021 at 08:03:36AM -0700, Rob Clark wrote:
> On Tue, Mar 30, 2021 at 2:34 AM Will Deacon wrote:
> >
> > On Mon, Mar 29, 2021 at 09:02:50PM -0700, Rob Clark wrote:
> > > On Mon, Mar 29, 2021 at 7:47 AM Will Deacon wrote:
> > > >
> > &g
ed-off-by: Raphael Gault
> Signed-off-by: Rob Herring
> ---
> arch/arm64/kernel/perf_event.c | 18 ++
> include/linux/perf/arm_pmu.h | 2 ++
> 2 files changed, 20 insertions(+)
Acked-by: Will Deacon
Will
On Tue, Mar 30, 2021 at 04:57:50AM -0700, zhouchuangao wrote:
> It can be optimized at compile time.
Hmm, I don't see it (and I also don't understand why we care). Do you have
numbers showing that this is worthwhile?
Will
ected
> indentation.
Applied to will (for-next/perf), thanks!
[1/1] docs: perf: Address some html build warnings
https://git.kernel.org/will/c/b88f5e9792cc
Cheers,
--
Will
https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev
On Mon, Mar 29, 2021 at 09:02:50PM -0700, Rob Clark wrote:
> On Mon, Mar 29, 2021 at 7:47 AM Will Deacon wrote:
> >
> > On Fri, Mar 26, 2021 at 04:13:02PM -0700, Eric Anholt wrote:
> > > db820c wants to use the qcom smmu path to get HUPCF set (which keeps
> > &
ot;, so what's the up-side of aligning them like this?
Cheers,
Will
_SPIN_LOCK_UNLOCKED(lockname.wait_lock) \
> , .wait_list = LIST_HEAD_INIT(lockname.wait_list) \
> + __OSQ_MUTEX_INITIALIZER(lockname) \
You don't need the lockname parameter for this macro.
Will
sm so we can't go all the way to the TTBR1 path.
What do you mean by "doesn't have separate pagetables support yet"? The
compatible string doesn't feel like the right way to determine this.
Will
ecause of SW_TAGS KASAN.
That said, what is there to do? As things stand, won't kernel stack
addresses end up using KASAN_TAG_KERNEL?
Will
Hi Hector,
On Fri, Mar 26, 2021 at 05:58:15PM +0900, Hector Martin wrote:
> On 25/03/2021 04.57, Will Deacon wrote:
> > > + event = readl(ic->base + AIC_EVENT);
> > > + type = FIELD_GET(AIC_EVENT_TYPE, event);
> > > + irq
erence between my series and Daniel's series. So I'll
> > finish taking Will's comment into account and we'll send out a v3 soon.
>
> It doesn't solve the needs of Cisco, I've stated many times your changes have
> little value. Please stop submitt
On Mon, Mar 29, 2021 at 05:06:20PM +0800, liuqi (BA) wrote:
>
>
> On 2021/3/29 16:47, Will Deacon wrote:
> > On Fri, Mar 26, 2021 at 05:07:41PM +0800, Shaokun Zhang wrote:
> > > Apologies for the mistake.
> > >
> > > Will, shall I send a
On Fri, Mar 26, 2021 at 05:07:41PM +0800, Shaokun Zhang wrote:
> Apologies for the mistake.
>
> Will, shall I send a new version v5 to fix this issue or other?
Please send additional patches on top now that these are queued.
Thanks,
Will
On Thu, Mar 25, 2021 at 12:18:38PM +0100, Christophe Leroy wrote:
>
>
> Le 03/03/2021 à 18:57, Will Deacon a écrit :
> > On Tue, Mar 02, 2021 at 05:25:22PM +, Christophe Leroy wrote:
> > > Most architectures have similar boot command line manipulation
> > &
On Thu, Mar 25, 2021 at 08:24:53PM +0100, Dmitry Vyukov wrote:
> On Thu, Mar 25, 2021 at 8:10 PM Will Deacon wrote:
> > On Thu, Mar 25, 2021 at 07:34:54PM +0100, Dmitry Vyukov wrote:
> > > On Thu, Mar 25, 2021 at 7:20 PM Will Deacon wrote:
> > > > On Thu, Mar 18, 20
On Thu, Mar 25, 2021 at 07:34:54PM +0100, Dmitry Vyukov wrote:
> On Thu, Mar 25, 2021 at 7:20 PM Will Deacon wrote:
> >
> > On Thu, Mar 18, 2021 at 08:34:16PM +0100, Dmitry Vyukov wrote:
> > > On Thu, Mar 18, 2021 at 8:31 PM syzbot
> > > wrote:
> > > >
65ba0efa8e
> > userspace arch: arm64
> >
> > Unfortunately, I don't have any reproducer for this issue yet.
> >
> > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > Reported-by: syzbot+0b036374a865ba0ef...@syzkaller.appspotmail.com
only the SFM error occurs.
Applied to will (for-joerg/arm-smmu/updates), thanks!
[1/1] iommu/arm-smmu-v3: add bit field SFM into GERROR_ERR_MASK
https://git.kernel.org/will/c/655c447c97d7
Cheers,
--
Will
On Mon, 15 Mar 2021 11:32:24 +0530, Rajendra Nayak wrote:
> Add the SoC specific compatible for SC7280 implementing
> arm,mmu-500.
Applied to will (for-joerg/arm-smmu/updates), thanks!
[1/1] dt-bindings: arm-smmu: Add compatible for SC7280 SoC
https://git.kernel.org/will/c/a9aa2b
handler to flush rcaches
> iommu/vt-d: Remove IOVA domain rcache flushing for CPU offlining
> iommu: Delete iommu_dma_free_cpu_cached_iovas()
> iommu: Stop exporting free_iova_fast()
Looks like this is all set for 5.13, so hopefully Joerg can stick it in
-next for a bit more exposure.
Will
On Tue, Mar 09, 2021 at 12:10:44PM +0530, Sai Prakash Ranjan wrote:
> On 2021-02-05 17:38, Sai Prakash Ranjan wrote:
> > On 2021-02-04 03:16, Will Deacon wrote:
> > > On Tue, Feb 02, 2021 at 11:56:27AM +0530, Sai Prakash Ranjan wrote:
> > > > On 2021-02-01 23:50, Jor
| 2 +-
> drivers/iommu/mtk_iommu_v1.c | 9 -
> 2 files changed, 5 insertions(+), 6 deletions(-)
Both of these patches look fine to me, but you probably need to check
the setting of MODULE_OWNER after:
https://lore.kernel.org/r/f4de29d8330981301c1935e667b507254a2691ae.1616157612.git.robin.mur...@arm.com
Will
ivers/iommu/tegra-smmu.c | 5 +---
> drivers/iommu/virtio-iommu.c| 5 +---
> include/linux/iommu.h | 29 -
> 22 files changed, 31 insertions(+), 98 deletions(-)
I was worried this might blow up with !CONFIG_IOMMU_API, but actually
it all looks fine and is much cleaner imo so:
Acked-by: Will Deacon
Will
/iommu/sprd-iommu.c | 1 +
> drivers/iommu/virtio-iommu.c| 1 +
> include/linux/iommu.h | 9 +
> 5 files changed, 5 insertions(+), 8 deletions(-)
Acked-by: Will Deacon
Will
take SC7180 as an example, GPU SMMU was QSMMU(QCOM SMMU IP)
> > > > and APSS SMMU was SMMU500(ARM SMMU IP).
> > > >
> > > > APSS SMMU compatible - ("qcom,sc7180-smmu-500", "arm,mmu-500")
> > > > GPU SMMU compatible - ("qcom,sc
-
> 1 file changed, 8 insertions(+), 7 deletions(-)
Acked-by: Will Deacon
Will
Hi Linus,
Please pull these arm64 fixes for -rc5. Minor fixes all over, ranging
from typos to tests to errata workarounds. Summary in the tag.
Cheers,
Will
--->8
The following changes since commit c8e3866836528a4ba3b0535834f03768d74f7d8e:
perf/arm_dmc620_pmu: Fix error return code
On Thu, Mar 25, 2021 at 11:07:40PM +0900, Hector Martin wrote:
> On 25/03/2021 04.09, Arnd Bergmann wrote:
> > On Wed, Mar 24, 2021 at 7:12 PM Will Deacon wrote:
> > >
> > > > +/*
> > > > + * ioremap_np needs an explicit architecture implementation, as
On Mon, 8 Feb 2021 21:04:58 +0800, Qi Liu wrote:
> For each PMU event, there is a SMMU_EVENT_ATTR(xx, XX) and
> &smmu_event_attr_xx.attr.attr. Let's redefine the SMMU_EVENT_ATTR
> to simplify the smmu_pmu_events.
Applied to will (for-next/perf), thanks!
[1/1] drivers/perf: S
emit()
> drivers/perf: convert sysfs sprintf family to sysfs_emit
>
> [...]
Applied to will (for-next/perf), thanks!
[1/3] drivers/perf: convert sysfs snprintf family to sysfs_emit
https://git.kernel.org/will/c/700a9cf0527c
[2/3] drivers/perf: convert sysfs scnprintf fa
/arm64/Makefile | 5 +++--
> tools/perf/arch/powerpc/Makefile | 5 +++--
> tools/perf/arch/s390/Makefile| 5 +++--
> 3 files changed, 9 insertions(+), 6 deletions(-)
For arm64:
Acked-by: Will Deacon
Will
oid)
> | ^~
Applied first patch ONLY to arm64 (for-next/fixes), thanks!
[1/2] arm64/process.c: fix Wmissing-prototypes build warnings
https://git.kernel.org/arm64/c/baa96377bc7b
Cheers,
--
Will
* ordered after the aic_ic_write() above (to avoid dropping vIPIs) and
> + * before IPI handling code (to avoid races handling vIPIs before they
> + * are signaled). The former is taken care of by the release semantics
> + * of the write portion, while the latter is taken care of by the
> + * acquire semantics of the read portion.
> + */
> + firing = atomic_fetch_andnot(enabled, &aic_vipi_flag[this_cpu]) &
> enabled;
Does this also need to be ordered after the Ack? For example, if we have
something like:
CPU 0 CPU 1
aic_ipi_send_mask()
atomic_fetch_andnot(flag)
atomic_fetch_or_release(flag)
aic_ic_write(AIC_IPI_SEND)
aic_ic_write(AIC_IPI_ACK)
sorry if it's a stupid question, I'm just not sure about the cases in which
the hardware will pend things for you.
Will
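The protocol being questioned can be modelled roughly as follows, with C11 atomics standing in for the kernel helpers and the hardware register writes elided as comments (everything here is an assumption for illustration, not the driver's actual code): the sender publishes the flag with a release RMW before ringing the doorbell, and the receiver acks the interrupt and then clears-and-collects the flag with an acquire RMW.

```c
#include <stdatomic.h>

static atomic_uint vipi_flag;

void vipi_send(unsigned int mask)
{
	/* ~ atomic_fetch_or_release(): publish before the doorbell */
	atomic_fetch_or_explicit(&vipi_flag, mask, memory_order_release);
	/* aic_ic_write(AIC_IPI_SEND) would follow here */
}

unsigned int vipi_ack_and_collect(unsigned int enabled)
{
	/* aic_ic_write(AIC_IPI_ACK) would precede this point */
	/* ~ smp_mb__after_atomic(): order the ack against the RMW below */
	atomic_thread_fence(memory_order_seq_cst);
	return atomic_fetch_and_explicit(&vipi_flag, ~enabled,
					 memory_order_acquire) & enabled;
}
```

A collect after a send sees exactly the published bits once, which is the property the ordering is meant to preserve.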
On Wed, Mar 24, 2021 at 06:59:21PM +, Mark Rutland wrote:
> On Wed, Mar 24, 2021 at 06:38:18PM +0000, Will Deacon wrote:
> > On Fri, Mar 05, 2021 at 06:38:48AM +0900, Hector Martin wrote:
> > > Apple ARM64 SoCs have a ton of vendor-specific registers we're going to
>
ecommends that any register names created in the
| IMPLEMENTATION DEFINED register spaces be prefixed with IMP_ and
| postfixed with _ELx, where appropriate.
and it seems like we could follow that here without much effort, if you
don't mind.
Will
o remove if you just moved the definitions,
rather than reordered them and changed the comments at the same time!
But I *think* nothing had changed, so:
Acked-by: Will Deacon
Will
p((addr), (size),
> __pgprot(PROT_DEVICE_nGnRnE))
Probably worth noting that whether or not this actually results in a
non-posted mapping depends on the system architecture, but this is the
best we can do, so:
Acked-by: Will Deacon
I would personally prefer that drivers didn't have to ca
On Fri, Mar 05, 2021 at 06:38:40AM +0900, Hector Martin wrote:
> The implementor will be used to condition the FIQ support quirk.
>
> The specific CPU types are not used at the moment, but let's add them
> for documentation purposes.
>
> Signed-off-by: Hector Martin
>
for any given
> + * device and bus. Portable drivers with a good reason to want non-posted
> + * write semantics should always provide an ioremap() fallback in case
> + * ioremap_np() is not available.
> + */
> +#ifndef ioremap_np
> +#define ioremap_np ioremap_np
> +static inline void __iomem *ioremap_np(phys_addr_t offset, size_t size)
> +{
> + return NULL;
> +}
> +#endif
Can we implement the generic pci_remap_cfgspace() in terms of ioremap_np()
if it is supported by the architecture? That way, we could avoid defining
both on arm64.
Will
> + bic x0, x0, #(0xf << ID_AA64MMFR1_VHE_SHIFT)
> + str x0, [x1]
I find it a bit bizarre doing this here, as for the primary CPU we're still
a way away from parsing the early parameters and for secondary CPUs this
doesn't need to be done for each one. Furthermore, this same code is run
on the resume path, which can probably then race with itself.
Is it possible to do it later on the boot CPU only, e.g. in
init_feature_override()? We should probably also log a warning that we're
ignoring the option because nVHE is not available.
Will
> So that we can support both cases dynamically, this patch updates the
> FIQ exception handling code to operate the same way as the IRQ handling
> code, with its own handle_arch_fiq handler. Where a root FIQ handler is
> not registered, an unexpected FIQ exception will trigger the defa
not have an
> effect on other systems - if spurious FIQs were arriving, they would
> already panic the kernel.
>
> Signed-off-by: Hector Martin
> Signed-off-by: Mark Rutland
> Tested-by: Hector Martin
> Cc: Catalin Marinas
> Cc: James Morse
> Cc: Marc Zyngier
>