Michael Ellerman writes:
> It was reported that soft dirty tracking doesn't work when using the
> Radix MMU.
>
> The tracking is supposed to work by clearing the soft dirty bit for a
> mapping and then write protecting the PTE. If/when the page is written
> to, a page fault occurs and the soft
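The write-protect-then-fault mechanism described above can be sketched as a small userspace model. This is purely illustrative: the struct and helper names are hypothetical and do not reflect the real radix PTE bit layout, only the state machine (clear soft-dirty and write-protect; a later write faults and sets soft-dirty again).

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of one PTE's tracking state; the real radix
 * PTE encodes these differently. */
struct pte {
	bool write_protected;
	bool soft_dirty;
};

/* Clearing soft-dirty also write-protects the mapping, so the next
 * write is guaranteed to fault. */
static void clear_soft_dirty(struct pte *p)
{
	p->soft_dirty = false;
	p->write_protected = true;
}

/* The fault handler restores write permission and records the write
 * by setting the soft-dirty bit. */
static void handle_write_fault(struct pte *p)
{
	p->write_protected = false;
	p->soft_dirty = true;
}

static bool written_since_clear(const struct pte *p)
{
	return p->soft_dirty;
}
```

The bug being reported is that on radix one half of this cycle (re-setting soft-dirty on fault) was not happening.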
On Wed May 10, 2023 at 1:31 PM AEST, Rohan McLure wrote:
> Mark writes to hypervisor IPI state with atomic writes, so that KCSAN
> recognises the asynchronous issuing of kvmppc_{set,clear}_host_ipi as
> intended. Mark asynchronous polls to this variable in
> kvm_ppc_read_one_intr().
>
>
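The marked-access pattern the patch describes can be approximated in userspace with the usual volatile-cast definitions of READ_ONCE/WRITE_ONCE. A minimal sketch, with `host_ipi` standing in for the per-CPU hypervisor IPI state (the helper names here are illustrative simplifications of the kernel's):

```c
#include <assert.h>

/* Userspace stand-ins for the kernel's marked-access macros; KCSAN
 * treats accesses made through these as intentionally concurrent. */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

static int host_ipi;	/* stand-in for the per-CPU host_ipi flag */

static void set_host_ipi(void)   { WRITE_ONCE(host_ipi, 1); }
static void clear_host_ipi(void) { WRITE_ONCE(host_ipi, 0); }

/* The poll side also uses a marked read, since another CPU may be
 * writing the flag concurrently. */
static int poll_host_ipi(void)   { return READ_ONCE(host_ipi); }
```

The point is not to change generated code (the accesses were already racy-by-design) but to document the intent so KCSAN stops reporting them.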
On Fri, 2023-05-05 at 14:44 -0500, gjo...@linux.vnet.ibm.com wrote:
> From: Greg Joyce
>
> Changes to the PLPKS API require minor updates to the SED Opal
> PLPKS keystore code.
>
> Signed-off-by: Greg Joyce
[+ Nayna]
This patch will need to be squashed with patch 2.
> ---
>
On Wed May 10, 2023 at 1:31 PM AEST, Rohan McLure wrote:
> IPI message flags are observed and consequently consumed in the
> smp_ipi_demux_relaxed function, which handles these message sources
until it observes no more arriving. Mark the checked loop guard with
> READ_ONCE, to signal to KCSAN
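The loop-guard pattern can be sketched as below. This is a simplified userspace model (the variable and function names are illustrative, not the kernel's): READ_ONCE in the loop condition stops the compiler hoisting the load out of the loop and tells KCSAN the racy read is deliberate.

```c
#include <assert.h>

#define READ_ONCE(x) (*(volatile __typeof__(x) *)&(x))

/* Bitmask of pending IPI message types, set asynchronously by
 * other CPUs in the real code. */
static unsigned long ipi_messages;

/* Handle message sources until none remain. The guard re-reads the
 * flag word each pass with a marked access. */
static int demux_messages(void)
{
	int passes = 0;
	unsigned long all;

	while ((all = READ_ONCE(ipi_messages)) != 0) {
		/* "Handle" and consume the observed messages. */
		ipi_messages &= ~all;
		passes++;
	}
	return passes;
}
```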
On Wed May 10, 2023 at 1:31 PM AEST, Rohan McLure wrote:
> The idle_state entry in the PACA on PowerNV features a bit which is
> atomically tested and set through ldarx/stdcx. to be used as a spinlock.
> This lock then guards access to other bit fields of idle_state. KCSAN
> cannot differentiate
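The bit-spinlock idiom being described (an ldarx/stdcx. loop that atomically tests and sets one bit of a word whose other bits it then guards) can be sketched with C11 atomics. A userspace approximation, with illustrative names:

```c
#include <assert.h>
#include <stdatomic.h>

#define LOCK_BIT (1UL << 0)

/* The lock bit shares a word with the fields it guards, as with the
 * PACA idle_state on PowerNV; this is a simplified stand-in. */
static atomic_ulong idle_state;

static void idle_state_lock(void)
{
	/* Spin until we are the one to flip the bit from 0 to 1.
	 * fetch_or plays the role of the ldarx/stdcx. loop. */
	while (atomic_fetch_or_explicit(&idle_state, LOCK_BIT,
					memory_order_acquire) & LOCK_BIT)
		;
}

static void idle_state_unlock(void)
{
	atomic_fetch_and_explicit(&idle_state, ~LOCK_BIT,
				  memory_order_release);
}
```

Because the guarded fields live in the same word as the lock bit, KCSAN sees plain accesses to a word that is also accessed atomically, which is why the annotations discussed in the patch are needed.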
On Wed May 10, 2023 at 1:31 PM AEST, Rohan McLure wrote:
> The power_save callback can be overwritten by another core at boot time.
> Specifically, null values will be replaced exactly once with the callback
> suitable for the particular platform (PowerNV / pseries lpars), making
> this value a
On Wed May 10, 2023 at 1:31 PM AEST, Rohan McLure wrote:
> Prior to this patch, data races are detectable by KCSAN of the following
> forms:
>
> [1] Asynchronous calls to mmiowb_set_pending() from an interrupt context
> or otherwise outside of a critical section
> [2] Interrupted critical
On Wed May 10, 2023 at 1:31 PM AEST, Rohan McLure wrote:
> Annotate the release barrier and memory clobber (in effect, producing a
> compiler barrier) in the publish_tail_cpu call. These barriers have the
> effect of ensuring that qnode attributes are all written to prior to
> publishing the node to
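The initialise-then-publish-with-release ordering the patch annotates can be sketched as follows. This is a simplified userspace model (the actual publish_tail_cpu does an atomic exchange on an encoded tail value; the structures and names below are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

struct qnode {
	int cpu;
	int locked;
};

static struct qnode qnodes[4];
static atomic_int tail = -1;	/* index of last published node, -1 if none */

/* Write all qnode attributes first, then publish the node with a
 * release store, so a CPU that observes the new tail (with an acquire
 * load) is guaranteed to also observe the initialised fields. */
static void publish_tail_cpu(int idx, int cpu)
{
	qnodes[idx].cpu = cpu;
	qnodes[idx].locked = 0;
	atomic_store_explicit(&tail, idx, memory_order_release);
}
```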
This file contains only the enter_prom implementation now.
Trim includes and update header comment while we're here.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/Makefile | 8 +++--
.../kernel/{entry_64.S => prom_entry_64.S} | 30 ++-
The _switch stack frame setup is substantially the same, as are the
comments. The differences in how the stack and current are switched,
and how other hardware and software housekeeping is done, are moved
into macros.
Generated code should be unchanged.
Signed-off-by: Nicholas Piggin
---
Change the order of some operations and change some register numbers in
preparation to merge 32-bit and 64-bit switch.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/entry_32.S | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/kernel/entry_32.S
64-bit has removed the sync from _switch since commit 9145effd626d1
("powerpc/64: Drop explicit hwsync in context switch"). The same
logic there should apply to 32-bit. Remove the sync and replace with
a placeholder comment (32 and 64 will be merged with a later change).
Signed-off-by: Nicholas
Move some 64-bit specifics out of the function epilogue and rearrange
this to be a bit neater, use 32-bit mem ops for CR save/restore, and
change some register numbers.
---
arch/powerpc/kernel/entry_64.S | 38 --
1 file changed, 18 insertions(+), 20 deletions(-)
The large hunk of SLB pinning in _switch asm code makes it more
difficult to see everything else that's going on. Also, this
increasingly less important path is likely to take up icache
footprint and fetch bandwidth.
Move it out of line.
Signed-off-by: Nicholas Piggin
---
This got a positive response so I'll post again. If anything ppc32 gets
complicated by all the ppc64 crud so if Christophe is okay with it then
it can't be too bad.
Thanks,
Nick
Since v1:
- Don't re-order 32-bit prologue.
- Improve Kconfig conditional includes.
- Break out code changes into
We are now in a position where no caller of pin_user_pages() requires the
vmas parameter at all, so eliminate this parameter from the function and
all callers.
This clears the way to removing the vmas parameter from GUP altogether.
Acked-by: David Hildenbrand
Acked-by: Dennis Dalessandro (for
On Sun, 7 May 2023, Maciej W. Rozycki wrote:
> > We're going to land this series this cycle, come hell or high water.
>
> Thank you for coming back to me and for your promise. I'll strive to
> address your concerns next weekend.
>
> Unfortunately a PDU in my remote lab has botched up and
On Sat, 13 May 2023, Helge Deller wrote:
> Hi Hugh,
>
> On 5/10/23 06:52, Hugh Dickins wrote:
> > To keep balance in future, remember to pte_unmap() after a successful
> > get_ptep(). And (we might as well) pretend that flush_cache_pages()
> > really needed a map there, to read the pfn before
On Mon, May 08, 2023 at 05:52:35PM -0400, Frank Li wrote:
> Layerscape has PME interrupt, which can be used as linkup notifier.
> Set CFG_READY bit of PEX_PF0_CONFIG to enable accesses from root complex
> when linkup detected.
>
> Signed-off-by: Xiaowei Bao
> Signed-off-by: Frank Li
One minor