__get_user_atomic_128_aligned() stores to kaddr using stvx which is a
VMX store instruction, hence kaddr must be 16 byte aligned, otherwise
the store won't occur as expected.
Unfortunately when we call __get_user_atomic_128_aligned() in
p9_hmi_special_emu(), the buffer we pass as kaddr (i.e. vbuf) is not
guaranteed to be 16 byte aligned.
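For illustration, a minimal sketch of the constraint (vbuf is the buffer
name from the report; __aligned() is the usual kernel attribute, and the
macro arguments are assumed, not quoted from the patch):

	/* stvx ignores the low 4 bits of the effective address, so an
	 * unaligned kaddr silently stores to the wrong 16 bytes.
	 * Forcing the buffer's alignment is the obvious fix shape: */
	u8 vbuf[16] __aligned(16);

	__get_user_atomic_128_aligned(vbuf, uaddr, err);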
alignment_handler currently only tests the unaligned cases but it can
also be useful for testing the workaround for the P9N DD2.1 vector CI
load issue fixed by p9_hmi_special_emu(). This workaround was
introduced in 5080332c2c89 ("powerpc/64s: Add workaround for P9 vector
CI load issue").
This change extends alignment_handler to also exercise that workaround.
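For reference, a hedged userspace sketch of how a test can reach that
emulation path: map something cache-inhibited (the PCI resource path
below is purely illustrative) and issue a VMX load from it.

	#include <fcntl.h>
	#include <sys/mman.h>

	static int load_from_ci_memory(void)
	{
		/* PCI MMIO resources are mapped cache-inhibited on powerpc;
		 * a 16-byte VMX load from such a mapping is what
		 * p9_hmi_special_emu() must emulate on affected parts. */
		int fd = open("/sys/bus/pci/devices/0000:00:00.0/resource0",
			      O_RDONLY);
		void *ci;

		if (fd < 0)
			return -1;
		ci = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
		if (ci == MAP_FAILED)
			return -1;
		asm volatile("lvx 0, 0, %0" : : "r" (ci) : "memory");
		return 0;
	}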
On Mon, Oct 12, 2020 at 09:02:54PM +0100, Matthew Wilcox wrote:
> On Mon, Oct 12, 2020 at 12:53:54PM -0700, Ira Weiny wrote:
> > On Mon, Oct 12, 2020 at 05:44:38PM +0100, Matthew Wilcox wrote:
> > > On Mon, Oct 12, 2020 at 09:28:29AM -0700, Dave Hansen wrote:
> > > > kmap_atomic() is always preferred over kmap()/kmap_thread().
Hi Christophe,
I love your patch! Yet something to improve:
[auto build test ERROR on powerpc/next]
[also build test ERROR on v5.9 next-20201012]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
On Mon, Oct 12, 2020 at 12:53:54PM -0700, Ira Weiny wrote:
> On Mon, Oct 12, 2020 at 05:44:38PM +0100, Matthew Wilcox wrote:
> > On Mon, Oct 12, 2020 at 09:28:29AM -0700, Dave Hansen wrote:
> > > kmap_atomic() is always preferred over kmap()/kmap_thread().
> > > kmap_atomic() is _much_ more lightweight since its TLB invalidation
> > > is always CPU-local and never broadcast.
On Mon, Oct 12, 2020 at 05:44:38PM +0100, Matthew Wilcox wrote:
> On Mon, Oct 12, 2020 at 09:28:29AM -0700, Dave Hansen wrote:
> > kmap_atomic() is always preferred over kmap()/kmap_thread().
> > kmap_atomic() is _much_ more lightweight since its TLB invalidation is
> > always CPU-local and never broadcast.
On 10/12/20 11:22 AM, Catalin Marinas wrote:
> On Mon, Oct 12, 2020 at 11:03:33AM -0600, Khalid Aziz wrote:
>> On 10/10/20 5:09 AM, Catalin Marinas wrote:
>>> On Wed, Oct 07, 2020 at 02:14:09PM -0600, Khalid Aziz wrote:
On 10/7/20 1:39 AM, Jann Horn wrote:
> arch_validate_prot() is a hook that can validate whether a given set
> of protection flags is valid in an mprotect() operation.
On Mon, Oct 12, 2020 at 11:03:33AM -0600, Khalid Aziz wrote:
> On 10/10/20 5:09 AM, Catalin Marinas wrote:
> > On Wed, Oct 07, 2020 at 02:14:09PM -0600, Khalid Aziz wrote:
> >> On 10/7/20 1:39 AM, Jann Horn wrote:
> >>> arch_validate_prot() is a hook that can validate whether a given set of
> >>> protection flags is valid in an mprotect() operation.
On 10/10/20 5:09 AM, Catalin Marinas wrote:
> Hi Khalid,
>
> On Wed, Oct 07, 2020 at 02:14:09PM -0600, Khalid Aziz wrote:
>> On 10/7/20 1:39 AM, Jann Horn wrote:
>>> arch_validate_prot() is a hook that can validate whether a given set of
>>> protection flags is valid in an mprotect() operation. It is given the
>>> set of protection flags and the address being modified.
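For context, the generic fallback of this hook is essentially a
whitelist of flags, roughly as below (a sketch of the shape in
include/linux/mman.h, not a quote of it):

	static inline bool arch_validate_prot(unsigned long prot,
					      unsigned long addr)
	{
		/* reject anything beyond the universally valid flags */
		return (prot & ~(PROT_READ | PROT_WRITE |
				 PROT_EXEC | PROT_SEM)) == 0;
	}

Architectures that accept extra bits (sparc ADI is the example in this
thread) override it, which is where validating against the affected
address range starts to matter.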
On Mon, Oct 12, 2020 at 09:28:29AM -0700, Dave Hansen wrote:
> kmap_atomic() is always preferred over kmap()/kmap_thread().
> kmap_atomic() is _much_ more lightweight since its TLB invalidation is
> always CPU-local and never broadcast.
>
> So, basically, unless you *must* sleep while the mapping is in place,
> kmap_atomic() is preferred.
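As a sketch, the pattern being weighed (both APIs are long-standing;
buf, page and len are assumed locals):

	void *vaddr = kmap_atomic(page);  /* per-CPU slot, may not sleep */
	memcpy(buf, vaddr, len);
	kunmap_atomic(vaddr);

versus:

	void *vaddr = kmap(page);         /* may sleep; global mapping,   */
	memcpy(buf, vaddr, len);          /* teardown can broadcast TLB   */
	kunmap(page);                     /* invalidations                */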
On 10/12/20 9:19 AM, Eric Biggers wrote:
> On Sun, Oct 11, 2020 at 11:56:35PM -0700, Ira Weiny wrote:
>>> And I still don't really understand. After this patchset, there is
>>> still code nearly identical to the above (doing a temporary mapping
>>> just for a memcpy) that would still be using kmap_atomic().
On Sun, Oct 11, 2020 at 11:56:35PM -0700, Ira Weiny wrote:
> >
> > And I still don't really understand. After this patchset, there is
> > still code nearly identical to the above (doing a temporary mapping
> > just for a memcpy) that would still be using kmap_atomic().
>
> I don't understand.
On Mon, Oct 12, 2020 at 08:58:36PM +0530, Bharata B Rao wrote:
> On Mon, Oct 12, 2020 at 12:27:43AM -0700, Ram Pai wrote:
> > Abstract the secure VM related calls into generic calls.
> >
> > These generic calls will call the corresponding method of the
> > backend that provides the implementation to support secure VM.
On the same principle as commit 773edeadf672 ("powerpc/mm: Add mask
of possible MMU features"), add mask for MMU features that are
always there in order to optimise out dead branches.
Signed-off-by: Christophe Leroy
---
v2: Features must be ANDed with MMU_FTRS_POSSIBLE instead of ~0, otherwise
the always mask could include features that are not possible.
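Sketched with illustrative type names (the real list mirrors the one in
MMU_FTRS_POSSIBLE), the mask ANDs together every feature set that can be
built in, so any surviving bit is guaranteed present on every supported
CPU:

	enum {
		MMU_FTRS_ALWAYS =
	#ifdef CONFIG_PPC_8xx
			MMU_FTR_TYPE_8xx &
	#endif
	#ifdef CONFIG_40x
			MMU_FTR_TYPE_40x &
	#endif
			MMU_FTRS_POSSIBLE,	/* per the v2 note above */
	};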
On Mon, Oct 12, 2020 at 12:27:43AM -0700, Ram Pai wrote:
> Abstract the secure VM related calls into generic calls.
>
> These generic calls will call the corresponding method of the
> backend that provides the implementation to support secure VM.
>
> Currently there is only the ultravisor based implementation.
Hi,
To review this, I looked through the supported instructions to see if
there were any that I thought might have been missed.
I didn't find any other v3.1 ones, although I don't have a v3.1 ISA to
hand so I was basically looking for instructions I didn't recognise.
I did, however, find a number of ...
Ravi Bangoria writes:
> Hi Daniel,
>
> On 10/12/20 7:21 AM, Daniel Axtens wrote:
>> Hi,
>>
>> Apologies if this has come up in a previous revision.
>>
>>
>>>  	case 1:
>>> +		if (!cpu_has_feature(CPU_FTR_ARCH_31))
>>> +			return -1;
>>> +
>>>  		prefix_r = GET_PREFIX_R(word);
Hello,
On Mon, Oct 12, 2020 at 04:01:28PM +0530, Madhavan Srinivasan wrote:
> Power9 and isa v3.1 have a 7-bit mantissa field for Threshold Event Counter
^^^ Shouldn't this be 3.0?
> Multiplier (TECM). TECM is part of Monitor Mode Control Register A (MMCRA).
> This field along with Threshold Event Counter Exponent (TECE) is used to
> get the threshold counter value.
Hi Daniel,
On 10/12/20 7:21 AM, Daniel Axtens wrote:
Hi,
Apologies if this has come up in a previous revision.
 	case 1:
+		if (!cpu_has_feature(CPU_FTR_ARCH_31))
+			return -1;
+
 		prefix_r = GET_PREFIX_R(word);
 		ra = GET_PREFIX_RA(suffix);
(on top of linux-next 20201012)
I think now, bats_show_603() should simply be renamed bats_show().
Christophe
arch/powerpc/mm/ptdump/bats.c          | 24 +++-
arch/powerpc/mm/ptdump/hashpagetable.c | 12 +---
arch/powerpc/mm/ptdump/ptdump.c        | 13
Power9 and isa v3.1 have a 7-bit mantissa field for Threshold Event Counter
Multiplier (TECM). TECM is part of Monitor Mode Control Register A (MMCRA).
This field along with Threshold Event Counter Exponent (TECE) is used to
get the threshold counter value. In Power10, the width of the TECM field is
increased to 8 bits.
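The decode these fields feed is, roughly, a mantissa scaled by a
power-of-two exponent; a hedged sketch, with tecm/tece standing for the
already-extracted field values:

	/* threshold counter value ~= mantissa << exponent; widening
	 * TECM from 7 to 8 bits raises the maximum representable
	 * mantissa */
	u64 thresh_count = (u64)tecm << tece;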
The kernel expects pte_young() to work regardless of CONFIG_SWAP.
Make sure a minor fault is taken to set _PAGE_ACCESSED when it
is not already set, regardless of the selection of CONFIG_SWAP.
This adds at least 3 instructions to the TLB miss exception handlers'
fast path. A following patch will reduce this overhead.
When _PAGE_ACCESSED is not set, a minor fault is expected.
To do this, TLB miss exception ANDs _PAGE_PRESENT and _PAGE_ACCESSED
into the L2 entry valid bit.
To simplify the processing and reduce the number of instructions in
TLB miss exceptions, manage it as an APG bit and get it next to
_PAGE_GUARDED.
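In pseudo-C, the rule the (assembly) TLB miss handler implements is just:

	/* only present + already-accessed PTEs enter the TLB as valid;
	 * anything else takes a minor fault, which sets _PAGE_ACCESSED
	 * and retries the access */
	bool l2_valid = (pte & _PAGE_PRESENT) && (pte & _PAGE_ACCESSED);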
440/460 variants and 470 variants are not compatible, so there is no
need to build code that supports both and selects between them using
MMU features. Just use CONFIG_PPC_47x to decide what to build.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/entry_32.S | 11 +++
arch/powerpc/mm/nohash/tlb_low.S | 29 ++
As stated in platform/44x/Kconfig, CONFIG_PPC_47x is not
compatible with 440 and 460 variants.
This is confirmed in asm/cache.h as L1_CACHE_SHIFT is different
for 47x, meaning a kernel built for 47x will not run correctly
on a 440.
In cputable, opt out all 440 and 460 variants when CONFIG_PPC_47x is set.
On the same principle as commit 773edeadf672 ("powerpc/mm: Add mask
of possible MMU features"), add mask for MMU features that are
always there in order to optimise out dead branches.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/mmu.h | 25 +
1 file changed, 25 insertions(+)
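What the always mask buys, sketched as the non-jump-label feature test
(the helper shape is illustrative; cur_cpu_spec->mmu_features is the
real per-CPU word):

	static inline bool mmu_has_feature(unsigned long feature)
	{
		/* MMU_FTRS_ALWAYS is a compile-time constant, so this
		 * branch folds away when the feature is always there */
		if (MMU_FTRS_ALWAYS & feature)
			return true;
		return !!(MMU_FTRS_POSSIBLE &
			  cur_cpu_spec->mmu_features & feature);
	}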
Only mpc83xx will set MMU_FTR_NEED_DTLB_SW_LRU and its
definition is enclosed in #ifdef CONFIG_PPC_83xx.
Make MMU_FTR_NEED_DTLB_SW_LRU possible only when
CONFIG_PPC_83xx is set.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/mmu.h | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
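The shape of the change, sketched (member list shortened for
illustration):

	enum {
		MMU_FTRS_POSSIBLE = MMU_FTR_TYPE_8xx | MMU_FTR_TYPE_40x |
	#ifdef CONFIG_PPC_83xx
				    MMU_FTR_NEED_DTLB_SW_LRU |
	#endif
				    0,
	};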
MMU_FTR_PPCAS_ARCH_V2 is defined in cputable.h
as MMU_FTR_TLBIEL | MMU_FTR_16M_PAGE.
MMU_FTR_TLBIEL and MMU_FTR_16M_PAGE are defined in mmu.h
MMU_FTR_PPCAS_ARCH_V2 is used only in mmu.h and it is used only once.
Remove MMU_FTR_PPCAS_ARCH_V2 and use MMU_FTR_TLBIEL | MMU_FTR_16M_PAGE
directly.
Signed-off-by: Christophe Leroy
CPU_FTR_NODSISRALIGN has not been used since
commit 31bfdb036f12 ("powerpc: Use instruction emulation
infrastructure to handle alignment faults")
Remove it.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/cputable.h | 22 ++
arch/powerpc/kernel/dt_cpu_ftrs.c |
Since commit 10b35d9978ac ("[PATCH] powerpc: merged asm/cputable.h"),
CPU_FTR_COHERENT_ICACHE has always been defined.
Remove the #ifndef CPU_FTR_COHERENT_ICACHE block.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/mem.c | 5 -
1 file changed, 5 deletions(-)
diff --git a/arch/powerpc
G2_LE has a 603 core, add CPU_FTR_NOEXECUTE.
Fixes: 385e89d5b20f ("powerpc/mm: add exec protection on powerpc 603")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/cputable.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/
On 8xx, we get the following features:
[    0.000000] cpu_features = 0x0100
[    0.000000] possible     = 0x0120
[    0.000000] always       = 0x
This is not correct. As CONFIG_PPC_8xx is mutually exclusive with all
other configurations, the always features should be the same as the
possible features.
On 2020/10/12 13:28, Ira Weiny wrote:
> On Sat, Oct 10, 2020 at 10:20:34AM +0800, Coly Li wrote:
>> On 2020/10/10 03:50, ira.we...@intel.com wrote:
>>> From: Ira Weiny
>>>
>>> These kmap() calls are localized to a single thread. To avoid the
>>> overhead of global PKRS updates use the new kmap_thread() call.
Abstract the secure VM related calls into generic calls.
These generic calls will call the corresponding method of the
backend that provides the implementation to support secure VM.
Currently there is only the ultravisor based implementation.
Modify that implementation to act as a backend to the generic calls.
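A hedged sketch of what such a backend abstraction typically looks
like; the struct, field and function names are illustrative, not taken
from the patch:

	struct kvmppc_secvm_ops {
		int (*init_vm)(struct kvm *kvm);
		int (*page_in)(struct kvm *kvm, unsigned long gpa,
			       unsigned long size);
		int (*page_out)(struct kvm *kvm, unsigned long gpa,
				unsigned long size);
	};

	/* the existing ultravisor code becomes one provider */
	static const struct kvmppc_secvm_ops uvmem_ops = {
		.init_vm  = uvmem_init_vm,
		.page_in  = uvmem_page_in,
		.page_out = uvmem_page_out,
	};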
Preparing this file to be one of the many backends. Since this file
supports the Ultravisor backend, rename all variables from kvmppc_*
to uvmem_*. This is to avoid a clash with some generic top level
functions to be defined in the next patch.
Signed-off-by: Ram Pai
---
arch/powerpc/kvm/book3s_hv_uv
Secure VMs are currently supported by the Ultravisor backend.
Enhance the plumbing to support multiple backends. Each backend shall
provide the implementation for the necessary and sufficient calls
in order to support secure VM.
Also as part of this change, modify the ultravisor implementation to
act as one such backend.