On 10/19/20 6:45 PM, Michal Suchánek wrote:

On Mon, Oct 19, 2020 at 09:59:57PM +1100, Michael Ellerman wrote:
Hi Ganesh,

Some comments below ...

Ganesh Goudar <ganes...@linux.ibm.com> writes:
To check machine check handling, add support to inject slb
multihit errors.

Cc: Kees Cook <keesc...@chromium.org>
Reviewed-by: Michal Suchánek <msucha...@suse.de>
Co-developed-by: Mahesh Salgaonkar <mah...@linux.ibm.com>
Signed-off-by: Mahesh Salgaonkar <mah...@linux.ibm.com>
Signed-off-by: Ganesh Goudar <ganes...@linux.ibm.com>
---
  drivers/misc/lkdtm/Makefile             |   1 +
  drivers/misc/lkdtm/core.c               |   3 +
  drivers/misc/lkdtm/lkdtm.h              |   3 +
  drivers/misc/lkdtm/powerpc.c            | 156 ++++++++++++++++++++++++
  tools/testing/selftests/lkdtm/tests.txt |   1 +
  5 files changed, 164 insertions(+)
  create mode 100644 drivers/misc/lkdtm/powerpc.c

..
diff --git a/drivers/misc/lkdtm/powerpc.c b/drivers/misc/lkdtm/powerpc.c
new file mode 100644
index 000000000000..f388b53dccba
--- /dev/null
+++ b/drivers/misc/lkdtm/powerpc.c
@@ -0,0 +1,156 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "lkdtm.h"
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
Usual style is to include the linux headers first and then the local header.

ok
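
i.e. something like:

	#include <linux/slab.h>
	#include <linux/vmalloc.h>

	#include "lkdtm.h"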

+
+/* Gets index for new slb entry */
+static inline unsigned long get_slb_index(void)
+{
+       unsigned long index;
+
+       index = get_paca()->stab_rr;
+
+       /*
+        * simple round-robin replacement of slb starting at SLB_NUM_BOLTED.
+        */
+       if (index < (mmu_slb_size - 1))
+               index++;
+       else
+               index = SLB_NUM_BOLTED;
+       get_paca()->stab_rr = index;
+       return index;
+}
I'm not sure we need that really?

We can just always insert at SLB_NUM_BOLTED and SLB_NUM_BOLTED + 1.

Or we could allocate from the top down using mmu_slb_size - 1, and
mmu_slb_size - 2.

Ok, we can do that.
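
Something like this, perhaps (untested sketch, taking the top-down variant;
entry0/entry1 are just placeholder names):

	/* use the two highest SLB slots for the test entries, no round-robin needed */
	entry0 = mmu_slb_size - 1;
	entry1 = mmu_slb_size - 2;

That would get rid of get_slb_index() entirely.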

+#define slb_esid_mask(ssize)   \
+       (((ssize) == MMU_SEGSIZE_256M) ? ESID_MASK : ESID_MASK_1T)
+
+/* Form the operand for slbmte */
+static inline unsigned long mk_esid_data(unsigned long ea, int ssize,
+                                        unsigned long slot)
+{
+       return (ea & slb_esid_mask(ssize)) | SLB_ESID_V | slot;
+}
+
+#define slb_vsid_shift(ssize)  \
+       ((ssize) == MMU_SEGSIZE_256M ? SLB_VSID_SHIFT : SLB_VSID_SHIFT_1T)
+
+/* Form the operand for slbmte */
+static inline unsigned long mk_vsid_data(unsigned long ea, int ssize,
+                                        unsigned long flags)
+{
+       return (get_kernel_vsid(ea, ssize) << slb_vsid_shift(ssize)) | flags |
+               ((unsigned long)ssize << SLB_VSID_SSIZE_SHIFT);
+}
I realise it's not much code, but I'd rather those were in a header than
copied from slb.c. That way they can never skew vs. the versions in slb.c.

Best place I think would be arch/powerpc/include/asm/book3s/64/mmu-hash.h

Ok, I'll move them.

+
+/* Inserts new slb entry */
It inserts two.

Right.

+static void insert_slb_entry(char *p, int ssize)
+{
+       unsigned long flags, entry;
+
+       flags = SLB_VSID_KERNEL | mmu_psize_defs[MMU_PAGE_64K].sllp;
That won't work if the kernel is built for 4K pages. Or at least it
won't work the way we want it to.

You should use mmu_linear_psize.

But for vmalloc you should use mmu_vmalloc_psize, so it will need to be
a parameter.

Sure, Thanks
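
Something along these lines, then (rough sketch, the extra psize parameter is
my assumption):

	static void insert_slb_entry(char *p, int ssize, int psize)
	{
		unsigned long flags = SLB_VSID_KERNEL | mmu_psize_defs[psize].sllp;
		...
	}

with the vmalloc caller passing mmu_vmalloc_psize and the kmalloc caller
passing mmu_linear_psize.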

+       preempt_disable();
+
+       entry = get_slb_index();
+       asm volatile("slbmte %0,%1" :
+                       : "r" (mk_vsid_data((unsigned long)p, ssize, flags)),
+                         "r" (mk_esid_data((unsigned long)p, ssize, entry))
+                       : "memory");
+
+       entry = get_slb_index();
+       asm volatile("slbmte %0,%1" :
+                       : "r" (mk_vsid_data((unsigned long)p, ssize, flags)),
+                         "r" (mk_esid_data((unsigned long)p, ssize, entry))
+                       : "memory");
+       preempt_enable();
+       /*
+        * This triggers exception, If handled correctly we must recover
+        * from this error.
+        */
+       p[0] = '!';
That doesn't belong in here; it should be done by the caller.

That would also mean p could be unsigned long in here, so you wouldn't
have to cast it four times.

Sure, I'll change it.
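
So insert_slb_entry() would stop after the second slbmte/preempt_enable(), and
the caller would do something like (sketch):

	insert_slb_entry((unsigned long)p, MMU_SEGSIZE_1T, mmu_vmalloc_psize);
	/* this access should trigger the multihit MCE; if it is handled we recover */
	p[0] = '!';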

+}
+
+/* Inject slb multihit on vmalloc-ed address i.e 0xD00... */
+static void inject_vmalloc_slb_multihit(void)
+{
+       char *p;
+
+       p = vmalloc(2048);
vmalloc() allocates whole pages, so it may as well be vmalloc(PAGE_SIZE).

ok

+       if (!p)
+               return;
That's unlikely, but it should be an error that's propagated up to the caller.

ok
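
e.g. something like this, folding in the earlier comments (untested sketch):

	static int inject_vmalloc_slb_multihit(void)
	{
		char *p = vmalloc(PAGE_SIZE);

		if (!p)
			return -ENOMEM;

		insert_slb_entry((unsigned long)p, MMU_SEGSIZE_1T, mmu_vmalloc_psize);
		p[0] = '!';	/* should trigger the (recoverable) MCE */
		vfree(p);
		return 0;
	}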

+
+       insert_slb_entry(p, MMU_SEGSIZE_1T);
+       vfree(p);
+}
+
+/* Inject slb multihit on kmalloc-ed address i.e 0xC00... */
+static void inject_kmalloc_slb_multihit(void)
+{
+       char *p;
+
+       p = kmalloc(2048, GFP_KERNEL);
+       if (!p)
+               return;
+
+       insert_slb_entry(p, MMU_SEGSIZE_1T);
+       kfree(p);
+}
+
+/*
+ * Few initial SLB entries are bolted. Add a test to inject
+ * multihit in bolted entry 0.
+ */
+static void insert_dup_slb_entry_0(void)
+{
+       unsigned long test_address = 0xC000000000000000;
Should use PAGE_OFFSET;

ok

+       volatile unsigned long *test_ptr;
Does it need to be volatile?
The slbmte should act as a compiler barrier (it has a memory clobber)
and a CPU barrier as well?

Yes, volatile is not required; I'll remove it.

+       unsigned long entry, i = 0;
+       unsigned long esid, vsid;
Please group your variables:

   unsigned long esid, vsid, entry, test_address, i;
   volatile unsigned long *test_ptr;

And then initialise them as appropriate.

ok

+       test_ptr = (unsigned long *)test_address;
+       preempt_disable();
+
+       asm volatile("slbmfee  %0,%1" : "=r" (esid) : "r" (i));
+       asm volatile("slbmfev  %0,%1" : "=r" (vsid) : "r" (i));
Why do we need to read them out of the SLB rather than just computing
the values?
It ensures that the entry is a perfect duplicate without copying even more
code from other parts of the kernel, doesn't it?

Especially when inserting only one duplicate as suggested later, it
ensures that the test really does what it should.
+       entry = get_slb_index();
+
+       /* for i !=0 we would need to mask out the old entry number */
Or you could just compute esid and then it wouldn't be an issue.
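
E.g. (sketch, assuming mmu_kernel_ssize/mmu_linear_psize are the right choices
for the bolted linear-mapping entry):

	vsid = mk_vsid_data(PAGE_OFFSET, mmu_kernel_ssize,
			    SLB_VSID_KERNEL | mmu_psize_defs[mmu_linear_psize].sllp);
	esid = mk_esid_data(PAGE_OFFSET, mmu_kernel_ssize, entry);

mk_esid_data() already folds the slot number in, so no masking would be needed.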

+       asm volatile("slbmte %0,%1" :
+                       : "r" (vsid),
+                         "r" (esid | entry)
+                       : "memory");
At this point we've just inserted a duplicate of entry 0, so you don't
need to insert a third entry, do you?
This code was obviously adapted from the previous one, which needed two
entries in case there was none for the memory region to start with.

Adding only one duplicate should suffice, and it can easily be tested that
it still generates an MCE.
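
i.e. a single slbmfee/slbmfev/slbmte sequence should be enough (sketch):

	/* read back bolted entry 0 ... */
	asm volatile("slbmfee  %0,%1" : "=r" (esid) : "r" (i));
	asm volatile("slbmfev  %0,%1" : "=r" (vsid) : "r" (i));
	/* ... and insert one duplicate of it into a free slot 'entry' */
	asm volatile("slbmte %0,%1" : : "r" (vsid), "r" (esid | entry) : "memory");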

+       asm volatile("slbmfee  %0,%1" : "=r" (esid) : "r" (i));
+       asm volatile("slbmfev  %0,%1" : "=r" (vsid) : "r" (i));
+       entry = get_slb_index();
+
+       /* for i !=0 we would need to mask out the old entry number */
+       asm volatile("slbmte %0,%1" :
+                       : "r" (vsid),
+                         "r" (esid | entry)
+                       : "memory");
+
+       pr_info("%s accessing test address 0x%lx: 0x%lx\n",
+               __func__, test_address, *test_ptr);
This prints the first two instructions of the kernel. I happen to know
what values they should have, but most people won't understand what
they're seeing. A better test would be to read the value at the top of
the function and then load it again here and check we got the right
thing.
It does not really matter what we read back so long as the compiler does
not optimize out the read. The point here is to access an address in the
range covered by the SLB entry 0. The failure case is that the system
crashes and the test never finishes.
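
E.g. (just a sketch) something like

	pr_info("%s: accessing test address 0x%lx: 0x%lx\n",
		__func__, test_address, READ_ONCE(*(unsigned long *)test_address));

keeps the read from being optimized away without needing the volatile pointer.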

Thanks

Michal
