On Sat, May 18, 2024 at 5:05 PM Erhard Furtner wrote:
>
> The patch fixes the issue on ppc too. Thanks!
You're welcome!
> The test run continues and I get a failing test later on (not the '31
> rcu_uaf' that Nico reported, but '65 vmalloc_oob'):
> [...]
> BUG: KASAN: vmalloc-out-of-bounds in vmallo
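For context, the '65 vmalloc_oob' case exercises an out-of-bounds access just past a vmalloc allocation. A minimal sketch of what the test does (modeled on the KASAN KUnit tests and their KUNIT_EXPECT_KASAN_FAIL() helper; not the exact kasan_test.c source):

#include <linux/vmalloc.h>
#include <kunit/test.h>

static void vmalloc_oob_sketch(struct kunit *test)
{
	char *v_ptr = vmalloc(3000);	/* less than one page requested */

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);
	/* Read past the requested size but within the mapped page; KASAN
	 * should report vmalloc-out-of-bounds here. */
	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[3100]);
	vfree(v_ptr);
}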
On Wed, May 1, 2024 at 2:42 PM 'Erhard Furtner' via kasan-dev
wrote:
>
> On Sat, 27 Apr 2024 20:50:20 +0200
> Erhard Furtner wrote:
>
> > Greetings!
> >
> > Building kernel v6.9-rc5 with GCC 13.2 + binutils 2.42 and running KASAN
> > KUnit tests (CONFIG_KASAN_INLINE=y, CONFIG_KASAN_KUNIT_TEST=y)
On Mon, Jan 29, 2024 at 2:47 PM Tong Tiangen wrote:
>
> Currently, many scenarios that can tolerate memory errors when copying pages
> are supported in the kernel[1][2][3], all of which are implemented by
> copy_mc_[user]_highpage(). arm64 should also support this mechanism.
>
> Due to MTE, a
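For readers without the series at hand: when an architecture does not provide machine-check-aware copy support, the generic fallback in include/linux/highmem.h is a plain copy that always reports success. A rough sketch of that fallback (hedged; see the header for the exact form):

/* Without CONFIG_ARCH_HAS_COPY_MC the copy cannot recover from a
 * memory error; arch implementations return -EFAULT on a consumed
 * machine check instead of 0. */
static inline int copy_mc_user_highpage(struct page *to, struct page *from,
					unsigned long vaddr,
					struct vm_area_struct *vma)
{
	copy_user_highpage(to, from, vaddr, vma);
	return 0;
}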
On Thu, Jan 26, 2023 at 8:08 AM Christophe Leroy
wrote:
>
> On powerpc64, you can build a kernel with KASAN as soon as you build it
> with RADIX MMU support. However if the CPU doesn't have RADIX MMU,
> KASAN isn't enabled at init and the following Oops is encountered.
>
> [    0.000000][    T0]
On Thu, Sep 30, 2021 at 9:09 AM Kefeng Wang wrote:
>
> Directly use the is_kernel() helper in kernel_or_module_addr().
>
> Cc: Andrey Ryabinin
> Cc: Alexander Potapenko
> Cc: Andrey Konovalov
> Cc: Dmitry Vyukov
> Signed-off-by: Kefeng Wang
> ---
> mm/kasan/report
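The resulting helper is a direct simplification; a sketch of mm/kasan/report.c after the change, assuming is_kernel() covers the _stext.._end image range the open-coded check used to test:

static bool kernel_or_module_addr(const void *addr)
{
	if (is_kernel((unsigned long)addr))
		return true;
	if (is_module_address((unsigned long)addr))
		return true;
	return false;
}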
* some of the callers (e.g. kasan_poison_object_data) pass tagged
> @@ -99,6 +102,9 @@ EXPORT_SYMBOL(kasan_poison);
> #ifdef CONFIG_KASAN_GENERIC
> void kasan_poison_last_granule(const void *addr, size_t size)
> {
> + if (!kasan_arch_is_ready())
> + return;
> +
> if (size & KASAN_GRANULE_MASK) {
> u8 *shadow = (u8 *)kasan_mem_to_shadow(addr + size);
> *shadow = size & KASAN_GRANULE_MASK;
> --
> 2.30.2
>
Reviewed-by: Andrey Konovalov
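For architectures with no deferred-init requirement the hook is expected to compile away. A minimal sketch of the default, assuming the usual #ifndef override pattern in mm/kasan/kasan.h:

#ifndef kasan_arch_is_ready
/* Default: the shadow is always considered accessible, so the early
 * returns guarded by kasan_arch_is_ready() vanish entirely. */
static inline bool kasan_arch_is_ready(void)
{
	return true;
}
#endif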
to use and enabled by default.
> + If the architecture disables inline instrumentation, stack
> + instrumentation is also disabled as it adds inline-style
> + instrumentation that is run unconditionally.
>
> config KASAN_SW_TAGS_IDENTIFY
> bool "Enable memory corruption identification"
> --
> 2.30.2
>
Reviewed-by: Andrey Konovalov
Thanks, Daniel!
> static inline bool kasan_pmd_table(pud_t pud)
> {
> return pud_page(pud) ==
> virt_to_page(lm_alias(kasan_early_shadow_pmd));
> @@ -64,7 +64,7 @@ static inline bool kasan_pmd_table(pud_t pud)
> return false;
> }
> #endif
> -pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS]
> +pte_t kasan_early_shadow_pte[MAX_PTRS_PER_PTE + PTE_HWTABLE_PTRS]
> __page_aligned_bss;
>
> static inline bool kasan_pte_table(pmd_t pmd)
> --
> 2.30.2
>
Reviewed-by: Andrey Konovalov
> +#ifndef MAX_PTRS_PER_PTE
> +#define MAX_PTRS_PER_PTE PTRS_PER_PTE
> +#endif
> +
> +#ifndef MAX_PTRS_PER_PMD
> +#define MAX_PTRS_PER_PMD PTRS_PER_PMD
> +#endif
> +
> +#ifndef MAX_PTRS_PER_PUD
> +#define MAX_PTRS_PER_PUD PTRS_PER_PUD
> +#endif
> +
> +#ifndef MAX_PTRS_PER_P4D
> +#define MAX_PTRS_PER_P4D PTRS_PER_P4D
> +#endif
> +
> #endif /* _LINUX_PGTABLE_H */
> --
> 2.30.2
>
Acked-by: Andrey Konovalov
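An architecture whose page-table geometry varies at runtime can then override these. A hedged sketch of what a powerpc book3s64-style override might look like (the H_/R_ symbol names are assumptions; check the real headers):

/* Hash and Radix disagree on geometry at runtime, so the compile-time
 * maximum used for static shadow arrays must cover both. */
#define MAX_PTRS_PER_PTE	((H_PTRS_PER_PTE > R_PTRS_PER_PTE) ? \
				 H_PTRS_PER_PTE : R_PTRS_PER_PTE)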
On Thu, Jun 17, 2021 at 12:30 PM Daniel Axtens wrote:
>
> Allow architectures to define a kasan_arch_is_ready() hook that bails
> out of any function that's about to touch the shadow unless the arch
> says that it is ready for the memory to be accessed. This is fairly
> non-invasive and should have
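One plausible arch-side implementation keys the hook off a static branch that is flipped once the shadow is mapped (an assumed pattern, not necessarily the exact powerpc code):

DECLARE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);

static __always_inline bool kasan_arch_is_ready(void)
{
	/* Flipped to true by the arch once the shadow is usable. */
	return static_branch_likely(&powerpc_kasan_enabled_key);
}
#define kasan_arch_is_ready kasan_arch_is_ready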
On Thu, Jun 17, 2021 at 12:30 PM Daniel Axtens wrote:
>
> For annoying architectural reasons, it's very difficult to support inline
> instrumentation on powerpc64.*
>
> Add a Kconfig flag to allow an arch to disable inline. (It's a bit
> annoying to be 'backwards', but I'm not aware of any way to
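The flag itself is just an arch-selected bool; a sketch of its shape in lib/Kconfig.kasan (exact symbol name per the merged series):

config ARCH_DISABLE_KASAN_INLINE
	bool
	help
	  Disables both inline and stack instrumentation on architectures
	  that do not support it.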
On Tue, Sep 3, 2019 at 4:56 PM Daniel Axtens wrote:
>
> Provide the current number of vmalloc shadow pages in
> /sys/kernel/debug/kasan_vmalloc/shadow_pages.
Maybe it makes sense to put this into /sys/kernel/debug/kasan/
(without _vmalloc) and name it e.g. vmalloc_shadow_pages? In case we want
to ex
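Something along these lines would match that suggestion (a hedged sketch: the directory layout is from the review comment above, while the counter and its update sites are assumptions):

#include <linux/atomic.h>
#include <linux/debugfs.h>
#include <linux/init.h>

/* Incremented/decremented wherever vmalloc shadow pages are
 * (de)allocated. */
static atomic_t vmalloc_shadow_pages = ATOMIC_INIT(0);

static int __init kasan_debugfs_init(void)
{
	struct dentry *dir = debugfs_create_dir("kasan", NULL);

	debugfs_create_atomic_t("vmalloc_shadow_pages", 0444, dir,
				&vmalloc_shadow_pages);
	return 0;
}
late_initcall(kasan_debugfs_init);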
On Fri, Mar 9, 2018 at 3:15 PM, Robin Murphy wrote:
> Hi Andrey,
>
> On 09/03/18 14:01, Andrey Konovalov wrote:
>>
>> arm64 has a feature called Top Byte Ignore, which allows embedding pointer
>> tags into the top byte of each pointer. Userspace programs (such as
>>
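To make Top Byte Ignore concrete, here is a tiny userspace demo (assumes an arm64 process, where TBI is enabled for userspace by default): the CPU ignores bits 63:56 on the access, so the tagged pointer aliases the untagged one.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	int *p = malloc(sizeof(int));
	/* Plant an arbitrary tag (0x5b) in the top byte. */
	int *tagged = (int *)((uintptr_t)p | ((uintptr_t)0x5b << 56));

	*tagged = 42;		/* hardware ignores the top byte */
	printf("%d\n", *p);	/* prints 42 */
	free(p);
	return 0;
}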
On Fri, Mar 9, 2018 at 3:16 PM, Robin Murphy wrote:
> On 09/03/18 14:02, Andrey Konovalov wrote:
>>
>> To allow arm64 syscalls to accept tagged pointers from userspace, we must
>> untag them when they are passed to the kernel. Since untagging is done in
>> generic parts
architectures besides arm64.
Signed-off-by: Andrey Konovalov
---
arch/alpha/include/asm/uaccess.h    | 2 ++
arch/arc/include/asm/uaccess.h      | 1 +
arch/arm/include/asm/uaccess.h      | 2 ++
arch/blackfin/include/asm/uaccess.h | 2 ++
arch/c6x/include/asm/uaccess.h      | 2
strncpy_from_user and strnlen_user accept user addresses as arguments, and
do not go through the same path as copy_from_user and others, so here we
need to separately handle the case of tagged user addresses as well.
Untag user pointers passed to these functions.
Signed-off-by: Andrey Konovalov
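The shape of the change is the same one-liner as elsewhere in the series: strip the tag before any range checks. An illustrative wrapper (the helper name is hypothetical):

/* Hypothetical helper showing where the untagging lands: the tag is
 * removed before strncpy_from_user() performs its address checks. */
static long strncpy_from_user_untagged(char *dst, const char __user *src,
				       long count)
{
	return strncpy_from_user(dst, untagged_addr(src), count);
}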
mm/gup.c provides a kernel interface that accepts user addresses and
manipulates user pages directly (for example get_user_pages, that is used
by the futex syscall). Here we also need to handle the case of tagged user
pointers.
Untag addresses passed to this interface.
Signed-off-by: Andrey Konovalov
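Again the pattern is to canonicalize once at the entry point so the page-table walk below only ever sees untagged addresses; an illustrative (hypothetical) wrapper:

/* Hypothetical sketch: untag at the gup boundary, then walk as usual. */
static int get_user_pages_untagged(unsigned long start, int nr_pages,
				   unsigned int gup_flags, struct page **pages)
{
	return get_user_pages_fast(untagged_addr(start), nr_pages,
				   gup_flags, pages);
}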
unmap, remap_file_pages,
mprotect, pkey_mprotect, mremap and msync.
Signed-off-by: Andrey Konovalov
---
mm/madvise.c   | 2 ++
mm/mempolicy.c | 6 ++
mm/mincore.c   | 2 ++
mm/mlock.c     | 5 +
mm/mmap.c      | 9 +
mm/mprotect.c  | 2 ++
mm/mremap.c    | 2 ++
mm/msync.c     | 3 +++
8 files ch
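Each hunk follows the same pattern; a sketch of the mlock case (hedged; do_mlock() is the internal helper the syscall already calls):

SYSCALL_DEFINE2(mlock, unsigned long, start, size_t, len)
{
	start = untagged_addr(start);	/* the entire per-syscall change */
	return do_mlock(start, len, VM_LOCKED);
}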
This patch makes the untagged_addr macro accept all kinds of address types
(void *, unsigned long, etc.) and removes the need for type casts at each
call site. This is done by using __typeof__.
Signed-off-by: Andrey Konovalov
---
arch/arm64/include/asm/uaccess.h | 3 ++-
1 file
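Concretely, the macro sign-extends from bit 55 and casts the result back to the argument's own type; a sketch of the arm64 definition after this patch:

/* Sign-extending from bit 55 clears a userspace tag in bits 63:56; the
 * __typeof__ cast lets callers pass void *, unsigned long, etc. without
 * casting at each call site. */
#define untagged_addr(addr) \
	((__typeof__(addr))sign_extend64((u64)(addr), 55))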
https://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html
Andrey Konovalov (6):
arm64: add type casts to untagged_addr macro
arm64: untag user addresses in copy_from_user and others
mm, arm64: untag user addresses in memory syscalls
mm, arm64: untag user addresses in mm/gup.c
lib, arm64: untag addrs passed to strncpy_fro
in access_ok and in __uaccess_mask_ptr.
Signed-off-by: Andrey Konovalov
---
arch/arm64/include/asm/uaccess.h | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 2d6451cbaa86..24a221678fe3 100644
--- a
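The gist of the hunk is to untag before the limit check (a sketch; the 2018-era arm64 access_ok() still took a type argument):

/* Tagged user pointers now pass access_ok(); __uaccess_mask_ptr() gets
 * the same treatment so the speculation-safe masked pointer is computed
 * from the untagged address. */
#define access_ok(type, addr, size) \
	__range_ok(untagged_addr(addr), size)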
commits 230fa253df63
("kernel: Provide READ_ONCE and ASSIGN_ONCE") and 43239cbe79fc ("kernel:
Change ASSIGN_ONCE(val, x) to WRITE_ONCE(x, val)").
Signed-off-by: Andrey Konovalov
---
Changed in v2:
- Other archs besides x86.
arch/arm/include/asm/barrier.h | 4 ++--
ar
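For reference, the conversion in the barrier headers has this shape (a sketch of the arm-style definitions after the change, with the compile-time type asserts omitted):

#define smp_store_release(p, v)						\
do {									\
	smp_mb();							\
	WRITE_ONCE(*p, v);						\
} while (0)

#define smp_load_acquire(p)						\
({									\
	typeof(*p) ___p1 = READ_ONCE(*p);				\
	smp_mb();							\
	___p1;								\
})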