-by: Alexander Potapenko
---
v2: - Merged "kasan: Change the behavior of kmalloc_large_oob_right" and
"kasan: Changed kmalloc_large_oob_right, added kmalloc_pagealloc_oob_right"
from v1
v3: - Minor description changes
---
lib/test_kasan.c | 28 +++-
; {
> kmalloc_oob_right();
> @@ -436,6 +479,10 @@ static int __init kmalloc_tests_init(void)
> kasan_global_oob();
> ksize_unpoisons_memory();
> copy_user_test();
> +#ifdef CONFIG_SLAB
> + kasan_double_free();
> + kasan_double_free_concurrent();
ntine
implementation")
Signed-off-by: Alexander Potapenko
---
mm/slab.c | 7 ++-
mm/slub.c | 8 +---
2 files changed, 11 insertions(+), 4 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index cc8bbc1..ac6c251 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2703,8 +2703,13 @@ static void slab_put_
On Fri, May 27, 2016 at 7:30 PM, Christoph Lameter wrote:
> On Fri, 27 May 2016, Alexander Potapenko wrote:
>
>> It's reasonable to rely on the fact that for every page allocated for a
>> kmem_cache the |slab_cache| field points to that cache. Without that it's
>> hard t
Add ARCH_HAS_KCOV to ARM64 config. To avoid crashes, disable
instrumentation of the following files:
arch/arm64/boot/*
arch/arm64/kvm/hyp/*
Signed-off-by: Alexander Potapenko
---
v2: - disable instrumentation of arch/arm64/{boot,kvm/hyp}
- enable instrumentation of arch/arm64/lib/delay.c
Hi all,
On Tue, Jun 14, 2016 at 6:57 PM, Alexander Potapenko wrote:
> Add ARCH_HAS_KCOV to ARM64 config. To avoid crashes, disable
> instrumentation of the following files:
>
> arch/arm64/boot/*
> arch/arm64/kvm/hyp/*
>
> Signed-off-by: Alexander Potapenko
> ---
> v2
On Tue, Jun 14, 2016 at 7:55 PM, Mark Rutland wrote:
> On Tue, Jun 14, 2016 at 06:57:21PM +0200, Alexander Potapenko wrote:
>> Add ARCH_HAS_KCOV to ARM64 config. To avoid crashes, disable
>> instrumentation of the following files:
>>
>> arch/arm64/boot/*
>> arch
Add ARCH_HAS_KCOV to ARM64 config. To avoid potential crashes, disable
instrumentation of the files in arch/arm64/kvm/hyp/*.
Signed-off-by: Alexander Potapenko
Acked-by: Mark Rutland
---
v3: - reverted arch/arm64/boot/Makefile, there's no code in that dir
- added ack from Mark Rutland
v2
On Wed, Jun 15, 2016 at 1:44 PM, Mark Rutland wrote:
> On Wed, Jun 15, 2016 at 10:25:10AM +0100, Mark Rutland wrote:
>> On Tue, Jun 14, 2016 at 08:16:08PM +0200, Alexander Potapenko wrote:
>> > On Tue, Jun 14, 2016 at 7:55 PM, Mark Rutland wrote:
>> > > I built
On Thu, Jun 9, 2016 at 8:22 PM, Alexander Potapenko wrote:
> On Thu, Jun 9, 2016 at 6:45 PM, Andrey Ryabinin
> wrote:
>>
>>
>> On 06/08/2016 09:40 PM, Alexander Potapenko wrote:
>>> For KASAN builds:
>>> - switch SLUB allocator to using stackdepo
;
- change the freelist hook so that parts of the freelist can be put into
the quarantine.
Signed-off-by: Alexander Potapenko
---
v3: - addressed comments by Andrey Ryabinin:
- replaced KMALLOC_MAX_CACHE_SIZE with KMALLOC_MAX_SIZE in
kasan_cache_create();
- for caches
==\n");
> - add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
> - spin_unlock_irqrestore(_lock, flags);
> - kasan_enable_current();
> +
> + kasan_end_report();
> +}
> +
> +void kasan_report_double_free(struct
;
- refactor the slab freelist hook, put freed memory into the quarantine.
Signed-off-by: Alexander Potapenko
---
include/linux/slab.h | 9 ++
include/linux/slub_def.h | 4 +++
lib/Kconfig.kasan | 4 +--
mm/kasan/Makefile | 3 +-
mm/kasan/kasan.c | 78
object_err(cache, page, object,
> "kasan: bad access detected");
> +#endif
> return;
> }
> dump_page(page, "kasan: bad access detected");
> diff --git a/mm/slab.c b/mm/slab.c
> index 763096a..b8c51a6 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -2611,6 +2611,7 @@ static void cache_init_objs(struct kmem_cache *cachep,
> cachep->ctor(objp);
> kasan_poison_object_data(cachep, objp);
> }
> + kasan_init_object(cachep, index_to_obj(cachep, page, i));
>
> if (!shuffled)
> set_free_obj(page, i, i);
> @@ -3508,7 +3509,7 @@ static inline void __cache_free(struct kmem_cache
> *cachep, void *objp,
> unsigned long caller)
> {
> /* Put the object into the quarantine, don't touch it for now. */
> - if (kasan_slab_free(cachep, objp))
> + if (kasan_slab_free(cachep, objp, _RET_IP_))
> return;
>
> ___cache_free(cachep, objp, caller);
> diff --git a/mm/slub.c b/mm/slub.c
> index 5beeeb2..f25c0c2 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1344,7 +1344,7 @@ static inline void slab_free_hook(struct kmem_cache *s,
> void *x)
> if (!(s->flags & SLAB_DEBUG_OBJECTS))
> debug_check_no_obj_freed(x, s->object_size);
>
> - kasan_slab_free(s, x);
> + kasan_slab_free(s, x, _RET_IP_);
> }
>
> static inline void slab_free_freelist_hook(struct kmem_cache *s,
> --
> 1.7.1
>
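The diff quoted above hinges on kasan_slab_free() returning true when the freed object is diverted into the quarantine, so that the real free path is skipped. A minimal userspace sketch of that contract — the names, ring-buffer policy, and sizes here are illustrative, not the kernel implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Illustrative stand-in for the KASAN quarantine: freed objects are
 * held here instead of being returned to the allocator right away,
 * so use-after-free accesses hit memory that is still tracked. */
#define QUARANTINE_SLOTS 4

static void *quarantine[QUARANTINE_SLOTS];
static int q_head, q_count;

/* Counterpart of kasan_slab_free(): returns true when the object was
 * taken into the quarantine and the caller must NOT free it yet. */
static bool quarantine_put(void *obj)
{
    if (q_count == QUARANTINE_SLOTS) {
        /* Evict the oldest entry to make room; only now is it freed. */
        free(quarantine[q_head]);
        quarantine[q_head] = NULL;
        q_head = (q_head + 1) % QUARANTINE_SLOTS;
        q_count--;
    }
    quarantine[(q_head + q_count) % QUARANTINE_SLOTS] = obj;
    q_count++;
    return true;
}

/* Counterpart of __cache_free(): the early return mirrors the quoted
 * hunk — the real free is skipped while the hook holds the object. */
static void cache_free(void *obj)
{
    if (quarantine_put(obj))
        return;
    free(obj);
}

static int quarantine_len(void) { return q_count; }
```

The design point mirrored here is the boolean return: when the hook takes ownership, the caller does nothing, which is exactly why the quoted hunks add an early return.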
--
Alexander Potapenko
Software Engineer
Google Germany GmbH
Erika-Mann-Straße, 33
80636 München
Geschäftsführer: Matthew Scott Sucherman, Paul Terence Manicle
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg
On Thu, Jun 9, 2016 at 4:05 PM, Andrey Ryabinin wrote:
> On 06/01/2016 07:22 PM, Andrey Ryabinin wrote:
>>
>>
>> On 06/01/2016 03:53 PM, Alexander Potapenko wrote:
>>> To avoid draining the mempools, KASAN shouldn't put the mempool elements
>>>
;
- refactor the slab freelist hook, put freed memory into the quarantine.
Signed-off-by: Alexander Potapenko
---
v2: - incorporated kbuild fixes by Andrew Morton
---
include/linux/slab.h | 9 +
include/linux/slub_def.h | 4 +++
lib/Kconfig.kasan | 4 +--
mm/kasan/Makefile
On Thu, Jun 9, 2016 at 6:45 PM, Andrey Ryabinin wrote:
>
>
> On 06/08/2016 09:40 PM, Alexander Potapenko wrote:
>> For KASAN builds:
>> - switch SLUB allocator to using stackdepot instead of storing the
>> allocation/deallocation stacks in the objects;
>> -
and induce
crashes later on. Warning about such corruptions will ease the
debugging.
Signed-off-by: Alexander Potapenko
---
mm/kasan/kasan.c | 15 +++
mm/kasan/kasan.h | 1 +
mm/kasan/report.c | 3 +++
3 files changed, 19 insertions(+)
diff --git a/mm/kasan/kasan.c b/mm/kasan
On Tue, May 31, 2016 at 1:52 PM, Andrey Ryabinin
wrote:
>
>
> On 05/31/2016 01:44 PM, Alexander Potapenko wrote:
>> Add a special shadow value to distinguish accesses to KASAN-specific
>> allocator metadata.
>>
>> Unlike AddressSanitizer in the userspace, KASAN
.
Signed-off-by: Alexander Potapenko
Reported-by: Kuthonuzo Luruo
---
include/linux/kasan.h | 8 ++--
mm/kasan/kasan.c | 48 +---
mm/mempool.c | 5 +++--
mm/slab.c | 4 ++--
mm/slab.h | 2 +-
5 files changed
"Memory hot-add will be disabled\n");
> + pr_info("WARNING: KASAN doesn't support memory hot-add\n");
> + pr_info("Memory hot-add will be disabled\n");
No objections, but let's wait for Andrey.
> hotplug_memory_notifier(kasan_mem_notifi
s/ryabinin/aryabinin/
On Wed, Jun 1, 2016 at 5:22 PM, Alexander Potapenko wrote:
> On Wed, Jun 1, 2016 at 5:20 PM, Shuah Khan wrote:
>> Change the following memory hot-add error messages to info messages. There
>> is no need for these to be errors.
>>
>> [8.22
On Wed, Jun 1, 2016 at 5:23 PM, Andrey Ryabinin wrote:
> On 05/31/2016 08:49 PM, Alexander Potapenko wrote:
>> On Tue, May 31, 2016 at 1:52 PM, Andrey Ryabinin
>> wrote:
>>>
>>>
>>> On 05/31/2016 01:44 PM, Alexander Potapenko wrote:
>>>&
On Wed, Jun 1, 2016 at 6:31 PM, Alexander Potapenko wrote:
> On Wed, Jun 1, 2016 at 5:23 PM, Andrey Ryabinin
> wrote:
>> On 05/31/2016 08:49 PM, Alexander Potapenko wrote:
>>> On Tue, May 31, 2016 at 1:52 PM, Andrey Ryabinin
>>> wrote:
>>>>
>>>
On Thu, Jun 2, 2016 at 2:17 PM, Andrey Ryabinin wrote:
>
>
> On 06/02/2016 03:02 PM, Alexander Potapenko wrote:
>> On Wed, Jun 1, 2016 at 6:31 PM, Alexander Potapenko
>> wrote:
>>> On Wed, Jun 1, 2016 at 5:23 PM, Andrey Ryabinin
>>> wrote:
>>&
Instead of calling kasan_krealloc(), which replaces the memory allocation
stack ID (if stack depot is used), just unpoison the whole memory chunk.
Signed-off-by: Alexander Potapenko
---
v2: - split v1 into two patches
---
mm/slab.c | 2 +-
mm/slub.c | 5 +++--
2 files changed, 4 insertions
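The distinction the patch draws — plainly unpoisoning the chunk versus calling kasan_krealloc(), which also replaces the recorded allocation stack — can be sketched with a toy shadow model. The struct and field names here are invented for illustration; KASAN's real metadata lives in shadow memory and per-object alloc info:

```c
#include <assert.h>
#include <string.h>

/* Toy shadow model: one shadow byte per granule; 0 = addressable,
 * nonzero = poisoned. alloc_stack_id models the metadata that
 * kasan_krealloc() would overwrite but plain unpoisoning keeps. */
#define GRANULES 8

struct chunk {
    unsigned char shadow[GRANULES];
    unsigned int alloc_stack_id;
};

/* What the ksize() path wants: make every granule accessible,
 * leave the recorded allocation stack untouched. */
static void unpoison_whole(struct chunk *c)
{
    memset(c->shadow, 0, sizeof(c->shadow));
}

/* krealloc-style path for contrast: also replaces the stack id,
 * losing the original allocation's stack trace in reports. */
static void krealloc_unpoison(struct chunk *c, unsigned int new_stack)
{
    memset(c->shadow, 0, sizeof(c->shadow));
    c->alloc_stack_id = new_stack;
}
```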
Add a test that makes sure ksize() unpoisons the whole chunk.
Signed-off-by: Alexander Potapenko
---
v2: - split v1 into two patches
---
lib/test_kasan.c | 20
1 file changed, 20 insertions(+)
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 82169fb..48e5a0b
Do not bail out from depot_save_stack() if the stack trace has zero hash.
Initially depot_save_stack() silently dropped stack traces with zero
hashes, however there's actually no point in reserving this zero value.
Reported-by: Joonsoo Kim
Signed-off-by: Alexander Potapenko
---
lib
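The fix described above amounts to not overloading the hash value 0 as an "empty" sentinel. A hedged sketch of the idea using an explicit emptiness flag instead — a toy open-addressing table, not the real stack depot layout:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy depot: illustrates why the hash value 0 must not double as a
 * "no entry" marker. Slot emptiness is tracked separately, so a
 * stack trace that happens to hash to 0 is saved like any other. */
#define DEPOT_SLOTS 64

struct depot_entry {
    int used;       /* explicit emptiness flag, not hash == 0 */
    uint32_t hash;
};

static struct depot_entry depot[DEPOT_SLOTS];

/* Returns the slot index, or -1 when the table is full; a zero hash
 * is a valid input and is stored and deduplicated normally. */
static int depot_save(uint32_t hash)
{
    size_t base = hash % DEPOT_SLOTS;

    for (size_t probe = 0; probe < DEPOT_SLOTS; probe++) {
        size_t slot = (base + probe) % DEPOT_SLOTS;

        if (depot[slot].used && depot[slot].hash == hash)
            return (int)slot;      /* deduplicated */
        if (!depot[slot].used) {
            depot[slot].used = 1;
            depot[slot].hash = hash;
            return (int)slot;
        }
    }
    return -1;
}
```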
Hi James,
On Wed, Apr 13, 2016 at 6:12 PM, James Morse wrote:
> Hi Alex,
>
> On 12/04/16 12:17, Alexander Potapenko wrote:
>> I also wonder if we can, say, land the change to arch/arm64/Kconfig
>> separately from makefile changes that improve the precision or fix
>> c
Fair enough. These are only used in
https://github.com/steelannelida/kasan/commit/7c9b30f499dfd5f48b39fbbd0006c788bd72f72a
I think I'd better send them for review as part of that change.
On Fri, Oct 30, 2015 at 1:24 AM, Andrey Ryabinin wrote:
> On 10/28/2015 07:39 PM, Alexander Potapenko wr
d by Dmitry Chernenkov.
>>
>> Signed-off-by: Dmitry Chernenkov
>> Signed-off-by: Alexander Potapenko
>>
>
> Besides adding SLAB support, this patch seriously messes up SLUB-KASAN part.
> Changelog doesn't mention why this was done and what was done.
> So this p
->state = KASAN_STATE_FREE;
set_track(_info->track);
}
#endif
I'll include them in the next round of patches.
On Fri, Feb 19, 2016 at 2:41 AM, Joonsoo Kim wrote:
>> On Mon, Feb 1, 2016 at 3:15 AM, Joonsoo Kim wrote:
>>> On Thu, Jan 28, 2016 at 02:29:42PM +0100, Alexander
in both SLAB and SLUB modes.
I'll send the updated patch set later today.
On Tue, Feb 2, 2016 at 5:25 PM, Alexander Potapenko wrote:
> The intention was to detect the situation in which a new allocator
> appears for which we don't know how it behaves if we allocate more
> than KMALLOC_MAX_C
seems to be
> a more fitting name.
>
> Suggested-by: Marco Elver
> Signed-off-by: Andrey Konovalov
> Link:
> https://linux-review.googlesource.com/id/I719cc93483d4ba288a634dba80ee6b7f2809cd26
Reviewed-by: Alexander Potapenko
> ---
> mm/kasan/common.c | 47
lov
> Link:
> https://linux-review.googlesource.com/id/Iba2a6697e3c6304cb53f89ec61dedc77fa29e3ae
Reviewed-by: Alexander Potapenko
> ---
> Documentation/dev-tools/kasan.rst | 16 +++-
> 1 file changed, 11 insertions(+), 5 deletions(-)
>
> diff --git a/Documentation/
On Tue, Jan 5, 2021 at 7:28 PM Andrey Konovalov wrote:
>
> Clarify and update comments and info messages in KASAN tests.
>
> Signed-off-by: Andrey Konovalov
> Link:
> https://linux-review.googlesource.com/id/I6c816c51fa1e0eb7aa3dead6bda1f339d2af46c8
> void *kasan_ptr_result;
> int
On Tue, Jan 5, 2021 at 7:28 PM Andrey Konovalov wrote:
>
> Add 3 new tests for tag-based KASAN modes:
>
> 1. Check that match-all pointer tag is not assigned randomly.
> 2. Check that 0xff works as a match-all pointer tag.
> 3. Check that there are no match-all memory tags.
>
> Note that test #3
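The match-all semantics being tested can be sketched as follows, with the tag kept in the pointer's top byte as in arm64 top-byte-ignore; the constants and helpers are illustrative, not the kernel's kasan headers:

```c
#include <assert.h>
#include <stdint.h>

/* Tag-based KASAN sketch: the pointer tag lives in the top byte,
 * and 0xFF acts as the match-all tag that passes any memory tag
 * check. Values mirror the scheme, not actual kernel code. */
#define TAG_SHIFT     56
#define TAG_MATCH_ALL 0xFFu

static uint8_t get_tag(uint64_t tagged_ptr)
{
    return (uint8_t)(tagged_ptr >> TAG_SHIFT);
}

static uint64_t set_tag(uint64_t ptr, uint8_t tag)
{
    return (ptr & ~((uint64_t)0xFF << TAG_SHIFT)) |
           ((uint64_t)tag << TAG_SHIFT);
}

/* Access check: fault unless the pointer tag matches the memory tag
 * or the pointer carries the match-all tag. */
static int tags_match(uint64_t tagged_ptr, uint8_t mem_tag)
{
    uint8_t ptr_tag = get_tag(tagged_ptr);

    return ptr_tag == TAG_MATCH_ALL || ptr_tag == mem_tag;
}
```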
view.googlesource.com/id/Id347dfa5fe8788b7a1a189863e039f409da0ae5f
Reviewed-by: Alexander Potapenko
> KASAN tests consist on two parts:
While at it: "consist of".
On Tue, Jan 5, 2021 at 7:28 PM Andrey Konovalov wrote:
>
> It might not be obvious to the compiler that the expression must be
> executed between writing and reading to fail_data. In this case, the
> compiler might reorder or optimize away some of the accesses, and
> the tests will fail.
Have
Nit: s/adopt/adapt in the title.
> +again:
> ptr1 = kmalloc(size, GFP_KERNEL);
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);
>
> @@ -384,6 +386,13 @@ static void kmalloc_uaf2(struct kunit *test)
> ptr2 = kmalloc(size, GFP_KERNEL);
>
On Tue, Jan 5, 2021 at 7:28 PM Andrey Konovalov wrote:
>
> Since the hardware tag-based KASAN mode might not have a redzone that
> comes after an allocated object (when kasan.mode=prod is enabled), the
> kasan_bitops_tags() test ends up corrupting the next object in memory.
>
> Change the test so
ink:
> https://linux-review.googlesource.com/id/Ia173d5a1b215fe6b2548d814ef0f4433cf983570
Reviewed-by: Alexander Potapenko
bling preemption around flush_tlb_one_kernel().
>
> Link: https://lore.kernel.org/lkml/ygidbaboelggm...@elver.google.com/
> Reported-by: Tomi Sarvela
> Signed-off-by: Marco Elver
Acked-by: Alexander Potapenko
prevent certain information leaks.
>
> Signed-off-by: Marco Elver
Acked-by: Alexander Potapenko
> ---
> mm/kfence/core.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 3b8ec938470a..f7106f28443d 100644
> --- a
On Wed, Mar 3, 2021 at 1:12 PM Marco Elver wrote:
>
> Use %td for ptrdiff_t.
>
> Link:
> https://lkml.kernel.org/r/3abbe4c9-16ad-c168-a90f-087978ccd...@csgroup.eu
> Reported-by: Christophe Leroy
> Signed-off-by: Marco Elver
Reviewed-by: Alexander Potapenko
> [ 14.998426] BUG: KFENCE: invalid read in
> finish_task_switch.isra.0+0x54/0x23c
> [ 14.998426]
> [ 15.007061] Invalid read at 0x(ptrval):
> [ 15.010906] finish_task_switch.isra.0+0x54/0x23c
> [ 15.015633] kunit_try_run_case+0x5c/0xd0
> [ 15.019682]
On Thu, Mar 4, 2021 at 9:53 PM Marco Elver wrote:
>
> cache_alloc_debugcheck_after() performs checks on an object, including
> adjusting the returned pointer. None of this should apply to KFENCE
> objects. While for non-bulk allocations, the checks are skipped when we
> allocate via KFENCE, for
On Fri, Mar 5, 2021 at 2:31 AM Andrew Morton wrote:
>
> On Thu, 4 Mar 2021 22:05:48 +0100 Alexander Potapenko
> wrote:
>
> > On Thu, Mar 4, 2021 at 9:53 PM Marco Elver wrote:
> > >
> > > cache_alloc_debugcheck_after() performs checks on an object, including
> also avoids scanning the whole source string.
Looks like a good thing to do.
> Signed-off-by: Zhiyuan Dai
Acked-by: Alexander Potapenko
> ---
> mm/kasan/report_generic.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/kasan/report_generic.c b/
" case, since the allocation
> is ephemeral for the lifespan of the namespace, there are no explicit
> restriction. However, the implicit restriction, of having enough
> available "System RAM" to store the page map for the typically large
> pmem, still applies.
>
> Fixes: 6
On Thu, Jan 4, 2024 at 9:45 PM Stefan Hajnoczi wrote:
>
> On Tue, Jan 02, 2024 at 08:03:46AM -0500, Michael S. Tsirkin wrote:
> > On Mon, Jan 01, 2024 at 05:38:24AM -0800, syzbot wrote:
> > > Hello,
> > >
> > > syzbot found the following issue on:
> > >
> > > HEAD commit: fbafc3e621c3 Merge
ask
> variable. Disable instrumentation in the respective functions. They are
> very small and it's easy to see that no important metadata updates are
> lost because of this.
>
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
son_memory() calls for the output buffers.
> The logic is the same as in [1].
>
> [1]
> https://github.com/zlib-ng/zlib-ng/commit/1f5ddcc009ac3511e99fc88736a9e1a6381168c5
>
> Reported-by: Alexander Gordeev
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
>
Leoshkevich
Reviewed-by: Alexander Potapenko
ison the whole dest manually with kmsan_unpoison_memory().
>
> Reported-by: Alexander Gordeev
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
ce this question came up, I should probably add a check and
> a WARN_ON_ONCE() here.
Yes, please.
--
Alexander Potapenko
Software Engineer
Google Germany GmbH
Erika-Mann-Straße, 33
80636 München
Geschäftsführer: Paul Manicle, Liana Sebastian
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg
On Tue, Nov 21, 2023 at 11:02 PM Ilya Leoshkevich wrote:
>
> Comparing pointers with TASK_SIZE does not make sense when kernel and
> userspace overlap. Skip the comparison when this is the case.
>
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
On Tue, Nov 21, 2023 at 11:03 PM Ilya Leoshkevich wrote:
>
> put_user() uses inline assembly with precise constraints, so Clang is
> in principle capable of instrumenting it automatically. Unfortunately,
> one of the constraints contains a dereferenced user pointer, and Clang
> does not currently
On Tue, Nov 21, 2023 at 11:02 PM Ilya Leoshkevich wrote:
>
> Prevent KMSAN from complaining about buffers filled by cpacf_trng()
> being uninitialized.
>
> Tested-by: Alexander Gordeev
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
depending on whether the code is built with
> sanitizers or fortify. This should probably be streamlined, but in the
> meantime resolve the issues by introducing the IN_BOOT_STRING_C macro,
> similar to the existing IN_ARCH_STRING_C macro.
>
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
On Tue, Nov 21, 2023 at 11:06 PM Ilya Leoshkevich wrote:
>
> Like for KASAN, it's useful to temporarily disable KMSAN checks around,
> e.g., redzone accesses. Introduce kmsan_disable_current() and
> kmsan_enable_current(), which are similar to their KASAN counterparts.
Initially we used to have
> +static inline void *kmsan_get_metadata(void *addr, bool is_origin)
> +{
> + return NULL;
> +}
> +
> #endif
We shouldn't need this part, as kmsan_get_metadata() should never be
called in non-KMSAN builds.
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
(hope some s390 maintainer acks this as well)
> +static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
> +{
> + if (addr >= (void *)_lowcore &&
> + addr < (void *)(_lowcore + 1)) {
> + /*
> +* Different lowcores accessed via S390_lowcore are described
> +* by
Hi Ilya,
Sorry for this taking so long, I'll probably take a closer look next week.
Overall, the s390 part looks good to me, but I wanted to check the x86
behavior once again (and perhaps figure out how to avoid introducing
another way to disable KMSAN).
Do you happen to have a Git repo with your
s.
>
> Unpoisoning the canary is not the right thing to do: only
> check_canary() is supposed to ever touch it. Instead, disable KMSAN
> checks around canary read accesses.
>
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
On Tue, Nov 21, 2023 at 11:07 PM Ilya Leoshkevich wrote:
>
> The constraints of the DFLTCC inline assembly are not precise: they
> do not communicate the size of the output buffers to the compiler, so
> it cannot automatically instrument it.
KMSAN usually does a poor job instrumenting inline
-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
gs when running the ftrace testsuite.
I couldn't reproduce these warnings on x86, hope you really need this
change on s390 :)
> Fix by trusting the architecture-specific assembly code and always
> unpoisoning ftrace_regs in ftrace_ops_list_func.
>
> Signed-off-by: Ilya Leoshkev
On Fri, Dec 8, 2023 at 1:53 PM Alexander Potapenko wrote:
>
> On Tue, Nov 21, 2023 at 11:02 PM Ilya Leoshkevich wrote:
> >
> > KMSAN warns about check_canary() accessing the canary.
> >
> > The reason is that, even though set_canary() is properly instrumented
> &
> A problem with __memset() is that, at least for me, it always ends
> up being a call. There is a use case where we need to write only 1
> byte, so I thought that introducing a call there (when compiling
> without KMSAN) would be unacceptable.
Wonder what happens with that use case if we e.g.
On Tue, Nov 21, 2023 at 11:06 PM Ilya Leoshkevich wrote:
>
> Add a wrapper for memset() that prevents unpoisoning.
We have __memset() already, won't it work for this case?
On the other hand, I am not sure you want to preserve the redzone in
its previous state (unless it's known to be poisoned).
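The difference between an instrumented memset() and the proposed non-unpoisoning wrapper can be sketched with an explicit toy shadow array (0xFF = uninitialized, 0x00 = initialized; the names are illustrative, not KMSAN's API):

```c
#include <assert.h>
#include <string.h>

/* KMSAN toy model: an instrumented memset() both writes the data and
 * clears the shadow (unpoisons); the no-sanitize variant writes the
 * data but leaves the shadow, and hence the "uninitialized" verdict,
 * untouched. */
#define BUF_LEN 8

static unsigned char buf[BUF_LEN];
static unsigned char shadow[BUF_LEN] = {
    0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF
};

/* Instrumented memset: data write plus shadow unpoison. */
static void memset_instrumented(unsigned char c, size_t n)
{
    memset(buf, c, n);
    memset(shadow, 0x00, n);
}

/* memset_no_sanitize_memory() stand-in: data write only, shadow
 * preserved, so redzone bytes filled this way still report as
 * uninitialized when later read. */
static void memset_no_sanitize(unsigned char c, size_t n)
{
    memset(buf, c, n);
}

static int is_poisoned(size_t i)
{
    return shadow[i] == 0xFF;
}
```

This is also why the thread questions using __memset() directly: it avoids instrumentation, but it cannot express "write one byte without a call", which the wrapper is meant to allow.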
On Tue, Nov 21, 2023 at 11:02 PM Ilya Leoshkevich wrote:
>
> Avoid false KMSAN negatives with SLUB_DEBUG by allowing
> kmsan_slab_free() to poison the freed memory, and by preventing
> init_object() from unpoisoning new allocations. The usage of
> memset_no_sanitize_memory() does not degrade the
gs when running the ftrace testsuite.
>
> Fix by trusting the assembly code and always unpoisoning ftrace_regs in
> kprobe_ftrace_handler().
>
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
On Fri, Dec 8, 2023 at 3:14 PM Ilya Leoshkevich wrote:
>
> On Fri, 2023-12-08 at 14:32 +0100, Alexander Potapenko wrote:
> > On Tue, Nov 21, 2023 at 11:07 PM Ilya Leoshkevich
> > wrote:
> > >
> > > The constraints of the DFLTCC inline assembly are not pr
On Tue, Nov 21, 2023 at 11:07 PM Ilya Leoshkevich wrote:
>
> The inline assembly block in s390's chsc() stores that much.
>
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
On Tue, Nov 21, 2023 at 11:02 PM Ilya Leoshkevich wrote:
>
> Currently KMSAN does not fully propagate metadata in strlcpy() and
> strlcat(), because they are built with -ffreestanding and call
> memcpy(). In this combination memcpy() calls are not instrumented.
Is this something specific to
On Tue, Nov 21, 2023 at 11:07 PM Ilya Leoshkevich wrote:
>
> It is useful to manually copy metadata in order to describe the effects
> of memmove()-like logic in uninstrumented code or inline asm. Introduce
> kmsan_memmove_metadata() for this purpose.
>
> Signed-off-by: Ilya Leoshkevich
> ---
>
an_unpoison_memory()
> definition. This produces some runtime overhead, but only when building
> with CONFIG_KMSAN. The benefit is that it does not disturb the existing
> KMSAN build logic and call sites don't need to be changed.
>
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
option to describe this situation, so explicitly check for
> s390.
>
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
(see the nit below)
> ---
> mm/kmsan/init.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/kmsan/init.c b/mm/kmsan
ata for page operations")
> Suggested-by: Alexander Gordeev
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
> ---
> mm/kmsan/shadow.c | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c
> index b9d05aff313e..2
viewed-by: Alexander Gordeev
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
On Wed, Nov 15, 2023 at 9:34 PM Ilya Leoshkevich wrote:
>
> Hi,
>
> This series provides the minimal support for Kernel Memory Sanitizer on
> s390. Kernel Memory Sanitizer is clang-only instrumentation for finding
> accesses to uninitialized memory. The clang support for s390 has already
> been
On Thu, Nov 16, 2023 at 10:04 AM Alexander Potapenko wrote:
>
> On Wed, Nov 15, 2023 at 9:35 PM Ilya Leoshkevich wrote:
> >
> > The unwind code can read uninitialized frames. Furthermore, even in
> > the good case, KMSAN does not emit shadow for backchain
On Wed, Nov 15, 2023 at 9:35 PM Ilya Leoshkevich wrote:
>
> The unwind code can read uninitialized frames. Furthermore, even in
> the good case, KMSAN does not emit shadow for backchains. Therefore
> disable it for the unwinding functions.
>
> Signed-off-by: Ilya Leoshkevich
> ---
>
On Wed, Nov 15, 2023 at 9:34 PM Ilya Leoshkevich wrote:
>
> All other sanitizers are disabled for these components as well.
>
> Reviewed-by: Alexander Gordeev
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
(see a nit below)
> ---
> arch/s390/boot/
stens
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
(see the comment below)
>
> -#include
> +#include
For the sake of consistency with other KMSAN code, please keep the
headers sorted alphabetically.
Leoshkevich
Reviewed-by: Alexander Potapenko
On Wed, Nov 15, 2023 at 9:34 PM Ilya Leoshkevich wrote:
>
> Like for KASAN, it's useful to temporarily disable KMSAN checks around,
> e.g., redzone accesses.
This example is incorrect, because KMSAN does not have redzones.
You are calling these functions from "mm: slub: Let KMSAN access
to improve the KMSAN usability for
> modules.
>
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
.
Nice!
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
MSAN for now.
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
ts.
Good catch, thank you!
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
On Wed, Nov 15, 2023 at 9:34 PM Ilya Leoshkevich wrote:
>
> Avoid false KMSAN negatives with SLUB_DEBUG by allowing
> kmsan_slab_free() to poison the freed memory, and by preventing
> init_object() from unpoisoning new allocations.
>
> Signed-off-by: Ilya Leoshkevich
> ---
> mm/kmsan/hooks.c |
On Wed, Nov 15, 2023 at 9:35 PM Ilya Leoshkevich wrote:
>
> The pages for the KMSAN metadata associated with most kernel mappings
> are taken from memblock by the common code. However, vmalloc and module
> metadata needs to be defined by the architectures.
>
> Be a little bit more careful than
On Wed, Nov 15, 2023 at 9:35 PM Ilya Leoshkevich wrote:
>
> This is normally done by the generic entry code, but the
> kernel_stack_overflow() flow bypasses it.
>
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
> ---
> arch/s390/kernel/traps.c | 2 ++
gt; pointer. While at it, prettify them too.
>
> Suggested-by: Heiko Carstens
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
ossible.
> Signed-off-by: Ilya Leoshkevich
Reviewed-by: Alexander Potapenko
On Thu, Jun 13, 2024 at 5:39 PM Ilya Leoshkevich wrote:
>
> put_user() uses inline assembly with precise constraints, so Clang is
> in principle capable of instrumenting it automatically. Unfortunately,
> one of the constraints contains a dereferenced user pointer, and Clang
> does not currently