On Mon, Nov 23, 2020 at 7:54 PM Andrey Konovalov wrote:
>
> > > @@ -168,6 +173,9 @@ void quarantine_put(struct kmem_cache *cache, void *object)
> > > 	struct qlist_head temp = QLIST_INIT;
> > > 	struct kasan_free_meta *meta = kasan_get_free_meta(cache, object);
> > >
> > > +
> > > [...]
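For orientation on this hunk: quarantine_put() feeds KASAN's quarantine,
which holds freed objects back from the allocator for a while so that late
(use-after-free) accesses still land on poisoned memory. Below is a toy
FIFO model of that idea; the names (toy_quarantine_put, qnode) and the
limit are illustrative only, not the actual mm/kasan/quarantine.c code.

#include <stdio.h>
#include <stdlib.h>

/* Toy quarantine node: a freed object waiting to be really freed. */
struct qnode {
	void *object;
	struct qnode *next;
};

static struct qnode *q_head, *q_tail;
static unsigned int q_size, q_limit = 4;	/* arbitrary toy limit */

static void toy_quarantine_put(void *object)
{
	struct qnode *n = malloc(sizeof(*n));	/* toy: no error handling */

	n->object = object;
	n->next = NULL;
	if (q_tail)
		q_tail->next = n;
	else
		q_head = n;
	q_tail = n;

	/* Over the limit: evict the oldest entry and free it for real. */
	if (++q_size > q_limit) {
		struct qnode *old = q_head;

		q_head = old->next;
		if (!q_head)
			q_tail = NULL;
		q_size--;
		printf("quarantine evicts %p\n", old->object);
		free(old->object);
		free(old);
	}
}

int main(void)
{
	for (int i = 0; i < 6; i++)
		toy_quarantine_put(malloc(16));
	return 0;
}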
On Tue, Nov 17, 2020 at 2:12 PM Dmitry Vyukov wrote:
>
> > void __kasan_poison_slab(struct page *page)
> > {
> > [...]
> > @@ -272,11 +305,9 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
> > 	struct kasan_alloc_meta *alloc_meta;
> >
> > 	if [...]
On Fri, Nov 13, 2020 at 11:20 PM Andrey Konovalov wrote:
>
> KASAN marks caches that are sanitized with the SLAB_KASAN cache flag.
> Currently if the metadata that is appended after the object (stores e.g.
> stack trace ids) doesn't fit into KMALLOC_MAX_SIZE (can only happen with
> SLAB, see the comment in the patch), KASAN turns off sanitization
> completely.
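The "doesn't fit" condition above comes down to a size comparison made when
the cache is set up: object size plus the appended alloc/free metadata
against KMALLOC_MAX_SIZE. A minimal standalone sketch of that pre-patch
rule follows; the constants and the helper name cache_is_sanitized() are
illustrative, not the actual mm/kasan code.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins; real values are config- and struct-dependent. */
#define KMALLOC_MAX_SIZE	(1UL << 22)
#define ALLOC_META_SIZE		32	/* hypothetical sizeof(struct kasan_alloc_meta) */
#define FREE_META_SIZE		32	/* hypothetical sizeof(struct kasan_free_meta) */

/* Pre-patch rule: if object + metadata overflows the limit, the cache
 * simply doesn't get SLAB_KASAN and is not sanitized at all. */
static bool cache_is_sanitized(unsigned long object_size)
{
	return object_size + ALLOC_META_SIZE + FREE_META_SIZE <= KMALLOC_MAX_SIZE;
}

int main(void)
{
	printf("4K object sanitized: %d\n", cache_is_sanitized(4096));
	printf("max-size object sanitized: %d\n",
	       cache_is_sanitized(KMALLOC_MAX_SIZE));
	return 0;
}

The patch replaces this all-or-nothing rule: object data is sanitized
regardless, and only the appended metadata is dropped when it doesn't fit.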
On Tue, Nov 17, 2020 at 2:18 PM Marco Elver wrote:
>
> On Tue, 17 Nov 2020 at 14:12, Dmitry Vyukov wrote:
>
> > > +	 */
> > > 	*(u8 *)kasan_mem_to_shadow(object) = KASAN_KMALLOC_FREE;
> > > +
> > > 	___cache_free(cache, object, _THIS_IP_);
> > >
> > > 	if (IS_ENABLED(CONFIG_SLAB))
> > > @@ -168,6 +173,9 @@ void quarantine_put(struct kmem_cache *cache, void *object)
> > > [...]
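The line under discussion is the object-poisoning step on free:
kasan_mem_to_shadow() maps an address to its shadow byte (under generic
KASAN, one shadow byte tracks eight bytes of memory), and storing
KASAN_KMALLOC_FREE there marks the freed object so later accesses are
reported. Below is a toy userspace model of that mapping, using a
simulated shadow array instead of the kernel's fixed shadow offset.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define SHADOW_SCALE_SHIFT	3	/* 1 shadow byte per 8 memory bytes */
#define KASAN_KMALLOC_FREE	0xFB	/* poison value for freed heap objects */

/* Simulated 64-byte "heap" plus its 8-byte shadow region. */
static uint8_t heap[64];
static uint8_t shadow[sizeof(heap) >> SHADOW_SCALE_SHIFT];

/* Toy equivalent of kasan_mem_to_shadow(): same shift, but indexing a
 * local array rather than a fixed offset in the kernel address space. */
static uint8_t *mem_to_shadow(void *addr)
{
	size_t off = (uint8_t *)addr - heap;

	return &shadow[off >> SHADOW_SCALE_SHIFT];
}

int main(void)
{
	void *object = &heap[32];

	/* Mirrors the quoted line from the patch. */
	*mem_to_shadow(object) = KASAN_KMALLOC_FREE;
	printf("shadow[%zu] = 0x%02x\n",
	       (size_t)(mem_to_shadow(object) - shadow),
	       (unsigned int)*mem_to_shadow(object));
	return 0;
}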