On Thu, Sep 17, 2020 at 11:40 AM Christopher Lameter wrote:
>
> On Tue, 15 Sep 2020, Marco Elver wrote:
>
> > void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags)
> > {
> > - void *ret = slab_alloc(s, gfpflags, _RET_IP_);
> > + void *ret = slab_alloc(s, gfpflags, _RET_IP_, s->object_size);
On Tue, 15 Sep 2020, Marco Elver wrote:
> void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags)
> {
> - void *ret = slab_alloc(s, gfpflags, _RET_IP_);
> + void *ret = slab_alloc(s, gfpflags, _RET_IP_, s->object_size);
The additional size parameter is part of struct kmem_cache.
From: Alexander Potapenko
Inserts KFENCE hooks into the SLUB allocator.
We note the addition of the 'orig_size' argument to slab_alloc*()
functions, to be able to pass the originally requested size to KFENCE.
When KFENCE is disabled, there is no additional overhead, since these
functions are __always_inline.