On Tue, 12 May 2026 13:29:52 +0200
Oleg Nesterov <[email protected]> wrote:

> On 05/11, Rosen Penev wrote:
> >
> >  struct xol_area {
> >     wait_queue_head_t               wq;             /* if all slots are busy */
> > -   unsigned long                   *bitmap;        /* 0 = free slot */
> >
> >     struct page                     *page;
> >     /*
> > @@ -117,6 +116,7 @@ struct xol_area {
> >      * the vma go away, and we must handle that reasonably gracefully.
> >      */
> >     unsigned long                   vaddr;          /* Page(s) of instruction slots */
> > +   unsigned long                   bitmap[];       /* 0 = free slot */
> >  };
> >
> >  static void uprobe_warn(struct task_struct *t, const char *msg)
> > @@ -1755,18 +1755,13 @@ static struct xol_area *__create_xol_area(unsigned long vaddr)
> >     struct xol_area *area;
> >     void *insns;
> >
> > -   area = kzalloc_obj(*area);
> > +   area = kzalloc_flex(*area, bitmap, BITS_TO_LONGS(UINSNS_PER_PAGE));
> 
> The downside is that kmalloc will use kmem_cache with
> ->object_size = PAGE_SIZE * 2, almost half of the allocated memory
> won't be used...

Hmm, is the bitmap really that big?

#define UINSNS_PER_PAGE                 (PAGE_SIZE/UPROBE_XOL_SLOT_BYTES)

And even on arm64, 

#define UPROBE_XOL_SLOT_BYTES   AARCH64_INSN_SIZE

So if PAGE_SIZE is 4k, UINSNS_PER_PAGE is 1024, and BITS_TO_LONGS(1024)
is 1024/64 = 16 longs, i.e. 128 bytes for the bitmap. So the object
should be allocated from a cache with object_size = 256, shouldn't it?

Thank you,

> 
> But technically the patch looks correct so I won't argue.
> 
> Oleg.
> 
> 


-- 
Masami Hiramatsu (Google) <[email protected]>
