On Tue, Dec 23, 2025 at 11:59:54AM +0100, Jan Beulich wrote:
> On 23.12.2025 09:15, Roger Pau Monne wrote:
> > The current logic splits the update of the amount of available memory in
> > the system (total_avail_pages) and pending claims into two separately
> > locked regions.  This leads to a window between counters adjustments where
> > the result of total_avail_pages - outstanding_claims doesn't reflect the
> > real amount of free memory available, and can return a negative value due
> > to total_avail_pages having been updated ahead of outstanding_claims.
> > 
> > Fix by adjusting outstanding_claims and d->outstanding_pages in the same
> > place where total_avail_pages is updated.  This can possibly lead to the
> > pages failing to be assigned to the domain later, after they have already
> > been subtracted from the claimed amount.  Ultimately this would result in a
> > domain losing part of its claim, but that's better than the current skew
> > between total_avail_pages and outstanding_claims.
> 
> For the system as a whole - yes. For just the domain rather not. It may be
> a little cumbersome, but can't we restore the claim from the error path
> after failed assignment? (In fact the need to (optionally) pass a domain
> into free_heap_pages() would improve symmetry with alloc_heap_pages().)

Passing a domain parameter to free_heap_pages() is not that much of an
issue.  The problem with restoring the claim value on failure to
assign is the corner cases.  For example, consider an allocation that
depletes the existing claim, allocating more than what was left to be
claimed.  Restoring the previous claim value on failure to assign to
the domain would be tricky.  It would require returning the consumed
claim from alloc_heap_pages(), so that alloc_domheap_pages() could
restore it on failure to assign.
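
As a purely illustrative example: with d->outstanding_pages == 16 and a
request of 64 pages, only 16 pages would be taken from the claim.  On a
later assignment failure the caller would need to hand back those 16
pages, not 64, and it cannot derive that amount from the request alone,
hence the need for alloc_heap_pages() to report the consumed claim.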

However, I was looking at the possible failure causes of
assign_pages() and I'm not sure there's much point in attempting to
restore the claimed amount.  The current cases where assign_pages() can
fail are (see the condensed form after the list):

 - Domain is dying: keeping the claim is irrelevant, the domain is
   dying anyway.

 - tot_pages > max_pages: inconsistent domain state, and a claim
   should never be bigger than max_pages.

 - tot_pages + alloc > max_pages: only possible if the allocation is using
   claimed pages plus unclaimed ones, as the claim cannot be bigger than
   max_pages.  Such an allocation is doomed to fail anyway, and would point
   to the claim value being incorrectly set.

 - tot_pages + alloc < alloc: overflow of tot_pages, should never
   happen with claimed pages as tot_pages <= max_pages, and claim <=
   max_pages.
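
In condensed form, the above cases amount to one of the following holding
(restating the list, not quoting assign_pages() verbatim):

    d->is_dying ||                        /* 1: dying domain */
    d->tot_pages > d->max_pages ||        /* 2: inconsistent state */
    d->tot_pages + nr > d->max_pages ||   /* 3: allocation beyond max_pages */
    d->tot_pages + nr < nr                /* 4: tot_pages overflow */

None of these should be reachable for an allocation fully covered by a
correctly sized claim.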

However, that only covers the current code in assign_pages(); there's no
guarantee that future changes won't introduce new failure cases.

Having said all that, I have a prototype that restores the claimed
amount, which I could send to the list.  It involves adding two extra
parameters to free_heap_pages(): the domain and the claim amount to
restore.  It's not super-nice, but I was expecting it to be worse.

> > Fixes: 65c9792df600 ("mmu: Introduce XENMEM_claim_pages (subop of memory ops)")
> > Signed-off-by: Roger Pau Monné <[email protected]>
> > ---
> > Arguably we could also get rid of domain_adjust_tot_pages() given what it
> > currently does, which will be a revert of:
> > 
> > 1c3b9dd61dab xen: centralize accounting for domain tot_pages
> > 
> > Opinions?  Should it be done in a separate commit, possibly as a clear
> > revert?  Maybe it's worth keeping the helper in case we need to add more
> > content there, and it's already introduced anyway.
> 
> Personally I think we're better off keeping that helper, even if it's now
> pretty thin.

Ack, sounds good to me.

> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -515,30 +515,6 @@ unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
> >      ASSERT(rspin_is_locked(&d->page_alloc_lock));
> >      d->tot_pages += pages;
> >  
> > -    /*
> > -     * can test d->outstanding_pages race-free because it can only change
> > -     * if d->page_alloc_lock and heap_lock are both held, see also
> > -     * domain_set_outstanding_pages below
> > -     */
> > -    if ( !d->outstanding_pages || pages <= 0 )
> > -        goto out;
> > -
> > -    spin_lock(&heap_lock);
> > -    BUG_ON(outstanding_claims < d->outstanding_pages);
> > -    if ( d->outstanding_pages < pages )
> > -    {
> > -        /* `pages` exceeds the domain's outstanding count. Zero it out. */
> > -        outstanding_claims -= d->outstanding_pages;
> > -        d->outstanding_pages = 0;
> > -    }
> > -    else
> > -    {
> > -        outstanding_claims -= pages;
> > -        d->outstanding_pages -= pages;
> > -    }
> > -    spin_unlock(&heap_lock);
> > -
> > -out:
> >      return d->tot_pages;
> >  }
> 
> Below here the first comment in domain_set_outstanding_pages() refers to
> the code being deleted, and hence imo wants updating, too.
> 
> > @@ -1071,6 +1047,26 @@ static struct page_info *alloc_heap_pages(
> >      total_avail_pages -= request;
> >      ASSERT(total_avail_pages >= 0);
> >  
> > +    if ( d && d->outstanding_pages && !(memflags & MEMF_no_refcount) )
> > +    {
> > +        /*
> > +         * Adjust claims in the same locked region where total_avail_pages is
> > +         * adjusted, not doing so would lead to a window where the amount of
> > +         * free memory (avail - claimed) would be incorrect.
> > +         *
> > +         * Note that by adjusting the claimed amount here it's possible for
> > +         * pages to fail to be assigned to the claiming domain while already
> > +         * having been subtracted from d->outstanding_pages.  Such claimed
> > +         * amount is then lost, as the pages that fail to be assigned to the
> > +         * domain are freed without replenishing the claim.
> > +         */
> > +        unsigned long outstanding = min(outstanding_claims, request);
> > +
> > +        outstanding_claims -= outstanding;
> > +        BUG_ON(outstanding > d->outstanding_pages);
> > +        d->outstanding_pages -= outstanding;
> > +    }
> 
> This now happening with the domain alloc lock not held imo also needs at
> least mentioning (if not discussing) in the description. Aiui it's safe as
> long as all updates of d->outstanding_pages happen with the heap lock
> held. Which in turn may want mentioning in a comment next to the field
> definition, for (now) being different from e.g. ->tot_pages and
> ->xenheap_pages.

I will add the comment to the field definition and update the commit
message, thanks for noticing.
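
Something along these lines next to the field in struct domain (exact
wording to be adjusted):

    /*
     * Pages claimed but not yet possessed.  Unlike tot_pages and
     * xenheap_pages, this field is only updated with the heap_lock held
     * (domain_set_outstanding_pages() additionally takes page_alloc_lock).
     */
    unsigned int outstanding_pages;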

Roger.
