Hi Julien,

> -----Original Message-----
> From: Julien Grall <jul...@xen.org>
> Sent: Tuesday, May 17, 2022 2:01 AM
> To: Penny Zheng <penny.zh...@arm.com>; xen-devel@lists.xenproject.org
> Cc: Wei Chen <wei.c...@arm.com>; Andrew Cooper
> <andrew.coop...@citrix.com>; George Dunlap <george.dun...@citrix.com>;
> Jan Beulich <jbeul...@suse.com>; Stefano Stabellini <sstabell...@kernel.org>;
> Wei Liu <w...@xen.org>
> Subject: Re: [PATCH v4 1/6] xen: do not free reserved memory into heap
> 
> Hi Penny,
> 
> On 10/05/2022 03:27, Penny Zheng wrote:
> > Pages used as guest RAM for static domain, shall be reserved to this
> > domain only.
> > So in case reserved pages being used for other purpose, users shall
> > not free them back to heap, even when last ref gets dropped.
> >
> > free_staticmem_pages will be called by free_heap_pages in runtime for
> > static domain freeing memory resource, so let's drop the __init flag.
> >
> > Signed-off-by: Penny Zheng <penny.zh...@arm.com>
> > ---
> > v4 changes:
> > - no changes
> > ---
> > v3 changes:
> > - fix possible racy issue in free_staticmem_pages()
> > - introduce a stub free_staticmem_pages() for the
> > !CONFIG_STATIC_MEMORY case
> > - move the change to free_heap_pages() to cover other potential call
> > sites
> > - fix the indentation
> > ---
> > v2 changes:
> > - new commit
> > ---
> >   xen/common/page_alloc.c | 17 ++++++++++++++---
> >   xen/include/xen/mm.h    |  2 +-
> >   2 files changed, 15 insertions(+), 4 deletions(-)
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> > index 319029140f..5e569a48a2 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -1443,6 +1443,10 @@ static void free_heap_pages(
> >
> >       ASSERT(order <= MAX_ORDER);
> >
> > +    if ( pg->count_info & PGC_reserved )
> 
> NIT: I would suggest to use "unlikely()" here.
> 
> > +        /* Reserved page shall not go back to the heap. */
> > +        return free_staticmem_pages(pg, 1UL << order, need_scrub);
> > +
> >       spin_lock(&heap_lock);
> >
> >       for ( i = 0; i < (1 << order); i++ )
> > @@ -2636,8 +2640,8 @@ struct domain *get_pg_owner(domid_t domid)
> >
> >   #ifdef CONFIG_STATIC_MEMORY
> >   /* Equivalent of free_heap_pages to free nr_mfns pages of static memory. */
> > -void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
> > -                                 bool need_scrub)
> > +void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
> > +                          bool need_scrub)
> 
> Looking at the implementation of free_staticmem_pages(), the page will be
> scrubbed synchronously.
> 
> If I am not mistaken, static memory is not yet supported so I would be OK to
> continue with synchronous scrubbing. However, this will need to be
> asynchronous before we even consider to security support it.
> 

Yes, I remember that asynchronous scrubbing is still on the to-do list for
static memory.

If it's not too much trouble for you, I would like to ask for some help on
this issue ;). I only have basic knowledge of the scrubbing logic: dirty
pages are placed at the end of heap(node, zone, order) and "first_dirty"
is used to track them. IMO both parts are tied to the heap allocator and
not reusable for static memory, so maybe I need to re-write
scrub_free_pages() for static memory, and also link the need-to-scrub
reserved pages to a new global list, e.g. dirty_resv_list, for async
scrubbing?
Any suggestions?

> BTW, SUPPORT.md doesn't seem to explicitely say whether static memory is
> supported. Would you be able to send a patch to update it? I think this should
> be tech preview for now.
> 

Sure, will do.
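For reference, a possible entry (the exact heading wording and placement relative to SUPPORT.md's existing sections would need checking; this is only a guess at the file's usual layout) could be:

```
### ARM: Static memory allocation

    Status: Tech Preview
```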

> >   {
> >       mfn_t mfn = page_to_mfn(pg);
> >       unsigned long i;
> > @@ -2653,7 +2657,8 @@ void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
> >           }
> >
> >           /* In case initializing page of static memory, mark it PGC_reserved. */
> > -        pg[i].count_info |= PGC_reserved;
> > +        if ( !(pg[i].count_info & PGC_reserved) )
> 
> NIT: I understand the flag may have already been set, but I am not convinced 
> if
> it is worth checking it and then set.
> 

Jan suggested that since we removed the __init from free_staticmem_pages,
it is now also called at runtime on pages that are already marked, so it's
better to add this check here.

> > +            pg[i].count_info |= PGC_reserved;
> 
> 
> >       }
> >   }
> >
> > @@ -2762,6 +2767,12 @@ int __init acquire_domstatic_pages(struct domain *d, mfn_t smfn,
> >
> >       return 0;
> >   }
> > +#else
> > +void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
> > +                          bool need_scrub)
> > +{
> > +    ASSERT_UNREACHABLE();
> > +}
> >   #endif
> >
> >   /*
> > diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> > index 3be754da92..9fd95deaec 100644
> > --- a/xen/include/xen/mm.h
> > +++ b/xen/include/xen/mm.h
> > @@ -85,10 +85,10 @@ bool scrub_free_pages(void);
> >   } while ( false )
> >   #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
> >
> > -#ifdef CONFIG_STATIC_MEMORY
> >   /* These functions are for static memory */
> >   void free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
> >                             bool need_scrub);
> > +#ifdef CONFIG_STATIC_MEMORY
> > +int acquire_domstatic_pages(struct domain *d, mfn_t smfn, unsigned int nr_mfns,
> >                               unsigned int memflags);
> >   #endif
> 
> Cheers,
> 
> --
> Julien Grall
