On Mon, May 11, 2026 at 01:48:56PM +0000, Anirudh Rayabharam wrote:
> On Thu, May 07, 2026 at 03:43:09PM +0000, Stanislav Kinsburskii wrote:
> > mshv_prepare_pinned_region() returns 0 (success) when mshv_region_map()
> > fails on an unencrypted partition. The condition on the error path:
> >
> > if (ret && mshv_partition_encrypted(partition))
> >
> > only handles map failures for encrypted partitions — if the partition is
> > not encrypted and the map fails, execution falls through to 'return 0',
> > silently ignoring the error.
> >
> > Additionally, calling mshv_region_invalidate() inline on map failure
> > zeroes the mreg_pages array before the caller's cleanup path
> > (mshv_region_destroy) can call mshv_region_unmap(). Since unmap skips
> > pages where mreg_pages[offset] is NULL, this can leave stale SLAT
> > mappings for partially-mapped pages.
> >
> > Fix by returning immediately on success and falling through to error
> > return on failure. For unencrypted partitions, the caller's
> > mshv_region_destroy() handles unmap followed by invalidate in the
> > correct order. For encrypted partitions where re-sharing fails, zero
> > the page array without unpinning — the pages are inaccessible to the
> > host and must not be unpinned, but zeroing prevents
> > mshv_region_destroy() from attempting to unpin them.
>
> mshv_region_destroy() calls mshv_region_invalidate() which calls
> mshv_region_invalidate_pages() which just does:
>
> static void mshv_region_invalidate_pages(struct mshv_mem_region *region,
> 					 u64 page_offset, u64 page_count)
> {
> 	if (region->mreg_type == MSHV_REGION_TYPE_MEM_PINNED)
> 		unpin_user_pages(region->mreg_pages + page_offset,
> 				 page_count);
>
> 	memset(region->mreg_pages + page_offset, 0,
> 	       page_count * sizeof(struct page *));
> }
>
> It doesn't check for zeroed pages before unpinning.
>
unpin_user_pages() itself skips NULL entries in the page array, so zeroing
mreg_pages is enough to make the later unpin a no-op for those pages.
Thanks,
Stanislav
> >
> > Fixes: 621191d709b14 ("Drivers: hv: Introduce mshv_root module to expose /dev/mshv to VMMs")
> > Signed-off-by: Stanislav Kinsburskii <[email protected]>
> > ---
> > drivers/hv/mshv_root_main.c | 26 ++++++++++++++++----------
> > 1 file changed, 16 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
> > index 665d565899c15..7e4252b6bc65c 100644
> > --- a/drivers/hv/mshv_root_main.c
> > +++ b/drivers/hv/mshv_root_main.c
> > @@ -1360,32 +1360,38 @@ static int mshv_prepare_pinned_region(struct mshv_mem_region *region)
> >  			pt_err(partition,
> >  			       "Failed to unshare memory region (guest_pfn: %llu): %d\n",
> >  			       region->start_gfn, ret);
> > -			goto invalidate_region;
> > +			goto err_out;
> >  		}
> >  	}
> >
> > ret = mshv_region_map(region);
> > - if (ret && mshv_partition_encrypted(partition)) {
> > + if (ret)
> > + goto share_region;
> > +
> > + return 0;
> > +
> > +share_region:
> > + if (mshv_partition_encrypted(partition)) {
> > int shrc;
> >
> > shrc = mshv_region_share(region);
> > if (!shrc)
> > - goto invalidate_region;
> > + goto err_out;
> >
> > pt_err(partition,
> > "Failed to share memory region (guest_pfn: %llu): %d\n",
> > region->start_gfn, shrc);
> > /*
> > - * Don't unpin if marking shared failed because pages are no
> > - * longer mapped in the host, ie root, anymore.
> > + * Re-sharing failed — the pages remain inaccessible to the
> > + * host. Zero the page array so that mshv_region_destroy()
> > + * won't attempt to unpin them (leaking the page references
> > + * is intentional; unpinning host-inaccessible pages would be
> > + * unsafe).
> > */
>
> Actually, mshv_region_destroy() also does mshv_region_share(). Maybe we
> can remove it from here altogether. Either way, should this zeroing
> logic be added there too so as not to crash the host by unpinning pages
> that weren't marked shared?
>
> From mshv_region_destroy():
>
> 	if (mshv_partition_encrypted(partition)) {
> 		ret = mshv_region_share(region);
> 		if (ret) {
> 			pt_err(partition,
> 			       "Failed to regain access to memory, unpinning user pages will fail and crash the host error: %d\n",
> 			       ret);
> 			return;
> 		}
> 	}
>
> Thanks,
> Anirudh.