From: Stanislav Kinsburskii <[email protected]> Sent: Monday, October 6, 2025 4:33 PM
>
> On Mon, Oct 06, 2025 at 05:09:07PM +0000, Michael Kelley wrote:
> > From: Stanislav Kinsburskii <[email protected]> Sent: Monday, October 6, 2025 8:06 AM
> > >
> > > Reduce overhead when unmapping large memory regions by batching GPA unmap
> > > operations in 2MB-aligned chunks.
> > >
> > > Use a dedicated constant for batch size to improve code clarity and
> > > maintainability.
> > >
> > > Signed-off-by: Stanislav Kinsburskii <[email protected]>
> > > ---
> > >  drivers/hv/mshv_root.h         |    2 ++
> > >  drivers/hv/mshv_root_hv_call.c |    2 +-
> > >  drivers/hv/mshv_root_main.c    |   28 +++++++++++++++++++++++++---
> > >  3 files changed, 28 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/drivers/hv/mshv_root.h b/drivers/hv/mshv_root.h
> > > index e3931b0f12693..97e64d5341b6e 100644
> > > --- a/drivers/hv/mshv_root.h
> > > +++ b/drivers/hv/mshv_root.h
> > > @@ -32,6 +32,8 @@ static_assert(HV_HYP_PAGE_SIZE == MSHV_HV_PAGE_SIZE);
> > >
> > >  #define MSHV_PIN_PAGES_BATCH_SIZE (0x10000000ULL / HV_HYP_PAGE_SIZE)
> > >
> > > +#define MSHV_MAX_UNMAP_GPA_PAGES 512
> > > +
> > >  struct mshv_vp {
> > >  	u32 vp_index;
> > >  	struct mshv_partition *vp_partition;
> > > diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c
> > > index c9c274f29c3c6..0696024ccfe31 100644
> > > --- a/drivers/hv/mshv_root_hv_call.c
> > > +++ b/drivers/hv/mshv_root_hv_call.c
> > > @@ -17,7 +17,7 @@
> > >  /* Determined empirically */
> > >  #define HV_INIT_PARTITION_DEPOSIT_PAGES 208
> > >  #define HV_MAP_GPA_DEPOSIT_PAGES 256
> > > -#define HV_UMAP_GPA_PAGES 512
> > > +#define HV_UMAP_GPA_PAGES MSHV_MAX_UNMAP_GPA_PAGES
> > >
> > >  #define HV_PAGE_COUNT_2M_ALIGNED(pg_count) (!((pg_count) & (0x200 - 1)))
> > >
> > > diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
> > > index 97e322f3c6b5e..b61bef6b9c132 100644
> > > --- a/drivers/hv/mshv_root_main.c
> > > +++ b/drivers/hv/mshv_root_main.c
> > > @@ -1378,6 +1378,7 @@ mshv_map_user_memory(struct mshv_partition *partition,
> > >  static void mshv_partition_destroy_region(struct mshv_mem_region *region)
> > >  {
> > >  	struct mshv_partition *partition = region->partition;
> > > +	u64 gfn, gfn_count, start_gfn, end_gfn;
> > >  	u32 unmap_flags = 0;
> > >  	int ret;
> > >
> > > @@ -1396,9 +1397,30 @@ static void mshv_partition_destroy_region(struct mshv_mem_region *region)
> > >  	if (region->flags.large_pages)
> > >  		unmap_flags |= HV_UNMAP_GPA_LARGE_PAGE;
> > >
> > > -	/* ignore unmap failures and continue as process may be exiting */
> > > -	hv_call_unmap_gpa_pages(partition->pt_id, region->start_gfn,
> > > -				region->nr_pages, unmap_flags);
> > > +	start_gfn = region->start_gfn;
> > > +	end_gfn = region->start_gfn + region->nr_pages;
> > > +
> > > +	for (gfn = start_gfn; gfn < end_gfn; gfn += gfn_count) {
> > > +		if (gfn % MSHV_MAX_UNMAP_GPA_PAGES)
> > > +			gfn_count = ALIGN(gfn, MSHV_MAX_UNMAP_GPA_PAGES) - gfn;
> > > +		else
> > > +			gfn_count = MSHV_MAX_UNMAP_GPA_PAGES;
> >
> > You could do the entire if/else as:
> >
> > 	gfn_count = ALIGN(gfn + 1, MSHV_MAX_UNMAP_GPA_PAGES) - gfn;
> >
> > Using "gfn + 1" handles the case where gfn is already aligned. Arguably,
> > this is a bit more obscure, so it's just a suggestion.
> >
> > > +
> > > +		if (gfn + gfn_count > end_gfn)
> > > +			gfn_count = end_gfn - gfn;
> >
> > Or
> > 	gfn_count = min(gfn_count, end_gfn - gfn);
> >
> > I usually prefer the "min" function instead of an "if" statement if
> > logically the intent is to compute the minimum. But again, just a
> > suggestion.
> >
> > > +
> > > +		/* Skip all pages in this range if none is mapped */
> > > +		if (!memchr_inv(region->pages + (gfn - start_gfn), 0,
> > > +				gfn_count * sizeof(struct page *)))
> > > +			continue;
> > > +
> > > +		ret = hv_call_unmap_gpa_pages(partition->pt_id, gfn,
> > > +					      gfn_count, unmap_flags);
> > > +		if (ret)
> > > +			pt_err(partition,
> > > +			       "Failed to unmap GPA pages %#llx-%#llx: %d\n",
> > > +			       gfn, gfn + gfn_count - 1, ret);
> > > +	}
> >
> > Overall, I think this algorithm looks good and handles all the edge cases.
> >
>
> Thank you for your suggestions. I also generally prefer reducing the
> code in a similar way, but in this case, I deliberately chose a more
> elaborate approach to improve clarity.
>
> So, if you don’t mind, I’d rather keep it as is, since this version is
> easy to understand and self-documenting.
>
Yes, that's fine with me.

Reviewed-by: Michael Kelley <[email protected]>
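
A quick way to see that the ALIGN(gfn + 1, ...) one-liner suggested above matches
the patch's if/else: rounding gfn + 1 up always lands strictly above gfn, so the
difference is the distance to the next batch boundary, and a full batch when gfn
is already aligned. The following self-contained userspace sketch is illustrative
only and not part of the patch; ALIGN is open-coded as ALIGN_UP since the kernel
macro isn't available here, and BATCH stands in for MSHV_MAX_UNMAP_GPA_PAGES.

/*
 * Illustrative userspace sketch, not kernel code: compares the patch's
 * if/else batch-size computation with the suggested one-liner.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define BATCH 512ULL	/* stand-in for MSHV_MAX_UNMAP_GPA_PAGES */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))	/* 'a' must be a power of two */

static uint64_t count_if_else(uint64_t gfn)
{
	/* Mirrors the patch: partial chunk up to the next boundary, else a full batch */
	if (gfn % BATCH)
		return ALIGN_UP(gfn, BATCH) - gfn;
	return BATCH;
}

static uint64_t count_one_liner(uint64_t gfn)
{
	/* The suggestion: "gfn + 1" makes the aligned case round up to a full batch */
	return ALIGN_UP(gfn + 1, BATCH) - gfn;
}

int main(void)
{
	uint64_t gfn;

	for (gfn = 0; gfn < 4 * BATCH; gfn++)
		assert(count_if_else(gfn) == count_one_liner(gfn));

	/* e.g. gfn = 512 (aligned) -> 512; gfn = 700 -> 324 (stops at 1024) */
	printf("count(512) = %llu, count(700) = %llu\n",
	       (unsigned long long)count_one_liner(512),
	       (unsigned long long)count_one_liner(700));
	return 0;
}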
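
On the skip check in the loop: memchr_inv(buf, c, n) is the kernel string helper
(lib/string.c) that returns the address of the first byte differing from c, or
NULL when every byte matches, so a NULL result over a batch's region->pages slots
means none of those gfns has a page pinned and the unmap hypercall can be skipped.
Below is a rough userspace stand-in, for illustration only (the real helper is
optimized to scan word-at-a-time).

#include <stddef.h>
#include <stdio.h>

/* Byte-by-byte stand-in for the kernel's memchr_inv(), for illustration */
static void *memchr_inv_sketch(const void *start, int c, size_t bytes)
{
	const unsigned char *p = start;
	size_t i;

	for (i = 0; i < bytes; i++)
		if (p[i] != (unsigned char)c)
			return (void *)&p[i];
	return NULL;	/* whole buffer contains just 'c' */
}

int main(void)
{
	void *pages[8] = { NULL };	/* stand-in for a batch of region->pages slots */

	/* All slots NULL: memchr_inv() is NULL, so the batch would be skipped */
	printf("all unmapped, skip batch: %d\n",
	       !memchr_inv_sketch(pages, 0, sizeof(pages)));

	pages[3] = &pages[0];		/* pretend one page in the batch is mapped */
	printf("some mapped, issue unmap: %d\n",
	       !!memchr_inv_sketch(pages, 0, sizeof(pages)));
	return 0;
}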
