The old logic had an off-by-one bug. For instance, assuming 4k pages on
both host and guest, if 'len' is '4097' (meaning 2 pages should be
unmapped), then 'last = start + 4096', so 'real_last = start + 4095',
so ultimately 'real_len = 4096' -- only one page instead of two. I do
not believe this could cause any observable bugs in guests via
`target_munmap`, because it page-aligns the length it passes in.
However, the calls to this function in `target_mremap` do not
page-align the length, so those calls could "drop" pages, leaving part
of the reserved region unmapped. At worst, a host allocation could get
mapped into that hole, then clobbered by a new guest mapping.

Signed-off-by: Matthew Lugg <[email protected]>
---
 linux-user/mmap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

The refactor in the last revision has been dropped in favour of a simple
two-line patch. The added 'ROUND_UP' on the first changed line of the diff is
necessary to prevent the final 'for (prot = 0, ...' loop in the function from
considering a page which 'len' only partially covers.

diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 6163f1a0d1..4bcfaf7894 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -1029,9 +1029,9 @@ static int mmap_reserve_or_unmap(abi_ulong start, abi_ulong len)
     void *host_start;
     int prot;
 
-    last = start + len - 1;
+    last = ROUND_UP(start + len, TARGET_PAGE_SIZE) - 1;
     real_start = start & -host_page_size;
-    real_last = ROUND_UP(last, host_page_size) - 1;
+    real_last = ROUND_UP(last + 1, host_page_size) - 1;
 
     /*
      * If guest pages remain on the first or last host pages,
-- 
2.51.2
