If the hole is big enough, the multiplication by the page size (a left
shift by VMW_PAGE_SHIFT) will truncate, as it operates on a uint32_t.

Cast to uint64_t beforehand to avoid that.
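
For illustration (not part of the patch), a minimal standalone sketch
of the truncation, assuming 4 KiB pages (i.e. VMW_PAGE_SHIFT == 12) and
the usual ILP32/LP64 integer promotion rules:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint32_t hole = 0x100000;  /* 1M pages, i.e. a 4 GiB hole */

          /* The shift is done in 32-bit arithmetic and wraps:
           * 0x100000 << 12 is 2^32, which wraps to 0 modulo 2^32. */
          uint64_t truncated = hole << 12;

          /* Widening the operand first keeps all 33 bits. */
          uint64_t correct = (uint64_t)hole << 12;

          printf("truncated=%#llx correct=%#llx\n",
                 (unsigned long long)truncated,
                 (unsigned long long)correct);
          return 0;
  }

This prints truncated=0 correct=0x100000000; the hunk below applies the
same operand-first cast to the hole size.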

Signed-off-by: Mathias Krause <mini...@grsecurity.net>
---
 vmware_vmss.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/vmware_vmss.c b/vmware_vmss.c
index eed948f46ee3..11b7b72bf503 100644
--- a/vmware_vmss.c
+++ b/vmware_vmss.c
@@ -498,12 +498,14 @@ read_vmware_vmss(int fd, void *bufptr, int cnt, ulong addr, physaddr_t paddr)
                int i;
 
                for (i = 0; i < vmss.regionscount; i++) {
+                       uint32_t hole;
+
                        if (ppn < vmss.regions[i].startppn)
                                break;
 
                        /* skip holes. */
-                       pos -= ((vmss.regions[i].startppn - vmss.regions[i].startpagenum)
-                               << VMW_PAGE_SHIFT);
+                       hole = vmss.regions[i].startppn - vmss.regions[i].startpagenum;
+                       pos -= (uint64_t)hole << VMW_PAGE_SHIFT;
                }
        }
 
-- 
2.20.1

