On 1/22/26 18:02, Kevin Brodsky wrote:
> FORCE_READ(*addr) ensures that the compiler will emit a load from
> addr. Several tests need to trigger such a load for a range of
> pages, ensuring that every page is faulted in, if it wasn't already.
>
> Introduce a new helper force_read_pages() that does exactly that and
> replace existing loops with a call to it.
>
> The step size (regular/huge page size) is preserved for all loops,
> except in split_huge_page_test. Reading every byte is unnecessary;
> we now read every huge page, matching the following call to
> check_huge_file().
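
Side note for anyone reading along without the full series in front of them:
the vm_util.h hunk is not quoted below, but judging by the call sites and the
loop it replaces, the new helper presumably boils down to something like the
sketch here (signature and body are my guess, not copied from the patch):

static inline void force_read_pages(void *addr, unsigned long nr_pages,
                                    unsigned long page_size)
{
        unsigned long i;

        /* One load per page is enough to fault each page in. */
        for (i = 0; i < nr_pages; i++) {
                unsigned long *word = (unsigned long *)(addr + i * page_size);

                /* FORCE_READ() makes the compiler actually emit the load. */
                FORCE_READ(*word);
        }
}
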
> Reviewed-by: Dev Jain <[email protected]>
> Signed-off-by: Kevin Brodsky <[email protected]>
> ---
>  tools/testing/selftests/mm/hugetlb-madvise.c      | 9 +--------
>  tools/testing/selftests/mm/pfnmap.c               | 9 +++------
>  tools/testing/selftests/mm/split_huge_page_test.c | 6 +-----
>  tools/testing/selftests/mm/vm_util.h              | 7 +++++++
>  4 files changed, 12 insertions(+), 19 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/hugetlb-madvise.c b/tools/testing/selftests/mm/hugetlb-madvise.c
> index 05d9d2805ae4..5b12041fa310 100644
> --- a/tools/testing/selftests/mm/hugetlb-madvise.c
> +++ b/tools/testing/selftests/mm/hugetlb-madvise.c
> @@ -47,14 +47,7 @@ void write_fault_pages(void *addr, unsigned long nr_pages)
>  void read_fault_pages(void *addr, unsigned long nr_pages)
>  {
> -        unsigned long i;
> -
> -        for (i = 0; i < nr_pages; i++) {
> -                unsigned long *addr2 =
> -                        ((unsigned long *)(addr + (i * huge_page_size)));
> -                /* Prevent the compiler from optimizing out the entire loop: */
> -                FORCE_READ(*addr2);
> -        }
> +        force_read_pages(addr, nr_pages, huge_page_size);
>  }

Likely we could get rid of read_fault_pages() completely and simply let
the callers call force_read_pages() now?
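
To illustrate (just a sketch, with placeholder names rather than the actual
identifiers from hugetlb-madvise.c), the call sites would then presumably
change along these lines:

        /* before (placeholder args): */
        read_fault_pages(addr, nr_hpages);

        /* after, with the wrapper gone; huge_page_size is the test's global: */
        force_read_pages(addr, nr_hpages, huge_page_size);
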
Acked-by: David Hildenbrand (Red Hat) <[email protected]>
--
Cheers
David