If we need to increase the number of huge pages, drop caches first
to reduce fragmentation, then check that as many pages as we wanted
were actually allocated.  Retry once if they were not.

Signed-off-by: Ben Hutchings <b...@decadent.org.uk>
---
The test always fails for me in a 1 GB VM without this.

Ben.
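
For reference, the retry logic the patch adds can be sketched as a
standalone function.  This is a simplified illustration, not the patch
itself: the sysctl paths are passed as parameters so it can be exercised
without root (in the real script they are /proc/sys/vm/drop_caches and
/proc/sys/vm/nr_hugepages), and it writes an absolute target rather than
adding the shortfall to the current count as the patch does.

```shell
# Sketch of the drop-caches-then-retry approach, with the sysctl
# paths parameterized so it can run unprivileged against plain files.
set_nr_hugepages() {
	drop_caches_file=$1
	nr_hugepages_file=$2
	want=$3
	tries=2
	while [ $tries -gt 0 ] && [ "$(cat "$nr_hugepages_file")" -lt "$want" ]; do
		# Drop page cache, dentries and inodes to reduce fragmentation
		echo 3 > "$drop_caches_file"
		# Ask the kernel for the target number of huge pages
		echo "$want" > "$nr_hugepages_file" || return 1
		tries=$((tries - 1))
	done
	# Succeed only if the kernel actually granted enough pages
	[ "$(cat "$nr_hugepages_file")" -ge "$want" ]
}
```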

 tools/testing/selftests/vm/run_vmtests | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/vm/run_vmtests b/tools/testing/selftests/vm/run_vmtests
index 9179ce8..97ed1b2 100755
--- a/tools/testing/selftests/vm/run_vmtests
+++ b/tools/testing/selftests/vm/run_vmtests
@@ -20,13 +20,26 @@ done < /proc/meminfo
 if [ -n "$freepgs" ] && [ -n "$pgsize" ]; then
        nr_hugepgs=`cat /proc/sys/vm/nr_hugepages`
        needpgs=`expr $needmem / $pgsize`
-       if [ $freepgs -lt $needpgs ]; then
+       tries=2
+       while [ $tries -gt 0 ] && [ $freepgs -lt $needpgs ]; do
                lackpgs=$(( $needpgs - $freepgs ))
+               echo 3 > /proc/sys/vm/drop_caches
                echo $(( $lackpgs + $nr_hugepgs )) > /proc/sys/vm/nr_hugepages
                if [ $? -ne 0 ]; then
                        echo "Please run this test as root"
                        exit 1
                fi
+               while read name size unit; do
+                       if [ "$name" = "HugePages_Free:" ]; then
+                               freepgs=$size
+                       fi
+               done < /proc/meminfo
+               tries=$((tries - 1))
+       done
+       if [ $freepgs -lt $needpgs ]; then
+               printf "Not enough huge pages available (%d < %d)\n" \
+                      $freepgs $needpgs
+               exit 1
        fi
 else
        echo "no hugetlbfs support in kernel?"

-- 
Ben Hutchings
All extremists should be taken out and shot.
