On 16/09/2025 18.55, Richard Henderson wrote:
On 9/15/25 22:18, Thomas Huth wrote:
On 16/09/2025 03.38, Richard Henderson wrote:
On 9/15/25 11:55, Richard Henderson wrote:
Startup of libgcrypt locks a small pool of pages -- by default 16k.
Testing for zero locked pages isn't correct, while testing for
under 32k is a decent compromise.
Signed-off-by: Richard Henderson <[email protected]>
---
tests/functional/x86_64/test_memlock.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tests/functional/x86_64/test_memlock.py b/tests/functional/x86_64/test_memlock.py
index 2b515ff979..81bce80b0c 100755
--- a/tests/functional/x86_64/test_memlock.py
+++ b/tests/functional/x86_64/test_memlock.py
@@ -37,7 +37,8 @@ def test_memlock_off(self):
          status = self.get_process_status_values(self.vm.get_pid())
 -        self.assertTrue(status['VmLck'] == 0)
 +        # libgcrypt may mlock a few pages
 +        self.assertTrue(status['VmLck'] < 32)

      def test_memlock_on(self):
          self.common_vm_setup_with_memlock('on')
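For reference, a minimal sketch of the value being checked, assuming
get_process_status_values() parses /proc/<pid>/status, where the kernel
reports VmLck in kB; the function below is illustrative, not QEMU's
actual helper:

    # Illustrative only: read the VmLck field (in kB) from
    # /proc/<pid>/status, i.e. the value the assertion above tests.
    def read_vmlck_kb(pid):
        with open(f'/proc/{pid}/status') as f:
            for line in f:
                if line.startswith('VmLck:'):
                    # Line looks like: "VmLck:        16 kB"
                    return int(line.split()[1])
        return 0

    # libgcrypt's default 16k secure-memory pool still passes
    # read_vmlck_kb(pid) < 32 with the patch applied.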
I wonder if I should have chosen 64k, which might be one 64k page...
It's an x86 test, so we should not have to worry about 64k pages there, I
hope?
Fair enough, though it does raise the question of why it's an x86-specific
test. Don't all host architectures support memory locking?
I guess you need at least a target machine that runs firmware by default,
since this test does not download any assets...?
Thomas
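For context, a hedged sketch of the knob the test presumably exercises:
QEMU's -overcommit mem-lock option is real, but the helper below is a
hypothetical stand-in for common_vm_setup_with_memlock(), not the actual
test code:

    # Hypothetical illustration: launch QEMU with guest RAM either
    # mlock'ed ('on') or pageable ('off', the default).
    import subprocess

    def launch_qemu_with_memlock(value):
        return subprocess.Popen([
            'qemu-system-x86_64',
            '-display', 'none',
            '-overcommit', f'mem-lock={value}',
        ])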