True. I'd forgotten that I had to raise the FD limit in my environment (to 999999) to let the run under valgrind pass. The other thing I do is set --max-stackframe=3280592 on the valgrind command line; with those two changes, the pyNFS "all" suite passes for me under valgrind.
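
For reference, this is roughly what that looks like in my environment. The limit and the --max-stackframe value are the ones mentioned above; the ganesha.nfsd path, its options and the config location are just examples and will differ per setup:

    # raise the open-file limit in the shell that will start ganesha
    ulimit -n 999999

    # run ganesha in the foreground under valgrind with the larger stack-frame limit
    valgrind --max-stackframe=3280592 \
        /usr/bin/ganesha.nfsd -F -f /etc/ganesha/ganesha.conf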

Daniel

On 09/13/2017 08:53 AM, Swen Schillig wrote:
In yesterday's conference call we spoke about pynfs errors when running
under valgrind.

I spent some time today to find the reason, and I'm afraid it was
a user error (mine, that is).

Quite a few pynfs tests push a standard process to its resource limits,
and if ganesha is executed under valgrind those limits are hit.
In my case it was the soft limit on the number of open files.
Once that was increased, pynfs succeeded,
at least for the tests based on file creation.
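
As a side note, one way to confirm which limits the valgrind-wrapped ganesha process is actually running with (e.g. whether the higher open-file limit took effect) is to look at /proc; <ganesha_pid> below is just a placeholder for the process id:

    # shows the soft and hard open-file limits currently in effect for that process
    grep 'open files' /proc/<ganesha_pid>/limits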

There are other tests which may fail when executed as part of
the "all" test suite but succeed when executed as part of a smaller set,
e.g. MKLINK, RDDR1, RDDR2, RDDR3, RDDR4, RDDR8, RDDR11, RDDR12, RENEW3,
RLOWN1, RD10, RD11.

Anyhow, just FYI.

Cheers Swen

