Hi,

    Scott and PETSc folks,

      Using alt files for testing is painful. Whenever you add, for example, a
new variable to be output in a viewer, the output files change and you need
to regenerate the alt files for all the test configurations, even though the
run behavior of the code hasn't changed.

     I'm looking for suggestions on how to handle this kind of alternative
output in a nicer way (alternative output usually comes from different
iteration counts due to different precision, and often even different
compilers).

     One idea I was thinking of: instead of having "alt" files, we have "patch"
files that contain just the patch against the original output file instead of a
complete copy. Then, in some situations, the patch file would still apply even
if the original output file changed, requiring much less manual work in
updating alt files. Essentially the test harness would compare against the
output file; if that fails, it would apply the first patch and compare again,
then try the second patch, etc.
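The fallback loop described above could be sketched roughly as follows. This is a simplified, hypothetical sketch using Python's difflib, where a "patch" is an ndiff delta against the reference; a real implementation would presumably use context-based diffs (as with patch(1)) so that a patch can still apply after unrelated changes to the reference file:

```python
import difflib

def make_patch(reference_lines, alt_lines):
    """Store only the delta between reference and alt output,
    instead of a full alt copy."""
    return list(difflib.ndiff(reference_lines, alt_lines))

def matches_reference(actual_lines, reference_lines, patches):
    """True if the actual output equals the reference directly, or
    equals the result of applying one of the stored patches."""
    if actual_lines == reference_lines:
        return True
    for patch in patches:
        # difflib.restore(delta, 2) reconstructs the "alt" side
        # of the stored delta.
        if actual_lines == list(difflib.restore(patch, 2)):
            return True
    return False
```

The harness would then only fail a test when the output matches neither the reference nor any patched variant, and only the patches that no longer apply would need regenerating.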

Yes, a 'patch' approach would simplify updates to the reference output.

However, I'm not sure whether we're tackling the right problem here. Our diff-based testing isn't great. I'd favor more dedicated unit tests, where the correctness check is embedded in the test (ex*.*) itself rather than determined by some text-based diff tool (which, to make matters worse, even filters out floating-point numbers...). Not all tests can be written this way, but many can, and that would significantly reduce the burden on alt files.
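To illustrate the kind of embedded check I mean (a toy sketch, not PETSc code: run_solver here is a stand-in fixed-point iteration, and the thresholds are made up): rather than printing the iteration count and residual and diffing the text, the test asserts on them directly, with bounds loose enough to absorb compiler and precision differences:

```python
def run_solver(tol=1e-10, maxit=50):
    """Toy contraction standing in for a real solve; what matters
    for the example is the (iterations, residual) it returns."""
    x, it = 1.0, 0
    while abs(x) > tol and it < maxit:
        x *= 0.25  # contracts toward the fixed point 0
        it += 1
    return it, abs(x)

def test_solver_converges():
    iterations, residual = run_solver()
    # Iteration counts vary across compilers and precisions, so bound
    # them instead of pinning the exact value a diffed output would.
    assert iterations <= 50
    assert residual < 1e-8
```

A test written this way never needs an alt file at all: any run within the stated tolerances passes, regardless of the exact numbers printed.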

Best regards,
Karli
