If all the issues manifest in calls to obabel, I would add functions to test/testbabel.py (the filename is a throwback to when obabel was called babel). You can see an example in this pull request:
https://github.com/openbabel/openbabel/pull/2379

Or, if you are feeling adventurous, I think it would be great if we had a plain text file of obabel commands paired with their expected outputs, following some standard naming convention. For example, maybe each line is of the form:
<CMD> <ARGUMENTS> # files/correct_output
and the test function compares the standard output of running the command to the correct_output file. This would make it much easier to add tests whenever it is appropriate to compare file outputs exactly.
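As a rough sketch of what such a runner could look like (the spec file name, the line format, and the reference paths below are all hypothetical, not an existing part of the test suite):

```python
# Sketch: run each command listed in a plain-text spec file and compare
# its standard output against a reference file. Assumed line format:
#   <CMD> <ARGUMENTS> # files/correct_output
import shlex
import subprocess
from pathlib import Path

def run_command_tests(spec_path):
    """Return the list of commands whose stdout did not match the reference file."""
    failures = []
    for line in Path(spec_path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and full-line comments
        # Everything after the last '#' names the expected-output file.
        command, _, reference = line.rpartition("#")
        expected = Path(reference.strip()).read_text()
        result = subprocess.run(shlex.split(command),
                                capture_output=True, text=True)
        if result.stdout != expected:
            failures.append(command.strip())
    return failures
```

A wrapper in testbabel.py could then assert that `run_command_tests(...)` returns an empty list, so each new test case is just one more line in the spec file.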

Thanks,

David Koes

Associate Professor
Computational & Systems Biology
University of Pittsburgh

On 6/8/21 11:22 AM, David van der Spoel wrote:
> On 2021-06-08 15:42, David Koes wrote:
>> Hi Madeleine,
>>
>> It sounds like you have a really excellent set of test structures. It
>> would be fantastic if you could contribute these to the testing
>> framework.
>
> We would be happy to do that, but maybe you can give a suggestion on
> where in the testing framework? Or should we just add a new python
> script there with input and output data?
>
> FYI, we have made quite a few fixes already that are likely of interest
> to others:
>   % git diff master:data/bondtyp.txt data/bondtyp.txt | wc
>       128     784    4532


_______________________________________________
OpenBabel-Devel mailing list
OpenBabel-Devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/openbabel-devel