On Fri, Mar 11, 2016 at 8:18 AM, Andreas Delmelle <[email protected]> wrote:
> True enough, and I am definitely not against instituting such a policy
> (quite the contrary). On the other hand, I do feel compelled to add a
> word of caution here.
>
> Example:
> Some people may be familiar enough with certain parts of the codebase to
> see what goes wrong and/or how a given issue can be fixed (or new
> functionality added), but lack the experience with the various testing
> frameworks needed to devise a test, nor do they always have the time
> available to familiarise themselves with one. If I am working as an
> independent contractor for a client, and the hours/days can be invoiced
> to said client, it may well be worth spending double or triple the
> amount of time coming up with test cases. If I look at it from a
> hobbyist perspective, on my own dime, I will probably be more inclined
> to leave writing the test cases to someone else.
>
> So, a new potential problem is that certain patches will simply remain
> uncommitted for a very, very long time, that is: as long as nobody finds
> the time or has the inspiration to come up with a way to formally and
> adequately test the fix or feature. Certain valuable fixes or new
> features may take ages to get committed if that policy is enforced too
> strictly and blindly (as in: no more commits without a test case,
> period).

All true.

> All that said, in summary, some formal guidelines would definitely be a
> helpful and valuable start. Just keep in mind that with such a policy
> also comes a responsibility to either provide ideas and pointers to
> would-be committers, or to complete the patches yourself, to get those
> gaps filled.
Although I mentioned policy in my original post, I'm not advocating that
we create a fixed policy; guidelines, however, would be useful. More
importantly, I'd like to give testing a higher priority within the dev
team. I'm pretty sure we have all committed code to this project without
corresponding tests, so I'm not singling anyone out.

One reason I bring this up is a perception that we have encountered a
number of regressions (or potential regressions) over recent releases
that could have been mitigated had some or more tests been added. My
personal view is that adding code without corresponding tests is rather
like building a road but not funding or creating a road maintenance crew:
it may work for a while, or it may break immediately or at some point
without notice.

So my message is: let's find the time or resources to invest in more
tests, and apply more scrutiny before permitting commits (from patches or
from our own work) without tests. My priorities for addressing this
include:

- create and regularly run memory usage/leak tests that produce an
  easy-to-check output format which can be automatically monitored for
  significant regressions;
- the same as above, but for performance testing;
- regularly run coverage analysis so we can track how much we are (or are
  not) testing;
- improve test coverage over time and monitor for coverage regressions.

> Cheers
>
> Andreas
>
> > On 09 Mar 2016, at 10:14, Chris Bowditch <[email protected]>
> > wrote:
> >
> > +1
> >
> > On 07/03/2016 19:52, Glenn Adams wrote:
> >> I haven't kept an eye on whether or not bug fixes or new/changed
> >> functionality is being committed with new junit tests; however, I
> >> have a feeling this is not always done.
> >>
> >> We need to improve our coverage testing on XML Graphics projects,
> >> especially w.r.t. new code/fixes.
> >>
> >> I realize we don't have a formal project policy on this matter, but
> >> we need to do a better job, I think.
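P.S. For concreteness, here is a very rough sketch of the first priority above: a memory-usage check that emits output simple enough for a script to monitor automatically across runs. This is plain Java rather than a real JUnit test, the threshold is a hypothetical placeholder, and runWorkload() stands in for whatever code path we would actually exercise; none of this reflects existing code in the project.

```java
// Rough sketch of an automated memory-usage regression check.
// MAX_HEAP_GROWTH_BYTES and runWorkload() are hypothetical placeholders.
public class MemoryCheckSketch {

    // Hypothetical ceiling; a real suite would derive this from a
    // recorded baseline rather than hard-coding it.
    static final long MAX_HEAP_GROWTH_BYTES = 50L * 1024 * 1024;

    // Measures heap growth (in bytes) across the stand-in workload.
    static long measureHeapGrowth() {
        Runtime rt = Runtime.getRuntime();
        rt.gc();
        long before = rt.totalMemory() - rt.freeMemory();

        runWorkload();

        rt.gc();
        long after = rt.totalMemory() - rt.freeMemory();
        return after - before;
    }

    // Placeholder for the real code path under test (e.g. rendering a
    // document); here it just allocates and drops a scratch buffer.
    static void runWorkload() {
        byte[] scratch = new byte[1024 * 1024];
        scratch[0] = 1;
    }

    public static void main(String[] args) {
        long growth = measureHeapGrowth();
        // A stable key=value line is trivial for a monitoring script to
        // scrape and compare against previous runs.
        System.out.println("heap.growth.bytes=" + growth);
        System.out.println("result="
                + (growth < MAX_HEAP_GROWTH_BYTES ? "PASS" : "FAIL"));
    }
}
```

In practice this would be a JUnit test run as part of the regular build, with the threshold taken from a stored baseline, so a significant regression fails loudly instead of being noticed ages later.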
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: [email protected]
> > For additional commands, e-mail: [email protected]
