Re: Unit vs. use case/functional testing
On 2 Mar 2007, at 22:53, James E Keenan wrote:

> Adrian Howard wrote: [snip]
> Adrian: How about posting this part on
> http://perl-qa.yi.org/index.php/Main_Page?

[snip]

ObItsAWiki :-)

Adrian
Re: Unit vs. use case/functional testing
On 1 Mar 2007, at 16:42, Andrew Gianni wrote:

> [snip] In this situation, we would still like something in place to
> ensure that altering the construction of the business rules doesn't
> cause regression in the application, but we can't (or I'd certainly
> rather not) simply write unit tests for it. It's also not something
> that really needs to be done by the developers per se, as they can
> essentially black-box the code, testing for output based on input.
> I'm thinking that it's about time we started formalizing having
> staff with QA duties who write functional tests against the code.
> Does this sound sensible? [snip]

If you mean having these tests in addition to the unit tests - yes. These sorts of business-facing tests are immensely useful. They're not a replacement for unit tests though.

I'm less certain about having separate testing staff. I generally find it more effective to have the developers help write the functional tests with whoever is the source of requirements. Doing this before you start implementing a feature is a great way of defining what "finished" is.

When describing tests these days I find Brian Marick's classifications (technology-facing vs. business-facing tests) more useful descriptive tools than unit/functional/acceptance-type descriptions. See http://www.testing.com/cgi-bin/blog/2003/08/21.

> If so, what tools could you recommend? Should we still just use
> Test::More but in a simpler manner? Should we write mech tests
> instead (these are web apps)? Or are there other tools that would
> be useful? It seems to me that it would be rather laborious to
> write all of these tests by hand.

As chromatic said, looking at a FIT framework may well be useful. Test::FIT is, unfortunately, almost completely undocumented. However, the code is pretty easy to grok and I've written custom fixtures with it without too much hassle.

In general, getting the customers to use something that they're comfortable with to input test data is a bonus. For example, I've written stuff that allows the client to enter their data in Excel, then parsed it and thrown it into some Test::Builder-based code.

You also might want to look at Test::Base, which can be quite handy for this sort of thing. Ingy did a nice presentation of it at some point that I'm sure Google can locate :-)

Writing a DSL on top of Test::Builder is another route - building an application-specific subclass of WWW::Mechanize, for example. For web apps, go play with Selenium.

> Any insight appreciated. Recommendations on good books on general
> testing philosophy would also be helpful (I've already got the
> developer's notebook).

For more general testing discussions I'd recommend joining all of:

* [EMAIL PROTECTED]
* [EMAIL PROTECTED]
* [EMAIL PROTECTED]

You also might want to look at all or some of:

* http://www.testingeducation.org/BBST/
* http://agiletesting.blogspot.com/
* http://www.opensourcetesting.org/
* http://www.testingreflections.com/
* http://www.testdriven.com/
* http://www.testobsessed.com/
* http://nunit.com/blogs/
* http://googletesting.blogspot.com/
* http://www.developertesting.com/
* http://www.kohl.ca/

And possibly a chunk of the stuff under http://del.icio.us/adrianh/testing :-)

Book-wise, I think all of these are pretty good:

* Lessons Learned in Software Testing: A Context-Driven Approach by Cem Kaner, James Bach and Bret Pettichord (more aimed at folk with "tester" in their job title - but an interesting read)
* Testing Extreme Programming by Lisa Crispin and Tip House (obviously slanted towards XP - but that's a good thing :-)
* Test Driven Development by Kent Beck (everybody should read this)
* Test Driven Development: A Practical Guide by Dave Astels (some people dislike this book - I quite like it myself, but prefer the Beck)
* FIT for Developing Software: Framework for Integrated Tests by Rick Mugridge and Ward Cunningham, in the Robert C. Martin series (if you want to know more about FIT testing)

If you're dealing with legacy code, or code that's not been developed test-first, I'd also thoroughly recommend Working Effectively with Legacy Code by Michael Feathers.

Cheers,

Adrian
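[Editorial sketch] Adrian's "DSL on top of Test::Builder" suggestion can be made concrete with a minimal example. Everything here is invented for illustration - `applicant()`, `should_be()`, and the stub `_verdict()` engine are not from any real module; the point is only that domain wording on the outside delegates to ordinary Test::Builder assertions underneath:

```perl
#!/usr/bin/perl
# Minimal sketch of a testing DSL layered on Test::Builder.
# applicant(), should_be() and the _verdict() "engine" are all
# hypothetical names invented for this example.
use strict;
use warnings;
use Test::Builder;
use Test::More tests => 2;

my $Test = Test::Builder->new;    # the shared singleton Test::More also uses

# Toy stand-in for a real business-rules engine.
sub _verdict {
    my (%fields) = @_;
    return $fields{credits} >= 12 ? 'eligible' : 'ineligible';
}

# The DSL: reads like the business language, delegates to Test::Builder.
sub applicant { return { @_ } }

sub should_be {
    my ( $applicant, $expected ) = @_;
    $Test->is_eq( _verdict(%$applicant), $expected,
        "applicant should be $expected" );
}

should_be( applicant( credits => 12 ), 'eligible' );
should_be( applicant( credits => 6  ), 'ineligible' );
```

Because Test::Builder is a singleton, assertions made through the DSL count toward the same plan as any plain Test::More tests in the file.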
Re: Unit vs. use case/functional testing
Adrian Howard wrote: [snip]

Adrian: How about posting this part on http://perl-qa.yi.org/index.php/Main_Page?

> For more general testing discussions I'd recommend joining all of:
> [snip - remainder of Adrian's list of mailing lists, sites, and books, quoted in full above]

jimk
Re: Unit vs. use case/functional testing
On 1 Mar 2007, at 16:42, Andrew Gianni wrote:

> Any insight appreciated. Recommendations on good books on general
> testing philosophy would also be helpful (I've already got the
> developer's notebook).

It sounds as if you have two distinct things to test. You have a rules engine that has to correctly interpret and implement arbitrary business rules, and you have a set of rules expressed in some notation.

Assuming I've interpreted you correctly, you should be able to test the rules engine using fairly common testing practices. To test the rules you could maybe extend the rule notation to allow assertions to be expressed as part of the rule set, and write a test harness that uses your (now proved) rules engine to test the assertions. Any assertions in the rule set can be tested while the system is live, too.

-- 
Andy Armstrong, hexten.net
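[Editorial sketch] Andy's idea - assertions carried inside the rule set itself, replayed by a harness through the already-tested engine - might look something like this. The rule notation, `apply_rule()`, and the numbers are all toy inventions for illustration:

```perl
#!/usr/bin/perl
# Sketch of embedding assertions in a rule set: each "assert" line
# pairs an input with the verdict the rule author expects, and the
# harness replays them through the engine. The notation and
# apply_rule() are invented for this example.
use strict;
use warnings;
use Test::More;

# A toy rule set whose rule carries its own assertions:
#   assert INPUT => EXPECTED
my $rule_set = <<'END';
rule discount: amount >= 100 ? 10 : 0
assert 150 => 10
assert 50  => 0
END

# Toy engine: applies the single discount rule above.
sub apply_rule { my ($amount) = @_; return $amount >= 100 ? 10 : 0 }

# Pull out (input, expected) pairs and turn each into a test.
my @asserts = $rule_set =~ /^assert\s+(\S+)\s+=>\s+(\S+)$/mg;
plan tests => @asserts / 2;

while (@asserts) {
    my ( $input, $expected ) = splice @asserts, 0, 2;
    is( apply_rule($input), $expected, "rule holds for input $input" );
}
```

As Andy notes, the same assertions could be re-run against the live system, since they travel with the rules rather than living in a separate test file.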
Re: Unit vs. use case/functional testing
# from Andrew Gianni
# on Thursday 01 March 2007 08:42 am:

> However, our business rules have gotten complicated enough that we
> are no longer writing them that way explicitly in the code. In the
> last application we built, we put the rules in a database and the
> appropriate ones were pulled based on circumstances (using
> generalized code) and run. Now we're embarking on something
> different that allows us to essentially write our business rules
> declaratively,

Assuming that applying a rule to some input yields a verdict, you may want to use data-driven testing and have the same users write data to drive tests of the rules. They would use a spreadsheet or some other means to create data which exercises the rules:

    input1, input2, input3, expected verdict

Try to keep it simple (possibly breaking sub-conditions into a different set of data, or labels for a common group of inputs) so that anyone on the business team can audit it.

I'm assuming "expected verdict" is one (or more) of a finite number of answers that you get from your rules engine (possibly a method name, a key in a dispatch table, or input to a function). If this is true, then your application can be unit tested against each of the verdict's action points.

Changes to the rules engine shouldn't break the data-driven tests and thus shouldn't break the action points. Changes to the rules require changes to the expected verdicts, and may require changes to the action points, but at that point you should have good coverage.

> although it's taken care of by a module that we can assume is fully
> tested (details will be forthcoming at some point, methinks).

I would like to see that. Please keep us posted.

--Eric
--
Unthinking respect for authority is the greatest enemy of truth.
--Albert Einstein
---
http://scratchcomputing.com
---
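[Editorial sketch] The "inputs..., expected verdict" rows Eric describes translate directly into a table-driven test. `Rules::Engine` below is a stub standing in for the real (unnamed) module, and the field names and verdicts are invented for illustration:

```perl
#!/usr/bin/perl
# Data-driven testing of rule verdicts: one spreadsheet-style row
# per case, last column is the expected verdict. Rules::Engine is
# a stub; field names and verdicts are hypothetical.
use strict;
use warnings;
use Test::More;

# Stub engine so the sketch runs on its own.
package Rules::Engine;
sub evaluate {
    my ( $class, $status, $load, $credits ) = @_;
    return ( $load eq 'full-time' && $credits >= 12 )
        ? 'eligible' : 'ineligible';
}
package main;

# Rows as the business team might enter them in a spreadsheet.
my @cases = (
    # status,      load,        credits, expected verdict
    [ 'grad',      'full-time', 12,      'eligible'   ],
    [ 'grad',      'part-time',  3,      'ineligible' ],
    [ 'undergrad', 'full-time', 15,      'eligible'   ],
);

plan tests => scalar @cases;

for my $row (@cases) {
    my @inputs   = @$row;
    my $expected = pop @inputs;
    is( Rules::Engine->evaluate(@inputs), $expected,
        "verdict for (@inputs)" );
}
```

In practice the `@cases` array would be loaded from the spreadsheet (CSV export, Spreadsheet::ParseExcel, etc.) rather than written inline, which is what lets non-programmers own the test data.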
Re: Unit vs. use case/functional testing
On 3/1/07 12:18 PM, Andy Armstrong [EMAIL PROTECTED] wrote:

> To test the rules you could maybe extend the rule notation to allow
> assertions to be expressed as part of the rule set and write a test
> harness that uses your (now proved) rules engine to test the
> assertions.

That's sort of what I was thinking. There's no point in going through the CGI interface to test the rules, or even the application framework interface minus CGI. The rules engine is general enough that we can probably write a relatively straightforward interface that allows us to pass data directly to the engine and simply ensure that the results are what we expect. Is that basically what you were suggesting?

Andrew
-- 
Andrew Gianni - Lead Programmer Analyst
University at Buffalo, State University of New York
Computing and Information Technology / Administrative Computing Services
215 MFAC, Ellicott Complex, Buffalo, NY 14261-0026
716.645.3587x7124 - AIM: andrewsgianni - http://schoolof.info/agianni
Re: Unit vs. use case/functional testing
On 1 Mar 2007, at 18:15, Andrew Gianni wrote:

>> To test the rules you could maybe extend the rule notation to
>> allow assertions to be expressed as part of the rule set and write
>> a test harness that uses your (now proved) rules engine to test
>> the assertions.
>
> That's sort of what I was thinking. There's no point in going
> through the CGI interface to test the rules, or even the
> application framework interface minus CGI. The rules engine is
> general enough that we can probably write a relatively
> straightforward interface that allows us to pass data directly to
> the engine and simply ensure that the results are what we expect.
> Is that basically what you were suggesting?

Pretty much. It's potentially a great example of testing improving your code quality in ways other than the obvious "does it work?". In addition to the benefits of test coverage, you're being persuaded to decouple components from one another so they can be used both for testing and in the application - which is in general a good thing.

If you get it right it'll also be a great help to the people writing the rules. You can give them a tool which allows them to start with an assertion and work backwards to a rule that implements it. They'll be able to do their own testing on the rules outside of the live application.

-- 
Andy Armstrong, hexten.net
Re: Unit vs. use case/functional testing
On Thursday 01 March 2007 10:36, Andy Armstrong wrote:

> In addition to the benefits of test coverage you're being persuaded
> to decouple components from one another so they can be used both
> for testing and in the application - which is in general a good
> thing.
>
> If you get it right it'll also be a great help to the people
> writing the rules. You can give them a tool which allows them to
> start with an assertion and work backwards to a rule that
> implements it. They'll be able to do their own testing on the rules
> outside of the live application.

That sounds much like FIT. Does Test::FIT look helpful?

-- c
Re: Unit vs. use case/functional testing
On 3/1/07 1:42 PM, chromatic [EMAIL PROTECTED] wrote:

> On Thursday 01 March 2007 10:36, Andy Armstrong wrote:
>> If you get it right it'll also be a great help to the people
>> writing the rules. You can give them a tool which allows them to
>> start with an assertion and work backwards to a rule that
>> implements it. They'll be able to do their own testing on the
>> rules outside of the live application.
>
> That sounds much like FIT. Does Test::FIT look helpful?

That looks like roughly what I was looking for. I'm thinking we might be able to write a general fixture subclass for the business rule framework, and all of our actual test fixtures could subclass off of that, further limiting the amount of work an individual test writer would need to do. I *definitely* like the idea that it would potentially give us a tool to use with the customer when speccing out the rules for an application.

Even so, I would have to figure out a way to organize my fixtures effectively. There are 139 fields in the current application, and while they're not all relevant to each test case, we'll have to keep track of the dependencies between them all when putting together the individual test cases.

Andrew
-- 
Andrew Gianni - Lead Programmer Analyst
University at Buffalo, State University of New York
Computing and Information Technology / Administrative Computing Services
215 MFAC, Ellicott Complex, Buffalo, NY 14261-0026
716.645.3587x7124 - AIM: andrewsgianni - http://schoolof.info/agianni
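[Editorial sketch] The shared-fixture idea Andrew floats - a base class that owns the plumbing, with per-feature fixtures declaring only the fields they care about - can be outlined in plain Perl OO. All class names, fields, and `run_engine()` are invented here; a real version would hang off Test::FIT's fixture classes instead:

```perl
#!/usr/bin/perl
# Sketch of a shared fixture base class. Hypothetical names
# throughout; the real version would subclass Test::FIT fixtures.
use strict;
use warnings;

package My::Fixture::Base;

sub new { my ( $class, %args ) = @_; return bless {%args}, $class }

# Subclasses declare which of the (many) application fields they consume.
sub fields { die "subclass must implement fields()" }

# Shared plumbing: pull the named fields out of a row and hand them
# to the rules engine, so per-feature fixtures stay tiny.
sub verdict_for_row {
    my ( $self, $row ) = @_;
    my @inputs = map { $row->{$_} } $self->fields;
    return $self->run_engine(@inputs);
}

package My::Fixture::Eligibility;
our @ISA = ('My::Fixture::Base');

sub fields { return qw(status load credits) }

# Toy engine stand-in so the sketch runs on its own.
sub run_engine {
    my ( $self, $status, $load, $credits ) = @_;
    return ( $load eq 'full-time' && $credits >= 12 )
        ? 'eligible' : 'ineligible';
}

package main;

my $fixture = My::Fixture::Eligibility->new;
print $fixture->verdict_for_row(
    { status => 'grad', load => 'full-time', credits => 12 }
), "\n";    # prints "eligible"
```

With 139 fields in play, having each fixture name only its own columns (and letting the base class ignore the rest of the row) is one way to keep the dependency bookkeeping out of the individual test cases.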
Re: Unit vs. use case/functional testing
On 3/1/07 1:05 PM, Eric Wilhelm [EMAIL PROTECTED] wrote:

>> although it's taken care of by a module that we can assume is
>> fully tested (details will be forthcoming at some point, methinks).
>
> I would like to see that. Please keep us posted.

Will do - it'll be on CPAN, although I'm not the author.

Andrew
-- 
Andrew Gianni - Lead Programmer Analyst
University at Buffalo, State University of New York
Computing and Information Technology / Administrative Computing Services
215 MFAC, Ellicott Complex, Buffalo, NY 14261-0026
716.645.3587x7124 - AIM: andrewsgianni - http://schoolof.info/agianni