On 1 Mar 2007, at 16:42, Andrew Gianni wrote:
> [snip]
> In this situation, we would still like something in place to ensure
> that altering the construction of the business rules doesn't cause
> regression in the application, but we can't (or I'd certainly rather
> not) simply write unit tests for it. It's also not something that
> really needs to be done by the developers per se, as they can
> essentially black-box the code, testing for output based on input.
> I'm thinking that it's about time we started formalizing having
> staff with QA duties who write functional tests against the code.
> Does this sound sensible?
> [snip]
If you mean having these tests in addition to the unit tests: yes.
These sorts of business-facing tests are immensely useful. They're
not a replacement for unit tests, though.
I'm less certain about having separate testing staff. I generally
find it more effective to have the developers help write the
functional tests with whoever is the source of requirements. Doing
this before you start implementing a feature is a great way of
defining what "finished" is.
When describing tests these days I find Brian Marick's
classifications (technology-facing vs. business-facing tests) more
useful descriptive tools than unit/functional/acceptance-type
descriptions. See <http://www.testing.com/cgi-bin/blog/2003/08/21>.
> If so, what tools could you recommend? Should we still just use
> Test::More but in a more simple manner? Should we write mech tests
> instead (these are web apps)? Or are there other tools that would
> be useful? It seems to me that it would be rather laborious to
> write all of these tests by hand.
As chromatic said, looking at a FIT framework may well be useful.
Test::FIT is, unfortunately, almost completely undocumented. However,
the code is pretty easy to grok, and I've written custom fixtures
with it without too much hassle.
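A FIT-style fixture boils down to a table of inputs and expected
outputs checked in a loop, which you can get most of the way towards
with plain Test::More. A minimal sketch - the discount_for rule and
its thresholds are invented for illustration, so substitute your own
business logic:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More;

# Hypothetical business rule under test - stand-in for real code.
sub discount_for {
    my ($spend) = @_;
    return $spend >= 100 ? 10 : 0;
}

# Table of input/expected-output pairs, much as a customer might
# supply them.
my @table = (
    # spend, expected discount, description
    [  50,  0, 'small orders get no discount'  ],
    [ 100, 10, 'orders of 100 or more get 10%' ],
    [ 250, 10, 'discount is capped at 10%'     ],
);

for my $row (@table) {
    my ($spend, $want, $desc) = @$row;
    is( discount_for($spend), $want, $desc );
}

done_testing();
```

The nice thing is that extending coverage is just adding rows, which
non-developers can review (or write) far more easily than test code.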
In general, getting the customers to use something they're
comfortable with to input test data is a bonus. For example, I've
written code that lets the client enter their data in Excel, then
parsed it and fed it into some Test::Builder-based code.
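Roughly like this, assuming Spreadsheet::ParseExcel from CPAN and a
hypothetical apply_rule function standing in for the real business
logic (the file name and column layout are invented too):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More;
use Spreadsheet::ParseExcel;    # CPAN module

# Hypothetical rule under test - stand-in for the real business logic.
sub apply_rule { my ($in) = @_; return uc $in }

my $file = shift @ARGV || 'rules.xls';    # client-maintained spreadsheet
plan skip_all => "no $file to read" unless -e $file;

my $workbook = Spreadsheet::ParseExcel->new->parse($file);
my ($sheet)  = $workbook->worksheets;
my ($row_min, $row_max) = $sheet->row_range;

# Column 0 holds the input, column 1 the expected output;
# row 0 is assumed to be a header.
for my $r ( $row_min + 1 .. $row_max ) {
    my $input    = $sheet->get_cell($r, 0)->value;
    my $expected = $sheet->get_cell($r, 1)->value;
    is( apply_rule($input), $expected, "spreadsheet row $r" );
}

done_testing();
```

The client never sees any Perl - they just maintain the spreadsheet.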
You also might want to look at Test::Base, which can be quite handy
for this sort of thing. Ingy did a nice presentation of it at some
point that I'm sure Google can locate :-)
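For the curious, a minimal Test::Base test looks roughly like this,
using the built-in eval and chomp filters - each block in the data
section becomes one test:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::Base;    # CPAN module

plan tests => 1 * blocks;

# run_is compares the 'input' section of each block (after its
# filters have run) against the 'expected' section.
run_is input => 'expected';

__DATA__
=== two plus two
--- input eval
2 + 2
--- expected chomp
4

=== string repetition
--- input eval
'ab' x 3
--- expected chomp
ababab
```

Because the test data is just marked-up text, it's relatively easy
for non-developers to read and add cases.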
Writing a DSL on top of Test::Builder is another route - building an
application-specific subclass of WWW::Mechanize, for example.
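For instance, a sketch of such a subclass - the URL, form name, and
field names here are all invented, so adapt them to the application
under test:

```perl
package MyApp::Mech;
use strict;
use warnings;
use parent 'WWW::Mechanize';

# Hypothetical app-specific helper: fetches the login page and
# submits the login form for the given user.
sub login_as {
    my ($self, $user, $pass) = @_;
    $self->get('http://localhost/login');
    $self->submit_form(
        form_name => 'login',
        fields    => { username => $user, password => $pass },
    );
    return $self;
}

1;

# A test then reads at the level of the application, not of HTTP:
#
#   use Test::More;
#   my $mech = MyApp::Mech->new;
#   $mech->login_as( 'alice', 'sekrit' );
#   like( $mech->content, qr/Welcome/, 'login succeeds' );
#   done_testing();
```

Each helper you add makes the functional tests shorter and closer to
the language of the requirements.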
For web apps go play with Selenium.
> Any insight appreciated. Recommendations on good books on general
> testing philosophy would also be helpful (I've already got the
> developer's notebook).
For more general testing discussions I'd recommend joining all of:
* [EMAIL PROTECTED]
* [EMAIL PROTECTED]
* [EMAIL PROTECTED]
You also might want to look at all or some of:
* http://www.testingeducation.org/BBST/
* http://agiletesting.blogspot.com/
* http://www.opensourcetesting.org/
* http://www.testingreflections.com/
* http://www.testdriven.com/
* http://www.testobsessed.com/
* http://nunit.com/blogs/
* http://googletesting.blogspot.com/
* http://www.developertesting.com/
* http://www.kohl.ca/
And possibly a chunk of the stuff under
http://del.icio.us/adrianh/testing :-)
Book-wise, I think all of these are pretty good:
"Lessons Learned in Software Testing: A Context Driven Approach"
by Cem Kaner, James Bach and Brett Pettichord
(more aimed at folk with "tester" in their job title - but an
interesting read)
"Testing Extreme Programming" by Lisa Crispin and Tip House.
(obviously slanted towards XP - but that's a good thing :-)
"Test Driven Development" by Kent Beck
(everybody should read this)
"Test Driven Development: A Practical Guide" by Dave Astels
(some people dislike this book - I quite like it myself, but prefer
the Beck)
"FIT for Developing Software: Framework for Integrated Tests" by
Robert C. Martin
(if you want to know more about FIT testing)
If you're dealing with legacy code, or code that's not been developed
test first, I'd also thoroughly recommend "Working Effectively With
Legacy Code" by Michael Feathers.
Cheers,
Adrian