On 4 Feb 2010, at 14:09, J. B. Rainsberger wrote:
On Thu, Feb 4, 2010 at 00:12, David Mitchell <[email protected]> wrote:
What is the 'best practice' way to structure RSpec code and documentation when testing a very large project, where the RSpec code base has to be maintained and extended over a long period?
I don't mean to be glib, but my blink reaction is that there's nothing different between maintaining a large suite of RSpec examples and any other large code base. I think all the design principles for large production code bases apply to large suites of RSpec examples.
Over the life of the project, there have been a number of people writing RSpec tests without any overriding guidance on things like:
- appropriate naming of helper functions
- use of private vs. protected vs. public methods to expose functionality only as required
- ensuring the scope of code is managed correctly (e.g. code for testing databases should probably be held in a module named 'Database')
- documentation, in any form, e.g. what a helper function does, what its side effects are, coverage of modules & how to extend them, ...
- use of 'raise' and 'warn' to highlight problems
- etc., etc.
I truly think the Four Elements of Simple Design would help with all of those.
As a result, what exists now is basically a huge mess. For example, we've got multiple helper functions with identical names that serve very different purposes, e.g. 'it_should_be_nil', with one doing a string comparison, another checking the number of records returned in a database cursor, and so on. The scope of these functions is such that they're accessible from all the 'wrong' places, so it's quite possible for the wrong helper function to be referenced accidentally at any point, and quite difficult to identify which of several identically-named helper functions will be executed at any given point.
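
For illustration, here is a minimal sketch of how module namespacing can untangle helpers like that. The module names, the helper names, and the open_cursor/fetch_all cursor API are all hypothetical, not part of any real suite:

    # Give each helper a distinct home module, so the call site
    # (the 'include' line) says exactly which definition applies.
    module DatabaseHelpers
      # Passes when a database cursor returns no rows.
      def should_return_no_rows(cursor)
        cursor.fetch_all.should be_empty
      end
    end

    module StringHelpers
      # Passes when a value is nil or an empty string.
      def should_be_blank(value)
        value.to_s.should == ""
      end
    end

    describe "customer import" do
      include DatabaseHelpers # mix in only what this group needs

      it "leaves no orphaned rows behind" do
        should_return_no_rows(open_cursor("SELECT * FROM orphans"))
      end
    end

Because each helper is mixed in explicitly, two similarly-named helpers can no longer shadow each other silently: grep for the include line and you know which one runs.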
Aside from some serious therapy, what I'm looking for is some sort of 'best practices' documentation covering how to use RSpec to create *and maintain* a very large population of test cases over an extended period of time. If I can get that, then I can at least start working in the right direction to ensure the problem doesn't get any worse, and then start fixing what exists now. Issues that are biting me right now include:
- how to structure a hierarchy of RSpec modules to cover both unit- and functional-test requirements. For unit testing, it seems to make sense to create a hierarchy along infrastructure lines, so there might be a module named 'Database' that includes all the generic database test functions (e.g. checking that table names, field names, field definitions, constraints, triggers, ... are all defined correctly), which is subclassed into distinct modules for each database instance being tested. However, for functional testing, it seems to make more sense to create a hierarchy along business process lines, so that helper functions covering a particular set of business functionality are bundled together. Given you'll probably want to use a lot of the same methods in both your functional- and unit-test code, what's the best way to structure this hierarchy? (See the shared-examples sketch after this list.)
- use of modules/namespaces to achieve sensible isolation of functionality (e.g. the 'it_should_be_nil' problem described above), while still keeping the code that references those functions readable
- documentation requirements when building/maintaining a large RSpec test suite over an extended period of time, so that you don't wind up relying exclusively on knowledge held in the heads of key people, and new people can be brought up to date on "how it all hangs together" relatively quickly
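
One plausible answer to the hierarchy question is RSpec's shared example groups: write the generic checks once, then pull them into each database instance's spec, whether it sits in the unit- or functional-test tree. A minimal sketch, where 'SchemaInspector', its 'connect' method, and the table names are hypothetical:

    # The generic checks, in the spirit of a 'Database' module.
    shared_examples_for "a correctly defined schema" do
      it "defines every expected table" do
        @expected_tables.each do |table|
          @schema.table_names.should include(table)
        end
      end

      it "defines constraints on every expected table" do
        @expected_tables.each do |table|
          @schema.constraints_for(table).should_not be_empty
        end
      end
    end

    # One spec per database instance reuses the shared group.
    describe "billing database" do
      before(:each) do
        @schema          = SchemaInspector.connect("billing")
        @expected_tables = %w[invoices line_items]
      end

      it_should_behave_like "a correctly defined schema"
    end

Functional-test groups organized along business-process lines can include the same shared groups, so shared behaviour lives in shared example groups and plain modules rather than in a single inheritance tree.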
If anyone can point me to useful reference material along these lines, I'd greatly appreciate it.
Have you tried pair programming?
Have you read Feathers' "Working Effectively with Legacy Code"? I think that might help you recover from this mess.
As for some Novice Rules to organize your examples, I recommend this:
* Keep offline and online examples separate, so I can run all the examples that require expensive external resources separately from the ones that don't. I use separate source folders for this (see the Rakefile sketch below).
* Move test data creation into a folder like Rails' relatively new spec/support folder. Over time, introduce libraries like FactoryGirl to reduce the amount of code you write to create test data.
* Refactor test facilities, like custom matchers, into spec/support to make them available to everyone (also sketched below).
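
Two minimal sketches of those rules, assuming RSpec 1.x (the current release at the time of writing); the folder names and the 'fetch_all' cursor call are hypothetical. First, separate Rake tasks keep the cheap offline run apart from the expensive online one:

    # Rakefile
    require 'spec/rake/spectask'

    # Fast examples that touch no external resources.
    Spec::Rake::SpecTask.new(:spec_offline) do |t|
      t.spec_files = FileList['spec/offline/**/*_spec.rb']
    end

    # Slow examples that need a live database, network, etc.
    Spec::Rake::SpecTask.new(:spec_online) do |t|
      t.spec_files = FileList['spec/online/**/*_spec.rb']
    end

Second, a custom matcher kept in spec/support and loaded by spec_helper.rb, so everyone shares one definition instead of several private 'it_should_be_nil' variants:

    # spec/support/matchers/have_no_rows.rb
    Spec::Matchers.define :have_no_rows do
      match do |cursor|
        cursor.fetch_all.empty? # hypothetical cursor API
      end
    end

    # spec/spec_helper.rb -- load everything under spec/support.
    Dir[File.expand_path('support/**/*.rb', File.dirname(__FILE__))].each do |file|
      require file
    end

With that in place, any example group can write cursor.should have_no_rows.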
To fix the underlying problem, I recommend something bigger: invite your teams to spend 90 minutes, once per week, putting their examples up on a projector and playing "What's not to like about this code?" Start refactoring it there and then. Doing this every week helps teams converge on their understanding of "good enough design" and helps them share information about how they organize their examples and the code that supports them. It simply sounds like your programmers don't discuss these ideas with each other enough, or if they do, they don't agree enough. Whatever changes you make to the code base won't matter if you don't also do something like this.
Good luck.
--
J. B. (Joe) Rainsberger :: http://www.jbrains.ca :: http://blog.thecodewhisperer.com
Diaspar Software Services :: http://www.diasparsoftware.com
Author, JUnit Recipes
2005 Gordon Pask Award for contribution to Agile practice :: Agile 2010: Learn. Practice. Explore.
cheers,
Matt
http://mattwynne.net
+447974 430184