On 4 February 2010 05:12, David Mitchell <monch1...@gmail.com> wrote:

> Hello group,
>
> I've searched through several months messages in the archive, but haven't
> found an answer to this...
>
> What is the 'best practice' way to structure RSpec code and documentation
> when testing a very large project, where the RSpec code base has to be
> maintained and extended over a long period?
>
> A bit of background: I've just been brought onto a (non-Ruby) project that
> has unit- and functional-test suites written using RSpec.  It's a large
> project, and growing; there are currently >20,000 distinct unit-test cases
> in RSpec, and a smaller (but still considerable) number of functional-test
> cases.  The number of test cases is still growing quickly, but the team
> has hit a bottleneck: it's hard to create new test cases without breaking
> existing ones.
>
> Over the life of the project, there have been a number of people writing
> RSpec tests without any overriding guidance on things like:
> - appropriate naming of helper functions
> - use of private vs. protected vs. public methods to expose functionality
> only as required
> - ensuring the scope of code is managed correctly (e.g. code for testing
> databases should probably be held in a module named 'Database')
> - documentation, in any form e.g. what a helper function does, what its
> side effects are, coverage of modules & how to extend them, ...
> - use of 'raise' and 'warn' to highlight problems
> - etc., etc.
>
> As a result, what exists now is basically a huge mess.  For example, we've
> got multiple helper functions with identical names that serve very
> different purposes, e.g. 'it_should_be_nil', where one version does a
> string comparison, another checks the number of records returned by a
> database cursor, and so on.  These functions are scoped so that they're
> accessible from all the 'wrong' places, so it's quite possible for the
> wrong helper function to be referenced accidentally at any point, and
> quite difficult to identify which of several identically-named helper
> functions will actually be executed at any given point.
>
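That helper collision is exactly what Ruby modules are for. Below is a
minimal sketch (Customer and the attribute names are made up, not from
your code base): give each behaviour its own descriptively-named,
module-scoped macro, then extend only the example groups that need it, so
only one version is ever visible at a given point.

  module StringHelpers
    # defines an example asserting that the named attribute is nil
    def it_should_be_nil(attribute)
      it "#{attribute} is nil" do
        subject.send(attribute).should be_nil
      end
    end
  end

  module CursorHelpers
    # expects the group to define `cursor`; asserts it returns no rows
    def it_should_return_no_rows
      it "returns no rows" do
        cursor.count.should == 0
      end
    end
  end

  describe Customer do
    extend StringHelpers   # only the string-flavoured macro is in scope here
    it_should_be_nil :middle_name
  end
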
> Aside from some serious therapy, what I'm looking for is some sort of 'best
> practices' documentation covering how to use RSpec to create *and maintain*
> a very large population of test cases over an extended period of time.  If I
> can get that, then I can at least start working in the right direction to
> ensure the problem doesn't get any worse, and then start fixing what exists
> now.  Issues that are biting me right now include:
> - how to structure a hierarchy of RSpec modules to cover both unit- and
> functional-test requirements.  For unit-testing, it seems to make sense to
> create a hierarchy along infrastructure lines, so there might be a module
> named 'Database' that includes all the generic database test functions (e.g.
> check table names, field names, field definitions, constraints, triggers,
> ... are all defined correctly), which is then specialised into distinct
> modules for each database instance being tested.  However, for
> functional-testing, it
> seems to make more sense to create a hierarchy along business process lines,
> so that helper functions covering a particular set of business functionality
> are bundled together.  Given you'll probably want to use a lot of the same
> methods in both your functional- and unit-test code, what's the best way to
> structure this hierarchy?
> - use of modules/namespaces to achieve sensible isolation of functionality
> (e.g. the 'it_should_be_nil' problem described above), while keeping the
> code that references functions in those modules readable
> - documentation requirements when building/maintaining a large RSpec test
> suite over an extended period of time, so that you don't wind up relying
> exclusively on knowledge held in the heads of key people, and new people can
> be brought up to date on "how it all hangs together" relatively quickly
>
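On the hierarchy question: you may not have to choose. Keep the generic
database checks in one module and mix that into per-instance modules, which
both your infrastructure-oriented unit specs and your business-process-
oriented functional specs can include. A rough sketch (BillingDb, the
connection call, and the table name are all invented for illustration):

  module DatabaseHelpers
    # generic checks shared by every database instance
    def should_have_table(name)
      connection.tables.should include(name)
    end
  end

  module BillingDbHelpers
    include DatabaseHelpers
    def connection
      @connection ||= BillingDb.connect   # hypothetical connection factory
    end
  end

  # unit spec, grouped along infrastructure lines
  describe "Billing schema" do
    include BillingDbHelpers
    it "defines the invoices table" do
      should_have_table "invoices"
    end
  end

  # functional spec, grouped along business-process lines, same helpers
  describe "Invoicing a customer" do
    include BillingDbHelpers
    it "records the invoice" do
      # drive the business process here, then reuse the shared checks
      should_have_table "invoices"
    end
  end
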
> If anyone can point me to useful reference material along these lines, I'd
> greatly appreciate it.
>
> Thanks in advance
>
> David Mitchell
>

Rails is quite good at organising RSpec tests, so looking at a big Rails
project could give you some pointers. The key ideas I would take from Rails
are:

1. Separation of unit specs from other sorts of specs
2. A one-to-one match between a class and its spec
3. Rigorous, consistent conventions for naming specs
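
In practice, points 2 and 3 mean a spec tree that mirrors the code tree,
one spec file per class, with a predictable suffix. Something like this
(paths are illustrative, not from any particular project):

  app/models/customer.rb       ->  spec/models/customer_spec.rb
  app/models/invoice.rb        ->  spec/models/invoice_spec.rb
  lib/billing/tax_rules.rb     ->  spec/lib/billing/tax_rules_spec.rb

With that convention, anyone can get from a class to its spec (and back)
without asking, which solves a good chunk of your documentation problem.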

My no. 1 tip for any spec is to read its output when you run it. I do this
in TextMate, but you can do it with an HTML report or at the command
prompt. My view is that if a spec read this way doesn't make complete
sense, it's not worth having: if you can't understand what you're
specifying, you certainly can't know how your application is behaving. One
thing this kind of reading will quickly reveal is specs that are doing too
many things.
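
For example, with the documentation-style formatter ('spec --format
specdoc' in RSpec 1.x, 'rspec --format documentation' in RSpec 2), a group
like this (Account is just an illustrative class):

  describe Account do
    context "when newly opened" do
      it "has a zero balance" do
        Account.new.balance.should == 0
      end
    end
  end

reads back roughly as:

  Account
    when newly opened
      has a zero balance

If that output doesn't read as a sensible statement about behaviour, the
spec needs rework.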

HTH

Andrew
_______________________________________________
rspec-users mailing list
rspec-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/rspec-users
