David,

> * first, when a bug gets reported in live, I like to create a test case
>  from it, using data that at least replicates the structure of what
>  is live.  This will, necessarily, be an end-to-end test of the
>  whole application, from user interaction through all the layers that
>  make up the application down to the database, and back up through all
>  those layers again to produce broken output.  As I debug and fix I may
>  also create unit tests for individual bits of the application.

See, I would approach that differently.  I would start with creating a test
for just the bit of data that I suspected was causing the problem.  If that
worked (that is, if it reproduced the bug), fine, I'm done.  If it
_didn't_, though, I have to add more and
more bits of data specific to the user's exact situation ... which then
shows me how much that extra data is contributing to the problem vs how
much is irrelevant.  By the time I'm done, I have exactly the data I need
to reproduce, which makes it _much_ easier to figure out what's wrong.
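To make that concrete, here's a toy sketch of the kind of thing I mean
(Python/pytest flavour; the function and the "bug" are invented for
illustration, not from any real codebase):

    # Toy stand-in for the code under test.  The "bug": when a discount is
    # present, the total is never clamped back up to zero.
    def compute_total(item):
        total = item["quantity"] * item["price"]
        if "discount" in item:
            return total * (1 - item["discount"])   # clamp forgotten here
        return max(total, 0)

    # Step 1: the narrowest test I can write for the bit I suspect
    # (the negative quantity).  This one passes, so that alone isn't it.
    def test_negative_quantity_alone():
        assert compute_total({"quantity": -1, "price": 10}) >= 0

    # Step 2: fold in the next bit of the user's data.  This one fails,
    # which tells me the discount is the part of their data that matters.
    def test_negative_quantity_with_discount():
        assert compute_total({"quantity": -1, "price": 10, "discount": 0.1}) >= 0

The bit of data I had to add before a test finally failed is exactly the bit
that matters; everything I never had to add is noise, and it stays out of the
reproduction.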

But then, on our team, we only produce unit tests.  Integration tests are
produced by QA, and they might do it more as you suggest.

Of course, they don't get to start with an empty database either.  It might
be _nice_ to start with a blank slate every time, I'm not sure--I'd worry
that I'd constantly miss scalability issues that our current process surfaces
fairly early--but the point is, I don't have that luxury, whether I want it
or not. ;->

(OTOH, I'll soon be moving to a new job where they don't even _have_ a QA
department, so I'll have to rethink the whole process. :-D )


            -- Buddy
