[ apologies if this mail is poorly formatted, posted via webmail ]

Gavin Sherry said:
> For the last few weeks Neil and I have been discussing unit testing as
> a means of testing Postgres more rigorously.

I should note that we've also been looking at some other ideas, including
different approaches to testing, static analysis, and model checking.

> The only problem I can think of is that many of the functions we
> would want to test are static. We could just sed the static keyword away,
> however, using a rule that all static functions are defined with ^static.

Another approach would be to have a "configure" flag to enable unit
testing that would define "static" to nothing when enabled. It would be
nice to have access to the prototypes of static functions while writing
unit tests: we could either do without that, or have a script to generate
the header files automatically.
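
For illustration, here's a minimal sketch of how that could hang
together. The UNIT_TESTING symbol and the --enable-unit-tests name are
hypothetical -- neither exists today:

    /*
     * Sketch only: assumes a hypothetical --enable-unit-tests configure
     * switch that defines UNIT_TESTING (via CPPFLAGS or pg_config_manual.h).
     * Redefining a keyword like this is a well-known trick, though it is
     * strictly speaking outside the C standard.
     */
    #ifdef UNIT_TESTING
    #define static              /* expose file-local functions to the tests */
    #endif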

BTW, I think that unit testing could probably only be enabled when a
configure flag is specified in any case: unfortunately the changes needed
to implement it may be rather invasive.

> Whereas with a standard Assert() in C we produce a test with a
> boolean result, CuTest can do the tests itself (i.e., if you want to assert
> on a string comparison or an integer equality, CuTest provides functions
> to actually do those operations). I think this is ugly and I guess we
> could just use the standard boolean test.

I don't think it's ugly. FWIW, CuTest probably uses that approach because
it is what most of the SUnit-derived testing frameworks do. You can always
use CuAssert() if you want to write the rest of the assertion condition
yourself.
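
For concreteness, here is roughly what the two styles look like in a
CuTest test function (toy values only):

    #include "CuTest.h"

    void
    TestAssertionStyles(CuTest *tc)
    {
        /* CuTest performs the comparison itself ... */
        CuAssertIntEquals(tc, 4, 2 + 2);
        CuAssertStrEquals(tc, "hello", "hello");

        /* ... or you supply the boolean condition, Assert()-style. */
        CuAssert(tc, "2 + 2 should equal 4", 2 + 2 == 4);
    }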

I think one challenge Gavin didn't mention is how easy (or not) it will be
to write unit tests for deeply-internal parts of the backend: unit testing
utils/adt and the like is all well and good, but there isn't much point if
that's all we can test. The problem with testing the guts of the backend is
that a given backend function typically requires an enormous amount of
state -- working memory contexts, catalog caches, an open transaction, and
so on -- and will often make some pretty specific assumptions about the
environment in which it is executing. I'm not sure there is a simple way to
solve this: writing the first few deep-internals unit tests is probably
going to be pretty painful. But I think there are a few reasons to be
optimistic:

- once we've written a few such tests, we can begin to see the
initialization / setup code that is required by multiple tests, and
refactor it out into separate functions in the backend. Eventually, the
code invoked to do _real_ backend startup would be just another client
of the same set of shared initialization functions used to set up the
environment for unit tests (a rough sketch follows this list).

- we don't need to write tests for the *entire* source tree before we
begin to see some payback. Once we have a good test suite for a specific
component (say, utils/adt or FE libpq), developers should be able to see
the gains (and hassles) of unit testing; if people like it, it should be
easy to incrementally add more tests.
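
As a very rough sketch of the first point -- the function name below is
hypothetical; only MemoryContextInit() is a real backend function:

    /*
     * Sketch only: the idea is that pieces factored out of normal backend
     * startup (InitPostgres() and friends) become reusable by the tests.
     */
    void
    UnitTestSetUpMinimalBackend(void)
    {
        MemoryContextInit();    /* existing function in utils/mmgr/mcxt.c */
        /* ... plus whatever other shared setup the tests turn out to need ... */
    }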

One final note: it is a hassle to unit test a 300-line function because of
all the different code paths and error conditions such a function usually
has. I think a natural pattern will be to test small bits of functionality
and refactor as you go: rather than trying to test a huge function, we
ought to pull a distinct piece of functionality out into its own function,
which will be much easier to unit test by itself. So unit testing and
refactoring the code into smaller, more granular functions tend to go hand
in hand. Now, we can debate whether the resulting functions are in good
style (I strongly believe they are), but I thought I'd add that.
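
To make that concrete, a trivial and entirely hypothetical example of
the pattern -- none of these names refer to real backend functions:

    #include <string.h>
    #include <stdbool.h>
    #include "CuTest.h"

    /*
     * Hypothetical helper pulled out of some much larger function, purely
     * to illustrate the refactor-then-test pattern. (Being static, it is
     * also the sort of function the "define static away" trick is for.)
     */
    static bool
    option_name_is_valid(const char *name)
    {
        return name != NULL && name[0] != '\0' && strlen(name) < 64;
    }

    void
    TestOptionNameIsValid(CuTest *tc)
    {
        CuAssert(tc, "empty name rejected", !option_name_is_valid(""));
        CuAssert(tc, "ordinary name accepted",
                 option_name_is_valid("work_mem"));
    }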

-Neil


