On Wednesday 27 June 2007 22:38:17 Andy Lester wrote:

> It'd have to be against the last update from svn of the file itself.

Yes.

> I'm not sure I like the idea of relying on a given VCS.  I know
> Parrot's hosted in Subversion, but what about the Git folks?

As soon as they start reporting failures in the metadata tests, I'll start to 
believe that they actually run the full test suite.

I haven't seen them report any failures from the metadata tests.  Thus I 
conclude that, if any such folks exist, they don't have a lot of motivation 
to report failures.

> It smells funny to me.

All I know is that I've made more than my share of commits in the past six 
months to fix broken tests of non-functional requirements.  I'm all for code 
quality and standards and removing even all warnings, but *people don't run 
the full test suite reliably before they commit anyway*.  Heck, you didn't 
even *compile* before one of your checkins yesterday.

I can't believe that adding more tests--tests that analyze some subset of the 
3800 files in the repository and perform a lot of I/O to do so--will encourage 
people to run the tests more often.

It's my experience (and advice I give people in exchange for money in 
professional contexts) that making tests faster and less painful to run 
encourages people to run them more often.  Faster, more frequent feedback 
enables many very good things.

Running all of the coding standards tests on all of the files in the 
repository--even the ones we didn't change--on every full test run goes 
against my strongly-held personal advice.  We certainly don't do that for the 
tests of the configuration and code-generation systems, and those are 
FUNCTIONAL tests.
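
The alternative implied here (ask the VCS which files actually changed, and point the standards tests at only those) could look something like the following minimal sketch. This is illustrative only, not Parrot's actual harness: the `parse_svn_status` helper and the direct invocation of `svn status` are assumptions for the example.

```python
# Hypothetical sketch: run coding-standards/metadata tests only against
# files that `svn status` reports as locally added or modified, instead
# of walking all ~3800 files in the repository on every full test run.
import subprocess

def parse_svn_status(output: str) -> list[str]:
    """Return paths whose status column is 'A' (added) or 'M' (modified)."""
    changed = []
    for line in output.splitlines():
        if line[:1] in ("A", "M"):
            changed.append(line[1:].strip())
    return changed

def changed_files() -> list[str]:
    # Requires a working copy; `svn status` lists only local changes.
    result = subprocess.run(["svn", "status"], capture_output=True, text=True)
    return parse_svn_status(result.stdout)

if __name__ == "__main__":
    for path in changed_files():
        # Feed these paths to the metadata tests rather than the whole tree.
        print(path)
```

The point of the sketch is the cost model: the expensive per-file analysis runs in proportion to the size of the commit, not the size of the repository.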

Again, I'm all for code quality.  I think these tests are important--but 
they're only important *if* people run them.  Adding minutes to the full test 
run is one sign of a not-right approach.

-- c
