There's a lot of opinion and not much science in TDD. I'd be sceptical of anyone who introduced themselves as a TDD ninja. It takes an enormous amount of practice to develop expertise (5,000 hours), so almost everyone is a beginner. Read Kent Beck and Uncle Bob on what they think is best to get started, but usually your intuition will tell you what makes sense.
What you're describing does sound pretty lame, but it might be justifiable in some obscure circumstances. For me the rules of TDD are simple:

- write enough of a test that it can be executed and it fails
- write enough code to make that test pass
- commit
- refactor to eliminate any duplication and make the code express itself
- commit
- repeat

Unit tests must be incredibly fast (that means no database connections, no access to the file system, etc.). Anything like that should be covered by integration tests, and interaction with those 'leaf' classes should be verified via mocks.

Along the way, both the tests and the code might not look ideal, but they should at least suggest the direction you're heading. There should not be commented-out code that describes the entire implementation, with tests being written just so it can be uncommented. If the tests make no sense, the code probably makes no sense either. In the end, you should have some code that does exactly what you expect and some tests that can convey to someone else exactly what you expect.

That's my opinion. In some circumstances I break some or all of those rules (some duplication in tests is OK if the alternative is tests that do not adequately describe what the code does).

To wildly speculate about what might be going on in your case: sometimes testing the behaviour of a subclass can seem to introduce duplication. If your tests comprehensively cover the behaviour of the superclass, it can seem more efficient to just verify that the subclass IS an instance of the superclass and deduce that it must therefore have the inherited behaviour. The alternative is to have the tests follow the same inheritance hierarchy, but that seems even worse. A better option that avoids the dilemma entirely is usually to choose composition over inheritance: move the shared behaviour into a separate class and inject it as a dependency. It certainly sounds like the tests don't explain what the class does or why.
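To make the composition point concrete, here's a minimal sketch (hypothetical names, and Java rather than the C#/NUnit of the thread) of moving the shared behaviour into an injected collaborator, so the test can describe what the class does instead of asserting what type it is:

```java
// Hypothetical example: instead of ClassForUse extending BaseClass,
// the shared behaviour lives in a collaborator that gets injected.
interface Greeter {
    String greet(String name);
}

// The formerly-inherited behaviour, extracted into its own class
// (tested separately with its own unit tests).
class PoliteGreeter implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// The production class depends on the behaviour, not on a superclass.
class Welcomer {
    private final Greeter greeter;

    Welcomer(Greeter greeter) {   // dependency injected, trivial to stub or mock
        this.greeter = greeter;
    }

    String welcome(String name) {
        return greeter.greet(name) + "! Welcome aboard.";
    }
}

public class WelcomerTest {
    public static void main(String[] args) {
        // A hand-rolled stub stands in for a mocking framework.
        Greeter stub = name -> "Hi, " + name;
        Welcomer welcomer = new Welcomer(stub);

        // The assertion states behaviour, not type membership.
        String result = welcomer.welcome("Tristan");
        if (!result.equals("Hi, Tristan! Welcome aboard.")) {
            throw new AssertionError("unexpected: " + result);
        }
        System.out.println("ok");
    }
}
```

Note there is no `instanceof` check anywhere: once the shared behaviour is a dependency, the question "does this subclass inherit correctly?" disappears, and the test reads as a description of `Welcomer`'s job.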
Sorry about the rant. I hope some of that made sense. On Sun, Jun 5, 2011 at 5:06 PM, Tristan Reeves <tree...@gmail.com> wrote: > Hi list, > I'll describe the situation in as little detail as possible. > > There's some code in which a class BaseClass, and a class ClassForUse : > BaseClass are defined. > > BaseClass is used in a unit test that calls its constructor with mocks. > ClassForUse is used in production with a 0-param constructor which calls > the base constructor with hard-coded arguments. > > Forgetting (for now) any issues with all this (and to me there are plenty), > we then find the following unit test: > > [Setup] > var _instance = new ClassForUse(); > > [Test] > Assert.That(_instance is BaseClass); > > ...to me this is totally insane. But I seem unable to articulate exactly > the nature of the insanity. > > A little further on we have (pseudocode) > [Test] > Assert _instance._MemberOne is of type A > Assert _instance._MemberTwo is of type B > Assert _instance._MemberThree is of type C > > where the members are (if not for the tests) private members set by the > 0-param constructor which pushed them into the base constructor. (all hard > coded). > > So...is this really insane, or is it I who am crazy?? It's made more > perplexing to me because the author of this code says it's all a natural > result of TDD. And I am far from a TDD expert. > > I would love some feedback about this Modus Operandi. esp. any refs. It > seems obviously wrong, and yet I am unable to come up with any definitive > argument. > > Thanks, > Tristan. >