Yaron Minsky wrote a while ago: "When we first tried switching over from VB to C#, one of the most disturbing features of the language to the partners who read the code was inheritance. They found it difficult to figure out which implementation of a given method was being invoked from a given call point, and therefore difficult to reason about the code."

I was always puzzled by that argument. Scott Meyers points out at every
opportunity that in C++ (and, by extension, in OO languages in general),
a class's interface is a contract that has to be upheld throughout the
inheritance tree. So if something is a Foo, it must not matter that the
instance at hand is of a class derived five levels deep from Foo. If the
code is written such that a derived class breaks the contract, the code
is simply wrong and will cause no end of trouble. How to uphold such
contracts in your development environment is another story, of course:
how much can the compiler check, how much the test harness, how much
static code analysis tools, and so on.
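
To make the point concrete, here is a minimal sketch of my own (not
Meyers'; Foo is the only name taken from above, the rest are made up).
The caller sees only Foo's interface, and as long as every derived class
honours the stated contract, it is irrelevant which override actually runs:

#include <iostream>

// Contract stated by the interface: frobnicate() returns a non-negative
// value and leaves the object in a valid state. Every class derived from
// Foo, no matter how many levels deep, has to uphold this.
struct Foo {
    virtual ~Foo() = default;
    virtual int frobnicate() = 0;
};

struct Bar : Foo {
    int frobnicate() override { return 42; }   // honours the contract
};

// Written purely against Foo's contract; the caller neither knows nor
// cares which implementation is invoked.
void use(Foo& f) {
    std::cout << f.frobnicate() << '\n';
}

int main() {
    Bar b;
    use(b);   // prints 42
}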

Most importantly, inheritance was never meant to be a way to reduce the
amount of code. Code reuse is an incidental benefit, though it is often
cited as somehow the raison d'être of OO. If code reuse stands in the way
of upholding the interface contract, the contract wins in spite of code
duplication. The best example, perhaps, is the *widely abused*
circle/ellipse inheritance. The fact that, mathematically, a circle is an
ellipse does *not* mean that in an OO language you should derive a circle
class from an ellipse class -- it makes no sense at all if you think about
it from the standpoint of interface contracts. In the OO world, a circle
is *not* an ellipse. An ellipse has two independent radii, a circle
doesn't; that's where the story ends. The Wikipedia article on the problem
is amusing for the numerous workarounds it shows that supposedly "fix" it
somehow. I have not seen any decent, maintainable C++ code where the
Liskov substitution principle was violated. I have violated it myself in
my early years of learning C++, and I have only regretted it.
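
To see the contract violation concretely, here is a minimal C++ sketch
(my own illustration, not taken from the Wikipedia article; the class and
function names are made up):

#include <cassert>

// Contract of Ellipse: after set_radii(a, b), radius_a() == a and
// radius_b() == b, independently of each other.
class Ellipse {
public:
    virtual ~Ellipse() = default;
    virtual void set_radii(double a, double b) { a_ = a; b_ = b; }
    double radius_a() const { return a_; }
    double radius_b() const { return b_; }
protected:
    double a_ = 1.0, b_ = 1.0;
};

// Tempting, because mathematically a circle is an ellipse -- but a circle
// cannot honour the two-independent-radii contract.
class Circle : public Ellipse {
public:
    void set_radii(double a, double /*b*/) override {
        a_ = a;
        b_ = a;   // forced to ignore the second radius to stay circular
    }
};

// Written correctly against the Ellipse contract...
void stretch(Ellipse& e) {
    e.set_radii(2.0, 5.0);
    assert(e.radius_b() == 5.0);   // ...yet this fires when e is a Circle
}

int main() {
    Circle c;
    stretch(c);   // the derived class broke the base class's contract
}

The fix is not to patch the override but to drop the inheritance relation
altogether, which is exactly the point about the contract winning over
code reuse.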

So, when correctly applied, what's so disturbing about inheritance? You
inherit only where it makes sense, and if it makes sense then you don't
care which particular implementation gets invoked: it's supposed to be safe.

Cheers, Kuba