On Sat, Apr 16, 2022 at 04:40:58PM +1000, Chris Angelico wrote:

> > Which conditions would you drop? There's not that many, really. Five.
> > Six if you include the "no cycles" requirement for the DAG, which I
> > think is so obviously necessary that it is barely worth mentioning.
> 
> This is *exactly* the "no true Scotsman" fallacy: you have already
> excluded from consideration anything that drops a condition you didn't
> already drop.

No, I have excluded them from consideration because if you allow them, 
**inheritance doesn't work properly**.

It becomes inconsistent and buggy. Bad things happen, like the method 
resolution order depending on how you spell the class name, or differing 
between a class and its subclass.

Maybe those bad things will be rare. MI in Python 1.x only misbehaved if 
you had a diamond graph, which was rare with classic classes. With 
new-style classes, every use of multiple inheritance includes a diamond 
(since all classes ultimately inherit from object), and MI in Python 2.2 
misbehaved under some circumstances but not all:

https://mail.python.org/pipermail/python-dev/2002-October/029035.html

For 2.3, we switched to a proven algorithm, C3 linearization, to fix 
those problems. This was not an arbitrary choice. It is necessary to 
avoid inheritance misbehaving.

Like Dylan, Ruby, Perl etc (to the best of my knowledge, corrections are 
welcome) Python now supports as many cases of automatic inheritance in 
MI as it is possible to support without breakage.

There is no more silent breakage in the MI model like there was in 
Python 1.5; instead we get a clear exception if we try to create an 
inconsistent class hierarchy.
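
A minimal sketch of what that looks like today (illustrative class 
names; the exact wording of the message varies slightly between 
versions):

    class A: pass
    class B: pass
    class C(A, B): pass   # C says A comes before B
    class D(B, A): pass   # D says B comes before A

    class E(C, D): pass
    # TypeError: Cannot create a consistent method resolution
    # order (MRO) for bases ...

C and D make contradictory promises about the order of A and B, so no 
linearization can satisfy both, and the class statement fails loudly 
instead of silently picking one.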

C++ and Eiffel are even stricter (more restrictive) than Python. They 
don't just exclude class hierarchies which are inconsistent, they 
exclude class hierarchies with perfectly good linearizations because 
they have a method conflict.

As I have said now more times than I can count, that's a perfectly 
acceptable design choice, maybe even better than Python's, but it means 
that their MI model is less general than Python's.

Hard to believe that this mild take is so controversial. Imagine if I 
had said something really wild like "Python lists are mutable" or "cars 
generally drive on four wheels".


> On the assumption that your five conditions are
> essential, there's no way that you can drop any of the five conditions
> and still have it count, therefore the five conditions are essential.
> Your logic is circular.

You seem to be determined to accuse me of every fallacy under the sun, 
whether it applies or not. Excluded middle, No True Scotsman (no matter 
how many times I say that other choices for MI are legitimate and maybe 
even better than Python's choice), circular reasoning.

At least you haven't (yet) accused me of poisoning the well, perhaps 
because the irony would be too much.

The conditions I have given are not essential because Python has them, 
but because they genuinely are essential to avoid buggy MI like Python 
used to have.

There are other ways to avoid such bugs. You can do what Java does, and 
not allow MI at all. Or you can refuse to resolve method conflicts, like 
C++ and Eiffel. Or you might just hope that nobody notices the bugs, and 
if they do, close them as Won't Fix. There are many strategies a 
language might take.


> It is highly arrogant to assume that nobody will ever find a way to
> implement MI while dropping one of your conditions. They're not
> fundamental to the definition, they're fundamental to *the way Python
> does things*.

No, they are fundamental to the definition. People just don't generally 
mention them, either because they don't know them, or take them for 
granted.

The three conditions for C3 are necessary for MI to be 
consistent and coherent. Of course you don't need them if you have 
subclassing without inheritance (like mixins in some languages), or 
no MI at all, but otherwise you need C3.

The rule that the linearization should only depend on the DAG between 
classes, and not on incidental factors like their name, or the time of 
the day, or a random number generator, is just common sense.

https://i.imgur.com/fIVQIj8.jpg

The requirement for automatic conflict resolution is kinda necessary for 
it to be inheritance; otherwise you're doing something else. E.g. in 
Eiffel, you have to rename the conflicting methods. In C++ you have to 
use explicit delegation.

Which is cool. As I have said about a bajillion times now, there are 
good arguments that the more restrictive models of MI implemented by C++ 
and Eiffel are better than the less restrictive model used by Python.
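
For concreteness, here is a small sketch of the automatic resolution 
Python performs in the same situation (class and method names made up 
for illustration):

    class A:
        def greet(self):
            return "A"

    class B:
        def greet(self):
            return "B"

    class C(A, B):      # no renaming or delegation needed
        pass

    print(C().greet())  # "A", because A precedes B in C's MRO
    print(C.__mro__)    # (C, A, B, object)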

So please stop falsely accusing me of "No True Scotsman" bullshit. If 
you keep doing it, I will know you're not arguing in good faith.


> > The most subjective is the requirement for automatic conflict
> > resolution. It is a legitimate design choice to give up automatic
> > conflict resolution (that's what C++ and Eiffel do) but that would be a
> > breaking change for Python.
> 
> Yes. A breaking change FOR PYTHON.

Right. Python's model for MI (like that of Dylan, Ruby, Perl etc) 
already supports automatic conflict resolution as a feature.

If you take that feature away, you have *fewer* features in your model 
of MI, right? Can we at least agree that if you start with N features, 
and subtract 1 leaving N-1, that N-1 is *less* than N?

Or are you now going to insist that maybe some day in the future, we 
will discover a way to take away 1 from a number and have the result be 
larger than the original? 

    N - 1 > N

If we did break backwards compatibility, it would mean that cases of MI 
which are supported now would no longer be supported in the future. That 
sounds like "less general" to me.


> > So come on Chris, back up your disagreement with something objective,
> > not just wishy-washy "anything might happen in the future!" nonsense.
> 
> Yet you're willing to argue that other languages don't do "full MI"
> because they do things that would be a breaking change for Python?

Not because it would be a breaking change for Python, but because they 
don't support MI in the full generality that languages such as Dylan, Ruby, 
Perl and Raku do. This shouldn't be controversial.

They reject superclass hierarchies which Dylan etc are capable of 
handling. That sounds like less general to me.

It's not even a value judgement that Dylan etc are "better". There is a 
good argument to be made that MI in its full generality is too hard to 
use right.


> The only things that we can completely rule out are those which are
> true by definition, or can be proven mathematically or logically.

Right, like the C3 linearization for MI etc. That's my point.


> A new odd number between 3 and 5 is provably impossible. 7 is truly 
> prime, by the definition of primes, and any extension to that 
> definition (eg complex primes or Gaussian integers) must maintain 
> that.

You happen to be right about 7 being a Gaussian prime, but your 
reasoning is wrong. For example, none of 2, 5, 13 or 17 are Gaussian 
primes (but 7 and 11 are).
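
(The quick check: an ordinary prime stays prime in the Gaussian 
integers exactly when it is congruent to 3 mod 4; the others all 
factor:

    2  = -i(1 + i)^2
    5  = (2 + i)(2 - i)
    13 = (3 + 2i)(3 - 2i)
    17 = (4 + i)(4 - i)

while 3, 7 and 11 admit no such factorization.)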


[...]
> This is the essence of the "no true Scotsman" fallacy: you assume that
> it's not true MI without automatic conflict resolution.

I have never said "true MI".

I have said *full MI*, in the sense of full generality. You cannot 
generalise MI further without losing essential properties.

C++ does MI. But there are cases where C++ rejects a call as ambiguous 
(at compile time) where Dylan, Perl, etc will happily resolve the 
conflict. C++'s model of MI supports fewer cases than Dylan, Perl, etc, 
and is therefore less general than Dylan, Perl, etc.

Hard to imagine that something so obvious and mild should cause so much 
angst and insistence that I am wrong with so little to back it up.


> > - The MRO is entirely dependent on the shape of the inheritance
> >   graph, and not on incidental properties like the name of classes.
[...]
> 
> Is it not true MI if the relationships change? Is that what you're
> saying? 

If you have a hierarchy like this:

    A   B
     \ /
      C
      |
      D

which linearizes to [D, C, A, B], and you rename class A to Z and 
change *nothing* else, but the linearization changes to [D, C, B, Z] 
(swapping the order of B and what was A, and therefore changing the 
behaviour of D), then what you have is *broken* inheritance.
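
For reference, a sketch of the unbroken behaviour in Python today, 
using the same class names as the diagram:

    class A: pass
    class B: pass
    class C(A, B): pass
    class D(C): pass

    print([k.__name__ for k in D.__mro__])
    # ['D', 'C', 'A', 'B', 'object']

Rename A to Z (updating the references to it) and the order is simply 
['D', 'C', 'Z', 'B', 'object']; nothing about B's position changes.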


> > - the inheritance model is consistent, monotonic and preserves local
> >   precedence order (C3 linearization).
> >
> > Which of those three will you give up, and why is that a good thing?
> >
> > "In my class Spam, superclass A takes precedence over B, but when I
> > subclass Spam, the precedence swaps and B comes before A."
> >
> > "That's not a bug, that's a feature!!!"
> 
> You keep asserting that, because something OBVIOUSLY would be a bad
> thing for Python, it must not be "true multiple inheritance".


It would be bad for any language.

If you have a class hierarchy like this:

    A   B
     \ /
      C
      |
      D

with linearization [D, C, A, B], and then you subclass D:

    A   B
     \ /
      C
      |
      D
      |
      E

and the linearization of E swaps the orders of all or some of C, A, B, 
let's say [E, D, B, A, C], then you have a broken model of MI.

That is bad in any language, not just Python.
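
Concretely, with C3 subclassing can never reorder the classes the 
parent has already linearized. Using the class names from the diagram 
above, in Python:

    class A: pass
    class B: pass
    class C(A, B): pass
    class D(C): pass
    class E(D): pass

    print([k.__name__ for k in D.__mro__])
    # ['D', 'C', 'A', 'B', 'object']
    print([k.__name__ for k in E.__mro__])
    # ['E', 'D', 'C', 'A', 'B', 'object']

C, A and B keep their relative order in E, so D's behaviour doesn't 
change out from under you when you subclass it.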


[...]
> > Yes. Its a DAG of superclass/subclass relationships. There is only one
> > way to draw that graph that is coherent.
> 
> Can you mathematically prove that? 

Me personally? No, it's above my pay grade. But other people can and 
have. I've given enough pointers to this topic to sink a battleship. Do 
your own research. I have. Google is your friend.

You really don't have to automatically dispute everything I say without 
evidence. Go get some evidence and prove I'm wrong. I always welcome 
correction if I am provably wrong.

But it is tiresome to read you misrepresenting what I have said over and 
over again, when your arguments are no more substantial than "somebody 
might someday invent something new".


> What if there is some other way to
> linearize that is *also* consistent with itself, but different from
> C3? Is there some way to prove that this is impossible?

That's a good question!

The C3 algorithm is deterministic at every step; there is never any 
place where it makes an arbitrary choice. So any other algorithm with 
the same requirements must end up with the same linearization.
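
To make the determinism concrete, here is a rough sketch of the merge 
step (simplified; not CPython's actual C implementation):

    def c3_merge(sequences):
        # Repeatedly take the first candidate head that does not appear
        # in the *tail* of any sequence. At every step that choice is
        # forced; if no head qualifies, the hierarchy is inconsistent.
        result = []
        seqs = [list(s) for s in sequences if s]
        while seqs:
            for seq in seqs:
                head = seq[0]
                if not any(head in s[1:] for s in seqs):
                    break
            else:
                raise TypeError("inconsistent hierarchy")
            result.append(head)
            seqs = [[c for c in s if c is not head] for s in seqs]
            seqs = [s for s in seqs if s]
        return result

    def c3_linearize(cls):
        return c3_merge(
            [[cls]]
            + [c3_linearize(base) for base in cls.__bases__]
            + [list(cls.__bases__)]
        )

For the diamond above (C inheriting from A and B, D inheriting from C), 
c3_linearize(D) gives [D, C, A, B, object], matching D.__mro__.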


> Also, what if there is no linearization as such - what if, when you
> call the superclass, it actually calls ALL the parents, not just one?

You mean you want to eliminate the ability of classes to override their 
superclass? I'll be honest, I never even imagined that anyone would 
treat that as a serious suggestion.

But okay, we say that classes can no longer override their superclasses, 
and the interpreter ensures that when you call Child.method, every one 
of its superclasses is called. They still have to be called in some 
order, and that's your linearization.

If the linearization meets the same C3 requirements, then it is 
effectively the same as what Python does (except it eliminates the 
ability to override a superclass method). If it is different, then there 
will be rare, or common, class hierarchies that misbehave.
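
For what it's worth, you can already get the "call all the parents" 
effect in Python, on top of the linearization, by writing cooperative 
methods that each call super() (a sketch, with made-up names):

    class Base:
        def setup(self):
            pass                # end of the cooperative chain

    class A(Base):
        def setup(self):
            print("A.setup")
            super().setup()

    class B(Base):
        def setup(self):
            print("B.setup")
            super().setup()

    class C(A, B):
        def setup(self):
            print("C.setup")
            super().setup()

    C().setup()
    # prints C.setup, A.setup, B.setup -- every class runs exactly
    # once, in MRO order (C, A, B, Base, object)

The linearization is still doing the work: it is what guarantees each 
method runs exactly once and in a sensible order.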



-- 
Steve
