On Tue, Dec 15, 2020 at 12:49:26PM +0300, Paul Sokolovsky wrote:

> > Are you asking for a semantic difference (the two statements do 
> > something different) or an implementation difference (the two
> > statements do the same thing in slightly different ways)?
> 
> I'm asking for semantic difference, it's even in the post title.

So far all you have talked about is implementation differences such as 
whether intermediate results are put on a stack or not, and differences 
in byte-code from one version of Python to another.


> But
> the semantic difference is not in "what two statements do", but in
> "what two statements mean".

Right, so why are you wasting time talking about what they *do*, i.e. 
whether they put intermediate results on the stack, or in a register?


> Difference in "doing" is entailed by
> difference in "meaning". And there may be no difference in "doing",
> but still difference in "meaning", as the original "1+2+3" vs
> "1+(2+3)" example was intended to show. 

In the case of ints, there is no difference in meaning. For integers, 
addition is associative, and the order does not matter.

So here you *say* you are talking about semantics, but you are actually 
talking about implementation. With integers, the semantics of all of 
these are precisely the same:

    1 + 2 + 3
    (1 + 2) + 3
    1 + (2 + 3)
    3 + (2 + 1)

etc. The order in which you *do* the additions makes no difference to 
the semantics.



> > Implementation differences are fairly boring (to me): 
> 
> Right. How implementation points are brought into the discussion is to
> show the issue. As mentioned above, the actual progression is the
> opposite: first there's semantic meaning, then it's implemented. So,
> what's the semantic meaning of "a.b()" that it gets compiled with
> LOAD_METHOD bytecode?


- Look up the name "a" in the local namespace;

- look up the attribute name "b" according to a's MRO, including
  the use of the descriptor protocol, slots, etc;

- call whatever object gets returned.
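
To put that in rough code form (just a sketch using a throwaway class;
the real machinery also involves `__getattribute__`, slots and other
details it glosses over):

    class A:
        def b(self):
            return "result"

    a = A()
    meth = getattr(a, "b")   # name lookup of a, then attribute lookup of "b"
                             # via the MRO and the descriptor protocol
    meth()                   # call whatever object was returned; same as a.b()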


[...]
> > For what it is worth, Python 1.5 generates the exact same byte code
> > for both expressions; so does Python 3.9. However the byte code for
> > 1.5 and for 3.9 are different.
> 
> Right, and the question is what semantic (not implementational!) shift
> happened in 3.7 (that's the point when it started to be compiled
> differently).

Absolutely none.

There was a semantic shift, but it was back in Python 2.2 when new style 
classes and the descriptor protocol were introduced.



> > However, the semantics of the two expressions are more or less
> > identical in all versions of Python, regardless of the byte-code used.
> 
> That's what I'd like to put under scrutiny.


Okay, the major semantic differences include:

- in Python 1.5, attribute name look-ups call `__getattr__` if the
  name is not found in the object's MRO;

- in Python 3.9, attribute name look-ups first call `__getattribute__`
  before checking the MRO and `__getattr__`;

- the MRO is calculated differently between 1.5 and 3.9;

- in 3.9, the descriptor protocol may be invoked;

- descriptors and the descriptor protocol did not exist in 1.5;

- there are a few differences in the possible types of `obj.meth`
  between the versions, e.g. Python 1.5 had both bound and unbound
  instance methods, while Python 3.9 has only bound methods (accessing
  the method through the class just gives a plain function).

There may be other differences, but those are the most important 
ones I can remember.
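
If a concrete illustration helps, here is a small toy class (purely
illustrative) showing two of those points in a modern Python:

    class Demo:
        def __getattribute__(self, name):
            # In new-style classes this is called for *every* attribute access.
            print("__getattribute__:", name)
            return object.__getattribute__(self, name)

        def __getattr__(self, name):
            # This is only called when the normal lookup fails.
            print("__getattr__ fallback:", name)
            return lambda: None

        def meth(self):
            return 42

    d = Demo()
    d.meth()     # __getattribute__ fires, finds meth through the MRO
    d.missing()  # __getattribute__ fires and fails, __getattr__ fills the gap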


> > (I say "more or less" because there may be subtle differences between 
> > versions, relating to the descriptor protocol, or lack thereof, old
> > and new style classes, attribute lookup, and handling of unbound
> > methods.)
> 
> Right, and exactly those "subtle differences" is what I'd like to
> discuss.

See above.


> The level of abstraction I'm talking about is where you look not just
> at "`(expression)` vs `expression`" but at:
> 
> expression <op> expression   vs   expression <op> (expression)

That is an extremely straightforward change in execution order. Whether 
that makes any semantic difference depends on whether the operations 
involved are associative or not.
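
For example, int addition is associative but float addition is not, so
the grouping parens are semantically significant only in the second case:

    assert (1 + 2) + 3 == 1 + (2 + 3)   # ints: the grouping is irrelevant

    print((0.1 + 0.2) + 0.3)   # 0.6000000000000001
    print(0.1 + (0.2 + 0.3))   # 0.6 -- float addition is not associative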


> Where <op> is an arbitrary operator. As we already concluded, those do
> have difference, even considering such a simple operator as "+".
> 
> So... what can we say about the difference between a.b() and (a.b)()
> then?

There isn't one.

Even though the `.` (dot) and `x()` (call) are not actual operators, we 
can treat them as pseudo-operators. According to Python's precedence 
rules, the first expression `a.b()` is the same as:

- lookup name a
- lookup attribute b
- call

and the second `(a.b)()` is:

- lookup name a
- lookup attribute b
- call

which is precisely the same. The parens make no semantic difference.


Paul, I think you have fooled yourself by comparing two *different* 
situations. You compare a use of parens where they change the order of 
operations:

    a + b + c
    a + (b + c)

but you should be looking at this:

    (a + b) + c  # parens are redundant and make no difference

That is exactly equivalent to your method example:

    (obj.meth)()  # left pair of parens is redundant
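
You can check that with any throwaway class (illustrative example):

    class C:
        def meth(self):
            return "called"

    obj = C()
    assert obj.meth() == (obj.meth)()   # the extra parens change nothing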


> > > it might
> > > be helpful to add the 3rd line:
> > > 
> > > t = obj.meth; t()  
> > 
> > That clearly has different semantics from the first two: it has the 
> > side-effect of binding a value to the name t.
> 
> But that's yet another good argument to introduce block-level scoping
> to Python (in addition to already stated great arguments), because then,
> 
> (a.b)()
> 
> will be *exactly* equivalent to (inside a function):
> 
> if 1:
>     const t = a.b
>     t()

You are changing the rules of the discussion as you go. You said nothing 
about a hypothetical new feature of block scopes and constants. You gave 
an example of existing Python code:

    t = obj.meth; t()

There is no if block here, no new scope, no constants. t is a plain old 
ordinary local variable in the current scope.
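
Concretely (a small illustrative example):

    s = "hello world"
    t = s.upper; t()          # same call as s.upper(), but with a side-effect:
    print(t)                  # t is still bound in the current scope...
    print(t.__self__ is s)    # ...and it remembers s; this prints True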


> Hopefully, that was answered above. To end the post with the summary,
> I'm suggesting that there's difference between:
> 
> expression <op> expression   vs   expression <op> (expression)
> 
> Which is hopefully hard to disagree with.

There may or may not be a difference, depending on the associativity and 
precedence rules involved.


> Then I'm asking, how consistent are we with understanding and
> appreciating that difference, taking the example of:
> 
> a.b()   vs   (a.b)() 

Whereas this example has only a single interpretation of the 
precedence:

- lookup a in the current scope;
- lookup attribute b on a;
- call the result.

There's no other order of operations available, so no way for the parens 
to change the order of operations:

- you cannot call a.b until you have looked up b on a;
- you cannot lookup b on a until you have looked up a.

Let's be concrete:

    s = "hello world"
    s.upper()  # returns "HELLO WORLD"

There is only one possible order of operations:

- lookup s;
- lookup "upper" on s;
- call the resulting method.

You cannot use parens to change the order of operations:

- call the resulting method first   (what method?)
- lookup "upper" on s               (what's s?)
- lastly lookup s                   (too late!)




-- 
Steve