>
> I don't pretend to fully understand the proposal and how it would be
> implemented, but IMO adding an overhead (not to mention more complicated
> semantics) to *every* chained attribute lookup...


It's a good thing that isn't the case then :p

Sorry, couldn't resist that quip. But more seriously, this would not be an
issue. Normal attribute lookups would remain wholly unchanged. Because
attribute lookups on namespace proxies *are* normal attribute lookups.
That's the entire reason I've proposed it the way I have (well, that and
the fact that it allows methods within namespaces to still bind to the
parent class basically 'for free' without requiring any magical behaviour).
Aside from the magic behaviour of the namespace block itself (the way it
binds names into its parent scope), it does not require *any* changes to
the current attribute lookup semantics. In fact, it is built entirely on them.
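To illustrate the "for free" binding claim: if namespace members end up stored under dotted keys in the owning class's dict (as sketched elsewhere in this thread), the ordinary descriptor protocol still runs when they're looked up. A rough sketch of that effect, using plain setattr/getattr in place of the hypothetical `namespace` syntax (the names `C`, `ns`, and `method` are all just illustrative):

```python
class C:
    pass

def method(self):
    return "bound to " + type(self).__name__

# Roughly where a hypothetical `namespace ns:` block inside C would
# store the function: under a dotted key in the class itself.
setattr(C, "ns.method", method)

c = C()
bound = getattr(c, "ns.method")  # a perfectly normal attribute lookup
print(bound())                   # the descriptor protocol bound it: "bound to C"
```

Because the lookup is a normal class attribute lookup, the function's `__get__` fires and produces a bound method with no extra machinery at all.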

The only *possible* difference would be that perhaps namespace proxies
could be special-cased if any room for optimization could be found at the
implementation stage (purely hypothetical) to attempt to make them more
performant than the rough python pseudocode sketch of their behaviour I
posted yesterday in response to Paul. So fear not: no current semantics
would be harmed in the making of this keyword.
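Since that sketch isn't reproduced in this message, here is a minimal, purely hypothetical version of what such a proxy's behaviour could look like (the class name, dict layout, and attribute names are illustrative, not part of the proposal):

```python
class NamespaceProxy:
    """Hypothetical sketch: stores members as dotted keys in a parent
    mapping (e.g. a module's or class's __dict__)."""

    def __init__(self, name, parent_vars):
        # object.__setattr__ avoids recursing into our own __setattr__.
        object.__setattr__(self, "_name", name)
        object.__setattr__(self, "_parent_vars", parent_vars)

    def __getattr__(self, attr):
        # Prepend our name and forward the lookup to the parent scope.
        try:
            return self._parent_vars[f"{self._name}.{attr}"]
        except KeyError:
            raise AttributeError(attr) from None

    def __setattr__(self, attr, value):
        self._parent_vars[f"{self._name}.{attr}"] = value


scope = {}                       # stand-in for a module's __dict__
A = NamespaceProxy("A", scope)
A.x = 1
print(scope)                     # {'A.x': 1}
print(A.x)                       # 1
```

Note that everything here is built out of entirely ordinary attribute access; nothing about the lookup machinery itself changes.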

> So, is it a prerequisite that whatever object in which I'm trying to
> establish a namespace must support getattr/setattr?
>

I'd imagine so, yes. At the very least, any namespaceable (new word!)
object would have to have some sort of __dict__ or equivalent (the built-in
`vars` would have to work on them), because otherwise I don't see how they
could store attributes.
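For example, an object using __slots__ has no __dict__, so there would be nowhere for the namespace's dotted keys to live; setting arbitrary attributes on such an object already fails today for exactly that reason:

```python
class Slotted:
    __slots__ = ("x",)           # no per-instance __dict__

class Plain:
    pass

s, p = Slotted(), Plain()

try:
    s.anything = 1               # nowhere to store it
except AttributeError as exc:
    print("slotted:", exc)

p.anything = 1                   # fine: it lands in p.__dict__
print("plain:", vars(p))         # {'anything': 1}
```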

> Also, it occurs to me that if I can declare a namespace in any object, I
> might be tempted to (or might inadvertently) monkey patch external objects
> with it. Any thoughts on guarding against that, or is this "we're adults
> here" case?
>

Nothing stops you from monkey-patching objects right now. Declaring
namespaces on arbitrary live objects you've obtained from who-knows-where
would have all the same benefits to code conciseness and clarity as
declaring them on `self` within a method, versus conventional
monkey-patching. You could, for instance, group all the attributes you're
monkey-patching under a single namespace with a descriptive name
(namespace custom? namespace extension? namespace monkey???) to keep track
more easily of what belongs to the original object and what doesn't.
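You can approximate that grouping today, of course, e.g. by hanging a single types.SimpleNamespace off the object; all the names below (`LibraryThing`, `extension`, etc.) are made up for illustration:

```python
from types import SimpleNamespace

class LibraryThing:              # stand-in for some third-party class
    def area(self):
        return 42

obj = LibraryThing()

# Group every patched-in attribute under one descriptively named
# container, so it's obvious what isn't part of the original object.
obj.extension = SimpleNamespace()
obj.extension.cached_area = obj.area()
obj.extension.patched_by = "our_plugin"

print(vars(obj))                 # only one new top-level name: 'extension'
print(obj.extension.cached_area) # 42
```

The difference is that `namespace` would make this pattern declarative and first-class rather than ad hoc.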

Monkey-patching is often considered an anti-pattern, but sometimes there's
a library that does 99% of what you need but without that last 1% it's
worthless for your use-case. Rather than forking and rewriting the entire
library yourself, sometimes monkey-patching is a better solution.
Practicality over purity, right?

So yes, this definitely falls under the "consenting adults" category.

> A problem I sense here is the fact that the interpreter would always need
> to attempt to resolve "A.B.C" as getattr(getattr(A, "B"), "C") and
> getattr(A, "B.C"). Since the proxy would be there to serve up the
> namespace's attributes, why not just let it and do away with "B.C" in
> A.__dict__? What will the performance costs be of attempting to get an
> attribute in two calls instead of one?
>

Well, the performance would be, at worst, identical to that of a chain of
attribute lookups today, which also has to be done in multiple steps, one
at a time. We want to stick to existing Python semantics as much as
possible so that the change is minimally invasive (as Rob Cliffe pointed
out, this is a proposal for syntactic sugar, so it probably doesn't warrant
changes to the semantics of something so fundamental to the language as
attribute lookups). But hey, if there's some room for optimization
under the hood that would allow nested namespace lookups to be done in a
single step in certain situations, undetectably to an end user of the
language, then that would be fine too, assuming it didn't add too much
maintenance burden.

I somewhat doubt it would be possible to do that though, because the
namespace proxies would be python objects like any other. You can pass
references to them around.

So in the case of:

namespace A:
    namespace B:
        C = "foo"

You could just grab a reference to B:

>>> ref = A.B

and then later on look up C on the reference:

>>> ref.C  # this looks up vars(sys.modules[__name__])['A.B.C']
'foo'

Attribute lookups on namespace proxies behave exactly like you would
expect any similar chain of attribute lookups to behave. No different from:

class A:
    class B:
        C = "foo"


> So, if in a nested scenario, A.B.C.D, I'm trying to understand the
> combination of getattr calls to resolve D. Would it just still be two
> attempts, getattr(A, "B.C.D") and getattr(getattr(getattr(A, "B"), "C"),
> "D")? If it were to become a Cartesian product of calls, there'll likely be
> a problem. 🤔️
>

The only way you could ever resolve D in a single attribute lookup like
getattr(A, "B.C.D") is if you literally typed that statement out verbatim:

>>> getattr(A, "B.C.D")

That would actually work, because `namespace A` would prepend its name to
the attribute lookup and forward it on to its parent scope, serving up
vars(sys.modules[__name__])['A.B.C.D'] ('A.B.C.D' being the key under
which D is stored in the module globals).

But if you just look it up using normal attribute access syntax, like you
would 99% of the time:

>>> A.B.C.D

Then it would be done in three separate lookups, one for each dot, exactly
the way you described: getattr(getattr(getattr(A, "B"), "C"), "D")

Looking B up on `namespace A` gets you `namespace A.B`, looking up C on
`namespace A.B` gets you `namespace A.B.C`, and looking up D on `namespace
A.B.C` finally gets you a reference to whatever D contains.

This isn't a Cartesian product, though. It's just one attribute lookup per
level of nesting, the same as any other form of attribute access.
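Putting the two lookup styles side by side with a toy proxy (hypothetical names again, and a flat dict standing in for the module globals):

```python
class NS:
    """Toy nested-namespace proxy: knows its dotted path and the flat
    scope dict it resolves against."""

    def __init__(self, path, scope):
        object.__setattr__(self, "_path", path)
        object.__setattr__(self, "_scope", scope)

    def __getattr__(self, attr):
        try:
            return self._scope[f"{self._path}.{attr}"]
        except KeyError:
            raise AttributeError(attr) from None


scope = {}
A = NS("A", scope)
scope.update({
    "A.B": NS("A.B", scope),
    "A.B.C": NS("A.B.C", scope),
    "A.B.C.D": "spam",
})

# One lookup per dot, each returning the next proxy:
assert A.B.C.D == "spam"
assert getattr(getattr(getattr(A, "B"), "C"), "D") == "spam"
# And the verbatim single-call form also happens to work, because the
# proxy just prepends its path to whatever name it's given:
assert getattr(A, "B.C.D") == "spam"
```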


As a side-note to everyone who has asked questions and participated in the
discussion so far, thank you for at least entertaining my proposal. I'm
under no delusions about the incredibly high bar that adding a new keyword
to the language represents.
But I still think discussions like this are important! Even if only one in
every hundred such ideas actually has potential, if we don't brainstorm
them and discuss them as a community, we'll never find the ones that are
valuable enough to seriously consider taking forward to a PEP.

I've found that on this list there are a few people who fancy themselves
gatekeepers and will dismiss virtually any new idea that gets posted here
out-of-hand in a way that is often pretty unkind, and without really
offering up much of a rationale. Maybe it's just a fundamental difference
of mindset, but to me, the idea that python is 'finished' and should remain
largely untouched forever is kind of horrifying. Programming languages that
don't develop and evolve just become obsolete. It seems to me that every
recent enhancement to the language that I've been excited about in the
last few years (f-strings, assignment expressions, pattern matching) has
drawn a huge number of people loudly complaining that 'we don't really
need this', and it just baffles me.

Of course we don't *need* syntactic sugar. We could all write code like
this:

>>> (3).__add__((4).__mul__(2))
11

rather than:
>>> 3 + 4*2
11

and

evens = []
for num in range(1, 11):
    if not num % 2:
        evens.append(num)

rather than:

evens = [num for num in range(1, 11) if not num % 2]


But I think it's hard to argue that a language without any syntactic sugar
is better, or for that matter is something anyone would actually enjoy
using.

So yeah, to make this point explicit: *this is a proposal for syntactic
sugar*. It doesn't really add anything you can't do currently, although
many of the things that would be idiomatic and trivial with `namespace`
would be horrible antipatterns without it (and the equivalents available
currently are much more verbose and less performant). But it should
hopefully allow for more organized and DRYer code, and easier use of
namespacing in situations where it makes sense but would otherwise be
non-trivial to add (there are several examples of this in this thread and
in the doc).

Arguing that this is pointless because it doesn't add completely new
functionality misses the mark in the same way that it's not particularly
helpful to argue that listcomps are pointless because for-loops exist. When
they were first introduced many people hated them, and yet they (and other
types of comprehensions) are one of the most beloved features of the
language nowadays.

I can respect that many people will feel that the upsides of this proposal
don't justify the growth in complexity of the language. I obviously
disagree (or else I wouldn't have written all this up!) and am using this
thread to try to convince you otherwise. But even if I'm unsuccessful,
maybe this will put the idea on someone else's radar who might come up with
a better and more compelling suggestion down the road.

Maybe something like how people wanted switch-cases and they were judged to
be not sufficiently worth it, but then led to the match statement later on
(basically switch-cases on steroids that also do your laundry and walk your
dog).

So cheers, everyone.


On Tue, May 4, 2021 at 6:53 PM Rob Cliffe via Python-ideas <
python-ideas@python.org> wrote:

>
>
> On 04/05/2021 14:39, Paul Bryan wrote:
>
> [snip]
>
> I don't pretend to fully understand the proposal and how it would be
> implemented, but IMO adding an overhead (not to mention more complicated
> semantics) to *every* chained attribute lookup is enough to kill the
> proposal, given that it seems to have relatively slight benefits.
> Best wishes
> Rob Cliffe
>
> _______________________________________________
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/4FVAFKULIYLMIBJZIZIJZCKIYSTZROWB/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
_______________________________________________
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/JPMPNUUVAMAMJKP7Y6XTCKJIGFKDWFXA/
Code of Conduct: http://python.org/psf/codeofconduct/
