[Python-ideas] Re: Pre PEP: Python Literals (was custom strings before)
On Tue, 6 Jul 2021, 7:56 am Jim Baker, wrote:

> On Mon, Jul 5, 2021, 2:40 PM Guido van Rossum wrote:
>
>> FWIW, we could make f-strings properly nest too, like you are proposing
>> for backticks. It's just that we'd have to change the lexer. But it would
>> not be any harder than it would be for backticks (since it would be the
>> same algorithm), nor would it be backward incompatible. So this is not an
>> argument for backticks.
>
> Good point. At some point, I was probably thinking of backticks without a
> tag, since JS supports this for their f-string-like scenario. But if we
> always require a tag - so long as it's not a prefix already in use (b, f,
> r, fr, hopefully not forgetting any as I type this email in a parking
> lot...) - then it can be disambiguated using standard quotes.

There's a deferred PEP proposing a resolution to the f-string nesting
limitations: https://www.python.org/dev/peps/pep-0536/#motivation

>> Separately, should there be a way to *delay* evaluation of the templated
>> expressions (like we explored in our private little prototype last year)?
>
> I think so, but probably with an explicit marker on *each* deferred
> expression. I'm in favor of Julia's expression quote, which generally needs
> to be enclosed in parentheses, but is possibly not needed in expression
> braces (at the risk of looking like a standalone format spec).
> https://docs.julialang.org/en/v1/manual/metaprogramming/
>
> So this would look like:
>
> x = 42
> d = deferred_tag"Some expr: {:(x*2)}"
>
> All that is happening here is that this is being wrapped in a lambda, which
> captures any scope lexically as usual. Then, per that experiment you
> mentioned, it's possible to use that scope using fairly standard - or at
> least portable to other Python implementations - metaprogramming, including
> the deferred evaluation of the lambda.
> (No frame walking required!)
>
> Other syntax could work for deferring.
> It reminds me of the old simple implicit lambda proposal:
> https://www.python.org/dev/peps/pep-0312/

The old proposal has more ambiguities to avoid now due to type hinting syntax, but a parentheses-required version could work: "(:call_result)"

Cheers,
Nick.

___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/H6VVZ46MAE74EQXC5PONVIVWIHMSGNMX/
Code of Conduct: http://python.org/psf/codeofconduct/
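The deferred-expression idea can be sketched in current Python. Everything here is hypothetical: the `deferred_tag"... {:(x*2)}"` syntax does not exist, so the sketch shows one plausible desugaring - the tag function receiving text segments plus zero-argument lambdas that capture the enclosing scope lexically, exactly as the post describes:

```python
# Hypothetical desugaring of: d = deferred_tag"Some expr: {:(x*2)}"
# Each {:(expr)} becomes a zero-argument lambda; the tag receives
# the literal text segments plus the deferred thunks.

def deferred_tag(strings, *thunks):
    # Toy tag: produce a callable that renders by invoking each
    # deferred thunk only at render time.
    def render():
        parts = [strings[0]]
        for thunk, text in zip(thunks, strings[1:]):
            parts.append(str(thunk()))
            parts.append(text)
        return "".join(parts)
    return render

x = 42
# What the compiler might emit for deferred_tag"Some expr: {:(x*2)}":
d = deferred_tag(("Some expr: ", ""), lambda: x * 2)
print(d())  # Some expr: 84
x = 10
print(d())  # Some expr: 20  (evaluation really was deferred)
```

No frame walking is needed: the lambda's closure carries the scope, so any Python implementation with ordinary closures could support this.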
[Python-ideas] Re: Pre PEP: Python Literals (was custom strings before)
On Fri, 2 Jul 2021, 5:12 pm Thomas Güttler, wrote:

> Hi Nick and all other Python ideas friends,
>
> yes, you are right. There is not much difference between PEP-501 or my
> proposal.
>
> One argument why I would like to prefer backticks: some IDEs detect that
> you want to use an f-string automatically. You type:
>
> name = 'Peter'
> print('Hello {name...
>
> and the IDE automatically adds the missing "f" in front of the string:
>
> name = 'Peter'
> print(f'Hello {name...
>
> This is a handy feature (of PyCharm), which would not work reliably if
> there are two different prefixes.
>
> ---
>
> You mentioned these things:
>
> eager rendering: I think deferred rendering would increase the complexity
> a lot. And I think it is not needed.

Eager rendering is f-strings. Any templating proposal necessarily involves a delayed rendering step, when the template is combined with the interpolated values.

> runtime value interpolation: It is up to the receiver of
> types.InterpolationTemplate to handle the data structure.

I really meant runtime template parsing here (i.e. str.format).

> dedicated templating libraries: One temp after the other. I think HTML and
> SQL libraries would adapt as soon as the foundation is available.

The existence of i-strings likely wouldn't change the syntax of jinja2 templates, Django templates, SQL Alchemy, pandas, etc.

> I would be happy if PEP-501 would come true.

So would I, but I still don't have a compelling answer to the "but it's yet another subtly different way to do it" objection.

Cheers,
Nick.
[Python-ideas] Re: Pre PEP: Python Literals (was custom strings before)
Stephen J. Turnbull wrote:
> But a PEP 501 i-string "just works" nicely:
>
> load_warning = i'Load is too high: {load}'
> while (theres_work_to_do_matey):
>     if load > max_load:
>         logging.warn(load_warning)
>
> (This assumes a future version of logging.warn that calls str() on the
> first argument if it is an InterpolationTemplate.)

A "Why this rather than PEP 501's interpolation templates?" section is the main thing I was looking for in the PEP and I didn't find it.

If the proposal is just a variant on PEP 501 with the syntax changed from i"template" to `template` and the template type name changed from InterpolationTemplate to TemplateLiteral, it doesn't need to be a new PEP; I can just explicitly reject those spelling options in PEP 501. (The reasons for that PEP's deferral unfortunately still hold, though - eager rendering, runtime value interpolation, and dedicated templating libraries together cover enough cases that the motivation for introducing the semantic complexity of yet another templating option gets weakened dramatically.)

If the differences between the proposals run deeper than that, then the proposed new PEP needs to spell them out.
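PEP 501's types.InterpolationTemplate is not implemented, so the deferred-rendering behaviour Stephen describes can only be approximated. The stand-in class below is entirely hypothetical; it just demonstrates the key property - values are captured when the template is created, but the string is rendered only when str() is finally called:

```python
# Minimal stand-in for PEP 501's proposed InterpolationTemplate.
# The class name and capture mechanism here are assumptions for
# illustration, not the PEP's actual implementation.

class FakeInterpolationTemplate:
    def __init__(self, template, scope):
        self._template = template
        self._scope = scope  # values captured eagerly, as in PEP 501

    def __str__(self):
        # Rendering is delayed until the template is actually needed.
        return self._template.format(**self._scope)

load = 98
max_load = 90

# Roughly what i'Load is too high: {load}' might capture:
load_warning = FakeInterpolationTemplate('Load is too high: {load}',
                                         {'load': load})

if load > max_load:
    # A template-aware logging.warn would call str() only on emit.
    print(str(load_warning))  # Load is too high: 98
```

This is why a template-aware logging call is cheap when the message is filtered out: the format step simply never runs.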
[Python-ideas] Re: [Python-Dev] Have virtual environments led to neglect of the actual environment?
On Wed, 24 Feb 2021 at 10:49, Random832 wrote:

> I was reading a discussion thread
> <https://gist.github.com/tiran/2dec9e03c6f901814f6d1e8dad09528e> about
> various issues with the Debian packaged version of Python, and the following
> statement stood out for me as shocking:
>
> Christian Heimes wrote:
>> Core dev and PyPA have spent a lot of effort in promoting venv because we
>> don't want users to break their operating system with sudo pip install.
>
> I don't think sudo pip install should break the operating system. And I think
> if it does, that problem should be solved rather than merely advising users
> against using it. And why is it, anyway, that distributions whose package
> managers can't coexist with pip-installed packages don't ever seem to get the
> same amount of flak for "damaging python's brand" as Debian is getting from
> some of the people in the discussion thread? Why is it that this community is
> resigned to recommending a workaround when distributions decide the
> site-packages directory belongs to their package manager rather than pip,
> instead of bringing the same amount of fiery condemnation of that practice as
> we apparently have for *checks notes* splitting parts of the stdlib into
> optional packages? Why demand that pip be present if we're not going to
> demand that it works properly?

The reason venv is promoted as heavily as it is is that it's the only advice that can be given that is consistently correct regardless of the operating system the user is running locally, whereas safely using a system-wide Python installation varies a lot depending on whether you're on Windows, Mac OS X, or Linux (let alone some other platform outside the big 3 desktop clients).
conda is also popular for the same reason: while the instructions for installing conda in the first place are OS-dependent, once it is up and running you can use consistent platform-independent conda commands rather than having to caveat all your documentation with platform-specific instructions.

Apple moved all of their dynamic language interpreter implementations to inaccessible-by-default locations so Mac OS X users would stop using them to run their own code.

Alongside that, we *have* worked with the Linux distro vendors to help make "sudo pip install" safe (e.g. [1]), but that only helps if a user is running a new enough version of a distro that has participated in that work.

However, while the option of running "platform native" environments will never go away, and work will continue to make it less error-prone, the level of knowledge of your specific OS's idiosyncrasies that it requires is almost certainly going to remain too high for it to ever again become the default recommendation that it used to be.

Cheers,
Nick.

[1] https://fedoraproject.org/wiki/Changes/Making_sudo_pip_safe
(Note: this change mitigated some aspects of the problem in a way similar to what Debian does, but still doesn't solve it completely, as custom Python builds may still make arbitrary changes)

P.S. "But what about user site-packages?" you ask. Until relatively recently, Debian didn't put the user's local bin directory on the system path by default, so commands provided by user-level package installs didn't work without the user adjusting their PATH. The CPython Windows installer also doesn't adjust PATH by default (for good reasons). And unlike a venv, "python -m" doesn't let you ensure that the code executed is the version installed in user site-packages - it could be coming from a directory earlier in sys.path.
-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
[Python-ideas] Re: Where should we put the python-ideas HOWTO?
C. Titus Brown wrote:
> put it in the current dev guide somewhere, just so it lands in version
> control. Then iterate on both it and the dev guide. My first thought would
> be to merge the content with this document:
> https://github.com/python/devguide/blob/master/langchanges.rst

This option seems like a good one to me - there are plenty of other places where the information should be referenced (e.g. from https://devguide.python.org/communication/), but the dev guide is definitely intended to cover more than just the practical aspects of implementing already approved changes; it's also meant to provide guidance on having productive discussions when deciding whether or not to make a particular change.

Cheers,
Nick.
Re: [Python-ideas] Redefining method
On 31 July 2018 at 01:35, Jonathan Fine wrote:

> Hi Jamesie
>
> Thank you for your question. You asked why not
>
>> c = MyClass
>> o = c()
>>
>> def c.foo(cls): ...
>> def o.bar(self): ...
>
> I've had the same query, but never had the courage to ask. So thank you
> for asking, and for giving me a chance to share my thoughts.

It's essentially due to the fact that while we deliberately allow runtime monkeypatching (as it's sometimes the best available answer), we also strongly encourage the idea of treating it as a last-resort option (outside specific use cases like testing).

So if you want to define methods on a class, the strongly preferred place to define them is inside the class definition, where they're easy for future maintainers of that class to find. If you need to replace them for some reason, it will preferably be within a temporary bounded scope, using a tool like unittest.mock.patch, rather than as a permanent change that affects every other use of the class within the process.

Cheers,
Nick.

P.S. While it's *not* explicitly part of Python's design rationale, http://connascence.io/locality.html and the rest of that site provide some good info on the kinds of problems that "action at a distance" effects, like monkeypatching class definitions, can cause in a code base.

-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
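The contrast between a permanent monkeypatch and the temporary bounded replacement recommended above can be shown with unittest.mock.patch. The class and method names here are illustrative:

```python
# Permanent monkeypatching vs. a scoped, auto-reverting replacement.
from unittest import mock

class MyClass:
    def greet(self):
        return "hello"

obj = MyClass()

# Discouraged: a permanent, process-wide change would look like
#   MyClass.greet = lambda self: "patched forever"
# and would affect every user of MyClass until the process exits.

# Preferred: a bounded replacement that automatically reverts.
with mock.patch.object(MyClass, "greet", lambda self: "patched"):
    assert obj.greet() == "patched"

# Outside the with block the original definition is back.
assert obj.greet() == "hello"
```

Because the patch reverts on block exit, future maintainers only have to reason about the altered behaviour within that visible scope.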
Re: [Python-ideas] The future of Python parallelism. The GIL. Subinterpreters. Actors.
On 18 July 2018 at 05:35, Eric Snow wrote:

> In order to make all this work the missing piece is a mechanism by
> which the decref (#3) happens under the original interpreter. At the
> moment Emily Morehouse and I are pursuing an approach that extends the
> existing ceval "pending call" machinery currently used for handling
> signals (see Py_AddPendingCall). The new [*private*] API would work
> the same way but on a per-interpreter basis rather than just the main
> interpreter. This would allow one interpreter to queue up a decref to
> happen later under another interpreter.
>
> FWIW, this ability to decref an object under a different interpreter
> is a blocker right now for a number of things, including supporting
> buffers in PEP 554 channels.

Aw, I guess the original idea of just doing an active interpreter context switch in the current thread around the shared object decref operation didn't work out? That's a shame. I'd be curious as to the technical details of what actually failed in that approach, as I would have expected it to at least work, even if the performance might not have been wonderful.

(Although thinking about it further now, given a per-interpreter locking model, I suspect there could be some wonderful opportunities for cross-interpreter deadlocks that we didn't consider in our initial design sketch...)

Cheers,
Nick.

-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
Re: [Python-ideas] Idea: Deferred Default Arguments?
On 20 July 2018 at 22:45, Steven D'Aprano wrote:

> Perhaps you mean duplicate, or repeat, or copy. But surely they're not
> redefined -- then they would have different values. Being able to
> redefine the defaults in a wrapper function is a feature.
>
> Putting aside the terminology, I think this is a minor annoyance: DRY
> violations when setting default values.

FWIW, I tend to handle this problem the same way I handle other DRY problems with magic constants: give the default value a name and either export it directly, or export an API for retrieving it.

If that results in name sprawl ("But now I have 15 defaults to export!"), then I take it as a hint that I may not be modeling my data correctly, and am missing a class definition or two somewhere.

Cheers,
Nick.

-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
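One way to apply the "give the default value a name" advice: export the default as a module-level constant so wrappers reuse the name instead of repeating the literal. The function and constant names here are illustrative:

```python
# Naming a default so wrapper functions don't duplicate the literal.

DEFAULT_TIMEOUT = 10.0  # exported alongside the functions that use it

def fetch(url, timeout=DEFAULT_TIMEOUT):
    return f"fetching {url} with timeout={timeout}"

def fetch_with_retries(url, retries=3, timeout=DEFAULT_TIMEOUT):
    # The wrapper names the same default rather than repeating 10.0,
    # so changing DEFAULT_TIMEOUT updates both signatures' behaviour.
    result = None
    for _ in range(retries):
        result = fetch(url, timeout=timeout)
    return result

print(fetch_with_retries("https://example.com"))
```

Callers that need a different value still pass `timeout=` explicitly; only the shared fallback lives in one place.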
Re: [Python-ideas] Add the imath module
On 12 July 2018 at 23:41, Serhiy Storchaka wrote:

> 12.07.18 16:15, Robert Vanden Eynde wrote:
>> About the name, why not intmath?
>
> Because cmath. But if most core developers prefer intmath, I have no
> objections.

My initial reaction from just the subject title was "But we already have cmath, why would we need imath?", based on the fact that mathematicians write complex numbers as "X + Yi", rather than the "X + Yj" that Python borrowed from electrical engineering (where "i" already had a different meaning as the symbol for AC current).

Calling the proposed module "intmath" instead would be clearer to me (and I agree with the rationale that as the number of int-specific functions increases, separating them out from the more float-centric math module makes sense).

Beyond that, I think the analogy with the statistics module is a good one:

1. There are a number of integer-specific algorithms where naive implementations are going to be slower than they need to be, subtly incorrect in some cases, or both. With a standard library module, we can provide a robust test suite, and pointers towards higher-performance alternatives for folks that need them.

2. For educational purposes, being able to introduce the use cases for a capability before delving into the explanation of how that capability works can be quite a powerful technique (the standard implementations can also provide a cross-check on the accuracy of student implementations).

And in the intmath case, there are likely to be more opportunities to delete private implementations from other standard library modules in favour of the public intmath functions.

Cheers,
Nick.

-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
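As a concrete instance of point 1, integer square roots are a classic case where the naive float-based implementation is subtly incorrect for large inputs, while an all-integer Newton iteration - the kind of function such a module could standardize, and which CPython later did add as math.isqrt in 3.8 - stays exact:

```python
# Naive vs. exact integer square root.

def isqrt_naive(n):
    # Rounds n through a 53-bit float, losing precision for large n.
    return int(n ** 0.5)

def isqrt_exact(n):
    # All-integer Newton iteration: converges from an overestimate
    # and never leaves exact integer arithmetic.
    if n < 0:
        raise ValueError("isqrt requires a non-negative integer")
    if n == 0:
        return 0
    x = 1 << ((n.bit_length() + 1) // 2)  # initial overestimate
    while True:
        y = (x + n // x) // 2
        if y >= x:
            return x
        x = y

n = (3 ** 50) ** 2  # a perfect square too large for exact float math
print(isqrt_naive(n) == 3 ** 50)  # False: float rounding error
print(isqrt_exact(n) == 3 ** 50)  # True
```

The naive version fails silently, which is exactly why a well-tested stdlib implementation is valuable.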
Re: [Python-ideas] The future of Python parallelism. The GIL. Subinterpreters. Actors.
On 11 July 2018 at 00:31, David Foster wrote:

> I was not aware of PyParallel. The PyParallel "parallel thread"
> line-of-execution implementation is pretty interesting. Trent, big kudos to
> you on that effort.
>
> Since you're speaking in the past tense and said "but we're not doing it
> like that", I infer that the notion of a parallel thread was turned down for
> integration into CPython, as that appears to have been the original goal.
>
> However I am unable to locate a rationale for why that integration was
> turned down. Was it deemed to be too complex to execute, perhaps in the
> context of providing C extension compatibility? Was there a desire to see a
> similar implementation on Linux as well as Windows? Some other reason?

It was never extended beyond Windows, and a Windows-only solution doesn't meet the needs of a lot of folks interested in more efficient exploitation of multiple local CPU cores.

It's still an interesting design concept though, especially for problems that can be deconstructed into a setup phase (read/write main thread), and a parallel operation phase (ephemeral worker threads that store all persistent state in memory-mapped files, or otherwise outside the current process).

Cheers,
Nick.

-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
Re: [Python-ideas] The future of Python parallelism. The GIL. Subinterpreters. Actors.
On 9 July 2018 at 04:27, David Foster wrote:

> I'd like to solicit some feedback on what might be the most efficient way to
> make forward progress on efficient parallelization in Python inside the same
> OS process. The most promising areas appear to be:
>
> 1. Make the current subinterpreter implementation in Python have more
> complete isolation, sharing almost no state between subinterpreters. In
> particular not sharing the GIL. The "Interpreter Isolation" section of PEP
> 554 enumerates areas that are currently shared, some of which probably
> shouldn't be.
>
> 2. Give up on making things work inside the same OS process and rather focus
> on implementing better abstractions on top of the existing multiprocessing
> API so that the actor model is easier to program against. For example,
> providing some notion of Channels to communicate between lines of execution,
> a way to monitor the number of Messages waiting in each channel for
> throughput profiling and diagnostics, Supervision, etc. In particular I
> could do this by using an existing library like Pykka or Thespian and
> extending it where necessary.

Yep, that's basically the way Eric and I and a few others have been thinking. Eric started off this year's language summit with a presentation on the topic: https://lwn.net/Articles/754162/

The intent behind PEP 554 is to eventually get to a point where each subinterpreter has its own dedicated eval loop lock, and the GIL either disappears entirely (replaced by smaller purpose-specific locks) or becomes a read/write lock (where write access is only needed to adjust certain state that is shared across subinterpreters).

On the multiprocessing front, it could be quite interesting to attempt to adapt the channel API from PEP 554 to the https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing.sharedctypes data sharing capabilities in the modern multiprocessing module.
Also of relevance is Antoine Pitrou's work on a new version of the pickle protocol that allows for out-of-band data sharing to avoid redundant memory copies: https://www.python.org/dev/peps/pep-0574/

Cheers,
Nick.

-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
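The sharedctypes capability mentioned above can be illustrated with today's multiprocessing API. This is ordinary multiprocessing, not the proposed PEP 554 channel API, and the worker function is illustrative - the point is just that workers mutate shared memory in place instead of pickling results back:

```python
# Sharing a ctypes array across processes with multiprocessing.
from multiprocessing import Process, sharedctypes

def square_slot(shared, index):
    # Runs in a child process, writing directly into shared memory.
    shared[index] = shared[index] ** 2

if __name__ == "__main__":
    data = sharedctypes.RawArray('i', [1, 2, 3, 4])
    workers = [Process(target=square_slot, args=(data, i))
               for i in range(len(data))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(list(data))  # [1, 4, 9, 16]
```

A channel-style API layered over this would add message-passing semantics on top of the raw shared buffer.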
Re: [Python-ideas] Calling python from C completely statically
On 9 July 2018 at 03:10, Alberto Garcia wrote:

> Hey there,
>
> Yes, the part of having the pyd modules built into the library is already
> done. I followed the instructions in the README. What I would like to know
> now is how to embed the non-frozen Python (py) modules. Can you guys please
> point me in the right direction?

The gist is to:

1. take the entire Lib directory and put it in a zip archive
2. use the approach demonstrated in cx_freeze to point sys.path in your static executable at that zip archive
3. adjust your C code to point sys.path back at the executable itself, and then combine your executable and the zip archive into a single contiguous file (similar to what zipapp does with its helper script and app archive)

There are likely to still be rough edges when doing that, since this isn't a well-tested configuration. When all else fails, find the part of the source code responsible for any error messages you're seeing, and try to work out if there's a setting you can tweak to avoid hitting that code path.

Cheers,
Nick.

-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
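Steps 1 and 2 above rely on Python's built-in ability to import modules from a zip archive placed on sys.path, which is what the embedded interpreter's startup code would point at. A small self-contained demonstration (the module name and archive path are illustrative):

```python
# Demonstrating zip-archive imports, the mechanism behind steps 1-2.
import os
import sys
import tempfile
import zipfile

workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, "bundled_lib.zip")

# Step 1: put the library code in a zip archive.
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("mylib.py", "ANSWER = 42\n")

# Step 2: point sys.path at the archive, as the embedded
# interpreter's startup code would do for the bundled stdlib.
sys.path.insert(0, archive)
import mylib

print(mylib.ANSWER)  # 42
```

Step 3 then appends such an archive to the executable itself, so the single file serves as both the binary and the import source.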
Re: [Python-ideas] Calling python from C completely statically
On 8 July 2018 at 21:34, Paul Moore wrote:

> This question is probably more appropriate for python-list, but yes,
> you certainly can do this. The "embedded" distributions of Python for
> Windows essentially do this already. IIRC, they are only available for
> Python 3.x, so you may find you have some hurdles to overcome if you
> have to remain on Python 2.7, but in principle it's possible.
>
> One further point you may need to consider - a proportion of the
> standard library is in the form of shared C extensions (PYDs on
> Windows - I don't know if you're using Windows or Unix from what you
> say above). You can't (without significant extra work) load a binary
> extension directly from a zip file, so you'll need to either do that
> extra work (which is platform-specific and fragile, I believe) or be
> prepared to ship supporting DLLs alongside your application (this is
> the approach the embedded distribution takes).

That's the part of the problem that Alberto's static linking solves - all of the standard library's extension modules are top-level ones (at least as far as I am aware), so we support building them as statically linked builtin modules instead (we just don't do it routinely, because we don't have any great desire to make the main executable even larger than it already is).

Cheers,
Nick.

-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
Re: [Python-ideas] grouping / dict of lists
On 1 July 2018 at 15:18, Chris Barker via Python-ideas wrote:

> On Fri, Jun 29, 2018 at 10:53 AM, Michael Selik wrote:
>> I've drafted a PEP for an easier way to construct groups of elements from
>> a sequence. https://github.com/selik/peps/blob/master/pep-.rst
>
> I'm really warming to the "Alternate: collections.Grouping" version - I
> really like this as a kind of custom mapping, rather than "just a function"
> (or alternate constructor) - and I like your point that it can have a bit
> of functionality built in other than on construction.
>
> But I think it should be more like the other collection classes - i.e. a
> general-purpose class that can be used for grouping, but also used more
> general-purpose-y as well. That way people can do their "custom" stuff (key
> function, etc.) with comprehensions.
>
> The big differences are a custom __setitem__:
>
> def __setitem__(self, key, value):
>     self.setdefault(key, []).append(value)
>
> And the __init__ and update would take an iterable of (key, value) pairs,
> rather than a single sequence.
>
> This would get away from the itertools.groupby approach, which I find kinda
> awkward:
>
> * How often do you have your data in a single sequence?
> * Do you need your keys (and values!) to be sortable?
> * Do we really want folks to have to be writing custom key functions and/or
>   lambdas for really simple stuff?
> * and you may need to "transform" both your keys and values
>
> I've enclosed an example implementation, borrowing heavily from Michael's
> code. The test code has a couple examples of use, but I'll put them here
> for the sake of discussion.
>
> Michael had:
>
> Grouping('AbBa', key=c.casefold)
>
> with my code, that would be:
>
> Grouping(((c.casefold(), c) for c in 'AbBa'))
>
> Note that the key function is applied outside the Grouping object; it
> doesn't need to know anything about it - and then users can use an
> expression in a comprehension rather than a key function.
> This looks a tad clumsier with my approach, but this is a pretty contrived
> example - in the more common case [*], you'd be writing a bunch of lambdas,
> etc., and I'm not sure there is a way to get the values customized as well,
> if you want that (without applying a map later on).
>
> Here is the example that the OP posted that kicked off this thread:
>
> In [37]: student_school_list = [('Fred', 'SchoolA'),
>    ...:                        ('Bob', 'SchoolB'),
>    ...:                        ('Mary', 'SchoolA'),
>    ...:                        ('Jane', 'SchoolB'),
>    ...:                        ('Nancy', 'SchoolC'),
>    ...:                        ]
>
> In [38]: Grouping(((item[1], item[0]) for item in student_school_list))
> Out[38]: Grouping({'SchoolA': ['Fred', 'Mary'],
>                    'SchoolB': ['Bob', 'Jane'],
>                    'SchoolC': ['Nancy']})

Unpacking and repacking the tuple would also work:

Grouping(((school, student) for student, school in student_school_list))

Cheers,
Nick.

-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
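A runnable sketch of the Grouping subclass described above, combining the custom __setitem__ with an __init__/update that consume (key, value) pairs. This is a simplified reconstruction for illustration, not Michael's actual attached code:

```python
# Minimal Grouping: a dict whose __setitem__ appends into groups.

class Grouping(dict):
    def __init__(self, pairs=()):
        super().__init__()
        self.update(pairs)

    def __setitem__(self, key, value):
        # Append to the group rather than replacing it.
        # (dict.setdefault inserts the list directly, so this
        # does not recurse back into __setitem__.)
        self.setdefault(key, []).append(value)

    def update(self, pairs):
        for key, value in pairs:
            self[key] = value

student_school_list = [('Fred', 'SchoolA'), ('Bob', 'SchoolB'),
                       ('Mary', 'SchoolA'), ('Jane', 'SchoolB'),
                       ('Nancy', 'SchoolC')]
groups = Grouping((school, student)
                  for student, school in student_school_list)
print(groups)
# {'SchoolA': ['Fred', 'Mary'], 'SchoolB': ['Bob', 'Jane'], 'SchoolC': ['Nancy']}
```

Because the constructor consumes (key, value) pairs, any key or value transformation can be expressed directly in the generator expression, with no key-function parameter needed.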
Re: [Python-ideas] grouping / dict of lists
On 30 June 2018 at 16:25, Guido van Rossum wrote:

> On Fri, Jun 29, 2018 at 3:23 PM Michael Selik wrote:
>> I included an alternate solution of a new class, collections.Grouping,
>> which has some advantages. In addition to having less of that
>> "heavy-handed" feel to it, the class can have a few utility methods that
>> help handle more use cases.
>
> Hm, this actually feels heavier to me. But then again I never liked or
> understood the need for Counter -- I prefer basic data types and helper
> functions over custom abstractions. (Also your description doesn't do it
> justice; you describe a class using a verb phrase, "consume a sequence and
> construct a Mapping".) The key to Grouping seems to me that it is a dict
> subclass with a custom constructor. But you don't explain why a subclass is
> needed, and in that sense I like the other approach better.

I'm not sure if the draft was updated since you looked at it, but it does mention that one benefit of the collections.Grouping approach is being able to add native support for mapping a callable across every individual item in the collection (ignoring the group structure), as well as for applying aggregate functions to reduce the groups to single values in a standard dict.

Delegating those operations to the container API that way then means that other libraries can expose classes that implement the grouping API, but with a completely different backend storage model.

> But I still think it is much better off as a helper function in itertools.

I thought we actually had an open enhancement proposal for adding a "defaultdict.freeze" operation that switched it over to raising KeyError the same way a normal dict does, but I can't seem to find it now.

Cheers,
Nick.

-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
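The "freeze" behaviour mentioned at the end doesn't exist on defaultdict, but it can be approximated today: group into a defaultdict, then convert to a plain dict once the groups are built, so lookups of missing keys raise KeyError again instead of silently creating empty groups:

```python
# Approximating "defaultdict.freeze" with a plain-dict conversion.
from collections import defaultdict

pairs = [('a', 1), ('b', 2), ('a', 3)]

groups = defaultdict(list)
for key, value in pairs:
    groups[key].append(value)  # missing keys create groups here

frozen = dict(groups)  # "freeze": no more implicit empty groups
print(frozen)  # {'a': [1, 3], 'b': [2]}

try:
    frozen['missing']
except KeyError:
    print("missing keys raise KeyError again")
```

This two-phase pattern (mutable while building, plain mapping afterwards) is essentially what a built-in freeze operation would formalize.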
Re: [Python-ideas] Add a __cite__ method for scientific packages
On 29 June 2018 at 12:14, Nathaniel Smith wrote:

> On Thu, Jun 28, 2018 at 2:25 PM, Andrei Kucharavy wrote:
>> As for the list, reserving a __citation__/__cite__ for packages at the same
>> level as __version__ is now reserved and adding a citation()/cite() function
>> to the standard library seemed large enough modifications to warrant
>> searching a buy-in from the maintainers and the community at large.
>
> There isn't actually any formal method for registering special names
> like __version__, and they aren't treated specially by the language.
> They're just variables that happen to have a funny name. You shouldn't
> start using them willy-nilly, but you don't actually have to ask
> permission or anything.

The one caveat on dunder names is that we expressly exempt them from our usual backwards compatibility guarantees, so it's worth getting some level of "No, we're not going to do anything that would conflict with your proposed convention" at the language design level.

> And it's not very likely that someone else
> will come along and propose using the name __citation__ for something
> that *isn't* a citation :-).

Aye, in this case I think you can comfortably assume that we'll happily leave the "__citation__" and "__cite__" dunder names alone unless/until there's a clear consensus in the scientific Python community to use them a particular way.

And even then, it would likely be Python package installers like pip, Python environment managers like pipenv, and data analysis environment managers like conda that would handle the task of actually consuming that metadata (in whatever form it may appear). Having your citation management support depend on which version of Python you were using seems like it would be mostly a source of pain rather than beneficial.

Cheers,
Nick.
-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
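What the proposed convention might look like in practice. Both the __citation__ attribute and the cite() helper are proposals under discussion, not existing stdlib features, and the package name and citation text below are placeholders:

```python
# Sketch of the proposed __citation__ convention. Nothing here is
# standardized: the attribute name, the helper, and the citation
# string are all hypothetical.
import types

# Simulate a scientific package module declaring its metadata.
mypkg = types.ModuleType("mypkg")
mypkg.__version__ = "1.0"
mypkg.__citation__ = ("Placeholder Author et al. (2018). "
                      "mypkg: a hypothetical analysis package.")

def cite(module):
    """Proposed helper: return a module's citation metadata, if any."""
    citation = getattr(module, "__citation__", None)
    if citation is None:
        raise LookupError(f"{module.__name__} declares no citation")
    return citation

print(cite(mypkg))
```

Since this is just an attribute-naming convention, tools like pip or conda could consume it without any interpreter support, which is the point made above.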
Re: [Python-ideas] Correct way for writing Python code without causing interpreter crashes due to parser stack overflow
On 27 June 2018 at 17:04, Fiedler Roman wrote:

> Hello List,
>
> Context: we are conducting machine learning experiments that generate some
> kind of nested decision trees. As the tree includes specific decision
> elements (which require custom code to evaluate), we decided to store the
> decision tree (result of the analysis) as generated Python code. Thus the
> decision tree can be transferred to sensor nodes (detectors) that will then
> filter data according to the decision tree when executing the given code.
>
> Tracking down a crash when executing that generated code, we came to the
> following simplified reproducer that will cause the interpreter to crash (on
> Python 2/3) when loading the code before execution is started:
>
> #!/usr/bin/python2 -BEsStt
> A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A(None)])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])
>
> The error message is:
>
> s_push: parser stack overflow
> MemoryError
>
> Despite the machine having 16GB of RAM, the code cannot be loaded. Splitting
> it into two lines using an intermediate variable is the current workaround to
> still get it running after manual adaptation.
This seems like it may indicate a potential problem in the pgen2 parser generator, since the compilation is failing at the original parse step, but checking the largest version of this that CPython can parse on my machine gives a syntax tree of only ~77kB: >>> tree = parser.expr("A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A(None)])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])") >>> sys.getsizeof(tree) 77965 Attempting to print that hints more closely at the potential problem: >>> tree.tolist() Traceback (most recent call last): File "<stdin>", line 1, in <module> RecursionError: maximum recursion depth exceeded while getting the repr of an object As far as I'm aware, the CPython parser is using the actual C stack for recursion, and is hence throwing MemoryError because it ran out of stack space to recurse into, not because it ran out of memory in general (RecursionError would be a more accurate exception). Trying your original example in PyPy (which uses a different parser implementation) suggests you may want to try using that as your execution target before resorting to switching languages entirely: >>>> tree2 = parser.expr("A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A([A(None)])])])])])])])])])]]))])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])])") >>>> len(tree2.tolist()) 5 Alternatively, you could explore mimicking the way that scikit-learn saves its trained models (which I believe is a variation on "use pickle", but I've never actually gone and checked for sure). Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
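The "split it into two lines" workaround generalises: if the code generator emits the tree as a flat sequence of statements instead of one deeply nested literal, the parser's recursion depth stays constant no matter how deep the tree gets. A minimal sketch, using a hypothetical stand-in `A` for the generated decision element:

```python
# Hypothetical stand-in for the generated decision element; the real
# generated code would use the project's own evaluation class.
def A(arg):
    return ("A", arg)

# Instead of one deeply nested literal (which overflows the parser
# stack), build the same structure with a flat loop: each statement
# nests only one level, so parsing depth stays constant regardless
# of how deep the final structure is.
DEPTH = 500
node = A(None)
for _ in range(DEPTH):
    node = A([node])
```

The same approach works for generated code: emit `node = A([node])` once per level rather than emitting the whole tree as a single expression.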
Re: [Python-ideas] staticmethod and classmethod should be callable
On 21 June 2018 at 03:27, Serhiy Storchaka wrote: > 20.06.18 20:07, Guido van Rossum пише: >> >> Maybe we're misunderstanding each other? I would think that calling the >> classmethod object directly would just call the underlying function, so this >> should have to call utility() with a single arg. This is really the only >> option, since the descriptor doesn't have any context. >> >> In any case it should probably `def utility(cls)` in that example to >> clarify that the first arg to a class method is a class. > > > Sorry, I missed the cls parameter in the definition of utility(). > > class Spam: > @classmethod > def utility(cls, arg): > ... > > value = utility(???, arg) > > What should be passed as the first argument to utility() if the Spam class > (as well as its subclasses) is not defined still? That would depend on the definition of `utility` (it may simply not be useful to call it in the class body, which is also the case with most instance methods). The more useful symmetry improvement is to the consistency of behaviour between instance methods on class instances and the behaviour of class methods on classes themselves. So I don't think this is a huge gain in expressiveness, but I do think it's a low cost consistency improvement that should make it easier to start unifying more of the descriptor handling logic internally. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
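The "underlying function" Guido refers to is already reachable today via the descriptor's `__func__` attribute; the proposal would effectively make calling the descriptor itself behave the same way. A sketch of the status quo:

```python
class Spam:
    @classmethod
    def utility(cls, arg):
        return (cls.__name__, arg)

# The classmethod object stored in the class namespace is a descriptor;
# its __func__ attribute is the plain underlying function, which can be
# called with an explicit first argument.
descriptor = Spam.__dict__["utility"]
plain = descriptor.__func__
```

Under the proposal, `descriptor(Spam, arg)` would simply call `plain(Spam, arg)` directly, mirroring the way a plain function in a class body is callable before the class exists.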
Re: [Python-ideas] POPT (Python Object Provider Threads)
On 20 June 2018 at 00:47, Martin Bammer wrote: > If this idea is well implemented I expect a big performance improvement for > all Python applications. Given the free lists already maintained for several builtin types in the reference implementation, I suspect you may be disappointed on that front :) (While object creation overhead certainly isn't trivial, the interpreter's already pretty aggressive about repurposing previously allocated and initialised memory for new instances) Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Meta-PEP about C functions
On 14 June 2018 at 01:15, Jeroen Demeyer wrote: > I have finished my "meta-PEP" for issues with built-in (implemented in C) > functions and methods. This is meant to become an "informational" (not > standards track) PEP for other PEPs to refer to. > > You can read the full text at > https://github.com/jdemeyer/PEP-functions-meta > > I also give brief ideas of solutions for the various issues. The main idea > is a new PyTypeObject field tp_ccalloffset giving an offset in the object > structure for a new PyCCallDef struct. This new struct replaces PyMethodDef > for calling functions/methods and defines a new "C call" protocol. Comparing > with PEP 575, one could say that the base_function class has been replaced > by PyCCallDef. This is even more general than PEP 575 and it should be > easier to support this new protocol in existing classes. > > I plan to submit this as PEP in the next days, but I wanted to check for > some early feedback first. This looks like a nice overview of the problem space to me, thanks for putting it together! It will probably make sense to publish it as a PEP just before you publish the first draft of your CCall protocol PEP, so that the two PEPs get assigned consecutive numbers (it doesn't really matter if they're non-consecutive, but at the same time, I don't think there's any specific urgency in getting this one published before the CCall PEP needs to reference it as background information). Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Loosen 'as' assignment
On 16 June 2018 at 15:49, Rin Arakaki wrote: > Hi, > I'm wondering if it's possible and consistent that loosen 'as' assignment, > for example: > > >>> import psycopg2 as pg > >>> import psycopg2.extensions as pg.ex > > You can't now assign to an attribute in as statement but are there some > reasons? > To be honest, I'll be satisfied if the statement above become valid, but > also interested in general design decisions about 'as' functionality, I > mean, it can be applicable to all expression that can be left side of '=' > such as 'list[n]' one, and also other statement than 'import' such as > 'with'. > This is essentially monkeypatching the psycopg2 module to alias the "extensions" submodule as the "ex" submodule. You can already do that today as: >>> import psycopg2 as pg >>> import psycopg2.extensions >>> pg.ex = pg.extensions Monkeypatching other modules at runtime is a questionable enough practice that we're unlikely to add syntax that actively encourages it. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
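The same existing spelling works with any stdlib package; for example, aliasing the `os.path` submodule as an attribute on `os` (shown purely to illustrate why no new syntax is needed, since this is exactly the monkeypatching practice the reply cautions against):

```python
# Demonstrating the already-available spelling with stdlib modules:
# alias the os.path submodule as a shorter attribute name on os.
import os
import os.path

os.p = os.path  # plain attribute assignment; no new syntax required
```

After the assignment, `os.p` and `os.path` are the same module object, which is all the proposed `import psycopg2.extensions as pg.ex` would have achieved.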
Re: [Python-ideas] Link accepted PEPs to their whatsnew section?
On 13 June 2018 at 11:06, Michael Selik wrote: > Google will probably fix this problem for you after dataclasses become > popular. The docs will gain a bunch of inbound links and the issue will > (probably) solve itself as time passes. > Sometimes when reading a PEP it isn't especially clear exactly which version it landed in, or whether or not there were significant changes post-acceptance based on issues discovered during the beta period, though. So the idea of a "Release-Note" header that points to the version specific What's New entry seems like a decent idea to me (and may actually help the What's New section supplant the PEP in search results). Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] A PEP on introducing variables on 'if' and 'while'
On 11 June 2018 at 04:35, Juancarlo Añez wrote: > > As the PEP author, that's your job. >> > > I started writing the PEP, and I found an interesting example: > > if not (m := re.match(r'^(\d+)-(\d+)$', identifier)): > raise ValueError(f'{identifier} is not a valid identifier') > print(f'first part is {m.group(1)}') > print(f'second part is {m.group(2)}') > > > That's fairly easy to understand, and not something that can be resolved > with `as` if it's part of the `if` and `while` statement, rather than a > different syntax for the `:=` semantics. > Yep, the "What about cases where you only want to capture part of the conditional expression?" question is the rock on which every "only capture the entire conditional expression" proposal has foundered. PEP 572 arose from Chris deciding to take on the challenge of seriously asking the question "Well, what if we *did* allow capturing of arbitrary subexpressions with inline assignments?". Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
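PEP 572 was subsequently accepted and shipped in Python 3.8, so the example now runs as real code. A runnable version (wrapped in a function for testability):

```python
import re

def split_identifier(identifier):
    # PEP 572 assignment expression (Python 3.8+): bind the match
    # object and test it in a single step.
    if not (m := re.match(r'^(\d+)-(\d+)$', identifier)):
        raise ValueError(f'{identifier} is not a valid identifier')
    return m.group(1), m.group(2)
```

The key property is exactly the one discussed above: `m` is a capture of a subexpression of the condition (`re.match(...)`), not of the whole `not (...)` test, which is what the `as`-based proposals could not express.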
Re: [Python-ideas] Add hooks to asyncio lifecycle
On 10 June 2018 at 07:59, Michel Desmoulin wrote: > What I'm proposing is to make that easy to implement by just letting > anyone put a check in there. > > Overriding the policy, loops or task factories is usually done for > critical parts of the system. The errors emerging from a bug in there > are very cryptic. > > Asyncio's design made the choice to expose very low level things. You > literally don't have this problem in languages like JS because nobody > can change those. > > Now it's here, it's a footgun, and it would be nice to provide a way to > put it in a holster. > With the API need framed that way, perhaps all that asyncio is currently missing is an "asyncio.lock_policy(unlock_token, err_callback)" API such that your application can declare that initialisation is completed and no further event loop policy changes should be allowed? (The "unlock_token" would be an arbitrary app-provided object that must also be passed to the corresponding "unlock_policy" call - that way libraries couldn't unlock the policy after the application locks it, since they won't have a reference to the app-specific unlock token). Adding further callback hooks for more events seems like it will just push the problem back another level, and you'll have the potential for conflicts between callbacks registered with the new hooks, and an even harder to understand overall system. By contrast, the above would be amenable to doing something like: 1. Per-process setup code establishes a particular event loop policy, and then locks it 2. Until the policy gets unlocked again, attempts to change it will call the err_callback (so the app can raise a custom access denied exception) 3. get_event_loop(), set_event_loop(), and new_event_loop() are all already managed by the event loop policy, so shouldn't need new hooks 4.
stop(), close(), set_debug(), set_task_factory(), etc are all already managed by the event loop (and hence by the event loop policy), so shouldn't need new hooks Right now, the weak link in that chain is that there's no way for the application to keep a library from switching out the event policy with a new one, and subsequently bypassing all of the app's control over how it expects event loops to be managed. Given a way to close that loophole, the application should already have the ability to enforce everything else that it wants to enforce via the existing event loop and event loop policy APIs. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Allow callable in slices
On 9 June 2018 at 19:20, Michel Desmoulin wrote: > Given 2 callables checking when a condition arises and returning True: > > def starting_when(element): > ... > > def ending_when(element): > ... > Allow: > > a_list[starting_when:] > > To be equivalent to: > > from itertools import dropwhile > > list(dropwhile(lambda x: not starting_when(x), a_list)) > Custom container implementations can already do this if they're so inclined, as slice objects don't type check their inputs: >>> class MyContainer: ... def __getitem__(self, key): ... return key ... >>> mc = MyContainer() >>> mc[:bool] slice(None, <class 'bool'>, None) >>> mc[bool:] slice(<class 'bool'>, None, None) >>> mc[list:tuple:range] slice(<class 'list'>, <class 'tuple'>, <class 'range'>) It's only slice.indices() that needs start/stop/step to adhere to the Optional[int] type hint. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
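Building on that point, a custom container can implement the requested behaviour itself today. A hypothetical sketch (no builtin behaves this way): a `list` subclass that treats callables in a slice as start/stop predicates.

```python
from itertools import dropwhile, takewhile

class PredicateSliceList(list):
    """Hypothetical: interpret callables in a slice as predicates.

    A callable start drops leading items until it first returns True;
    a callable stop truncates at the first item where it returns True.
    Ordinary integer indexing and slicing are left untouched.
    """
    def __getitem__(self, key):
        if isinstance(key, slice) and (callable(key.start) or callable(key.stop)):
            items = iter(self)
            if callable(key.start):
                items = dropwhile(lambda x: not key.start(x), items)
            if callable(key.stop):
                items = takewhile(lambda x: not key.stop(x), items)
            return list(items)
        return super().__getitem__(key)
```

Note the lambdas in a subscript need parentheses, e.g. `data[(lambda x: x > 2):]`, since a bare `lambda` would swallow the `:` that separates the slice fields.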
Re: [Python-ideas] A "within" keyword
On 9 June 2018 at 14:46, Jelle Zijlstra wrote: > > > 2018-06-08 20:27 GMT-07:00 Alex Walters : > >> Why not... >> >> cool_namespace = SomeNamespaceContextManager() >> >> with cool_namespace: >> def foo(): >> pass >> >> advantage being it introduces no new keyword. The 'disadvantage' is it >> would change semantics of the with statement (as would be required to get >> the names defined in the suite of the context manager) >> >> Actually, this is probably doable now. You can get the globals of the > calling code by doing sys._getframe(), then check which names are added > while the context manager is active. > It's doable without code generation hacks by using class statements instead of with statements. The withdrawn PEP 422 shows how to use a custom metaclass to support a "namespace" keyword argument in the class header that redirects all writes in the body to the given dict: https://www.python.org/dev/peps/pep-0422/#new-ways-of-using-classes https://www.python.org/dev/peps/pep-0422/#extending-a-class even shows how to further use that to extend existing classes with new attributes. We're not likely to actively encourage that approach though - while they do enable some handy things, they also encourage hard to navigate programs with a lot of "action at a distance" side effects that make it tricky to reason locally about the code you're currently looking at (if anything, we've been pushing more in the other direction: encouraging the use of features like checked type hints to better *enable* reasoning locally about a piece of code). Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
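The PEP 422 approach can be sketched with a custom metaclass. This is a hypothetical illustration of the withdrawn proposal's "namespace" keyword, not a stdlib feature: `__prepare__` hands back a caller-supplied dict, so every assignment made in the class body lands in it.

```python
class NamespaceMeta(type):
    """Sketch of PEP 422's withdrawn 'namespace' idea (hypothetical):
    class-body writes are collected in a caller-supplied dict."""
    @classmethod
    def __prepare__(mcs, name, bases, namespace=None, **kwds):
        # The dict returned here receives every assignment made in the
        # class body, so returning the caller's dict redirects them.
        return namespace if namespace is not None else {}
    def __new__(mcs, name, bases, ns, namespace=None, **kwds):
        # Swallow the extra keyword so type.__new__ doesn't see it.
        return super().__new__(mcs, name, bases, dict(ns))
    def __init__(cls, name, bases, ns, namespace=None, **kwds):
        super().__init__(name, bases, dict(ns))
```

A class defined with `metaclass=NamespaceMeta, namespace=shared` then leaves its methods both on the class and in `shared`, which is the "action at a distance" capability (and hazard) the reply describes.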
Re: [Python-ideas] Let try-except check the exception instance
On 31 May 2018 at 14:47, Danilo J. S. Bellini wrote: > The idea is to allow catching exceptions beyond checking their MRO, > using a class that checks the exception instance by implementing > a custom __instancecheck__. > The exception machinery deliberately attempts to avoid instantiating exception objects whenever it can, but that gets significantly more difficult if we always need to create the instance before we can decide whether or not the raised exception matches the given exception handler criteria. So quite aside from any philosophy-of-design questions, we're unlikely to ever implement this simply for laziness-of-evaluation reasons. (There are already lots of circumstances that force instantiation - that doesn't mean we're keen to add more) Cheers, Nick. P.S. There's a somewhat related issue aimed at getting ABCs to work correctly as exception handlers, although that still sticks to the principle that exception handler checks solely consider the exception type, not its value: https://bugs.python.org/issue12029 -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
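The limitation tracked in that issue is easy to demonstrate: even a registered virtual subclass relationship is ignored by `except` matching, which insists on real `BaseException` subclasses and rejects anything else at match time.

```python
import abc

class CatchableErrors(abc.ABC):
    """A would-be exception category defined via ABC registration."""

CatchableErrors.register(ValueError)

# issubclass() honours the virtual registration...
virtual_ok = issubclass(ValueError, CatchableErrors)

# ...but the except machinery does not: using a class that isn't a
# real BaseException subclass raises TypeError during matching.
try:
    try:
        raise ValueError("boom")
    except CatchableErrors:
        outcome = "caught via ABC"
except TypeError:
    outcome = "rejected by except matching"
```

This is consistent with the reply's point: exception matching deliberately avoids the general instance/subclass-check protocols, both for laziness of evaluation and because handler selection considers only the exception type.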
Re: [Python-ideas] Add shutil.chown(..., recursive=False)
On 30 May 2018 at 02:38, Guido van Rossum wrote: > Honestly, despite the occasional use case(1), I'm not sure that this is a > battery we need in the stdlib. Nobody seems too excited about writing the > code, and when the operation is needed, shelling out to the system chown is > not too onerous. (Ditto for chmod.) > > (1) Not even sure that a use case was shown -- it was just shown that the > operation is not necessarily useless. > My main use cases have been in installers and test suites, but those cases have also been for Linux-specific code where shelling out to "chown -R" and "chmod -R" was an entirely acceptable alternative. I think one of the other key points here is that "chown" and "chmod" inherently don't map at all well to the Windows filesystem access control model [1], so there's no new portability challenges arising from expecting the chown and chmod commands to be available. Cheers, Nick. [1] os.chown and shutil.chown don't exist at all there, and os.chmod only supports setting a file to read-only - there isn't any access to user or group permissions. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
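The "shell out" approach the thread treats as acceptable can be sketched with `chmod -R` (chosen here rather than `chown -R` because permission changes on your own files need no privileges; POSIX-only, and the mode string is just an illustrative choice):

```python
import os
import stat
import subprocess
import tempfile

# Recursively strip group/other permissions via the platform chmod,
# leaving user write intact so the temporary tree can still be
# cleaned up afterwards.
with tempfile.TemporaryDirectory() as top:
    sub = os.path.join(top, "sub")
    os.mkdir(sub)
    subprocess.run(["chmod", "-R", "g-rwx,o-rwx", top], check=True)
    locked_mode = stat.S_IMODE(os.stat(sub).st_mode)
```

For `chown -R` the same pattern applies, but typically requires root, which is another reason the stdlib never grew a portable equivalent.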
Re: [Python-ideas] Was `os.errno` undocumented?
On 29 May 2018 at 20:06, Petr Viktorin wrote: > Is that reasoning sound? > Should our policy on removing internal imports take that into account? > As Steven noted, the normative answer to this is in PEP 8: https://www.python.org/dev/peps/pep-0008/#public-and-internal-interfaces Since `os.errno` is a transitive import, is not included in `os.__all__`, and isn't documented in the library reference, it's considered an implementation detail that can be removed without a deprecation period. That said, it should still be mentioned in the "Porting to Python 3.7" section of the What's New guide, with the fix being to switch to "import errno" in any affected code. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
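The porting fix mentioned above is a one-line change: import the real module directly instead of relying on the transitive re-export. A small sketch of the replacement pattern (the `describe` helper is illustrative, not stdlib):

```python
# Affected code did "from os import errno" or used "os.errno", which
# broke in Python 3.7; the fix is a direct import of the real module.
import errno

def describe(code):
    """Map a numeric error code back to its symbolic name."""
    return errno.errorcode.get(code, "UNKNOWN")
```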
Re: [Python-ideas] Add shutil.chown(..., recursive=False)
On 29 May 2018 at 06:23, Giampaolo Rodola' wrote: > ...as in (not tested): > > def _rchown(dir, user, group): > for root, dirs, files in os.walk(dir, topdown=False): > for name in files: > chown(os.path.join(root, name), user, group) > > def chown(path, user=None, group=None, recursive=False): > if recursive and os.path.isdir(path): > _rchown(dir, user, group) > ... > > It appears like a common enough use case to me ("chown -R path"). > Thoughts? > https://bugs.python.org/issue13033 is a long-open RFE for this, proposing to add it as "shutil.chowntree" (naming inspired by "shutil.rmtree" and "shutil.copytree"). The "walkdir" project I mention on that PR has been on hiatus for a few years now (aside from a bit of activity to get a new release out in 2016 with several contributed fixes), but the main point of the comment where I mentioned it still stands: the hard part of designing recursive state modification APIs is deciding what to do when an operation fails after you've already made changes to the state of the disk. shutil.rmtree fortunately provides some good precedent there, but it does mean this feature would need to be implemented as its own API, rather than as an option on shutil.chown. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin
On 28 May 2018 at 10:17, Greg Ewing wrote: > Nick Coghlan wrote: > >> Aye, while I still don't want comprehensions to implicitly create new >> locals in their parent scope, I've come around on the utility of letting >> inline assignment targets be implicitly nonlocal references to the nearest >> block scope. >> > > What if you're only intending to use it locally within the > comprehension? Would you have to put a dummy assignment in > the surrounding scope to avoid a NameError? That doesn't > sound very nice. > The draft PEP discusses that - it isn't saying "Always have them raise TargetNameError, now and forever", it's saying "Have them raise TargetNameError in the first released iteration of the capability, so we can separate the discussion of binding semantics in scoped expressions from the discussion of declaration semantics". I still want to leave the door open to giving comprehensions and lambdas a way to declare and bind truly local variables, and that gets more difficult if we go straight to having the binding expressions they contain *implicitly* declare new variables in the parent scope (rather than only binding previously declared ones). Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin
On 26 May 2018 at 04:14, Tim Peters wrote: > [Peter O'Connor] > >> ... > >> We could use given for both the in-loop variable update and the variable > >> initialization: > >>smooth_signal = [average given average=(1-decay)*average + decay*x > >> for x in signal] given average=0. > > [Steven D'Aprano ] > > I don't think that will work under Nick's proposal, as Nick does not > > want assignments inside the comprehension to be local to the surrounding > > scope. (Nick, please correct me if I'm wrong.) > > Nick appears to have moved on from "given" to more-general augmented > assignment expressions. Aye, while I still don't want comprehensions to implicitly create new locals in their parent scope, I've come around on the utility of letting inline assignment targets be implicitly nonlocal references to the nearest block scope. > See PEP 577, but note that it's still a > work-in-progress: > > https://github.com/python/peps/pull/665 > > > Under that PEP, > > average = 0 > smooth_signal = [(average := (1-decay)*average + decay*x) > for x in signal] > > Or, for the running sums example: > > total = 0 > sums = [(total += x) for x in data] > > I'm not entirely clear on whether the "extra" parens are needed, so > added 'em anyway to make grouping clear. > I think the parens would technically be optional (as in PEP 572), since "EXPR for" isn't legal syntax outside parentheses/brackets/braces, so the parser would terminate the assignment expression when it sees the "for" keyword. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
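PEP 577 was ultimately withdrawn, but the `:=` spelling discussed here was accepted as PEP 572 and shipped in Python 3.8, where the running-sums example works as written (the augmented `(total += x)` expression form was never adopted):

```python
# PEP 572 (Python 3.8+): the assignment expression rebinds `total` in
# the scope containing the comprehension on each iteration - exactly
# the "implicitly nonlocal" binding behaviour discussed above.
data = [1, 2, 3, 4]
total = 0
sums = [(total := total + x) for x in data]
```

Note that `total` must already be bound before the comprehension runs, otherwise the first `total + x` raises `NameError`, which matches the thread's preference for binding previously declared names over implicitly declaring new ones.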
Re: [Python-ideas] Meta-PEP about built-in functions
On 23 May 2018 at 07:28, Jeroen Demeyer wrote: > Hello, > > Both PEP 573 and PEP 575 deal with built-in functions. Additionally, some > people (Stefan Behnel, Robert Bradshaw, Jim Pivarski and me) are currently > brainstorming about a yet-to-be-written PEP to allow calling the underlying > C function of a built-in function using native types (for example, a C long > instead of a Python int). Think of it as analogous to the buffer protocol: > the buffer protocol exposes C *data* while this would expose C *callables*. > > Since all these PEPs would overlap somewhat, I'm starting to wonder about > the best way to organize this. Ideally, I would like some kind of > "meta-PEP" where we discuss the future of built-in functions in general > terms without too much details. This would be followed by several PEPs each > going in detail about one specific aspect. > > Is there a precedent for this? What do the seasoned Python developers > think? > Probably the closest recent precedent would be PEPs 482 and 483, which laid out some background material and concepts so that PEP 484 could reference them, without needing to include them directly. I think doing something like that for the C level callable API to describe the status quo and the challenges it raises (both internally in CPython and for third party projects like Cython) could be a very good way to go, as that way the actual change proposals can focus on what they're proposing to change and why, without each needing to include all the background details regarding the specifics of the current situation. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Modern language design survey for "assign and compare" statements
On 19 May 2018 at 10:54, Mike Miller wrote: > Background: > > While the previous discussions about assignment-expressions (PEP 572) > (abbreviated AE below) have been raging one thing that was noticeable is > that > folks have been looking back to C for a solution. > > But how are newer languages solving the problem today? Believe Ryan > brought > this up first on the list, but it had been in the back of my mind as well. > Finally have compiled my research, corrections welcome. Thank you for compiling this! It's a very useful point of reference :) Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Make keywords KEYwords only in places they would have syntactical meaning
On 19 May 2018 at 04:34, Chris Angelico wrote: > "yield" would have to be a keyword in any context where an expression > is valid. Which, in turn, makes it utterly useless as a function name, > or any other identifier. > Right, I spent a fair bit of time thinking about this in the context of using "given" to introduce postfix assignment expressions, and while it's reasonably straightforward to make it so that keywords that can't start an expression or statement (currently "as", "in", "is", "and", "or") can also be used as names, we don't have that kind of freedom for keywords that can *start* an expression or statement ("async"/"await" relied on some parser tricks that relied on the fact that "async" always needed to be paired with "def" to introduce a coroutine, and "await EXPR" was only permitted inside coroutine definitions). We also run into the problem that even when the compiler can tell the difference, *humans* are still likely to be confused by the potential ambiguity (for the same reason that shadowing builtins is generally considered poor style). Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Escaped braces in format specifiers
On 15 May 2018 at 16:23, Eric V. Smith wrote: > I'm busy at the sprints, so I don't have a lot of time to think about this. > > However, let me just say that recursive format specs are supported, to a > depth of 1. > > >>> width=10 > >>> f'{"test":{width}}' > 'test ' > > So first the string is basically expanded to: > f'{"test":10}' > Then the string is formatted again to produce the final result. > > That is why the braces must match: they're being used for recursive format > specs. There's no mechanism for having braces that aren't inspected by the > f-string machinery. https://www.python.org/dev/peps/pep-0536/ also seems worth noting (I don't actually understand the specifics of that PEP myself, just making sure that Ken's aware of its existence if this is an area of interest) Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
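The depth-1 recursion Eric describes also covers several replacement fields inside one format spec, e.g. substituting both the width and the precision:

```python
# Both the width and the precision inside the format spec are
# themselves replacement fields; one level of nesting is all
# f-strings allow, and one level is all this needs.
width = 10
precision = 4
value = 12.34567
formatted = f"{value:{width}.{precision}}"
```

After the inner substitution the spec is simply `10.4`, so the result is the value rendered to four significant digits, right-aligned in ten columns.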
Re: [Python-ideas] Inline assignments using "given" clauses
On 15 May 2018 at 01:53, Tim Peters wrote: > [Nick] > > The question would then turn to "What if you just want to bind the target > > name, without considering the old value?". And then *that's* where "NAME := EXPR" would come in: as an augmented assignment operator that used augmented assignment scoping semantics, rather than regular local name binding > > semantics. > > Plain old ":=" would somehow be viewed as being an augmented > assignment operator too? ... OK, the meaning is that augmented > assignment _and_ ":=" would resolve the target's scope in the way the > containing block resolves it. > > > That would mean *directly* overturning PEP 3099's rejection of the idea > of > > using "NAME := EXPR" to imply "nonlocal NAME" at function scope, but > that's > > effectively on the table for implicit functions anyway (and I'd prefer to > > have ":=" be consistent everywhere, rather than having to special case > the > > implicit scopes). > > Creating a local in a containing scope by magic is never done by > Python today. Extending that beyond "need" seems potentially > perilous. For example, it can already be tedious to figure out which > names _are_ local to a function by staring at the function's code, but > people quickly get better at that over time; change the rules so that > they _also_ have to stare at all immediately contained functions too > to figure it out, and it may become significantly harder (OK, I didn't > declare `x`, and a contained function did `x := 3.14` but `x` isn't > declared there either - I guess it's my `x` now). Then again, if > they're doing that much function nesting they deserve whatever they > get ;-) > More likely they'd get a compile time error complaining that the compiler couldn't figure out what they meant, and asking them to be clearer about the intended scoping.
Restrict it to that only synthetically generated functions can pull > off this trick by magic (where there are real use cases to motivate > it), and they still don't have to look outside the body of a > function's text to figure it out. Visually, there's no distinction > between the code running in the function's scope and in scopes > synthesized to implement comprehensions appearing in the function's > text. The comprehensions aren't even indented more. > > So, offhand, I'm not sure that the right way to address something you > view as a wart is to vastly expand its reach to 12 operators that > impose it on everyone everywhere every time they're used ;-) > Once I reframed the idea as being like an augmented assignment, your proposed semantics seemed a lot less magical to me, since I was able to define them in terms of "find the assignment or declaration that already exists", rather than implicitly creating a new one. If the compiler can't find a suitable target scope, then it can throw AmbiguousTargetError (which would be an improvement over the runtime UnboundLocalError you typically get today). > Seriously, I do suspect that in > > def f(...): > ... no instances of `s` ... > s += f"START {time.time():.2f}" > > it's overwhelmingly more likely that they simply forgot to do > > s = "" > > earlier in `f` than they actually wanted to append to whatever `s` > means in f's parent block.. That's a radical change to what people > have come to expect `NAME +=` to do. > I think this is the key argument in favour of only allowing the "implicitly nonlocal rebinding" behaviour in lambda expressions, generator expressions, and comprehensions, as that way the search for a target to bind would always terminate at the containing block (just as it does today). BTW, would > > def f(): > x := 3.14 > x = 3.14 > > be a compile-time error? Everyone agreed the analogous case would be > in synthetic functions. Fine by me! 
> Yeah, I think that would be an AmbiguousTargetError, as when the compiler saw "x := 3.14", it wouldn't have seen "x = 3.14" yet. For other augmented assignments, it would be a DeprecationWarning for the time being, and become an AmbiguousTargetError at a later date. (This also relates to the previous point: if "x := 3.14" can be implicitly nonlocal, then I couldn't answer that question without knowing which names were defined in outer scopes. By contrast, if the implicit access to outer scopes is limited to inline scopes accessing their containing scope, then this example becomes precisely analogous to the current
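Tim's `s +=` scenario can be demonstrated with today's Python: an augmented assignment target with no prior binding makes the name function-local, so the read half of `+=` fails at runtime:

```python
import time

def f():
    # No prior binding of "s" anywhere in f, so "+=" makes "s" a local
    # name, and reading its old value fails at runtime.
    s += f"START {time.time():.2f}"
    return s

try:
    f()
except UnboundLocalError as exc:
    print(exc)
```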
Re: [Python-ideas] "given" vs ":=" in list comprehensions
On 14 May 2018 at 08:24, Ed Kellett wrote: > On 2018-05-14 05:02, Nick Coghlan wrote: > > The same grammar adjustment that I believe will allow "given" to be used > as > > both a postfix keyword and as a regular name would also work for "where". > > However, "where" still has the problem of semantically conflicting with > > SQL's use of it to introduce a filter clause, whereas Hypothesis uses > > "given" to bind names to values (just a little more indirectly than would > > be the case for assignment expressions) > > I suspect that SQL is not high on the list of languages people might > confuse with Python. If we used "where" as a name binding keyword, ORM docs like http://docs.sqlalchemy.org/en/latest/orm/query.html, and https://docs.djangoproject.com/en/2.0/topics/db/queries/ would need to be modified to explain that "SQL WHERE" and "Python where" do very different things. It's better to just avoid the potential for that problem entirely (either by using a symbolic notation, or by using a different keyword) > FWIW, as I'm sure will have been mentioned, Haskell > uses "where", and people seem to manage fine with it. > Unfortunately, Haskell's adoption numbers and reputation as a difficult to learn language don't back up that assumption (I doubt that outcome has anything to do with their use of "where", it just means "Haskell uses it that way" can't be credited as evidence that something won't cause confusion) Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Inline assignments using "given" clauses
On 14 May 2018 at 06:10, Tim Peters wrote: > [Greg Ewing] > > This whole discussion started because someone wanted a way > > to bind a temporary result for use *within* a comprehension. > > It's been noted several times recently that the example PEP 572 gives > as _not_ working: > > total = 0 > progressive_sums = [total := total + value for value in data] > > was the original use case that prompted work on the PEP. You gotta > admit that's ironic ;-) > After pondering this case further, I think it's also worth noting that that *particular* example could also be addressed by: 1. Allowing augmented assignment *expressions* 2. Changing the scoping rules for augmented assignment operations in general such that they *don't change the scope of the referenced name* Writing "i += n" without first declaring the scope of "i" with "i = 0", "nonlocal i" or "global i" is one of the most common sources of UnboundLocalError after all, so I'd be surprised to find anyone that considered the current augmented assignment scoping rules to be outside the realm of reconsideration. The accumulation example would then be written: total = 0 progressive_sums = [total += value for value in data] if progressive_sums: assert total == progressive_sums[-1] The question would then turn to "What if you just want to bind the target name, without considering the old value?". And then *that's* where "NAME := EXPR" would come in: as an augmented assignment operator that used augmented assignment scoping semantics, rather than regular local name binding semantics. That would mean *directly* overturning PEP 3099's rejection of the idea of using "NAME := EXPR" to imply "nonlocal NAME" at function scope, but that's effectively on the table for implicit functions anyway (and I'd prefer to have ":=" be consistent everywhere, rather than having to special case the implicit scopes). Cheers, Nick. 
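It's perhaps worth noting that the accumulation example itself needs no new syntax at all: `itertools.accumulate` already produces the progressive sums directly:

```python
from itertools import accumulate

data = [1, 2, 3, 4]
# Progressive sums without any new binding syntax.
progressive_sums = list(accumulate(data))
total = progressive_sums[-1] if progressive_sums else 0
print(progressive_sums)  # [1, 3, 6, 10]
print(total)             # 10
```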
Re: [Python-ideas] Inline assignments using "given" clauses
On 11 May 2018 at 12:45, Tim Peters wrote: > [Nick Coghlan] > > I've been thinking about this problem, and I think for the If/elif/while > > cases it's actually possible to allow the "binding is the same as the > > condition" case to be simplified to: > > > > if command = pattern.match(the_string): > > ... > > elif command = other_pattern.match(the_string): > > ... > > > > while data = read_data(): > > Unless there's some weird font problem on my machine, that looks like > a single "equals sign". In which case we'd be reproducing C's > miserable confusion about whether: > > if (i = 1) > > was a too-hastily-typed spelling of the intended: > > if (i == 1) > > or whether they were thinking "equals" and typed "=" by mistake. > > If so, that would get an instant -1 from any number of core devs, who > have vivid painful memories of being burned by that in C. That's not > just speculation - it came up a number of times in the PEP 572 > threads. > I was one of those core devs, and would personally prefer to require that folks spell the inline binding completely unambiguously as "if i given i = 1:". However, if the repetition of "i" is considered a deal breaker relative to ":=" (even though the status quo already requires repetition of the target name in the condition), then I'd prefer to add this shorthand (which folks can then opt to prohibit in favour of the more explicit form in their style guides) over adding the cognitive complexity of deciding when to use "i = 1" and when to use "i := 1". Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Crazy idea: allow keywords as names in certain positions
On 13 May 2018 at 14:19, Guido van Rossum wrote: > As anyone still following the inline assignment discussion knows, a > problem with designing new syntax is that it's hard to introduce new > keywords into the language, since all the nice words seem to be used as > method names in popular packages. (E.g. we can't use 'where' because > there's numpy.where > <https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.where.html>, > and we can't use 'given' because it's used in Hypothesis > <http://hypothesis.readthedocs.io/en/latest/quickstart.html>.) > > The idea I had (not for the first time :-) is that in many syntactic > positions we could just treat keywords as names, and that would free up > these keywords. > While I think the "restricted use" idea would be confusing, I do like the idea of separating out "postfix keywords", which can't start a statement or expression, and hence can be used *unambiguously* as names everywhere that names are allowed. Adding such a capability is essential to proposing a keyword based approach to inline assignments, and would technically also allow "and", "or", "is", and "as" to be freed up for use as names. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] A comprehension scope issue in PEP 572
On 13 May 2018 at 20:00, Tim Peters wrote: > [Tim] > >> - If the target is not local to any function enclosing F, and is not > >> declared `global` in the block containing F, then the target is bound > >> in the block containing F. > > [also Tim] > > FYI, that's still not right, ... > > I suspect that the above should be reworded to the simpler: > > > > - If the target is not declared `global` or `nonlocal` in the block > > containing F, then the target is bound in the block containing F. > > ... > > I'm satisfied that captures the intent - but now it's misleadingly > wordy. It should be the briefer: > > - The target is bound in the block containing F. > > Other text (in section 4.2.2) already covers the intended meanings for > when a `global` or `nonlocal` declaration appears in the block too. > > And then it's short enough again that the bullet list isn't really > helpful anymore. So, putting that all together: > > """ > An assignment expression binds the target, except in a function F > synthesized to implement a list comprehension or generator expression > (see XXX). In the latter case[1], the target is bound in the block > containing F, and errors may be detected: If the target also appears > as an identifier target of a `for` loop header in F, a `SyntaxError` > exception is raised. If the block containing F is a class block, a > `SyntaxError` exception is raised. > > Footnote: > [1] The intent is that runtime binding of the target occurs as if the > binding were performed in the block containing F. Because that > necessarily makes the target not local in F, it's an error if the > target also appears in a `for` loop header, which is a local binding > for the same target. If the containing block is a class block, F has > no access to that block's scope, so it doesn't make sense to consider > the containing block. 
The target is bound in the containing block, > where it inherits that block's `global` or `nonlocal` declaration if > one exists, else establishes that the target is local to that block. > """ > This is getting pretty close to being precise enough to be at least potentially implementable (thanks!), but there are still two cases that would need to be covered: - what happens inside a lambda expression? - what happens inside another comprehension or generator expression? Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
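For reference, the semantics being negotiated here are close to what PEP 572 ultimately specified: in Python 3.8+, an assignment expression target inside a comprehension binds in the block containing the comprehension:

```python
# Requires Python 3.8+ (PEP 572 as accepted).
data = [1, 2, 3]
total = 0
progressive_sums = [total := total + value for value in data]
print(progressive_sums)  # [1, 3, 6]
print(total)             # 6: the binding escaped to the containing scope
```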
Re: [Python-ideas] "given" vs ":=" in list comprehensions
On 12 May 2018 at 20:34, Andre Roberge wrote: > Sorry for chiming in so late; I was lurking using google groups and had to > subscribe to post - hence this new thread. > > I gather that *where* has been discarded as a possible new keywords given > its use as a function in numpy (https://docs.scipy.org/doc/ > numpy-1.14.0/reference/generated/numpy.where.html) ... Still, I will > include it below for completeness (and as I think it reads better than the > other choices) > The same grammar adjustment that I believe will allow "given" to be used as both a postfix keyword and as a regular name would also work for "where". However, "where" still has the problem of semantically conflicting with SQL's use of it to introduce a filter clause, whereas Hypothesis uses "given" to bind names to values (just a little more indirectly than would be the case for assignment expressions) Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Inline assignments using "given" clauses
On 12 May 2018 at 14:13, Tim Peters wrote: > Just clarifying a fine point here: > > [Steven D'Aprano] > > ... > > average = 0 > > smooth_signal = [(average := (1-decay)*average + decay*x) for x in > signal] > > assert average == smooth_signal[-1] > > > The scope issues are logically independent of assignment-expression > spelling, but it's a pretty safe guess Nick is opposed to that example > ever "just working" regardless of spelling, while PEP 572 doesn't > currently support it anyway. I'm personally fine with that example working if there's an explicit nonlocal declaration on "average" in the nested scope - it's Guido that objected to requiring the explicit scoping declaration to access that behaviour. For the implicit version, my request is that any PEP proposing the idea of parent local scoping be held to the standard of *actually drafting the patch for the language specification*, rather than handwaving away the hard problems that it creates (i.e. what to do at class scope, what to do when multiple generator expressions reference the same nonlocal name, what to do with nested comprehensions, how to expand comprehensions using this kind of scoping to their statement form in a context-independent way). Cheers, Nick.
Re: [Python-ideas] Inline assignments using "given" clauses
On 11 May 2018 at 03:33, Greg Ewing wrote: > Gustavo Carneiro wrote: > >> IMHO, all these toy examples don't translate well to the real world >> because they tend to use very short variable names while in real world >> [good written code] tends to select longer more descriptive variable names. >> > > I don't believe that's always true. It depends on the context. > Sometimes, using long variable names can make code *harder* > to read. > > I don't think there's anything unrealistic about this > example: > >if m given m = pattern.match(the_string): > nugget = m.group(2) > > Most people's short-term memory is good enough to remember > that "m" refers to the match object while they read the > next couple of lines. IMO, using a longer name would serve > no purpose and would just clutter things up. I've been thinking about this problem, and I think for the If/elif/while cases it's actually possible to allow the "binding is the same as the condition" case to be simplified to: if command = pattern.match(the_string): ... elif command = other_pattern.match(the_string): ... while data = read_data(): ... Allowing this would be part of the definition of the if/elif/while statement headers, rather than a general purpose assignment expression. The restriction of the LHS to a simple name target would need to be in the AST generator rather than in the grammar, but it's hardly the only case where we do that kind of thing. Switching to the given expression form would then only be necessary in cases where the condition *wasn't* the same as the binding target. A similar enhancement could be made to conditional expressions (adjusting their grammar to permit "EXPR if NAME = EXPR else EXPR") and filter clauses in comprehensions (allowing "EXPR for TARGET in EXPR if NAME = EXPR"). In essence, "if", "elif", and "while" would all allow for an "implied given" clause in order to simplify the 90% case where the desired condition and the bound expression are the same. Cheers, Nick. 
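The bare "=" in statement headers sketched above never landed; the spelling PEP 572 eventually adopted covers the same "binding is the condition" case with ":=", which stays visually distinct from "==". In Python 3.8+:

```python
# Requires Python 3.8+.
import re

pattern = re.compile(r"(\w+)=(\w+)")
the_string = "key=value"

# The ":=" form of "if command = pattern.match(the_string):"
if m := pattern.match(the_string):
    print(m.group(1), m.group(2))  # key value
```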
Re: [Python-ideas] A comprehension scope issue in PEP 572
On 11 May 2018 at 07:15, Nick Coghlan wrote: > * *maybe* discover that even the above expansion isn't quite accurate, and > that the underlying semantic equivalent is actually this (one way to > discover this by accident is to have a name error in the outermost iterable > expression): > > def _genexp(_outermost_iter): > for x in _outermost_iter: > yield x > > _result = _genexp(_outermost_iter) > Typo here: the call argument should be "data", not a repeat of the parameter name. Cheers, Nick.
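The "name error in the outermost iterable" trick works because the outermost iterable expression is evaluated eagerly when the generator expression is created, and only the inner loop is deferred:

```python
def make_genexp():
    # "missing_data" is deliberately undefined: the NameError surfaces
    # as soon as the generator expression is *created*, because the
    # outermost iterable is evaluated eagerly and passed in as an
    # argument to the synthesized generator function.
    return (x for x in missing_data)

try:
    make_genexp()
except NameError as exc:
    print(exc)
```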
Re: [Python-ideas] A comprehension scope issue in PEP 572
behaviour of things like name references, locals(), lambda expressions that close over the iteration variable, etc can be explained directly in terms of the equivalent functions and generators, so while comprehension iteration variable hiding may *seem* magical, it's really mostly explained by the deliberate semantic equivalence between the comprehension form and the constructor+genexp form. (That's exactly how PEP 3100 describes the change: "Have list comprehensions be syntactic sugar for passing an equivalent generator expression to list(); as a consequence the loop variable will no longer be exposed") As such, any proposal to have name bindings behave differently in comprehension and generator expression scope from the way they would behave in the equivalent nested function definitions *must be specified to an equivalent level of detail as the status quo*. All of the attempts at such a definition that have been made so far have been riddled with action at a distance and context-dependent compilation requirements:

* whether to implicitly declare the binding target as nonlocal or global depends on whether or not you're at module scope or inside a function
* the desired semantics at class scope have been left largely unclear
* the desired semantics in the case of nested comprehensions and generator expressions have been left entirely unclear

Now, there *are* ways to resolve these problems in a coherent way, and that would be to define "parent local scoping" as a new scope type, and introduce a corresponding "parentlocal NAME" compiler declaration to explicitly request those semantics for bound names (allowing the expansions of comprehensions and generator expressions as explicitly nested functions to be adjusted accordingly). 
But the PEP will need to state explicitly that that's what it is doing, and fully specify how those new semantics are expected to work in *all* of the existing scope types, not just the two where the desired behaviour is relatively easy to define in terms of nonlocal and global. Regards, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
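The proposed "parentlocal" declaration doesn't exist in any Python release, but its intended effect at function scope can be emulated with today's nonlocal. A hand-expanded sketch (the helper name `_comprehension` is illustrative only):

```python
def outer():
    x = None  # the parent-scope binding the comprehension writes to

    def _comprehension(_outermost_iter):
        nonlocal x  # stand-in for the proposed "parentlocal x"
        _result = []
        for q in _outermost_iter:
            x = q
            _result.append(x)
        return _result

    values = _comprehension(range(3))
    return values, x  # x retains the last bound value

print(outer())  # ([0, 1, 2], 2)
```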
Re: [Python-ideas] Inline assignments using "given" clauses
On 10 May 2018 at 11:10, Guido van Rossum wrote: > Please no, it's not that easy. I can easily generate a stream of +1s or > -1s for any proposal. I'd need well-reasoned explanations and it would have > to come from people who are willing to spend significant time writing it up > eloquently. Nick has tried his best and failed to convince me. So the bar > is high. > > (Also note that most of the examples that have been brought up lately were > meant to illustrate the behavior in esoteric corner cases while I was > working out the fine details of the semantics. Users should use this > feature sparingly and stay very far away from those corner cases -- but they > have to be specified in order to be able to implement this thing.) > I raised this with some of the folks that were still here at the Education Summit (similar to what I did for data classes at the PyCon Australia education seminar last year), but whereas the reactions to data classes were "as easy or easier to teach as traditional classes", the reaction for this for the folks that I asked was almost entirely negative - the most positive reaction was "Yes, if it's as a wholesale replacement for the '=' spelling, since that sometimes gets confused with mathematical equality". As an *addition* to the existing spelling, and especially with the now proposed leaking semantics in comprehension scopes, it was "No, that would just confuse our students". It's one thing adding syntactic and semantic complexity for the sake of something that significantly increases the language's expressive power (which is what the original sublocal scopes proposal was aiming for: the ability to more readily express constrained scoping and name shadowing without explicit name aliasing and del statements), it's something else entirely to do it for the sake of purely cosmetic tweaks like flattening the occasional nested if-else chain or replacing a loop-and-a-half with an embedded assignment. Regards, Nick. 
Re: [Python-ideas] A comprehension scope issue in PEP 572
On 10 May 2018 at 23:22, Guido van Rossum wrote: > On Thu, May 10, 2018 at 5:17 AM, Nick Coghlan wrote: > >> How would you expect this to work in cases where the generator expression >> isn't immediately consumed? If "p" is nonlocal (or global) by default, then >> that opens up the opportunity for it to be rebound between generator steps. >> That gets especially confusing if you have multiple generator expressions >> in the same scope iterating in parallel using the same binding target: >> >> # This is fine >> gen1 = (p for p in range(10)) >> gen2 = (p for p in gen1) >> print(list(gen2)) >> >> # This is not (given the "let's reintroduce leaking from >> comprehensions" proposal) >> p = 0 >> gen1 = (p := q for q in range(10)) >> gen2 = (p, p := q for q in gen1) >> print(list(gen2)) >> > > That's just one of several "don't do that" situations. *What will happen* > is perhaps hard to see at a glance, but it's perfectly well specified. Not > all legal code does something useful though, and in this case the obvious > advice should be to use different variables. > I can use that *exact same argument* to justify the Python 2 comprehension variable leaking behaviour. We decided that was a bad idea based on ~18 years of experience with it, and there hasn't been a clear justification presented for going back on that decision beyond "Tim would like using it sometimes". PEP 572 was on a nice trajectory towards semantic simplification (removing sublocal scoping, restricting to name targets only, prohibiting name binding expressions in the outermost iterable of comprehensions to avoid exposing the existing scoping quirks any more than they already are), and then we suddenly had this bizarre turn into "and they're going to be implicitly nonlocal or global when used in comprehension scope". 
> It also reintroduces the original problem that comprehension scopes > solved, just in a slightly different form: > > # This is fine >> for x in range(10): >> for y in range(10): >> transposed_related_coords = [y, x for x, y in >> related_coords(x, y)] >> >> # This is not (given the "let's reintroduce leaking from >> comprehensions" proposal) >> for x in range(10): >> for y in range(10): >> related_interesting_coords = [x, y for x in >> related_x_coord(x, y) if is_interesting(y := f(x))] >> >> Deliberately reintroducing stateful side effects into a nominally >> functional construct seems like a recipe for significant confusion, even if >> there are some cases where it might arguably be useful to folks that don't >> want to write a named function that returns multiple values instead. >> > > You should really read Tim's initial post in this thread, where he > explains his motivation. > I did, and then I talked him out of it by pointing out how confusing it would be to have the binding semantics of "x := y" be context-dependent. > It sounds like you're not buying it, but your example is just a case where > the user is shooting themselves in the foot by reusing variable names. When > writing `:=` you should always keep the scope of the variable in mind -- > it's no different when using `:=` outside a comprehension. > It *is* different, because ":=" normally binds the same as any other name binding operation including "for x in y:" (i.e. it creates a local variable), while at comprehension scope, the proposal has now become for "x := y" to create a local variable in the containing scope, while "for x in y" doesn't. Comprehension scoping is already hard to explain when it's just a regular nested function that accepts a single argument, so I'm not looking forward to having to explain that "x := y" implies "nonlocal x" at comprehension scope (except that unlike a regular nonlocal declaration, it also implicitly makes it a local in the immediately surrounding scope). 
It isn't reasonable to wave this away as "It's only confusing to Nick because he's intimately familiar with how comprehensions are implemented", as I also wrote some of the language reference docs for the current (already complicated) comprehension scoping semantics, and I can't figure out how we're going to document the proposed semantics in a way that will actually be reasonably easy for readers to follow. The best I've been able to come up with is: - for comprehensions at function scope (including in a lambda expression inside a comprehension scope), a binding
Re: [Python-ideas] A comprehension scope issue in PEP 572
On 9 May 2018 at 03:06, Guido van Rossum wrote: > So the way I envision it is that *in the absence of a nonlocal or global > declaration in the containing scope*, := inside a comprehension or genexpr > causes the compiler to assign to a local in the containing scope, which is > elevated to a cell (if it isn't already). If there is an explicit nonlocal > or global declaration in the containing scope, that is honored. > > Examples: > > # Simplest case, neither nonlocal nor global declaration > def foo(): > [p := q for q in range(10)] # Creates foo-local variable p > print(p) # Prints 9 > > # There's a nonlocal declaration > def bar(): > p = 42 # Needed to determine its scope > def inner(): > nonlocal p > [p := q for q in range(10)] # Assigns to p in bar's scope > inner() > print(p) # Prints 9 > > # There's a global declaration > def baz(): > global p > [p := q for q in range(10)] > baz() > print(p) # Prints 9 > > All these would work the same way if you wrote list(p := q for q in > range(10)) instead of the comprehension. > How would you expect this to work in cases where the generator expression isn't immediately consumed? If "p" is nonlocal (or global) by default, then that opens up the opportunity for it to be rebound between generator steps. 
That gets especially confusing if you have multiple generator expressions in the same scope iterating in parallel using the same binding target: # This is fine gen1 = (p for p in range(10)) gen2 = (p for p in gen1) print(list(gen2)) # This is not (given the "let's reintroduce leaking from comprehensions" proposal) p = 0 gen1 = (p := q for q in range(10)) gen2 = (p, p := q for q in gen1) print(list(gen2)) It also reintroduces the original problem that comprehension scopes solved, just in a slightly different form: # This is fine for x in range(10): for y in range(10): transposed_related_coords = [y, x for x, y in related_coords(x, y)] # This is not (given the "let's reintroduce leaking from comprehensions" proposal) for x in range(10): for y in range(10): related_interesting_coords = [x, y for x in related_x_coord(x, y) if is_interesting(y := f(x))] Deliberately reintroducing stateful side effects into a nominally functional construct seems like a recipe for significant confusion, even if there are some cases where it might arguably be useful to folks that don't want to write a named function that returns multiple values instead. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
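The "rebound between generator steps" hazard Nick describes is observable even without assignment expressions, because generators run lazily and re-read their free variables at each step. A minimal illustration:

```python
p = 0

def counting():
    for q in range(3):
        yield p + q  # re-reads the module-level "p" at each step

gen = counting()
first = next(gen)   # p is still 0 here
p = 100             # rebinding between generator steps...
second = next(gen)  # ...is visible to the still-running generator
print(first, second)  # 0 101
```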
Re: [Python-ideas] PEP 572: about the operator precedence of :=
On 10 May 2018 at 13:33, Guido van Rossum wrote: > (I vaguely recall this has been brought up before, but I'm too lazy to > find the subthread. So it goes.) > > PEP 572 currently seems to specify that when used in expressions, the > precedence of `:=` is lower (i.e. it binds less tightly) than all operators > except for the comma. I derive this from the single example `stuff = [[y := > f(x), x/y] for x in range(5)]`. > > From this it would follow that `f(a := 1, a)` is equivalent to `a = 1; > f(1, 1)`, and also that `(a := 1, a)` is equivalent to `a = 1; (1, 1)`. > (Although M.A.L. objected to this.) > > But what should `a := 1, 1` at the top level (as a statement) do? On the > one hand, analogy with the above suggests that it is equivalent to `a = 1; > (1, 1)`. But on the other hand, it would be really strange if the following > two lines had different meanings: > > a = 1, 1 # a = (1, 1) > a := 1, 1 # a = 1; (1, 1) > > I now think that the best way out is to rule out `:=` in the top level > expression of an expression statement completely (it would still be okay > inside parentheses, where it would bind tighter than comma). > FWIW, this is one of the ambiguities that the generalised postfix expression form of the given clause would reduce fairly significantly by separating the result expression from the bound expression: a = 1, 1 a given a = 1, 1 # As above, but also returns a a = 1, x, 3 given x = 2 They also compose fairly nicely as an essentially right associative operator: a given a = 1, x, 3 given x = 2 a given a = (1, x, 3 given x = 2) # Same as previous (a given a = 1), x, 3 given x = 2 # Forcing left-associativity While you do have to repeat the bound name at least once, you gain a much clearer order of execution (while it's an order of execution that's right-to-left rather than left-to-right, "rightmost expression first" is the normal way assignments get evaluated anyway). Cheers, Nick. 
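The rules Python 3.8 eventually shipped match the resolution proposed here: inside parentheses ":=" binds more tightly than the comma, while an unparenthesized ":=" at statement level is rejected outright:

```python
# Requires Python 3.8+.
t = (a := 1, 2)  # inside parentheses, ":=" binds tighter than ","
print(a)  # 1
print(t)  # (1, 2)

# At statement level, an unparenthesized ":=" was ruled out entirely:
try:
    compile("a := 1, 1", "<demo>", "exec")
    allowed = True
except SyntaxError:
    allowed = False
print(allowed)  # False
```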
Re: [Python-ideas] Runtime assertion with no overhead when not active
On 10 May 2018 at 17:55, Barry Scott wrote: > On 7 May 2018, at 18:52, Guido van Rossum wrote: > > On Mon, May 7, 2018 at 6:24 AM, Serhiy Storchaka > wrote: > >> I just don't understand why you need a new keyword for writing runtime >> checks. >> > > Oh, that's pretty clear. The OP wants to be able to turn these checks off > with some flag he can set/clear at runtime, and when it's off he doesn't > want to incur the overhead of evaluating the check. The assert statement > has the latter property, but you have to use -O to turn it off. He > basically wants a macro so that > > runtime_assert() > > expands to > > if and (): > raise AssertionError > > In Lisp this would be easy. :-) > > > This idea requires the same sort of machinery in python that I was hoping > for to implement the short circuit logging. > > My logging example would be > > log( control_flag, msg_expr ) > > expanding to: > > if : > log_function( ) > Logging is the example that came to mind for me as well - https://docs.python.org/3/howto/logging.html#optimization discusses the "logging.isEnabledFor" API, and how it can be used to avoid the overhead of expensive state queries when the result log message would just be discarded anyway. Generally speaking though, the spelling for that kind of lazy evaluation is to accept a zero-argument callable, which can then be used with "lambda: " at the call site: runtime_assert(lambda: ) Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] A comprehension scope issue in PEP 572
On 9 May 2018 at 03:57, Tim Peters wrote: > These all match my expectations. Some glosses: > > [Guido] > > We should probably define what happens when you write [p := p for p in > > range(10)]. I propose that this overwrites the loop control variable > rather > > than creating a second p in the containing scope -- either way it's > probably > > a typo anyway. > > A compile-time error would be fine by me too. Creating two meanings > for `p` is nuts - pick one in case of conflict. I suggested before > that the first person with a real use case for this silliness should > get the meaning their use case needs, but nobody bit, so "it's local > then" is fine. > I'd suggest that the handling of conflicting global and nonlocal declarations provides a good precedent here: >>> def f(): ... global x ... nonlocal x ... x = 1 ... File "", line 2 SyntaxError: name 'x' is nonlocal and global Since using a name as a binding target *and* as the iteration variable would effectively be declaring it as both local and nonlocal, or as local and global. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
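For the record, the compile-time error that Tim and Nick converge on here is what CPython 3.8 adopted: reusing a comprehension iteration variable as a ":=" target is rejected when the code is compiled:

```python
# Requires Python 3.8+.
try:
    compile("[p := p for p in range(10)]", "<demo>", "exec")
    allowed = True
except SyntaxError as exc:
    allowed = False
    print(exc)
print(allowed)  # False
```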
Re: [Python-ideas] Inline assignments using "given" clauses
On 8 May 2018 at 04:19, Brett Cannon wrote: > My brain wants to drop the variable name in front of 'given': > > if given m = pattern.search(data): > > while given m = pattern.search(remaining_data): > > Maybe it's because the examples use such a short variable name? > Does that change if the condition isn't just "bool(name)"? For example: if y > 0 given y = f(x): ... That's the situation where I strongly prefer the postfix operator spelling, since it's pretty clear how I should pronounce it (i.e. "if y is greater than zero, given y is set to f-of-x, then ..."). By contrast, while a variety of plausible suggestions have been made, I still don't really know how to pronounce "if (y := f(x)) > 0:" in a way that's going to be clear to an English-speaking listener (aside from pronouncing it the same way as I'd pronounce the version using "given", but that then raises the question of "Why isn't it written the way it is pronounced?"). I do agree with Tim that the name repetition would strongly encourage the use of short names rather than long ones (since you're always typing them at least twice), such that we'd probably see code like: while not probable_prime(n) given (n = highbit | randrange(1, highbit, 2)): pass Rather than the more explicit: while not probable_prime(candidate) given (candidate = highbit | randrange(1, highbit, 2)): pass However, I'd still consider both of those easier to follow than: while not probable_prime(candidate := highbit | randrange(1, highbit, 2)): pass since it's really unclear to me that "candidate" in the latter form is a positional argument being bound to a name in the local environment, and *not* a keyword argument being passed to "probable_prime". 
I've also been pondering what the given variant might look like as a generally available postfix operator, rather than being restricted to if/elif/while clauses, and I think that would have interesting implications for the flexibility of its usage in comprehensions, since there would now be *three* places where "given" could appear (as is already the case for the inline binding operator spelling): - in the result expression - in the iterable expression - in the filter expression That is: [(x, y, x - y) given y = f(x) for x in data] [(x, data) for x in data given data = get_data()] [(x, y, x/y) for x in data if y given y = f(x)] Rather than: [(x, y := f(x), x - y) for x in data] [(x, data) for x in data := get_data()] [(x, y, x/y) for x in data if y := f(x)] Opening it up that way would allow for some odd usages that might need to be discouraged in PEP 8 (like explicitly preferring "probable_prime(n) given n = highbit | randrange(1, highbit, 2)" to "probable_prime(n given n = highbit | randrange(1, highbit, 2))"), but it would probably still be simpler overall than attempting to restrict the construct purely to if/elif/while. Even as a generally available postfix keyword, "given" should still be amenable to the treatment where it could be allowed as a variable name in a non-operator context (since we don't allow two adjacent expressions to imply a function call, it's only prefix keywords that have to be disallowed as names to avoid ambiguity in the parser). Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] __dir__ in which folder is this py file
On 7 May 2018 at 21:42, Nathaniel Smith wrote: > On Mon, May 7, 2018, 03:45 Steven D'Aprano wrote: > >> On Sun, May 06, 2018 at 09:33:03PM -0700, Nathaniel Smith wrote: >> >> > How is >> > >> > data_path = __filepath__.parent / "foo.txt" >> > >> > more distracting than >> > >> > data_path = joinpath(dirname(__file__), "foo.txt") >> >> >> Why are you dividing by a string? That's weird. >> >> [looks up the pathlib docs] >> >> Oh, that's why. It's still weird. >> >> So yes, it's very distracting. >> > > Well, yes, you do have to know the API to use it, and if you happen to > have learned the os.path API but not the pathlib API then of course the > os.path API will look more familiar. I'm not sure what this is supposed to > prove. > I think it strongly suggests that *magically* introducing a path object into a module's namespace would be a bad idea, since it harms readability (merely having `path` in the name isn't a strong enough hint that the object in question is a `pathlib.Path` instance). Your original point is still valid though: given the boilerplate reduction already available via "from pathlib import Path; _this_dir = Path(__file__).parent", it's the pathlib version that needs to be taken as the baseline for how verbose the status quo really is, not the lower level os.path API (no matter how accustomed some of us may still be to using the latter). Cheers, Nick.
Re: [Python-ideas] __dir__ in which folder is this py file
On 7 May 2018 at 20:44, Steven D'Aprano wrote: > First I have to work out what __filepath__ is, then I have to remember > the differences between all the various flavours of pathlib.Path > and suffer a moment or two of existential dread as I try to work out > whether or not *this* specific flavour is the one I need. This might not > matter for heavy users of pathlib, but for casual users, it's a big, > intimidating API with: > > - an important conceptual difference between pure paths and > concrete paths; > - at least six classes; > - about 50 or so methods and properties > Right, but that's why I think this may primarily be a docs issue, as for simple use cases, only one pathlib class matters, and that's "pathlib.Path" (which is the appropriate concrete path type for the running platform), together with its alternate constructors "Path.cwd()" and "Path.home()". So if you spell out the OP's original example with pathlib instead of os.path, you get: from pathlib import Path SRC_DIR = Path(__file__).parent And then SRC_DIR is a rich path object that will mostly let you avoid importing any of: - os - os.path - stat - glob - fnmatch > As far as performance goes, I don't think it matters that we could > technically make pathlib be imported lazily. Many people put all their > pathname manipulations at the beginning of their script, so lazy or not, > the pathlib module is going to be loaded *just after* startup. > It's the fnmatch and re module imports *inside* pathlib that may be worth making lazy, as those currently account for a reasonable chunk of the import time but are only used to implement PurePath.match and _WildcardSelector. That means making them lazy may allow folks to avoid those imports if they don't use any of the wildcard matching features. > For many scripts, this isn't going to matter, but for those who want to > avoid the overhead of pathlib, making it lazy doesn't help. That just > delays the overhead, it doesn't remove it. 
That's fine - it's not uncommon for folks looking to minimise startup overhead to have to opt in to using a lower level API for exactly that reason. Cheers, Nick.
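The "only one class matters for simple cases" point above can be illustrated without touching the filesystem by using the pure-path flavour (a sketch with an assumed example path; in a real script you would use `pathlib.Path(__file__)` instead):

```python
from pathlib import PurePosixPath

# PurePosixPath gives deterministic, filesystem-independent behaviour
# for illustration purposes.
script = PurePosixPath("/home/user/project/app.py")
src_dir = script.parent
assert src_dir == PurePosixPath("/home/user/project")

# The "/" operator and the name/suffix properties replace most of the
# os.path, glob and fnmatch calls a simple script would otherwise need.
data_file = src_dir / "data" / "foo.txt"
assert str(data_file) == "/home/user/project/data/foo.txt"
assert data_file.name == "foo.txt"
assert data_file.suffix == ".txt"
```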
Re: [Python-ideas] __dir__ in which folder is this py file
On 7 May 2018 at 14:33, Nathaniel Smith wrote: > On Sun, May 6, 2018 at 8:47 PM, Nick Coghlan wrote: > > On 7 May 2018 at 13:33, Nathaniel Smith wrote: > >> > >> Spit-balling: how about __filepath__ as a > >> lazily-created-on-first-access pathlib.Path(__file__)? > >> > >> Promoting os.path stuff to builtins just as pathlib is emerging as > >> TOOWTDI makes me a bit uncomfortable. > > > > pathlib *isn't* TOOWTDI, since it takes almost 10 milliseconds to import > it, > > and it introduces a higher level object-oriented abstraction that's > > genuinely distracting when you're using Python as a replacement for shell > > scripting. > > Hmm, the feedback I've heard from at least some folks teaching > intro-python-for-scientists is like, "pathlib is so great for > scripting that it justifies upgrading to python 3". > > How is > > data_path = __filepath__.parent / "foo.txt" > > more distracting than > > data_path = joinpath(dirname(__file__), "foo.txt") > Fair point :) In that case, perhaps the right answer here would be to adjust the opening examples section in the pathlib docs, showing some additional common operations like: _script_dir = Path(__file__).parent _launch_dir = Path.cwd() _home_dir = Path.home() And perhaps in a recipes section: def open_file_from_dir(dir_path, rel_path, *args, **kwds): return open(Path(dir_path) / rel_path, *args, **kwds) (Now that open() accepts path objects natively, I'm inclined to recommend that over the pathlib-specific method spelling) Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
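The recipe sketched above is runnable as-is; here it is exercised against a temporary directory (the use of `tempfile` is just for a self-contained demonstration):

```python
import tempfile
from pathlib import Path

# The recipe from the message above: open() accepts path objects
# natively, so no pathlib-specific open method is needed.
def open_file_from_dir(dir_path, rel_path, *args, **kwds):
    return open(Path(dir_path) / rel_path, *args, **kwds)

with tempfile.TemporaryDirectory() as d:
    with open_file_from_dir(d, "foo.txt", "w") as f:
        f.write("hello")
    with open_file_from_dir(d, "foo.txt") as f:
        content = f.read()
assert content == "hello"
```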
Re: [Python-ideas] __dir__ in which folder is this py file
On 7 May 2018 at 13:33, Nathaniel Smith wrote: > Spit-balling: how about __filepath__ as a > lazily-created-on-first-access pathlib.Path(__file__)? > > Promoting os.path stuff to builtins just as pathlib is emerging as > TOOWTDI makes me a bit uncomfortable. > pathlib *isn't* TOOWTDI, since it takes almost 10 milliseconds to import it, and it introduces a higher level object-oriented abstraction that's genuinely distracting when you're using Python as a replacement for shell scripting. While lazy imports could likely help with the import time problem (since 6.5 of those milliseconds are from importing fnmatch), I think there's also a legitimate argument for a two tier system here, where we say "If you can't handle your filesystem manipulation task with just open, dirname, abspath, and joinpath, then reach for the higher level pathlib abstraction". Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
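The two tiers described above compute the same result; a side-by-side sketch (using the explicit POSIX flavours `posixpath` and `PurePosixPath` so the output is deterministic on any platform):

```python
import posixpath  # the POSIX implementation behind os.path on Unix
from pathlib import PurePosixPath

# Lower tier: plain string manipulation with dirname/join
p1 = posixpath.join(posixpath.dirname("/opt/app/main.py"), "conf", "app.ini")

# Higher tier: the pathlib spelling of the same calculation
p2 = PurePosixPath("/opt/app/main.py").parent / "conf" / "app.ini"

assert p1 == "/opt/app/conf/app.ini"
assert str(p2) == p1
```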
Re: [Python-ideas] A comprehension scope issue in PEP 572
On 7 May 2018 at 13:15, Tim Peters wrote: > [Tim] > >> There's a difference, though: if `y` "leaks", BFD. Who cares? ;-) > >> If `y` remains inaccessible, there's no way around that. > > Part of it is just that people seem to be fighting for the sake of > > fighting. I'm weary of it, and I'm not going to debate this point with > > you. You want 'em to leak? No problem. Implement it that way and I'm > > not going to argue it. > > I'm more interested in real-life use cases than in arguments. My > suggestion came from staring at my real-life use cases, where binding > expressions in comprehensions would clearly be more useful if the > names bound leaked. Nearly all of the time (but not always), they're quite > happy that for-target names don't leak. Those are matters of > observation rather than of argument. > The issue is that because name binding expressions are just ordinary expressions, they can't be defined as "in comprehension scope they do X, in other scopes they do Y" - they have to have consistent scoping semantics regardless of where they appear. However, it occurs to me that a nonlocal declaration clause could be allowed in comprehension syntax, regardless of how any nested name bindings are spelt: p = rem = None while any((rem := n % p) for p in small_primes nonlocal (p, rem)): # p and rem were declared as nonlocal in the nested scope, so our rem and p point to the last bound value I don't really like that though, since it doesn't read as nicely as being able to put the nonlocal declaration inline. Cheers, Nick.
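As it happens, the compromise that shipped with PEP 572 (Python 3.8+) gives exactly the split Tim describes: the ":=" target leaks out of the generator expression, while the for-target does not. A runnable sketch of the primes example under the shipped semantics:

```python
# Python 3.8+ semantics: ":=" inside a genexp binds in the containing
# scope; the for-target stays local to the genexp.
small_primes = [2, 3, 5, 7]
n = 35
found = any((rem := n % p) == 0 for p in small_primes)
assert found is True
assert rem == 0  # rem leaked: 35 % 5 == 0 stopped the search

p_leaked = True
try:
    p  # the for-target did not leak
except NameError:
    p_leaked = False
assert p_leaked is False
```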
Re: [Python-ideas] __dir__ in which folder is this py file
On 7 May 2018 at 12:35, Chris Angelico wrote: > On Mon, May 7, 2018 at 12:13 PM, Nick Coghlan wrote: > > So I have a different suggestion: perhaps it might make sense to propose > > promoting a key handful of path manipulation operations to the status of > > being builtins? > > > > Specifically, the ones I'd have in mind would be: > > > > - dirname (aka os.path.dirname) > > - joinpath (aka os.path.join) > > These two are the basics of path manipulation. +1 for promoting to > builtins, unless pathlib becomes core (which I suspect isn't > happening). > pathlib has too many dependencies to ever make the type available as a builtin: $ ./python -X importtime -c pass 2>&1 | wc -l 25 $ ./python -X importtime -c "import pathlib" 2>&1 | wc -l 53 It's a good way of unifying otherwise scattered standard library APIs, but it's overkill if all you want to do is to calculate and resolve some relative paths. > > - abspath (aka os.path.abspath) > > Only +0.5 on this, as it has to do file system operations. It may be > worthwhile, instead, to promote os.path.normpath, which (like the > others) is purely processing the string form of the path. It'll return > the same value regardless of the file system. 
My rationale for suggesting abspath() over any of its component parts is based on a few key considerations: - "make the given path absolute" is a far more common path manipulation activity than "normalise the given path" (most users wouldn't even know what the latter means - the only reason *I* know what it means is because I looked up the docs for abspath while writing my previous comment) - __file__ isn't always absolute (especially in __main__), so you need to be able to do abspath(__file__) in order to reliably apply dirname() more than once - it can stand in for both os.getcwd() (when applied to the empty string or os.curdir) and os.path.normpath() (when the given path is already absolute), so we get 3 new bits of builtin functionality for the price of one new builtin name - I don't want to read "normpath(joinpath(getcwd(), relpath))" when I could be reading "abspath(relpath)" instead Cheers, Nick.
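The normpath/abspath relationship described above can be shown with pure string processing (using the explicit `posixpath` flavour so the results are deterministic on any platform):

```python
import posixpath  # POSIX flavour of os.path, for deterministic output

# normpath is pure string processing: it collapses "." and ".."
assert posixpath.normpath("a/b/../c/./d") == "a/c/d"

# abspath(p) is essentially normpath(join(cwd, p)); shown here with an
# example base directory standing in for the current working directory
abs_main = posixpath.normpath(posixpath.join("/srv/app", "lib/../main.py"))
assert abs_main == "/srv/app/main.py"

# Once the path is absolute, dirname() can be applied repeatedly
assert posixpath.dirname(abs_main) == "/srv/app"
assert posixpath.dirname(posixpath.dirname(abs_main)) == "/srv"
```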
Re: [Python-ideas] A comprehension scope issue in PEP 572
On 7 May 2018 at 12:51, Nick Coghlan wrote: > If any other form of comprehension level name binding does eventually get > accepted, then inline scope declarations could similarly be used to hoist > values out into the surrounding scope: > > rem = None > while any((nonlocal rem := n % p) for nonlocal p in small_primes): > # p and rem were declared as nonlocal in the nested scope, so > our rem and p point to the last bound value > Thinking about it a little further, I suspect the parser would reject "nonlocal name := ..." as creating a parsing ambiguity at statement level (where it would conflict with the regular nonlocal declaration statement). The extra keyword in the given clause would avoid that ambiguity problem: p = rem = None while any(rem for nonlocal p in small_primes given nonlocal rem = n % p): # p and rem were declared as nonlocal in the nested scope, so our p and rem refer to their last bound values Such a feature could also be used to make augmented assignments do something useful at comprehension scope: input_tally = 0 process_inputs(x for x in input_iter given nonlocal input_tally += x) Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] A comprehension scope issue in PEP 572
On 7 May 2018 at 11:32, Tim Peters wrote: > I have a long history of arguing that magically created lexically > nested anonymous functions try too hard to behave exactly like > explicitly typed lexically nested functions, but that's the trendy > thing to do so I always lose ;-) You have the reasoning there backwards: implicitly nested scopes behave like explicitly nested scopes because that was the *easy* way for me to implement them in Python 3.0 (since I got to re-use all the pre-existing compile time and runtime machinery that was built to handle explicit lexical scopes). Everything else I tried (including any suggestions made by others on the py3k mailing list when I discussed the problems I was encountering) ran into weird corner cases at either compile time or run time, so I eventually gave up and proposed that the implicit scope used to hide the iteration variable name binding be a full nested closure, and we'd just live with the consequences of that. The sublocal scoping proposal in the earlier drafts of PEP 572 was our first serious attempt at defining a different way of doing things that would allow names to be hidden from surrounding code while still being visible in nested suites, and it broke people's brains to the point where Guido explicitly asked Chris to take it out of the PEP :) However, something I *have* been wondering is whether or not it might make sense to allow inline scoping declarations in comprehension name bindings. Then your example could be written: def ...: p = None while any(n % p for nonlocal p in small_primes): # p was declared as nonlocal in the nested scope, so our p points to the last bound value Needing to switch from "nonlocal p" to "global p" at module level would likely be slightly annoying, but also a reminder that the bound name is now visible as a module attribute. 
If any other form of comprehension level name binding does eventually get accepted, then inline scope declarations could similarly be used to hoist values out into the surrounding scope: rem = None while any((nonlocal rem := n % p) for nonlocal p in small_primes): # p and rem were declared as nonlocal in the nested scope, so our rem and p point to the last bound value Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] __dir__ in which folder is this py file
On 7 May 2018 at 03:44, Chris Angelico wrote: > On Mon, May 7, 2018 at 1:05 AM, George Fischhof > wrote: > >> On Sun, May 6, 2018, 1:54 AM Yuval Greenfield > >> wrote: > >>> > >>> Hi Ideas, > >>> > >>> I often need to reference a script's current directory. I end up > writing: > >>> > >>> import os > >>> SRC_DIR = os.path.dirname(__file__) > > I would give +1 for __dirname__ > > Something to keep in mind: making this available to every module, > whether it's wanted or not, means that the Python interpreter has to > prepare that just in case it's wanted. That's extra work as part of > setting up a module. Which, in turn, means it's extra work for EVERY > import, and consequently, slower Python startup. It might only be a > small slowdown, but it's also an extremely small benefit. > It also makes the name show up in dir(mod) for every module, and we're currently looking for ways to make that list *shorter*, not longer. So I have a different suggestion: perhaps it might make sense to propose promoting a key handful of path manipulation operations to the status of being builtins? Specifically, the ones I'd have in mind would be: - dirname (aka os.path.dirname) - joinpath (aka os.path.join) - abspath (aka os.path.abspath) Why those 3? Because with just those three operations you can locate other files relative to `__file__`, the current working directory [1], and arbitrary absolute paths, as well as remove path traversal notation like ".." and "." from the resulting paths (since abspath() internally calls normpath()). 
_launch_dir = abspath('') def open_from_launch_dir(relpath, mode='r'): return open(abspath(joinpath(_launch_dir, relpath)), mode) _script_dir = dirname(abspath(__file__)) def open_from_script_dir(relpath, mode='r'): return open(abspath(joinpath(_script_dir, relpath)), mode) You'd still need to import pathlib or os.path for more complex path manipulations, but they generally wouldn't be needed any more if all you're doing is reading and/or writing a handful of specific files. Cheers, Nick. [1] abspath can stand in for os.getcwd(), since you can spell the latter as abspath('.') or abspath(''), and we could potentially even make it so you can retrieve the cwd via just abspath() -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Python-ideas Digest, Vol 138, Issue 32
On 6 May 2018 at 07:59, Angus Hollands wrote: > If, however, the motivation for the PEP was deemed significant enough to > warrant its inclusion in a future release, then I would like to suggest > that the keyword approach is superior to the operator variant. In > particular, I prefer the `where` to the `given` or 'let' candidates, as I > think it is more descriptive and slightly shorter to type ;) > Aside from an API naming conflict with NumPy, the key problem with using "where" for this purpose is that "where" is explicitly used to name *filtering* clauses in SQL, NumPy, and other contexts ("having" is used in a similar way for filtering on SQL aggregate groups). So in "[(x, y, x/y) for x in data if y where y = f(x)]", having both an "if" clause and a "where" clause makes it look like there are two filters being defined (and a misspelt "==" in the second one), rather than a filter and a name binding. The virtue of "given" is that in any context which uses it, the intended meaning is to associate a name with a value (either directly, as in the mathematical usage, or indirectly, as in the hypothesis API usage), which is exactly the meaning we're interested in here. Cheers, Nick.
Re: [Python-ideas] Inline assignments using "given" clauses
On 6 May 2018 at 02:06, Nick Coghlan wrote: > On 4 May 2018 at 22:06, Nick Coghlan wrote: > >> (Note: Guido's already told me off-list that he doesn't like the way this >> spelling reads, but I wanted to share it anyway since it addresses one of >> the recurring requests in the PEP 572 discussions for a more targeted >> proposal that focused specifically on the use cases that folks had agreed >> were reasonable potential use cases for inline assignment expressions. >> >> I'll also note that another potential concern with this specific proposal >> is that even though "given" wasn't used as a term in any easily discovered >> Python APIs back when I first wrote PEP 3150, it's now part of the >> Hypothesis testing API, so adopting it as a keyword now would be markedly >> more disruptive than it might have been historically) >> > > Since I genuinely don't think this idea is important enough to disrupt > hypothesis's public API, I've been pondering potential synonyms that are > less likely to be common in real world code, while still being > comprehensible and useful mnemonics for the proposed functionality. > I've also been pondering Tim's suggestion of instead enhancing the code generation pipeline's native support for pseudo-keywords in a way that can be explicitly represented in the Grammar and AST, rather than having to be hacked in specifically every time we want to introduce a new keyword without a mandatory __future__ statement (and without creating major compatibility headaches for code that uses those new keywords as attribute or variable names). While I haven't actually tried this out yet, the way I'm thinking that might look is to add the following nodes to the grammar: name_plus: NAME | pseudo_keyword pseudo_keyword: 'given' and then replace all direct uses of 'NAME' in the grammar with 'name_plus'. That way, to allow a new keyword to be used as a name in addition to its syntactic use case, we'd add it to the pseudo_keyword list in addition to adding it to the grammar. 
To avoid ambiguities in the grammar, this could only be done for keywords that *can't* be used to start a new expression or statement (so it wouldn't have been sufficient for the async/await case, since 'async' can start statements, and 'await' can start both statements and expressions). So if Guido's view on the out-of-order execution approach to inline name binding softens, I think this would be a better approach to pursue than making a more awkward keyword choice. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Inline assignments using "given" clauses
On 4 May 2018 at 22:06, Nick Coghlan wrote: > (Note: Guido's already told me off-list that he doesn't like the way this > spelling reads, but I wanted to share it anyway since it addresses one of > the recurring requests in the PEP 572 discussions for a more targeted > proposal that focused specifically on the use cases that folks had agreed > were reasonable potential use cases for inline assignment expressions. > > I'll also note that another potential concern with this specific proposal > is that even though "given" wasn't used as a term in any easily discovered > Python APIs back when I first wrote PEP 3150, it's now part of the > Hypothesis testing API, so adopting it as a keyword now would be markedly > more disruptive than it might have been historically) > Since I genuinely don't think this idea is important enough to disrupt hypothesis's public API, I've been pondering potential synonyms that are less likely to be common in real world code, while still being comprehensible and useful mnemonics for the proposed functionality. The variant I've most liked is the word "letting" (in the sense of "while letting these names have these values, do ..."): # Exactly one branch is executed here if m letting m = pattern.search(data): ... elif m letting m = other_pattern.search(data): ... else: ... # This name is rebound on each trip around the loop while m letting m = pattern.search(remaining_data): ... # "f(x)" is only evaluated once on each iteration result = [(x, y, x/y) for x in data if y letting y = f(x)] # Tim's "bind two expressions" example if diff and g > 1 letting diff = x - x_base, g = gcd(diff, n): return g # The "bind two expressions" example across multiple lines while diff and g > 1 letting ( diff = x - x_base, g = gcd(diff, n), ): ... # Do something with diff and g Cheers, Nick. 
Re: [Python-ideas] Inline assignments using "given" clauses
On 5 May 2018 at 13:36, Tim Peters wrote: > [Nick Coghlan ] > > ... > > The essence of the given clause concept would be to modify *these specific > > cases* (at least initially) to allow the condition expression to be followed > > by an inline assignment, of the form "given TARGET = EXPR". > > I'm not clear on what "these specific cases" are, specifically ;-) > Conditions in "if", "elif", and "while" statement expressions? > Exactly the 3 cases presented (if/elif/while conditions). The usage in comprehensions would mirror the usage in if statements, and avoid allowing name bindings in arbitrary locations due to the implicitly nested scopes used by comprehensions. Conditional expressions would be initially omitted since including them would allow arbitrary name bindings in arbitrary locations via the quirky "x if x given x = expr else x" spelling, and because "else" isn't as distinctive an ending token as "given ... :", "given ... )", "given ... ]", or "given ... }". > Restricted to one "given" clause, or can they chain? In a listcomp, > is it one "given" clause per "if", or after at most one "if"? Or is > an "if" even needed at all in a listcomp? For example, > > [(f(x)**2, f(x)**3) for x in xs] > > has no conditions, and > > [((fx := f(x))**2, fx**3) for x in xs] > > is one reasonable use for binding expressions. > > [(fx**2, fx**3) for x in xs given fx = f(x)] > > reads better, although it's initially surprising (to my eyes) to find > fx defined "at the end". But no more surprising than the current: > > [(fx**2, fx**3) for x in xs for fx in [f(x)]] > > trick. > > There were a couple of key reasons I left the "for x in y" case out of the initial proposal: 1. The "for x in y" header is already quite busy, especially when tuple unpacking is used in the assignment target 2. Putting the "given" clause at the end would make it ambiguous as to whether it's executed once when setting up the iterator, or on every iteration 3. 
You can stick in an explicit "if True" if you don't need the given variable in the filter condition [(fx**2, fx**3) for x in xs if True given fx = f(x)] And then once you've had an entire release where the filter condition was mandatory for the comprehension form, allowing the "if True" in "[(fx**2, fx**3) for x in xs given fx = f(x)]" to be implicit would be less ambiguous. [snip] > It's certainly sanest as > > if x**2 + y**2 > 9 given x, y = func_returning_twople(): > > "given" really shines there! > Yep, that's why I don't have the same immediate "It would need to be limited to simple names as targets" reaction as I do for assignment expressions. It might still be a good restriction to start out with, though (especially if we wanted to allow multiple name bindings in a single given clause). [snip] > The one-letter variable name obscures that it doesn't > actually reduce _redundancy_, though. That is, in the current > > match = pattern.search(data) > if match: > > it's obviously less redundant typing as: > > if match := pattern.search(data): > > In > > if match given match = pattern.search(data): > > the annoying visual redundancy (& typing) persists. > Right, but that's specific to the case where the desired condition really is just "bool(target)". That's certainly likely to be a *common* use case, but if we decide that it's *that* particular flavour of redundancy that really bothers us, then there's always the "if expr as name:" spelling (similar to the way that Python had "a and b" and "a or b" logical control flow operators long before it got "a if c else b"). 
One more, a lovely (to my eyes) binding expression simplification > requiring two bindings in an `if` test, taken from real-life code I > happened to write during the PEP discussion: > > diff = x - x_base > if diff: > g = gcd(diff, n) > if g > 1: > return g > > collapsed to the crisp & clear: > > if (diff := x - x_base) and (g := gcd(diff, n)) > 1: > return g > > If only one trailing "given" clause can be given per `if` test > expression, presumably I couldn't do that without trickery. I was actually thinking that if we did want to allow multiple assignments, and we limited targets to single names, we could just use a comma as a separator: if diff and g > 1 given diff = x - x_base, g = gcd(diff, n): return g
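Tim's two-binding example is directly runnable in the ":=" spelling that was eventually adopted (Python 3.8+); a self-contained sketch wrapping it in a hypothetical helper function for testing:

```python
import math

def find_factor(x, x_base, n):
    # The crisp two-binding form from the discussion: "diff" short-circuits
    # the gcd call entirely when it is zero.
    if (diff := x - x_base) and (g := math.gcd(diff, n)) > 1:
        return g
    return None

assert find_factor(10, 3, 21) == 7     # diff=7, gcd(7, 21) == 7 > 1
assert find_factor(5, 5, 21) is None   # diff=0 short-circuits
assert find_factor(10, 9, 35) is None  # gcd(1, 35) == 1, not > 1
```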
Re: [Python-ideas] Inline assignments using "given" clauses
On 6 May 2018 at 00:33, Mikhail V wrote: > recently I have discovered interesting fact: it seems one > can already use 'inline assignment' with current syntax. E.g.: > > if exec("m = input()") or m: > print (m) > > It seem to work as inline assignment correctly. > Yes it is just coincidence, BUT: this has the wanted effect! > - hides one line in "if" and "while" > - leaves the assignment statement (or is it expression now? ;) > - has explicit appearance > > So we have it already? > Not really, since: 1. exec with a plain string has a very high runtime cost (the compiler is pretty slow, since we assume the results will be cached) 2. exec'ed strings are generally opaque to static analysis tools 3. writing back to the local namespace via exec doesn't work consistently at function scope (and assuming PEP 558 is eventually accepted, will some day reliably *never* work) Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
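Point 3 above is easy to demonstrate: exec() can bind names in an explicit namespace (or at module level), but its writes never reach a function's fast locals:

```python
# exec() against an explicit dict works as expected...
namespace = {}
exec("m = 42", namespace)
assert namespace["m"] == 42

def attempt():
    exec("m = 42")       # writes to a snapshot of the local namespace
    try:
        return m         # compiled as a global lookup (m is never a
                         # visible local), so the exec binding is invisible
    except NameError:
        return None

# ...but the function-scope version cannot see the binding
assert attempt() is None
```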
[Python-ideas] Inline assignments using "given" clauses
(Note: Guido's already told me off-list that he doesn't like the way this spelling reads, but I wanted to share it anyway since it addresses one of the recurring requests in the PEP 572 discussions for a more targeted proposal that focused specifically on the use cases that folks had agreed were reasonable potential use cases for inline assignment expressions. I'll also note that another potential concern with this specific proposal is that even though "given" wasn't used as a term in any easily discovered Python APIs back when I first wrote PEP 3150, it's now part of the Hypothesis testing API, so adopting it as a keyword now would be markedly more disruptive than it might have been historically) Recapping the use cases where the inline assignment capability received the most agreement regarding being potentially more readable than the status quo: 1. Making an "exactly one branch is executed" construct clearer than is the case for nested if statements: if m := pattern.search(data): ... elif m := other_pattern.search(data): ... else: ... 2. Replacing a loop-and-a-half construct: while m := pattern.search(remaining_data): ... 3. Sharing values between filtering clauses and result expressions in comprehensions: result = [(x, y, x/y) for x in data if (y := f(x))] The essence of the given clause concept would be to modify *these specific cases* (at least initially) to allow the condition expression to be followed by an inline assignment, of the form "given TARGET = EXPR". 
(Note: being able to implement such a syntactic constraint is a general consequence of using a ternary notation rather than a binary one, since it allows the construct to start with an arbitrary expression, without requiring that expression to be both the result of the operation *and* the value bound to a name - it isn't unique to the "given" keyword specifically) While the leading keyword would allow TARGET to be an arbitrary assignment target without much chance for confusion, it could also be restricted to simple names instead (as has been done for PEP 572). With that spelling, the three examples above would become: # Exactly one branch is executed here if m given m = pattern.search(data): ... elif m given m = other_pattern.search(data): ... else: ... # This name is rebound on each trip around the loop while m given m = pattern.search(remaining_data): ... # "f(x)" is only evaluated once on each iteration result = [(x, y, x/y) for x in data if y given y = f(x)] Constraining the syntax that way (at least initially) would avoid poking into any dark corners of Python's current scoping and expression execution ordering semantics, while still leaving the door open to later making "result given NAME = expr" a general purpose ternary operator that returns the LHS, while binding the RHS to the given name as a side effect. Using a new keyword (rather than a symbol) would make the new construct easier to identify and search for, but also comes with all the downsides of introducing a new keyword. (Hence the not-entirely-uncommon suggestion of using "with" for a purpose along these lines, which runs into a different set of problems related to trying to use "with" for two distinct and entirely unrelated purposes). Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
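Since PEP 572 was ultimately accepted with the := spelling, the three motivating use cases above can now be written and run directly (patterns and data here are illustrative):

```python
import re

pattern = re.compile(r"\d+")
other_pattern = re.compile(r"[a-z]+")

# 1. "Exactly one branch is executed" chain
data = "no digits here"
if m := pattern.search(data):
    kind = "number"
elif m := other_pattern.search(data):
    kind = "word"
else:
    kind = "nothing"
assert kind == "word" and m.group() == "no"

# 2. Loop-and-a-half: rebind on each trip around the loop
chunks = iter(["a", "b", ""])
seen = []
while chunk := next(chunks):
    seen.append(chunk)
assert seen == ["a", "b"]

# 3. Share a value between the filter and the result expression,
#    evaluating it only once per iteration
result = [(x, y, x / y) for x in [0, 1, 2] if (y := x * 10)]
assert result == [(1, 10, 0.1), (2, 20, 0.1)]
```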
Re: [Python-ideas] A way to subscript a single integer from bytes
On 1 May 2018 at 21:30, Antoine Pitrou wrote: > > Hi Ken, > > On Tue, 1 May 2018 19:22:52 +0800 > Ken Hilton wrote: > > > > So I'm pretty sure everyone here is familiar with how the "bytes" object > > works in Python 3. It acts mostly like a string, with the exception that > > 0-dimensional subscripting (var[idx]) returns an integer, not a bytes > > object - the integer being the ordinal number of the corresponding > > character. > > However, 1-dimensional subscripting (var[idx1:idx2]) returns a bytes > > object. Example: > > > > >>> a = b'hovercraft' > > >>> a[0] > > 104 > > >>> a[4:8] > > b'rcra' > > > > Though this isn't exactly unexpected behavior (it's not possible to > > accidentally do 1-dimensional subscripting and expect an integer - it's a > > different syntax), it's still a shame that it isn't possible to quickly and > > easily subscript an integer out of it. Following up from the previous > > example, the only way to get 493182234161465432041076 out of b'hovercraft' > > in a single expression is as follows: > > > > list(__import__('itertools').accumulate((i for i in a), lambda x, y: (x << 8) + y))[-1] > > Let's see: > > >>> a = b'hovercraft' > >>> int.from_bytes(a, 'big') > 493182234161465432041076 > It's also worth noting that if there's more than one integer of interest in the string, then using the struct module is often going to be better than using multiple slices and int.from_bytes calls: >>> import struct >>> data = b"hovercraft" >>> struct.unpack(">IIH", data) (1752135269, 1919119969, 26228) (The struct module doesn't handle arbitrary length integers, but it handles 8, 16, 32, and 64 bit ones, which is enough for a lot of common use cases) Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
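The two approaches agree with each other, as a quick check shows (values taken from the interactive sessions quoted above):

```python
import struct

data = b"hovercraft"

# arbitrary-length integer via int.from_bytes
big = int.from_bytes(data, "big")
assert big.to_bytes(10, "big") == data  # round-trips

# fixed-width pieces via struct
parts = struct.unpack(">IIH", data)
assert parts == (1752135269, 1919119969, 26228)

# the struct pieces recombine into the arbitrary-length integer:
# 4 bytes (<<48) + 4 bytes (<<16) + 2 bytes
assert (parts[0] << 48) | (parts[1] << 16) | parts[2] == big
```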
Re: [Python-ideas] Sublocal scoping at its simplest
On 29 April 2018 at 21:24, Chris Angelico wrote: > On Sun, Apr 29, 2018 at 6:03 PM, Nick Coghlan wrote: > > The challenge with doing this implicitly is that there's no indication > > whatsoever that the two "e"'s are different, especially given the > > longstanding precedent that the try/except level one will overwrite any > > existing reference in the local namespace. > > My intention is that the "except" statement IS the indication that > they're different. Now that the name gets unbound at the exit of the > clause, the only indication that it overwrites is that, after "except > Exception as e:", any previous e has been disposed of. I'd hardly call > that a feature. Can you show me code that actually DEPENDS on this > behaviour? > That's not the bar the proposal needs to meet, though: it needs to meet the bar of being *better* than the status quo of injecting an implicit "del e" at the end of the suite. While the status quo isn't always convenient, it has two main virtues: 1. It's easily explained in terms of the equivalent "del" statement 2. Given that equivalence, it's straightforward to avoid the unwanted side effects by either adjusting your exact choices of names (if you want to avoid overwriting an existing name), or else by rebinding the caught exception to a different name (if you want to avoid the exception reference getting dropped). I do agree that *if* sublocal scopes existed, *then* they would offer a reasonable implementation mechanism for block-scoped name binding in exception handlers. However, exception handlers don't offer a good motivation for *adding* sublocal scopes, simply because the simpler "implicitly unbind the name at the end of the block" approach works well enough in practice. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
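The status quo described above (the implicit "del e" at the end of the handler) and the rebinding workaround can both be seen in a short CPython sketch:

```python
def f():
    e = 2.71828
    try:
        1 / 0
    except Exception as e:
        err = e  # rebind if the exception must outlive the clause
    # "e" was implicitly deleted at the end of the except suite,
    # so referencing it now fails - it does NOT revert to 2.71828
    try:
        e
    except UnboundLocalError:
        return "unbound", err

assert f()[0] == "unbound"
assert isinstance(f()[1], ZeroDivisionError)
```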
Re: [Python-ideas] Sublocal scoping at its simplest
On 29 April 2018 at 13:14, Chris Angelico wrote: > There's been a lot of talk about sublocal scopes, within and without > the context of PEP 572. I'd like to propose what I believe is the > simplest form of sublocal scopes, and use it to simplify one specific > special case in Python. > > There are no syntactic changes, and only a very slight semantic change. > > def f(): > e = 2.71828 > try: > 1/0 > except Exception as e: > print(e) > print(e) > f() > > The current behaviour of the 'except... as' statement is as follows: > > 1) Bind the caught exception to the name 'e', replacing 2.71828 > 2) Execute the suite (printing "Division by zero") > 3) Set e to None > 4) Unbind e > > Consequently, the final print call raises UnboundLocalError. I propose > to change the semantics as follows: > > 1) Bind the caught exception to a sublocal 'e' > 2) Execute the suite, with the reference to 'e' seeing the sublocal > 3) Set the sublocal e to None > 4) Unbind the sublocal e > > At the unindent, the sublocal name will vanish, and the original 'e' > will reappear. Thus the final print will display 2.71828, just as it > would if no exception had been raised. > The challenge with doing this implicitly is that there's no indication whatsoever that the two "e"'s are different, especially given the longstanding precedent that the try/except level one will overwrite any existing reference in the local namespace. By contrast, if the sublocal marker could be put on the *name itself*, then: 1. Sublocal names are kept clearly distinct from ordinary names 2. Appropriate sublocal semantics can be defined for any name binding operation, not just exception handlers 3. 
When looking up a sublocal for code compiled in exec or eval mode, missing names can be identified and reported at compile time (just as they can be for nonlocal declarations) (Such a check likely wouldn't be possible for code compiled in "single" mode, although working out a suitable relationship between sublocal scoping and the interactive prompt is likely to prove tricky no matter what) Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Allow multiple imports from a package while preserving its namespace
On 28 April 2018 at 02:18, Eric Snow wrote: > On the plus side, it means one less thing for programmers to do. On > the minus side, I find the imports at the top of the file to be a nice > catalog of external dependencies. Implicitly importing submodules > would break that. > > The idea might be not as useful since the programmer would have to use > the fully qualified name (relative to the "top-level" package). So > why not just put that in the import statement? > I'm mainly thinking of it in terms of inadvertently breaking abstraction layers. Right now, implementation decisions of third-party package authors get exposed to end users, since whether a submodule is eagerly imported by the parent module, explicitly configured as a lazy import, or neither, determines what client code has to write: Parent package eagerly imports submodules: no explicit submodule import needed, just import the parent package Parent package explicitly sets up a lazy submodule import: no explicit submodule import needed, just import the parent package Parent package doesn't do either: possible AttributeError at time of use depending on whether or not you or someone else has previously run "import package.submodule" (or an equivalent) The current attribute error is cryptic (and not always raised if some other module has already done the import!), so trying the submodule import implicitly would also provide an opportunity to customise the way the failure is reported. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Should __builtins__ have some kind of pass-through print function, for debugging?
On 27 April 2018 at 21:27, Steven D'Aprano wrote: > Obviously dp() would have to be magic. There's no way that I know of for > a Python function to see the source code of its own arguments. I have no > idea what sort of deep voodoo would be required to make this work. But > if it could work, wow, that would really be useful. And not just for > beginners. > If you relax the enhancement to just noting the line where the debug print came from, it doesn't need to be deep compiler magic - the same kind of stack introspection that warnings and tracebacks use would suffice. (Stack introspection to find the caller's module, filename and line number, linecache to actually retrieve the line if we want to print that). Cheers, Nick. P.S. While super() is a *little* magic, it isn't *that* magic - it gets converted from "super()" to "super(name_of_first_param, __class__)". And even that limited bit of magic has proven quirky enough to be a recurring source of irritation when it comes to interpreter maintenance. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
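The relaxed stack-introspection approach sketched above could look something like this (dp and its output format are purely hypothetical, and sys._getframe is a CPython implementation detail):

```python
import linecache
import sys

def dp(value):
    # Hypothetical debug-print helper: report the caller's file and
    # line (plus the source line, when available), then pass the
    # value through unchanged.
    frame = sys._getframe(1)
    filename = frame.f_code.co_filename
    lineno = frame.f_lineno
    line = linecache.getline(filename, lineno).strip()
    print(f"dp at {filename}:{lineno}: {line or '<source unavailable>'}"
          f" -> {value!r}", file=sys.stderr)
    return value

assert dp(2 + 2) == 4  # prints the call site to stderr, returns the value
```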
Re: [Python-ideas] Allow multiple imports from a package while preserving its namespace
On 27 April 2018 at 01:22, Serhiy Storchaka wrote: > I think this special cases isn't special enough to introduce a special > syntax. While I'm mostly inclined to agree, I do think it would be nice to have a clean spelling for "ensure this module is fully imported, but don't bind it locally". Right now, the most concise spelling I can think of for that would be to bind all the imported modules to the same throwaway variable, then delete that variable: import pkg from pkg import mod1 as __, mod2 as __, mod3 as __ del __ Allowing "as None" to mean "Don't bind this name at all" would give: import pkg from pkg import mod1 as None, mod2 as None, mod3 as None from . import views from .views import base as None It would be a bit odd though, since "None = anything" is normally a syntax error. Taking this idea in a completely different direction: what if we were to take advantage of PEP 451 __spec__ attributes to enhance modules to natively support implicit on-demand imports before they give up and raise AttributeError? (Essentially making all submodule imports implicitly lazy if you don't explicitly import them - you'd only be *required* to explicitly import top level modules, with everything under that being an implementation detail of that top level package) Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
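The "implicitly lazy submodule import" idea later became expressible in pure Python via module-level __getattr__ (PEP 562, Python 3.7+). A minimal sketch using a throwaway on-disk package (all names illustrative; a real version would also translate ModuleNotFoundError into AttributeError):

```python
import os
import sys
import tempfile

# Build a throwaway package whose __init__ falls back to importing
# submodules on attribute access via module-level __getattr__
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "lazypkg")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write(
        "import importlib\n"
        "def __getattr__(name):\n"
        "    # try a submodule import before giving up\n"
        "    return importlib.import_module('.' + name, __name__)\n"
    )
with open(os.path.join(pkg_dir, "mod1.py"), "w") as f:
    f.write("value = 42\n")

sys.path.insert(0, tmp)
import lazypkg

# No explicit "import lazypkg.mod1" was needed
assert lazypkg.mod1.value == 42
```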
Re: [Python-ideas] Allow multiple imports from a package while preserving its namespace
On 26 April 2018 at 23:37, Paul Moore wrote: > On 26 April 2018 at 14:29, Julian DeMille via Python-ideas > wrote: >> I personally would like a feature where instead of doing `from ... import >> ...` (which imports the specified items into the current namespace), one >> could use something along the lines of `import .{ , , ... >> }` such that the imported modules/attributes could be accessed as >> `.`, etc. > > What are the benefits of this over a simple "import "? Forcing submodule imports would be the main thing, as at the moment, you have to choose between repeating the base name multiple times (once per submodule) or losing the hierarchical namespace. So where: from pkg import mod1, mod2, mod3 bind "mod1", "mod2", and "mod3" in the current namespace, you might instead write: from pkg import .mod1, .mod2, .mod3 to only bind "pkg" locally, but still make sure "pkg.mod1", "pkg.mod2" and "pkg.mod3" all resolve at import time. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
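For contrast, the existing spelling that preserves the hierarchical namespace today is a plain dotted import, which forces the submodule import but binds only the top-level name (this is the repetition the hypothetical "from pkg import .mod1" form would avoid when several submodules are needed):

```python
import xml.sax

# The submodule is fully imported and resolvable via the parent...
assert xml.sax.__name__ == "xml.sax"

# ...but only the top-level name was bound in this namespace:
assert "xml" in globals()
assert "sax" not in globals()
```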
Re: [Python-ideas] Change magic strings to enums
On 26 April 2018 at 19:37, Jacco van Dorp wrote: > I'm kind of curious why everyone here seems to want to use IntFlags > and other mixins. The docs themselves say that their use should be > minimized, and tbh I agree with them. Backwards compatibility can be > maintained by allowing the old value and internally converting it to > the enum. Combinability is inherent to enum.Flags. There'd be no real > reason to use mixins as far as I can see ? Serialisation formats are a good concrete example of how problems can arise by switching out concrete types on people: >>> import enum, json >>> a = "A" >>> class Enum(enum.Enum): ... a = "A" ... >>> class StrEnum(str, enum.Enum): ... a = "A" ... >>> json.dumps(a) '"A"' >>> json.dumps(StrEnum.a) '"A"' >>> json.dumps(Enum.a) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib64/python3.6/json/__init__.py", line 231, in dumps return _default_encoder.encode(obj) File "/usr/lib64/python3.6/json/encoder.py", line 199, in encode chunks = self.iterencode(o, _one_shot=True) File "/usr/lib64/python3.6/json/encoder.py", line 257, in iterencode return _iterencode(o, 0) File "/usr/lib64/python3.6/json/encoder.py", line 180, in default o.__class__.__name__) TypeError: Object of type 'Enum' is not JSON serializable The mixin variants basically say "If you run into code that doesn't natively understand enums, act like an instance of this type". Since most of the standard library has been around for years, and sometimes even decades, we tend to face a *lot* of backwards compatibility requirements along those lines. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
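The same contrast in a self-contained form (class names illustrative; note that Python 3.11 later added enum.StrEnum for exactly this mixin pattern):

```python
import enum
import json

class Color(str, enum.Enum):  # str mixin: acts like a string to json
    RED = "red"

class PlainColor(enum.Enum):  # no mixin: opaque to unaware code
    RED = "red"

# The mixin member serialises as its string value...
assert json.dumps(Color.RED) == '"red"'
# ...and supports str methods directly
assert Color.RED.upper() == "RED"

# The plain Enum member is rejected by json
try:
    json.dumps(PlainColor.RED)
    serialised = True
except TypeError:
    serialised = False
assert serialised is False
```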
Re: [Python-ideas] Change magic strings to enums
On 25 April 2018 at 04:56, Ethan Furman wrote: > On 04/24/2018 10:32 AM, Antoine Pitrou wrote: > >> Also beware the import time cost of having a widely-used module like >> "warnings" depend on the "enum" module and its own dependencies. > > > With all the recent changes to Python, I should go through and see which > dependencies are no longer needed. I was checking this with "./python -X importtime -c 'import enum'", and the overall import time was around 9 ms with a cold disk cache, and 2 ms with a warm one. In both cases, importing "types" and "_collections" accounted for around a 3rd of the time, with the bulk of the execution time being enum's own module level code. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Change magic strings to enums
On 25 April 2018 at 01:06, Jacco van Dorp wrote: > I guess we could add inconsistency as a con, then, since if the import > system isn't working at places where you'd like to use the Enums (or > even executing python code ?). This would mean that to the casual > observer, it'd be arbitrary where they could be used instead. Running './python -X importtime -Wd -c "pass"' with Python 3.7 gives a pretty decent list of the parts of the standard library that constitute the low level core that we try to keep independent of everything else (there's a slightly smaller core that omits the warnings module and its dependencies - leaving "-Wd" off the command line will give that list). > I wonder how many of these would be in places used by most people, > though. > I don't mind putting in some time to figure it out, but I have > no idea where to start. Is there any easily searchable place where I > could scan the standard library for occurrences of magic strings ? Searching the documentation for :data: fields, and then checking those to see which ones had already been converted to enums would likely be your best bet. You wouldn't be able to get a blanket approval for "Let's convert all the magic strings to Enums" though - you'd need to make the case that each addition of a new Enum provided a genuine API improvement for the affected module (e.g. I suspect a plausible case could potentially be made for converting some of the inspect module state introspection APIs over to StringEnum, so it was easier to iterate over the valid states in a consistent way, but even there I'd need to see a concrete proposal before I made up my mind). Making the case for IntEnum usage tends to be much easier, simply due to the runtime introspection benefits that it brings. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Change magic strings to enums
On 24 April 2018 at 22:52, Jacco van Dorp wrote: > A bit ago I was reading some of the python docs ( > https://docs.python.org/3.6/library/warnings.html ), the warnings > module, and I noticed a table of magic strings. > > I can think of a few other places where magic strings are used - for > example, string encoding/decoding locales and strictness, and probably > a number of other places. > > Since Python 3.4, we've had Enums. > > Wouldn't it be cleaner to use enums by default instead of those magic > strings ? for example, for warnings filter actions, (section 29.5.2), > quite near the top of the page. "It's cleaner" isn't a user problem though. The main justification for using enums is that they're easier to interpret in log messages and exception tracebacks than opaque constants, and that argument is much weaker for well-chosen string constants than it is for other constants (like the numeric constants in the socket and errno modules). For backwards compatibility reasons, we'd want to keep accepting the plain string versions anyway (implicitly converting them to their enum counterparts). At a less philosophical level, many of the cases where we use magic strings are in code that has to work even when the import system isn't working yet - that's relatively straightforward to achieve when the code is only relying on strings with particular contents, but *much* harder if they're relying on a higher level type like enum objects. Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
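The "keep accepting plain strings, implicitly converting" compatibility story works because calling an Enum class with a member's value (or with the member itself) returns the member. A sketch with hypothetical names loosely modelled on the warnings filter actions:

```python
import enum

class FilterAction(enum.Enum):
    # hypothetical members, illustrative only
    ERROR = "error"
    IGNORE = "ignore"
    ALWAYS = "always"

def set_action(action):
    # Accept either the enum member or the legacy magic string,
    # converting internally: FilterAction("error") -> FilterAction.ERROR,
    # and FilterAction(FilterAction.ERROR) is a no-op.
    return FilterAction(action)

assert set_action("error") is FilterAction.ERROR
assert set_action(FilterAction.IGNORE) is FilterAction.IGNORE
```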
Re: [Python-ideas] Spelling of Assignment Expressions PEP 572 (was post #4)
On 16 April 2018 at 00:27, Thautwarm Zhao wrote: > Personally I prefer "as", but I think without a big change of python Grammar > file, it's impossible to avoid parsing "with expr as name" into "with (expr > as name)" because "expr as name" is actually an "expr". > I have mentioned this in previous discussions and it seems it's better to > warn you all again. I don't think people of Python-Dev are willing to > implement a totally new Python compiler. We have ways of cheating a bit if we want to reinterpret the semantics of something that nevertheless parses cleanly - while the parser is limited to single token lookahead, it's straightforward for the subsequent code generation stage to look a single level down in the parse tree and see that the code that parsed as "with expr" is actually "with subexpr as target". So the main concern around "with (name as expr)" is with human readers getting confused, not the compiler, as we can tell the latter to implement whichever semantics we decide we want, while humans are far less tractable :) Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Idea: Importing from arbitrary filenames
On 16 April 2018 at 03:45, Steve Barnes wrote: > On 15/04/2018 08:12, Nick Coghlan wrote: >> The discoverability of these kinds of techniques could definitely >> stand to be improved, but the benefit of adopting them is that they >> work on all currently supported versions of Python (even >> importlib.import_module exists in Python 2.7 as a convenience wrapper >> around __import__), rather than needing to wait for new language level >> syntax for them. > > As you say not too discoverable at the moment - I have just reread > PEP328 & https://docs.python.org/3/library/importlib.html but did not > find any mention of these mechanisms or even that setting an external > __path__ variable existed as a possibility. Yeah, the fact that "packages are ultimately just modules with a __path__ attribute that works like sys.path" tends to get obscured by the close association between package hierarchies and file system layouts in the default filesystem importer. The docs for that are all the way back in PEP 302: https://www.python.org/dev/peps/pep-0302/#packages-and-the-role-of-path > Maybe a documentation enhancement proposal would be in order? If we're not covering explicit __path__ manipulation anywhere, we should definitely mention that possibility. https://docs.python.org/3/library/pkgutil.html#pkgutil.extend_path does talk about it, but only in the context of scanning sys.path for matching names, not in the context of building a package from an arbitrary set of directory names. I'm not sure where we could put an explanation of some of the broader implications of that fact, though - while __path__ manipulation is usually fairly safe, we're always a little hesitant about encouraging too many dynamic modifications to the import system state, since it can sometimes have odd side effects based on whether imports happen before or after that state is adjusted. Cheers, Nick.
-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
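The "packages are ultimately just modules with a __path__ attribute that works like sys.path" point can be demonstrated by building a package out of an arbitrary directory at runtime (names here are illustrative, and this is exactly the kind of dynamic import-state manipulation the message cautions about):

```python
import importlib
import os
import sys
import tempfile
import types

# Create a directory containing a module, outside any real package
extra_dir = tempfile.mkdtemp()
with open(os.path.join(extra_dir, "extra.py"), "w") as f:
    f.write("value = 'found via __path__'\n")

# A "package" is just a module object with a __path__ list
pkg = types.ModuleType("dynpkg")
pkg.__path__ = [extra_dir]        # point it at the arbitrary directory
sys.modules["dynpkg"] = pkg

mod = importlib.import_module("dynpkg.extra")
assert mod.value == "found via __path__"
```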
Re: [Python-ideas] Rewriting file - pythonic way
On 15 April 2018 at 20:47, Paul Moore wrote: > On 15 April 2018 at 11:22, Elazar wrote: >> בתאריך יום א׳, 15 באפר׳ 2018, 13:13, מאת Serhiy Storchaka >> : >>> Actually the reliable code should write into a separate file and replace >>> the original file by the new file only if writing is successful. Or >>> backup the old file and restore it if writing is failed. Or do both. And >>> handle hard and soft links if necessary. And use file locks if needed to >>> prevent race condition when read/write by different processes. Depending >>> on the specific of the application you may need different code. Your >>> three lines are enough for a one-time script if the risk of a powerful >>> blackout or disk space exhaustion is insignificant or if the data is not >>> critical. >> >> This pitfall sounds like a good reason to have such a function in the >> standard library. > > It certainly sounds like a good reason for someone to write a "safe > file rewrite" library function. But I don't think that it's such a > common need that it needs to be a stdlib function. It may well even be > the case that there's such a function already available on PyPI - has > anyone actually checked? There wasn't last time I checked (which admittedly was several years ago now). The issue is that it's painfully difficult to write a robust cross-platform "atomic rewrite" operation that can cleanly handle a wide range of arbitrary use cases - instead, folks are more likely to write simpler alternatives that work well enough given whichever simplifying assumptions are applicable to their use case (which may even include "I don't care about atomicity, and am quite happy to let a poorly timed Ctrl-C or unexpected system shutdown corrupt the file I'm rewriting"). 
https://bugs.python.org/issue8604#msg174104 is the relevant tracker discussion (deliberately linking into the middle of it, since the early part is akin to this thread: reactions mostly along the lines of "that's easy, and doesn't need to be in the standard library". It definitely *isn't* easy, but it's also challenging to publish on PyPI, since it's a quagmire of platform specific complexity and edge cases, if you mess it up you can cause significant data loss, and anyone that already knows they need atomic rewrites is likely to be able to come up with their own purpose specific implementation in less time than it would take them to assess the suitability of 3rd party alternatives). Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
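One of the "simpler alternatives" mentioned above (a same-directory temp file plus an atomic rename) can be sketched as follows. This deliberately ignores most of the edge cases under discussion - preserving permissions, hard and soft links, cross-process locking, and Windows quirks - so it is an illustration of the simplifying-assumptions point, not a robust implementation:

```python
import os
import tempfile

def atomic_rewrite(path, data):
    # Write to a sibling temp file, then atomically rename it over
    # the target (os.replace is atomic on POSIX filesystems).
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # push the data to disk first
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)        # clean up the temp file on failure
        raise

target = os.path.join(tempfile.mkdtemp(), "config.txt")
atomic_rewrite(target, "first\n")
atomic_rewrite(target, "second\n")
assert open(target).read() == "second\n"
```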
Re: [Python-ideas] Spelling of Assignment Expressions PEP 572 (was post #4)
On 15 April 2018 at 19:41, Mikhail V wrote: > So IIUC, the *only* reason is to avoid '==' and '=' similarity? > If so, then it does not sound convincing at all. > Of course Python does me a favor showing an error, > when I make a typo like this: > if (x = y) > > But still, if this is the only real reason, it is not convincing. It's thoroughly convincing, because we're already familiar with the consequences of folks confusing "=" and "==" when writing C & C++ code. It's an eternal bug magnet, so it's not a design we're ever going to port over to Python. (And that's even before we get into the parsing ambiguity problems that attempting to reuse "=" for this purpose in Python would introduce, especially when it comes to keyword arguments). The only way Python will ever gain expression level name binding support is with a spelling *other than* "=", as when that's the proposed spelling, the answer will be an unequivocal "No, we're not adding that". Even if the current discussion does come up with a potentially plausible spelling, the final determination on python-dev may *still* be "No, we're not going to add that". That isn't a predetermined answer though - it will depend on whether or not a proposal can be developed that threads the small gap between "this adds too much new cognitive overhead to reading and writing the language" and "while this does add more complexity to the base language, it provides sufficient compensation in allowing common ideas to be expressed more simply". > Syntactically seen, I feel strong that normal '=' would be the way to go. > > Just look at this: > y = ((eggs := spam()), (cheese := eggs.method())) > y = ((eggs = spam()), (cheese = eggs.method())) > > The latter is so much cleaner, and already so common to any > old or new Python user. Consider how close the second syntax is to "y = f(eggs=spam(), cheese=fromage())", though. > Given the fact that the PEP gives quite edge-case > usage examples only, this should be really more convincing.
The examples in the PEP have been updated to better reflect some of the key motivating use cases (embedded assignments in if and while statement conditions, generator expressions, and container comprehensions) > And as a side note: I personally find the look of ":=" a bit 'noisy'. You're not alone in that, which is one of the reasons finding a keyword based option that's less syntactically ambiguous than "as" could be an attractive alternative. > Another point: > > *Target first vs Expression first* > === > > Well, this is nice indeed. Don't you find that first of all it must be > decided what should be the *overall tendency for Python*? > Now we have common "x = a + b" everywhere. Then there > are comprehensions (somewhat mixed direction) and > "foo as bar" things. > But wait, is the tendency to "give the freedom"? Then you should > introduce something like "<--" in the first place so that we can > write normal assignment in both directions. > Or is the tendency to convert Python to the "expression first" generally? There's no general tendency towards expression first syntax, nor towards offering flexibility in whether ordinary assignments are target first. 
All the current cases where we use the "something as target" form are *not* direct equivalents to "target = something":

* "import dotted.modname as name": also prevents "dotted" getting bound in the current scope the way it normally would
* "from dotted import modname as name": also prevents "modname" getting bound in the current scope the way it normally would
* "except exc_filter as exc": binds the caught exception, not the exception filter
* "with cm as name": binds the result of __enter__ (which may be self), not the cm directly

Indeed, https://www.python.org/dev/peps/pep-0343/#motivation-and-summary points out that it's this "not an ordinary assignment" aspect that led to with statements using the "with cm as name:" structure in the first place - the original proposal in PEP 310 was for "with name = cm:" and ordinary assignment semantics.

Cheers, Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
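A minimal illustration of that last point, using a toy context manager whose __enter__ deliberately returns something other than self (the class name is purely illustrative):

```python
class YieldsFortyTwo:
    def __enter__(self):
        return 42          # this is the value that "as" binds
    def __exit__(self, *exc_info):
        return False

cm = YieldsFortyTwo()
with cm as value:
    bound = value

# "as" bound the result of __enter__, not the context manager itself
print(bound)        # 42
print(bound is cm)  # False
```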
Re: [Python-ideas] Spelling of Assignment Expressions PEP 572 (was post #4)
On 15 April 2018 at 13:54, Chris Angelico wrote:
> On Sun, Apr 15, 2018 at 1:08 PM, Nick Coghlan wrote:
>> === Target first, 'from' keyword ===
>>
>> while (value from read_next_item()) is not None: # New
>>     ...
>>
>> Pros:
>>
>> * avoids the syntactic ambiguity of "as"
>> * being target first provides an obvious distinction from the "as" keyword
>> * typically reads nicely as pseudocode
>> * "from" is already associated with a namebinding operation ("from module import name")
>>
>> Cons:
>>
>> * I'm sure we'll think of some more, but all I have so far is that
>> the association with name binding is relatively weak and would need to
>> be learned
>
> Cons: Syntactic ambiguity with "raise exc from otherexc", probably not
> serious.

Ah, I forgot about that usage. The keyword usage is at least somewhat consistent, in that it's short for:

_exc = exc
_exc.__cause__ = otherexc
raise _exc

However, someone writing "raise (ExcType from otherexc)" could be confusing, since it would end up re-raising "otherexc" instead of wrapping it in a new ExcType instance. If "otherexc" was also an ExcType instance, that would be a *really* subtle bug to try and catch, so this would likely need the same kind of special casing as was proposed for "as" (i.e. prohibiting the top level parentheses).

I also agree with Nathan that if you hadn't encountered from expressions before, it would be reasonable to assume they were semantically comparable to "target = next(expr)" rather than just "target = expr".

Cheers, Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
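For reference, "raise exc from otherexc" records the chained exception on the __cause__ attribute of the newly raised exception, which is what the expansion above relies on:

```python
caught = None
try:
    try:
        raise ValueError("original failure")
    except ValueError as otherexc:
        # "from" explicitly chains the new exception onto the old one
        raise RuntimeError("wrapper") from otherexc
except RuntimeError as exc:
    caught = exc

print(type(caught.__cause__).__name__)  # ValueError
```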
Re: [Python-ideas] Idea: Importing from arbitrary filenames
On 15 April 2018 at 17:12, Nick Coghlan wrote:
> If you want to do this dynamically relative to the current module,
> then it's possible to do:
>
> global __path__
> __path__[:] = (some_directory, some_other_directory)
> custom_mod = importlib.import_module(".name", package=__name__)

Copy and paste error there: to make this work in non-package modules, drop the "[:]" from the __path__ assignment.

Cheers, Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Idea: Importing from arbitrary filenames
On 14 April 2018 at 19:22, Steve Barnes wrote:
> I generally love the current import system for "just working" regardless
> of platform, installation details, etc., but what I would like to see is
> a clear import local, (as opposed to import from wherever you can find
> something to satisfy mechanism). This is the one thing that I miss from
> C/C++ where #include <x> is system includes and #include "x" searches
> differing include paths, (if used well).

For the latter purpose, we prefer that folks use either explicit relative imports (if they want to search the current package specifically), or else direct manipulation of package.__path__.

That is, if you do:

from . import custom_imports  # Definitely from your own project
custom_imports.__path__[:] = (some_directory, some_other_directory)

then:

from .custom_imports import name

will search those directories for packages & modules to import, while still cleanly mapping to a well-defined location in the module namespace for the process as a whole (and hence being able to use all the same caches as other imports, without causing name conflicts or other problems).

If you want to do this dynamically relative to the current module, then it's possible to do:

global __path__
__path__[:] = (some_directory, some_other_directory)
custom_mod = importlib.import_module(".name", package=__name__)

The discoverability of these kinds of techniques could definitely stand to be improved, but the benefit of adopting them is that they work on all currently supported versions of Python (even importlib.import_module exists in Python 2.7 as a convenience wrapper around __import__), rather than needing to wait for new language level syntax for them.

Cheers, Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
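A self-contained sketch of the __path__ repointing technique described above; the temporary directories and the module contents here are placeholders created on the fly purely for demonstration:

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway package and a separate directory holding the real module
pkg_dir = tempfile.mkdtemp()
mod_dir = tempfile.mkdtemp()
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(mod_dir, "name.py"), "w") as f:
    f.write("VALUE = 42\n")

# Import the package, then repoint its __path__ at the chosen directory
sys.path.insert(0, os.path.dirname(pkg_dir))
pkg_name = os.path.basename(pkg_dir)
pkg = importlib.import_module(pkg_name)
pkg.__path__[:] = [mod_dir]

# The relative import now resolves against the replaced search path
mod = importlib.import_module(".name", package=pkg_name)
print(mod.VALUE)  # 42
```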
Re: [Python-ideas] Spelling of Assignment Expressions PEP 572 (was post #4)
st part would only be a potential extension beyond the scope of PEP 572 (since it would go against the grain of "name = expression" and "name from expression" otherwise being functionally equivalent in their behaviour), but it's an opportunity that wouldn't arise if a colon is part of the expression level name binding syntax.

Cheers, Nick.

P.S. The pros and cons of the current syntax proposals, as I see them:

=== Expression first, 'as' keyword ===

while (read_next_item() as value) is not None:
    ...

Pros:

* typically reads nicely as pseudocode
* "as" is already associated with namebinding operations

Cons:

* syntactic ambiguity in with statement headers (major concern)
* encourages a common misunderstanding of how with statements work (major concern)
* visual similarity between "as" and "and" makes name bindings easy to miss
* syntactic ambiguity in except clause headers theoretically exists, but is less of a concern due to the consistent type difference that makes the parenthesised form pointless

=== Expression first, '->' symbol ===

while (read_next_item() -> value) is not None:
    ...

Pros:

* avoids the syntactic ambiguity of "as"
* "->" is used for name bindings in at least some other languages (but this is irrelevant to users for whom Python is their first, and perhaps only, programming language)

Cons:

* doesn't read like pseudocode (you need to interpret an arbitrary non-arithmetic symbol)
* invites the question "Why doesn't this use the 'as' keyword?"
* symbols are typically harder to look up than keywords
* symbols don't lend themselves to easy mnemonics
* somewhat arbitrary repurposing of "->" compared to its use in function annotations

=== Target first, ':=' symbol ===

while (value := read_next_item()) is not None:
    ...
Pros:

* avoids the syntactic ambiguity of "as"
* being target first provides an obvious distinction from the "as" keyword
* ":=" is used for name bindings in at least some other languages (but this is irrelevant to users for whom Python is their first, and perhaps only, language)

Cons:

* symbols are typically harder to look up than keywords
* symbols don't lend themselves to easy mnemonics
* subject to a visual "line noise" phenomenon when combined with other uses of ":" as a syntactic marker (e.g. slices, dict key/value pairs, lambda expressions, type annotations)

=== Target first, 'from' keyword ===

while (value from read_next_item()) is not None:  # New
    ...

Pros:

* avoids the syntactic ambiguity of "as"
* being target first provides an obvious distinction from the "as" keyword
* typically reads nicely as pseudocode
* "from" is already associated with a namebinding operation ("from module import name")

Cons:

* I'm sure we'll think of some more, but all I have so far is that the association with name binding is relatively weak and would need to be learned

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Idea: Importing from arbitrary filenames
On 14 April 2018 at 13:28, Ken Hilton wrote:
> Hi all,
>
> First of all, please excuse me if I'm presenting this idea in the wrong way
> or at the wrong time - I'm new to this mailing list and haven't seen anyone
> propose a new idea on it yet, so I don't know the customs.
>
> I have an idea for importing files with arbitrary names. Currently, the
> "official" way to import arbitrary files is to use the "imp" module, as
> shown by this answer: https://stackoverflow.com/a/3137914/6605349
> However, this method takes two function calls and is not as (aesthetically
> pleasing? is that the word?) as a simple "import" statement.

Modules aren't required to be stored on the filesystem, so we have no plans to offer this.

`runpy.run_path()` exists to let folks run arbitrary Python files and collect the resulting namespace, while if folks really want to implement pseudo-imports based on filenames we expose the necessary building blocks in importlib (https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly)

The fact that run_path() has a nice straightforward invocation model, and the import emulation recipe doesn't is intended as a hint :)

Cheers, Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
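For comparison, the run_path() invocation model is a single call that executes the file and hands back the resulting namespace (the filename below is a placeholder, chosen so it could never be a module name):

```python
import os
import runpy
import tempfile

# Write a Python file whose name is not a valid module identifier
path = os.path.join(tempfile.mkdtemp(), "strange-file-name.py")
with open(path, "w") as f:
    f.write("answer = 6 * 7\n")

# run_path executes the file and returns its global namespace as a dict
namespace = runpy.run_path(path)
print(namespace["answer"])  # 42
```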
Re: [Python-ideas] PEP 572: Assignment Expressions (post #4)
On 13 April 2018 at 22:35, Chris Angelico wrote:
> On Fri, Apr 13, 2018 at 10:22 PM, Steven D'Aprano wrote:
>> On Wed, Apr 11, 2018 at 11:50:44PM +1000, Chris Angelico wrote:
>>
>>> > Previously, there was an alternative _operator form_ `->` proposed by
>>> > Steven D'Aprano. This option is no longer considered? I see several
>>> > advantages with this variant:
>>> > 1. It does not use `:` symbol which is very visually overloaded in Python.
>>> > 2. It is clearly distinguishable from the usual assignment statement and
>>> > it's `+=` friends
>>> > There are others but they are minor.
>>>
>>> I'm not sure why you posted this in response to the open question, but
>>> whatever. The arrow operator is already a token in Python (due to its
>>> use in 'def' statements) and should not conflict with anything;
>>> however, apart from "it looks different", it doesn't have much to
>>> speak for it.
>>
>> On the contrary, it puts the expression first, where it belongs
>> *semi-wink*.
>
> The 'as' syntax already has that going for it. What's the advantage of
> the arrow over the two front-runners, ':=' and 'as'?

I stumbled across https://www.hillelwayne.com/post/equals-as-assignment/ earlier this week, and I think it provides grounds to reconsider the suitability of ":=", as that symbol has historically referred to *re*binding an already declared name. That isn't the way we're proposing to use it here: we're using it to mean both implicit local variable declaration *and* rebinding of an existing name, the same as we do for "=" and "as".

I think the "we already use colons in too many unrelated places" argument also has merit, as we already use the colon as:

1. the header terminator when introducing a nested suite
2. the key:value separator in dictionary displays and comprehensions
3. the name:annotation separator in function parameter declarations
4. the name:annotation separator in variable declarations and assignment statements
5.
the parameter:result separator in lambda expressions
6. the start:stop:step separator in slice syntax

"as" is at least more consistently associated with name binding, and has fewer existing uses in the first place, but has the notable downside of being thoroughly misleading in with statement header lines, as well as being *so* syntactically unobtrusive that it's easy to miss entirely (especially in expressions that use other keywords).

The symbolic "right arrow" operator would be a more direct alternative to the "as" variant that was more visually distinct:

# Handle a matched regex
if (pattern.search(data) -> match) is not None:
    ...

# More flexible alternative to the 2-arg form of iter() invocation
while (read_next_item() -> item) is not None:
    ...

# Share a subexpression between a comprehension filter clause and its output
filtered_data = [y for x in data if (f(x) -> y) is not None]

# Visually and syntactically unambiguous in with statement headers
with create_cm() -> cm as enter_result:
    ...

(Pronunciation-wise, if we went with that option, I'd probably pronounce "->" as "as" most of the time, but there are some cases like the "while" example above where I'd pronounce it as "into")

The connection with function declarations would be a little tenuous, but could be rationalised as:

Given the function declaration:

def f(...) -> Annotation:
    ...

Then in the named subexpression:

(f(...) -> name)

the inferred type of "name" is "Annotation"

Cheers, Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] PEP 572: Assignment Expressions (post #4)
On 13 April 2018 at 16:47, Chris Angelico wrote:
> Consider:
>
> pos = -1
> while pos := buffer.find(search_term, pos + 1) >= 0:
>     ...
>
> Once find() returns -1, the loop terminates. Should this need to be
> parenthesized?

I've certainly been assuming that cases like that would need to be written as:

pos = -1
while (pos := buffer.find(search_term, pos + 1)) >= 0:
    ...

I'd write the equivalent C while loop the same way:

int pos = -1;
while ((pos = find(buffer, search_term, pos + 1)) >= 0)
    ...

The parentheses around the assignment in C are technically redundant, but I consider finding the matching parenthesis to be straightforward (especially with text editor assistance), while I consider figuring out where the next lower precedence operator appears difficult (since I don't have the C operand precedence table memorized, and there isn't any simple way for my text editor to help me out).

Cheers, Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
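With the parentheses in place, a runnable version of the loop behaves as described (the sample data is made up for the example):

```python
buffer = "spam eggs spam spam"
search_term = "spam"

positions = []
pos = -1
# The parenthesised assignment expression binds pos before the >= 0 comparison
while (pos := buffer.find(search_term, pos + 1)) >= 0:
    positions.append(pos)

# Once find() returns -1, the loop terminates
print(positions)  # [0, 10, 15]
```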
Re: [Python-ideas] PEP 572: Assignment Expressions (post #4)
On 12 April 2018 at 07:28, Chris Angelico wrote:
> On Thu, Apr 12, 2018 at 1:22 AM, Nick Coghlan wrote:
>>> Frequently Raised Objections
>>>
>>
>> There needs to be a subsection here regarding the need to call `del`
>> at class and module scope, just as there is for loop iteration
>> variables at those scopes.
>
> Hmm, I'm not sure I follow. Are you saying that this is an objection
> to assignment expressions, or an objection to them not being
> statement-local? If the latter, it's really more about "rejected
> alternative proposals".

It's both - accidentally polluting class and module namespaces is an argument against expression level assignments in general, and sublocal namespaces were aimed at eliminating that downside. Since feedback on the earlier versions of the PEP has moved sublocal namespaces into the "rejected due to excessive conceptual complexity" box, that means accidental namespace pollution comes back as a downside that the PEP should mention.

I don't think it needs to say much, just point out that they share the downside of regular for loops: if you use one at class or module scope, and don't want to export the name, you need to delete it explicitly.

Cheers, Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] PEP 572: Assignment Expressions (post #4)
On 12 April 2018 at 22:22, Jacco van Dorp wrote:
> I've looked through PEP 343, contextlib docs (
> https://docs.python.org/3/library/contextlib.html ), and I couldn't
> find a single case where "with (y := f(x))" would be invalid.

Consider this custom context manager:

@contextmanager
def simple_cm():
    yield 42

Given that example, the following code:

with cm := simple_cm() as value:
    print(cm.func.__name__, value)

would print "simple_cm 42", since the assignment expression would reference the context manager itself, while the with statement binds the yielded value.

Another relevant example would be `contextlib.closing`: that returns the passed in argument from __enter__, *not* self.

And that's why earlier versions of PEP 572 (which used the "EXPR as NAME" spelling) just flat out prohibited top level name binding expressions in with statements: "with (expr as name):" and "with expr as name:" were far too different semantically for the only syntactic difference to be a surrounding set of parentheses.

Cheers, Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
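The contextlib.closing behaviour mentioned above can be checked directly (Resource is a stand-in class invented for the example):

```python
import contextlib

class Resource:
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

res = Resource()
cm = contextlib.closing(res)
with cm as target:
    entered = target

# __enter__ returned the wrapped object, not the closing() instance itself
print(entered is res)  # True
print(entered is cm)   # False
print(res.closed)      # True (close() ran on exit)
```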
Re: [Python-ideas] PEP 572: Assignment Expressions (post #4)
tion above, you may also want to consider making this example a filtered comprehension in order to show the proposal in its best light:

results = [(x, y, x/y) for x in input_data if (y := f(x))]

> Capturing condition values
> --
>
> Assignment expressions can be used to good effect in the header of
> an ``if`` or ``while`` statement::

Similar to the comprehension section, I think this part could benefit from switching the order of presentation.

> Frequently Raised Objections

There needs to be a subsection here regarding the need to call `del` at class and module scope, just as there is for loop iteration variables at those scopes.

> This could be used to create ugly code!
> ---
>
> So can anything else. This is a tool, and it is up to the programmer to use
> it where it makes sense, and not use it where superior constructs can be used.

This argument will be strengthened by making the examples used in the PEP itself more attractive, as well as proposing suitable additions to PEP 8, such as:

1. If either assignment statements or assignment expressions can be used, prefer statements
2. If using assignment expressions would lead to ambiguity about execution order, restructure to use statements instead

Cheers, Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Move optional data out of pyc files
On 11 April 2018 at 02:14, Serhiy Storchaka wrote:
> Currently pyc files contain data that is useful mostly for developing and is
> not needed in most normal cases in stable program. There is even an option
> that allows to exclude a part of this information from pyc files. It is
> expected that this saves memory, startup time, and disk space (or the time
> of loading from network). I propose to move this data from pyc files into
> separate file or files. pyc files should contain only external references to
> external files. If the corresponding external file is absent or specific
> option suppresses them, references are replaced with None or NULL at import
> time, otherwise they are loaded from external files.
>
> 1. Docstrings. They are needed mainly for developing.
>
> 2. Line numbers (lnotab). They are helpful for formatting tracebacks, for
> tracing, and debugging with the debugger. Sources are helpful in such cases
> too. If the program doesn't contain errors ;-) and is shipped without
> sources, they could be removed.
>
> 3. Annotations. They are used mainly by third party tools that statically
> analyze sources. They are rarely used at runtime.

While I don't think the default inline pyc format should change, in my ideal world I'd like to see the optimized format change to a side-loading model where these things are still emitted, but they're placed in a separate metadata file that isn't loaded by default.

The metadata file would then be lazily loaded at runtime, such that `-O` gave you the memory benefits of `-OO`, but docstrings/annotations/source line references/etc could still be loaded on demand if something actually needed them.

This approach would also mitigate the valid points Chris Angelico raises around hot reloading support - we could just declare that it requires even more care than usual to use hot reloading in combination with `-O`.
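For context, the existing option Serhiy refers to is the optimize level (`-O`/`-OO` on the command line, also exposed as the `optimize` argument to compile()), which already drops docstrings entirely at level 2:

```python
source = 'def f():\n    """Docstring."""\n    return 1\n'

ns_plain, ns_opt = {}, {}
# optimize=0 mirrors a normal run, optimize=2 mirrors -OO
exec(compile(source, "<src>", "exec", optimize=0), ns_plain)
exec(compile(source, "<src>", "exec", optimize=2), ns_opt)

print(ns_plain["f"].__doc__)  # Docstring.
print(ns_opt["f"].__doc__)    # None (docstrings stripped, as with -OO)
```

Under the side-loading model sketched above, the level 2 build would instead keep the docstring in the separate metadata file and fetch it on first access.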
Bonus points if the sideloaded metadata file could be designed in such a way that an extension module compiler like Cython or an alternate pyc compiler frontend like Hylang could use it to provide relevant references back to the original source code (JavaScript's source maps may provide inspiration on that front). Cheers, Nick. -- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Add more information in the header of pyc files
On 11 April 2018 at 02:54, Antoine Pitrou wrote:
> On Tue, 10 Apr 2018 19:29:18 +0300 Serhiy Storchaka wrote:
>>
>> A bugfix release can fix bugs in bytecode generation. See for example
>> issue27286. [1] The part of issue33041 backported to 3.7 and 3.6 is an
>> other example. [2] There were other examples of compatible changing the
>> bytecode. Without bumping the magic number these fixes can just not have
>> any effect if existing pyc files were generated by older compilers. But
>> bumping the magic number in a bugfix release can lead to rebuilding
>> every pyc file (even unaffected by the fix) in distributives.
>
> Sure, but I don't think rebuilding every pyc file is a significant
> problem. It's certainly less error-prone than cherry-picking which
> files need rebuilding.

And we need to handle the old bytecode format in the eval loop anyway, or else we'd be breaking compatibility with bytecode-only files, as well as introducing a significant performance regression for non-writable bytecode caches (if we were to ignore them). It's a subtle enough problem that I think the `compileall --force` option is a safer way of handling it, even if it regenerates some pyc files that could have been kept.

For the "stable file signature" aspect, does that need to be specifically the first *four* bytes?

One of the benefits of PEP 552 leaving those four bytes alone is that it meant that a lot of magic number checking code didn't need to change. If the stable marker could be placed later (e.g. after the PEP 552 header), then we'd similarly have the benefit that code checking the PEP 552 headers wouldn't need to change, at the expense of folks having to read 20 bytes to see the new signature byte (which shouldn't be a problem, given that file defaults to reading up to 1 MiB from files it is trying to identify).

Cheers, Nick.
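For reference, under PEP 552 a pyc file starts with a 16 byte header (magic number, bit flags, then either a source hash or an mtime/size pair), with the implementation-specific magic number occupying the first four bytes; the snippet below just demonstrates that layout on a throwaway module:

```python
import importlib.util
import os
import py_compile
import tempfile

# Compile a trivial module and inspect the resulting pyc header
path = os.path.join(tempfile.mkdtemp(), "mod.py")
with open(path, "w") as f:
    f.write("x = 1\n")

pyc_path = py_compile.compile(path)
with open(pyc_path, "rb") as f:
    header = f.read(16)

# The magic number is the first four bytes of the PEP 552 header
print(header[:4] == importlib.util.MAGIC_NUMBER)  # True
```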
-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]
On 9 April 2018 at 14:38, Raymond Hettinger wrote:
>> On Apr 8, 2018, at 6:43 PM, Tim Peters wrote:
>> In short, for _general_ use `accumulate()` needs `initial` for exactly
>> the same reasons `reduce()` needed it.
>
> The reduce() function had been much derided, so I've had it mentally filed
> in the anti-pattern category. But yes, there may be wisdom there.

Weirdly (or perhaps not so weirdly, given my tendency to model computational concepts procedurally), I find the operation of reduce() easier to understand when it's framed as "last(accumulate(iterable, binop, initial=value))".

Cheers, Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
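accumulate() did eventually gain an initial parameter (in Python 3.8), so the framing can be checked directly; last() here is an assumed helper for the sake of the example, not a stdlib function:

```python
import operator
from collections import deque
from functools import reduce
from itertools import accumulate

def last(iterable):
    # Exhaust the iterable, retaining only the final item
    return deque(iterable, maxlen=1).pop()

nums = [1, 2, 3, 4]
via_accumulate = last(accumulate(nums, operator.add, initial=10))
via_reduce = reduce(operator.add, nums, 10)
print(via_accumulate, via_reduce)  # 20 20
```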
Re: [Python-ideas] PEP 572: Statement-Local Name Bindings, take three!
On 9 April 2018 at 01:01, Steven D'Aprano wrote:
> On Sun, Apr 08, 2018 at 09:25:33PM +1000, Nick Coghlan wrote:
>
>> I was writing a new stdlib test case today, and thinking about how I
>> might structure it differently in a PEP 572 world, and realised that a
>> situation the next version of the PEP should discuss is this one:
>>
>> # Dict display
>> data = {
>>     key_a: 1,
>>     key_b: 2,
>>     key_c: 3,
>> }
>>
>> # Set display with local name bindings
>> data = {
>>     local_a := 1,
>>     local_b := 2,
>>     local_c := 3,
>> }
>
> I don't understand the point of these examples. Sure, I guess they would
> be legal, but unless you're actually going to use the name bindings,
> what's the point in defining them?

That *would* be the point. In the case where it occurred to me, the actual code I'd written looked like this:

curdir_import = ""
curdir_relative = os.curdir
curdir_absolute = os.getcwd()
all_spellings = [curdir_import, curdir_relative, curdir_absolute]

(Since I was testing the pydoc CLI's sys.path manipulation, and wanted to cover all the cases).

>> I don't think this is bad (although the interaction with dicts is a
>> bit odd), and I don't think it counts as a rationale either, but I do
>> think the fact that it becomes possible should be noted as an outcome
>> arising from the "No sublocal scoping" semantics.
>
> If we really wanted to keep the sublocal scoping, we could make
> list/set/dict displays their own scope too.
>
> Personally, that's the only argument for sublocal scoping that I like
> yet: what happens inside a display should remain inside the display, and
> not leak out into the function.
>
> So that has taken me from -1 on sublocal scoping to -0.5 if it applies
> to displays.

Inflicting the challenges that comprehensions have at class scope on all container displays wouldn't strike me as a desirable outcome (plus there's also the problem that full nested scopes are relatively expensive at runtime).

Cheers, Nick.
-- Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia ___ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] PEP 572: Statement-Local Name Bindings, take three!
On 23 March 2018 at 20:01, Chris Angelico wrote:
> Apologies for letting this languish; life has an annoying habit of
> getting in the way now and then.
>
> Feedback from the previous rounds has been incorporated. From here,
> the most important concern and question is: Is there any other syntax
> or related proposal that ought to be mentioned here? If this proposal
> is rejected, it should be rejected with a full set of alternatives.

I was writing a new stdlib test case today, and thinking about how I might structure it differently in a PEP 572 world, and realised that a situation the next version of the PEP should discuss is this one:

# Dict display
data = {
    key_a: 1,
    key_b: 2,
    key_c: 3,
}

# Set display with local name bindings
data = {
    local_a := 1,
    local_b := 2,
    local_c := 3,
}

# List display with local name bindings
data = [
    local_a := 1,
    local_b := 2,
    local_c := 3,
]

# Dict display with local value name bindings
data = {
    key_a: local_a := 1,
    key_b: local_b := 2,
    key_c: local_c := 3,
}

# Dict display with local key name bindings
data = {
    local_a := key_a: 1,
    local_b := key_b: 2,
    local_c := key_c: 3,
}

I don't think this is bad (although the interaction with dicts is a bit odd), and I don't think it counts as a rationale either, but I do think the fact that it becomes possible should be noted as an outcome arising from the "No sublocal scoping" semantics.

Cheers, Nick.

P.S. The specific test case is one where I want to test the three different ways of spelling "the current directory" in some sys.path manipulation code (the empty string, os.curdir, and os.getcwd()), and it occurred to me that a version of PEP 572 that omits the sublocal scoping concept will allow inline naming of parts of data structures as you define them.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin
On 6 April 2018 at 02:52, Peter O'Connor wrote:
> Combined with the new "last" builtin discussed in the proposal, this would
> allow you to replace "reduce" with a more Pythonic comprehension-style syntax.

I think this idea was overshadowed by the larger syntactic proposal in the rest of your email (I know I missed it initially and only noticed it in the thread subject line later).

With the increased emphasis on iterators and generators in Python 3.x, the lack of a simple expression level equivalent to "for item in iterable: pass" is occasionally irritating, especially when demonstrating behaviour at the interactive prompt.

Being able to reliably exhaust an iterator with "last(iterable)" or "itertools.last(iterable)" would be a nice reduction function to offer, in addition to our existing complement of builtin reducers like sum(), any() and all().

Cheers, Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
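One possible implementation along those lines (the name and the default handling here are illustrative, not an agreed API):

```python
from collections import deque

def last(iterable, default=None):
    # deque(maxlen=1) consumes the whole iterable at C speed,
    # keeping only the final item
    d = deque(iterable, maxlen=1)
    return d[0] if d else default

gen = (n * n for n in range(5))
print(last(gen))               # 16
print(next(gen, "exhausted"))  # exhausted -- the generator was fully consumed
```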