Re: [Python-Dev] What is the rationale behind source only releases?
On Tue, May 15, 2018 at 10:55:07PM -0700, Chris Jerdonek wrote:
> What does “no release at all” mean? If it’s not released, how would
> people use it?

I've been using Python 1.7 for years now. It is the perfect Python, with exactly all the features I want, and none that I don't want, and so much faster than Python 2.7 or 3.7 it is ridiculous. Unfortunately once I've woken up and tried to port my code to an actual computer, it doesn't work. *wink*

In principle, we could continue adding fixes to a version in the source repository, but never cut a release with a new version. But I don't think we do that: once a version hits "no release", we stop adding fixes to the repo for that version. The possible states are:

- full source and binary releases
- source only releases
- accumulate fixes in the VCS but don't cut a new release
- stop making releases at all (the version is now unmaintained)

The third (second from the bottom) doesn't (as far as I am aware) occur.

-- Steve

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] What is the rationale behind source only releases?
What does “no release at all” mean? If it’s not released, how would people use it?

—Chris

On Tue, May 15, 2018 at 9:36 PM Alex Walters wrote:
> In the spirit of learning why there is a fence across the road before I
> tear it down out of ignorance [1], I'd like to know the rationale behind
> source only releases of cpython. I have an opinion on their utility and
> perhaps an idea about changing them, but I'd like to know why they are
> done (as opposed to source+binary releases or no release at all) before I
> head over to python-ideas. Is this documented somewhere where my
> google-fu can't find it?
>
> [1]: https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence
Re: [Python-Dev] What is the rationale behind source only releases?
> On May 16, 2018, at 1:06 AM, Ben Finney wrote:
>
>> I'd like to know the rationale behind source only releases of cpython.
>
> Software freedom entails the freedom to modify and build the software.
> For that, one needs the source form of the software.
>
> Portable software should be feasible to build from source, on a platform
> where no builds (of that particular release) were done before. For that,
> one needs the source form of the software.

I’m guessing the question isn’t why it is useful to have a source release of CPython, but why CPython transitions from having both source releases and binary releases to only source releases. My assumption is that the rationale is to reduce the maintenance burden as time goes on for older release channels.
Re: [Python-Dev] What is the rationale behind source only releases?
On Wed, May 16, 2018 at 3:06 PM, Ben Finney wrote:
> "Alex Walters" writes:
>
>> I'd like to know the rationale behind source only releases of cpython.
>
> Software freedom entails the freedom to modify and build the software.
> For that, one needs the source form of the software.
>
> Portable software should be feasible to build from source, on a platform
> where no builds (of that particular release) were done before. For that,
> one needs the source form of the software.

AIUI Alex is asking about the last release(s) of each branch, eg 3.4.8. There are no official Python.org binaries published for these releases, so anyone who wants to upgrade within the 3.4 branch has to build it themselves.

ChrisA
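As a concrete illustration, building one of these source-only releases (e.g. 3.4.8, the version ChrisA mentions) looks roughly like this. This is a sketch assuming a POSIX system with a C toolchain already installed; the install prefix is arbitrary:

```shell
# Fetch and unpack the source-only release tarball from python.org.
curl -O https://www.python.org/ftp/python/3.4.8/Python-3.4.8.tgz
tar xzf Python-3.4.8.tgz
cd Python-3.4.8

# Configure into a private prefix so the system Python is untouched.
./configure --prefix="$HOME/opt/python3.4"
make -j"$(nproc)"

# "altinstall" installs python3.4 without overwriting the "python3" name.
make altinstall
```

The point of the source-only phase is exactly that this step becomes the user's responsibility: security fixes keep landing in the branch, but python.org no longer produces installers for it.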
Re: [Python-Dev] What is the rationale behind source only releases?
"Alex Walters" writes:

> I'd like to know the rationale behind source only releases of cpython.

Software freedom entails the freedom to modify and build the software. For that, one needs the source form of the software.

Portable software should be feasible to build from source, on a platform where no builds (of that particular release) were done before. For that, one needs the source form of the software.

> I have an opinion on their utility and perhaps an idea about changing
> them, but I'd like to know why they are done

The above rationales seem sufficient to me. Are you looking for additional ones?

> (as opposed to source+binary releases or no release at all)

I don't see a good justification for adding “source+binary” releases to the existing ones. We already have a source release (once), and a separate binary (one per platform). Why bother *also* making a source+binary release — presumably an additional one per platform?

As for “no release at all”, it seems that those who want that can download it very quickly now :-)

> before I head over to python-ideas. Is this documented somewhere where
> my google-fu can't find it?

I am not clear on why this would need specific documentation for Python; these are not issues that are different from any other software where the recipients have software freedom in the work.

I hope these answers are useful.

-- 
 \     “My business is to teach my aspirations to conform themselves |
  `\   to fact, not to try and make facts harmonise with my          |
_o__)  aspirations.” —Thomas Henry Huxley, 1860-09-23                |
Ben Finney
[Python-Dev] What is the rationale behind source only releases?
In the spirit of learning why there is a fence across the road before I tear it down out of ignorance [1], I'd like to know the rationale behind source only releases of cpython. I have an opinion on their utility and perhaps an idea about changing them, but I'd like to know why they are done (as opposed to source+binary releases or no release at all) before I head over to python-ideas. Is this documented somewhere where my google-fu can't find it?

[1]: https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence
Re: [Python-Dev] Looking for examples: proof that a list comp is a function
[Tim, about the most recent version of the docs at https://docs.python.org/dev/reference/expressions.html#displays-for-lists-sets-and-dictionaries ]

>> I say "pretty much" because, for whatever reason(s), it seems to be
>> trying hard _not_ to use the word "function". But I can't guess what
>> "then passed as an argument to the implicitly nested scope" could
>> possibly mean otherwise (it doesn't make literal sense to "pass an
>> argument" to "a scope").

[Nick Coghlan]
> I think my motivation was to avoid promising *exact* equivalence with a
> regular nested function, since the define-and-call may allow us
> opportunities for optimization that don't exist when those two are
> separated (e.g. Guido's point in another thread that we actually avoid
> calling "iter" twice even though the nominal expansion implies that we
> should). However, you're right that just calling it a function may be
> clearer than relying on the ill-defined phrase "implicitly nested scope".

Plus that, as noted, what passing an argument "to a scope" means is mysterious.

Language standard committees struggle for years with how to phrase things so that no more than is intended appears to be promised. It's hard! For example, if you were to show a workalike function and note that the exact placement - and number - of `iter()` calls is not guaranteed, someone else would point out that you need to explicitly say that by "iter" you mean the builtin function of that name, not one user code may have overridden it with in the current scope. Then someone else will note that it's tedious to say things like that whenever they're needed, and more-general text will be added elsewhere in the docs saying that the _rest_ of the docs always mean the language-supplied versions of such-&-such explicitly named functions/classes/modules/...

I'd say "nested function" anyway ;-) And for another reason: not just someone from Mars is prone to misreading "scope", but just about anyone on Earth coming from another language.
The idea that the word "scope" all by itself implies "and in general any name bound to within the top-level code spanned by the scope is implicitly local to the scope unless explicitly declared `global` or `nonlocal` in the scope" may be unique to Python.

> For Chris's actual question, this is part of why I think adding
> "parentlocal" would actually make the scoping proposal easier to explain,
> as it means the name binding semantics aren't a uniquely magical property
> of binding expressions (however spelled), they're just a new form of
> target scope declaration that the compiler understands, and the binding
> expression form implies. Note: eas*ier*, not easy ;)

Adding an explanation of `parentlocal` to the docs could be a useful pedagogical device, but I don't think I'd support adding that statement to the _language_. It's too weird, and seems to be at a wrong level for plausible future language developments.

Let's step way back for a minute. In many languages with full-blown closures, first-class functions, and nested lexical scopes, it's pretty common to define the meaning of various language constructs in terms of calling derived lexically nested functions. In those languages, any "work variables" needed by the synthetic functions are declared as being local to those functions, and _that's the end of it_. They're done. All other names inside the expansions mean exactly the same as what they mean in whatever chunks of user-supplied code the construct interpolates into the synthesized functions. It doesn't matter one whit in which context(s) they appear. That's the only long-term sane way to go about defining constructs in terms of calling synthesized functions interpolating user-supplied pieces of code.

Now _if_ Python had been able to do that, the meaning of genexps and listcomps would have been defined, from the start, in terms of synthesized functions that declared all & only the for-target names "local".
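The "synthesized function" framing can be shown concretely. Below is a rough hand expansion of a list comprehension, for illustration only; the compiler's actual code differs (e.g. in exactly where iter() is called, per Nick's point above), and the helper name `_listcomp` is invented here:

```python
data = [1, 2, 3]

# Hand expansion of:  squares = [x * x for x in data]
# The for-target `x` is local to the synthesized function; the
# outermost iterable is evaluated in the enclosing scope and passed in.
def _listcomp(iterator):
    result = []
    for x in iterator:        # `x` never leaks into the caller's scope
        result.append(x * x)
    return result

squares = _listcomp(iter(data))
print(squares)  # [1, 4, 9]
```

This is exactly the shape in which "all & only the for-target names are local": `x` is a local of `_listcomp`, while every other name in the interpolated user code means what it means in the enclosing scope.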
And, in fact, the change I'm suggesting wouldn't have required changing the comprehension implementation _at all_ when assignment expressions were added. Instead the implementation would need to change to _add_ assignment expression targets to the things declared local if it was decided that those targets should be _local_ to the derived functions instead. That's why this all seems so bloody obvious to me ;-) It's how virtually every other language in the business of defining constructs in terms of nested synthesized functions works.

So if that's something we may ever do again - and possibly even if we don't expect to ever do it again - I suggest a more generally useful approach would be to add a new flavor of _function_ to Python. Namely one wherein the only locals are the formal arguments and those explicitly declared local. Whether or not a name is bound in the body would be irrelevant. To avoid a new keyword, `local` could be spelled `not nonlocal` ;-)

Note that the only use for `parentlocal` s
Re: [Python-Dev] PEP 575 (Unifying function/method classes) update
On 2018-05-15 18:36, Petr Viktorin wrote:
> Naturally, large-scale changes have less of a chance there.

Does it really matter that much how large the change is? I think you are focusing too much on the change instead of the end result. As I said in my previous post, I could certainly make less disruptive changes. But would that really be better? (If you think that the answer is "yes" here, I honestly want to know). I could make the code less different than today, but at the cost of added complexity. Building on top of the existing code is like building on a bad foundation: the higher you build, the messier it gets. Instead, I propose a solid new foundation. Of course, that requires more work to build, but once it is built, the finished building looks a lot better.

> With such a "finished product" PEP, it's hard to see if some of the
> various problems could be solved in a better way -- faster, more
> maintainable, or less disruptive.

With "faster", you mean runtime speed? I'm pretty confident that we won't lose anything there. As I argued above, my PEP might very well make things "more maintainable", but this is of course very subjective. And "less disruptive" was never a goal for this PEP.

> It's also harder from a psychological point of view: you obviously
> already put in a lot of good work, and it's harder to waste that work if
> an even better solution is found.

I hope that this won't be my psychology. As a developer, I prefer to focus on problems rather than on solutions: I don't want to push a particular solution, I want to fix a particular problem. If an even better solution is accepted, I will be a very happy man. What I would hate is for this PEP to get rejected because some people claim that the problem can be solved in a better way, but without actually suggesting such a better way.

> Is a branching class hierarchy, with quite a few new flags for feature
> selection, the kind of simplicity we want?

Maybe yes, because it *concentrates* all complexity in one small place.
Currently, we have several independent classes (builtin_function_or_method, method_descriptor, function, method) which all require various forms of special casing in the interpreter, with some code duplication. With my PEP, this all goes away and instead we need to understand just one class, namely base_function.

> Would it be possible to first decouple things, reducing the complexity,
> and then tackle the individual problems?

What do you mean by "decouple things"? Can you be more concrete?

> The class hierarchy still makes it hard to decouple the introspection
> side (how functions look on the outside) from the calling mechanism (how
> the calling works internally).

Any class that wants to profit from fast function calls can inherit from base_function. It can add whatever attributes it wants, and it can choose to implement documentation and/or introspection in whatever way it wants. It can choose to not care about that at all. That looks very decoupled to me.

> Starting from an idea and ironing out the details lets you (and, since
> you published results, everyone else) figure out the tricky details. But
> ultimately it's exploring one path of doing things -- it doesn't
> necessarily lead to the best way of doing something.

So far I haven't seen any other proposals...

> That's a good question. Maybe inspect.isfunction() serves too many use
> cases to be useful. Cython functions should behave like "def" functions
> in some cases, and like built-in functions in others.

From the outside, i.e. the user's point of view, I want them to behave like Python functions. Whether it's implemented in C or Python should just be an implementation detail. Of course there are attributes like __code__ which dive into implementation details, so there you will see the difference.

> Before we change how inspect.isfunction ultimately behaves, I'd like to
> make its purpose clearer (and try to check how that meshes with the
> current use cases).

The problem is that this is not easy to do.
You could search CPython for occurrences of inspect.isfunction() and you could search your favorite Python projects. This will give you some indication, but I'm not sure whether that will be representative. From what I can tell, inspect.isfunction() is mainly used as a guard for attribute access: it implies for example that a __globals__ attribute exists. And it's used by documentation tools to decide that something should be documented as a Python function whose signature can be extracted using inspect.signature().

Jeroen.
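The "guard for attribute access" use can be made concrete with a short snippet (illustrative, not taken from any particular project): isfunction() is True only for functions implemented in Python, so built-ins fail the check even though they are callable, which is exactly what PEP 575 runs into.

```python
import inspect

def py_func():
    pass

# True only for functions written in Python; built-ins like len fail.
print(inspect.isfunction(py_func))   # True
print(inspect.isfunction(len))       # False

# Typical guard: isfunction() implies attributes such as
# __globals__ and __code__ exist, so this access is safe.
if inspect.isfunction(py_func):
    print(py_func.__code__.co_name)  # py_func
```

A C-implemented callable that wants documentation tools to treat it as a "Python function" falls on the wrong side of this test, which is why the check's intended purpose matters here.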
[Python-Dev] Making Tcl/Tk more suitable for embedding (was: [issue33479] Document tkinter and threads)
Subj is off topic for the ticket, so I guess this discussion is better continued here.

On 15.05.2018 18:20, Mark Roseman wrote:
> Hi Ivan, thanks for your detailed response. The approach you're
> suggesting ("Since the sole offender is their threading model, the way is
> to show them how it's defective and work towards improving it.") is in
> the end not something I think is workable.
>
> Some historical context. Ousterhout had some specific ideas about how
> Tcl/Tk should be used, and that was well-reflected in his early control
> of the code base. He was certainly outspoken against threads. The main
> argument is that they're complicated if you don't know what you're doing,
> which included the "non-professional programmers" he considered the core
> audience. Enumerating how threads were used at the time, most of the uses
> could be handled (more simply) in other ways, such as event-driven and
> non-blocking timers and I/O (so what people today would refer to as the
> "node.js event model"). Threads (or separate communicating processes)
> were for long-running computations, things he always envisioned happening
> in C code (written by more "professional programmers"), not Tcl. His idea
> of how Tcl and C development would be split didn't match reality given
> faster machines, more memory, etc.

Very enlightening. Many thanks.

> The second thing is that Tcl had multiple interpreters baked in pretty
> much from the beginning at the C level and exposed fairly early on
> (1996?) at the Tcl level, akin to PEP 554. Code isolation and resource
> management were the key motivators, but of course others followed.
> Creating and using Tcl interpreters was quick, lightweight (fast startup,
> low memory overhead, etc.) and easy. So in other words, the notion of
> multiple interpreters in Tcl vs. Python is completely different. I had
> one large application I built around that time that often ended up with
> hundreds of interpreters running.
Not familiar with the concept, so I can't say atm if tkinter can make any use of this. All tkinter-using code I've seen so far only ever uses a single tkinter.Tk() -- thus a single interpreter.

> Which brings me to threads and how they were added to the language. Your
> guess ("My guess for the decision is it was the easiest way to migrate
> the code base") is incorrect. The idea of "one thread/one interpreter"
> was just not seen as a restriction, and was a very natural extension of
> what had come before. It fit the use cases well (AOLserver was another
> good example) and was still very understandable from the user level.
> Contrast with Python's GIL, etc.

I'm not actually suggesting any changes to Tcl as a language, only to its C interface (details follow). AFAIK Tcl also advertises itself as an embeddable language as its main selling point, having been developed primarily as an interface to Tk rather than a self-sufficient language (or, equivalently, this being its primary use case now). Having to do an elaborate setup with lots of custom logic to be able to embed it is a major roadblock. This can be the leverage.

From the C interface's standpoint, an interpreter is effectively a bunch of data that can be passed to APIs. Currently, all Tcl_* calls with a specific interpreter instance must be made from the same thread, and this fact enforces sequential access. I'm suggesting to wrap all these public APIs with an interpreter-specific lock -- so calls can be made from any OS thread, and the lock enforces sequential access. For Tcl's execution model and existing code, nothing will change. The downside (that will definitely be brought up) is the overhead, of course. The question is thus whether the above-mentioned benefit outweighs it.

> With that all said, there would be very little motivation to change the
> Tcl/Tk side to allow multiple threads to access one interpreter, because
> in terms of the API and programming model that Tcl/Tk advertises, it's
> simply not a problem.
> Keep in mind, the people working on the Tcl/Tk core are very smart
> programmers, know threads very well, etc., so it's not an issue of "they
> should know better" or "it's old." In other words, "show them how it's
> defective" is a non-starter.
>
> The other, more practical matter in pushing for changes in the Tcl/Tk
> core, is that there are a fairly small number of people working on it,
> very part-time. Almost all of them are most interested in the Tcl side,
> not Tk. Changes made in Tk most often amount to bug fixes because
> someone's running into something in their own work. Expecting large-scale
> changes to happen to Tk without some way to get dedicated new resources
> put into it is not realistic.
>
> A final matter on the practical side. As you've carefully noted, certain
> Tcl/Tk calls now happen to work when called from different threads.
> Consider those a side-effect of present implementation, not a guarantee.
> Future core changes could change what can be called from different
> threads, making the situa
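Ivan's per-interpreter-lock proposal can be modeled in Python for illustration (the real change would live in Tcl's C API; the `Interp` class and its `eval` method below are invented stand-ins, not real Tcl names). The point is only that a lock owned by the interpreter serializes calls arriving from any OS thread, preserving the sequential-access guarantee Tcl's execution model relies on:

```python
import threading

class Interp:
    """Toy model of an interpreter whose public API is wrapped with an
    interpreter-specific lock (illustrative, not the real Tcl API)."""
    def __init__(self):
        self._lock = threading.Lock()
        self.evals = 0

    def eval(self):
        # Any OS thread may call in; the lock enforces the sequential
        # access that one-thread/one-interpreter previously guaranteed.
        with self._lock:
            self.evals += 1

interp = Interp()
workers = [threading.Thread(target=lambda: [interp.eval() for _ in range(10000)])
           for _ in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(interp.evals)  # 40000 -- no lost updates under the lock
```

For existing single-threaded Tcl code the wrapper is a no-op apart from the lock overhead, which is exactly the trade-off Ivan flags above.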
Re: [Python-Dev] [Python-checkins] bpo-33038: Fix gzip.GzipFile for file objects with a non-string name attribute. (GH-6095)
Sorry about approving this message (I'm a python-dev list moderator)! There will be a few more like it. Looking closer, it appears to be another variation of pure-nuisance spam that's been flooding all sorts of python.org lists. You've been spared many hundreds of those here, but since this one appeared to contain actual Python-related content, I reflexively approved it.

On Tue, May 15, 2018 at 3:43 PM, nataliemorrisonxm980xm--- via Python-Dev wrote:
> From: Serhiy Storchaka
> To: python-check...@python.org
> Sent: Wednesday, 9 May 2018, 10:14
> Subject: [Python-checkins] bpo-33038: Fix gzip.GzipFile for file objects
> with a non-string name attribute. (GH-6095)
> ,,,
Re: [Python-Dev] PEP 575 (Unifying function/method classes) update
On 2018-05-15 18:36, Petr Viktorin wrote:
> What is your ultimate use case?

(I'll just answer this one question now and reply to the more technical comments in another thread)

My ultimate use case is being able to implement functions and methods which are (A) equally fast as the existing built-in functions and methods, and (B) behave from a user's point of view like Python functions.

With objective (A) I want no compromises. CPython has many optimizations for built-in functions and all of them should work for my new functions.

Objective (B) means more precisely:

1. Implementing __get__ to turn a function into a method.
2. Being recognized as "functions" by tools like Sphinx and IPython.
3. Introspection support such as inspect.signature() and inspect.getsource().

Jeroen.
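Objective (B1), a __get__ that turns a function into a method, is just the descriptor protocol. Here is a minimal sketch (the `MyFunction` class is hypothetical, not the PEP 575 implementation, which would do this in C):

```python
import types

class MyFunction:
    """A callable whose __get__ produces a bound method,
    mimicking what plain Python functions do."""
    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self          # accessed on the class: unbound
        return types.MethodType(self, obj)   # accessed on an instance: bound

class C:
    @MyFunction
    def double(self, x):
        return 2 * x

print(C().double(21))  # 42 -- `self` was bound by __get__
```

Objectives (B2) and (B3) are the harder part: a class like this is still not recognized by inspect.isfunction() or by Sphinx, which is the gap the PEP's base_function design targets.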
Re: [Python-Dev] [Python-checkins] bpo-33038: Fix gzip.GzipFile for file objects with a non-string name attribute. (GH-6095)
From: Serhiy Storchaka
To: python-check...@python.org
Sent: Wednesday, 9 May 2018, 10:14
Subject: [Python-checkins] bpo-33038: Fix gzip.GzipFile for file objects with a non-string name attribute. (GH-6095)

https://github.com/python/cpython/commit/afe5f633e49e0e873d42088ae56819609c803ba0
commit: afe5f633e49e0e873d42088ae56819609c803ba0
branch: 2.7
author: Bo Bayles
committer: Serhiy Storchaka
date: 2018-05-09T13:14:40+03:00
summary: bpo-33038: Fix gzip.GzipFile for file objects with a non-string name attribute. (GH-6095)

files:
A Misc/NEWS.d/next/Library/2018-03-10-20-14-36.bpo-33038.yA6CP5.rst
M Lib/gzip.py
M Lib/test/test_gzip.py
M Misc/ACKS

diff --git a/Lib/gzip.py b/Lib/gzip.py
index 07c6db493b0b..76ace394f482 100644
--- a/Lib/gzip.py
+++ b/Lib/gzip.py
@@ -95,9 +95,8 @@ def __init__(self, filename=None, mode=None,
         if filename is None:
             # Issue #13781: os.fdopen() creates a fileobj with a bogus name
             # attribute. Avoid saving this in the gzip header's filename field.
-            if hasattr(fileobj, 'name') and fileobj.name != '':
-                filename = fileobj.name
-            else:
+            filename = getattr(fileobj, 'name', '')
+            if not isinstance(filename, basestring) or filename == '':
                 filename = ''
         if mode is None:
             if hasattr(fileobj, 'mode'):
                 mode = fileobj.mode

diff --git a/Lib/test/test_gzip.py b/Lib/test/test_gzip.py
index 902d93fe043f..cdb1af5c3d13 100644
--- a/Lib/test/test_gzip.py
+++ b/Lib/test/test_gzip.py
@@ -6,6 +6,7 @@
 import os
 import io
 import struct
+import tempfile
 gzip = test_support.import_module('gzip')
 data1 = """ int length=DEFAULTALLOC, err = Z_OK;
@@ -331,6 +332,12 @@ def test_fileobj_from_fdopen(self):
         with gzip.GzipFile(fileobj=f, mode="w") as g:
             self.assertEqual(g.name, "")

+    def test_fileobj_from_io_open(self):
+        fd = os.open(self.filename, os.O_WRONLY | os.O_CREAT)
+        with io.open(fd, "wb") as f:
+            with gzip.GzipFile(fileobj=f, mode="w") as g:
+                self.assertEqual(g.name, "")
+
     def test_fileobj_mode(self):
         gzip.GzipFile(self.filename, "wb").close()
         with open(self.filename, "r+b") as f:
@@ -359,6 +366,14 @@ def test_read_with_extra(self):
         with gzip.GzipFile(fileobj=io.BytesIO(gzdata)) as f:
             self.assertEqual(f.read(), b'Test')

+    def test_fileobj_without_name(self):
+        # Issue #33038: GzipFile should not assume that file objects that have
+        # a .name attribute use a non-None value.
+        with tempfile.SpooledTemporaryFile() as f:
+            with gzip.GzipFile(fileobj=f, mode='wb') as archive:
+                archive.write(b'data')
+            self.assertEqual(archive.name, '')
+
 def test_main(verbose=None):
     test_support.run_unittest(TestGzip)

diff --git a/Misc/ACKS b/Misc/ACKS
index 580b0c5bf76d..458f31e6a6b7 100644
--- a/Misc/ACKS
+++ b/Misc/ACKS
@@ -94,6 +94,7 @@
 Michael R Bax
 Anthony Baxter
 Mike Bayer
 Samuel L. Bayer
+Bo Bayles
 Donald Beaudry
 David Beazley
 Carlo Beccarini

diff --git a/Misc/NEWS.d/next/Library/2018-03-10-20-14-36.bpo-33038.yA6CP5.rst b/Misc/NEWS.d/next/Library/2018-03-10-20-14-36.bpo-33038.yA6CP5.rst
new file mode 100644
index ..22d394b85ab7
--- /dev/null
+++ b/Misc/NEWS.d/next/Library/2018-03-10-20-14-36.bpo-33038.yA6CP5.rst
@@ -0,0 +1,2 @@
+gzip.GzipFile no longer produces an AttributeError exception when used with
+a file object with a non-string name attribute. Patch by Bo Bayles.
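The behaviour the patch targets can be exercised directly. The sketch below runs against Python 3's gzip, which uses the same getattr-based fallback as the 2.7 backport above (an assumption worth noting; the commit itself patches the 2.7 branch): a SpooledTemporaryFile's `name` is not a usable string, so GzipFile falls back to an empty filename instead of raising AttributeError.

```python
import gzip
import tempfile

# SpooledTemporaryFile has no usable string .name before rollover,
# which used to crash GzipFile's header-writing code (bpo-33038).
with tempfile.SpooledTemporaryFile() as f:
    with gzip.GzipFile(fileobj=f, mode='wb') as archive:
        archive.write(b'data')
    print(repr(archive.name))  # '' -- fell back to an empty filename
```

The empty string simply means no original-filename field is stored in the gzip header, which is the safe behaviour for a nameless stream.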
Re: [Python-Dev] [Python-checkins] bpo-33038: Fix gzip.GzipFile for file objects with a non-string name attribute. (GH-6095)
From: Serhiy Storchaka
To: python-check...@python.org
Sent: Wednesday, 9 May 2018, 10:14
Subject: [Python-checkins] bpo-33038: Fix gzip.GzipFile for file objects with a non-string name attribute. (GH-6095)

https://github.com/python/cpython/commit/afe5f633e49e0e873d42088ae56819609c803ba0
commit: afe5f633e49e0e873d42088ae56819609c803ba0
branch: 2.7
author: Bo Bayles
committer: Serhiy Storchaka
date: 2018-05-09T13:14:40+03:00
summary: bpo-33038: Fix gzip.GzipFile for file objects with a non-string name attribute. (GH-6095)

files:
A Misc/NEWS.d/next/Library/2018-03-10-20-14-36.bpo-33038.yA6CP5.rst
M Lib/gzip.py
M Lib/test/test_gzip.py
M Misc/ACKS

diff --git a/Lib/gzip.py b/Lib/gzip.py
index 07c6db493b0b..76ace394f482 100644
--- a/Lib/gzip.py
+++ b/Lib/gzip.py
@@ -95,9 +95,8 @@ def __init__(self, filename=None, mode=None,
         if filename is None:
             # Issue #13781: os.fdopen() creates a fileobj with a bogus name
             # attribute. Avoid saving this in the gzip header's filename field.
-            if hasattr(fileobj, 'name') and fileobj.name != '':
-                filename = fileobj.name
-            else:
+            filename = getattr(fileobj, 'name', '')
+            if not isinstance(filename, basestring) or filename == '':
                 filename = ''
         if mode is None:
             if hasattr(fileobj, 'mode'): mode = fileobj.mode

diff --git a/Lib/test/test_gzip.py b/Lib/test/test_gzip.py
index 902d93fe043f..cdb1af5c3d13 100644
--- a/Lib/test/test_gzip.py
+++ b/Lib/test/test_gzip.py
@@ -6,6 +6,7 @@
 import os
 import io
 import struct
+import tempfile
 gzip = test_support.import_module('gzip')
 data1 = """  int length=DEFAULTALLOC, err = Z_OK;
@@ -331,6 +332,12 @@ def test_fileobj_from_fdopen(self):
         with gzip.GzipFile(fileobj=f, mode="w") as g:
             self.assertEqual(g.name, "")

+    def test_fileobj_from_io_open(self):
+        fd = os.open(self.filename, os.O_WRONLY | os.O_CREAT)
+        with io.open(fd, "wb") as f:
+            with gzip.GzipFile(fileobj=f, mode="w") as g:
+                self.assertEqual(g.name, "")
+
     def test_fileobj_mode(self):
         gzip.GzipFile(self.filename, "wb").close()
         with open(self.filename, "r+b") as f:
@@ -359,6 +366,14 @@ def test_read_with_extra(self):
         with gzip.GzipFile(fileobj=io.BytesIO(gzdata)) as f:
             self.assertEqual(f.read(), b'Test')

+    def test_fileobj_without_name(self):
+        # Issue #33038: GzipFile should not assume that file objects that have
+        # a .name attribute use a non-None value.
+        with tempfile.SpooledTemporaryFile() as f:
+            with gzip.GzipFile(fileobj=f, mode='wb') as archive:
+                archive.write(b'data')
+            self.assertEqual(archive.name, '')
+
 def test_main(verbose=None):
     test_support.run_unittest(TestGzip)

diff --git a/Misc/ACKS b/Misc/ACKS
index 580b0c5bf76d..458f31e6a6b7 100644
--- a/Misc/ACKS
+++ b/Misc/ACKS
@@ -94,6 +94,7 @@
 Michael R Bax
 Anthony Baxter
 Mike Bayer
 Samuel L. Bayer
+Bo Bayles
 Donald Beaudry
 David Beazley
 Carlo Beccarini

diff --git a/Misc/NEWS.d/next/Library/2018-03-10-20-14-36.bpo-33038.yA6CP5.rst b/Misc/NEWS.d/next/Library/2018-03-10-20-14-36.bpo-33038.yA6CP5.rst
new file mode 100644
index ..22d394b85ab7
--- /dev/null
+++ b/Misc/NEWS.d/next/Library/2018-03-10-20-14-36.bpo-33038.yA6CP5.rst
@@ -0,0 +1,2 @@
+gzip.GzipFile no longer produces an AttributeError exception when used with
+a file object with a non-string name attribute. Patch by Bo Bayles.

___
Python-checkins mailing list
python-check...@python.org
https://mail.python.org/mailman/listinfo/python-checkins
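The behavior this commit fixes is easy to demonstrate from user code. A minimal sketch, assuming a Python where the bpo-33038 fix is present (3.7+, or 2.7 with the commit above): SpooledTemporaryFile raises AttributeError when its `.name` is accessed before rollover, which used to crash GzipFile; with the fix the gzip header's filename field is simply left empty.

```python
import gzip
import tempfile

with tempfile.SpooledTemporaryFile() as f:
    # Writing through a file object with an unusable .name no longer
    # raises; the header filename just stays empty.
    with gzip.GzipFile(fileobj=f, mode="wb") as archive:
        archive.write(b"data")
    assert archive.name == ""

    # The stream is still a valid gzip file: rewind and decompress it.
    f.seek(0)
    with gzip.GzipFile(fileobj=f, mode="rb") as reader:
        assert reader.read() == b"data"
```

The same applies to any file object whose `.name` is missing or non-string, e.g. an `io.BytesIO` or a file opened from a raw descriptor.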
Re: [Python-Dev] Changes to configure.ac, auto-detection and related build issues
I didn't look at your PRs yet, but PR commits are squashed into a single commit. So it's better to have multiple PRs for different changes.

Victor
Re: [Python-Dev] Changes to configure.ac, auto-detection and related build issues
On 15 May 2018 at 05:54, Ned Deily wrote:
> On May 15, 2018, at 01:58, Eitan Adler wrote:
>> On Monday, 14 May 2018, Victor Stinner wrote:
>> Hi Eitan,
>>
>> 2018-05-15 0:01 GMT-04:00 Eitan Adler :
>> > I am working on updating, fixing, or otherwise changing python's
>> > configure.ac. This work is complex, (...)
>>
>> Is your work public? Is there an open issue on bugs.python.org or an
>> open pull request?
>>
>> I'm opening bugs and PRs as I go. Some examples are:
>>
>> https://github.com/python/cpython/commit/98929b545e86e7c7296c912d8f34e8e8d3fd6439
>> https://github.com/python/cpython/pull/6845
>> https://github.com/python/cpython/pull/6848
>> https://github.com/python/cpython/pull/6849
>> https://bugs.python.org/issue33485
>>
>> And so on
>>
>> If not, would you mind to at least describe the changes that you plan to do?
>>
>> > Please feel free to tag me in
>> > related PRs or bugs or emails over the next few weeks.
>>
>> Hopefully, we only rarely need to modify configure.ac
>>
>> I'm primarily worried about breaking arcane platforms I don't have
>> direct access to.
>
> Hi, Eitan!
>
> As you recognize, it is always a bit dangerous to modify configure.ac and
> friends as we do support so many platforms and configurations, and
> downstream users try combinations that we don't claim to test or support.
> So, we try to be conservative about making changes there and do so only
> with good reason.
>
> So far, it's somewhat difficult for me to understand what you are trying
> to accomplish with the changes you've noted so far other than various
> cosmetic cleanups.

This all started when I found a bug in the C code of Python. I wanted to
submit a PR and test my change, but found that it was painful to compile
Python on many platforms. In particular, I needed to use "clang", but
configure.ac was looking for a compiler called "gcc -pthread". As I started
to fix this, I realized there is a lot of unneeded complexity in
configure.ac and am now trying to clean that up.

> It is also difficult to properly review a bunch of small PRs that modify
> the same configuration files, especially without an overall tracking
> issue.

I wanted to keep the reviews small to be reviewable, revertable, and
bisectable. Is there a nicer way of handling that? Maybe a single large
review with a series of small commits?

> For most of this to move forward, I think you should create or adapt at
> least one b.p.o issue to describe what changes you are suggesting and why
> and how they apply to our various platforms, and then consolidate PRs
> under that issue. Don't be surprised if the PRs don't get much attention
> right away as we're busy at the moment trying to get 3.7.0 out the door.

Understood. There are a lot of PRs and a lot of work. I've been pretty
happy with the traction so far.

> And it would be best to avoid including generated files (like configure
> vs configure.ac) in new PRs as that will only add to the likelihood of
> merge conflicts and review complexity.

I've gotten three different pieces of advice about this:

(a) always include them, so it's easier to review
(b) never include them, so it's easier to review and to avoid merge conflicts
(c) only include them if your tool version matches what was used originally

I don't care much, but it's clear there isn't agreement.

--
Eitan Adler
Re: [Python-Dev] PEP 575 (Unifying function/method classes) update
On 05/15/18 05:15, Jeroen Demeyer wrote:
> On 2018-05-14 19:56, Petr Viktorin wrote:
>> It does quite a lot of things, and the changes are all intertwined,
>> which will make it hard to get reviewed and accepted.
>
> The problem is that many things *are* already intertwined currently. You
> cannot deal with functions without involving methods, for example.
>
> An important note is that it was never my goal to create a minimal PEP. I
> did not aim for changing as little as possible. I was thinking: we are
> changing functions, what would be the best way to implement them?

That might be a problem. For the change to be accepted, a core developer
will need to commit to maintaining the code, understand it, and accept
responsibility for anything that's broken. Naturally, large-scale changes
have less of a chance there.

With such a "finished product" PEP, it's hard to see if some of the various
problems could be solved in a better way -- faster, more maintainable, or
less disruptive. It's also harder from a psychological point of view: you
obviously already put in a lot of good work, and it's harder to waste that
work if an even better solution is found. (I always tell Marcel to view
large-scale changes as a hands-on learning experiment -- more likely to be
thrown away than accepted -- rather than as creating a finished project.)

> The main goal was fixing introspection, but a secondary goal was fixing
> many of the existing warts with functions. Probably this secondary goal
> will in the end be more important for the general Python community.
>
> I would argue that my PEP may look complicated, but I'm sure that the end
> result will be a simpler implementation than we have today. Instead of
> having four related classes implementing similar functionality
> (builtin_function_or_method, method, method_descriptor and function), we
> have just one (base_function). The existing classes like method still
> exist with my PEP, but a lot of the core functionality is implemented in
> the common base_function.

Is a branching class hierarchy, with quite a few new flags for feature
selection, the kind of simplicity we want? Would it be possible to first
decouple things, reducing the complexity, and then tackle the individual
problems?

> This is really one of the key points: while my PEP *could* be implemented
> without the base_function class, the resulting code would be far more
> complicated.
>
>> Are there parts that can be left to a subsequent PEP, to simplify the
>> document (and implementation)?
>
> It depends. The current PEP is more or less a finished product. You can
> of course pick parts of the PEP and implement those, but then those parts
> will be somewhat meaningless individually. But if PEP 575 is accepted "in
> principle" (you accept the new class hierarchy for functions), then the
> details could be spread over several PEPs. But those individual PEPs
> would only make sense in the light of PEP 575.

Well, that's the thing I'm not sure about. The class hierarchy still makes
it hard to decouple the introspection side (how functions look on the
outside) from the calling mechanism (how the calling works internally). I
fear that it is replacing complexity with a different kind of complexity.

So my main question now is: can this all be *simplified* rather than
*reorganized*? It's a genuine question -- I don't know, but I feel it
should be explored more.

> A few small details could be left out, such as METH_BINDING. But that
> wouldn't yield a significant simplification.
>
>> It seems to me that the current complexity is (partly) due to the fact
>> that how functions are *called* is tied to how they are *introspected*.
>
> The *existing* situation is that introspection is totally tied to how
> functions are called. So I would argue that my PEP improves on that by
> removing some of those ties by moving __call__ to a common base class.
>
>> Maybe we can change `inspect` to use duck-typing instead of isinstance?
>
> That was rejected on https://bugs.python.org/issue30071
>
>> Then, if built-in functions were subclassable, Cython functions could
>> need to provide appropriate __code__/__defaults__/__kwdefaults__
>> attributes that inspect would pick up.
>
> Of course, that's possible. I don't think that it would be a *better*
> solution than my PEP though.
>
> Essentially, my PEP started from that idea. But then you realize that
> you'll need to handle not only built-in functions but also method
> descriptors (unbound methods of extension types). And you'll want to
> allow __get__ for the new subclasses. For efficiency, you really want to
> implement __get__ in the base classes (both builtin_function_or_method
> and method_descriptor) because of optimizations combining __get__ and
> __call__ (the LOAD_METHOD and CALL_METHOD opcodes). And then you realize
> that it makes no sense to duplicate all that functionality in both
> classes. So you add a new base class. You already end up with a major
> part of my PEP this way.

Starting from an idea and ironing out the de
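The `__get__`/`__call__` coupling being discussed is the descriptor protocol. As a toy illustration (hypothetical names, not PEP 575 code): a plain Python function implements `__get__`, which is what turns attribute access on an instance into a bound method, while today's built-in functions do not.

```python
class Greeter:
    name = "world"

def greet(self):
    return "hi, " + self.name

# A plain function is a descriptor: calling its __get__ explicitly
# produces the same bound method that attribute lookup would.
bound = greet.__get__(Greeter(), Greeter)
assert bound() == "hi, world"

# Attaching the function to the class goes through the same machinery.
Greeter.greet = greet
assert Greeter().greet() == "hi, world"

# Built-in functions currently lack __get__ entirely; giving both kinds
# of function one shared binding implementation is what a common base
# class like base_function is meant to provide.
assert hasattr(greet, "__get__")
assert not hasattr(len, "__get__")
```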
Re: [Python-Dev] PEP 575 (Unifying function/method classes) update
On 2018-05-14 22:38, Petr Viktorin wrote:
> Why are these flags added?

I made a minor edit to the PEP to remove those flags:
https://github.com/python/peps/pull/649
Re: [Python-Dev] Changes to configure.ac, auto-detection and related build issues
On May 15, 2018, at 01:58, Eitan Adler wrote:
> On Monday, 14 May 2018, Victor Stinner wrote:
> Hi Eitan,
>
> 2018-05-15 0:01 GMT-04:00 Eitan Adler :
> > I am working on updating, fixing, or otherwise changing python's
> > configure.ac. This work is complex, (...)
>
> Is your work public? Is there an open issue on bugs.python.org or an
> open pull request?
>
> I'm opening bugs and PRs as I go. Some examples are:
>
> https://github.com/python/cpython/commit/98929b545e86e7c7296c912d8f34e8e8d3fd6439
> https://github.com/python/cpython/pull/6845
> https://github.com/python/cpython/pull/6848
> https://github.com/python/cpython/pull/6849
> https://bugs.python.org/issue33485
>
> And so on
>
> If not, would you mind to at least describe the changes that you plan to do?
>
> > Please feel free to tag me in
> > related PRs or bugs or emails over the next few weeks.
>
> Hopefully, we only rarely need to modify configure.ac
>
> I'm primarily worried about breaking arcane platforms I don't have
> direct access to.

Hi, Eitan!

As you recognize, it is always a bit dangerous to modify configure.ac and
friends as we do support so many platforms and configurations, and
downstream users try combinations that we don't claim to test or support.
So, we try to be conservative about making changes there and do so only
with good reason.

So far, it's somewhat difficult for me to understand what you are trying to
accomplish with the changes you've noted so far other than various cosmetic
cleanups. It is also difficult to properly review a bunch of small PRs that
modify the same configuration files, especially without an overall tracking
issue. For most of this to move forward, I think you should create or adapt
at least one b.p.o issue to describe what changes you are suggesting and why
and how they apply to our various platforms, and then consolidate PRs under
that issue.

Don't be surprised if the PRs don't get much attention right away as we're
busy at the moment trying to get 3.7.0 out the door.

And it would be best to avoid including generated files (like configure vs
configure.ac) in new PRs as that will only add to the likelihood of merge
conflicts and review complexity.

Thanks!

--
Ned Deily
n...@python.org
[Python-Dev] FINAL WEEK FOR 3.7.0 CHANGES!
This is it! We are down to THE FINAL WEEK for 3.7.0!

Please get your feature fixes, bug fixes, and documentation updates in
before 2018-05-21 ~23:59 Anywhere on Earth (UTC-12:00). That's about 7 days
from now. We will then tag and produce the 3.7.0 release candidate. Our
goal continues to be to have no changes between the release candidate and
final; AFTER NEXT WEEK'S RC1, CHANGES APPLIED TO THE 3.7 BRANCH WILL BE
RELEASED IN 3.7.1.

Please double-check that there are no critical problems outstanding, that
documentation for new features in 3.7 is complete (including NEWS and
What's New items), and that 3.7 is getting exposure and being tested with
our various platforms and third-party distributions and applications.

Those of us who are participating in the development sprints at PyCon US
2018 here in Cleveland can feel the excitement building as we work through
the remaining issues, including completing the "What's New in 3.7" document
and final feature documentation. (We wish you could all be here.)

As noted before, the ABI for 3.7.0 was frozen as of 3.7.0b3. You should now
be treating the 3.7 branch as if it were already released and in
maintenance mode. That means you should only push the kinds of changes that
are appropriate for a maintenance release: non-ABI-changing bug and feature
fixes and documentation updates. If you find a problem that requires an
ABI-altering or other significant user-facing change (for example,
something likely to introduce an incompatibility with existing users' code
or require rebuilding of user extension modules), please make sure to set
the b.p.o issue to "release blocker" priority and describe there why you
feel the change is necessary. If you are reviewing PRs for 3.7 (and please
do!), be on the lookout for and flag potential incompatibilities (we've all
made them).

Thanks again for all of your hard work towards making 3.7.0 yet another
great release - coming to a website near you on 06-15!
Release Managerly Yours,
--Ned

https://www.python.org/dev/peps/pep-0537/

--
Ned Deily
n...@python.org
Re: [Python-Dev] PEP 575 (Unifying function/method classes) update
On 2018-05-14 19:56, Petr Viktorin wrote:
> It does quite a lot of things, and the changes are all intertwined, which
> will make it hard to get reviewed and accepted.

The problem is that many things *are* already intertwined currently. You
cannot deal with functions without involving methods, for example.

An important note is that it was never my goal to create a minimal PEP. I
did not aim for changing as little as possible. I was thinking: we are
changing functions, what would be the best way to implement them?

The main goal was fixing introspection, but a secondary goal was fixing
many of the existing warts with functions. Probably this secondary goal
will in the end be more important for the general Python community.

I would argue that my PEP may look complicated, but I'm sure that the end
result will be a simpler implementation than we have today. Instead of
having four related classes implementing similar functionality
(builtin_function_or_method, method, method_descriptor and function), we
have just one (base_function). The existing classes like method still exist
with my PEP, but a lot of the core functionality is implemented in the
common base_function. This is really one of the key points: while my PEP
*could* be implemented without the base_function class, the resulting code
would be far more complicated.

> Are there parts that can be left to a subsequent PEP, to simplify the
> document (and implementation)?

It depends. The current PEP is more or less a finished product. You can of
course pick parts of the PEP and implement those, but then those parts will
be somewhat meaningless individually. But if PEP 575 is accepted "in
principle" (you accept the new class hierarchy for functions), then the
details could be spread over several PEPs. But those individual PEPs would
only make sense in the light of PEP 575.

A few small details could be left out, such as METH_BINDING. But that
wouldn't yield a significant simplification.

> It seems to me that the current complexity is (partly) due to the fact
> that how functions are *called* is tied to how they are *introspected*.

The *existing* situation is that introspection is totally tied to how
functions are called. So I would argue that my PEP improves on that by
removing some of those ties by moving __call__ to a common base class.

> Maybe we can change `inspect` to use duck-typing instead of isinstance?

That was rejected on https://bugs.python.org/issue30071

> Then, if built-in functions were subclassable, Cython functions could
> need to provide appropriate __code__/__defaults__/__kwdefaults__
> attributes that inspect would pick up.

Of course, that's possible. I don't think that it would be a *better*
solution than my PEP though.

Essentially, my PEP started from that idea. But then you realize that
you'll need to handle not only built-in functions but also method
descriptors (unbound methods of extension types). And you'll want to allow
__get__ for the new subclasses. For efficiency, you really want to
implement __get__ in the base classes (both builtin_function_or_method and
method_descriptor) because of optimizations combining __get__ and __call__
(the LOAD_METHOD and CALL_METHOD opcodes). And then you realize that it
makes no sense to duplicate all that functionality in both classes. So you
add a new base class. You already end up with a major part of my PEP this
way.

That still leaves the issue of what inspect.isfunction() should do. Often,
"isfunction" is used to check for "has introspection", so you certainly
want to allow for custom built-in function classes to satisfy
inspect.isfunction(). So you need to involve Python functions too in the
class hierarchy. And that's more or less my PEP.

Jeroen.
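A duck-typed check of the kind rejected in issue 30071 can be sketched as follows. The helper name `is_function_like` is hypothetical (it is not an `inspect` API); the sketch assumes the attributes `__code__`/`__defaults__`/`__kwdefaults__` mentioned above are the ones a consumer would probe for.

```python
import inspect

def is_function_like(obj):
    # Duck-typed alternative to inspect.isfunction(): accept any callable
    # exposing the usual introspection attributes, regardless of its
    # concrete class.
    return callable(obj) and all(
        hasattr(obj, attr)
        for attr in ("__code__", "__defaults__", "__kwdefaults__")
    )

def f(x, y=1):
    return x + y

assert is_function_like(f)
# Built-in functions fail the duck test today -- they carry no __code__ --
# which is the gap that makes a shared base class attractive instead.
assert not is_function_like(len)
assert inspect.isfunction(f) and not inspect.isfunction(len)
```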
Re: [Python-Dev] PEP 575 (Unifying function/method classes) update
On 2018-05-14 22:38, Petr Viktorin wrote:
> Why are these flags added? They aren't free -- the space of available
> flags is not infinite. If something (Cython?) needs eight of them, it
> would be nice to mention the use case, at least as an example.
>
> What should Python do with a m_methods entry that has METH_CUSTOM set?
> Again it would be nice to have an example or use case.

They have no specific use case. I just added this because it made sense
abstractly. I can remove this from my PEP to simplify it.