Re: [Python-ideas] Null coalescing operator

2016-11-01 Thread Paul Moore
On 1 November 2016 at 10:11, Nick Coghlan  wrote:
>
> I do think it would be worth covering the symbol+keyword option
> discussed in PEP 531 (i.e. "?else" instead of "??", but keeping "?.",
> and "?[")

FWIW, I'm not keen on it.

As a technical question, would it be treated in the syntax as a
keyword, which happened to be made up of a punctuation character
followed by letters, or as a ? symbol followed by a keyword? The
difference would be in how "? else" was treated. If the space is not
allowed, we have a unique situation in Python's grammar (where
whitespace between a symbol and a keyword is explicitly disallowed
rather than being optional). If it is allowed, I suspect a lot of
people would prefer to write "? else" and aesthetically the two seem
very different to me.

Paul


Re: [Python-ideas] PEP 531: Existence checking operators

2016-10-28 Thread Paul Moore
On 28 October 2016 at 15:40, Nick Coghlan  wrote:
> I also don't think the idea is sufficiently general to be worthy of
> dedicated syntax if it's limited specifically to "is not None" checks
> - None's definitely special, but it's not *that* special. Unifying
> None, NaN, NotImplemented and Ellipsis into a meta-category of objects
> that indicate the absence of information rather than a specific value,
> though? And also allowing developers to emulate the protocol for
> testing purposes? That's enough to pique my interest.

I think that's the key for me - new syntax for "is not None" types of
test seems of limited value (sure, other languages have such things,
but that's not compelling - the contexts are different). However, I'm
not convinced by your proposal that we can unify None, NaN,
NotImplemented and Ellipsis in the way you suggest. I wouldn't expect
a[1, None, 2] to mean the same as a[1, ..., 2], so why would an
operator that tested for "Ellipsis or None" be useful? Same for
NotImplemented - we're not proposing to allow rich comparison
operators to return None rather than NotImplemented. The nearest to
plausible is NaN vs None - but even there I don't see the two as the
same.

So, to answer your initial questions, in my opinion:

1. The concept of "checking for existence" is valid.
2. I don't see merging domain-specific values under a common "does not
exist" banner as useful. Specifically, because I wouldn't want the
"does not exist" values to become interchangeable.
3. I don't think there's much value in specific existence-checking
syntax, precisely because I don't view it as a good thing to merge
multiple domain-specific "does not exist" values, and therefore the benefit
is limited to a shorter way of saying "is not None".

As you noted, given my answers to 1-3, there's not much point in
considering the remaining questions. However, I do think that there's
another concept tied up in the proposals here, that of "short
circuiting attribute access / indexing". The call was for something
that said roughly "a.b if a is not None, otherwise None". But this is
only one form of this pattern - there's a similar pattern, "a.b if a
has an attribute b, otherwise None". And that's been spelled
"getattr(a, 'b', None)" for a long time now. The existence of getattr,
and the fact that no-one is crying out for it to be replaced with
syntax, implies to me that before leaping to a syntax solution we
should be looking at a normal function (possibly a builtin, but maybe
even just a helper). I'd like to see a justification for why "a.b if a
is not None, else None" deserves syntax when "a.b if a has attribute
b, else None" doesn't.

IMO, there's no need for syntax here. There *might* be some benefit in
some helper functions, though. The cynic in me wonders how much of
this debate is rooted in the fact that it's simply more fun to propose
new syntax, than to just write a quick helper and get on with coding
your application...
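
To be concrete, the kind of quick helper I have in mind is nothing more
than the following sketch (the name and signature here are mine, purely
for illustration, not a concrete proposal):

def none_aware_attr(obj, name, default=None):
    # obj.name, unless obj is None
    return default if obj is None else getattr(obj, name, default)

With that, the proposed "a?.b" is just spelled none_aware_attr(a, 'b').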

Paul


Re: [Python-ideas] Null coalescing operator

2016-10-31 Thread Paul Moore
On 31 October 2016 at 17:16, Guido van Rossum  wrote:
> I think we should try to improve our intuition about these operators. Many
> things that are intuitively clear still require long turgid paragraphs in
> reference documentation to state the behavior unambiguously -- but that
> doesn't stop us from intuitively grasping concepts like a+b (when does
> b.__radd__ get called?) or @classmethod.
[...]
> The "?." operator splits the expression in two parts; the second part is
> skipped if the first part is None.
>
> Eventually this *will* become intuitive. The various constraints are all
> naturally imposed by the grammar so you won't have to think about them
> consciously.

Thanks. Yes, I agree that details in a spec are never particularly
obvious, and we need an intuition of what the operator does if it's to
be successful.

Mark - I couldn't offer a specific rewording, precisely because I
found the whole thing confusing. But based on Guido's post, I'd say
that the "intuitive" explanation of the proposed operators should be
right at the top of the PEP, in the abstract - and should be repeated
as the first statement in the specification section for each operator.
The details can then follow, including all of the corner cases. But
I'd be inclined there to word the corner cases as positive statements,
rather than negative ones. Take, for example, the case
"d?.year.numerator + 1" - you say

"""
Note that the error in the second example is not on the attribute
access numerator. In fact, that attribute access is never performed.
The error occurs when adding None + 1, because the None-aware
attribute access does not short circuit +.
"""

which reads to me as presenting the misconception (that the error was
from the access to numerator) before the correct explanation, and then
explaining to the reader why they were confused if they thought that.
I'd rather see it worded something along the lines of:

"""

>>> d = date.today()
>>> d?.year.numerator + 1
2017

>>> d = None
>>> d?.year.numerator + 1

Traceback (most recent call last):
  File "", line 1, in 
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'

Note that the second example demonstrates that when ?. splits the
enclosing expression into 2 parts, operators like + have a lower
precedence, and so are not short circuited. So, we get a TypeError if
d is None, because we're trying to add None to an integer (as the
error states).
"""

There's no need to even *mention* the incorrect interpretation, it
does nothing for people who'd misinterpreted the example in the first
place, but for people who hadn't, it just suggests to them an
alternative explanation they hadn't thought of - so confusing them
where they weren't confused before.

Does this clarify what I was struggling with in the way the PEP was worded?

Paul


Re: [Python-ideas] Non-ASCII in Python syntax? [was: Null coalescing operator]

2016-10-30 Thread Paul Moore
On 30 October 2016 at 12:31, Chris Angelico <ros...@gmail.com> wrote:
> On Sun, Oct 30, 2016 at 11:22 PM, Paul Moore <p.f.mo...@gmail.com> wrote:
>> In mentioning emoji, my main point was that "average computer users"
>> are more and more likely to want to use emoji in general applications
>> (emails, web applications, even documents) - and if a sufficiently
>> general solution for that problem is found, it may provide a solution
>> for the general character-entry case.
>
> Before Unicode emoji were prevalent, ASCII emoticons dominated, and
> it's not uncommon for multi-character sequences to be automatically
> transformed into their corresponding emoji. It isn't hard to set
> something up that does these kinds of transformations for other
> Unicode characters - use trigraphs for clarity, and type "/:0" to
> produce "∅". Or whatever's comfortable for you. Maybe rig it on
> Ctrl-Alt-0, if you prefer shift-key sequences.

It's certainly not difficult, in principle. I have (had, I lost it in
an upgrade recently...) a little AutoHotkey program that interpreted
Vim-style digraphs in any application that needed them. But my point
was that we don't want to require people to write such custom
utilities, just to be able to write Python code. Or is the feeling
that it's acceptable to require that?

Paul

Re: [Python-ideas] Non-ASCII in Python syntax? [was: Null coalescing operator]

2016-10-30 Thread Paul Moore
On 30 October 2016 at 14:43,   wrote:
> Just picking a nit, here, windows will happily let you do silly things like 
> hook 14 keyboards up and let you map all of emoji to them.  Sadly, this 
> requires lua.

Off topic, I know, but how? I have a laptop with an external and an
internal keyboard. Can I map the internal keyboard to different
characters somehow?
Paul


Re: [Python-ideas] A better interactive prompt

2016-10-26 Thread Paul Moore
On 26 October 2016 at 22:40, Nikolaus Rath  wrote:
> It also imposes a significant burden on scripting. I often have elements
> like this in shell scripts:
>
> output=$(python <<EOF
> import h5py
> with h5py.File('foo', 'r') as fh:
>     print((fh['bla'] * fh['com']).sum())
> EOF
> )
>
> If this now starts up IPython, it'll be *significantly* slower.

Good point. We could, of course, detect when stdin is non-interactive,
but at that point the code is starting to get unreasonably complex, as
well as having way too many special cases. So I agree, that probably
kills the proposal.
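
For what it's worth, the check itself would be trivial - something along
these lines, using sys.stdin.isatty() (just a sketch):

import sys

if sys.stdin.isatty():
    # Interactive session - this is where a richer REPL could be offered.
    pass
else:
    # Input is piped or redirected (e.g. a heredoc like the one above) -
    # keep the plain REPL so scripts don't pay any extra startup cost.
    pass

It's everything *around* that check (configuration, fallbacks, what to do
when IPython isn't installed) that piles up the special cases.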

Paul


Re: [Python-ideas] A better interactive prompt

2016-10-26 Thread Paul Moore
On 26 October 2016 at 22:11, Todd  wrote:
> Isn't this what aliases are for?  Just set "python" to be an alias for
> "ipython" for your interactive shell.

I hadn't thought of that option. I might give it a try. Although I'm
not sure how I'd set up a PowerShell function (I'm on Windows) that
would wrap the launcher (which selects the version of Python to use)
and invoke IPython. I'll give it a go, though.

Paul


Re: [Python-ideas] PEP 531: Existence checking operators

2016-10-29 Thread Paul Moore
On 29 October 2016 at 07:21, Nick Coghlan  wrote:
> A short-circuiting if-else protocol for arbitrary "THEN if COND else
> ELSE" expressions could then look like this:
>
> _condition = COND
> if _condition:
>     _then = THEN
>     if hasattr(_condition, "__then__"):
>         return _condition.__then__(_then)
>     return _then
> else:
>     _else = ELSE
>     if hasattr(_condition, "__else__"):
>         return _condition.__else__(_else)
>     return _else
>
> "and" and "or" would then be simplified versions of that, where the
> condition expression was re-used as either the "THEN" subexpression
> ("or") or the "ELSE" subexpression ("and").
>
> The reason I think this is potentially interesting in the context of
> PEPs 505 and 531 is that with that protocol defined, the
> null-coalescing "operator" wouldn't need to be a new operator, it
> could just be a new builtin that defined the appropriate underlying
> control flow:

This seems to have some potential to me. It doesn't seem massively
intrusive (there's a risk that it might be considered a step too far
in "making the language mutable", but otherwise it's just a new
extension protocol around an existing construct). The biggest downside
I see is that it could be seen as simply generalisation for the sake
of it. But with the null-coalescing / sentinel checking use case, plus
Greg's examples from the motivation section of PEP 335, there may well
be enough potential uses to warrant such a change.

Paul


Re: [Python-ideas] INSANE FLOAT PERFORMANCE!!!

2016-10-11 Thread Paul Moore
On 11 October 2016 at 17:49, Nick Coghlan  wrote:
> On 12 October 2016 at 02:16, Elliot Gorokhovsky
>  wrote:
>> So I thought, wow, this will give some nice numbers! But I underestimated
>> the power of this optimization. You have no idea. It's crazy.
>> This is just insane. This is crazy.
>
> Not to take away from the potential for speed improvements (which do
> indeed seem interesting), but I'd ask that folks avoid using mental
> health terms to describe test results that we find unbelievable. There
> are plenty of other adjectives we can use, and a text-based medium
> like email gives us a chance to proofread our posts before we send
> them.

I'd also suggest toning down the rhetoric a bit (all-caps title, "the
contents of this message may be dangerous for readers with heart
conditions" etc. Your results do seem good, but it's a little hard to
work out what you actually did, and how your results were produced,
through the hype. It'll be much better when someone else has a means
to reproduce your results to confirm them. In all honesty, people
have been working on Python's performance for a long time now, and I'm
more inclined to think that a 50% speedup is a mistake rather than an
opportunity that's been missed for all that time. I'd be happy to be
proved wrong, but for now I'm skeptical.

Please continue working on this - I'd love my skepticism to be proved wrong!

Paul


Re: [Python-ideas] Fwd: unpacking generalisations for list comprehension

2016-10-13 Thread Paul Moore
On 13 October 2016 at 15:32, Sven R. Kunze  wrote:
> Steven, please. You seemed to struggle to understand the notion of the
> [*] construct and many people (not just me) here tried their best to
> explain their intuition to you.

And yet, the fact that it's hard to explain your intuition to others
(Steven is not the only one who's finding this hard to follow) surely
implies that it's merely that - personal intuition - and not universal
understanding.

The *whole point* here is that not everyone understands the proposed
notation the way the proposers do, and it's *hard to explain* to those
people. Blaming the people who don't understand does not support the
position that this notation should be added to the language. Rather,
it reinforces the idea that the new proposal is hard to teach (and
consequently, it may be a bad idea for Python).

Paul


Re: [Python-ideas] INSANE FLOAT PERFORMANCE!!!

2016-10-12 Thread Paul Moore
On 12 October 2016 at 11:16, Steven D'Aprano  wrote:
> On Wed, Oct 12, 2016 at 12:25:16AM +, Elliot Gorokhovsky wrote:
>
>> Regarding generalization: the general technique for special-casing is you
>> just substitute all type checks with 1 or 0 by applying the type assumption
>> you're making. That's the only way to guarantee it's safe and compliant.
>
> I'm confused -- I don't understand how *removing* type checks can
> possible guarantee the code is safe and compliant.
>
> It's all very well and good when you are running tests that meet your
> type assumption, but what happens if they don't? If I sort a list made
> up of (say) mixed int and float (possibly including subclasses), does
> your "all type checks are 1 or 0" sort segfault? If not, why not?
> Where's the safety coming from?

My understanding is that the code does a pre-check that all the
elements of the list are the same type (float, for example). This is a
relatively quick test (O(n) pointer comparisons). If everything *is* a
float, then an optimised comparison routine is used that skips all the
type checks and goes straight to a float comparison (a single machine
op). Because there are more than O(n) comparisons done in a typical
sort, this is a win. And because the type checks needed in rich
comparison are much more expensive than a pointer check, it's a *big*
win.
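
In code terms, I'm picturing the pre-check as something like this
pure-Python sketch (the real thing is in C, of course):

def common_type(lst):
    # O(n) scan: return the shared exact type of all elements, or None
    if not lst:
        return None
    first = type(lst[0])
    return first if all(type(x) is first for x in lst) else None

If common_type(lst) is float, the sort can then use a comparison that
skips the generic rich-comparison machinery entirely.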

What I'm *not* quite clear on is why Python 3's change to reject
comparisons between unrelated types makes this optimisation possible.
Surely you have to check either way? It's not that it's a particularly
important question - if the optimisation works, it's not that big a
deal what triggered the insight. It's just that I'm not sure if
there's some other point that I've not properly understood.

Paul


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-13 Thread Paul Moore
On 13 October 2016 at 20:51, Random832  wrote:
> On Thu, Oct 13, 2016, at 15:46, Random832 wrote:
>> so,  under a similar 'transformation', "*foo for foo in bar" likewise
>> becomes "def f(): for foo in bar: yield from foo"
>>
>> bar = [(1, 2), (3, 4)]
>> (*(1, 2), *(3, 4)) == == tuple(f())
>> [*(1, 2), *(3, 4)] == == list(f())
>
>
> I accidentally hit ctrl-enter while copying and pasting, causing my
> message to go out while my example was less thorough than intended and
> containing syntax errors. It was intended to read as follows:
>
> ..."*foo for foo in bar" likewise becomes
>
> def f():
>     for foo in bar:
>         yield from foo
>
> a, b = (1, 2), (3, 4)
> bar = [a, b]
> (*a, *b) == (1, 2, 3, 4) == tuple(f()) # tuple(*foo for foo in bar)
> [*a, *b] == [1, 2, 3, 4] == list(f()) # [*foo for foo in bar]

I remain puzzled.

Given the well-documented and understood transformation:

[fn(x) for x in lst if cond]

translates to

result = []
for x in lst:
    if cond:
        result.append(fn(x))

please can you explain how to modify that translation rule to
incorporate the suggested syntax?

Personally, I'm not even sure any more that I can *describe* the
suggested syntax. Where in [fn(x) for x in lst if cond] is the *
allowed? fn(*x)? *fn(x)? Only as *x with a bare variable, but no
expression? Only in certain restricted types of construct which aren't
expressions but are some variation on an unpacking construct?

We've had a lot of examples. I think it's probably time for someone to
describe the precise syntax (as BNF, like the syntax in the Python
docs at 
https://docs.python.org/3.6/reference/expressions.html#displays-for-lists-sets-and-dictionaries
and following sections) and semantics (as an explanation of how to
rewrite any syntactically valid display as a loop). It'll have to be
done in the end, as part of any implementation, so why not now?

Paul


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-13 Thread Paul Moore
On 13 October 2016 at 21:40, Sjoerd Job Postmus  wrote:
> However, consider the following spelling:
>
> l = [from f(t) for t in iterable]
>
> To me, it does not seem far-fetched that this would mean:
>
> def gen():
>     for t in iterable:
>         yield from f(t)
> l = list(gen())

Thank you. This is the type of precise definition I was asking for in
my previous post (your timing was superb!)

I'm not sure I *like* the proposal, but I need to come up with some
reasonable justification for my feeling, whereas for previous
proposals the "I don't understand what you're suggesting" was the
overwhelming feeling, and stifled any genuine discussion of merits or
downsides.

Paul

PS I can counter a suggestion of using *f(t) rather than from f(t) in
the above, by saying that it adds yet another meaning to the already
heavily overloaded * symbol. The suggestion of "from" avoids this as
"from" only has a few meanings already. (You can agree or disagree
with my view, but at least we're debating the point objectively at
that point!)


Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-17 Thread Paul Moore
On 17 October 2016 at 20:35, Sven R. Kunze  wrote:
>  P.S. It's very artificial to assume users are unable to use 'from itertools
> import chain' to try to make chain() seem more cumbersome than it is.
>
> I am sorry but it is cumbersome.

Imports are a fundamental part of Python. How are they "cumbersome"?
Is it cumbersome to have to import sys to get access to argv? To
import re to use regular expressions? To import subprocess to run an
external program?

Importing the features you use (and having an extensive standard
library of tools you might want, but which don't warrant being built
into the language) is, to me, a basic feature of Python.

Certainly having to add an import statement is extra typing. But
terseness was *never* a feature of Python. In many ways, a resistance
to overly terse (I could say "Perl-like") constructs is one of the
defining features of the language - and certainly, it's one that drew
me to Python, and one that I value.

Paul


Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-17 Thread Paul Moore
On 17 October 2016 at 21:22, Random832  wrote:
> For a more concrete example:
>
> [*range(x) for x in range(4)]
> [*(),*(0,),*(0,1),*(0,1,2)]
> [0, 0, 1, 0, 1, 2]
>
> There is simply no way to get there by using flatten(range(4)). The only
> way flatten *without* a generator expression can serve the same use
> cases as this proposal is for comprehensions of the *exact* form [*x for
> x in y]. For all other cases you'd need list(flatten(...generator
> expression without star...)).

Do you have a real-world example of needing this?
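
For comparison, the result above is already expressible with existing
tools:

from itertools import chain

result = list(chain.from_iterable(range(x) for x in range(4)))
assert result == [0, 0, 1, 0, 1, 2]

so the question is what the new spelling buys us over that.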

Paul


Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-17 Thread Paul Moore
On 17 October 2016 at 21:30, Random832 <random...@fastmail.com> wrote:
> On Mon, Oct 17, 2016, at 16:12, Paul Moore wrote:
>> And finally, no-one has even *tried* to explain why we need a third
>> way of expressing this construction. Nick made this point, and
>> basically got told that his condition was too extreme. He essentially
>> got accused of constructing an impossible test. And yet it's an
>> entirely fair test, and one that's applied regularly to proposals -
>> and many *do* pass the test.
>
> As the one who made that accusation, my objection was specifically to
> the word "always" - which was emphasized - and which is something that I
> don't believe is actually a component of the test that is normally
> applied. His words, specifically, were "a compelling argument needs to
> be presented that the new spelling is *always* preferable to the
> existing ones"
>
> List comprehensions themselves aren't even always preferable to loops.

Sigh. And no-one else in this debate has ever used exaggerated language.

I have no idea if Nick would reject an argument that had any
exceptions at all, but I don't think it's unreasonable to ask that
people at least *try* to formulate an argument that demonstrates that
the two existing ways we have are inferior to the proposal. Stating
that you're not even willing to try is hardly productive.

Paul


Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-17 Thread Paul Moore
On 17 October 2016 at 18:32, Steven D'Aprano  wrote:
> On Mon, Oct 17, 2016 at 12:11:46PM -0400, Random832 wrote:
>
>> Honestly, it goes beyond just being "wrong". The repeated refusal to
>> even acknowledge any equivalence between [...x... for x in [a, b, c]]
>> and [...a..., ...b..., ...c...] truly makes it difficult for me to
>> accept some people's _sincerity_.
>
> While we're talking about people being insincere, how about if you take
> a look at your own comments? This "repeated refusal" that you accuse us
> (opponents of this proposal) of is more of a rhetorical fiction than an
> actual reality. Paul, David and I have all acknowledged the point you
> are trying to make. I won't speak for Paul or David, but speaking for
> myself, it isn't that I don't understand the point you're trying to
> make, but that I do not understand why you think that point is
> meaningful or desirable.

For my part:

1. I've acknowledged that equivalence. As well as the fact that the
proposal (specifically, as explained formally by Greg) is
understandable and a viable possible extension.
2. I don't find the "interpolation" equivalence a *good* way of
interpreting list comprehensions, any more than I think that loops
should be explained by demonstrating how to unroll them.
3. I've even explicitly revised my position on the proposal from -1 to
-0 (although I'm tending back towards -1, if I'm honest...).
4. Whether you choose to believe me or not, I've sincerely tried to
understand the proposal, but I pretty much had to insist on a formal
definition of syntax and semantics before I got an explanation that I
could follow.

However:

1. I'm tired of hearing that the syntax is "obvious". This whole
thread proves otherwise, and I've yet to hear anyone from the
"obvious" side of the debate acknowledge that.
2. Can someone summarise the *other* arguments for the proposal? I'm
genuinely struggling to recall what they are (assuming they exist). It
feels like I'm hearing nothing more than "it's obvious what this does,
it's obvious that it's needed and the people saying it isn't are
wrong". That may well not be the truth, but *it's the impression I'm
getting*. I've tried to take a step back and summarise my side of the
debate a couple of times now. I don't recall seeing anyone doing the
same from the other side (Greg's summarised the proposal, but I don't
recall anyone doing the same with the justification for it).
3. The fact is that the proposed behaviour was *specifically* blocked,
*precisely* because of strong concerns that it would cause readability
issues and only had "mild" support. I'm not hearing any reason to
change that decision (sure, there are a few people here offering
something stronger than "mild" support, but it's only a few voices,
and they are not addressing the readability concerns at all). There
was no suggestion in the PEP that this decision was expected to be
revisited later. Maybe there was an *intention* to do so, but the PEP
didn't state it. I'd suggest that this fact alone implies that the
people proposing this change need to write a new PEP for it, but
honestly I don't think the way the current discussion has gone
suggests that there's any chance of putting together a persuasive PEP,
much less a consensus decision.

And finally, no-one has even *tried* to explain why we need a third
way of expressing this construction. Nick made this point, and
basically got told that his condition was too extreme. He essentially
got accused of constructing an impossible test. And yet it's an
entirely fair test, and one that's applied regularly to proposals -
and many *do* pass the test. It's worth noting here that we have had
no real-world use cases, so the common approach of demonstrating real
code, and showing how the proposal improves it, is not available.
Also, there's no evidence that this is a common need, and so it's not
clear to what extent any sort of special language support is
warranted. We don't (as far as I know, and no-one's provided evidence
otherwise) see people routinely writing workarounds for this
construct. We don't hear of trainers saying that pupils routinely try
to do this, and are surprised when it doesn't work (I'm specifically
talking about students *deducing* this behaviour, not being asked if
they think it's reasonable once explained). These are all arguments
that have been used in the past to justify new syntax (and so reach
Nick's "bar").

And we've had a special-case function (flatten) proposed to cover the
most common cases (taking the approach of the 80-20 rule) - but the
only response to that proposal has been "but it doesn't cover
such-and-such a case". If it didn't cover a demonstrably common
real-world problem, that would be a different matter - but anyone can
construct cases that aren't covered by *any* given proposal. That
doesn't prove anything.

I don't see any signs of progress here. And I'm pretty much at the
point where I'm losing interest in having the same points repeated at
me 

Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-17 Thread Paul Moore
On 17 October 2016 at 21:43, Sven R. Kunze  wrote:
> The statement about "cumbersomeness" was specific to this whole issue. Of
> course, importing feature-rich pieces from the stdlib is really cool. It was
> more the missed ability to do the same with list comprehensions of what is
> possible with list displays today. List displays feature * without importing
> anything fancy from the stdlib.

In your other post you specifically mentioned
itertools.chain.from_iterable. I'd have to agree with you that this
specific name feels clumsy to me as well. But I'd argue for finding a
better name, not replacing the function with syntax :-)

Cheers,
Paul


Re: [Python-ideas] PEP: Distributing a Subset of the Standard Library

2016-11-29 Thread Paul Moore
On 28 November 2016 at 22:33, Steve Dower  wrote:
> Given that, this wouldn't necessarily need to be an executable file. The
> finder could locate a "foo.missing" file and raise ModuleNotFoundError with
> the contents of the file as the message. No need to allow/require any Python
> code at all, and no risk of polluting sys.modules.

I like this idea. Would it completely satisfy the original use case
for the proposal? (Or, to put it another way, is there any specific
need for arbitrary code execution in the missing.py file?)
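
Just to check I'm picturing the same mechanism, I'd expect it to look
something like this rough sketch (all names here are invented):

from importlib.abc import MetaPathFinder
from pathlib import Path

class MissingModuleFinder(MetaPathFinder):
    """If '<name>.missing' exists in the given directory, fail the
    import with that file's text as the message."""

    def __init__(self, stdlib_dir):
        self.stdlib_dir = Path(stdlib_dir)

    def find_spec(self, fullname, path=None, target=None):
        marker = self.stdlib_dir / (fullname + ".missing")
        if marker.exists():
            raise ModuleNotFoundError(marker.read_text().strip(),
                                      name=fullname)
        return None  # defer to the normal finders

# registered early on sys.meta_path by the distributor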

Paul


Re: [Python-ideas] Input characters in strings by decimals (Was: Proposal for default character representation)

2016-12-08 Thread Paul Moore
On 7 December 2016 at 23:52, Mikhail V  wrote:
> Proposal: I would want to have a possibility to input it *by decimals*:
>
> s = "first cyrillic letters: \{1040}\{1041}\{1042}"
> or:
> s = "first cyrillic letters: \(1040)\(1041)\(1042)"
>
> =
>
> This is more compact and seems not very contradictive with
> current Python escape characters in string literals.
> So backslash is a start of some escaping in most cases.
>
> For me most important is that in such way I would avoid
> any presence of hex numbers in strings, which I find very good
> for readability and for me it is very convenient since I use decimals
> for processing everywhere (and encourage everyone to do so).
>
> So this is my proposal, any comments on this are appreciated.

-1. We already have plenty of ways to specify characters in
strings[1], we don't need another.

If readability is what matters to you, and you (unlike many others)
consider hex to be unreadable, use the \N{...} form.
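
For the example above (decimal 1040 is U+0410), the existing spellings
are:

s1 = "first cyrillic letters: \u0410\u0411\u0412"
s2 = ("first cyrillic letters: "
      "\N{CYRILLIC CAPITAL LETTER A}"
      "\N{CYRILLIC CAPITAL LETTER BE}"
      "\N{CYRILLIC CAPITAL LETTER VE}")
s3 = "first cyrillic letters: " + "".join(chr(c) for c in (1040, 1041, 1042))
assert s1 == s2 == s3

and the \N{...} form avoids both hex and decimal entirely.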

Paul

[1] Including (ab)using f-strings to hide the use of chr().


Re: [Python-ideas] Things that won't change (proposed PEP)

2017-01-12 Thread Paul Moore
On 12 January 2017 at 04:39, Mikhail V  wrote:
> And my first thought is about "will not change". Like: never ever change or
> like: will not change in 10 years or 20 years.

Like: "Please don't waste people's time trying to start a discussion
about them". In 10 or 20 years, if opinions have changed, the PEP can
be updated.

Paul


Re: [Python-ideas] Settable defaulting to decimal instead of float

2017-01-12 Thread Paul Moore
On 12 January 2017 at 10:28, Victor Stinner  wrote:
> George requested this feature on the bug tracker:
> http://bugs.python.org/issue29223
>
> George was asked to start a discusson on this list. I posted the
> following comment before closing the issue:
>
> You are not the first one to propose the idea.

OK, but without additional detail (for example, how would the proposed
flag work? If the main module imports module A, would float
literals in A be decimal or binary? Both could be what the user wants)
it's hard to comment. And as you say, most of this has been discussed
before, so I'd like to see references back to the previous discussions
in any proposal, with explanations of how the new proposal addresses
the objections raised previously.

Paul


Re: [Python-ideas] Settable defaulting to decimal instead of float

2017-01-12 Thread Paul Moore
On 12 January 2017 at 15:34, Victor Stinner  wrote:
> 2017-01-12 13:13 GMT+01:00 Stephan Houben :
>> Something like:
>> from __syntax__ import decimal_literal
>
> IMHO you can already implement that with a third party library, see for 
> example:
> https://github.com/lihaoyi/macropy
>
> It also reminds me my PEP 511 which would open the gate for any kind
> of Python preprocessor :-)
> https://www.python.org/dev/peps/pep-0511/

PEP 302 (import hooks) pretty much did that years ago :-) Just write
your own processor to translate a new filetype into bytecode, and
register it as an import hook. There was a web framework that did that
for templates not long after PEP 302 got implemented (can't recall the
name any more).

Paul


Re: [Python-ideas] PEP: Distributing a Subset of the Standard Library

2016-11-29 Thread Paul Moore
On 29 November 2016 at 10:51, Wolfgang Maier
<wolfgang.ma...@biologie.uni-freiburg.de> wrote:
> On 29.11.2016 10:39, Paul Moore wrote:
>>
>> On 28 November 2016 at 22:33, Steve Dower <steve.do...@python.org> wrote:
>>>
>>> Given that, this wouldn't necessarily need to be an executable file. The
>>> finder could locate a "foo.missing" file and raise ModuleNotFoundError
>>> with
>>> the contents of the file as the message. No need to allow/require any
>>> Python
>>> code at all, and no risk of polluting sys.modules.
>>
>>
>> I like this idea. Would it completely satisfy the original use case
>> for the proposal? (Or, to put it another way, is there any specific
>> need for arbitrary code execution in the missing.py file?)
>>
>
> The only thing that I could think of so far would be cross-platform
> .missing.py files that query the system (e.g. using the platform module) to
> generate adequate messages for the specific platform or distro. E.g.,
> correctly recommend to use dnf install or yum install or apt install, etc.

Yeah. I'd like to see a genuine example of how that would be used in
practice, otherwise I'd be inclined to suggest YAGNI. (Particularly
given that this PEP is simply a standardised means of vendor
customisation - for special cases, vendors obviously still have the
capability to patch or override standard behaviour in any way they
like).

Paul


Re: [Python-ideas] Better error messages [was: (no subject)]

2016-11-30 Thread Paul Moore
On 30 November 2016 at 02:14, Stephen J. Turnbull
 wrote:
> How about:
>
> class Blog:
>     pass
>
> blog = get_blog_for_date(someday)
>
> logn = log(blog.size)
>
> NameError: Python doesn't recognize the function "log".  Did you
> mean "Blog"?
>
> Wouldn't
>
> NameError: Python doesn't recognize the name "log".  Perhaps
> you need to import the "math" module?

... and of course up until this example, I'd assumed you were talking
about the log function from the logging module :-)

I'm a strong +1 on better error messages, but there's always a risk
with heuristics that the resulting messages end up worse, not better.

Maybe keep it simpler:

NameError: Python doesn't recognize the name "log". Maybe you
misspelled the name, or did you mean to import the function from a
module?

and don't try to guess the user's intent.
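
Anyone who wants to experiment with wordings along those lines can do
so today with a sys.excepthook wrapper - this is just a playground
sketch, not what's being proposed for CPython itself:

import sys

def friendlier_name_errors(exc_type, exc, tb):
    sys.__excepthook__(exc_type, exc, tb)   # print the normal traceback first
    if issubclass(exc_type, NameError):
        print("Hint: maybe you misspelled the name, or did you mean to "
              "import it from a module?", file=sys.stderr)

sys.excepthook = friendlier_name_errors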

Paul


Re: [Python-ideas] incremental hashing in __hash__

2017-01-05 Thread Paul Moore
On 5 January 2017 at 13:28, Neil Girdhar  wrote:
> The point is that the OP doesn't want to write his own hash function, but
> wants Python to provide a standard way of hashing an iterable.  Today, the
> standard way is to convert to tuple and call hash on that.  That may not be
> efficient. FWIW from a style perspective, I agree with OP.

The debate here regarding tuple/frozenset indicates that there may not
be a "standard way" of hashing an iterable (should order matter?).
Although I agree that assuming order matters is a reasonable
assumption to make in the absence of any better information.

Hashing is low enough level that providing helpers in the stdlib is
not unreasonable. It's not obvious (to me, at least) that it's a
common enough need to warrant it, though. Do we have any information
on how often people implement their own __hash__, or how often
hash(tuple(my_iterable)) would be an acceptable hash, except for the
cost of creating the tuple? The OP's request is the only time this has
come up as a requirement, to my knowledge. Hence my suggestion to copy
the tuple implementation, modify it to work with general iterables,
and publish it as a 3rd party module - its usage might give us an idea
of how often this need arises. (The other option would be for someone
to do some analysis of published code).
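
The sort of thing I mean is no more than this sketch - deliberately
*not* claiming to match tuple's actual algorithm or hash values:

def hash_from_iterable(iterable):
    # Order-sensitive hash computed incrementally, without building
    # an intermediate tuple.
    h = 0x345678
    for item in iterable:
        h = ((h ^ hash(item)) * 1000003) & 0xFFFFFFFFFFFFFFFF
    return h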

Assuming it is a sufficiently useful primitive to add, then we can
debate naming. But I'd prefer it to be named in such a way that it
makes it clear that it's a low-level helper for people writing their
own __hash__ function, and not some sort of variant of hashing (which
hash.from_iterable implies to me).

Paul


Re: [Python-ideas] incremental hashing in __hash__

2017-01-06 Thread Paul Moore
On 6 January 2017 at 07:26, Neil Girdhar  wrote:
> On Fri, Jan 6, 2017 at 2:07 AM Stephen J. Turnbull
>  wrote:
>>
>> Neil Girdhar writes:
>>
>>  > I don't understand this?  How is providing a default method in an
>>  > abstract base class a pessimization?  If it happens to be slower
>>  > than the code in the current methods, it can still be overridden.
>>
>> How often will people override until it's bitten them?  How many
>> people will not even notice until they've lost business due to slow
>> response?  If you don't have a default, that's much more obvious.
>> Note that if there is a default, the collections are "large", and
>> equality comparisons are "rare", this could be a substantial overhead.
>
>
> I still don't understand what you're talking about here.  You're saying that
> we shouldn't provide a __hash__ in case the default hash happens to be
> slower than what the user wants and so by not providing it, we force the
> user to write a fast one?  Doesn't that argument apply to all methods
> provided by abcs?

The point here is that ABCs should provide defaults for methods where
there is an *obvious* default. It's not at all clear that there's an
obvious default for __hash__.

Unless I missed a revision of your proposal, what you suggested was:

> A better option is to add collections.abc.ImmutableIterable that derives from 
> Iterable and provides __hash__.

So what classes would derive from ImmutableIterable? Frozenset? A
user-defined frozendict? There's no "obvious" default that would work
for both those cases. And that's before we even get to the question of
whether the default has the right performance characteristics, which
is highly application-dependent.

It's not clear to me if you expect ImmutableIterable to provide
anything other than a default implementation of hash. If not, then how
is making it an ABC any better than simply providing a helper function
that people can use in their own __hash__ implementation? That would
make it explicit what people are doing, and avoid any tendency towards
people thinking they "should" inherit from ImmutableIterable and yet
needing to override the only method it provides.

Paul


Re: [Python-ideas] VT100 style escape codes in Windows

2016-12-28 Thread Paul Moore
Would this only apply to recent versions of Windows? (IIRC, the VT100
support is Win10 only). If so, I'd be concerned about scripts that
worked on *some* Windows versions but not others. And in particular,
about scripts written on Unix using raw VT codes rather than using a
portable solution like colorama.

At the point where we can comfortably assume the majority of users are
using a version of Windows that supports VT codes, I'd be OK with it
being the default, but until then I'd prefer it were an opt-in option.
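
For context, "opting in" today means a one-off console-mode tweak,
roughly this (Windows 10 or later only; an untested ctypes sketch):

import ctypes

def enable_vt_mode():
    # ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004, STD_OUTPUT_HANDLE = -11
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.GetStdHandle(-11)
    mode = ctypes.c_uint32()
    if not kernel32.GetConsoleMode(handle, ctypes.byref(mode)):
        return False
    return bool(kernel32.SetConsoleMode(handle, mode.value | 0x0004))
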
Paul

On 28 December 2016 at 23:00, Joseph Hackman  wrote:
> Hey All!
>
> I propose that Windows CPython flip the bit for VT100 support (colors and
> whatnot) for the stdout/stderr streams at startup time.
>
> I believe this behavior is worthwhile because ANSI escape codes are standard
> across most of Python's install base, and the alternative for Windows (using
> ctypes/win32 to alter the colors) is non-intuitive and well beyond the scope
> of most users.
>
> Under Linux/Mac, the terminal always supports what it can, and it's up to
> the application to verify escape codes are supported. Under Windows,
> applications (Python) must specifically request that escape codes be
> enabled. The flag lasts for the duration of the application, and must be
> flipped on every launch. It seems many of the built-in windows commands now
> operate in this mode.
>
> This change would not impede tools that use the win32 APIs for the console
> (such as colorama), and is supported in windows 2000 and up.
>
> The only good alternatives I can see is adding colorized/special output as a
> proper python feature that actually checks using the terminal information in
> *nix and win32.
>
> For more info, please see the issue: http://bugs.python.org/issue29059
>
> Cheers,
> Joseph
>
>
>


Re: [Python-ideas] incremental hashing in __hash__

2017-01-05 Thread Paul Moore
On 5 January 2017 at 00:31, Steven D'Aprano  wrote:
> This is a good point. Until now, I've been assuming that
> hash.from_iterable should consider order. But frozenset shows us that
> sometimes the hash should *not* consider order.
>
> This hints that perhaps the hash.from_iterable() should have its own
> optional dunder method. Or maybe we need two functions: an ordered
> version and an unordered version.
>
> Hmmm... just tossing out a wild idea here... let's get rid of the dunder
> method part of your suggestion, and add new public class methods to
> tuple and frozenset:
>
> tuple.hash_from_iter(iterable)
> frozenset.hash_from_iter(iterable)
>
>
> That gets rid of all the objections about backwards compatibility, since
> these are new methods. They're not dunder names, so there are no
> objections to being used as part of the public API.
>
> A possible objection is the question, is this functionality *actually*
> important enough to bother?
>
> Another possible objection: are these methods part of the sequence/set
> API? If not, do they really belong on the tuple/frozenset? Maybe they
> belong elsewhere?

At this point I'd be inclined to say that a 3rd party hashing_utils
module would be a reasonable place to thrash out these design
decisions before committing to a permanent design in the stdlib. The
popularity of such a module would also give a level of indication as
to whether this is an important optimisation in practice.

Paul


Re: [Python-ideas] Adding an 'errors' argument to print

2017-03-24 Thread Paul Moore
On 24 March 2017 at 16:37, Victor Stinner  wrote:
> *If* we change something, I would prefer to modify sys.stdout. The
> following issue proposes to add
> sys.stdout.set_encoding(errors='replace'):
> http://bugs.python.org/issue15216

I thought I recalled seeing something like that discussed somewhere. I
agree that this is a better approach (even though it's not as granular
as being able to specify on an individual print statement).

> You can already set the PYTHONIOENCODING environment variable to
> ":replace" to use "replace" on sys.stdout (and sys.stderr).

That's something I didn't know. Thanks for the pointer!

Paul


Re: [Python-ideas] What about regexp string litterals : re".*" ?

2017-03-28 Thread Paul Moore
On 28 March 2017 at 08:54, Simon D.  wrote:
> I believe that the u"" notation in Python 2.7 is defined by while
> importing the unicode_litterals module.

That's not true. The u"..." syntax is part of the language. from
__future__ import unicode_literals is something completely different.

> Each regexp lib could provide its instanciation of regexp litteral
> notation.

The Python language has no way of doing that - user (or library)
defined literals are not possible.

> And if only the default one does, it would still be won for the
> beginers, and the majority of persons using the stdlib.

How? You've yet to prove that having a regex literal form is an
improvement over re.compile(r'put your regex here'). You've asserted
it, but that's a matter of opinion. We'd need evidence of real-life
code that was clearly improved by the existence of your proposed
construct.
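
For reference, the baseline any new literal form has to beat is simply:

import re

# compile once at module level, reuse the pattern object everywhere
IDENTIFIER_RE = re.compile(r"[A-Za-z_]\w*")

print(IDENTIFIER_RE.findall("foo = bar_2 + 1"))   # ['foo', 'bar_2']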

Paul


Re: [Python-ideas] Add pathlib.Path.write_json and pathlib.Path.read_json

2017-03-27 Thread Paul Moore
On 27 March 2017 at 15:40, Ram Rachum  wrote:
> Another idea: Maybe make json.load and json.dump support Path objects?

If they currently supported filenames, I'd say that's a reasonable
extension. Given that they don't, it still seems like more effort than
it's worth to save a few characters

with path.open('w') as f: json.dump(obj, f)
with path.open() as f: obj = json.load(f)

Paul


Re: [Python-ideas] Add pathlib.Path.write_json and pathlib.Path.read_json

2017-03-27 Thread Paul Moore
On 27 March 2017 at 15:48, Eric V. Smith  wrote:
> On 3/27/17 10:40 AM, Ram Rachum wrote:
>>
>> Another idea: Maybe make json.load and json.dump support Path objects?
>
>
> json.dump requires open file objects, not strings or Paths representing
> filenames.
>
> But does this not already do what you want:
>
> Path('foo.json').write_text(json.dumps(obj))
> ?

Indeed. There have now been a few posts quoting ways of reading and
writing JSON, all of which are pretty short (if that matters). Do we
*really* need another way?

Paul


Re: [Python-ideas] Add pathlib.Path.write_json and pathlib.Path.read_json

2017-03-27 Thread Paul Moore
On 27 March 2017 at 17:43, Bruce Leban  wrote:
> the ability to read one json object from the input rather than reading the
> entire input

Is this a well-defined idea? From a quick read of the JSON spec (which
is remarkably short on details of how JSON is stored in files, etc)
the only reference I can see is to a "JSON text" which is a JSON
representation of a single value. There's nothing describing how
multiple values would be stored in the same file/transmitted in the
same stream. It's not unreasonable to assume "read one object, then
read another" but without an analysis of the grammar, it's not 100%
clear if the grammar supports that (you sort of have to assume that
when you hit "the end of the object" you skip some whitespace then
start on the next - but the spec doesn't say anything like that).
Alternatively, it's just as reasonable to assume that
json.load/json.loads expect to be passed a single "JSON text" as
defined by the spec.
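
FWIW, json.JSONDecoder.raw_decode does give you a building block for
"read one value, then another" on a string - it returns the decoded
value plus the index where it stopped. A sketch:

import json

def iter_json_values(text):
    # Yield each JSON value from a string of concatenated JSON texts,
    # separated by optional whitespace.
    decoder = json.JSONDecoder()
    idx, end = 0, len(text)
    while idx < end:
        while idx < end and text[idx].isspace():
            idx += 1
        if idx >= end:
            break
        value, idx = decoder.raw_decode(text, idx)
        yield value

# list(iter_json_values('{"a": 1} [2, 3] "four"')) -> [{'a': 1}, [2, 3], 'four']

But that's an implementation detail, not an answer to what the spec
intends.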

If the spec was clear on how multiple objects in a single stream
should be handled, then yes the json module should support that. But
without anything explicit in the spec, it's not as obvious. What do
other languages do?

Paul


Re: [Python-ideas] Proposal: Query language extension to Python (PythonQL)

2017-03-27 Thread Paul Moore
On 27 March 2017 at 10:54, Brice PARENT  wrote:
> I get it, but it's more a matter of perception. To me, the version I
> described is just Python, while yours is Python + specific syntax. As this
> syntax is only used in PyQL sub-language, it's not really Python any more...

... which is why I suspect that this discussion would be better
expressed as a suggestion that Python provide better support for
domain specific languages like the one PythonQL offers. In that
context, the "extended comprehension" format *would* be Python,
specifically it would simply be a DSL embedded in Python using
Python's standard features for doing that. Of course, that's just a
re-framing of the perception, and the people who don't like
sub-languages will be just as uncomfortable with DSLs.

However, it does put this request into the context of DSL support,
which is something that many languages provide, to a greater or lesser
extent. For Python, Guido's traditionally been against allowing the
language to be mutable in the way that DSLs permit, so in the first
instance it's likely that the PythonQL proposal will face a lot of
resistance. It's possible that PythonQL could provide a use case that
shows the benefits of allowing DSLs to such an extent that Guido
changes his mind, but that's not yet proven (and it's not really
something that's been argued here yet). And it does change the
discussion from being about who prefers which syntax, to being about
where we want the language to go in terms of DSLs.

Personally, I quite like limited DSL support (things like allowing
no-parenthesis function calls can make it possible to write code that
uses functions as if they were keywords). But it does impose a burden
on people supporting the code because they have to understand the
non-standard syntax. So I'm happy with Python's current choice to not
go down that route, even though I do find it occasionally limiting.

If I needed PythonQL features, I'd personally find Brice's class-based
approach quite readable/acceptable. I find PythonQL form nice also,
but not enough of an advantage to warrant all the extra
keywords/syntax etc.

Paul


Re: [Python-ideas] Proposal: Query language extension to Python (PythonQL)

2017-03-25 Thread Paul Moore
On 25 March 2017 at 11:24, Pavel Velikhov  wrote:
> No, the current solution is temporary because we just don’t have the
> manpower to
> implement the full thing: a real system that will rewrite parts of PythonQL
> queries and
> ship them to underlying databases. We need a real query optimizer and smart
> wrappers
> for this purpose. But we’ll build one of these for demo purposes soon
> (either a Spark
> wrapper or a PostgreSQL wrapper).

One thought, if you're lacking in manpower now, then proposing
inclusion into core Python means that the core dev team will be taking
on an additional chunk of code that is already under-resourced. That
rings alarm bells for me - how would you imagine the work needed to
merge PythonQL into the core Python grammar would be resourced?

I should say that in practice, I think that the solution is relatively
niche, and overlaps quite significantly with existing Python features,
so I don't really see a compelling case for inclusion. The parallel
with C# and LINQ is interesting here - LINQ is a pretty cool
technology, but I don't see it in widespread use in general-purpose C#
projects (disclaimer: I don't get to see much C# code, so my
experience is limited).

Paul


Re: [Python-ideas] Add pathlib.Path.write_json and pathlib.Path.read_json

2017-03-27 Thread Paul Moore
On 27 March 2017 at 15:33, Donald Stufft  wrote:
> What do you think about adding methods pathlib.Path.write_json and
> pathlib.Path.read_json , similar to write_text, write_bytes, read_text,
> read_bytes?
>
>
>
> -1, I also think that write_* and read_* were mistakes to begin with.

Text is (much) more general-use than JSON.
Paul


Re: [Python-ideas] Add pathlib.Path.write_json and pathlib.Path.read_json

2017-03-27 Thread Paul Moore
On 27 March 2017 at 13:50, Ram Rachum  wrote:
> This would make writing / reading JSON to a file a one liner instead of a
> two-line with clause.

That hardly seems like a significant benefit...

Paul


Re: [Python-ideas] What about regexp string litterals : re".*" ?

2017-03-31 Thread Paul Moore
On 31 March 2017 at 09:20, Stephan Houben  wrote:
> FWIW, I also strongly prefer the Verbal Expression style and consider
> "normal" regular expressions to become quickly unreadable and
> unmaintainable.

Do you publish your code widely? What's the view of 3rd party users of
your code? Until this thread, I'd never even heard of the Verbal
Expression style, and I read a *lot* of open source Python code. While
it's purely anecdotal, that suggests to me that the style isn't
particularly commonly used.

(OTOH, there's also a lot less use of REs in Python code than in other
languages. Much string manipulation in Python avoids using regular
expressions at all, in my experience. I think that's a good thing - use
simpler tools when appropriate and keep the power tools for the hard
cases where they justify their complexity).

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Adding an 'errors' argument to print

2017-03-24 Thread Paul Moore
On 24 March 2017 at 15:41, Ryan Gonzalez  wrote:
> Recently, I was working on a Windows GUI application that ends up running
> ffmpeg, and I wanted to see the command that was being run. However, the
> file name had a Unicode character in it (it's a Sawano song), and when I
> tried to print it to the console, it crashed during the encode/decode. (The
> encoding used in cmd doesn't support Unicode characters.)
>
> The workaround was to do:
>
>
>     print(mystring.encode(sys.stdout.encoding,
>                           errors='replace').decode(sys.stdout.encoding))
>
> Not fun, especially since this was *just* a debug print.
>
> The proposal: why not add an 'errors' argument to print? That way, I
> could've just done:
>
>     print(mystring, errors='replace')
>
> without having to worry about it crashing.

When I've hit issues like this before, I've written a helper function:

def sanitise(str, enc):
    """Ensure that str can be encoded in encoding enc"""
    return str.encode(enc, errors='replace').decode(enc)

An errors argument to print would be very similar, but would only
apply to the print function, whereas I've used my sanitise function in
other situations as well.
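
For example, the debug print from the original report then just becomes
(using mystring as the stand-in name from that example):

    import sys
    print(sanitise(mystring, sys.stdout.encoding))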

I understand the attraction of a dedicated "just print the best
representation you can" argument to print, but I'm not sure it's a
common enough need to be worth adding like this.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] "import me" to display some summary of the current python installation

2017-04-12 Thread Paul Moore
On 12 April 2017 at 14:35, Kamal Mustafa  wrote:
> Never mind. site._script() as pointed out by Wes Turner is what I need:-
>
> Python 3.4.2 (default, Oct  8 2014, 10:45:20)
> [GCC 4.9.1] on linux
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import site
> >>> site._script()
> sys.path = [
> '',
> '/usr/lib/python3.4',
> '/usr/lib/python3.4/plat-x86_64-linux-gnu',
> '/usr/lib/python3.4/lib-dynload',
> '/usr/local/lib/python3.4/dist-packages',
> '/usr/lib/python3/dist-packages',
> ]
> USER_BASE: '/home/kamal/.local' (exists)
> USER_SITE: '/home/kamal/.local/lib/python3.4/site-packages' (doesn't exist)
> ENABLE_USER_SITE: True

Unless you have to get this information from within the Python
interactive prompt, you can just run "python -m site" from the shell
prompt to get the same information.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Thread-safe generators

2017-04-16 Thread Paul Moore
On 15 April 2017 at 10:45, Nick Coghlan  wrote:
> So I'd be opposed to trying to make generator objects natively thread
> aware - as Stephen notes, the GIL is an implementation detail of
> CPython, so it isn't OK to rely on it when defining changes to
> language level semantics (in this case, whether or not it's OK to have
> multiple threads all calling the same generator without some form of
> external locking).
>
> However, it may make sense to explore possible options for offering a
> queue.AutoQueue type, where the queue always has a defined maximum
> size (defaulting to 1), disallows explicit calls to put(), and
> automatically populates itself based on an iterator supplied to the
> constructors. Once the input iterator raises StopIteration, then the
> queue will start reporting itself as being empty.

+1 A generator that can have values pulled from it on different
threads sounds like a queue to me, so the AutoQueue class that wraps a
generator seems like a natural abstraction to work with. It also means
that the cost for thread safety is only paid by those applications
that need it.
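
To make that concrete, here is a rough sketch of what such a wrapper
could look like - the name AutoQueue and all of the details here are
illustrative assumptions, not a worked-out API:

    import queue
    import threading

    _DONE = object()   # sentinel marking iterator exhaustion

    class AutoQueue:
        """Pull-only, thread-safe view of an iterator (sketch only)."""

        def __init__(self, iterable, maxsize=1):
            self._queue = queue.Queue(maxsize=maxsize)
            filler = threading.Thread(
                target=self._fill, args=(iter(iterable),), daemon=True)
            filler.start()

        def _fill(self, it):
            for item in it:
                self._queue.put(item)   # only the filler thread ever puts
            self._queue.put(_DONE)

        def __iter__(self):
            return self

        def __next__(self):
            item = self._queue.get()
            if item is _DONE:
                self._queue.put(_DONE)  # let other consumers see the end too
                raise StopIteration
            return item

Any number of consumer threads can then simply iterate over the same
AutoQueue, and all of the locking lives in one place.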

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] get() method for list and tuples

2017-03-05 Thread Paul Moore
On 5 March 2017 at 19:13, Ed Kellett  wrote:
>> I think we're going to have to just disagree. You won't convince me
>> it's worth adding list.get unless you can demonstrate some *existing*
>> costs that would be removed by adding list.get, and showing that they
>> are greater than the costs of adding list.get (search this list if you
>> don't know what those costs are - I'm not going to repeat them again,
>> but they are *not* trivial).
>
>
> They seem to be "it'd need to be added to Sequence too" and "it would mess
> with code that checks for a .get method to determine whether something is a
> mapping". It's easily implementable in Sequence as a mixin method, so I'm
> prepared to call that trivial, and I'm somewhat sceptical that the latter
> code does—let alone should—exist.
>

You didn't seem to find the post(s) I referred to. I did a search for
you. Here's one of the ones I was talking about -
https://mail.python.org/pipermail/python-ideas/2017-February/044807.html

You need to present sufficient benefits for list.get to justify all of
the costs discussed there - or at least show some understanding of
those costs and an appreciation that you're asking *someone* to pay
those costs, if you expect a proposal to add *anything* to the Python
language to be taken seriously.

But I quit at this point - you seem intent on not appreciating the
other sides of this argument, so there's not really much point
continuing.
Paul

PS And yes, I do appreciate your point here - a get method on lists
may be useful. And helpers (if you don't name them well, for instance)
aren't always the best solution. But I've never yet seen *any* code
that would be improved by using a list.get method, so although I
understand the argument in theory, I don't see the practical benefits.
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/

Re: [Python-ideas] get() method for list and tuples

2017-03-05 Thread Paul Moore
On 5 March 2017 at 13:03, Ed Kellett  wrote:
>
> No. I'm asking: if list.get did exist, are there any cases (compatibility
> with old versions aside) where list.get's semantics would be applicable, but
> one of the alternatives would be the better choice?

Self-evidently no. But what does that prove? That we should implement
list.get? You could use the identical argument for *anything*. There
needs to be another reason for implementing it.

>> "writing a helper function" is a generally
>> useful idiom that works for many, many things, but list.get only
>> solves a single problem and every other such problem would need its
>> own separate language change.
>
> Custom helper functions can obviously accomplish anything in any language.
> If we had to choose between def: and list.get, I'd obviously opt for the
> former.
>
>> The disadvantage that you have to write
>> the helper is trivial, because it's only a few lines of simple code:
>
> I don't think the size of a helper function is relevant to how much of a
> disadvantage it is. Most built-in list methods are trivial to implement in
> Python, but I'm glad not to have to.

But you have yet to explain why you'd be glad not to write a helper
for list.get, in any terms that don't simply boil down to "someone
else did the work for me".

I think we're going to have to just disagree. You won't convince me
it's worth adding list.get unless you can demonstrate some *existing*
costs that would be removed by adding list.get, and showing that they
are greater than the costs of adding list.get (search this list if you
don't know what those costs are - I'm not going to repeat them again,
but they are *not* trivial). And I don't seem to be able to convince
you that writing a helper function is a reasonable approach.
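
For reference, the sort of helper I keep talking about is nothing more
than something like this (name and default chosen arbitrarily):

    def list_get(lst, index, default=None):
        """Return lst[index], or default if the index is out of range."""
        try:
            return lst[index]
        except IndexError:
            return default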

Paul.
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Pseudo methods

2017-08-04 Thread Paul Moore
On 4 August 2017 at 08:39, Paul Laos  wrote:
> Hi folks
> I was thinking about how a function sometimes acts on classes, and
> behaves very much like a method. Adding new methods to existing
> classes is currently somewhat difficult, and having pseudo methods
> would make that easier.

Adding new methods to classes is deliberately (somewhat) difficult, as
it makes it harder to locate the definition of a method. If you need
to see the code for a method, you'd expect to look in the class
definition. Making it common for people to put method definitions
outside the class definition harms supportability by breaking that
assumption.

> Code example: (The syntax can most likely be improved upon)
> def has_vowels(self: str):
>     for vowel in ["a", "e", "i", "o", "u"]:
>         if vowel in self: return True
>
> This allows one to write `string.has_vowels()` instead of
> `has_vowels(string)`, which would make it easier to read,

That's very much a subjective view. Personally, I don't see
"string.has_vowels()" as being any easier to read - except in the
sense that it tells me that I can find the definition of has_vowels in
the class definition of str (and I can find its documentation in the
documentation of the str type). And your proposal removes this
advantage!

> and would make it easier to add
> functionality to existing classes, without having to extend them. This would
> be useful for builtins or imported libraries, so one can fill in "missing"
> methods.

This is a common technique in other languages like Ruby, but is
considered specialised and somewhat of an advanced technique
(monkeypatching) in Python. As you say yourself, the syntax will make
it *easier* to do this - it's already possible, so the change doesn't
add any new capabilities. Adding new syntax to the language typically
needs a much stronger justification (either in terms of enabling
fundamentally new techniques, or providing a significantly more
natural spelling of something that's widely used and acknowledged as a
common programming idiom).
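
For completeness, here is everything the proposal would be making
"easier" - attaching a function to an existing class is already
possible today for user-defined classes (though not for built-ins such
as str), e.g.:

    class Greeting:
        def __init__(self, text):
            self.text = text

    def shout(self):
        return self.text.upper() + "!"

    Greeting.shout = shout                 # the "pseudo method", today
    print(Greeting("hello").shout())       # prints: HELLO!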

Sorry, but I'm -1 on this change. It doesn't let people do anything
they can't do now, on the contrary it makes it simpler to use a
technique which has readability and supportability problems, which as
a result will mean that people will be inclined to use the approach
without properly considering the consequences.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Generator syntax hooks?

2017-08-10 Thread Paul Moore
On 10 August 2017 at 14:42, Steven D'Aprano  wrote:
> I don't think it is confusing. Regardless of the implementation, the
> meaning of:
>
> [expression for x in sequence while condition]
>
> should (I believe) be obvious to anyone who already groks comprehension
> syntax. The mapping to a for-loop is admittedly a tad more complex:
>
> result = []
> for x in sequence:
>     if not condition: break
>     result.append(expression)
>
> but I'm yet to meet anyone who routinely and regularly reads
> comprehensions by converting them to for loops like that. And if they
> did, all they need do is mentally map "while condition" to "if not
> condition: break" and it should all Just Work™.

The hard part is the interaction between if and while.

Consider (expr for var in seq if cond1 while cond2):

This means:

for var in seq:
    if cond1:
        if not cond2: break
        yield expr

Note that unlike all other comprehension clauses (for and if) while
doesn't introduce a new level of nesting. That's an inconsistency, and
while it's minor, it would need clarifying (my original draft of this
email was a mess, because I misinterpreted how if and while would
interact, precisely over this point). Also, there's a potential issue
here - consider

[expr for var in even_numbers() if is_odd(var) while var < 100]

This is an infinite loop, even though it has a finite termination
condition (var < 100), because we only test the termination condition
if var is odd, which it never will be.

Obviously, this is a contrived example. And certainly "don't do that,
then" is a valid response. But my instinct is that people are going to
get this wrong - *especially* in a maintenance environment. That
example could have started off being "for var in count(0)" and then
someone realised they could "optimise" it by omitting odd numbers,
introducing the bug in the process. (And I'm sure real life code could
come up with much subtler examples ;-))
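
(For reference, the terminating behaviour presumably intended above can
already be written explicitly with itertools.takewhile - stand-in
definitions added here for the example's hypothetical helpers:

    from itertools import count, takewhile

    def even_numbers():                  # stand-in for the example's helper
        return (n for n in count(0) if n % 2 == 0)

    def is_odd(n):                       # likewise a stand-in
        return n % 2 == 1

    # Terminates (with an empty result) because takewhile sees *every*
    # value from even_numbers() and stops at the first one reaching 100.
    result = [x for x in takewhile(lambda v: v < 100, even_numbers())
              if is_odd(x)]

which at least makes the termination test explicit.)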

Overall, I agree with Steven's point. It seems pretty obvious what the
intention is, and while it's probably possible to construct examples
that are somewhat unclear,

1. The mechanical rule gives an explicit meaning
2. People shouldn't be writing such complex comprehensions, so if the
rule doesn't give what they expect, they can always rewrite the code
with an explicit (and clearer) loop.

But while I think this says that the above interpretation of while is
the only sensible one, and in general other approaches are unlikely to
be as natural, I *don't* think that it unequivocally says that
allowing while is a good thing. It may still be better to omit it, and
force people to state their intent explicitly (albeit a bit more
verbosely).

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Generator syntax hooks?

2017-08-11 Thread Paul Moore
On 11 August 2017 at 05:49, Steven D'Aprano <st...@pearwood.info> wrote:
> On Thu, Aug 10, 2017 at 01:25:24PM -0700, Chris Barker wrote:
>> On Thu, Aug 10, 2017 at 8:39 AM, Paul Moore <p.f.mo...@gmail.com> wrote:
>>
>>
>> >  Also, there's a potential issue
>> > here - consider
>> >
>> > [expr for var in even_numbers() if is_odd(var) while var < 100]
>> >
>> > This is an infinite loop, even though it has a finite termination
>> > condition (var < 100), because we only test the termination condition
>> > if var is odd, which it never will be.
>
> I'm not sure why Paul thinks this is an issue. There are plenty of ways
> to accidentally write an infinite loop in a comprehension, or a for
> loop, already:

Mostly because I work in a support and maintenance environment, where
we routinely see code that *originally* made sense, but which was over
time modified in ways that break things - usually precisely because
coders who in theory understand how to write such things correctly,
end up not taking the time to fully understand the constructs they are
modifying. Of course that's wrong, but it's sadly all too common, and
for that reason I'm always wary of constructs that need thinking
through carefully to understand the implications.

Nick's original

{x for x in itertools.count(0) if 1000 <= x while x < 1000000}

was like that. It was *sort of* obvious that it meant "numbers between
1_000 and 1_000_000", but the interaction between "if" and "while"
wasn't clear to me. If I were asked to rush in a change to only pick
odd numbers,

{x for x in itertools.count(0) if 1000 <= x and is_odd(x) while x < 1000000}

seems right to me, but quick - what about edge cases? It's not that I
can't get it right, nor is it that I can't test that I *did* get it
right, just that this sort of "quick fix" is very common in the sort
of real-world coding I see regularly, and a huge advantage of Python
is that it's hard to get in a situation where the obvious guess is
wrong.

Don't get me wrong - I'm not arguing that the sky is falling. Just
that this construct isn't as easy to understand as it seems at first
(and that hard-to-understand cases appear *before* you hit the point
where it's obvious that the statement is too complex and should be
refactored.)

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Generator syntax hooks?

2017-08-10 Thread Paul Moore
On 10 August 2017 at 21:25, Chris Barker <chris.bar...@noaa.gov> wrote:
> On Thu, Aug 10, 2017 at 8:39 AM, Paul Moore <p.f.mo...@gmail.com> wrote:
>
>>
>>  Also, there's a potential issue
>> here - consider
>>
>> [expr for var in even_numbers() if is_odd(var) while var < 100]
>>
>> This is an infinite loop, even though it has a finite termination
>> condition (var < 100), because we only test the termination condition
>> if var is odd, which it never will be.
>
>
> why is the termination only tested if the if clause is True? Could they not
> be processed in parallel? Or the while first?

See? That's my point - the "obvious" interpretation stops being
obvious pretty fast...

> so maybe better to do:
>
> [expr for var in even_numbers() while var < 100 if is_odd(var)]

That would work. But I bet people's intuition wouldn't immediately
lead to that fix (or indeed, necessarily incline them to put the
clauses in this order in the first place).

> Maybe it's just me, but I would certainly expect the while to have
> precedence.
>
> I guess I think of it like this:
>
> "if" is providing a filtering mechanism
>
> "while" is providing a termination mechanism
>
>  -- is there a use case anyone can think of when they would want the while
> to be applied to the list AFTER filtering?

Probably not, but when you can have multiple FORs, WHILEs and IFs, in
any order, explaining the behaviour precisely while still preserving
some sense of "filtering comes after termination" is going to be
pretty difficult. [expr for var1 in seq1 if cond1 for var2 in seq2 for
var3 in seq3 if cond2 if cond3] is legal - stupid, but legal. Now add
while clauses randomly in that, and define your expected semantics
clearly so a user (and the compiler!) can determine what the resulting
mess means.

The main benefit of the current "works like a for loop" interpretation
is that it's 100% explicit. Nothing will make a mess like the above
good code, but at least it's well-defined.
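
(For the record, this is what the mechanical rule says that mess means -
placeholder values added only so the expansion stands on its own:

    seq1 = seq2 = seq3 = range(2)            # placeholders
    cond1 = cond2 = cond3 = True             # placeholders
    expr = 0                                 # placeholder

    result = []
    for var1 in seq1:
        if cond1:
            for var2 in seq2:
                for var3 in seq3:
                    if cond2:
                        if cond3:
                            result.append(expr)

Ugly, but at least mechanical.)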

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] PEP: Hide implementation details in the C API

2017-07-11 Thread Paul Moore
On 11 July 2017 at 11:19, Victor Stinner  wrote:
> XXX should we abandon the stable ABI? Never really used by anyone.

Please don't. On Windows, embedding Python is a pain because a new
version of Python requires a recompile (which isn't ideal for apps
that just want to optionally allow Python scripting, for example).
Also, the recent availability of the embedded distribution on Windows
has opened up some opportunities and I've been using the stable ABI
there.

It's not the end of the world if we lose it, but I'd rather see it
retained (or better still, enhanced).

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Arguments to exceptions

2017-07-06 Thread Paul Moore
On 6 July 2017 at 18:59, Mark E. Haase <meha...@gmail.com> wrote:
> On Thu, Jul 6, 2017 at 5:58 AM, Paul Moore <p.f.mo...@gmail.com> wrote:
>>
>> To use the (already
>> over-used) NameError example, Ken's proposal doesn't include any
>> change to how NameError exceptions are raised to store the name
>> separately on the exception.
>
>
> Maybe I'm misunderstanding you, but the proposal has a clear example of
> raising NameError and getting the name attribute from the exception
> instance:

But no-one manually raises NameError, so Ken's example wouldn't work
with "real" NameErrors. If Ken was intending to present a use case
that did involve manually-raised NameError exceptions, then he needs
to show the context to demonstrate why manually raising NameError
rather than a custom exception (which can obviously work like he
wants) is necessary.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] namedtuple literals [Was: RE a new namedtuple]

2017-07-20 Thread Paul Moore
On 20 July 2017 at 07:58, Nathaniel Smith  wrote:
> From the above it sounds like this ntuple literal idea would be giving
> us a third independent way to solve this niche use case (ntuple,
> namedtuple, structseq). This seems like two too many? Especially given
> that namedtuple is already arguably *too* convenient, in the sense
> that it's become an attractive nuisance that gets used in places where
> it isn't really appropriate.

Agreed. This discussion was prompted by the fact that namedtuple class
creation was slow, resulting in startup time issues. It seems to have
morphed into a generalised discussion of how we design a new "named
values" type. While I know that if we're rewriting the implementation,
that's a good time to review the semantics, but it feels like we've
gone too far in that direction.

As has been noted, the new proposal

- no longer supports multiple named types with the same set of field names
- doesn't allow creation from a simple sequence of values

I would actually struggle to see how this can be considered a
replacement for namedtuple - it feels like a completely independent
beast. Certainly code intended to work on multiple Python versions
would seem to have no motivation to change.

> Also, what's the advantage of (x=1, y=2) over ntuple(x=1, y=2)? I.e.,
> why does this need to be syntax instead of a library?

Agreed. Now that keyword argument dictionaries retain their order,
there's no need for new syntax here. In fact, that's one of the key
motivating reasons for the feature.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] namedtuple literals [Was: RE a new namedtuple]

2017-07-20 Thread Paul Moore
On 20 July 2017 at 10:15, Clément Pit-Claudel <cpitclau...@gmail.com> wrote:
> On 2017-07-20 11:02, Paul Moore wrote:
>>> Also, what's the advantage of (x=1, y=2) over ntuple(x=1, y=2)? I.e.,
>>> why does this need to be syntax instead of a library?
>>
>> Agreed. Now that keyword argument dictionaries retain their order,
>> there's no need for new syntax here. In fact, that's one of the key
>> motivating reasons for the feature.
>
> Isn't there a speed aspect?  That is, doesn't the library approach require 
> creating (and likely discarding) a dictionary every time a new ntuple is 
> created?  The syntax approach wouldn't need to do that.

I don't think anyone has suggested that the instance creation time
penalty for namedtuple is the issue (it's the initial creation of the
class that affects interpreter startup time), so it's not clear that
we need to optimise that (at this stage). However, it's also true that
namedtuple instances are created from sequences, not dictionaries
(because the class holds the position/name mapping, so instance
creation doesn't need it). So it could be argued that the
backward-incompatible means of creating instances is *also* a problem
because it's slower...
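
For example, with today's namedtuple:

    from collections import namedtuple

    Point = namedtuple('Point', ['x', 'y'])
    p = Point(1, 2)           # created from positional values
    row = (3, 4)              # e.g. a row fetched from a database
    q = Point._make(row)      # created from any existing sequence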

Paul

PS Taking ntuple as "here's a neat idea for a new class", rather than
as a possible namedtuple replacement, changes the context of all of
the above significantly. Just treating ntuple purely as a new class
being proposed, I quite like it, but I'm not sure it's justified given
all of the similar approaches available, so let's see how a 3rd party
implementation fares. And it's too early to justify new syntax, but if
the overhead of a creation function turns out to be too high in
practice, we can revisit that question. But that's *not* what this
thread is about, as I understand it.
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] namedtuple literals [Was: RE a new namedtuple]

2017-07-24 Thread Paul Moore
On 24 July 2017 at 17:37, Michel Desmoulin  wrote:
> You are in the wrong thread. This thread is specifically about
> namedtuple literals.

In which case, did you not see Guido's post "Honestly I would like to
declare the bare (x=1, y=0) proposal dead."? The namedtuple literal
proposal that started this thread is no longer an option, so can we
move on? Preferably by dropping the whole idea - no-one has to my mind
offered any sort of "replacement namedtuple" proposal that can't be
implemented as a 3rd party library on PyPI *except* the (x=1, y=0)
syntax proposal, and I see no justification for adding a *fourth*
implementation of this type of object in the stdlib (which means any
proposal would have to include deprecation of at least one of
namedtuple, structseq or types.SimpleNamespace).

The only remaining discussion on the table that I'm aware of is how we
implement a more efficient version of the stdlib namedtuple class (and
there's not much of that to be discussed here - implementation details
can be thrashed out on the tracker issue).

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] + operator on generators

2017-07-01 Thread Paul Moore
On 1 July 2017 at 07:13, Steven D'Aprano  wrote:
> But the more I think about it the more I agree with Nick. Let's start
> by moving itertools.chain into built-ins, with zip and map, and only
> consider giving it an operator after we've had a few years of experience
> with chain as a built-in. We might even find that an operator doesn't
> add any real value.

I'm struck here by the contrast between this and the "let's slim down
the stdlib" debates we've had in the past.

How difficult is it really to add "from itertools import chain" at the
start of a file? It's not even as if itertools is a 3rd party
dependency.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Arguments to exceptions

2017-07-04 Thread Paul Moore
On 4 July 2017 at 06:08, Nick Coghlan <ncogh...@gmail.com> wrote:
> On 4 July 2017 at 09:46, Greg Ewing <greg.ew...@canterbury.ac.nz> wrote:
>> Paul Moore wrote:
>>>
>>> As noted, I disagree that people are not passing components because
>>> str(e) displays them the way it does. But we're both just guessing at
>>> people's motivations, so there's little point in speculating.
>>
>>
>> I've no doubt that the current situation encourages people
>> to be lazy -- I know, because I'm guilty of it myself!
>>
>> Writing a few extra lines to store attributes away and format
>> them in __str__ might not seem like much, but in most cases
>> those lines are of no direct benefit to the person writing
>> the code, so there's little motivation to do it right.
>
> So isn't this a variant of the argument that defining well-behaved
> classes currently involves writing too much boilerplate code, and the
> fact that non-structured exceptions are significantly easier to define
> than structured ones is just an example of that more general problem?
>
> I personally don't think there's anything all *that* special about
> exceptions in this case - they're just a common example of something
> that would be better handled as a "data record" type, but is commonly
> handled as an opaque string because they're so much easier to define
> that way.

Yes, that's what I was (badly) trying to say.

I agree that we could hide a lot of the boilerplate in BaseException
(which is what Ken was suggesting) but I don't believe we yet know the
best way to write that boilerplate, so I'm reluctant to put anything
in the stdlib until we do know better. For now, experimenting with 3rd
party "rich exception" base classes seems a sufficient option. It's
possible that more advanced methods than simply using a base class may
make writing good exception classes even easier, but I'm not sure I've
seen any evidence of that yet.
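
For concreteness, the sort of 3rd party "rich exception" base class I
have in mind is roughly this (an illustrative sketch, not an existing
library):

    class RichError(Exception):
        """Store message components as attributes as well as in args."""
        template = None              # subclasses may set a format string

        def __init__(self, *args, **components):
            super().__init__(*args)
            self.components = components
            for name, value in components.items():
                setattr(self, name, value)

        def __str__(self):
            if self.template is None:
                return super().__str__()
            return self.template.format(*self.args, **self.components)

    class UnknownUser(RichError):
        template = "{0}: unknown user in {db}"

    try:
        raise UnknownUser("welker", db="users")
    except UnknownUser as e:
        print(e.db)    # users
        print(e)       # welker: unknown user in users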

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Arguments to exceptions

2017-07-03 Thread Paul Moore
On 3 July 2017 at 20:46, Jeff Walker  wrote:
> I think you are fixating too much on Ken's example. I think I understand
> what he is saying and I agree with him. It is a problem I struggle with
> routinely. It occurs in the following situations:

Possibly. I hadn't reread the original email. Having done so, I'm
confused as to how the proposal and the example are related. The
proposal makes no difference unless the places where (for example)
NameError are raised are changed. But the proposal doesn't suggest
changing how the interpreter raises NameError. So how will the
proposal make a difference? I'd understood from the example that Ken's
need was to be able to find the name that triggered the NameError. His
proposal doesn't do that (unless we are talking about user-raised
NameError exceptions, as opposed to ones the interpreter raises - in
which case why not just use a user-defined exception?)

So I'm -1 on his proposal, as I don't see anything in it that couldn't
be done in user code for user-defined exceptions, and there's nothing
in the proposal suggesting a change in how interpreter-raised
exceptions are created.

> 1. You are handling an exception that you are not raising. This could be
> because Python itself is raising the exception, as in Ken's example, or it
> could be raised by some package you did not write.
> 2. You need to process or transform the message in some way.

Then yes, you need to know the API presented by the exception.
Projects (and the core interpreter) are not particularly good at
documenting (or designing) the API for their exceptions, but that
doesn't alter the fact that exceptions are user-defined classes and as
such do have an API. I'd be OK with arguments that the API of built in
exceptions as raised by the interpreter could be improved. Indeed, I
thought that was Ken's proposal. But his proposal seems to be that if
we add a ___str__ method to BaseException, that will somehow
automatically improve the API of all other exceptions.

To quote Ken:

> However, if more than one argument is passed, you get the string
> representation of the tuple containing all the arguments:
>
> >>> try:
> ...     raise Exception('Hey there!', 'Something went wrong.')
> ... except Exception as e:
> ...     print(str(e))
> ('Hey there!', 'Something went wrong.')
>
> That behavior does not seem very useful, and I believe it leads to people
> passing only one argument to their exceptions.

Alternatively, I could argue that code which uses print(str(e)) as its
exception handling isn't very well written, and the fact that people
do this is what leads to people passing only one argument to their
exceptions when creating them.

Look, I see that there might be something that could be improved here.
But I don't see an explanation of how, if we implement just the
proposed change to BaseException, the user code that Ken's quoting as
having a problem could be improved. There seems to be an assumption of
"and because of that change, people raising exceptions would change
what they do". Frankly, no they wouldn't. There's no demonstrated
benefit for them, and they'd have to maintain a mess of backward
compatibility code. So why would they bother?

Anyway, you were right that I'd replied to just the example, not the
original proposal. I apologise for that, I should have read the thread
more carefully. But if I had done so, it wouldn't have made much
difference - I still don't see a justification for the proposed
change.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Arguments to exceptions

2017-07-03 Thread Paul Moore
On 3 July 2017 at 09:59, Ken Kundert  wrote:
> I think in trying to illustrate the existing behavior I made things more
> confusing than they needed to be.  Let me try again.
>
> Consider this code.
>
> >>> import Food
> >>> try:
> ...     import meals
> ... except NameError as e:
> ...     name = str(e).split("'")[1]   # <-- fragile code
> ...     from difflib import get_close_matches
> ...     candidates = ', '.join(get_close_matches(name, Food.foods, 1, 0.6))
> ...     print(f'{name}: not found. Did you mean {candidates}?')
>
> In this case *meals* instantiates a collection of foods. It is a Python file,
> but it is also a data file (in this case the user knows Python, so Python is
> a convenient data format). In that file thousands of foods may be 
> instantiated.
> If the user misspells a food, I would like to present the available
> alternatives. To do so, I need the misspelled name.  The only way I can get it
> is by parsing the error message.

As Steven pointed out, this is a pretty good example of a code smell.
My feeling is that you may have just proved that Python isn't quite as
good a fit for your data file format as you thought - or that your
design has flaws. Suppose your user had a breakfast menu, and did
something like:

if now < lunchtim: # Should have been "lunchtime"

Your error handling will be fairly confusing in that case.

> That is the problem.  To write the error handler, I need the misspelled name.
> The only way to get it is to extract it from the error message. The need to
> unpack information that was just packed suggests that the packing was done too
> early.  That is my point.

I don't have any problem with *having* the misspelled name as an
attribute to the error, I just don't think it's going to be as useful
as you hope, and it may indeed (as above) encourage people to use it
without thinking about whether there might be problems with using
error handling that way.

> Fundamentally, pulling the name out of an error message is a really bad coding
> practice because it is fragile.  The code will likely break if the formatting
> or the wording of the message changes.  But given the way the exception was
> implemented, I am forced to choose between two unpleasant choices: pulling the
> name from the error message or not giving the enhanced message at all.

Or using a different approach. ("Among our different approaches...!"
:-)) Agreed that's also an unpleasant choice at this point.

> What I am hoping to do with this proposal is to get the Python developer
> community to see that:
> 1. The code that handles the exception benefits from having access to the
>    components of the error message.  At the least it can present the message
>    to the user in the best possible way. Perhaps that means enforcing a
>    particular style, or presenting it in the user's native language, or
>    perhaps it means providing additional related information as in the
>    example above.

I see it as a minor bug magnet, but not really a problem in principle.

> 2. The current approach to exceptions follows the opposite philosophy,
>    suggesting that the best place to construct the error message is at the
>    source of the error. It inadvertently puts obstacles in place that make it
>    difficult to customize the message in the handler.

It's more about implicitly enforcing the policy of "catch errors over
as small a section of code as practical". In your example, you're
trapping NameError from anywhere in a "many thousands" of line file.
That's about as far from the typical use of one or two lines in a try
block as you can get.

> 3. Changing the approach in the BaseException class to provide the best of
>    both approaches provides considerable value and is both trivial and
>    backward compatible.

A small amount of value in a case we don't particularly want to encourage.
Whether it's trivial comes down to implementation - I'll leave that to
whoever writes the PR to demonstrate. (Although if it *is* trivial, is
it something you could write a PR for?)

Also, given that this would be Python 3.7 only, would people needing
this functionality (only you have expressed a need so far) be OK with
either insisting their users go straight to Python 3.7, or including
backward compatible code for older versions?

Overall, I'm -0 on this request (assuming it is trivial to implement -
I certainly don't feel it's worth significant implementation effort).

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] CPython should get...

2017-07-01 Thread Paul Moore
On 1 July 2017 at 18:35, Nick Timkovich  wrote:
> Devil's advocate: why prepare a patch and submit it if it is going to be
> dismissed out of hand. Trying to gauge support for the idea is a reasonable
> first-step.

That's perfectly OK, but it's important to phrase the email in a way
that makes that clear - "I'm considering putting together a PR for
Python to implement X. Does that sound like a good idea, or does
anyone have suggestions for potential issues I might consider? Also,
is there any prior work in this area that I should look into?"

"Python should have X" implies (a) that you are criticising the python
developers for missing that feature out, (b) that you consider your
position self-evident, and (c) that you expect someone to implement
it.

People have different ways of expressing themselves, so we should all
be prepared to allow some leeway in how people put their ideas across.
But the writer has some responsibility for the tone, too.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Arguments to exceptions

2017-07-06 Thread Paul Moore
On 6 July 2017 at 02:53, Jeff Walker  wrote:
> Could you please expand on these statements:
>
>>  the idea doesn't actually solve the problem it is intended to
>
> Specifically Ken started by saying that it should not be necessary to parse
> the messages to get the components of the message. He then gave an example
> where he was able to access the components of the message without parsing
> the message. So how is it that he is not solving the problem he intended
> to solve?

Just to add my perspective here, his proposed solution (to modify
BaseException) doesn't include any changes to the derived exceptions
that would need to store the components. To use the (already
over-used) NameError example, Ken's proposal doesn't include any
change to how NameError exceptions are raised to store the name
separately on the exception.

So *as the proposal stands* it doesn't allow users to extract
components of any exceptions, simply because the proposal doesn't
suggest changing exceptions to *store* those components.
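
To make that concrete: in current CPython the offending name only
exists embedded in the message string, roughly:

    >>> try:
    ...     undefined_name
    ... except NameError as e:
    ...     print(e.args)
    ...
    ("name 'undefined_name' is not defined",)

There is no separate "name" attribute for the handler to look at.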

>>  His solution can't work
>
> Again, he gave an example where he was able to access the components of the
> message without parsing the message. Yet you claim his solution cannot work.
> Is his example wrong?

Yes. Because he tries to extract the name component of a NameError,
and yet that component isn't stored anywhere - under his proposal or
under current CPython.

>>  He hasn't demonstrated that there is a real problem
>
> You yourself admitted that parsing a message to extract the components is
> undesirable. Ken and others, including myself, gave examples where this was
> necessary. Each example was given as either being a real problem or
> representative of a real problem. Are we all wrong?

He's given examples of use cases. To that extent, Steven is being a
touch absolute here. However, there has been some debate over whether
those examples are valid. We've had multiple responses pointing out
that the code examples aren't restricting what's in the try block
sufficiently tightly, for example (the NameError case in particular
was importing a file which, according to Ken himself, had potentially
*thousands* of places where NameError could be raised). It's possible
that introspecting exceptions is the right way to design a solution to
this particular problem, but it does go against the normal design
principles that have been discussed on this list and elsewhere many
times. So, to demonstrate that there's a problem, it's necessary to
address the question of whether the code could in fact have been
written in a different manner that avoided the claimed problem.

That's obviously not a black and white situation - making it easier to
write code in a certain style is a valid reason for suggesting an
enhancement - but the debate has edged towards a stance of "this is
needed" (as in, the lack of it is an issue) rather than "this would be
an improvement". That's not what Ken said, though, and we all bear a
certain responsibility for becoming a little too entrenched in our
positions.

As far as whether Steven's (or anyone's) comments are too negative, I
think the pushback is reasonable. In particular, as far as I know Ken
is not a first-time contributor here, so he's almost certainly aware
of the sorts of concerns that come up in discussions like this, and
with that context I doubt he's offended by the reception his idea got
(indeed, his responses on this thread have been perfectly sensible and
positive). I do think we need to be more sensitive with newcomers, and
Chris Angelico's idea of a "falsehoods programmers believe about
python-ideas" collection may well be a good resource to gently remind
newcomers of some of the parameters of discussion around here.

You also say
> but it can be fun and educational to discuss the ideas

Indeed, very much so. I've certainly learned a lot about language and
API design from discussions here over the years. But again, that's the
point - the biggest things I've learned are about how *hard* good
design is, and how important it is to think beyond your own personal
requirements. Most of the "negative" comments I've seen on this list
have been along those lines - reminding people that there's a much
wider impact for their proposals, and that benefits need to be a lot
more compelling than you originally thought. That's daunting, and
often discouraging (plenty of times, I've been frustrated by the fact
that proposals that seem good to me are blocked by the risk that
someone might have built their whole business around a corner case
that I'd never thought of, or cared about). But it's the reality and
it's an immensely valuable lesson to learn (IMO).

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Arguments to exceptions

2017-07-07 Thread Paul Moore
On 7 July 2017 at 04:54, Jeff Walker  wrote:
> Here is an example:
>
> class NameError(BaseException):
>     pass
>
> try:
>     raise NameError('welker', db='users', template='{0}: unknown {db}.')
> except NameError as e:
>     unknown_name = e.args[0]
>     missing_from = e.kwargs['db']
>     print(str(e))
>
> Given this example, please explain why it is you say that the arguments are
> not stored and are not accessible.

Because the proposal doesn't state that NameError is to be changed,
and the example code isn't real, as it's manually raising a system
exception.

Anyway, I'm tired of this endless debate about what Ken may or may not
have meant. I'm going to bow out now and restrict myself to only
reading and responding to actual proposals.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Augmented assignment syntax for objects.

2017-04-26 Thread Paul Moore
On 25 April 2017 at 23:30, Erik  wrote:
> As I said above, it's not about the effort writing it out. It's about the
> effort (and accuracy) of reading the code after it has been written.

Well, personally I find all of the syntax proposals relatively
unreadable. So that's definitely a matter of opinion. And the
"explicit is better than implicit" principle argues for the longhand
form.

As has been pointed out, the case for += is more about incremented
complex computed cases than simply avoiding repeating a variable name
(although some people find that simpler case helpful, too - I'm
personally ambivalent).

> And as I also said above, decorators don't cut it anyway (at least not those
> proposed) because they blindly assign ALL of the arguments. I'm more than
> happy to hear of something that solves both of those problems without
> needing syntax changes though, as that means I can have it today ;)

That's something that wasn't clear from your original post, but you're
correct. It should be possible to modify the decorator to take a list
of the variable names you want to assign, but I suspect you won't like
that - it does reduce the number of times you have to name the
variables from 3 to 2, the same as your proposal, though.

class MyClass:
    @auto_args('a', 'b')
    def __init__(self, a, b, c=None):
        pass
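
For what it's worth, a rough sketch of such a decorator (illustrative
only, and certainly not the only way to write it):

    import functools
    import inspect

    def auto_args(*names):
        """Assign the named __init__ parameters onto self automatically."""
        def decorator(init):
            sig = inspect.signature(init)
            @functools.wraps(init)
            def wrapper(self, *args, **kwargs):
                bound = sig.bind(self, *args, **kwargs)
                bound.apply_defaults()
                for name in names:
                    setattr(self, name, bound.arguments[name])
                return init(self, *args, **kwargs)
            return wrapper
        return decorator

With the class above, MyClass(1, 2) would then set self.a and self.b
but leave c alone.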

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Augmented assignment syntax for objects.

2017-04-26 Thread Paul Moore
On 26 April 2017 at 16:17, Erik <pyt...@lucidity.plus.com> wrote:
> On 26/04/17 08:59, Paul Moore wrote:
>>
>> It should be possible to modify the decorator to take a list
>> of the variable names you want to assign, but I suspect you won't like
>> that
>
>
> Now you're second-guessing me.

Sorry :-)

>> class MyClass:
>>     @auto_args('a', 'b')
>>     def __init__(self, a, b, c=None):
>>         pass
>
> I had forgotten that decorators could take parameters. Something like that
> pretty much ticks the boxes for me.

Cool. Glad you liked the idea.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Augmented assignment syntax for objects.

2017-04-26 Thread Paul Moore
On 26 April 2017 at 21:51, Erik  wrote:
> It doesn't make anything more efficient, however all of the suggestions of
> how to do it with current syntax (mostly decorators) _do_ make things less
> efficient.

Is instance creation the performance bottleneck in your application?
That seems unusual. I guess it's possible if you're reading a large
database file and creating objects for each row. But in that case, as
Chris A says, you may be better with something like a named tuple. In
any case, optimising for performance is not what generic solutions are
good at, so it's not surprising that a generic decorator involves a
performance hit.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Augmented assignment syntax for objects.

2017-04-26 Thread Paul Moore
On 26 April 2017 at 22:42, Erik  wrote:
> 2) The original proposal, which does belong on -ideas and has to take into
> account the general case, not just my specific use-case.
>
> The post you are responding to is part of (2), and hence reduced performance
> is a consideration.

Ah, OK. I'm discounting the original proposal, as there don't seem to
be sufficient (i.e., any :-)) actual use cases that aren't covered by
the decorator proposal(s). Or to put it another way, if the only
reason for the syntax proposal is performance then show me a case
where performance is so critical that it warrants a language change.

Sorry for not being clear myself.
Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Augmented assignment syntax for objects.

2017-04-28 Thread Paul Moore
On 28 April 2017 at 14:07, Nick Coghlan  wrote:
>> Am I missing some point?
>
> Yes, the point I attempted to raise earlier: at the language design
> level, "How do we make __init__ methods easier to write?" is the
> *wrong question* to be asking. It's treating the symptom (writing an
> imperative initialiser is repetitive when it comes to field names)
> rather than the root cause (writing imperative initialisers is still
> part of the baseline recommendation for writing classes, and we don't
> offer any supporting infrastructure for avoiding that directly in the
> standard library)
[...]

Ah, OK. I see what you're saying here. I agree, that's the direction
we should be looking in. I'd sort of lumped all of that side of things
in my mind under the header of "must take a look at attrs" and left it
at that for now. My mistake.

So basically yes, I agree we should have better means for writing
common class definition patterns in a declarative way, retaining the
current underlying mechanisms while making it so that people typically
don't need to interact with them. I suspect that those means will
probably take the form of stdlib/builtin support (decorators, custom
base classes, etc) but the design will need some thrashing out. Beyond
that, I don't really have much more to add until I've done some
research into the current state of the art, in the form of libraries
like attrs (and Stephan Hoyer mentioned typing.NamedTuple).

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Augmented assignment syntax for objects.

2017-04-29 Thread Paul Moore
On 28 April 2017 at 23:04, Erik  wrote:
>> See what I mean? Things get out of hand *very* fast.
>
> I don't see how that's getting "out of hand". The proposal is nothing more
> complicated than a slightly-different spelling of assignment. It could be
> done today with a text-based preprocessor which converts the proposed form
> to an existing valid syntax. Therefore, if it's "out of hand" then so is the
> existing assignment syntax ;)

Well with your clarification, it's now clear to me that your proposal is for

    <expression> import var1, var2, ...

to be equivalent to

_tmp = <expression>
_tmp.var1 = var1
_tmp.var2 = var2
...
del _tmp

Please correct me if I'm wrong here - but that's what I get from your
comments. Note that the way I've described the proposal allows for an
*expression* on the left of import - that's simply because I see no
reason not to allow that (the whole "arbitrary restrictions" thing
again).

So based on the above, you're proposing reusing the import keyword for
a series of attribute assignments, on the basis that the current
meaning of import is to assign values to a number of attributes of the
current namespace. However, that misses the fact that the main point
of the current import statement is to do the *import* - the RHS is a
list of module names. The assignment is just how we get access to the
modules that have been imported. So by "reusing" the import keyword
purely by analogy with the assignment part of the current semantics,
you're ignoring the fundamental point of an import, which is to load a
module.

So based on your clarification, I'm now solidly against reusing the
"import" keyword, on the basis that your proposed syntax doesn't
import any modules.

On the other hand, the construct you describe above of repeated
attribute assignments is only really common in __init__ functions.
It's similar to the sort of "implied target" proposals that have been
raised before (based on the Pascal "with" statement and similar
constructs):

SOME_KEYWORD <expression>:
    .var1 = var1
    .var2 = var2

Your construct is more limited in that it doesn't allow the RHS of the
assignments to be anything other than a variable with the same name as
the attribute (unless you extend it with "expr as var", which I can't
recall if you included in your proposal), nor does it allow for
statements other than assignments in the block. And the implied target
proposal doesn't really avoid the verbosity that prompted your
proposal. So while it's similar in principle, it may not actually help
much in your case.

So the reason I was saying things got "out of hand" was because the
final version I got to had absolutely nothing to do with an import (in
my view) - but on reflection, you're right it's no different than the
self version. It's just that the self version also has nothing to do
with an import! However, in the original context, it's easier to see
"name injection into a namespace" as a common factor, and miss
"importing a module" as the key point (at least it was easier for me
to miss that - you may have seen that and been happy that your
proposal was unrelated to modules even though it used the import
keyword).

I agree with Nick - the place we should be looking for improvements
here is in trying to find abstractions for common patterns of class
creation. Instead of looking at how to make the low-level mechanisms
like __init__ easier to write, we should be concentrating on making it
rare for people to *need* to use the low-level forms (while still
leaving them present for specialised cases).

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Augmented assignment syntax for objects.

2017-04-28 Thread Paul Moore
On 28 April 2017 at 12:55, Tin Tvrtković  wrote:
> I'm putting forward three examples. These examples are based on attrs since
> that's what I consider to be the best way of having declarative classes in
> Python today.

Your comments and examples are interesting, but don't they just come
down to "attrs is a really good library"? I certainly intend to look
at it based on what's been said in this thread. But I don't see
anything much that suggests that anything *beyond* attrs is needed
(except maybe bringing attrs into the stdlib, if its release cycle
would make that reasonable and the author was interested, and honestly
"put it in the stdlib" could be said about any good library - in
practice though publishing on PyPI and accessing via pip is 99% of the
time the more reasonable option).

Am I missing some point?
Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Augmented assignment syntax for objects.

2017-04-25 Thread Paul Moore
On 25 April 2017 at 03:53, Steven D'Aprano  wrote:
> On Tue, Apr 25, 2017 at 02:08:05AM +0100, Erik wrote:
>
>> I often find myself writing __init__ methods of the form:
>>
>> def __init__(self, foo, bar, baz, spam, ham):
>>   self.foo = foo
>>   self.bar = bar
>>   self.baz = baz
>>   self.spam = spam
>>   self.ham = ham
>>
>> This seems a little wordy and uses a lot of vertical space on the
>> screen.
>
> It does, and while both are annoying, in the grand scheme of things
> they're a very minor annoyance. After all, this is typically only an
> issue once per class, and not even every class, and vertical space is
> quite cheap. In general, the barrier for accepting new syntax is quite
> high, and "minor annoyance" generally doesn't reach it.

I suspect that with a suitably creative use of inspect.signature() you
could write a decorator for this:

@auto_attrs
def __init__(self, a, b, c):
    # Remaining init code, called with self.a, self.b and self.c set

I don't have time to experiment right now, but will try to find time
later. If nothing else, such a decorator would be a good prototype for
the proposed functionality, and may well be sufficient for the likely
use cases without needing a syntax change.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] namedtuple literals [Was: RE a new namedtuple]

2017-07-30 Thread Paul Moore
On 30 July 2017 at 16:24, Nick Coghlan  wrote:
> Rather than being about any changes on that front, these threads are
> mostly about making it possible to write that first line as:
>
> MyNT = type(implicitly_typed_named_tuple_factory(foo=None, bar=None))

Is that really true, though? There's a lot of discussion about whether
ntuple(x=1, y=2) and ntuple(y=2, x=1) are equal (which implies they
are the same type). If there's any way they can be the same type, then
your definition of MyNT above is inherently ambiguous, depending on
whether we've previously referred to
implicitly_typed_named_tuple_factory(bar=None, foo=None).

For me, the showstopper with regard to this whole discussion about
ntuple(x=1, y=2) is this key point - every proposed behaviour has
turned out to be surprising to someone (and not just in a "hmm, that's
odd" sense, but rather in the sense that it'd almost certainly result
in bugs as a result of misunderstood behaviour).

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] "any" and "all" support multiple arguments

2017-08-01 Thread Paul Moore
On 1 August 2017 at 14:01, Louie Lu  wrote:
> I'm not sure if this has been discussed before, but can "any" and "all"
> support like min_max "arg1, arg2, *args" style?

I don't see any particular reason why not, but is there a specific use
case for this or is it just a matter of consistency? Unlike max and
min, we already have operators in this case (and/or). I'd imagine that
if I had a use for any(a, b, c) I'd write it as a or b or c, and for
all(a, b, c) I'd write a and b and c.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Pseudo methods

2017-08-04 Thread Paul Moore
On 4 August 2017 at 14:20, Joao S. O. Bueno  wrote:
> Had not this been discussed here earlier this year?
>
> (And despite there being perceived dangers to readability in the long term,
> was accepted?)
>
> Here it is on an archive:
> https://mail.python.org/pipermail/python-ideas/2017-February/044551.html

From a very brief review of the end of that thread, it looks like it
was agreed that a PEP might be worthwhile - it was expected to be
rejected, though, and the PEP would simply document the discussion and
the fact that the idea was rejected. This agrees with my recollection
of the discussion, as well. But as far as I'm aware, no-one ever wrote
that PEP. (Not surprising, I guess, as it's hard to get enthusiastic
about proposing an idea you know in advance will be rejected).

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Improving Catching Exceptions

2017-06-23 Thread Paul Moore
On 23 June 2017 at 15:20, Sven R. Kunze  wrote:
> On 23.06.2017 03:02, Cameron Simpson wrote:
>
>
> How about something like this?
>
>try:
>val = bah[5]
>except IndexError:
># handle your expected exception here
>else:
>foo(val)
>
>
> That is the kind of refactor to which I alluded in the paragraph above.
> Doing that a lot tends to obscure the core logic of the code, hence the
> desire for something more succinct requiring less internal code fiddling.
>
>
> And depending on how complex bah.__getitem__ is, it can raise IndexError
> unintentionally as well. So, rewriting the outer code doesn't even help
> then. :-(

At this point, it becomes unclear to me what constitutes an
"intentional" IndexError, as opposed to an "unintentional" one, at
least in any sense that can actually be implemented.

I appreciate that you want IndexError to mean "there is no 5th element
in bah". But if bah has a __getitem__ that raises IndexError for any
reason other than that, then the __getitem__ implementation has a bug.
And while it might be nice to be able to continue working properly
even when the code you're executing has bugs, I think it's a bit
optimistic to hope for :-)

On the other hand, I do see the point that insisting on finer and
finer grained exception handling ultimately ends up with unreadable
code. But it's not a problem I'd expect to see much in real life code
(where code is either not written that defensively, because either
there's context that allows the coder to make assumptions that objects
will behave reasonably sanely, or the code gets refactored to put the
exception handling in a function, or something like that).
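
To be concrete about the "refactored into a function" option, something
like this is usually enough (the helper name and the None default are
just illustrative):

    def element_or_default(seq, index, default=None):
        # Treat IndexError from the lookup as "no such element".
        try:
            return seq[index]
        except IndexError:
            return default

    val = element_or_default(bah, 5)
    if val is not None:
        foo(val)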

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] ImportError raised for a circular import

2017-06-14 Thread Paul Moore
On 13 June 2017 at 23:36, Chris Angelico  wrote:
> On Wed, Jun 14, 2017 at 8:10 AM, Mahmoud Hashemi  wrote:
>> I didn't interpret the initial email as wanting an error on *all* circular
>> imports. Merely those which are unresolvable. I've definitely helped people
>> diagnose circular imports and wished there was an error that called that out
>> programmatically, even if it's just a string admonition to check for
>> circular imports, appended to the ImportError message.
>
> Oh! That could be interesting. How about a traceback in the import chain?

I have a feeling that mypy might flag circular imports. I've not used
mypy myself, but I just saw the output from a project where we enabled
very basic use of mypy (no type hints at all, yet) and saw an error
reported about a circular import. So with suitable configuration, mypy
could help here (and may lead to other benefits if you want to use
more of its capabilities).

OTOH, having the interpreter itself flag that it had got stuck in an
import loop with an explicit message explaining the problem sounds
like a reasonable idea.
Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Decorators for running a function in a Process or Thread

2017-05-01 Thread Paul Moore
On 1 May 2017 at 12:13, NoxDaFox  wrote:
>
> I think it could be a good fit for the `concurrent.futures` module.
> Decorated functions would return a `Future` object and run the logic in a
> separate thread or process.
>
>
> @concurrent.futures.thread
> def function(arg, kwarg=0):
> return arg + kwarg
>
> future = function(1, kwarg=2)
> future.result()

What's the benefit over just running the function in a thread (or
process) pool, using Executor.submit()?
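
For comparison, the existing spelling is already pretty short:

    from concurrent.futures import ThreadPoolExecutor

    def function(arg, kwarg=0):
        return arg + kwarg

    with ThreadPoolExecutor() as executor:
        future = executor.submit(function, 1, kwarg=2)  # returns a Future
        print(future.result())                          # -> 3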

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Add an option for delimiters in bytes.hex()

2017-05-03 Thread Paul Moore
On 3 May 2017 at 02:48, Erik  wrote:
> Anyway, I know you can't stop anyone from *proposing* something like this,
> but as soon as they do you may decide to quote the recipe from
> "https://docs.python.org/3/library/functions.html#zip; and try to block
> their proposition. There are already threads on fora that do that.
>
> That was my sticking point at the time when I implemented a general
> solution. Why bother to propose something that (although it made my code
> significantly faster) had already been blocked as being something that
> should be a python-level operation and not something to be included in a
> built-in?

It sounds like you have a reasonable response to the suggestion of
using zip - that you have a use case where performance matters, and
your proposed solution is of value in that case. Whether it's a
*sufficient* response remains to be seen, but unless you present the
argument we won't know.

IMO, the idea behind itertools being building blocks is not to deter
proposals for new tools, but to make sure that people focus on
providing important low-level tools, and not on high level operations
that can just as easily be written using those tools - essentially the
guideline "not every 3-line function needs to be a builtin". So it's
to make people think, not to block innovation.

Hope this clarifies,
Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Suggestion: Add shutil.get_dir_size

2017-05-03 Thread Paul Moore
On 3 May 2017 at 06:43, Serhiy Storchaka  wrote:
> On 02.05.17 22:07, Ram Rachum wrote:
>>
>> I have a suggestion: Add a function shutil.get_dir_size that gets the
>> size of a directory, including all the items inside it recursively. I
>> currently need this functionality and it looks like I'll have to write
>> my own function for it.
>
>
> The comprehensive implementation should take into account hard links, mount
> points, variable block sizes, sparse files, transparent files and blocks
> compression, file tails packing, blocks deduplication, additional file
> streams, file versioning, and many many other FS specific features. If you
> implement a module providing this feature, publish it on PyPI and prove its
> usefulness for common Python user, it may be considered for inclusion in the
> Python standard library.

+1 I would be interested in a pure-Python version of "du", but would
expect to see it as a 3rd party module in the first instance (at least
until all of the bugs have been thrashed out) rather than being
immediately proposed for the stdlib.
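
For anyone wanting a starting point, here's a deliberately naive sketch
- it ignores every one of the filesystem subtleties Serhiy lists, which
is exactly why it should live on PyPI first:

    import os

    def get_dir_size(path):
        # Recursively sum apparent file sizes; symlinks are not followed.
        total = 0
        with os.scandir(path) as entries:
            for entry in entries:
                if entry.is_file(follow_symlinks=False):
                    total += entry.stat(follow_symlinks=False).st_size
                elif entry.is_dir(follow_symlinks=False):
                    total += get_dir_size(entry.path)
        return total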

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] + operator on generators

2017-06-28 Thread Paul Moore
On 28 June 2017 at 05:30, Terry Reedy  wrote:
> On 6/27/2017 10:47 PM, Nick Coghlan wrote:
>
>> While I haven't been following this thread closely, I'd like to note
>> that arguing for a "chain()" builtin has the virtue that would just be
>> arguing for the promotion of the existing itertools.chain function
>> into the builtin namespace.
>>
>> Such an approach has a lot to recommend it:
>>
>> 1. It has precedent, in that Python 3's map(), filter(), and zip(),
>> are essentially Python 2's itertools.imap(), ifilter(), and izip()
>> 2. There's no need for a naming or semantics debate, as we'd just be
>> promoting an established standard library API into the builtin
>> namespace
>
>
> A counter-argument is that there are other itertools that deserve promotion,
> by usage, even more.  But we need to see comparisons from more that one
> limited corpus.

Indeed. I don't recall *ever* using itertools.chain myself. I'd be
interested in seeing some usage stats to support this proposal. As an
example, I see 8 uses of itertools.chain in pip and its various
vendored packages, as opposed to around 30 uses of map (plus however
many list comprehensions are used in place of maps). On a very brief
scan, it looks like the various other itertools are used less than
chain, but with only 8 uses of chain, it's not really possible to read
anything more into the relative frequencies.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Hexadecimal floating literals

2017-09-21 Thread Paul Moore
On 21 September 2017 at 02:53, Steven D'Aprano  wrote:
> On Thu, Sep 21, 2017 at 11:13:44AM +1000, Nick Coghlan wrote:
>
>> I think so, as consider this question: how do you write a script that
>> accepts a user-supplied string (e.g. from a CSV file) and treats it as
>> hex floating point if it has the 0x prefix, and decimal floating point
>> otherwise?
>
> float.fromhex(s) if s.startswith('0x') else float(s)
>
> [...]
>> And if the float() builtin were to gain a "base" parameter, then it's
>> only a short step from there to allow at least the "0x" prefix on
>> literals, and potentially even "0b" and "0o" as well.
>>
>> So I'm personally +0 on the idea
>
> I agree with your arguments. I just wish I could think of a good reason
> to make it +1 instead of a luke-warm +0.

I'm also +0.

I think +0 is pretty much the correct response - it's OK with me, but
someone who actually needs or wants the feature will need to implement
it.

It's also worth remembering that there will be implementations other
than CPython that will need changes, too - Jython, PyPy, possibly
Cython, and many editors and IDEs. So setting the bar at "someone who
wants this will have to step up and provide a patch" seems reasonable
to me.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] PEP draft: context variables

2017-10-14 Thread Paul Moore
On 14 October 2017 at 17:50, Nick Coghlan <ncogh...@gmail.com> wrote:
> On 14 October 2017 at 21:56, Paul Moore <p.f.mo...@gmail.com> wrote:
>
> TL;DR of below: PEP 550 currently gives you what you're after, so your
> perspective counts as a preference for "please don't eagerly capture the
> creation time context in generators or coroutines".

Thank you. That satisfies my concerns pretty well.

> The suggestion has been made that we should instead be capturing the active
> context when "url_get(url)" is called, and implicitly switching back to that
> at the point where await is called. It doesn't seem like a good idea to me,
> as it breaks the "top to bottom" mental model of code execution (since the
> "await cr" expression would briefly switch the context back to the one that
> was in effect on the "cr = url_get(url)" line without even a nested suite to
> indicate that we may be adjusting the order of code execution).

OK. Then I think that's a bad idea - and anyone proposing it probably
needs to explain much more clearly why it might be a good idea to jump
around in the timeline like that.

> If you capture the context eagerly, then there are fewer opportunities to
> get materially different values from "data = list(iterable)" and "data =
> iter(context_capturing_iterable)".
>
> While that's a valid intent for folks to want to be able to express, I
> personally think it would be more clearly requested via an expression like
> "data = iter_in_context(iterable)" rather than having it be implicit in the
> way generators work (especially since having eager context capture be
> generator-only behaviour would create an odd discrepancy between generators
> and other iterators like those in itertools).

OK. I understand the point here - but I'm not sure I see the practical
use case for iter_in_context. When would something like that be used?

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] PEP draft: context variables

2017-10-15 Thread Paul Moore
On 15 October 2017 at 13:51, Amit Green <amit.mi...@gmail.com> wrote:
> Once again, I think Paul Moore gets to the heart of the issue.
>
> Generators are simply confusing & async even more so.
>
> Per my earlier email, the fact that generators look like functions, but are
> not functions, is at the root of the confusion.

I don't agree. I don't find generators *at all* confusing. They are a
very natural way of expressing things, as has been proven by how
popular they are in the Python community.

I don't *personally* understand async, but I'm currently willing to
reserve judgement until I've been in a situation where it would be
useful, and therefore needed to learn it.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] PEP draft: context variables

2017-10-15 Thread Paul Moore
On 15 October 2017 at 05:39, Nick Coghlan <ncogh...@gmail.com> wrote:
> On 15 October 2017 at 05:47, Paul Moore <p.f.mo...@gmail.com> wrote:
>>
>> On 14 October 2017 at 17:50, Nick Coghlan <ncogh...@gmail.com> wrote:
>> > If you capture the context eagerly, then there are fewer opportunities
>> > to
>> > get materially different values from "data = list(iterable)" and "data =
>> > iter(context_capturing_iterable)".
>> >
>> > While that's a valid intent for folks to want to be able to express, I
>> > personally think it would be more clearly requested via an expression
>> > like
>> > "data = iter_in_context(iterable)" rather than having it be implicit in
>> > the
>> > way generators work (especially since having eager context capture be
>> > generator-only behaviour would create an odd discrepancy between
>> > generators
>> > and other iterators like those in itertools).
>>
>> OK. I understand the point here - but I'm not sure I see the practical
>> use case for iter_in_context. When would something like that be used?
>
>
> Suppose you have some existing code that looks like this:
>
> results = [calculate_result(a, b) for a, b in data]
>
> If calculate_result is context dependent in some way (e.g. a & b might be
> decimal values), then eager evaluation of "calculate_result(a, b)" will use
> the context that's in effect on this line for every result.
>
> Now, suppose you want to change the code to use lazy evaluation, so that you
> don't need to bother calculating any results you don't actually use:
>
> results = (calculate_result(a, b) for a, b in data)
>
> In a PEP 550 world, this refactoring now has a side-effect that goes beyond
> simply delaying the calculation: since "calculate_result(a, b)" is no longer
> executed immediately, it will default to using whatever execution context is
> in effect when it actually does get executed, *not* the one that's in effect
> on this line.
>
> A context capturing helper for iterators would let you decide whether or not
> that's what you actually wanted by instead writing:
>
> results = iter_in_context(calculate_result(a, b) for a, b in data)
>
> Here, "iter_in_context" would indicate explicitly to the reader that
> whenever another item is taken from this iterator, the execution context is
> going to be temporarily reset back to the way it was on this line. And since
> it would be a protocol based iterator-in-iterator-out function, you could
> wrap it around *any* iterator, not just generator-iterator objects.

OK, got it. That sounds to me like a candidate for a stdlib function
(either because it's seen as a common requirement, or because it's
tricky to get right - or both). The PEP doesn't include it, as far as
I can see, though.
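
For reference, a rough sketch of what such a helper might look like,
written against the context-capturing API that later shipped as
contextvars in Python 3.7 (the name iter_in_context is Nick's;
everything else here is illustrative):

    import contextvars

    def iter_in_context(iterable):
        # Snapshot the execution context at the point of this call, and
        # advance the underlying iterator inside that snapshot each time.
        ctx = contextvars.copy_context()
        it = iter(iterable)
        while True:
            try:
                yield ctx.run(next, it)
            except StopIteration:
                return

    # results = iter_in_context(calculate_result(a, b) for a, b in data)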

But I do agree with MAL, it seems wrong to need a helper for this,
even though it's a logical consequence of the other semantics I
described as intuitive :-(

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Ternary operators in list comprehensions

2017-10-05 Thread Paul Moore
>>> a = [1,2,3]
>>> [x if x & 1 else 'even' for x in a]
[1, 'even', 3]

You're mixing the if clause of the list comprehension up with a
ternary expresssion. There's no "else" in the list comprehension if
clause.

Paul

On 5 October 2017 at 16:40, Jason H  wrote:
>>>> a = [1,2,3]
>>>> [ x for x  in a if x & 1]
> [1, 3]
>>>> [ x for x  in a if x & 1 else 'even']
>   File "<stdin>", line 1
> [ x for x  in a if x & 1 else 'even']
> ^
> SyntaxError: invalid syntax
>
> I expected [1, 'even', 3]
>
> I would expect that the if expression would be able to provide alternative 
> values through else.
>
> The work around blows it out to:
> l = []
> for x in a:
>   if x&1:
> l.append(x)
>   else:
> l.append('even')
>
>
> Unless there is a better way?
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] PEP draft: context variables

2017-10-13 Thread Paul Moore
On 13 October 2017 at 19:32, Yury Selivanov  wrote:
>>> It seems simpler to have one specially named and specially called function
>>> be special, rather than make the semantics
>>> more complicated for all functions.
>>
>
> It's not possible to special case __aenter__ and __aexit__ reliably
> (supporting wrappers, decorators, and possible side effects).
>
>> +1.  I think that would make it much more usable by those of us who are not
>> experts.
>
> I still don't understand what Steve means by "more usable", to be honest.

I'd consider myself a "non-expert" in async. Essentially, I ignore it
- I don't write the sort of applications that would benefit
significantly from it, and I don't see any way to just do "a little
bit" of async, so I never use it.

But I *do* see value in the context variable proposals here - if only
in terms of them being a way to write my code to respond to external
settings in an async-friendly way. I don't follow the underlying
justification (which is based in "we need this to let things work with
async/coroutines) at all, but I'm completely OK with the basic idea
(if I want to have a setting that behaves "naturally", like I'd expect
decimal contexts to do, it needs a certain amount of language support,
so the proposal is to add that). I'd expect to be able to write
context variables that my code could respond to using a relatively
simple pattern, and have things "just work". Much like I can write a
context manager using @contextmanager and yield, and not need to
understand all the intricacies of __enter__ and __exit__. (BTW,
apologies if I'm mangling the terminology here - write it off as part
of me being "not an expert" :-))

What I'm getting from this discussion is that even if I *do* have a
simple way of writing context variables, they'll still behave in ways
that seem mildly weird to me (as a non-async user). Specifically, my
head hurts when I try to understand what that decimal context example
"should do". My instincts say that the current behaviour is wrong -
but I'm not sure I can explain why. So on that example, I'd ask the
following of any proposal:

1. Users trying to write a context variable[1] shouldn't have to jump
through hoops to get "natural" behaviour. That means that suggestions
that the complexity be pushed onto decimal.context aren't OK unless
it's also accepted that the current behaviour is wrong, and the only
reason decimal.context needs to replicated is for backward
compatibility (and new code can ignore the problem).
2. The proposal should clearly establish what it views as "natural"
behaviour, and why. I'm not happy with "it's how decimal.context has
always behaved" as an explanation. Sure, people asking to break
backward compatibility should have a good justification, but equally,
people arguing to *preserve* an unintuitive current behaviour in new
code should be prepared to explain why it's not a bug. To put it
another way, context variables aren't required to be bug-compatible
with thread local storage.

[1] I'm assuming here that "settings that affect how a library behave"
is a common requirement, and the PEP is intended as the "one obvious
way" to implement them.

Nick's other async refactoring example is different. If the two forms
he showed don't behave identically in all contexts, then I'd consider
that to be a major problem. Saying that "coroutines are special" just
reads to me as "coroutines/async are sufficiently weird that I can't
expect my normal patterns of reasoning to work with them". (Apologies
if I'm conflating coroutines and async incorrectly - as a non-expert,
they are essentially indistinguishable to me). I sincerely hope that
isn't the message I should be getting - async is already more
inaccessible than I'd like for the average user.

The fact that Nick's async example immediately devolved into a
discussion that I can't follow at all is fine - to an extent. I don't
mind the experts debating implementation details that I don't need to
know about. But if you make writing context variables harder, just to
fix Nick's example, or if you make *using* async code like (either of)
Nick's forms harder, then I do object, because that's affecting the
end user experience.

In that context, I take Steve's comment as meaning "fiddling about
with how __aenter__ and __aexit__ work is fine, as that's internals
that non-experts like me don't care about - but making context
variables behave oddly because of this is *not* fine".

Apologies if the above is unhelpful. I've been lurking but not
commenting here, precisely because I *am* a non-expert, and I trust
the experts to build something that works. But when non-experts were
explicitly mentioned, I thought my input might be useful.

The following quote from the Zen seems particularly relevant here:

If the implementation is hard to explain, it's a bad idea.

(although the one about needing to be Dutch to understand why
something is obvious might well trump it ;-))

Paul

Re: [Python-ideas] if as

2017-09-07 Thread Paul Moore
On 7 September 2017 at 11:43, Denis Krienbühl  wrote:
> What I would love to see is the following syntax instead, which to me is much 
> cleaner:
>
>if computation() as result:
>do_something_with_result(result)

Hi - thanks for your suggestion! This has actually come up quite a lot
in the past. Here's a couple of links to threads you might want to
read (it's not surprising if you missed these, it's not that easy to
come up with a good search term for this topic).

https://mail.python.org/pipermail/python-ideas/2012-January/013461.html
https://mail.python.org/pipermail/python-ideas/2009-March/003423.html
  (This thread includes a note by Guido that he intentionally left out
this functionality)

In summary, it's a reasonably commonly suggested idea, but there's not
enough benefit to warrant adding it to the language.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] PEP 554: Stdlib Module to Support Multiple Interpreters in Python Code

2017-09-07 Thread Paul Moore
On 7 September 2017 at 20:14, Eric Snow <ericsnowcurren...@gmail.com> wrote:
> On Thu, Sep 7, 2017 at 11:52 AM, Paul Moore <p.f.mo...@gmail.com> wrote:
>> Is there any reason why passing a callable and args is unsafe, and/or
>> difficult? Naively, I'd assume that
>>
>> interp.call('f(a)')
>>
>> would be precisely as safe as
>>
>> interp.call(f, a)
>
> The problem for now is with sharing objects between interpreters.  The
> simplest safe approach currently is to restrict execution to source
> strings.  Then there are no complications.  Interpreter.call() makes
> sense but I'd like to wait until we get feel for how subinterpreters
> get used and until we address some of the issues with object passing.

Ah, OK. so if I create a new interpreter, none of the classes,
functions, or objects defined in my calling code will exist within the
target interpreter? That makes sense, but I'd missed that nuance from
the description. Again, this is probably worth noting in the PEP.

And for the record, based on that one fact, I'm perfectly OK with the
initial API being string-only.

> FWIW, here are what I see as the next steps for subinterpreters in the stdlib:
>
> 1. add a basic queue class for passing objects between interpreters
> * only support strings at first (though Nick pointed out we could
> fall back to pickle or marshal for unsupported objects)
> 2. implement CSP on top of subinterpreters
> 3. expand the queue's supported types
> 4. add something like Interpreter.call()
>
> I didn't include such a queue in this proposal because I wanted to
> keep it as focused as possible.  I'll add a note to the PEP about
> this.

This all sounds very reasonable. Thanks for the clarification.
Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] PEP 554: Stdlib Module to Support Multiple Interpreters in Python Code

2017-09-07 Thread Paul Moore
On 7 September 2017 at 19:26, Eric Snow  wrote:
> As part of the multi-core work I'm proposing the addition of the
> "interpreters" module to the stdlib.  This will expose the existing
> subinterpreters C-API to Python code.  I've purposefully kept the API
> simple.  Please let me know what you think.

Looks good. I agree with the idea of keeping the interface simple in
the first instance - we can easily add extra functionality later, but
removing stuff (or worse still, finding that stuff we thought was OK
but had missed corner cases of was broken) is much harder.

>run(code):
>
>   Run the provided Python code in the interpreter, in the current
>   OS thread.  Supported code: source text.

The only quibble I have is that I'd prefer it if we had a
run(callable, *args, **kwargs) method. Either instead of, or as well
as, the run(string) one here.

Is there any reason why passing a callable and args is unsafe, and/or
difficult? Naively, I'd assume that

interp.call('f(a)')

would be precisely as safe as

interp.call(f, a)

Am I missing something? Name visibility or scoping issues come to mind
as possible complications I'm not seeing. At the least, if we don't
want a callable-and-args form yet, a note in the PEP explaining why
it's been omitted would be worthwhile.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Remote package/module imports through HTTP/S

2017-08-23 Thread Paul Moore
On 23 August 2017 at 18:49, Chris Angelico  wrote:
> Still -1 on this becoming a stdlib package, as there's nothing I've
> yet seen that can't be done as a third-party package. But it's less
> scary than I thought it was :)

IMO, this would make a great 3rd party package (I note that it's not
yet published on PyPI). It's possible that it would end up being
extremely popular, and recognised as sufficiently secure - at which
point it may be worth considering for core inclusion. But it's also
possible that it remains niche, and/or people aren't willing to take
the security risks that it implies, in which case it's still useful to
those who do like it.

One aspect that hasn't been mentioned yet - as a 3rd party module, the
user (or the organisation's security team) can control whether or not
the ability to import over the web is available by controlling whether
the module is allowed to be installed - whereas with a core module,
it's there, like it or not, and *all* Python code has to be audited on
the assumption that it might be used. I could easily imagine cases
where the httpimport module was allowed on development machines and CI
servers, but forbidden on production (and pre-production) systems.
That option simply isn't available if the feature is in the core.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] tarfile.extractall progress

2017-09-01 Thread Paul Moore
On 1 September 2017 at 12:50, Tarek Ziadé  wrote:
> Hey,
>
> For large archives, I want to display a progress bar while the archive
> is being extracted with:
>
> https://docs.python.org/3/library/tarfile.html#tarfile.TarFile.extractall
>
> I could write my own version of extractall() to do this, or maybe we
> could introduce a callback option that gets called
> everytime .extract() is called in extractall()
>
> The callback can receive the tarinfo object and where it's being
> extracted. This is enough to plug a progress bar
> and avoid reinventing .extractall()
>
> I can add a ticket and maybe a patch if people think this is a good
> little enhancement

Sounds like a reasonable enhancement, but for your particular use
couldn't you just subclass TarFile and call your progress callback at
the end of the extract method after the base class extract?
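
A minimal sketch of that subclassing approach (the class and attribute
names are just illustrative):

    import tarfile

    class ProgressTarFile(tarfile.TarFile):
        # Set progress_callback on the instance before extracting.
        progress_callback = None

        def extract(self, member, path="", set_attrs=True, **kwargs):
            super().extract(member, path=path, set_attrs=set_attrs, **kwargs)
            if self.progress_callback is not None:
                info = (member if isinstance(member, tarfile.TarInfo)
                        else self.getmember(member))
                self.progress_callback(info, path)

    # with ProgressTarFile.open("big-archive.tar.gz") as tf:
    #     tf.progress_callback = lambda info, path: print("extracted", info.name)
    #     tf.extractall()

Since extractall() just calls extract() for each member, the callback
fires once per member without reimplementing extractall().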

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Membership of infinite iterators

2017-10-18 Thread Paul Moore
On 18 October 2017 at 10:56, Koos Zevenhoven  wrote:
> I'm unable to reproduce the "uninterruptible with Ctrl-C" problem with
> infinite iterators. At least itertools doesn't seem to have it:
>
>>>> import itertools
>>>> for i in itertools.count():
> ... pass
> ...
> ^CTraceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> KeyboardInterrupt

That's not the issue here, as the CPython interpreter implements this
with multiple opcodes, and checks between opcodes for Ctrl-C. The
demonstration is:

>>> import itertools
>>> 'x' in itertools.count()

... only way to break out is to kill the process.
Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Membership of infinite iterators

2017-10-18 Thread Paul Moore
On 18 October 2017 at 16:27, Koos Zevenhoven  wrote:
> So you're talking about code that would make a C-implemented Python iterable
> of strictly C-implemented Python objects and then pass this to something
> C-implemented like list(..) or sum(..), while expecting no Python code to be
> run or signals to be checked anywhere while doing it. I'm not really
> convinced that such code exists. But if such code does exist, it sounds like
> the code is heavily dependent on implementation details.

Well, the OP specifically noted that he had recently encountered
precisely that situation:

"""
I recently came across a bug where checking negative membership
(__contains__ returns False) of an infinite iterator will freeze the
program.
"""

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Membership of infinite iterators

2017-10-18 Thread Paul Moore
OK, looks like I've lost track of what this thread is about then.
Sorry for the noise.
Paul

On 18 October 2017 at 16:40, Koos Zevenhoven <k7ho...@gmail.com> wrote:
> On Wed, Oct 18, 2017 at 6:36 PM, Paul Moore <p.f.mo...@gmail.com> wrote:
>>
>> On 18 October 2017 at 16:27, Koos Zevenhoven <k7ho...@gmail.com> wrote:
>> > So you're talking about code that would make a C-implemented Python
>> > iterable
>> > of strictly C-implemented Python objects and then pass this to something
>> > C-implemented like list(..) or sum(..), while expecting no Python code
>> > to be
>> > run or signals to be checked anywhere while doing it. I'm not really
>> > convinced that such code exists. But if such code does exist, it sounds
>> > like
>> > the code is heavily dependent on implementation details.
>>
>> Well, the OP specifically noted that he had recently encountered
>> precisely that situation:
>>
>> """
>> I recently came across a bug where checking negative membership
>> (__contains__ returns False) of an infinite iterator will freeze the
>> program.
>> """
>>
>
> No, __contains__ does not expect no python code to be run, because Python
> code *can* run, as Serhiy in fact already demonstrated for another purpose:
>
> On Wed, Oct 18, 2017 at 3:53 PM, Serhiy Storchaka <storch...@gmail.com>
> wrote:
>>
>> 18.10.17 13:22, Nick Coghlan пише:
>>>
>>> 2.. These particular cases can be addressed locally using existing
>>> protocols, so the chances of negative side effects are low
>>
>>
>> Only the particular case `count() in count()` can be addressed without
>> breaking the following examples:
>>
>> >>> class C:
>> ... def __init__(self, count):
>> ... self.count = count
>> ... def __eq__(self, other):
>> ... print(self.count, other)
>> ... if not self.count:
>> ... return True
>> ... self.count -= 1
>> ... return False
>> ...
>> >>> import itertools
>> >>> C(5) in itertools.count()
>> 5 0
>> 4 1
>> 3 2
>> 2 3
>> 1 4
>> 0 5
>> True
>
>
>
> Clearly, Python code *does* run from within itertools.count.__contains__(..)
>
>
> ––Koos
>
>
> --
> + Koos Zevenhoven + http://twitter.com/k7hoven +
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Looking for input to help with the pip situation

2017-11-10 Thread Paul Moore
On 10 November 2017 at 11:37, Oleg Broytman  wrote:
> On Fri, Nov 10, 2017 at 07:48:35AM +0100, Michel Desmoulin 
>  wrote:
>> On linux you
>> can't pip install, you need --users, admin rights or a virtualenv.
>
>Isn't it the same on Windows? For an admin-installed Python you need
> --users, admin rights or a virtualenv. And a user-installed Python on
> Windows is equivalen to a user-compiled Python on Linux -- pip installs
> packages to the user-owned site-packages directory.

It is - but the default install on Windows (using the python.org
installer) is a per-user install. So beginners don't encounter
admin-installed Python (unless they ask for it, in which case they
made the choice so they should understand the implications ;-))

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Looking for input to help with the pip situation

2017-11-10 Thread Paul Moore
On 10 November 2017 at 08:01, Nick Coghlan  wrote:
> You can't have it both ways - the only way we can systematically mask
> the environmental differences between Windows, Linux and Mac OS X is
> by providing tooling that actually masks those differences, which
> means introducing that tooling becomes a prerequisite for teaching.

On Windows, which is the only platform I can reasonably comment on,
the killer issue is that the installer doesn't make the commands
"python" and "pip" available by default. Just checking my PC, both go
and rust (which I installed *ages* ago) do appear to, so they "just
work". Java sort-of also works like that (the runtime is on PATH, but
the compilers need to be added manually).

The biggest reason we don't add Python to PATH, as I understand it, is
because we need to consider the implications of people having multiple
versions of Python installed. As far as I know, no other language
allows that. But equally, most beginners wouldn't actually *have*
multiple versions installed. So maybe we should optimise for that case
(only one version of Python installed). That would mean:

1. Go back to adding Python to PATH. Because our installers don't say
"do you want to uninstall the old version", we should probably do a
check for a "python" command on PATH in the installer, and if there is
one, warn the user "You already have Python installed - if you are
upgrading you should manually uninstall the old version first,
otherwise your old installation will remain the default". We could get
more complex at this point, but that depends on what capabilities we
can include in the installer - I don't know how powerful the toolset
is.
2. Explicitly document that multiple Python interpreters on one
machine is considered "advanced", and users with that sort of setup
should be prepared to manage PATH themselves. I'd put that as
something like "It is possible to install multiple versions of Python
at once, but if you do that, you should understand the implications -
the average user should not need to do this".

We still have to deal with the fact that basically every Unix
environment is "advanced" in the above sense (the python2/python3
split). I don't have a solution for that (other than "upgrade to
Windows" ;-)).

> It doesn't completely solve the problem (as getting into and out of
> the environment is still platform specific), but it does mean that the
> ubiquitous online instructions to run "pip install package-name" and
> "python -m command" will actually work once people are inside their
> working environment.
>
> That tooling is venv:
>
> * it ensures you have "pip" on your PATH
> * it ensures you have "python" on your PATH
> * it ensures that you have the required permissions to install new packages
> * it ensures that any commands you install from PyPI will be also on your PATH
>
> When we choose not to use venv, then it becomes necessary to ensure
> each of those things individually for each potential system starting
> state

Currently, the reality is that people use virtualenv, not venv. All
higher-level tools I'm aware of wrap virtualenv (to allow Python 2.7
support). Enhancing the capabilities of venv is fine, but promoting
venv over virtualenv involves technical challenges across the whole
toolset, not just documentation/education.

But agreed, once we get people into a virtual environment (of any
form) the portability issues become significantly reduced. The main
outstanding issue is managing multiple environments, which could be
handled by having a special "default" environment that is the only one
we'd expect beginners to use/need.

> That said, we'd *like* the default to be is per-user package
> installations into the user home directory, but that creates
> additional problems:
>
> * "python" may be missing, and you'll have to use "python3" or "py" instead
> * "pip" may be missing (or mean "install for Python 2.7")
> * you have to specify "--user" on *every* call to pip, and most online
> guides won't say that
> * there's a major backwards compatibility problem with switching pip
> over to per-user package installs as the default (we still want to do
> it eventually, though)
> * on Windows, system-wide Python installs can't adjust per-user PATH
> settings, and per-user installs are subject to being broken by
> system-wide installs
> * on Windows, the distinction between a per-user install of Python,
> and per-user installs of a package is hard to teach
> * on Debian, I believe ~/.local/bin still isn't on PATH by default

Also on Windows, the per-user bin directory isn't added to PATH even
if you add the system Python to PATH in the installer.

> That said, I think there is one improvement we could feasibly make,
> which would be to introduce the notion of a "default user environment"
> into `venv`, such that there was a single "python -m venv shell"
> command that:
>
> * created a default user environment if it didn't already exist
> * launched a subshell with 

Re: [Python-ideas] Looking for input to help with the pip situation

2017-11-10 Thread Paul Moore
On 10 November 2017 at 10:01, Nick Coghlan <ncogh...@gmail.com> wrote:
> On 10 November 2017 at 19:50, Paul Moore <p.f.mo...@gmail.com> wrote:
>> On 10 November 2017 at 08:01, Nick Coghlan <ncogh...@gmail.com> wrote:
>>> That tooling is venv:
>>>
>>> * it ensures you have "pip" on your PATH
>>> * it ensures you have "python" on your PATH
>>> * it ensures that you have the required permissions to install new packages
>>> * it ensures that any commands you install from PyPI will be also on your 
>>> PATH
>>>
>>> When we choose not to use venv, then it becomes necessary to ensure
>>> each of those things individually for each potential system starting
>>> state
>>
>> Currently, the reality is that people use virtualenv, not venv. All
>> higher-level tools I'm aware of wrap virtualenv (to allow Python 2.7
>> support). Enhancing the capabilities of venv is fine, but promoting
>> venv over virtualenv involves technical challenges across the whole
>> toolset, not just documentation/education.
>
> We already assume there will be a step in understanding from "working
> with the latest Python 3.x locally" to "dealing with multiple Python
> versions". Switching from venv to virtualenv just becomes part of that
> process (and will often be hidden behind a higher level tool like
> pipenv, pew, or vex anyway).

OK, that's fair.
Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] How assignment should work with generators?

2017-11-27 Thread Paul Moore
On 27 November 2017 at 16:05, Chris Angelico  wrote:
> On Tue, Nov 28, 2017 at 2:35 AM, Steven D'Aprano  wrote:
>> In this case, there is a small but real benefit to counting the
>> assignment targets and being explicit about the number of items to
>> slice. Consider an extension to this "non-consuming" unpacking that
>> allowed syntax like this to pass silently:
>>
>> a, b = x, y, z
>>
>> That ought to be a clear error, right? I would hope you don't think that
>> Python should let that through. Okay, now we put x, y, z into a list,
>> then unpack the list:
>>
>> L = [x, y, z]
>> a, b = L
>>
>> That ought to still be an error, unless we explicity silence it. One way
>> to do so is with an explicit slice:
>>
>> a, b = L[:2]
>
> I absolutely agree with this for the default case. That's why I am
> ONLY in favour of the explicit options. So, for instance:
>
> a, b = x, y, z # error
> a, b, ... = x, y, z # valid (evaluates and ignores z)

Agreed, only explicit options are even worth considering (because of
backward compatibility if for no other reason). However, the unpacking
syntax is already complex, and hard to search for. Making it more
complex needs a strong justification. And good luck in doing a google
search for "..." if you saw that code in a project you had to
maintain. Seriously, has anyone done a proper investigation into how
much benefit this proposal would provide? It should be reasonably easy
to do a code search for something like "=.*islice", to find code
that's a candidate for using the proposed syntax. I suspect there's
very little code like that.

I'm -1 on this proposal without a much better justification of the
benefits it will bring.
Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Possible Enhancement to py.exe Python Launcher

2017-11-23 Thread Paul Moore
If I understand your proposal correctly, this is already possible, see
https://www.python.org/dev/peps/pep-0397/#python-version-qualifiers

The details are likely a little different than what you're proposing,
but if they don't cover what you're trying to do, maybe you could give
a more specific example of a use case that isn't currently possible?

Paul

On 23 November 2017 at 08:43, Steve Barnes  wrote:
> Following on from the discussions on pip I would like to suggest, (and
> possibly implement), a minor change to the current py.exe python launcher.
>
> Currently the launcher, when called without a version specifier,
> defaults to the highest version on the basis of 3>2, x.11 > x.9, -64 >
> -32 however this may not always be the most desirable behaviour.
>
> I would like to suggest that it take a configuration value for the
> default version to use, (and possibly a separate ones for pyw, & file
> associations), honouring the following with the later overriding:
>
> default - as current
> system setting - from registry
> user setting - from registry
> user setting - from config file maybe %appdata%\pylauncher\defaults.ini
> environment setting - from getenv
> local setting - from .pyconfig file in the current directory.
>
> Options would be the same format as the -X[.Y][-BB] format currently
> accepted on the command line plus a --No-Py-Default option which would
> always error out if the version to invoke was not specified.
>
> I see this as potentially adding quite a lot of value for people with
> multiple python versions installed and it could tie in quite well with
> the possible use of py.exe as an entry point for tools such as pip.
>
> It might also increase the awareness of the launcher as those who have
> to stick with python 2 for the moment or in a specific context could set
> the default to what they need but can always override.
>
> --
> Steve (Gadget) Barnes
> Any opinions in this message are my personal opinions and do not reflect
> those of my employer.
>
> ---
> This email has been checked for viruses by AVG.
> http://www.avg.com
>
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] [Python-Dev] What's the status of PEP 505: None-aware operators?

2017-11-29 Thread Paul Moore
On 29 November 2017 at 12:41, Nick Coghlan  wrote:
> On 29 November 2017 at 22:38, Stephan Houben  wrote:
>> What about more English-like syntax:
>>
>> X or else Y
>
> The problem with constructs like this is that they look like they
> should mean the same thing as "X or Y".

Keyword based and multi-word approaches also break down much faster
when you get more terms.

X or else Y

looks OK (ignoring Nick's comment - I could pick another keyword-based
proposal, but I'm too lazy to look for one I like), but when you have
4 options,

X or else Y or else Z or else W

the benefit isn't as obvious. Use lower-case and longer names

item_one or else item_two or else list_one[the_index] or else dict_one['key_one']

and it becomes just a muddle of words.

Conversely, punctuation-based examples look worse with shorter
variables and with expressions rather than identifiers:

item_one ?? item_two ?? another_item ?? one_more_possibility

vs

x ?? y[2] ?? kw['id'] ?? 3 + 7

IMO, this is a case where artificial examples are unusually bad at
conveying the actual feel of a proposal. It's pretty easy to turn
someone's acceptable-looking example into an incomprehensible mess,
just by changing variable names and example terms. So I think it's
critically important for any proposal along these lines (even just
posts to the mailing list, and definitely for a PEP), that it's argued
in terms of actual code examples in current projects that would
reasonably be modified to use the proposed syntax. And people wanting
to be particularly honest in their proposals should probably include
both best-case and worst-case examples of readability.

Paul

PS Also, I want a pony. (I really do understand that the above is not
realistic, but maybe I can hope that at least anyone writing a PEP
take it into consideration :-))
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] How assignment should work with generators?

2017-11-30 Thread Paul Moore
On 30 November 2017 at 16:16, Steve Barnes  wrote:
> I had a sneaky feeling that it did, which raises the question of what
> the bleep this enormous thread is about, since the fundamental syntax
> currently exists

Essentially, it's about the fact that to build remainder you need to
copy all of the remaining items out of the RHS. And for an infinite
iterator like count() this is an infinite loop.

There's also the point that if the RHS is an iterator, reading *any*
values out of it changes its state - and
1. a, b, *remainder = rhs therefore exhausts rhs
2. a, b = rhs reads "one too many" values from rhs to check if
there are extra values (which the programmer has asserted shouldn't be
there by not including *remainder). Both effects are illustrated below.
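
For concreteness (toy values, CPython behaviour):

    import itertools

    it = iter(range(5))
    a, b, *rest = it          # drains the iterator to build rest
    print(rest)               # [2, 3, 4]

    it = iter(range(5))
    try:
        a, b = it             # pulls a third item just to check none exist
    except ValueError:
        pass
    print(next(it))           # 3 -- item 2 was consumed by that check

    # a, b, *rest = itertools.count()   # never terminates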

Mostly corner cases, and I don't believe there have been any
non-artificial examples posted in this thread. Certainly no-one has
offered a real-life code example that is made significantly worse by
the current semantics, and/or which couldn't be easily worked around
without needing a language change.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] How assignment should work with generators?

2017-11-27 Thread Paul Moore
On 27 November 2017 at 21:54, Kirill Balunov <kirillbalu...@gmail.com> wrote:
> 2017-11-27 19:23 GMT+03:00 Paul Moore <p.f.mo...@gmail.com>:
>
>>
>> It should be reasonably easy
>> to do a code search for something like "=.*islice", to find code
>> that's a candidate for using the proposed syntax. I suspect there's
>> very little code like that.
>
>
> While islice is something equivalent, it can be used in places like:
>
> x, y = seq[:2]
> x, y, z = seq[:3]

But in those places,

x, y, *_ = seq

works fine at the moment. So if the programmer didn't feel inclined to
use x, y, *_ = seq, there's no reason to assume that they would get
any benefit from x, y, ... = seq either.

Paul
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] How assignment should work with generators?

2017-12-01 Thread Paul Moore
On 1 December 2017 at 09:48, Kirill Balunov <kirillbalu...@gmail.com> wrote:
> Probably, some time ago it was necessary to split this thread into two
> questions:
> 1. Philosophical question regarding sequences and iterators. In particular,
> should they behave differently depending on the context,
> or, in other words, whether to emphasize their different nature as
> fixed-size containers and those that are lazily produce values on demand.
> 2. Additional syntax in the assignment statement for partial extraction of
> values from the iterable.

That's a good summary of the two elements of the discussion here.

On (1), I'd say that Python should *not* have context-dependent
semantics like this. It's something Perl was famous for (list and
scalar contexts) and IMO makes for pretty unreadable code. Python's
Zen here is "Explicit is better than implicit". Specifically, having
the semantics of the assignment statement vary depending on the type
of the value being assigned seems like a very subtle distinction, and
not in line with any other statement in the language.

On (2), that's something that is relatively simple to debate - all of
the normal rules for new syntax proposals apply - what problem does it
solve, how much of an improvement over existing ways of solving the
problem does the proposal give, how easy is it for beginners to
understand and for people encountering it to locate the documentation,
does it break backward compatibility, etc... Personally I don't think
it's a significant enough benefit but I'm willing to be swayed if good
enough arguments are presented (currently the "a, b, ... = value"
syntax is my preferred proposal, but I don't think there's enough
benefit to justify implementing it).

> 2017-11-30 22:19 GMT+03:00 Paul Moore <p.f.mo...@gmail.com>:
>>
>>
>> Mostly corner cases, and I don't believe there have been any
>> non-artificial examples posted in this thread.
>>
>> Certainly no-one has offered a real-life code example that is made
>> significantly worse by
>> the current semantics, and/or which couldn't be easily worked around
>> without needing a language change.
>
>
> Yes, in fact, this is a good question, is whether that is sufficiently
> useful to justify extending the syntax. But it is not about corner cases, it
> is rather usual situation.
> Nevertheless, this is the most difficult moment for Rationale. By now, this
> feature does not give you new opportunities for solving problems. It's more
> about expressiveness and convenience. You can write:
>
> x, y, ... = iterable
>
> or,
>
> it = iter(iterable)
> x, y = next(it), next(it)
>
> or,
>
> from itertools import islice
> x, y = islice(iterable, 2)
>
> or,
> x, y = iterable[:2]
>
> and others, also in some cases when you have infinite generator or iterator,
> you should use 2nd or 3rd.

It's significant to me that you're still only able to offer artificial
code as examples. In real code, I've certainly needed this type of
behaviour, but it's never been particularly problematic to just use

first_result = next(it)
second_result = next(it)

Or if I have an actual sequence, x, y = seq[:2]

The next() approach actually has some issues if the iterator
terminates early - StopIteration is typically not the exception I
want, here. But all that means is that I should use islice more. The
reason I don't think to is because I need to import it from itertools.
But that's *not* a good argument - we could use the same argument to
make everything a builtin. Importing functionality from modules is
fundamental to Python, and "this is a common requirement, so it should
be a builtin" is an argument that should be treated with extreme
suspicion. What I *don't* have a problem with is the need to specify
the number of items - that seems completely natural to me, I'm
confirming that I require an iterable that has at least 2 elements at
this point in my code.

The above is an anecdotal explanation of my experience with real code
- still not compelling, but hopefully better than an artificial
example with no real-world context :-)


> In fact, this has already been said and probably
> I will not explain it better:
>
> 2017-11-28 1:40 GMT+03:00 Greg Ewing <greg.ew...@canterbury.ac.nz>:
>>
>> Guido van Rossum wrote:
>>>
>>> Is this problem really important enough that it requires dedicated
>>> syntax? Isn't the itertools-based solution good enough?
>>
>>
>> Well, it works, but it feels very clumsy. It's annoying to
>> have to specify the number of items in two places.
>>
>> Also, it seems perverse to have to tell Python to do *more*
>> stuff to mitigate the effects of stuff it does that you
>> didn't want it to do in the first place.
>>

<    1   2   3   4   5   6   7   8   >