Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Paul Moore
On 17 August 2015 at 05:34, Victor Stinner victor.stin...@gmail.com wrote:
 2015-08-16 7:21 GMT-07:00 Paul Moore p.f.mo...@gmail.com:
 2. By far and away the most common use for me would be things like
 print(f"Iteration {n}: Took {end-start) seconds"). At the moment I use
 str.format() for this, and it's annoyingly verbose. This would be a
 big win, and I'm +1 on the PEP for this specific reason.

 You can use a temporary variable, it's not much longer:
print("Iteration {n}: Took {dt) seconds".format(n=n, dt=end-start))
 becomes
dt = end - start
print(f"Iteration {n}: Took {dt) seconds")

... which is significantly shorter (my point). And you can use an inline expression

print(f"Iteration {n}: Took {end-start) seconds")

with (IMO) even better readability than the version with a temporary variable.
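As a concrete illustration of the verbosity argument (using f-strings as later shipped in Python 3.6; the numbers are made up for the sketch):

```python
# The three spellings discussed above, side by side (Python 3.6+).
# Values are invented purely for demonstration.
n, start, end = 3, 1.0, 2.5

via_format = "Iteration {n}: Took {dt} seconds".format(n=n, dt=end - start)

dt = end - start
via_temp = f"Iteration {n}: Took {dt} seconds"       # temporary variable

via_inline = f"Iteration {n}: Took {end - start} seconds"  # inline expression

print(via_inline)  # Iteration 3: Took 1.5 seconds
assert via_format == via_temp == via_inline
```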


 3. All of the complex examples look scary, but in practice I wouldn't
 write stuff like that - why would anyone do so unless they were being
 deliberately obscure?

 I'm quite sure that users will write complex code in f-strings.

So am I. Some people will always write bad code. I won't (or at least,
I'll try not to write code that *I* consider to be complex :-)) but
"you can use this construct to write bad code" isn't an argument for
dropping the feature. If you couldn't find *good* uses, that would be
different, but that doesn't seem to be the case here (at least in my
view).

Paul.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Larry Hastings

On 08/17/2015 03:02 AM, Paul Moore wrote:

On 17 August 2015 at 05:34, Victor Stinner victor.stin...@gmail.com wrote:

2015-08-16 7:21 GMT-07:00 Paul Moore p.f.mo...@gmail.com:

3. All of the complex examples look scary, but in practice I wouldn't
write stuff like that - why would anyone do so unless they were being
deliberately obscure?

I'm quite sure that users will write complex code in f-strings.

So am I. Some people will always write bad code. I won't (or at least,
I'll try not to write code that *I* consider to be complex :-)) but
"you can use this construct to write bad code" isn't an argument for
dropping the feature. If you couldn't find *good* uses, that would be
different, but that doesn't seem to be the case here (at least in my
view).


I think this corner of the debate is covered by the "Consenting adults" 
guiding principle we use 'round these parts.


Cheers,


//arry/


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Paul Moore
On 17 August 2015 at 12:48, Larry Hastings la...@hastings.org wrote:
 I think this corner of the debate is covered by the "Consenting adults"
 guiding principle we use 'round these parts.

Precisely.
Paul


Re: [Python-Dev] Burning down the backlog.

2015-08-17 Thread Nathaniel Smith
On Mon, Aug 17, 2015 at 7:37 PM, Robert Collins
robe...@robertcollins.net wrote:
 On 26 July 2015 at 07:28, Robert Collins robe...@robertcollins.net wrote:
 For my part, I'm going to pick up more or less one thing a day and
 review it, but I think it would be great if other committers were to
 also to do this: if we had 5 of us doing 1 a day, I think we'd burn
 down this 45 patch backlog rapidly without significant individual
 cost. At which point, we can fairly say to folk doing triage that
 we're ready for patches :)

 We're down to 9 such patches, and reading through them today there are
 none that I felt comfortable moving forward: either their state is
 unclear, or they are waiting for action from a *specific* core.

 However - 9 isn't a bad number for 'patches that the triagers think
 are ready to commit' inventory.

 So yay!. Also - triagers, thank you for feeding patches through the
 process. Please keep it up :)

Awesome!

If you're looking for something to do, the change in this patch had
broad consensus, but has been stalled waiting for review for a while,
and the lack of a final decision is leaving other projects in a
somewhat uncomfortable position (they want to match CPython but
CPython isn't deciding):

https://bugs.python.org/issue24294

;-)

-n

-- 
Nathaniel J. Smith -- http://vorpus.org


Re: [Python-Dev] Burning down the backlog.

2015-08-17 Thread Ben Finney
Robert Collins robe...@robertcollins.net writes:

 However - 9 isn't a bad number for 'patches that the triagers think
 are ready to commit' inventory.

 So yay!. Also - triagers, thank you for feeding patches through the
 process. Please keep it up :)

If I were a cheerleader I would be able to lead a rousing “Yay, go team
backlog burners!”

-- 
 \ “I may disagree with what you say, but I will defend to the |
  `\death your right to mis-attribute this quote to Voltaire.” |
_o__)   —Avram Grumer, rec.arts.sf.written, 2000-05-30 |
Ben Finney



Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Stephen J. Turnbull
Barry Warsaw writes:

  On Aug 17, 2015, at 11:02 AM, Paul Moore wrote:
  
  print(f"Iteration {n}: Took {end-start) seconds")
  
  This illustrates (more) problems I have with arbitrary expressions.
  
  First, you've actually made a typo there; it should be
  {end-start} -- notice the trailing curly brace.  Second, what if
  you typoed that as {end_start}?  According to PEP 498 the
  original typo above should trigger a SyntaxError

That ship has sailed; you have the same problem with str.format format
strings already.

  and the second a run-time error (NameError?).

Ditto.
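The two failure modes being compared can be seen in a plain Python 3.6+ session (a sketch with invented names; per PEP 498 the brace typo fails at compile time, while the str.format misspelling only fails when formatted):

```python
# Brace typo in an f-string: rejected before the code ever runs.
try:
    compile('f"Iteration {n}: Took {end-start) seconds"', "<demo>", "eval")
except SyntaxError:
    print("f-string brace typo: caught at compile time")

# Misspelled name in a str.format template: only fails at run time.
try:
    "Took {end_start} seconds".format(end=2.0, start=1.0)  # typo: end_start
except KeyError as exc:
    print("str.format name typo: caught only at run time:", exc)
```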

  But how will syntax highlighters and linters help you discover your
  bugs before you've even saved the file?

They need to recognize that a string prefixed with "f" is special,
that it's not just a single token, and then parse the syntax.  The hardest
part is finding the end-of-string delimiter!  The expression itself is
not a problem, since either we already have the code to handle the
expression, or we don't (and your whole point is moot).

Emacs abandoned the idea that you should do syntax highlighting without
parsing well over a decade ago.  If Python can implement the syntax,
Emacs can highlight it.  It's just a question of whether there's the will
to do it on the part of the python-mode maintainers.

I'm sure the same can be said about other linters and highlighters for
Python, though I have no part in implementing them.
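For what it's worth, CPython's own tokenize module eventually took exactly this parse-the-syntax route: since 3.12 (PEP 701) it emits dedicated FSTRING_START/FSTRING_MIDDLE/FSTRING_END tokens, while on 3.6-3.11 the whole literal comes back as a single STRING token. A quick probe:

```python
import io
import tokenize

# Tokenize a single f-string expression and list the token names.
src = 'f"Iteration {n}: Took {end - start} seconds"\n'
names = [tokenize.tok_name[tok.type]
         for tok in tokenize.generate_tokens(io.StringIO(src).readline)]
print(names)
# Python 3.12+: starts with FSTRING_START, with NAME/OP tokens for the
#   embedded expressions.
# Python 3.6-3.11: a single STRING token for the whole literal.
```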


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Steve Dower

On 17Aug2015 0813, Barry Warsaw wrote:

On Aug 18, 2015, at 12:58 AM, Chris Angelico wrote:


The linters could tell you that you have no 'end' or 'start' just as
easily when it's in that form as when it's written out in full.
Certainly the mismatched brackets could easily be caught by any sort
of syntax highlighter. The rules for f-strings are much simpler than,
say, the PHP rules and the differences between ${...} and {$...},
which I've seen editors get wrong.


I'm really asking whether it's technically feasible and realistically possible
for them to do so.  I'd love to hear from the maintainers of pyflakes, pylint,
Emacs, vim, and other editors, linters, and other static analyzers on a rough
technical assessment of whether they can support this and how much work it
would be.


With the current format string proposals (allowing arbitrary 
expressions) I think I'd implement it in our parser with a 
FORMAT_STRING_TOKEN, a FORMAT_STRING_JOIN_OPERATOR and a 
FORMAT_STRING_FORMAT_OPERATOR.


A FORMAT_STRING_TOKEN would be started by f('|"|'''|""") and ended by 
matching quotes or before an open brace (that is not escaped).


A FORMAT_STRING_JOIN_OPERATOR would be inserted as the '{', which we'd 
either colour as part of the string or the regular brace colour. This 
also enables a parsing context where a colon becomes the 
FORMAT_STRING_FORMAT_OPERATOR and the right-hand side of this binary 
operator would be FORMAT_STRING_TOKEN. The final close brace becomes 
another FORMAT_STRING_JOIN_OPERATOR and the rest of the string is 
FORMAT_STRING_TOKEN.


So it'd translate something like this:

f"This {text} is my {string:{length+3}}"

FORMAT_STRING_TOKEN[f"This ]
FORMAT_STRING_JOIN_OPERATOR[{]
IDENTIFIER[text]
FORMAT_STRING_JOIN_OPERATOR[}]
FORMAT_STRING_TOKEN[ is my ]
FORMAT_STRING_JOIN_OPERATOR[{]
IDENTIFIER[string]
FORMAT_STRING_FORMAT_OPERATOR[:]
FORMAT_STRING_TOKEN[]
FORMAT_STRING_JOIN_OPERATOR[{]
IDENTIFIER[length]
OPERATOR[+]
NUMBER[3]
FORMAT_STRING_JOIN_OPERATOR[}]
FORMAT_STRING_TOKEN[]
FORMAT_STRING_JOIN_OPERATOR[}]
FORMAT_STRING_TOKEN["]

I *believe* (without having tried it) that this would let us produce a 
valid tokenisation (in our model) without too much difficulty, and 
highlight/analyse correctly, including validating matching braces. 
Getting the precedence correct on the operators might be more difficult, 
but we may also just produce an AST that looks like a function call, 
since that will give us good enough handling once we're past tokenisation.


A simpler tokenisation that would probably be sufficient for many 
editors would be to treat the first and last segments ([f"This {] and 
[}"]) as groupings and each section of text as separators, giving this:


OPEN_GROUPING[f"This {]
EXPRESSION[text]
COMMA[} is my {]
EXPRESSION[string]
COMMA[:{]
EXPRESSION[length+3]
COMMA[}}]
CLOSE_GROUPING["]

Initial parsing may be a little harder, but it should mean less trouble 
when expressions spread across multiple lines, since that is already 
handled for other types of groupings. And if any code analysis is 
occurring, it should be happening for dict/list/etc. contents already, 
and so format strings will get it too.


So I'm confident we can support it, and I expect either of these two 
approaches will work for most tools without too much trouble. (There's 
also a middle ground where you create new tokens for format string 
components, but combine them like the second example.)
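The second (grouping) scheme is simple enough to sketch directly. The function below is my own illustration, not part of the proposal: it handles only a simplified f-string (double quotes, no escapes or nested quotes) and splits it into alternating literal and expression segments while validating the braces.

```python
# Hypothetical sketch of the grouping scheme described above.
# Simplifications (no escapes, no nested quotes) are mine.
def split_fstring(src):
    assert src.startswith('f"') and src.endswith('"')
    parts, buf, depth = [], "", 0
    for ch in src[2:-1]:
        if ch == "{":
            if depth == 0:
                parts.append(("LITERAL", buf))
                buf = ""
            else:
                buf += ch  # nested brace stays inside the expression
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth < 0:
                raise SyntaxError("single '}' is not allowed")
            if depth == 0:
                parts.append(("EXPR", buf))
                buf = ""
            else:
                buf += ch
        else:
            buf += ch
    if depth:
        raise SyntaxError("expecting '}'")
    parts.append(("LITERAL", buf))
    return parts

for kind, text in split_fstring('f"This {text} is my {string:{length+3}}"'):
    print(kind, repr(text))
```

Note that the nested `{length+3}` simply stays inside the outer expression segment, matching the "initial parsing may be a little harder" trade-off described above.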


Cheers,
Steve


Cheers,
-Barry





Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Steve Dower

On 17Aug2015 1506, Steve Dower wrote:

On 17Aug2015 0813, Barry Warsaw wrote:

On Aug 18, 2015, at 12:58 AM, Chris Angelico wrote:


The linters could tell you that you have no 'end' or 'start' just as
easily when it's in that form as when it's written out in full.
Certainly the mismatched brackets could easily be caught by any sort
of syntax highlighter. The rules for f-strings are much simpler than,
say, the PHP rules and the differences between ${...} and {$...},
which I've seen editors get wrong.


I'm really asking whether it's technically feasible and realistically
possible
for them to do so.  I'd love to hear from the maintainers of pyflakes,
pylint,
Emacs, vim, and other editors, linters, and other static analyzers on
a rough
technical assessment of whether they can support this and how much
work it
would be.


With the current format string proposals (allowing arbitrary
expressions) I think I'd implement it in our parser with a
FORMAT_STRING_TOKEN, a FORMAT_STRING_JOIN_OPERATOR and a
FORMAT_STRING_FORMAT_OPERATOR.

A FORMAT_STRING_TOKEN would be started by f('|"|'''|""") and ended by
matching quotes or before an open brace (that is not escaped).

A FORMAT_STRING_JOIN_OPERATOR would be inserted as the '{', which we'd
either colour as part of the string or the regular brace colour. This
also enables a parsing context where a colon becomes the
FORMAT_STRING_FORMAT_OPERATOR and the right-hand side of this binary
operator would be FORMAT_STRING_TOKEN. The final close brace becomes
another FORMAT_STRING_JOIN_OPERATOR and the rest of the string is
FORMAT_STRING_TOKEN.

So it'd translate something like this:

f"This {text} is my {string:{length+3}}"

FORMAT_STRING_TOKEN[f"This ]
FORMAT_STRING_JOIN_OPERATOR[{]
IDENTIFIER[text]
FORMAT_STRING_JOIN_OPERATOR[}]
FORMAT_STRING_TOKEN[ is my ]
FORMAT_STRING_JOIN_OPERATOR[{]
IDENTIFIER[string]
FORMAT_STRING_FORMAT_OPERATOR[:]
FORMAT_STRING_TOKEN[]
FORMAT_STRING_JOIN_OPERATOR[{]
IDENTIFIER[length]
OPERATOR[+]
NUMBER[3]
FORMAT_STRING_JOIN_OPERATOR[}]
FORMAT_STRING_TOKEN[]
FORMAT_STRING_JOIN_OPERATOR[}]
FORMAT_STRING_TOKEN["]

I *believe* (without having tried it) that this would let us produce a
valid tokenisation (in our model) without too much difficulty, and
highlight/analyse correctly, including validating matching braces.
Getting the precedence correct on the operators might be more difficult,
but we may also just produce an AST that looks like a function call,
since that will give us good enough handling once we're past
tokenisation.

A simpler tokenisation that would probably be sufficient for many
editors would be to treat the first and last segments ([f"This {] and
[}"]) as groupings and each section of text as separators, giving this:

OPEN_GROUPING[f"This {]
EXPRESSION[text]
COMMA[} is my {]
EXPRESSION[string]
COMMA[:{]
EXPRESSION[length+3]
COMMA[}}]
CLOSE_GROUPING["]

Initial parsing may be a little harder, but it should mean less trouble
when expressions spread across multiple lines, since that is already
handled for other types of groupings. And if any code analysis is
occurring, it should be happening for dict/list/etc. contents already,
and so format strings will get it too.

So I'm confident we can support it, and I expect either of these two
approaches will work for most tools without too much trouble. (There's
also a middle ground where you create new tokens for format string
components, but combine them like the second example.)


The middle ground would probably be required for correct highlighting. I 
implied but didn't specify that the tokens in my second example would 
get special treatment here.



Cheers,
Steve


Cheers,
-Barry




Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Wes Turner
On Aug 17, 2015 2:23 PM, Nikolaus Rath nikol...@rath.org wrote:

 On Aug 16 2015, Paul Moore p.f.mo...@gmail.com wrote:
  2. By far and away the most common use for me would be things like
  print(f"Iteration {n}: Took {end-start) seconds").

 I believe an even more common use willl be

 print(f"Iteration {n+1}: Took {end-start} seconds")

 Note that not allowing expressions would turn this into the rather
 verbose:

 iteration=n+1
 duration=end-start
 print(f"Iteration {iteration}: Took {duration} seconds")

* Is this more testable?
* mutability of e.g. end.__sub__
   * (do I add this syntax highlighting for Python < 3.6?)



 Best,
 -Nikolaus

 --
 GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
 Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

  »Time flies like an arrow, fruit flies like a Banana.«


Re: [Python-Dev] Burning down the backlog.

2015-08-17 Thread Robert Collins
On 26 July 2015 at 07:28, Robert Collins robe...@robertcollins.net wrote:
 On 21 July 2015 at 19:40, Nick Coghlan ncogh...@gmail.com wrote:

 All of this is why the chart that I believe should be worrying people
 is the topmost one on this page:
 http://bugs.python.org/issue?@template=stats

 Both the number of open issues and the number of open issues with
 patches are steadily trending upwards. That means the bottleneck in
 the current process *isn't* getting patches written in the first
 place, it's getting them up to the appropriate standards and applied.
 Yet the answer to the problem isn't a simple recruit more core
 developers, as the existing core developers are *also* the bottleneck
 in the review and mentoring process for *new* core developers.

 Those charts don't show patches in 'commit-review' -
 http://bugs.python.org/issue?%40columns=title%40columns=idstage=5%40columns=activity%40sort=activitystatus=1%40columns=status%40pagesize=50%40startwith=0%40sortdir=on%40action=search

 There are only 45 of those patches.

 AIUI - and I'm very new to core here - anyone in triagers can get
 patches up to commit-review status.

 I think we should set a goal to keep inventory low here - e.g. review
 and either bounce back to patch review, or commit, in less than a
 month. Now - a month isn't super low, but we have lots of stuff
 greater than a month.

 For my part, I'm going to pick up more or less one thing a day and
 review it, but I think it would be great if other committers were to
 also to do this: if we had 5 of us doing 1 a day, I think we'd burn
 down this 45 patch backlog rapidly without significant individual
 cost. At which point, we can fairly say to folk doing triage that
 we're ready for patches :)

We're down to 9 such patches, and reading through them today there are
none that I felt comfortable moving forward: either their state is
unclear, or they are waiting for action from a *specific* core.

However - 9 isn't a bad number for 'patches that the triagers think
are ready to commit' inventory.

So yay!. Also - triagers, thank you for feeding patches through the
process. Please keep it up :)

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread MRAB

On 2015-08-17 23:06, Steve Dower wrote:

On 17Aug2015 0813, Barry Warsaw wrote:

On Aug 18, 2015, at 12:58 AM, Chris Angelico wrote:


The linters could tell you that you have no 'end' or 'start' just as
easily when it's in that form as when it's written out in full.
Certainly the mismatched brackets could easily be caught by any sort
of syntax highlighter. The rules for f-strings are much simpler than,
say, the PHP rules and the differences between ${...} and {$...},
which I've seen editors get wrong.


I'm really asking whether it's technically feasible and realistically possible
for them to do so.  I'd love to hear from the maintainers of pyflakes, pylint,
Emacs, vim, and other editors, linters, and other static analyzers on a rough
technical assessment of whether they can support this and how much work it
would be.


With the current format string proposals (allowing arbitrary
expressions) I think I'd implement it in our parser with a
FORMAT_STRING_TOKEN, a FORMAT_STRING_JOIN_OPERATOR and a
FORMAT_STRING_FORMAT_OPERATOR.

A FORMAT_STRING_TOKEN would be started by f('|"|'''|""") and ended by
matching quotes or before an open brace (that is not escaped).

A FORMAT_STRING_JOIN_OPERATOR would be inserted as the '{', which we'd
either colour as part of the string or the regular brace colour. This
also enables a parsing context where a colon becomes the
FORMAT_STRING_FORMAT_OPERATOR and the right-hand side of this binary
operator would be FORMAT_STRING_TOKEN. The final close brace becomes
another FORMAT_STRING_JOIN_OPERATOR and the rest of the string is
FORMAT_STRING_TOKEN.

So it'd translate something like this:

f"This {text} is my {string:{length+3}}"

FORMAT_STRING_TOKEN[f"This ]
FORMAT_STRING_JOIN_OPERATOR[{]
IDENTIFIER[text]
FORMAT_STRING_JOIN_OPERATOR[}]
FORMAT_STRING_TOKEN[ is my ]
FORMAT_STRING_JOIN_OPERATOR[{]
IDENTIFIER[string]
FORMAT_STRING_FORMAT_OPERATOR[:]
FORMAT_STRING_TOKEN[]
FORMAT_STRING_JOIN_OPERATOR[{]
IDENTIFIER[length]
OPERATOR[+]
NUMBER[3]
FORMAT_STRING_JOIN_OPERATOR[}]
FORMAT_STRING_TOKEN[]
FORMAT_STRING_JOIN_OPERATOR[}]
FORMAT_STRING_TOKEN["]


I'm not sure about that. I think it might work better with, say,
FORMAT_OPEN for '{' and FORMAT_CLOSE for '}':

FORMAT_STRING_TOKEN[f"This ]
FORMAT_OPEN
IDENTIFIER[text]
FORMAT_CLOSE
FORMAT_STRING_TOKEN[ is my ]
FORMAT_OPEN
IDENTIFIER[string]
FORMAT_STRING_FORMAT_OPERATOR[:]
FORMAT_STRING_TOKEN[]
FORMAT_OPEN
IDENTIFIER[length]
OPERATOR[+]
NUMBER[3]
FORMAT_CLOSE
FORMAT_CLOSE
FORMAT_STRING_TOKEN["]
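One appeal of distinct FORMAT_OPEN/FORMAT_CLOSE tokens is that brace matching reduces to a depth counter over the token stream. The checker below is my own sketch (token names from the post; the function itself is an assumption, not part of any proposal):

```python
# Sketch: validate brace nesting over a FORMAT_OPEN/FORMAT_CLOSE stream.
def braces_balanced(token_names):
    depth = 0
    for name in token_names:
        if name == "FORMAT_OPEN":
            depth += 1
        elif name == "FORMAT_CLOSE":
            if depth == 0:
                return False  # stray close brace
            depth -= 1
    return depth == 0

# The token stream from the example above, abbreviated to names only.
stream = ["FORMAT_STRING_TOKEN", "FORMAT_OPEN", "IDENTIFIER",
          "FORMAT_CLOSE", "FORMAT_STRING_TOKEN", "FORMAT_OPEN",
          "IDENTIFIER", "FORMAT_STRING_FORMAT_OPERATOR",
          "FORMAT_STRING_TOKEN", "FORMAT_OPEN", "IDENTIFIER",
          "OPERATOR", "NUMBER", "FORMAT_CLOSE", "FORMAT_CLOSE",
          "FORMAT_STRING_TOKEN"]
print(braces_balanced(stream))       # True
print(braces_balanced(stream[:-3]))  # dropped closers: False
```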


I *believe* (without having tried it) that this would let us produce a
valid tokenisation (in our model) without too much difficulty, and
highlight/analyse correctly, including validating matching braces.
Getting the precedence correct on the operators might be more difficult,
but we may also just produce an AST that looks like a function call,
since that will give us good enough handling once we're past tokenisation.

A simpler tokenisation that would probably be sufficient for many
editors would be to treat the first and last segments ([fThis {] and
[}]) as groupings and each section of text as separators, giving this:

OPEN_GROUPING[fThis {]
EXPRESSION[text]
COMMA[} is my {]
EXPRESSION[string]
COMMA[:{]
EXPRESSION[length+3]
COMMA[}}]
CLOSE_GROUPING[]

Initial parsing may be a little harder, but it should mean less trouble
when expressions spread across multiple lines, since that is already
handled for other types of groupings. And if any code analysis is
occurring, it should be happening for dict/list/etc. contents already,
and so format strings will get it too.

So I'm confident we can support it, and I expect either of these two
approaches will work for most tools without too much trouble. (There's
also a middle ground where you create new tokens for format string
components, but combine them like the second example.)

Cheers,
Steve


Cheers,
-Barry





Re: [Python-Dev] [Datetime-SIG] PEP 495 (Local Time Disambiguation) is ready for pronouncement

2015-08-17 Thread Alexander Belopolsky
[Posted on Python-Dev]

On Sun, Aug 16, 2015 at 3:23 PM, Guido van Rossum gu...@python.org wrote:
 I think that a courtesy message to python-dev is appropriate, with a link to
 the PEP and an invitation to discuss its merits on datetime-sig.

Per Guido's advice, this is an invitation to join the PEP 495 discussion on
Datetime-SIG.

If you would like to catch up on the SIG discussion, the archive of
this thread starts at
https://mail.python.org/pipermail/datetime-sig/2015-August/000253.html.

The PEP itself can be found at
https://www.python.org/dev/peps/pep-0495, but if you would like to
follow draft updates as they happen, you can do it on Github at
https://github.com/abalkin/ltdf.

Even though the PEP is deliberately minimal in scope, there are still
a few issues to be ironed out including how to call the disambiguation
flag.  It is agreed that the name should not refer to DST and should
distinguish between two ambiguous times by their chronological order.
The working name is "first", but no one particularly likes it
including the author of the PEP.  Some candidates are discussed in the
PEP at https://www.python.org/dev/peps/pep-0495/#questions-and-answers,
and some more have been suggested that I will add soon.
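A retrospective aside for readers of the archive: the flag ultimately shipped in Python 3.6 as the `fold` attribute (so none of the candidate names above won out). A minimal sketch of the released semantics:

```python
from datetime import datetime

# PEP 495 as released: fold=0 marks the chronologically earlier of two
# ambiguous wall times, fold=1 the later (e.g. during a DST fall-back).
earlier = datetime(2015, 11, 1, 1, 30)   # fold defaults to 0
later = earlier.replace(fold=1)          # second occurrence of 01:30

print(earlier.fold, later.fold)  # 0 1
```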

Please direct your responses to datetime-...@python.org.


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Barry Warsaw
On Aug 17, 2015, at 11:02 AM, Paul Moore wrote:

print(f"Iteration {n}: Took {end-start) seconds")

This illustrates (more) problems I have with arbitrary expressions.

First, you've actually made a typo there; it should be {end-start} -- notice
the trailing curly brace.  Second, what if you typoed that as {end_start}?
According to PEP 498 the original typo above should trigger a SyntaxError and
the second a run-time error (NameError?).  But how will syntax highlighters
and linters help you discover your bugs before you've even saved the file?
Currently, a lot of these types of problems can be found much earlier on
through the use of such linters.  Putting arbitrary expressions in strings
will just hide them from these tools for the foreseeable future.  I have a hard
time seeing how Emacs's syntax highlighting could cope with it for example.

Cheers,
-Barry


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Eric V. Smith
On 08/16/2015 03:37 PM, Guido van Rossum wrote:
 On Sun, Aug 16, 2015 at 8:55 PM, Eric V. Smith e...@trueblade.com wrote:
 
 Thanks, Paul. Good feedback.
 
 
 Indeed, I smiled when I saw Paul's post.
  
 
 Triple quoted and raw strings work like you'd expect, but you're
 right: the PEP should make this clear.
 
 I might drop the leading spaces, for a technical reason having to
 deal with passing the strings in to str.format. But I agree it's not
 a big deal one way or the other.
 
 
 Hm. I rather like allowing optional leading/trailing spaces. Given that we
 support arbitrary expressions, we have to support internal spaces; I
 think that some people would really like to use leading/trailing spaces,
 especially when there's text immediately against the other side of the
 braces, as in
 
   f'Stuff{ len(self.busy) }more stuff'
 
 I also expect it might be useful to allow leading/trailing newlines, if
 they are allowed at all (i.e. inside triple-quoted strings). E.g.
 
   f'''Stuff{
   len(self.busy)
   }more stuff'''

Okay, I'm sold. This works in my current implementation:

>>> f'''foo
... { 3 }
... bar'''
'foo\n3\nbar'

And since this currently works, there's no implementation specific
reason to disallow leading and trailing whitespace:

>>> '\n{\n3 + \n 1\t\n}\n'.format_map({'\n3 + \n 1\t\n':4})
'\n4\n'

My current plan is to replace an f-string with a call to .format_map:
>>> foo = 100
>>> bar = 20
>>> f'foo: {foo} bar: { bar+1}'

Would become:
'foo: {foo} bar: { bar+1}'.format_map({'foo': 100, ' bar+1': 21})

The string on which format_map is called is the identical string that's
in the source code. With the exception noted in PEP 498, I think this
satisfies the principle of least surprise.

As I've said elsewhere, we could then have some i18n function look up
and replace the string before format_map is called on it. As long as it
leaves the expression text alone, everything will work out fine. There
are some quirks with having the same expression appear twice, if the
expression has side effects. But I'm not so worried about that.
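The mechanism Eric describes can be checked in any Python 3: str.format's field-name parser treats only '.', '[', ':' and '!' specially, so an expression text such as ' bar+1' passes through intact as an opaque mapping key.

```python
# The expansion from the example above, evaluated directly: the key
# ' bar+1' (expression text plus leading space) is looked up verbatim.
s = 'foo: {foo} bar: { bar+1}'.format_map({'foo': 100, ' bar+1': 21})
print(s)  # foo: 100 bar: 21
```

(Expressions containing '.', '[', ':' or '!' are the hard cases, which is where Guido's follow-up question about `{a[x]}` comes in.)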

 Here's another thing for everybody's pondering: when tokenizing an
 f-string, I think the pieces could each become tokens in their own
 right. Then the rest of the parsing (and rules about whitespace etc.)
 would become simpler because the grammar would deal with them. E.g. the
 string above would be tokenized as follows:
 
 f'Stuff{
 len
 (
 self
 .
 busy
 )
 }more stuff'
 
 The understanding here is that there are these new types of tokens:
 F_STRING_OPEN for f'...{, F_STRING_MIDDLE for }...{, F_STRING_END for
 }...', and I suppose we also need F_STRING_OPEN_CLOSE for f'...' (i.e.
 not containing any substitutions). These token types can then be used in
 the grammar. (A complication would be different kinds of string quotes;
 I propose to handle that in the lexer, otherwise the number of
 open/close token types would balloon out of proportions.)

This would save a few hundred lines of C code. But a quick glance at the
lexer and I can't see how to make the opening quotes agree with the
closing quotes.

I think the i18n case (if we chose to support it) is better served by
having the entire, unaltered source string available at run time. PEP
501 comes to a similar conclusion
(http://legacy.python.org/dev/peps/pep-0501/#preserving-the-unmodified-format-string).

Eric.



Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Barry Warsaw
On Aug 18, 2015, at 12:58 AM, Chris Angelico wrote:

The linters could tell you that you have no 'end' or 'start' just as
easily when it's in that form as when it's written out in full.
Certainly the mismatched brackets could easily be caught by any sort
of syntax highlighter. The rules for f-strings are much simpler than,
say, the PHP rules and the differences between ${...} and {$...},
which I've seen editors get wrong.

I'm really asking whether it's technically feasible and realistically possible
for them to do so.  I'd love to hear from the maintainers of pyflakes, pylint,
Emacs, vim, and other editors, linters, and other static analyzers on a rough
technical assessment of whether they can support this and how much work it
would be.

Cheers,
-Barry


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Chris Angelico
On Tue, Aug 18, 2015 at 12:50 AM, Barry Warsaw ba...@python.org wrote:
 On Aug 17, 2015, at 11:02 AM, Paul Moore wrote:

print(f"Iteration {n}: Took {end-start) seconds")

 This illustrates (more) problems I have with arbitrary expressions.

 First, you've actually made a typo there; it should be {end-start} -- notice
 the trailing curly brace.  Second, what if you typoed that as {end_start}?
 According to PEP 498 the original typo above should trigger a SyntaxError and
 the second a run-time error (NameError?).  But how will syntax highlighters
 and linters help you discover your bugs before you've even saved the file?
 Currently, a lot of these types of problems can be found much earlier on
 through the use of such linters.  Putting arbitrary expressions in strings
 will just hide them to these tools for the foreseeable future.  I have a hard
 time seeing how Emacs's syntax highlighting could cope with it for example.


The linters could tell you that you have no 'end' or 'start' just as
easily when it's in that form as when it's written out in full.
Certainly the mismatched brackets could easily be caught by any sort
of syntax highlighter. The rules for f-strings are much simpler than,
say, the PHP rules and the differences between ${...} and {$...},
which I've seen editors get wrong.
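
Chris's claim is easy to demonstrate. Here is a hedged sketch (the helper name and regex are invented for illustration, not from any real linter) of how a checker could parse each {...} field as an expression and flag unknown names:

```python
import ast
import re

def lint_fstring(template, known_names):
    """Hypothetical mini-lint: parse each {...} field as an expression
    and report names not defined in the surrounding scope."""
    problems = []
    for field in re.findall(r'\{([^{}]+)\}', template):
        try:
            tree = ast.parse(field, mode='eval')
        except SyntaxError:
            problems.append('bad expression: ' + field)
            continue
        for node in ast.walk(tree):
            if isinstance(node, ast.Name) and node.id not in known_names:
                problems.append('undefined name: ' + node.id)
    return problems

# The typo'd {end_start} from Barry's example is caught statically:
lint_fstring('Iteration {n}: Took {end_start} seconds', {'n', 'end', 'start'})
```

The same walk over the parsed fields is all a real linter would need to reuse its existing undefined-name machinery.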

ChrisA


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Guido van Rossum
On Mon, Aug 17, 2015 at 7:13 AM, Eric V. Smith e...@trueblade.com wrote:

 [...]
 My current plan is to replace an f-string with a call to .format_map:
  foo = 100
  bar = 20
  f'foo: {foo} bar: { bar+1}'

 Would become:
 'foo: {foo} bar: { bar+1}'.format_map({'foo': 100, ' bar+1': 21})

 The string on which format_map is called is the identical string that's
 in the source code. With the exception noted in PEP 498, I think this
 satisfies the principle of least surprise.


Does this really work? Shouldn't this be using some internal variant of
format_map() that doesn't attempt to interpret the keys in brackets in any
ways? Otherwise there'd be problems with the different meaning of e.g.
{a[x]} (unless I misunderstand .format_map() -- I'm assuming it's just like
.format(**blah)).
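
Guido's concern is checkable today. A quick demonstration, with values borrowed from Eric's example, of how str.format_map treats the two kinds of field names:

```python
# Field names containing no '.' or '[' are looked up verbatim as mapping
# keys, so Eric's expansion happens to work:
ok = 'foo: {foo} bar: { bar+1}'.format_map({'foo': 100, ' bar+1': 21})

# But '[' triggers format's own indexing grammar: the key looked up is
# 'a', and ['x'] is then applied to the result, so this raises KeyError:
try:
    '{a[x]}'.format_map({'a[x]': 42})
    missing = None
except KeyError as err:
    missing = err.args[0]
```

So a plain `{expression}` survives, but any expression containing `.` or `[` collides with format's field-name grammar, which is exactly the problem Guido raises.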


 As I've said elsewhere, we could then have some i18n function look up
 and replace the string before format_map is called on it. As long as it
 leaves the expression text alone, everything will work out fine. There
 are some quirks with having the same expression appear twice, if the
 expression has side effects. But I'm not so worried about that.


The more I hear Barry's objections against arbitrary expressions from the
i18n POV the more I am thinking that this is just a square peg and a round
hole situation, and we should leave i18n alone. The requirements for i18n
are just too different than the requirements for other use cases (i18n
cares deeply about preserving the original text of the {...}
interpolations; the opposite is the case for the other use cases).


 [...]
  The understanding here is that there are these new types of tokens:
  F_STRING_OPEN for f'...{, F_STRING_MIDDLE for }...{, F_STRING_END for
  }...', and I suppose we also need F_STRING_OPEN_CLOSE for f'...' (i.e.
  not containing any substitutions). These token types can then be used in
  the grammar. (A complication would be different kinds of string quotes;
  I propose to handle that in the lexer, otherwise the number of
  open/close token types would balloon out of proportions.)

 This would save a few hundred lines of C code. But a quick glance at the
 lexer and I can't see how to make the opening quotes agree with the
 closing quotes.


The lexer would have to develop another stack for this purpose.
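
As a toy illustration of that extra stack (all names invented, and real f-string lexing would also have to handle escapes and triple quotes), the stack is just a list of currently-open quote characters, pushed when a new string opens and popped when the matching closer arrives:

```python
def scan_quotes(src):
    """Toy sketch of a quote stack: track open-quote kinds so each closing
    quote is matched to the right opener, even inside {...}."""
    stack, events = [], []
    for ch in src:
        if ch in ('"', "'"):
            if stack and stack[-1] == ch:
                stack.pop()
                events.append('close ' + ch)
            else:
                stack.append(ch)
                events.append('open ' + ch)
    return events, stack  # stack must be empty for a well-formed literal

events, leftover = scan_quotes('f\'{a["key"]}\'')
```

With the stack, the inner "key" string closes against the double quote that opened it, leaving the outer single quote free to close the f-string.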


 I think the i18n case (if we chose to support it) is better served by
 having the entire, unaltered source string available at run time. PEP
 501 comes to a similar conclusion
 (
 http://legacy.python.org/dev/peps/pep-0501/#preserving-the-unmodified-format-string
 ).


Fair enough.

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Guido van Rossum
On Mon, Aug 17, 2015 at 8:13 AM, Barry Warsaw ba...@python.org wrote:

 I'm really asking whether it's technically feasible and realistically
 possible
 for them to do so.  I'd love to hear from the maintainers of pyflakes,
 pylint,
 Emacs, vim, and other editors, linters, and other static analyzers on a
 rough
 technical assessment of whether they can support this and how much work it
 would be.


Those that aren't specific to Python will have to solve a similar problem
for e.g. Swift, which supports \(...) in all strings with arbitrary
expressions in the ..., or Perl which apparently also supports arbitrary
expressions. Heck, even Bash supports something like this,
"...$(command)"

I am not disinclined in adding some restrictions to make things a little
more tractable, but they would be along the lines of the Swift restriction
(the interpolated expression cannot contain string quotes). However, I do
think we should support f"...{a['key']}".

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Nikolaus Rath
On Aug 16 2015, Paul Moore p.f.mo...@gmail.com wrote:
 2. By far and away the most common use for me would be things like
 print(f"Iteration {n}: Took {end-start) seconds").

I believe an even more common use will be

print(f"Iteration {n+1}: Took {end-start} seconds")

Note that not allowing expressions would turn this into the rather
verbose:

iteration = n + 1
duration = end - start
print(f"Iteration {iteration}: Took {duration} seconds")


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Guido van Rossum
On Mon, Aug 17, 2015 at 12:23 PM, Nikolaus Rath nikol...@rath.org wrote:

 On Aug 16 2015, Paul Moore p.f.mo...@gmail.com wrote:
  2. By far and away the most common use for me would be things like
  print(f"Iteration {n}: Took {end-start) seconds").

 I believe an even more common use willl be

 print(f"Iteration {n+1}: Took {end-start} seconds")

 Note that not allowing expressions would turn this into the rather
 verbose:

 iteration = n + 1
 duration = end - start
 print(f"Iteration {iteration}: Took {duration} seconds")


Let's stop debating this point -- any acceptable solution will have to
support (more-or-less) arbitrary expressions. *If* we end up also
attempting to solve i18n, then it will be up to the i18n toolchain to
require a stricter syntax. (I imagine this could be done during the string
extraction phase.)
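
That extraction-phase strictness is cheap to implement. A hedged sketch (helper name and regex invented for illustration) of the identifier-only rule an i18n string extractor could apply:

```python
import re

def non_identifier_fields(template):
    """Return the {...} fields that are not plain identifiers; an i18n
    extractor could reject any message where this list is non-empty."""
    return [field for field in re.findall(r'\{([^{}]+)\}', template)
            if not field.isidentifier()]

non_identifier_fields('Iteration {n}: Took {end-start} seconds')
```

Messages with only plain names pass through; anything with an expression gets flagged before it ever reaches a translator.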

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP-498 PEP-501: Literal String Formatting/Interpolation

2015-08-17 Thread Peter Ludemann via Python-Dev
How is this proposal of di'...' more than a different spelling of lambda:
i'...'? (I think it's a great idea — but am wondering if there are some
extra semantics that I missed)

I don't think there's any need to preserve the values of the {...} (or
${...}) constituents — the normal closure mechanism should do fine because
logging is more-or-less like this:
    if conditions_for_logging:
        if callable(msg):
            log_msg = msg(*args)
        else:
            log_msg = msg % args
and so there's no need to preserve the values at the moment the
interpolated string is created.
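
The closure mechanism Peter describes can be packaged in a few lines. A sketch (the class name is invented) that works with the stdlib logging module, because logging only calls str() on the message object when a record is actually emitted:

```python
import logging

class LazyStr:
    """Defer formatting until the log record is actually emitted."""
    def __init__(self, fn):
        self.fn = fn      # zero-argument callable closing over locals
    def __str__(self):    # invoked only if the record passes the level check
        return self.fn()

n, end, start = 3, 5.0, 2.5
logging.basicConfig(level=logging.WARNING)
# This INFO record is dropped at WARNING level, so the lambda never runs:
logging.info(LazyStr(lambda: "Iteration %d: Took %.1f seconds"
                     % (n + 1, end - start)))
```

The lambda captures the surrounding variables, so nothing needs to be evaluated, or even valid, unless the message is actually formatted.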

Perl allows arbitrary expressions inside interpolations, but that tends to
get messy and is self-limiting for complex expressions; however, it's handy
for things like:
   print(f"The {i+1}th item is strange: {x[i]}")


On 16 August 2015 at 13:04, Gregory P. Smith g...@krypto.org wrote:



 On Sun, Aug 9, 2015 at 3:25 PM Brett Cannon br...@python.org wrote:


 On Sun, Aug 9, 2015, 13:51 Peter Ludemann via Python-Dev 
 python-dev@python.org wrote:

 Most of my outputs are log messages, so this proposal won't help me
 because (I presume) it does eager evaluation of the format string and the
 logging methods are designed to do lazy evaluation. Python doesn't have
 anything like Lisp's special forms, so there doesn't seem to be a way to
 implicitly put a lambda on the string to delay evaluation.

 It would be nice to be able to mark the formatting as lazy ... maybe
 another string prefix character to indicate that? (And would the 2nd
 expression in an assert statement be lazy or eager?)


 That would require a lazy string type which is beyond the scope of this
 PEP as proposed since it would require its own design choices, how much
 code would not like the different type, etc.

 -Brett


 Agreed that doesn't belong in PEP 498 or 501 itself... But it is a real
 need.

 We left logging behind when we added str.format() and adding yet another
 _third_ way to do string formatting without addressing the needs of
 deferred-formatting for things like logging is annoying.

 brainstorm: Imagine a deferred interpolation string with a d'' prefix..
  di'foo ${bar}' would be a new type with a __str__ method that also retains
 a runtime reference to the necessary values from the scope within which it
 was created that will be used for substitutions if/when it is
 __str__()ed.  I still wouldn't enjoy reminding people to use di'' in
 logging.info(di'thing happened: ${result}') all the time any more than
 I like reminding people to undo their use of % and just pass the values as
 additional args to the logging call... But I think people would find it
 friendlier and thus be more likely to get it right on their own.  logging's
 manually deferred % is an idiom i'd like to see wither away.

 There's also a performance aspect to any new formatter, % is oddly pretty
 fast, str.format isn't. So long as you can do stuff at compile time rather
 than runtime I think these PEPs could be even faster. Constant string
 pep-498 or pep-501 formatting could be broken down at compile time and
 composed into the optimal set of operations to build the resulting string /
 call the formatter.
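
The compile-time decomposition Gregory describes can be shown by hand. A sketch (the function name is invented) of the fixed-parts-plus-format() expansion a compiler could emit for a constant template like f'foo: {foo} bar: {bar+1}':

```python
def render(foo, bar):
    # The literal segments are compile-time constants; only the two
    # format() calls and one addition happen at run time.
    return ''.join(['foo: ', format(foo), ' bar: ', format(bar + 1)])

render(100, 20)
```

Because the split into literal and expression parts is known at compile time, no run-time template parsing is needed at all, which is where the speed advantage over str.format would come from.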

 So far looking over both peps, I lean towards pep-501 rather than 498:

 I really prefer the ${} syntax.
 I don't like arbitrary logical expressions within strings.
 I dislike str only things without a similar concept for bytes.

 but neither quite suits me yet.

 501's __interpolate*__ builtins are good and bad at the same time.  doing
 this at the module level does seem right, i like the i18n use aspect of
 that, but you could also imagine these being methods so that subclasses
 could override the behavior on a per-type basis.  but that probably only
 makes sense if a deferred type is created due to when and how interpolates
 would be called.  also, adding builtins, even __ones__ annoys me for some
 reason I can't quite put my finger on.

 (jumping into the threads way late)
 -gps


 PS: As to Brett's comment about the history of string interpolation ...
 my recollection/understanding is that it started with Unix shells and the
 $variable notation, with the $variable being evaluated within "..." and
 not within '...'. Perl, PHP, Make (and others) picked this up. There seems
 to be a trend to avoid the bare $variable form and instead use
 ${variable} everywhere, mainly because ${...} is sometimes required to
 avoid ambiguities (e.g. "There were $NUMBER ${THING}s.")

 PPS: For anyone wishing to improve the existing format options, Common
 Lisp's FORMAT http://www.gigamonkeys.com/book/a-few-format-recipes.html
 and Prolog's format/2
 https://quintus.sics.se/isl/quintus/html/quintus/mpg-ref-format.html
 have some capabilities that I miss from time to time in Python.

 On 9 August 2015 at 11:22, Eric V. Smith e...@trueblade.com wrote:

 On 8/9/2015 1:38 PM, Brett Cannon wrote:
 
 
  On Sun, 9 Aug 2015 at 01:07 Stefan Behnel stefan...@behnel.de

  mailto:stefan...@behnel.de wrote:
 
  Eric V. Smith 

Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Eric V. Smith
On 8/17/2015 2:24 PM, Guido van Rossum wrote:
 On Mon, Aug 17, 2015 at 7:13 AM, Eric V. Smith e...@trueblade.com
 mailto:e...@trueblade.com wrote:
 
 [...]
 My current plan is to replace an f-string with a call to .format_map:
  foo = 100
  bar = 20
  f'foo: {foo} bar: { bar+1}'
 
 Would become:
 'foo: {foo} bar: { bar+1}'.format_map({'foo': 100, ' bar+1': 21})
 
 The string on which format_map is called is the identical string that's
 in the source code. With the exception noted in PEP 498, I think this
 satisfies the principle of least surprise.
 
 
 Does this really work? Shouldn't this be using some internal variant of
 format_map() that doesn't attempt to interpret the keys in brackets in
 any ways? Otherwise there'd be problems with the different meaning of
 e.g. {a[x]} (unless I misunderstand .format_map() -- I'm assuming it's
 just like .format(**blah)).

Good point. It will require a similar function to format_map which
doesn't interpret the contents of the braces (except to the extent that
the f-string parser already has to). For argument's sake in point #4
below, let's call this str.format_map_simple.
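
A behavioral sketch of that hypothetical str.format_map_simple (the regex-based helper below is illustrative only, not a proposed implementation): the entire text between braces is one opaque key, with none of str.format's attribute/index parsing:

```python
import re

def format_map_simple(template, mapping):
    """Illustrative stand-in for the hypothetical str.format_map_simple:
    look up the whole {...} contents verbatim as a mapping key."""
    return re.sub(r'\{([^{}]*)\}',
                  lambda m: format(mapping[m.group(1)]), template)

format_map_simple('foo: {foo} bar: { bar+1}', {'foo': 100, ' bar+1': 21})
format_map_simple('{a[x]}', {'a[x]': 42})  # plain str.format_map would raise here
```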

 As I've said elsewhere, we could then have some i18n function look up
 and replace the string before format_map is called on it. As long as it
 leaves the expression text alone, everything will work out fine. There
 are some quirks with having the same expression appear twice, if the
 expression has side effects. But I'm not so worried about that.
 
 
 The more I hear Barry's objections against arbitrary expressions from
 the i18n POV the more I am thinking that this is just a square peg and a
 round hole situation, and we should leave i18n alone. The requirements
 for i18n are just too different than the requirements for other use
 cases (i18n cares deeply about preserving the original text of the {...}
 interpolations; the opposite is the case for the other use cases).

I think it would be possible to create a version of this that works for
both i18n and regular interpolation. I think the open issues are:

1. Barry wants the substitutions to look like $identifier and possibly
${identifier}, and the PEP 498 proposal just uses {}.

2. There needs to be a way to identify interpolated strings and i18n
strings, and possibly combinations of those. This leads to PEP 501's i-
and iu- strings.

3. A way to enforce identifiers-only, instead of generalized expressions.

4. We need a safe substitution mode for str.format_map_simple (from
above).

#1 is just a matter of preference: there's no technical reason to prefer
{} over $ or ${}. We can make any decision here. I prefer {} because
it's the same as str.format.

#2 needs to be decided in concert with the tooling needed to extract the
strings from the source code. The particular prefixes are up for debate.
I'm not a big fan of using u to have a meaning different from its
current "do nothing" interpretation in 3.5. But really any prefixes will
do, if we decide to use string prefixes. I think that's the question: do
we want to distinguish among these cases using string prefixes or
combinations thereof?

#3 is doable, either at runtime or in the tooling that does the string
extraction.

#4 is simple, as long as we always turn it on for the localized strings.

Personally I can go either way on including i18n. But I agree it's
beginning to sound like i18n is just too complicated for PEP 498, and I
think PEP 501 is already too complicated. I'd like to make a decision on
this one way or the other, so we can move forward.

 [...]
  The understanding here is that there are these new types of tokens:
  F_STRING_OPEN for f'...{, F_STRING_MIDDLE for }...{, F_STRING_END for
  }...', and I suppose we also need F_STRING_OPEN_CLOSE for f'...' (i.e.
  not containing any substitutions). These token types can then be used in
  the grammar. (A complication would be different kinds of string quotes;
  I propose to handle that in the lexer, otherwise the number of
  open/close token types would balloon out of proportions.)
 
 This would save a few hundred lines of C code. But a quick glance at the
 lexer and I can't see how to make the opening quotes agree with the
 closing quotes.
 
 
 The lexer would have to develop another stack for this purpose.

I'll give it some thought.

Eric.


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-17 Thread Guido van Rossum
On Mon, Aug 17, 2015 at 1:26 PM, Eric V. Smith e...@trueblade.com wrote:

 [...]
 I think it would be possible to create a version of this that works for
 both i18n and regular interpolation. I think the open issues are:

 1. Barry wants the substitutions to look like $identifier and possibly
 ${identifier}, and the PEP 498 proposal just uses {}.

 2. There needs to be a way to identify interpolated strings and i18n
 strings, and possibly combinations of those. This leads to PEP 501's i-
 and iu- strings.

 3. A way to enforce identifiers-only, instead of generalized expressions.


In an off-list message to Barry and Nick I came up with the same three
points. :-)

I think #2 is the hard one (unless we adopt a solution like Yury just
proposed where you can have an arbitrary identifier in front of a string
literal).


 4. We need a safe substitution mode for str.format_map_simple (from
 above).

 #1 is just a matter of preference: there's no technical reason to prefer
 {} over $ or ${}. We can make any decision here. I prefer {} because
 it's the same as str.format.

 #2 needs to be decided in concert with the tooling needed to extract the
 strings from the source code. The particular prefixes are up for debate.
  I'm not a big fan of using u to have a meaning different from its
  current "do nothing" interpretation in 3.5. But really any prefixes will
 do, if we decide to use string prefixes. I think that's the question: do
 we want to distinguish among these cases using string prefixes or
 combinations thereof?

 #3 is doable, either at runtime or in the tooling that does the string
 extraction.

 #4 is simple, as long as we always turn it on for the localized strings.

 Personally I can go either way on including i18n. But I agree it's
 beginning to sound like i18n is just too complicated for PEP 498, and I
 think PEP 501 is already too complicated. I'd like to make a decision on
 this one way or the other, so we can move forward.


What's the rush? There's plenty of time before Python 3.6.


  [...]
   The understanding here is that there are these new types of tokens:
   F_STRING_OPEN for f'...{, F_STRING_MIDDLE for }...{, F_STRING_END
 for
   }...', and I suppose we also need F_STRING_OPEN_CLOSE for f'...'
 (i.e.
   not containing any substitutions). These token types can then be
 used in
   the grammar. (A complication would be different kinds of string
 quotes;
   I propose to handle that in the lexer, otherwise the number of
   open/close token types would balloon out of proportions.)
 
  This would save a few hundred lines of C code. But a quick glance at
 the
  lexer and I can't see how to make the opening quotes agree with the
  closing quotes.
 
 
  The lexer would have to develop another stack for this purpose.

 I'll give it some thought.

 Eric.


-- 
--Guido van Rossum (python.org/~guido)