Re: [Python-ideas] Show more info when `python -vV`

2016-10-14 Thread Nick Coghlan
On 15 October 2016 at 03:52, Sebastian Krause  wrote:
> Nathaniel Smith  wrote:
>> The compiler information generally reveals the OS as well (if only
>> accidentally), and the OS is often useful information.
>
> But in which situation would you really need to call Python from
> outside to find out which OS you're on?

Folks don't always realise that the nominal version reported by
redistributors isn't necessarily exactly the same as the upstream
release bearing that version number. This discrepancy is most obvious
with LTS Linux releases that don't automatically rebase their
supported Python builds to new maintenance releases, and instead
selectively backport changes that they or their customers need.

This means that it isn't always sufficient to know that someone is
running "Python on CentOS 6" (for example) - we sometimes need to know
which *build* of Python they're running, as if a problem can't be
reproduced with a recent from-source upstream build, it may be due to
redistributor specific patches, or it may just be that there's an
already implemented fix upstream that the redistributor hasn't
backported yet.

So +1 from me for making "python -vV" a shorthand for "python -c
'import sys; print(sys.version)'". Since older versions won't support
it, it won't help much in the near term (except as a reminder to ask
for "sys.version" in cases where it may be relevant), but it should
become a useful support helper given sufficient time.
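The gap Nick describes is easy to see from the stdlib itself; a small
sketch (the variable names here are just illustrative):

```python
import sys

# What `python -V` prints: just the nominal version number.
short = "%d.%d.%d" % sys.version_info[:3]

# What sys.version carries on top of that: the build tag, build date,
# and compiler -- exactly the details that distinguish a
# redistributor's rebuilt-and-patched Python from the upstream release
# bearing the same number.
full = sys.version
```

Comparing `short` with `full` on any machine shows how much a bug
report loses when it only includes the `-V` output.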

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Null coalescing operator

2016-10-14 Thread Guido van Rossum
I'm not usually swayed by surveys -- Python is not a democracy. Maybe
a bunch of longer examples would help all of us see the advantages of
the proposals.

On Fri, Oct 14, 2016 at 8:09 PM, Mark E. Haase  wrote:
> On Fri, Oct 14, 2016 at 12:10 PM, Guido van Rossum  wrote:
>>
>> I propose that the next phase of the process should be to pick the
>> best operator for each sub-proposal. Then we can decide which of the
>> sub-proposals we actually want in the language, based on a combination
>> of how important the functionality is and how acceptable we find the
>> spelling.
>>
>> --Guido
>>
>
> I just submitted an updated PEP that removes the emojis and some other
> cruft.
>
> How can I help with this next phase? Is a survey a good idea or a bad idea?



-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-14 Thread Steven D'Aprano
On Thu, Oct 13, 2016 at 05:30:49PM -0400, Random832 wrote:

> Frankly, I don't see why the pattern isn't obvious

*shrug*

Maybe your inability to look past your assumptions and see things from 
other people's perspective is just as much a blind spot as our inability 
to see why you think the pattern is obvious. We're *all* having 
difficulty in seeing things from the other side's perspective here.

Let me put it this way: as far as I am concerned, sequence unpacking is 
equivalent to manually replacing the sequence with its items:

t = (1, 2, 3)
[100, 200, *t, 300]

is equivalent to replacing "*t" with "1, 2, 3", which gives us:

[100, 200, 1, 2, 3, 300]

That's nice, simple, it makes sense, and it works in sufficiently recent 
Python versions. It applies to function calls and assignments:

func(100, 200, *t)  # like func(100, 200, 1, 2, 3)

a, b, c, d, e = 100, 200, *t  # like a, b, c, d, e = 100, 200, 1, 2, 3

although it doesn't apply when the star is on the left hand side:

a, b, *x, e = 1, 2, 3, 4, 5, 6, 7

That requires a different model for starred names, but *that* model is 
similar to its role in function parameters: def f(*args). But I digress.
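The replacement model described above can be checked directly in any
Python 3.5+ interpreter:

```python
t = (1, 2, 3)

# Right-hand side: *t behaves as if replaced by 1, 2, 3.
assert [100, 200, *t, 300] == [100, 200, 1, 2, 3, 300]

def func(*args):
    return args

assert func(100, 200, *t) == (100, 200, 1, 2, 3)

a, b, c, d, e = 100, 200, *t
assert (a, b, c, d, e) == (100, 200, 1, 2, 3)

# Left-hand side: the different model -- the starred name collects
# whatever the other targets don't consume.
a, b, *x, e = 1, 2, 3, 4, 5, 6, 7
assert x == [3, 4, 5, 6]
```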

Now let's apply that same model of "starred expression == expand the 
sequence in place" to a list comp:

iterable = [t]
[*t for t in iterable]

If you do the same manual replacement, you get:

[1, 2, 3 for t in iterable]

which isn't legal since it looks like a list display [1, 2, ...] 
containing invalid syntax. The only way to have this make sense is to 
use parentheses:

[(1, 2, 3) for t in iterable]

which turns [*t for t in iterable] into a no-op.

Why should the OP's complicated, hard to understand (to many of us) 
interpretation take precedence over the simple, obvious, easy to 
understand model of sequence unpacking that I describe here?

That's not a rhetorical question. If you have a good answer, please 
share it. But I strongly believe that on the evidence of this thread, 

[a, b, *t, d]

is easy to explain, teach and understand, while:

[*t for t in iterable]

will be confusing, hard to teach and understand except as "magic syntax" 
-- it works because the interpreter says it works, not because it 
follows from the rules of sequence unpacking or comprehensions. It might 
as well be spelled:

[ MAGIC HAPPENS HERE t for t in iterable]

except it is shorter.

Of course, ultimately all syntax is "magic", it all needs to be learned. 
There's nothing about + that inherently means plus. But we should 
strongly prefer to avoid overloading the same symbol with distinct 
meanings, and * is one of the most heavily overloaded symbols in Python:

- multiplication and exponentiation
- wildcard imports
- globs, regexes
- collect arguments and kwargs
- sequence unpacking
- collect unused elements from a sequence

and maybe more. This will add yet another special meaning:

- expand the comprehension ("extend instead of append").

If we're going to get this (possibly useful?) functionality, I'd rather 
see an explicit flatten() builtin, or see it spelled:

[from t for t in sequence]

which at least is *obviously* something magical, than yet another magic 
meaning to the star operator. It's easy to look up in the docs or 
google for, and it doesn't look like Perlish line noise.
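For comparison, the flattening behaviour under discussion already has
two unstarred spellings today; a quick sketch:

```python
from itertools import chain

iterable = [(1, 2, 3), (4, 5)]

# Nested comprehension: one for-clause per level of nesting.
flat1 = [item for t in iterable for item in t]

# chain.from_iterable: the closest thing the stdlib currently
# offers to an explicit flatten() builtin.
flat2 = list(chain.from_iterable(iterable))

assert flat1 == flat2 == [1, 2, 3, 4, 5]
```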



-- 
Steve


Re: [Python-ideas] Null coalescing operator

2016-10-14 Thread Mark E. Haase
On Fri, Oct 14, 2016 at 12:10 PM, Guido van Rossum  wrote:

> I propose that the next phase of the process should be to pick the
> best operator for each sub-proposal. Then we can decide which of the
> sub-proposals we actually want in the language, based on a combination
> of how important the functionality is and how acceptable we find the
> spelling.
>
> --Guido
>
>
I just submitted an updated PEP that removes the emojis and some other
cruft.

How can I help with this next phase? Is a survey a good idea or a bad idea?

Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-14 Thread Steven D'Aprano
On Fri, Oct 14, 2016 at 04:18:40AM +0100, MRAB wrote:
> On 2016-10-14 02:04, Steven D'Aprano wrote:
> >On Thu, Oct 13, 2016 at 08:15:36PM +0200, Martti Kühne wrote:
> >
> >>Can I fix my name, though?
> >
> >I don't understand what you mean. Your email address says your name is
> >Martti Kühne. Is that incorrect?
> >
> [snip]
> 
> You wrote "Marttii" and he corrected it when he quoted you in his reply.


Ah, so I did! I'm sorry Martti, I read over my comment half a dozen 
times and couldn't see the doubled "i". My apologies.


-- 
Steven


Re: [Python-ideas] Proposal for default character representation

2016-10-14 Thread Greg Ewing

Steven D'Aprano wrote:
> That's because some sequence of characters
> is being wrongly interpreted as an emoticon by the client software.


The only thing wrong here is that the client software
is trying to interpret the emoticons.

Emoticons are for *humans* to interpret, not software.
Subtlety and cleverness is part of their charm. If you
blatantly replace them with explicit images, you crush
that.

And don't even get me started on *animated* emoji...

--
Greg


Re: [Python-ideas] Proposal for default character representation

2016-10-14 Thread Steven D'Aprano
On Fri, Oct 14, 2016 at 07:56:29AM -0400, Random832 wrote:
> On Fri, Oct 14, 2016, at 01:54, Steven D'Aprano wrote:
> > Good luck with that last one. Even if you could convince the Chinese and 
> > Japanese to swap to ASCII, I'd like to see you pry the emoji out of the 
> > young folk's phones.
> 
> This is actually probably the one part of this proposal that *is*
> feasible. While encoding emoji as a single character each makes sense
> for a culture that already uses thousands of characters; before they
> existed the English-speaking software industry already had several
> competing "standards" emerging for encoding them as sequences of ASCII
> characters.

It really isn't feasible to use emoticons instead of emoji, not if 
you're serious about it. To put it bluntly, emoticons are amateur hour. 
Emoji implemented as dedicated code points are what professionals use. 
Why do you think phone manufacturers are standardising on dedicated code 
points instead of using emoticons?

Anyone who has ever posted (say) source code on IRC, Usenet, email or 
many web forums has probably seen unexpected smileys in the middle of 
their code (false positives). That's because some sequence of characters 
is being wrongly interpreted as an emoticon by the client software. 
The more emoticons you support, the greater the chance this will 
happen. A concrete example: bash code in Pidgin (IRC) will often show 
unwanted smileys.

The quality of applications can vary greatly: once the false emoticon is 
displayed as a graphic, you may not be able to copy the source code 
containing the graphic and paste it into a text editor unchanged.

There are false negatives as well as false positives: if your :-) 
happens to fall on the boundary of a line, and your software breaks the 
sequence with a soft line break, instead of seeing the smiley face you 
expected, you might see a line ending with :- and a new line starting 
with ).

It's hard to use punctuation or brackets around emoticons without 
risking them being misinterpreted as an invalid or different sequence. 

If you are serious about offering smileys, snowmen and piles of poo to 
your users, you are much better off supporting real emoji (dedicated 
Unicode characters) instead of emoticons. It is much easier to support ☺ 
than :-) and you don't need any special software apart from fonts that 
support the emoji you care about.
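The contrast is easy to make concrete: an emoji is a single code point
with a well-defined identity, while an emoticon is an ordinary
character sequence the receiving software has to guess at:

```python
import unicodedata

smiley = "\u263a"      # WHITE SMILING FACE -- a real character
emoticon = ":-)"       # three punctuation characters

assert len(smiley) == 1
assert len(emoticon) == 3
assert unicodedata.name(smiley) == "WHITE SMILING FACE"

# An emoji embedded in text survives ordinary processing intact; an
# emoticon is indistinguishable from punctuation that happens to line
# up, which is where the false positives come from.
assert f"x = y {smiley}".split() == ["x", "=", "y", smiley]
```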



-- 
Steve

Re: [Python-ideas] Add sorted (ordered) containers

2016-10-14 Thread Mahmoud Hashemi
I'm all for adding more featureful data structures. At the risk of
confusing Nick's folks, I think it's possible to do even better than
Sorted/Ordered for many collections. In my experience, the simple Ordered
trait alone was not enough of a feature improvement over the simpler
builtin, leading me to implement an OrderedMultiDict, for instance.
Another, more cogent example would be Boltons' IndexedSet:
http://boltons.readthedocs.io/en/latest/setutils.html

It's a normal MutableSet, with almost all the same time complexities,
except that you can do indexed_set[0] to get the first-added item, etc.
Sometimes it helps to think of it as a kind of UniqueList. If we're going
for more featureful containers, I say go all-in!
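A toy sketch of the idea (not Boltons' actual implementation, which
also supports efficient deletion): set-like uniqueness and membership
plus list-like positional access.

```python
class UniqueList:
    """Minimal IndexedSet-style container: ordered, unique, indexable."""

    def __init__(self, items=()):
        self._items = []
        self._positions = {}             # value -> index in _items
        for item in items:
            self.add(item)

    def add(self, item):
        if item not in self._positions:  # O(1) duplicate check
            self._positions[item] = len(self._items)
            self._items.append(item)

    def __contains__(self, item):
        return item in self._positions   # O(1), like a set

    def __getitem__(self, i):
        return self._items[i]            # O(1), like a list

s = UniqueList([3, 1, 3, 2, 1])
assert s[0] == 3                 # first-added item, duplicates ignored
assert 2 in s and len(s._items) == 3
```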

Mahmoud

On Fri, Oct 14, 2016 at 8:23 AM, Nick Coghlan  wrote:

> On 14 October 2016 at 06:48, Neil Girdhar  wrote:
> > Related:
> >
> > Nick posted an excellent answer to this question here:
> > http://stackoverflow.com/questions/5953205/why-are-
> there-no-sorted-containers-in-pythons-standard-libraries
>
> Ah, so this thread is why I've been getting SO notifications for that
> answer :)
>
> While I think that was a decent answer for its time (as standardising
> things too early can inhibit community experimentation - there was
> almost 12 years between Twisted's first release in 2002 and asyncio's
> provisional inclusion in the standard library in Python 3.4), I also
> think the broader context has changed enough that the question may be
> worth revisiting for Python 3.7 (in particular, the prospect that it
> may be possible to provide this efficiently without having to add a
> large new chunk of C code to maintain).
>
> However, given that Grant has already been discussing the possibility
> directly with Raymond as the collections module maintainer though,
> there's probably not a lot we can add to that discussion here, since
> the key trade-off is between:
>
> - helping folks that actually need a sorted container implementation
> find one that works well with typical memory architectures in modern
> CPUs
> - avoiding confusing folks that *don't* need a sorted container with
> yet another group of possible data structures to consider in the
> standard library
>
> *That* part of my original SO answer hasn't changed, it's just not as
> clearcut a decision from a maintainability perspective when we're
> talking about efficient and relatively easy to explain pure Python
> implementations.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia

Re: [Python-ideas] Show more info when `python -vV`

2016-10-14 Thread Chris Angelico
On Sat, Oct 15, 2016 at 4:52 AM, Sebastian Krause
 wrote:
> Nathaniel Smith  wrote:
>> The compiler information generally reveals the OS as well (if only
>> accidentally), and the OS is often useful information.
>
> But in which situation would you really need to call Python from
> outside to find out which OS you're on?

It's an easy way to gather info. Example:

rosuav@sikorsky:~$ python3 -Wall
Python 3.7.0a0 (default:897fe8fa14b5+, Oct 15 2016, 03:27:56)
[GCC 6.1.1 20160802] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> "C:\Users\Demo"
  File "<stdin>", line 1
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes
in position 2-3: truncated \U escape
>>> "C:\Documents\Demo"
sys:1: DeprecationWarning: invalid escape sequence '\D'
sys:1: DeprecationWarning: invalid escape sequence '\D'
'C:\\Documents\\Demo'

Just by copying and pasting the header, I tell every reader what kind
of system I'm running this on. Sure, I could tell you that I'm running
Debian Stretch, and I could tell you that I've compiled Python from
tip, but the header says all that and in a way that is permanently
valid.
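The same details can also be gathered programmatically via the stdlib
platform module, for cases where copying the banner isn't convenient:

```python
import platform
import sys

# The pieces of the interpreter banner, individually:
info = {
    "implementation": platform.python_implementation(),  # e.g. 'CPython'
    "version": platform.python_version(),
    "compiler": platform.python_compiler(),              # e.g. 'GCC 6.1.1 ...'
    "system": platform.system(),                         # e.g. 'Linux'
}
```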

ChrisA


Re: [Python-ideas] Null coalescing operator

2016-10-14 Thread Gustavo Carneiro
For what it's worth, I like the C# syntax with question marks.

It is probably more risky (breaks more code) to introduce a new keyword
than a new symbol as operator.

If we have to pick a symbol, it's less confusing if we pick something
another language already uses.  There is no shame in copying from other
languages.  Many of them copy ideas from Python as well ;-)

Thanks.


On 14 October 2016 at 17:10, Guido van Rossum  wrote:

> I actually think the spelling is the main stumbling block. The
> intrinsic value of the behavior is clear, it's finding an acceptable
> spelling that holds back the proposal.
>
> I propose that the next phase of the process should be to pick the
> best operator for each sub-proposal. Then we can decide which of the
> sub-proposals we actually want in the language, based on a combination
> of how important the functionality is and how acceptable we find the
> spelling.
>
> --Guido
>
> On Thu, Oct 13, 2016 at 8:20 PM, Mark E. Haase  wrote:
> > (Replying to multiple posts in this thread)
> >
> > Guido van Rossum:
> >>
> >> Another problem is PEP 505 -- it
> >> is full of discussion but its specification is unreadable due to the
> >> author's idea to defer the actual choice of operators and use a
> >> strange sequence of unicode characters instead.
> >
> >
> > Hi, I wrote PEP-505. I'm sorry that it's unreadable. The choice of emoji
> as
> > operators was supposed to be a blatant joke. I'd be happy to submit a new
> > version that is ASCII. Or make any other changes that would facilitate
> > making a decision on the PEP.
> >
> > As I recall, the thread concluded with Guido writing, "I'll have to think
> > about this," or something to that effect. I had hoped that the next step
> > could be a survey where we could gauge opinions on the various possible
> > spellings. I believe this was how PEP-308 was handled, and that was a
> very
> > similar proposal to this one.
> >
> > Most of the discussion on list was really centered around the fact that
> > nobody liked the proposed ?? or .? spellings, and nobody could see around
> > that fact to consider whether the feature itself was intrinsically
> valuable.
> > (This is why the PEP doesn't commit to a syntax.) Also, as an unfortunate
> side
> > effect of a miscommunication, about 95% of the posts on this PEP were
> > written _before_ I submitted a complete draft and so most of the
> > conversation was arguing about a straw man.
> >
> > David Mertz:
> >>
> >> The idea is that we can easily have both "regular" behavior and None
> >> coalescing just by wrapping any objects in a utility class... and
> WITHOUT
> >> adding ugly syntax.  I might have missed some corners where we would
> want
> >> behavior wrapped, but those shouldn't be that hard to add in principle.
> >
> >
> > The biggest problem with a wrapper in practice is that it has to be
> > unwrapped before it can be passed to any other code that doesn't know
> how to
> > handle it. E.g. if you want to JSON encode an object, you need to unwrap
> all
> > of the NullCoalesce objects because the json module wouldn't know what
> to do
> > with them. The process of wrapping and unwrapping makes the resulting
> code
> > more verbose than any existing syntax.
> >>
> >> How much of the time is a branch of the None check a single fallback
> value
> >> or attribute access versus how often a suite of statements within the
> >> not-None branch?
> >>
> >> I definitely check for None very often also. I'm curious what the
> >> breakdown is in code I work with.
> >
> > There's a script in the PEP-505 repo that can help you identify code
> > that could be written with the proposed syntax. (It doesn't identify
> blocks
> > that would not be affected, so this doesn't completely answer your
> > question.)
> >
> > https://github.com/mehaase/pep-0505/blob/master/find-pep505.py
> >
> > The PEP also includes the results of running this script over the
> standard
> > library.
> >
> > On Sat, Sep 10, 2016 at 1:26 PM, Guido van Rossum 
> wrote:
> >>
> >> The way I recall it, we arrived at the perfect syntax (using ?) and
> >> semantics. The issue was purely strong hesitation about whether
> >> sprinkling ? all over your code is too ugly for Python, and in the end
> >> we couldn't get agreement on *that*. Another problem is PEP 505 -- it
> >> is full of discussion but its specification is unreadable due to the
> >> author's idea to defer the actual choice of operators and use a
> >> strange sequence of unicode characters instead.
> >>
> >> If someone wants to write a new, *short* PEP that defers to PEP 505
> >> for motivation etc. and just writes up the spec for the syntax and
> >> semantics we'll have a better starting point. IMO the key syntax is
> >> simply one for accessing attributes returning None instead of raising
> >> AttributeError, so that e.g. `foo?.bar?.baz` is roughly equivalent to
> >> `foo.bar.baz if (foo is not None and foo.bar is not None) 

Re: [Python-ideas] Show more info when `python -vV`

2016-10-14 Thread Nathaniel Smith
On Fri, Oct 14, 2016 at 9:09 AM,   wrote:
> For all intents and purposes other than debugging C (for cpython, rpython
> for pypy, java for jython, .NET for IronPython... you get the idea), the
> extra details are unnecessary to debug most problems.  Most of the time it
> is sufficient to know what major, minor, and patchlevel you are using.  You
> only really need to know the commit hash and compiler if you are sending a
> bug report about the C... and since you know when you are doing that... I
> don't think it's uncalled for to have the one-liner.

The compiler information generally reveals the OS as well (if only
accidentally), and the OS is often useful information.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Python-ideas] Null coalescing operator

2016-10-14 Thread Guido van Rossum
I actually think the spelling is the main stumbling block. The
intrinsic value of the behavior is clear, it's finding an acceptable
spelling that holds back the proposal.

I propose that the next phase of the process should be to pick the
best operator for each sub-proposal. Then we can decide which of the
sub-proposals we actually want in the language, based on a combination
of how important the functionality is and how acceptable we find the
spelling.

--Guido

On Thu, Oct 13, 2016 at 8:20 PM, Mark E. Haase  wrote:
> (Replying to multiple posts in this thread)
>
> Guido van Rossum:
>>
>> Another problem is PEP 505 -- it
>> is full of discussion but its specification is unreadable due to the
>> author's idea to defer the actual choice of operators and use a
>> strange sequence of unicode characters instead.
>
>
> Hi, I wrote PEP-505. I'm sorry that it's unreadable. The choice of emoji as
> operators was supposed to be a blatant joke. I'd be happy to submit a new
> version that is ASCII. Or make any other changes that would facilitate
> making a decision on the PEP.
>
> As I recall, the thread concluded with Guido writing, "I'll have to think
> about this," or something to that effect. I had hoped that the next step
> could be a survey where we could gauge opinions on the various possible
> spellings. I believe this was how PEP-308 was handled, and that was a very
> similar proposal to this one.
>
> Most of the discussion on list was really centered around the fact that
> nobody liked the proposed ?? or .? spellings, and nobody could see around
> that fact to consider whether the feature itself was intrinsically valuable.
> (This is why the PEP doesn't commit to a syntax.) Also, as an unfortunate side
> effect of a miscommunication, about 95% of the posts on this PEP were
> written _before_ I submitted a complete draft and so most of the
> conversation was arguing about a straw man.
>
> David Mertz:
>>
>> The idea is that we can easily have both "regular" behavior and None
>> coalescing just by wrapping any objects in a utility class... and WITHOUT
>> adding ugly syntax.  I might have missed some corners where we would want
>> behavior wrapped, but those shouldn't be that hard to add in principle.
>
>
> The biggest problem with a wrapper in practice is that it has to be
> unwrapped before it can be passed to any other code that doesn't know how to
> handle it. E.g. if you want to JSON encode an object, you need to unwrap all
> of the NullCoalesce objects because the json module wouldn't know what to do
> with them. The process of wrapping and unwrapping makes the resulting code
> more verbose than any existing syntax.
>>
>> How much of the time is a branch of the None check a single fallback value
>> or attribute access versus how often a suite of statements within the
>> not-None branch?
>>
>> I definitely check for None very often also. I'm curious what the
>> breakdown is in code I work with.
>
> There's a script in the PEP-505 repo that can help you identify code
> that could be written with the proposed syntax. (It doesn't identify blocks
> that would not be affected, so this doesn't completely answer your
> question.)
>
> https://github.com/mehaase/pep-0505/blob/master/find-pep505.py
>
> The PEP also includes the results of running this script over the standard
> library.
>
> On Sat, Sep 10, 2016 at 1:26 PM, Guido van Rossum  wrote:
>>
>> The way I recall it, we arrived at the perfect syntax (using ?) and
>> semantics. The issue was purely strong hesitation about whether
>> sprinkling ? all over your code is too ugly for Python, and in the end
>> we couldn't get agreement on *that*. Another problem is PEP 505 -- it
>> is full of discussion but its specification is unreadable due to the
>> author's idea to defer the actual choice of operators and use a
>> strange sequence of unicode characters instead.
>>
>> If someone wants to write a new, *short* PEP that defers to PEP 505
>> for motivation etc. and just writes up the spec for the syntax and
>> semantics we'll have a better starting point. IMO the key syntax is
>> simply one for accessing attributes returning None instead of raising
>> AttributeError, so that e.g. `foo?.bar?.baz` is roughly equivalent to
>> `foo.bar.baz if (foo is not None and foo.bar is not None) else None`,
>> except evaluating foo and foo.bar only once.
>>
>> On Sat, Sep 10, 2016 at 10:14 AM, Random832 
>> wrote:
>> > On Sat, Sep 10, 2016, at 12:48, Stephen J. Turnbull wrote:
>> >> I forget if Guido was very sympathetic to null-coalescing operators,
>> >> given somebody came up with a good syntax.
>> >
>> > As I remember the discussion, I thought he'd more or less conceded on
>> > the use of ? but there was disagreement on how to implement it that
>> > never got resolved. Concerns like, you can't have a?.b return None
>> > because then a?.b() isn't callable, unless you want to use a?.b?() for
>> > this case, or some 
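The rough equivalence quoted above for `foo?.bar?.baz` can be emulated
today with a helper function (the name `maybe_chain` is made up here
for illustration): evaluate each link once, and short-circuit to None
at the first None.

```python
def maybe_chain(obj, *names):
    # Emulates foo?.bar?.baz: return None as soon as any link
    # in the attribute chain is None, evaluating each link once.
    for name in names:
        if obj is None:
            return None
        obj = getattr(obj, name)
    return obj

class NS:
    pass

foo = NS()
foo.bar = None
assert maybe_chain(foo, "bar", "baz") is None   # stops at foo.bar
assert maybe_chain(None, "anything") is None

foo.bar = NS()
foo.bar.baz = 42
assert maybe_chain(foo, "bar", "baz") == 42
```

The verbosity of this workaround, compared with the two characters of
`?.`, is essentially the case for dedicated syntax.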

Re: [Python-ideas] Show more info when `python -vV`

2016-10-14 Thread tritium-list
For all intents and purposes other than debugging C (for cpython, rpython
for pypy, java for jython, .NET for IronPython... you get the idea), the
extra details are unnecessary to debug most problems.  Most of the time it
is sufficient to know what major, minor, and patchlevel you are using.  You
only really need to know the commit hash and compiler if you are sending a
bug report about the C... and since you know when you are doing that... I
don't think it's uncalled for to have the one-liner.

> -Original Message-
> From: Python-ideas [mailto:python-ideas-bounces+tritium-
> list=sdamon@python.org] On Behalf Of INADA Naoki
> Sent: Friday, October 14, 2016 3:40 AM
> To: python-ideas 
> Subject: [Python-ideas] Show more info when `python -vV`
> 
> When reporting issue to some project and want to include
> python version in the report, python -V shows very limited information.
> 
> $ ./python.exe -V
> Python 3.6.0b2+
> 
> sys.version is more usable, but it requires a one-liner.
> 
> $ ./python.exe -c 'import sys; print(sys.version)'
> 3.6.0b2+ (3.6:86a1905ea28d+, Oct 13 2016, 17:58:37)
> [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.38)]
> 
> How about `python -vV` shows sys.version?
> 
> 
> perl -V is very verbose and it's helpful to be included in bug report.
> Some of them are useful and worth enough to include in `python -vV`.
> 
> $ perl -V
> Summary of my perl5 (revision 5 version 18 subversion 2) configuration:
> 
>   Platform:
> osname=darwin, osvers=15.0, archname=darwin-thread-multi-2level
> uname='darwin osx219.apple.com 15.0 darwin kernel version 15.0.0:
> fri may 22 22:03:51 pdt 2015;
> root:xnu-3216.0.0.1.11~1development_x86_64 x86_64 '
> config_args='-ds -e -Dprefix=/usr -Dccflags=-g  -pipe  -Dldflags=
> -Dman3ext=3pm -Duseithreads -Duseshrplib -Dinc_version_list=none
> -Dcc=cc'
> hint=recommended, useposix=true, d_sigaction=define
> useithreads=define, usemultiplicity=define
> useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef
> use64bitint=define, use64bitall=define, uselongdouble=undef
> usemymalloc=n, bincompat5005=undef
>   Compiler:
> cc='cc', ccflags ='-arch i386 -arch x86_64 -g -pipe -fno-common
> -DPERL_DARWIN -fno-strict-aliasing -fstack-protector',
> optimize='-Os',
> cppflags='-g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing
> -fstack-protector'
> ccversion='', gccversion='4.2.1 Compatible Apple LLVM 7.0.0
> (clang-700.0.59.1)', gccosandvers=''
> intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678
> d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
> ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t',
> lseeksize=8
> alignbytes=8, prototype=define
>   Linker and Libraries:
> ld='cc -mmacosx-version-min=10.11.3', ldflags ='-arch i386 -arch
> x86_64 -fstack-protector'
> libpth=/usr/lib /usr/local/lib
> libs=
> perllibs=
> libc=, so=dylib, useshrplib=true, libperl=libperl.dylib
> gnulibc_version=''
>   Dynamic Linking:
> dlsrc=dl_dlopen.xs, dlext=bundle, d_dlsymun=undef, ccdlflags=' '
> cccdlflags=' ', lddlflags='-arch i386 -arch x86_64 -bundle
> -undefined dynamic_lookup -fstack-protector'
> 
> 
> Characteristics of this binary (from libperl):
>   Compile-time options: HAS_TIMES MULTIPLICITY PERLIO_LAYERS
> PERL_DONT_CREATE_GVSV
> PERL_HASH_FUNC_ONE_AT_A_TIME_HARD
> PERL_IMPLICIT_CONTEXT PERL_MALLOC_WRAP
> PERL_PRESERVE_IVUV PERL_SAWAMPERSAND
USE_64_BIT_ALL
> USE_64_BIT_INT USE_ITHREADS USE_LARGE_FILES
> USE_LOCALE USE_LOCALE_COLLATE USE_LOCALE_CTYPE
> USE_LOCALE_NUMERIC USE_PERLIO USE_PERL_ATOF
> USE_REENTRANT_API
>   Locally applied patches:
> /Library/Perl/Updates/ comes before system perl directories
> installprivlib and installarchlib points to the Updates directory
>   Built under darwin
>   Compiled at Aug 11 2015 04:22:26
>   @INC:
> /Library/Perl/5.18/darwin-thread-multi-2level
> /Library/Perl/5.18
> /Network/Library/Perl/5.18/darwin-thread-multi-2level
> /Network/Library/Perl/5.18
> /Library/Perl/Updates/5.18.2
> /System/Library/Perl/5.18/darwin-thread-multi-2level
> /System/Library/Perl/5.18
> /System/Library/Perl/Extras/5.18/darwin-thread-multi-2level
> /System/Library/Perl/Extras/5.18
> .
> 
> --
> INADA Naoki  

Re: [Python-ideas] PEP 505 -- None-aware operators

2016-10-14 Thread Guido van Rossum
On Fri, Oct 14, 2016 at 6:37 AM, Gustavo Carneiro  wrote:
> I see.  short-circuiting is nice to have, sure.

No. Short-circuiting is the entire point of the proposed operators.

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-ideas] PEP 505 -- None-aware operators

2016-10-14 Thread Mark E. Haase
On Fri, Oct 14, 2016 at 9:37 AM, Gustavo Carneiro 
wrote:

>
> I see.  short-circuiting is nice to have, sure.
>
> But even without it, it's still useful IMHO. 
>

It's worth mentioning that SQL's COALESCE is usually (always?)
short-circuiting:

https://www.postgresql.org/docs/9.5/static/functions-conditional.html
https://msdn.microsoft.com/en-us/library/ms190349.aspx

Given the debate about the utility of coalescing and the simplicity of
writing the function yourself, I doubt the standard library will accept it.
Most people here will tell you that such a utility belongs on PyPI.
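For the record, the distinction is easy to demonstrate. The sketch below is illustrative only (not from the thread): it contrasts an eager coalesce() with a short-circuiting variant that takes zero-argument callables, and records which arguments actually get evaluated.

```python
def coalesce(*args):
    """Eager: every argument is evaluated before the call happens."""
    for arg in args:
        if arg is not None:
            return arg
    return None

def coalesce_lazy(*thunks):
    """Short-circuiting: arguments are zero-argument callables,
    invoked only until one returns a non-None value."""
    for thunk in thunks:
        value = thunk()
        if value is not None:
            return value
    return None

calls = []
def expensive(tag, value):
    calls.append(tag)  # record that this argument was evaluated
    return value

# The eager version evaluates both arguments before coalescing:
coalesce(expensive("a", 1), expensive("b", 2))
assert calls == ["a", "b"]

# The lazy version stops after the first non-None result:
calls.clear()
coalesce_lazy(lambda: expensive("a", 1), lambda: expensive("b", 2))
assert calls == ["a"]
```

The proposed `??` operator would behave like the lazy version without the lambda noise, which is why short-circuiting is central to the PEP.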

Re: [Python-ideas] PEP 505 -- None-aware operators

2016-10-14 Thread Franklin? Lee
My mistake. You're talking about the ?? operator, and I'm thinking about
the null-aware operators.

You say short-circuiting would be nice to have, but short-circuiting is
what people want it for. As for using `if-else`, that's listed as an
alternative here:
https://www.python.org/dev/peps/pep-0505/#ternary-operator

The coalesce operator has the semantic advantage ("expressiveness"?): you
are saying what you want to do, rather than how you do it. Making a
function is one way to get semantic advantage, but you can't do that if you
want short-circuiting.

The question on the table is whether the semantic advantage is worth the
cost of a new operator. That's a value question, so it's not gonna be easy
to answer it with objective observations.

(Not that I'm suggesting anything, but some languages have custom
short-circuiting, via macros or lazy arg evaluation. That'd be using a
missile to hammer in a nail.)

On Oct 14, 2016 9:46 AM, "Franklin? Lee" 
wrote:
>
> On Oct 14, 2016 9:14 AM, "Gustavo Carneiro"  wrote:
> >
> > Sorry if I missed the boat, but only just now saw this PEP.
> >
> > Glancing through the PEP, I don't see mentioned anywhere the SQL
alternative of having a coalesce() function:
https://www.postgresql.org/docs/9.6/static/functions-conditional.html#FUNCTIONS-COALESCE-NVL-IFNULL
> >
> > In Python, something like this:
> >
> > def coalesce(*args):
> >     for arg in args:
> >         if arg is not None:
> >             return arg
> >     return None
> >
> > Just drop it into builtins, and voila.   No need for lengthy
discussions about which operator to use because IMHO it needs no operator.
> >
> > Sure, it's not as sexy as a fancy new operator, nor as headline
grabbing, but it is pretty useful.
>
> That function is for a different purpose. It selects the first non-null
value from a flat collection. The coalesce operators are for traveling down
a reference tree, and shortcutting out without an exception if the path
ends.
>
> For example:
> return x?.a?.b?.c
> instead of:
> if x is None: return None
> if x.a is None: return None
> if x.a.b is None: return None
> return x.a.b.c
>
> You can use try-catch, but you might catch an irrelevant exception.
> try:
>     return x.a.b.c
> except AttributeError:
>     return None
> If `x` is an int, `x.a` will throw an AttributeError even though `x` is
not None.
>
> A function for the above case is:
> def coalesce(obj, *names):
>     for name in names:
>         if obj is None:
>             return None
>         obj = getattr(obj, name)
>     return obj
>
> return coalesce(x, 'a', 'b', 'c')
>
> See this section for some examples:
> https://www.python.org/dev/peps/pep-0505/#behavior-in-other-languages
>
> (The PEP might need more simple examples. The Motivating Examples are
full chunks of code from real libraries, so they're full of distractions.)

Re: [Python-ideas] PEP 505 -- None-aware operators

2016-10-14 Thread Franklin? Lee
On Oct 14, 2016 9:14 AM, "Gustavo Carneiro"  wrote:
>
> Sorry if I missed the boat, but only just now saw this PEP.
>
> Glancing through the PEP, I don't see mentioned anywhere the SQL
alternative of having a coalesce() function:
https://www.postgresql.org/docs/9.6/static/functions-conditional.html#FUNCTIONS-COALESCE-NVL-IFNULL
>
> In Python, something like this:
>
> def coalesce(*args):
>     for arg in args:
>         if arg is not None:
>             return arg
>     return None
>
> Just drop it into builtins, and voila.   No need for lengthy discussions
about which operator to use because IMHO it needs no operator.
>
> Sure, it's not as sexy as a fancy new operator, nor as headline grabbing,
but it is pretty useful.

That function is for a different purpose. It selects the first non-null
value from a flat collection. The coalesce operators are for traveling down
a reference tree, and shortcutting out without an exception if the path
ends.

For example:
return x?.a?.b?.c
instead of:
if x is None: return None
if x.a is None: return None
if x.a.b is None: return None
return x.a.b.c

You can use try-catch, but you might catch an irrelevant exception.
try:
    return x.a.b.c
except AttributeError:
    return None
If `x` is an int, `x.a` will throw an AttributeError even though `x` is not
None.

A function for the above case is:
def coalesce(obj, *names):
    for name in names:
        if obj is None:
            return None
        obj = getattr(obj, name)
    return obj

return coalesce(x, 'a', 'b', 'c')

See this section for some examples:
https://www.python.org/dev/peps/pep-0505/#behavior-in-other-languages

(The PEP might need more simple examples. The Motivating Examples are full
chunks of code from real libraries, so they're full of distractions.)
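In that spirit, here is a self-contained sketch of the helper above; SimpleNamespace stands in for an arbitrary object tree, and the attribute names a/b/c are just placeholders matching the message's example.

```python
from types import SimpleNamespace

def coalesce(obj, *names):
    # Walk an attribute path, stopping early (without raising)
    # as soon as an intermediate value is None.
    for name in names:
        if obj is None:
            return None
        obj = getattr(obj, name)
    return obj

# x.a.b exists but is None, so the walk stops before asking for .c:
x = SimpleNamespace(a=SimpleNamespace(b=None))
assert coalesce(x, "a", "b", "c") is None

# A None root is handled the same way:
assert coalesce(None, "a") is None

# A complete path returns the leaf value:
y = SimpleNamespace(a=SimpleNamespace(b=SimpleNamespace(c=42)))
assert coalesce(y, "a", "b", "c") == 42
```

This mirrors what PEP 505 spells `x?.a?.b?.c`, minus the operator syntax.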

Re: [Python-ideas] PEP 505 -- None-aware operators

2016-10-14 Thread Gustavo Carneiro
On 14 October 2016 at 14:19, אלעזר  wrote:

> On Fri, Oct 14, 2016 at 4:14 PM Gustavo Carneiro 
> wrote:
>
>> Sorry if I missed the boat, but only just now saw this PEP.
>>
>> Glancing through the PEP, I don't see mentioned anywhere the SQL
>> alternative of having a coalesce() function: https://www.
>> postgresql.org/docs/9.6/static/functions-conditional.
>> html#FUNCTIONS-COALESCE-NVL-IFNULL
>>
>> In Python, something like this:
>>
>> def coalesce(*args):
>>     for arg in args:
>>         if arg is not None:
>>             return arg
>>     return None
>>
>> Just drop it into builtins, and voila.   No need for lengthy discussions
>> about which operator to use because IMHO it needs no operator.
>>
>> Sure, it's not as sexy as a fancy new operator, nor as headline grabbing,
>> but it is pretty useful.
>>
>>
> This has the downside of not being short-circuit - arguments to the
> function are evaluated eagerly.
>

I see.  short-circuiting is nice to have, sure.

But even without it, it's still useful IMHO.  If you are worried about not
evaluating an argument, then you can just do a normal if statement instead,
for the rare cases where this is important:

result = arg1
if result is None:
    result = compute_something()

At the very least I would suggest mentioning a simple coalesce() function
in the alternatives section of the PEP.

coalesce function:
  Pros:
1. Familiarity, similar to existing function in SQL;
2. No new operator required;
  Cons:
1. Doesn't short-circuit the expressions;
2. Slightly more verbose than an operator;
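
Con 1 can be made concrete with a tiny hypothetical example (compute_something and arg1 are invented names, following the statement form given above): if the fallback is an expensive or failing call, an eager coalesce() evaluates it regardless, while the statement form does not.

```python
def compute_something():
    # Stand-in for an expensive or failing fallback computation.
    raise RuntimeError("should not run when arg1 is already set")

arg1 = "cached"

# coalesce(arg1, compute_something()) would raise here: the call
# is evaluated as an argument before coalesce() ever runs.

# The explicit statement form short-circuits naturally:
result = arg1
if result is None:
    result = compute_something()

assert result == "cached"
```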


Thanks.

-- 
Gustavo J. A. M. Carneiro
Gambit Research
"The universe is always one step beyond logic." -- Frank Herbert

Re: [Python-ideas] PEP 505 -- None-aware operators

2016-10-14 Thread אלעזר
On Fri, Oct 14, 2016 at 4:14 PM Gustavo Carneiro 
wrote:

> Sorry if I missed the boat, but only just now saw this PEP.
>
> Glancing through the PEP, I don't see mentioned anywhere the SQL
> alternative of having a coalesce() function:
> https://www.postgresql.org/docs/9.6/static/functions-conditional.html#FUNCTIONS-COALESCE-NVL-IFNULL
>
> In Python, something like this:
>
> def coalesce(*args):
>     for arg in args:
>         if arg is not None:
>             return arg
>     return None
>
> Just drop it into builtins, and voila.   No need for lengthy discussions
> about which operator to use because IMHO it needs no operator.
>
> Sure, it's not as sexy as a fancy new operator, nor as headline grabbing,
> but it is pretty useful.
>
>
This has the downside of not being short-circuit - arguments to the
function are evaluated eagerly.

Elazar

[Python-ideas] PEP 505 -- None-aware operators

2016-10-14 Thread Gustavo Carneiro
Sorry if I missed the boat, but only just now saw this PEP.

Glancing through the PEP, I don't see mentioned anywhere the SQL
alternative of having a coalesce() function:
https://www.postgresql.org/docs/9.6/static/functions-conditional.html#FUNCTIONS-COALESCE-NVL-IFNULL

In Python, something like this:

def coalesce(*args):
    for arg in args:
        if arg is not None:
            return arg
    return None

Just drop it into builtins, and voila.   No need for lengthy discussions
about which operator to use because IMHO it needs no operator.

Sure, it's not as sexy as a fancy new operator, nor as headline grabbing,
but it is pretty useful.

Just my 2p.

-- 
Gustavo J. A. M. Carneiro
Gambit Research
"The universe is always one step beyond logic." -- Frank Herbert

Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-14 Thread Paul Moore
On 14 October 2016 at 07:54, Greg Ewing  wrote:
>> I think it's probably time for someone to
>> describe the precise syntax (as BNF, like the syntax in the Python
>> docs at
>> https://docs.python.org/3.6/reference/expressions.html#displays-for-lists-sets-and-dictionaries
>
>
> Replace
>
>comprehension ::=  expression comp_for
>
> with
>
>comprehension ::=  (expression | "*" expression) comp_for
>
>> and semantics (as an explanation of how to
>> rewrite any syntactically valid display as a loop).
>
>
> The expansion of the "*" case is the same as currently except
> that 'append' is replaced by 'extend' in a list comprehension,
> 'yield' is replaced by 'yield from' in a generator
> comprehension.

Thanks. That does indeed clarify. Part of my confusion was that I'm
sure I'd seen someone give an example along the lines of

[(x, *y, z) for ...]

which *doesn't* conform to the above syntax. OTOH, it is currently
valid syntax, just not an example of *this* proposal (that's part of
why all this got very confusing).

So now I understand what's being proposed, which is good. I don't
(personally) find it very intuitive, although I'm completely capable
of using the rules given to establish what it means. In practical
terms, I'd be unlikely to use or recommend it - not because of
anything specific about the proposal, just because it's "confusing". I
would say the same about [(x, *y, z) for ...].

IMO, combining unpacking and list (or other types of) comprehensions
leads to obfuscated code. Each feature is fine in isolation, but
over-enthusiastic use of the ability to combine them harms
readability. So I'm now -0 on this proposal. There's nothing *wrong*
with it, and I now see how it can be justified as a generalisation of
current rules. But I can't think of any real-world case where using
the proposed syntax would measurably improve code maintainability or
comprehensibility.
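For comparison, the proposed form and its currently-valid spellings line up as follows; f is an arbitrary stand-in returning an iterable per element, and the proposal itself appears only in a comment since it isn't valid syntax today.

```python
iterable = range(6)

def f(x):
    return [x] * x  # any function returning an iterable per element

# Proposed (not valid today): [*f(x) for x in iterable if x % 2 == 0]

# Currently-valid spelling with a nested loop:
flat = [y for x in iterable if x % 2 == 0 for y in f(x)]
assert flat == [2, 2, 4, 4, 4, 4]

# The "append -> extend" expansion Greg describes:
result = []
for x in iterable:
    if x % 2 == 0:
        result.extend(f(x))
assert result == flat
```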

Paul


Re: [Python-ideas] Proposal for default character representation

2016-10-14 Thread Chris Angelico
On Fri, Oct 14, 2016 at 8:36 PM, Greg Ewing  wrote:
>> I know people who can read bash scripts
>> fast, but would you claim that bash syntax can be
>> any good compared to Python syntax?
>
>
> For the things that bash was designed to be good for,
> yes, it can. Python wins for anything beyond very
> simple programming, but bash wasn't designed for that.
> (The fact that some people use it that way says more
> about their dogged persistence in the face of
> adversity than it does about bash.)

And any time I look at a large and complex bash script and say "this
needs to be a Python script" or "this would be better done in Pike" or
whatever, I end up missing the convenient syntax of piping one thing
into another. Shell scripting languages are the undisputed kings of
process management.

ChrisA


Re: [Python-ideas] Proposal for default character representation

2016-10-14 Thread Greg Ewing

Mikhail V wrote:


> if "\u1230" <= c <= "\u123f":
>
> and:
>
> o = ord (c)
> if 100 <= o <= 150:


Note that, if need be, you could also write that as

  if 0x64 <= o <= 0x96:


> So yours is a valid code but for me its freaky,
> and surely I stick to the second variant.


The thing is, where did you get those numbers from in
the first place?

If you got them in some way that gives them to you
in decimal, such as print(ord(c)), there is nothing
to stop you from writing them as decimal constants
in the code.

But if you got them e.g. by looking up a character
table that gives them to you in hex, you can equally
well put them in as hex constants. So there is no
particular advantage either way.
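Concretely, the two notations denote the same integers at the language level; only the source spelling differs (the values below are illustrative):

```python
c = "e"
o = ord(c)  # 101

# Decimal and hex literals denote the same integers, so the two
# range checks from the quoted example are interchangeable:
assert (100 <= o <= 150) == (0x64 <= o <= 0x96)

# Converting between notations is a one-liner either way:
assert hex(100) == "0x64"
assert int("0x96", 16) == 150

# Comparing characters directly sidesteps the question entirely:
assert "\u1230" <= "\u1234" <= "\u123f"
```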


> You said, I can better see in which unicode page
> I am by looking at hex ordinal, but I hardly
> need it, I just need to know one integer, namely
> where some range begins, that's it.
> Furthermore this is the code which would an average
> programmer better read and maintain.


To a maintainer who is familiar with the layout of
the unicode code space, the hex representation of
a character is likely to have some meaning, whereas
the decimal representation will not. So for that
person, using decimal would make the code *harder*
to maintain.

To a maintainer who doesn't have that familiarity,
it makes no difference either way.

So your proposal would result in a *decrease* of
maintainability overall.


> if I make a mistake, typo, or want to expand the range
> by some value I need to make summ and substract
> operation in my head to progress with my code effectively.
> Is it clear now what I mean by
> conversions back and forth?


Yes, but in my experience the number of times I've
had to do that kind of arithmetic with character codes
is very nearly zero. And when I do, I'm more likely to
get the computer to do it for me than work out the
numbers and then type them in as literals. I just
don't see this as being anywhere near being a
significant problem.


> In standard ASCII
> there are enough glyphs that would work way better
> together,


Out of curiosity, what glyphs do you have in mind?


> ұұ-ұ   ---ұ
>
> you can downscale the strings, so a 16-bit
> value would be ~60 pixels wide


Yes, you can make the characters narrow enough that
you can take 4 of them in at once, almost as though
they were a single glyph... at which point you've
effectively just substituted one set of 16 glyphs
for another. Then you'd have to analyse whether the
*combined* 4-element glyphs were easier to disinguish
from each other than the ones they replaced. Since
the new ones are made up of repetitions of just two
elements, whereas the old ones contain a much more
varied set of elements, I'd be skeptical about that.

BTW, your choice of ұ because of its "peak readibility"
seems to be a case of taking something out of context.
The readability of a glyph can only be judged in terms
of how easy it is to distinguish from other glyphs.
Here, the only thing that matters is distinguishing it
from the other symbol, so something like "|" would
perhaps be a better choice.

||-|   ---|


> So if you are more
> than 40 years old (sorry for some familiarity)
> this can be really strong issue and unfortunately
> hardly changeable.


Sure, being familiar with the current system means that
it would take me some effort to become proficient with
a new one.

What I'm far from convinced of is that I would gain any
benefit from making that effort, or that a fresh person
would be noticeably better off if they learned your new
system instead of the old one.

At this point you're probably going to say "Greg, it's
taken you 40 years to become that proficient in hex.
Someone learning my system would do it much faster!"

Well, no. When I was about 12 I built a computer whose
only I/O devices worked in binary. From the time I first
started toggling programs into it to the time I had the
whole binary/hex conversion table burned into my neurons
was maybe about 1 hour. And I wasn't even *trying* to
memorise it, it just happened.


> It is not about speed, it is about brain load.
> Chinese can read their hieroglyphs fast, but
> the cognition load on the brain is 100 times higher
> than current latin set.


Has that been measured? How?

This one sets off my skepticism alarm too, because
people that read Latin scripts don't read them a
letter at a time -- they recognise whole *words* at
once, or at least large chunks of them. The number of
English words is about the same order of magnitude
as the number of Chinese characters.


> I know people who can read bash scripts
> fast, but would you claim that bash syntax can be
> any good compared to Python syntax?


For the things that bash was designed to be good for,
yes, it can. Python wins for anything beyond very
simple programming, but bash wasn't designed for that.
(The fact that some people use it that way says more
about their dogged persistence in the face of
adversity than it does about bash.)

I don't doubt that some sets of glyphs are easier to
distinguish from each 

Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-14 Thread אלעזר
On Fri, Oct 14, 2016 at 12:19, Michel Desmoulin <desmoulinmic...@gmail.com> wrote:

> Regarding all those examples:
>
> > On 14/10/2016 at 00:08, אלעזר wrote:
> > Trying to restate the proposal, somewhat more formal following Random832
> > and Paul's suggestion.
> >
> > I only speak about the single star.
> > ---
> >
> > *The suggested change of syntax:*
> >
> > comprehension ::=  starred_expression comp_for
> >
> > *Semantics:*
> >
> > (In the following, f(x) must always evaluate to an iterable)
> >
> > 1. List comprehension:
> >
> > result = [*f(x) for x in iterable if cond]
> >
> > Translates to
> >
> > result = []
> > for x in iterable:
> >     if cond:
> >         result.extend(f(x))
> >
> > 2. Set comprehension:
> >
> > result = {*f(x) for x in iterable if cond}
> >
> > Translates to
> >
> > result = set()
> > for x in iterable:
> >     if cond:
> >         result.update(f(x))
>
> Please note that we already have a way to do those. E.G:
>
> result = [*f(x) for x in iterable if cond]
>
> can currently been expressed as:
>
> >>> iterable = range(10)
> >>> f = lambda x: [x] * x
> >>> [y for x in iterable if x % 2 == 0 for y in f(x)]
> [2, 2, 4, 4, 4, 4, 6, 6, 6, 6, 6, 6, 8, 8, 8, 8, 8, 8, 8, 8]
>
>
> Now I do like the new extension syntax. I find it more natural, and more
> readable:
>
> >>> [*f(x) for x in iterable if x % 2 == 0]
>
> But it's not a missing feature, it's really just a (rather nice)
> syntactic improvement.
>

It is about lifting restrictions from an existing syntax. That this
behavior is being *explicitly disabled* in the implementation is strong
evidence, in my mind.

(There are more restrictions I was asked not to divert this thread, which
makes sense)

Elazar

Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-14 Thread Michel Desmoulin

Regarding all those examples:

On 14/10/2016 at 00:08, אלעזר wrote:

Trying to restate the proposal, somewhat more formal following Random832
and Paul's suggestion.

I only speak about the single star.
---

*The suggested change of syntax:*

comprehension ::=  starred_expression comp_for

*Semantics:*

(In the following, f(x) must always evaluate to an iterable)

1. List comprehension:

result = [*f(x) for x in iterable if cond]

Translates to

result = []
for x in iterable:
    if cond:
        result.extend(f(x))

2. Set comprehension:

result = {*f(x) for x in iterable if cond}

Translates to

result = set()
for x in iterable:
    if cond:
        result.update(f(x))


Please note that we already have a way to do those. E.G:

   result = [*f(x) for x in iterable if cond]

can currently been expressed as:

   >>> iterable = range(10)
   >>> f = lambda x: [x] * x
   >>> [y for x in iterable if x % 2 == 0 for y in f(x)]
   [2, 2, 4, 4, 4, 4, 6, 6, 6, 6, 6, 6, 8, 8, 8, 8, 8, 8, 8, 8]


Now I do like the new extension syntax. I find it more natural, and more 
readable:


   >>> [*f(x) for x in iterable if x % 2 == 0]

But it's not a missing feature, it's really just a (rather nice) 
syntactic improvement.




Re: [Python-ideas] Proposal for default character representation

2016-10-14 Thread Serhiy Storchaka

On 13.10.16 17:50, Chris Angelico wrote:

Solution: Abolish most of the control characters. Let's define a brand
new character encoding with no "alphabetical garbage". These
characters will be sufficient for everyone:

* [2] Formatting characters: space, newline. Everything else can go.
* [8] Digits: 01234567
* [26] Lower case Latin letters a-z
* [2] Vital social media characters: # (now officially called "HASHTAG"), @
* [2] Can't-type-URLs-without-them: colon, slash (now called both
"SLASH" and "BACKSLASH")

That's 40 characters that should cover all the important things anyone
does - namely, Twitter, Facebook, and email. We don't need punctuation
or capitalization, as they're dying arts and just make you look
pretentious.


https://en.wikipedia.org/wiki/DEC_Radix-50




Re: [Python-ideas] Proposal for default character representation

2016-10-14 Thread Chris Angelico
On Fri, Oct 14, 2016 at 7:18 PM, Cory Benfield  wrote:
> The many glyphs that exist for writing various human languages are not 
> inefficiency to be optimised away. Further, I should note that most places do 
> not legislate about what character sets are acceptable to transcribe their 
> languages. Indeed, plenty of non-romance-language-speakers have found ways to 
> transcribe their languages of choice into the limited 8-bit character sets 
> that the Anglophone world propagated: take a look at Arabish for the best 
> kind of example of this behaviour, where "الجو عامل ايه النهارده فى 
> إسكندرية؟" will get rendered as "el gaw 3amel eh elnaharda f eskendereya?”
>

I've worked with transliterations enough to have built myself a
dedicated translit tool. It's pretty straight-forward to come up with
something you can type on a US-English keyboard (eg "a\o" for "å", and
"d\-" for "đ"), and in some cases, it helps with visual/audio
synchronization, but nobody would ever claim that it's the best way to
represent that language.

https://github.com/Rosuav/LetItTrans/blob/master/25%20languages.srt
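
A minimal sketch of the general technique (this is NOT the linked tool's actual table or API; "a\o" and "d\-" come from the message, the third mapping is invented for illustration):

```python
# Hypothetical escape table mapping typeable ASCII sequences to
# the Unicode characters they transliterate.
TRANSLIT = {
    r"a\o": "\u00e5",  # å (a with ring)
    r"d\-": "\u0111",  # đ (d with stroke)
    r"o\:": "\u00f6",  # ö (o with diaeresis) -- invented example
}

def detransliterate(text: str) -> str:
    # Replace longest sequences first so multi-character escapes win.
    for seq, char in sorted(TRANSLIT.items(),
                            key=lambda kv: len(kv[0]), reverse=True):
        text = text.replace(seq, char)
    return text

assert detransliterate(r"bla\ott") == "bl\u00e5tt"  # "blått"
```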

> But I think you’re in a tiny minority of people who believe that all 
> languages should be rendered in the same script. I can think of only two 
> reasons to argue for this:
>
> 1. Dealing with lots of scripts is technologically tricky and it would be 
> better if we didn’t bother. This is the anti-Unicode argument, and it’s a 
> weak argument, though it has the advantage of being internally consistent.
> 2. There is some genuine harm caused by learning non-ASCII scripts.

#1 does carry a decent bit of weight, but only if you start with the
assumption that "characters are bytes". If you once shed that
assumption (and the related assumption that "characters are 16-bit
numbers"), the only weight it carries is "right-to-left text is
hard"... and let's face it, that *is* hard, but there are far, far
harder problems in computing.

Oh wait. Naming things. In Hebrew.

That's hard.

ChrisA

Re: [Python-ideas] Proposal for default character representation

2016-10-14 Thread Cory Benfield

> On 14 Oct 2016, at 08:53, Mikhail V  wrote:
> 
> What keeps people from using same characters?
> I will tell you what - it is local law. If you go to school you *have* to
> write in what is prescribed by big daddy. If youre in europe or America, you 
> are
> more lucky. And if you're in China you'll be punished if you
> want some freedom. So like it or not, learn hieroglyphs
> and become visually impaired in age of 18.

So you know, for the future, I think this comment is going to be the one that 
causes most of the people who were left to disengage with this discussion.

The many glyphs that exist for writing various human languages are not 
inefficiency to be optimised away. Further, I should note that most places do 
not legislate about what character sets are acceptable to transcribe their 
languages. Indeed, plenty of non-romance-language-speakers have found ways to 
transcribe their languages of choice into the limited 8-bit character sets that 
the Anglophone world propagated: take a look at Arabish for the best kind of 
example of this behaviour, where "الجو عامل ايه النهارده فى إسكندرية؟" will get 
rendered as "el gaw 3amel eh elnaharda f eskendereya?”

But I think you’re in a tiny minority of people who believe that all languages 
should be rendered in the same script. I can think of only two reasons to argue 
for this:

1. Dealing with lots of scripts is technologically tricky and it would be 
better if we didn’t bother. This is the anti-Unicode argument, and it’s a weak 
argument, though it has the advantage of being internally consistent.
2. There is some genuine harm caused by learning non-ASCII scripts.

Your paragraph suggest that you really believe that learning to write in Kanji 
(logographic system) as opposed to Katakana (a syllabary with 48 
non-punctuation characters) somehow leads to active harm (your phrase was 
“become visually impaired”). I’m afraid that you’re really going to need to 
provide one hell of a citation for that, because that’s quite an extraordinary 
claim.

Otherwise, I’m afraid I have to say お先に失礼します.

Cory

Re: [Python-ideas] Proposal for default character representation

2016-10-14 Thread Chris Angelico
On Fri, Oct 14, 2016 at 6:53 PM, Mikhail V  wrote:
> On 13 October 2016 at 16:50, Chris Angelico  wrote:
>> On Fri, Oct 14, 2016 at 1:25 AM, Steven D'Aprano  wrote:
>>> On Thu, Oct 13, 2016 at 03:56:59AM +0200, Mikhail V wrote:
 and in long perspective when the world's alphabetical garbage will
 dissapear, two digits would be ok.
>>> Talking about "alphabetical garbage" like that makes you seem to be an
>>> ASCII bigot: rude, ignorant, arrogant and rather foolish as well. Even
>>> 7-bit ASCII has more than 100 characters (128).
>
> This is sort of rude. Are you from unicode consortium?

No, he's not. He just knows a thing or two.

>> Solution: Abolish most of the control characters. Let's define a brand
>> new character encoding with no "alphabetical garbage". These
>> characters will be sufficient for everyone:
>>
>> * [2] Formatting characters: space, newline. Everything else can go.
>> * [8] Digits: 01234567
>> * [26] Lower case Latin letters a-z
>> * [2] Vital social media characters: # (now officially called "HASHTAG"), @
>> * [2] Can't-type-URLs-without-them: colon, slash (now called both
>> "SLASH" and "BACKSLASH")
>>
>> That's 40 characters that should cover all the important things anyone
>> does - namely, Twitter, Facebook, and email. We don't need punctuation
>> or capitalization, as they're dying arts and just make you look
>> pretentious. I might have missed a few critical characters, but it
>> should be possible to fit it all within 64, which you can then
>> represent using two digits from our newly-restricted set; octal is
>> better than decimal, as it needs less symbols. (Oh, sorry, so that's
>> actually "50" characters, of which "32" are the letters. And we can
>> use up to "100" and still fit within two digits.)
>>
>> Is this the wrong approach, Mikhail?
>
> This is sort of correct approach. We do need punctuation however.
> And one does not need of course to make it too tight.
> So 8-bit units for text is excellent and enough space left for experiments.

... okay. I'm done arguing. Go do some translation work some time.
Here, have a read of some stuff I've written before.

http://rosuav.blogspot.com/2016/09/case-sensitivity-matters.html
http://rosuav.blogspot.com/2015/03/file-systems-case-insensitivity-is.html
http://rosuav.blogspot.com/2014/12/unicode-makes-life-easy.html

>> Perhaps we should go the other
>> way, then, and be *inclusive* of people who speak other languages.
>
> What keeps people from using same characters?
> I will tell you what - it is local law. If you go to school you *have* to
> write in what is prescribed by big daddy. If youre in europe or America, you 
> are
> more lucky. And if you're in China you'll be punished if you
> want some freedom. So like it or not, learn hieroglyphs
> and become visually impaired in age of 18.

Never mind about China and its political problems. All you need to do
is move around Europe for a bit and see how there are more sounds than
can be usefully represented. Turkish has a simple system wherein the
written and spoken forms have direct correspondence, which means they
need to distinguish eight fundamental vowels. How are you going to
spell those? Scandinavian languages make use of letters like "å"
(called "A with ring" in English, but identified by its sound in
Norwegian, same as our letters are - pronounced "Aww" or "Or" or "Au"
or thereabouts). To adequately represent both Turkish and Norwegian in
the same document, you *need* more letters than our 26.

>> Thanks to Unicode's rich collection of characters, we can represent
>> multiple languages in a single document;
>
> Can do it without unicode in 8-bit boundaries with tagged text,
> just need fonts for your language, of course if your
> local charset is less than 256 letters.

No, you can't. Also, you shouldn't. It makes virtually every text
operation impossible: you can't split and rejoin text without tracking
the encodings. Go try to write a text editor under your scheme and see
how hard it is.

> This is how it was before unicode I suppose.
> BTW I don't get it still what such revolutionary
> advantages has unicode compared to tagged text.

It's not tagged. That's the huge advantage.

>> script, but have different characters. Alphabetical garbage, or
>> accurate representations of sounds and words in those languages?
>
> Accurate with some 50 characters is more than enough.

Go build a chat room or something. Invite people to enter their names.
Now make sure you're courteous enough to display those names to
people. Try doing that without Unicode.

I'm done. None of this belongs on python-ideas - it's getting pretty
off-topic even for python-list, and you're talking about modifying
Python 2.7 which is a total non-starter anyway.

ChrisA

Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-14 Thread Neil Girdhar
Here's an interesting idea regarding yield **x:

Right now a function containing any yield returns a generator.  Therefore,
it works like a generator expression, which is the lazy version of a list
display.  Lists can only contain elements x and unpackings *x.  Therefore,
it would make sense to only have "yield x" and "yield *xs" (currently
spelled "yield from xs")

If one day, there was a motivation to provide a lazy version of a dict
display, then such a function would probably have "yield key: value" or
"yield **d".   Such a lazy dictionary is the map stage of the famous
mapreduce algorithm.  It might not make sense in single processor python,
but it might in distributed Python.
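The existing spellings these ideas map onto can be sketched today without any new syntax (the helper names here are made up for illustration):

```python
# "yield *xs" is currently spelled "yield from xs":
def flat():
    yield 0
    yield from [1, 2, 3]   # the proposal would spell this "yield *[1, 2, 3]"

assert list(flat()) == [0, 1, 2, 3]

# A "lazy dict display" (the map stage mentioned above) can already be
# approximated by a generator of (key, value) pairs, consumed by dict():
def lazy_items(d):
    for k, v in d.items():
        yield k, v.upper()   # transform values lazily, one pair at a time

assert dict(lazy_items({"a": "x", "b": "y"})) == {"a": "X", "b": "Y"}
```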

Best,

Neil

On Fri, Oct 14, 2016 at 3:34 AM Sjoerd Job Postmus 
wrote:

> On Fri, Oct 14, 2016 at 07:06:12PM +1300, Greg Ewing wrote:
> > Sjoerd Job Postmus wrote:
> > >I think the suggested spelling (`*`) is the confusing part. If it were
> > >to be spelled `from ` instead, it would be less confusing.
> >
> > Are you suggesting this spelling just for generator
> > comprehensions, or for list comprehensions as well?
> > What about dict comprehensions?
>
> For both generator, list and set comprehensions it makes sense, I think.
> For dict comprehensions: not so much. That in itself is already sign
> enough that probably the */** spelling would make more sense, while also
> allowing the `yield *foo` alternative to `yield from foo`. But what
> would be the meaning of `yield **foo`? Would that be `yield
> *foo.items()`? I have no idea.

Re: [Python-ideas] Proposal for default character representation

2016-10-14 Thread Mikhail V
On 13 October 2016 at 16:50, Chris Angelico  wrote:
> On Fri, Oct 14, 2016 at 1:25 AM, Steven D'Aprano  wrote:
>> On Thu, Oct 13, 2016 at 03:56:59AM +0200, Mikhail V wrote:
>>> and in long perspective when the world's alphabetical garbage will
>>> disappear, two digits would be ok.
>> Talking about "alphabetical garbage" like that makes you seem to be an
>> ASCII bigot: rude, ignorant, arrogant and rather foolish as well. Even
>> 7-bit ASCII has more than 100 characters (128).

This is sort of rude. Are you from the Unicode consortium?

> Solution: Abolish most of the control characters. Let's define a brand
> new character encoding with no "alphabetical garbage". These
> characters will be sufficient for everyone:
>
> * [2] Formatting characters: space, newline. Everything else can go.
> * [8] Digits: 01234567
> * [26] Lower case Latin letters a-z
> * [2] Vital social media characters: # (now officially called "HASHTAG"), @
> * [2] Can't-type-URLs-without-them: colon, slash (now called both
> "SLASH" and "BACKSLASH")
>
> That's 40 characters that should cover all the important things anyone
> does - namely, Twitter, Facebook, and email. We don't need punctuation
> or capitalization, as they're dying arts and just make you look
> pretentious. I might have missed a few critical characters, but it
> should be possible to fit it all within 64, which you can then
> represent using two digits from our newly-restricted set; octal is
> better than decimal, as it needs less symbols. (Oh, sorry, so that's
> actually "50" characters, of which "32" are the letters. And we can
> use up to "100" and still fit within two digits.)
>
> Is this the wrong approach, Mikhail?

This is sort of a correct approach. We do need punctuation, however.
And of course one does not need to make it too tight.
So 8-bit units for text are excellent, and enough space is left for experiments.

> Perhaps we should go the other
> way, then, and be *inclusive* of people who speak other languages.

What keeps people from using the same characters?
I will tell you what - it is local law. If you go to school you *have* to
write in what is prescribed by big daddy. If you're in Europe or America, you are
luckier. And if you're in China you'll be punished if you
want some freedom. So like it or not, you learn hieroglyphs
and become visually impaired at the age of 18.

> Thanks to Unicode's rich collection of characters, we can represent
> multiple languages in a single document;

One can do it without Unicode, within 8-bit boundaries, with tagged
text; you just need fonts for your language, of course, provided your
local charset is less than 256 letters.

This is how it was before Unicode, I suppose.
BTW, I still don't get what revolutionary
advantages Unicode has compared to tagged text.

> script, but have different characters. Alphabetical garbage, or
> accurate representations of sounds and words in those languages?

Accurate with some 50 characters is more than enough.

Mikhail


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-14 Thread Greg Ewing

Steven D'Aprano wrote:

> So why would yield *t give us this?
>
>     yield a; yield b; yield c
>
> By analogy with the function call syntax, it should mean:
>
>     yield (a, b, c)

This is a false analogy, because yield is not a function.


> However, consider the following spelling:
>
>     l = [from f(t) for t in iterable]

That sentence no verb!

In English, 'from' is a preposition, so one expects there
to be a verb associated with it somewhere. We currently
have 'from ... import' and 'yield from'.

But 'from f(t) for t in iterable' ... do what?

--
Greg


Re: [Python-ideas] Proposal for default character representation

2016-10-14 Thread Sjoerd Job Postmus
On Fri, Oct 14, 2016 at 08:05:40AM +0200, Mikhail V wrote:
> Any criticism of it? Besides not following the Unicode consortium.

Besides the other remarks on "tradition", I think this is where a big
problem lies: We should not deviate from a common standard (without
very good cause).

There are cases where a language does good by deviating from the common
standard. There are also cases where it is bad to deviate.

Almost all current programming languages understand unicode, for
instance:

* C: http://en.cppreference.com/w/c/language/escape
* C++: http://en.cppreference.com/w/cpp/language/escape
* JavaScript: 
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar_and_types#Using_special_characters_in_strings

and those were only the first 3 I tried. They all use `\u` followed by 4
hexadecimal digits.
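For instance, in Python (the same `\u` escape works essentially verbatim in the C, C++ and JavaScript examples linked above):

```python
# \u takes exactly 4 hexadecimal digits and names a code point:
s = "\u00e9"
assert s == "é"
assert ord(s) == 0xE9 == 233

# Code points beyond U+FFFF use \U with 8 hex digits instead:
assert "\U0001F40D" == "🐍"   # U+1F40D SNAKE
```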

You may not like the current standard. You may think/know/... it to be
suboptimal for human comprehension. However, what you are suggesting is
a very costly change. A change where --- I believe --- Python should not
take the lead, but also should not be afraid to follow if other
programming languages start to change.

I would suggest that this is a change that might be best proposed to the
unicode consortium itself, instead of going to (just) a programming
language.

It'd be interesting to see whether or not you can convince the unicode
consortium that 8 symbols will be enough.


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-14 Thread Greg Ewing

Paul Moore wrote:

> 3. *fn(x) isn't an expression, and yet it *looks* like it should be ...
>
> To me, that suggests it would be hard to teach.

It's not an expression in any of the other places it's
used, either. Is it hard to to teach in those cases as
well?

--
Greg


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-14 Thread Greg Ewing

Paul Moore wrote:

> PS I can counter a suggestion of using *f(t) rather than from f(t) in
> the above, by saying that it adds yet another meaning to the already
> heavily overloaded * symbol.


We've *already* given it that meaning in non-comprehension
list displays, though, so we're not really adding any new
meanings for it -- just allowing it to have that meaning in
a place where it's currently disallowed.

Something I've just noticed -- the Language Reference actually
defines both ordinary list displays and list comprehensions
as "displays", and says that a display can contain either a
comprehension or an explicit list of values. It has to go
out of its way a bit to restrict the * form to
non-comprehensions.
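A quick sketch of the current state of affairs described above - unpacking is legal in explicit displays but rejected in comprehensions:

```python
# Star-unpacking already works in ordinary (non-comprehension) displays:
a = [2, 3]
assert [1, *a, 4] == [1, 2, 3, 4]    # list display
assert {0, *a} == {0, 2, 3}          # set display
assert (*a, 5) == (2, 3, 5)          # tuple display

# ...but the same form inside a comprehension is a SyntaxError today:
try:
    compile("[*x for x in [[1, 2]]]", "<example>", "eval")
except SyntaxError:
    pass
else:
    raise AssertionError("expected a SyntaxError")
```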

--
Greg


Re: [Python-ideas] Proposal for default character representation

2016-10-14 Thread Mikhail V
On 13 October 2016 at 12:05, Cory Benfield  wrote:
>
> integer & 0x00FFFFFF  # Hex
> integer & 16777215  # Decimal
> integer & 0o77777777  # Octal
> integer & 0b111111111111111111111111  # Binary
>
> The octal representation is infuriating because one octal digit refers to 
> *three* bits

Correct; that makes it less nice-looking and less friendly to the
8-bit paradigm. It does not, however, make it a bad option in general,
and according to my personal suppositions and work on glyph
development, the optimal set is exactly 8 glyphs.
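For reference, the 24-bit mask quoted above can be written in all four bases (digits filled in here for illustration; the decimal value in the quote pins them down):

```python
# One 24-bit mask, four notations - all the same integer:
mask = 0x00FFFFFF
assert mask == 16777215 == 0o77777777 == 0b111111111111111111111111

# Applying it keeps the low three bytes of a value:
assert (0xDEADBEEF & mask) == 0xADBEEF
```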

> Decimal is no clearer.

In the same problematic alignment context, yes, correct.

> Binary notation seems like the solution, ...

Agree with you; see my last reply to Greg for more thoughts on bitstrings
and the quaternary approach.

> IIRC there’s some new syntax coming for binary literals
> that would let us represent them as 0b___

Very good. Healthy attitude :)
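The syntax alluded to landed as PEP 515 (accepted for Python 3.6, shortly after this thread): underscores are allowed in numeric literals, and format() can group digits with them. A small sketch:

```python
# PEP 515 (Python 3.6+): underscores in binary (and other) literals
# make long bit patterns much easier to read and edit:
mask = 0b1111_0000_1111_0000
assert mask == 0xF0F0

# Grouping also works in the other direction, when formatting output:
assert format(mask, "_b") == "1111_0000_1111_0000"
```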

> less dense and loses clarity for many kinds of unusual bit patterns.

It is not very clear to me what exactly is meant about patterns.

> Additionally, as the number of bits increases life gets really hard:
> masking out certain bits of a 64-bit number requires

The very editing of such a bitmask in hex notation makes life hard.
Editing it in binary notation makes life easier.

> a literal that’s at least 66 characters long,

Length is a feature of binary, though it is not a major issue;
see my ideas on it in my reply to Greg.

> Hexadecimal has the clear advantage that each character wholly represents 4 
> bits,

This advantage is brevity, but one needs slightly less brevity to make
it more readable.
So what do you think about base 4 ?

> This is a very long argument to suggest that your
> argument against hexadecimal literals
> (namely, that they use 16 glyphs as opposed
> to the 10 glyphs used in decimal)
> is an argument that is too simple to be correct.

I didn't understand this, sorry :)))
You're welcome to ask more if you're interested in this.

> Different collections of glyphs are clearer in different contexts.
How many different collections, and how many different contexts?

> while the english language requires 26 glyphs plus punctuation.

It does not *require* them, but of course 8 glyphs would not suffice to
effectively read the language, so one finds a way to extend the glyph set.
Roughly speaking, 20 letters is enough, but this is not an exact science.
And it is quite a hard science.

> But I don’t think you’re seriously proposing we should
> swap from writing English using the larger glyph set
> to writing it in decimal representation of ASCII bytes.

I didn't understand this sentence :)

In general I think we agree on many points, thank you for the input!

Cheers,
Mikhail

Re: [Python-ideas] Proposal for default character representation

2016-10-14 Thread Jonathan Goble
On Fri, Oct 14, 2016 at 1:54 AM, Steven D'Aprano  wrote:
>> and:
>>
>> o = ord (c)
>> if 100 <= o <= 150:
>
> Which is clearly not the same thing, and better written as:
>
> if "d" <= c <= "\x96":
> ...

Or, if you really want to use ord(), you can use hex literals:

o = ord(c)
if 0x64 <= o <= 0x96:
...
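For a single character, the decimal, hex, and direct-string range tests are all equivalent, since comparing one-character strings compares their code points - a quick check:

```python
c = "e"          # an illustrative character; ord("e") == 101
o = ord(c)

# Decimal, hex, and string comparison all express the same range test:
assert (100 <= o <= 150) == (0x64 <= o <= 0x96) == ("d" <= c <= "\x96")
```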


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-14 Thread Greg Ewing

Sjoerd Job Postmus wrote:

> I think the suggested spelling (`*`) is the confusing part. If it were
> to be spelled `from ` instead, it would be less confusing.


Are you suggesting this spelling just for generator
comprehensions, or for list comprehensions as well?
What about dict comprehensions?

--
Greg


Re: [Python-ideas] Proposal for default character representation

2016-10-14 Thread Mikhail V
On 13 October 2016 at 10:18, M.-A. Lemburg  wrote:

> I suppose you did not intend everyone to have to write
> \u010 just to get a newline code point to avoid the
> ambiguity.

OK, there are different usage cases.
So in short, without going into detail:
for example if I need to type in a unicode
string literal in ASCII editor I would find such notation
replacement beneficial for me:

u'\u0430\u0431\u0432.txt'
-->
u"{1072}{1073}{1074}.txt"

Printing could be the same, I suppose.
I use Python 2.7, and printing
with numbers instead of non-ASCII would help me see
where I have non-ASCII chars. But I think the print
behavior must be easily configurable.
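For what it's worth, Python 3's ascii() already shows the escape form on demand, which covers the "see where I have non-ASCII chars" use case without changing the default representation (repr() of a unicode string behaves similarly in 2.7, with hex rather than decimal numbers):

```python
# ascii() renders any non-ASCII characters as \u escapes:
name = "\u0430\u0431\u0432.txt"          # абв.txt
assert ascii(name) == "'\\u0430\\u0431\\u0432.txt'"

# And the decimal code points preferred above are one call away:
assert [ord(c) for c in name[:3]] == [1072, 1073, 1074]
```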

Any criticism of it? Besides not following the Unicode consortium.
Also, I would not even mind fixed-width 7-digit decimals, actually.
Ugly, but still better for me than hex.

Mikhail


Re: [Python-ideas] Fwd: unpacking generalisations for list comprehension

2016-10-14 Thread Greg Ewing

Neil Girdhar wrote:

> At the end of this discussion it might be good to get a tally of how
> many people think the proposal is reasonable and logical.


I think it's reasonable and logical.

--
Greg