Re: [Python-ideas] Show more info when `python -vV`

2016-10-15 Thread Serhiy Storchaka

On 14.10.16 10:40, INADA Naoki wrote:

When reporting an issue to some project and wanting to include the
Python version in the report, python -V shows very limited information.

$ ./python.exe -V
Python 3.6.0b2+

sys.version is more usable, but it requires a one-liner.

$ ./python.exe -c 'import sys; print(sys.version)'
3.6.0b2+ (3.6:86a1905ea28d+, Oct 13 2016, 17:58:37)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.38)]

How about `python -vV` shows sys.version?


Are there precedents for combining the verbose and version options in 
other programs?


PyPy just outputs sys.version for the --version option.

$ pypy -V
Python 2.7.10 (5.4.1+dfsg-1~ppa1~ubuntu16.04, Sep 06 2016, 23:11:39)
[PyPy 5.4.1 with GCC 5.4.0 20160609]

I think it would not be a large breakage if new releases of CPython 
started outputting extended version information by default.
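In the meantime, the extended report can be assembled from the stdlib; a sketch of the one-liner route, with platform.platform() added as an illustrative extra (invoke whichever interpreter you are reporting on):

```shell
# Print the full interpreter version string plus platform details,
# roughly what PyPy's -V already shows in one go.
python3 -c 'import sys, platform; print(sys.version); print(platform.platform())'
```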



___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Random832
On Fri, Oct 14, 2016, at 22:38, Steven D'Aprano wrote:
> On Thu, Oct 13, 2016 at 05:30:49PM -0400, Random832 wrote:
> 
> > Frankly, I don't see why the pattern isn't obvious
> 
> *shrug*
> 
> Maybe your inability to look past your assumptions and see things from 
> other people's perspective is just as much a blind spot as our inability 
> to see why you think the pattern is obvious. We're *all* having 
> difficulty in seeing things from the other side's perspective here.
> 
> Let me put it this way: as far as I am concerned, sequence unpacking is 
> equivalent to manually replacing the sequence with its items:

And as far as I am concerned, comprehensions are equivalent to manually
creating a sequence/dict/set consisting of repeating the body of the
comprehension to the left of "for" with the iteration variable[s]
replaced in turn with each actual value.

> t = (1, 2, 3)
> [100, 200, *t, 300]
> 
> is equivalent to replacing "*t" with "1, 2, 3", which gives us:
> 
> [100, 200, 1, 2, 3, 300]

I don't understand why it's not _just as simple_ to say:

t = ('abc', 'def', 'ghi')
[*x for x in t]

is equivalent to replacing "x" in "*x" with, each in turn, 'abc', 'def',
and 'ghi', which gives us:

[*'abc', *'def', *'ghi']

just like [f(x) for x in t] would give you [f('abc'), f('def'),
f('ghi')]
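The expansion described above can be checked today with the equivalent nested-loop spelling (the comprehension below is an illustration of the claimed result, not the proposed syntax):

```python
t = ('abc', 'def', 'ghi')

# Proposed: [*x for x in t]
# Today's equivalent spelling, flattening each string into characters:
flattened = [c for x in t for c in x]

assert flattened == list('abcdefghi')
```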

> That's nice, simple, it makes sense, and it works in sufficiently recent 
> Python versions.

That last bit is not an argument - every new feature works in
sufficiently recent python versions. The only difference for this
proposal (provided it is approved) is that the sufficiently recent
python versions simply don't exist yet.


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Steven D'Aprano
On Thu, Oct 13, 2016 at 11:32:49PM -0400, Random832 wrote:
> On Thu, Oct 13, 2016, at 18:15, Steven D'Aprano wrote:
> > Consider the analogy with f(*t), where t = (a, b, c). We *don't* have:
> > 
> > f(*t) is equivalent to f(a); f(b); f(c)
> 
> I don't know where this "analogy" is coming from.

I'm explicitly saying that we DON'T have that behaviour with function 
calls. f(*t) is NOT expanded to f(a), f(b), f(c). I even emphasised the 
"don't" part of my sentence above.

And yet, this proposal wants to expand

[*t for t in iterable]

into the equivalent of:

result = []
for t in iterable:
    a, b, c = *t
    result.append(a)
    result.append(b)
    result.append(c)

Three separate calls to append, analogous to three separate calls to 
f(). The point I am making is that this proposed change is *not* 
analogous to the way sequence unpacking works in other contexts. I'm 
sorry if I wasn't clear enough.


[...]
> > Indeed. The reader may be forgiven for thinking that this is yet another 
> > unrelated and arbitrary use of * to join the many other uses:
> 
> How is it arbitrary? 

It is arbitrary because the suggested use of *t in list comprehensions 
has no analogy to the use of *t in other contexts.

As far as I can see, this is not equivalent to the way sequence 
(un)packing works on *either* side of assignment. It's not equivalent to 
the way sequence unpacking works in function calls, or in list displays. 
It's this magical syntax which turns a virtual append() into extend():

# [t for t in iterable]
result = []
for t in iterable:
    result.append(t)

# but [*t for t in iterable]
result = []
for t in iterable:
    result.extend(t)


or, if you prefer, keep the append but magically add an extra for-loop:

# [*t for t in iterable]
result = []
for t in iterable:
    for x in t:
        result.append(x)
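The two virtual transformations can be run side by side to confirm they agree (the sample data below is illustrative):

```python
iterable = [(1, 2), (3,), (4, 5, 6)]

# extend-based expansion: append becomes extend
result_extend = []
for t in iterable:
    result_extend.extend(t)

# append-based expansion: an extra inner loop is inserted
result_append = []
for t in iterable:
    for x in t:
        result_append.append(x)

assert result_extend == result_append == [1, 2, 3, 4, 5, 6]
```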


> > - mathematical operator;
> > - glob and regex wild-card;
> > - unpacking;
> 
> This is unpacking. It unpacks the results into the destination.

If it were unpacking as it is understood today, with no other changes, 
it would be a no-op. (To be technical, it would convert whatever 
iterable t is into a tuple.) I've covered that in an earlier post: if 
you replace *t with the actual items of t, you DON'T get:

result = []
for t in iterable:
    a, b, c = *t  # assuming t has three items, as per above
    result.append(a)
    result.append(b)
    result.append(c)

as desired, but:

result = []
for t in iterable:
    a, b, c = *t
    result.append((a, b, c))


which might as well be a no-op.

To make this work, the "unpacking operator" needs to do more than just 
unpack. It has to either change append into extend, or equivalently, add 
an extra for loop into the list comprehension.



> There's a straight line from [*t, *u, *v] to [*x for x in (t, u, v)].
> What's surprising is that it doesn't work now.

I'm not surprised that it doesn't work. I expected that it wouldn't 
work. When I first saw the suggestion, I thought "That can't possibly be 
meaningful, it should be an error."

Honestly Random832, I cannot comprehend how you see this as a 
straightforward obvious extension from existing behaviour. To me, this 
is nothing like the existing behaviour, and it contradicts the way 
sequence unpacking works everywhere else.

I do not understand the reasoning you use to conclude that this is a 
straight-line extension to the current behaviour. Nothing I have seen in 
any of this discussion justifies that claim to me. I don't know what you 
are seeing that I cannot see. My opinion is that you're seeing things 
that aren't there -- I expect that your opinion is that I'm blind.


> I think last month we even had someone who didn't know about 'yield
> from' propose 'yield *x' for exactly this feature. It is intuitive - it
> is a straight-line extension of the unpacking syntax.

Except for all the folks who have repeatedly said that it is 
counter-intuitive, that it is a twisty, unexpected, confusing path from 
the existing behaviour to this proposal.



-- 
Steve


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Steven D'Aprano
On Fri, Oct 14, 2016 at 08:29:28PM +1300, Greg Ewing wrote:
> Steven D'Aprano wrote:
> 
>  So why would yield *t give us this?
> >So why would yield *t give us this?
> >yield a; yield b; yield c
> >
> >By analogy with the function call syntax, it should mean:
> >
> >yield (a, b, c)
> 
> This is a false analogy, because yield is not a function.

Neither are list comprehensions or sequence unpacking in the context of 
assignment:

a, b, c = *t

Not everything is a function. What's your point?

As far as I can see, in *every* other use of sequence unpacking, *t is 
conceptually replaced by a comma-separated sequence of items from t. If 
the starred item is on the left-hand side of the = sign, we might call 
it "sequence packing" rather than unpacking, and it operates to collect 
unused items, just like *args does in function parameter lists.
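Both of those existing collecting uses can be demonstrated directly (the sample values are illustrative):

```python
# Sequence packing on the left of = collects surplus items into a list:
a, *rest = (1, 2, 3, 4)
assert a == 1 and rest == [2, 3, 4]

# *args in a parameter list collects surplus positional arguments
# into a tuple:
def f(first, *args):
    return first, args

assert f(1, 2, 3) == (1, (2, 3))
```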

Neither of these are even close to what the proposed [*t for t in 
iterable] will do.


> >>However, consider the following spelling:
> >>
> >>   l = [from f(t) for t in iterable]
> 
> That sentence no verb!
> 
> In English, 'from' is a preposition, so one expects there
> to be a verb associated with it somewhere. We currently
> have 'from ... import' and 'yield from'.
> 
> But 'from f(t) for t in iterable' ... do what?

*shrug* 

I'm not married to this suggestion. It could be written 
[MAGIC!!! HAPPENS!!! HERE!!! t for t in iterable] if you prefer.
The suggestion to use "from" came from Sjoerd Job Postmus, not me.


-- 
Steve


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Steven D'Aprano
On Fri, Oct 14, 2016 at 07:51:18AM +, Neil Girdhar wrote:
> Here's an interesting idea regarding yield **x:
> 
> Right now a function containing any yield returns a generator.  Therefore,
> it works like a generator expression, which is the lazy version of a list
> display.  lists can only contain elements x and unpackings *x.  Therefore,
> it would make sense to only have "yield x" and "yield *xs" (currently
> spelled "yield from xs")

No, there's no "therefore" about it. "yield from x" is not the same as 
"yield *x". 


*x is conceptually equivalent to replacing "*x" with a 
comma-separated sequence of individual items from x.

Given x = (1, 2, 3):


f(*x) is like f(1, 2, 3)

[100, 200, *x, 300] is like [100, 200, 1, 2, 3, 300]

a, b, c, d = 100, *x is like a, b, c, d = 100, 1, 2, 3
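Those three equivalences hold as written and can be checked directly (f is an illustrative argument-capturing function):

```python
x = (1, 2, 3)

def f(*args):
    # capture positional arguments so calls can be compared
    return args

assert f(*x) == f(1, 2, 3)
assert [100, 200, *x, 300] == [100, 200, 1, 2, 3, 300]

a, b, c, d = 100, *x
assert (a, b, c, d) == (100, 1, 2, 3)
```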

Now replace "yield *x" with "yield 1, 2, 3". Conveniently, that syntax 
already works:

py> def gen():
... yield 1, 2, 3
...
py> it = gen()
py> next(it)
(1, 2, 3)


"yield *x" should not be the same as "yield from x". Yielding a starred 
expression currently isn't allowed, but if it were allowed, it would be 
pointless: it would be the same as unpacking x, then repacking it into a 
tuple.

Either that, or we would have yet another special meaning for * 
unrelated to the existing meanings.
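The contrast can be made concrete with two small generators (the names are illustrative):

```python
def yields_tuple():
    # By the unpacking analogy, "yield *x" would behave like this:
    yield 1, 2, 3  # yields a single tuple

def yields_each():
    # "yield from" yields the items one at a time:
    yield from (1, 2, 3)

assert list(yields_tuple()) == [(1, 2, 3)]
assert list(yields_each()) == [1, 2, 3]
```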




-- 
Steve


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Random832
On Sat, Oct 15, 2016, at 04:00, Steven D'Aprano wrote:
> > This is unpacking. It unpacks the results into the destination.
> 
> If it were unpacking as it is understood today, with no other changes, 
> it would be a no-op. (To be technical, it would convert whatever 
> iterable t is into a tuple.) 

If that were true, it would be a no-op everywhere.

> I've covered that in an earlier post: if 
> you replace *t with the actual items of t, you DON'T get:

Replacing it _with the items_ is not the same thing as replacing it
_with a sequence containing the items_, and you're trying to pull a fast
one by claiming it is, relying on the fact that the "equivalent loop"
(which is, and has always been, a mere fiction rather than a real
transformation actually performed by the interpreter) happens to use a
sequence of tokens that would cause a tuple to be created if a comma
appears in the relevant position.

> To make this work, the "unpacking operator" needs to do more than just 
> unpack. It has to either change append into extend,

Yes, that's what unpacking does. In every context where unpacking means
anything at all, it does something to arrange for the sequence's
elements to be included "unbracketed" in the context it's being
ultimately used in. It's no different from changing BUILD_LIST
(equivalent to creating an empty list and appending each item) to
BUILD_LIST_UNPACK (equivalent to creating an empty list and extending
with each item).

Imagine that we were talking about ordinary list displays, and for some
reason had developed a tradition of explaining them in terms of
"equivalent" code the way we do for comprehensions.

x = [a, b, c] is equivalent to:
x = list()
x.append(a)
x.append(b)
x.append(c)

So now if we replace c with *c [where c == [d, e]], must we now say
this?
x = list()
x.append(a)
x.append(b)
x.append(d, e)

Well, that's just not valid at all. Clearly we must reject this
ridiculous notion of allowing starred expressions within list displays,
because we _can't possibly_ change the transformation to accommodate the
new feature.
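For what it's worth, the amended "equivalent code", with the last append promoted to extend, does reproduce what starred list displays actually do today (a sketch with illustrative values):

```python
a, b = 1, 2
c = [3, 4]

# The display [a, b, *c] is explained by switching append to extend
# for the starred element:
x = list()
x.append(a)
x.append(b)
x.extend(c)

assert x == [a, b, *c] == [1, 2, 3, 4]
```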


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Steven D'Aprano
On Fri, Oct 14, 2016 at 06:23:32PM +1300, Greg Ewing wrote:

> To maintain the identity
> 
>   list(*x for x in y) == [*x for x in y]
> 
> it would be necessary for the *x in (*x for x in y) to expand
> to "yield from x".

Oh man, you're not even trying to be persuasive any more. You're just 
assuming the result that you want, then declaring that it is 
"necessary".  :-(


I have a counter proposal: suppose *x is expanded to the string literal 
"Nope!". Then, given y = (1, 2, 3) (say):

list(*x for x in y)

gives ["Nope!", "Nope!", "Nope!"], and 

[*x for x in y]

also gives ["Nope!", "Nope!", "Nope!"]. Thus the identity is kept, and 
your claim of "necessity" is disproven.

We already know what *x should expand to: nearly everywhere else, *x is 
conceptually replaced by a comma-separated sequence of the items of x. 
That applies to function calls, sequence unpacking and list displays. 

The only exceptions I can think of are *args parameters in function 
parameter lists, and sequence packing on the left side of an assignment, 
both of which work in similar fashions.

But not this proposal: it wouldn't work like either of the above, hence 
it would be yet another unrelated use of the * operator for some 
special meaning.



-- 
Steve


Re: [Python-ideas] Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Steven D'Aprano
On Thu, Oct 13, 2016 at 01:30:45PM -0700, Neil Girdhar wrote:

> From a CPython implementation standpoint, we specifically blocked this code 
> path, and it is only a matter of unblocking it if we want to support this.

I find that difficult to believe. The suggested change seems like it 
should be much bigger than just removing a block. Can you point us to 
the relevant code?

In any case, it isn't really the difficulty of implementation that is 
being questioned. Many things are easy to implement, but we still 
don't do them. The real questions here are:

(1) Should we overload list comprehensions as sugar for a flatten() 
function?

(2) If so, should we spell that [*t for t in iterable]?


Actually the answer to (1) should be "we already do". We just spell it:

[x for t in iterable for x in t]
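That spelling agrees with the stdlib's own flattening tool, itertools.chain.from_iterable:

```python
from itertools import chain

iterable = [(1, 2), (3, 4)]

# the nested-loop comprehension from above
nested = [x for t in iterable for x in t]
# the stdlib spelling of the same flatten()
chained = list(chain.from_iterable(iterable))

assert nested == chained == [1, 2, 3, 4]
```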



-- 
Steve


Re: [Python-ideas] Proposal for default character representation

2016-10-15 Thread Steven D'Aprano
On Sat, Oct 15, 2016 at 01:42:34PM +1300, Greg Ewing wrote:
> Steven D'Aprano wrote:
> >That's because some sequence of characters 
> >is being wrongly interpreted as an emoticon by the client software.
> 
> The only thing wrong here is that the client software
> is trying to interpret the emoticons.
>
> Emoticons are for *humans* to interpret, not software.
> Subtlety and cleverness is part of their charm. If you
> blatantly replace them with explicit images, you crush
> that.

Heh :-)

I agree with you. But so long as people want, or at least phone and 
software developers think people want, graphical smiley faces and 
dancing paperclips and piles of poo, then emoticons are a distinctly 
more troublesome way of dealing with them.



-- 
Steve


[Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Martti Kühne
On Sat, Oct 15, 2016 at 10:09 AM, Steven D'Aprano  wrote:
> Not everything is a function. What's your point?
>
> As far as I can see, in *every* other use of sequence unpacking, *t is
> conceptually replaced by a comma-separated sequence of items from t. If
> the starred item is on the left-hand side of the = sign, we might call
> it "sequence packing" rather than unpacking, and it operates to collect
> unused items, just like *args does in function parameter lists.
>


You brush over the fact that *t is not limited to a replacement by a
comma-separated sequence of items from t, but *t is actually a
replacement by that comma-separated sequence of items from t INTO an
external context. For func(*t) to work, all the elements of t are kind
of "leaked externally" into the function argument list's context, and
for {**{'a': 1, 'b': 2, ...}} the inner dictionary's items are kind of
"leaked externally" into the outer's context.

You can think of the */** operators as a promotion from append to
extend, but another way to see this is as a promotion from yield to
yield from. So if, instead of appending items to a comprehension as is
done with [yield_me for yield_me in iterator], you want their contents
spliced in, you can see this new piece as a means to [*yield_from_me
for yield_from_me in iterator]. FWIW, I think it's a bit confusing
that yield needs a different keyword if these asterisk operators
already have this outspoken promotion effect.

Besides, [*thing for thing in iterable_of_iters if cond] has this cool
potential for the existing any() and all() builtins for cond, where a
decision can be made based on the composition of the thing, which is
itself iterable.
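The append-to-extend promotion described here can already be spelled with a generator today; a sketch reusing the variable names from above:

```python
def promoted(iterator):
    # Per-item "yield from" plays the role the proposed *x would play
    # in [*x for x in iterator]:
    for yield_from_me in iterator:
        yield from yield_from_me

assert list(promoted([(1, 2), (3,)])) == [1, 2, 3]
```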

cheers!
mar77i


[Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Martti Kühne
On Sat, Oct 15, 2016 at 10:09 AM, Steven D'Aprano  wrote:
> Not everything is a function. What's your point?
>
> As far as I can see, in *every* other use of sequence unpacking, *t is
> conceptually replaced by a comma-separated sequence of items from t. If
> the starred item is on the left-hand side of the = sign, we might call
> it "sequence packing" rather than unpacking, and it operates to collect
> unused items, just like *args does in function parameter lists.
>


You brush over the fact that *t is not limited to a replacement by a
comma-separated sequence of items from t, but *t is actually a
replacement by that comma-separated sequence of items from t INTO an
external context. For func(*t) to work, all the elements of t are kind
of "leaked externally" into the function argument list's context, and
for {**{'a': 1, 'b': 2, ...}} the inner dictionary's items are kind of
"leaked externally" into the outer's context.

You can think of the */** operators as a promotion from append to
extend, but another way to see this is as a promotion from yield to
yield from. So if, instead of appending items to a comprehension as is
done with [yield_me for yield_me in iterator], you want their contents
spliced in, you can see this new piece as a means to [*yield_from_me
for yield_from_me in iterator]. Therefore I think it's a bit confusing
that yield needs a different keyword if these asterisk operators
already have this intuitive promotion effect.

Besides, [*thing for thing in iterable_of_iters if cond] has this cool
potential for the existing any() and all() builtins for cond, where a
decision can be made based on the composition of the thing, which is
itself iterable.

cheers!
mar77i


Re: [Python-ideas] Null coalescing operator

2016-10-15 Thread Sven R. Kunze

On 15.10.2016 08:10, Nick Coghlan wrote:

However, it's also the case that where we *do* have a well understood
and nicely constrained problem, it's still better to complain loudly
when data is unexpectedly missing, rather than subjecting ourselves to
the pain of having to deal with detecting problems with our data far
away from where we introduced those problems. A *lot* of software
still falls into that category, especially custom software written to
meet the needs of one particular organisation.


Definitely true. Stricter rules are similar to "fail early", "no errors 
should pass silently" and the like. Python has conveyed this stance for 
as long as I have known it.



My current assumption is that those of us that now regularly need to
deal with semi-structured data are thinking "Yes, these additions are
obviously beneficial and improve Python's expressiveness, if we can
find an acceptable spelling". Meanwhile, folks dealing primarily with
entirely structured or entirely unstructured data are scratching their
heads and asking "What's the big deal? How could it ever be worth
introducing more line noise into the language just to make this kind
of code easier to write?"


That's where I like to see a common middle ground between those two 
sides of the table.



I have been working with both sides for years now. In my experience, it's 
best to avoid semi-structured data entirely to keep the code simple. As we 
all know and as you described, the world isn't perfect, and I can only 
agree. However, what has served us best in recent years is to keep the 
"semi-" out of the inner workings of our codebase. So, handling "semi-" 
at the system boundary proved to be a reliable way of not breaking 
everything and of keeping our devs sane.


I am unsure how to implement such a solution, whether via PEP 8 or via the 
proposal's PEP. It somehow reminds me of the sans-IO idea, where the core 
logic should be simple/linear code and the difficult/problematic issues 
are solved at the system boundary.



This said, let me put it differently by using an example. I can find 
None-aware operators very useful at the outermost function/methods of a 
process/library/class/module:


class FanzyTool:
    def __init__(self, option1=None, option2=None, ...):
        # what happens when option6 and option7 are None
        # and it only matters when option 3 is not None
        # but when ...

Internal functions/methods/modules/classes and even processes/threads 
should have clear, non-wishy-washy inputs and outputs (not least to 
allow unit testing at a relatively sane level).


    def _append_x(self, s):
        return s + 'x'  # strawman operation

Imagine that s is something important that gets passed around many times 
inside "FanzyTool". The whole process usually makes no sense at all 
when s is None, and having each internal method check for None gets 
messy fast.
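A minimal sketch of the "handle None at the boundary" pattern described above, reusing the strawman FanzyTool/_append_x names (the simplified constructor signature is my own):

```python
class FanzyTool:
    def __init__(self, s, option1=None):
        # Boundary check: fail early instead of letting None leak inward.
        if s is None:
            raise ValueError("s must not be None")
        self._s = s
        self._option1 = option1

    def _append_x(self, s):
        # Internal methods may now assume s is a real string.
        return s + 'x'  # strawman operation

    def run(self):
        return self._append_x(self._s)

assert FanzyTool("abc").run() == "abcx"
```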



I hope we can also convey this issue properly when we find an 
appropriate syntax.



Even the PEP's title is arguably a problem on that front - "None-aware
operators" is a proposed *solution* to the problem of making
semi-structured data easier to work with in Python, and hence reads
like a solution searching for a problem to folks that don't regularly
encounter these issues themselves.

Framing the problem that way also provides a hint on how we could
*document* these operations in the language reference in a readily
comprehensible way: "Operators for working with semi-structured data"


That's indeed an extraordinarily good title, as it best describes what we 
intend it to be used for (common usage scenarios). +1


Regards,
Sven


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Greg Ewing

Steven D'Aprano wrote:


t = (1, 2, 3)
iterable = [t]
[*t for t in iterable]

If you do the same manual replacement, you get:

[1, 2, 3 for t in iterable]


Um, no, you need to also *remove the for loop*, otherwise
you get complete nonsense, whether * is used or not.

Let's try a less degenerate example, both ways.

    iterable = [1, 2, 3]
    [t for t in iterable]

To expand that, we replace t with each of the values
generated by the loop and put commas between them:

    [1, 2, 3]

Now with the star:

    iterable = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
    [*t for t in iterable]

Replace *t with each of the sequences generated by the
loop, with commas between:

    [1,2,3 , 4,5,6 , 7,8,9]

Maybe your inability to look past your assumptions and see things from 
other people's perspective is just as much a blind spot as our inability 
to see why you think the pattern is obvious.


It's obvious that you're having difficulty seeing what
we're seeing, but I don't know how to explain it any
more clearly, I'm sorry.

--
Greg



Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Greg Ewing

Martti Kühne wrote:

You brush over the fact that *t is not limited to a replacement by a
comma-separated sequence of items from t, but *t is actually a
replacement by that comma-separated sequence of items from t INTO an
external context.


Indeed. In situations where there isn't any context for
the interpretation of *, it's not allowed. For example:

>>> x = *(1, 2, 3)
  File "<stdin>", line 1
SyntaxError: can't use starred expression here

But

>>> x = 1, *(2, 3)
>>> x
(1, 2, 3)

The * is allowed there because it's already in a context
where a comma-separated list has meaning.

--
Greg


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Steven D'Aprano
On Sat, Oct 15, 2016 at 04:42:13AM -0400, Random832 wrote:
> On Sat, Oct 15, 2016, at 04:00, Steven D'Aprano wrote:
> > > This is unpacking. It unpacks the results into the destination.
> > 
> > If it were unpacking as it is understood today, with no other changes, 
> > it would be a no-op. (To be technical, it would convert whatever 
> > iterable t is into a tuple.) 
> 
> If that were true, it would be a no-op everywhere.

That's clearly not the case.

x = (1, 2, 3)
[100, 200, *x, 300]

If you do it the way I say, and replace *x with the individual items of 
x, you get this:

[100, 200, 1, 2, 3, 300]

which conveniently happens to be what you already get in Python. You 
claim that if I were right it would be a no-op -- that doesn't follow. 
Why would it be a no-op? I've repeatedly shown the transformation to 
use, and it clearly does what I say it should. How could it not?



> > I've covered that in an earlier post: if 
> > you replace *t with the actual items of t, you DON'T get:
> 
> Replacing it _with the items_ is not the same thing as replacing it
> _with a sequence containing the items_, 

I don't think I ever used the phrasing "a sequence containing the 
items". I think that's *your* phrase, not mine.

I may have said "with the sequence of items" or words to that effect. 
These two phrases do have different meanings:

x = (1, 2, 3)
[100, 200, *x, 300]

# Replace *x with "a sequence containing items of x"
[100, 200, [1, 2, 3], 300]

# Replace *x with "the sequence of items of x"
[100, 200, 1, 2, 3, 300]


Clearly they have different meanings. I'm confident that I've always 
made it clear that I'm referring to the second, not the first, but I'm 
only human and if I've been unclear or used the wrong phrasing, my 
apologies.

But nit-picking about the exact phrasing used aside, it is clear that 
expanding the *t in a list comprehension:

[*t for t in iterable]

to flatten the iterable cannot be analogous to this. Whatever 
explanation you give for why *t expands the list comprehension, it 
cannot be given in terms of replacing *t with the items of t. There has 
to be some magic to give it the desired special behaviour.


> and you're trying to pull a fast
> one by claiming it is by using the fact that the "equivalent loop"
> (which is and has always been a mere fiction, not a real transformation
> actually performed by the interpreter) happens to use a sequence of
> tokens that would cause a tuple to be created if a comma appears in the
> relevant position.

I don't know what "equivalent loop" you are accusing me of misusing.

The core developers have made it absolutely clear that changing the 
fundamental equivalence of list comps as syntactic sugar for:

result = []
for t in iterable:
    result.append(t)


is NOT NEGOTIABLE. (That is much to my disappointment -- I would love to 
introduce a "while" version of list comps to match the "if" version, but 
that's not an option.)

So regardless of whether it is a fiction or an actual loop, Python's 
list comprehensions are categorically limited to behave equivalently to 
the loop above (modulo scope, temporary variables, etc). If you want to 
change that -- change the append to an extend, for example -- you need 
to make a case for that change which is strong enough to overcome 
Guido's ruling. (Maybe Guido will be willing to bend his ruling to allow 
extend as well.)


There are three ways to get the desired flatten() behaviour from 
a list comp. One way is to explicitly add a second loop, which has the 
benefit of already working:

[x for t in iterable for x in t]


Another is to swap out the append for an extend:

[*t for t in iterable]

# real or virtual transformation, it doesn't matter
result = []
for t in iterable:
    result.extend(t)


And the third is to keep the append but insert an extra virtual loop:

# real or virtual transformation, it still doesn't matter
result = []
for t in iterable:
    for x in t:
        result.append(x)


Neither the second nor the third suggestion matches the equivalent loop 
form given above, and neither is an obvious extension of the way 
sequence unpacking works in other contexts.



[...]
> Imagine that we were talking about ordinary list displays, and for some
> reason had developed a tradition of explaining them in terms of
> "equivalent" code the way we do for comprehensions.
> 
> x = [a, b, c] is equivalent to:
> x = list()
> x.append(a)
> x.append(b)
> x.append(c)
> 
> So now if we replace c with *c [where c == [d, e]], must we now say
> this?
> x = list()
> x.append(a)
> x.append(b)
> x.append(d, e)
> 
> Well, that's just not valid at all.

Right. And if we had a tradition of saying that list displays MUST be 
equivalent to the unpacked sequence of appends, then sequence unpacking 
inside a list display would be prohibited. But we have no such 
tradition, and sequence unpacking inside the list display is permitted.

Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Steven D'Aprano
On Sat, Oct 15, 2016 at 11:36:28PM +1300, Greg Ewing wrote:

> Indeed. In situations where there isn't any context for
> the interpretation of *, it's not allowed. 

You mean like in list comprehensions?

Are you now supporting my argument that starring the list comprehension 
expression isn't meaningful? Not if star is defined as sequence 
unpacking in the usual way. If you want to invent a new meaning for * to 
make this work, to join all the other special case magic meanings for 
the * symbol, that's another story.


> For example:
> 
> >>> x = *(1, 2, 3)
>   File "<stdin>", line 1
> SyntaxError: can't use starred expression here
> 
> But
> 
> >>> x = 1, *(2, 3)
> >>> x
> (1, 2, 3)
> 
> The * is allowed there because it's already in a context
> where a comma-separated list has meaning.

Oh look, just like now:

py> iterable = [(1, 'a'), (2, 'b')]
py> [(100, *t) for t in iterable]
[(100, 1, 'a'), (100, 2, 'b')]

Hands up anyone who expected to flatten the iterable and get 

[100, 1, 'a', 100, 2, 'b']

instead? Anyone? No?

Take out the (100, ...) and just leave the *t, and why should it be 
different? It is my position that:

(quote)
there isn't any context for the interpretation of *

for [*t for t in iterable]. Writing that is the list comp equivalent of 
writing x = *t.
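
The two cases are easy to check side by side (a sketch; the `compile`
call just shows that today's CPython rejects the bare form, like x = *t):

```python
iterable = [(1, 'a'), (2, 'b')]

# Inside a tuple display, '*' has a context and works today:
print([(100, *t) for t in iterable])  # [(100, 1, 'a'), (100, 2, 'b')]

# The bare starred expression has no such context and is rejected:
try:
    compile("[*t for t in iterable]", "<demo>", "eval")
    raised = False
except SyntaxError:
    raised = True
print(raised)  # True
```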


-- 
Steve
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal for default character representation

2016-10-15 Thread M.-A. Lemburg
On 14.10.2016 10:26, Serhiy Storchaka wrote:
> On 13.10.16 17:50, Chris Angelico wrote:
>> Solution: Abolish most of the control characters. Let's define a brand
>> new character encoding with no "alphabetical garbage". These
>> characters will be sufficient for everyone:
>>
>> * [2] Formatting characters: space, newline. Everything else can go.
>> * [8] Digits: 01234567
>> * [26] Lower case Latin letters a-z
>> * [2] Vital social media characters: # (now officially called
>> "HASHTAG"), @
>> * [2] Can't-type-URLs-without-them: colon, slash (now called both
>> "SLASH" and "BACKSLASH")
>>
>> That's 40 characters that should cover all the important things anyone
>> does - namely, Twitter, Facebook, and email. We don't need punctuation
>> or capitalization, as they're dying arts and just make you look
>> pretentious.
> 
> https://en.wikipedia.org/wiki/DEC_Radix-50

And then we store Python identifiers in a single 64-bit word,
allow at most 20 chars per identifier and use the remaining
bits for cool things like type information :-)

Not a bad idea, really.

But then again: even microbits support Unicode these days, so
apparently there isn't much need for such memory footprint
optimizations anymore.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Experts (#1, Oct 15 2016)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> Python Database Interfaces ...   http://products.egenix.com/
>>> Plone/Zope Database Interfaces ...   http://zope.egenix.com/


::: We implement business ideas - efficiently in both time and costs :::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/
  http://www.malemburg.com/

___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Martti Kühne
On Sat, Oct 15, 2016 at 12:48 PM, Steven D'Aprano  wrote:
> Oh look, just like now:
>
> py> iterable = [(1, 'a'), (2, 'b')]
> py> [(100, *t) for t in iterable]
> [(100, 1, 'a'), (100, 2, 'b')]
>
> Hands up anyone who expected to flatten the iterable and get
>
> [100, 1, 'a', 100, 2, 'b']
>
> instead? Anyone? No?
>

I don't know whether that is meant to be provocative or beside the
point. It's probably both. You're putting two expectations on the same
example: first, you make the reasonable expectation that results in
[(100, 1, 'a'), (100, 2, 'b')], and then you ask whether anyone
expected [100, 1, 'a', 100, 2, 'b'], but don't add or remove anything
from the same example. Did you forget to put a second example using
the new notation in there?
Then you'd have to spell it out and start out with [*(100, *t) for t
in iterable]. And then you can ask who expected [100, 1, 'a', 100, 2,
'b']. Which is what this thread is all about.

cheers!
mar77i
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Exception to the closing brace/bracket/parenthesis indentation for multi-line constructs rule in PEP8 for multi-line function definitions

2016-10-15 Thread João Matos
Hello,

In the Code lay-out\Indentation section of PEP8 it is stated that

"

The closing brace/bracket/parenthesis on multi-line constructs may either 
line up under the first non-whitespace character of the last line of list, 
as in: 



my_list = [
    1, 2, 3,
    4, 5, 6,
    ]

result = some_function_that_takes_arguments(
    'a', 'b', 'c',
    'd', 'e', 'f',
    )

or it may be lined up under the first character of the line that starts the 
multi-line construct, as in: 



my_list = [
    1, 2, 3,
    4, 5, 6,
]

result = some_function_that_takes_arguments(
    'a', 'b', 'c',
    'd', 'e', 'f',
)

"


however, right before that location, there are several examples that do not 
comply, like these:

"

# Aligned with opening delimiter.
foo = long_function_name(var_one, var_two,
                         var_three, var_four)

# More indentation included to distinguish this from the rest.
def long_function_name(
        var_one, var_two, var_three,
        var_four):
    print(var_one)

# Hanging indents should add a level.
foo = long_function_name(
    var_one, var_two,
    var_three, var_four)

"

That should be corrected but it isn't the main point of this topic.


Assuming that a multi-line function definition is considered a multi-line 
construct, I would like to propose an exception to the closing 
brace/bracket/parenthesis indentation for multi-line constructs rule in 
PEP8.

In my view, multi-line function definitions should allow only the 
"usual" and "acceptable" options shown below, for better readability.


I present 3 examples (usual, acceptable, horrible) where only the last 2 
comply with the current existing rule:

def do_something(parameter_one, parameter_two, parameter_three, parameter_four,
                 parameter_five, parameter_six, parameter_seven,
                 last_parameter):
    """Do something."""
    pass

def do_something(parameter_one, parameter_two, parameter_three, parameter_four,
                 parameter_five, parameter_six, parameter_seven, last_parameter
                 ):
    """Do something."""
    pass

def do_something(parameter_one, parameter_two, parameter_three, parameter_four,
                 parameter_five, parameter_six, parameter_seven, last_parameter
):
    """Do something."""
    pass


The same 3 examples in the new 3.5 typing style:

def do_something(parameter_one: List[str], parameter_two: List[str],
                 parameter_three: List[str], parameter_four: List[str],
                 parameter_five: List[str], parameter_six: List[str],
                 parameter_seven: List[str],
                 last_parameter: List[str]) -> bool:
    """Do something."""
    pass

def do_something(parameter_one: List[str], parameter_two: List[str],
                 parameter_three: List[str], parameter_four: List[str],
                 parameter_five: List[str], parameter_six: List[str],
                 parameter_seven: List[str], last_parameter: List[str]
                 ) -> bool:
    """Do something."""
    pass

def do_something(parameter_one: List[str], parameter_two: List[str],
                 parameter_three: List[str], parameter_four: List[str],
                 parameter_five: List[str], parameter_six: List[str],
                 parameter_seven: List[str], last_parameter: List[str]
) -> bool:
    """Do something."""
    pass



Best regards,

JM

___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/

Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Paul Moore
On 14 October 2016 at 10:48, Paul Moore  wrote:
> On 14 October 2016 at 07:54, Greg Ewing  wrote:
>>> I think it's probably time for someone to
>>> describe the precise syntax (as BNF, like the syntax in the Python
>>> docs at
>>> https://docs.python.org/3.6/reference/expressions.html#displays-for-lists-sets-and-dictionaries
>>
>>
>> Replace
>>
>>comprehension ::=  expression comp_for
>>
>> with
>>
>>comprehension ::=  (expression | "*" expression) comp_for
>>
>>> and semantics (as an explanation of how to
>>> rewrite any syntactically valid display as a loop).
>>
>>
>> The expansion of the "*" case is the same as currently except
>> that 'append' is replaced by 'extend' in a list comprehension,
>> 'yield' is replaced by 'yield from' in a generator
>> comprehension.
[...]
> So now I understand what's being proposed, which is good. I don't
> (personally) find it very intuitive, although I'm completely capable
> of using the rules given to establish what it means. In practical
> terms, I'd be unlikely to use or recommend it - not because of
> anything specific about the proposal, just because it's "confusing". I
> would say the same about [(x, *y, z) for ...].
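
Greg's append-becomes-extend / yield-becomes-yield-from rules can be
written out as plain functions (a sketch of the proposed desugaring
only; the starred comprehension itself is not working syntax):

```python
def star_list_comp(iterable):
    # What [*t for t in iterable] would mean: append replaced by extend.
    result = []
    for t in iterable:
        result.extend(t)
    return result

def star_genexp(iterable):
    # What (*t for t in iterable) would mean: yield replaced by yield from.
    for t in iterable:
        yield from t

print(star_list_comp([(1, 2), (3, 4)]))     # [1, 2, 3, 4]
print(list(star_genexp([(1, 2), (3, 4)])))  # [1, 2, 3, 4]
```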

Thinking some more about this, is it not true that

[ *expression for var in iterable ]

is the same as

[ x for var in iterable for x in expression ]

?

If so, then this proposal adds no new expressiveness, merely a certain
amount of "compactness". Which isn't necessarily a bad thing, but it's
clearly controversial whether the compact version is more readable /
"intuitive" in this case. Given the lack of any clear improvement, I'd
be inclined to think that "explicit is better than implicit" applies
here, and reject the new proposal.
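
The existing spellings are easy to verify as equivalent; the stdlib even
has a named version in itertools (a quick check with a toy iterable):

```python
from itertools import chain

iterable = [(1, 2), (3, 4)]

# The explicit nested comprehension that works today:
nested = [x for t in iterable for x in t]

# The stdlib spelling of the same flattening operation:
chained = list(chain.from_iterable(iterable))

print(nested)  # [1, 2, 3, 4]
assert nested == chained
```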

Paul.
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal for default character representation

2016-10-15 Thread Mikhail V
On 14 October 2016 at 11:36, Greg Ewing  wrote:

>but bash wasn't designed for that.
>(The fact that some people use it that way says more
>about their dogged persistence in the face of
>adversity than it does about bash.)

I cannot judge what bash is good for, since I never
tried to learn it. But it does *look* frightening.
My first feeling is: OMG, I must close this and never
see it again.
Also, I can hardly imagine that the special purpose
of a language can excuse ignoring readability;
even assembler or whatever
can be made readable without much effort.
So I would just look for some other solution for the same task,
even if it meant 10 times more code.


> So for that
> person, using decimal would make the code *harder*
> to maintain.
> To a maintainer who doesn't have that familiarity,
> it makes no difference either way.

That is because that person follows the convention
(blindly) from the beginning.
So my intention, of course, was not
to find out whether the majority does or not,
but rather which of the two makes
more sense *initially*, imagining
that we were free to decide.
To be more precise, if you were to choose
between two options:

1. use hex for the glyph index and use
hex for numbers (e.g. some arbitrary
value like screen coordinates)
2. use decimal for both cases.

I personally choose option 2.
Probably nothing will convince me that option
1 is better; what's more, I don't
believe that anything beyond base-8
makes much sense for readable numbers.
I am just a little bit disappointed that others
again and again appeal to convention.

>I just
>don't see this as being anywhere near being a
>significant problem.

I didn't mean that; it just slightly
annoys me.

>>In standard ASCII
>>there are enough glyphs that would work way better
>>together,

>Out of curiosity, what glyphs do you have in mind?

If I were to decide, I would look into a few options here:
1. The easy option, which would raise fewer follow-up
questions, is to take the first 16 lowercase letters.
2. The better option would be to choose letters and
possibly other glyphs to build up a more readable
set. E.g. drop the letter "c" and keep "e" due to
their optical collision, and drop some other weak glyphs,
like "l" and "h". That would of course raise
many further questions (why drop this
glyph and not that one, and so on), so it would surely end up in a quarrel.

Here lies another problem: the non-constant width of letters.
But this is more a problem of fonts and rendering,
so it concerns IDEs and editors.
As said, though, I would not recommend base 16 at all.


>>ұұ-ұ   ---ұ
>>
>>you can downscale the strings, so a 16-bit
>>value would be ~60 pixels wide

> Yes, you can make the characters narrow enough that
> you can take 4 of them in at once, almost as though
> they were a single glyph... at which point you've
> effectively just substituted one set of 16 glyphs

No no, I didn't mean to shrink them until they melt together.
The structure is still there; only, with such a notation,
you don't need to make the glyphs as big as with many-glyph systems.

>for another. Then you'd have to analyse whether the
>*combined* 4-element glyphs were easier to disinguish
>from each other than the ones they replaced. Since
>the new ones are made up of repetitions of just two
>elements, whereas the old ones contain a much more
>varied set of elements, I'd be skeptical about that.

I get your idea, and this is a very good point.
It seems you have experience with such things?
Currently I don't know for sure whether such an approach
is more or less effective than others, and for which cases.
But I can bravely claim that it is better than *any*
hex notation; it just follows from what I have here
on paper on my table, namely that it is physically
impossible to make up a highly effective glyph system
of more than 8 symbols. You want more only if you really
*need* more glyphs.
And skepticism should always be present.

One thing, however, especially interests me: here not
only the differentiation of glyphs comes into play,
but also the positional principle, which helps comparison
and can be beneficial in specific cases.
So you can clearly see, for example, whether one
number is two times bigger than another.
And of course, strictly speaking, those bit groups are not glyphs;
you can call them that, but that is
just rhetoric. One could call all written English
words glyphs too, but they are not really.
But I get your analogy; this is how the tests
should be made.

>BTW, your choice of ұ because of its "peak readibility"
>seems to be a case of taking something out of context.
>The readability of a glyph can only be judged in terms
>of how easy it is to distinguish from other glyphs.

True and false. Each glyph taken alone has a specific structure,
and standing alone it has optical qualities.
This is quite complicated and hard to
describe in words; anyway, only tests can
tell what is better. In this case it is still 2
glyphs, or better said, one and a half glyphs.
And indeed you can distinguis

Re: [Python-ideas] Proposal for default character representation

2016-10-15 Thread Chris Angelico
On Sun, Oct 16, 2016 at 12:06 AM, Mikhail V  wrote:
> But I can bravely claim that it is better than *any*
> hex notation, it just follows from what I have here
> on paper on my table, namely that it is physically
> impossible to make up highly effective glyph system
> of more than 8 symbols.

You should go and hang out with jmf. Both of you have made bold
assertions that our current system is physically/mathematically
impossible, despite the fact that *it is working*. Neither of you can
cite any actual scientific research to back your claims.

Bye bye.

ChrisA
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Martti Kühne
On Sat, Oct 15, 2016 at 3:06 PM, Paul Moore  wrote:
> is the same as
>
> [ x for var in iterable for x in expression ]
>

correction, that would be:

[var for expression in iterable for var in expression]

you are right, though. List comprehensions are already stackable.
TIL.

cheers!
mar77i
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Random832
On Sat, Oct 15, 2016, at 06:38, Steven D'Aprano wrote:
> > Replacing it _with the items_ is not the same thing as replacing it
> > _with a sequence containing the items_, 
> 
> I don't think I ever used the phrasing "a sequence containing the 
> items". I think that's *your* phrase, not mine.

It's not your phrasing, it's the actual semantic content of your claim
that it would have to wrap them in a tuple.

> The core developers have made it absolutely clear that changing the 
> fundamental equivalence of list comps as syntactic sugar for:
> 
> result = []
> for t in iterable:
> result.append(t)
> 
> 
> is NOT NEGOTIABLE.

I've never heard of this. It certainly never came up in this discussion.
And it was negotiable just fine when they got rid of the leaked loop
variable.

> (That is much to my disappointment -- I would love to 
> introduce a "while" version of list comps to match the "if" version, but 
> that's not an option.)
> 
> So regardless of whether it is a fiction or an absolute loop, Python's 
> list comprehensions are categorically limited to behave equivalently to 
> the loop above (modulo scope, temporary variables, etc). 

See, there it is. Why are *those* things that are allowed to be
differences, but this (which could be imagined as "result += [t]" if you
_really_ need a single statement where the left-hand clause is
substituted in, or otherwise) is not?
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread אלעזר
On Sat, Oct 15, 2016 at 1:49 PM Steven D'Aprano  wrote:
...

> And the transformation of *t for the items of t (I don't care if it is a
> real transformation in the implementation, or only a fictional
> transformation) cannot work in a list comp. Let's make the number of
> items of t explicit so we don't have to worry about variable item
> counts:
>
> [*t for t in iterable]  # t has three items
> [a, b, c for (a, b, c) in iterable]
>
>
> That's a syntax error. To avoid the syntax error, we need parentheses:
>
> [(a, b, c) for (a, b, c) in iterable]
>
> and that's a no-op.


You are confusing here two distinct roles of the parenthesis:
disambiguation as in "(1 + 2) * 2", and tuple construction as in (1, 2, 3).
This overload is the reason that (1) is not a 1-tuple and we must write
(1,).

You may argue that this overloading causes confusion and makes this
construct hard to understand, but please be explicit about that; even if
<1, 2, 3> were the syntax for tuples, the expansion would still be

 [(a, b, c) for (a, b, c) in iterable]

since no tuple is constructed here.

Elazar
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/

Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Chris Angelico
On Sun, Oct 16, 2016 at 4:33 AM, אלעזר  wrote:
> You are confusing here two distinct roles of the parenthesis: disambiguation
> as in "(1 + 2) * 2", and tuple construction as in (1, 2, 3). This overload
> is the reason that (1) is not a 1-tuple and we must write (1,).

Parentheses do not a tuple make. Commas do.

1, 2, 3, # three-element tuple
1, 2, # two-element tuple
1, # one-element tuple

The only time that a tuple requires parens is when it's the empty tuple, ().
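
The comma-makes-tuples rule is easy to check at the interpreter:

```python
# Parentheses alone are just grouping, not tuple construction:
print(type((1)))    # <class 'int'>

# The comma is what makes the tuple:
print(type((1,)))   # <class 'tuple'>
x = 1, 2, 3
print(x)            # (1, 2, 3)

# The one case that needs parens: the empty tuple.
print(type(()))     # <class 'tuple'>
```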

ChrisA
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/

Re: [Python-ideas] Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Neil Girdhar
On Sat, Oct 15, 2016 at 5:01 AM Steven D'Aprano  wrote:

> On Thu, Oct 13, 2016 at 01:30:45PM -0700, Neil Girdhar wrote:
>
> > From a CPython implementation standpoint, we specifically blocked this
> code
> > path, and it is only a matter of unblocking it if we want to support
> this.
>
> I find that difficult to believe. The suggested change seems like it
> should be much bigger than just removing a block. Can you point us to
> the relevant code?
>
>
The Grammar specifies:

dictorsetmaker: ( ((test ':' test | '**' expr)
                   (comp_for | (',' (test ':' test | '**' expr))* [','])) |
                  ((test | star_expr)
                   (comp_for | (',' (test | star_expr))* [','])) )

In ast.c, you can find:

if (is_dict) {
    ast_error(c, n, "dict unpacking cannot be used in "
                    "dict comprehension");
    return NULL;
}
res = ast_for_dictcomp(c, ch);

and ast_for_dictcomp supports dict unpacking.

Similarly:

if (elt->kind == Starred_kind) {
    ast_error(c, ch, "iterable unpacking cannot be used "
                     "in comprehension");
    return NULL;
}

comps = ast_for_comprehension(c, CHILD(n, 1));

and ast_for_comprehensions supports iterable unpacking.
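
Both blocked code paths are observable from pure Python; the messages
printed are whatever the running interpreter produces (on the versions
current at the time of writing they match the ast.c strings above):

```python
def rejects(src):
    """Return the SyntaxError message for src, or None if it compiles."""
    try:
        compile(src, "<demo>", "eval")
    except SyntaxError as e:
        return e.msg
    return None

print(rejects("[*t for t in data]"))   # iterable unpacking blocked
print(rejects("{**d for d in data}"))  # dict unpacking blocked
print(rejects("[t for t in data]"))    # None -- plain comprehension is fine
```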

In any case, it isn't really the difficulty of implementation that is
> being questioned. Many things are easy to implement, but we still
> don't do them.


If it doesn't matter, why bring it up?


> The real questions here are:
>
> (1) Should we overload list comprehensions as sugar for a flatten()
> function?
>
> (2) If so, should we spell that [*t for t in iterable]?
>
>
> Actually the answer to (1) should be "we already do". We just spell it:
>
> [x for t in iterable for x in t]
>
>
>
> --
> Steve
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
> --
>
> ---
> You received this message because you are subscribed to a topic in the
> Google Groups "python-ideas" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/python-ideas/ROYNN7a5VAc/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> python-ideas+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/

Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread אלעזר
On Sat, Oct 15, 2016 at 8:36 PM Chris Angelico  wrote:

> On Sun, Oct 16, 2016 at 4:33 AM, אלעזר  wrote:
> > You are confusing here two distinct roles of the parenthesis:
> disambiguation
> > as in "(1 + 2) * 2", and tuple construction as in (1, 2, 3). This
> overload
> > is the reason that (1) is not a 1-tuple and we must write (1,).
>
> Parentheses do not a tuple make. Commas do.
>
> 1, 2, 3, # three-element tuple
> 1, 2, # two-element tuple
> 1, # one-element tuple
>
> And what does [1, 2, 3] mean? It's very different from [(1, 2, 3)].

Python explicitly allow 1, 2, 3 to mean tuple in certain contexts, I agree.

Elazar
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/

Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Chris Angelico
On Sun, Oct 16, 2016 at 4:38 AM, אלעזר  wrote:
> On Sat, Oct 15, 2016 at 8:36 PM Chris Angelico  wrote:
>>
>> On Sun, Oct 16, 2016 at 4:33 AM, אלעזר  wrote:
>> > You are confusing here two distinct roles of the parenthesis:
>> > disambiguation
>> > as in "(1 + 2) * 2", and tuple construction as in (1, 2, 3). This
>> > overload
>> > is the reason that (1) is not a 1-tuple and we must write (1,).
>>
>> Parentheses do not a tuple make. Commas do.
>>
>> 1, 2, 3, # three-element tuple
>> 1, 2, # two-element tuple
>> 1, # one-element tuple
>>
> > And what does [1, 2, 3] mean? It's very different from [(1, 2, 3)].
>
> Python explicitly allow 1, 2, 3 to mean tuple in certain contexts, I agree.
>

Square brackets create a list. I'm not sure what you're not understanding, here.

The comma does have other meanings in other contexts (list/dict/set
display, function parameters), but outside of those, it means "create
tuple".

ChrisA
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/

Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread אלעזר
On Sat, Oct 15, 2016 at 8:45 PM Chris Angelico  wrote:

> On Sun, Oct 16, 2016 at 4:38 AM, אלעזר  wrote:
> > On Sat, Oct 15, 2016 at 8:36 PM Chris Angelico  wrote:
> >>
> >> On Sun, Oct 16, 2016 at 4:33 AM, אלעזר  wrote:
> >> > You are confusing here two distinct roles of the parenthesis:
> >> > disambiguation
> >> > as in "(1 + 2) * 2", and tuple construction as in (1, 2, 3). This
> >> > overload
> >> > is the reason that (1) is not a 1-tuple and we must write (1,).
> >>
> >> Parentheses do not a tuple make. Commas do.
> >>
> >> 1, 2, 3, # three-element tuple
> >> 1, 2, # two-element tuple
> >> 1, # one-element tuple
> >>
> > And what [1, 2, 3] means? It's very different from [(1,2,3)].
> >
> > Python explicitly allow 1, 2, 3 to mean tuple in certain contexts, I
> agree.
> >
>
> Square brackets create a list. I'm not sure what you're not understanding,
> here.
>
> The comma does have other meanings in other contexts (list/dict/set
> display, function parameters), but outside of those, it means "create
> tuple".
>

On second thought, you may decide whether the rule is "tuple" with
exceptions, or the other way around.

The point was, conceptual expansion does not "fail" just because there is
an overloading in the meaning of the tokens ( and ). It might make it
harder to understand, though.

Elazar
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/

[Python-ideas] Heap data type, the revival

2016-10-15 Thread Nick Timkovich
I once again had a use for heaps, and after rewrapping the heapq.heap*
methods for the umpteenth time, figured I'd try my hand at freezing
off that wart.

Some research turned up an older thread by Facundo Batista
(https://mail.python.org/pipermail/python-ideas/2009-April/004173.html),
but it seems like interest petered out. I shoved an initial pass at a
spec, implementation, and tests (robbed from
/Lib/test/test_heapq.py mostly) into a repo at
https://github.com/nicktimko/heapo My spec is basically:

1. Provide all existing heapq.heap* functions provided by the heapq
module as methods with identical semantics
2. Provide limited magic methods to the underlying heap structure
  a. __len__ to see how big it is, also for boolean'ing
  b. __iter__ to allow reading out to something else (doesn't consume elements)
3. Add peek method to show, but not consume, lowest heap value
4. Allow custom comparison/key operation (to be implemented/copy-pasted)

Open Questions
* Should __init__ shallow-copy the list or leave that up to the
caller? Less memory if the heap object just co-opts it, but user might
accidentally reuse the reference and ruin the heap. If we make our own
list then it's easier to just suck in any arbitrary iterable.
* How much should the underlying list be exposed? Is there a use case
for __setitem__, __delitem__?
* Should there be a method to alter the priority of elements while
preserving the heap invariant? Daniel Stutzbach mentioned dynamically
increasing/decreasing priority of some list elements...but I'm
inclined to let that be a later addition.
* Add some iterable method to consume the heap in an ordered fashion?
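
A minimal sketch of points 1-3 as a thin wrapper over the existing
heapq functions (the class name `Heap` and the method names are my own
placeholders, not a settled API; the custom-key point is omitted):

```python
import heapq

class Heap:
    """Sketch: wrap the heapq.heap* functions as methods."""

    def __init__(self, iterable=()):
        # Make our own list, so any iterable works and the caller
        # can't accidentally break the invariant through an alias.
        self._items = list(iterable)
        heapq.heapify(self._items)

    def push(self, item):
        heapq.heappush(self._items, item)

    def pop(self):
        return heapq.heappop(self._items)

    def pushpop(self, item):
        return heapq.heappushpop(self._items, item)

    def replace(self, item):
        return heapq.heapreplace(self._items, item)

    def peek(self):
        # Show, but don't consume, the smallest value (point 3).
        return self._items[0]

    def __len__(self):
        return len(self._items)

    def __iter__(self):
        # Reads out in heap order, not sorted order; doesn't consume.
        return iter(self._items)

h = Heap([5, 1, 3])
print(h.peek())  # 1
print(h.pop())   # 1
print(len(h))    # 2
```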

Cheers,
Nick
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Show more info when `python -vV`

2016-10-15 Thread INADA Naoki
>
> Are there precedences of combining verbose and version options in other
> programs?
>

No; I was just afraid that other programs might rely on the format of python -V.

> PyPy just outputs sys.version for the --version option.
>
> $ pypy -V
> Python 2.7.10 (5.4.1+dfsg-1~ppa1~ubuntu16.04, Sep 06 2016, 23:11:39)
> [PyPy 5.4.1 with GCC 5.4.0 20160609]
>
> I think it would not be large breakage if new releases of CPython become
> outputting extended version information by default.
>

I like it, if that's OK.
Is anyone against this?
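
For reference, the pieces involved can be shown from Python itself (the
actual output of course depends on your build):

```python
import sys
import platform

# Roughly what `python -V` prints today, reconstructed by hand:
print("Python %d.%d.%d" % sys.version_info[:3])

# The extended information an extended -V (or PyPy-style -V) would show:
print(sys.version)
print(platform.python_compiler())
```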
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Heap data type, the revival

2016-10-15 Thread Sven R. Kunze

On 15.10.2016 20:15, Nick Timkovich wrote:

I once again had a use for heaps, and after rewrapping the heapq.heap*
methods for the umpteenth time, figured I'd try my hand at freezing
off that wart.


We re-discussed this at the beginning of 2016, and xheap 
(https://pypi.python.org/pypi/xheap) was one outcome. ;) In the course 
of doing so, some performance improvements were discovered and some 
peculiarities of Python lists were discussed.


See here
https://mail.python.org/pipermail/python-list/2016-January/702568.html
and here
https://mail.python.org/pipermail/python-list/2016-March/704339.html

Cheers,
Sven
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal for default character representation

2016-10-15 Thread Mikhail V
On 15 October 2016 at 16:27, Chris Angelico  wrote:
> On Sun, Oct 16, 2016 at 12:06 AM, Mikhail V  wrote:
>> But I can bravely claim that it is better than *any*
>> hex notation, it just follows from what I have here
>> on paper on my table, namely that it is physically
>> impossible to make up highly effective glyph system
>> of more than 8 symbols.
>
> You should go and hang out with jmf. Both of you have made bold
> assertions that our current system is physically/mathematically

Who is jmf?

> impossible, despite the fact that *it is working*. Neither of you can
> cite any actual scientific research to back your claims.
No, please don't ask that. I have enough work in real life. And you tend to
understand too literally my words here.

Mikhail
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Heap data type, the revival

2016-10-15 Thread Nick Timkovich
Features and speed are good, but I'm interested in getting something
with the basic features into the Standard Library so it's just there.
Not having done that before, and being a bit clueless, I want to learn
that slightly less technical procedure. What are the steps to make
that happen?

On Sat, Oct 15, 2016 at 3:26 PM, Sven R. Kunze  wrote:
> On 15.10.2016 20:15, Nick Timkovich wrote:
>>
>> I once again had a use for heaps, and after rewrapping the heapq.heap*
>> methods for the umpteenth time, figured I'd try my hand at freezing
>> off that wart.
>
>
> We re-discussed this in the beginning of 2016 and xheap
> https://pypi.python.org/pypi/xheap was one outcome. ;) In the course of
> doing so, some performance improvements were also discovered and some
> peculiarities of Python lists are discussed.
>
> See here
> https://mail.python.org/pipermail/python-list/2016-January/702568.html
> and here
> https://mail.python.org/pipermail/python-list/2016-March/704339.html
>
> Cheers,
> Sven
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Heap data type, the revival

2016-10-15 Thread Sven R. Kunze

On 15.10.2016 23:19, Nick Timkovich wrote:

Features and speed are good, but I'm interested in getting something
with the basic features into the Standard Library so it's just there.
Not having done that before and being a bit clueless, I want to learn
that slightly less technical procedure. What are the steps to make
that happen?


As I said, it has been discussed and the consensus so far was: "not 
everything needs to be a class if it does not provide substantial 
benefit" + "functions are more flexible" + "if it's slower than the 
original it won't happen".


Cheers,
Sven


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Sven R. Kunze

On 15.10.2016 16:47, Martti Kühne wrote:

[var for expression in iterable for var in expression]
you are right, though. List comprehensions are already stackable.
TIL.


Good catch, Paul. Comprehensions appear to be a special case when it 
comes to unpacking as they provide an alternative path. So, nested 
comprehensions seem unintuitive to those who actually favor the 
*-variant. ;) Anyway, I don't think that it's a strong argument against 
the proposal. ~10 other ways are available to do what * does and this 
kind of argument did not prevent PEP448.



What's more (and which I think is a more important response to the 
nested comprehension alternative) is that nested comprehensions are 
rarely used, and usually get long quite easily. To be practical here, 
let's look at an example I remembered this morning (taken from 
real-world code I needed to work with lately):


return [(language, text) for language, text in fulltext_tuples]

That's the minimum comprehension. So, you need to make it longer already 
to do **actual** work like filtering or mapping (otherwise, just return 
fulltext_tuples). So, we go even longer (and/or less readable):


return [t for tup in fulltext_tuples if tup[0] == 'english' for t in tup]
return chain.from_iterable((language, text) for language, text in 
fulltext_tuples if language == 'english')


I still think the * variant would have its benefits here:

return [*(language, text) for language, text in fulltext_tuples if 
language == 'english']


(Why it should be unpacked, you wonder? It's because of executemany of 
psycopg2.)


Cheers,
Sven


Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Greg Ewing

Steven D'Aprano wrote:
Are you now supporting my argument that starring the list comprehension 
expression isn't meaningful?


The context it's in (a form of list display) has a clear
meaning for a comma-separated list of values, so there
is a reasonable interpretation that it *could* be given.


py> iterable = [(1, 'a'), (2, 'b')]
py> [(100, *t) for t in iterable]
[(100, 1, 'a'), (100, 2, 'b')]


The * there is in the context of constructing a tuple,
not the list into which the tuple is placed.

The difference is the same as the difference between
these:

>>> t = (10, 20)
>>> [1, (2, *t), 3]
[1, (2, 10, 20), 3]
>>> [1, 2, *t, 3]
[1, 2, 10, 20, 3]

--
Greg


Re: [Python-ideas] Proposal for default character representation

2016-10-15 Thread Greg Ewing

Mikhail V wrote:

Also I can hardly imagine that the special purpose
of some language can ignore readability,


Readability is not something absolute that stands on its
own. It depends a great deal on what is being expressed.


even if it is assembler or whatever,
it can be made readable without much effort.


You seem to be focused on a very narrow aspect of
readability, i.e. fine details of individual character
glyphs. That's not what we mean when we talk about
readability of programs.


So I just look for some other solution for the same task,
even if it is 10 times more code.


Then it will take you 10 times longer to write, and
will on average contain 10 times as many bugs. Is
that really worth some small, probably mostly
theoretical advantage at the typographical level?


That is because that person from beginning
(blindly) follows the convention.


What you seem to be missing is that there are
*reasons* for those conventions. They were not
arbitrary choices.

Ultimately they can be traced back to the fact that
our computers are built from two-state electronic
devices. That's definitely not an arbitrary choice --
there are excellent physical reasons for it.

Base 10, on the other hand, *is* an arbitrary
choice. Due to an accident of evolution, we ended
up with 10 easily accessible appendages for counting
on, and that made its way into the counting system
that is currently the most widely used by everyday
people.

So, if anything, *you're* the one who is "blindly
following tradition" by wanting to use base 10.

> 2. Better option would be to choose letters and

possibly other glyphs to build up a more readable
set. E.g. drop "c" letter and leave "e" due to
their optical collision, drop some other weak glyphs,
like "l" "h". That is of course would raise
many further questions, like why you do you drop this
glyph and not this and so on so it will surely end up in quarrel.


Well, that's the thing. If there were large, objective,
easily measurable differences between different possible
sets of glyphs, then there would be no room for such
arguments.

The fact that you anticipate such arguments suggests
that any differences are likely to be small, hard
to measure and/or subjective.


But I can bravely claim that it is better than *any*
hex notation, it just follows from what I have here
on paper on my table,


I think "on paper" is the important thing here. I
suspect you are looking at the published results from
some study or other and greatly overestimating the
size of the effects compared to other considerations.

--
Greg



Re: [Python-ideas] Proposal for default character representation

2016-10-15 Thread Brendan Barnwell

On 2016-10-12 22:46, Mikhail V wrote:

For numbers you obviously don't need as many characters as for
speech encoding, so this means that only those glyphs or even a subset
of it should be used. This means anything more than 8 characters
is quite worthless for reading numbers.
Note that I can't provide here the works currently
so don't ask me for that. Some of them would be probably
available in near future.


	It's pretty clear to me by this point that your argument has no 
rational basis, so I'm regarding this thread as a dead end.


--
Brendan Barnwell
"Do not follow where the path may lead.  Go, instead, where there is no 
path, and leave a trail."

   --author unknown


Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Steven D'Aprano
On Sun, Oct 16, 2016 at 12:48:36PM +1300, Greg Ewing wrote:
> Steven D'Aprano wrote:
> >Are you now supporting my argument that starring the list comprehension 
> >expression isn't meaningful?
> 
> The context it's in (a form of list display) has a clear
> meaning for a comma-separated list of values, so there
> is a reasonable interpretation that it *could* be given.

This thread is a huge, multi-day proof that people do not agree that 
this is a "reasonable" interpretation.


> >py> iterable = [(1, 'a'), (2, 'b')]
> >py> [(100, *t) for t in iterable]
> >[(100, 1, 'a'), (100, 2, 'b')]
> 
> The * there is in the context of constructing a tuple,
> not the list into which the tuple is placed.

Right: the context of the star is meaningful. We all agree that *t in a 
list display [a, b, c, ...] is meaningful; same for tuples; same for 
function calls; same for sequence unpacking for assignment.

What is not meaningful (except as a Perlish line-noise special case to 
be memorised) is *t as the list comprehension expression.

I've never disputed that we could *assert* that *t in a list comp means 
"flatten". We could assert that it means anything we like. But it 
doesn't follow from the usual meaning of sequence unpacking anywhere 
else -- that's why it is currently a SyntaxError, and that's why people 
reacted with surprise at the OP who assumed that *t would magically 
flatten his iterable. Why would you assume that? It makes no sense to me 
-- that's not how sequence unpacking works in any other context, it 
isn't how list comprehensions work.

Right from the beginning I called this "wishful thinking", and *nothing* 
since then has changed my mind. This proposal only makes even a little 
bit of sense if you imagine list comprehensions

[*t for a in it1 for b in it2 for c in it3 ... for t in itN]

completely unrolled into a list display:

[*t, *t, *t, *t, ... ]

but who does that? Why would you reason about your list comps like that? 
If you think about list comps as we're expected to think of them -- as 
list builders equivalent to a for-loop -- the use of *t there is 
invalid. Hence it is a SyntaxError.

You want a second way to flatten your iterables? A cryptic, mysterious, 
Perlish line-noise way? Okay, fine, but don't pretend it is sequence 
unpacking -- in the context of a list comprehension, sequence unpacking 
doesn't make sense, it is invalid. Call it something else: the new 
"flatten" operator:

[^t for t in iterable]

for example, which magically adds an second invisible for-loop to your 
list comps:

# expands to
for t in iterable:
for x in t:
result.append(x)

Because as your own email inadvertently reinforces, if sequence 
unpacking made sense in the context of a list comprehension, it would 
already be allowed rather than a SyntaxError: it is intentionally 
prohibited because it doesn't make sense in the context of list comps.
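For contrast, the flattening behaviour under discussion is already expressible today with the legal nested-loop form. A minimal self-contained sketch (the sample data is mine):

```python
iterable = [(1, 'a'), (2, 'b')]

# The two for-clauses spell out the "invisible" inner loop explicitly,
# so no new syntax is needed to flatten one level:
result = [x for t in iterable for x in t]
print(result)  # [1, 'a', 2, 'b']
```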


-- 
Steve


Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Steven D'Aprano
On Sun, Oct 16, 2016 at 04:36:05AM +1100, Chris Angelico wrote:
> On Sun, Oct 16, 2016 at 4:33 AM, אלעזר  wrote:
> > You are confusing here two distinct roles of the parenthesis: disambiguation
> > as in "(1 + 2) * 2", and tuple construction as in (1, 2, 3). This overload
> > is the reason that (1) is not a 1-tuple and we must write (1,).
> 
> Parentheses do not a tuple make. Commas do.
> 
> 1, 2, 3, # three-element tuple
> 1, 2, # two-element tuple
> 1, # one-element tuple
> 
> The only time that a tuple requires parens is when it's the empty tuple, ().

Or to disambiguate a tuple from some other comma-separated syntax. Hence 
why you need the parens here:

[(b, a) for a,b in sequence]




-- 
Steve

Re: [Python-ideas] Proposal for default character representation

2016-10-15 Thread Steve Dower
FWIW, Python 3.6 should print this in the console just fine. Feel free to 
upgrade whenever you're ready.

Cheers,
Steve

-Original Message-
From: "Mikhail V" 
Sent: ‎10/‎12/‎2016 16:07
To: "M.-A. Lemburg" 
Cc: "python-ideas@python.org" 
Subject: Re: [Python-ideas] Proposal for default character representation

Forgot to reply to all, duping my mesage...

On 12 October 2016 at 23:48, M.-A. Lemburg  wrote:

> Hmm, in Python3, I get:
>
> >>> s = "абв.txt"
> >>> s
> 'абв.txt'

I posted output with Python2 and Windows 7
BTW, in Windows 10 'print' won't work in the cmd console at all by default
with Unicode, but that's another story; let us not go into that.
I think you get my idea right, it is not only about printing.


> The hex notation for \u is a standard also used in many other
> programming languages, it's also easier to parse, so I don't
> think we should change this default.

In programming literature it is used often, but let me point out that
decimal is THE standard and is a much, much better standard
in the sense of readability. And there is no solid reason to use two
standards at the same time.

>
> Take e.g.
>
> >>> s = "\u123456"
> >>> s
> 'ሴ56'
>
> With decimal notation, it's not clear where to end parsing
> the digit notation.

How is it not clear, if the digit amount is fixed? It's not very clear
what you meant.

Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread David Mertz
On Oct 15, 2016 6:42 PM, "Steven D'Aprano"  wrote:
> doesn't make sense, it is invalid. Call it something else: the new
> "flatten" operator:
>
> [^t for t in iterable]
>
> for example, which magically adds an second invisible for-loop to your
list comps:

This thread is a lot of work to try to save 8 characters in the spelling of
`flatten(it)`. Let's just use the obvious and intuitive spelling.

We really don't need to be Perl. Folks who want to write Perl have a
perfectly good interpreter available already.

The recipes in itertools give a nice implementation:

from itertools import chain

def flatten(listOfLists):
    "Flatten one level of nesting"
    return chain.from_iterable(listOfLists)
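A self-contained demo of that recipe in use (the import and sample data are my additions):

```python
from itertools import chain

def flatten(list_of_lists):
    "Flatten one level of nesting (itertools recipe)."
    return chain.from_iterable(list_of_lists)

# Only one level is flattened: nested inner structures are left untouched.
print(list(flatten([[1, 2], [3], [(4, 5)]])))  # [1, 2, 3, (4, 5)]
```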

Re: [Python-ideas] Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Chris Angelico
On Sun, Oct 16, 2016 at 12:10 PM, Steven D'Aprano  wrote:
> On Sun, Oct 16, 2016 at 04:36:05AM +1100, Chris Angelico wrote:
>> On Sun, Oct 16, 2016 at 4:33 AM, אלעזר  wrote:
>> > You are confusing here two distinct roles of the parenthesis: 
>> > disambiguation
>> > as in "(1 + 2) * 2", and tuple construction as in (1, 2, 3). This overload
>> > is the reason that (1) is not a 1-tuple and we must write (1,).
>>
>> Parentheses do not a tuple make. Commas do.
>>
>> 1, 2, 3, # three-element tuple
>> 1, 2, # two-element tuple
>> 1, # one-element tuple
>>
>> The only time that a tuple requires parens is when it's the empty tuple, ().
>
> Or to disambiguate a tuple from some other comma-separated syntax. Hence
> why you need the parens here:
>
> [(b, a) for a,b in sequence]

Yes, in the same way that other operators can need to be
disambiguated. You need to say (1).bit_length() because otherwise "1."
will be misparsed. You need parens to say x = (yield 5) + 2, else it'd
yield 7. But that's not because a tuple fundamentally needs
parentheses.

ChrisA

[Python-ideas] Multiple level sorting in python where the order of some levels may or may not be reversed

2016-10-15 Thread Alireza Rafiei
Hi all,

I have a list called count_list which contains tuples like below:

[('bridge', 2), ('fair', 1), ('lady', 1), ('is', 2), ('down', 4),
> ('london', 2), ('falling', 4), ('my', 1)]


I want to sort it based on the second parameter in descending order and the
tuples with the same second parameter must be sorted based on their first
parameter, in alphabetically ascending order. So the ideal output is:

[('down', 4), ('falling', 4), ('bridge', 2), ('is', 2), ('london', 2),
> ('fair', 1), ('lady', 1), ('my', 1)]



What I ended up doing is:

count_list = sorted(count_list,
> key=lambda x: (x[1], map(lambda x: -x, map(ord,
> x[0]))),
> reverse=True)


which works. Now my solution is very specific to structures like [(str,
int)] where all strs are lowercase; besides, using ord makes it both
limited in use and more difficult to add extra levels of sorting.

The main problem is that the reverse argument takes only a boolean and applies
to the whole list after sorting is finished. I couldn't think of any other
way (besides mapping negative to ord values of x[0]) to say reverse on the
first level but not reverse on the second level. Something like below would
be ideal:

count_list = sorted(count_list,
> key=lambda x: (x[1], x[0]),
> reverse=(True, False))


Does such a way similar to above exist? If not, how useful would it be to
implement it?


*P.S.* It's my first time on a mailing list. I apologize beforehand if
such a thing has already been discussed, or if there already exists a way
which achieves that.

Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Greg Ewing

Steven D'Aprano wrote:

This thread is a huge, multi-day proof that people do not agree that 
this is a "reasonable" interpretation.


So far I've seen one very vocal person who disagrees, and
maybe one other who isn't sure.

This proposal only makes even a little 
bit of sense if you imagine list comprehensions


[*t for a in it1 for b in it2 for c in it3 ... for t in itN]

completely unrolled into a list display:

[*t, *t, *t, *t, ... ]

but who does that?  Why would you reason about your list comps like that?


Many people do, and it's a perfectly valid way to think
about them. They're meant to admit a declarative reading;
that's the reason they exist in the first place.

The expansion in terms of for-loops and appends is just
*one* way to describe the current semantics. It's not
written on stone tablets brought down from a mountain.
Any other way of thinking about it that gives the same
result is equally valid.

magically adds an second invisible for-loop to your 
list comps:


You might as well say that the existing * in a list
display magically inserts a for-loop into it. You can
think of it that way if you want, but you don't have
to.


it is intentionally
prohibited because it doesn't make sense in the context of list comps.


I don't know why it's currently prohibited. You would
have to ask whoever put that code in, otherwise you're
just guessing about the motivation.

--
Greg


Re: [Python-ideas] Multiple level sorting in python where the order of some levels may or may not be reversed

2016-10-15 Thread Chris Angelico
On Sun, Oct 16, 2016 at 3:29 PM, Alireza Rafiei
 wrote:
> What I ended up doing is:
>
>> count_list = sorted(count_list,
>> key=lambda x: (x[1], map(lambda x: -x, map(ord,
>> x[0]))),
>> reverse=True)
>
>
> which works. Now my solution is very specific to structures like [(str,
> int)] where all strs are lower case and besides ord makes it to be both
> limited in use and also more difficult to add extra levels of sorting.

Interesting. Personally, I would invert this; if you're sorting by an
integer and a string, negate the integer, and keep the string as-is.
If that doesn't work, a custom class might help.

# untested
class Record:
    reverse = False, True, True, False, True  # one flag per field
    def __init__(self, data):
        self.data = data
    def __lt__(self, other):
        for v1, v2, rev in zip(self.data, other.data, self.reverse):
            if v1 < v2: return not rev
            if v1 > v2: return rev
        return False

This is broadly similar to how tuple.__lt__ works, allowing you to
flip the logic of whichever ones you like.
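For the [(str, int)] shape in the original question, the key-negation idea mentioned above can be sketched like this (data taken from the thread):

```python
data = [('bridge', 2), ('fair', 1), ('lady', 1), ('is', 2),
        ('down', 4), ('london', 2), ('falling', 4), ('my', 1)]

# Negating the count makes a single ascending sort give descending
# counts, while ties fall back to the string in ascending order.
result = sorted(data, key=lambda t: (-t[1], t[0]))
print(result)
# [('down', 4), ('falling', 4), ('bridge', 2), ('is', 2),
#  ('london', 2), ('fair', 1), ('lady', 1), ('my', 1)]
```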

ChrisA


Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Chris Angelico
On Sun, Oct 16, 2016 at 3:44 PM, Greg Ewing  wrote:
> Steven D'Aprano wrote:
>
>> This thread is a huge, multi-day proof that people do not agree that this
>> is a "reasonable" interpretation.
>
>
> So far I've seen one very vocal person who disagrees, and
> maybe one other who isn't sure.
>

And what you're NOT seeing is a whole lot of people (myself included)
who have mostly glazed over, unsure what is and isn't reasonable, and
not clear enough on either side of the debate to weigh in. (Or not
even clear what the two sides are.)

ChrisA


Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread Neil Girdhar
On Sat, Oct 15, 2016 at 9:42 PM Steven D'Aprano  wrote:

> On Sun, Oct 16, 2016 at 12:48:36PM +1300, Greg Ewing wrote:
> > Steven D'Aprano wrote:
> > >Are you now supporting my argument that starring the list comprehension
> > >expression isn't meaningful?
> >
> > The context it's in (a form of list display) has a clear
> > meaning for a comma-separated list of values, so there
> > is a reasonable interpretation that it *could* be given.
>
> This thread is a huge, multi-day proof that people do not agree that
> this is a "reasonable" interpretation.
>
>
> > >py> iterable = [(1, 'a'), (2, 'b')]
> > >py> [(100, *t) for t in iterable]
> > >[(100, 1, 'a'), (100, 2, 'b')]
> >
> > The * there is in the context of constructing a tuple,
> > not the list into which the tuple is placed.
>
> Right: the context of the star is meaningful. We all agree that *t in a
> list display [a, b, c, ...] is meaningful; same for tuples; same for
> function calls; same for sequence unpacking for assignment.
>
> What is not meaningful (except as a Perlish line-noise special case to
> be memorised) is *t as the list comprehension expression.
>
> I've never disputed that we could *assert* that *t in a list comp means
> "flatten". We could assert that it means anything we like. But it
> doesn't follow from the usual meaning of sequence unpacking anywhere
> else -- that's why it is currently a SyntaxError, and that's why people
> reacted with surprise at the OP who assumed that *t would magically
> flatten his iterable. Why would you assume that? It makes no sense to me
> -- that's not how sequence unpacking works in any other context, it
> isn't how list comprehensions work.
>
> Right from the beginning I called this "wishful thinking", and *nothing*
> since then has changed my mind. This proposal only makes even a little
> bit of sense if you imagine list comprehensions
>
> [*t for a in it1 for b in it2 for c in it3 ... for t in itN]
>
> completely unrolled into a list display:
>
> [*t, *t, *t, *t, ... ]
>
> but who does that? Why would you reason about your list comps like that?
> If you think about list comps as we're expected to think of them -- as
> list builders equivalent to a for-loop -- the use of *t there is
> invalid. Hence it is a SyntaxError.
>
> You want a second way to flatten your iterables? A cryptic, mysterious,
> Perlish line-noise way? Okay, fine, but don't pretend it is sequence
> unpacking -- in the context of a list comprehension, sequence unpacking
> doesn't make sense, it is invalid. Call it something else: the new
> "flatten" operator:
>
> [^t for t in iterable]
>
> for example, which magically adds an second invisible for-loop to your
> list comps:
>
> # expands to
> for t in iterable:
> for x in t:
> result.append(x)
>
> Because as your own email inadvertently reinforces, if sequence
> unpacking made sense in the context of a list comprehension, it would
> already be allowed rather than a SyntaxError: it is intentionally
> prohibited because it doesn't make sense in the context of list comps.
>

Whoa, hang on a second there.  It is intentionally prohibited because
Joshua Landau (who helped a lot with writing and implementing the PEP) and
I felt like there was going to be a long debate and we wanted to get PEP
448 checked in.

If it "didn't make sense" as you say, then we would have said so in the
PEP. Instead, Josh wrote:

This was met with a mix of strong concerns about readability and mild
support. In order not to disadvantage the less controversial aspects of the
PEP, this was not accepted with the rest of the proposal.

I don't remember who it was who had those strong concerns (maybe you?)  But
that's why we didn't include it.

Best,

Neil


>
> --
> Steve

Re: [Python-ideas] Fwd: Fwd: Fwd: unpacking generalisations for list comprehension

2016-10-15 Thread David Mertz
On Oct 15, 2016 9:45 PM, "Greg Ewing"  wrote:
>
> Steven D'Aprano wrote:
>
>> This thread is a huge, multi-day proof that people do not agree that
this is a "reasonable" interpretation.
>
>
> So far I've seen one very vocal person who disgrees, and
> maybe one other who isn't sure.

In case it wasn't entirely clear, I strongly and vehemently opposed this
unnecessary new syntax. It is confusing, bug prone, and would be difficult
to teach.

Or am I that very vocal person? I was thinking you meant Steven.

Re: [Python-ideas] Multiple level sorting in python where the order of some levels may or may not be reversed

2016-10-15 Thread Tim Peters
[Alireza Rafiei ]
> I have a list called count_list which contains tuples like below:
>
> > [('bridge', 2), ('fair', 1), ('lady', 1), ('is', 2), ('down', 4),
> > ('london', 2), ('falling', 4), ('my', 1)]
>
>
> I want to sort it based on the second parameter in descending order and the
> tuples with the same second parameter must be sorted based on their first
> parameter, in alphabetically ascending order. So the ideal output is:
>
> > [('down', 4), ('falling', 4), ('bridge', 2), ('is', 2), ('london', 2),
> > ('fair', 1), ('lady', 1), ('my', 1)]

I'd like to suggest doing something simple instead, such that

data = [('bridge', 2), ('fair', 1), ('lady', 1),
('is', 2), ('down', 4), ('london', 2),
('falling', 4), ('my', 1)]

from operator import itemgetter
multisort(data, [# primary key is 2nd element, reversed
 (itemgetter(1), True),
 # secondary key is 1st element, natural
 (itemgetter(0), False)])
import pprint
pprint.pprint(data)

prints the output you want.

It's actually very easy to do this, but the cost is that it requires
doing a full sort for _each_ field you want to sort on.  Because
Python's sort is stable, it's sufficient to merely sort on the
least-significant key first, and then sort again on each key in turn
through the most-significant.  There's another subtlety that makes
this work:

> ...
> The main problem is that reverse argument takes only a boolean and applies
> to the whole list after sorting in finished.

Luckily, that's not how `reverse` actually works.  It _first_ reverses
the list, then sorts, and then reverses the list again.  The result is
that items that compare equal _retain_ their original order, where
just reversing the sorted list would invert their order.  That's why,
in your example above, after first sorting on the string component in
natural order we get (in part)

[('down', 4), ('falling', 4), ...]

and then reverse-sorting on the integer portion _leaves_ those tuples
in the same order.  That's essential for this decomposition of the
problem to work.

With that background, here's the implementation:

def multisort(xs, specs):
for key, reverse in reversed(specs):
xs.sort(key=key, reverse=reverse)

That's all it takes!  And it accepts any number of items in `specs`.
Before you worry that it's "too slow", time it on real test data.
`.sort()` is pretty zippy, and this simple approach allows using
simple key functions.  More importantly, it's much easier on your
brain ;-)