[Python-ideas] Re: use type hints and slices to specify a valid numerical range, example: `Angle = int[0:361]`

2020-08-07 Thread David Mertz
Oops, obviously I meant:

import random

def plusone(i: int[1:1_000_000_000]):
    return i+1

random.seed(42)
for n in range(1_000_000):
    plusone(random.randint(1, 1_000_000_001))

Or a zillion other things. I can construct orbits of the Mandelbrot set that
may or may not be bounded. Or bounds that depend on the twin prime
conjecture. Or whatever. The Mersenne Twister is just a non-obvious calculation
that we have convenient functions for.

On Sat, Aug 8, 2020, 1:28 AM David Mertz  wrote:

>
>
> On Sat, Aug 8, 2020, 1:12 AM Steven D'Aprano
>
>> Static languages often check what
>> bounds they can at compile time, and optionally insert bound checking
>> runtime code for ambiguous places.
>
>
> Yep. That's an assert, or its moral equivalent.
>
> Here's a deterministic program using the hypothetical new feature.
>
> def plusone(i: int[1:1_000_000_000]):
>     return i+1
>
> random.seed(42)
> for n in range(1_000_000):
>     random.randint(1, 1_000_000_001)
>
> Is this program type safe? Tell me by static analysis of Mersenne Twister.
>
> Or if you want to special-case the arguments to randint, well, lots of
> things. Let's say a "random" walk on the integer number line where each
> time through the loop increments or decrements some (deterministic but hard
> to calculate) amount. After N steps are we within certain bounds?
>


[Python-ideas] Re: use type hints and slices to specify a valid numerical range, example: `Angle = int[0:361]`

2020-08-07 Thread Steven D'Aprano
On Fri, Aug 07, 2020 at 05:44:31PM -0400, Ricky Teachey wrote:

> Would it make good semantic sense- and be useful- to specify valid
> numerical ranges using slices and type-hint syntax? My suggestion would be
> to, at minimum, provide this functionality for int and float.

We know that syntactically we can write an annotation like `int[1:10]`, 
even if it is a runtime TypeError. The question would be to ask mypy, 
and maybe some of the other type checkers, whether they are capable of 
and interested in doing static bounds checking.

Unless they are interested in the feature, having ints support it would 
be a waste of time.

For compatibility with slicing and range(), we would surely want int 
ranges to be half open:

int[1:100]  # 1-99 inclusive, 100 excluded

but for floats, half-open intervals are a real pain, as you suggest. How 
do I specify an upper bound of exactly math.pi, say? A runtime check is 
easy:

x <= math.pi

but specifying it as an open interval requires me to know that 
3.1415926535897936 is the next float greater than math.pi.
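
As an aside, here is a small sketch of how that neighbouring float can be
found programmatically, assuming Python 3.9+ (where math.nextafter exists);
it only illustrates the half-open-interval problem, it is not part of the
proposal:

import math

# next representable float after math.pi, moving toward +infinity
upper_open = math.nextafter(math.pi, math.inf)
print(upper_open)            # 3.1415926535897936
print(math.pi < upper_open)  # True; no float lies strictly between them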


-- 
Steven


[Python-ideas] Re: use type hints and slices to specify a valid numerical range, example: `Angle = int[0:361]`

2020-08-07 Thread David Mertz
On Sat, Aug 8, 2020, 1:12 AM Steven D'Aprano

> Static languages often check what
> bounds they can at compile time, and optionally insert bound checking
> runtime code for ambiguous places.


Yep. That's an assert, or its moral equivalent.

Here's a deterministic program using the hypothetical new feature.

import random

def plusone(i: int[1:1_000_000_000]):
    return i+1

random.seed(42)
for n in range(1_000_000):
    random.randint(1, 1_000_000_001)

Is this program type safe? Tell me by static analysis of Mersenne Twister.

Or if you want to special-case the arguments to randint, well, lots of
things. Let's say a "random" walk on the integer number line where each
time through the loop increments or decrements some (deterministic but hard
to calculate) amount. After N steps are we within certain bounds?


[Python-ideas] Re: use type hints and slices to specify a valid numerical range, example: `Angle = int[0:361]`

2020-08-07 Thread Steven D'Aprano
On Fri, Aug 07, 2020 at 11:48:40PM -0400, David Mertz wrote:
> On Fri, Aug 7, 2020, 6:03 PM Paul Moore  wrote:
> 
> > > x: int[0:]  # any ints greater than or equal to zero would match, others
> > would fail
> > > x: int[:101]  # any ints less than 101 match
> > > x: int[0:101:2]  # even less than 101
> >
> > I suspect the biggest issue with this is that it's likely to be
> > extremely hard (given the dynamic nature of Python) to check such type
> > assertions statically.
> 
> 
> Yes, it's hard in the sense that it would require solving the halting
> problem.

How so?

I don't see how static bounds checking would be fundamentally more 
difficult than static type checking. Static languages often check what 
bounds they can at compile time, and optionally insert bound checking 
runtime code for ambiguous places. See for example this comment:

"""
2.5 Static Analysis

Bounds checking has relied heavily on static analysis to
optimize performance [15]. Checks can be eliminated if
it can be statically determined that a pointer is safe, i.e.
always within bounds, or that a check is redundant due to
a previous check
"""

https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/baggy-USENIX2009.pdf

I believe that JS++ does something similar:

https://www.onux.com/jspp/blog/jspp-0-9-0-efficient-compile-time-analysis-of-out-of-bounds-errors/

and Checked C aims for similar compile-time bounds checking:

https://www.microsoft.com/en-us/research/project/checked-c/


In any case, nobody expects to solve the Halting Problem for arbitrarily 
complex code. It is still useful to be able to detect some failures even 
if you can't detect them all:

https://web.archive.org/web/20080509165811/http://perl.plover.com/yak/typing/notes.html


-- 
Steven


[Python-ideas] Re: use type hints and slices to specify a valid numerical range, example: `Angle = int[0:361]`

2020-08-07 Thread David Mertz
On Sat, Aug 8, 2020, 12:18 AM Ricky Teachey

> Yes, it's hard in the sense that it would require solving the halting
>> problem.
>>
>
> That doesn't sound so hard. ;)
>
> Thanks for educating me. Could it at least be useful for:
>
> 1. Providing semantic meaning to code (but this is probably not enough
> reason on its own)
> 2. Couldn't it still be useful for static analysis during runtime? Not in
> cpython, but when the type hints are used in cython, for example?
>

Being static like Cython doesn't help. You cannot know statically what the
result of an arbitrary computation will be.

There are certainly languages with guards. For example, Python. I can
definitely write a function like this:

def small_nums(i: int):
    assert 0 < i < 100
    do_stuff(i)

x = small_nums(arbitrary_computation())

In concept, an annotation could be another way to spell an assertion. But I
don't think we need that.
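
For what it's worth, here is a minimal sketch of spelling that assertion
through an annotation in today's Python, assuming Python 3.9+ and using
typing.Annotated with a range object as a stand-in for the proposed
int[lo:hi] (builtin int is not subscriptable at runtime); the check_ranges
decorator is an illustrative helper, not an existing API:

import functools
import inspect
from typing import Annotated, get_type_hints

def check_ranges(func):
    # Runtime guard: assert that arguments annotated with a range fall inside it.
    hints = get_type_hints(func, include_extras=True)
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            for meta in getattr(hints.get(name), "__metadata__", ()):
                if isinstance(meta, range):
                    assert value in meta, f"{name}={value!r} not in {meta}"
        return func(*args, **kwargs)
    return wrapper

@check_ranges
def small_nums(i: Annotated[int, range(1, 100)]) -> int:
    return i + 1

small_nums(5)      # fine
# small_nums(500)  # AssertionError at runtime; nothing is caught statically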



[Python-ideas] Re: use type hints and slices to specify a valid numerical range, example: `Angle = int[0:361]`

2020-08-07 Thread Ricky Teachey
On Fri, Aug 7, 2020 at 11:48 PM David Mertz  wrote:

> On Fri, Aug 7, 2020, 6:03 PM Paul Moore  wrote:
>
>> > x: int[0:]  # any ints greater than or equal to zero would match,
>> others would fail
>> > x: int[:101]  # any ints less than 101 match
>> > x: int[0:101:2]  # even less than 101
>>
>> I suspect the biggest issue with this is that it's likely to be
>> extremely hard (given the dynamic nature of Python) to check such type
>> assertions statically.
>
>
> Yes, it's hard in the sense that it would require solving the halting
> problem.
>

That doesn't sound so hard. ;)

Thanks for educating me. Could it at least be useful for:

1. Providing semantic meaning to code (but this is probably not enough
reason on its own)
2. Couldn't it still be useful for static analysis during runtime? Not in
cpython, but when the type hints are used in cython, for example?

---
Ricky.

"I've never met a Kentucky man who wasn't either thinking about going home
or actually going home." - Happy Chandler


[Python-ideas] Re: Changing item dunder method signatures to utilize positional arguments (open thread)

2020-08-07 Thread Ricky Teachey
On Fri, Aug 7, 2020 at 10:47 PM Steven D'Aprano  wrote:

> On Fri, Aug 07, 2020 at 12:09:28PM -0400, Ricky Teachey wrote:
>
> > I was actually trying to help the kwd arg case here. As illustrated by
> the
> > quote I included from Greg Ewing, there seems to be not even close to a
> > consensus over what the semantic meaning of this should be:
> >
> > m[1, 2, a=3, b=2]
>
> This is Python-Ideas. If we asked for a consensus on what the print
> function should do, I'm sure we would find at least one person
> seriously insist that it ought to erase your hard drive *wink*
>

Yeah I get it. Thanks. I just noticed it didn't seem like anything close to
a consensus was even sort of rising to the surface.


> By the way, I assume you meant `__getitem__` in each of your examples,
> since `__get__` is part of the descriptor protocol.
>

Whoops thanks for the correction. And thanks for being less grumpy.

And thank you also for going through the 4 options I gave. I accept and
agree with you on all of them.

But I had forgotten the fifth.

The semantic meaning of m[1, 2, a=3, b=2] might be made to mean:

5. m.__getx__(1, 2, a=3, b=4)

...which would in turn call, by default:

m.__getitem__((1, 2), a=3, b=4)

I was a little bit more detailed about it in my first message, so I'll quote
that:


> ---
>
> One idea: change the "real" names of the dunders. Give `type` default
> versions of the new dunders that direct the call to the old dunder names.
>
> The new get and del dunders would have behavior and signatures like (I am
> including **__kwargs since that could be an option in the future) :
>
> def __getx__(self, /, *__key, **__kwargs):
>     return self.__getitem__(__key, **__kwargs)
>
> def __delx__(self, /, *__key, **__kwargs):
>     self.__delitem__(__key, **__kwargs)
>
> However the set dunder signature would be a problem, because to mirror the
> current behavior we end up writing what is now a syntax error:
>
> def __setx__(self, /, *__key, __value, **__kwargs):
>     self.__setitem__(__key, __value, **__kwargs)
>

When overriding `__getx__` et al, you would always call super() on the
__getx__ method, never use super().__getitem__:

class My:
    def __getx__(self, my_arg, *args, my_kwarg, **kwargs):
        # the way I have written things, this super call will cause a recursion error
        v = super().__getx__(*args, **kwargs)
        return combine(my_arg, my_kwarg, v)

Many of the advantages are shared with #1 as you recounted them:

(1) Existing positional only subscripting does not have to change for any
existing code (backwards
compatible).

(2) Easy to handle keyword arguments.

(3) Those who want to bundle all their keywords into a single object can
just define a single `**kw` parameter.

(4) Probably requires little special handling in the interpreter?

(5) Requires no extra effort for developers who don't need or want
keyword parameters in their subscript methods. Just do nothing.

(6). Consistency with other methods and functions for those that want to
use the new dunders.

Disadvantages:

(1) Probably requires more implementation effort than #1

(2) Similar to #2, it will also create a long transition period, though
hopefully quite a bit less painful than outright changing the signature of
__getitem__ etc. However, libraries do not have to support both calling
conventions at all. They should be encouraged to start using the new one,
but the old one will continue to work, perhaps perpetually. Maybe things
would eventually get to the point where it could be done away with.

(3) Creates a new "kind of screwy" (Greg's words) situation that
will at times need to be explained and understood.

(4) Creates a sort of dual MRO for square bracket usage. You would end up
with situations like this:

class A:
    def __getitem__(self, key, **kwargs):
        print("A")

class B(A):
    def __getitem__(self, key, **kwargs):
        print("B")
        super().__getitem__(key, **kwargs)

class C(B):
    def __getx__(self, *key, **kwargs):
        print("C")
        super().__getx__(*key, **kwargs)

class D(C):
    def __getx__(self, *key, **kwargs):
        print("D")
        super().__getx__(*key, **kwargs)

>>> D()[None]
D
C
B
A

This code obviously looks just a little bit odd but the result is fine.
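
For concreteness, here is a runnable sketch of that first example in
today's Python, with a small _Subscriptable mixin standing in for the
default __getx__ that type/object would provide under the proposal, and
with a direct dunder call simulating D()[None] (the mixin and the direct
call are illustrative assumptions, not the proposed implementation):

class _Subscriptable:
    # Stand-in default: fall back to the classic __getitem__ protocol,
    # collapsing a single positional key the way subscripting does today.
    def __getx__(self, *key, **kwargs):
        if len(key) == 1:
            key = key[0]
        return self.__getitem__(key, **kwargs)

class A(_Subscriptable):
    def __getitem__(self, key, **kwargs):
        print("A")

class B(A):
    def __getitem__(self, key, **kwargs):
        print("B")
        super().__getitem__(key, **kwargs)

class C(B):
    def __getx__(self, *key, **kwargs):
        print("C")
        super().__getx__(*key, **kwargs)

class D(C):
    def __getx__(self, *key, **kwargs):
        print("D")
        super().__getx__(*key, **kwargs)

D().__getx__(None)   # prints D, C, B, A -- matching the output above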

However when different libraries and classes start getting mashed together
over time, you might end up with a situation like this:

class A:
    def __getitem__(self, key, **kwargs):
        print("A")

class B(A):
    def __getx__(self, *key, **kwargs):
        print("B")
        super().__getx__(*key, **kwargs)

class C(B):
    def __getitem__(self, key, **kwargs):
        print("C")
        super().__getitem__(key, **kwargs)

class D(C):
    def __getx__(self, *key, **kwargs):
        print("D")
        super().__getx__(*key, **kwargs)

>>> D()[None]
D
B
C
A

Seems like this could be easily fixed with a class decorator, or perhaps
the language could know to call the methods in the right order 

[Python-ideas] Re: Package kwkey and PEP 472 -- Support for indexing with keyword arguments

2020-08-07 Thread Guido van Rossum
On Fri, Aug 7, 2020 at 6:02 PM Greg Ewing 
wrote:

> On 4/08/20 9:12 am, Guido van Rossum wrote:
> > then presumably calling `c[1, index=2]` would just be an error (since it
> > would be like attempting to call the method with two values for the
> > `index` argument),
>
> Hmmm, does this mean that classes providing index notation would
> now need to document the name of the parameter they use for the
> index?
>

No, the whole point of my message was they wouldn't have to.

-- 
--Guido van Rossum (python.org/~guido)
*Pronouns: he/him **(why is my pronoun here?)*



[Python-ideas] Re: use type hints and slices to specify a valid numerical range, example: `Angle = int[0:361]`

2020-08-07 Thread David Mertz
On Fri, Aug 7, 2020, 6:03 PM Paul Moore  wrote:

> > x: int[0:]  # any ints greater than or equal to zero would match, others
> would fail
> > x: int[:101]  # any ints less than 101 match
> > x: int[0:101:2]  # even less than 101
>
> I suspect the biggest issue with this is that it's likely to be
> extremely hard (given the dynamic nature of Python) to check such type
> assertions statically.


Yes, it's hard in the sense that it would require solving the halting
problem.


[Python-ideas] Re: Changing item dunder method signatures to utilize positional arguments (open thread)

2020-08-07 Thread Ricky Teachey
On Fri, Aug 7, 2020 at 9:25 PM Edwin Zimmerman 
wrote:

> On 8/7/2020 8:28 PM, Greg Ewing wrote:
> >  I don't think anyone has the appetite for a Python 4 any
> > time soon.
> >
> I'm included in "anyone" here.  From reading this list, it seems to me
> that "Python 4" is invoked as some folks favorite magical justification for
> proposing major breaking changes.  Python 3 works quite well, I think.
> Non-breaking, incremental changes suite me much better that large breaking
> ones.  I have better things to do with my time than complete software
> rewrites of all the software projects I work on.
>

Nobody is asking you to rewrite anything in this thread. Quoting my first
message:


On Tue, Aug 4, 2020 at 8:14 AM Ricky Teachey  wrote:

>
> ...
>
> So here is the main question of this thread:
>
> Is there really not a backwards compatible, creative way that a transition to
> positional args from a tuple in the item dunders could be accomplished?
>
>
That's what I'm after, and making a (likely poor) attempt at a proposal
to accomplish. If the answer is no, fine.


On Fri, Aug 7, 2020 at 8:30 PM Greg Ewing 
wrote:

> ...
>
> It would certainly achieve that goal. The question is whether it would
> be worth the *enormous* upheaval of replacing the whole __getitem__
> protocol.
>
> It's hard to overstate what a big deal that would be. The old protocol
> would still be with us, complicating everything unnecessarily, for a
> very long time. It took Python 3 to finally get rid of the __getslice__
> protocol, and I don't think anyone has the appetite for a Python 4 any
> time soon.
>

My hope-- with the proposal I made (new getx/setx/delx dunders that call
the old getitem/setitem/delitem dunders)-- was to avoid most of that. Folks
would be free to continue using the existing dunder methods as long as they
find it to be beneficial. But maybe it wouldn't work out that way.

---
Ricky.

"I've never met a Kentucky man who wasn't either thinking about going home
or actually going home." - Happy Chandler


[Python-ideas] Re: Changing item dunder method signatures to utilize positional arguments (open thread)

2020-08-07 Thread Steven D'Aprano
On Fri, Aug 07, 2020 at 12:09:28PM -0400, Ricky Teachey wrote:

> I was actually trying to help the kwd arg case here. As illustrated by the
> quote I included from Greg Ewing, there seems to be not even close to a
> consensus over what the semantic meaning of this should be:
> 
> m[1, 2, a=3, b=2]

This is Python-Ideas. If we asked for a consensus on what the print 
function should do, I'm sure we would find at least one person 
seriously insist that it ought to erase your hard drive *wink*

But seriously, getting consensus is difficult, especially when people 
seem to be unwilling or unable to articulate why they prefer one 
behaviour over another, or the advantages vs disadvantages of a 
proposal.


> Which could be made to mean one of the following things, or another thing I
> haven't considered:
> 
> 1. m.__get__((1, 2), a=3, b=4)  # handling of positional arguments
> unchanged from current behavior


By the way, I assume you meant `__getitem__` in each of your examples, 
since `__get__` is part of the descriptor protocol.


Advantages: 

(1) Existing positional only subscripting does not change (backwards 
compatible).

(2) Easy to handle keyword arguments.

(3) Those who want to bundle all their keywords into a single object can 
just define a single `**kw` parameter.

(4) Probably requires little special handling in the interpreter?

(5) Probably requires the minimum amount of implementation effort?

(6) Requires no extra effort for developers who don't need or want 
keyword parameters in their subscript methods. Just do nothing.

Disadvantages: none that I can see. (Assuming we agree that this is a 
useful feature.)


> 2. m.__get__(1, 2, a=3, b=4)  # change positional argument handling from
> current behavior

Advantages:

1. Consistency with other methods and functions.

Disadvantages:

1. Breaks backwards compatibility.

2. Will require a long and painful transition period during which time 
libraries will have to somehow support both calling conventions.


> 3. m.__get__((1, 2), {'a': 3, 'b': 4})  # handling of positional
> arguments unchanged from current behavior

I assume that if there are no keyword arguments given, only the 
first argument is passed to the method (as opposed to passing an 
empty dict). If not, the advantages listed below disappear.

Advantages:

(1) Existing positional only subscripting does not change (backwards 
compatible).

(2) Requires no extra effort for developers who don't need or want 
keyword parameters in their subscript methods. Just do nothing.

Disadvantages:

(1) Forces people to do their own parsing of keyword arguments to local 
variables inside the method, instead of allowing the interpreter to do 
it.

(2) Compounds the "Special case breaks the rules" of subscript methods 
to keyword arguments as well as positional arguments.

(3) It's not really clear to me that anyone actually wants this, apart 
from just suggesting it as an option. What's the concrete use-case for 
this?


> 4. m.__get__(KeyObject( (1, 2), {'a': 3, 'b': 4} ))   # change
> positional argument handling from current behavior only in the case that
> kwd args are provided

Use-case: you want to wrap an arbitrary number of positional arguments, 
plus an arbitrary set of keyword arguments, into a single hashable "key 
object", for some unstated reason, and be able to store that key object 
into a dict.

Advantage (double-edged, possible):

(1) Requires no change to the method signature to support keyword 
parameters (whether you want them or not, you will get them).

Disadvantages:

(1) If you don't want keyword parameters in your subscript methods, you 
can't just *do nothing* and have them be a TypeError, you have to 
explicitly check for a KeyObject argument and raise:

def __getitem__(self, index):
    if isinstance(index, KeyObject):
        raise TypeError('MyClass index takes no keyword arguments')

(2) Seems to be a completely artificial and useless use-case to me. If 
there is a concrete use-case for this, either I have missed it (in
which case my apologies), or Jonathan seems to be unwilling or unable to
give it. But if you really wanted it, you could get it with this 
signature and a single line in the body:

def __getitem__(self, *args, **kw):
    key = KeyObject(*args, **kw)

(3) Forces those who want named keyword parameters to parse them from 
the KeyObject value themselves.

Since named keyword parameters are surely going to be the most common 
use-case (just as they are for other functions), this makes the common 
case difficult and the rare and unusual case easy.

(4) KeyObject doesn't exist. We would need a new builtin type to support 
this, as well as the new syntax. This increases the complexity and 
maintenance burden of this new feature.

(5) Compounds the "kind of screwy" (Greg's words) nature of subscripting 
by extending it to keyword arguments as well as positional arguments.



-- 
Steven

[Python-ideas] Re: Changing item dunder method signatures to utilize positional arguments (open thread)

2020-08-07 Thread Stephan Hoyer
On Fri, Aug 7, 2020 at 6:29 PM Stephan Hoyer  wrote:

> On Fri, Aug 7, 2020 at 9:12 AM Ricky Teachey  wrote:
>
>> On Fri, Aug 7, 2020 at 4:19 AM Steven D'Aprano 
>> wrote:
>>
>>> On Fri, Aug 07, 2020 at 05:54:18PM +1000, Steven D'Aprano wrote:
>>>
>>> > This proposal doesn't say anything about reversing the decision made
>>> all
>>> > those years ago to bundle all positional arguments in a subscript into
>>> a
>>> > single positional parameter. What's done is done, that's not going to
>>> > change.
>>>
>>> Sorry, I was referring to the proposal that inspired this thread, to add
>>> keyword arguments to subscripting. There's an actual concrete use-case
>>> for adding this, specifically for typing annotations, and I cannot help
>>> but feel that this thread is derailing the conversation to something
>>> that has not been requested by anyone actually affected by it.
>>>
>>
>> Well I wonder if they haven't asked because it would be such a huge
>> change, and it seems unlikely to happen. But I surely don't know enough
>> about the implementation details of these libraries to be able to say for
>> certain one way or the other.
>>
>
> NumPy and pandas both care a lot about backwards compatibility, and don't
> like churn for the sake of churn.
>
> Speaking as someone who has been contributing to these libraries for
> years, the present syntax for positional arguments in __getitem__ is not a
> big deal, and certainly isn't worth breaking backwards compatibility over.
>
> In practice, this is worked around with a simple normalization step, e.g.,
> something like:
>
> def __getitem__(self, key):
>     if not isinstance(key, tuple):
>         key = (key,)
>     # normal __getitem__ method
>
> This precludes incompatible definitions for x[(1, 2)] and x[1, 2], but
> really, this minor inconsistency is not a big deal. It is easy to
> write x[(1, 2), :] if you want to indicate a tuple for indexing along the
> first axis of an array.
>
> From my perspective, the other reasonable way to add keyword arguments to
> indexing would be in a completely backwards compatible with **kwargs.
>

I'm sorry, I did a poor job of editing this.

To fill in my missing word: From my perspective, the *only* reasonable way
to add keyword arguments to indexing would be in a completely backwards
compatible way with **kwargs.
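
A rough sketch of what that could look like, with the dunder called
directly to simulate the hypothetical x[1, 2, units="m"] syntax (the Grid
class and the units keyword are made up for illustration):

class Grid:
    def __getitem__(self, key, **kwargs):
        # The existing normalization step stays exactly the same...
        if not isinstance(key, tuple):
            key = (key,)
        # ...and new, optional keyword arguments arrive via **kwargs.
        return key, kwargs

g = Grid()
print(g.__getitem__((1, 2)))             # ((1, 2), {})            -- today's g[1, 2]
print(g.__getitem__((1, 2), units="m"))  # ((1, 2), {'units': 'm'}) -- hypothetical g[1, 2, units="m"]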



>
>> I may have allowed my frustration to run ahead of me, sorry.
>>>
>>> There is a tonne of code that relies on subscripting positional
>>> arguments to be bundled into a single parameter. Even if we agreed that
>>> this was suboptimal, and I don't because I don't know the rationale for
>>> doing it in the first place, I would be very surprised if the Steering
>>> Council gave the go-ahead to a major disruption and complication to the
>>> language just for the sake of making subscript dunders like other
>>> functions.
>>
>>
>>> Things would be different if, say, numpy or pandas or other heavy users
>>> of subscripting said "we want the short term churn and pain for long
>>> term benefit".
>>>
>>> But unless that happens, I feel this is just a case of piggy-backing a
>>> large, disruptive change of minimal benefit onto a a small, focused
>>> change, which tends to ruin the chances of the small change. So please
>>> excuse my frustration, I will try to be less grumpy about it.
>>>
>>
>> I understand the grumpiness given your explanation. I'm really not
>> wanting to derail that kwd args proposal-- I really like it, whatever the
>> semantics of it turn out to be.
>>
>> I was actually trying to help the kwd arg case here. As illustrated by
>> the quote I included from Greg Ewing, there seems to be not even close to a
>> consensus over what the semantic meaning of this should be:
>>
>> m[1, 2, a=3, b=2]
>>
>> Which could be made to mean one of the following things, or another thing
>> I haven't considered:
>>
>> 1. m.__get__((1, 2), a=3, b=4)  # handling of positional arguments
>> unchanged from current behavior
>> 2. m.__get__(1, 2, a=3, b=4)  # change positional argument handling
>> from current behavior
>> 3. m.__get__((1, 2), {'a': 3, 'b': 4})  # handling of positional
>> arguments unchanged from current behavior
>> 4. m.__get__(KeyObject( (1, 2), {'a': 3, 'b': 4} ))   # change
>> positional argument handling from current behavior only in the case that
>> kwd args are provided
>>
>> As Greg said:
>>
>> These methods are already kind of screwy in that they don't
>>> handle *positional* arguments in the usual way -- packing them
>>> into a tuple instead of passing them as individual arguments.
>>> I think this is messing up everyone's intuition on how indexing
>>> should be extended to incorporate keyword args, or even whether
>>> this should be done at all.
>>
>>
>> To illustrate the comments of "kind of screwy" and "the usual way", using
>> semantic meaning # 1 above, then these are totally equivalent * :
>>
>> m[1, 2, a=3, b=4]
>> m[(1, 2), a=3, b=4]
>>
>> ...even though these are totally different:

[Python-ideas] Re: Changing item dunder method signatures to utilize positional arguments (open thread)

2020-08-07 Thread Stephan Hoyer
On Fri, Aug 7, 2020 at 9:12 AM Ricky Teachey  wrote:

> On Fri, Aug 7, 2020 at 4:19 AM Steven D'Aprano 
> wrote:
>
>> On Fri, Aug 07, 2020 at 05:54:18PM +1000, Steven D'Aprano wrote:
>>
>> > This proposal doesn't say anything about reversing the decision made
>> all
>> > those years ago to bundle all positional arguments in a subscript into
>> a
>> > single positional parameter. What's done is done, that's not going to
>> > change.
>>
>> Sorry, I was referring to the proposal that inspired this thread, to add
>> keyword arguments to subscripting. There's an actual concrete use-case
>> for adding this, specifically for typing annotations, and I cannot help
>> but feel that this thread is derailing the conversation to something
>> that has not been requested by anyone actually affected by it.
>>
>
> Well I wonder if they haven't asked because it would be such a huge
> change, and it seems unlikely to happen. But I surely don't know enough
> about the implementation details of these libraries to be able to say for
> certain one way or the other.
>

NumPy and pandas both care a lot about backwards compatibility, and don't
like churn for the sake of churn.

Speaking as someone who has been contributing to these libraries for years,
the present syntax for positional arguments in __getitem__ is not a big
deal, and certainly isn't worth breaking backwards compatibility over.

In practice, this is worked around with a simple normalization step, e.g.,
something like:

def __getitem__(self, key):
    if not isinstance(key, tuple):
        key = (key,)
    # normal __getitem__ method

This precludes incompatible definitions for x[(1, 2)] and x[1, 2], but
really, this minor inconsistency is not a big deal. It is easy to
write x[(1, 2), :] if you want to indicate a tuple for indexing along the
first axis of an array.

From my perspective, the other reasonable way to add keyword arguments to
indexing would be in a completely backwards compatible with **kwargs.


>
> I may have allowed my frustration to run ahead of me, sorry.
>>
>> There is a tonne of code that relies on subscripting positional
>> arguments to be bundled into a single parameter. Even if we agreed that
>> this was suboptimal, and I don't because I don't know the rationale for
>> doing it in the first place, I would be very surprised if the Steering
>> Council gave the go-ahead to a major disruption and complication to the
>> language just for the sake of making subscript dunders like other
>> functions.
>
>
>> Things would be different if, say, numpy or pandas or other heavy users
>> of subscripting said "we want the short term churn and pain for long
>> term benefit".
>>
>> But unless that happens, I feel this is just a case of piggy-backing a
>> large, disruptive change of minimal benefit onto a a small, focused
>> change, which tends to ruin the chances of the small change. So please
>> excuse my frustration, I will try to be less grumpy about it.
>>
>
> I understand the grumpiness given your explanation. I'm really not wanting
> to derail that kwd args proposal-- I really like it, whatever the semantics
> of it turn out to be.
>
> I was actually trying to help the kwd arg case here. As illustrated by the
> quote I included from Greg Ewing, there seems to be not even close to a
> consensus over what the semantic meaning of this should be:
>
> m[1, 2, a=3, b=2]
>
> Which could be made to mean one of the following things, or another thing
> I haven't considered:
>
> 1. m.__get__((1, 2), a=3, b=4)  # handling of positional arguments
> unchanged from current behavior
> 2. m.__get__(1, 2, a=3, b=4)  # change positional argument handling
> from current behavior
> 3. m.__get__((1, 2), {'a': 3, 'b': 4})  # handling of positional
> arguments unchanged from current behavior
> 4. m.__get__(KeyObject( (1, 2), {'a': 3, 'b': 4} ))   # change
> positional argument handling from current behavior only in the case that
> kwd args are provided
>
> As Greg said:
>
> These methods are already kind of screwy in that they don't
>> handle *positional* arguments in the usual way -- packing them
>> into a tuple instead of passing them as individual arguments.
>> I think this is messing up everyone's intuition on how indexing
>> should be extended to incorporate keyword args, or even whether
>> this should be done at all.
>
>
> To illustrate the comments of "kind of screwy" and "the usual way", using
> semantic meaning # 1 above, then these are totally equivalent * :
>
> m[1, 2, a=3, b=4]
> m[(1, 2), a=3, b=4]
>
> ...even though these are totally different:
>
> f(1, 2, a=3, b=4)
> f((1, 2), a=3, b=4)
>
> So my intention here isn't to derail, but to help the kwd argument
> proposal along by solving this screwiness problem.
>
> It is to suggest that maybe a way forward-- to make the intuition of the
> semantics of kwd args to [ ] much more obvious-- would be to change the
> signature so that this incongruity between what happens with "normal"
> method calls 

[Python-ideas] Re: Changing item dunder method signatures to utilize positional arguments (open thread)

2020-08-07 Thread Edwin Zimmerman
On 8/7/2020 8:28 PM, Greg Ewing wrote:
>  I don't think anyone has the appetite for a Python 4 any
> time soon.
>
I'm included in "anyone" here.  From reading this list, it seems to me that 
"Python 4" is invoked as some folks favorite magical justification for 
proposing major breaking changes.  Python 3 works quite well, I think.  
Non-breaking, incremental changes suite me much better that large breaking 
ones.  I have better things to do with my time than complete software rewrites 
of all the software projects I work on.

--Edwin


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Steven D'Aprano
On Fri, Aug 07, 2020 at 06:40:33PM +0100, Rob Cliffe via Python-ideas wrote:
> 
> 
> On 07/08/2020 17:16, Serhiy Storchaka wrote:

> >The main problem to me with the exception catching expression is that
> >you need to add the exception name and several keywords, and it makes
> >real-world examples too long, so you will need to split the expression
> >on several lines, and add extra parenthesis. And in this case there is
> >no much difference between expression
> >
> > x = (
> > <expression>
> > except LongExceptionName:
> > <default expression>
> > )
> >
> >and statement
> >
> > try:
> > x = <expression>
> > except LongExceptionName:
> > x = <default expression>
> >
> >(actually the statement may be shorter and better aligned).

Serhiy is correct that some examples will be long and cumbersome, but 
many other expressions are long and cumbersome, and we leave it up to 
the writer of the code to decide whether to extend a single expression 
over multiple lines or split it into multiple statements using temporary 
variables.

In Serhiy's example above, it is up to the coder and their aesthetic 
sense to decide whether to use the expression form or the statement 
form, just as today they can decide between:


if <condition>:
    x = <expression 1>
else:
    x = <expression 2>


and


x = (<expression 1>
     if <condition>
     else <expression 2>
     )


Neither is necessarily better or worse than the other, it will depend on 
the surrounding code, whether the value is being bound to a variable or 
being immediately used in another expression:

function(arg,
 another_arg,
 spam=long_conditional_or_except_expression,
 eggs=expression
 )


Just like if statements vs ternary if expressions, the writer will get 
to choose which is better for their purposes, and whether or not to use 
temporary variables.


> This is a strawman argument.  You set up a case where exception-catching 
> expressions are poor (long expression, long exception name) and then 
> knock it down.

Please don't misuse "strawman argument" in this way. Serhiy is making a 
legitimate criticism of the feature, and explicitly labelling it as a
reason he personally does not like the feature.

It is not a strawman to point out your personal reasons for disliking a 
feature. This is a legitimate issue with the suggested syntax: it is not 
especially terse, and if the expressions and exceptions are long, as 
they sometimes will be, the whole thing will be little better, or not 
better at all, than using a try...except statement.

You don't have to agree with Serhiy's preference to recognise that there 
are cases where this proposal will save no lines of code. And probably 
not rare cases -- I expect that they will be much more common than the 
short examples you quote:

> If you read the PEP you will find plenty of short examples:
> 
>   process(dic[key] except KeyError: None)
>   value = (lst[2] except IndexError: "No value")
>   cond = (args[1] except IndexError: None)
>   pwd = (os.getcwd() except OSError: None)
>   e.widget = (self._nametowidget(W) except KeyError: W)
>   line = (readline() except StopIteration: '')
> etc.

The first is a poor example because that can so easily be written as:

process(dic.get(key))

The second and third are basically the same example.

The fourth example of os.getcwd is excellent. But the fifth example 
strikes me as exceedingly weak. You are calling a private method, which 
presumably means you wrote the method yourself. (Otherwise you have no 
business calling the private method of a class you don't control.) So 
why not just add an API for providing a default instead of raising?

And the final example is also weak. If readline is a bound method of a 
file object:

readline = open(path, 'r').readline

then it already returns the empty string at the end of file. And if it 
isn't, then perhaps it should, rather than raising a completely 
inappropriate StopIteration exception.


> >Other problem specific to the PEP 463 syntax is using colon in
> >expression. Colon is mainly used before indented block in complex
> >statements. It was introduced in this role purely for aesthetic reasons.

I disagree with Serhiy here: I believe that there is objective evidence 
in the form of readability studies which suggest that colons before 
indented blocks aid readability, although I have lost the reference.


> >Using it in other context is very limited (dict display, lambda,
> >annotations, what else?).
>
> Slice notation.  As you would have discovered if you read the PEP.

No need to be so catty over a minor bit of forgetfulness, I'm sure that 
Serhiy has read the PEP, and we know that Serhiy is experienced enough 
to have seen slice notation a few times. Let's not condemn him for a 
momentary bit of forgetfulness.


> Dict display and slices are hardly of "very limited" use.  (I can't 
> comment on annotations, I don't use them.)

I don't believe that Serhiy is referring to how frequently 

[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Greg Ewing

On 8/08/20 3:24 am, Paul Moore wrote:

def is_valid_specifier(s):
 try:
 packaging.specifiers.SpecifierSet(s)
 return True
 except packahing.specifiers.InvalidSpecifier:
 return False


This doesn't quite follow the pattern, because it doesn't return
the result of the function. To fit it into an except expression
you would need to write something more convoluted, such as

   (SpecifierSet(s), True)[1] except InvalidSpecifier: False

There might be a less clunky way to write that, but I can't
think of one right now.

--
Greg


[Python-ideas] Re: Package kwkey and PEP 472 -- Support for indexing with keyword arguments

2020-08-07 Thread Greg Ewing

On 4/08/20 9:12 am, Guido van Rossum wrote:
then presumably calling `c[1, index=2]` would just be an error (since it 
would be like attempting to call the method with two values for the 
`index` argument),


Hmmm, does this mean that classes providing index notation would
now need to document the name of the parameter they use for the
index?

--
Greg


[Python-ideas] Re: Changing item dunder method signatures to utilize positional arguments (open thread)

2020-08-07 Thread Greg Ewing

On 8/08/20 4:09 am, Ricky Teachey wrote:
If that incongruity were to be fixed, it seems to me it would become 
*obvious* that the semantic meaning of ` m[1, 2, a=3, b=2]` should 
definitely be:


m.__get__(1, 2, a=3, b=4)


It would certainly achieve that goal. The question is whether it would
be worth the *enormous* upheaval of replacing the whole __getitem__
protocol.

It's hard to overstate what a big deal that would be. The old protocol
would still be with us, complicating everything unnecessarily, for a
very long time. It took Python 3 to finally get rid of the __getslice__
protocol, and I don't think anyone has the appetite for a Python 4 any
time soon.

--
Greg


[Python-ideas] Re: use type hints and slices to specify a valid numerical range, example: `Angle = int[0:361]`

2020-08-07 Thread Ricky Teachey
On Fri, Aug 7, 2020 at 6:01 PM Paul Moore  wrote:

> On Fri, 7 Aug 2020 at 22:46, Ricky Teachey  wrote:
> >
> > This was inspired by a tweet today from Brandon Rhodes. I looked for
> something like it on the mypy issues page and didn't find anything.
> >
> > Would it make good semantic sense- and be useful- to specify valid
> numerical ranges using slices and type-hint syntax? My suggestion would be
> to, at minimum, provide this functionality for int and float.
> >
> > Consider that currently we have:
> >
> > x: int  # this means "all ints", today
> > x: float  # this means "all floating point numbers", today
> >
> > Idea in a nutshell would be for the following type declarations to mean:
> >
> > x: int[0:]  # any ints greater than or equal to zero would match, others
> would fail
> > x: int[:101]  # any ints less than 101 match
> > x: int[0:101:2]  # even less than 101
>
> I suspect the biggest issue with this is that it's likely to be
> extremely hard (given the dynamic nature of Python) to check such type
> assertions statically. Even in statically typed languages, you don't
> often see range-based types like this. And type assertions don't do
> runtime checks, so if they can't be usefully checked statically, they
> probably aren't going to be of much benefit (documentation is
> basically all).
>
> Paul


You could be right and I don't know much about this subject.

However, the question that comes up for me is: how does TypedDict
perform its static checks? It seems like this:

class D(typing.TypedDict):
    a: int

d: D = dict(b=2)  # error

Shouldn't be any harder to type check than this:

x: int[0:] = -1  # error

No?

---
Ricky.

"I've never met a Kentucky man who wasn't either thinking about going home
or actually going home." - Happy Chandler





[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Guido van Rossum
On Fri, Aug 7, 2020 at 10:44 AM Rob Cliffe 
wrote:

>
> On 07/08/2020 16:58, Guido van Rossum wrote:
>
> On Fri, Aug 7, 2020 at 8:15 AM David Mertz  wrote:
>
>> I think getting Guido on board would be a huge step.  Python has added
>> quite a bit of new syntax since 2014, and Guido himself is currently
>> advocating another new big change (pattern matching).  His opinion may have
>> shifted.
>>
>
> Alas, it hasn't. Language design is not an exact science, and my gut still
> tells me that inline exceptions are a bad idea.
>
> Understood.  But that's a bit of a debate-stopper;  it would be helpful if
> you could articulate it more clearly.
>

It's not me you have to convince, it's the SC.

I personally don't want to debate this again -- if I tried to articulate my
arguments, people would just try to come up with counter-arguments, and I'd
be forced into a debate I have no interest in.

If you wanted to sway me, maybe you could write a static analyzer that
tries to measure what fraction of try/except statements have exactly this
form:

try:
some_var = some_expression
except SomeException:
some_var = some_other_expression

You can probably piggyback that on an existing static analyzer like flake8
or pylint. And for your corpus you could download the 1000 most popular
packages from PyPI (
https://github.com/python/cpython/blob/master/Tools/peg_generator/scripts/download_pypi_packages.py
).
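
Here is a rough sketch of such a measurement using only the standard ast
module (rather than a flake8 or pylint plugin); it counts just the narrow
shape described above, and the helper names are made up:

import ast
import sys

def _single_name_assign(body):
    # True if the block is exactly one `name = <expression>` statement.
    return (len(body) == 1 and isinstance(body[0], ast.Assign)
            and len(body[0].targets) == 1
            and isinstance(body[0].targets[0], ast.Name))

def count_try_assign(source):
    hits = total = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Try):
            total += 1
            if (_single_name_assign(node.body)
                    and len(node.handlers) == 1
                    and not node.orelse and not node.finalbody
                    and _single_name_assign(node.handlers[0].body)
                    and node.body[0].targets[0].id
                        == node.handlers[0].body[0].targets[0].id):
                hits += 1
    return hits, total

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            print(path, *count_try_assign(f.read()))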

-- 
--Guido van Rossum (python.org/~guido)
*Pronouns: he/him **(why is my pronoun here?)*



[Python-ideas] Re: use type hints and slices to specify a valid numerical range, example: `Angle = int[0:361]`

2020-08-07 Thread Paul Moore
On Fri, 7 Aug 2020 at 22:46, Ricky Teachey  wrote:
>
> This was inspired by a tweet today from Brandon Rhodes. I looked for 
> something like it on the mypy issues page and didn't find anything.
>
> Would it make good semantic sense- and be useful- to specify valid numerical 
> ranges using slices and type-hint syntax? My suggestion would be to, at 
> minimum, provide this functionality for int and float.
>
> Consider that currently we have:
>
> x: int  # this means "all ints", today
> x: float  # this means "all floating point numbers", today
>
> Idea in a nutshell would be for the following type declarations to mean:
>
> x: int[0:]  # any ints greater than or equal to zero would match, others 
> would fail
> x: int[:101]  # any ints less than 101 match
> x: int[0:101:2]  # even less than 101

I suspect the biggest issue with this is that it's likely to be
extremely hard (given the dynamic nature of Python) to check such type
assertions statically. Even in statically typed languages, you don't
often see range-based types like this. And type assertions don't do
runtime checks, so if they can't be usefully checked statically, they
probably aren't going to be of much benefit (documentation is
basically all).

Paul


[Python-ideas] use type hints and slices to specify a valid numerical range, example: `Angle = int[0:361]`

2020-08-07 Thread Ricky Teachey
This was inspired by a tweet today from Brandon Rhodes. I looked for
something like it on the mypy issues page and didn't find anything.

Would it make good semantic sense- and be useful- to specify valid
numerical ranges using slices and type-hint syntax? My suggestion would be
to, at minimum, provide this functionality for int and float.

Consider that currently we have:

x: int  # this means "all ints", today
x: float  # this means "all floating point numbers", today

Idea in a nutshell would be for the following type declarations to mean:

x: int[0:]  # any ints greater than or equal to zero would match, others
would fail
x: int[:101]  # any ints less than 101 match
x: int[0:101:2]  # even less than 101

# for floats, it's a bit odd since the upper bound is typically exclusive...
# but we might prefer it to be inclusive? Not sure.
# regardless, any float within the range matches
import math
x: float[-math.pi:math.pi] # any angle in radians from -pi to pi
x: float[-180:180]  # any angle in degrees from -180 to 180

So going with the angle example, one use-case might be something like:

"""my_module.py"""

import math

def atan_deg_int(v: float) -> int[-90:91]:
    """Computes the angle in integer degrees using arctangent."""
    return round(math.atan(v) * 180/math.pi)

if __name__ == "__main__":
    AngleDeg = float[0:360]  # a positive angle up to 360 degrees
    x: AngleDeg
    x = atan_deg_int(-0.5)

Then we would get an error similar to below:

>>> mypy  my_module.py
:1: error: Incompatible types in assignment (expression has type
"int[-90:91]", variable has type "float[0:45]")

Here the error occurs not because ints are considered inconsistent with
floats (mypy allows ints in the place of floats, so no problem there), but
because some of the ints from -90 to 90 (inclusive) fall out of the float
range of 0 to 360.

---
Ricky.

"I've never met a Kentucky man who wasn't either thinking about going home
or actually going home." - Happy Chandler


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Paul Moore
On Fri, 7 Aug 2020 at 16:32, Alex Hall  wrote:
> Paul, do you want to write `[s for s in strings if 
> (packaging.specifiers.SpecifierSet(s) except 
> packaging.specifiers.InvalidSpecifier: False)]`? That's a mouthful.

No, I would (obviously?) use some from ... imports to simplify it. But
even with that I agree it's on the borderline of readability.

(And David - no, I didn't copy it, I retyped it from memory and that
was a typo, sorry).
Paul


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Rob Cliffe via Python-ideas



On 07/08/2020 16:58, Guido van Rossum wrote:
On Fri, Aug 7, 2020 at 8:15 AM David Mertz wrote:


I think getting Guido on board would be a huge step. Python has
added quite a bit of new syntax since 2014, and Guido himself is
currently advocating another new big change (pattern matching). 
His opinion may have shifted.


Alas, it hasn't. Language design is not an exact science, and my gut 
still tells me that inline exceptions are a bad idea.


Understood.  But that's a bit of a debate-stopper;  it would be helpful 
if you could articulate it more clearly.


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Rob Cliffe via Python-ideas



On 07/08/2020 17:16, Serhiy Storchaka wrote:

The main problem to me with the exception catching expression is that
you need to add the exception name and several keywords, and it makes
real-world examples too long, so you will need to split the expression
on several lines, and add extra parenthesis. And in this case there is
no much difference between expression

    x = (
        <expression>
    except LongExceptionName:
        <default expression>
    )

and statement

    try:
        x = <expression>
    except LongExceptionName:
        x = <default expression>

(actually the statement may be shorter and better aligned).
This is a strawman argument.  You set up a case where exception-catching 
expressions are poor (long expression, long exception name) and then 
knock it down.  If you read the PEP you will find plenty of short examples:


process(dic[key] except KeyError: None)
value = (lst[2] except IndexError: "No value")
cond = (args[1] except IndexError: None)
pwd = (os.getcwd() except OSError: None)
e.widget = (self._nametowidget(W) except KeyError: W)
line = (readline() except StopIteration: '')
etc.



Other problem specific to the PEP 463 syntax is using colon in
expression. Colon is mainly used before indented block in complex
statements. It was introduced in this role purely for aesthetic reasons.
Using it in other context is very limited (dict display, lambda,
annotations, what else?).

Slice notation.  As you would have discovered if you read the PEP.
Dict display and slices are hardly of "very limited" use.  (I can't 
comment on annotations, I don't use them.)

  Even the "if" expression does not use it.




[Python-ideas] Re: Propouse add context to json module.

2020-08-07 Thread Antoine Pitrou
On Fri, 07 Aug 2020 08:54:37 -
"Kazantcev Andrey" 
wrote:

> Chris Angelico wrote:
> > Why do you want dump and load to take parameters from "somewhere
> > else"?   
> 
> Because developers of libraries don't think about configuring the json.dump
> method in most cases.

Then you should report a bug to them to give access to such
configuration, where that makes sense.
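
For example, the non-global alternative is for the library itself to expose
the knobs at its own API boundary; a small sketch (save_report and
**json_kwargs are made-up names):

import json

def save_report(obj, fp, **json_kwargs):
    # Library code passes caller-supplied options straight to json.dump,
    # instead of depending on process-wide state.
    json.dump(obj, fp, **json_kwargs)

# The caller tweaks behaviour explicitly, per call:
# save_report(data, fp, indent=2, ensure_ascii=False, default=str)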

> I would like to have a mechanism that would allow tweaking the
> behaviour for the entire program.

That's an anti-pattern IMO.

Regards

Antoine.



[Python-ideas] Re: Package kwkey and PEP 472 -- Support for indexing with keyword arguments

2020-08-07 Thread MRAB

On 2020-08-07 13:13, Jonathan Fine wrote:

We are discussing a proposal to extend Python's syntax to allow
     d[1, 2, a=3, b=4]

We are also discussing the associated semantics. At present
     d[1, 2]
     d[(1, 2)]
are semantically equivalent.


Python behaves as though it's wrapping parentheses around the index:

d[1, 2] => d[(1, 2)]
d[(1, 2)] => d[((1, 2))] == d[(1, 2)]


There is a proposal, that
     d[1, 2, a=3, b=4]
     d[(1, 2), a=3, b=4]
be semantically equivalent.

Will adding keyword arguments break existing code? No, because they're 
currently not allowed.


Python doesn't even allow you to unpack with *, so that won't break 
existing code either.


[snip]
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/LMIRZVLBLDBLR7CT352QYDQ4MNV23SKP/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Package kwkey and PEP 472 -- Support for indexing with keyword arguments

2020-08-07 Thread Stestagg
Jonathan, I took a look at your package, and the code, it's a really nice
exploration of the solutions.

I did find having to include the o() call in the sdaprano version quite
distracting (I realise it's needed to get it to work in current cpython
without silly tricks in the interpreter)...

...But I'm not afraid of doing horribly evil things to python for the
purposes of proof-of-concepts.

The code in this gist:
https://gist.github.com/stestagg/4962d4e86fb586b14138f19af4ae4a02

Implements a very ugly codec hack to make python 'understand' keywords in
index constructs, and I use it to show how the approach proposed by Steven
D'Aprano (Please correct me if I'm mis-representing anything!) should work
in my mind.
If you want to run this code yourselves, you can do by downloading the 2
files into a directory, and running `run.py` (with an appropriate
PYTHONPATH set)

The bit of the gist that is most relevant is this:

class ObjWithGetitem(Helper):

    def __getitem__(self, key=None, foo=None, bar=None):
        print(f'{key=}  {foo=}')


def main():
    obj = ObjWithGetitem()

    obj[1]
    # key=1  foo=None

    obj[foo=1]
    # key=None  foo=1

    obj[1, foo=2]
    # key=1  foo=2

    obj[1, 2]
    # key=(1, 2)  foo=None

    obj[1, 2, foo=3]
    # key=(1, 2)  foo=3

    obj[(1, 2), foo=3]
    # key=(1, 2)  foo=3

    obj[(1, 2), 3, foo=4]
    # key=((1, 2), 3)  foo=4

    obj[**{'foo': 9}]
    # key=None  foo=9

    obj[1, 2, foo=3, xxx=5]
    # TypeError: __getitem__() got an unexpected keyword argument 'xxx'


Which I like for its simplicity

I'm not sure about supporting '*args' syntax in indexes at all, unless
doing so could be shown to be trivially implementable without any nasty
corner-cases.

Steve
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/SW4G34L5YKDA2JYVZRULIWXQAANC4RW7/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Serhiy Storchaka
06.08.20 23:32, Chris Angelico пише:
> On Fri, Aug 7, 2020 at 6:28 AM  wrote:
>>
>>> Have a look at PEP 463, which looks into this in some detail.
>>
>> I wish this PEP had gained more traction.  Sooner or later, everyone wants 
>> an expression form of a try/except.
>>
>> When it comes to expressing "in the event of this exception, I want this 
>> default",  exception expressions read much more nicely than an equivalent 
>> try/except block.
>>
>> Also, new syntax would keep the rest of the language clean so that we don't end 
>> up adding dozens of get() methods. Or having us expand function signatures 
>> with default arguments, like min() and max() functions for example.
>>
>> It would be great if this PEP were to be resurrected.
>>
> 
> I'd be happy to work on it again if anyone else has suggestions for
> better justifications - see the rejection notice at the top of the
> PEP.

The main problem to me with the exception catching expression is that
you need to add the exception name and several keywords, and it makes
real-world examples too long, so you will need to split the expression
across several lines, and add extra parentheses. And in this case there is
not much difference between the expression

    x = (<some long expression>
         except LongExceptionName:
         <other long expression>)

and statement

    try:
        x = <some long expression>
    except LongExceptionName:
        x = <other long expression>

(actually the statement may be shorter and better aligned).

Other problem specific to the PEP 463 syntax is using colon in
expression. Colon is mainly used before indented block in complex
statements. It was introduced in this role purely for aesthetic reasons.
Using it in other contexts is very limited (dict display, lambda,
annotations, what else?). Even the "if" expression does not use it.
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/22QC2P4Q46JJXZABLETTOSH3MICE7C6H/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Package kwkey and PEP 472 -- Support for indexing with keyword arguments

2020-08-07 Thread Marco Sulla
On Fri, 7 Aug 2020 at 14:14, Jonathan Fine  wrote:
> At present
> d[1, 2]
> d[(1, 2)]
> are semantically equivalent.
>
> There is a proposal, that
> d[1, 2, a=3, b=4]
> d[(1, 2), a=3, b=4]
> be semantically equivalent.
>
> I find this troubling, for example because
>fn(1, 2, a=3, b=4)
>fn((1, 2), a=3, b=4)
> are semantically different.

I think I've understood your point. To be sure, what's the Steven proposal?
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/QDYCO5FOS26YYUTX4RV4LTPBAMCW6C3P/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Changing item dunder method signatures to utilize positional arguments (open thread)

2020-08-07 Thread Ricky Teachey
On Fri, Aug 7, 2020 at 4:19 AM Steven D'Aprano  wrote:

> On Fri, Aug 07, 2020 at 05:54:18PM +1000, Steven D'Aprano wrote:
>
> > This proposal doesn't say anything about reversing the decision made all
> > those years ago to bundle all positional arguments in a subscript into a
> > single positional parameter. What's done is done, that's not going to
> > change.
>
> Sorry, I was referring to the proposal that inspired this thread, to add
> keyword arguments to subscripting. There's an actual concrete use-case
> for adding this, specifically for typing annotations, and I cannot help
> but feel that this thread is derailing the conversation to something
> that has not been requested by anyone actually affected by it.
>

Well I wonder if they haven't asked because it would be such a huge change,
and it seems unlikely to happen. But I surely don't know enough about the
implementation details of these libraries to be able to say for certain one
way or the other.

I may have allowed my frustration to run ahead of me, sorry.
>
> There is a tonne of code that relies on subscripting positional
> arguments to be bundled into a single parameter. Even if we agreed that
> this was suboptimal, and I don't because I don't know the rationale for
> doing it in the first place, I would be very surprised if the Steering
> Council gave the go-ahead to a major disruption and complication to the
> language just for the sake of making subscript dunders like other
> functions.


> Things would be different if, say, numpy or pandas or other heavy users
> of subscripting said "we want the short term churn and pain for long
> term benefit".
>
> But unless that happens, I feel this is just a case of piggy-backing a
> large, disruptive change of minimal benefit onto a a small, focused
> change, which tends to ruin the chances of the small change. So please
> excuse my frustration, I will try to be less grumpy about it.
>

I understand the grumpiness given your explanation. I'm really not wanting
to derail that kwd args proposal-- I really like it, whatever the semantics
of it turn out to be.

I was actually trying to help the kwd arg case here. As illustrated by the
quote I included from Greg Ewing, there is nothing even close to a
consensus over what the semantic meaning of this should be:

m[1, 2, a=3, b=4]

Which could be made to mean one of the following things, or another thing I
haven't considered:

1. m.__getitem__((1, 2), a=3, b=4)  # handling of positional arguments
   unchanged from current behavior
2. m.__getitem__(1, 2, a=3, b=4)  # change positional argument handling from
   current behavior
3. m.__getitem__((1, 2), {'a': 3, 'b': 4})  # handling of positional
   arguments unchanged from current behavior
4. m.__getitem__(KeyObject((1, 2), {'a': 3, 'b': 4}))  # change positional
   argument handling from current behavior only in the case that kwd args
   are provided
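
A rough sketch of what options 1/3 versus option 2 would mean for the
receiving dunder. Since the bracketed syntax is not valid Python today, the
dunders are called directly; the class names and option labels are mine,
purely for illustration:

    class Option1:
        # positional part still arrives as one object (current behaviour)
        def __getitem__(self, key, **kwargs):
            return key, kwargs

    class Option2:
        # positional arguments unpacked like a normal function call
        def __getitem__(self, *args, **kwargs):
            return args, kwargs

    print(Option1().__getitem__((1, 2), a=3))  # ((1, 2), {'a': 3})    <- m[1, 2, a=3] and m[(1, 2), a=3] look the same
    print(Option2().__getitem__(1, 2, a=3))    # ((1, 2), {'a': 3})    <- m[1, 2, a=3]
    print(Option2().__getitem__((1, 2), a=3))  # (((1, 2),), {'a': 3}) <- m[(1, 2), a=3] now differs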

As Greg said:

These methods are already kind of screwy in that they don't
> handle *positional* arguments in the usual way -- packing them
> into a tuple instead of passing them as individual arguments.
> I think this is messing up everyone's intuition on how indexing
> should be extended to incorporate keyword args, or even whether
> this should be done at all.


To illustrate the comments of "kind of screwy" and "the usual way", using
semantic meaning # 1 above, then these are totally equivalent * :

m[1, 2, a=3, b=4]
m[(1, 2), a=3, b=4]

...even though these are totally different:

f(1, 2, a=3, b=4)
f((1, 2), a=3, b=4)

So my intention here isn't to derail, but to help the kwd argument proposal
along by solving this screwiness problem.

It is to suggest that maybe a way forward-- to make the intuition of the
semantics of kwd args to [ ] much more obvious-- would be to change the
signature so that this incongruity between what happens with "normal"
method calls and the "call" for item-get/set/del can be smoothed out.

If that incongruity were to be fixed, it seems to me it would become
*obvious* that the semantic meaning of `m[1, 2, a=3, b=4]` should
definitely be:

m.__getitem__(1, 2, a=3, b=4)

But if all of this is not helping but hindering. I am happy to withdraw the
idea.

* Citing my source: I borrowed these examples from Jonathan Fine's message
in the other thread


---
Ricky.

"I've never met a Kentucky man who wasn't either thinking about going home
or actually going home." - Happy Chandler
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/BJOW5FVCNKNBWKMGAOIR2NYH6HKG4ONK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Chris Angelico
On Sat, Aug 8, 2020 at 1:58 AM Guido van Rossum  wrote:
>
> On Fri, Aug 7, 2020 at 8:15 AM David Mertz  wrote:
>>
>> I think getting Guido on board would be a huge step.  Python has added quite 
>> a bit of new syntax since 2014, and Guido himself is currently advocating 
>> another new big change (pattern matching).  His opinion may have shifted.
>
>
> Alas, it hasn't. Language design is not an exact science, and my gut still 
> tells me that inline exceptions are a bad idea.
>

Thanks Guido. If people want to suggest improvements to the PEP's
wording, then I'm still open to doing some of that (someone pointed
out that some of the examples may have bugs and/or be out of date),
but with that clear statement, there's no need to reopen it.

ChrisA
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/SVSXCEXHAEQR3HS6LCJ2JAVD6B4PNKLR/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Guido van Rossum
On Fri, Aug 7, 2020 at 8:15 AM David Mertz  wrote:

> I think getting Guido on board would be a huge step.  Python has added
> quite a bit of new syntax since 2014, and Guido himself is currently
> advocating another new big change (pattern matching).  His opinion may have
> shifted.
>

Alas, it hasn't. Language design is not an exact science, and my gut still
tells me that inline exceptions are a bad idea.

-- 
--Guido van Rossum (python.org/~guido)
*Pronouns: he/him **(why is my pronoun here?)*

___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/EPO4U7EGGXAMEHZLA6DYOI3TWWTFTYPA/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Propouse add context to json module.

2020-08-07 Thread Serhiy Storchaka
06.08.20 11:03, Kazantcev Andrey пише:
> JSON serialization used in many different libraries without the ability for 
> configuration (Example 
> https://github.com/aio-libs/aioredis/blob/8a207609b7f8a33e74c7c8130d97186e78cc0052/aioredis/commands/pubsub.py#L18).
>  Propose to add something like the context in the decimal module, which will 
> contain all global configs for dump and load functions.

Changing global defaults is a bad idea, because it would break all
libraries which use the json module.

As for a local context which incorporates configuration for serializing
and deserializing, you already have two context classes. They are called
JSONEncoder and JSONDecoder.
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/IWLKGHH6PRA7KOJWAVQPCT3ILOAM4K25/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread David Mertz
On Fri, Aug 7, 2020 at 11:32 AM Alex Hall  wrote:

> > def not_valid(instance, schema):
> >     try:
> >         return validate(instance, schema)
> >     except ValidationError as err:
> >         return str(err)
>
>
> David, your example involves capturing the exception which was deferred in
> the PEP:
> https://www.python.org/dev/peps/pep-0463/#capturing-the-exception-object
>

It does, I recognize that.  If I had the "exception ternary" that didn't
capture the exception, I'd probably just evaluate to a True instead in the
exception case... then go back to revalidate manually only in the
unexpected case of validity failing. Of course, if some nice looking syntax
did both, so much the better.  Even the simpler form would have helped me
though.

I also know it is better in my example to return the exception object
itself rather than just its stringification.  If I were publishing my API
more widely, rather than the internal use I had, I would have done that.

-- 
The dead increasingly dominate and strangle both the living and the
not-yet born.  Vampiric capital and undead corporate persons abuse
the lives and control the thoughts of homo faber. Ideas, once born,
become abortifacients against new conceptions.
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/WSRJECQ3C6PIEBFWNQGKSLVPBLRO2TXM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Alex Hall
On Fri, Aug 7, 2020 at 5:24 PM Paul Moore  wrote:

> On Fri, 7 Aug 2020 at 16:21, David Mertz  wrote:
> >
> > On Fri, Aug 7, 2020 at 4:58 AM Brendan Barnwell 
> wrote:
> >>
> >> It seems that the rationale that was used in the PEP was fairly
> >> narrowly focused on the comparison with things like dict.get() and the
> >> idea of EAFP.  A somewhat broader justification might be something along
> >> these lines:
> >
> >
> > For an example. Anyone is free to use, but I'm not claiming it's
> necessarily the best.  This is from... well, probably not yesterday like I
> said in other comment, but a couple days ago. The module `jsonschema` has
> an API where it raises an exception if `validate()` doesn't succeed (None
> if things are happy). I don't love that API, so want to wrap it.
> >
> > def not_valid(instance, schema):
> >     try:
> >         return validate(instance, schema)
> >     except ValidationError as err:
> >         return str(err)
> >
> > I really wanted that to be one line rather than a helper function, and
> it really feels like it should be possible... and yet.
>
> I did basically the same yesterday:
>
> def is_valid_specifier(s):
>     try:
>         packaging.specifiers.SpecifierSet(s)
>         return True
>     except packaging.specifiers.InvalidSpecifier:
>         return False
>
> The function was only for use in a [s for s in strings if
> is_valid_specifier(s)] comprehension, so an in-line expression would
> have been ideal.
>
> Paul
>
>
David, your example involves capturing the exception which was deferred in
the PEP:
https://www.python.org/dev/peps/pep-0463/#capturing-the-exception-object

Paul, do you want to write `[s for s in strings if
(packaging.specifiers.SpecifierSet(s) except
packaging.specifiers.InvalidSpecifier: False)]`? That's a mouthful.
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/IE6NQ467ASSBJSCLQNHWX7UWR4YRNRPN/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread David Mertz
>
> For an example. Anyone is free to use, but I'm not claiming it's
> necessarily the best.  This is from... well, probably not yesterday like I
> said in other comment, but a couple days ago. The module `jsonschema` has
> an API where it raises an exception if `validate()` doesn't succeed (None
> if things are happy). I don't love that API, so want to wrap it.
>

> def not_valid(instance, schema):
>     try:
>         return validate(instance, schema)
>     except ValidationError as err:
>         return str(err)
>
> I really wanted that to be one line rather than a helper function, and it
> really feels like it should be possible... and yet.
>

Incidentally, this is also helped greatly by the Walrus operator.  With the
support function I write:

if msg := not_valid(instance, schema):
    print("The problem is:", msg)   # The exception contains a detailed diagnosis
else:
    do_json_stuff(instance)

-- 
The dead increasingly dominate and strangle both the living and the
not-yet born.  Vampiric capital and undead corporate persons abuse
the lives and control the thoughts of homo faber. Ideas, once born,
become abortifacients against new conceptions.
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/37XWNBFW5WQQI32TONYES6GZTNYU5W3I/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Paul Moore
On Fri, 7 Aug 2020 at 16:21, David Mertz  wrote:
>
> On Fri, Aug 7, 2020 at 4:58 AM Brendan Barnwell  wrote:
>>
>> It seems that the rationale that was used in the PEP was fairly
>> narrowly focused on the comparison with things like dict.get() and the
>> idea of EAFP.  A somewhat broader justification might be something along
>> these lines:
>
>
> For an example. Anyone is free to use, but I'm not claiming it's necessarily 
> the best.  This is from... well, probably not yesterday like I said in other 
> comment, but a couple days ago. The module `jsonschema` has an API where it 
> raises an exception if `validate()` doesn't succeed (None if things are 
> happy). I don't love that API, so want to wrap it.
>
> def not_valid(instance, schema):
>     try:
>         return validate(instance, schema)
>     except ValidationError as err:
>         return str(err)
>
> I really wanted that to be one line rather than a helper function, and it 
> really feels like it should be possible... and yet.

I did basically the same yesterday:

def is_valid_specifier(s):
    try:
        packaging.specifiers.SpecifierSet(s)
        return True
    except packaging.specifiers.InvalidSpecifier:
        return False

The function was only for use in a [s for s in strings if
is_valid_specifier(s)] comprehension, so an in-line expression would
have been ideal.

Paul
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/NCCZUWMMWUPRUKG2SZLICAMFIRRST27H/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Idea: Extend "for ... else ..." to allow "for ... if break ..." else

2020-08-07 Thread Stephen J. Turnbull
Rob Cliffe writes:

 > Did you see Guido's post when I raised a similar object to detecting 
 > "zero iterations", where it would be unacceptable to slow down all 
 > for-loops, so they would have to be compiled differently?

Yes.  My confusion (as you'll see elsewhere) was about the AST, not
the parser.

 > > Second, generally Python tries to avoid overloading keywords with
 > > multiple semantics.  The potential for confusion and misunderstanding
 > > of "except" (which I've suggested myself and now dislike) is pretty
 > > large I think.

 > IMO this is a bit disingenuous:

I'm not trying to mislead anybody, or try to imply there aren't cases
where keywords have been repurposed.

 >   "as" can be used with "import" and with context managers with 
 > quite different semantics.

I would disagree that the semantics are different.  Context managers
and imports have quite different semantics, but in both cases the "as"
clause has name binding semantics, while the object bound is
determined by the statement, not by the "as" clause.

 >      "del" can be used to remove a variable binding, a sequence element, 
 > a sequence slice or a dictionary key.

The connection is more tenuous, but in each case an object loses a
reference.  I see your point of view, especially since the semantics
of del on sequence elements and slices affects the "names" of other
sequence elements, but I think the "reference destruction" semantics
are "sufficiently" similar across the different uses of "del".

 >      "not" can be used as Boolean negation or in the compound operator 
 > "not in".

Which is a negation.  I don't see how anybody reading that could
mistake the meaning.

 > Whereas the new use of "except" that Matthew is proposing is very
 > similar to its existing use (certainly conceptually, if not in the
 > implementation details).

As a restricted goto, that is true.  In fact, it's so similar that we
may as well use the original!  Is one level of indentation really
worth it?

What I see as different is that Matthew's proposal is for a purpose
that is explicitly local to the loop statement, where except is
explicitly nonlocal.  Another way to put it is in this thread "except"
is proposed as marking a goto target, where in a try "except" is
almost a "come from" (although not with full INTERCAL compatility).

I also wonder about try ... excepts nested in a for loop with
excepts.  That's probably no harder to deal with than nested loops
with breaks at different levels (but that can be a bit confusing).

 > (To be clear: although I'm defending Matthew's proposal here, my 
 > preferred option is still some new syntax.)

"try" is enough to implement any of the use cases in the relatively
rare cases it's needed.
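
For concreteness, a minimal runnable sketch of using "try" as a labelled
break today (the exception class and the data are invented for the example):

    class Found(Exception):
        """Serves purely as a labelled-break target."""

    matrix = [[1, 3, 5], [2, 4, 6]]

    try:
        for row in matrix:
            for x in row:
                if x % 2 == 0:          # the condition being scanned for
                    raise Found
    except Found:
        print("found an even number:", x)
    else:
        print("no even number anywhere")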

On the plus side, a "try" and its except clauses require a bit of code
to set up.  I don't know whether that's a major consideration, but one
advantage of a new implementation for the purpose of implementing
"labelled breaks" would be to have a lighter implementation.  Whether
it's worth it is above my pay grade.

___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/DC77MEHTY4NMLUSIRS7JFKQA4FF6UNHX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread David Mertz
On Fri, Aug 7, 2020 at 4:58 AM Brendan Barnwell 
wrote:

> It seems that the rationale that was used in the PEP was fairly
> narrowly focused on the comparison with things like dict.get() and the
> idea of EAFP.  A somewhat broader justification might be something along
> these lines:
>

For an example. Anyone is free to use, but I'm not claiming it's
necessarily the best.  This is from... well, probably not yesterday like I
said in other comment, but a couple days ago. The module `jsonschema` has
an API where it raises an exception if `validate()` doesn't succeed (None
if things are happy). I don't love that API, so want to wrap it.

def not_valid(instance, schema):
    try:
        return validate(instance, schema)
    except ValidationError as err:
        return str(err)

I really wanted that to be one line rather than a helper function, and it
really feels like it should be possible... and yet.

-- 
The dead increasingly dominate and strangle both the living and the
not-yet born.  Vampiric capital and undead corporate persons abuse
the lives and control the thoughts of homo faber. Ideas, once born,
become abortifacients against new conceptions.
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/F3TSOX3MOQ7JSJA5CH6PQ25464RCRXWZ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Idea: Extend "for ... else ..." to allow "for ... if break ..." else

2020-08-07 Thread Stephen J. Turnbull
Guido van Rossum writes:

 > Maybe you’re thinking of a “one-pass” compiler, like the original Pascal
 > compiler, that generates code during parsing.

Not really.  The "after" phrasing comes from the parsing part which
does move through the source pretty much linearly in both the
traditional and PEG parsers (as far as I understand the latter).

What I really meant with respect to the code generation was what you
describe as "going 'up'" with respect to the AST.  Evidently I got
that wrong.

I have a somewhat nebulous understanding of which way is "up", I
guess. :-)  I need to go back to school on compiler tech.
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/Y5BJRTXITUITRDMWWHXDFRE2I65EZMC3/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread David Mertz
I think getting Guido on board would be a huge step.  Python has added
quite a bit of new syntax since 2014, and Guido himself is currently
advocating another new big change (pattern matching).  His opinion may have
shifted.

FWIW, I'm +1 on the concept. I've wanted it quite often, as recently as
yesterday.  I don't really love the syntax that was settled on.  Some sort
of keyword based version to resemble the ternary expression feels much more
natural to me.  It really *is* a kind of ternary, after all.  Exactly which
words in exactly which order... well, I could bring in yet more paint
samples for the bikeshed, but the concept is good.

Yours, David...

On Fri, Aug 7, 2020 at 1:55 AM Chris Angelico  wrote:

> On Fri, Aug 7, 2020 at 11:01 AM Jonathan Grant
>  wrote:
> >
> > How can we start to revive this PEP? And I completely agree, making the
> syntax `... except ... with ...` is much better than `eor`.
> >
>
> Have a read of the PEP's rejection notice at the top. To revive the
> PEP, the objections to it need to be solved.
>
> ChrisA
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/5HNSLOTK45U2NQOD26NK3CCG5IL3VGGN/
> Code of Conduct: http://python.org/psf/codeofconduct/
>


-- 
The dead increasingly dominate and strangle both the living and the
not-yet born.  Vampiric capital and undead corporate persons abuse
the lives and control the thoughts of homo faber. Ideas, once born,
become abortifacients against new conceptions.
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/NNNPDB4CQ3Z67E55PS6QFROXSQEYCWJP/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Propouse add context to json module.

2020-08-07 Thread Marco Sulla
What about __json__()?
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/555MJQ4I5VTPFHZ5AJMG7VMUCCV52HS2/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Marco Sulla
On Thu, 6 Aug 2020 at 14:57, Jonathan Grant
 wrote:
> Instead of writing this:
>
> try:
>     return my_dict["a"]["b"]["c"]["d"]
> except:
>     return "some default"
>
> [...]
>
> I propose we allow for an inline exception handler, like `eor`:
>
> return my_dict["a"]["b"]["c"]["d"] eor "some default"

For this behaviour, you can use the kwkey module:
https://pypi.org/project/kwkey/

About PEP 463, the examples in the PEP seem to me less readable. Not
sure where the advantage is.
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/4DQHYP5VKMDNXMAQOBWEQ4XC7JJ65GIK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: bpo-41231: wraps default behavior with __annotations__

2020-08-07 Thread Joao S. O. Bueno
"3" would "break stuff around".
I'd be glad if "2" worked - but that has to be well documented and
easy to see.

"1" is basically a workaround the current behavior, and people who
care for it will have to continue doing that until Python 3.10 is
widespread - but ultimately it basically requires that everyone doing that
re-implements whatever logic you have in that PR for "2"  (thus defeating
the very purpose of "@wraps" - I'd rather rebind wrapper.__name__ and
__wrapped__ manually)
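
For reference, rather than rebinding attributes one by one, the opt-out can
also be spelled today via the `assigned` hook that functools.wraps already
accepts; a sketch only (the decorator and example functions are invented),
not the logic in the PR:

    from functools import WRAPPER_ASSIGNMENTS, wraps

    ASSIGNED = tuple(a for a in WRAPPER_ASSIGNMENTS if a != "__annotations__")

    def stringify(func):
        @wraps(func, assigned=ASSIGNED)   # copy everything except __annotations__
        def wrapper(*args, **kwargs) -> str:
            return str(func(*args, **kwargs))
        return wrapper

    @stringify
    def answer() -> int:
        return 42

    print(answer.__name__)         # 'answer'  -- still copied from the wrapped function
    print(answer.__annotations__)  # {'return': <class 'str'>}  -- the wrapper's own, kept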

TL;DR: I agree that "2" is the way to go.


On Fri, 7 Aug 2020 at 09:14, David Caro  wrote:

>
> Hi!
>
> Some time ago I opened a bug and sent a patch, but I realize that I
> should have started a chat here first, better late than never.
>
> The default behavior of the `functools.wraps` decorator, is to copy over
> the `__annotations__` from the wrapped function to the wrapper function.
>
> This is ok if the wrapper function has the same signature as the
> wrapped one (that I guess was the main goal behind it, just a simple
> wrapper).
>
> The issue comes when the wrapper function has a different signature than
> the wrapped, for example, the `contextlib.contextmanager` decorator,
> changes the return value, returning a `_GeneratorContextManager`:
>
> ```
> def contextmanager(func):
>     """...
>     """
>     @wraps(func)
>     def helper(*args, **kwds):
>         return _GeneratorContextManager(func, args, kwds)
>     return helper
> ```
>
> but as it uses `wraps`, the `__annotations__` will be copied over, and
> the new function will have an incorrect return type there.
>
>
> In order to improve this, I have some proposals:
>
> 1. Keep the default behavior of `wraps`, but change the usage around the
> library to not copy over the `__annotations__` in places (like
> `contextmanager`) where the types change.
>
> Advantages are that `wraps` keeps being backwards compatible, though the
> change in `contextmanager` might force some people to "fix" their
> annotations, I would consider that a "bugfix", more than a behavior change.
>
> The disadvantage is that you have to know that `wraps` will overwrite
> the wrapped function annotations by default, and I think that it's
> broadly used when creating decorators that have different signature/types
> than
> the wrapped function, so people will have to explicitly change their
> code.
>
>
> 2. Change `wraps` so that if there's any type of annotation in the wrapper
> function, it will not overwrite it.
> This is what I did in the PR (though I'm not convinced).
>
> Advantages are that only people that took the time to annotate the
> wrapper function will see the change, and that will be the change they
> expect (that's my guess though).
> It's a bit smart with it, so if you don't specify a return type, will
> get it from the wrapped function, or if you don't specify types for the
> arguments will get them from the wrapped function, filling only the gap
> that was not specified.
> For everyone else, `wraps` is backwards compatible.
>
> Disadvantages, it's a bit more convoluted as it requires some logic to
> detect what annotations are defined in the wrapper and which ones in the
> wrapped and merge them.
>
>
> 3. Change the default behavior of `wraps` and don't overwrite the
> `__annotations__` property. This is non-backwards compatible, but imo
> the simplest.
>
> Advantages are that it's very simple, and you'll end up with valid,
> straight-forward `__annotations__` in the wrapped function (that is,
> whatever you declare when writing the wrapped function).
>
> Disadvantages, if the goal of the `wraps` decorator was as a helper when
> wrapping functions in a simple manner (not changing the signature), then
> this becomes a bit more inconvenient, as it will not copy the
> `__annotations__` as it was doing by default. It also changes the
> current default behavior, so it will potentially affect everyone that
> uses `wraps` currently.
>
>
> Ideas?
>
>
> Thanks!
>
> --
> David Caro
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/LFF3EXFPZG3FMXY4AXORU6SXXKBPSZOY/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/EVOMCKDHASWQWFNZBW2MRB6CH5I56M3G/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Chris Angelico
On Fri, Aug 7, 2020 at 10:36 PM Rob Cliffe  wrote:
> OK, some of the arguments are a bit exaggerated.  (This can be corrected.)  
> But this is not an objection to the PEP per se, just to the way some of the 
> arguments are worded.
>

Or, looking at that differently: It's an objection to the PEP, not to
the proposal. That means that, if someone words up better arguments,
the PEP might be better received.

Hence, reopening the PEP basically means figuring out what arguments
should be put forward.

ChrisA
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/5QACAS2RQWHRJUCK5LRLT5LCNLFXREGN/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Rob Cliffe via Python-ideas



On 07/08/2020 06:52, Chris Angelico wrote:

On Fri, Aug 7, 2020 at 11:01 AM Jonathan Grant
 wrote:

How can we start to revive this PEP?

[I.e. PEP 463 Exception-catching expressions]

Have a read of the PEP's rejection notice at the top. To revive the
PEP, the objections to it need to be solved.

ChrisA
___


TLDR: The "objections" to the PEP can't be "solved" if there aren't any.

Here is the full rejection notice in *bold* and some comments from me:

*I want to reject this PEP. I think the proposed syntax is acceptable 
given the desired semantics, although it's still a bit jarring. It's 
probably no worse than the colon used with lambda (which echoes the 
colon used in a def just like the colon here echoes the one in a 
try/except) and definitely better than the alternatives listed.*


The only objection here is that the syntax is "a bit jarring", 
apparently referring to a colon appearing in the middle of a line.  But 
the syntax was subject to the usual bikeshedding at the time, and the 
proposed one is "definitely better than the alternatives listed".  It 
seems unlikely that anyone could find a better syntax now (although if 
they can, great!), so why object to it?


*But the thing I can't get behind are the motivation and rationale. I 
don't think that e.g. dict.get() would be unnecessary once we have 
except expressions, and I disagree with the position that EAFP is better 
than LBYL, or "generally recommended" by Python. (Where do you get that? 
From the same sources that are so obsessed with DRY they'd rather 
introduce a higher-order-function than repeat one line of code? :-)*

OK, some of the arguments are a bit exaggerated.  (This can be 
corrected.)  But _this is not an objection to the PEP per se_, just to 
the way some of the arguments are worded.  ISTM that the motivation and 
rationale are explained well in the PEP, and the rejection notice does 
not address them at all.  (I might particularly mention the section 
"Narrowing of exception-catching scope" which illustrates how some 
existing code can easily be _improved_ with exception-catching expressions.)


So: thus far, IMO, there has been _no substantive objection to the 
PEP_ whatsoever.


*This is probably the most you can get out of me as far as a 
pronouncement. Given that the language summit is coming up I'd be happy 
to dive deeper in my reasons for rejecting it there (if there's demand).*


Yes please Guido, would you be willing to expand on this?  It's hard to 
counter objections without knowing what they are.  I apologise for the 
intrusion, but this is the reason I am copying this post to you.


*I do think that (apart from never explaining those dreadful acronyms :-)*

This too can be corrected.

*this was a well-written and well-researched PEP, and I think you've 
done a great job moderating the discussion, collecting objections, 
reviewing alternatives, and everything else that is required to turn a 
heated debate into a PEP. Well done Chris (and everyone who helped), and 
good luck with your next PEP!*


Quite so.

Best wishes

Rob Cliffe

___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/N4X2CBGZWWZCDXYTEX7OSAPOGOVEPDOC/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Package kwkey and PEP 472 -- Support for indexing with keyword arguments

2020-08-07 Thread Jonathan Fine
We are discussing a proposal to extend Python's syntax to allow
d[1, 2, a=3, b=4]

We are also discussing the associated semantics. At present
d[1, 2]
d[(1, 2)]
are semantically equivalent.

There is a proposal, that
d[1, 2, a=3, b=4]
d[(1, 2), a=3, b=4]
be semantically equivalent.

I find this troubling, for example because
   fn(1, 2, a=3, b=4)
   fn((1, 2), a=3, b=4)
are semantically different.

Here's another example. If we are allowed to write
d[*argv, a=3, b=4]
then the proposal makes this equivalent to
   d[argv, a=3, b=4]
when type(argv) is tuple.

Consider now
>>> def fn(*argv): print(argv)
>>> argv = 'key'
>>> fn(*argv)
('k', 'e', 'y')

I think it would be a trap for the unwary, that the equivalence of
   d[argv, a=3, b=4]
   d[*argv, a=3, b=4]
depends on the type of argv.

The root of the proposal that
d[1, 2, a=3, b=4]
d[(1, 2), a=3, b=4]
be semantically equivalent is this: At present
d[1, 2]
d[(1, 2)]
are semantically equivalent.

Why not instead, as part of the proposed semantics, make
d[1, 2]
d[(1, 2)]
semantically different, but only for those classes that ask for it. (This
would automatically preserve backwards compatibility.)

I believe this is possible and straightforward in the future, and also in
the present via the 'o' and K mechanism. I'm happy to implement this in
kwkey, when I have time.

An aside: I also strongly believe that writing and studying examples that
use the new syntax, via the 'o' and K mechanism, is essential to making
good choices regarding the semantics, the writing and approval of the PEP,
and the success of the extension to Python (should the PEP be accepted).

I hope this helps us come to a shared understanding.
-- 
Jonathan
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/L3TSE4FZ2L7ETEW2JUD3W24LTFOJMEHF/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] bpo-41231: wraps default behavior with __annotations__

2020-08-07 Thread David Caro


Hi!

Some time ago I opened a bug and sent a patch, but I realize that I
should have started a chat here first, better late than never.

The default behavior of the `functools.wraps` decorator, is to copy over
the `__annotations__` from the wrapped function to the wrapper function.

This is ok if the wrapper function has the same signature as the
wrapped one (that I guess was the main goal behind it, just a simple
wrapper).

The issue comes when the wrapper function has a different signature than
the wrapped, for example, the `contextlib.contextmanager` decorator,
changes the return value, returning a `_GeneratorContextManager`:

```
def contextmanager(func):
    """...
    """
    @wraps(func)
    def helper(*args, **kwds):
        return _GeneratorContextManager(func, args, kwds)
    return helper
```

but as it uses `wraps`, the `__annotations__` will be copied over, and
the new function will have an incorrect return type there.
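
A small runnable demonstration of that behaviour; the wrapped and wrapper
functions here are invented, but the copying is just what functools.wraps
does by default:

    from functools import wraps

    def fetch() -> int:
        return 42

    @wraps(fetch)
    def helper(*args, **kwds) -> str:   # the wrapper really returns a str
        return str(fetch(*args, **kwds))

    print(helper.__annotations__)   # {'return': <class 'int'>} -- copied from fetch,
                                    # so the wrapper now advertises the wrong type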


In order to improve this, I have some proposals:

1. Keep the default behavior of `wraps`, but change the usage around the
library to not copy over the `__annotations__` in places (like
`contextmanager`) where the types change.

Advantages are that `wraps` keeps being backwards compatible, though the
change in `contextmanager` might force some people to "fix" their
annotations, I would consider that a "bugfix", more than a behavior change.

The disadvantage is that you have to know that `wraps` will overwrite
the wrapped function annotations by default, and I think that it's
broadly used when creating decorators that have different signature/types than
the wrapped function, so people will have to explicitly change their
code.


2. Change `wraps` so that if there's any type of annotation in the wrapper function, 
it will not overwrite it.
This is what I did in the PR (though I'm not convinced).

Advantages are that only people that took the time to annotate the
wrapper function will see the change, and that will be the change they
expect (that's my guess though).
It's a bit smart with it, so if you don't specify a return type, will
get it from the wrapped function, or if you don't specify types for the
arguments will get them from the wrapped function, filling only the gap
that was not specified.
For everyone else, `wraps` is backwards compatible.

Disadvantages, it's a bit more convoluted as it requires some logic to
detect what annotations are defined in the wrapper and which ones in the
wrapped and merge them.


3. Change the default behavior of `wraps` and don't overwrite the
`__annotations__` property. This is non-backwards compatible, but imo
the simplest.

Advantages are that it's very simple, and you'll end up with valid,
straight-forward `__annotations__` in the wrapped function (that is,
whatever you declare when writing the wrapped function).

Disadvantages, if the goal of the `wraps` decorator was as a helper when
wrapping functions in a simple manner (not changing the signature), then
this becomes a bit more inconvenient, as it will not copy the
`__annotations__` as it was doing by default. It also changes the
current default behavior, so it will potentially affect everyone that
uses `wraps` currently.


Ideas?


Thanks!

-- 
David Caro
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/LFF3EXFPZG3FMXY4AXORU6SXXKBPSZOY/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Package kwkey and PEP 472 -- Support for indexing with keyword arguments

2020-08-07 Thread Steven D'Aprano
On Thu, Aug 06, 2020 at 02:46:01PM +0100, MRAB wrote:
> On 2020-08-06 14:16, Stestagg wrote:

> > In the "New syntax", wouldn't these examples map to:
> >
> > d[1, 2, a=3]  =>  d.__getitem__((1, 2), a=3)
> > and
> > d[(1, 2), a=3]  =>  d.__getitem__((1, 2), a=3)

That is certainly what I would expect. All positional arguments are 
packed into a single positional parameter, and any keyword arguments 
passed separately.

Any current subscript dunder is likely to look like this:

def __getitem__(self, index):  # or key, whatever

i.e. it will only have a single positional argument. (Aside from self of 
course.)


> Not quite. The second should be:
> 
> d[(1, 2), a=3]  =>  d.__getitem__(((1, 2),), a=3)

Aside from the keyword argument, which is a syntax error, that's not 
what happens now.


py> class Demo:
...     def __getitem__(self, index):
...         print(index, type(index))
... 
py> d = Demo()
py> d[(1, 2)]  # Tuple single arg.
(1, 2) <class 'tuple'>
py> d[1, 2]  # Still a tuple.
(1, 2) <class 'tuple'>


Adding a keyword arg should not change this.



[Stestagg]
> > I.e. My understanding was that the existing conversion of all 
> > positional parameters in a subscript would be packed into a tuple 
> > (strictly for consistency with legacy behaviour) while any keywords 
> > would be passed as kwargs?

To be clear, a single argument is not packed into a tuple.

I think that what is actually going on is a matter of precedence. Commas 
do double duty, separating arguments in calls and creating tuples. 
Inside a function call, the argument separator takes priority:

func(1, 2, 3)

is interpreted as three arguments separated by commas. If you want a 
tuple, you need to change the priority by using round brackets:

func((1, 2, 3))

But inside a subscript, there is currently no concept of multiple 
arguments. There's always only one item, and commas just have their 
normal meaning of creating a tuple.
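
The function-call side of that double duty is easy to demonstrate with a
tiny runnable sketch:

    def func(*args):
        print(args)

    func(1, 2, 3)     # (1, 2, 3)    -- commas separate three arguments
    func((1, 2, 3))   # ((1, 2, 3),) -- round brackets force a single tuple argument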

I think that's how it works, but I'm not quite good enough at reading 
the grammar specs to be sure.

https://docs.python.org/3/reference/grammar.html


-- 
Steven
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/WPS2KEUY54MXB4JBOLG2C5RULS2AXPFX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Paul Moore
On Fri, 7 Aug 2020 at 09:56, Brendan Barnwell  wrote:
> > Have a read of the PEP's rejection notice at the top. To revive the
> > PEP, the objections to it need to be solved.
>
> It seems that the rationale that was used in the PEP was fairly
> narrowly focused on the comparison with things like dict.get() and the
> idea of EAFP.  A somewhat broader justification might be something along
> these lines:

Drastic cut because this is basically little more than a +1 comment,
but that rationale sounds a *lot* better than the original one in the
PEP (that got rejected). I'm slightly skeptical that just modifying
the rationale and resubmitting it is genuinely all we need to do, but
is there any way we could get a steer from the SC or someone as to
whether that would be OK, or if not, what else would be needed?

Paul
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/PIK3DQ27PUQ4T3XFCWOLKKTVT7H3WN3G/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Package kwkey and PEP 472 -- Support for indexing with keyword arguments

2020-08-07 Thread Stefano Borini
Ok so let's do it like this. I'll open a PR against the PEP and I will
aggregate all the feedback from this discussion as additional notes.
I'll have to re-read the PEP myself, It's been a while.
As I said, I'm swamped so I might start working on it probably on Monday.

On Wed, 5 Aug 2020 at 23:08, Christopher Barker  wrote:
>
> On Wed, Aug 5, 2020 at 3:01 PM Stefano Borini  
> wrote:
>>
>>  Maybe I should open a new PEP?
>
>
> I"ll let teh PEP editors decide, but it look slike it was "rejected"m with 
> this comment:
>
> "The idea never seemed to gain any traction over its near 5 years in
> existence as a PEP."
>
> So I'd think re-opening it would be fine -- rather than clutter up the PEP 
> namespace...
>
> Maybe we could use a "suspended" status for PEPs?
>
> -CHB
>
>
>
>> On Tue, 4 Aug 2020 at 14:26, Jonathan Fine  wrote:
>> >
>> > Thank you all for your posts. I'm busy now and for the next few days, so 
>> > have little time to respond. Here's some comments and suggestions.
>> >
>> > I hope that Andras, Caleb, Stefano, Neil, Joao Bueno, Todd and Stephan 
>> > will take a special interest in this post. In the previous thread, these 
>> > people saw that the proposed new syntax
>> > d[1, 2, a=3, b=4]
>> > would bring benefits to their own particular use of Python. (Apologies for 
>> > any omitted names or misunderstanding of posts).
>> >
>> > I hope the package kwkey shows that it is possible now to write
>> > from kwkey import o
>> > d[o(1, 2, a=3, b=4)]
>> > as a workable present day substitute for the proposed syntax
>> > d[1, 2, a=3, b=4]
>> >
>> > I think using this can safely go ahead, even though there may be 
>> > disagreements on the meaning of 'o' and the implementation of classes that 
>> > take advantage of the new syntax. Indeed, I think going ahead now will 
>> > contribute to understanding and resolving the disagreements, by creating a 
>> > shared experience.
>> >
>> > I suggest that those who previously suggested uses for the proposed syntax 
>> > now implement some examples. (I give a list below.) They can do this using 
>> > my API, Steven's API, or any other API. Or indeed now, using the return 
>> > value of 'o' directly.
>> >
>> > I've started this process with a toy example:
>> > https://github.com/jfine2358/python-kwkey/blob/master/kwkey/example_jfine.py
>> >
>> > Here are three aspects to the proposed syntax. They are all important, and 
>> > good design will balance between the various parts and interests.
>> >
>> > First, ordinary programmers, who perhaps want
>> > d[1, 2]
>> > d[x=1, y=2]
>> > d[1, y=2]
>> > d[y=2, x=1]
>> > to all be equivalent, for d a mapping of whose domain is points in the x-y 
>> > plane. More complicated examples might be found in function annotations 
>> > (Andras Tantos, Caleb Donovick), quantum chemistry (Stefano Borini), 
>> > networkx (Neil Girdhar), numpy and pandas (Joao Bueno), xarrary (Todd, 
>> > Stephan Hoyer).
>> >
>> > Second, there are those who implement classes that make use of the 
>> > proposed syntax.
>> >
>> > Third, there are those who implement the extension of Python that allows
>> > d[o(1, 2, a=3, b=4)]
>> > to be replaced by
>> > d[1, 2, a=3, b=4]
>> >
>> > I suggest that those who see benefits in feature produce experimental 
>> > implementations via kwkey, just as I did in my kwkey.example_jfine. It is 
>> > possible to do this now, and so have benefits now, in a way that is 
>> > reasonably future proof regarding implementation of the proposed new 
>> > syntax.
>> >
>> > If you're a user of kwkey, I will have some time available to help you if 
>> > you want it.
>> >
>> > I hope this helps some, and harms none.
>> > --
>> > Jonathan
>> >
>> >
>>
>>
>> --
>> Kind regards,
>>
>> Stefano Borini
>> ___
>> Python-ideas mailing list -- python-ideas@python.org
>> To unsubscribe send an email to python-ideas-le...@python.org
>> https://mail.python.org/mailman3/lists/python-ideas.python.org/
>> Message archived at 
>> https://mail.python.org/archives/list/python-ideas@python.org/message/QK3YV3BUTF4VCPKNNMHFDWVJDQIJMZ3A/
>> Code of Conduct: http://python.org/psf/codeofconduct/
>
>
>
> --
> Christopher Barker, PhD
>
> Python Language Consulting
>   - Teaching
>   - Scientific Software Development
>   - Desktop GUI and Web Development
>   - wxPython, numpy, scipy, Cython



-- 
Kind regards,

Stefano Borini
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/3ZVAM2G2MEFID3I767EUVV54MXLG3TET/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Propouse add context to json module.

2020-08-07 Thread Kazantcev Andrey
Chris Angelico wrote:
> Why do you want dump and load to take parameters from "somewhere
> else"? 

Because developers of libraries don't think about configuring the json.dump method 
in most cases. I would like to have a mechanism that would allow tweaking the 
behaviour for the entire program. I just don't know how best to do it. The idea 
with the configuration via the context manager seems to me to be good. I know 
exactly what code will be patched and I can catch the bug.
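
For what it's worth, a rough sketch of the context-manager style being
described, done today by monkeypatching json.dumps; every name here other
than json.dumps itself is invented for illustration, and nothing like this
exists in the json module:

    import contextlib
    import contextvars
    import functools
    import json

    _json_defaults = contextvars.ContextVar("json_defaults", default={})
    _original_dumps = json.dumps

    @functools.wraps(_original_dumps)
    def _dumps_with_defaults(obj, **kwargs):
        # Explicit keyword arguments win over the context-local defaults.
        return _original_dumps(obj, **{**_json_defaults.get(), **kwargs})

    json.dumps = _dumps_with_defaults   # the program-wide patch being debated

    @contextlib.contextmanager
    def json_config(**defaults):
        token = _json_defaults.set({**_json_defaults.get(), **defaults})
        try:
            yield
        finally:
            _json_defaults.reset(token)

    # Code calling json.dumps inside the block picks up the defaults
    # (but note: anything that did `from json import dumps` before the
    # patch still holds the original function).
    with json_config(ensure_ascii=False, sort_keys=True):
        print(json.dumps({"répertoire": 1, "année": 2}))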
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/SVJ5Q4JSK3WVWZ63K4SQ7LB6GDNIYCEB/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Inline Try-Except Clause

2020-08-07 Thread Brendan Barnwell

On 2020-08-06 22:52, Chris Angelico wrote:

On Fri, Aug 7, 2020 at 11:01 AM Jonathan Grant
 wrote:


How can we start to revive this PEP? And I completely agree, making the syntax 
`... except ... with ...` is much better than `eor`.



Have a read of the PEP's rejection notice at the top. To revive the
PEP, the objections to it need to be solved.


	It seems that the rationale that was used in the PEP was fairly 
narrowly focused on the comparison with things like dict.get() and the 
idea of EAFP.  A somewhat broader justification might be something along 
these lines:


In practice, a sizable number of try/except statements do nothing except 
evaluate a single expression and assign it to a variable in the try 
block, with a single except block that instead assigns a different value 
to the same variable.  This requires four lines of code even if the 
expression and exception-set involved are quite simple.


This common pattern is Python's current way of expressing the idea of "I 
want this, or, if that doesn't work, then this other thing", where 
"doesn't work" means "raises a certain exception".  This is parallel to 
the if-else ternary operator, but the condition "raises an exception" 
cannot currently be described with an expression, so if-else cannot be 
used to handle this case.


Many well-designed functions raise exceptions to signal well-defined 
error conditions for which the calling code has an easily-expressed 
alternative value (i.e., "what to use instead of the return value of the 
function, if it raises like this").  In other words, both the 
exceptional condition and the desired result when it occurs may be 
expressed clearly and briefly --- except that it can't, because the 
try/except structure needed to shim in the alternative is itself so 
cumbersome.  Similarly, in many such situations, the variable assigned 
inside the try/except is only used once immediately after, and the whole 
if-this-except-this logic could have been inlined into the following 
code, if not for the fact that try/except is a statement.


This PEP proposes a clean and simple way to handle this common 
situation.  Just as the if-else ternary operator provides a handy way to 
encode "X unless condition A, in which case Y", the except operator 
provides a handy way to encode "X unless EXCEPTIONAL condition A, in 
which case Y".


--
Brendan Barnwell
"Do not follow where the path may lead.  Go, instead, where there is no 
path, and leave a trail."

   --author unknown
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/XD35QJMO62KBK4I6P7VVA74NWOF7XKY5/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Propouse add context to json module.

2020-08-07 Thread Paul Moore
On Fri, 7 Aug 2020 at 09:26, Kazantcev Andrey  wrote:
>
> The problem in this code
>
> lib.py
> ```
> from json import dumps
>
> def some_func():
> # do something
> res = dumps(...)
> # do something
> ```
>
> I wish dump and load themselves could take parameters from somewhere else, 
> and that was the standard behaviour.

That's such a general statement that you could probably use it for *any*
function, at some point. Which makes your argument pretty weak, IMO.
And to be honest, the standard Python way of providing that behaviour
is monkeypatching. It feels like a hack because the general view is
that needing to do it is a sign of a badly designed API, is all.

For your example, if wanting to change the format of dumps is an
important feature, lib.py "should" have been designed with a
configuration mechanism that allows users to choose that format. The
fact that it doesn't implies either that the author of lib.py doesn't
want you to do that, or that they didn't think of it. Monkeypatching
allows you to address that limitation, so in that sense it's a
flexible but advanced mechanism, not a hack.

It's not particularly hard (as I'm sure you realise):

import json
import lib

def custom_dumps(obj):
    # Substitute whichever dumps() options you actually prefer here.
    return json.dumps(obj, param1=preference1)

old_val = lib.dumps
lib.dumps = custom_dumps   # apply the patch before calling into lib
try:
    lib.some_func()
finally:
    lib.dumps = old_val

That needs an understanding of how lib.py works, and it needs you to
accept the risk of using the library in a way that's not supported,
but it's a perfectly reasonable approach for an unexpected case, IMO.

Paul
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/ZHS4DX7SCUNLSRCB4JMUDKU2FP6IALS2/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Propouse add context to json module.

2020-08-07 Thread Chris Angelico
On Fri, Aug 7, 2020 at 6:24 PM Kazantcev Andrey  wrote:
>
> The problem in this code
>
> lib.py
> ```
> from json import dumps
>
> def some_func():
> # do something
> res = dumps(...)
> # do something
> ```
>
> If I patch dumps like you propose lib doesn't see any change. Also, it's all 
> hacks. I wish dump and load themselves could take parameters from somewhere 
> else, and that was the standard behaviour.

PLEASE include context when you reply.

Yes, I'm aware that my original demo wouldn't work with this kind of
thing. But (1) does that actually happen? and (2) my second example
will work, as long as you do it before this library imports anything
from json.

Why do you want dump and load to take parameters from "somewhere
else"? As a general rule, it's better for them to get their parameters
entirely from, well, their parameters. What you're trying to do is
override someone else's code, and that basically means monkeypatching
the library, or monkeypatching the json module. Take your pick.

ChrisA
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/CBDVULQMTFMROAZBUBX2ASK5M6RNGOIQ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Propouse add context to json module.

2020-08-07 Thread Kazantcev Andrey
The problem in this code

lib.py
```
from json import dumps

def some_func():
    # do something
    res = dumps(...)
    # do something
```

If I patch dumps as you propose, lib doesn't see any change. Also, these are all 
hacks. I wish dump and load could themselves take parameters from somewhere 
else, and that this were the standard behaviour.
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/BEV7LQOEA2MP27XFX2FE5DZO35ZEI6DC/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Decorators for class non function properties

2020-08-07 Thread Steven D'Aprano
On Fri, Aug 07, 2020 at 12:22:28PM +1200, Greg Ewing wrote:
> On 7/08/20 2:47 am, David Mertz wrote:
> >The only difference is that in the usual existing style, 'a' doesn't 
> >know that it's called "a".  You and Steven have both, basically, said 
> >"Why would you possibly care about that?"
> 
> I've only really been thinking about attributes, but I suppose
> it might be useful for things in other contexts to know their
> names, so I guess my original proposal is not completely dead.
> But I don't have any real-world use case to put forward.

The classic example is namedtuple:

myrecord = namedtuple("myrecord", ...)

The three-argument form of type() also needs a name argument.

On the other hand, it is arguable that the status quo is more flexible, 
as you don't have to pass the same class name as the variable name you 
bind to.
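
For example, with current behaviour:

from collections import namedtuple

# The name passed in does not have to match the name the class is bound to:
Point = namedtuple("PointRecord", ["x", "y"])
print(Point.__name__)    # PointRecord

# Likewise for the three-argument form of type():
Alias = type("RealName", (), {})
print(Alias.__name__)    # RealName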

-- 
Steven
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/VYWJOSGUCCL5YSUZRFR4SEJAXVOO3F4X/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Changing item dunder method signatures to utilize positional arguments (open thread)

2020-08-07 Thread Steven D'Aprano
On Fri, Aug 07, 2020 at 05:54:18PM +1000, Steven D'Aprano wrote:

> This proposal doesn't say anything about reversing the decision made all 
> those years ago to bundle all positional arguments in a subscript into a 
> single positional parameter. What's done is done, that's not going to 
> change.

Sorry, I was referring to the proposal that inspired this thread, to add 
keyword arguments to subscripting. There's an actual concrete use-case 
for adding this, specifically for typing annotations, and I cannot help 
but feel that this thread is derailing the conversation to something 
that has not been requested by anyone actually affected by it.

I may have allowed my frustration to run ahead of me, sorry.

There is a tonne of code that relies on positional arguments in a 
subscript being bundled into a single parameter. Even if we agreed that 
this was suboptimal, and I don't because I don't know the rationale for 
doing it in the first place, I would be very surprised if the Steering 
Council gave the go-ahead to a major disruption and complication to the 
language just for the sake of making subscript dunders like other 
functions.

Things would be different if, say, numpy or pandas or other heavy users 
of subscripting said "we want the short term churn and pain for long 
term benefit".

But unless that happens, I feel this is just a case of piggy-backing a 
large, disruptive change of minimal benefit onto a small, focused 
change, which tends to ruin the chances of the small change. So please 
excuse my frustration, I will try to be less grumpy about it.


-- 
Steven
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/YV4IBLMGE7DI3IJJFOVJOB7MDK7ZPGEW/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Changing item dunder method signatures to utilize positional arguments (open thread)

2020-08-07 Thread Steven D'Aprano
On Tue, Aug 04, 2020 at 10:58:51AM -0400, Todd wrote:

> My main issue with this is that, in my opinion, dunders are not something a
> beginner should be messing with anyway.  By the time someone is experienced
> enough to start working on this, they are also experienced enough to
> understand that special cases like this exist for historical reasons.

Define "beginner".

I'm serious -- beginners to Python vary from eight-year-olds who have 
never programmed before, to people with fifty years of programming 
experience in a dozen different languages aside from Python.

I'm not going to teach newcomers to programming object-oriented 
techniques on the first day, but as soon as a programmer wants to create 
their own class, they will surely need to understand how to write 
dunders.


> > Another reason: it could make writing code for specialized libraries that
> > tend to abuse (for the good of us all!) item dunders, like pandas, much
> > easier. Right now such libraries have to rely on their own efforts to break
> > up a key:
> >
> > def __getitem__(self, key):
> >     try:
> >         k1, k2 = key
> >     except TypeError:
> >         raise TypeError("two tuple key required")
[...]
> But this is still a pretty simple piece of code.  Is it worth having
> everyone start over from scratch to avoid dealing with 4 lines of code?

This proposal doesn't say anything about reversing the decision made all 
those years ago to bundle all positional arguments in a subscript into a 
single positional parameter. What's done is done, that's not going to 
change.

Nobody has to start over from scratch. Nobody needs to change a single 
line of code unless they want to add support for keyword arguments to 
their class, and only some classes will do that. This proposal is 
completely 100% backwards compatible except that what was a SyntaxError 
turns into a TypeError:

obj[param=value]
TypeError: __getitem__ got an unexpected keyword argument 'param'

(or something like that).
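
For illustration only, a class opting in might look something like this; 
the dunder definition and the explicit call are legal today, and only the 
bracketed call with a keyword is what the proposal would add (the exact 
signature below is a sketch, not something the proposal specifies):

class Grid:
    def __init__(self, data):
        self.data = data

    def __getitem__(self, key, *, axis=0):
        return self.data[axis][key]

g = Grid([[10, 20, 30], [40, 50, 60]])

# Works today, spelling the dunder call out by hand:
print(g.__getitem__(1, axis=1))    # 50

# Under the proposal the same call could be written as:
# g[1, axis=1]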



-- 
Steven
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/COBN6JPXGUPMIZYOOKEC4T4L2UT6B2SD/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Propouse add context to json module.

2020-08-07 Thread Chris Angelico
On Fri, Aug 7, 2020 at 5:50 PM Kazantcev Andrey  wrote:
>
> Chris Angelico wrote:
>
> > Not gonna be 100% reliable and I don't think it belongs in the stdlib,
> > but might be useful.
>
> That is the problem. Sometimes libs import only methods.

I don't see that often; do you have a specific example?

Worst case, you could monkeypatch json.dumps with a version that
respects a set of defaults, prior to importing the library in
question.

import json
orig = json.dumps
def dumps(*a, **kw):
    return orig(*a, **{**json.defaults, **kw})
json.dumps = dumps
json.defaults = {}

Definitely hacky though (and you absolutely have to get your imports
in the right order) and I would much prefer not to do this. But it's
possible.
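
For what it's worth, with the patch above in place, a naive json.dumps() 
call then picks up whatever has been put into the invented json.defaults 
dict:

json.defaults = {"sort_keys": True}
print(json.dumps({"b": 1, "a": 2}))    # {"a": 2, "b": 1}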

ChrisA
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/EYLJQ3QMHB3XA6B4HJXRF4N7RHPVOAL2/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Propouse add context to json module.

2020-08-07 Thread Kazantcev Andrey
Chris Angelico wrote:

> Not gonna be 100% reliable and I don't think it belongs in the stdlib,
> but might be useful.

That is the problem. Sometimes libs import only the functions themselves (from 
json import dumps), and then patching json.dumps doesn't reach them.
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/OX42AD3UYPBOGSPLT5VHZPDR3DZZVRWD/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Propouse add context to json module.

2020-08-07 Thread Chris Angelico
On Fri, Aug 7, 2020 at 5:22 PM Kazantcev Andrey  wrote:
>
> Maybe use context as a context manager.
>
> For example
>
> ```
> with json.Context(ensure_ascii=False):
> json.dumps(...)
> ```
>
> Implementation can be done via contextlib.

If all you want is a way to parameterize naive calls to json.dumps(),
you could monkeypatch it.

import json
import contextlib

@contextlib.contextmanager
def monkeypatch_json(**defaults):
    orig = json.dumps
    try:
        def dumps(*a, **kw):
            return orig(*a, **{**defaults, **kw})
        json.dumps = dumps
        yield
    finally:
        json.dumps = orig

Not gonna be 100% reliable and I don't think it belongs in the stdlib,
but might be useful.
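
Usage would then look like the context-manager spelling asked for 
elsewhere in the thread:

with monkeypatch_json(ensure_ascii=False):
    print(json.dumps({"greeting": "привет"}))    # defaults applied here
print(json.dumps({"greeting": "привет"}))        # back to normal outside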

ChrisA
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/WDANEN56SJPWHAHZKEBI7B4CQSO4L6II/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Decorators for class non function properties

2020-08-07 Thread Steven D'Aprano
On Thu, Aug 06, 2020 at 04:01:47PM -, redrad...@gmail.com wrote:

> I see lots of use-cases for property decorators ...

We have had property decorators since Python 2.2, which was 18 
years ago. We know that there are many wonderful use-cases for things 
like this. What you are not explaining is why you need to write them 
with the syntax:

@function
name = value

rather than the already-available

name = function(value)



-- 
Steven
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/76FNJ7H56NVAQT77WWUYHAQLVQSIMVQZ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Decorators for class non function properties

2020-08-07 Thread Steven D'Aprano
On Thu, Aug 06, 2020 at 04:03:39PM -, redrad...@gmail.com wrote:
> No it is not possible to have something like this:

> ```python
> def function(cls):
> # Where is cls is Neuron class object
> pass
> 
> class Neuron:
> activation = function(Neuron)
> ```


Correct. And it isn't possible with decorator syntax either:


py> def decorator(cls):
...     print(cls)
...     def inner(func):
...         return func
...     return inner
...
...
py> class Neuron:
...     @decorator(Neuron)
...     def method(self):
...         pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in Neuron
NameError: name 'Neuron' is not defined


-- 
Steven
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/RRIABYLKDBBG7GBYYS5OL6M2WIIPRORM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Propouse add context to json module.

2020-08-07 Thread Kazantcev Andrey
Maybe use context as a context manager.

For example

```
with json.Context(ensure_ascii=False):
    json.dumps(...)
```

Implementation can be done via contextlib.
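
A minimal sketch of what that could look like, assuming the context 
manager stores its defaults in a contextvars variable and json.dumps is 
wrapped to consult it (none of these names exist in the json module 
today; this is only an illustration):

```
import contextlib
import contextvars
import functools
import json

_defaults = contextvars.ContextVar("json_defaults", default={})

@contextlib.contextmanager
def Context(**defaults):
    # Defaults apply only inside the with-block (in the current context).
    token = _defaults.set(defaults)
    try:
        yield
    finally:
        _defaults.reset(token)

_orig_dumps = json.dumps

@functools.wraps(_orig_dumps)
def dumps(obj, **kw):
    # Explicit keyword arguments still win over the context defaults.
    return _orig_dumps(obj, **{**_defaults.get(), **kw})

json.dumps = dumps
```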
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/XIGD3UXW3ZJR67NOBMQMYTVFK3PCYHHA/
Code of Conduct: http://python.org/psf/codeofconduct/