[Python-ideas] Re: Should Python enforce Type-checking in the future?

2021-12-09 Thread Caleb Donovick
First off, I love types.  They, as you point out, are a great tool for
finding bugs. However, there are reasons why requiring types would either
require the use of meaningless types (e.g. marking everything as the Any
type) or require Python to solve the halting problem.

Consider the following code example:

```
class ArrayMeta(type):



On Thu, Dec 9, 2021 at 12:52 PM deavid  wrote:

> Hi, I would like to hear the opinion of Python's community on enforcing
> types in the future for the language. I've been using Python as my main
> language for everything for around 10 years, until I started moving to Rust
> 2 years ago; one of the main factors was types.
>
> Just before moving to Rust I started to use mypy heavily, which I liked a
> lot and uncovered tons of potential problems. Now (2 years later), it seems
> the situation hasn't changed much; I might be wrong, so let me know what
> improvements you think landed in this area in the last 2-3 years.
>
> I feel it's possible this topic might cause a lot of passionate answers,
> but I just want to hear honest opinions on this.
>
> I firmly believe that Python's future would be better if types were
> enforced by default at "compile time" (whatever this means in Python), with
> an option/flag to disable this, and integrate MyPy or similar into the
> interpreter. I'm fully aware that a transition like this would be very hard
> and long, but I don't think it's impossible.
>
> Here's a list of my reasons to think that Python is better if it was typed:
>
> 1) On really big codebases and complex projects, it's very easy to lose
> track of what things do. Types help detecting bugs early. (Ask anyone that
> has used Rust + Clippy, the amount of errors that are catched is amazing,
> programs tend to work on the first try)
> 2) Libraries are currently the top bottleneck for any team to start using
> MyPy/Pytype. Making types mandatory would ensure all libraries have type
> support. (If anyone has any other proposal to enforce this, I would like to
> hear)
> 3) IDE integration is way simpler and better with types.
> 4) The interpreter could take further optimizations if it can prove that a
> function or piece of code is guaranteed to have a limited set of types.
> This could be used by libraries to have great speed ups that currently are
> not possible.
> 5) Static analysis tools could also benefit from types to gain more
> insight on what the code is trying to do.
>
> Of course, types have their own set of drawbacks; for example it could
> make Python look harder to code for newcomers, or it might get in the way
> for things like Jupyter notebooks, ML, and similar stuff. Because of this,
> an escape hatch must always exist. (maybe there are even more problems I am
> not aware about, I'd love to hear)
>
> If it were for me, I would like to have a Python 4 that is exactly a
> Python 3 but with mypy bundled and strictly enforced by default; with a
> flag to convert errors into warnings or disable entirely. Then every
> release, say a Py3.11, would also get a Py4.11-beta (the beta would be to
> avoid people migrating until it's ready).
>
> In this way, for a library to say it has Py4 compatibility it would need
> to be type-ready. Jupyter notebooks and such would be stuck at Py3, but of
> course, getting all the releases; and enterprises would be trying to use
> Py4 whenever it were ready.
>
> Typescript is also basically Javascript with types (well, not only that,
> but anyway) and the result is quite good. In this fashion, another
> alternative is having a second binary called TPython or MPython, and
> include it on the regular Python distribution; this would cause less push
> to go for types, but it could do the trick too.
>
> So well, my question here is: why is this not a thing? Has anyone proposed
> something like this before? I feel I must have missed something important.
>
> Thanks,
> David
>


[Python-ideas] Re: Should Python enforce Type-checking in the future?

2021-12-09 Thread Caleb Donovick
Oops, sorry, accidentally sent mid-typing:

```
class Array:
    def __class_getitem__(cls, size):
        # I know this usage is discouraged
        if hasattr(cls, 'size'):
            raise TypeError('cannot resize type')
        return type(cls)(f'Array[{size}]', (), {'size': size})

    def __new__(cls, *args, **kwargs):
        if not hasattr(cls, 'size'):
            raise TypeError('cannot instantiate unsized array')
        return super().__new__(cls, *args, **kwargs)

# The rest of the implementation is left
# as an exercise for the reader

def concat(a: Array, b: Array) -> Array:
    return Array[a.size + b.size](...)

def make_zeros(i: int) -> Array:
    return Array[i](0 for _ in range(i))

# Both of these functions are typed incorrectly:
# Array[i] is not a subtype of Array.
# Granted, we could change the construction of Array[i]
# to use bases=(cls,) instead of bases=().
```
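
For what it's worth, a minimal sketch of that last alternative (my addition,
not part of the original message): with `bases=(cls,)` the sized types become
subtypes of `Array`, so the annotations on `concat` and `make_zeros` at least
become sound.

```
class Array:
    def __class_getitem__(cls, size):
        if hasattr(cls, 'size'):
            raise TypeError('cannot resize type')
        # bases=(cls,) instead of bases=(): Array[n] is now a subtype of Array
        return type(f'Array[{size}]', (cls,), {'size': size})

assert issubclass(Array[4], Array)
assert Array[4].size == 4
```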

It is also possible to write a function that constructs an entirely new class:
```
from typing import Any

def make_new() -> Any:
    class A: pass
    return A()
```
There is no possible way to type it more specifically than returning
Any (or object).

Now, if libraries are going to be forced to just throw Any everywhere, what's
the point?

-- Caleb Donovick

On Thu, Dec 9, 2021 at 3:58 PM Caleb Donovick 
wrote:

> First off I love types.  They, as you point out, are a great tool for
> finding bugs. However, there are reasons why requiring types would either
> require the use of meaningless types e.g. marking everything as the Any
> type or require python to solve the halting problem.
>
> Consider the following code example:
>
> ```
> class ArrayMeta(type):
>
>
>
> On Thu, Dec 9, 2021 at 12:52 PM deavid  wrote:
>
>> Hi, I would like to hear the opinion of Python's community on enforcing
>> types in the future for the language. I've been using Python as my main
>> language for everything for around 10 years, until I started moving to Rust
>> 2 years ago; one of the main factors was types.
>>
>> Just before moving to Rust I started to use mypy heavily, which I liked a
>> lot and uncovered tons of potential problems. Now (2 years later), it seems
>> the situation hasn't changed much; I might be wrong, so let me know what
>> improvements you think landed in this area in the last 2-3 years.
>>
>> I feel it's possible this topic might cause a lot of passionate answers,
>> but I just want to hear honest opinions on this.
>>
>> I firmly believe that Python's future would be better if types were
>> enforced by default at "compile time" (whatever this means in Python), with
>> an option/flag to disable this, and integrate MyPy or similar into the
>> interpreter. I'm fully aware that a transition like this would be very hard
>> and long, but I don't think it's impossible.
>>
>> Here's a list of my reasons to think that Python is better if it was
>> typed:
>>
>> 1) On really big codebases and complex projects, it's very easy to lose
>> track of what things do. Types help detecting bugs early. (Ask anyone that
>> has used Rust + Clippy, the amount of errors that are catched is amazing,
>> programs tend to work on the first try)
>> 2) Libraries are currently the top bottleneck for any team to start using
>> MyPy/Pytype. Making types mandatory would ensure all libraries have type
>> support. (If anyone has any other proposal to enforce this, I would like to
>> hear)
>> 3) IDE integration is way simpler and better with types.
>> 4) The interpreter could take further optimizations if it can prove that
>> a function or piece of code is guaranteed to have a limited set of types.
>> This could be used by libraries to have great speed ups that currently are
>> not possible.
>> 5) Static analysis tools could also benefit from types to gain more
>> insight on what the code is trying to do.
>>
>> Of course, types have their own set of drawbacks; for example it could
>> make Python look harder to code for newcomers, or it might get in the way
>> for things like Jupyter notebooks, ML, and similar stuff. Because of this,
>> an escape hatch must always exist. (maybe there are even more problems I am
>> not aware about, I'd love to hear)
>>
>> If it were for me, I would like to have a Python 4 that is exactly a
>> Python 3 but with mypy bundled and strictly enforced by default; with a
>> flag to convert errors into warnings or disable entirely. Then every
>> release, say a Py3.11, would also get a Py4.11-beta (the beta would be to
>> avoid people migrating until it's ready).
>>
>> In this way, for a library to say it has Py4 compatibility it would need
>> to be type-ready. Jupyter notebooks and 

[Python-ideas] Re: Power Assertions: Is it PEP-able?

2021-10-04 Thread Caleb Donovick
>  2) Some OTHER exception occurs on the reevaluation. It's a chained
> exception like any other.

Except it's not a chained exception, and displaying it as such would be VERY
confusing IMO. Granted, we could easily strip the chained exception and just
return the original one, so after reconsideration I agree this is not an issue.

>  It's only really the fourth case that would be confusing

I generally agree with your analysis, but I think this fourth case is more
problematic than you think. Given no information, I am immediately going to
split my assertion so I can see what part is failing. However, if the
interpreter gives me incorrect information, I am going to be super confused.
Most people will not have carefully read section 7.3 of the language
reference and will not understand this critical aspect of the execution of
assertion statements. They will assume that the interpreter is not lying to
them.

I think storing the intermediate results on the stack is vastly preferable
to re-evaluation for this reason.
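
To make that concrete, here is the manual version of what I mean (my own
illustration, borrowing the os.path example from later in this thread):
splitting the assertion captures each clause's value at the moment of
evaluation, which is exactly what keeping the intermediate results on the
stack would give us for free.

```
import os

is_dir = os.path.isdir("config")
has_setup = os.path.isfile("config/setup.yml")
# the values are captured once, at evaluation time, so the report cannot be
# skewed by the file system changing before a second evaluation
assert is_dir and has_setup, f"isdir={is_dir}, isfile={has_setup}"
```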

On Mon, Oct 4, 2021 at 3:20 PM Chris Angelico  wrote:

> On Tue, Oct 5, 2021 at 9:02 AM Caleb Donovick 
> wrote:
> >
> > > I wonder, could this be simplified a bit, on the assumption that a
> > > well-written assertion shouldn't have a problem with being executed
> > > twice?
> >
> > While I agree as an engineering principle an assert should not have side
> effects
> > and hence re-evaluation should be fine in most cases, it is not
> universal. It is possible
> > for assertions to not have side effects but yet change value between
> evaluations if they
> > interact with a shared resource such as the file system..
> >
> > For example consider the following assertion:
> >
> > assert  os.path.isdir("config") and os.path.isfile("config/setup.yml")
> >
> > It is completely possible for the value of this expression to change
> between evaluations. Granted this would
> > like mean their is some more significant issue with my code, however, I
> would like the interpreter to give me
> > accurate information about why my assertion failed. Bad information is
> worse than no information. Like imagine
> > that on the first evaluation the directory config does not exist but on
> the second it has been created by another process.
> > A naive revaluation strategy would likely result in it pointing at the
> second clause and saying the assertion failed their
> > when it really failed on the first clause. This would send me down a
> rabbit hole of debugging why setup.yml was not
> > constructed properly instead of debugging why the config directory
> didn’t exist.
>
> That seems like an abuse of assertions. If you have assertions that
> depend on external state that can change that quickly, then the
> assertion is *already useless*. What do you gain by asserting
> something that might have changed by the next line of code?
>
> > Further while it is bad engineering practices to have side effects in an
> assert it is completely possible.
> > For example consider the following pathological example:
> >
> > class PathologicalFoo:
> > def __init__(self):
> > self._val = 0
> >
> > def get(self):
> > old_val = self._val
> > self._val = 1
> > return old_val
> >
> > foo = PathologicalFoo()
> > assert foo.get() == 1
> >
>
> Yes, side effects in assertions are always possible. If someone has
> assertions with side effects, do we say that python -O is buggy, or
> the assertion is buggy? In a world in which assertions might and might
> not be evaluated, is it such a stretch to demand that they can be
> safely reevaluated (in the same context)? Yes, it's a change to the
> expectations, but one which well-designed assertions shouldn't be
> bothered by.
>
> My imagining of this is that it'd be handled when an AssertionError
> reaches top level, and it'd be broadly thus:
>
> try:
> all_your_code()
> except AssertionError as e:
> ... reevaluate etc
>
> Meaning there are four possibilities:
> 1) The assertion is consistent, and the extra info is absolutely correct
> 2) Some OTHER exception occurs on the reevaluation. It's a chained
> exception like any other.
> 3) No assertion failure happens (eg PathologicalFoo). Might require a
> minor special case "if nothing goes wrong, print out the original" but
> that's the most obvious thing to do.
> 4) The assertion fails in an inconsistent way, but it still fails.
> You'll get the second form instead of the first.
>
> It's only really the fourth case that would be confusing, and only if
> the first evaluation actually causes the problem

[Python-ideas] Re: Power Assertions: Is it PEP-able?

2021-10-04 Thread Caleb Donovick
> I wonder, could this be simplified a bit, on the assumption that a
> well-written assertion shouldn't have a problem with being executed
> twice?

While I agree that, as an engineering principle, an assert should not have
side effects, and hence that re-evaluation should be fine in most cases, it
is not universal. It is possible for an assertion to have no side effects and
yet change value between evaluations if it interacts with a shared resource
such as the file system.

For example, consider the following assertion:

assert os.path.isdir("config") and os.path.isfile("config/setup.yml")

It is completely possible for the value of this expression to change between
evaluations. Granted, this would likely mean there is some more significant
issue with my code; however, I would like the interpreter to give me accurate
information about why my assertion failed. Bad information is worse than no
information. Imagine that on the first evaluation the directory config does
not exist, but on the second it has been created by another process. A naive
re-evaluation strategy would likely result in the interpreter pointing at the
second clause and saying the assertion failed there, when it really failed on
the first clause. This would send me down a rabbit hole of debugging why
setup.yml was not constructed properly instead of debugging why the config
directory didn't exist.

Further, while it is bad engineering practice to have side effects in an
assert, it is completely possible.
For example, consider the following pathological example:

class PathologicalFoo:
    def __init__(self):
        self._val = 0

    def get(self):
        old_val = self._val
        self._val = 1
        return old_val

foo = PathologicalFoo()
assert foo.get() == 1
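# (fails on the first evaluation, since get() returns 0; a naive
# re-evaluation would see get() return 1 and conclude nothing went wrong)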

Or worse, it is possible for re-evaluation to cause errors:

class UniquePtr:
    def __init__(self, obj):
        self.set(obj)

    def get(self):
        if self._valid:
            self._valid = False
            obj = self._obj
            self._obj = None
            return obj
        else:
            raise ValueError()

    def set(self, obj):
        self._obj = obj
        self._valid = True

x = UniquePtr(1)
assert x.get() == 0
x.set(0)

How would the interpreter handle this?
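
To illustrate (a hypothetical sketch, not an actual CPython mechanism):
suppose the handler simply re-evaluated the failed expression once the
AssertionError surfaced.

```
x = UniquePtr(1)
expr = "x.get() == 0"
try:
    assert eval(expr), expr      # first evaluation: fails, get() returns 1
except AssertionError:
    try:
        eval(expr)               # naive re-evaluation for diagnostics
    except ValueError as exc:
        # the diagnostic run raises an unrelated ValueError instead of
        # reproducing or explaining the original assertion failure
        print(f"re-evaluation raised {type(exc).__name__}")
```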

On Sun, Sep 12, 2021 at 11:39 AM Chris Angelico  wrote:

> On Mon, Sep 13, 2021 at 1:37 AM Serhiy Storchaka 
> wrote:
> >
> > 12.09.21 17:28, Guido van Rossum пише:
> > > This is cool.
> > >
> > > AFAIK pytest does something like this. How does your implementation
> differ?
> >
> > What pytest does is awesome. I though about implementing it in the
> > standard compiler since seen it the first time.
> >
> > > What is your argument for making this part of the language? Why not a
> > > 3rd party library?
> >
> > It needs a support in the compiler. The condition expression should be
> > compiled to keep all immediate results of subexpressions on the stack.
> > If the final result is true, immediate results are dropped. If it is
> > false, the second argument of assert is evaluated and its value together
> > with all immediate results of the first expression, together with
> > references to corresponding subexpressions (as strings, ranges or AST
> > nodes) are passed to the special handler. That handler can be
> > implemented in a third-party library, because formatting and outputting
> > a report is a complex task. The default handler can just raise an
> > AttributeError.
> >
>
> I wonder, could this be simplified a bit, on the assumption that a
> well-written assertion shouldn't have a problem with being executed
> twice? Instead of keeping all the subexpressions around (a run-time
> cost), keep the AST of the expression itself (a compile-time cost).
> Then, when the exception is about to be printed to the console,
> re-evaluate it and do the display.
>
> ChrisA


[Python-ideas] Re: String comprehension

2021-05-03 Thread Caleb Donovick
For the record, I am definitely a -1 on this.  The arguments against are
overwhelming and the arguments for are pretty weak.  However, I felt the
need to rebut:

>  Tests don't really count, so there's a small handful here.

Tests 100% count as real use cases.  If this is a pattern that would be
useful in test-case generation then we should be discussing that.  I have
worked on plenty of projects which were almost exclusively documented
through tests.  Being able to read and write tests fluently is as important
as it is for any other piece of code.

On Sun, May 2, 2021 at 8:41 PM Chris Angelico  wrote:

> On Mon, May 3, 2021 at 1:00 PM David Álvarez Lombardi
>  wrote:
> > > Rather than toy examples, how about scouring the Python standard
> library for some real examples?
> >
> > Here are 73 of them that I found by grepping through Lib.
> >
> >
> https://github.com/python/cpython/blob/master/Lib/email/_encoded_words.py#L90
> >
> https://github.com/python/cpython/blob/master/Lib/email/_header_value_parser.py#L126
> >
> https://github.com/python/cpython/blob/master/Lib/email/_header_value_parser.py#L134
> >
> https://github.com/python/cpython/blob/master/Lib/email/_header_value_parser.py#L256
> >
> https://github.com/python/cpython/blob/master/Lib/email/_header_value_parser.py#L260
> >
> https://github.com/python/cpython/blob/master/Lib/email/_header_value_parser.py#L283
> >
> https://github.com/python/cpython/blob/master/Lib/lib2to3/fixes/fix_import.py#L29
> >
> https://github.com/python/cpython/blob/master/Lib/lib2to3/fixes/fix_next.py#L69
> >
> https://github.com/python/cpython/blob/master/Lib/lib2to3/refactor.py#L235
> >
> https://github.com/python/cpython/blob/master/Lib/msilib/__init__.py#L178
> >
> https://github.com/python/cpython/blob/master/Lib/msilib/__init__.py#L290
> >
> https://github.com/python/cpython/blob/master/Lib/test/_test_multiprocessing.py#L4680
> >
> https://github.com/python/cpython/blob/master/Lib/test/multibytecodec_support.py#L309
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_audioop.py#L6
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_buffer.py#L853
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_code_module.py#L123
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_code_module.py#L139
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_codeccallbacks.py#L515
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_codeccallbacks.py#L521
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_codeccallbacks.py#L824
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_codecs.py#L149
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_codecs.py#L1544
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_codecs.py#L1548
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_codecs.py#L1552
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_codecs.py#L1556
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_codecs.py#L1953
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_codecs.py#L1991
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_decimal.py#L1092
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_decimal.py#L5346
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_email/test_email.py#L3526
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_email/test_email.py#L3535
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_fileinput.py#L91
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_fileinput.py#L92
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_fileinput.py#L93
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_fileinput.py#L94
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_gettext.py#L360
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_gettext.py#L366
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_gettext.py#L372
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_gettext.py#L378
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_gettext.py#L384
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_gettext.py#L391
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_gettext.py#L397
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_gettext.py#L403
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_gettext.py#L409
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_gettext.py#L415
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_gettext.py#L421
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_gettext.py#L427
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_gettext.py#L433
> >
> https://github.com/python/cpython/blob/master/Lib/test/test_gettext.py#L455
> >
> 

[Python-ideas] Re: Allow syntax "func(arg=x if condition)"

2021-03-22 Thread Caleb Donovick
I've never needed this for lists but have definitely had the pain for kwargs.
Seems very reasonable for that use case, +0.5.

In libraries I control I can make sure to use the same default values for
functions and their wrappers. However, when wrapping functions I don't
control there is not a great way to do this, and I end up incrementally
building up a kwargs dict (see the sketch below). I suppose the same thing
could occur with *args lists, so it makes sense for both positional and
keyword arguments.
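
For concreteness, a rough sketch of the incremental-kwargs pattern I mean
(the sentinel-based wrapper below is my own illustration, not part of the
proposal):

```
def fun(a, b=0): ...

_UNSET = object()  # sentinel to distinguish "not passed" from any real value

def wraps_fun(a, b=_UNSET):
    kwargs = {}
    if b is not _UNSET:
        kwargs['b'] = b          # only forward b when the caller supplied it
    return fun(a, **kwargs)      # otherwise fun falls back to its own default
```

With the proposed syntax this dict-building would collapse into the call
itself, something like `fun(a, b=b if b is not _UNSET)`.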

Yes, one could do something like:
```
import inspect

def fun(a, b=0): ...
def wraps_fun(*args, b=inspect.signature(fun).parameters['b'].default): ...
```
But I would hardly call that clear.  Further, it is not robust, as it would
fail if `fun` is itself wrapped in a way that destroys its signature, e.g.:
```
def destroy_signature(f):
    # should decorate here with functools.wraps(f)
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper
```
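
For contrast, a small sketch (my addition) of the `functools.wraps`
decoration that the comment above alludes to, which is what keeps
`inspect.signature` working on the wrapper:

```
import functools
import inspect

def keep_signature(f):
    @functools.wraps(f)          # sets __wrapped__, which inspect.signature follows
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

def fun(a, b=0): ...

assert inspect.signature(keep_signature(fun)).parameters['b'].default == 0
# destroy_signature(fun) would instead report (*args, **kwargs), with no 'b' at all
```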

Caleb


[Python-ideas] Re: shouldn't slices be iterable ?

2021-03-18 Thread Caleb Donovick
Or, perhaps more problematically: what happens if only the stride is specified?
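
To make the problem concrete (my example, not from the original post): a
stride-only slice carries no endpoints, so there is nothing to hand to
range-style iteration.

```
s = slice(None, None, 2)        # what seq[::2] passes to __getitem__
print(s.start, s.stop, s.step)  # None None 2
# the "obvious" translation blows up, because start and stop are None:
# range(s.start, s.stop, s.step)  ->  TypeError
```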

On Thu, Mar 18, 2021 at 6:09 PM Chris Angelico  wrote:

> On Fri, Mar 19, 2021 at 10:46 AM Cameron Simpson  wrote:
> >
> > I know that range(start,end,stride) will produce what I'd want from
> > iter(slice(start,end,stride)), but wouldn't it be reasonable for a slice
> > itself to be iterable?
> >
> > Yes, only one obvious way and all that, but inside eg __getitem__ it
> > seems to me that:
> >
> > if isinstance(index, slice):
> > for i in index:
> > ... do stuff with i ...
> >
> > is the obvious thing to do.
> >
>
> What if the start is positive and the end is negative? What values should
> i get?
>
> ChrisA


[Python-ideas] Re: Add an __exclude_all__ complement to __all__

2021-03-05 Thread Caleb Donovick
> __all__ = (Class.__name__, func.__name__, ...)
>
> So I have to put it at the end of the module. I do this because if I
> change the class or function name and I forget to change it in
> __all__, I get an exception.

I certainly will not claim to be the arbiter of good and bad practices,
but that seems reasonable enough.

It is worth pointing out that it's pretty easy to unit test `__all__`:

```
# module/__init__.py
__all__ = 'Foo', 'Bar'

class Foo: pass
```

```
# tests/test_imports.py
# (star imports are only allowed at module level)
from module import *  # raises `AttributeError: module 'module' has no attribute 'Bar'`

def test_star():
    pass  # importing this test module is itself the test
```

> from .a import *
> from .b import *

> __all__ = a.__all__ + b.__all__

Assuming you mean:
```
from . import a
from . import b
from .a import *
from .b import *
__all__ = a.__all__ + b.__all__
```
It is fairly unnecessary if you aren't adding more names, but it definitely
isn't harmful.


On Fri, Mar 5, 2021 at 12:11 PM Marco Sulla 
wrote:

> On Wed, 3 Mar 2021 at 23:59, Brendan Barnwell 
> wrote:
> >  But usually you want to define it at the beginning as a sort of
> > documentation aid ("this is the public API").
>
> This is a little off-topic, but I'm curious, since usually, for public
> functions and classes, I do
>
> __all__ = (Class.__name__, func.__name__, ...)
>
> So I have to put it at the end of the module. I do this because if I
> change the class or function name and I forget to change it in
> __all__, I get an exception.
>
> Furthermore, if there's a module composed by submodules, I usually do
>
> from .a import *
> from .b import *
>
> __all__ = a.__all__ + b.__all__
>
> In your opinion, these are good or bad practices?


[Python-ideas] Re: Alternate lambda syntax

2021-02-23 Thread Caleb Donovick
I was +0.5 on the arrow syntax for `Callable`.  It seemed like a nice
shorthand, but I understood the arguments against it in the vein of "There
should be one-- and preferably only one --obvious way to do it."

But

>  The latter is the same as the former, just in the AST form. That's what
> we ask people to do with type annotations currently - write them in the
> AST form.

Absolutely convinced me.
+1

 - Caleb

On Fri, Feb 19, 2021 at 8:45 AM Paul Sokolovsky  wrote:

> Hello,
>
> On Sat, 20 Feb 2021 00:23:13 +0900
> "Stephen J. Turnbull"  wrote:
>
> > Abdulla Al Kathiri writes:
> >
> > Condensing to the parts which are in question,
> >
> >  > def test(self, func: t.Callable[..., bool],   *args, **kwargs) ->
> >  > Predicate: return self._build_predicate(
> >  >lambda lhs, value: func(lhs, *args, **kwargs),
> >  > Operation.TEST,
> >  > (self._path, func, args, freeze(kwargs))
> >  > )
> >
> >  > def test(self, func: (...) -> bool, *args, **kwargs) -> Predicate:
> >  > return self._build_predicate(
> >  > (lhs, value) => func(lhs, *args, **kwargs),
> >  > Operation.TEST,
> >  > (self._path, func, args, freeze(kwargs))
> >  > )
> >
> > Yes, it's nicer, but I don't see a win big enough to be worth forcing
> > people who read code to learn two syntaxes for lambda, and two
> > syntaxes for Callable (one of which isn't even syntax).
>
> People won't learn "two syntaxes for Callable". People shun using
> Python's current type annotations due to their ugliness and "quick hack
> without thinking of UX" feel. Only after there will be "int | str"
> instead of "Union[int, str]", "(int, str) -> int" instead of
> "Callable[[int, str], int]", "int & const" instead of "Annotated[int,
> const]" - only then people will start learn and use them.
>
> > Also, "->"
> > can't be just syntax, if I understand type annotations correctly.  It
> > would need to become an object constructor.
>
> It depends on how "->" in that role will be implemented. It would be
> nice to not just hardcode it to "Callable[lhs, rhs]", but at the same
> time, I'd personally hope we'll avoid yet another dunder either.
>
> >  Then the question would
> > be "are there cases where 'Callable' is not what you want there?",
> > i.e., you want a subclass of Callable.
>
> There can't be subclass of Callable in the same way as there can't be:
>
> class my_foo(type(lambda:0)):
> pass
>
> > In that case you'd have to use
> > the old syntax anyway.  (I don't have an answer to that, but you would
> > need one.)
> >
> > I don't make the rules, but to me if this is the best you can do, you
> > would have to provide evidence that quite a lot of code would benefit
> > from this.
>
> The difference between "(int, str) -> int" and "Callable[[int, str],
> int]" is the same as difference __str__ and __repr__. "Callable" syntax
> is effectively an internal representation of the type annotation
> information. And currently, the people have to write that verbose,
> unwieldy, ugly representation (which normally would be needed only for
> debugging purposes) manually. What we need is pleasant surface syntax
> for type annotations in Python.
>
> So no, __str__ vs __repr__ comparison doesn't do fairness to it. The
> difference is the same as between:
>
> ---
> def foo(a):
> print("hello")
> ---
>
> and
>
> ---
> Module(
> body=[
> FunctionDef(
> name='foo',
> args=arguments(
> posonlyargs=[],
> args=[
> arg(arg='a')],
> kwonlyargs=[],
> kw_defaults=[],
> defaults=[]),
> body=[
> Expr(
> value=Call(
> func=Name(id='print', ctx=Load()),
> args=[
> Constant(value='hello')],
> keywords=[]))],
> decorator_list=[])],
> type_ignores=[])
> ---
>
> The latter is the same as the former, just in the AST form. That's what
> we ask people to do with type annotations currently - write them in the
> AST form.
>
>
> >
> > Steve
>
>
> --
> Best regards,
>  Paul  mailto:pmis...@gmail.com


[Python-ideas] Re: Method to efficiently advance iterators for sequences that support random access

2020-10-07 Thread Caleb Donovick
> This is BARELY more plausible as a real-world case.  Throwing away 28
bytes with a bunch of next() calls is completely trivial in time.  A case
where some implementation could conceivably save measurable time would
require skipping 100s of thousands or millions of next() calls... and
probably calls that actually took some work to compute to matter, even
there.

Seeing as an IP packet could in theory be as large as 64K,
`stream.advance(total_length - 32 - len(header))` could be skipping ~65,000
next calls. (Although in practice packets are very unlikely to exceed 1,500
bytes because of the Ethernet standard.)  In any event, avoiding ~1,500 next
calls per packet is hardly insignificant if you want to process more than a
handful of packets.

Now, a better rejection of my example is that this sort of code does not
belong in Python and should be in a systems language, an argument I would
agree with.


-- Caleb Donovick


On Wed, Oct 7, 2020 at 3:38 PM David Mertz  wrote:

> On Wed, Oct 7, 2020 at 6:24 PM Caleb Donovick 
> wrote:
>
>> Itertools.count was an example (hence the use of "e.g.") of an iterator
>> which can be efficiently
>> advanced without producing intermediate state. Clearly anyone can advance
>> it manually.
>> My point is that an iterator may have an efficient way to calculate its
>> state some point in the future
>> without needing to calculate the intermediate state.
>>
>
> Yes, this is technically an example.  But this doesn't get us any closer
> to a real-world use case.  If you want an iterator than counts from N, the
> spelling `count(N)` exists now.  If you want to starting counting N
> elements later than wherever you are now, I guess do:
>
> new_count = counter(next(old_cound) + N)
>
> For example the fibonacci sequence has a closed
>> form formula for the nth element and hence could be advanced efficiently.
>>
>
> Sure.  And even more relevantly, if you want the Nth Fibonacci you can
> write a closed-form function `nth_fib()` to get it.  This is toy examples
> where *theoretically* a new magic method could be used, but it's not close
> to a use case that would motivate changing the language.
>
> ```
>> def get_tcp_headers(stream: Iterator[Byte]):
>> while stream:
>> # Move to the total length field of the IP header
>> stream.advance(2)
>> # record the total length (two bytes)
>> total_length = ...
>> # skip the rest of IP header
>> stream.advance(28)
>> # record the TCP header
>> header = ...
>> yield header
>> stream.advance(total_length - 32 - len(header))
>> ```
>>
>
> This is BARELY more plausible as a real-world case.  Throwing away 28
> bytes with a bunch of next() calls is completely trivial in time.  A case
> where some implementation could conceivably save measurable time would
> require skipping 100s of thousands or millions of next() calls... and
> probably calls that actually took some work to compute to matter, even
> there.
>
> What you'd need to motivate the new API is a case where you might skip a
> million items in an iterator, and yet the million-and-first item is
> computable without computing all the others.  Ideally something where each
> of those million calls does something more than just copy a byte from a
> kernel buffer.
>
> I don't know that such a use case does not exist, but nothing comes to my
> mind, and no one has suggested one in this thread.  Otherwise,
> itertools.islice() completely covers the situation already.
>
> --
> The dead increasingly dominate and strangle both the living and the
> not-yet born.  Vampiric capital and undead corporate persons abuse
> the lives and control the thoughts of homo faber. Ideas, once born,
> become abortifacients against new conceptions.
>


[Python-ideas] Re: Method to efficiently advance iterators for sequences that support random access

2020-10-07 Thread Caleb Donovick
Addendum to my example:

If my get_tcp_headers generator were made a class, it would also be possible
for it to support the `__advance__` protocol:
```
class TCPHeaderIter(Iterator[TCPHeader]):
    def __init__(self, stream: Iterator[Byte]):
        self.stream = stream

    def __next__(self) -> TCPHeader:
        # similar to the body of the while loop
        ...

    def __advance__(self, n):
        for _ in range(n):
            self.stream.advance(2)
            total_length = ...
            self.stream.advance(total_length - 4)
```
Now, I don't have a use case for `__advance__` on a TCP header iterator, but
one might want to sample every Nth header.



On Wed, Oct 7, 2020 at 3:06 PM Caleb Donovick 
wrote:

> > For `__advance__` to be an official Python protocol, it would almost
> > certainly have to be of use for *general purpose iterators*, not just
> > specialised ones -- and probably not *hypothetical* iterators which may
> > not even exist. Do you have concrete examples of your skip list and tree
> > iterators that are in wide-spread use?
>
> "I am +0.3 on this as I don't personally have a need"
> " (I'm not sure why you would ever want to `__advance__` a tree iterator)"
>
> > What's your use case for advancing a count object, rather than just
> > creating a new one?
>
> Itertools.count was an example (hence the use of "e.g.") of an iterator
> which can be efficiently
> advanced without producing intermediate state. Clearly anyone can advance
> it manually.
> My point is that an iterator may have an efficient way to calculate its
> state some point in the future
> without needing to calculate the intermediate state.  For example the
> fibonacci sequence has a closed
> form formula for the nth element and hence could be advanced efficiently.
>
> I realize my original examples were contrived, but I have a better one.
> Consider the task of collecting TCP headers from an iterator of bytes:
>
> ```
> def get_tcp_headers(stream: Iterator[Byte]):
> while stream:
> # Move to the total length field of the IP header
> stream.advance(2)
> # record the total length (two bytes)
> total_length = ...
> # skip the rest of IP header
> stream.advance(28)
> # record the TCP header
> header = ...
> yield header
> stream.advance(total_length - 32 - len(header))
> ```
>
> Maybe Kevin can tell us what motivated him to post this idea but I can see
> many places in parsing where you might want
> to skip arbitrary portions of a stream.  The beauty of iterators is you
> don't need to be concerned with the underlying data
>  structure. Ideally I shouldn't need to write two versions of some parse
> function one which operates on sequences and one
> that operates on iterables, just so I can efficiently `advance` the
> sequences.
>
> -- Caleb Donovick
>
> On Tue, Oct 6, 2020 at 6:16 PM Steven D'Aprano 
> wrote:
>
>> On Tue, Oct 06, 2020 at 02:27:54PM -0700, Caleb Donovick wrote:
>> > I am +0.3 on this as I don't personally have a need for this but do see
>> the
>> > utility.
>> >
>> > I can think of a number of examples where an `__advance__` would be
>> > preferable to any of the proposed solutions:
>> [...]
>>
>> For `__advance__` to be an official Python protocol, it would almost
>> certainly have to be of use for *general purpose iterators*, not just
>> specialised ones -- and probably not *hypothetical* iterators which may
>> not even exist. Do you have concrete examples of your skip list and tree
>> iterators that are in wide-spread use?
>>
>> Specialised iterators can create whatever extra APIs they want to
>> support, but the official iterator protocol intentionally has a very
>> basic API:
>>
>> - anything with an `__iter__` method which returns itself;
>> - and a `__next__` method that returns the next value, raising
>>   StopIteration when exhausted.
>>
>> This is a bare minimum needed to make an iterator, and we like it that
>> way. For starters, it means that generators are iterators.
>>
>> If people want to supply objects that support the iterator protocol
>> but also offer a rich API including:
>>
>> - peek
>> - restart
>> - previous
>> - jump ahead (advance)
>>
>> all features that have been proposed, there is nothing stopping you from
>> adding those features to your iterator classes. But they all have
>> problems if considered to be necessary for *all* iterators.
>>
>> I would expect that, given a sufficiently compelling real-world
>> use-case, we would be prepar

[Python-ideas] Re: Method to efficiently advance iterators for sequences that support random access

2020-10-07 Thread Caleb Donovick
> For `__advance__` to be an official Python protocol, it would almost
> certainly have to be of use for *general purpose iterators*, not just
> specialised ones -- and probably not *hypothetical* iterators which may
> not even exist. Do you have concrete examples of your skip list and tree
> iterators that are in wide-spread use?

"I am +0.3 on this as I don't personally have a need"
" (I'm not sure why you would ever want to `__advance__` a tree iterator)"

> What's your use case for advancing a count object, rather than just
> creating a new one?

itertools.count was an example (hence the use of "e.g.") of an iterator
which can be efficiently advanced without producing intermediate state.
Clearly anyone can advance it manually. My point is that an iterator may
have an efficient way to calculate its state at some point in the future
without needing to calculate the intermediate state.  For example, the
Fibonacci sequence has a closed-form formula for the nth element and hence
could be advanced efficiently.
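
A rough sketch of what I mean (my own illustration, assuming the
hypothetical `__advance__` name from this thread, and using the
fast-doubling identities rather than the closed form so the arithmetic
stays exact):

```
class Fib:
    def __init__(self):
        self.a, self.b = 0, 1          # F(k), F(k+1), starting at k = 0

    def __iter__(self):
        return self

    def __next__(self):
        value = self.a
        self.a, self.b = self.b, self.a + self.b
        return value

    def __advance__(self, n):
        fn, fn1 = self._fib_pair(n)    # F(n), F(n+1)
        # addition formulas give F(k+n), F(k+n+1) from F(k), F(k+1), F(n), F(n+1)
        self.a, self.b = (self.a * fn1 + (self.b - self.a) * fn,
                          self.b * fn1 + self.a * fn)

    @staticmethod
    def _fib_pair(m):
        # fast doubling: returns (F(m), F(m+1)) in O(log m) steps
        if m == 0:
            return 0, 1
        x, y = Fib._fib_pair(m // 2)
        c = x * (2 * y - x)
        d = x * x + y * y
        return (d, c + d) if m % 2 else (c, d)

fib = Fib()
next(fib); next(fib)       # consume F(0), F(1)
fib.__advance__(8)         # jump ahead 8 positions without yielding them
assert next(fib) == 55     # F(10)
```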

I realize my original examples were contrived, but I have a better one.
Consider the task of collecting TCP headers from an iterator of bytes:

```
def get_tcp_headers(stream: Iterator[Byte]):
    while stream:
        # Move to the total length field of the IP header
        stream.advance(2)
        # record the total length (two bytes)
        total_length = ...
        # skip the rest of IP header
        stream.advance(28)
        # record the TCP header
        header = ...
        yield header
        stream.advance(total_length - 32 - len(header))
```

Maybe Kevin can tell us what motivated him to post this idea, but I can see
many places in parsing where you might want to skip arbitrary portions of a
stream.  The beauty of iterators is that you don't need to be concerned with
the underlying data structure. Ideally I shouldn't need to write two versions
of some parse function, one which operates on sequences and one which
operates on iterables, just so I can efficiently `advance` the sequences.
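
Roughly the generic helper I would like to write once (again assuming the
hypothetical `__advance__` protocol; the fallback is the standard "consume"
recipe from the itertools docs):

```
from collections import deque
from itertools import islice

def advance(it, n):
    if hasattr(it, '__advance__'):
        it.__advance__(n)                # let the iterator jump if it can
    else:
        deque(islice(it, n), maxlen=0)   # status quo: burn n next() calls

it = iter(range(100))
advance(it, 10)
assert next(it) == 10
```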

-- Caleb Donovick

On Tue, Oct 6, 2020 at 6:16 PM Steven D'Aprano  wrote:

> On Tue, Oct 06, 2020 at 02:27:54PM -0700, Caleb Donovick wrote:
> > I am +0.3 on this as I don't personally have a need for this but do see
> the
> > utility.
> >
> > I can think of a number of examples where an `__advance__` would be
> > preferable to any of the proposed solutions:
> [...]
>
> For `__advance__` to be an official Python protocol, it would almost
> certainly have to be of use for *general purpose iterators*, not just
> specialised ones -- and probably not *hypothetical* iterators which may
> not even exist. Do you have concrete examples of your skip list and tree
> iterators that are in wide-spread use?
>
> Specialised iterators can create whatever extra APIs they want to
> support, but the official iterator protocol intentionally has a very
> basic API:
>
> - anything with an `__iter__` method which returns itself;
> - and a `__next__` method that returns the next value, raising
>   StopIteration when exhausted.
>
> This is a bare minimum needed to make an iterator, and we like it that
> way. For starters, it means that generators are iterators.
>
> If people want to supply objects that support the iterator protocol
> but also offer a rich API including:
>
> - peek
> - restart
> - previous
> - jump ahead (advance)
>
> all features that have been proposed, there is nothing stopping you from
> adding those features to your iterator classes. But they all have
> problems if considered to be necessary for *all* iterators.
>
> I would expect that, given a sufficiently compelling real-world
> use-case, we would be prepared to add a jump ahead method to
> list-iterators, as a specific feature of that iterator, not of all
> iterators.
>
>
> > A skip list which doesn't support O(1) random access but can advance
> faster
> > than naively calling next repeatedly
> > A lazy infinite iterator which can efficiently calculate its state at
> some
> > future point  (e.g. `itertools.count`)
>
> What's your use case for advancing a count object, rather than just
> creating a new one?
>
> it = itertools.count()  # start at 0
> process(it)  # process some values
> it.advance(state)  # jump forward
> process(it)  # process some more values
>
> as opposed to what is already possible:
>
> it = itertools.count()
> process(it)
> it = itertools.count(state)
> process(it)
>
> Real-world use-cases for this feature are far more useful than contrived
> and artifical use-cases unlikely to ever occur in real code.
>
>
> > My ladder two examples demonstrate that this could have utility outside
> of
> > sequences 

[Python-ideas] Re: Method to efficiently advance iterators for sequences that support random access

2020-10-06 Thread Caleb Donovick
I am +0.3 on this, as I don't personally have a need for this but do see the
utility.

I can think of a number of examples where an `__advance__` would be
preferable to any of the proposed solutions:

- A skip list which doesn't support O(1) random access but can advance
  faster than naively calling next repeatedly.
- A lazy infinite iterator which can efficiently calculate its state at some
  future point (e.g. `itertools.count`).
- A tree iterator, which could perform an efficient `__advance__` by checking
  the size of subtrees before descending into them (I'm not sure why you
  would ever want to `__advance__` a tree iterator).

My latter two examples demonstrate that this could have utility not just for
sequences but for iterators in general.

-- Caleb Donovick

On Tue, Oct 6, 2020 at 1:13 PM David Mertz  wrote:

>
> On Tue, Oct 6, 2020, 1:21 PM Christopher Barker
>
>> if you want to iterate through items N to the end, then how do you do
>> that without either iterating through the first N and throwing them away,
>> or making a slice, which copies the rest of the sequence?
>>
>
> it = (lst[i] for i in range(N, len(lst)))
>
> I haven't benchmarked whether this is faster than islice. It might depend
> on how many you wind up consuming.
>
> It's slightly cumbersome to write, I suppose. But it also seems like
> something one RARELY needs.


[Python-ideas] Re: PEP 472 - regarding d[x=1, y=2] and similar

2020-08-16 Thread Caleb Donovick
*unconcerned

(sorry for the spam)

On Sun, Aug 16, 2020 at 3:57 PM Caleb Donovick 
wrote:

> > Fine, so the use case you claimed was fiction. If you had just said
> "DSL" instead of "anonymous protocols and dataclasses" you would have
> gotten straight to the point and we would have been talking about whether
> extended subscription would be useful for DSLs (I can see various use
> cases), rather than arguing over whether Struct can be spelled with ()
> instead of [] (a total waste of time).
>
> Oh but the dataclasses and protocols part is not fiction, I am just
> concerned with mypy being able to leverage my annotations.
>
> On Sat, Aug 15, 2020 at 7:27 PM Guido van Rossum  wrote:
>
>> On Sat, Aug 15, 2020 at 7:14 PM Caleb Donovick 
>> wrote:
>>
>>> >  To me, the main weakness here is that you couldn't move forward with
>>> this unless you also got the various static type checkers on board. But I
>>> don't think those care much about this use case (an inline notation for
>>> what you can already do with a class definition and annotations). And
>>> without static checking this isn't going to be very popular.
>>>
>>> You underestimate my willingness to generate python files which could be
>>> consumed by static checkers via a preprocessing step.   Also, my real goal
>>> is to abuse type hints for the purposes of my DSL.  But DSL is a nuaghty
>>> term on the list so we won't mention that :)
>>>
>>
>> Fine, so the use case you claimed was fiction. If you had just said "DSL"
>> instead of "anonymous protocols and dataclasses" you would have gotten
>> straight to the point and we would have been talking about whether extended
>> subscription would be useful for DSLs (I can see various use cases), rather
>> than arguing over whether Struct can be spelled with () instead of [] (a
>> total waste of time).
>>
>> In fact, I don't know why you think "DSL" is a naughty term. (I find
>> "runtime use of annotations" much naughtier. :-)
>>
>> --
>> --Guido van Rossum (python.org/~guido)
>> *Pronouns: he/him **(why is my pronoun here?)*
>> <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
>>
>


[Python-ideas] Re: PEP 472 - regarding d[x=1, y=2] and similar

2020-08-16 Thread Caleb Donovick
> Fine, so the use case you claimed was fiction. If you had just said "DSL"
instead of "anonymous protocols and dataclasses" you would have gotten
straight to the point and we would have been talking about whether extended
subscription would be useful for DSLs (I can see various use cases), rather
than arguing over whether Struct can be spelled with () instead of [] (a
total waste of time).

Oh, but the dataclasses and protocols part is not fiction; I am just
concerned with mypy being able to leverage my annotations.

On Sat, Aug 15, 2020 at 7:27 PM Guido van Rossum  wrote:

> On Sat, Aug 15, 2020 at 7:14 PM Caleb Donovick 
> wrote:
>
>> >  To me, the main weakness here is that you couldn't move forward with
>> this unless you also got the various static type checkers on board. But I
>> don't think those care much about this use case (an inline notation for
>> what you can already do with a class definition and annotations). And
>> without static checking this isn't going to be very popular.
>>
>> You underestimate my willingness to generate python files which could be
>> consumed by static checkers via a preprocessing step.   Also, my real goal
>> is to abuse type hints for the purposes of my DSL.  But DSL is a nuaghty
>> term on the list so we won't mention that :)
>>
>
> Fine, so the use case you claimed was fiction. If you had just said "DSL"
> instead of "anonymous protocols and dataclasses" you would have gotten
> straight to the point and we would have been talking about whether extended
> subscription would be useful for DSLs (I can see various use cases), rather
> than arguing over whether Struct can be spelled with () instead of [] (a
> total waste of time).
>
> In fact, I don't know why you think "DSL" is a naughty term. (I find
> "runtime use of annotations" much naughtier. :-)
>
> --
> --Guido van Rossum (python.org/~guido)
> *Pronouns: he/him **(why is my pronoun here?)*
> <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
>


[Python-ideas] Re: PEP 472 - regarding d[x=1, y=2] and similar

2020-08-15 Thread Caleb Donovick
> Hmm... OK, that's an interesting desire.  How do square brackets get you
any closer to that?

I use `__class_getitem__` as a memoized type factory that builds a new
subtype when it's called or returns the cached type.

You are correct that I could name it something else; there is nothing special
about the square brackets other than notation, but having a visual
distinction between type creation and instantiation is useful.
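
A rough sketch of the memoized factory I mean (my own illustration, using
the `Struct[dict(...)]` spelling that works today, since the keyword form
`Struct[x=int, y=str]` is exactly what this thread proposes):

```
class Struct:
    _fields = None          # unparameterized base
    _cache = {}

    def __class_getitem__(cls, fields):
        key = tuple(fields.items())
        if key not in cls._cache:
            # bases=(cls,): every parameterized Struct is a subtype of Struct
            cls._cache[key] = type(f'Struct[{key}]', (cls,), {'_fields': dict(fields)})
        return cls._cache[key]

    def __init__(self, **values):
        if self._fields is None:
            raise TypeError('cannot instantiate unparameterized Struct')
        for name in self._fields:
            setattr(self, name, values[name])

x = Struct[dict(x=int, y=str)](x=1, y='a')
assert isinstance(x, Struct)
assert isinstance(x, Struct[dict(x=int, y=str)])       # memoized: same subtype
assert not isinstance(x, Struct[dict(x=int, y=int)])   # different parameters, different subtype
```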

On Sat, Aug 15, 2020 at 3:44 PM David Mertz  wrote:

> On Sat, Aug 15, 2020 at 4:38 PM Caleb Donovick 
> wrote:
>
>> > Why would it require a metaclass? Rather than just: ...
>>
>> Because I want the following to be true:
>> x = Struct[x=int, y=str](...)
>> assert isinstance(x, Struct)
>> assert isinstance(x, Struct[x=int, y=str])
>> assert not isinstance(x, Struct[x=int, y=int])
>>
>
> Hmm... OK, that's an interesting desire.  How do square brackets get you
> any closer to that?
>
> If this proposal, in whatever variation, is adopted, `Struct[x=int,
> y=str]` is going to be some kind of call to .__getitem__().  There is some
> debate about exactly how the information gets passed into the method, but
> we can bracket that for this question.  One way or another, positional and
> named arguments are available to this future .__getitem__().
>
> So how do you make this true:
>
> assert isinstance(x, Struct.__getitem__(x=int, y=str))
> assert not isinstance(x, Struct.__getitem__(x=int, y=int))
>
> For demonstration, maybe it's easiest just to give a new name to the
> hypothetical method.  Say `Struct.bracket(...)`. It's not obvious to me how
> you'll get the behavior you want.
>
> ... and if you CAN get the behavior, why can't we name this method
> .__call__()?
>
> I'm not really sure what kind of thing Struct is meant to be, as well. Is
> it a class? An instance? A class factory? A metaclass?
>
>
> --
> The dead increasingly dominate and strangle both the living and the
> not-yet born.  Vampiric capital and undead corporate persons abuse
> the lives and control the thoughts of homo faber. Ideas, once born,
> become abortifacients against new conceptions.
>


[Python-ideas] Re: PEP 472 - regarding d[x=1, y=2] and similar

2020-08-15 Thread Caleb Donovick
>  To me, the main weakness here is that you couldn't move forward with
this unless you also got the various static type checkers on board. But I
don't think those care much about this use case (an inline notation for
what you can already do with a class definition and annotations). And
without static checking this isn't going to be very popular.

You underestimate my willingness to generate Python files which could be
consumed by static checkers via a preprocessing step.  Also, my real goal
is to abuse type hints for the purposes of my DSL.  But DSL is a naughty
term on the list so we won't mention that :)





On Sat, Aug 15, 2020 at 4:05 PM Guido van Rossum  wrote:

> On Fri, Aug 14, 2020 at 4:38 PM Caleb Donovick 
> wrote:
>
>> My own personal use for this would be for generating anonymous protocols
>> and dataclasses:
>>
>> class T(Protocol):
>>     x: int
>>     y: str
>> # with some abuse of notation obviously these would generate unique types
>> assert T == Struct[x=int, y=str]
>>
>> # similarly @dataclass
>> class S:
>>     x: int
>>     y: str
>> assert S == Struct[x=int, y=str]
>>
>> I often want to create such types “on the fly” without needing to put a
>> name on them.
>>
>> Now as I don’t need mixed keyword / positional arguments I can achieve
>> this with:
>>
>> # K = dict
>> Struct[K(x=int, y=str)]
>>
>> But that costs 3 more keystrokes and is certainly less beautiful.
>>
>> While I would not personally use this I think a real killer app would be
>> slicing named axis, as the slice syntax is exclusive to geitem and hence
>> can not leverage the dict trick.
>>
> To me, the main weakness here is that you couldn't move forward with this
> unless you also got the various static type checkers on board. But I don't
> think those care much about this use case (an inline notation for what you
> can already do with a class definition and annotations). And without static
> checking this isn't going to be very popular.
>
> If and when we have `__getitem__` with keyword args we can start thinking
> about how to best leverage it in type annotations -- I would assume that
> describing axes of objects like numpy arrays would be the first use case.
>
> --
> --Guido van Rossum (python.org/~guido)
> *Pronouns: he/him **(why is my pronoun here?)*
> <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/JVBCKBGBCZ2VMLMXXUCZTG3GLY3CHOHU/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: PEP 472 - regarding d[x=1, y=2] and similar

2020-08-15 Thread Caleb Donovick
> Why would it require a metaclass? Rather than just: ...

Because I want the following to be true:

```
x = Struct[x=int, y=str](...)
assert isinstance(x, Struct)
assert isinstance(x, Struct[x=int, y=str])
assert not isinstance(x, Struct[x=int, y=int])
```


On Fri, Aug 14, 2020 at 5:27 PM David Mertz  wrote:

>
> On Fri, Aug 14, 2020, 7:53 PM Caleb Donovick 
> wrote:
>
>> > I don't see what that can possible get you that `Struct(x=int, y=str)`
>> doesn't.
>>
>> Using `Struct(x=int, y=str)` requires a metaclass, where `Struct[x=int,
>> y=str]` does not.
>>
>
> Why would it require a metaclass? Rather than just:
>
> class Struct:
> def __init__(self, **kws): ...
>
> Yes, that won't get you the MRO for T, but neither can __getitem__() on an
> entirely different object Struct.
>
> A class factory is also an option, of course.
>
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/S5II33XWAICKHHR47OCKVPKURKZRZGTU/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: PEP 472 - regarding d[x=1, y=2] and similar

2020-08-15 Thread Caleb Donovick
>  I don't know how to interpret these examples. What's Protocol and where
does it come from? What's Struct?

`Protocol` comes from `typing`

`Struct` is my own class which generates anonymous dataclasses and
protocols, as you gathered (unfortunately I currently have two versions,
one for building the protocol and one for building the dataclass, but
that's because of stupid engineering requirements).



On Fri, Aug 14, 2020 at 11:14 PM Steven D'Aprano 
wrote:

> On Fri, Aug 14, 2020 at 04:07:33PM -0700, Caleb Donovick wrote:
> > My own personal use for this would be for generating anonymous protocols
> > and dataclasses:
> >
> > class T(Protocol):
> >     x: int
> >     y: str
> > # with some abuse of notation obviously these would generate unique types
> > assert T == Struct[x=int, y=str]
>
> I don't know how to interpret these examples. What's Protocol and where
> does it come from? What's Struct?
>
> As I recall, one of the motivations for re-visiting PEP 472 is to allow
> such keyword notation in type hints, so that we could write
>
> Struct[x=int, y=str]
>
> in a type hint and have it mean a struct with fields x (an int) and y (a
> str). I'm not sure whether that use in type hinting would allow the use
> of this Struct to create anonymous classes. I suppose it would, but I'm
> not expert enough on type hints to be sure.
>
> But assuming the two uses are compatible, I must say that having the
> same notation for type-hinting a struct and actually creating an
> anonymous struct class would be desirable:
>
> def func(widget:Struct[x=int, y=str]) -> gadget:
> pass
>
> # Later:
> MyWidget = Struct[x=int, y=str]
>
> func(MyWidget(19, 'hello'))
>
> I really like the look of that, and I think that having the Struct call
> use the same subscript notation as the Struct type hint is a plus.
>
>
> > While I would not personally use this I think a real killer app would be
> > slicing named axis, as the slice syntax is exclusive to geitem and hence
> > can not leverage the dict trick.
>
> This is one of the motivating use-cases of PEP 472.
>
>
>
> --
> Steven
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/HLXSO2KSOB62Z4RI7SA52DHWGCGECATC/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/TQHBPRCHNEPOFGBRKSB32VX4F5NY34G7/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: PEP 472 - regarding d[x=1, y=2] and similar

2020-08-14 Thread Caleb Donovick
> I don't see what that can possible get you that `Struct(x=int, y=str)`
doesn't.

Using `Struct(x=int, y=str)` requires a metaclass, where `Struct[x=int,
y=str]` does not.
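
A rough sketch of the distinction I mean (all names invented, not my real
code): to make the call spelling return a *class*, the metaclass's __call__
has to be hijacked, while the subscript spelling needs no metaclass at all.

```
class StructMeta(type):
    def __call__(cls, **fields):
        if cls._fields is None:
            # Struct(x=int, y=str): intercept the call and return a new subtype
            return StructMeta("AnonStruct", (cls,), {"_fields": fields})
        # generated subtypes still need to be instantiable (ctor args ignored here)
        return super().__call__()

class CallStruct(metaclass=StructMeta):
    _fields = None

class GetitemStruct:
    _fields = None
    def __class_getitem__(cls, fields):
        return type("AnonStruct", (cls,), {"_fields": fields})

T = CallStruct(x=int, y=str)            # a class, courtesy of the metaclass
U = GetitemStruct[dict(x=int, y=str)]   # a class, no metaclass involved
assert isinstance(T, type) and isinstance(U, type)
assert isinstance(T(), CallStruct)
```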


On Fri, Aug 14, 2020 at 4:45 PM David Mertz  wrote:

> On Fri, Aug 14, 2020, 7:39 PM Caleb Donovick
>
>> class T(Protocol):
>>     x: int
>>     y: str
>> # with some abuse of notation obviously these would generate unique types
>> assert T == Struct[x=int, y=str]
>>
>> I don't see what that can possible get you that `Struct(x=int, y=str)`
> doesn't.
>
> I'm +0 on the idea, but I don't think "square brackets look nicer" is
> sufficient reason for a change.
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/WYJYCGTXYXU5L7KQJNTGWOSJ6FZ5EDUH/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: PEP 472 - regarding d[x=1, y=2] and similar

2020-08-14 Thread Caleb Donovick
My own personal use for this would be for generating anonymous protocols
and dataclasses:

class T(Protocol):
    x: int
    y: str
# with some abuse of notation obviously these would generate unique types
assert T == Struct[x=int, y=str]

# similarly @dataclass
class S:
    x: int
    y: str
assert S == Struct[x=int, y=str]

I often want to create such types “on the fly” without needing to put a
name on them.

Now as I don’t need mixed keyword / positional arguments I can achieve this
with:

# K = dict
Struct[K(x=int, y=str)]

But that costs 3 more keystrokes and is certainly less beautiful.

While I would not personally use this, I think a real killer app would be
slicing named axes, as the slice syntax is exclusive to getitem and hence
cannot leverage the dict trick.
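
For anyone wondering why, a tiny illustration (K = dict as above):

```
K = dict

K(time=slice(None, 2))   # the verbose spelling the dict trick forces on you
# K(time=:2)             # SyntaxError: the ':' slice shorthand only parses inside [...]
```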

Caleb

On Fri, Aug 14, 2020 at 6:30 AM Paul Moore  wrote:

> On Fri, 14 Aug 2020 at 13:12, Jonathan Fine  wrote:
> > Anyone who is experimenting with keyword keys would, I think, appreciate
> having something they can use straight away. Thus, I think, any use case
> for PEP 472 is also a use case for the general keyword class I'm
> suggesting. No use cases for PEP 472 would of course be fatal.
>
> When experimenting, I routinely write throwaway classes and functions like
>
> def f(*args, **kw):
> print(f"In f, {args=} {kw=}")
>
> I don't see why writing
>
> class A:
> def __getitem__(self, *args, **kw):
> print(f"Getting {args=}, {kw=}")
>
> would be any more onerous. A stdlib class that used the new syntax
> should stand on its own merits, not as "something people can use to
> experiment with".
>
> Paul
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/H7KIXVO3ZB45ANYJM5U2CSNSPEFPBHT5/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/HNQC5PK53VSJ7KPWMRGYKJRKQV3PU34U/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: PEP 472 -- Support for indexing with keyword arguments

2020-07-16 Thread Caleb Donovick
I have wanted this and suggested it before for use with typing.

Defining protocols is obnoxiously verbose for "struct"-like data, and
keyword arguments to subscript could help alleviate that.
I often want to write type hints like this:

```
def foo(x: Protocol[id=int, name=str]):
  bar(x)
  baz(x)

def bar(x: Protocol[name=str]): ...

def baz(x: Protocol[id=int]): ...
```

So I either need to specify more restrictive types than necessary (which
often is not possible because I reuse my functions), or generate a
combinatorial number of Protocols.  Beyond the obvious annoyances that come
with having to generate many protocols, simply naming them is cognitively
expensive.  I don't need to bind an identifier when declaring a Union or
specializing a generic, but if I want to say I have a type with some
attribute it MUST be named.
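
For concreteness, the kind of boilerplate the status quo forces (toy names):

```
from typing import Protocol

class HasId(Protocol):
    id: int

class HasName(Protocol):
    name: str

class HasIdName(Protocol):
    id: int
    name: str

def foo(x: HasIdName) -> None: ...
def bar(x: HasName) -> None: ...
def baz(x: HasId) -> None: ...
```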


On Fri, Jul 10, 2020 at 10:00 AM Paul Moore  wrote:

> On Fri, 10 Jul 2020 at 17:45, Steven D'Aprano  wrote:
>
> > I must admit I like the look of this, but I don't know what I would use
> > it for.
>
> It feels very much like the sort of "here's some syntax that might
> match someone's mental model of something" that is common in languages
> that focus on allowing users to build their own DSLs¹ (Lua and Groovy
> are two examples of the type of language I'm thinking of, although I
> don't know if either has this particular syntax).
>
> Python typically doesn't encourage DSL-style programming, so this type
> of "syntax looking for a use case" isn't very popular.
>
> Paul
>
> ¹ DSL = Domain Specific Language, in case anyone isn't familiar with the
> term.
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/HUELPGHPCAIFLUKCQ5G7O2LKQFRRZ6CU/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/Q3LB4F76MPWC33D2KZFNO6PL4XSOSS6V/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: url imports

2020-06-10 Thread Caleb Donovick
>  i.e. instead of
> `pip install package`
> and
> `import package`
> and
> `pip freeze > requirements.txt`
> for every import,

I am unclear on what you mean by "for every import".
I have never once done this and I maintain half a dozen packages.
Do people really not know what the requirements of the package they are
authoring are?

>  we stick to, for every python script
> `abc = import("package", "2.*")`

I definitely don't want to worry about which version of a package I need at
a module level.

>  from imports could be done like
> `value = import("package", "1.*").value `

You might want to read about the difference between:

`from package import module`
and
`import package.module as module`

https://docs.python.org/3/reference/simple_stmts.html#the-import-statement

 >  this completely removes the virtualenvs issue

???
Virtualenv is amazing.

>  and solves alot of problematic issues with python imports

???


On Wed, Jun 10, 2020 at 9:50 AM Aditya Shankar  wrote:

> it'd be really cool if we could drop virtualenvs + requirements.txt
> altogether, i.e., like deno, mordern javascript and so...
>
> i.e. instead of
> `pip install package`
> and
> `import package`
> and
> `pip freeze > requirements.txt`
> for every import,
>
> we stick to, for every python script
>
> `abc = import("package", "2.*")`
>
> from imports could be done like
>
> `value = import("package", "1.*").value `
>
> this completely removes the virtualenvs issue, and solves alot of
> problematic issues with python imports
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/FNQLJJMEDNPV3TWHVI6GFSEZWRUHY6M6/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/4ZS55OHWNR4DKYLD3OFELIQ74ZACY2ME/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Optional keyword arguments

2020-05-18 Thread Caleb Donovick
Certainly the way default arguments work with mutable types is not the most
intuitive and I think your complaint has some merit.

However how would you define the following to work:

def foo():
    cons = [set(), [], (),]
    funs = []
    for ds in cons:
        def g(arg:=ds):
            return arg
        funs.append(g)
    return funs

How would you evaluate "ds" in the context of the call?
If it were to have the same observable behavior as def g(arg=ds), except
that you would get a "fresh" reference on each invocation, you would get
the following:

assert [f() for f in foo()]  == [set(), [], ()]

Note it cannot be a simple syntactic transform because:

class _MISSING: pass

def foo():
    cons = [set(), [], (),]
    funs = []
    for ds in cons:
        def g(arg=_MISSING):
            if arg is _MISSING:
                arg = eval('ds')  # equivalent to arg = ds so does not produce a fresh reference
            return arg
        funs.append(g)
    return funs

assert [f() for f in foo()]  == [(), (), ()]

Granted, the way closures work (especially in the context of loops) is also
pretty unintuitive, but it stands as a barrier to easily implementing your
desired behavior.
And even if that weren't the case, we still have the issue that eval('ds')
doesn't give you a fresh reference.

Would it implicitly deepcopy ds?  e.g.:

from copy import deepcopy

class _MISSING: pass

def build_g(default):
    def g(arg=_MISSING):
        if arg is _MISSING:
            arg = deepcopy(default)
        return arg
    return g

def foo():
    cons = [set(), [], (),]
    funs = []
    for ds in cons:
        g = build_g(ds)
        funs.append(g)
    return funs

What if ds doesn't implement __deepcopy__?


On Mon, May 18, 2020 at 7:11 AM Richard Damon 
wrote:

> On 5/18/20 9:06 AM, James Lu wrote:
> > "There should be one-- and preferably only one --obvious way to do it."
>
> *obvious*
>
> multiple ways are allowed as long as there is one clear preference.
>
> --
> Richard Damon
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/PCAVU6BEI4KUYUUVL7F3CKV2EQ7ZPBPK/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/YE77WSNCGMLNVCTTD472WFWAELURMHSF/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: PEP 618: Add Optional Length-Checking To zip

2020-05-01 Thread Caleb Donovick
> but the main
> benefit is, again, being able to get the iterated values which were
> silently swallowed by zip when the iteration stopped.

I don't think the callback idea is terrible; however, it doesn't really
seem to have a use case that isn't covered by zip_longest with a sentinel.
Now, as discussed in the main thread, zip strict could also be handled by
zip_longest with a sentinel.  However, zip strict is an incredibly common
use case.  There is no evidence that recovering the consumed elements is.
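
For concreteness, the sentinel version I'm referring to (zip_equal is just
my name for it, not an existing function):

```
from itertools import zip_longest

def zip_equal(*iterables):
    sentinel = object()
    for tup in zip_longest(*iterables, fillvalue=sentinel):
        if any(x is sentinel for x in tup):
            raise ValueError("iterables have different lengths")
        yield tup

assert list(zip_equal("ab", [1, 2])) == [("a", 1), ("b", 2)]
```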

> also: I don't like booleans. they're not extensible, unless you consider
> None. you either get it right the first time, add a new boolean argument
> later, or use enum.Flag from the beginning. this callback-based API
> sidesteps all these issues

While in theory I very much support the use of enums for flags, they have
serious performance problems which make their use inadvisable in the
standard lib, let alone a builtin.

https://bugs.python.org/issue39102
https://bugs.python.org/issue38659


On Fri, May 1, 2020 at 1:20 PM Soni L.  wrote:

>
>
> On 2020-05-01 4:43 p.m., Chris Angelico wrote:
> > On Sat, May 2, 2020 at 5:21 AM Soni L.  wrote:
> > >
> > >
> > >
> > > On 2020-05-01 3:41 p.m., Chris Angelico wrote:
> > > > On Sat, May 2, 2020 at 4:38 AM Soni L.  wrote:
> > > > >
> > > > >
> > > > >
> > > > > On 2020-05-01 3:10 p.m., Brandt Bucher wrote:
> > > > > > I have pushed a first draft of PEP 618:
> > > > > >
> > > > > > https://www.python.org/dev/peps/pep-0618
> > > > > >
> > > > > > Please let me know what you think – I'd love to hear any *new*
> feedback that hasn't yet been addressed in the PEP!
> > > > >
> > > > > What about using an optional kwarg for a handler for mismatched
> lengths?
> > > > > I made a post about it on the other thread and it's not addressed
> in the
> > > > > PEP. It'd make zip capable of doing zip_shortest, zip_equal (aka
> > > > > zip(strict=True)) and zip_longest, it's not stringly-typed, and
> it's
> > > > > user-extensible. Something along the lines of zip(foo, bar, baz,
> > > > > and_then=lambda consumed_items, iters: ...).
> > > > >
> > > >
> > > > YAGNI.
> > >
> > > examples:
> > >
> > > # iterates in chunks, e.g. a very large file that wouldn't fit all in
> RAM
> > > zip(*[iter(x)]*32, and_then=lambda res, _: (yield res))
> >
> > I'm honestly not sure how useful this really is in practice. Iterating
> > over a file is already going to be chunked. What do you gain by
> > wrapping it up in an opaque zip call?
> >
> > > # strict zip
> > > sentinel = object()
> > > def zip_eq(res, iters):
> > >if res or any(next(x, sentinel) is not sentinel for x in iters):
> > >  raise ValueError
> > > zip(a, b, c, and_then=zip_eq)
> > > # this would ideally be zip.strict e.g. zip(a, b, c,
> > > and_then=zip.strict), but w/e.
> >
> > So a messier and noisier spelling of what's already in this
> proposal...
> >
> > > # normal (shortest) zip but using an explicit function
> > > def no_op(*args, **kwargs):
> > >pass
> > > zip(a, b, c, and_then=no_op)
> > >
> >
> > ... and a messier and noisier spelling of what we already have.
> >
> > I say again, YAGNI. Give an actual use-case for the excessive
> > generality of your proposal - namely, the ability to provide a custom
> > function. And show that it's better with zip than just with a custom
> > generator function.
>
> we can finally push for the no_op function, for starters. but the main
> benefit is, again, being able to get the iterated values which were
> silently swallowed by zip when the iteration stopped.
>
> also: I don't like booleans. they're not extensible, unless you consider
> None. you either get it right the first time, add a new boolean argument
> later, or use enum.Flag from the beginning. this callback-based API
> sidesteps all these issues.
>
> and just in case maybe zip(strict=True) should be zip(errors=True) and
> we can later change it to zip(errors="replace", value=Foo) to get
> zip_longest >.< (no really bools = bad please don't use them in new APIs)
> >
> > ChrisA
> > ___
> > Python-ideas mailing list -- python-ideas@python.org
> > To unsubscribe send an email to python-ideas-le...@python.org
> > https://mail.python.org/mailman3/lists/python-ideas.python.org/
> > Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/UAQGBSUUFSQJRE56VGTHVXAHCJHUAYTM/
> > Code of Conduct: http://python.org/psf/codeofconduct/
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/F43SYFQAK7O7TVUGLHMKIKJDESES4W25/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org

[Python-ideas] Re: zip(x, y, z, strict=True)

2020-04-24 Thread Caleb Donovick
  +1 on almost always expecting my iterators to be the same length when I
pass them to zip.  I struggle to think of a time when I haven't had that
expectation.

To people asking whether I would catch the error that zip_strict would
raise, almost certainly not.  I rarely catch ValueError other than to log
or raise a different exception.

I don't really care about the state of the iterators post zip_strict (as I
would generally not be catching that exception) but I suppose it should be
the same as zip, evaluate left to right.

Seems to me that deprecating the current zip behavior is more trouble than
it's worth; just add zip_strict to itertools and call it a day.  If
zip_strict turns out to be super popular then we could revisit changing the
behavior of zip.

I don't think we need to have new versions of map; there isn't map_longest
in itertools.  Also, building variant maps is trivial:

def map_strict(f, *iters):
    return starmap(f, zip_strict(*iters))


- Caleb Donovick

On Fri, Apr 24, 2020 at 10:57 AM Serhiy Storchaka 
wrote:

> 24.04.20 07:58, Andrew Barnert via Python-ideas пише:
> > And not only that, the PEP for this first step has to make it clear that
> it’s useful on its own—not just to people like Serhiy who eventually want
> to replace zip and see it as a first step, but also to people who do not
> want zip to ever change but do want a convenient way to opt in to checking
> zips (and don’t find more-itertools convenient enough) and see this as the
> _only_ step.
>
> Don't consider me an apologist. I just think that might be a good idea.
> But now we do not have enough information to decide. We should wait
> several months or years. And even if it turns out that most users prefer
> zip_equal(), the cost of changing zip() may be too high. But we should
> not reject this possibility.
>
> While we discuss zip(), we should not forget about map() and other
> map-like functions (like Executor.map()). All that was said about zip()
> is applied to map() too, so after adding zip_equal() and/or
> zip_shortest() we will need to add corresponding map variants.
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/NJB46SWADI34ZWPFZTW3V5KTYGNT7SQK/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/O5SNKD75PP3VJOGIHVWAI7O3QAOOUV7P/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Proposed class for collections: dynamicdict

2020-04-16 Thread Caleb Donovick
>  construction calls __init__ if __new__ returns an instance

Actually type's __call__ method does that, although that doesn't help my
point at all...

Your point about there being no method to perform the dispatch is good.  To
get what I want without interpreter changes there would need to be some
sort of auxiliary getitem method which would perform the protocol.  Or
alternatively MissingMapping (or some other mixin) could define __getitem__
and require __getitem_impl__ and __missing__.  As this wouldn't require
any changes to anything, I think it might be the best solution I have
proposed.
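
A rough sketch of that mixin (MissingMapping and __getitem_impl__ are
made-up names; nothing like this exists today):

```
from collections.abc import Mapping

class MissingMapping(Mapping):
    # __getitem__ performs the dispatch; subclasses supply the raw lookup
    # via __getitem_impl__ and the fallback via __missing__.
    def __getitem__(self, key):
        try:
            return self.__getitem_impl__(key)
        except KeyError:
            return self.__missing__(key)

class Defaulting(MissingMapping):
    def __init__(self, data, default):
        self._data = dict(data)
        self._default = default
    def __getitem_impl__(self, key):
        return self._data[key]
    def __missing__(self, key):
        return self._default
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

d = Defaulting({"a": 1}, default=0)
assert d["a"] == 1 and d["nope"] == 0
```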

>  I don't see how this is much different from what dicts already do.

I was suggesting making __missing__ part of what a subscript expression in
a load context (__getitem__) means so that user defined Mappings could use
it.

I've gotten fairly off topic; I might make a new thread for my idea.

Caleb

On Wed, Apr 15, 2020 at 6:35 PM Steven D'Aprano  wrote:

> On Mon, Apr 13, 2020 at 06:43:53PM -0700, Caleb Donovick wrote:
>
> > > Why can’t you just subclass dict and override that?
> >
> > Because TypeError: multiple bases have instance lay-out conflict is one
> of
> > my least favorite errors.
>
> For the benefit of the people on this list who aren't as familiar with
> your code as you are, could you explain how that comment is relevant?
>
> I don't like "multiple bases have instance lay-out conflict" errors
> either. But I don't get them from subclassing dict.
>
> py> class MyDict(dict):
> ... def __missing__(self, key):
> ... return (key, None)
> ...
> py> MyDict()[999]
> (999, None)
>
> Works fine.
>
> Normally, if I get that error, it's a sign that I'm probably using too
> much multiple inheritence and not enough composition, or even that I
> should step away from from the OO paradigm and reconsider whether or not
> I actually need a hybrid list+float object :-)
>
> So can you please explain why how this exception is relevant, and how
> the proposal here will fix it?
>
> defaultdict is subject to the same "lay-out conflict" issue, so from my
> naive point of view, this proposal won't help you avoid the error.
>
> py> from collections import defaultdict
> py> class Weird(int, defaultdict):
> ... pass
> ...
> Traceback (most recent call last):
>   File "", line 1, in 
> TypeError: multiple bases have instance lay-out conflict
>
>
> > Perhaps `__missing__` could be a first class part of the getitem of
> > protocol, instead of a `dict` specific feature.  So that
> > ```
> > r = x[key]
> > ```
> > means:
> > ```
> > try:
> >   r = x.__getitem__(key)
> > except KeyError as e: # should we also catch IndexError?
> >   try:
> > missing = x.__missing__
> >   except AttributeError:
> > raise e from None
> >   r = missing(key)
> > ```
>
> I don't see how this is much different from what dicts already do. You
> can't add a `__missing__` attribute to literal dicts (or lists, tuples,
> etc) since they don't take attributes.
>
> {}.__missing__ = lambda key: key
>
> fails, so you have to subclass anyway.
>
>
> > Obviously this would come at some performance cost for non dict mappings
> so
> > I don't know if this would fly.
>
> Giving every single dict a `__missing__` member, for the sake of the
> rare instance that needs one, would come at a performance and memory
> cost too.
>
> It would be nice if we could work out what the problem we are trying to
> solve is before we jump straight into solutions mode.
>
>
> To my mind, the naive problem "if `d[key]` fails, call a method with
> `key` as argument" already has a solution. If the `__missing__` dunder
> isn't a solution, I don't know what the problem is.
>
>
>
> --
> Steven
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/DF4TMSDTNOQ53VTESLT3I4DZFPNEXBVE/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/4RYXB7IDGOBCBNA77MEETMVNNIPGA3BO/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Proposed class for collections: dynamicdict

2020-04-15 Thread Caleb Donovick
> Besides performance, I don’t think it fits with Guido’s conception of the
protocols as being more minimal than the builtin types—e.g., set has not
just a & operator, but also an intersection method that takes 0 or more
arbitrary iterables; the set protocol has no such method, so
collections.abc.Set neither specifies nor provides an intersection method).
It’s a bit muddy of a conception at the edges, but I think this goes over
the line, and maybe have been explicitly thought about and rejected for the
same reason as Set.intersection.

Making __missing__ a first-class part of how __getitem__ works seems more
analogous to __getattr__ and __getattribute__ than to the intersection
method.  Is there any other dunder that is only implicitly called on
builtin types?

I mostly agree with everything else you said.



On Tue, Apr 14, 2020 at 11:34 AM Steele Farnsworth 
wrote:

> I've implemented the class as a stand-alone module here:
> https://github.com/swfarnsworth/dynamicdict
>
> It could in theory be made significantly more concise if `defdict_type`
> were the base for this class instead of `PyDict_Type`.
>
>
>
> On Tue, Apr 14, 2020 at 1:32 PM Andrew Barnert via Python-ideas <
> python-ideas@python.org> wrote:
>
>> On Apr 13, 2020, at 18:44, Caleb Donovick 
>> wrote:
>>
>> 
>> I have built this data structure countless times. So I am in favor.
>>
>>
>> Maybe you can give a concrete example of what you need it for, then? I
>> think that would really help the proposal. Especially if your example needs
>> a per-instance rather than per-class factory function.
>>
>> > Why can’t you just subclass dict and override that?
>>
>> Because TypeError: multiple bases have instance lay-out conflict is one
>> of my least favorite errors.
>>
>>
>> But defaultdict, being a subclass or dict, has the same problem in the
>> same situations, and (although I haven’t checked) I assume the same is true
>> for the OP’s dynamicdict.
>>
>> Perhaps `__missing__` could be a first class part of the getitem of
>> protocol, instead of a `dict` specific feature.  So that
>>
>> ```
>> r = x[key]
>> ```
>> means:
>> ```
>> try:
>>   r = x.__getitem__(key)
>> except KeyError as e: # should we also catch IndexError?
>>   try:
>> missing = x.__missing__
>>   except AttributeError:
>> raise e from None
>>   r = missing(key)
>> ```
>>
>> Obviously this would come at some performance cost for non dict mappings
>> so I don't know if this would fly.
>>
>>
>> Besides performance, I don’t think it fits with Guido’s conception of the
>> protocols as being more minimal than the builtin types—e.g., set has not
>> just a & operator, but also an intersection method that takes 0 or more
>> arbitrary iterables; the set protocol has no such method, so
>> collections.abc.Set neither specifies nor provides an intersection method).
>> It’s a bit muddy of a conception at the edges, but I think this goes over
>> the line, and maybe have been explicitly thought about and rejected for the
>> same reason as Set.intersection.
>>
>> On the other hand, none of that is an argument or any kind against your
>> method decorator:
>>
>> So instead maybe there could have standard decorator to get the same
>> behavior?
>> ```
>> def usemissing(getitem):
>>   @wraps(getitem)
>>   def wrapped(self, key):
>> try:
>>   return getitem(self, key)
>> except KeyError as e:
>>   try:
>> missing = self.__missing__
>>   except AttributeError:
>> raise e from None
>> return missing(key)
>>   return wrapped
>> ```
>>
>>
>> This seems like a great idea, although maybe it would be easier to use as
>> a class decorator rather than a method decorator. Either this:
>>
>> def usemissing(cls):
>> missing = cls.__missing__
>> getitem = cls.__getitem__
>> def __getitem__(self, key):
>> try:
>> return getitem(self, key)
>> except KeyError:
>>  return missing(self, key)
>> cls.__getitem__ = __getitem__
>> return cls
>>
>> Or this:
>>
>>def usemissing(cls):
>> getitem = cls.__getitem__
>> def __getitem__(self, key):
>> try:
>> return getitem(self, key)
>> except KeyError:
>>  return type(self).__missing__(self, key)
>> cls.__getitem__ = __getitem__
>

[Python-ideas] Re: Proposed class for collections: dynamicdict

2020-04-13 Thread Caleb Donovick
I have built this data structure countless times. So I am in favor.

> Why can’t you just subclass dict and override that?

Because TypeError: multiple bases have instance lay-out conflict is one of
my least favorite errors.

Perhaps `__missing__` could be a first-class part of the getitem
protocol, instead of a `dict`-specific feature.  So that
```
r = x[key]
```
means:
```
try:
  r = x.__getitem__(key)
except KeyError as e: # should we also catch IndexError?
  try:
missing = x.__missing__
  except AttributeError:
raise e from None
  r = missing(key)
```

Obviously this would come at some performance cost for non dict mappings so
I don't know if this would fly.

So instead maybe there could be a standard decorator to get the same
behavior?
```
from functools import wraps

def usemissing(getitem):
  @wraps(getitem)
  def wrapped(self, key):
try:
  return getitem(self, key)
except KeyError as e:
  try:
missing = self.__missing__
  except AttributeError:
raise e from None
return missing(key)
  return wrapped
```
Alternatively, it could be implemented as part of one of the ABCs maybe
something like:
```
class MissingMapping(Mapping):
  # Could also give MissingMapping its own metaclass
  # and do the modification of __getitem__ there.
  def __init_subclass__(cls, **kwargs):
super().__init_subclass__(**kwargs)
cls.__getitem__ = usemissing(cls.__getitem__)

  @abstractmethod
  def __missing__(self, key): pass
```

Caleb Donovick

On Fri, Apr 10, 2020 at 6:39 PM Steven D'Aprano  wrote:

> On Fri, Apr 10, 2020 at 06:02:25PM -0700, Andrew Barnert via Python-ideas
> wrote:
>
> > (Keep in mind that defaultdict
> > was added somewhere around 2.4 or 2.5, while __missing__ has only been
> > there since somewhere around 2.7/3.3. I’ll bet it would be different
> > if it were invented today.)
>
>
> Both `__missing__` and `defaultdict` were added in version 2.5.
>
> https://docs.python.org/2/library/stdtypes.html#dict
> https://docs.python.org/2/library/collections.html#defaultdict-objects
>
>
> --
> Steven
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/HARDO2LXJ72AUYJXCWKWNOYJW4PU56HG/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/FAU47KYZQY6RMSXF3OUGSSDJVHCXXVR2/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Live variable analysis -> earlier release?

2020-04-08 Thread Caleb Donovick
This would almost certainly break my code.  As a DSL developer I do a lot
of (exec | eval | introspection | ...) shenanigans that would make doing
liveness analysis undecidable.

On Wed, Apr 8, 2020 at 10:51 AM Andrew Barnert via Python-ideas <
python-ideas@python.org> wrote:

> On Apr 8, 2020, at 09:57, Guido van Rossum  wrote:
> >
> > 
> > Look at the following code.
> >
> > def foo(a, b):
> > x = a + b
> > if not x:
> > return None
> > sleep(1)  # A calculation that does not use x
> > return a*b
> >
> > This code DECREFs x when the frame is exited (at the return statement).
> But (assuming) we can clearly see that x is not needed during the sleep
> (representing a big calculation), we could insert a "del x" statement
> before the sleep.
> >
> > I think our compiler is smart enough to find out *some* cases where it
> could safely insert such del instructions.
>
> It depends on how much you’re willing to break and still call it “safely”.
>
> def sleep(n):
> global store
> store = inspect.current_frame().f_back.f_locals['x']
>
> This is a ridiculous example, but it shows that you can’t have all of
> Python’s dynamic functionality and still know when locals are dead. And
> there are less ridiculous examples with different code. If foo actually
> calls eval, exec, locals, vars, etc., or if it has a nested function that
> nonlocals x, etc., how can we spot that at compile time and keep x alive?
>
> Maybe that’s ok. After all, that code doesn’t work in a Python
> implementation that doesn’t have stack frame support. Some of the other
> possibilities might be more portable, but I don’t know without digging in
> further.
>
> Or maybe you can add new restrictions to what locals and eval and so on
> guarantee that will make it ok? Some code will break, but only rare
> “expert” code, where the authors will know how to work around it.
>
> Or, if not, it’s definitely fine as an opt-in optimization: decorate the
> function with @deadlocals and that decorator scans the bytecode and finds
> any locals that are dead assuming there’s no use of locals/eval/cells/etc.
> and, because you told it to assume that by opting in to the decorator, it
> can insert a DELETE_FAST safely.
>
> People already do similar things today—e.g., I’ve (only once in live code,
> but that’s still more than zero) used a @fastconst decorator that turns
> globals into consts on functions that I know are safe and are bottlenecks,
> and this would be no different. And of course you can add a recursive class
> decorator, or an import hook (or maybe even a command line flag or
> something) that enables it everywhere (maybe with a @nodeadlocals decorator
> for people who want it _almost_ everywhere but need to opt out one or two
> functions).
>
> Did Victor Stinner explore this as one of the optimizations for FAT
> Python/PEP 511/etc.? Maybe not, since it’s not something you can insert a
> guard, speculatively do, and then undo if the guard triggers, which was I
> think his key idea.
>
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/OIGCRV464VJW3FRRBBK25XSNQYGWID7N/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/DSIMIUM6QCUMC2GRTFV646KVWYIR45DR/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Incremental step on road to improving situation around iterable strings

2020-02-25 Thread Caleb Donovick
>
> Is there a reason mypy could not assume that all AtomicStr methods that
> return strings actually return an AtomicStr, without impacting runtime
> behavior...? Maybe it's not possible and I'm just not familiar enough with
> the behavior of the type checkers.
>

I don't know, but I could see that being problematic if parts of a project
expect strings to be iterable and some expect them to be atomic.

If mypy assumes `isinstance(obj, Iterable)` returns false on `str` then it's
not really helping in the case where `obj: Union[str, Iterable[str]]`.

And while I don't really know much about mypy, I do know it understands
stuff like `if isinstance(...)` checks.  It seems like it would take
tremendous hackery to get it to understand that when `isinstance(obj,
Iterable)` returns True, you still can't pass that object to a function
that consumes an iterable without also checking
`not isinstance(obj, (str, bytes))`.

assert """

> In practice this would be a very odd decision given that the definition of
> Iterable is "has an __iter__". And there are plenty of times people will
> find the resulting behavior surprising since str DOES have an __iter__
> method and there are plenty of times you might want to iterate on sequences
> and strs in the same context.

""" in set_of_draw_backs

On Tue, Feb 25, 2020 at 4:28 AM Rhodri James  wrote:

> On 24/02/2020 21:07, Alex Hall wrote:
> > This response honestly seems to ignore most of the paragraph that it's
> > responding to. It being a sharp distinction doesn't matter because
> > consistency isn't axiomatically valuable.
>
> Actually I think it is.  Or more precisely, I think inconsistency is
> axiomatically has negative value.  An inconsistency breaks your
> expectation of how a language works.  Each inconsistency creates a
> special case that you simply have to learn in order to use the language.
>   The more inconsistencies you have, the more of those exceptions you
> have to know, and the harder the language is to learn.  Just consider
> how hard English is to learn as an adult, and notice just how much of
> the language is inconsistency after inconsistency.
>
> --
> Rhodri James *-* Kynesim Ltd
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/5ZIK4ESPNPX2YL4MNGGMNIFE56YIHCAP/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/VQZWMIXFO4MJV6CH64YZXZH2JYVVRH5G/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Incremental step on road to improving situation around iterable strings

2020-02-24 Thread Caleb Donovick
So I do not have extensive experience with mypy, but I don't see how it
would help.  The entire issue is that `str` is an instance of
`Iterable[str]`, so how is mypy going to catch my error of passing a single
string instead of an iterable of strings to a function?
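
A minimal example of the failure mode I mean:

```
from typing import Iterable, List

def upper_all(names: Iterable[str]) -> List[str]:
    return [name.upper() for name in names]

upper_all(["alice", "bob"])   # intended use
upper_all("alice")            # mypy is silent: str is itself an Iterable[str]
```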

However, the ability to distinguish between `str` and `Iterable[str]` is
important.  I often write functions that operate on a scalar or an iterable
of scalars, and I always need to special-case `str`.  Now you might argue
that my life would be simplified if I required everything to just be an
iterable, and you would be right.  However, it would still leave me with
strange errors when passing single strings and would decrease usability.
Consider the behavior of `__slots__`: I for one think it's great that
`__slots__ = "foo"` does what it does.  I think it would be bad from a
usability standpoint to require a trailing comma, even if it would simplify
the life of the interpreter.

I like the idea of an `AtomicString` but it doesn't really help unless IO
and string literals return `AtomicString` instead of `str` (which is just
as breaking as changing `str`).
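
My mental model of AtomicString, for what it's worth (a guess at the idea,
not anyone's actual implementation):

```
class AtomicString(str):
    def __iter__(self):
        raise TypeError("AtomicString is not iterable; use .chars()")
    def chars(self):
        return str.__iter__(self)

s = AtomicString("abc")
assert list(s.chars()) == ["a", "b", "c"]
# list(s) raises TypeError, but s.upper() still returns a plain str,
# which is why literals and IO would also need to produce AtomicString.
```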

I agree that noisy is broken and that a warning that never becomes an error
is probably a bad idea.

While I am firmly in the camp that this is a problem, I am not sure if it's
a problem that should be fixed.  Any solution will break code; there is no
way around it.  However, I think one possible solution (that has its own
set of drawbacks) would be to simply not register `str` as a subclass of
`Iterable`.  This would allow for the following pattern:

```
if isinstance(obj, Iterable):
  # iterable case
else:
  # scalar case
```

Where currently the following is required:

```
if isinstance(obj, Iterable) and not isinstance(obj, (str, bytes)):
  # iterable case
else:
  # scalar case
```

Yes this would break code, but it would break a lot less code than actually
changing the behavior of `str`.

On Mon, Feb 24, 2020 at 3:16 PM Dominik Vilsmeier 
wrote:

> I agree, a warning that is never converted to an error indicates that
> this is more about style than behavior (and in that sense it is use case
> specific). It would also be annoying for people that intentionally
> iterate over strings and find this a useful feature.
>
> So this sounds more like the job for a linter or, because it's dealing
> with types, a type checker. So what about the compromise that for
> example mypy added a flag to treat strings as atomic, i.e. then it would
> flag usage of strings where an iterable or a sequence is expected. Would
> that solve the problem?
>
> On 24.02.20 23:31, Paul Moore wrote:
> > On Mon, 24 Feb 2020 at 20:13, Alex Hall  wrote:
> >>> Conversely, I can't remember a case where I've ever accidentally
> >>> iterated over a string when I meant not to.
> >> Do you ever return a string from a function where you should have
> returned a list containing one string? Or similarly passed a string to a
> function? Forgotten to put a trailing comma in a singleton tuple? Forgotten
> to add .items() to `for key, value in kwargs:`?
> > Not that I remember - that's what I said, basically. No, I'm not
> > perfect (far from it!) but I don't recall ever hitting this issue.
> >
> >>> compelling arguments are typically
> >>> around demonstrating how much code would be demonstrably better with
> >>> the new behaviour
> >> That represents a misunderstanding of my position. I think I'm an
> outlier among the advocates in this thread, but I do not believe that
> implementing any of the ideas in this proposal would significantly affect
> code that lives in the long term. Some code would become slightly better,
> some slightly worse.
> > I beg to differ.
> >
> > * Code that chooses to use `.chars()` would fail to work on versions
> > of Python before whatever version implemented this (3.9? 3.10?). That
> > makes it effectively unusable in libraries for years to come.
> > * If you make iterating over strings produce a warning before
> > `.chars()` is available as an option for any code that would be
> > affected, you're inflicting a warning on all of that code.
> > * A warning that will never become an error is (IMO) unacceptable.
> > It's making it annoying to use a particular construct, but with no
> > intention of ever doing anything beyond annoying people into doing
> > what you want them to do.
> > * A warning that *will* become an error just delays the problem -
> > let's assume we're discussing the point when it becomes an error.
> >
> > As a maintainer of pip, which currently still supports Python 2.7, and
> > which will support versions of Python earlier than 3.9 for years yet,
> > I'd appreciate it if you would explain what pip should do about this
> > proposed change. (Note: if you suggest just suppressing the warning,
> > I'll counter by asking you why we'd ever remove the code to suppress
> > the warning, and in that case what's the point of it?)
> >
> > And pip is an application, so easier. What about the `packaging`
> > library? 

[Python-ideas] Re: Allow kwargs in __{get|set|del|}item__

2019-10-08 Thread Caleb Donovick
>
> Because
>
> >>> dict(foo=:1)
>   File "", line 1
> dict(foo=:1)
>  ^
> SyntaxError: invalid syntax
>

I don't see how that's an argument; we are talking about a syntax
extension.  Slice builder syntax is only ever allowed in a subscript.
Edit my original grammar-change proposal to:

```
subscriptlist: ... | kwargsubscript (','  kwargsubscript )* [',']
kwargsubscript: NAME '=' subscript
```

Now slices are allowed in keyword arguments.

-- Caleb Donovick

On Tue, Oct 8, 2019 at 1:09 PM Anders Hovmöller  wrote:

>
>
> On 8 Oct 2019, at 18:59, Todd  wrote:
>
> 
>
>
> On Tue, Oct 8, 2019, 12:46 Anders Hovmöller  wrote:
>
>>
>>
>> On 8 Oct 2019, at 18:35, Todd  wrote:
>>
>> On Tue, Oct 8, 2019 at 12:22 PM Andrew Barnert via Python-ideas <
>> python-ideas@python.org> wrote:
>>
>>> On Oct 7, 2019, at 21:21, Caleb Donovick 
>>> wrote:
>>> >
>>> > >  But what if you wanted to take both positional AND keyword?
>>> >
>>> > I was suggesting that that wouldn't be allowed.  So subscript either
>>> has a single argument, a tuple of arguments, or a dictionary of arguments.
>>> Allowing both has some advantages but is less cleanly integratible.
>>>
>>> The problem is that half the examples people conjure up involve both:
>>> using the keywords as options, while using the positional arguments for the
>>> actual indices. Calling the proposal “kwargs in getitem” encourages that
>>> thinking, because that’s the prototypical reason for kwargs in function
>>> calls.
>>>
>>> If there were non-toy examples, so people didn’t have to imagine how it
>>> would be used for themselves, that might be helpful.
>>>
>>>
>> Here is an example modified from the xarray documentation, where you want
>> to assign to a subset of your array:
>>
>> da.isel(space=0, time=slice(None, 2))[...] = spam
>>
>> With this syntax this could be changed to:
>>
>> da[space=0, time=:2] = spam
>>
>>
>> I must have missed something... when did the proposal we're discussing
>> start allowing : there?
>>
>> / Anders
>>
>
> Why wouldn't it?
>
>
> Because
>
> >>> dict(foo=:1)
>   File "", line 1
> dict(foo=:1)
>  ^
> SyntaxError: invalid syntax
>
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/ZJDP2H7EVGOFDVAE4ZYLUMKNNZN6UFCR/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/2PUTYEP3XO7U7BJNNZZHICV7RQZNTLV2/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Allow kwargs in __{get|set|del|}item__

2019-10-08 Thread Caleb Donovick
>  It captures a tiny fraction of Pandas style filtering while complicating
the syntax of Python

Sure, maybe we can't represent all filters super concisely, but at least
inequalities, or any filter on a single axis, would not be hard.  E.g.

db[x=LT(1)] == db[db.x < 1]
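
To make that concrete, here is a toy version of the marker-object idea,
spelled with the dict trick since the keyword syntax doesn't exist yet (LT
and Table are invented names):

```
class LT:
    def __init__(self, bound):
        self.bound = bound
    def __call__(self, value):
        return value < self.bound

class Table:
    def __init__(self, rows):
        self.rows = rows   # list of dicts, standing in for a real dataframe
    def __getitem__(self, filters):
        def keep(row):
            return all(
                pred(row[col]) if callable(pred) else row[col] == pred
                for col, pred in filters.items()
            )
        return [row for row in self.rows if keep(row)]

db = Table([{"x": 0, "y": 2}, {"x": 5, "y": 2}])
assert db[dict(x=LT(1))] == [{"x": 0, "y": 2}]   # stand-in for db[x=LT(1)]
assert db[dict(y=2)] == db.rows                  # stand-in for db[y=2]
```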

Granted, I don't really see a way to express logical connectives between
filters in a beautiful way -- beyond doing something like db[filter=OR(x=1,
y=2)], which really isn't any better than db.filter(OR(x=1, y=2)).

>  db['x=1']

Ah yes, because parsing strings is a reasonable replacement for language
support.  I have no idea why Pandas dropped support for this, but I have to
imagine it's because it's horribly ugly, prone to bugs, and difficult to
metaprogram.  Semantically meaningful strings are terrible.  Every time I
write a string literal for any reason other than wanting a human to read
that string, I die a little inside.  Which is part of the reason I want
db[x=1] instead of db[{'x':1}].  And yes, everything is a string under the
hood in Python, but that doesn't make semantic strings less terrible.
Really, under the hood (in assembly) everything is gotos, but that doesn't
make their use better either. /rant
On Mon, Oct 7, 2019 at 10:07 PM David Mertz  wrote:

> It's really not a worthwhile win.  It captures a tiny fraction of Pandas
> style filtering while complicating the syntax of Python. Here's another
> Pandas filter:
>
>   db[db.x < 1]
>
> No help there with the next syntax.  Here's another:
>
>   db[(db.x == 1) | (db.y == 2)]
>
> A much better idea doesn't require any changes in Python, just a clever
> class method. Pandas did this for a while, but deprecated it because...
> reasons. Still, the OP is free to create his version:
>
> db['x=1']
>
> Or
>
> db['x<1']
> db['x=1 or y=2']
>
> You can bikeshed the spelling of those predicates, but it doesn't matter,
> they are just strings that you can see however you decide is best.
>
> On Mon, Oct 7, 2019, 8:38 PM Steven D'Aprano  wrote:
>
>> On Tue, Oct 08, 2019 at 09:19:07AM +1100, Cameron Simpson wrote:
>> > On 07Oct2019 10:56, Joao S. O. Bueno  wrote:
>> > >So, in short, your idea is to allow "=" signs inside `[]` get notation
>> to
>> > >be translated
>> > >to dicts on the call,
>> >
>> > Subjectively that seems like a tiny tiny win. I'm quite -1 on this
>> idea;
>> > language spec bloat to neglible gain.
>>
>> As per Caleb's initial post, this is how Pandas currently does it:
>>
>> db[db['x'] == 1]
>>
>> Replacing that with db[x=1] seems like a HUGE win to me.
>>
>> Even db[{'x': 1}] is pretty clunky.
>>
>>
>>
>> --
>> Steven
>> ___
>> Python-ideas mailing list -- python-ideas@python.org
>> To unsubscribe send an email to python-ideas-le...@python.org
>> https://mail.python.org/mailman3/lists/python-ideas.python.org/
>> Message archived at
>> https://mail.python.org/archives/list/python-ideas@python.org/message/RQH4VJPJ6CG3RII4GAY3ERW2DRZ6DEWW/
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/5O7BLOEFMZXIOVFBIOKN7ER3ULU7APM5/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/77XKAAFHUWHV573VOR6RPSV2SMSSO3DQ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Allow kwargs in __{get|set|del|}item__

2019-10-07 Thread Caleb Donovick
>  But what if you wanted to take both positional AND keyword?

I was suggesting that that wouldn't be allowed.  So the subscript either has
a single argument, a tuple of arguments, or a dictionary of arguments.
Allowing both has some advantages but is less clean to integrate.

-- Caleb Donovick

On Tue, Oct 8, 2019 at 12:16 AM Chris Angelico  wrote:

> On Tue, Oct 8, 2019 at 12:47 PM Caleb Donovick 
> wrote:
> >
> > >  Why not?
> >
> > What if I want a getitem that only has keyword arguments? I have to take
> the empty tuple as a positional argument, instead of just ensuring that the
> key is a dict.
> >
>
> But what if you wanted to take both positional AND keyword? You can't
> ensure that the key is both a dict and a tuple.
>
> Cleaner to continue passing the tuple exactly as normal.
>
> ChrisA
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/MKI3W472CQ5NWNNXADUZDFALHJ6IDM37/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Allow kwargs in __{get|set|del|}item__

2019-10-07 Thread Caleb Donovick
>  Why not?

What if I want a getitem that only has keyword arguments? I have to take
the empty tuple as a positional argument, instead of just ensuring that the
key is a dict.

> Now, assuming you want to allow ** in getitem lookups

I don't.  *args are not allowed in subscripts either.  However, disallowing
** would make the keyword version of the syntax pretty much useless, as
indices could not be passed on to a function programmatically.

While I agree that not using keywords creates a certain class of bugs, those
bugs already exist, at least in the sense that if I pass a dictionary as an
index things are going to get weird.  However, I'd rather not have all the
special casing that comes with keyword arguments.

On Mon, Oct 7, 2019 at 7:57 PM Andrew Barnert  wrote:

> On Oct 7, 2019, at 14:56, Caleb Donovick  wrote:
> >
> > > I think it might be better if it actually passed them as keyword
> arguments.
> >
> > If only keyword arguments are passed what happens to the positional
> index?   Is it the empty tuple?
>
> That seems like the obvious, and also most useful, answer.
>
> > Currently a subscript with no index (`dict()[]`) is a syntax error; should
> it continue to be?
>
> Why not? If you want to look up the empty tuple today, you can, you just
> have to be explicit: `d[()]`. This rarely comes up, but when it does, this
> reads pretty obvious (especially given that when people are using tuples as
> dict keys, they usually wrap them in parens anyway, even if it’s not
> necessary, because it looks clearer). I don’t see any reason to change that.
>
> Now, assuming you want to allow ** in getitem lookups, this would add
> another way to write the same thing: `d[**{}]`. I suppose this should be
> legal (since it could easily come up with a dynamic `d[**kw]` when there
> happen to be no keywords), but I don’t think anyone will ever be tempted to
> write it, and I don’t think anyone would be confused for more than a few
> seconds if they did, and I don’t think that allowing it requires us to
> start allowing `d[]`. That’s a restriction on the syntax for readability,
> not a limitation on the semantics, and the same readability issue still
> applies.
>
> > > Also, I think there are user implementations that accept any iterable,
> whether to treat it as a tuple or as an array-like, and a dict is an
> iterable of its keys, so it might do the wrong thing rather than raising at
> all.
> >
> > I had not thought about this.  I have a lot of code in the form:
> > ```
> > if not isinstance(key, iterable):
> > key = key,
> > # do stuff assuming key is an iterable
> > ```
> > Which would have very strange behavior if a dict was passed.
>
> Exactly. And I think what you want to happen here is a `TypeError:
> Spam.__getitem__ takes no keyword arguments` or similar, not treating the
> keywords as an iterable and ignoring their values. That’s why I think
> passing kwargs separately rather than just making key a dict might make
> more sense.
>
> But then I haven’t put a huge amount of thought into this. Given that
> there’s an existing abandoned PEP, I’ll bet this was already hashed out in
> more detail, and you probably want to go back to that, come up with answers
> to the objections and open questions, and start over.
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/GSFCYPOWVQNTYNLFJWOOUMW2M2HDA6JQ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Allow kwargs in __{get|set|del|}item__

2019-10-07 Thread Caleb Donovick
> I think it might be better if it actually passed them as keyword
arguments.

If only keyword arguments are passed, what happens to the positional index?
Is it the empty tuple?

Currently a subscript with no index (`dict()[]`) is a syntax error; should it
continue to be?

> Also, I think there are user implementations that accept any iterable,
whether to treat it as a tuple or as an array-like, and a dict is an
iterable of its keys, so it might do the wrong thing rather than raising at
all.

I had not thought about this.  I have a lot of code in the form:
```
from collections.abc import Iterable

if not isinstance(key, Iterable):
    key = (key,)
# do stuff assuming key is an iterable
```
Which would have very strange behavior if a dict was passed.
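
For illustration (my example, not part of the original message), code of that
shape silently iterates a dict's keys instead of raising:
```
from collections.abc import Iterable

class Grid:
    def __getitem__(self, key):
        if not isinstance(key, Iterable):
            key = (key,)
        # do stuff assuming key is an iterable
        return list(key)

g = Grid()
print(g[1, 2])              # [1, 2], as intended
print(g[{'x': 1, 'y': 2}])  # ['x', 'y'] -- the values are silently dropped
```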


On Mon, Oct 7, 2019 at 2:44 PM Christopher Barker 
wrote:

>
> Are you aware of PEP 472 https://www.python.org/dev/peps/pep-0472 ?
>>
>
> That is indeed the same idea, though perhaps the details are a bit
> different.
>
> This example from the PEP:
>
> gridValues[x=3, y=5, z=8]
>
> Makes me wonder:
>
> Should that yield the same results as:
>
> gridValues[3,5,8]
>
> Much like positional and keyword arguments work on function calls?
>
> I suppose that would be up to the implementation, as __getitem__ doesn’t
> currently provide much help with parsing out what’s in there, other than
> making slice objects.
>
> But if something like this did go forward, it would be nice to provide
> utilities, maybe built in, that would parse and sort out the “arguments”,
> similar to function calls.
>
> -CHB
>
>
> Maybe you have something different in mind, but for me your idea looks
>> pretty the same. While the PEP 472 is in Rejected, Abandoned section, I
>> do not remember any serious criticism of this idea. It’s just that the
>> authors of that proposal lost interest and it did not receive further
>> progress. And in this regard, over time, it was abandoned.
>>
>> with kind regards,
>> -gdg
>>
>> пт, 4 окт. 2019 г. в 23:01, Caleb Donovick :
>>
>>> While there is no restriction on passing dicts to getitem.  Doing so
>>> tends to be a bit ugly.  I have two main use cases in mind for this syntax.
>>>
>>> The first  and perhaps the most obvious, is doing relational queries.
>>> ```
>>> where_x_1 = db[x=1]
>>> ```
>>> is more beautiful than
>>> ```
>>> where_x_1 = db[dict(x=1)]
>>> where_x_1 = db[{'x': 1}]
>>> # or by abusing slices
>>> where_x_1 = db['x':1]
>>> # or in the style of Pandas
>>> where_x_1 = db[db['x'] == 1]
>>> ```
>>>
>>> Beyond relational queries my own personal use case is a shorthand for
>>> dataclasses / protocols.
>>> ```
>>> foo: ProtoRecord[x=int, y=int] = DataRecord[x=int, y=int](0, 1)
>>> ```
>>> where `DataRecord[field0=T0, ..., fieldk=Tk]` generates
>>> ```
>>> @dataclass
>>> class Record:
>>>   field0: T0
>>>   ...
>>>   fieldk: Tk
>>> ```
>>> and `ProtoRecord[field0=T0, ..., fieldk=Tk]` generates a similar
>>> protocol.
>>>
>>> Allowing key value pairs in geitem need not change the interface of
>>> getitem.   All the key value pairs could be collected as a dict and passed
>>> to getitem as the index. Similar to how the all the positional arguments
>>> are gather into a single tuple.
>>> ```
>>> class Foo:
>>>   def __getitem__(self, idx):
>>>  print(idx)
>>>
>>> f = Foo()
>>> f[x=1, y=2] # {'x': 1, 'y': 2}
>>> ```
>>> This would make any legacy code using normal dicts as keys (I don't know
>>> how prevalent that is) automatically work with  the new syntax.
>>>
>>> There doesn't necessarily need to be support for mixing of tuple based
>>> indexing and keyword indexing. i.e.
>>> ```
>>> obj[0, x=1] # SyntaxError
>>> ```
>>>
>>> I don't really know anything about parsers but I think the grammar could
>>> be extended without issue with the following rule:
>>> ```
>>> subscriptlist: ... | kwargsubscript (','  kwargsubscript )* [',']
>>> kwargsubscript: NAME '=' test
>>> ```
>>> if `NAME '=' test` would result in ambiguity similar to argument it
>>> could be `test '=' test` with a block in ast.c
>>>
>>>
>>>-  Caleb Donovick

[Python-ideas] Re: Allow kwargs in __{get|set|del|}item__

2019-10-07 Thread Caleb Donovick
  >  Are you aware of PEP 472 https://www.python.org/dev/peps/pep-0472 ?
Maybe you have something different in mind, but for me your idea looks
pretty the same. While the PEP 472 is in Rejected, Abandoned section, I do
not remember any serious criticism of this idea. It’s just that the authors
of that proposal lost interest and it did not receive further progress. And
in this regard, over time, it was abandoned.

I was not aware of it.  I had exactly this in mind.  Specifically the
strict dictionary strategy
https://www.python.org/dev/peps/pep-0472/#strategy-strict-dictionary

On Mon, Oct 7, 2019 at 5:56 PM Caleb Donovick 
wrote:

> > I think it might be better if it actually passed them as keyword
> arguments.
>
> If only keyword arguments are passed what happens to the positional
> index?   Is it the empty tuple?
>
> Currently subscript with no index (`dict()[]`) is a syntax error should it
> continue to be?
>
> > Also, I think there are user implementations that accept any iterable,
> whether to treat it as a tuple or as an array-like, and a dict is an
> iterable of its keys, so it might do the wrong thing rather than raising at
> all.
>
> I had not thought about this.  I have a lot of code in the form:
> ```
> if not isinstance(key, iterable):
> key = key,
> # do stuff assuming key is an iterable
> ```
> Which would have very strange behavior if a dict was passed.
>
>
> On Mon, Oct 7, 2019 at 2:44 PM Christopher Barker 
> wrote:
>
>>
>> Are you aware of PEP 472 https://www.python.org/dev/peps/pep-0472 ?
>>>
>>
>> That is indeed the same idea, though perhaps the details are a bit
>> different.
>>
>> This example from the PEP:
>>
>> gridValues[x=3, y=5, z=8]
>>
>> Makes me wonder:
>>
>> Should that yield the same results as:
>>
>> gridValues[3,5,8]
>>
>> Much like positional and keyword arguments work on function calls?
>>
>> I suppose that would be up to the implementation, as __getitem__ doesn’t
>> currently provide much help with parsing out what’s in there, other than
>> making slice objects.
>>
>> But if something like this did go forward, it would be nice to provide
>> utilities, maybe built in, that would parse and sort out the “arguments”,
>> similar to function calls.
>>
>> -CHB
>>
>>
>> Maybe you have something different in mind, but for me your idea looks
>>> pretty the same. While the PEP 472 is in Rejected, Abandoned section, I
>>> do not remember any serious criticism of this idea. It’s just that the
>>> authors of that proposal lost interest and it did not receive further
>>> progress. And in this regard, over time, it was abandoned.
>>>
>>> with kind regards,
>>> -gdg
>>>
>>> пт, 4 окт. 2019 г. в 23:01, Caleb Donovick :
>>>
>>>> While there is no restriction on passing dicts to getitem.  Doing so
>>>> tends to be a bit ugly.  I have two main use cases in mind for this syntax.
>>>>
>>>> The first  and perhaps the most obvious, is doing relational queries.
>>>> ```
>>>> where_x_1 = db[x=1]
>>>> ```
>>>> is more beautiful than
>>>> ```
>>>> where_x_1 = db[dict(x=1)]
>>>> where_x_1 = db[{'x': 1}]
>>>> # or by abusing slices
>>>> where_x_1 = db['x':1]
>>>> # or in the style of Pandas
>>>> where_x_1 = db[db['x'] == 1]
>>>> ```
>>>>
>>>> Beyond relational queries my own personal use case is a shorthand for
>>>> dataclasses / protocols.
>>>> ```
>>>> foo: ProtoRecord[x=int, y=int] = DataRecord[x=int, y=int](0, 1)
>>>> ```
>>>> where `DataRecord[field0=T0, ..., fieldk=Tk]` generates
>>>> ```
>>>> @dataclass
>>>> class Record:
>>>>   field0: T0
>>>>   ...
>>>>   fieldk: Tk
>>>> ```
>>>> and `ProtoRecord[field0=T0, ..., fieldk=Tk]` generates a similar
>>>> protocol.
>>>>
>>>> Allowing key value pairs in geitem need not change the interface of
>>>> getitem.   All the key value pairs could be collected as a dict and passed
>>>> to getitem as the index. Similar to how the all the positional arguments
>>>> are gather into a single tuple.
>>>> ```
>>>> class Foo:
>>>>   def __getitem__(self, idx):
>>>>  print(idx)
>>>>
>>>> f = Foo()
>>>> f[x=1, y=2] # {'x': 1, 'y': 2}
>>>> ```
>>>&

[Python-ideas] Allow kwargs in __{get|set|del|}item__

2019-10-04 Thread Caleb Donovick
While there is no restriction on passing dicts to getitem, doing so tends
to be a bit ugly.  I have two main use cases in mind for this syntax.

The first, and perhaps the most obvious, is relational queries.
```
where_x_1 = db[x=1]
```
is more beautiful than
```
where_x_1 = db[dict(x=1)]
where_x_1 = db[{'x': 1}]
# or by abusing slices
where_x_1 = db['x':1]
# or in the style of Pandas
where_x_1 = db[db['x'] == 1]
```

Beyond relational queries my own personal use case is a shorthand for
dataclasses / protocols.
```
foo: ProtoRecord[x=int, y=int] = DataRecord[x=int, y=int](0, 1)
```
where `DataRecord[field0=T0, ..., fieldk=Tk]` generates
```
@dataclass
class Record:
  field0: T0
  ...
  fieldk: Tk
```
and `ProtoRecord[field0=T0, ..., fieldk=Tk]` generates a similar protocol.
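
A rough sketch of how the `DataRecord` half could be written today with the
dict spelling (`dataclasses.make_dataclass` does the heavy lifting; the
metaclass here is my own illustration, not an existing API):
```
from dataclasses import make_dataclass

class _DataRecordMeta(type):
    def __getitem__(cls, fields):
        # today: DataRecord[dict(x=int, y=int)]; proposed: DataRecord[x=int, y=int]
        if not isinstance(fields, dict):
            raise TypeError("expected a dict mapping field names to types")
        return make_dataclass('Record', fields.items())

class DataRecord(metaclass=_DataRecordMeta):
    pass

Point = DataRecord[dict(x=int, y=int)]
print(Point(0, 1))   # Record(x=0, y=1)
```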

Allowing key-value pairs in getitem need not change the interface of
getitem.  All the key-value pairs could be collected into a dict and passed
to getitem as the index, similar to how all the positional arguments are
gathered into a single tuple.
```
class Foo:
    def __getitem__(self, idx):
        print(idx)

f = Foo()
f[x=1, y=2] # {'x': 1, 'y': 2}
```
This would make any legacy code using normal dicts as keys (I don't know
how prevalent that is) automatically work with  the new syntax.

There doesn't necessarily need to be support for mixing of tuple based
indexing and keyword indexing. i.e.
```
obj[0, x=1] # SyntaxError
```

I don't really know anything about parsers but I think the grammar could be
extended without issue with the following rule:
```
subscriptlist: ... | kwargsubscript (','  kwargsubscript )* [',']
kwargsubscript: NAME '=' test
```
If `NAME '=' test` would result in ambiguity (similar to `argument` in the
call grammar), it could be `test '=' test` with a check in ast.c.


   -  Caleb Donovick
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/EUGDRTRFIY36K4RM3QRR52CKCI7MIR2M/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Extend ast with types for * instead of using raw lists

2019-08-16 Thread Caleb Donovick
> This part could be solved without making the lists AST nodes at all. Just
use new bare subclasses of list for each of the kinds of lists, so a field
in a node never has an empty list, it has an empty ExprList or StmtList or
whatever. If that’s sufficient for your needs, that seems pretty easy, and
I don’t think it would have any compatibility problems.
This would probably be sufficient. It would make it pretty easy to write
replacement versions of what I want from ast.py with the behavior I want.
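
A minimal sketch of that bare-subclass idea (names are mine, relying on the
thread's observation that only 'body', 'orelse', and 'finalbody' list fields
are stmt*):
```
import ast

class StmtList(list):
    """Marker type so an empty statement list is still identifiable."""

def tag_stmt_lists(tree):
    for node in ast.walk(tree):
        for field, value in ast.iter_fields(node):
            if field in ('body', 'orelse', 'finalbody') and isinstance(value, list):
                setattr(node, field, StmtList(value))
    return tree

tree = tag_stmt_lists(ast.parse("if x:\n    pass"))
print(type(tree.body[0].body).__name__)   # StmtList
```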

>  Also, while I can see a use for StmtList (because every StmtList is a
block or a module body, so they all have quite a bit in common), where
would you use ExprList?
You are right, I only really care about StmtList.  It just seemed like a
strange asymmetry to have StmtList but not ExprList, etc.

> But I don’t think there’s any actual behavior in AST that you need
(except maybe for looking like a structseq, with _fields, but you can fake
that with a plain old class a la namedtuple or dataclass, as long as
there’s no C code that relies on being able to iterate the structseq
fields).
I really don't care about the StmtList being a structseq, or at least I
don't think I do.  I am pretty unfamiliar with the C bits of CPython.

Both your suggestions seem completely sufficient to me.



> Just to make sure we're  on the same page: what are you using the ast
module for?
I am building DSLs and need to perform AST analysis / rewriting.  I
commonly perform block level analysis but it gets pretty verbose because of
all the different places stmt* can be.
I am unfamiliar with parso but it  looks like it has some nice convenience
functions. It probably won't be useful for me though because I need to be
able to exec the AST.  Further, it is highly desirable for me to be able to
turn the AST back into a string (as astor allows) so that I can generate
reasonable error messages and debug.
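
For what it's worth, the standard library can already do that round trip on
recent versions (ast.unparse exists only on Python 3.9+; before that, the
third-party astor package fills the same role):
```
import ast

tree = ast.parse("x = 1 + 2")
exec(compile(tree, '<dsl>', 'exec'), {})   # the AST stays exec-able
print(ast.unparse(tree))                   # "x = 1 + 2" on Python 3.9+
```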


On Fri, Aug 16, 2019 at 1:24 PM Anders Hovmöller 
wrote:

> Just to make sure we're  on the same page: what are you using the ast
> module for?
>
> Maybe moving to another lib like parso actually helps your real problem
> more...
>
> > On 15 Aug 2019, at 22:02, Caleb Donovick 
> wrote:
> >
> > When walking an ast it impossible to know the type of an empty list
> without writing down some giant lookup from node types and field names to
> field types.
> >
> > More concretely it would nice be to able to programatically visit all
> blocks (stmt*)  without having to something like:
> >
> > ```
> > class BlockVisitor(NodeVisitor):
> > def visit_If(self, node: If):
> > self.visit(node.test)
> > self.visit_block(node.body)
> > self.visit_block(node.orelse)
> >
> > def visit_FunctionDef(self, node: FunctionDef):
> > for field, value in iter_fields(node):
> > if field == 'body':
> > self.visit_block(value)
> > else:
> > # the implementation of generic_visit
> > ```
> > Now it turns out that all fields that are lists and are named "body",
> "orelse", or "finalbody" are stmt* and only such fields are stmt*.  A rule
> could also be synthesized to identify expr* and so forth but this seems
> incredibly hacky to me.
> >
> > It would be much cleaner if * were actual nodes in the ast. E.g.
> something like:
> > ```
> > class ast_list(AST, MutableSequence[T_co]): ...
> > class StmtList(ast_list[stmt]): ...
> > class ExprList(ast_list[expr]): ...
> > ...
> > class FunctionDef(stmt):
> > name: identifier
> > args: arguments
> > body: StmtList
> > decorator_list: ExprList
> > returns: Optional[expr]
> > ```
> > This would not change the behavior or structure in any way other than
> tagging * and allowing * to be visited.
> >
> > It would potentially break old code which relies on stuff like `if
> isinstance(node.field, list)` e.g. the implementation of generic_visit.
> >
> >
> > Caleb Donovick
> >
> > ___
> > Python-ideas mailing list -- python-ideas@python.org
> > To unsubscribe send an email to python-ideas-le...@python.org
> > https://mail.python.org/mailman3/lists/python-ideas.python.org/
> > Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/ZHOXQTDSHOERZSGUJXLNJCYLKKQJOYTA/
> > Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/3BX7ZI32YBRZ5JYMFJQLOF4W2GWRMOXB/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Extend ast with types for * instead of using raw lists

2019-08-16 Thread Caleb Donovick
>  Also I think this is often alleviated by using super().
super() doesn't help for my use case, or at least not in any way I can see.

> Maybe it is possible to preserve backwards compatibility by making
ast_list a subclass of list? Or is it not possible for some reason?
There is a layout conflict between AST and list.


Caleb Donovick


On Fri, Aug 16, 2019 at 5:54 AM Ivan Levkivskyi 
wrote:

> On one hand I can see how this may cause little inconvenience, but on
> other hand this would be a breaking change, so I don't think it is
> realistic.
> Also I think this is often alleviated by using super().
>
> Maybe it is possible to preserve backwards compatibility by making
> ast_list a subclass of list? Or is it not possible for some reason?
>
> --
> Ivan
>
>
>
> On Thu, 15 Aug 2019 at 22:32, Caleb Donovick 
> wrote:
>
>> When walking an ast it impossible to know the type of an empty list
>> without writing down some giant lookup from node types and field names to
>> field types.
>>
>> More concretely it would nice be to able to programatically visit all
>> blocks (stmt*)  without having to something like:
>>
>> ```
>> class BlockVisitor(NodeVisitor):
>> def visit_If(self, node: If):
>> self.visit(node.test)
>> self.visit_block(node.body)
>> self.visit_block(node.orelse)
>>
>> def visit_FunctionDef(self, node: FunctionDef):
>> for field, value in iter_fields(node):
>> if field == 'body':
>> self.visit_block(value)
>> else:
>> # the implementation of generic_visit
>> ```
>> Now it turns out that all fields that are lists and are named "body",
>> "orelse", or "finalbody" are stmt* and only such fields are stmt*.  A rule
>> could also be synthesized to identify expr* and so forth but this seems
>> incredibly hacky to me.
>>
>> It would be much cleaner if * were actual nodes in the ast. E.g.
>> something like:
>> ```
>> class ast_list(AST, MutableSequence[T_co]): ...
>> class StmtList(ast_list[stmt]): ...
>> class ExprList(ast_list[expr]): ...
>> ...
>> class FunctionDef(stmt):
>> name: identifier
>> args: arguments
>> body: StmtList
>> decorator_list: ExprList
>> returns: Optional[expr]
>> ```
>> This would not change the behavior or structure in any way other than
>> tagging * and allowing * to be visited.
>>
>> It would potentially break old code which relies on stuff like `if
>> isinstance(node.field, list)` e.g. the implementation of generic_visit.
>>
>>
>> Caleb Donovick
>>
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/LSPTPYDAEGTY2QFUBT6NPLWGPKSN4DQ3/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Extend ast with types for * instead of using raw lists

2019-08-15 Thread Caleb Donovick
When walking an AST it is impossible to know the type of an empty list
without writing down some giant lookup from node types and field names to
field types.

More concretely, it would be nice to be able to programmatically visit all
blocks (stmt*) without having to do something like:

```
from ast import NodeVisitor, If, FunctionDef, iter_fields

class BlockVisitor(NodeVisitor):
    def visit_If(self, node: If):
        self.visit(node.test)
        self.visit_block(node.body)
        self.visit_block(node.orelse)

    def visit_FunctionDef(self, node: FunctionDef):
        for field, value in iter_fields(node):
            if field == 'body':
                self.visit_block(value)
            else:
                ...  # the implementation of generic_visit
```
Now it turns out that all fields that are lists and are named "body",
"orelse", or "finalbody" are stmt* and only such fields are stmt*.  A rule
could also be synthesized to identify expr* and so forth but this seems
incredibly hacky to me.

It would be much cleaner if the * sequences were actual nodes in the AST,
e.g. something like:
```
class ast_list(AST, MutableSequence[T_co]): ...
class StmtList(ast_list[stmt]): ...
class ExprList(ast_list[expr]): ...
...
class FunctionDef(stmt):
    name: identifier
    args: arguments
    body: StmtList
    decorator_list: ExprList
    returns: Optional[expr]
```
This would not change the behavior or structure in any way other than
tagging * and allowing * to be visited.

It would potentially break old code which relies on stuff like `if
isinstance(node.field, list)` e.g. the implementation of generic_visit.


Caleb Donovick
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/ZHOXQTDSHOERZSGUJXLNJCYLKKQJOYTA/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] venv.EnvBuilder environmental variable hooks

2019-06-18 Thread Caleb Donovick
Currently EnvBuilder allows for a number of customizations of virtual
environments: changing which Python is used, automatically installing
libraries, etc.  However, it does not allow for any modification of the
activate script itself (unless one wants to rewrite it completely), even
though a fairly common requirement for Python packages is to set some
environmental variable, e.g. DJANGO_SETTINGS_MODULE=.

That leaves developers with two options: set the variables globally, or
manually edit the activate script.  Granted, the activate script has been
fairly static for the past few years, so a patch file can be applied after
editing it once, but that is not the most elegant solution.  Further,
there is no guarantee that the activate script will stay static going
forward.  In my mind there is a very simple solution: allow EnvBuilder to
extend the set of variables which are set and restored on activate /
deactivate (or, more generally, to have activate / deactivate hooks).

In the CPython implementation of venv there are a number of template
parameters in the skeleton activate scripts which are filled in by the
EnvBuilder (__VENV_DIR__, __VENV_NAME__, ...).  A simple solution would be
to extend these with __VENV_ACTIVATE_EXTRAS__ and
__VENV_DEACTIVATE_EXTRAS__, where by default:
```
# Let env_vars: Dict[str, str] be the custom environmental variables
__VENV_ACTIVATE_EXTRAS__ = ''.join(f'''
_OLD_VIRTUAL_{key}="${key}"
{key}="{value}"
export {key}
''' for key, value in env_vars.items())

__VENV_DEACTIVATE_EXTRAS__ = ''.join(f'''
if [ -n "${{_OLD_VIRTUAL_{key}:-}}" ] ; then
    {key}="${{_OLD_VIRTUAL_{key}:-}}"
    export {key}
    unset _OLD_VIRTUAL_{key}
fi
''' for key in env_vars)
```

With __VENV_ACTIVATE_EXTRAS__ at
https://github.com/python/cpython/blob/54cf2e0780ca137dd9abea5d3d974578ce0c18a9/Lib/venv/scripts/common/activate#L46
and __VENV_DEACTIVATE_EXTRAS__ at
https://github.com/python/cpython/blob/54cf2e0780ca137dd9abea5d3d974578ce0c18a9/Lib/venv/scripts/common/activate#L16

Full activate / deactivate hooks could be achieved by setting
__VENV_*_EXTRAS__ to be arbitrary shell commands.
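
Until something like that lands, here is a rough sketch of what is already
possible by subclassing EnvBuilder and appending to the generated activate
script in post_setup (class and variable names are illustrative, and this
only covers the activate side, not restoring values on deactivate):
```
import os
import venv

class EnvVarEnvBuilder(venv.EnvBuilder):
    def __init__(self, *args, env_vars=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.env_vars = env_vars or {}

    def post_setup(self, context):
        super().post_setup(context)
        # Append exports to the POSIX activate script after it is generated.
        activate = os.path.join(context.bin_path, 'activate')
        with open(activate, 'a') as f:
            for key, value in self.env_vars.items():
                f.write(f'\n_OLD_VIRTUAL_{key}="${{{key}:-}}"\n'
                        f'{key}="{value}"\nexport {key}\n')

builder = EnvVarEnvBuilder(with_pip=True,
                           env_vars={'DJANGO_SETTINGS_MODULE': 'myproj.settings'})
builder.create('/tmp/demo-venv')
```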

Caleb Donovick
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/MJNFEFT4GBVBEETJWZUQM5SS6C34PT3K/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Operator as first class citizens -- like in scala -- or yet another new operator?

2019-06-18 Thread Caleb Donovick
>  And what I'm asking for is a justification for that.  Python in
> general has done fine without it for almost 3 decades.  I believe you
> that you have so far not found a way to make a pretty DSL without it,
> and similarly for Yanghao's HDL.  But it's far from obvious that none
> exists that most Pythonistas would find satisfactory.

I have found a way to make a pretty DSL: as I stated earlier in the thread,
I rewrite the AST.  From a user standpoint the problem is mostly moot.
From a developer standpoint, rewriting the AST is an incredibly painful way
to operate.

> So far the request for an in-place update operator seems to fail on
> both counts.  "Need" fails for lack of examples.  "Broad benefit"
> could be implied by "need" and a bit of imagination applied to
> concrete examples, but on the face of it seems unlikely because of the
> lack of persistent voices to date, and "need" itself hasn't been
> demonstrated.

Both Yanghao and I have provided examples; what precisely do you want in an
example? Do you want my DSL code? Do you want the implementation of the AST
rewriter?

As for broader impact, a whole range of common operations could be unified
by an assign-in-place operator (stealing some from that thread):
```
context_var.set(val)    # possibly the most glaring place in the standard
                        # library where an assign operator would be beautiful
lst[:] = new_list       # while a common Python idiom, this certainly isn't
                        # the most obvious syntax, and it only works on lists
dct.clear(); dct.update(new_dict)   # to achieve the same thing as above with
                                    # a dict or set
numpy.copyto(array, new_array)      # to achieve the same as above; note that
                                    # array[:] = new_array is an error
```

If we want to extend the discussion beyond assign-in-place to a general
write operator, we can add to the list:
```
coroutine.send(args)
process.communicate(args)
file.write(arg)
```
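
For comparison, one workaround available today (a sketch, not any particular
library's API) is to repurpose an augmented-assignment hook such as
__ilshift__ as the write operator; returning self keeps the rebinding
invisible to other references:
```
class Signal:
    def __init__(self, value=0):
        self.value = value

    def __ilshift__(self, other):
        # `sig <<= value` writes in place; returning self rebinds the name
        # to the same object, so every other reference sees the update.
        self.value = other
        return self

x = Signal()
observers = [x]
x <<= 42
print(x is observers[0], x.value)   # True 42
```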

Caleb Donovick

On Tue, Jun 18, 2019 at 3:43 PM nate lust  wrote:

> I have been following this discussion for a long time, and coincidentally
> I recently started working on a project that could make use of assignment
> overloading. (As an aside it is a configuration system for a astronomical
> data analysis pipeline that makes heavy use of descriptors to work around
> historical decisions and backward compatibility). Our system makes use of
> nested chains of objects and descriptors and proxy object to manage where
> state is actually stored. The whole system could collapse down nicely if
> there were assignment overloading. However, this works OK most of the time,
> but sometimes at the end of the chain things can become quite complicated.
> I was new to this code base and tasked with making some additions to it,
> and wished for an assignment operator, but knew the data binding model of
> python was incompatible from p.
>
> This got me thinking. I didnt actually need to overload assignment
> per-say, data binding could stay just how it was, but if there was a magic
> method that worked similar to how __get__ works for descriptors but would
> be called on any variable lookup (if the method was defined) it would allow
> for something akin to assignment. For example:
>
> class Foo:
> def __init__(self):
> self.value = 6
> self.myself = weakref.ref(self)
> def important_work(self):
> print(self.value)
> def __get_self__(self):
> return self.myself
> def __setattr__(self, name, value):
> self.value = value
>
> foo = Foo() # Create an instance
> foo # The interpreter would return foo.myself
> foo.value # The interpreter would return foo.myself.value
> foo = 19 # The interpreter would run foo.myself = 6 which would invoke
> foo.__setattr__('myself', 19)
>
> I am being naive in some way I am sure, possibly as to how the interpreter
> could be made to do this chaining, but I figured I would weigh in in case
> this message could spark some thought.
>
> On Tue, Jun 18, 2019 at 5:41 AM Yanghao Hua  wrote:
>
>> On Tue, Jun 18, 2019 at 10:57 AM Stephen J. Turnbull
>>  wrote:
>> > Maybe you'll persuade enough committers without examples.  Maybe the
>> > problem will be solved en passant if the "issubclass needs an
>> > operator" thread succeeds (I've already suggested to Yanghao offlist
>> > that Guido's suggested spelling of "<:" seems usable for "update",
>> > even though in that thread it's a comparison operator).  But both
>> > would require a lot of luck IMO.
>>
>> I must have overlooked it ... <: seems good to me. I do agree with you
>> this needs more materialized evidence, I am working on it, in a few
>> areas more than just DSL/HDL.
>>
>> For now I have abandoned my local change to cpython and settled w

[Python-ideas] Re: Operator as first class citizens -- like in scala -- or yet another new operator?

2019-06-17 Thread Caleb Donovick
> It's because Python doesn't actually have assignment to variables, it
> has binding to names.  So there's no "there" there to provide a
> definition of assignment.  In a class definition, the "local
> variables" are actually attributes of the class object.  That class
> object provides the "there", which in turn allows redefinition via a
> metaclass.

I understand this, and while I would love to have metamodules and
metafunctions to provide me a 'there', that is for another thread.

I don't really want to change the semantics of =.  What Yanghao and I are
asking for is an in-place update/assign operator which isn't burdened with
numeric meaning.

Caleb Donovick

On Thu, Jun 13, 2019 at 1:42 PM Stephen J. Turnbull <
turnbull.stephen...@u.tsukuba.ac.jp> wrote:

> Caleb Donovick writes:
>
>  > In class bodies it is easy to redefine what assignment means, in
>  > every other context its very annoying, I don't see why that must be
>  > the case.
>
> It's because Python doesn't actually have assignment to variables, it
> has binding to names.  So there's no "there" there to provide a
> definition of assignment.  In a class definition, the "local
> variables" are actually attributes of the class object.  That class
> object provides the "there", which in turn allows redefinition via a
> metaclass.
>
> Of course this doesn't *have* to be the case.  But in Python it is.
> AFAICS making assignment user-definable would require the compiler to
> be able to determine the type of the LHS in every assignment statement
> in order to determine whether name binding is meant or the name refers
> to an object which knows how to assign to itself.  I don't see how to
> do that without giving up a lot of the properties that make Python
> Python, such as duck-typing.
>
> Steve
>
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/GCICN3H22DB4JNK23ZX4TMGYESNBTVS4/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Operator as first class citizens -- like in scala -- or yet another new operator?

2019-06-12 Thread Caleb Donovick
Barry, the reason I use Python rather than parsing syntax directly is that I
want Python as a metaprogramming environment for my DSLs.  I can mostly work
within the Python syntax (with some pretty heavy metaclasses); I rarely have
to touch the AST.  There are only two places where I ever have to touch the
AST: assignment statements and control flow.  There's no easy way around
needing to rewrite control flow, but an assignment operator would
drastically decrease the amount of AST manipulation I do.  In class bodies
it is easy to redefine what assignment means; in every other context it is
very annoying, and I don't see why that must be the case.
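
For contrast, this is roughly what "redefining assignment in a class body"
looks like today, via a metaclass __prepare__ hook (a bare-bones sketch,
names mine):
```
class LoggingNamespace(dict):
    def __setitem__(self, name, value):
        print(f"class-body assignment: {name} = {value!r}")  # type checks could go here
        super().__setitem__(name, value)

class Meta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwargs):
        return LoggingNamespace()

class Example(metaclass=Meta):
    x = 1        # intercepted
    y = 'hello'  # intercepted
```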

Caleb

On Wed, Jun 12, 2019 at 2:28 PM Chris Angelico  wrote:

> On Thu, Jun 13, 2019 at 6:51 AM Yanghao Hua  wrote:
> >
> > On Wed, Jun 12, 2019 at 9:39 AM Chris Angelico  wrote:
> > > If Python is really THAT close, then devise two syntaxes: an abstract
> > > syntax for your actual source code, and then a concrete syntax that
> > > can be executed. It's okay for things to be a little bit ugly (like
> > > "signal[:] = 42") in the concrete form, because you won't actually be
> > > editing that. Then your program just has to transform one into the
> > > other, and then run the program.
> >
> > Thought about that too  but as you can imagine, you can write:
> >
> > x <== 3 # or
> > x \
> > <== 3 # or
> > x \
> > \
> > ...
> > \ <== 3 # This is crazy but valid python syntax!
> > # more crazy ones are skipped ...
> >
> > so this is not a simple text replacement problem, eventually you end
> > up writing a python parser? Or a HDL parser.
>
> Yes, you would need some sort of syntactic parser. There are a couple
> of ways to go about it. One is to make use of Python's own tools, like
> the ast module; the other is to mandate that your specific syntax be
> "tidier" than the rest of the Python code, which would permit you to
> use a more naive and simplistic parser (even a regex).
>
> ChrisA
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/MGOCHF4YT37637BBS7U6PSCWKFHXP4IN/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Assign-in-place operator

2019-06-11 Thread Caleb Donovick
The problem as I see it with slice assignment is that we want the operator
to mean type-defined assignment, not necessarily in-place assignment.  It
creates confusion for types which have __setitem__.

Caleb Donovick




On Thu, Jun 6, 2019 at 4:59 PM Greg Ewing 
wrote:

> Stephen J. Turnbull wrote:
> > L[m:m+k] specifies that a list operation will take
> > place on the k elements starting with m.  As a value, it makes a new
> > list of references to those elements.
>
> Even that is specific to lists. There's no requirement that a
> RHS slice has to create new references to elements. A type can
> define it so that it returns a mutable view of part of the
> original object. This is how numpy arrays behave, for example.
>
> As syntax, slice notation simply denotes a range of elements,
> and it does that the same way whether it's on the LHS or RHS.
>
> --
> Greg
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/JQI2O6Y7KQM46AXNHZAC7UFAL737T2FZ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Operator as first class citizens -- like in scala -- or yet another new operator?

2019-06-10 Thread Caleb Donovick
First off, I have admittedly not read all of this thread.  However, as a
designer of DSLs in Python, I wanted to jump in on a couple of things I
have seen suggested.  Sorry if I am repeating comments already made.  Over
the last few years I have thought about every suggestion I have seen in
this thread, and they all either don't work or are undesirable for one
reason or another.

Regarding what code would become simpler if an assignment operator were
available: I currently walk the AST to rewrite assignments into the form I
want.  This code is really hard to read if you are not familiar with the
Python AST, but the task it is performing is not hard to understand at all
(replace assignment nodes with calls to
dsl_assign(target_names, value, globals(), locals())).  The dsl_assign
function basically performs some type checking before doing the
assignment.  Once again the code is much harder to understand than it
should be, as it operates on the names of variables and the globals / locals
dictionaries instead of on the variables themselves.  Also, to get the hook
into the AST I have to use an importer, which further obscures my code and
makes use kind of annoying, as one has to do the following:
```
# main.py
import dsl       # sets up the importer to rewrite the AST
import dsl_code  # the code which should morally be the main module,
                 # but which must be imported after dsl
```
Granted anything in a function or a class can be rewritten with a decorator
but module level code must be rewritten by an importer.
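
A condensed sketch of that rewrite, using the dsl_assign helper described
above (the node-building details are my own and handle only simple Name
targets):
```
import ast

class AssignRewriter(ast.NodeTransformer):
    def visit_Assign(self, node):
        # x = expr  ->  dsl_assign(['x'], expr, globals(), locals())
        names = ast.List(
            elts=[ast.Constant(t.id) for t in node.targets if isinstance(t, ast.Name)],
            ctx=ast.Load(),
        )
        call = ast.Call(
            func=ast.Name('dsl_assign', ast.Load()),
            args=[names, node.value,
                  ast.Call(ast.Name('globals', ast.Load()), [], []),
                  ast.Call(ast.Name('locals', ast.Load()), [], [])],
            keywords=[],
        )
        return ast.copy_location(ast.Expr(call), node)

tree = ast.fix_missing_locations(AssignRewriter().visit(ast.parse("x = 1")))
# exec(compile(tree, '<dsl>', 'exec'), {'dsl_assign': ...})  # supply the real helper
```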

The problem with overloading obj@=value:
As Yanghao has pointed out, @ comes with expectations of behavior.  Granted,
I would gamble most developers are unaware that @ is a Python operator, but
it still has meaning, and as such I don't like abusing it.

The problem with using obj[:]=value:
Similar to @, getitem and slices have meaning which I don't necessarily want
to override.  Granted, this is the least objectionable solution I have seen,
although it creates weird requirements for types that have __getitem__, just
so it can be used by inheriting `TypedAssignment` or something similar.

The problem with descriptors:
They are hard to pass to functions; for example, consider trying to fold
assignment:
```
signals = [Signal('x'), Signal('y'), Signal('z')]
out = functools.reduce(operator.iassign, signals)
```
vs
```
signals = SignalNamespace()
signal_names = ['x', 'y', 'z']
out = functools.reduce(lambda v, name: setattr(signals, name, v), signal_names)
```
In general one has to pass the name of the signal and the namespace to a
function instead of the signal itself, which is problematic.

The problem with exec:
First off, it's totally unpythonic, but even if I hide the exec with importer
magic it still doesn't give me the behavior I want.
Consider the following:
```
from collections.abc import MutableMapping

_MISSING = object()  # sentinel for "no value bound yet"

class TypeCheckDict(dict, MutableMapping):  # dict needed so it can be used as globals
  """
  Dictionary which binds keys to a type on first assignment, then type
  checks on future assignments.  Will infer the type if not already bound.
  """
  __slots__ = '_d'

  def __init__(self, d=_MISSING):
    if d is _MISSING:
      d = {}
    self._d = d

  def __getitem__(self, name):
    v = self._d[name][1]
    if v is _MISSING:
      raise ValueError()
    else:
      return v

  def __setitem__(self, name, value):
    if name not in self._d:
      if isinstance(value, type):
        self._d[name] = [value, _MISSING]     # declaration, e.g. x = int
      else:
        self._d[name] = [type(value), value]  # infer the type from the first value
    elif isinstance(value, self._d[name][0]):
      self._d[name][1] = value
    else:
      raise TypeError(f'{value} is not a {self._d[name][0]}')

  # __len__ __iter__ __delitem__ just dispatch to self._d

S = '''
x = int
x = 1
x = 'a'
'''
exec(S, TypeCheckDict(), TypeCheckDict())  # raises TypeError: a is not a <class 'int'>

S = '''
def foo(): # type of foo inferred
  x = int
  x = 'a'
foo()
'''
exec(S, TypeCheckDict(), TypeCheckDict())  # doesn't raise an error, as a
                                           # normal dict is used inside foo
```

Caleb Donovick

On Thu, Jun 6, 2019 at 12:54 AM Yanghao Hua  wrote:

> On Wed, Jun 5, 2019 at 11:31 AM Chris Angelico  wrote:
> > Part of the reason you can't just treat the + operator as a method
> > call is that there are reflected methods. Consider:
> >
> > class Int(int):
> > def __radd__(self, other):
> > print("You're adding %s to me!" % other)
> > return 1234
> >
> > x = Int(7)
> > print(x + 1)
> > print(1 + x)
> >
> > If these were implemented as x.__add__(1) and (1).__add__(x), the
> > second one would use the default implementation of addition. The left
> > operand would be the only one able to decide how something should be
> > implemented.
>
> Yep, just did an experiment in Scala, where you can do x + 1, but not
> 1 + x. So it loses some flexibility in terms of how you write your
> expression, but still, it looks OK to only write x + 1 and when you
> write 1