[Python-ideas] Re: Allow syntax "func(arg=x if condition)"

2022-01-19 Thread Peter O'Connor
On Sun, Nov 14, 2021 at 2:50 PM Chris Angelico  wrote:

>
> spam(*(1,) * use_eggs)
> spam(**{"eggs": 1} if use_eggs else {})
>
> Still clunky, but legal, and guaranteed to work in all Python
> versions. It's not something I've needed often enough to want
> dedicated syntax for, though.
>
> ChrisA
>

The thing is that I find myself dealing with duplicated defaults *all the
time* - and I don't know a good way to solve the problem.  The
'**{"eggs": 1} if use_eggs else {}' trick is obviously problematic because:
* It is invisible to type inspection, pylint, mypy, IDE-assisted refactoring,
etc.
* If you are trying to *pass down* the argument, it actually looks like
spam(**{"eggs": kwargs["eggs"]} if "eggs" in kwargs else {}), which is even
messier.

A lot of the time, my code looks like this:

    def main_demo_func_with_primitive_args(path: str, model_name: str,
                                           param1: float = 3.5, n_iter: int = 7):
        obj = BuildMyObject(
            model_name=model_name,
            sub_object=MySubObject(param1=param1, n_iter=n_iter)
        )
        for frame in iter_frames(path):
            result = obj.do_something(frame)
            print(result)

i.e. I have a main function with a list of arguments that are distributed to
build objects and call functions.

The problem is I am always dealing with duplicated defaults (between this
main function and the default values on the objects).  You define a
default, duplicate it somewhere else, change the original, forget to change
the duplicate, etc.

* It makes sense to define the default values in one place.
* It makes sense for this place to be on the objects which use them (i.e. at
the lowest level).
* It makes sense to be able to modify default values from the top-level
function.
But the above 3 things are not compatible in current Python (at least not
in a clean, Pythonic way!)

The only ways I know of to avoid duplication are:
* Define the defaults as GLOBALS in the module of the called function/class
and reference them from both places (not always possible, as you don't
necessarily control the called code).  Also not very nice because you have
to define a new global for each parameter of each low-level object (a
different sort of duplication).
* Messy dict-manipulation with kwargs (see above, and the sketch below)
* Messy and fragile default inspection using the inspect module
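
To make the dict-manipulation option concrete, here is roughly what it looks
like applied to the example above (BuildMyObject, MySubObject and iter_frames
are the hypothetical names from that example; the Optional/None-sentinel
signature is the price you pay):

    from typing import Optional

    def main_demo_func_with_optional_args(path: str, model_name: str,
                                          param1: Optional[float] = None,
                                          n_iter: Optional[int] = None):
        # Forward only the arguments that were explicitly given; anything left
        # as None falls back to MySubObject's own defaults.
        sub_kwargs = {k: v for k, v in dict(param1=param1, n_iter=n_iter).items()
                      if v is not None}
        obj = BuildMyObject(
            model_name=model_name,
            sub_object=MySubObject(**sub_kwargs)
        )
        for frame in iter_frames(path):
            print(obj.do_something(frame))

It works, but the signature no longer advertises the real defaults, which is
exactly the complaint.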

The only decent ways I can think of to avoid duplicated defaults are not
currently supported in Python:

1) Conditional arg passing (this proposal):
    def main_func(..., param1: Optional[float] = None, n_iter: Optional[int] = None):
        sub_object = MySubObject(param1=param1 if param1 is not None,
                                 n_iter=n_iter if n_iter is not None)

2) Ability to cleanly reference defaults of a lower-level object:
    def main_func(..., param1: float = MySubObject.defaults.param1,
                  n_iter: int = MySubObject.defaults.n_iter):
        sub_object = MySubObject(param1=param1, n_iter=n_iter)

3) "Deferred defaults"... which seem to be a bit of a Pandora's box.

(1) seems less controversial than (2).
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/JJSXU5E2KKRVSQJH3TOZTYYMCNCEUCEA/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Allow syntax "func(arg=x if condition)"

2021-11-14 Thread Peter O'Connor
On Mon, Mar 22, 2021 at 1:28 PM Caleb Donovick 
wrote:

> ... One could do something like:
> ```
> def fun(a, b=0): ...
> def wraps_fun(args, b=inspect.signature(fun).parameters['b'].default): ...
> ```
> But I would hardly call that clear.
>
> Caleb
>


I like this approach too - it just needs a cleaner syntax.  Python could
make functions more "object like" by having fields for args (though I'm
sure that would inspire some controversy):

def fun(a, b=0): ...
def wraps_fun(args, b=fun.args.b.default): ...
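
In the meantime, a small helper makes today's inspect-based spelling a bit
more readable (just a sketch; default_of is a hypothetical helper, not the
proposed fun.args attribute):

    import inspect

    def default_of(func, name):
        """Return the default value of parameter `name` of `func`."""
        return inspect.signature(func).parameters[name].default

    def fun(a, b=0): ...
    def wraps_fun(args, b=default_of(fun, "b")): ...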
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/JEJVSDPJ65TBRDDMTRKP7ZIFFZ7GU5TT/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Allow syntax "func(arg=x if condition)"

2021-03-20 Thread Peter O'Connor
bump!

On Wed, Jan 13, 2021 at 9:32 AM Peter O'Connor 
wrote:

> I often find that python lacks a nice way to say "only pass an argument
> under this condition".  (See previous python-list email in "Idea: Deferred
> Default Arguments?")
>
> Example 1: Defining a list with conditional elements
>     include_bd = True
>     current_way = ['a'] + (['b'] if include_bd else []) + ['c'] + (['d'] if include_bd else [])
>     new_way = ['a', 'b' if include_bd, 'c', 'd' if include_bd]
>     also_new_way = list('a', 'b' if include_bd, 'c', 'd' if include_bd)
>
> Example 2: Deferring to defaults of called functions
>     def is_close(a, b, precision=1e-9):
>         return abs(a-b) < precision
>
>     def approach(pose, target, step=0.1, precision=None):
>         # Defers to the default precision if not otherwise specified:
>         velocity = step*(target-pose) \
>             if not is_close(pose, target, precision if precision is not None) \
>             else 0
>         return velocity
>
> Not sure if this has been discussed, but I cannot see any clear downside
> to adding this, and it has some clear benefits (duplicated default
> arguments and **kwargs are the scourge of many real world code-bases)
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/KM2Y3BFF2GDVPS56Z7TX2VMZXJEKKRHZ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Allow syntax "func(arg=x if condition)"

2021-01-13 Thread Peter O'Connor
I often find that python lacks a nice way to say "only pass an argument
under this condition".  (See previous python-list email in "Idea: Deferred
Default Arguments?")

Example 1: Defining a list with conditional elements
    include_bd = True
    current_way = ['a'] + (['b'] if include_bd else []) + ['c'] + (['d'] if include_bd else [])
    new_way = ['a', 'b' if include_bd, 'c', 'd' if include_bd]
    also_new_way = list('a', 'b' if include_bd, 'c', 'd' if include_bd)

Example 2: Deferring to defaults of called functions
    def is_close(a, b, precision=1e-9):
        return abs(a-b) < precision

    def approach(pose, target, step=0.1, precision=None):
        # Defers to the default precision if not otherwise specified:
        velocity = step*(target-pose) \
            if not is_close(pose, target, precision if precision is not None) \
            else 0
        return velocity

Not sure if this has been discussed, but I cannot see any clear downside to
adding this, and it has some clear benefits (duplicated default arguments
and **kwargs are the scourge of many real world code-bases)
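
For comparison, the closest spelling I know of in today's Python branches on
a None sentinel explicitly (a sketch, reusing is_close as defined above):

    def approach(pose, target, step=0.1, precision=None):
        # Only pass precision when it was actually given, so is_close's own
        # default applies otherwise:
        close = (is_close(pose, target) if precision is None
                 else is_close(pose, target, precision))
        return 0 if close else step*(target-pose)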
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/XQS6RN6MAS5VX66RWBIEULUJWDEFWAEV/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Optional keyword arguments

2020-05-20 Thread Peter O'Connor
On Wed, May 20, 2020 at 5:35 AM Eric V. Smith  wrote:

> On 5/20/2020 8:23 AM, James Lu wrote:
> > What's wrong with my := proposal?
> Confusion with unrelated uses of the walrus operator.
>

What's wrong with confusion with the walrus operator?
- If you are confused and don't know what the walrus operator is, you google it
and find that in a function header it means late assignment.
- If you are confused and think it's referring to the walrus operator's use
for inline variable assignment - well, so what - it is being used as
assignment here!
- If you are confused and think := is binding at import-time rather than
call-time - well, you are already a somewhat advanced Python user to be
thinking about that, and the use of a different syntax to bind the argument
should prompt you to google it.
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/BTEKWU7LXWL53AEH2MAYFWFDQJPSYADO/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Proposal: Use "for x = value" to assign in scope

2019-08-09 Thread Peter O'Connor
Alright hear me out here:

I've often found that it would be useful for the following type of
expression to be condensed to a one-liner:

    def running_average(x_seq):
        averages = []
        avg = 0
        for t, x in enumerate(x_seq):
            avg = avg*t/(t+1) + x/(t+1)
            averages.append(avg)
        return averages

Because really, there's only one line doing the heavy lifting here, the
rest is kind of boilerplate.

Then I learned about the beautiful and terrible "for x in [value]":

    def running_average(x_seq):
        return [avg for avg in [0]
                for t, x in enumerate(x_seq)
                for avg in [avg*t/(t+1) + x/(t+1)]]

Many people find this objectionable because it looks like there are 3 for
loops, but really there's only one: loops 0 and 2 are actually assignments.

**My Proposal**

What if we just officially bless this "using for as a temporary assignment"
arrangement, and allow "for x=value" to mean "assign within the scope of
this for".  It would be identical to "for x in [value]", just more
readable.  The running average function would then be:

    def running_average(x_seq):
        return [avg for avg=0
                for t, x in enumerate(x_seq)
                for avg = avg*t/(t+1) + x/(t+1)]

-- P.S. 1
I am aware of Python 3.8's new "walrus" operator, which would make it:

    def running_average(x_seq):
        avg = 0
        return [avg := avg*t/(t+1) + x/(t+1) for t, x in enumerate(x_seq)]

But it seems ugly and bug-prone to be initializing an in-comprehension
variable OUTSIDE the comprehension.

-- P.S. 2
The "for x = value" syntax can achieve things that are not nicely
achievable using the := walrus.  Consider the following example (wherein we
carry forward a "hidden" variable h but do not return it):

y_seq = [y for h=0 for x in x_seq for y, h = update(x, h)]

There's not really a nice way to do this with the walrus because you can't
(as far as I understand) combine it with tuple-unpacking.  You'd have to do
something awkward like:

yh = None, 0
y_seq, _ = zip(*(yh := update(x, yh[1]) for x in x_seq))
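
For completeness, the existing "for name in [value]" trick does handle this
case (assuming update and x_seq as above) - it is just as cryptic as ever:

    y_seq = [y for h in [0] for x in x_seq for y, h in [update(x, h)]]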
--
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/RHW5AUV3C57YOF3REB2HEMYLWLLXSNQT/
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: "?" Documentation Operator and easy reference to argument types/defaults/docstrings

2019-04-28 Thread Peter O'Connor
Thanks all for the responses. I read through them carefully and address
each below.

I don't think any fully address the core problem - The "Argument" - the
tuple of (type, default, documentation) - is currently not a first-class
entity.  Because there is no way to reference an Argument, there is much
copypasta and dangerous-default-duplication.  The idea of this proposal is
that it should be possible to define an argument in one place, and simply
bind a different name to it in each function signature.

To recap - the points to the proposal are:
- Allow documentation to be bound to an argument: "func(arg_a: int = 3 ?
'doc about arg', ...)" or "func(arg_a: int = 3 # 'doc about arg', ...)"
- Allow reference to argument: "outer_func(new_arg_a_name:
func.args.arg_a.type = func.args.arg_a.default ? 'new doc for arg a', ...)"
- (optionally) a shorthand syntax for reusing type/doc/default of argument:
"def outer_func(new_arg_a_name :=? func.args.arg_a, ...):"

Below I have responded to each comment - please let me know if I missed
something:

--

On Thu, Apr 25, 2019 at 3:59 PM Robert Vanden Eynde 
wrote:

> Looks like a more complicated way to say :
> def f(x:'int : which does stuff' = 5, y:'int : which does more stuffs')
>

I hadn't thought of incorporating documentation into the type, that's a nice
idea.  I think it's an OK "for now" solution, but:
- doing it this way loses the benefits of type inspection (built in to most
IDEs now),
- it does not allow you to, for instance, keep the type definition in the
wrapper while changing the documentation,
- it provides no easy way to reference the documentation
(f.args.x.documentation), which is a main point of the proposal.

--

On Thu, Apr 25, 2019 at 3:58 PM Chris Angelico  wrote:

> @functools.passes_args(f)
> def wrapper(spam, ham, *a, **kw):
> f(*a, **kw)
> 

If that were implemented, would it remove the need for this new syntax
> you propose?
>

This does indeed allow defaults and types to be passed on, but I find
this approach still has the basic flaw of using **kwargs:
- It only really applies to "wrappers" - functions that wrap another
function.  The goal here is to address the common case of a function
passing args to a number of functions within it.
- It is assumed that the wrapper should use the same argument names as the
wrapped function.  A name should bind a function to an argument - but here
the name is packaged with the argument.
- It remains difficult to determine the arguments of "wrapper" by simple
inspection - added syntax for removal of certain arguments only complicates
the task and seems fragile (lest the wrapped function's argument names
change).
- Renaming an argument of "f" will change the arguments of
wrapper - but in a way that's not easy to inspect (so for instance if you
have a call "y = wrapper(arg1=4)", and you change "f(arg1=...)" to
"f(arg_one=...)", no IDE will catch that and make the appropriate change to
"y = wrapper(arg_one=4)").
- What happens if you're not simply wrapping one sub-function but calling
several?  What about when those subfunctions have arguments with the same
name?

--

On Thu, Apr 25, 2019 at 5:50 PM David Mertz  wrote:

> Why not just this in existing Python:
> def func_1(
> a: int = 1 # 'Something about param a',
> b: float = 2.5 # 'Something else about param b',
> ) -> float:
> """Something about func_1
>
> a and b interact in this interesting way.
> a should be in range 0 < a < 125
> floor(b) should be a prime number
>
> Something about return value of func_1
> returns a multiplication
> """
> return a*b
>

- This would currently be a syntax error (because "#" is before the comma),
but sure we could put it after the comma.
- It does not address the fact that we cannot reference "func_1.a.default"
- which is one of the main points of this proposal.
- I'm fine with "#" being the documentation operator instead of "?", but
figured people would not like it because it breaks the precedent of
anything after "#" being ignored by the compiler

---

On Thu, Apr 25, 2019 at 9:04 PM Anders Hovmöller 
wrote:

> Maybe something like...
> def foo(**kwargs):
> “””
> @signature_by:
> full.module.path.to.a.signature_function(pass_kwargs_to=bar,
> hardcoded=[‘quux’])
> “””
> return bar(**kwargs, quux=3)
>

This makes it difficult to see what the names of arguments to "foo" are, at
a glance.  And what happens if (as in the example) "foo" does not simply
wrap a function, but distributes arguments to multiple subfunctions? (this
is a common case)



On Fri, Apr 26, 2019 at 2:18 AM Stephen J. Turnbull <
turnbull.stephen...@u.tsukuba.ac.jp> wrote:

> What I would rather see is
>
> (1) Comment syntax "inside" (fvo "inside" including any comment after
> the colon but before docstring or other code) .
>
> (2) asserts involving parameters lexically are avail

[Python-ideas] Proposal: "?" Documentation Operator and easy reference to argument types/defaults/docstrings

2019-04-25 Thread Peter O'Connor
Dear all,

Despite the general beauty of Python, I find myself constantly violating
the "don't repeat yourself" maxim when trying to write clear, fully
documented code.  Take the following example:

def func_1(a: int = 1, b: float = 2.5) -> float:
    """
    Something about func_1
    :param a: Something about param a
    :param b: Something else about param b
    :return: Something about return value of func_1
    """
    return a*b

def func_2(c: float = 3.4, d: bool = True) -> float:
    """
    Something about func_2
    :param c: Something about param c
    :param d: Something else about param d
    :return: Something about return value
    """
    return c if d else -c

def main_function(a: int = 1, b: float = 2.5, d: bool = True) -> float:
    """
    Something about main_function
    :param a: Something about param a
    :param b: Something else about param b
    :param d: Something else about param d
    :return: Something about return value
    """
    return func_2(func_1(a=a, b=b), d=d)

Which has the following problems:
- Defaults are defined in multiple places, which very easily leads to bugs
(I'm aware of **kwargs but it obfuscates function interfaces and usually
does more harm than good)
- Types are defined in multiple places
- Documentation is copy-pasted when referencing a single thing from
different places.  (I can't count the number of times I've written ":param
img: A (size_y, size_x, 3) RGB image" - I could now just reference a single
RGB_IMAGE_DOC variable.)
- Argument names need to be written twice - in the header and documentation
- and it's up to the user / IDE to make sure they stay in sync.

I propose to resolve this with the following changes:
- Argument/return documentation can be made inline with a new "?"
operator.  Documentation becomes a first-class citizen.
- Argument type/default/doc can be referenced by "func.args.<name>.type" /
"func.args.<name>.default" / "func.args.<name>.doc".  Positional reference,
e.g. "func.args[1].default", is also allowed.  If not specified, they take a
special, built-in "Undefined" value (because None may have another meaning
for defaults).  Return type/doc can be referenced with "func.return.type" /
"func.return.doc".

This would result in the following syntax:

def func_1(
        a: int = 1 ? 'Something about param a',
        b: float = 2.5 ? 'Something else about param b',
        ) -> float ? 'Something about return value of func_1':
    """ Something about func_1 """
    return a*b

def func_2(
        c: float = 3.4 ? 'Something about param c',
        d: bool = True ? 'Something else about param d',
        ) -> float ? 'Something about return value':
    """ Something about func_2 """
    return c if d else -c

def main_function(
        a: func_1.args.a.type = func_1.args.a.default ? func_1.args.a.doc,
        b: func_1.args.b.type = func_1.args.b.default ? func_1.args.b.doc,
        d: func_2.args.d.type = func_2.args.d.default ? func_2.args.d.doc,
        ) -> func_2.return.type ? func_2.return.doc:
    """ Something about main_function """
    return func_2(func_1(a=a, b=b), d=d)

If the main_function header seems repetitious (it does) we could allow for
an optional shorthand notation like:

def main_function(
        a :=? func_1.args.a,
        b :=? func_1.args.b,
        d :=? func_2.args.d,
        ) ->? func_2.return:
    """ Something about main_function """
    return func_2(func_1(a=a, b=b), d=d)

Where "a :=? func_1.args.a" means "argument 'a' takes the same
type/default/documentation as argument 'a' of func_1".

So what do you say?  Yes it's a bold move, but I think in the long term
it's badly needed.  Perhaps something similar has been proposed already
that I'm not aware of.
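
For reference, the closest I can get to the default-sharing part today is to
go through inspect (a sketch using func_1/func_2 as defined above; it does
nothing for the documentation, which is the bigger half of the problem):

    import inspect

    _f1_params = inspect.signature(func_1).parameters

    def main_function(a: int = _f1_params["a"].default,
                      b: float = _f1_params["b"].default,
                      d: bool = True) -> float:
        """ Something about main_function """
        return func_2(func_1(a=a, b=b), d=d)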
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Can we add "zip and assert equal length" to the standard library?

2018-07-27 Thread Peter O'Connor
I find that about 90% of the time I want to zip iterators together, I
expect them to be the same length and want to throw an exception if they
aren't.  Yet there is currently no solution for this in the standard
library, and as a result I always have to take this function
everywhere I go:


from itertools import zip_longest

def zip_equal(*iterables):
    """
    Zip and raise an exception if lengths are not equal.

    Taken from the solution by Martijn Pieters, here:
    http://stackoverflow.com/questions/32954486/zip-iterators-asserting-for-equal-length-in-python

    :param iterables: Iterable objects
    :return: A new iterator outputting tuples where one element comes
        from each iterable
    """
    sentinel = object()
    for combo in zip_longest(*iterables, fillvalue=sentinel):
        if any(sentinel is c for c in combo):
            raise ValueError(
                'Iterables have different lengths.  Iterable(s) #{} (of 0..{}) ran out first.'.format(
                    [i for i, c in enumerate(combo) if c is sentinel], len(combo) - 1))
        yield combo
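
Typical usage, for illustration:

    list(zip_equal([1, 2], ['a', 'b']))      # [(1, 'a'), (2, 'b')]
    list(zip_equal([1, 2, 3], ['a', 'b']))   # raises ValueError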

Would anybody object to adding this to the standard library for Python 3.8?
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Idea: Deferred Default Arguments?

2018-07-20 Thread Peter O'Connor
On Fri, Jul 20, 2018 at 5:41 PM, Steven D'Aprano 
wrote:
>
>
> What makes you think that a built-in deferred feature won't have exactly
> the same issues? Do you have an implementation that doesn't need to do
> introspection?


I don't know about these low level things, but I assume it'd be implemented
in C and wouldn't have the same cost as entering a new function in Python.
 I imagine it just being a small modification of the mechanism that Python
already uses to assign values to arguments when a function is called.  Is
that not the case?



On Fri, Jul 20, 2018 at 5:41 PM, Steven D'Aprano 
wrote:

> On Fri, Jul 20, 2018 at 04:43:56PM +0200, Peter O'Connor wrote:
>
> > I still think it would be nice to have this as a built-in python feature,
> > for a few reasons:
> > - When using non-deferrable functions (say in other codebases), we have
> to
> > do a bunch of "func = deferrable_args(func)" at the top of the module (or
> > we can just do them at runtime, but then we're doing inspection every
> time,
> > which is probably slow).
> > - It adds a layer to the call stack for every deferrable function you're
> in.
> > - To avoid annoying errors where you've defined an arg as deferred but
> > forgot to wrap the function in question.
>
> What makes you think that a built-in deferred feature won't have exactly
> the same issues? Do you have an implementation that doesn't need to do
> introspection?
>
>
> --
> Steve
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Idea: Deferred Default Arguments?

2018-07-20 Thread Peter O'Connor
Ah, right, the fix_it(fcn) is a nice idea.  It might also be a good idea,
if we're making an external library anyway, to have a "deferred" object to
avoid overloading "None" (which may mean something other than "defer this
argument").  I implemented the decorator, and it can be used as:

    from deferral import deferrable_args, deferred

    @deferrable_args
    def subfunction_1(a=2, b=3, c=4):
        return a+b*c

    @deferrable_args
    def subfunction_2(d=5, e=6, f=7):
        return d*e+f

    def main_function(a=deferred, b=deferred, c=deferred,
                      d=deferred, e=deferred, f=deferred):
        return subfunction_1(a=a, b=b, c=c) + subfunction_2(d=d, e=e, f=f)

    assert main_function() == (2+3*4)+(5*6+7)
    assert main_function(a=8) == (8+3*4)+(5*6+7)
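
For concreteness, a rough sketch of how such a deferrable_args decorator can
be written (a sketch only - not necessarily how the actual implementation
does it):

    import functools
    import inspect

    deferred = object()   # sentinel meaning "use the callee's own default"

    def deferrable_args(func):
        sig = inspect.signature(func)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            # Drop any argument passed as `deferred`, so func's own default applies.
            for name in [n for n, v in bound.arguments.items() if v is deferred]:
                del bound.arguments[name]
            return func(*bound.args, **bound.kwargs)

        return wrapper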

I still think it would be nice to have this as a built-in Python feature,
for a few reasons:
- When using non-deferrable functions (say in other codebases), we have to
do a bunch of "func = deferrable_args(func)" at the top of the module (or
we can just do them at runtime, but then we're doing inspection every time,
which is probably slow).
- It adds a layer to the call stack for every deferrable function you're in.
- To avoid annoying errors where you've defined an arg as deferred but
forgot to wrap the function in question.


On Fri, Jul 20, 2018 at 3:39 PM, Jonathan Fine  wrote:

> Hi Peter
>
> You make the very good point, that
>
> > subfunction_1 may be written by someone totally different from the
> author of
> > main_function, and may even be in a different codebase.  For the author
> of
> > subfunction_1, it makes no sense to use the "None" approach instead of
> > python's normal default mechanism (since all arguments here are
> immutables).
>
> Good point. To rephrase, what should we do if we want to use a third
> party or legacy function, which begins
> ===
> def fn(a=1, b=2, c=3):
>   # function body
> ===
>
> We can solve this by defining a function decorator. Suppose we have a
> function fix_it, whose argument and return value are both functions.
> The basic specification of fix_it is that
> ---
> fixed_fn = fix_it(fn)
> ---
> is in practice equivalent to
> ---
> def fixed_fn(a=None, b=None, c=None):
> if a is None: a = 1
> if b is None: b = 2
> if c is None: c = 3
> # function body for fn
> # or if you prefer
> return fn(a, b, c)
> ---
>
> An aside. We can code fix_it by using
> https://docs.python.org/3/library/inspect.html
> ===
> >>> import inspect
> >>> def fn(a=1, b=2, c=3): pass
> ...
> >>> str(inspect.signature(fn))
> '(a=1, b=2, c=3)'
> ===
>
> You could also use with new code, like so:
> ---
> @fix_it
> def fn(a=1, b=2, c=3):
>   # function body
> ---
>
> I think this helps solve your problem. Is there, Peter, anything else
> that would be left to do (except, of course, write the fix_it
> function).
>
> Thank you again for your problem and comments.
>
> --
> Jonathan
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Idea: Deferred Default Arguments?

2018-07-20 Thread Peter O'Connor
On Fri, Jul 20, 2018 at 1:30 PM, Jonathan Fine  wrote:
>
>
> I sort of think we've now got a reasonable answer for Peter's problem.
> What do you think, Peter? And Brice, are you happy with my
> interpretation of your deferred keyword?


I think the problem with the "None" approach (second pattern) is that it
forces the writer of the subfunction to write their defaults in a more
awkward way in anticipation of other functions which defer their defaults
to it.  Translated to the original example, it would become:

    def subfunction_1(a=None, b=None, c=None):
        if a is None: a = 1
        if b is None: b = 2
        if c is None: c = 3
        return a+b*c

    def subfunction_2(d=None, e=None, f=None):
        if d is None: d = 5
        if e is None: e = 6
        if f is None: f = 7
        return d*e+f

    def main_function(a=None, b=None, c=None, d=None, e=None, f=None):
        return subfunction_1(a=a, b=b, c=c) + subfunction_2(d=d, e=e, f=f)

subfunction_1 may be written by someone totally different from the author
of main_function, and may even be in a different codebase.  For the author
of subfunction_1, it makes no sense to use the "None" approach instead of
python's normal default mechanism (since all arguments here are
immutables).



On Fri, Jul 20, 2018 at 1:30 PM, Jonathan Fine  wrote:

> Excellent contributions. I'm going to try to (partially) consolidate
> what we've got.
>
> REVIEW
> ===
> I'll start by reviewing the situation regarding default arguments.
> There are two basic patterns for default arguments.
>
> The first is
> ---
> def fn(a=EXP):
> # body of function
> ---
>
> The second is
> ---
> def fn(a=None):
> if a is None:
> a = EXP
> # body of function
> ---
>
> Here, EXP is any Python expression. A fairly gotcha is to use a list,
> or some other mutable object, as EXP. This happens when you write
> ---
> def fn(a=[]):
># body of function
> ---
> because then EXP = '[]' which will be evaluated just once, and every
> call fn() will be using the same list! To avoid this you should use
> the second pattern. I think there may be an example of this in the
> standard Python tutorial.
>
> (An aside. You probably need the second pattern if you EXP is, say,
> ([], []). Although technically immutable, this value has mutable
> members. And can't be hashed, or use as a dictionary key or element of
> a set.)
>
> WHEN TO USE None
> =
>
> If you want something mutable as the 'default argument' you have to
> use the second pattern. If your default argument is immutable then you
> can if you wish use the second pattern.
>
> But you don't have to use the second pattern. When you use the second
> pattern, the expression f(None) means 'create for me the default
> argument' (and raise an exception if there isn't one).
>
> Think about it. For immutable EXP, fn() is the same, whether fn is
> coded using the first pattern or the second. But the value of fn(None)
> depends very much on which pattern is used to code fn().
>
> So here's the big conclusion (drum roll):
> ===
> fn should be coded using the second pattern if we wish to pass None as
> a sentinel argument to fn.
> ===
>
> SUMMARY
> =
> My suggestion was to use the second pattern to solve Peter O'Connor's
> original problem. It can be done now, but is a bit verbose, and looses
> useful information in help(fn).
>
> Brice Parent's suggestion was to introduce a keyword deferred, like so
> ---
> def fn(deferred a=EXP):
> # body of function
> ---
> which I like to think of as a syntactic shorthand for the second pattern.
>
> I sort of think we've now got a reasonable answer for Peter's problem.
> What do you think, Peter? And Brice, are you happy with my
> interpretation of your deferred keyword?
>
> ---
> Jonathan
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Idea: Deferred Default Arguments?

2018-07-20 Thread Peter O'Connor
Often when programming I run into a situation where it would be nice to
have "deferred defaults".  Here is an example of what I mean:

    def subfunction_1(a=2, b=3, c=4):
        return a+b*c

    def subfunction_2(d=5, e=6, f=7):
        return d*e+f

    def main_function(a=2, b=3, c=4, d=5, e=6, f=7):
        return subfunction_1(a=a, b=b, c=c) + subfunction_2(d=d, e=e, f=f)

Here you can see that I had to redefine the defaults in the main_function.
In larger codebases, I find bugs often arise because defaults are defined
in multiple places, and somebody changes them in a lower-level function but
fails to realize that they are still defined differently in a higher
function.

The only way I currently see to achieve this is not very nice at all, and
completely obfuscates the signature of the function:

    def main_function(**kwargs):
        return subfunction_1(**{k: v for k, v in kwargs.items() if k in ['a', 'b', 'c']}) \
            + subfunction_2(**{k: v for k, v in kwargs.items() if k in ['d', 'e', 'f']})

What I was thinking was a "deferred" builtin that would just allow a lower
function to define the value (and raise an exception if anyone tried to use
it before it was defined)

    def main_function(a=deferred, b=deferred, c=deferred,
                      d=deferred, e=deferred, f=deferred):
        return subfunction_1(a=a, b=b, c=c) + subfunction_2(d=d, e=e, f=f)

I assume this has been discussed before somewhere, but couldn't find
anything on it, so please feel free to point me towards any previous
discussion on the topic.
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] A real life example of "given"

2018-05-31 Thread Peter O'Connor
On Thu, May 31, 2018 at 2:55 PM, Chris Angelico  wrote:

> [process(tx, y) for x in xs for tx in [transform(x)] for y in yz]
>
...

I think Serhiy was trying to establish this form as a standard idiom,
> with optimization in the interpreter to avoid constructing a list and
> iterating over it (so it would be functionally identical to actual
> assignment). I'd rather see that happen than the creation of a messy
> 'given' syntax.
>

Perhaps it wouldn't be crazy to have "with name=initial" be that idiom
instead of "for name in [initial]".  As ..

[process(tx, y) for x in xs with tx=transform(x) for y in yz]

.. seems to convey the intention more clearly.  More generally (outside of
just comprehensions), "with name=expr:" could be used to temporarily bind
"name" to "expr" inside the scope of the with-statement (and unbind it at
the end).

And then I could have my precious initialized generators (which I believe
cannot be nicely implemented with ":=" unless we initialize the variable
outside of the scope of the comprehension, which introduces the problem of
unintended side-effects).

    smooth_signal = [average with average=0
                     for x in seq
                     with average=(1-decay)*average + decay*x]
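
Today's legal-but-clunky equivalent, assuming seq and decay as above, would
be the "for name in [value]" idiom:

    smooth_signal = [average
                     for average in [0]
                     for x in seq
                     for average in [(1-decay)*average + decay*x]]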
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] A real life example of "given"

2018-05-31 Thread Peter O'Connor
On Thu, May 31, 2018 at 1:55 PM, Neil Girdhar  wrote:

> Why wouldn't you want to just put the outer given outside the entire
> comprehension?
> retval = [expr(name, x) given name=update(name, x) for x in seq]
> given name=something
>

There seems to be a lot of controversy about updating variables defined
outside a comprehension within a comprehension.  Seems like it could lead
to a lot of bugs and unintended consequences, and that it's safer to not
allow side-effects of comprehensions.


> The more I think about it, the more i want to keep "given" in
> comprehensions, and given in expressions using parentheses when given is
> supposed to bind to the expression first.
>

I think the problem is that you want "given" to be used in two ways:
- You want "B given A" to mean "execute A before B" when B is a simple
expression.
- You want "given A B" to mean "execute A before B" when B is a loop declaration.

The more I think about it, the more I think comprehensions should be parsed
in reverse order: "from right to left".  In this alternate world, your
initial example would have been:

    potential_updates = {
        (1)  y: command.create_potential_update(y)
        (2)  if potential_update is not None
        (3)  given potential_update = command.create_potential_update(y)
        (4)  for y in [x, *x.synthetic_inputs()]
        (5)  for x in need_initialization_nodes
    }

Which would translate to:

    potential_updates = {}
    (5)  for x in need_initialization_nodes:
    (4)      for y in [x, *x.synthetic_inputs()]:
    (3)          potential_update = command.create_potential_update(y)
    (2)          if potential_update is not None:
    (1)              potential_updates[y] = command.create_potential_update(y)

And there would be no ambiguity about the use of given.  Also, variables
would tend to be used closer to their declarations.

But it's way to late to make a change like that to Python.
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] A real life example of "given"

2018-05-31 Thread Peter O'Connor
Well, there need not be any ambiguity if you think of "B given A" as
"execute A before B", and remember that "given" has a lower precedence than
"for" (so [B given A for x in seq] is parsed as [(B given A) for x in seq]).

Then

>
> retval = [expr(name) given name=something(x) for x in seq]
>

Is:

    retval = []
    for x in seq:
        name = something(x)
        retval.append(expr(name))

And

retval = [expr(name, x) for x in seq given name=something]

Is:
    retval = []
    name = something
    for x in seq:
        retval.append(expr(name, x))


But this is probably not a great solution, as it forces you to mentally
unwrap comprehensions in a strange order and remember a non-obvious
precedence rule.

On the plus-side, it lets you initialize generators with in-loop updates
(which cannot as far as I see be done nicely with ":="):

    retval = [expr(name, x) given name=update(name, x) for x in seq given name=something]

Is:

    retval = []
    name = something
    for x in seq:
        name = update(name, x)
        retval.append(expr(name, x))



On Thu, May 31, 2018 at 10:44 AM, Neil Girdhar 
wrote:

> Yes, you're right. That's the ambiguity I mentioned in my last message.
> It's too bad because I want given for expressions and given for
> comprehensions. But if you have both, there's ambiguity and you would at
> least need parentheses:
>
> [(y given y=2*x) for x in range(3)]
>
> That might be fine.
>
> On Thu, May 31, 2018 at 4:34 AM Peter O'Connor 
> wrote:
>
>> * Sorry, message sent too early:
>>
>> On Thu, May 31, 2018 at 4:50 AM, Neil Girdhar 
>> wrote:
>>>
>>>
>>>> [expression given name=something for x in seq]
>>>>
>>>
>>> retval = []
>>> name = something
>>> for x in seq:
>>> retval.append(expression)
>>> return retval
>>>
>>
>> That's a little confusing then, because, given the way given is used
>> outside of comprehensions, you would expect
>>
>> [y given y=2*x for x in range(3)]
>>
>> to return [0, 2, 4], but it would actually raise an error.
>>
>>
>> On Thu, May 31, 2018 at 10:32 AM, Peter O'Connor <
>> peter.ed.ocon...@gmail.com> wrote:
>>
>>>
>>>
>>> On Thu, May 31, 2018 at 4:50 AM, Neil Girdhar 
>>> wrote:
>>>>
>>>>
>>>>> [expression given name=something for x in seq]
>>>>>
>>>>
>>>> retval = []
>>>> name = something
>>>> for x in seq:
>>>> retval.append(expression)
>>>> return retval
>>>>
>>>
>>> That's a little strange confusing then, because, given the way given is
>>> used outside of comprehensions, you would expect
>>>
>>> for x in range(3):
>>> y given y=2*x
>>>
>>> [y given y=2*x for x in range(3)]
>>>
>>> to return [0, 2, 4], but it would actually raise an error.
>>>
>>>
>>>
>>>
>>
>>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] A real life example of "given"

2018-05-31 Thread Peter O'Connor
* Sorry, message sent too early:

On Thu, May 31, 2018 at 4:50 AM, Neil Girdhar  wrote:
>
>
>> [expression given name=something for x in seq]
>>
>
> retval = []
> name = something
> for x in seq:
> retval.append(expression)
> return retval
>

That's a little confusing then, because, given the way given is used
outside of comprehensions, you would expect

[y given y=2*x for x in range(3)]

to return [0, 2, 4], but it would actually raise an error.


On Thu, May 31, 2018 at 10:32 AM, Peter O'Connor  wrote:

>
>
> On Thu, May 31, 2018 at 4:50 AM, Neil Girdhar 
> wrote:
>>
>>
>>> [expression given name=something for x in seq]
>>>
>>
>> retval = []
>> name = something
>> for x in seq:
>> retval.append(expression)
>> return retval
>>
>
> That's a little strange confusing then, because, given the way given is
> used outside of comprehensions, you would expect
>
> for x in range(3):
> y given y=2*x
>
> [y given y=2*x for x in range(3)]
>
> to return [0, 2, 4], but it would actually raise an error.
>
>
>
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] A real life example of "given"

2018-05-31 Thread Peter O'Connor
On Thu, May 31, 2018 at 4:50 AM, Neil Girdhar  wrote:
>
>
>> [expression given name=something for x in seq]
>>
>
> retval = []
> name = something
> for x in seq:
> retval.append(expression)
> return retval
>

That's a little confusing then, because, given the way "given" is
used outside of comprehensions, you would expect

for x in range(3):
y given y=2*x

[y given y=2*x for x in range(3)]

to return [0, 2, 4], but it would actually raise an error.
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] A real life example of "given"

2018-05-30 Thread Peter O'Connor
On Wed, May 30, 2018 at 7:59 PM, Neil Girdhar  wrote:
>
> z = {a: transformed_b
>  for b in bs
>  given transformed_b = transform(b)
>  for a in as_}
>
> There is no nice, equivalent := version as far as I can tell.
>

Well you could just do:

z = {a: b
 for b in (transform(bi) for bi in bs)
 for a in as_}
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-05-30 Thread Peter O'Connor
On Thu, May 24, 2018 at 2:49 PM, Steven D'Aprano 
wrote:

> On Thu, May 24, 2018 at 02:06:03PM +0200, Peter O'Connor wrote:
> > We could use given for both the in-loop variable update and the variable
> > initialization:
> >smooth_signal =  [average given average=(1-decay)*average + decay*x
> for
> > x in signal] given average=0.
>
> So in your example, the OUTER "given" creates a local variable in the
> current scope, average=0, but the INNER "given" inside the comprehension
> exists inside a separate, sub-local comprehension scope, where you will
> get an UnboundLocalError when it tries to evaluate (1-decay)*average the
> first time.


You're right, having re-thought it, it seems that the correct way to write
it would be to define both of them in the scope of the comprehension:

    smooth_signal = [average given average=(1-decay)*average + decay*x for x in signal given average=0.]

This makes sense and follows a simple rule: "B given A" just causes A to be
executed before B - that holds true whether B is a variable or a loop
declaration like "for x in x_gen".

So

a_gen = (g(a) given a=f(a, x) for x in x_gen given a=0)

would be a compact form of:

    def a_gen_func(x_gen):
        a = 0
        for x in x_gen:
            a = f(a, x)
            yield g(a)

    a_gen = a_gen_func(x_gen)
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] A real life example of "given"

2018-05-30 Thread Peter O'Connor
>
>  In comparison, I think that := is much simpler.


In this case that's true, but a small modification:

    updates = {
        y: do_something_to(potential_update)
        for x in need_initialization_nodes
        for y in [x, *x.synthetic_inputs()]
        if potential_update is not None
        given potential_update = command.create_potential_update(y)
    }

Shows the flexibility of this given syntax vs ":="

If we think of "given" as just inserting a line with variable-definitions
before the preceding statement, it seems clear that:

    updates = {
        y: potential_update
        given potential_update = command.create_potential_update(y)
        for x in need_initialization_nodes
        for y in [x, *x.synthetic_inputs()]
        if potential_update is not None
    }

Should raise a NameError: name 'potential_update' is not defined, and

    updates = {
        y: potential_update
        for x in need_initialization_nodes
        for y in [x, *x.synthetic_inputs()]
        given potential_update = command.create_potential_update(y)
        if potential_update is not None
    }


Should raise a NameError: name 'y' is not defined.

For safety it seems reasonable that if a variable is "given" in a
comprehension, trying to refer to it (even if it defined in the enclosing
scope) before the inner-definition will result in a NameError.


On Wed, May 30, 2018 at 2:22 PM, Steven D'Aprano 
wrote:

> On Wed, May 30, 2018 at 02:42:21AM -0700, Neil Girdhar wrote:
>
> > With "given", I can write:
> >
> > potential_updates = {
> > y: potential_update
> > for x in need_initialization_nodes
> > for y in [x, *x.synthetic_inputs()]
> > given potential_update = command.create_potential_update(y)
> > if potential_update is not None}
>
> I'm not sure if that would be legal for the "given" syntax. As I
> understand it, the "given" syntax is:
>
> expression given name = another_expression
>
> but you've got half of the comprehension stuffed in the gap between the
> leading expression and the "given" keyword:
>
> expression COMPREH- given name = another_expression -ENSION
>
> so I think that's going to be illegal.
>
>
> I think it wants to be written this way:
>
> potential_updates = {
> y: potential_update
> for x in need_initialization_nodes
> for y in [x, *x.synthetic_inputs()]
> if potential_update is not None
> given potential_update = command.create_potential_update(y)
> }
>
>
> Or maybe it should be this?
>
> potential_updates = {
> y: potential_update
> given potential_update = command.create_potential_update(y)
> for x in need_initialization_nodes
> for y in [x, *x.synthetic_inputs()]
> if potential_update is not None
> }
>
>
> I'm damned if I know which way is correct. Either of them? Neither?
>
> In comparison, I think that := is much simpler. There's only one place
> it can go:
>
> potential_updates = {
> y: potential_update
> for x in need_initialization_nodes
> for y in [x, *x.synthetic_inputs()]
> if (
> potential_update := command.create_potential_update(y)
>) is not None
> }
>
>
> --
> Steve
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-05-24 Thread Peter O'Connor
To give this old horse a kick: The "given" syntax in the recent thread
could give a nice solution for the problem that started this thread.

Instead of my proposal of:
    smooth_signal = [average := (1-decay)*average + decay*x for x in signal from average=0.]

We could use given for both the in-loop variable update and the variable
initialization:

    smooth_signal = [average given average=(1-decay)*average + decay*x for x in signal] given average=0.

This especially makes sense for the extended syntax, where my proposal of:
   (z, y := f(z, x) -> y for x in iter_x from z=initial_z)

Becomes:
(y given z, y = f(z, x) for x in iter_x) given z=initial_z

So instead of adding 2 symbols and a keyword, we just need to add the one
"given" keyword.

It's worth noting, as Serhiy pointed out, that this is already supported in
Python, albeit with a very clunky syntax:

    smooth_signal = [average for average in [0] for x in signal for average in [(1-decay)*average + decay*x]]

(y for z in [initial_z] for x in iter_x for z, y in [f(z, x)])




On Tue, Apr 17, 2018 at 12:02 AM, Danilo J. S. Bellini <
danilo.bell...@gmail.com> wrote:

> On 16 April 2018 at 10:49, Peter O'Connor 
> wrote:
>
>> Are you able to show how you'd implement the moving average example with
>> your package?
>>
>
> Sure! The single pole IIR filter you've shown is implemented here:
> https://github.com/danilobellini/pyscanprev/blob/
> master/examples/iir-filter.rst
>
> I tried:
>>
>> @enable_scan("average")
>> def exponential_moving_average_pyscan(signal, decay, initial=0):
>> yield from ((1-decay)*(average or initial) + decay*x for x in
>> signal)
>>
>>
>> smooth_signal_9 = list(exponential_moving_average_pyscan(signal,
>> decay=decay))[1:]
>>
>> Which almost gave the right result, but seemed to get the initial
>> conditions wrong.
>>
>
> I'm not sure what you were expecting. A sentinel as the first "average"
> value?
>
> Before the loop begins, this scan-generator just echoes the first input,
> like itertools.accumulate.
> That is, the first value this generator yields is the first "signal"
> value, which is then the first "average" value.
>
> To put an initial memory state, you should do something like this (I've
> removed the floating point trailing noise):
>
> >>> from pyscanprev import enable_scan, prepend
> >>>
> >>> @enable_scan("y")
> >>> def iir_filter(signal, decay, memory=0):
> ... return ((1 - decay) * y + decay * x for x in prepend(memory,
> signal))
> ...
> >>> list(iir_filter([1, 2, 3, 2, 1, -1, -2], decay=.1, memory=5))
> [5, 4.6, 4.34, 4.206, 3.9854, 3.68686, 3.218174, 2.6963566]
>
> In that example, "y" is the "previous result" (a.k.a. accumulator, or
> what had been called "average" here).
>
> --
> Danilo J. S. Bellini
> ---
> "*It is not our business to set up prohibitions, but to arrive at
> conventions.*" (R. Carnap)
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Inline assignments using "given" clauses

2018-05-13 Thread Peter O'Connor
 *Correction: Above code should read:

    outputs = []
    state = initial_state
    for inp in inputs:
        out, state = my_update_func(inp, state)
        outputs.append(out)


On Sun, May 13, 2018 at 11:21 AM, Peter O'Connor  wrote:

>   target := expr
>   expr as target
>   expr -> target
>   target given target = expr
>   let target = expr
>  : target expr ;
>
>
> Although in general "target:=exp" seems the most palatable of these to me,
> there is one nice benefit to the "given" syntax:
>
> Suppose you have a comprehension wherein you want to pass forward an
> internal "state" between iterations, but not return it as the output:
>
> In today's python, you'd to:
>
> outputs = []
> state = initial_state
> for inp in inputs:
> out, state = my_update_func(state)
> outputs.append(state)
>
> This could not be neatly compacted into:
>
> state = initial_state
> outputs = [out given out, state = my_update_func(inp, state) for inp
> in inputs]
>
> Or maybe:
>
> outputs = [out given out, state = my_update_func(inp, state) for inp
> in inputs given state=initial_state]
>
> Though I agree for the much more common case of assigning a value inline
> "x given x=y" seems messily redundant.
>
>
>
> On Sat, May 12, 2018 at 10:37 PM, Stephen J. Turnbull <
> turnbull.stephen...@u.tsukuba.ac.jp> wrote:
>
>> David Mertz writes:
>>
>>  > Only the BDFL has a vote with non-zero weight.
>>
>> "Infinitesimal" != "zero".
>>
>> Pedantically yours,
>>
>> ___
>> Python-ideas mailing list
>> Python-ideas@python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
>
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Inline assignments using "given" clauses

2018-05-13 Thread Peter O'Connor
  target := expr
  expr as target
  expr -> target
  target given target = expr
  let target = expr
 : target expr ;


Although in general "target:=exp" seems the most palatable of these to me,
there is one nice benefit to the "given" syntax:

Suppose you have a comprehension wherein you want to pass forward an
internal "state" between iterations, but not return it as the output:

In today's Python, you'd do:

    outputs = []
    state = initial_state
    for inp in inputs:
        out, state = my_update_func(state)
        outputs.append(state)

This could not be neatly compacted into:

    state = initial_state
    outputs = [out given out, state = my_update_func(inp, state) for inp in inputs]

Or maybe:

    outputs = [out given out, state = my_update_func(inp, state) for inp in inputs given state=initial_state]

Though I agree for the much more common case of assigning a value inline "x
given x=y" seems messily redundant.



On Sat, May 12, 2018 at 10:37 PM, Stephen J. Turnbull <
turnbull.stephen...@u.tsukuba.ac.jp> wrote:

> David Mertz writes:
>
>  > Only the BDFL has a vote with non-zero weight.
>
> "Infinitesimal" != "zero".
>
> Pedantically yours,
>
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-16 Thread Peter O'Connor
In any case, although I find the magic variable-injection stuff quite
strange, I like the decorator.

Something like

    # Wrap the function so that it has a "scan" method which can be used to
    # generate a stateful scan object:
    @scannable(average=0)
    def exponential_moving_average(average, x, decay):
        return (1-decay)*average + decay*x

    stateful_func = exponential_moving_average.scan(average=initial)
    smooth_signal = [stateful_func(x) for x in signal]

Seems appealing because it allows you to define the basic function without,
for instance, assuming that decay will be constant. If you wanted dynamic
decay, you could easily have it without changing the function:

    stateful_func = exponential_moving_average.scan(average=initial)
    smooth_signal = [stateful_func(x, decay=decay) for x, decay in
                     zip(signal, decay_schedule)]

And you pass around state explicitly.
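
For concreteness, a minimal sketch of how such a scannable decorator could
work, assuming a single named state variable (this is just a sketch, not an
existing package):

    def scannable(**initial_state):
        (state_name, initial_value), = initial_state.items()   # e.g. average=0

        def decorator(func):
            def scan(**override):
                state = [override.get(state_name, initial_value)]

                def stateful(*args, **kwargs):
                    # Feed the previous result back in as the first positional argument.
                    state[0] = func(state[0], *args, **kwargs)
                    return state[0]

                return stateful

            func.scan = scan
            return func

        return decorator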


On Mon, Apr 16, 2018 at 9:49 AM, Peter O'Connor 
wrote:

> Hi Danilo,
>
> The idea of decorating a function to show that the return variables could
> be fed back in in a scan form is interesting and could solve my problem in
> a nice way without new syntax.
>
> I looked at your code but got a bit confused as to how it works (there
> seems to be some magic where the decorator injects the scanned variable
> into the namespace).  Are you able to show how you'd implement the moving
> average example with your package?
>
> I tried:
>
> @enable_scan("average")
> def exponential_moving_average_pyscan(signal, decay, initial=0):
> yield from ((1-decay)*(average or initial) + decay*x for x in
> signal)
>
>
> smooth_signal_9 = list(exponential_moving_average_pyscan(signal,
> decay=decay))[1:]
>
> Which almost gave the right result, but seemed to get the initial
> conditions wrong.
>
> - Peter
>
>
>
> On Sat, Apr 14, 2018 at 3:57 PM, Danilo J. S. Bellini <
> danilo.bell...@gmail.com> wrote:
>
>> On 5 April 2018 at 13:52, Peter O'Connor 
>> wrote:
>>
>>> I was thinking it would be nice to be able to encapsulate this common
>>> type of operation into a more compact comprehension.
>>>
>>> I propose a new "Reduce-Map" comprehension that allows us to write:
>>>
>>> signal = [math.sin(i*0.01) + random.normalvariate(0, 0.1) for i in 
>>> range(1000)]
>>> smooth_signal = [average = (1-decay)*average + decay*x for x in signal from 
>>> average=0.]
>>>
>>> Instead of:
>>>
>>> def exponential_moving_average(signal: Iterable[float], decay: float, 
>>> initial_value: float=0.):
>>> average = initial_value
>>> for xt in signal:
>>> average = (1-decay)*average + decay*xt
>>> yield average
>>>
>>> signal = [math.sin(i*0.01) + random.normalvariate(0, 0.1) for i in 
>>> range(1000)]
>>> smooth_signal = list(exponential_moving_average(signal, decay=0.05))
>>>
>>> I wrote in this mail list the very same proposal some time ago. I was
>> trying to let the scan higher order function (itertools.accumulate with a
>> lambda, or what was done in the example above) fit into a simpler list
>> comprehension.
>>
>> As a result, I wrote this project, that adds the "scan" feature to Python
>> comprehensions using a decorator that performs bytecode manipulation (and
>> it had to fit in with a valid Python syntax):
>> https://github.com/danilobellini/pyscanprev
>>
>> In that GitHub page I've wrote several examples and a rationale on why
>> this would be useful.
>>
>> --
>> Danilo J. S. Bellini
>> ---
>> "*It is not our business to set up prohibitions, but to arrive at
>> conventions.*" (R. Carnap)
>>
>
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-16 Thread Peter O'Connor
Hi Danilo,

The idea of decorating a function to show that the return variables could
be fed back in in a scan form is interesting and could solve my problem in
a nice way without new syntax.

I looked at your code but got a bit confused as to how it works (there
seems to be some magic where the decorator injects the scanned variable
into the namespace).  Are you able to show how you'd implement the moving
average example with your package?

I tried:

@enable_scan("average")
def exponential_moving_average_pyscan(signal, decay, initial=0):
yield from ((1-decay)*(average or initial) + decay*x for x in
signal)


smooth_signal_9 = list(exponential_moving_average_pyscan(signal,
decay=decay))[1:]

Which almost gave the right result, but seemed to get the initial
conditions wrong.

- Peter



On Sat, Apr 14, 2018 at 3:57 PM, Danilo J. S. Bellini <
danilo.bell...@gmail.com> wrote:

> On 5 April 2018 at 13:52, Peter O'Connor 
> wrote:
>
>> I was thinking it would be nice to be able to encapsulate this common
>> type of operation into a more compact comprehension.
>>
>> I propose a new "Reduce-Map" comprehension that allows us to write:
>>
>> signal = [math.sin(i*0.01) + random.normalvariate(0, 0.1) for i in 
>> range(1000)]
>> smooth_signal = [average = (1-decay)*average + decay*x for x in signal from 
>> average=0.]
>>
>> Instead of:
>>
>> def exponential_moving_average(signal: Iterable[float], decay: float, 
>> initial_value: float=0.):
>> average = initial_value
>> for xt in signal:
>> average = (1-decay)*average + decay*xt
>> yield average
>>
>> signal = [math.sin(i*0.01) + random.normalvariate(0, 0.1) for i in 
>> range(1000)]
>> smooth_signal = list(exponential_moving_average(signal, decay=0.05))
>>
>> I wrote in this mail list the very same proposal some time ago. I was
> trying to let the scan higher order function (itertools.accumulate with a
> lambda, or what was done in the example above) fit into a simpler list
> comprehension.
>
> As a result, I wrote this project, that adds the "scan" feature to Python
> comprehensions using a decorator that performs bytecode manipulation (and
> it had to fit in with a valid Python syntax):
> https://github.com/danilobellini/pyscanprev
>
> On that GitHub page I've written several examples and a rationale on why
> this would be useful.
>
> --
> Danilo J. S. Bellini
> ---
> "*It is not our business to set up prohibitions, but to arrive at
> conventions.*" (R. Carnap)
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Spelling of Assignment Expressions PEP 572 (was post #4)

2018-04-13 Thread Peter O'Connor
Well this may be crazy sounding, but we could allow left or right
assignment with

name := expr
expr =: name

Although it would seem to violate the "only one obvious way" maxim, at
least it avoids this overloaded meaning with the "as" of "except" and "with"



On Fri, Apr 13, 2018 at 9:29 AM, Ethan Furman  wrote:

> On 04/13/2018 06:18 AM, Steven D'Aprano wrote:
>
>> On Fri, Apr 13, 2018 at 09:56:35PM +1000, Chris Angelico wrote:
>>
>
> If we agree that the benefit of putting the expression first is
>> sufficiently large, or that the general Pythonic look of "expr as name"
>> is sufficiently desirable (it just looks and reads nicely), then we can
>> afford certain compromises. Namely, we can rule that:
>>
>>  except expr as name:
>>  with expr as name:
>>
>> continue to have the same meaning that they have now and never mean
>> assignment expressions. Adding parens should not change that.
>>
>
> +1
>
> In other words, the rule is that "expr as name" keeps its current, older
>> semantics in with and except statements, and NEVER means the new, PEP
>> 572 assignment expression.
>>
>> Yes, that's a special case that breaks the rules, and I accept that it
>> is a point against "as". But the Zen is a guideline, not a law of
>> physics, and I think the benefits of "as" are sufficient that even
>> losing a point it still wins.
>>
>
> +1
>
> 2) Forbid any use of "(expr as name)" in the header of a 'with' statement
>>>
>>
>> You can't forbid it, because it is currently allowed syntax (albeit
>> currently without the parens). So the rule is, it is allowed, but it
>> means what it meant pre-PEP 572.
>>
>
> +1
>
> If people agree with me that it is important to put the expression first
>> rather than the target name, then the fact that statements and for loops
>> put the name first shouldn't matter.
>>
>
> +1 to expression coming first!  ;)
>
> --
> ~Ethan~
>
>
>
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-12 Thread Peter O'Connor
* correction to example:

moving_average_gen = (average:= moving_average_step(average, x,
decay=decay) for
x in signal from average=initial)

On Thu, Apr 12, 2018 at 3:37 PM, Peter O'Connor 
wrote:

> On Wed, Apr 11, 2018 at 10:50 AM, Paul Moore  wrote:
>
>> In particular, I'm happiest with the named moving_average() function,
>> which may reflect to some extent my lack of familiarity with the
>> subject area. I don't *care* how it's implemented internally - an
>> explicit loop is fine with me, but if a domain expert wants to be
>> clever and use something more complex, I don't need to know. An often
>> missed disadvantage of one-liners is that they get put inline, meaning
>> that people looking for a higher level overview of what the code does
>> get confronted with all the gory details.
>
>
> I'm all in favour of hiding things away into functions - I just think
> those functions should be as basic as possible, without implicit
> assumptions about how they will be used.  Let me give an example:
>
> 
>
> Lets look at your preferred method (A):
>
> def moving_average(signal_iterable, decay, initial=0):
> last_average = initial
> for x in signal_iterable:
> last_average = (1-decay)*last_average + decay*x
> yield last_average
>
> moving_average_gen = moving_average(signal, decay=decay,
> initial=initial)
>
> And compare it with (B), which would require the proposed syntax:
>
> def moving_average_step(last_average, x, decay):
> return (1-decay)*last_average + decay*x
>
> moving_average_gen = (average:= moving_average_step(average, x,
> decay=decay) for x in signal from x=initial)
>
> -
>
> Now, suppose we want to change things so that the "decay" changes with
> every step.
>
> The moving_average function (A) now has to be changed, because what we
> once thought would be a fixed parameter is now a variable that changes
> between calls.  Our options are:
> - Make "decay" another iterable (in which case other functions calling
> "moving_average" need to be changed).
> - Leave an option for "decay" to be a float which gets transformed to an
> iterable with "decay_iter = (decay for _ in itertools.count(0)) if
> isinstance(decay, (int, float)) else decay".  (awkward because 95% of
> usages don't need this.  If you do this for more parameters you suddenly
> have this weird implementation with iterators everywhere even though in
> most cases they're not needed).
> - Factor out the "pure"  "moving_average_step" from "moving_average", and
> create a new "moving_average_with_dynamic_decay" wrapper (but now we have
> to maintain two wrappers - with the duplicated arguments - which starts to
> require a lot of maintenance when you're passing down several parameters
> (or you can use the dreaded **kwargs).
>
> With approach (B) on the other hand, "moving_average_step" and all the
> functions calling it, can stay the same: we just change the way we call it
> in this instance to:
>
> moving_average_gen = (average:= moving_average_step(average, x,
> decay=decay) for x, decay in zip(signal, decay_schedule) from x=initial)
>
> 
>
> Now lets imagine this were a more complex function with 10 parameters.  I
> see these kind of examples a lot in machine-learning and robotics programs,
> where you'll have parameters like "learning rate", "regularization",
> "minibatch_size", "maximum_speed", "height_of_camera" which might initially
> be considered initialization parameters, but then later it turns out they
> need to be changed dynamically.
>
> This is why I think the "(y:=f(y, x) for x in xs from y=initial)" syntax
> can lead to cleaner, more maintainable code.
>
>
>
> On Wed, Apr 11, 2018 at 10:50 AM, Paul Moore  wrote:
>
>> On 11 April 2018 at 15:37, Peter O'Connor 
>> wrote:
>>
>> > If people are happy with these solutions and still see no need for the
>> > initialization syntax, we can stop this, but as I see it there is a
>> "hole"
>> > in the language that needs to be filled.
>>
>> Personally, I'm happy with those solutions and see no need for the
>> initialisation syntax.
>>
>> In particular, I'm happiest with the named moving_average() function,
>> which may reflect to some extent my lack of familiarity with the
>> subject area. I don't *care* how it's implemented internally - an
>> explicit loop is fine with me, but if a domain expert wants to be
>> clever and use something more complex, I don't need to know. An often
>> missed disadvantage of one-liners is that they get put inline, meaning
>> that people looking for a higher level overview of what the code does
>> get confronted with all the gory details.
>>
>> Paul
>>
>
>
>
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-12 Thread Peter O'Connor
On Wed, Apr 11, 2018 at 10:50 AM, Paul Moore  wrote:

> In particular, I'm happiest with the named moving_average() function,
> which may reflect to some extent my lack of familiarity with the
> subject area. I don't *care* how it's implemented internally - an
> explicit loop is fine with me, but if a domain expert wants to be
> clever and use something more complex, I don't need to know. An often
> missed disadvantage of one-liners is that they get put inline, meaning
> that people looking for a higher level overview of what the code does
> get confronted with all the gory details.


I'm all in favour of hiding things away into functions - I just think those
functions should be as basic as possible, without implicit assumptions
about how they will be used.  Let me give an example:



Let's look at your preferred method (A):

def moving_average(signal_iterable, decay, initial=0):
    last_average = initial
    for x in signal_iterable:
        last_average = (1-decay)*last_average + decay*x
        yield last_average

moving_average_gen = moving_average(signal, decay=decay,
initial=initial)

And compare it with (B), which would require the proposed syntax:

def moving_average_step(last_average, x, decay):
    return (1-decay)*last_average + decay*x

moving_average_gen = (average:= moving_average_step(average, x,
decay=decay) for x in signal from x=initial)

-

Now, suppose we want to change things so that the "decay" changes with
every step.

The moving_average function (A) now has to be changed, because what we once
thought would be a fixed parameter is now a variable that changes between
calls.  Our options are:
- Make "decay" another iterable (in which case other functions calling
"moving_average" need to be changed).
- Leave an option for "decay" to be a float which gets transformed to an
iterable with "decay_iter = (decay for _ in itertools.count(0)) if
isinstance(decay, (int, float)) else decay".  (awkward because 95% of
usages don't need this.  If you do this for more parameters you suddenly
have this weird implementation with iterators everywhere even though in
most cases they're not needed).
- Factor out the "pure"  "moving_average_step" from "moving_average", and
create a new "moving_average_with_dynamic_decay" wrapper (but now we have
to maintain two wrappers - with the duplicated arguments - which starts to
require a lot of maintenance when you're passing down several parameters
(or you can use the dreaded **kwargs).

With approach (B) on the other hand, "moving_average_step" and all the
functions calling it, can stay the same: we just change the way we call it
in this instance to:

moving_average_gen = (average:= moving_average_step(average, x,
decay=decay) for x, decay in zip(signal, decay_schedule) from x=initial)



Now let's imagine this were a more complex function with 10 parameters.  I
see these kind of examples a lot in machine-learning and robotics programs,
where you'll have parameters like "learning rate", "regularization",
"minibatch_size", "maximum_speed", "height_of_camera" which might initially
be considered initialization parameters, but then later it turns out they
need to be changed dynamically.

This is why I think the "(y:=f(y, x) for x in xs from y=initial)" syntax
can lead to cleaner, more maintainable code.
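
For comparison, here is roughly what approach (B) looks like in current
Python with a small generic helper (a hedged sketch; "scan" is just an
illustrative name, not an existing function):

    def scan(func, xs, initial):
        state = initial
        for x in xs:
            state = func(state, x)
            yield state

    # fixed decay:
    #     moving_average_gen = scan(
    #         lambda avg, x: moving_average_step(avg, x, decay=decay), signal, initial)
    # per-step decay, with moving_average_step unchanged:
    #     moving_average_gen = scan(
    #         lambda avg, pair: moving_average_step(avg, pair[0], decay=pair[1]),
    #         zip(signal, decay_schedule), initial)

The proposed comprehension syntax is essentially an inline, anonymous version
of this helper.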



On Wed, Apr 11, 2018 at 10:50 AM, Paul Moore  wrote:

> On 11 April 2018 at 15:37, Peter O'Connor 
> wrote:
>
> > If people are happy with these solutions and still see no need for the
> > initialization syntax, we can stop this, but as I see it there is a
> "hole"
> > in the language that needs to be filled.
>
> Personally, I'm happy with those solutions and see no need for the
> initialisation syntax.
>
> In particular, I'm happiest with the named moving_average() function,
> which may reflect to some extent my lack of familiarity with the
> subject area. I don't *care* how it's implemented internally - an
> explicit loop is fine with me, but if a domain expert wants to be
> clever and use something more complex, I don't need to know. An often
> missed disadvantage of one-liners is that they get put inline, meaning
> that people looking for a higher level overview of what the code does
> get confronted with all the gory details.
>
> Paul
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-11 Thread Peter O'Connor
>
> It's worth adding a reminder here that "having more options on the
> market" is pretty directly in contradiction to the Zen of Python -
> "There should be one-- and preferably only one --obvious way to do
> it".


I've got to start minding my words more.  By "options on the market" I meant
it more in a "candidates for the job" sense.  As in, in the end we'd select
just one, which would in retrospect (or if you're Dutch) seem like the obvious
choice.  Not that "everyone who uses Python should have more ways to do
this".

My reason for starting this is that there isn't "one obvious way" to do
this type of operation now (as the diversity of the exponential-moving-average
"zoo" attests).

--

Let's look at a task where there is "one obvious way"

Suppose someone asks: "How can I build a list of squares of the first 100
odd numbers [1, 9, 25, 49, ...] in Python?"  The answer is now obvious -
few people would do this:

list_of_odd_squares = []
for i in range(100):
    list_of_odd_squares.append((i*2+1)**2)

or this:

def iter_odd_squares(n):
    for i in range(n):
        yield (i*2+1)**2

list_of_odd_squares = list(iter_odd_squares(100))

Because it's just more clean, compact, readable and "obvious" to do:

list_of_odd_squares = [(i*2+1)**2 for i in range(100)]

Maybe I'm being presumptuous, but I think most Python users would agree.

---

Now let's switch our task to computing the exponential moving average of a
list.  This is a stand-in for a HUGE range of tasks that involve carrying
some state-variable forward while producing values.

Some would do this:

smooth_signal = []
average = 0
for x in signal:
    average = (1-decay)*average + decay*x
    smooth_signal.append(average)

Some would do this:

def moving_average(signal, decay, initial=0):
    average = initial
    for x in signal:
        average = (1-decay)*average + decay*x
        yield average

smooth_signal = list(moving_average(signal, decay=decay))

Lovers of one-liners like Serhiy would do this:

smooth_signal = [average for average in [0] for x in signal for average
in [(1-decay)*average + decay*x]]

Some would scoff at the cryptic one-liner and do this:

def update_moving_average(avg, x, decay):
    return (1-decay)*avg + decay*x

smooth_signal = list(itertools.accumulate(itertools.chain([0], signal),
func=functools.partial(update_moving_average, decay=decay)))

And others would scoff at that and make a class, or use coroutines.

--

There've been many suggestions in this thread (all documented here:
https://github.com/petered/peters_example_code/blob/master/peters_example_code/ways_to_skin_a_cat.py)
and that's good, but it seems clear that people do not agree on an
"obvious" way to do things.

I claim that if

smooth_signal = [average := (1-decay)*average + decay*x for x in signal
from average=0.]

Were allowed, it would become the "obvious" way.

Chris Angelico's suggestions are close to this and have the benefit of
requiring no new syntax in a PEP 572 world :

smooth_signal = [(average := (1-decay)*average + decay*x) for average
in [0] for x in signal]
or
smooth_signal = [(average := (1-decay)*(average or 0) + decay*x) for x
in signal]
or
   average = 0
   smooth_signal = [(average := (1-decay)*average + decay*x) for x in
signal]

But they all have oddities that detract from their "obviousness" and the
oddities stem from there not being a built-in way to initialize.  In the
first, there is the odd "for average in [0]" initializer.  The second
relies on a hidden "average = None" which is not obvious at all, and the
third has the problem that the initial value is bound to the defining scope
instead of belonging to the generator.  All seem to have oddly redundant
brackets whose purpose is not obvious, but maybe there's a good reason for
that.

If people are happy with these solutions and still see no need for the
initialization syntax, we can stop this, but as I see it there is a "hole"
in the language that needs to be filled.

On Wed, Apr 11, 2018 at 3:55 AM, Paul Moore  wrote:

> On 11 April 2018 at 04:41, Steven D'Aprano  wrote:
> >> > But in a way that more intuitively expresses the intent of the code,
> it
> >> > would be great to have more options on the market.
> >>
> >> It's worth adding a reminder here that "having more options on the
> >> market" is pretty directly in contradiction to the Zen of Python -
> >> "There should be one-- and preferably only one --obvious way to do
> >> it".
> >
> > I'm afraid I'm going to (mildly) object here. At least you didn't
> > misquote the Zen as "Only One Way To Do It" :-)
> >
> > The Zen here is not a prohibition against there being multiple ways to
> > do something -- how could it, given that Python is a general purpose
> > programming language there is always going to be mult

Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-10 Thread Peter O'Connor
>
> But even I find your use of dysphemisms like "freak show" for non-FP
> solutions quite off-putting.


Ah, I'm sorry, "freak show" was not meant to be disparaging to the authors
or even the code itself, but to describe the variety of strange solutions
(my own included) to this simple problem.

Indeed. But it seems to me that itertools.accumulate() with a initial value
> probably will solve that issue.


Kyle Lahnakoski made a pretty good case for not using
itertools.accumulate() earlier in this thread, and Tim Peters made the
point that its non-initialized behaviour can be extremely unintuitive (try
"print(list(itertools.accumulate([1, 2, 3], lambda x, y: str(x) +
str(y))))").  These convinced me that itertools.accumulate should be
avoided altogether.
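
For reference, a quick check of what that snippet actually prints in current
CPython (the first input element passes through untouched, which is the
surprising part):

    import itertools
    print(list(itertools.accumulate([1, 2, 3], lambda x, y: str(x) + str(y))))
    # prints [1, '12', '123'] - an int followed by strings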

Alternatively, if anyone has a proposed syntax that does the same thing as
Serhiy Storchaka's:

smooth_signal = [average for average in [0] for x in signal for average
in [(1-decay)*average + decay*x]]

But in a way that more intuitively expresses the intent of the code, it
would be great to have more options on the market.



On Tue, Apr 10, 2018 at 1:32 PM, Steven D'Aprano 
wrote:

> On Tue, Apr 10, 2018 at 12:18:27PM -0400, Peter O'Connor wrote:
>
> [...]
> > I added your coroutine to the freak show:
>
> Peter, I realise that you're a fan of functional programming idioms, and
> I'm very sympathetic to that. I'm a fan of judicious use of FP too, and
> while I'm not keen on your specific syntax, I am interested in the
> general concept and would like it to have the best possible case made
> for it.
>
> But even I find your use of dysphemisms like "freak show" for non-FP
> solutions quite off-putting. (I think this is the second time you've
> used the term.)
>
> Python is not a functional programming language like Haskell, it is a
> multi-paradigm language with strong support for OO and procedural
> idioms. Notwithstanding the problems with OO idioms that you describe,
> many Python programmers find OO "better", simpler to understand, learn
> and maintain than FP. Or at least more familiar.
>
> The rejection or approval of features into Python is not a popularity
> contest, ultimately it only requires one person (Guido) to either reject
> or approve a new feature. But popular opinion is not irrelevant either:
> like all benevolent dictators, Guido has a good sense of what's popular,
> and takes it into account in his considerations. If you put people
> off-side, you hurt your chances of having this feature approved.
>
>
> [...]
> > I *almost* like the coroutine thing but find it unusable because the
> > peculiarity of having to initialize the generator when you use it (you do
> > it with next(processor)) is pretty much guaranteed to lead to errors when
> > people forget to do it.  Earlier in the thread Steven D'Aprano showed
> how a
> > @coroutine decorator can get around this:
>
> I agree that the (old-style, pre-async) coroutine idiom is little known,
> in part because of the awkwardness needed to make it work. Nevertheless,
> I think your argument about it leading to errors is overstated: if you
> forget to initialize the coroutine, you get a clear and obvious failure:
>
> py> def co():
> ... x = (yield 1)
> ...
> py> a = co()
> py> a.send(99)
> Traceback (most recent call last):
>   File "", line 1, in 
> TypeError: can't send non-None value to a just-started generator
>
>
>
> > - Still, the whole coroutine thing still feels a bit magical, hacky and
> > "clever".  Also the use of generator.send will probably confuse around
> 90%
> > of programmers.
>
> In my experience, heavy use of FP idioms will probably confuse about the
> same percentage. Including me: I like FP in moderation, I wouldn't want
> to use a strict 100% functional language, and if someone even says the
> word "Monad" I break out in hives.
>
>
>
> > If you have that much of a complex workflow, you really should not make
> > > that a one-liner.
> >
> > It's not a complex workflow, it's a moving average.  It just seems
> complex
> > because we don't have a nice, compact way to describe it.
>
> Indeed. But it seems to me that itertools.accumulate() with a initial
> value probably will solve that issue.
>
> Besides... moving averages aren't that common that they *necessarily*
> need syntactic support. Wrapping the complexity in a function, then
> calling the function, may be an acceptible solution instead of putting
> the complexity directly into the language itself.
>
> The Conservation Of Complexity

Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-10 Thread Peter O'Connor
>
> First, why a class would be a bad thing ? It's clear, easy to
> understand, debug and extend.


- Lots of redundant-looking "frameworky" lines of code: "self._param_1 =
param_1"
- Potential for opaque state changes: Caller doesn't know if
"y=my_object.do_something(x)" has any side-effect, whereas with ("y,
new_state=do_something(state, x)" / "y=do_something(state, x)") it's clear
that there (is / is not).
- Makes more assumptions on usage (should I add "param_1" as an arg to
"StatefulThing.__init__" or to "StatefulThing.update_and_get_output"?)


> And before trying to ask for a new syntax in the language, try to solve
> the problem with the existing tools.


Oh I have, and of course there are ways but I find them all clunkier than
needed.  I added your coroutine to the freak show:
https://github.com/petered/peters_example_code/blob/master/peters_example_code/ways_to_skin_a_cat.py#L106


> processor = stateful_thing(1, 1, 4)
> next(processor)
> processed_things = [processor.send(x) for x in x_gen]


I *almost* like the coroutine thing but find it unusable because the
peculiarity of having to initialize the generator when you use it (you do
it with next(processor)) is pretty much guaranteed to lead to errors when
people forget to do it.  Earlier in the thread Steven D'Aprano showed how a
@coroutine decorator can get around this:
https://github.com/petered/peters_example_code/blob/master/peters_example_code/ways_to_skin_a_cat.py#L63
- Still, the whole coroutine thing still feels a bit magical, hacky and
"clever".  Also the use of generator.send will probably confuse around 90%
of programmers.
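
For anyone who hasn't seen it, the decorator in question is roughly the usual
"auto-priming" recipe - a hedged sketch from memory, not necessarily the exact
code in that file:

    from functools import wraps

    def coroutine(func):
        @wraps(func)
        def primed(*args, **kwargs):
            gen = func(*args, **kwargs)
            next(gen)  # advance to the first yield so .send() works right away
            return gen
        return primed

    @coroutine
    def moving_average_co(decay, initial=0.0):
        average = initial
        x = yield  # the first send() delivers the first observation
        while True:
            average = (1 - decay) * average + decay * x
            x = yield average

    processor = moving_average_co(decay=0.05)
    smooth_signal = [processor.send(x) for x in signal]

It removes the forgotten-next() failure mode, but the send()-based protocol is
still there underneath.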

If you have that much of a complex workflow, you really should not make
> that a one-liner.


It's not a complex workflow, it's a moving average.  It just seems complex
because we don't have a nice, compact way to describe it.

I've been trying to get slicing on generators and inline try/except on
> this mailing list for years and I've been said no again and again. It's
> hard. But it's also why Python stayed sane for decades.


Hey I'll support your campaign if you support mine.




On Tue, Apr 10, 2018 at 4:18 AM, Michel Desmoulin  wrote:

>
>
> Le 10/04/2018 à 00:54, Peter O'Connor a écrit :
> > Kyle, you sounded so reasonable when you were trashing
> > itertools.accumulate (which I now agree is horrible).  But then you go
> > and support Serhiy's madness:  "smooth_signal = [average for average in
> > [0] for x in signal for average in [(1-decay)*average + decay*x]]" which
> > I agree is clever, but reads more like a riddle than readable code.
> >
> > Anyway, I continue to stand by:
> >
> > (y:= f(y, x) for x in iter_x from y=initial_y)
> >
> > And, if that's not offensive enough, to its extension:
> >
> > (z, y := f(z, x) -> y for x in iter_x from z=initial_z)
> >
> > Which carries state "z" forward but only yields "y" at each iteration.
> > (see proposal: https://github.com/petered/peps/blob/master/pep-.rst)
> >
> > Why am I so obsessed?  Because it will allow you to conveniently replace
> > classes with more clean, concise, functional code.  People who thought
> > they never needed such a construct may suddenly start finding it
> > indispensable once they get used to it.
> >
> > How many times have you written something of the form?:
> >
> > class StatefulThing(object):
> >
> > def __init__(self, initial_state, param_1, param_2):
> > self._param_1= param_1
> > self._param_2 = param_2
> > self._state = initial_state
> >
> > def update_and_get_output(self, new_observation):  # (or just
> > __call__)
> > self._state = do_some_state_update(self._state,
> > new_observation, self._param_1)
> > output = transform_state_to_output(self._state,
> self._param_2)
> > return output
> >
> > processor = StatefulThing(initial_state = initial_state, param_1 =
> > 1, param_2 = 4)
> > processed_things = [processor.update_and_get_output(x) for x in
> x_gen]
> >
> > I've done this many times.  Video encoding, robot controllers, neural
> > networks, any iterative machine learning algorithm, and probably lots of
> > things I don't know about - they all tend to have this general form.
> >
>
> Personally I never have to do that very often. But let's say for the
> sake of the argument there is a class of probl

Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Peter O'Connor
* correction to brackets from first example:

def iter_cumsum_tolls_from_day(day, toll_amount_so_far):
    return accumulate(get_tolls_from_day(day), initial=toll_amount_so_far)


On Mon, Apr 9, 2018 at 11:55 PM, Peter O'Connor 
wrote:

> Ok, so it seems everyone's happy with adding an initial_value argument.
>
> Now, I claim that while it should be an option, the initial value should
> NOT be returned by default.  (i.e. the returned generator should by default
> yield N elements, not N+1).
>
> Example: suppose we're doing the toll booth thing, and we want to yield a
> cumulative sum of tolls so far.  Suppose someone already made a
> reasonable-looking generator yielding the cumulative sum of tolls for today:
>
> def iter_cumsum_tolls_from_day(day, toll_amount_so_far):
> return accumulate(get_tolls_from_day(day, initial=toll_amount_so_far))
>
> And now we want to make a way to get all tolls from the month.  One might
> reasonably expect this to work:
>
> def iter_cumsum_tolls_from_month(month, toll_amount_so_far):
> for day in month:
> for cumsum_tolls in iter_cumsum_tolls_from_day(day,
> toll_amount_so_far = toll_amount_so_far):
> yield cumsum_tolls
> toll_amount_so_far = cumsum_tolls
>
> But this would actually DUPLICATE the last toll of every day - it appears
> both as the last element of the day's generator and as the first element of
> the next day's generator.
>
> This is why I think that there should be an additional "
> include_initial_in_return=False" argument.  I do agree that it should be
> an option to include the initial value (your "find tolls over time-span"
> example shows why), but that if you want that you should have to show that
> you thought about that by specifying "include_initial_in_return=True"
>
>
>
>
>
> On Mon, Apr 9, 2018 at 10:30 PM, Tim Peters  wrote:
>
>> [Tim]
>> >> while we have N numbers, there are N+1 slice indices.  So
>> >> accumulate(xs) doesn't quite work.  It needs to also have a 0 inserted
>> >> as the first prefix sum (the empty prefix sum(xs[:0]).
>> >>
>> >> Which is exactly what a this_is_the_initial_value=0 argument would do
>> >> for us.
>>
>> [Greg Ewing ]
>> > In this case, yes. But that still doesn't mean it makes
>> > sense to require the initial value to be passed *in* as
>> > part of the input sequence.
>> >
>> > Maybe the best idea is for the initial value to be a
>> > separate argument, but be returned as the first item in
>> > the list.
>>
>> I'm not sure you've read all the messages in this thread, but that's
>> exactly what's being proposed.  That. e.g., a new optional argument:
>>
>> accumulate(xs, func, initial=S)
>>
>> act like the current
>>
>>  accumulate(chain([S], xs), func)
>>
>> Note that in neither case is the original `xs` modified in any way,
>> and in both cases the first value generated is S.
>>
>> Note too that the proposal is exactly the way Haskell's `scanl` works
>> (although `scanl` always requires specifying an initial value - while
>> the related `scanl1` doesn't allow specifying one).
>>
>> And that's all been so since the thread's first message, in which
>> Raymond gave a proposed implementation:
>>
>> _sentinel = object()
>>
>> def accumulate(iterable, func=operator.add, start=_sentinel):
>> it = iter(iterable)
>> if start is _sentinel:
>> try:
>> total = next(it)
>> except StopIteration:
>> return
>> else:
>> total = start
>> yield total
>> for element in it:
>> total = func(total, element)
>> yield total
>>
>> > I can think of another example where this would make
>> > sense. Suppose you have an initial bank balance and a
>> > list of transactions, and you want to produce a statement
>> > with a list of running balances.
>> >
>> > The initial balance and the list of transactions are
>> > coming from different places, so the most natural way
>> > to call it would be
>> >
>> >result = accumulate(transactions, initial = initial_balance)
>> >
>> > If the initial value is returned as item 0, then the
>> > result has the following properties:
>> >
>> >result[0] is

Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Peter O'Connor
Ok, so it seems everyone's happy with adding an initial_value argument.

Now, I claim that while it should be an option, the initial value should
NOT be returned by default.  (i.e. the returned generator should by default
yield N elements, not N+1).

Example: suppose we're doing the toll booth thing, and we want to yield a
cumulative sum of tolls so far.  Suppose someone already made a
reasonable-looking generator yielding the cumulative sum of tolls for today:

def iter_cumsum_tolls_from_day(day, toll_amount_so_far):
    return accumulate(get_tolls_from_day(day, initial=toll_amount_so_far))

And now we want to make a way to get all tolls from the month.  One might
reasonably expect this to work:

def iter_cumsum_tolls_from_month(month, toll_amount_so_far):
    for day in month:
        for cumsum_tolls in iter_cumsum_tolls_from_day(day,
                toll_amount_so_far=toll_amount_so_far):
            yield cumsum_tolls
            toll_amount_so_far = cumsum_tolls

But this would actually DUPLICATE the last toll of every day - it appears
both as the last element of the day's generator and as the first element of
the next day's generator.
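
To make the duplication concrete, here is a tiny self-contained sketch (using
chain([initial], ...) as a stand-in for the proposed initial= argument; the
toll numbers are made up):

    from itertools import accumulate, chain

    def cumsum_with_initial(tolls, initial):
        # stand-in for an accumulate() that also yields the initial value
        return accumulate(chain([initial], tolls))

    day1, day2 = [1, 2], [3, 4]
    total = 0
    month_cumsum = []
    for day in (day1, day2):
        day_cumsum = list(cumsum_with_initial(day, total))
        month_cumsum.extend(day_cumsum)
        total = day_cumsum[-1]
    print(month_cumsum)  # [0, 1, 3, 3, 6, 10] - the 3 shows up twice

The 3 at the end of day one reappears at the start of day two, which is
exactly the duplication described above.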

This is why I think that there should be an additional "
include_initial_in_return=False" argument.  I do agree that it should be an
option to include the initial value (your "find tolls over time-span"
example shows why), but that if you want that you should have to show that
you thought about that by specifying "include_initial_in_return=True"





On Mon, Apr 9, 2018 at 10:30 PM, Tim Peters  wrote:

> [Tim]
> >> while we have N numbers, there are N+1 slice indices.  So
> >> accumulate(xs) doesn't quite work.  It needs to also have a 0 inserted
> >> as the first prefix sum (the empty prefix sum(xs[:0]).
> >>
> >> Which is exactly what a this_is_the_initial_value=0 argument would do
> >> for us.
>
> [Greg Ewing ]
> > In this case, yes. But that still doesn't mean it makes
> > sense to require the initial value to be passed *in* as
> > part of the input sequence.
> >
> > Maybe the best idea is for the initial value to be a
> > separate argument, but be returned as the first item in
> > the list.
>
> I'm not sure you've read all the messages in this thread, but that's
> exactly what's being proposed.  That. e.g., a new optional argument:
>
> accumulate(xs, func, initial=S)
>
> act like the current
>
>  accumulate(chain([S], xs), func)
>
> Note that in neither case is the original `xs` modified in any way,
> and in both cases the first value generated is S.
>
> Note too that the proposal is exactly the way Haskell's `scanl` works
> (although `scanl` always requires specifying an initial value - while
> the related `scanl1` doesn't allow specifying one).
>
> And that's all been so since the thread's first message, in which
> Raymond gave a proposed implementation:
>
> _sentinel = object()
>
> def accumulate(iterable, func=operator.add, start=_sentinel):
> it = iter(iterable)
> if start is _sentinel:
> try:
> total = next(it)
> except StopIteration:
> return
> else:
> total = start
> yield total
> for element in it:
> total = func(total, element)
> yield total
>
> > I can think of another example where this would make
> > sense. Suppose you have an initial bank balance and a
> > list of transactions, and you want to produce a statement
> > with a list of running balances.
> >
> > The initial balance and the list of transactions are
> > coming from different places, so the most natural way
> > to call it would be
> >
> >result = accumulate(transactions, initial = initial_balance)
> >
> > If the initial value is returned as item 0, then the
> > result has the following properties:
> >
> >result[0] is the balance brought forward
> >result[-1] is the current balance
> >
> > and this remains true in the corner case where there are
> > no transactions.
>
> Indeed, something quite similar often applies when parallelizing
> search loops of the form:
>
>  for candidate in accumulate(chain([starting_value], cycle(deltas))):
>
> For a sequence that eventually becomes periodic in the sequence of
> deltas it cycles through, multiple processes can run independent
> searches starting at carefully chosen different starting values "far"
> apart.  In effect, they're each a "balance brought forward" pretending
> that previous chunks have already been done.
>
> Funny:  it's been weeks now since I wrote an accumulate() that
> _didn't_ want to specify a starting value - LOL ;-)
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list
Python-ideas@python.org
https:/

Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-09 Thread Peter O'Connor
Kyle, you sounded so reasonable when you were trashing itertools.accumulate
(which I now agree is horrible).  But then you go and support Serhiy's
madness:  "smooth_signal = [average for average in [0] for x in signal for
average in [(1-decay)*average + decay*x]]" which I agree is clever, but
reads more like a riddle than readable code.

Anyway, I continue to stand by:

(y:= f(y, x) for x in iter_x from y=initial_y)

And, if that's not offensive enough, to its extension:

(z, y := f(z, x) -> y for x in iter_x from z=initial_z)

Which carries state "z" forward but only yields "y" at each iteration.
(see proposal: https://github.com/petered/peps/blob/master/pep-.rst)

Why am I so obsessed?  Because it will allow you to conveniently replace
classes with more clean, concise, functional code.  People who thought they
never needed such a construct may suddenly start finding it indispensable
once they get used to it.

How many times have you written something of the form?:

class StatefulThing(object):

    def __init__(self, initial_state, param_1, param_2):
        self._param_1 = param_1
        self._param_2 = param_2
        self._state = initial_state

    def update_and_get_output(self, new_observation):  # (or just __call__)
        self._state = do_some_state_update(self._state, new_observation,
                                           self._param_1)
        output = transform_state_to_output(self._state, self._param_2)
        return output

processor = StatefulThing(initial_state=initial_state, param_1=1, param_2=4)
processed_things = [processor.update_and_get_output(x) for x in x_gen]

I've done this many times.  Video encoding, robot controllers, neural
networks, any iterative machine learning algorithm, and probably lots of
things I don't know about - they all tend to have this general form.

And how many times have I had issues like "Oh no now I want to change
param_1 on the fly instead of just setting it on initialization, I guess I
have to refactor all usages of this class to pass param_1 into
update_and_get_output instead of __init__".

What if instead I could just write:

def update_and_get_output(last_state, new_observation, param_1, param_2):
    new_state = do_some_state_update(last_state, new_observation, param_1)
    output = transform_state_to_output(new_state, param_2)
    return new_state, output

processed_things = [state, output:= update_and_get_output(state, x,
param_1=1, param_2=4) -> output for x in observations from
state=initial_state]

Now we have:
- No mutable objects (which cuts of a whole slew of potential bugs and
anti-patterns familiar to people who do OOP.)
- Fewer lines of code
- Looser assumptions on usage and less refactoring. (if I want to now pass
in param_1 at each iteration instead of just initialization, I need to make
no changes to update_and_get_output).
- No need for state getters/setters, since state is is passed around
explicitly.

I realize that calling for changes to syntax is a lot to ask - but I still
believe that the main objections to this syntax would also have been raised
as objections to the now-ubiquitous list-comprehensions - they seem hostile
and alien-looking at first, but very lovable once you get used to them.




On Sun, Apr 8, 2018 at 1:41 PM, Kyle Lahnakoski 
wrote:

>
>
> On 2018-04-05 21:18, Steven D'Aprano wrote:
> > (I don't understand why so many people have such an aversion to writing
> > functions and seek to eliminate them from their code.)
> >
>
> I think I am one of those people that have an aversion to writing
> functions!
>
> I hope you do not mind that I attempt to explain my aversion here. I
> want to clarify my thoughts on this, and maybe others will find
> something useful in this explanation, maybe someone has wise words for
> me. I think this is relevant to python-ideas because someone with this
> aversion will make different language suggestions than those that don't.
>
> Here is why I have an aversion to writing functions: Every unread
> function represents multiple unknowns in the code. Every function adds
> to code complexity by mapping an inaccurate name to specific
> functionality.
>
> When I read code, this is what I see:
>
> >x = you_will_never_guess_how_corner_cases_are_handled(a, b, c)
> >y =
> you_dont_know_I_throw_a_BaseException_when_I_do_not_like_your_arguments(j,
> k, l)
>
> Not everyone sees code this way: I see people read method calls, make a
> number of wild assumptions about how those methods work, AND THEY ARE
> CORRECT!  How do they do it!?  It is as if there is some unspoken
> convention about how code should work that's opaque to me.
>
> For example before I read the docs on
> itertools.accumulate(list_of_length_N, func), here are the unknowns I see:
>
> * Does it return N, or N-1 values?
> * How are initial conditions handled?
> * Must `func` perform the initialization by accepting just one
> parameter, and accumulate with more-than-one parameter?
> 

Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Peter O'Connor
Also Tim Peter's one-line example of:

print(list(itertools.accumulate([1, 2, 3], lambda x, y: str(x) + str(y))))

I think makes it clear that itertools.accumulate is not the right vehicle
for this change - we should make a new itertools function with a required
"initial" argument.

On Mon, Apr 9, 2018 at 1:44 PM, Peter O'Connor 
wrote:

> It seems clear that the name "accumulate" has been kind of antiquated
> since the "func" argument was added and "sum" became just a default.
>
> And people seem to disagree about whether the result should have a length
> N or length N+1 (where N is the number of elements in the input iterable).
>
> The behaviour where the first element of the return is the same as the
> first element of the input can be weird and confusing.  E.g. compare:
>
> >> list(itertools.accumulate([2, 3, 4], lambda accum, val: accum-val))
> [2, -1, -5]
> >> list(itertools.accumulate([2, 3, 4], lambda accum, val: val-accum))
> [2, 1, 3]
>
> One might expect that since the second function returned the negative of
> the first function, and both are linear, that the results of the second
> would be the negative of the first, but that is not the case.
>
> Maybe we can instead let "accumulate" fall into deprecation, and instead
> add a new more general itertools "reducemap" method:
>
> def reducemap(iterable: Iterable[Any], func: Callable[[Any, Any], Any],
> initial: Any, include_initial_in_return=False) -> Generator[Any]:
>
> Benefits:
> - The name is more descriptive of the operation (a reduce operation where
> we keep values at each step, like a map)
> - The existence of include_initial_in_return=False makes it somewhat
> clear that the initial value will by default NOT be provided in the
> returning generator
> - The mandatory initial argument forces you to think about initial
> conditions.
>
> Disadvantages:
> - The most common use case (summation, product) has a "natural" first
> element (0 and 1, respectively), which you'd now be required to write out
> (but we could just leave accumulate for sum).
>
> I still prefer a built-in language comprehension syntax for this like: (y
> := f(y, x) for x in x_vals from y=0), but for a huge discussion on that see
> the other thread.
>
> --- More Examples (using "accumulate" as the name for now)  ---
>
> # Kalman filters
> def kalman_filter_update(state, measurement):
> ...
> return state
>
> online_trajectory_estimate = accumulate(measurement_generator, func=
> kalman_filter_update, initial = initial_state)
>
> ---
>
> # Bayesian stats
> def update_model(prior, evidence):
>...
>return posterior
>
> model_history  = accumulate(evidence_generator, func=update_model,
> initial = prior_distribution)
>
> ---
>
> # Recurrent Neural networks:
> def recurrent_network_layer_step(last_hidden, current_input):
> new_hidden = 
> return new_hidden
>
> hidden_state_generator = accumulate(input_sequence, func=
> recurrent_network_layer_step, initial = initial_hidden_state)
>
>
>
>
> On Mon, Apr 9, 2018 at 7:14 AM, Nick Coghlan  wrote:
>
>> On 9 April 2018 at 14:38, Raymond Hettinger 
>> wrote:
>> >> On Apr 8, 2018, at 6:43 PM, Tim Peters  wrote:
>> >> In short, for _general_ use `accumulate()` needs `initial` for exactly
>> >> the same reasons `reduce()` needed it.
>> >
>> > The reduce() function had been much derided, so I've had it mentally
>> filed in the anti-pattern category.  But yes, there may be wisdom there.
>>
>> Weirdly (or perhaps not so weirdly, given my tendency to model
>> computational concepts procedurally), I find the operation of reduce()
>> easier to understand when it's framed as "last(accumulate(iterable,
>> binop, initial=value)))".
>>
>> Cheers,
>> Nick.
>>
>> --
>> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>> ___
>> Python-ideas mailing list
>> Python-ideas@python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
>
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Peter O'Connor
It seems clear that the name "accumulate" has been kind of antiquated since
the "func" argument was added and "sum" became just a default.

And people seem to disagree about whether the result should have a length N
or length N+1 (where N is the number of elements in the input iterable).

The behaviour where the first element of the return is the same as the
first element of the input can be weird and confusing.  E.g. compare:

>> list(itertools.accumulate([2, 3, 4], lambda accum, val: accum-val))
[2, -1, -5]
>> list(itertools.accumulate([2, 3, 4], lambda accum, val: val-accum))
[2, 1, 3]

One might expect that since the second function returned the negative of
the first function, and both are linear, that the results of the second
would be the negative of the first, but that is not the case.

Maybe we can instead let "accumulate" fall into deprecation, and instead
add a new more general itertools "reducemap" method:

def reducemap(iterable: Iterable[Any], func: Callable[[Any, Any], Any],
initial: Any, include_initial_in_return=False) -> Generator[Any]:

Benefits:
- The name is more descriptive of the operation (a reduce operation where
we keep values at each step, like a map)
- The existence of include_initial_in_return=False makes it somewhat clear
that the initial value will by default NOT be provided in the returning
generator
- The mandatory initial argument forces you to think about initial
conditions.

Disadvantages:
- The most common use case (summation, product) has a "natural" first
element (0 and 1, respectively), which you'd now be required to write out
(but we could just leave accumulate for sum).
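
For concreteness, a minimal sketch of how such a reducemap could behave
(hypothetical - this is not an existing itertools function):

    from typing import Any, Callable, Iterable, Iterator

    def reducemap(iterable: Iterable[Any], func: Callable[[Any, Any], Any],
                  initial: Any, include_initial_in_return: bool = False) -> Iterator[Any]:
        # like functools.reduce, but yields every intermediate value (a "scan")
        total = initial
        if include_initial_in_return:
            yield total
        for element in iterable:
            total = func(total, element)
            yield total

    # e.g. list(reducemap([2, 3, 4], lambda acc, val: acc - val, initial=0)) == [-2, -5, -9]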

I still prefer a built-in language comprehension syntax for this like: (y
:= f(y, x) for x in x_vals from y=0), but for a huge discussion on that see
the other thread.

--- More Examples (using "accumulate" as the name for now)  ---

# Kalman filters
def kalman_filter_update(state, measurement):
    ...
    return state

online_trajectory_estimate = accumulate(measurement_generator, func=
kalman_filter_update, initial = initial_state)

---

# Bayesian stats
def update_model(prior, evidence):
    ...
    return posterior

model_history  = accumulate(evidence_generator, func=update_model, initial
= prior_distribution)

---

# Recurrent Neural networks:
def recurrent_network_layer_step(last_hidden, current_input):
    new_hidden = 
    return new_hidden

hidden_state_generator = accumulate(input_sequence, func=
recurrent_network_layer_step, initial = initial_hidden_state)




On Mon, Apr 9, 2018 at 7:14 AM, Nick Coghlan  wrote:

> On 9 April 2018 at 14:38, Raymond Hettinger 
> wrote:
> >> On Apr 8, 2018, at 6:43 PM, Tim Peters  wrote:
> >> In short, for _general_ use `accumulate()` needs `initial` for exactly
> >> the same reasons `reduce()` needed it.
> >
> > The reduce() function had been much derided, so I've had it mentally
> filed in the anti-pattern category.  But yes, there may be wisdom there.
>
> Weirdly (or perhaps not so weirdly, given my tendency to model
> computational concepts procedurally), I find the operation of reduce()
> easier to understand when it's framed as "last(accumulate(iterable,
> binop, initial=value)))".
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-06 Thread Peter O'Connor
Seems to me it's much more obvious that "name:=expression" is assigning
expression to name than "name!expression".  The ! is also confusing because
"!=" means "not equals", so the "!" symbol is already sort of associated
with "not"

On Fri, Apr 6, 2018 at 11:27 AM, Cammil Taank  wrote:

> I'm not sure if my suggestion for 572 has been considered:
>
> ``name! expression``
>
> I'm curious what the pros and cons of this form would be (?).
>
> My arguments for were in a previous message but there do not seem to be
> any responses to it.
>
> Cammil
>
> On Fri, 6 Apr 2018, 16:14 Guido van Rossum,  wrote:
>
>> On Fri, Apr 6, 2018 at 7:47 AM, Peter O'Connor <
>> peter.ed.ocon...@gmail.com> wrote:
>>
>>> Hi all, thank you for the feedback.  I laughed, I cried, and I learned.
>>>
>>
>> You'll be a language designer yet. :-)
>>
>>
>>> However, it looks like I'd be fighting a raging current if I were to
>>> try and push this proposal.  It's also encouraging that most of the work
>>> would be done anyway if ("Statement Local Name Bindings") thread passes.
>>> So some more humble proposals would be:
>>>
>>> 1) An initializer to itertools.accumulate
>>> functools.reduce already has an initializer, I can't see any controversy
>>> to adding an initializer to itertools.accumulate
>>>
>>
>> See if that's accepted in the bug tracker.
>>
>>
>>> 2) Assignment returns a value (basically what's already in the "Statement
>>> local name bindings" discussion)
>>> `a=f()` returns a value of a
>>> This would allow updating variables in a generator (I don't see the need
>>> for ":=" or "f() as a") but that's another discussion
>>>
>>
>> Please join the PEP 572 discussion. The strongest contender currently is
>> `a := f()` and for good reasons.
>>
>> --
>> --Guido van Rossum (python.org/~guido)
>> ___
>> Python-ideas mailing list
>> Python-ideas@python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-06 Thread Peter O'Connor
Ah, ok, I suppose that could easily lead to typo-bugs.  Ok, then I agree
that "a:=f()" returning a is better

On Fri, Apr 6, 2018 at 10:53 AM, Eric Fahlgren 
wrote:

> On Fri, Apr 6, 2018 at 7:47 AM, Peter O'Connor  > wrote:
>
>> 3) The idea that an assignment operation "a = f()" returns a value (a) is
>> already consistent with the "chained assignment" syntax of "b=a=f()" (which
>> can be thought of as "b=(a=f())").  I don't know why we feel the need for
>> new constructs like "(a:=f())" or "(f() as a)" when we could just think of
>> assignments as returning values (unless that breaks something that I'm not
>> aware of)
>>
>
> ​Consider
>
> >>> if x = 1:
> >>> print("What did I just do?")​
>
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-06 Thread Peter O'Connor
Hi all, thank you for the feedback.  I laughed, I cried, and I learned.

I looked over all your suggestions and recreated them here:
https://github.com/petered/peters_example_code/blob/master/peters_example_code/ways_to_skin_a_cat.py


I still favour my (y = f(y, x) for x in xs from y=initializer) syntax for a
few reasons:

1) By adding an "initialized generator" as a special language construct,
we could add a "last" builtin (similar to "next") so that
"last(initialized_generator)" returns the initializer if the
initialized_generator yields no values (and thus replaces reduce) - see the
sketch after this list.

2) Declaring the initial value as part of the generator lets us pass the
generator around so it can be run in other scopes without it keeping
alive the scope it's defined in, and bringing up awkward questions like
"What if the initializer variable in the scope that created the generator
changes after the generator is defined but before it is used?"

3) The idea that an assignment operation "a = f()" returns a value (a) is
already consistent with the "chained assignment" syntax of "b=a=f()" (which
can be thought of as "b=(a=f())").  I don't know why we feel the need for
new constructs like "(a:=f())" or "(f() as a)" when we could just think of
assignments as returning values (unless that breaks something that I'm not
aware of)
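
To make reason (1) concrete, here is a hedged sketch of the semantics such a
"last" builtin could have (hypothetical - no such builtin exists today;
"default" stands in for the initializer of an initialized generator):

    _MISSING = object()

    def last(iterable, default=_MISSING):
        result = default
        for result in iterable:
            pass
        if result is _MISSING:
            raise ValueError("last() of an empty iterable with no default")
        return result

    # last((x*2 for x in range(5)), default=0) == 8
    # last((x for x in []), default=0) == 0   # behaves like reduce with an initializer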

However, it looks like I'd be fighting a raging current if I were to try
and push this proposal.  It's also encouraging that most of the work would
be done anyway if the ("Statement Local Name Bindings") thread passes.  So some
more humble proposals would be:

1) An initializer to itertools.accumulate
functools.reduce already has an initializer; I can't see any controversy in
adding an initializer to itertools.accumulate.

2) Assignment returns a value (basically what's already in the "Statement
local name bindings" discussion)
`a=f()` returns a value of a
This would allow updating variables in a generator (I don't see the need
for ":=" or "f() as a") but that's another discussion

Is there any interest (or disagreement) to these more humble proposals?

- Peter


On Fri, Apr 6, 2018 at 2:19 AM, Serhiy Storchaka 
wrote:

> 05.04.18 19:52, Peter O'Connor wrote:
>
>> I propose a new "Reduce-Map" comprehension that allows us to write:
>>
>> signal = [math.sin(i*0.01) + random.normalvariate(0, 0.1) for i in
>> range(1000)]
>> smooth_signal = [average = (1-decay)*average + decay*x for x in signal from
>> average=0.]
>>
>
> Using currently supported syntax:
>
> smooth_signal = [average for average in [0] for x in signal
>  for average in [(1-decay)*average + decay*x]]
>
>
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-05 Thread Peter O'Connor
Well, whether you factor out the loop-function is a separate issue.  Lets
say we do:

smooth_signal = [average = compute_avg(average, x) for x in signal from
average=0]

Is just as readable and maintainable as your expanded version, but saves 4
lines of code.  What's not to love?





On Thu, Apr 5, 2018 at 5:55 PM, Paul Moore  wrote:

> On 5 April 2018 at 22:26, Peter O'Connor 
> wrote:
> > I find this a bit awkward, and maintain that it would be nice to have
> this
> > as a built-in language construct to do this natively.  You have to admit:
> >
> > smooth_signal = [average = (1-decay)*average + decay*x for x in
> signal
> > from average=0.]
> >
> > Is a lot cleaner and more intuitive than:
> >
> > def compute_avg(avg, x):
> > return (1 - decay)*avg + decay * x
> >
> > smooth_signal =
> > itertools.islice(itertools.accumulate(itertools.chain([initial_average],
> > signal), compute_avg), 1, None)
>
> Not really, I don't... In fact, factoring out compute_avg() is the
> first step I'd take in converting the proposed syntax into something
> I'd find readable and maintainable. (It's worth remembering that when
> you understand the subject of the code very well, it's a lot easier to
> follow complex constructs, than when you're less familiar with it -
> and the person who's unfamiliar with it could easily be you in a few
> months).
>
> The string of itertools functions are *not* readable, but I'd fix that
> by expanding them into an explicit loop:
>
> smooth_signal = []
> average = 0
> for x in signal:
> average = compute_avg(average, x)
> smooth_signal.append(average)
>
> If I have that wrong, it's because I misread *both* the itertools
> calls *and* the proposed syntax. But I doubt anyone would claim that
> it's possible to misunderstand the explicit loop.
>
> > Moreover, if added with the "last" builtin proposed in the link, it could
> > also kill the need for reduce, as you could instead use:
> >
> > last_smooth_signal = last(average = (1-decay)*average + decay*x for x
> > in signal from average=0.)
>
> last_smooth_signal = 0
> for x in signal:
> last_smooth_signal = compute_avg(last_smooth_signal, x)
>
> or functools.reduce(compute_avg, signal, 0), if you prefer reduce() -
> I'm not sure I do.
>
> Sorry, this example has pretty much confirmed for me that an explicit
> loop is *far* more readable.
>
> Paul.
>
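
For the record, a quick self-contained check, using the thread's example
data, that Paul's explicit loop and the itertools pipeline it replaces
produce the same result:

    import itertools
    import math
    import random

    decay = 0.05
    initial_average = 0.0
    signal = [math.sin(i * 0.01) + random.normalvariate(0, 0.1)
              for i in range(1000)]

    def compute_avg(avg, x):
        return (1 - decay) * avg + decay * x

    # The explicit loop:
    smooth_signal = []
    average = initial_average
    for x in signal:
        average = compute_avg(average, x)
        smooth_signal.append(average)

    # The itertools pipeline:
    via_itertools = list(itertools.islice(
        itertools.accumulate(itertools.chain([initial_average], signal),
                             compute_avg),
        1, None))

    assert smooth_signal == via_itertools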
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-05 Thread Peter O'Connor
Ah, that's nice; I didn't know that itertools.accumulate now has an
optional "func" parameter.  Although to get the exact same behaviour
(output the same length as the input) you'd actually have to do:

    smooth_signal = itertools.islice(
        itertools.accumulate([initial_average] + signal, compute_avg),
        1, None)

And you'd also have to use itertools.chain to concatenate the
initial_average to the rest if "signal" were a generator instead of a list,
so the fully general version would be:

smooth_signal = itertools.islice(
    itertools.accumulate(itertools.chain([initial_average], signal),
                         compute_avg),
    1, None)

I find this a bit awkward, and maintain that it would be nice to have this
as a built-in language construct to do this natively.  You have to admit:

smooth_signal = [average = (1-decay)*average + decay*x for x in signal
from average=0.]

Is a lot cleaner and more intuitive than:

def compute_avg(avg, x):
    return (1 - decay)*avg + decay * x

smooth_signal = itertools.islice(
    itertools.accumulate(itertools.chain([initial_average], signal),
                         compute_avg),
    1, None)

Moreover, if added with the "last" builtin proposed in the link, it could
also kill the need for reduce, as you could instead use:

last_smooth_signal = last(average = (1-decay)*average + decay*x for x
in signal from average=0.)
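
A minimal sketch of what such a `last` helper could look like (hypothetical;
it is not an existing builtin), ignoring the proposed comprehension syntax
and just exhausting an arbitrary iterable:

    _SENTINEL = object()

    def last(iterable):
        result = _SENTINEL
        for result in iterable:
            pass
        if result is _SENTINEL:
            raise ValueError("last() argument is an empty iterable")
        return result

    assert last(range(5)) == 4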



On Thu, Apr 5, 2018 at 1:48 PM, Clint Hepner  wrote:

>
> > On 2018 Apr 5 , at 12:52 p, Peter O'Connor 
> wrote:
> >
> > Dear all,
> >
> > In Python, I often find myself building lists where each element
> > depends on the last.  This generally means creating an initial list and
> > appending to it in a for-loop, or writing a generator function.  Both of
> > these feel more verbose than necessary.
> >
> > I was thinking it would be nice to be able to encapsulate this common
> type of operation into a more compact comprehension.
> >
> > I propose a new "Reduce-Map" comprehension that allows us to write:
> > signal = [math.sin(i*0.01) + random.normalvariate(0, 0.1) for i in
> range(1000)]
> > smooth_signal = [average = (1-decay)*average + decay*x for x in signal
> from average=0.]
> > Instead of:
> > def exponential_moving_average(signal: Iterable[float], decay: float,
> >                                initial_value: float=0.):
> >     average = initial_value
> >     for xt in signal:
> >         average = (1-decay)*average + decay*xt
> >         yield average
> >
> > signal = [math.sin(i*0.01) + random.normalvariate(0, 0.1) for i in
> range(1000)]
> > smooth_signal = list(exponential_moving_average(signal, decay=0.05))
> > I've created a complete proposal at: https://github.com/petered/
> peps/blob/master/pep-.rst , (and a pull-request) and I'd be
> interested to hear what people think of this idea.
> >
> > Combined with the new "last" builtin discussed in the proposal, this
> would allow us to replace "reduce" with a more Pythonic comprehension-style
> syntax.
>
>
> See itertools.accumulate, comparing the rough implementation in the docs
> to your exponential_moving_average function:
>
> signal = [math.sin(i*0.01) + random.normalvariate(0,0.1) for i in
> range(1000)]
>
> def compute_avg(avg, x):
>     return (1 - decay)*avg + decay * x
>
> smooth_signal = accumulate([initial_average] + signal, compute_avg)
>
> --
> Clint
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-05 Thread Peter O'Connor
Dear all,

In Python, I often find myself building lists where each element depends on
the last.  This generally means creating an initial list and appending to it
in a for-loop, or writing a generator function.  Both of these feel more
verbose than necessary.

I was thinking it would be nice to be able to encapsulate this common type
of operation into a more compact comprehension.

I propose a new "Reduce-Map" comprehension that allows us to write:

signal = [math.sin(i*0.01) + random.normalvariate(0, 0.1) for i in range(1000)]
smooth_signal = [average = (1-decay)*average + decay*x for x in signal
from average=0.]

Instead of:

def exponential_moving_average(signal: Iterable[float], decay: float,
                               initial_value: float=0.):
    average = initial_value
    for xt in signal:
        average = (1-decay)*average + decay*xt
        yield average

signal = [math.sin(i*0.01) + random.normalvariate(0, 0.1) for i in range(1000)]
smooth_signal = list(exponential_moving_average(signal, decay=0.05))

I've created a complete proposal at:
https://github.com/petered/peps/blob/master/pep-.rst (and a pull-request),
and I'd be interested to hear what people think of this idea.

Combined with the new "last" builtin discussed in the proposal, this would
allow us to replace "reduce" with a more Pythonic comprehension-style
syntax.

- Peter
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/