On 10/08/2013 02:22 AM, Steven D'Aprano wrote:
On Mon, 07 Oct 2013 20:27:13 -0700, Mark Janssen wrote:
But even putting that aside, even if somebody wrote such a
description, it would be reductionism gone mad. What possible light
on the problem would be shined by a long, long list of machine
random...@fastmail.us writes:
The entire point of tail call optimization requires not keeping the
intervening stack frames around, in _any_ form, so as to allow
arbitrarily deep recursion without ever having the possibility of a
stack overflow. An implementation which reduced but did not
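(To make the quoted point concrete: CPython keeps a frame for every call, including calls in tail position, so deep tail recursion overflows regardless. A minimal sketch, nothing more:)

```python
import sys

def countdown(n):
    # The recursive call below is in tail position, but CPython still
    # pushes a new frame for it, so depth is bounded by the stack anyway.
    if n == 0:
        return "done"
    return countdown(n - 1)

try:
    countdown(sys.getrecursionlimit() + 100)
except RecursionError:
    print("overflow: the tail call still consumed a stack frame")
```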
Steven D'Aprano writes:
Far more useful would be a high-level description of Scheme's
programming model. If names can be rebound on the fly, how does
Scheme even tell whether something is a recursive call or not?
def foo(arg):
    do stuff here
    foo(arg-1)  # how does Scheme know that
random...@fastmail.us writes:
On Mon, Oct 7, 2013, at 13:15, Alain Ketterlin wrote:
That's fine. My point was: you can't at the same time have full
dynamicity *and* procedural optimizations (like tail call opt).
Everybody should be clear about the trade-off.
Let's be clear about what
Antoon Pardon antoon.par...@rece.vub.ac.be writes:
Op 07-10-13 19:15, Alain Ketterlin schreef:
[...]
That's fine. My point was: you can't at the same time have full
dynamicity *and* procedural optimizations (like tail call opt).
Everybody should be clear about the trade-off.
You're wrong.
Op 08-10-13 01:50, Steven D'Aprano schreef:
On Mon, 07 Oct 2013 15:47:26 -0700, Mark Janssen wrote:
I challenge you to get
down to the machine code in scheme and formally describe how it's doing
both.
For which machine?
Or are you assuming that there's only one machine code that runs
On Mon, 07 Oct 2013 20:27:13 -0700, Mark Janssen wrote:
But even putting that aside, even if somebody wrote such a
description, it would be reductionism gone mad. What possible light
on the problem would be shined by a long, long list of machine code
operations, even if written using assembly
Op 07-10-13 23:27, random...@fastmail.us schreef:
On Sat, Oct 5, 2013, at 3:39, Antoon Pardon wrote:
What does this mean?
Does it mean that a naive implementation would arbitrarily mess up
stack traces and he wasn't interested in investigating more
sophisticated implementations?
Does it
Alain Ketterlin writes:
Antoon Pardon writes:
Op 07-10-13 19:15, Alain Ketterlin schreef:
[...]
That's fine. My point was: you can't at the same time have full
dynamicity *and* procedural optimizations (like tail call opt).
Everybody should be clear about the trade-off.
You're
Terry Reedy tjre...@udel.edu writes:
On 10/4/2013 5:49 AM, Alain Ketterlin wrote:
I think allowing rebinding of function names is extremely strange,
Steven already countered the 'is extremely strange' part by showing
that such rebinding is common, generally useful, and only occasionally
Op 07-10-13 19:15, Alain Ketterlin schreef:
I want to consider here what it would mean to concretely implement the
abstract notion 'disallow rebinding of function names' and show what
would be behind calling the idea 'not feasible'.
Again, I'm more concerned about the function than about the
On 10/7/2013 1:15 PM, Alain Ketterlin wrote:
Terry Reedy tjre...@udel.edu writes:
3. Python does not mandate how namespaces are implemented. CPython
uses both dicts and, for function local namespaces, internal C arrays.
So 'names' in code can become either string keys for dicts or integer
On Mon, Oct 7, 2013, at 13:15, Alain Ketterlin wrote:
That's fine. My point was: you can't at the same time have full
dynamicity *and* procedural optimizations (like tail call opt).
Everybody should be clear about the trade-off.
Let's be clear about what optimizations we are talking about.
On Sat, Oct 5, 2013, at 3:39, Antoon Pardon wrote:
What does this mean?
Does it mean that a naive implementation would arbitrarily mess up
stack traces and he wasn't interested in investigating more
sophisticated implementations?
Does it mean he just didn't like the idea a stack trace
On 07/10/2013 18:57, Antoon Pardon wrote:
Op 07-10-13 19:15, Alain Ketterlin schreef:
I want to consider here what it would mean to concretely
implement the abstract notion 'disallow rebinding of function
names' and show what would be behind calling the idea 'not
feasible'.
Again, I'm more
Alain Ketterlin al...@dpt-info.u-strasbg.fr writes:
BTW, does the original callable object have a ref counter? Is it garbage
collected in that case? If not, would it be considered a bug?
In CPython ALL objects have ref counters.
--
Piet van Oostrum p...@vanoostrum.org
WWW:
That's fine. My point was: you can't at the same time have full
dynamicity *and* procedural optimizations (like tail call opt).
Everybody should be clear about the trade-off.
You're wrong. Full dynamics is not in contradiction with tail call
optimisation. Scheme has already done it for years.
On Mon, 07 Oct 2013 15:47:26 -0700, Mark Janssen wrote:
I challenge you to get
down to the machine code in scheme and formally describe how it's doing
both.
For which machine?
Or are you assuming that there's only one machine code that runs on all
computing devices?
Frankly, asking
Only that you've got a consistent, stable (and therefore,
formalizable) translation from your language to the machine. That's
all. Everything else is magic. Do you know that the Warren
Abstract Machine used to compile the predicate logic in Prolog into
machine code for a von Neumann machine
On Mon, Oct 7, 2013 at 4:50 PM, Steven D'Aprano
steve+comp.lang.pyt...@pearwood.info wrote:
On Mon, 07 Oct 2013 15:47:26 -0700, Mark Janssen wrote:
I challenge you to get
down to the machine code in scheme and formally describe how it's doing
both.
For which machine?
Right, I should stop
On Mon, 07 Oct 2013 17:16:35 -0700, Mark Janssen wrote:
It's like this: there *should* be one-to-one mappings from the
various high-level constructs to the machine code, varying only between
different chips (that is the purpose of the compiler, after all), yet for
some operations, in
Mark Janssen dreamingforw...@gmail.com writes:
Yeah, and this is where two models of computation have been conflated,
creating magical effects, confusing everybody. I challenge you to get
down to the machine code in scheme and formally describe how it's
doing both.
Which two models of
Steven D'Aprano steve+comp.lang.pyt...@pearwood.info writes:
Far more useful would be a high-level description of Scheme's programming
model. If names can be rebound on the fly, how does Scheme even tell
whether something is a recursive call or not?
Maybe it doesn't have to tell. If you do
But even putting that aside, even if somebody wrote such a description,
it would be reductionism gone mad. What possible light on the problem
would be shined by a long, long list of machine code operations, even
if written using assembly mnemonics?
Only that you've got a consistent, stable
Yeah, and this is where two models of computation have been conflated,
creating magical effects, confusing everybody. I challenge you to get
down to the machine code in scheme and formally describe how it's
doing both.
Which two models of computation are you talking about? And what magical
Op 04-10-13 23:14, Terry Reedy schreef:
On 10/4/2013 6:46 AM, Ian Kelly wrote:
On the other hand, if you start optimizing every tail call and not
just the recursive functions, then I can see where that could start to
get problematic for debugging -- as arbitrary functions get removed
from the
On Thursday, October 3, 2013 5:33:27 AM UTC+8, Terry Reedy wrote:
On 10/2/2013 8:31 AM, random...@fastmail.us wrote:
On Tue, Oct 1, 2013, at 17:30, Terry Reedy wrote:
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration
Mark Janssen dreamingforw...@gmail.com writes:
def fact(n): return 1 if n <= 1 else n * fact(n-1)
class Strange:
    ...
    def __le__(self, other):
        global fact
        fact = someotherfun  # this is binding
        return False
You cannot prevent this in python.
No, but you can't prevent a lot of
Neil Cerutti ne...@norwich.edu wrote:
On 2013-10-03, Duncan Booth duncan.booth@invalid.invalid wrote:
It isn't hard to imagine adding a TAIL_CALL opcode to the
interpreter that checks whether the function to be called is
the same as the current function and if it is just updates the
arguments
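(A rough Python-level sketch of the effect such an opcode would have; the decorator and all names here are invented for illustration. The self tail call is signalled with an exception that a wrapper turns into argument rebinding plus a loop, so the stack stays flat:)

```python
class _TailCall(Exception):
    """Carries the arguments of a pending tail call."""
    def __init__(self, args, kwargs):
        self.call_args, self.call_kwargs = args, kwargs

def tail_recursive(fn):
    # fn receives a 'recur' callable as its first parameter; calling it
    # unwinds the current frame and re-enters fn with fresh arguments.
    def recur(*args, **kwargs):
        raise _TailCall(args, kwargs)
    def wrapper(*args, **kwargs):
        while True:
            try:
                return fn(recur, *args, **kwargs)
            except _TailCall as tc:
                args, kwargs = tc.call_args, tc.call_kwargs
    return wrapper

@tail_recursive
def fact(recur, n, acc=1):
    if n <= 1:
        return acc
    return recur(n - 1, acc * n)   # no frame accumulates here

print(fact(10000) > 0)   # far past the recursion limit, no RecursionError
```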
On Fri, Oct 4, 2013 at 4:16 AM, Duncan Booth
duncan.booth@invalid.invalid wrote:
Neil Cerutti ne...@norwich.edu wrote:
On 2013-10-03, Duncan Booth duncan.booth@invalid.invalid wrote:
It isn't hard to imagine adding a TAIL_CALL opcode to the
interpreter that checks whether the function to be
On Fri, Oct 4, 2013 at 4:41 AM, Ian Kelly ian.g.ke...@gmail.com wrote:
There is no doubt that it's a tail call. Whether it is recursion is
irrelevant to optimizing it. The reason we talk about tail call
recursion specifically is because the recursive case is the one that
makes the
On Thursday, October 3, 2013 10:57:48 PM UTC+5:30, Ravi Sahni wrote:
On Wed, Oct 2, 2013 at 10:46 AM, rusi wrote:
4. There is a whole spectrum of such optimizations --
4a e.g. a single-call structural recursion example does not need to push a
return address on the stack. It only needs to
On Fri, 04 Oct 2013 11:49:26 +0200, Alain Ketterlin wrote:
I think allowing rebinding of function names is extremely strange,
It's not, it's quite common. Functions in Python are first-class values,
and we can do things like this:
from somelibrary import somethingwithalonglongname as
Duncan Booth writes:
Neil Cerutti wrote:
On 2013-10-03, Duncan Booth duncan.booth@invalid.invalid wrote:
It isn't hard to imagine adding a TAIL_CALL opcode to the
interpreter that checks whether the function to be called is
the same as the current function and if it is just updates the
Ian Kelly ian.g.ke...@gmail.com wrote:
On Fri, Oct 4, 2013 at 4:41 AM, Ian Kelly ian.g.ke...@gmail.com wrote:
There is no doubt that it's a tail call. Whether it is recursion is
irrelevant to optimizing it. The reason we talk about tail call
recursion specifically is because the recursive
On 10/4/2013 6:46 AM, Ian Kelly wrote:
On the other hand, if you start optimizing every tail call and not
just the recursive functions, then I can see where that could start to
get problematic for debugging -- as arbitrary functions get removed
from the stack traces just because they happened
On 10/4/2013 5:49 AM, Alain Ketterlin wrote:
I think allowing rebinding of function names is extremely strange,
Steven already countered the 'is extremely strange' part by showing that
such rebinding is common, generally useful, and only occasionally dodgy
and a candidate for being blocked.
On Wed, Oct 2, 2013, at 17:33, Terry Reedy wrote:
5. Conversion of apparent recursion to iteration assumes that the
function really is intended to be recursive. This assumption is the
basis for replacing the recursive call with assignment and an implied
internal goto. The programmer can
On Wed, Oct 2, 2013, at 21:46, MRAB wrote:
The difference is that a tuple can be reused, so it makes sense for the
compiler to produce it as a const. (Much like the interning of small
integers) The list, however, would always have to be copied from the
compile-time object. So that object
On Wed, Oct 2, 2013, at 22:34, Steven D'Aprano wrote:
You are both assuming that LOAD_CONST will re-use the same tuple
(1, 2, 3) in multiple places. But that's not the case, as a simple test
will show you:
py def f():
...     return (1, 2, 3)
py f() is f()
True
It does, in fact, re-use it when it
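(The contrast with a mutable literal shows what is and is not cached; this is CPython behaviour, not a language guarantee:)

```python
def f():
    return (1, 2, 3)    # immutable: stored once in f.__code__.co_consts

def g():
    return [1, 2, 3]    # mutable: must be rebuilt on every call

print(f() is f())   # True in CPython: the same cached tuple each time
print(g() is g())   # False: a fresh list each call
print((1, 2, 3) in f.__code__.co_consts)   # True
```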
Alain Ketterlin al...@dpt-info.u-strasbg.fr wrote:
Terry Reedy tjre...@udel.edu writes:
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial,
once you know the secret of the two easy steps. Here it is.
On 2013-10-03, Duncan Booth duncan.booth@invalid.invalid wrote:
How do you know that either <= or * didn't rebind the name
fact to something else? I think that's the main reason why
python cannot apply any procedural optimization (even things
like inlining are impossible, or possible only under
On Wed, Oct 2, 2013 at 10:46 AM, rusi wrote:
4. There is a whole spectrum of such optimizations --
4a e.g. a single-call structural recursion example does not need to push a
return address on the stack. It only needs to store the recursion depth:
If zero jump to outside return add; if 0 jump
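(My reading of 4a, sketched in Python: with a single self-call, every saved return address would be identical, so the stack collapses to a counter. Descend counting, then replay the deferred work on the way back up:)

```python
def fact(n):
    depth = 0
    while n > 1:              # "descend": count instead of pushing frames
        n, depth = n - 1, depth + 1
    result = 1                # base case reached
    while depth:              # "unwind": replay the deferred multiplications
        depth -= 1
        n += 1
        result *= n
    return result
```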
On 10/2/2013 10:34 PM, Steven D'Aprano wrote:
You are both assuming that LOAD_CONST will re-use the same tuple
(1, 2, 3) in multiple places.
No I did not. To save tuple creation time, a pre-compiled tuple is
reused when its display expression is re-executed. If I had been
interested in
random...@fastmail.us writes:
Hey, while we're on the subject, can we talk about frozen(set|dict)
literals again? I really don't understand why this discussion fizzles
out whenever it's brought up on python-ideas.
Can you start us off by searching for previous threads discussing it,
and
On Wed, 02 Oct 2013 22:41:00 -0400, Terry Reedy wrote:
I am referring to constant-value objects included in the code object.
>>> def f(): return (1,2,3)
>>> f.__code__.co_consts
(None, 1, 2, 3, (1, 2, 3))
Okay, now that's more clear. I didn't understand what you meant before.
So long as we
On Thu, 03 Oct 2013 10:09:25 -0400, random832 wrote:
Speaking of assumptions, I would almost say that we should make the
assumption that operators (other than the __i family, and
setitem/setattr/etc) are not intended to have visible side effects. This
would open a _huge_ field of potential
Terry Reedy tjre...@udel.edu writes:
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial,
once you know the secret of the two easy steps. Here it is.
Assume that you have already done the work of turning a
rusi rustompm...@gmail.com writes:
On Wednesday, October 2, 2013 3:00:41 AM UTC+5:30, Terry Reedy wrote:
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial, once
you know the secret of the two easy steps.
On Tue, Oct 1, 2013, at 17:30, Terry Reedy wrote:
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial, once
you know the secret of the two easy steps. Here it is.
That should be a reason it _does_ do it -
On Wednesday, October 2, 2013 1:53:46 PM UTC+5:30, Alain Ketterlin wrote:
rusi writes:
On Wednesday, October 2, 2013 3:00:41 AM UTC+5:30, Terry Reedy wrote:
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost
On Wed, 02 Oct 2013 08:31:25 -0400, random832 wrote:
On Tue, Oct 1, 2013, at 17:30, Terry Reedy wrote:
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial,
once you know the secret of the two easy steps. Here
On Wed, Oct 2, 2013, at 9:32, Steven D'Aprano wrote:
Python is not as aggressively functional as (say) Haskell, but it is
surely an exaggeration to suggest that the failure to include tail call
optimization means that Python rejects functional programming styles.
You can still write you
def fact(n): return 1 if n <= 1 else n * fact(n-1)
into a tail recursion like
[...]
How do you know that either <= or * didn't rebind the name fact to
something else? I think that's the main reason why python cannot apply
any procedural optimization (even things like inlining are impossible,
or
On Wed, 02 Oct 2013 10:04:49 -0400, random832 wrote:
On Wed, Oct 2, 2013, at 9:32, Steven D'Aprano wrote:
Python is not as aggressively functional as (say) Haskell, but it is
surely an exaggeration to suggest that the failure to include tail call
optimization means that Python rejects
On Wed, Oct 2, 2013 at 1:23 PM, Alain Ketterlin al...@unistra.fr wrote:
On 10/02/2013 08:59 PM, Mark Janssen wrote:
def fact(n): return 1 if n <= 1 else n * fact(n-1)
How do you know that either <= or * didn't rebind the name fact to
something else? I think that's the main reason why python cannot
On 10/2/2013 8:31 AM, random...@fastmail.us wrote:
On Tue, Oct 1, 2013, at 17:30, Terry Reedy wrote:
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial, once
you know the secret of the two easy steps. Here it
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial, once
you know the secret of the two easy steps. Here it is.
That should be a reason it _does_ do it - saying people should rewrite
their functions with
On 10/2/2013 4:17 AM, Alain Ketterlin wrote:
Terry Reedy tjre...@udel.edu writes:
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial,
once you know the secret of the two easy steps. Here it is.
Assume that
On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:
CPython core developers have been very conservative about what
transformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so it is. [1,2,3] might or might not be a
constant, depending on the context, and no
On 2/10/2013 21:24, Steven D'Aprano wrote:
On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:
CPython core developers have been very conservative about what
transformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so it is. [1,2,3] might or might not be
On Thu, Oct 3, 2013 at 12:34 PM, Steven D'Aprano
steve+comp.lang.pyt...@pearwood.info wrote:
py def f():
...     a = (1, 2, 3)
...     b = (1, 2, 3)
...     return a is b
...
py f() # Are the tuples the same object?
False
That just means the compiler doesn't detect reuse of the same
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial, once
you know the secret of the two easy steps. Here it is.
Assume that you have already done the work of turning a body recursive
('not tail recursive')
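(A concrete instance of the recipe, with the accumulator rewrite assumed already done, as the post says:)

```python
# Step 0, assumed done: the body-recursive fact(n) = n * fact(n-1) has
# been rewritten with an accumulator so the self-call is in tail position.
def fact_tail(n, acc=1):
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)

# The two easy steps: (1) turn the terminating test into a while
# condition, (2) replace the tail call with rebinding of the parameters.
def fact_iter(n, acc=1):
    while n > 1:
        n, acc = n - 1, acc * n
    return acc
```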
On Wednesday, October 2, 2013 3:00:41 AM UTC+5:30, Terry Reedy wrote:
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial, once
you know the secret of the two easy steps. Here it is.
What happens for mutual
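(Mutual tail recursion is exactly where a self-call rewrite stops working; a trampoline, a generic sketch rather than anything Python provides, handles it by returning thunks instead of making the tail calls:)

```python
def trampoline(fn, *args):
    # Run fn, then keep invoking whatever callable it returns, in a flat
    # loop: no Python stack frame survives between bounces.
    result = fn(*args)
    while callable(result):
        result = result()
    return result

def is_even(n):
    return True if n == 0 else (lambda: is_odd(n - 1))

def is_odd(n):
    return False if n == 0 else (lambda: is_even(n - 1))

print(trampoline(is_even, 100001))   # False, with no stack overflow
```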