On 10/08/2013 02:22 AM, Steven D'Aprano wrote:
On Mon, 07 Oct 2013 20:27:13 -0700, Mark Janssen wrote:
But even putting that aside, even if somebody wrote such a
description, it would be reductionism gone mad. What possible light
on the problem would be shined by a long, long list of machine code operations, even if written using assembly mnemonics?
Alain Ketterlin writes:
> Antoon Pardon writes:
>
> > Op 07-10-13 19:15, Alain Ketterlin schreef:
>
> [...]
> >> That's fine. My point was: you can't at the same time have full
> >> dynamicity *and* procedural optimizations (like tail call opt).
> >> Everybody should be clear about the trade-off.
Op 07-10-13 23:27, random...@fastmail.us schreef:
> On Sat, Oct 5, 2013, at 3:39, Antoon Pardon wrote:
>> What does this mean?
>>
>> Does it mean that a naive implementation would arbitrarily mess up
>> stack traces and he wasn't interested in investigating more
>> sophisticated implementations?
>>
On Mon, 07 Oct 2013 20:27:13 -0700, Mark Janssen wrote:
But even putting that aside, even if somebody wrote such a
description, it would be reductionism gone mad. What possible light
on the problem would be shined by a long, long list of machine code
operations, even if written using assembly mnemonics?
Op 08-10-13 01:50, Steven D'Aprano schreef:
> On Mon, 07 Oct 2013 15:47:26 -0700, Mark Janssen wrote:
>
>> I challenge you to get
>> down to the machine code in scheme and formally describe how it's doing
>> both.
>
> For which machine?
>
> Or are you assuming that there's only one machine code
Antoon Pardon writes:
> Op 07-10-13 19:15, Alain Ketterlin schreef:
[...]
>> That's fine. My point was: you can't at the same time have full
>> dynamicity *and* procedural optimizations (like tail call opt).
>> Everybody should be clear about the trade-off.
>
> You're wrong. Full dynamics is not in contradiction with tail call optimisation.
random...@fastmail.us writes:
> On Mon, Oct 7, 2013, at 13:15, Alain Ketterlin wrote:
>> That's fine. My point was: you can't at the same time have full
>> dynamicity *and* procedural optimizations (like tail call opt).
>> Everybody should be clear about the trade-off.
>
> Let's be clear about what optimizations we are talking about.
Steven D'Aprano writes:
> Far more useful would be a high-level description of Scheme's
> programming model. If names can be rebound on the fly, how does
> Scheme even tell whether something is a recursive call or not?
>
> def foo(arg):
>     do stuff here
>     foo(arg - 1)    # how does Scheme know this is a recursive call?
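In Python, at least, the question answers itself: the name in the call position is an ordinary global, looked up afresh on every call, so nothing in the compiled body guarantees the call is recursive. A minimal sketch (the function names are invented for illustration):

```python
def foo(arg):
    if arg > 0:
        return foo(arg - 1)   # 'foo' is resolved in the global namespace at call time
    return "original"

orig = foo                    # keep a handle on the original function object
foo = lambda arg: "hijacked"  # rebind the global name

# The old body still runs, but its recursive-looking call now hits the lambda:
print(orig(3))                # -> hijacked
```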
random...@fastmail.us writes:
> The entire point of tail call optimization requires not keeping the
> intervening stack frames around, in _any_ form, so as to allow
> arbitrarily deep recursion without ever having the possibility of a
> stack overflow. An implementation which reduced but did not
>
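random832's point is easy to see in CPython: because every pending call keeps a real frame alive, recursion depth is capped by the interpreter's limit, which is exactly what frame-eliminating tail call optimization would avoid. A small demonstration:

```python
import sys

def countdown(n):
    # A tail call in form only: CPython still allocates one frame per call.
    return "done" if n == 0 else countdown(n - 1)

try:
    countdown(sys.getrecursionlimit() + 100)
except RecursionError:
    print("RecursionError: every intervening frame was kept on the stack")
```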
>> Yeah, and this is where two models of computation have been conflated,
>> creating magical effects, confusing everybody. I challenge you to get
>> down to the machine code in scheme and formally describe how it's
>> doing both.
>
> Which two models of computation are you talking about? And what
>>> But even putting that aside, even if somebody wrote such a description,
>>> it would be reductionism gone mad. What possible light on the problem
>>> would be shined by a long, long list of machine code operations, even
>>> if written using assembly mnemonics?
>>
>> Only that you've got a consistent, stable (and therefore, formalizable) translation from your language to the machine.
Steven D'Aprano writes:
> Far more useful would be a high-level description of Scheme's programming
> model. If names can be rebound on the fly, how does Scheme even tell
> whether something is a recursive call or not?
Maybe it doesn't have to tell. If you do tail call optimization there is no
Mark Janssen writes:
> Yeah, and this is where two models of computation have been conflated,
> creating magical effects, confusing everybody. I challenge you to get
> down to the machine code in scheme and formally describe how it's
> doing both.
Which two models of computation are you talking about?
On Mon, 07 Oct 2013 17:16:35 -0700, Mark Janssen wrote:
> It's like this: there *should* be one-to-one mappings between the
> various high-level constructs to the machine code, varying only between
> different chips (that is the purpose of the compiler after all), yet for
> some operations, in lan
On Mon, Oct 7, 2013 at 4:50 PM, Steven D'Aprano
wrote:
> On Mon, 07 Oct 2013 15:47:26 -0700, Mark Janssen wrote:
>> I challenge you to get
>> down to the machine code in scheme and formally describe how it's doing
>> both.
>
> For which machine?
Right, I should stop assuming a modern implementation
> Only that you've got a consistent, stable (and therefore,
> formalizable) translation from your language to the machine. That's
> all. Everything else is magic. Do you know that the Warren
> Abstraction Engine used to power the predicate logic in Prolog into
> machine code for a Von Neumann machine
On Mon, 07 Oct 2013 15:47:26 -0700, Mark Janssen wrote:
> I challenge you to get
> down to the machine code in scheme and formally describe how it's doing
> both.
For which machine?
Or are you assuming that there's only one machine code that runs on all
computing devices?
Frankly, asking some
>> That's fine. My point was: you can't at the same time have full
>> dynamicity *and* procedural optimizations (like tail call opt).
>> Everybody should be clear about the trade-off.
>
> You're wrong. Full dynamics is not in contradiction with tail call
> optimisation. Scheme has already done it for
Alain Ketterlin writes:
> BTW, does the original callable object have a ref counter? Is it garbage
> collected in that case? If not, would it be considered a bug?
In CPython ALL objects have ref counters.
--
Piet van Oostrum
WWW: http://pietvanoostrum.com/
PGP key: [8DAE142BE17999C4]
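A quick, CPython-specific sketch of Piet's answer (reference counting is an implementation detail of CPython, not a language guarantee): rebinding a name drops one reference to the old function object, observable via `sys.getrefcount`.

```python
import sys

def original():
    return 42

alias = original                    # a second reference to the function object
before = sys.getrefcount(original)  # counts include getrefcount's own argument
alias = None                        # rebinding the alias drops one reference
after = sys.getrefcount(original)
print(before - after)               # -> 1
```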
On 07/10/2013 18:57, Antoon Pardon wrote:
Op 07-10-13 19:15, Alain Ketterlin schreef:
I want to consider here what it would mean to concretely
implement the abstract notion 'disallow rebinding of function
names' and show what would be behind calling the idea 'not
feasible'.
Again, I'm more concerned about the function than about the name.
On Sat, Oct 5, 2013, at 3:39, Antoon Pardon wrote:
> What does this mean?
>
> Does it mean that a naive implementation would arbitrarily mess up
> stack traces and he wasn't interested in investigating more
> sophisticated implementations?
>
> Does it mean he just didn't like the idea a stack tra
On Mon, Oct 7, 2013, at 13:15, Alain Ketterlin wrote:
> That's fine. My point was: you can't at the same time have full
> dynamicity *and* procedural optimizations (like tail call opt).
> Everybody should be clear about the trade-off.
Let's be clear about what optimizations we are talking about. T
On 10/7/2013 1:15 PM, Alain Ketterlin wrote:
Terry Reedy writes:
3. Python does not mandate how namespaces are implemented. CPython
uses both dicts and, for function local namespaces, internal C arrays.
So 'names' in code can become either string keys for dicts or integer
indexes for arrays.
Op 07-10-13 19:15, Alain Ketterlin schreef:
I want to consider here what it would mean to concretely implement the
abstract notion 'disallow rebinding of function names' and show what
would be behind calling the idea 'not feasible'.
Again, I'm more concerned about the function than about the name.
Terry Reedy writes:
> On 10/4/2013 5:49 AM, Alain Ketterlin wrote:
>
>> I think allowing rebinding of function names is extremely strange,
>
> Steven already countered the 'is extremely strange' part by showing
> that such rebinding is common, generally useful, and only occasionally
> dodgy and a candidate for being blocked.
Op 04-10-13 23:14, Terry Reedy schreef:
On 10/4/2013 6:46 AM, Ian Kelly wrote:
On the other hand, if you start optimizing every tail call and not
just the recursive functions, then I can see where that could start to
get problematic for debugging -- as arbitrary functions get removed
from the stack traces just because they happened to
On 10/4/2013 5:49 AM, Alain Ketterlin wrote:
I think allowing rebinding of function names is extremely strange,
Steven already countered the 'is extremely strange' part by showing that
such rebinding is common, generally useful, and only occasionally dodgy
and a candidate for being blocked.
On 10/4/2013 6:46 AM, Ian Kelly wrote:
On the other hand, if you start optimizing every tail call and not
just the recursive functions, then I can see where that could start to
get problematic for debugging -- as arbitrary functions get removed
from the stack traces just because they happened to
Ian Kelly wrote:
> On Fri, Oct 4, 2013 at 4:41 AM, Ian Kelly wrote:
>> There is no doubt that it's a tail call. Whether it is recursion is
>> irrelevant to optimizing it. The reason we talk about "tail call
>> recursion" specifically is because the recursive case is the one that
>> makes the optimization worthwhile
Duncan Booth writes:
> Neil Cerutti wrote:
> > On 2013-10-03, Duncan Booth wrote:
> >> It isn't hard to imagine adding a TAIL_CALL opcode to the
> >> interpreter that checks whether the function to be called is
> >> the same as the current function and if it is just updates the
> >> arguments and
On Thursday, October 3, 2013 10:57:48 PM UTC+5:30, Ravi Sahni wrote:
> On Wed, Oct 2, 2013 at 10:46 AM, rusi wrote:
> > 4. There is a whole spectrum of such optimizations --
> > 4a eg a single-call structural recursion example, does not need to push
> > return address on the stack. It only needs to store the recursion depth:
On Fri, 04 Oct 2013 11:49:26 +0200, Alain Ketterlin wrote:
> I think allowing rebinding of function names is extremely strange,
It's not, it's quite common. Functions in Python are first-class values,
and we can do things like this:
from somelibrary import somethingwithalonglongname as shortname
On Fri, Oct 4, 2013 at 4:41 AM, Ian Kelly wrote:
> There is no doubt that it's a tail call. Whether it is recursion is
> irrelevant to optimizing it. The reason we talk about "tail call
> recursion" specifically is because the recursive case is the one that
> makes the optimization worthwhile, n
On Fri, Oct 4, 2013 at 4:16 AM, Duncan Booth
wrote:
> Neil Cerutti wrote:
>> On 2013-10-03, Duncan Booth wrote:
>>> It isn't hard to imagine adding a TAIL_CALL opcode to the
>>> interpreter that checks whether the function to be called is
>>> the same as the current function and if it is just up
Neil Cerutti wrote:
> On 2013-10-03, Duncan Booth wrote:
>> It isn't hard to imagine adding a TAIL_CALL opcode to the
>> interpreter that checks whether the function to be called is
>> the same as the current function and if it is just updates the
>> arguments and jumps to the start of the code b
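Duncan's hypothetical TAIL_CALL opcode can be imitated today at the Python level with a trampoline: the function returns a marker object instead of calling itself, and a driver loop replaces the would-be recursive frame by just updating the arguments. A sketch, with all names invented:

```python
class _TailCall:
    """Marker carrying the arguments for the next iteration."""
    def __init__(self, args, kwargs):
        self.args, self.kwargs = args, kwargs

def tail_recursive(func):
    """Drive func in a loop, consuming _TailCall markers instead of frames."""
    def wrapper(*args, **kwargs):
        while True:
            result = func(*args, **kwargs)
            if not isinstance(result, _TailCall):
                return result
            args, kwargs = result.args, result.kwargs
    wrapper.call = lambda *args, **kwargs: _TailCall(args, kwargs)
    return wrapper

@tail_recursive
def fact(n, acc=1):
    if n <= 1:
        return acc
    return fact.call(n - 1, acc * n)   # no new Python frame per step

print(fact(5))          # -> 120
print(fact(5000) > 0)   # deep 'recursion' with no RecursionError -> True
```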
Mark Janssen writes:
> def fact(n): return 1 if n <= 1 else n * fact(n-1)
>> class Strange:
>>     ...
>>     def __le__(dummy):
>>         global fact
>>         fact = someotherfun  # this is "binding"
>>         return False
>> You cannot prevent this in python.
> No, but you can't prevent a lot of bad
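The truncated example above can be made runnable, and it shows concretely why a compiler cannot assume `fact` still names the same function by the time the multiplication happens (the replacement lambda is invented for the demonstration):

```python
def fact(n):
    return 1 if n <= 1 else n * fact(n - 1)

class Strange(int):
    """An int whose '<=' rebinds the global name 'fact' as a side effect."""
    def __le__(self, other):
        global fact
        fact = lambda n: -1   # hypothetical replacement
        return False

result = fact(Strange(5))
# 'n <= 1' swapped fact out from under the call, so
# 'n * fact(n - 1)' evaluated as 5 * -1:
print(result)   # -> -5
```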
On Thursday, October 3, 2013 5:33:27 AM UTC+8, Terry Reedy wrote:
> On 10/2/2013 8:31 AM, random...@fastmail.us wrote:
> > On Tue, Oct 1, 2013, at 17:30, Terry Reedy wrote:
> >> Part of the reason that Python does not do tail call optimization is
> >> that turning tail recursion into while
On Thu, 03 Oct 2013 10:09:25 -0400, random832 wrote:
> Speaking of assumptions, I would almost say that we should make the
> assumption that operators (other than the __i family, and
> setitem/setattr/etc) are not intended to have visible side effects. This
> would open a _huge_ field of potential
On Wed, 02 Oct 2013 22:41:00 -0400, Terry Reedy wrote:
> I am referring to constant-value objects included in the code object.
> >>> def f(): return (1,2,3)
>
> >>> f.__code__.co_consts
> (None, 1, 2, 3, (1, 2, 3))
Okay, now that's more clear. I didn't understand what you meant before.
So lon
random...@fastmail.us writes:
> Hey, while we're on the subject, can we talk about frozen(set|dict)
> literals again? I really don't understand why this discussion fizzles
> out whenever it's brought up on python-ideas.
Can you start us off by searching for previous threads discussing it,
and sum
On 10/2/2013 10:34 PM, Steven D'Aprano wrote:
You are both assuming that LOAD_CONST will re-use the same tuple
(1, 2, 3) in multiple places.
No I did not. To save tuple creation time, a pre-compiled tuple is
reused when its display expression is re-executed. If I had been
interested in multi
On Wed, Oct 2, 2013 at 10:46 AM, rusi wrote:
> 4. There is a whole spectrum of such optimizations --
> 4a eg a single-call structural recursion example, does not need to push
> return address on the stack. It only needs to store the recursion depth:
>
> If zero jump to outside return add; if > 0 j
On 2013-10-03, Duncan Booth wrote:
>> How do you know that either "<=" or "*" didn't rebind the name
>> "fact" to something else? I think that's the main reason why
>> python cannot apply any procedural optimization (even things
>> like inlining are impossible, or possible only under very
>> conservative
Alain Ketterlin wrote:
> Terry Reedy writes:
>
>> Part of the reason that Python does not do tail call optimization is
>> that turning tail recursion into while iteration is almost trivial,
>> once you know the secret of the two easy steps. Here it is.
>>
>> Assume that you have already done th
On Wed, Oct 2, 2013, at 22:34, Steven D'Aprano wrote:
> You are both assuming that LOAD_CONST will re-use the same tuple
> (1, 2, 3) in multiple places. But that's not the case, as a simple test
> will show you:
>>> def f():
...     return (1, 2, 3)
>>> f() is f()
True
It does, in fact, re-use it
On Wed, Oct 2, 2013, at 21:46, MRAB wrote:
> > The difference is that a tuple can be reused, so it makes sense for the
> > compiler to produce it as a const. (Much like the interning of small
> > integers) The list, however, would always have to be copied from the
> > compile-time object. So that
On Wed, Oct 2, 2013, at 17:33, Terry Reedy wrote:
> 5. Conversion of apparent recursion to iteration assumes that the
> function really is intended to be recursive. This assumption is the
> basis for replacing the recursive call with assignment and an implied
> internal goto. The programmer can
On Thu, Oct 3, 2013 at 12:34 PM, Steven D'Aprano
wrote:
> py> def f():
> ... a = (1, 2, 3)
> ... b = (1, 2, 3)
> ... return a is b
> ...
> py> f() # Are the tuples the same object?
> False
That just means the compiler doesn't detect reuse of the same tuple.
But compare:
>>> def f():
On 10/2/2013 9:24 PM, Steven D'Aprano wrote:
On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:
CPython core developers have been very conservative about what
transformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so it is. [1,2,3] might or might not be a
On 10/2/2013 9:46 PM, MRAB wrote:
On 03/10/2013 02:39, Dave Angel wrote:
On 2/10/2013 21:24, Steven D'Aprano wrote:
On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:
CPython core developers have been very conservative about what
transformations they put into the compiler. (1,2,3) can always
On Thu, 03 Oct 2013 02:46:53 +0100, MRAB wrote:
> On 03/10/2013 02:39, Dave Angel wrote:
>> On 2/10/2013 21:24, Steven D'Aprano wrote:
>>
>>> On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:
>>>
CPython core developers have been very conservative about what
transformations they put in
On 03/10/2013 02:39, Dave Angel wrote:
On 2/10/2013 21:24, Steven D'Aprano wrote:
On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:
CPython core developers have been very conservative about what
transformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so
On 2/10/2013 21:24, Steven D'Aprano wrote:
> On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:
>
>> CPython core developers have been very conservative about what
>> transformations they put into the compiler. (1,2,3) can always be
>> compiled as a constant, and so it is. [1,2,3] might or might
On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:
> CPython core developers have been very conservative about what
> transformations they put into the compiler. (1,2,3) can always be
> compiled as a constant, and so it is. [1,2,3] might or might not be a
> constant, depending on the context, and
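The difference under discussion is easy to poke at directly in CPython: a tuple display of constants is stored in the code object's `co_consts` and reused on every call, while a list display must build a fresh object each time:

```python
def f():
    return (1, 2, 3)   # compiled once, stored in f.__code__.co_consts

def g():
    return [1, 2, 3]   # rebuilt on every call

print((1, 2, 3) in f.__code__.co_consts)  # -> True
print(f() is f())                          # same cached tuple -> True
print(g() is g())                          # fresh list each call -> False
```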
On 10/2/2013 4:17 AM, Alain Ketterlin wrote:
Terry Reedy writes:
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial,
once you know the secret of the two easy steps. Here it is.
Assume that you have already done the work of turning a body recursive
>> Part of the reason that Python does not do tail call optimization is
>> that turning tail recursion into while iteration is almost trivial, once
>> you know the secret of the two easy steps. Here it is.
>
> That should be a reason it _does_ do it - saying people should rewrite
> their functions
On 10/2/2013 8:31 AM, random...@fastmail.us wrote:
On Tue, Oct 1, 2013, at 17:30, Terry Reedy wrote:
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial, once
you know the secret of the two easy steps. Here it is
On Wed, Oct 2, 2013 at 1:23 PM, Alain Ketterlin wrote:
> On 10/02/2013 08:59 PM, Mark Janssen wrote:
def fact(n): return 1 if n <= 1 else n * fact(n-1)
>>>
>>> How do you know that either "<=" or "*" didn't rebind the name "fact" to
>>> something else? I think that's the main reason why python
On Wed, 02 Oct 2013 10:04:49 -0400, random832 wrote:
> On Wed, Oct 2, 2013, at 9:32, Steven D'Aprano wrote:
>> Python is not as aggressively functional as (say) Haskell, but it is
>> surely an exaggeration to suggest that the failure to include tail call
>> optimization means that Python "rejects"
>> def fact(n): return 1 if n <= 1 else n * fact(n-1)
>>
>> into a tail recursion like
> [...]
>
> How do you know that either "<=" or "*" didn't rebind the name "fact" to
> something else? I think that's the main reason why python cannot apply
> any procedural optimization (even things like inlining a
On Wed, Oct 2, 2013, at 9:32, Steven D'Aprano wrote:
> Python is not as aggressively functional as (say) Haskell, but it is
> surely an exaggeration to suggest that the failure to include tail call
> optimization means that Python "rejects" functional programming styles.
> You can still write yo
On Wed, 02 Oct 2013 08:31:25 -0400, random832 wrote:
> On Tue, Oct 1, 2013, at 17:30, Terry Reedy wrote:
>> Part of the reason that Python does not do tail call optimization is
>> that turning tail recursion into while iteration is almost trivial,
>> once you know the secret of the two easy steps.
On Wednesday, October 2, 2013 1:53:46 PM UTC+5:30, Alain Ketterlin wrote:
> rusi writes:
> > On Wednesday, October 2, 2013 3:00:41 AM UTC+5:30, Terry Reedy wrote:
> >> Part of the reason that Python does not do tail call optimization is
> >> that turning tail recursion into while iteration
On Tue, Oct 1, 2013, at 17:30, Terry Reedy wrote:
> Part of the reason that Python does not do tail call optimization is
> that turning tail recursion into while iteration is almost trivial, once
> you know the secret of the two easy steps. Here it is.
That should be a reason it _does_ do it - saying people should rewrite their functions
rusi writes:
> On Wednesday, October 2, 2013 3:00:41 AM UTC+5:30, Terry Reedy wrote:
>> Part of the reason that Python does not do tail call optimization is
>> that turning tail recursion into while iteration is almost trivial, once
>> you know the secret of the two easy steps. Here it is.
>
>
Terry Reedy writes:
> Part of the reason that Python does not do tail call optimization is
> that turning tail recursion into while iteration is almost trivial,
> once you know the secret of the two easy steps. Here it is.
>
> Assume that you have already done the work of turning a body recursive
On Wednesday, October 2, 2013 3:00:41 AM UTC+5:30, Terry Reedy wrote:
> Part of the reason that Python does not do tail call optimization is
> that turning tail recursion into while iteration is almost trivial, once
> you know the secret of the two easy steps. Here it is.
What happens for mutual recursion?
Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial, once
you know the secret of the two easy steps. Here it is.
Assume that you have already done the work of turning a body recursive
('not tail recursive') for
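The preview cuts off before the recipe itself, but the mechanical transformation Terry describes, once a function is tail-recursive with an accumulator, can be sketched as follows (an illustration, not his exact wording):

```python
# Step 0 (assumed already done): body recursion rewritten with an accumulator
def fact_rec(n, acc=1):
    if n <= 1:
        return acc
    return fact_rec(n - 1, acc * n)   # tail call

# The two easy steps: wrap the body in 'while True', and replace the
# tail call with assignment to the parameters.
def fact_iter(n, acc=1):
    while True:
        if n <= 1:
            return acc
        n, acc = n - 1, acc * n

print(fact_iter(10) == fact_rec(10))   # -> True
print(fact_iter(5000) > 0)             # no recursion limit in sight -> True
```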
68 matches