Re: why memoizing is faster

2011-03-26 Thread Terry Reedy
On 3/26/2011 12:17 AM, Stefan Behnel wrote: Not restarted in the sense that it gets cleaned up, though. The above simply passes an explicit value for it that will be used for the single call. Which satisfies the time test need, but... Future calls won't be affected. Good point. If one does

Re: why memoizing is faster

2011-03-26 Thread Lie
On Mar 26, 5:10 pm, Terry Reedy tjre...@udel.edu wrote: On 3/26/2011 12:17 AM, Stefan Behnel wrote: Not restarted in the sense that it gets cleaned up, though. The above simply passes an explicit value for it that will be used for the single call. Which satisfies the time test need,

Re: why memoizing is faster

2011-03-26 Thread Andrea Crotti
Stefan Behnel stefan...@behnel.de writes: Not restarted in the sense that it gets cleaned up, though. The above simply passes an explicit value for it that will be used for the single call. Future calls won't be affected. Stefan About this global caching thing I thought, but isn't this a

Re: why memoizing is faster

2011-03-26 Thread Steven D'Aprano
On Sat, 26 Mar 2011 12:06:46 +0100, Andrea Crotti wrote: About this global caching thing I thought, but isn't this a source of possible HUGE memory leaks? Hypothetically, but probably not that huge. And I wouldn't call it a *leak* as such, since you can access the memory and recover it if you
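
Steven's point that the memory is accessible and recoverable can be made concrete with Terry's default-argument version of fib_iter (quoted further down the thread). The introspection through __defaults__ is my own illustration, not code from the thread (on older Python 2 releases the attribute is func_defaults):

    # Terry's cached version, reassembled from the thread, plus a look at
    # where the cache actually lives and how to give the memory back.
    def fib_iter(n, _cache=[1, 1]):
        k = len(_cache)
        if n >= k:
            for i in range(k, n + 1):
                _cache.append(_cache[i - 2] + _cache[i - 1])
        return _cache[n]

    fib_iter(500)                      # grows the shared default list to 501 entries
    cache = fib_iter.__defaults__[0]   # the "hidden" cache is plainly reachable
    print(len(cache))                  # 501
    del cache[2:]                      # recover the memory, keep the two seed values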

Re: why memoizing is faster

2011-03-26 Thread eryksun ()
On Saturday, March 26, 2011 7:50:36 AM UTC-4, Steven D'Aprano wrote: That's of the order of 200 MB of memory -- not that much for today's systems. I've had people email me .doc files that big *wink* Yikes! I know this thread is about caching the output of a function, but in the example

Re: why memoizing is faster

2011-03-26 Thread Steven D'Aprano
On Sat, 26 Mar 2011 07:14:17 -0700, eryksun () wrote: On Saturday, March 26, 2011 7:50:36 AM UTC-4, Steven D'Aprano wrote: That's of the order of 200 MB of memory -- not that much for today's systems. I've had people email me .doc files that big *wink* Yikes! I know this thread is

Re: why memoizing is faster

2011-03-26 Thread Terry Reedy
On 3/26/2011 5:55 AM, Lie wrote: On Mar 26, 5:10 pm, Terry Reedy tjre...@udel.edu wrote: On 3/26/2011 12:17 AM, Stefan Behnel wrote: Not restarted in the sense that it gets cleaned up, though. The above simply passes an explicit value for it that will be used for the single call. Which

Re: why memoizing is faster

2011-03-26 Thread eryksun ()
On Saturday, March 26, 2011 11:49:02 AM UTC-4, Steven D'Aprano wrote: def fib(n): phi = (5 ** 0.5 + 1) / 2 f = (phi ** n - (1 - phi) ** n) / 5 ** 0.5 return int(f) Have you tried it? Unfortunately, with floating point rounding error, that rapidly
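
A quick way to see eryksun's point (a check I added, not code from the thread): compare Steven's closed-form expression against exact integer iteration and report where they first disagree. Both functions below use the closed form's own indexing (F(0) = 0, F(1) = 1):

    # Not from the thread: how soon does float rounding break the closed form?
    def fib_closed(n):
        phi = (5 ** 0.5 + 1) / 2
        return int((phi ** n - (1 - phi) ** n) / 5 ** 0.5)

    def fib_exact(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    n = 0
    while fib_closed(n) == fib_exact(n):
        n += 1
    print("first mismatch at n = %d" % n)   # fires well before n = 100 with IEEE doubles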

Re: why memoizing is faster

2011-03-26 Thread Terry Reedy
On 3/26/2011 7:06 AM, Andrea Crotti wrote: Stefan Behnel stefan...@behnel.de writes: Not restarted in the sense that it gets cleaned up, though. The above simply passes an explicit value for it that will be used for the single call. Future calls won't be affected. Stefan About this global

Re: why memoizing is faster

2011-03-26 Thread Terry Reedy
On 3/26/2011 11:49 AM, Steven D'Aprano wrote: On Sat, 26 Mar 2011 07:14:17 -0700, eryksun () wrote: Yikes! I know this thread is about caching the output of a function, but in the example of Fibonacci numbers, we're blessed with an easily derived closed form expression (via Z transform, etc):

Re: why memoizing is faster

2011-03-26 Thread Steven D'Aprano
On Sat, 26 Mar 2011 10:25:31 -0700, eryksun () wrote about the Fibonacci function: That's true. If you need more than double precision, obviously it's time to defer to the experts. Here's an O(n) algorithm: http://www.gnu.org/software/gmp/manual/html_node/Fibonacci-Numbers-Algorithm.html We
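
The linked GMP manual page describes GMP's own exact-integer scheme; the essential point is that staying in arbitrary-precision integers removes the rounding problem entirely. As a rough pure-Python stand-in (a standard fast-doubling sketch, not GMP's code):

    # Not GMP's implementation -- a standard fast-doubling scheme using the
    # identities F(2k) = F(k)*(2*F(k+1) - F(k)) and F(2k+1) = F(k)**2 + F(k+1)**2,
    # with F(0) = 0, F(1) = 1.  Exact integers, so no rounding error at any size.
    def fib_pair(n):
        """Return (F(n), F(n+1)) using O(log n) big-integer multiplications."""
        if n == 0:
            return (0, 1)
        a, b = fib_pair(n // 2)
        c = a * (2 * b - a)        # F(2k)
        d = a * a + b * b          # F(2k+1)
        if n % 2:
            return (d, c + d)
        return (c, d)

    print(fib_pair(1000)[0])       # an exact 209-digit integer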

Re: why memoizing is faster

2011-03-25 Thread Andrea Crotti
Terry Reedy tjre...@udel.edu writes: For the reason Stefan explained and hinted above. Try the following instead: def fib_iter(n, _cache = [1,1]): k = len(_cache) if n >= k: for i in range(k, n+1): _cache.append(_cache[i-2] + _cache[i-1]) return _cache[n] This should be

Re: why memoizing is faster

2011-03-25 Thread Stefan Behnel
Andrea Crotti, 25.03.2011 09:49: Terry Reedy writes: For the reason Stefan explained and hinted above. Try the following instead: def fib_iter(n, _cache = [1,1]): k = len(_cache) if n >= k: for i in range(k, n+1): _cache.append(_cache[i-2] + _cache[i-1]) return _cache[n]

Re: why memoizing is faster

2011-03-25 Thread Tim Wintle
On Fri, 2011-03-25 at 09:49 +0100, Andrea Crotti wrote: Terry Reedy tjre...@udel.edu writes: For the reason Stefan explained and hinted above. Try the following instead: def fib_iter(n, _cache = [1,1]): k = len(_cache) if n >= k: for i in range(k, n+1):

Re: why memoizing is faster

2011-03-25 Thread eryksun ()
On Thursday, March 24, 2011 8:12:22 PM UTC-4, Terry Reedy wrote: If Python did what some have asked, which is to make 'recursive' functions actually recursive by replacing a local variable that matches the function name (in this case 'fib') with a direct reference to the function itself, as a
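
Terry's point is that the "recursive" call in fib's body is just a lookup of the global name fib at call time, which is exactly what lets a decorator intercept the inner calls. A small illustration (the counted wrapper and calls counter are made-up names for the example):

    # The body of fib() looks up the *global* name "fib" on every call,
    # so whatever that name is bound to at call time is what gets invoked.
    def fib(n):
        if n < 2:
            return 1
        return fib(n - 1) + fib(n - 2)

    calls = [0]

    def counted(f):
        def wrapper(n):
            calls[0] += 1
            return f(n)
        return wrapper

    fib = counted(fib)   # rebinding the name redirects the "recursive" calls too
    fib(10)
    print(calls[0])      # 177: every inner call went through the wrapper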

Re: why memoizing is faster

2011-03-25 Thread Andrea Crotti
eryksun () eryk...@gmail.com writes: Regarding this I have a question about function attributes. Is there an equivalent of 'self' for functions? I've seen people use function attributes as local static variables, but this seems problematic when all you have is a global reference to the

Re: why memoizing is faster

2011-03-25 Thread eryksun ()
On Friday, March 25, 2011 6:53:48 AM UTC-4, Andrea Crotti wrote: I had never seen such a thing, but why would this make sense at all? When does it make sense logically for a function to have an attribute? A function should have some input, some local variables to compute temporary results

Re: why memoizing is faster

2011-03-25 Thread Andrea Crotti
eryksun () eryk...@gmail.com writes: See PEP 232: http://www.python.org/dev/peps/pep-0232 Also, see the discussion thread on Python-Dev: http://mail.python.org/pipermail/python-dev/2000-April/thread.html#3282 I don't think self-referenced static variables (as in C's static declaration) are
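
For reference, the pattern being debated looks something like this (a made-up example, not from the thread): a function attribute playing the role of a C-style static local, with the global-name dependence that Andrea is worried about:

    # Made-up example: a function attribute acting like a C "static" local.
    def tick():
        tick.count += 1      # relies on the *global* name "tick" still pointing here
        return tick.count

    tick.count = 0           # initialise the attribute once, after the def

    tick()
    tick()
    print(tick.count)        # 2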

Re: why memoizing is faster

2011-03-25 Thread Terry Reedy
On 3/25/2011 4:49 AM, Andrea Crotti wrote: Terry Reedy tjre...@udel.edu writes: def fib_iter(n, _cache = [1,1]): k = len(_cache) if n >= k: for i in range(k, n+1): _cache.append(_cache[i-2] + _cache[i-1]) return _cache[n] I just realized that the signature really ought to

Re: why memoizing is faster

2011-03-25 Thread Terry Reedy
On 3/25/2011 5:16 AM, Stefan Behnel wrote: Terry's version is playing with the fact that default arguments are only instantiated once, i.e. (unless overridden by passing an explicit argument) the _cache is shared over all calls to the function. This is similar to what your memoization decorator

Re: why memoizing is faster

2011-03-25 Thread Stefan Behnel
Terry Reedy, 25.03.2011 21:18: On 3/25/2011 5:16 AM, Stefan Behnel wrote: Terry's version is playing with the fact that default arguments are only instantiated once, i.e. (unless overridden by passing an explicit argument) the _cache is shared over all calls to the function. This is similar to

why memoizing is faster

2011-03-24 Thread Andrea Crotti
I was showing a nice memoize decorator to a friend using the classic fibonacci problem. --8<---cut here---start--->8--- def memoize(f, cache={}): def g(*args, **kwargs): # first must create a key to store the arguments called # it's
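
The preview cuts off mid-function; a minimal completion in the same spirit (the key construction is my guess, not Andrea's actual code):

    # A minimal memoize decorator along the lines of the truncated snippet above.
    def memoize(f, cache={}):
        def g(*args, **kwargs):
            # build a hashable key from the function and the arguments it was called with
            key = (f, args, tuple(sorted(kwargs.items())))
            if key not in cache:
                cache[key] = f(*args, **kwargs)
            return cache[key]
        return g

    @memoize
    def fib(n):
        if n < 2:
            return 1
        return fib(n - 1) + fib(n - 2)

    print(fib(100))   # answers instantly: each fib(k) is computed once, then looked up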

Re: why memoizing is faster

2011-03-24 Thread Stefan Behnel
Andrea Crotti, 24.03.2011 14:48: I was showing a nice memoize decorator to a friend using the classic fibonacci problem. --8<---cut here---start--->8--- def memoize(f, cache={}): def g(*args, **kwargs): # first must create a key to store the

Re: why memoizing is faster

2011-03-24 Thread Terry Reedy
On 3/24/2011 9:48 AM, Andrea Crotti wrote: def fib_iter(n): ls = {0: 1, 1:1} Storing a linear array in a dict is a bit bizarre for i in range(2, n+1): ls[i] = ls[i-1] + ls[i-2] return ls[max(ls)] So is using max(keys) to find the highest index, which
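
Both of Terry's quibbles (a dict holding a linear sequence, and max() over the keys to find the last index) go away if the running sequence is kept in a plain list. A guess at the kind of cleanup he has in mind, not his actual follow-up code:

    # Keep the sequence in a list and index it directly.
    def fib_iter(n):
        ls = [1, 1]
        for i in range(2, n + 1):
            ls.append(ls[i - 1] + ls[i - 2])
        return ls[n]

    print(fib_iter(10))   # 89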

Re: why memoizing is faster

2011-03-24 Thread Terry Reedy
On 3/24/2011 9:48 AM, Andrea Crotti wrote: I was showing a nice memoize decorator to a friend using the classic fibonacci problem. --8<---cut here---start--->8--- def memoize(f, cache={}): def g(*args, **kwargs): # first must create a key to

Re: why memoizing is faster

2011-03-24 Thread Fons Adriaensen
On Thu, Mar 24, 2011 at 08:12:22PM -0400, Terry Reedy wrote: The irony of this is that memoizing 'recursive' functions with a decorator depends on the fact that Python does not have truly recursive functions. A function cannot call itself directly. I wonder what exactly is meant by that

Re: why memoizing is faster

2011-03-24 Thread Terry Reedy
On 3/24/2011 8:26 PM, Fons Adriaensen wrote: On Thu, Mar 24, 2011 at 08:12:22PM -0400, Terry Reedy wrote: The irony of this is that memoizing 'recursive' functions with a decorator depends on the fact that Python does not have truly recursive functions. A function cannot call itself directly.