On Thu, 2009-09-10 at 22:53 -0700, Derick Eddington wrote:
> Darn, my stack-lang implementation sucks:
>
> [...]
>
> As you can see, the same algorithm took mine 7.2 seconds and took Factor
> only 0.6 second.
>
> I'm not sure why mine is so slow. I suspect it's because my data stack
> is an SRFI-39 parameter and it is used heavily. I'm going to change my
> implementation to pass and return the data stack through "word"
> procedure calls, to see if that helps.
I did that and it brought the time down to 3.3 seconds, but it's still
allocating 1.4 GB. I don't know if I can bring that allocation down
significantly; I suspect it's happening because my data stack is a
Scheme list, so there's lots of cons'ing to push values.
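(Just to make the cons'ing concrete, here's a minimal sketch of the
threaded-stack style; the word names and definitions are made up for
illustration, not my actual code.)

;; Each word takes the data stack (a Scheme list) and returns the new
;; stack, so every push allocates a fresh pair.
(define (stack-push stk v) (cons v stk))

(define (word-dup stk)                     ; ( x -- x x )
  (stack-push stk (car stk)))

(define (word-add stk)                     ; ( a b -- a+b )
  (stack-push (cddr stk) (+ (car stk) (cadr stk))))

;; Running the phrase "2 3 + dup":
;;   (word-dup (word-add (stack-push (stack-push '() 2) 3)))  => (5 5)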
That fib function applied to 34 results in 18454929 recursive calls to
itself (according to my calculation [1]). So the average allocation per
call:
> (inexact (/ 1402578680 18454929))
76.00022086240483
which doesn't seem like much per call, but there are tons of calls,
each of them cons'ing. I imagine the Factor runtime has a lower-level,
pre-allocated, memory-contiguous stack, and so it's way faster. Does
this reasoning make sense?
Using a Scheme list is easy and convenient for interfacing with Scheme
code, but I think I'll try making a pre-allocated, block-allocated
stack out of a linked list of vectors, with convenient abstractions for
working with it. Does that seem worthwhile?
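(Here's the kind of thing I mean, just a sketch with made-up names and
an arbitrary segment size: a push normally stores into a slot of a
pre-allocated vector, and a new segment is only allocated when the
current one fills up.)

(define segment-size 1024)

;; State: a 2-slot vector holding (0) the next-free index in the
;; current segment and (1) the list of segments, newest first.
(define (make-block-stack)
  (vector 0 (list (make-vector segment-size))))

(define (block-push! stk v)
  (let ((i (vector-ref stk 0))
        (segs (vector-ref stk 1)))
    (cond ((< i segment-size)
           (vector-set! (car segs) i v)
           (vector-set! stk 0 (+ i 1)))
          (else                        ; current segment full: add one
           (let ((seg (make-vector segment-size)))
             (vector-set! seg 0 v)
             (vector-set! stk 1 (cons seg segs))
             (vector-set! stk 0 1))))))

(define (block-pop! stk)
  (let ((i (vector-ref stk 0))
        (segs (vector-ref stk 1)))
    (cond ((> i 0)
           (vector-set! stk 0 (- i 1))
           (vector-ref (car segs) (- i 1)))
          (else                        ; segment empty: drop it and pop
           (vector-set! stk 1 (cdr segs))   ; from the previous (full) one
           (vector-set! stk 0 (- segment-size 1))
           (vector-ref (cadr segs) (- segment-size 1))))))

;; (define s (make-block-stack))
;; (block-push! s 2) (block-push! s 3)
;; (block-pop! s)  => 3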
[1]
> (define (num-fib-calls x)
    (+ 1
       (if (<= x 1)
           0
           (+ (num-fib-calls (- x 1))
              (num-fib-calls (- x 2))))))
> (num-fib-calls 34)
18454929
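(As a sanity check: that count satisfies calls(n) = 2*fib(n+1) - 1
with fib(1) = fib(2) = 1, and fib(35) = 9227465, so
2*9227465 - 1 = 18454929.)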
--
: Derick