On 23-08-2010, Ethan Burns <burns.et...@gmail.com> wrote:
> On Mon, Aug 23, 2010 at 8:06 AM, Christophe TROESTLER
><christophe.troestler+oc...@umh.ac.be> wrote:
>> On Thu, 19 Aug 2010 07:52:33 -0400, Ethan Burns wrote:
>>>
>>> let r = ref 0.0 ;;
>>> for i = 0 to 1000000000 do r := float i done;
>>> Printf.printf "%f\n" !r;
>>> Printf.printf "words: %f\n" (Gc.stat ()).Gc.minor_words
>>
>> To add a clarification to the other answers: float refs are unboxed
>> _locally_.  If you rewrite your code as
>>
>> let r = ref 0.0 in
>> for i = 0 to 1000_000_000 do r := float i done;
>> Printf.printf "%f\n" !r;
>> Printf.printf "words: %f\n" (Gc.stat ()).Gc.minor_words
>>
>> then it runs at about the same speed as your other version.
>
>
> $ time ./a.out
> 1000000000.000000
> words: 2000000367.000000
>
> real  0m2.655s
>
> It does seem to run a lot faster than my first version, but it also
> seems to allocate a whole lot.  If it is still allocating just as much,
> why is this version so much faster?
>

Allocation on the minor heap is very cheap compared to assignment into
the major heap. It is better to allocate a lot on the minor heap than to
perform writes into the major heap.
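
A minimal sketch (not part of the original exchange) of one way to see
this: Gc.minor_words counts words allocated on the minor heap and
Sys.time gives rough CPU time, so the two patterns can be compared side
by side.  The iteration count is smaller than in the original example,
and the "bench" helper is just an illustration:

let global = ref 0.0

let bench name f =
  let w0 = (Gc.stat ()).Gc.minor_words in
  let t0 = Sys.time () in
  f ();
  let t1 = Sys.time () in
  let w1 = (Gc.stat ()).Gc.minor_words in
  Printf.printf "%s: %.2fs, %.0f minor words\n" name (t1 -. t0) (w1 -. w0)

let () =
  let n = 10_000_000 in
  (* top-level float ref: each assignment stores a pointer into a block
     that may live in the major heap, so the write barrier is paid *)
  bench "global ref" (fun () -> for i = 0 to n do global := float i done);
  (* local float ref: unboxed locally, so the loop avoids the barrier *)
  bench "local ref" (fun () ->
    let r = ref 0.0 in
    for i = 0 to n do r := float i done;
    ignore !r)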

I think the main reason for the difference is that the first example
(where the float ref is not local) implies a call to "caml_modify"
(byterun/memory.c|h), which has a cost. This cost is higher on the amd64
architecture because one of its tests (Is_in_heap, I think) is quite
expensive due to address space randomization.
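
As a hypothetical follow-up (not from the thread): if a top-level float
ref really is needed, the write barrier can be kept out of the hot loop
by accumulating in a local ref, which ocamlopt can keep unboxed as noted
above, and storing into the global ref only once at the end:

let r = ref 0.0

let () =
  let acc = ref 0.0 in        (* local, so no caml_modify inside the loop *)
  for i = 0 to 1_000_000_000 do
    acc := float i
  done;
  r := !acc;                  (* the single store into the top-level ref *)
  Printf.printf "%f\n" !r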

Regards,
Sylvain Le Gall
