The same essential result can be achieved by allocating an array in the heap and using inner-ref to obtain references to its elements.
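Very roughly, the C-level picture is the sketch below. (Particle, the
element count, and so on are made up for illustration; inner-ref is the
safe analogue of taking &pool[i] here.)

  #include <stdlib.h>

  typedef struct { double x, y, z; } Particle;

  int main(void) {
    size_t n = 1000;
    Particle *pool = malloc(n * sizeof *pool);  /* one heap allocation */
    if (pool == NULL) return 1;

    /* A reference to an element costs no further allocation. */
    Particle *p = &pool[42];
    p->x = 1.0;

    free(pool);                 /* the whole array is released at once */
    return 0;
  }

The point is that element references never require per-element
allocations; only the array itself is ever allocated or reclaimed.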
On Thu, Mar 5, 2009 at 12:51 AM, Pal-Kristian Engstad <[email protected]> wrote:
> Jonathan S. Shapiro wrote:
>> On Wed, Mar 4, 2009 at 9:22 PM, Pal-Kristian Engstad
>> <[email protected]> wrote:
>>> Jonathan S. Shapiro wrote:
>>>> What people tend to forget is that GC is just as fast as hand
>>>> allocation.
>>>>
>>> I wondered about this statement -- how can that be true? If we assume
>>> that the programmer did some careful analysis of the code, determining
>>> when memory can be reclaimed, how can that be as fast as having code
>>> (albeit with type information) that scans memory to see if it is still
>>> reachable? Mind you, I'm not arguing that it can't be done with fairly
>>> low overhead, but there still must be some overhead? So can you try to
>>> substantiate that claim, Jonathan?
>>
>> GC is a big topic. I'm fairly well read on it, but I don't consider
>> myself a real expert. That being said, let me compare one commonly
>> used GC scheme with manual allocation, which may give you a sense.
>>
> Thanks -- that does make sense to me. If you compare malloc()/free()
> versus GC, then I totally buy the argument.
>
> However, at least in my industry, the common way to perform allocation
> is to make use of "pools". Essentially, you pre-allocate a large block
> of memory for a specific task. Then you allocate the same way as you do
> in a GC: you simply bump up the current pointer, or else you bump it
> down from the top (for "temporary" allocations). If you run out of
> memory, you simply assert and make sure to use a larger buffer.
> Deallocation is done by resetting the pointers, at which point the
> memory that was used is dead (a rather scary thought -- I admit).
>
> Another typical use case is the double-buffering scheme. You allocate
> two of the above memory pools, one for each frame. Since you know that
> after a frame has been drawn its memory is no longer needed, you simply
> reset the pointers.
>
> I think I know the answer -- what we are doing is very unsafe! However,
> it does work and provides us with an extremely fast memory allocator
> (and deallocator). Would BitC simply disallow such a usage pattern?
>
>> A manual allocator incurs a *very* large cost per allocation and per
>> free. malloc() must find an appropriately sized block to allocate.
>> Both operations must [de]coalesce blocks of memory to maintain their
>> free lists. The list management alone is surprisingly complicated, and
>> list traversals in general have surprisingly bad cache behavior (your
>> neighbor is almost never in cache). So the overhead of these
>> operations is quite high, and the procedure call for malloc cannot be
>> avoided. The cost of that procedure call alone (the call itself, not
>> the internal execution) is already 10x the cost of allocating in a
>> modern GC system.
>>
>> The reason is that GC-based allocators do not need to deal with
>> fragmentation. An allocation consists of checking whether the current
>> heap is big enough (if not, invoke GC), recording the current
>> end-of-heap pointer (which becomes the address of the new object), and
>> adding the object size to create the new end-of-heap pointer. That's
>> three to four instructions, which are invariably inlined, after which
>> they can be "boxcarred" through standard common subexpression
>> elimination. All of the cost in a GC system is incurred by the
>> collector, and that cost works out, in most programs, to be about the
>> same as the combined cost of malloc/free.
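To make the comparison concrete, here is a minimal sketch of the pool
pattern described above. Names like Arena and arena_alloc are made up,
there is no per-allocation alignment handling, and the capacities are
arbitrary; the point is that the fast path is the same bounds check
plus pointer bump that a GC nursery performs.

  #include <assert.h>
  #include <stddef.h>

  typedef struct {
    unsigned char *base;
    size_t         capacity;
    size_t         used;        /* the bump pointer, kept as an offset */
  } Arena;

  /* Fast path: one bounds check plus one pointer bump -- essentially
     the same handful of instructions as a GC nursery allocation. */
  static void *arena_alloc(Arena *a, size_t size) {
    assert(a->used + size <= a->capacity);  /* "assert, use a larger buffer" */
    void *p = a->base + a->used;
    a->used += size;
    return p;
  }

  /* Deallocation is just resetting the bump pointer. */
  static void arena_reset(Arena *a) { a->used = 0; }

  /* Double-buffered use: one pool per in-flight frame. */
  static _Alignas(max_align_t) unsigned char storage[2][1 << 20];
  static Arena frame_arena[2] = {
    { storage[0], sizeof storage[0], 0 },
    { storage[1], sizeof storage[1], 0 },
  };

  void render_frame(unsigned frame_index) {
    Arena *a = &frame_arena[frame_index & 1];
    arena_reset(a);             /* last frame's data in this pool is dead */
    float *scratch = arena_alloc(a, 256 * sizeof(float));
    (void)scratch;              /* ... build and submit the frame ... */
  }

The unsafe part is not the bump allocation itself but the reset:
nothing prevents a reference to the old frame's data from outliving
it, which is exactly the hole a GC (or the array/inner-ref pattern
above) closes.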
>>
>> All of that was true back when safe languages were very prone to heap
>> allocation. Modern compilers perform optimizations that allocate
>> objects on the stack when possible, and that helps further.
>>
>> A copying collector is relatively cache friendly where free() is not.
>> It also has the side effect of improving cache and TLB locality in
>> comparison to a malloc()/free() heap, which actually improves
>> normal-mode performance.
>>
>> shap
>
> --
> Pål-Kristian Engstad ([email protected]),
> Lead Graphics & Engine Programmer,
> Naughty Dog, Inc., 1601 Cloverfield Blvd, 6000 North,
> Santa Monica, CA 90404, USA. Ph.: (310) 633-9112.
>
> "Emacs would be a far better OS if it was shipped with
> a halfway-decent text editor." -- Slashdot, Dec 13, 2005.
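To illustrate the locality point above, here is a toy copy phase. It
assumes fixed-size nodes forming an acyclic, unshared list, so
forwarding pointers can be left out; Node, copy_live, and the sizes
are made up, and a real collector is considerably more involved.

  #include <assert.h>
  #include <stddef.h>

  typedef struct Node {
    struct Node *next;          /* the only pointer field */
    long         payload;
  } Node;

  #define TO_SPACE_NODES 1024
  static Node to_space[TO_SPACE_NODES];
  static size_t to_top;         /* bump allocator into to-space */

  /* Copy one node into to-space and return its new address. */
  static Node *evacuate(const Node *old) {
    assert(to_top < TO_SPACE_NODES);
    Node *copy = &to_space[to_top++];
    *copy = *old;
    return copy;
  }

  /* Copy everything reachable from root. Survivors end up packed next
     to each other in to-space, in traversal order; garbage is simply
     never touched. */
  static Node *copy_live(const Node *root) {
    if (root == NULL) return NULL;
    Node *new_root = evacuate(root);
    /* Cheney-style scan: walk what has been copied so far and
       evacuate whatever it still points at in the old heap. */
    for (size_t scan = 0; scan < to_top; scan++) {
      if (to_space[scan].next != NULL)
        to_space[scan].next = evacuate(to_space[scan].next);
    }
    return new_root;
  }

That packing is where the cache and TLB benefit comes from: after a
collection the live data is contiguous again, which a malloc()/free()
heap can never rearrange.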
