Mattias Gaertner wrote:
>>> Plus some bytes for the memory manager for each mem block. Typically 8.
>>
>> Forgot about that, sorry.  In any case, the math still works.  I'll
>> explain.  In this case, 10,000 entries occupy 80,008 bytes
>> (80,000 + 4 for pointer to First + 4 for pointer to Last),
>
> ~160,000
>
>> distributed around the memory table.  Also keep in mind that the data
>> payload for the linked list item is usually contained within the
>> structure itself.
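To make the embedded-payload point concrete, here is a minimal sketch of the node layout I mean (the type names are made up for illustration, and the sizes assume the 32-bit targets we're discussing); it is where the 10,000 * 8 + 8 = 80,008 figure comes from:

program NodeSizes;
{$mode objfpc}
type
  PNode = ^TNode;
  TNode = record
    Next: PNode;               { 4 bytes on a 32-bit target }
    Data: Pointer;             { payload lives inside the node itself }
  end;
  TLinkedList = record
    First, Last: PNode;        { the extra 4 + 4 bytes }
  end;
begin
  WriteLn('node:      ', SizeOf(TNode), ' bytes');        { 8 on 32-bit }
  WriteLn('list head: ', SizeOf(TLinkedList), ' bytes');  { 8 on 32-bit }
  { 10,000 * SizeOf(TNode) + SizeOf(TLinkedList) = 80,008 bytes }
end.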

>> A TList (stripped down for this case) requires 4 bytes for the list
>> allocation, plus 4 bytes per list entry.  10,000 entries occupy 80,004
>> bytes.
>
> ~40,000
Actually, if you account for the allocation of each of the 10,000 items added to the TList, the items alone come to ~120,000 bytes (4 bytes per record plus the 8 bytes of memory manager overhead you pointed out above, times 10,000); add the 40,000 for the list itself and the total is ~160,000.  My point above is that the linked list item typically houses the payload itself, so no additional allocation (or memory manager record) is required per entry.
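Here is a quick sketch of that accounting (PItem and MMOverhead are names I made up for illustration; the sizes again assume 32 bit):

program TListCost;
{$mode objfpc}
uses Classes;
type
  PItem = ^TItem;
  TItem = record
    Data: Pointer;             { best-case 4-byte payload }
  end;
const
  N = 10000;
  MMOverhead = 8;              { per-block memory manager cost, as above }
var
  L: TList;
  P: PItem;
  i: Integer;
begin
  L := TList.Create;
  for i := 1 to N do
  begin
    New(P);                    { one separate heap block per item: 4 + 8 }
    L.Add(P);                  { plus a 4-byte pointer slot in the TList }
  end;
  WriteLn('item blocks: ', N * (SizeOf(TItem) + MMOverhead), ' bytes'); { 120,000 }
  WriteLn('list slots:  ', N * SizeOf(Pointer), ' bytes');              {  40,000 }
  for i := 0 to L.Count - 1 do
    Dispose(PItem(L[i]));
  L.Free;
end.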

  
>> Now, two things:
>>
>> 1. With automatically growing lists you have a very high likelihood of
>> unused pointers, so while a linked list of 10,000 items is utilizing all
>> 80,004 bytes of memory, the TList allocated (10,000-TList.Count)*4 unused
>> bytes of memory.
>
> i.e. plus a maximum of 25% with the current implementation
> ~50,000
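That slack is easy to observe directly, by the way; a toy program along these lines prints a Capacity noticeably larger than Count (the exact growth factor depends on your RTL version):

program GrowthSlack;
{$mode objfpc}
uses Classes;
var
  L: TList;
  i: Integer;
begin
  L := TList.Create;
  for i := 1 to 10000 do
    L.Add(nil);                { payload is irrelevant here }
  WriteLn('Count:        ', L.Count);
  WriteLn('Capacity:     ', L.Capacity);
  WriteLn('unused bytes: ', (L.Capacity - L.Count) * SizeOf(Pointer));
  L.Free;
end.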
 
  
>> 2. The TList entry only points to the actual data payload, meaning another
>> 4+n bytes must be allocated to store the data.  This means an additional
>> 40,000 bytes is required for a TList vs. a linked list.
>
> Huh?
Here I'm pointing out that the item each entry in the TList refers to has to be allocated somewhere.  In the best case, a bare record, that means a minimum of 4 bytes per allocation, plus 8 for the memory manager.  It all adds up.
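Rather than guessing at the per-block cost, I believe recent FPC releases let you ask the heap manager directly; roughly like this (GetFPCHeapStatus and CurrHeapUsed are the names I know from the RTL, so check your version; the figure will vary with the allocator and target):

program BlockCost;
{$mode objfpc}
type
  PItem = ^TItem;
  TItem = record
    Data: Pointer;             { the minimal 4-byte record from above }
  end;
const
  N = 10000;
var
  Items: array[1..N] of PItem;
  UsedBefore, UsedAfter: PtrUInt;
  i: Integer;
begin
  UsedBefore := GetFPCHeapStatus.CurrHeapUsed;
  for i := 1 to N do
    New(Items[i]);             { N separate heap blocks }
  UsedAfter := GetFPCHeapStatus.CurrHeapUsed;
  WriteLn('heap bytes per 4-byte record: ', (UsedAfter - UsedBefore) div N);
  for i := 1 to N do
    Dispose(Items[i]);
end.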

> The main problem is the mem fragmentation. Here a growing TList can need up
> to 4 times its used memory. So in worst case it will need the memory of a
> single linked list.
Regarding fragmentation, it's my personal experience that allocating large numbers (millions) of small blocks is easier to manage in a suitably unpredictable environment.  In cases where I need TList functionality, I have to estimate how large to make the TList (in my case at run time, which is very hit-and-miss).  If I underpredict, I chew up large chunks of contiguous memory very quickly; if I overpredict, which usually happens by a very large margin, I'm wasting memory.  Granted, that's not a huge issue on a server with four gigabytes of RAM, but on a high-traffic server it could be.
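For what it's worth, when the final size is predictable you can sidestep the regrow-and-copy cycle by presizing the list; the trade-off is exactly the misprediction I described:

program Presized;
{$mode objfpc}
uses Classes;
var
  L: TList;
begin
  L := TList.Create;
  { One contiguous allocation up front: no intermediate regrow/copy, }
  { but every unused slot is 4 bytes wasted if the estimate is high. }
  L.Capacity := 10000;
  { ... add items as they arrive ... }
  L.Free;
end.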
_______________________________________________
fpc-devel maillist  -  fpc-devel@lists.freepascal.org
http://lists.freepascal.org/mailman/listinfo/fpc-devel
