On 17 Jul 2010, at 21:50, Micha Nelissen wrote:

> Jonas Maebe wrote:
>> All applications but the ones that allocate only a few memory blocks 
>> (especially if it's a few small blocks of many different sizes) would 
>> benefit from this change, not just apps allocating hundreds of megabytes at 
>> the same time (it also helps when applications use at most 10 MB but 
>> allocate and free a lot of data, so the blocks get released back to the OS). 
>> Applications allocating a ton of them would obviously benefit more than 
>> others.
> 
> So why 256 KiB? Not 64, 128, or 512 KiB?

I've now committed a dynamic scheme that starts off with chunks of 32KiB, but 
which grows the chunk size (per thread) as more and more chunks get allocated 
(up to a limit of 256KiB). The speed is only slightly lower than when starting 
off immediately with 256KiB blocks, and for apps performing only a few 
allocations nothing changes regarding the amount of memory they use.

The 256KiB upper limit comes from benchmarking the compiler both when compiling 
itself (average scenario: mixed allocations and frees) and when compiling the 
"all packages units" (extreme scenario). Going higher did not provide noticeable 
speed gains while increasing memory usage quite a bit. The same goes for the 
threshold at which the chunk size is doubled.


Jonas
_______________________________________________
fpc-devel maillist  -  fpc-devel@lists.freepascal.org
http://lists.freepascal.org/mailman/listinfo/fpc-devel