Hi,

Lately, I have been pondering how memory managers deal with the
situation where memory is fragmented by scattered allocated blocks and
an allocation request comes in for a chunk larger than the biggest
unallocated contiguous region.
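To make the scenario concrete, here is a toy sketch (the numbers and
names are mine, purely illustrative, not taken from any real
allocator): plenty of memory is free in total, yet no single gap is
big enough for the request.

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        /* sizes of the unallocated gaps left between allocated blocks */
        size_t free_gaps[] = { 4096, 8192, 2048, 16384, 512 };
        size_t n = sizeof free_gaps / sizeof free_gaps[0];
        size_t request = 24576;          /* a 24 KiB allocation request */

        size_t total = 0, largest = 0;
        for (size_t i = 0; i < n; i++) {
            total += free_gaps[i];
            if (free_gaps[i] > largest)
                largest = free_gaps[i];
        }

        printf("total free: %zu, largest gap: %zu, request: %zu\n",
               total, largest, request);
        if (request > largest)
            printf("request cannot be satisfied contiguously, "
                   "even though total free memory would suffice\n");
        return 0;
    }
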

In a situation like that, my temptation is to fall back on linked
lists of smaller chunks, so that no large unallocated contiguous
memory block is ever required. However, this increases the overall
processing load, which tends to slow down whatever program uses such
a model.
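Roughly what I have in mind is something like the sketch below (again
my own naming, only an assumption about how such a scheme could look):
a logical buffer is kept as a chain of small fixed-size chunks, so the
allocator only ever has to find CHUNK_SIZE contiguous bytes, and the
price is the pointer chasing on every access.

    #include <stdlib.h>

    #define CHUNK_SIZE 4096

    struct chunk {
        struct chunk *next;
        unsigned char data[CHUNK_SIZE];
    };

    /* Build a chained buffer able to hold at least 'len' bytes. */
    static struct chunk *chunked_alloc(size_t len)
    {
        struct chunk *head = NULL, **tail = &head;
        size_t got = 0;

        while (got < len) {
            struct chunk *c = malloc(sizeof *c);
            if (!c) {   /* out of memory: release what we built so far */
                while (head) {
                    struct chunk *n = head->next;
                    free(head);
                    head = n;
                }
                return NULL;
            }
            c->next = NULL;
            *tail = c;
            tail = &c->next;
            got += CHUNK_SIZE;
        }
        return head;
    }

    /* Read one byte at logical offset 'off'.  Note the walk over
     * off / CHUNK_SIZE links that a flat array would not need. */
    static unsigned char chunked_get(struct chunk *head, size_t off)
    {
        while (off >= CHUNK_SIZE) {
            head = head->next;
            off -= CHUNK_SIZE;
        }
        return head->data[off];
    }

    int main(void)
    {
        struct chunk *buf = chunked_alloc(3 * CHUNK_SIZE + 100);
        if (!buf)
            return 1;
        buf->data[0] = 42;
        return chunked_get(buf, 0) == 42 ? 0 : 1;
    }
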

The question is: how do memory managers manage to remain efficient
while coping with allocation requests of so many different sizes?

Edward
