On 4/18/06, Christian Smith <[EMAIL PROTECTED]> wrote:
> The OS has to track each MMAP segment, which is usually a linear linked
> list sorted by virtual address. As most processes don't have that many
> segments mapped (10's of segments would be considered a lot) the O(n)
> linear search is not considered much of a burden. If this increases to the
> 100's or 1000's of segments, the kernel will spend correspondingly more
> time in VM management. Such linear searches can also have a detrimental
> effect on the CPU caching.

Modern OSs use binary trees (rb-trees or the like) to manage memory
maps (Linux keeps both an rb-tree and a linked list), so it's not a
linear search.
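
To make the cost difference concrete, here is a rough C sketch (my own
model, nothing like the real kernel code, which keeps vm_area_struct
nodes in the rb-tree) of the lookup a page-fault handler must do: find
the segment containing a faulting address. With the segments kept in
sorted order it is O(log n) instead of the O(n) walk of a plain list:

    /* Simplified model of segment lookup: a sorted array plus binary
     * search stands in for the kernel's rb-tree of mappings. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct segment {
        uintptr_t start;  /* first address of the mapping */
        uintptr_t end;    /* one past the last address    */
    };

    /* Return the segment containing addr, or NULL if unmapped.
     * segs[] must be sorted by start and non-overlapping. */
    static const struct segment *
    find_segment(const struct segment *segs, size_t n, uintptr_t addr)
    {
        size_t lo = 0, hi = n;
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (addr < segs[mid].start)
                hi = mid;
            else if (addr >= segs[mid].end)
                lo = mid + 1;
            else
                return &segs[mid];  /* start <= addr < end */
        }
        return NULL;
    }

    int main(void)
    {
        /* Toy address space: three mappings, sorted by start. */
        const struct segment segs[] = {
            { 0x1000, 0x2000 },
            { 0x5000, 0x9000 },
            { 0xF000, 0x10000 },
        };
        const struct segment *s = find_segment(segs, 3, 0x5123);
        if (s)
            printf("0x5123 is in [0x%lx, 0x%lx)\n",
                   (unsigned long)s->start, (unsigned long)s->end);
        return 0;
    }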

Also, since all modern OSs use paging everywhere, they tend to use
"mmapping" internally for every file access (on devices that support
it, of course). The "mmap" paradigm fits the paging system better than
the old buffer approach, allowing lots of "neat" tricks like demand
paging, lazy writes, copy-on-write, etc.
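
For illustration, here is a minimal POSIX sketch of the mmap approach
to reading a file (my own example, error handling kept short). Note
there are no read() calls: the kernel pages the data in on demand as
the loop touches it:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
        if (st.st_size == 0) { puts("0 lines"); return 0; }

        /* Map the whole file read-only; no read() buffer needed. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        close(fd);  /* the mapping stays valid after the fd is closed */

        /* First touch of each page faults it in on demand. */
        long lines = 0;
        for (off_t i = 0; i < st.st_size; i++)
            if (p[i] == '\n')
                lines++;
        printf("%ld lines\n", lines);

        munmap(p, st.st_size);
        return 0;
    }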

Windows internally uses sections (the kernel equivalent of mmaps) for
all file accesses, with the difference that those sections of the file
are dynamic for sequential/random file access (iirc, 256KB view
blocks) and fixed when the user explicitly mmaps a file (creates a
file view).
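
The kernel's section objects aren't directly visible from user mode,
but the user-level file-mapping API is built on the same machinery.
A minimal sketch (the file name "test.txt" is just a placeholder):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE f = CreateFileA("test.txt", GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL,
                               NULL);
        if (f == INVALID_HANDLE_VALUE) return 1;

        /* Create a section object backed by the file. */
        HANDLE sec = CreateFileMappingA(f, NULL, PAGE_READONLY, 0, 0, NULL);
        if (!sec) { CloseHandle(f); return 1; }

        /* Map a view of the whole file into our address space. */
        const char *p = MapViewOfFile(sec, FILE_MAP_READ, 0, 0, 0);
        if (p) {
            printf("first byte: %c\n", p[0]);
            UnmapViewOfFile(p);
        }
        CloseHandle(sec);
        CloseHandle(f);
        return 0;
    }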

I believe Linux does something similar internally (for block devices
which support mmap), but I don't know the details.

Tens of segments is not a lot when you remember that all DLLs are also
mmapped, especially in big GUI applications built with very high-level
languages.
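
On Linux this is easy to verify: /proc/self/maps has one line per
segment, so even the trivial program below (my own example) typically
reports dozens of entries once libc and the dynamic linker are mapped
in:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/self/maps", "r");
        if (!f) { perror("fopen"); return 1; }

        char buf[512];
        int segments = 0;
        while (fgets(buf, sizeof buf, f))
            if (strchr(buf, '\n'))   /* count only complete lines */
                segments++;
        fclose(f);

        printf("%d segments mapped\n", segments);
        return 0;
    }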

Of course there will be some overhead when using a LOT of mmapped
files, but that will come mainly from the increased memory use, which
affects caching and paging, two of the biggest factors in performance.
It's not a direct relation, though, as there are ways to fight it
(e.g., maximizing cache locality).
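
As one concrete example of such a fix (my illustration, not something
anyone in this thread proposed): on POSIX systems you can tell the
kernel the expected access pattern of a mapping with madvise(), so it
reads ahead for sequential scans and skips useless read-ahead for
random access:

    /* Build note: on glibc, madvise() needs _DEFAULT_SOURCE. */
    #define _DEFAULT_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>

    /* Hypothetical helpers; addr/len must describe an existing mapping. */

    /* Before a front-to-back scan: aggressive read-ahead pays off. */
    static void advise_sequential(void *addr, size_t len)
    {
        madvise(addr, len, MADV_SEQUENTIAL);
    }

    /* Before random point lookups: read-ahead would mostly be wasted. */
    static void advise_random(void *addr, size_t len)
    {
        madvise(addr, len, MADV_RANDOM);
    }

    int main(void)
    {
        /* Demo on a 1 MiB anonymous mapping. */
        size_t len = 1 << 20;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;
        advise_sequential(p, len);
        munmap(p, len);
        return 0;
    }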


Best regards,
~Nuno Lucas
