On Sat, 25 Jan 2003, Dave Mitchell wrote:
> On Sat, Jan 25, 2003 at 06:18:47AM -0800, Sean O'Rourke wrote:
> > On Sat, 25 Jan 2003, Leopold Toetsch wrote:
> > > Dan Sugalski wrote:
> > >
> > > > At 5:32 PM +0000 1/24/03, Dave Mitchell wrote:
> > > >
> > > >> I just wrote a quick C program that successfully mmap-ed in all 1639
> > > >> files in my Linux box's /usr/share/man/man1 directory.
> > > >
> > > >
> > > > Linux is not the universe, though.
> >
> > How true.  On Solaris, for example, mmap's are aligned on 64k boundaries,
> > which leads to horrible virtual address space consumption when you map
> > lots of small things.  If we're mmap()ing things, we want to be sure
> > they're fairly large.
>
> Okay, I just ran a program on a Solaris machine that mmaps in each
> of 571 man files 20 times (a total of 11420 mmaps). The process size
> was 181Mb, but the total system swap available only decreased by 1.2Mb
> (since files mmapped in RO effectively don't consume swap).

The problem's actually _virtual_ memory use/fragmentation, not physical
memory or swap.  Say you map in 10k small files -- at 64k per map,
that's 640M of virtual address space, just over a fourth of what's
available.  Now let's say you're also using mmap() in your webserver to
send large (10M) files quickly over the network.  The small files, if
they're long-lived, get scattered all over VA-space, so there's a
non-trivial chance that the OS won't be able to find a 10MB chunk of
free addresses at some point.
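Something like this untested sketch shows the shape of it (the path and
the counts are just made up for illustration):

    /* Map one small file many times read-only -- each mapping gets
     * its own 64k-aligned slot of VA on Solaris -- then try to find
     * one large contiguous region. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(void)
    {
        int i, fd = open("/usr/share/man/man1/ls.1", O_RDONLY);
        struct stat st;

        if (fd < 0 || fstat(fd, &st) < 0) {
            perror("open/fstat");
            return 1;
        }

        /* 10000 small read-only maps: ~640M of VA even though
         * the file itself is tiny. */
        for (i = 0; i < 10000; i++) {
            if (mmap(NULL, st.st_size, PROT_READ, MAP_SHARED,
                     fd, 0) == MAP_FAILED) {
                perror("small mmap");
                return 1;
            }
        }

        /* Now ask for one contiguous 10M anonymous region, as a
         * webserver sending a large file might.  With enough small
         * maps scattered around, this is where ENOMEM shows up. */
        if (mmap(NULL, 10 * 1024 * 1024, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANON, -1, 0) == MAP_FAILED)
            perror("large mmap");
        else
            puts("large map still fits");

        return 0;
    }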

To see it, you might try changing your program to map and unmap a large
file periodically while mapping the man pages.  Then take a look at the
process's address space with /usr/proc/bin/pmap to see what the OS is
doing with the maps.
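Roughly, the change would look like this (again untested; the 10M size
and the every-100-maps interval are arbitrary, and MAP_ANON is assumed
to be available, as it is on Solaris):

    /* Interleave the small man-page maps with a large map/unmap
     * cycle, then pause so you can run pmap on the process. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #define BIG (10 * 1024 * 1024)

    int main(void)
    {
        int i;

        for (i = 0; i < 10000; i++) {
            /* ... mmap the next man page here, as before ... */

            if (i % 100 == 0) {
                /* Periodically grab and release a 10M region. */
                void *big = mmap(NULL, BIG, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANON, -1, 0);
                if (big == MAP_FAILED)
                    printf("10M map failed at iteration %d\n", i);
                else
                    munmap(big, BIG);
            }
        }

        printf("run /usr/proc/bin/pmap %d\n", (int)getpid());
        pause();    /* keep the maps alive while you look */
        return 0;
    }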

Weird, I know, but that's why it stuck in my mind.  You have to map quite
a few files to get this to happen, but it's a real possibility with a
32-bit address space and a long-running process that does many small
mmap()s and some large ones.

Anyways...

/s
