[cutting down on cc's]
"Chris G. Demetriou" wrote:
>
> I'd _really_ like to know how you figure this.
>
>    text    data     bss     dec     hex  filename
>   45024    4096     392   49512    c168  /bin/cat
>  311264   12288    9900  333452   5168c  /bin/sh
>  212960    4096   28492  245548   3bf2c  inetd.static
>  458720   12288   73716  544724   84fd4  sendmail.static
>
> None of these are particularly huge. Dynamically linked binaries
Fine. Let's see... suppose you have 10 /bin/sh running. You have to
allocate 122880 bytes of "data". This is writable, y'know. /bin/sh
might decide to write to it, so you need one copy for each instance
of /bin/sh running on a non-overcommit system.
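To put it concretely, here's a minimal sketch (the 12288 matches
/bin/sh's data size from your own table; everything else is made up
for illustration):

/*
 * A writable data segment, forked ten times. None of the children
 * ever writes to table[], yet a strict (no-overcommit) kernel must
 * reserve a private copy for each of them, because any of them
 * *might* write to it.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static char table[12288];	/* writable data, like /bin/sh's 12k */

int
main(void)
{
	int i;

	printf("parent: %lu bytes of writable data\n",
	    (unsigned long)sizeof(table));
	for (i = 0; i < 10; i++) {
		switch (fork()) {
		case -1:	/* backing store exhausted? */
			perror("fork");
			exit(1);
		case 0:
			sleep(60);	/* child never touches table[] */
			_exit(0);
		}
	}
	/* 10 x 12288 bytes reserved, and not one of them dirtied. */
	return (0);
}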
> * while you certainly need to allocate more backing store than you
> would with overcommit, it's _not_ ridiculously more for most
> applications(+), and, finally,
It comes down to this: in the overcommit model, memory is only
allocated *on-demand*. If you are not overcommitting, that is
equivalent to having everything "demanded". Now, consider a
non-overcommitting system reaching the limit of available memory...
this is the point at which the overcommitting systems would kill
processes, and the non-overcommitting ones will return failure from
malloc(). Select *one* running process, and make its memory
overcommitted. As a result, you'll have *more* memory available, at
least for a few more cycles. Repeat this with every other process.
Now you have much more memory available, and you are very unlikely
to spend it in the next few cycles, unless there is a runaway
process (which will be the process killed, if all memory is
allocated). So, you would have been better off overcommitting.
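The asymmetry is trivial to demonstrate. A minimal sketch (the 64MB
is made up):

/*
 * "Allocated" is not "used": grab 64MB and dirty a single page.
 * An overcommitting kernel charges for one page; a strict one has
 * to find backing for all 64MB at malloc() time, and may fail.
 */
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	size_t sz = 64UL * 1024 * 1024;
	char *p = malloc(sz);

	if (p == NULL) {	/* strict accounting fails here first */
		perror("malloc");
		return (1);
	}
	p[0] = 1;		/* the only page ever written */
	printf("reserved %lu bytes, dirtied one page\n",
	    (unsigned long)sz);
	return (0);
}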
> * even if you are not willing to pay that price, there _are_ people
> who are quite willing to pay that price to get the benefits that they
> see (whether it's a matter of perception or not, from their
> perspective they may as well be real) of such a scheme.
Sure. Point me to +one+ AIX admin who has turned overcommitting
off.
> (+): obviously, there are some applications for which no-overcommit is
> just silly. However, 'normal' UNIX applications by and large allocate
> memory (or map files writeable/private, or map anonymous memory) to
> actually use it. I.e. if 'cat' or 'inetd' or 'sendmail' allocates a
> page from the system, it's almost certainly going write something to
> it, and, while there are undoubtedly a few pages that aren't written
> to, they are by far the minority. And, of course, once the page has
> been written, it's no longer reserved, it's committed. 8-)
Memory allocated is not instantly used. The process might not *need*
the memory until other processes have finished.
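A sketch of what I mean (sizes and timing made up):

/*
 * Allocate-now, use-later: the buffer is reserved at startup, but
 * nothing is written to it until much later, by which time other
 * processes may have come and gone. Under no-overcommit the whole
 * megabyte is tied up for the entire interval.
 */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	char *buf = malloc(1024 * 1024);	/* reserved at startup */

	if (buf == NULL)
		return (1);
	sleep(30);	/* ...work that doesn't need the buffer... */
	memset(buf, 0, 1024 * 1024);	/* only now is it really needed */
	return (0);
}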
And in addition to my previous comments, these "normal" UNIX
applications are also quite small. Memory will mostly be consumed by
larger processes, which usually contain big tables, of which only
parts will actually be used in any given instance.
> I would honestly love to know: what do you see huge numbers of
> pages being reserved for, if they're not actually being
> committed, by 'average' UNIX applications (for any definition of
> average that excludes applications which do memory based computation
> on sparse data).
Try any application generated by yacc or lex, or anything that
parses. They all have these static data tables...
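For instance, this is the shape of table an old yacc skeleton spits
out (hand-written here, and real ones are far bigger; note it is not
const, so it lands in the writable data segment):

/*
 * A state-transition table of the kind yacc(1) generates. Any one
 * input exercises a handful of entries, but every process mapping
 * it privately must have backing reserved for the whole thing.
 */
static short yytable[8192] = {
	12, -7, 3, 58,	/* thousands more in a real parser */
};

short
yylookup(int state)
{
	return (yytable[state & 8191]);	/* touches one page of many */
}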
--
Daniel C. Sobral (8-DCS)
[EMAIL PROTECTED]
[EMAIL PROTECTED]
"Would you like to go out with me?"
"I'd love to."
"Oh, well, n... err... would you?... ahh... huh... what do I do
next?"