Hi all,

I posted a few weeks back about apache core dumping with signal 11 - as
it turns out, it's not just apache, but a lot of processes which are
doing it. Signal 11 (SIGSEGV) is often said to indicate hardware
problems, so I thought in this case it might be memory related.

I have been investigating how memory works under FreeBSD, and it seems
to be this way:

Irrespective of how much memory you really have, you have a virtual
memory size of 4Gb (maximum imposed by the 32-bit architecture). Of
this, the kernel will use 1Gb, leaving 3Gb for user processes. Each
process is by default granted a 128Mb data segment, with a hard limit of
512Mb. This limit can be raised to just under 2Gb by setting MAXDSIZ
(the cap presumably being due to the use of a signed int).
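
For reference, my understanding is that these knobs are kernel config
options, roughly like the following sketch (an i386 kernel config file
is assumed; the exact values here are just examples):

```
# Kernel configuration fragment (e.g. /usr/src/sys/i386/conf/MYKERNEL)
options         MAXDSIZ="(1024UL*1024*1024)"   # per-process data size hard limit: 1Gb
options         DFLDSIZ="(256UL*1024*1024)"    # default per-process data size: 256Mb
```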

Given this information (corrections welcome if I have any of it wrong),
my question then becomes this: if I have, for example,
512Mb of real memory and 2Gb of swap space then that would give me a
maximum 2.5Gb (depending on how the swapper handles this). What happens
once this maximum is used up, since the system has a theoretical maximum
of another 1.5Gb? If the system actually tries to use the space
allocated through the over-committing strategy is the net result going
to be that dreaded signal 11?

For that matter, even if one does have 4Gb available, whether real or
swap, it seems entirely possible that one could run several large
processes and go over this limit. Would this cause a crash?

A more practical question might be "Should I ensure that the processes
running won't use more than my total available memory, less the 1Gb for
the kernel?"
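
As a crude way of checking where things stand, I have been looking at
something like the following (the system-wide figures would come from
FreeBSD-specific commands like `sysctl hw.physmem` and `swapinfo`, noted
only in comments here; `ulimit -d` itself is portable):

```shell
# Per-process view: the data-segment limit that malloc() runs into
# first.  (The system-wide picture would come from `sysctl hw.physmem`
# and `swapinfo` on FreeBSD; those are not run here.)
ulimit -d    # data segment limit for this shell, in kilobytes
```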

Thanks in advance
Duncan Anker




To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-questions" in the body of the message