Daniel Hanks wrote:
> Recently on this list the idea of 'pinning' or locking the root apache
> process in memory has been discussed with some interest. The reason is
> that some users have experienced the situation where a server becomes
> loaded, the root apache process gets swapped out, and in the process it
> loses some of its shared memory. Future child processes that are forked
> also share in the loss of shared memory, so methods like using
> GTopLimit to 'recycle' child processes when their shared memory becomes
> too low cease to work, because the children are already too low on
> shared memory when they come up.
>
> In our systems we had attempted this, but it always came down to the
> same problem: the root process would lose its shared memory, to the
> point that any child process would come up, serve a request, find that
> it was beyond the threshold for shared memory, and die. The only remedy
> was to restart Apache altogether.
>
> So in scouring the list I found someone mentioning using the mlockall C
> function to lock the pages of the core apache process in memory. Some
> handy .xs code was provided, so I built a module, Sys::Mman, which
> wraps mlockall and makes it available to Perl.
>
> We installed this on our servers, and call mlockall right at the end of
> our preload stuff, i.e., at the end of the 'startup.pl'-style script
> called from httpd.conf. The result has been very encouraging. The core
> apache process is then able to maintain all its shared memory, and the
> child processes that are forked start with high amounts of shared
> memory, all making for a much happier system.
>
> Now, I also read that it would probably be better to ensure that you
> never swap, by tuning MaxClients, as well as examining our Perl code to
> make it less prone to lose shared memory. We're working on that sort of
> tuning, but in volatile environments like ours, where we serve a very
> large amount of data and new code is coming out almost daily, locking
> the core httpd in memory has been very helpful. I just thought I would
> let others on the list know that it is feasible and works well in our
> environment.
>
> If there's enough interest I might put the module up on CPAN, but it's
> really very simple. h2xs did most of the work for me. And thanks to
> Doug MacEachern for posting the .xs code. It worked like a charm.
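For readers curious what such a wrapper boils down to: the quoted Sys::Mman module is a thin .xs layer over the mlockall(2) system call. A minimal sketch of the same idea from Python via C<ctypes> is below (the flag values are the standard Linux ones from the manpage; the function and constant names here are this sketch's own, not the poster's code):

```python
# Sketch: wrapping mlockall(2) from a scripting language, analogous
# to the .xs wrapper described above. Linux-specific flag values per
# the mlockall(2) manpage.
import ctypes
import ctypes.util
import errno
import os

MCL_CURRENT = 1   # lock all pages currently mapped into the process
MCL_FUTURE = 2    # also lock pages mapped from now on (fork, malloc, mmap)

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)


def lock_all_pages():
    """Pin every page of this process in RAM; True on success.

    Locking future mappings too means pages the parent httpd touches
    after startup stay resident as well, so they can't be swapped out.
    """
    if libc.mlockall(MCL_CURRENT | MCL_FUTURE) == 0:
        return True
    err = ctypes.get_errno()
    # Unprivileged processes typically get EPERM (or ENOMEM when the
    # locked size would exceed RLIMIT_MEMLOCK) -- report, don't crash.
    if err in (errno.EPERM, errno.ENOMEM):
        return False
    raise OSError(err, os.strerror(err))


if __name__ == "__main__":
    print("pages locked" if lock_all_pages() else "not permitted")
```

Note that, like the root httpd, the caller generally needs root privileges (or CAP_IPC_LOCK on modern Linux) for the call to succeed, which is why the sketch treats EPERM as a normal outcome rather than an error.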
See the discussion on the [EMAIL PROTECTED] list,
http://marc.theaimsgroup.com/?t=101659730800001&r=1&w=2 where it was
concluded that using mlock and its variants is a very bad idea.
Moreover, the memory doesn't actually become unshared when the parent's
pages are paged out: it's the reporting tools that report the wrong
information, and they of course mislead the size-limiting modules, which
then start killing the processes. As a conclusion to this thread I've
added the following section to the performance chapter of the guide:

=head3 Potential Drawbacks of Memory Sharing Restriction

It's very important that the system not be heavily engaged in swapping.
Some systems do swap pages in and out every so often even when they have
plenty of real memory available, and that's OK. The following applies
only to conditions where there is hardly any free memory left.

If the system uses almost all of its real memory (including the cache),
there is a danger of the parent process's memory pages being swapped out
(written to a swap device). If this happens, the memory usage reporting
tools will report all those swapped-out pages as non-shared, even though
in reality these pages are still shared on most OSs. When these pages
are swapped back in, the sharing will be reported back to normal after a
certain amount of time.

If a big chunk of the memory shared with the child processes is swapped
out, it's most likely that C<Apache::SizeLimit> or C<Apache::GTopLimit>
will notice that the shared memory floor threshold was crossed and as a
result kill those processes. If many of the parent process's pages are
swapped out, and a newly created child process already starts with
shared memory below the limit, it'll be killed immediately after serving
a single request (assuming that C<$CHECK_EVERY_N_REQUESTS> is set to
one). This is a very bad situation which will eventually lead to a state
where the system doesn't respond at all, as it'll be heavily engaged in
swapping.
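To see why the reporting goes wrong, consider where the "shared" number comes from on Linux: the C<shared> field of F</proc/E<lt>pidE<gt>/statm> counts only I<resident> shared pages, so pages that have been swapped out simply disappear from it even though the copy-on-write sharing still exists. A hedged sketch of the kind of check the size-limiting modules perform (Linux-specific; the function names here are illustrative, not the modules' actual API):

```python
# Sketch: where the "shared memory" figure comes from on Linux, and
# the kind of floor check that triggers a child kill. /proc/<pid>/statm
# fields are: size resident shared text lib data dt (in pages).
# The "shared" field counts only RESIDENT shared pages, which is why
# swapped-out pages are reported as lost sharing.
import os


def memory_report(pid="self"):
    """Return (total_kb, resident_kb, shared_kb) for a process."""
    with open(f"/proc/{pid}/statm") as f:
        size, resident, shared = map(int, f.read().split()[:3])
    page_kb = os.sysconf("SC_PAGE_SIZE") // 1024
    return size * page_kb, resident * page_kb, shared * page_kb


def below_shared_floor(min_shared_kb):
    """True when the apparent shared size has crossed the floor
    threshold -- the condition under which a size-limiting handler
    would kill the current child after the request."""
    _, _, shared_kb = memory_report()
    return shared_kb < min_shared_kb


if __name__ == "__main__":
    total, resident, shared = memory_report()
    print(f"total={total}kB resident={resident}kB shared={shared}kB")
```

With this in mind, the failure mode above is mechanical: the parent's pages get swapped out, every child's apparent C<shared_kb> drops below the floor, and the modules dutifully kill process after process.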
This effect may be more or less severe depending on the memory manager's
implementation, and it certainly varies from OS to OS and across kernel
versions. Therefore you should be aware of this potential problem and
simply try to avoid situations where the system needs to swap at all:
add more memory, reduce the number of child servers, or spread the load
across more machines if reducing the number of child servers is not an
option because of the request rate demands.

__________________________________________________________________
Stas Bekman             JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/      mod_perl Guide ---> http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org  http://apache.org   http://ticketmaster.com