On Thu, 2005-12-15 at 01:15 -0600, Chase Venters wrote:
> I could probably get some of these answers myself with more study of
> the mod_perl / perl source, but I was hoping someone might just know
> the answer and be able to tell me :)

Not a great way to start your post. Please read this:
http://modperlbook.org/html/ch10_01.html

> First off, am I correct in the assumption that it has been wise even
> in mod_perl 1 (under Apache's child-per-request model) to preload all
> of your modules for memory savings?

Yes.
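For example, a typical startup.pl looks something like this (the
module names are just examples, not anything specific to your app):

  # httpd.conf:  PerlRequire /path/to/startup.pl
  # Everything compiled here lives in the parent process, so the
  # children share those pages via COW instead of each child
  # compiling its own copy.
  use strict;
  use warnings;

  # Preload the heavy modules your handlers need; the empty import
  # list suppresses imports at this stage.
  use DBI ();
  use Template ();
  use My::App::Dispatch ();

  1;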
> Is there some sort of mmap / shmem way that the Apache children share
> their Perl trees?

No.

> Or perhaps the processes each have all that memory *allocated*
> individually, but because of COW pages from the OS, you only need one
> copy resident (hence less paging)?

Yes, it's all from COW.

> In answering the above questions - are these reasons / behaviors
> consistent with mod_perl 2 under prefork?

Yes.

> Also - as of the current Perl 5.8 series, we're still not sharing /
> doing COW with variable memory (SV/HV/AV) right?

COW doesn't care what's in the memory. It will share pages with
variables in them if they don't change. Even reading a variable in a
new context can change it in perl, though: using a string in numeric
context, for example, caches the computed number inside the SV, which
dirties the page it lives on.

> The thing is that I have a big nested config structure, along with
> lots of other big nested structures. A given request doesn't need
> *all* of the data, so I've been brainstorming and thought about
> writing a "reduce" method that I would dispatch around my module tree
> from one of the prior-to-clone handlers that would take these
> structures, use Data::Dumper to get their "source code" form, and
> eval them into subroutines in some "stash" package.

How much data are we talking about? If it's more than a few MBs, I
wouldn't recommend it. Just put it in MySQL, or Cache::FastMmap, or
BerkeleyDB, and then access only the bits you need from each process.

- Perrin
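P.S. If you go the Cache::FastMmap route, the basic pattern is
something like this (the file path, size, and key are just examples):

  use Cache::FastMmap;

  # One mmap'ed file shared across all the children; reference
  # values are serialized with Storable automatically.
  my $cache = Cache::FastMmap->new(
      share_file => '/tmp/myapp.cache',
      cache_size => '16m',
  );

  # Write the big structure once (say, at server startup)...
  my %big_config = ( db_host => 'localhost', listen_port => 8080 );
  $cache->set( config => \%big_config );

  # ...then each child fetches just the piece it needs, when it
  # needs it, instead of holding the whole thing resident.
  my $config = $cache->get('config');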
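P.P.S. You can watch the "reading a variable changes it" effect for
yourself with Devel::Peek:

  use Devel::Peek;

  my $x = "42";
  Dump($x);          # FLAGS show POK (string), no IOK yet

  my $y = $x + 1;    # only *reads* $x, in numeric context
  Dump($x);          # FLAGS now include IOK: the numeric value
                     # was cached inside the SV, so the SV (and
                     # the page holding it) was modified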