On Wed, 2005-12-07 at 13:46 -0200, Fredrik Lindmark wrote:
> Optimal would be to share the memory locally inside mod_perl and run
> one process and many threads. I should have basically the same speed
> as a prefork process, and non-repeating data in the cache memory.
You will lose performance due to the threading, and your apache
processes will take more memory than they do under prefork. This may
all be offset by being able to share the data in your case.

> If I eventually need more processes to keep the performance, the
> amount of memory is still less than under prefork...

Only for your shared data cache. For the rest of the memory used by
non-shared variables and compiled code it will use more than prefork.

> Further on, I've been thinking of having front-end proxies to guide
> the user into the process running a certain area. That way I can
> divide the work by area into different processes, and each process
> would hold just the memory used for its area.

Good idea. You could probably do this with your current queue system if
you give it the ability to run multiple job handlers which look for
specific job types.

> > a) threaded perl, which is significantly slower in most operations
> > you are executing
>
> I've just heard they are "comparable" in speed. How big can this
> difference be?

About 15% in my tests.

> > b) when starting new processes with a threaded perl you can't use
> > COW (copy-on-write), which is used when forking
>
> That's okay, because I prefer as few processes as possible.

We're really talking about starting threads here. 10 worker threads
will take more memory than 10 prefork processes, except for your shared
cache.

> One of my concerns is how safe it really is to run just 1 process and
> let the threads do the parallel work. Should I count on more downfalls
> than letting the processes do the work?

If there's a segfault, I believe it will kill all the threads in that
process. You probably want to run more than 1 process.

More about Cache::FastMmap in a minute...

- Perrin
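[A rough sketch of the "multiple job handlers which look for specific
job types" idea mentioned above. get_next_job() and run_job() are
hypothetical stand-ins for whatever the poster's actual queue API
provides, not part of any real module:]

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical worker that claims only one job type, so each
    # process keeps in memory only the data for its own area.
    # These stubs stand in for the real queue system's calls.
    sub get_next_job { my %args = @_; return undef }  # stub: fetch next job of $args{type}
    sub run_job      { my ($job) = @_; return }       # stub: do the area-specific work

    my $my_type = shift @ARGV or die "usage: $0 job_type\n";

    while (1) {
        my $job = get_next_job(type => $my_type);  # only claim this worker's job type
        if ($job) {
            run_job($job);
        }
        else {
            sleep 1;  # queue empty; back off briefly
        }
    }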
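[For context on the shared-cache approach discussed in this thread, a
minimal Cache::FastMmap sketch. The share_file path, sizes, and keys
below are illustrative, not taken from the thread:]

    use strict;
    use warnings;
    use Cache::FastMmap;

    # Mmap-based cache shared across prefork processes: every process
    # maps the same file, so cached data is stored once rather than
    # once per process.
    my $cache = Cache::FastMmap->new(
        share_file  => '/tmp/myapp-cache',  # file mapped by every process
        cache_size  => '16m',               # total size of the shared cache
        expire_time => 600,                 # seconds before entries expire
    );

    # References are serialized automatically on set and restored on get.
    $cache->set('user:42', { name => 'fredrik' });
    my $user = $cache->get('user:42');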