On Dec 7, 2005, at 10:09 AM, Tom Schindl wrote:
Was running prefork Apache 2.0.52 earlier, without any of these symptoms.
I'm switching to the worker MPM in an attempt to improve performance.

I don't think you gain much performance, at least on the mod_perl
side of the story, because you need:

a) a threaded Perl, which is significantly slower in most operations you
are executing

b) when starting new processes with a threaded Perl you can't use COW
(copy-on-write), which is used when forking

If you need better Apache performance, I'd use the proxy setup where the
frontend Apache uses the worker MPM (without mod_perl) and the Apache in
the back is a full-featured Apache running in prefork mode.

Do you really gain that much performance using worker?

I have the vision of gaining it.

I have a program that works over a big range of data and is quite involved. During execution I need to access data from both the databases and earlier calculations on that data, and going back and forth to the database 500-1000 times during a run is not that efficient.
Since much of the data is the same, I gain a lot by caching it.
Caching the analyses of the data saves me even more, since no CPU has to be tortured recomputing them.
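To make concrete what I mean by keeping it in-process, here is a rough sketch (the DSN, table, and column names are made up, and the DBI usage is only illustrative): each distinct query hits the database once per process and is answered from a plain Perl hash afterwards.

    use strict;
    use warnings;
    use DBI;

    # Hypothetical DSN, table, and column names -- just to show the pattern.
    my $dbh = DBI->connect('dbi:Pg:dbname=stats', 'user', 'password',
                           { RaiseError => 1, AutoCommit => 1 });

    my %query_cache;    # lives for the lifetime of the process

    sub rows_for_area {
        my ($area_id) = @_;
        # First call per area hits the database; later calls reuse the result.
        $query_cache{$area_id} ||= $dbh->selectall_arrayref(
            'SELECT id, value FROM measurements WHERE area_id = ?',
            { Slice => {} },    # return an array of hashrefs
            $area_id,
        );
        return $query_cache{$area_id};
    }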

This can be done locally inside each prefork process, but then I can't share the allocated memory. I end up with gigabytes of almost identical data in each process after running the system for some time, and until all the processes have filled their caches the speed stays slow for a long time. That's why I want to switch to worker.

memcached and other cache daemons could help out here, but they take too much time to access and write compared to Perl's in-memory data. They are not really fit for complex hash trees without a lot of copying data back and forth, so I would count on roughly twice the time. As I see it, the optimal solution is to keep the cache close at hand in Perl to get the most efficient result.
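A rough sketch of the difference I mean (the data and key names are made up; as far as I know Cache::Memcached freezes/thaws references with Storable): with the in-process hash the nested tree is touched directly, while with memcached every set and get copies the whole tree over a socket.

    use strict;
    use warnings;
    use Cache::Memcached;

    # In-process cache: a plain hash, no serialization involved.
    my %local_cache;
    $local_cache{report_42} = { rows => [ [1, 'a'], [2, 'b'] ], stats => { sum => 3 } };
    my $sum = $local_cache{report_42}{stats}{sum};   # direct access to the live structure

    # memcached: the nested structure is serialized and copied over a
    # socket on every access.
    my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });
    $memd->set('report_42', $local_cache{report_42});   # serialize + network write
    my $copy = $memd->get('report_42');                  # network read + deserialize
    my $sum2 = $copy->{stats}{sum};                      # works on a copy, not the original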

If I go down to one process it's possible to achieve this (that's how we run right now): with some proxy servers taking away all the user load, we basically run an execution queue on the backend server. But then we suffer a delay whenever a big SQL query is filed and all other requests have to wait for its result, etc. I haven't tried it under heavy load, but I guess it won't do well there either.

Optimal would be to share the memory locally inside mod_perl and run one process with many threads. I should get basically the same speed as a prefork process, with no repeated data in the cache memory. If I eventually need more processes to keep up the performance, the amount of memory used is still less than with prefork.
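What I have in mind, as a standalone sketch (assuming a threads::shared recent enough to provide shared_clone(); how cleanly this maps onto mod_perl's interpreter pool under worker is exactly what I'm unsure about): one shared cache that every thread reads and fills.

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    my %cache :shared;    # one cache, visible to every thread

    sub lookup {
        my ($key) = @_;
        {
            lock(%cache);
            return $cache{$key} if exists $cache{$key};
        }
        # shared_clone() makes the nested result structure itself shared,
        # not just the top-level slot it is stored in.
        my $result = shared_clone(expensive_calculation($key));
        lock(%cache);
        $cache{$key} = $result;
        return $result;
    }

    sub expensive_calculation {
        my ($key) = @_;
        return { key => $key, rows => [ 1 .. 5 ] };   # stand-in for DB work + analysis
    }

    my @workers = map { threads->create(\&lookup, "area_$_") } 1 .. 4;
    $_->join for @workers;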

Further on, I've been thinking of having frontend proxies to guide each user to the process handling a certain area. That way I can divide the work by area across different processes, and each process would hold only the memory used for its area. I can use the same total working memory, just divided over a limited number of processes. It's not very dynamic, but it adds some safety compared to the single-process / multiple-thread model.

a) a threaded Perl, which is significantly slower in most operations you
are executing

I've just heard they are "comparable" in speed. How big can this difference be?

b) when starting new processes with a threaded Perl you can't use COW
(copy-on-write), which is used when forking

That's okay, because I prefer as few processes as possible. If they start from zero, it's just an initial stage where they are slow before the cache tree is rebuilt.

One of my concerns is how safe it really is to run just one process and let the threads do the parallel work.
Should I count on more downsides than letting separate processes do the work?

Regards,

~ F
