On 9-Aug-06, at 9:47 AM, [EMAIL PROTECTED] wrote:

The server sits there listening for a request. This request
has to go to a db, retrieve lots of data, generate the XML etc.
and eventually pass back HTML. Let's say that the retrieval of data
takes about 5 mins.
If 100 users put in a request at the same time and the server blocks
to service the first user, how does it service the other users without
forking or passing them on to other daemons or threads?
(I have never used threads and have little idea about them)

So rather than answer this, I'll throw the question back to you - how do you currently cope with a system that expects 100 concurrent users requesting pages that take 5 minutes to generate?
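(For readers unfamiliar with how one process can juggle many slow requests: the usual single-process answer is an event loop that multiplexes all the sockets and never blocks on any one client. Below is a minimal sketch of that pattern using core Perl's IO::Select; it is only an illustration of the general idea, not AxKit2's actual code -- AxKit2 sits on its own event framework -- and the port number and canned response are made up for the example.)

    #!/usr/bin/perl
    use strict;
    use warnings;
    use IO::Socket::INET;
    use IO::Select;

    # One process, no fork(), no threads: a single event loop watches the
    # listening socket and every client socket, and services whichever is
    # ready without blocking on the others.
    my $listener = IO::Socket::INET->new(
        LocalPort => 8080,     # hypothetical port for the sketch
        Listen    => 10,
        Reuse     => 1,
    ) or die "listen: $!";
    $listener->blocking(0);

    my $sel = IO::Select->new($listener);

    while (1) {
        # Wait until at least one socket is readable, then handle each in turn.
        for my $sock ($sel->can_read) {
            if ($sock == $listener) {
                my $client = $listener->accept or next;
                $client->blocking(0);
                $sel->add($client);
            }
            else {
                my $n = sysread($sock, my $buf, 8192);
                if (!$n) {                  # EOF or error: drop this client
                    $sel->remove($sock);
                    close $sock;
                    next;
                }
                # A real server would parse the request here and kick off the
                # slow back-end work in a non-blocking way, returning to the
                # loop so other clients keep getting serviced meanwhile.
                print $sock "HTTP/1.0 200 OK\r\n"
                          . "Content-Type: text/plain\r\n\r\n"
                          . "hello\r\n";
                $sel->remove($sock);
                close $sock;
            }
        }
    }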

Would I be correct in thinking that the mod_perlish speed you
can achieve is due to the one instance of perl that is running the
httpd and app?
So rather than the perl httpd running my perl app, in effect my perl
app has an httpd embedded in it?

Yes.
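(To illustrate the "httpd embedded in the app" idea in isolation -- leaving aside the event-driven concurrency discussed above -- here is a tiny sketch using HTTP::Daemon from libwww-perl. The port and response text are invented for the example, and this is not how AxKit2 is structured internally.)

    #!/usr/bin/perl
    use strict;
    use warnings;
    use HTTP::Daemon;
    use HTTP::Response;

    # The app owns the HTTP server, rather than a separate httpd owning the app.
    my $d = HTTP::Daemon->new(LocalPort => 8080)   # hypothetical port
        or die "cannot listen: $!";
    print "Serving on ", $d->url, "\n";

    while (my $conn = $d->accept) {
        while (my $req = $conn->get_request) {
            # Application code runs right here, inside the same Perl process
            # that is speaking HTTP -- no separate web server in front.
            my $res = HTTP::Response->new(200);
            $res->header('Content-Type' => 'text/plain');
            $res->content("handled by the app's embedded httpd\n");
            $conn->send_response($res);
        }
        $conn->close;
    }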

(don't worry about the gritty details - I've done this sort of thing
for qpsmtpd already, so it's just a SMOP plus documenting it)

SMOP?

Small/Simple Matter of Programming.

btw, how does one download axkit2?
At the moment it's SVN only. I can put a snapshot up somewhere if you
like.

Thank you, but I've sussed it out. Well, the d/l at least. The up and
running may take some time; I'm a perl Makefile.PL wallah :)

Yeah, there'll be a Makefile.PL eventually. I have weird priorities on when things will get built :-)

Matt.
