Jeff Turner wrote:
> 
> Well I think you're completely nuts :) For 10,000 concurrent requests for
> *dynamic* data (or else you'd just be using Apache.. right?), you'd need a
> server farm full of fancy equipment, a load-balancing server, and stuff that
> even my wildly ignorant speculations can't conceive.

(big db...)

> 
> Besides, I don't think any single server with simple thread-per-request,
> blocking IO model can handle that sort of load. A smart fellow called Matt
> Welsh did his PhD thesis on highly concurrent server apps, and the results of
> his thinking are here:
> 
> http://www.cs.berkeley.edu/~mdw/proj/sandstorm/
> 
> In particular, the paper "The Staged Event-Driven Architecture for Highly
> Concurrent Servers" is well worth reading.

Thanks for the link - interesting to see someone studying this in depth.
I have been involved in two financial market data server architectures
where the data processing was staged and event driven. By nature, that
data is asynchronous and needs to be delivered as close to real-time as
possible (or people get really angry :), and we ended up with something
very much like what a cursory look at the link above describes. A rough
sketch of what I mean by 'staged' is below.
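
To make 'staged and event driven' a bit more concrete for anyone
skimming this thread, here is a rough sketch in present-day Java. The
class names, stage names and pool sizes are all invented - this is not
code from either of those systems. Each stage owns an input queue and a
small fixed pool of workers, and events are handed from queue to queue
rather than one thread carrying a client from start to finish:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Rough sketch only -- names and numbers are made up, this is not
// code from either of the systems mentioned above.
public class StagedPipeline {

    // A stage owns an input queue and a small fixed set of worker
    // threads. Bursts queue up at the slow stage instead of tying up
    // one thread per client for the whole request.
    static class Stage {
        final BlockingQueue<String> in = new LinkedBlockingQueue<String>();

        Stage(final String name, int workers, final Stage next) {
            for (int i = 0; i < workers; i++) {
                Thread t = new Thread(() -> {
                    try {
                        while (true) {
                            String event = in.take();              // wait for the next event
                            String out = name + "(" + event + ")"; // stand-in for real work
                            if (next != null) {
                                next.in.put(out);                  // hand off to the next stage
                            } else {
                                System.out.println(out);           // last stage: deliver it
                            }
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();        // let the worker die quietly
                    }
                });
                t.setDaemon(true);
                t.start();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // parse -> enrich -> deliver, each stage sized independently
        Stage deliver = new Stage("deliver", 2, null);
        Stage enrich  = new Stage("enrich",  4, deliver);
        Stage parse   = new Stage("parse",   2, enrich);

        for (int i = 0; i < 10; i++) {
            parse.in.put("tick" + i);
        }
        Thread.sleep(500);   // give the pipeline a moment to drain
    }
}

The nice property is that each stage can be sized and tuned on its own,
and a burst just lengthens a queue instead of exhausting a global
thread pool.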

The first of the two systems could carry thousands (6k+) of permanent
client connections at the same time. The per-client throughput was
'small' (a few messages a second, on average), though the aggregate
throughput was large. It worked very well. The other had a smaller
number of connections but large message rates (5-7k messages/sec on
average, with bursts up to 10k or so).
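
To put very rough numbers on that: 6,000 clients at a few messages a
second each is on the order of ten to thirty thousand events a second
in aggregate, which a handful of threads can demultiplex comfortably as
long as nothing blocks per connection. A minimal sketch of that
demultiplexing loop, using java.nio purely for brevity (again, invented
for this mail, not what either system looked like); the port and buffer
size are arbitrary:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Sketch only: one thread demultiplexing many long-lived connections.
public class EventLoop {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(9000));
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                      // sleep until some socket is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();   // new permanent connection
                    if (client != null) {
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    }
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);
                    if (n < 0) {                              // client went away
                        key.cancel();
                        client.close();
                    }
                    // otherwise: hand buf's contents to the first stage's queue
                }
            }
        }
    }
}

Readiness events coming out of a loop like that would feed the first
stage's queue in the earlier sketch.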

geir


> --Jeff
> (who thinks 50 concurrent users is pushing things)
> 
> On Mon, Aug 13, 2001 at 01:54:28PM +0530, Santosh Pasi wrote:
> > Hi Jeff,
> >
> > Thanks for reply.
> > I am looking around 10,000 to 12,000 concurrent connections.
> [snip]

-- 
Geir Magnusson Jr.                           [EMAIL PROTECTED]
System and Software Consulting
Developing for the web?  See http://jakarta.apache.org/velocity/
Well done is better than well said - New England Proverb
