On Fri, 18 Jul 2003, Tom Lane wrote:

> "scott.marlowe" <[EMAIL PROTECTED]> writes:
> > But I'm sure that with a few tweaks to the code here and there it's
> > doable, just don't expect it to work "out of the box".
>
> I think you'd be sticking your neck out to assume that 10k concurrent
> connections would perform well, even after tweaking.  I'd worry first
> about whether the OS can handle 10k processes (which among other things
> would probably require order-of-300k open file descriptors...).  Maybe
> Solaris is built to do that but the Unixen I've dealt with would go
> belly up.  After that you'd have to look at Postgres' internal issues
> --- contention on access to the PROC array would probably become a
> significant factor, for example, and we'd have to do some redesign to
> avoid linear scans of the PROC array where possible.
>
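
To put the file descriptor estimate in perspective, here is a minimal
sketch (my own illustration, not anything from the Postgres source)
that just asks the OS for its per-process descriptor limit.  With
roughly 10k backends each holding a few dozen descriptors, the
system-wide total lands in the hundreds of thousands Tom mentions.

/*
 * Minimal sketch: report the per-process file descriptor limit.
 * Whether the kernel tolerates hundreds of thousands of descriptors
 * system-wide is a separate question from this soft/hard limit,
 * which only bounds a single process.
 */
#include <stdio.h>
#include <sys/resource.h>

int
main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
    {
        perror("getrlimit");
        return 1;
    }

    printf("per-process fd limit: soft %llu, hard %llu\n",
           (unsigned long long) rl.rlim_cur,
           (unsigned long long) rl.rlim_max);
    return 0;
}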

This page describes the problems a web server faces and the strategies
it can use to handle 10k concurrent connections.  This is the kind of
load that can bring an otherwise performant OS to its knees, and that
is just to grab some data off disk and shovel it out over HTTP;
consider how much more work a database must do.

http://www.kegel.com/c10k.html
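
The page catalogs the readiness-notification strategies (select/poll,
Linux epoll, kqueue, and so on) that let one process multiplex
thousands of sockets instead of dedicating a process to each
connection.  Below is a minimal Linux epoll sketch of that style, my
own illustration rather than anything from a real server; error
handling is omitted and the echo loop stands in for real work.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

int
main(void)
{
    /* Plain listening socket; port 8080 is arbitrary for the sketch. */
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(listen_fd, (struct sockaddr *) &addr, sizeof(addr));
    listen(listen_fd, 128);

    /* One epoll instance watches every connection in a single process. */
    int epfd = epoll_create(MAX_EVENTS);
    struct epoll_event ev, events[MAX_EVENTS];

    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    for (;;)
    {
        int nready = epoll_wait(epfd, events, MAX_EVENTS, -1);

        for (int i = 0; i < nready; i++)
        {
            int fd = events[i].data.fd;

            if (fd == listen_fd)
            {
                /* New connection: add it to the same epoll set. */
                int conn = accept(listen_fd, NULL, NULL);

                ev.events = EPOLLIN;
                ev.data.fd = conn;
                epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &ev);
            }
            else
            {
                /* Data (or EOF) on an existing connection. */
                char buf[4096];
                ssize_t n = read(fd, buf, sizeof(buf));

                if (n <= 0)
                    close(fd);          /* closing removes it from the set */
                else
                    write(fd, buf, n);  /* trivial echo stands in for real work */
            }
        }
    }
}

None of this maps directly onto Postgres, which hands every
connection its own backend process, but it gives a feel for how much
machinery even the "just shovel bytes" half of the problem needs.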

Kris Jurka

