Really, this seems like a pretty strong case for a replicated
database... assuming not all 10000 clients will need to be doing
modifications. Or, if they do, that they could open up a separate,
temporary connection to the master db.
On Aug 16, 2004, at 7:37 AM, Peter Eisentraut wrote:
On Monday, 16 August 2004 at 16:20, Csaba Nagy wrote:
Peter is definitely not a newbie on this list, so I'm sure he has
already thought about some kind of pooling if applicable... but then
I'm dead-curious what kind of application could possibly rule out
connection pooling even if it means so many open connections? Please
give us some light, Peter...
There is already a connection pool in front of the real server, but the
connection pool doesn't help you if you in fact have 10000 concurrent
requests; it only saves connection-start effort. (You could make the
connection pool server queue the requests, but that is not the point of
this exercise.) I didn't quite consider the RAM question, but the
machine is almost big enough that it wouldn't matter. I'm thinking more
in terms of the practical limits of the internal structures or the
(Linux 2.6) kernel.
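The distinction above, between a pool that merely saves connection-start
effort and one that actively queues requests so the server never sees
more than N at once, can be illustrated with a toy sketch. This is not
Peter's actual setup, just a hypothetical Python illustration: "connections"
are placeholder strings, and the blocking get() on the pool's internal
queue is what makes excess requests wait in line instead of hitting the
server concurrently.

```python
import queue
import threading

class QueueingPool:
    """Toy pool: at most `size` requests hold a "connection" at once;
    the rest block in line rather than reaching the server."""

    def __init__(self, size):
        self._conns = queue.Queue()
        for i in range(size):
            # Stand-ins for real DB connections.
            self._conns.put(f"conn-{i}")

    def run(self, work):
        conn = self._conns.get()  # blocks when all connections are busy
        try:
            return work(conn)
        finally:
            self._conns.put(conn)  # hand the connection to the next waiter

# 20 client threads share only 5 "connections"; 15 wait their turn.
pool = QueueingPool(size=5)
results = []
threads = [
    threading.Thread(target=lambda: results.append(pool.run(lambda c: c)))
    for _ in range(20)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With queueing, the backend's concurrency is capped at the pool size; without
it, all 10000 requests still need 10000 simultaneous server connections, which
is exactly the limit being asked about.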