> >> > I think the PostgreSQL design team would disagree with you on this point.
> >> > When postgres reaches max_connections it returns an error: "psql: FATAL:
> >> > sorry, too many clients already" and denies the new connection. This is
> >> > the behavior clients connecting to pgpool will expect, given that this
> >> > is how postgres behaves.
> >>
> >> I have to agree with Steven on this. :)
> >
> > Probably Apache developers would disagree with both of you:-)
>
> I think that we will find good arguments to back up both positions
> but, in this case, keeping the same behaviour PostgreSQL offers is a
> really good idea. IMHO, of course.
For me, the PostgreSQL design is just a compromise. Actually, in the early days of PostgreSQL (until 6.5, if my memory serves), it didn't care about the number of concurrent clients, so there was no "sorry, too many clients already" message. What happened if there were too many clients? PostgreSQL just crashed:-< So I added the check and the message (I think Tom or other guys changed the error message from what I originally proposed, but that was just because my English is poor). Apparently the original designers of PostgreSQL didn't think it needed to handle that many concurrent clients.

Commercial databases such as Oracle offer a connection queuing process, and some "TP monitor" middleware offers similar functionality. With these systems you can handle many more concurrent clients than the number of database engine instances. Without this, you cannot handle 10k concurrent clients, as systems such as ATM networks must.

One of my TODO items is implementing such a client connection queuing system, one that does not rely on the kernel network stack's "backlog" queue.
--
Tatsuo Ishii
SRA OSS, Inc. Japan

_______________________________________________
Pgpool-general mailing list
[email protected]
http://pgfoundry.org/mailman/listinfo/pgpool-general
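[Editor's note] To illustrate the idea discussed above — queuing excess clients in user space rather than rejecting them (PostgreSQL's behaviour) or parking them in the kernel's listen() backlog — here is a minimal, hypothetical sketch in Python. The class name and structure are invented for illustration; this is not pgpool code, and a real implementation would sit in front of network sockets rather than plain callables.

```python
# Hypothetical sketch of user-space client connection queuing.
# A fixed pool of "backend slots" is modelled with a counting
# semaphore; clients that arrive while all slots are busy block
# in an application-level queue instead of being refused with
# "sorry, too many clients already".
import threading


class ConnectionQueue:
    def __init__(self, max_backends):
        # One semaphore permit per backend engine instance.
        self._slots = threading.Semaphore(max_backends)

    def run_session(self, work):
        # Wait (queue in user space) until a backend slot frees up,
        # run the client's session, then release the slot so the
        # next queued client can proceed.
        self._slots.acquire()
        try:
            return work()
        finally:
            self._slots.release()
```

With, say, `max_backends=2` and ten concurrent client threads, all ten sessions eventually complete, but never more than two run against the backend at once — the other eight simply wait their turn instead of receiving an error.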
