On 11 Feb 2003, Greg Copeland wrote:
> On Tue, 2003-02-11 at 12:55, Tom Lane wrote:
> > "scott.marlowe" <[EMAIL PROTECTED]> writes:
> > > Is setting the max connections to something like 200 reasonable, or
> > > likely to cause too many problems?
> >
> > That would likely run into number-of-semaphores limitations (SEMMNI,
> > SEMMNS). We do not seem to have as good documentation about changing
> > that as we do about changing the SHMMAX setting, so I'm not sure I want
> > to buy into the "it's okay to expect people to fix this before they can
> > start Postgres the first time" argument here.
> >
> > Also, max-connections doesn't silently skew your testing: if you need
> > to raise it, you *will* know it.
>
> Besides, I'm not sure that it makes sense to let other product needs
> dictate the default configurations for this one. It would be one thing
> if the vast majority of people only used PostgreSQL with Apache. I know
> I'm using it in environments in which no way relate to the web. I'm
> thinking I'm not alone.
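(As an aside, the semaphore and shared-memory limits Tom mentions can usually be raised on the fly on a Linux box with sysctl; the values below are purely illustrative, and the BSDs and Solaris each have their own knobs for the same limits:

    # SysV semaphore limits, in the order SEMMSL SEMMNS SEMOPM SEMMNI
    sysctl -w kernel.sem="250 32000 100 128"
    # largest allowed shared memory segment, in bytes (128MB here)
    sysctl -w kernel.shmmax=134217728

None of that requires restarting anything, which is rather the point I'm about to make.)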
True, but even so, 32 max connections is a bit light; I have more pgsql databases than that on my box right now. My point in my previous answer to Tom was that you HAVE to shut down PostgreSQL to change this setting. It doesn't allocate tons of semaphores at startup, only as child processes are spawned, so I'd rather have the user adjust their OS to meet the higher need than have to shut down and restart PostgreSQL on top of it.

This is one of the settings that makes it feel like a "toy" when you first open it. How many other high-quality databases in the world restrict max connections to 32? The original choice of 32 was made because the original default of 64 shared buffers was the most we could hope for on common OS installs. Now that we're looking at cranking that up to 1000, shouldn't max connections get a look too? You don't have to be running Apache to need more than 32 simultaneous connections. Heck, how many PostgreSQL databases do you figure are in production with that setting still in place? My guess is not many.

I'm not saying we should do this to make benchmarks look better; I'm saying we should do it to improve the user experience. A limit of 32 connections makes things tough for a beginning DBA: not only does he discover the problem the first time his database is under load, he then can't fix it without shutting down and restarting PostgreSQL. If the max is set to 200 or 500 and he starts running out of semaphores, that's a problem he can address while his database is still up and running, at least on the operating systems I use.

So my main point is this: for any setting that requires a shutdown to change, we should pick a compromise value that makes it unlikely you'll ever have to take the database down once it's in production and under load. Shared buffers, max connections, and so on should not need tweaking for 95% or more of users if we can help it. It would be nice if we could find a set of numbers that reduces the number of problems users run into, so all I'm doing is looking for the sweet spot, and it is NOT 32 max connections.
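To put some shape on that, both of the settings in question live in postgresql.conf and only take effect on a full restart; the numbers here are just an illustration of the kind of compromise defaults I'm arguing for, not tested recommendations:

    # postgresql.conf -- illustrative values only
    max_connections = 200    # each allowed connection needs a SysV semaphore,
                             # so SEMMNS/SEMMNI may need raising (see above)
    shared_buffers = 1000    # in 8KB pages, so roughly 8MB of shared memory;
                             # SHMMAX may need raising to match

Changing either one means a pg_ctl restart (or the init-script equivalent), which is exactly the downtime I'd like a new DBA to never have to schedule.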