Re: [PATCHES] Proof-of-concept for initdb-time shared_buffers selection

2003-07-06 Thread Tom Lane
Bruno Wolff III [EMAIL PROTECTED] writes:
 Should the default max number of connections first try something greater
 than what Apache sets by default (256 for prefork, 400 for worker)?

We could do that.  I'm a little worried about setting default values
that are likely to cause problems with exhausting the kernel's fd table
(nfiles limit).  If anyone actually tries to run 256 or 400 backends
without having increased nfiles and/or twiddled our
max_files_per_process setting, they're likely to have serious problems.
(There could be some objection even to max_connections 100 on this
ground.)
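To make the nfiles concern concrete, here is a back-of-the-envelope sketch in shell. The max_files_per_process value of 1000 is PostgreSQL's usual default, assumed here for illustration; the connection count is the Apache worker default cited above:

```shell
# Worst-case fd demand if every backend opened its full per-process quota.
# max_files_per_process = 1000 is assumed for illustration.
max_connections=400           # Apache worker default cited above
max_files_per_process=1000
worst_case=$((max_connections * max_files_per_process))
echo "worst-case open files: $worst_case"
```

Compare that figure against the kernel's file table limit (on Linux, something like `sysctl fs.file-max`) and it is easy to see how an unprepared system could be driven into fd exhaustion.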

We could imagine having initdb reduce max_files_per_process to prevent
such problems, but then you'd be talking about giving up performance to
accommodate a limit that the user might not ever approach in practice.
You really don't want the thing selecting parameters on the basis of
unrealistic estimates of what max_connections needs to be.

Ultimately there's no substitute for some user input about what they're
planning to do with the database, and possibly adjustment of kernel
settings along with PG settings, if you're planning to run serious
applications.  initdb can't be expected to do this unless you want to
make it interactive, which would certainly make the RPM guys really
unhappy.

I'd rather see such considerations pushed off to a separate tool,
some kind of configuration wizard perhaps.

regards, tom lane



[PATCHES] Proof-of-concept for initdb-time shared_buffers selection

2003-07-04 Thread Tom Lane
The attached patch shows how initdb can dynamically determine reasonable
shared_buffers and max_connections settings that will work on the
current machine.  It consists of two trivial adjustments: one rips out
the PrivateMemory code, so that a standalone backend will allocate a
shared memory segment the same way as a postmaster would do, and the
second adds a simple test loop in initdb that sees how large a setting
will still allow the backend to start.

The patch isn't quite complete since I didn't bother adding the few
lines of sed hacking needed to actually insert the selected values into
the installed postgresql.conf file, but that's just another few minutes'
work.  Adjusting the documentation to match would take a bit longer.
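The "sed hacking" step might look roughly like the following; the file names, variable names, and comment-stripping pattern are illustrative, not the actual patch:

```shell
# Illustrative only: splice the probed values into the installed config.
# A sample config file is created here so the snippet is self-contained.
nbuffers=1000
nconns=100
cat > postgresql.conf.sample <<'EOF'
#shared_buffers = 64
#max_connections = 32
EOF
sed -e "s/^#*shared_buffers = .*/shared_buffers = $nbuffers/" \
    -e "s/^#*max_connections = .*/max_connections = $nconns/" \
    postgresql.conf.sample > postgresql.conf
```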

We might also want to tweak initdb to print a warning message if it's
forced to select very small values, but I didn't do that yet.

Questions for the list:

1. Does this approach seem like a reasonable solution to our problem
of some machines having unrealistically small kernel limits on shared
memory?

2. If so, can I get away with applying this post-feature-freeze?  I can
argue that it's a bug fix, but perhaps some will disagree.

3. What should be the set of tested values?  I have it as
   buffers: first to work of 1000 900 800 700 600 500 400 300 200 100 50
   connections: first to work of 100 50 40 30 20 10
but we could certainly argue for different rules.
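The "first to work" rule amounts to a simple descending probe. Here is a shell sketch; try_backend is a hypothetical stand-in for starting a standalone backend with the candidate settings and checking that it comes up (the real test would invoke the postgres executable and look at its exit status):

```shell
# Hypothetical sketch of the selection loop.  try_backend stands in for
# launching a standalone backend with the candidate buffer/connection
# settings; here it fakes a machine whose kernel allows only 400 buffers.
try_backend() {
    [ "$1" -le 400 ]
}

nconns=10    # pretend max_connections was already probed
for nbuffers in 1000 900 800 700 600 500 400 300 200 100 50; do
    try_backend "$nbuffers" "$nconns" && break
done
echo "selected shared_buffers = $nbuffers"
```

On the faked machine above the loop settles on the first value the "kernel" accepts, 400.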

regards, tom lane


*** src/backend/port/sysv_shmem.c.orig  Thu May  8 15:17:07 2003
--- src/backend/port/sysv_shmem.c   Fri Jul  4 14:47:51 2003
***
*** 45,52 
  static void *InternalIpcMemoryCreate(IpcMemoryKey memKey, uint32 size);
  static void IpcMemoryDetach(int status, Datum shmaddr);
  static void IpcMemoryDelete(int status, Datum shmId);
- static void *PrivateMemoryCreate(uint32 size);
- static void PrivateMemoryDelete(int status, Datum memaddr);
  static PGShmemHeader *PGSharedMemoryAttach(IpcMemoryKey key,
  					   IpcMemoryId *shmid, void *addr);
  
--- 45,50 
***
*** 243,283 
  }
  
  
- /* ----------------------------------------------------------------
-  *		private memory support
-  *
-  * Rather than allocating shmem segments with IPC_PRIVATE key, we
-  * just malloc() the requested amount of space.  This code emulates
-  * the needed shmem functions.
-  * ----------------------------------------------------------------
-  */
- 
- static void *
- PrivateMemoryCreate(uint32 size)
- {
-   void   *memAddress;
- 
-   memAddress = malloc(size);
-   if (!memAddress)
-   {
-   fprintf(stderr, "PrivateMemoryCreate: malloc(%u) failed\n", size);
-   proc_exit(1);
-   }
-   MemSet(memAddress, 0, size);/* keep Purify quiet */
- 
-   /* Register on-exit routine to release storage */
-   on_shmem_exit(PrivateMemoryDelete, PointerGetDatum(memAddress));
- 
-   return memAddress;
- }
- 
- static void
- PrivateMemoryDelete(int status, Datum memaddr)
- {
-   free(DatumGetPointer(memaddr));
- }
- 
- 
  /*
   * PGSharedMemoryCreate
   *
--- 241,246 
***
*** 289,294 
--- 252,260 
   * collision with non-Postgres shmem segments.  The idea here is to detect and
   * re-use keys that may have been assigned by a crashed postmaster or backend.
   *
+  * makePrivate means to always create a new segment, rather than attach to
+  * or recycle any existing segment.
+  *
   * The port number is passed for possible use as a key (for SysV, we use
   * it to generate the starting shmem key).  In a standalone backend,
   * zero will be passed.
***
*** 323,342 
  
for (;;NextShmemSegID++)
{
-   /* Special case if creating a private segment --- just malloc() it */
-   if (makePrivate)
-   {
-   memAddress = PrivateMemoryCreate(size);
-   break;
-   }
- 
/* Try to create new segment */
memAddress = InternalIpcMemoryCreate(NextShmemSegID, size);
if (memAddress)
			break;				/* successful create and attach */
  
/* Check shared memory and possibly remove and recreate */
!		if ((hdr = (PGShmemHeader *) (memAddress = PGSharedMemoryAttach(
						NextShmemSegID, &shmid,
						UsedShmemSegAddr))) == NULL)
			continue;			/* can't attach, not one of mine */
--- 289,304 
  
for (;;NextShmemSegID++)
{
/* Try to create new segment */
memAddress = InternalIpcMemoryCreate(NextShmemSegID, size);
if (memAddress)
			break;				/* successful create and attach */