Re: [PATCHES] default resource limits

2005-12-24 Thread Peter Eisentraut
On Saturday, 24 December 2005 00:20, Andrew Dunstan wrote:
> The rationale is one connection per apache thread (which on Windows
> defaults to 400). If people think this is too many I could live with
> winding it back a bit - the default number of apache workers on Unix is
> 250, IIRC.

It's 150.  I don't mind increasing the current 100 to 150, although I find
tying this to apache pretty bogus.

I really don't like the prospect of making the defaults platform specific,
especially if the only rationale for that would be "apache does it".  Why
does apache allocate more connections on Windows anyway?

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PATCHES] default resource limits

2005-12-24 Thread Robert Treat
On Saturday 24 December 2005 06:22, Peter Eisentraut wrote:
> On Saturday, 24 December 2005 00:20, Andrew Dunstan wrote:
> > The rationale is one connection per apache thread (which on Windows
> > defaults to 400). If people think this is too many I could live with
> > winding it back a bit - the default number of apache workers on Unix is
> > 250, IIRC.
>
> It's 150.  I don't mind increasing the current 100 to 150, although I find
> tying this to apache pretty bogus.
>
> I really don't like the prospect of making the defaults platform specific,
> especially if the only rationale for that would be "apache does it".  Why
> does apache allocate more connections on Windows anyway?


Maybe we should write something in to check if apache is installed, if we're
so concerned about that usage... I already know that I set the connection
limits lower on most of the installations I do (given that most installations
are not production webservers).  There is also the argument to be made that
just because systems these days have more memory doesn't mean we have to
use it.

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[PATCHES] default resource limits

2005-12-23 Thread Andrew Dunstan



I wrote:

> Tom Lane wrote:
>
>> Andrew Dunstan [EMAIL PROTECTED] writes:
>>
>>> Nearly everyone seems to agree that the default for max_fsm_pages is
>>> woefully low, so I would like to have the default for this set
>>> unconditionally to 200,000 rather than 20,000. The cost would be
>>> just over 1MB of shared memory, if the docs are correct.
>>> Alternatively, we could put this into the mix that is calculated by
>>> initdb, scaling it linearly with shared_buffers (but with the
>>> default still at 200,000).
>>>
>>> I would also like to propose a more modest increase in
>>> max_connections and shared_buffers by a factor of 3.
>>
>> I don't mind having initdb try larger values to see if they work, but
>> if you are suggesting that we try to force adoption of larger settings
>> I'll resist it.
>
> OK, works for me. The only thing I suggested might be set in stone was
> max_fsm_pages; I always envisioned the others being tested as now by
> initdb.
>
>> Factor of three seems mighty weird.  The existing numbers (100 and 1000)
>> at least have the defensibility of being round.
>
> What numbers would you like? If what I suggested seems odd, how about
> targets of 400 connections, 4000 shared_buffers and 200,000
> max_fsm_pages?

Here's a patch that does what I had in mind. On my modest workstation it
tops out at 400 connections and 2500/125000 shared_buffers/max_fsm_pages.
An idle postmaster with these settings consumed less than 4% of the 380MB
of memory, according to top, making it still dwarfed by X, mozilla, apache
and amavisd among other memory hogs.


Comments welcome.

cheers

andrew

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PATCHES] default resource limits

2005-12-23 Thread Andrew Dunstan



er, patch attached this time.

Andrew Dunstan wrote:

> I wrote:
>
>> Tom Lane wrote:
>>
>>> Andrew Dunstan [EMAIL PROTECTED] writes:
>>>
>>>> Nearly everyone seems to agree that the default for max_fsm_pages
>>>> is woefully low, so I would like to have the default for this set
>>>> unconditionally to 200,000 rather than 20,000. The cost would be
>>>> just over 1MB of shared memory, if the docs are correct.
>>>> Alternatively, we could put this into the mix that is calculated by
>>>> initdb, scaling it linearly with shared_buffers (but with the
>>>> default still at 200,000).
>>>>
>>>> I would also like to propose a more modest increase in
>>>> max_connections and shared_buffers by a factor of 3.
>>>
>>> I don't mind having initdb try larger values to see if they work, but
>>> if you are suggesting that we try to force adoption of larger settings
>>> I'll resist it.
>>
>> OK, works for me. The only thing I suggested might be set in stone
>> was max_fsm_pages; I always envisioned the others being tested as now
>> by initdb.
>>
>>> Factor of three seems mighty weird.  The existing numbers (100 and 1000)
>>> at least have the defensibility of being round.
>>
>> What numbers would you like? If what I suggested seems odd, how about
>> targets of 400 connections, 4000 shared_buffers and 200,000
>> max_fsm_pages?
>
> Here's a patch that does what I had in mind. On my modest workstation
> it tops out at 400 connections and 2500/125000
> shared_buffers/max_fsm_pages. An idle postmaster with these settings
> consumed less than 4% of the 380MB of memory, according to top, making
> it still dwarfed by X, mozilla, apache and amavisd among other memory
> hogs.
>
> Comments welcome.
>
> cheers
>
> andrew

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings

Index: src/bin/initdb/initdb.c
===================================================================
RCS file: /cvsroot/pgsql/src/bin/initdb/initdb.c,v
retrieving revision 1.101
diff -c -r1.101 initdb.c
*** src/bin/initdb/initdb.c	9 Dec 2005 15:51:14 -0000	1.101
--- src/bin/initdb/initdb.c	23 Dec 2005 20:29:15 -0000
***************
*** 120,125 ****
--- 120,126 ----
  /* defaults */
  static int	n_connections = 10;
  static int	n_buffers = 50;
+ static int	n_fsm_pages = 20000;
  
  /*
   * Warning messages for authentication methods
***************
*** 1090,1096 ****
  test_connections(void)
  {
  	char		cmd[MAXPGPATH];
! 	static const int conns[] = {100, 50, 40, 30, 20, 10};
  	static const int len = sizeof(conns) / sizeof(int);
  	int			i,
  				status;
--- 1091,1098 ----
  test_connections(void)
  {
  	char		cmd[MAXPGPATH];
! 	static const int conns[] = {400, 300, 250, 200, 150,
! 								100, 50, 40, 30, 20, 10};
  	static const int len = sizeof(conns) / sizeof(int);
  	int			i,
  				status;
***************
*** 1125,1146 ****
  test_buffers(void)
  {
  	char		cmd[MAXPGPATH];
! 	static const int bufs[] = {1000, 900, 800, 700, 600, 500,
! 							   400, 300, 200, 100, 50};
  	static const int len = sizeof(bufs) / sizeof(int);
  	int			i,
! 				status;
  
! 	printf(_("selecting default shared_buffers ... "));
  	fflush(stdout);
  
  	for (i = 0; i < len; i++)
  	{
  		snprintf(cmd, sizeof(cmd),
  				 "%s\"%s\" --boot -x0 %s "
  				 "-c shared_buffers=%d -c max_connections=%d template1 "
  				 "< \"%s\" > \"%s\" 2>&1%s",
  				 SYSTEMQUOTE, backend_exec, boot_options,
  				 bufs[i], n_connections,
  				 DEVNULL, DEVNULL, SYSTEMQUOTE);
  		status = system(cmd);
--- 1127,1155 ----
  test_buffers(void)
  {
  	char		cmd[MAXPGPATH];
! 	static const int bufs[] = {
! 		4000, 3500, 3000, 2500, 2000, 1500,
! 		1000, 900, 800, 700, 600, 500,
! 		400, 300, 200, 100, 50
! 	};
  	static const int len = sizeof(bufs) / sizeof(int);
  	int			i,
! 				status,
! 				test_max_fsm_pages;
  
! 	printf(_("selecting default shared_buffers/max_fsm_pages ... "));
  	fflush(stdout);
  
  	for (i = 0; i < len; i++)
  	{
+ 		test_max_fsm_pages = bufs[i] > 1000 ? 50 * bufs[i] : 20000;
  		snprintf(cmd, sizeof(cmd),
  				 "%s\"%s\" --boot -x0 %s "
+ 				 "-c max_fsm_pages=%d "
  				 "-c shared_buffers=%d -c max_connections=%d template1 "
  				 "< \"%s\" > \"%s\" 2>&1%s",
  				 SYSTEMQUOTE, backend_exec, boot_options,
+ 				 test_max_fsm_pages,
  				 bufs[i], n_connections,
  				 DEVNULL, DEVNULL, SYSTEMQUOTE);
  		status = system(cmd);
***************
*** 1150,1157 ****
  	if (i >= len)
  		i = len - 1;
  	n_buffers = bufs[i];
  
! 	printf("%d\n", n_buffers);
  }
  
  /*
--- 1159,1167 ----
  	if (i >= len)
  		i = len - 1;
  	n_buffers = bufs[i];
+ 	n_fsm_pages = test_max_fsm_pages;
  
! 	printf("%d/%d\n", n_buffers, n_fsm_pages);
  }
  
  /*
***************
*** 1177,1182 ****
--- 1187,1195 ----
  	snprintf(repltok, sizeof(repltok), "shared_buffers = %d", n_buffers);
  	conflines = replace_token(conflines, "#shared_buffers = 1000", repltok);
  
+ 	snprintf(repltok, sizeof(repltok), "max_fsm_pages = %d", n_fsm_pages);
+ 	conflines = replace_token(conflines, "#max_fsm_pages = 20000", repltok);
+ 
  #if DEF_PGPORT != 5432

Re: [PATCHES] default resource limits

2005-12-23 Thread daveg
On Fri, Dec 23, 2005 at 03:38:56PM -0500, Andrew Dunstan wrote:
> What numbers would you like? If what I suggested seems odd, how about
> targets of 400 connections, 4000 shared_buffers and 200,000
> max_fsm_pages?
>
> Here's a patch that does what I had in mind. On my modest workstation it
> tops out at 400 connections and 2500/125000
> shared_buffers/max_fsm_pages. An idle postmaster with these settings
> consumed less than 4% of the 380MB of memory, according to top, making
> it still dwarfed by X, mozilla, apache and amavisd among other memory hogs.

I don't understand the motivation for so many connections by default; it
seems wasteful in most cases.

-dg

-- 
David Gould  [EMAIL PROTECTED]
If simplicity worked, the world would be overrun with insects.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PATCHES] default resource limits

2005-12-23 Thread Andrew Dunstan



daveg wrote:

> On Fri, Dec 23, 2005 at 03:38:56PM -0500, Andrew Dunstan wrote:
>> What numbers would you like? If what I suggested seems odd, how about
>> targets of 400 connections, 4000 shared_buffers and 200,000
>> max_fsm_pages?
>>
>> Here's a patch that does what I had in mind. On my modest workstation it
>> tops out at 400 connections and 2500/125000
>> shared_buffers/max_fsm_pages. An idle postmaster with these settings
>> consumed less than 4% of the 380MB of memory, according to top, making
>> it still dwarfed by X, mozilla, apache and amavisd among other memory hogs.
>
> I don't understand the motivation for so many connections by default; it
> seems wasteful in most cases.

The rationale is one connection per apache thread (which on Windows
defaults to 400). If people think this is too many I could live with
winding it back a bit - the default number of apache workers on Unix is
250, IIRC.

Here's why it matters: during Hurricane Katrina, one web site that was
collecting details on missing people found its application failing
because apache/php wanted more connections (out of the box) than the
out-of-the-box postgres default. Luckily we were able to advise the
operator on how to fix it very quickly, but having these in some sort
of sync seems reasonable.

Of course, if you use connection pooling you can probably wind the
number back quite a lot.

There's no magic right answer - but on even entry-level retail desktop
machines the extra memory used by this is now quite a small drop in the
bucket - the very lowest come with 256MB of memory, and all but the
very lowest come with 512MB or 1GB. So why should we argue over a
handful of megabytes in raising limits last set 3 or 4 years ago?


cheers

andrew

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PATCHES] default resource limits

2005-12-23 Thread Tom Lane
daveg [EMAIL PROTECTED] writes:
> I don't understand the motivation for so many connections by default; it
> seems wasteful in most cases.

I think Andrew is thinking about database-backed Apache servers ...

Some quick checks say that CVS tip's demand for shared memory increases
by about 26kB per max_connections slot.  (Almost all of this is lock table
space; very possibly we could afford to decrease max_locks_per_transaction
when max_connections is large, to buy back some of that.)  So boosting
the default from 100 to 400 would eat an additional 7.8MB of shared
memory if we don't do anything with max_locks_per_transaction.  This is
probably not a lot on modern machines.

A bigger concern is the increase in semaphores or whatever the local
platform uses instead.  I'd be *real* strongly tempted to bound the
default at 100 on Darwin, for example, because on that platform each
semaphore is an open file that has to be passed down to every backend.
Uselessly large max_connections therefore slows backend launch and
risks running the whole machine out of filetable slots.  I don't know
what the story is on Windows but it might have some problems with large
numbers of semas too --- anyone know?

Also, some more thought needs to be given to the tradeoff between
shared_buffers and max_connections.  Given a constrained SHMMAX value,
I think the patch as-is will expend too much space on connections and
not enough on buffers --- the * 5 in test_connections() probably needs
a second look.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings