[PERFORM] Limited Shared Buffer Problem

2010-01-29 Thread Rod MacNeil
Hi All,

I have a server running CentOS 5 with 6GB of memory that will run postgres
8.3 exclusively.
I would like to allocate 4GB of the memory to shared buffers for postgres.
I have modified some kernel settings as follows:

shmall = 1048576 pages (4,294,967,296 bytes)
shmmax = 4,294,967,295 bytes
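
(On CentOS 5 these correspond to the kernel.shmall and kernel.shmmax sysctl
keys, set for example via entries like the following in /etc/sysctl.conf and
loaded with sysctl -p.)

kernel.shmall = 1048576
kernel.shmmax = 4294967295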

I can set the postgres config to shared_buffers = 2700MB but no higher.
If I try shared_buffers = 2750MB, the server fails to start with a message
that it cannot allocate memory:

2010-01-29 11:24:39 EST FATAL:  shmat(id=1638400) failed: Cannot allocate
memory

Is there some other setting that could be limiting the amount I can
allocate?

Excerpt from postgresql.conf:

# - Memory -

shared_buffers = 2750MB # min 128kB or max_connections*16kB
# (change requires restart)
temp_buffers = 32MB # min 800kB
max_prepared_transactions = 10  # can be 0 or more
# (change requires restart)
# Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
work_mem = 2MB  # min 64kB
maintenance_work_mem = 32MB # min 1MB
#max_stack_depth = 2MB  # min 100kB


Any help appreciated, Thanks

Rod


Re: [PERFORM] Limited Shared Buffer Problem

2010-01-29 Thread Richard Neill



Rod MacNeil wrote:

Hi All,

I have a server running CentOS 5 with 6GB of memory that will run
postgres 8.3 exclusively.

I would like to allocate 4GB of the memory to shared buffers for postgres.


It might be worth pausing at this point:

The various postgresql tuning guides usually suggest that on a dedicated 
system, you should give postgres about 1/4 of the RAM for shared 
buffers, while telling it that the effective_cache_size = 1/2 RAM.


Postgres will make good use of the OS cache as a file cache; the
effective_cache_size setting is advisory, telling postgres that it can
expect about this much data to be in RAM.
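
On this 6GB machine, that rule of thumb works out to roughly the following
in postgresql.conf:

shared_buffers = 1536MB
effective_cache_size = 3GB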


Also, if you are setting up a new system, it's probably worth going for
8.4.2. Postgres is relatively easy to build from source.


HTH,

Richard



Re: [PERFORM] Limited Shared Buffer Problem

2010-01-29 Thread Scott Marlowe
On Fri, Jan 29, 2010 at 9:37 AM, Rod MacNeil
rmacn...@interactdirect.com wrote:
 Hi All,

 I have a server running CentOS 5 with 6GB of memory that will run postgres
 8.3 exclusively.
 I would like to allocate 4GB of the memory to shared buffers for postgres.
 I have modified some kernel settings as follows:

 shmall = 1048576 pages (4,294,967,296 bytes)
 shmmax = 4,294,967,295 bytes

 I can set the postgres config to shared_buffers = 2700MB but no higher.
 If I try shared_buffers = 2750MB, the server fails to start with a message
 that it cannot allocate memory:

Are you running 32-bit or 64-bit CentOS?

Also, that's a rather high setting for shared_buffers on a 6GB machine.
Generally 2GB or so should be plenty unless you have actual data sets
that are larger than that.
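
A quick way to check how big your data actually is, for example:

SELECT pg_size_pretty(pg_database_size(current_database()));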



Re: [PERFORM] Limited Shared Buffer Problem

2010-01-29 Thread Ing. Marcos Luís Ortíz Valmaseda

Richard Neill wrote:



Rod MacNeil wrote:

Hi All,

I have a server running CentOS 5 with 6GB of memory that will run
postgres 8.3 exclusively.
I would like to allocate 4GB of the memory to shared buffers for
postgres.


It might be worth pausing at this point:

The various postgresql tuning guides usually suggest that on a 
dedicated system, you should give postgres about 1/4 of the RAM for 
shared buffers, while telling it that the effective_cache_size = 1/2 RAM.


Postgres will make good use of the OS cache as a file cache; the
effective_cache_size setting is advisory, telling postgres that it can
expect about this much data to be in RAM.


Also, if you are setting up a new system, it's probably worth going
for 8.4.2. Postgres is relatively easy to build from source.


HTH,

Richard

All these values have to be balanced against the others: shared_buffers,
work_mem, etc.
My recommendation is to lower the shmmax and shared_buffers values a
little.

Is it really necessary to have these values so high?

Regards


--
-
TIP 4: Don't 'kill -9' the postmaster
Ing. Marcos Luís Ortíz Valmaseda
PostgreSQL System DBA  DWH -- BI Apprentice

Centro de Tecnologías de Almacenamiento y Análisis de Datos (CENTALAD)
Universidad de las Ciencias Informáticas

Linux User # 418229

-- PostgreSQL --

http://www.postgresql-es.org
http://www.postgresql.org
http://www.planetpostgresql.org

-- DWH + BI --

http://www.tdwi.org

---




Re: [PERFORM] Limited Shared Buffer Problem

2010-01-29 Thread jose javier parra sanchez
You are probably running a 32-bit OS, so the problem is that the OS
cannot allocate more than about 3GB of contiguous memory. The only
solution is to migrate to a 64-bit OS.
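
(You can check with uname -m: x86_64 means a 64-bit OS, i686 or similar
means 32-bit.)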

2010/1/29 Rod MacNeil rmacn...@interactdirect.com:
 Hi All,

 I have a server running CentOS 5 with 6GB of memory that will run postgres
 8.3 exclusively.
 I would like to allocate 4GB of the memory to shared buffers for postgres.
 I have modified some kernel settings as follows:

 shmall = 1048576 pages (4,294,967,296 bytes)
 shmmax = 4,294,967,295 bytes

 I can set the postgres config to shared_buffers = 2700MB but no higher.
 If I try shared_buffers = 2750MB, the server fails to start with a message
 that it cannot allocate memory:

 2010-01-29 11:24:39 EST FATAL:  shmat(id=1638400) failed: Cannot allocate
 memory

 Is there some other setting that could be limiting the amount I can
 allocate?

 Excerpt from postgresql.conf:

 # - Memory -

 shared_buffers = 2750MB # min 128kB or max_connections*16kB
     # (change requires restart)
 temp_buffers = 32MB # min 800kB
 max_prepared_transactions = 10  # can be 0 or more
     # (change requires restart)
 # Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory
 # per transaction slot, plus lock space (see max_locks_per_transaction).
 work_mem = 2MB  # min 64kB
 maintenance_work_mem = 32MB # min 1MB
 #max_stack_depth = 2MB  # min 100kB


 Any help appreciated, Thanks

 Rod





Re: [PERFORM] Limited Shared Buffer Problem

2010-01-29 Thread Cédric Villemain
2010/1/29 Richard Neill rn...@cam.ac.uk:


 Rod MacNeil wrote:

 Hi All,

 I have a server running CentOS 5 with 6GB of memory that will run postgres
 8.3 exclusively.
 I would like to allocate 4GB of the memory to shared buffers for postgres.

 It might be worth pausing at this point:

 The various postgresql tuning guides usually suggest that on a dedicated
 system, you should give postgres about 1/4 of the RAM for shared buffers,
 while telling it that the effective_cache_size = 1/2 RAM.

 Postgres will make good use of the OS cache as a file cache; the
 effective_cache_size setting is advisory, telling postgres that it can
 expect about this much data to be in RAM.

AFAIK effective_cache_size is an estimate of the OS page cache plus the
estimated cache in shared_buffers.
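
(In practice, e.g. on Linux, that would be roughly shared_buffers plus the
cached figure reported by free -m, sampled while the system is under its
normal workload.)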


 Also, if you are setting up a new system, it's probably worth going for
 8.4.2. Postgres is relatively easy to build from source.

 HTH,

 Richard





-- 
Cédric Villemain



Re: [PERFORM] Limited Shared Buffer Problem

2010-01-29 Thread Rod MacNeil
Thanx, I will try out that recommendation.


On Fri, Jan 29, 2010 at 11:53 AM, Richard Neill rn...@cam.ac.uk wrote:



 Rod MacNeil wrote:

 Hi All,

 I have a server running CentOS 5 with 6GB of memory that will run postgres
 8.3 exclusively.
 I would like to allocate 4GB of the memory to shared buffers for postgres.


 It might be worth pausing at this point:

 The various postgresql tuning guides usually suggest that on a dedicated
 system, you should give postgres about 1/4 of the RAM for shared buffers,
 while telling it that the effective_cache_size = 1/2 RAM.

 Postgres will make good use of the OS cache as a file cache; the
 effective_cache_size setting is advisory, telling postgres that it can
 expect about this much data to be in RAM.

 Also, if you are setting up a new system, it's probably worth going for
 8.4.2. Postgres is relatively easy to build from source.

 HTH,

 Richard





-- 
Rod MacNeil
Senior Software Engineer
Interact Direct Marketing, Inc.
www.interactdirect.com
rmacn...@interactdirect.com
Primary Phone Mississauga Ontario: 905-278-4086
Alternate Phone London Ontario: 519-438-6245, Ext 183


[PERFORM] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu karmic slow createdb)

2010-01-29 Thread Greg Stark
On Tue, Jan 19, 2010 at 3:25 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 That function *seriously* needs documentation, in particular the fact
 that it's a no-op on machines without the right kernel call.  The name
 you've chosen is very bad for those semantics.  I'd pick something
 else myself.  Maybe pg_start_data_flush or something like that?


I would like to make one token argument in favour of the name I
picked. If it doesn't convince, I'll change it, since we can always
revisit the API down the road.

I envision having two function calls, pg_fsync_start() and
pg_fsync_finish(). The latter will wait until the data synced in the
first call is actually synced. The fall-back if there's no
implementation of this would be for fsync_start() to be a noop (or
something unreliable like posix_fadvise) and fsync_finish() to just be
a regular fsync.

I think we can accomplish this with sync_file_range() but I need to
read up on how it actually works a bit more. In this case it doesn't
make a difference since when we call fsync_finish() it's going to be
for the entire file and nothing else will have been writing to these
files. But for WAL writing and checkpointing it might have very
different performance characteristics.

The big objection to this is that then we don't really have an API for
POSIX_FADV_DONTNEED, which is more about cache policy than about syncing
to disk. So for example a sequential scan might want to indicate that it
isn't planning on re-reading the buffers it's churning through, but
doesn't want to force them to be written sooner than otherwise, and is
never going to call fsync_finish().



-- 
greg



Re: [PERFORM] Limited Shared Buffer Problem

2010-01-29 Thread Greg Smith

Cédric Villemain wrote:

AFAIK effective_cache_size is an estimate of the OS page cache plus the
estimated cache in shared_buffers.


Yes, the total value you set is used as is, and should include both 
pieces of memory.  The planner doesn't add the shared_buffers value to 
the total first for you, as some people might guess it would. 

The only thing effective_cache_size is used for is estimating how 
expensive an index is likely to be to use, to make decisions like when 
to do an index-based scan instead of just scanning the table itself.
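
You can watch it act on a single session; for example (t and indexed_col
are hypothetical names here):

SET effective_cache_size = '128MB';
EXPLAIN SELECT * FROM t WHERE indexed_col = 42;
SET effective_cache_size = '4GB';
EXPLAIN SELECT * FROM t WHERE indexed_col = 42;

With the larger value the planner costs index scans as cheaper, so the
second plan is more likely to use the index.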


--
Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [PERFORM] Constraint propagating for equal fields

2010-01-29 Thread Greg Stark
2010/1/28 Віталій Тимчишин tiv...@gmail.com

 I've always thought that PostgreSQL would propagate a constraint from field1 to
 field2 if a condition says field1 = field2, but this does not seem to be the case:

version?
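
(For context, the propagation in question, with hypothetical tables t1 and
t2: given a query like

SELECT * FROM t1, t2 WHERE t1.field1 = t2.field2 AND t1.field1 = 5;

one would expect the planner to also derive t2.field2 = 5 and consider an
index on t2.field2.)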

--
greg



[PERFORM] Slow query: table iteration (8.3)

2010-01-29 Thread Glenn Maynard
Hitting a performance issue that I'm not sure how to diagnose.

SELECT highscores_for_steps_and_card(s.id, 591, 1) FROM stomp_steps s;
Seq Scan on stomp_steps s  (cost=0.00..793.52 rows=2902 width=4)
(actual time=26509.919..26509.919 rows=0 loops=1)
Total runtime: 26509.972 ms

The inner function looks like this:

CREATE FUNCTION highscores_for_steps_and_card(steps_id int, card_id
int, count int) RETURNS SETOF INTEGER LANGUAGE SQL AS $$
SELECT r.id FROM stomp_round r
WHERE ($1 IS NULL OR r.steps_id = $1) AND ($2 IS NULL OR
r.user_card_id = $2)
ORDER BY r.score DESC LIMIT $3
$$

 Limit  (cost=13.12..13.12 rows=1 width=8) (actual time=0.054..0.054
rows=0 loops=1)
   ->  Sort  (cost=13.12..13.12 rows=1 width=8) (actual
time=0.051..0.051 rows=0 loops=1)
 Sort Key: score
 Sort Method:  quicksort  Memory: 17kB
 ->  Bitmap Heap Scan on stomp_round r  (cost=9.09..13.11
rows=1 width=8) (actual time=0.036..0.036 rows=0 loops=1)
   Recheck Cond: ((280 = steps_id) AND (user_card_id = 591))
   ->  BitmapAnd  (cost=9.09..9.09 rows=1 width=0) (actual
time=0.032..0.032 rows=0 loops=1)
 ->  Bitmap Index Scan on stomp_round_steps_id
(cost=0.00..4.40 rows=20 width=0) (actual time=0.030..0.030 rows=0
loops=1)
   Index Cond: (280 = steps_id)
 ->  Bitmap Index Scan on stomp_round_user_card_id
 (cost=0.00..4.44 rows=25 width=0) (never executed)
   Index Cond: (user_card_id = 591)
 Total runtime: 0.153 ms
(12 rows)
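
For reference, the plan above comes from running the function body directly
with the parameters substituted, i.e. something like:

SELECT r.id FROM stomp_round r
WHERE (280 IS NULL OR r.steps_id = 280)
  AND (591 IS NULL OR r.user_card_id = 591)
ORDER BY r.score DESC LIMIT 1;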

stomp_steps has about 1500 rows, so it finds 1500 high scores, one for
each stage.

I expected scalability issues from this on a regular drive, since
it'll be doing a ton of index seeking when not working out of cache,
so I expected to need to change to an SSD at some point (when it no
longer easily fits in cache).  However, I/O doesn't seem to be the
bottleneck yet.  If I run it several times, it consistently takes 26
seconds.  The entire database is in OS cache (find | xargs cat:
250ms).

I'm not sure why the full query (26s) is orders of magnitude slower
than 1500*0.150ms (225ms).  It's not a very complex query, and I'd
hope it's not being re-planned every iteration through the loop.  Any
thoughts?  Using SELECT to iterate over a table like this is very
useful (and I don't know any practical alternative), but it's
difficult to profile since it doesn't play nice with EXPLAIN ANALYZE.

-- 
Glenn Maynard
