On 19 Oct 2004 at 17:35, Josh Close wrote:

> Well, I didn't find a whole lot in the list archives, so I emailed
> that list with a few more questions. My postgres server is just
> crawling right now :(
> 

Unlike many other database engines, Postgres's shared buffers are 
not a private cache of the database data. They are a working area 
shared between all the backend processes. This needs to be tuned for 
the number of connections and the overall workload, *not* for the 
amount of your database that you want to keep in memory. There is 
still a lot of debate about where the "sweet spot" is. Maybe there 
isn't one, but it's not normally 75% of RAM.
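
As a rough illustration (the numbers here are only an example, not a 
recommendation for your box), a modest setting in postgresql.conf 
might look like this; note the value is in 8KB pages and requires a 
server restart to take effect:

    # postgresql.conf -- shared working area for all backends,
    # NOT a cache of the whole database
    shared_buffers = 10000      # 8KB pages, i.e. roughly 80MB
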

If anything, it is effective_cache_size that should be around 75% of 
(available) RAM, as this tells Postgres how much of your database the 
*OS* is likely to cache in memory.
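
For example, on a dedicated 4GB machine you might tell the planner 
something like the following (again in 8KB pages, and again the 
figure is just an illustration for a hypothetical box):

    # postgresql.conf
    effective_cache_size = 393216   # ~3GB, i.e. ~75% of 4GB

This setting only influences the planner's cost estimates; it does 
not allocate any memory itself.
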

Having said that, I think you will need to define "crawling". Is it 
updates/inserts that are slow? That may be triggers/rules/referential 
integrity checking etc. slowing things down. If it is selects that are 
slow, that may be due to incorrect indexes or sub-optimal queries. You 
need to show us what you are trying to do and what the results are.
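
For a slow select, the most useful thing to post is the EXPLAIN 
ANALYZE output for the query, e.g. (the table and column names below 
are just placeholders):

    EXPLAIN ANALYZE
    SELECT o.order_id, c.name
    FROM   orders o
    JOIN   customers c ON c.customer_id = o.customer_id
    WHERE  o.created > '2004-01-01';

If that shows a sequential scan where you expected an index scan, 
check that a suitable index exists, that the datatypes on both sides 
of the join match, and that you have run VACUUM ANALYZE recently so 
the planner has up-to-date statistics.
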

Regards,
Gary.

