On Wed, Apr 11, 2007 at 05:54:45AM +0800, John Summerfield wrote:
> Linux caches writes; I don't think it should be hitting disk at all. The

I _sure hope_ you don't care about this data, then.  That's not a
really safe way to work.  But...

> table being updated contains 7482 records (658K raw data), of which
> probably fewer than 2000 are being updated, and typically the same ones
> all the time: we're updating the date of the latest trade.

...this is likely your problem.  The updates probably get slower
and slower.  Under MVCC, each UPDATE writes a new row version and
expires the old one, which means it _isn't_ physically the same row
every time.  This is approximately the worst usage model for
PostgreSQL's MVCC approach.  Worse, though it's not the same row,
every query has to grovel through all the dead row versions to find
the one that's actually live.  So that's probably what's killing you.
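
Just to sketch what's going on (the table and column names here are
made up, not from your schema):

    -- Hypothetical names, only to illustrate the pattern above.
    -- Each UPDATE writes a new row version and expires the old one,
    -- so the same ~2000 logical rows leave thousands of dead
    -- versions behind between vacuums:
    UPDATE trades
       SET last_trade_date = now()
     WHERE symbol = 'XYZ';

    -- VACUUM reclaims dead versions; VERBOSE reports how many it found:
    VACUUM VERBOSE ANALYZE trades;

If the removable-row counts VACUUM reports are large next to your
7482 live rows, vacuuming that table much more often (or autovacuum,
if your version has it) should help a lot.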

> Laptop (1.25 Gbytes)
> shared_buffers = 1000                   # min 16 or max_connections*2,
> 8KB each

So you have 8000 KB (about 8 MB) configured as your shared buffers
there.  That's the most shared memory Postgres will ever use.  You
can probably bump it up a little in this case.  Your other config
seems OK to me.  But I don't think this is your problem -- the update
pattern is.
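
For what it's worth, a sketch of the kind of bump I mean in
postgresql.conf -- the number is a guess, not a tuned value, and you
may need to raise the kernel's SHMMAX to match:

    # shared_buffers is counted in 8KB pages, so 1000 = ~8MB.
    # On a 1.25GB machine something like 20000 (~160MB) is a
    # reasonable starting point to experiment with:
    shared_buffers = 20000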

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
If they don't do anything, we don't need their acronym.
                --Josh Hamilton, on the US FEMA
