Adrian Klaver wrote:
> I am hoping to hear more from people who have running 9.2 systems
> w/ between 100m and 1b records, w/ streaming replication and heavy
> data mining on the slaves (5-50m records read per hour by multiple
> parallel processes), while from time to time (2-3 times/week)
> between 20 and 50m records are inserted/updated within 24 hours.
I've run replication on that general scale. IMV, when you are using PostgreSQL hot standby and streaming replication you need to decide whether a particular replica is primarily for recovery purposes, in which case long-running queries risk being canceled by recovery conflicts, or primarily for reporting, in which case long-running queries can finish, but the data on the replica may grow relatively stale while they run. If you have multiple replicas, you probably want to configure them differently in this regard.

-Kevin

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
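[The cancellation-vs-staleness tradeoff Kevin describes is governed by the standby's conflict-delay settings in postgresql.conf. A minimal sketch of the two configurations; the values are illustrative assumptions, not recommendations:]

```
# postgresql.conf on the standby (PostgreSQL 9.x)
hot_standby = on

# Reporting replica: let long-running queries finish, at the cost of
# the replica falling behind while they run.
max_standby_streaming_delay = -1   # never cancel queries to apply WAL
max_standby_archive_delay   = -1

# A recovery-focused replica would instead keep the delay short, e.g.:
# max_standby_streaming_delay = 30s

# Optionally, have the standby report its oldest running query to the
# master to reduce cancellations (can increase bloat on the master):
# hot_standby_feedback = on
```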