On 11/17/2012 07:33 AM, T. E. Lawrence wrote:
Have you looked at the below?:

http://www.postgresql.org/docs/9.2/interactive/hot-standby.html#HOT-STANDBY-CONFLICT

25.5.2. Handling Query Conflicts

Yes, thank you!

I am hoping to hear more from people who run 9.2 systems with between 
100M and 1B records, with streaming replication and heavy data mining on the 
slaves (5 to 50M records read per hour by multiple parallel processes), while from 
time to time (2-3 times/week) between 20M and 50M records are inserted/updated 
within 24 hours.

How do they resolve this situation?

For us, retry + switch slave works quite well right now, without touching the db 
configuration in this respect yet.
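The retry + switch-slave pattern described above can be sketched roughly as follows. This is a minimal, driver-agnostic illustration, not the poster's actual code: the host names, the `RecoveryConflict` exception, and the injected query runner are all invented for the example. In a real client, `RecoveryConflict` would correspond to a driver error carrying SQLSTATE 40001 ("canceling statement due to conflict with recovery").

```python
import time

class RecoveryConflict(Exception):
    """Stand-in for a driver error carrying SQLSTATE 40001."""

def run_with_failover(standbys, run_query, retries_per_host=2, backoff=0.5):
    """Try each standby in turn; retry a few times before switching hosts."""
    last_error = None
    for host in standbys:
        for attempt in range(retries_per_host):
            try:
                return run_query(host)
            except RecoveryConflict as exc:
                last_error = exc
                time.sleep(backoff * (attempt + 1))  # brief backoff, then retry
    raise last_error  # every standby cancelled the query

# Demonstration with a fake runner: the first standby always conflicts,
# the second answers.
def fake_runner(host):
    if host == "standby1":
        raise RecoveryConflict("canceling statement due to conflict with recovery")
    return ("rows from", host)

result = run_with_failover(["standby1", "standby2"], fake_runner, backoff=0.0)
```

The key point is that the retry loop and the host list are the whole mechanism; no server-side configuration is involved.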

But maybe there are different approaches.

The conditions you cite above are outside my experience, so I will leave specific suggestions to others.

On a more conceptual level, assuming asynchronous replication, I see the following:

1) In a given database data changes are like wave fronts flowing across a sea of data.

2) Replication introduces another wave front in the movement of data from parent to child.

3) Querying that wave front in the child becomes a timing issue.

4) There are settings to mitigate that timing issue but not eliminate it. To do so would require more information exchange between parent and child than takes place currently.
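The mitigating settings alluded to in 4) are, in 9.2, roughly these. The values below are illustrative, not recommendations; each one trades query cancellation against replication lag or master-side bloat:

```
# postgresql.conf on the standby:
max_standby_streaming_delay = 300s   # let a query delay WAL replay this long
                                     # before it is cancelled (default 30s)
hot_standby_feedback = on            # standby reports its oldest snapshot to
                                     # the master, so vacuum holds back cleanup

# postgresql.conf on the master:
vacuum_defer_cleanup_age = 10000     # defer cleanup of recently-dead rows by
                                     # this many transactions
```

Raising `max_standby_streaming_delay` lets long mining queries finish at the cost of a lagging standby; `hot_standby_feedback` avoids most conflicts but can bloat the master during long standby queries.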

5) If it is essential to work the wave front, then turbulence is to be expected and dealt with. See your solution. I too would be interested in a method that is not some variation of what you do.

6) If working the wave front is not essential, then other strategies come into play. For instance partitioning, where older, more 'settled' data can be segregated and worked on.
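As a sketch of 6), 9.2-era partitioning is done with table inheritance plus CHECK constraints; the table and column names here are invented. Mining queries bounded to a settled month then touch only a child table whose rows are no longer being churned by replay:

```sql
-- Parent plus one monthly child, 9.2 inheritance style.
CREATE TABLE events (id bigint, created date, payload text);

CREATE TABLE events_2012_10 (
    CHECK (created >= DATE '2012-10-01' AND created < DATE '2012-11-01')
) INHERITS (events);

-- With constraint_exclusion = partition, a query bounded to October
-- scans only events_2012_10:
--   SELECT count(*) FROM events
--   WHERE created >= '2012-10-01' AND created < '2012-11-01';
```

Fresh inserts land in the current month's child, so the "wave front" stays away from the partitions the miners read.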







--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
