Re: [HACKERS] [GENERAL] [streaming replication] 9.1.3 streaming replication bug ?
On 4/11/12, 乔志强 <qiaozhiqi...@leadcoretech.com> wrote:
>> Yes, increase wal_keep_segments. Even if you set wal_keep_segments to
>> 64, the amount of disk space for WAL files is only 1GB, so there is no
>> need to worry so much, I think. No?
>
> But what about a transaction larger than 1GB?

Then you may need WAL space larger than 1GB as well. For replication to
work, it seems likely that you need sufficient WAL space to handle a row,
possibly the entire transaction. But since a single statement can update
thousands or millions of rows, do you always need enough WAL space to
hold the entire transaction?

> So in sync streaming replication, if the master deletes WAL before it
> has been sent to the only standby, all transactions will fail forever.
> The master tries to avoid a PANIC error rather than termination of
> replication, but in sync replication, termination of replication IS the
> bigger "PANIC" error.

That's somewhat debatable. Would I rather have a master that PANICked or
a slave that lost replication? I would choose the latter. A third option,
which may not even be feasible, would be to have the master fail the
transaction if synchronous replication cannot be achieved, although that
might have negative consequences as well.

> Another question: does the master send WAL to the standby before the
> transaction commits?

That's another question for the core team, I suspect. A related question
is what happens if there is a rollback?
--
Mike Nolan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
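The 1GB figure quoted above is simple arithmetic: with the default 16MB
WAL segment size, wal_keep_segments = 64 pins about 1GB of disk. A quick
back-of-envelope sketch (the segment size is the compile-time default and
can differ on builds configured with --with-wal-segsize):

```python
# Estimate the disk space pinned by wal_keep_segments, assuming the
# default 16 MB WAL segment size.
WAL_SEGMENT_SIZE_MB = 16

def retained_wal_mb(wal_keep_segments):
    """Approximate WAL disk space retained, in megabytes."""
    return wal_keep_segments * WAL_SEGMENT_SIZE_MB

print(retained_wal_mb(64))  # 1024 MB, i.e. about 1 GB
```

As the follow-up question notes, this estimate says nothing about how
much WAL a single large transaction can generate, which is the crux of
the thread.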
Re: [HACKERS] [GENERAL] [streaming replication] 9.1.3 streaming replication bug ?
Michael Nolan <htf...@gmail.com> wrote:
> On 4/11/12, 乔志强 <qiaozhiqi...@leadcoretech.com> wrote:
>> But what about a transaction larger than 1GB?
>
> Then you may need WAL space larger than 1GB as well. For replication to
> work, it seems likely that you need sufficient WAL space to handle a
> row, possibly the entire transaction. But since a single statement can
> update thousands or millions of rows, do you always need enough WAL
> space to hold the entire transaction?

No.

> Does the master send WAL to the standby before the transaction commits?

Yes.

> A related question is what happens if there is a rollback?

PostgreSQL doesn't use a rollback log; WAL files can be reclaimed as soon
as the work they represent has been persisted to the database by a
CHECKPOINT, even if it is not committed. Because there can be multiple
versions of each row in the base table, each with its own xmin (telling
which transaction created it) and xmax (telling which transaction expired
it), visibility checking can handle commits and rollbacks correctly. It
also uses a commit log (CLOG), hint bits, and other structures to help
resolve visibility. It is a complex topic, but it does work.

-Kevin
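Kevin's xmin/xmax description can be illustrated with a toy visibility
check. This is a deliberate simplification for intuition only — real
PostgreSQL visibility also consults snapshots, in-progress transaction
state, the CLOG, and hint bits, and the function name here is invented:

```python
# Toy MVCC visibility check: a row version is visible if the transaction
# that created it (xmin) committed, and the transaction that expired it
# (xmax), if any, did NOT commit (e.g. a rolled-back DELETE).
from typing import Optional

def is_visible(xmin, xmax, committed):
    # type: (int, Optional[int], set) -> bool
    if xmin not in committed:               # creator aborted/rolled back
        return False
    if xmax is not None and xmax in committed:  # deleter committed
        return False
    return True

committed = {100}                # txn 100 committed, txn 101 rolled back
print(is_visible(100, None, committed))  # True: created by committed txn
print(is_visible(101, None, committed))  # False: creator rolled back
print(is_visible(100, 101, committed))   # True: the delete rolled back
```

This is why no rollback log is needed: an aborted transaction's row
versions simply never become visible, and its WAL can be reclaimed after
a checkpoint regardless of commit status.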
Re: [HACKERS] [GENERAL] [streaming replication] 9.1.3 streaming replication bug ?
On 4/11/12, Kevin Grittner <kevin.gritt...@wicourts.gov> wrote:
> PostgreSQL doesn't use a rollback log; WAL files can be reclaimed as
> soon as the work they represent has been persisted to the database by a
> CHECKPOINT, even if it is not committed. Because there can be multiple
> versions of each row in the base table, each with its own xmin (telling
> which transaction created it) and xmax (telling which transaction
> expired it), visibility checking can handle commits and rollbacks
> correctly. It also uses a commit log (CLOG), hint bits, and other
> structures to help resolve visibility. It is a complex topic, but it
> does work.

Thanks, Kevin. That does lead to a question about the problem that
started this thread, though. How does one determine how big the WAL
space needs to be so that streaming replication does not fail? Or maybe
this is a bug after all?
--
Mike Nolan
Re: [HACKERS] [GENERAL] [streaming replication] 9.1.3 streaming replication bug ?
On Wed, Apr 11, 2012 at 3:31 PM, 乔志强 <qiaozhiqi...@leadcoretech.com> wrote:
> So in sync streaming replication, if the master deletes WAL before it
> has been sent to the only standby, all transactions will fail forever.
> The master tries to avoid a PANIC error rather than termination of
> replication, but in sync replication, termination of replication IS the
> bigger "PANIC" error.

I see your point. When there are backends waiting for replication, the
WAL files which the standby might not have received yet must not be
removed. If they are removed, replication keeps failing forever because
the required WAL files don't exist in the master, and the waiting
backends will never be released unless the replication mode is changed
to async. This should be avoided. To fix this issue, we should prevent
the master from removing WAL files that contain the minimum waiting LSN
or anything later. I'll think about this more and implement a patch.

Regards,

--
Fujii Masao
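Fujii's proposed fix amounts to clamping the WAL recycle point at the
oldest LSN any waiting backend still needs. A rough sketch of the idea —
LSN and segment handling are heavily simplified here, and none of these
names come from an actual patch:

```python
# Sketch: never recycle WAL at or beyond the minimum LSN that a backend
# is still waiting on for synchronous replication. LSNs are modeled as
# plain integers for illustration.

def safe_recycle_point(checkpoint_redo_lsn, waiting_lsns):
    """Return the LSN below which WAL may safely be removed."""
    if not waiting_lsns:
        # No waiters: the checkpoint alone bounds what must be kept.
        return checkpoint_redo_lsn
    # Otherwise keep everything any waiting backend might still need.
    return min(checkpoint_redo_lsn, min(waiting_lsns))

# With backends waiting at LSNs 500 and 700, a checkpoint at 900 may
# still only recycle WAL below 500.
print(safe_recycle_point(900, [500, 700]))  # 500
print(safe_recycle_point(900, []))          # 900
```

The design question the thread circles around is visible here: without
such a clamp, recycling is governed only by the checkpoint and
wal_keep_segments, neither of which knows what the standby has received.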
Re: [HACKERS] [GENERAL] [streaming replication] 9.1.3 streaming replication bug ?
On 4/11/12, Fujii Masao <masao.fu...@gmail.com> wrote:
> I see your point. When there are backends waiting for replication, the
> WAL files which the standby might not have received yet must not be
> removed. [...] To fix this issue, we should prevent the master from
> removing WAL files that contain the minimum waiting LSN or anything
> later. I'll think about this more and implement a patch.

With asynchronous replication, does the master even know if a slave
fails because of a WAL problem? And does/should it care?

Isn't there a separate issue with synchronous replication? If it fails,
what's the appropriate action to take on the master? PANICking it seems
to be a bad idea, but having transactions never complete because they
never hear back from the synchronous slave (for whatever reason) seems
bad too.
--
Mike Nolan
Re: [HACKERS] [GENERAL] [streaming replication] 9.1.3 streaming replication bug ?
On Thu, Apr 12, 2012 at 12:56 AM, Fujii Masao <masao.fu...@gmail.com> wrote:
> I see your point. When there are backends waiting for replication, the
> WAL files which the standby might not have received yet must not be
> removed. If they are removed, replication keeps failing forever because
> the required WAL files don't exist in the master, and the waiting
> backends will never be released unless the replication mode is changed
> to async. This should be avoided.

On second thought, we can avoid the issue by just increasing
wal_keep_segments enough. Even if the issue happens and some backends
get stuck waiting for replication, we can release them by taking a fresh
backup and restarting the standby from that backup. This is the basic
procedure for restarting replication after it has been terminated
because the required WAL files were removed from the master. So this
issue might not be worth a patch for now (though I'm not against
improving things in the future); it seems to be just a tuning problem
with wal_keep_segments.

Regards,

--
Fujii Masao
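The tuning Fujii describes is a one-line postgresql.conf change on the
master. The value below is only an illustrative example — it should be
sized to your write rate, largest transactions, and the longest standby
outage you want to ride out:

```
# postgresql.conf (master) -- retain at least this many 16 MB WAL
# segments so a lagging synchronous standby can catch up without
# needing a fresh base backup.
wal_keep_segments = 256    # ~4 GB of retained WAL; example value only
```

The trade-off is plain disk space on the master versus the cost of the
rebuild procedure (fresh backup plus standby restart) described above.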
Re: [HACKERS] [GENERAL] [streaming replication] 9.1.3 streaming replication bug ?
On Wed, Apr 11, 2012 at 12:35 PM, Fujii Masao <masao.fu...@gmail.com> wrote:
> On second thought, we can avoid the issue by just increasing
> wal_keep_segments enough. [...] So this issue might not be worth a
> patch for now (though I'm not against improving things in the future);
> it seems to be just a tuning problem with wal_keep_segments.

We've talked about teaching the master to keep track of how far back all
of its known standbys are, and retaining WAL back to that specific point,
rather than the shotgun approach that is wal_keep_segments. It's not
exactly clear what the interface to that should look like, though.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company