Re: BUG: Former primary node might get stuck when started as a standby
Hi Aleksander,

[ I'm writing this off-list to minimize noise, but we can continue the
discussion in -hackers, if you wish ]

22.01.2024 14:00, Aleksander Alekseev wrote:
> Hi,
>
>> But node1 knows that it's a standby now and it's expected to get all the
>> WAL records from the primary, doesn't it?
>
> Yes, but node1 doesn't know if it always was a standby or not. What if
> node1 was always a standby, node2 was a primary, then node2 died and
> node3 is a new primary.

Excuse me, but I still can't understand what could go wrong in this case.
Let's suppose node1 has WAL with the following contents before start:
CPLOC | TL1R1 | TL1R2 | TL1R3 |
while node2's WAL contains:
TL1R1 | TL2R1 | TL2R2 | ...
where CPLOC is a checkpoint location and TLxRy is record y on timeline x.
I assume that requesting all WAL records from node2 without redoing the
local records should be the right thing to do. And even in the situation
you propose:
CPLOC | TL2R5 | TL2R6 | TL2R7 |
while node3's WAL contains:
TL2R5 | TL3R1 | TL3R2 | ...
I see no issue with applying records from node3...

> If node1 sees an inconsistency in the WAL records, it should report it
> and stop doing anything, since it doesn't have all the information needed
> to resolve the inconsistencies in all possible cases. Only the DBA has
> this information.

I still wonder what can be considered an inconsistency in this situation.
Doesn't the redo of all the local WAL records itself create the
inconsistency here? For me, it's a question of an authoritative source,
and if we had such a source, we should trust its records only.

Or in other words, what if the record TL1R3, which node1 wrote to its WAL
but didn't send to node2, happened to have an incorrect checksum (due to a
partial write, for example)? If I understand correctly, node1 will just
stop redoing WAL at that position, receive all the following records from
node2, and move forward without reporting the inconsistency (an extra WAL
record).

Best regards,
Alexander
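(If one wanted to look at such extra local records directly — say, the
hypothetical TL1R3 above — pg_waldump can dump them. A sketch only: the
data directory path is illustrative, and the LSN is the one from the
scenario discussed earlier in the thread, i.e. the last record node2
received:)

```shell
# Dump node1's local WAL records on timeline 1, starting at the last
# record known to node2; everything after the first record shown is
# local-only history that node2 never received.
pg_waldump --timeline=1 --start=0/304DBF0 /var/lib/pgsql/node1/pg_wal
```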
Re: BUG: Former primary node might get stuck when started as a standby
Hi,

> But node1 knows that it's a standby now and it's expected to get all the
> WAL records from the primary, doesn't it?

Yes, but node1 doesn't know if it always was a standby or not. What if
node1 was always a standby, node2 was a primary, then node2 died and node3
is a new primary. If node1 sees an inconsistency in the WAL records, it
should report it and stop doing anything, since it doesn't have all the
information needed to resolve the inconsistencies in all possible cases.
Only the DBA has this information.

>> It's been a while since I seriously played with replication, but if
>> memory serves, a proper way to switch node1 to a replica mode would be
>> to use pg_rewind on it first.
>
> Perhaps that's true generally, but as we can see, without the extra
> records replayed, this scenario works just fine. Moreover, existing tests
> rely on it, e.g., 009_twophase.pl or 012_subtransactions.pl (in fact, my
> research of the issue was initiated per a test failure).

I suggest focusing on the particular flaky tests then, and on how to fix
them.

-- 
Best regards,
Aleksander Alekseev
Re: BUG: Former primary node might get stuck when started as a standby
Hi Aleksander,

19.01.2024 14:45, Aleksander Alekseev wrote:
>> it might not go online, due to the error:
>> new timeline N forked off current database system timeline M before
>> current recovery point X/X
>> [...]
>> In this case, node1 wrote to its WAL record 0/304DC68, but sent to node2
>> only record 0/304DBF0; then node2, being promoted to primary, forked the
>> next timeline from it, but when node1 was started as a standby, it first
>> replayed 0/304DC68 from WAL, and then could not switch to the new
>> timeline starting from the previous position.
>
> Unless I'm missing something, this is just the right behavior of the
> system.

Thank you for the answer!

> node1 has no way of knowing the history of node1/node2/nodeN promotion.
> It sees that it has more data and/or an inconsistent timeline with
> another node and refuses to proceed until the DBA intervenes.

But node1 knows that it's a standby now and it's expected to get all the
WAL records from the primary, doesn't it? Maybe it could redo as few
records as possible from its own WAL before requesting records from the
authoritative source... Is it supposed to be more performance-efficient
(not on the first restart, but on later ones)?

> What else can node1 do, drop the data? That's not how things are done in
> Postgres :)

In case no other options exist (this behavior is really correct and the
only one possible), maybe the server should just stop? Can the DBA
intervene somehow to make the server proceed without stopping it?

> It's been a while since I seriously played with replication, but if
> memory serves, a proper way to switch node1 to a replica mode would be
> to use pg_rewind on it first.

Perhaps that's true generally, but as we can see, without the extra
records replayed, this scenario works just fine. Moreover, existing tests
rely on it, e.g., 009_twophase.pl or 012_subtransactions.pl (in fact, my
research of the issue was initiated per a test failure).

Best regards,
Alexander
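(Before intervening, a DBA can at least read the recovery point and
timeline that the error message refers to out of the control file. A
sketch only — the data directory path is illustrative:)

```shell
# Show node1's current timeline and the minimum recovery point that the
# "forked off ... before current recovery point" error compares against.
pg_controldata /var/lib/pgsql/node1 | grep -E \
  "Latest checkpoint's TimeLineID|Latest checkpoint location|Minimum recovery ending location"
```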
Re: BUG: Former primary node might get stuck when started as a standby
Hi,

> it might not go online, due to the error:
> new timeline N forked off current database system timeline M before
> current recovery point X/X
> [...]
> In this case, node1 wrote to its WAL record 0/304DC68, but sent to node2
> only record 0/304DBF0; then node2, being promoted to primary, forked the
> next timeline from it, but when node1 was started as a standby, it first
> replayed 0/304DC68 from WAL, and then could not switch to the new
> timeline starting from the previous position.

Unless I'm missing something, this is just the right behavior of the
system. node1 has no way of knowing the history of node1/node2/nodeN
promotion. It sees that it has more data and/or an inconsistent timeline
with another node and refuses to proceed until the DBA intervenes.

What else can node1 do, drop the data? That's not how things are done in
Postgres :) What if this is very important data and node2 was promoted
mistakenly, either manually or by a buggy script?

It's been a while since I seriously played with replication, but if memory
serves, a proper way to switch node1 to a replica mode would be to use
pg_rewind on it first.

-- 
Best regards,
Aleksander Alekseev
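(For reference, the pg_rewind-based reconversion could look roughly like
the following. This is a sketch only: paths, ports, and user names are
illustrative, and it assumes node1 was running with wal_log_hints=on or
data checksums enabled, which pg_rewind requires on the target:)

```shell
# On node1 (the former primary), after node2 has been promoted.
pg_ctl -D /var/lib/pgsql/node1 stop -m fast

# Rewind node1's data directory back to the last checkpoint it has in
# common with node2, discarding the diverged local WAL tail.
pg_rewind --target-pgdata=/var/lib/pgsql/node1 \
          --source-server="host=localhost port=5433 user=postgres dbname=postgres"

# Mark node1 as a standby and point it at node2.
touch /var/lib/pgsql/node1/standby.signal
echo "primary_conninfo = 'host=localhost port=5433 user=postgres'" \
    >> /var/lib/pgsql/node1/postgresql.auto.conf

pg_ctl -D /var/lib/pgsql/node1 start
```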