Hi there, I hope this finds you well.

I'm attempting to build a CDC pipeline on top of Postgres (currently version 12,
latest minor release) with a custom client, and I'm running into data loss
caused by out-of-order logical replication messages.

The problem is as follows: Postgres streams logical replication events A, B, D,
G, K, I, P. Upon an exit signal, we stop consuming new events at LSN K and wait
30s for out-of-order events. Say that at that point we have only received A
(and K, of course); during the following 30s we get B and D, but for whatever
reason G never arrives. Since pgoutput-based logical replication gives us no
way to calculate the next LSN, we have no idea that G is missing, so we assume
everything arrived, commit K to the Postgres slot, and shut down. On the next
run, our worker starts receiving data from K forward, and G is lost forever.
Meanwhile, Postgres moves forward with archiving, so we can't go back and check
whether we lost anything, and even if we could, it would be extremely
inefficient.
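
For concreteness, our shutdown path looks roughly like the sketch below. It's a
minimal Python example using psycopg2's replication support; the DSN, the slot
and publication names, and the process() hand-off are placeholders for
illustration, not our actual client code:

    import signal
    import time

    import psycopg2
    import psycopg2.extras

    exiting = False

    def on_exit(signum, frame):
        global exiting
        exiting = True

    signal.signal(signal.SIGTERM, on_exit)

    def process(payload):
        # Hypothetical hand-off to the rest of the pipeline.
        pass

    conn = psycopg2.connect(
        "dbname=app",  # placeholder DSN
        connection_factory=psycopg2.extras.LogicalReplicationConnection,
    )
    cur = conn.cursor()
    cur.start_replication(
        slot_name="cdc_slot",  # placeholder; slot created with pgoutput
        options={"proto_version": "1", "publication_names": "cdc_pub"},
    )

    last_lsn = 0      # highest LSN seen so far ("K" at exit time)
    stop_lsn = None   # where we stop consuming new events
    deadline = None   # end of the 30s grace period for stragglers

    while True:
        if exiting and deadline is None:
            stop_lsn = last_lsn            # LSN "K" from the example above
            deadline = time.time() + 30    # wait for out-of-order events
        if deadline is not None and time.time() >= deadline:
            break
        msg = cur.read_message()   # non-blocking; None if nothing pending
        if msg is None:
            time.sleep(0.05)
            continue
        process(msg.payload)
        last_lsn = max(last_lsn, msg.data_start)

    # Nothing in the stream tells us whether every event below stop_lsn
    # actually arrived, so we commit stop_lsn and hope -- this is exactly
    # where G gets lost.
    cur.send_feedback(flush_lsn=stop_lsn, reply=True)
    conn.close()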

In sum, the issue comes from the combination of two facts: Postgres streams
events with unordered LSNs on highly transactional systems, and pgoutput
doesn't expose enough information to calculate the next (or previous) LSN. So
we have no way to verify that we received all the data we were supposed to
receive, and we risk committing an offset we shouldn't, because data preceding
it may not have arrived yet.

It seems very strange to me that none of the open-source CDC projects I looked
into care about this. They always assume that the next LSN received is...
well, the next one, and commit it, so upon restart they are vulnerable to the
same issue. So either I'm missing something, or there is a generalized
assumption causing data loss under certain conditions all over the place.
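
Continuing the sketch above, the commit pattern I keep seeing in those
projects boils down to something like this (illustrative only, not any
particular project's code):

    def consume(msg):
        process(msg.payload)  # hypothetical hand-off, as above
        # Assumes arrival order == LSN order: the message that just arrived
        # is treated as the furthest point reached, so any earlier LSN still
        # in flight is silently skipped after a restart.
        msg.cursor.send_feedback(flush_lsn=msg.data_start)

    cur.consume_stream(consume)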

Am I missing some Postgres mechanism that would allow me to at least detect
that data is missing?

Thanks in advance for any clues on how to deal with this. It has been driving 
me nuts.


Regards,
José Neves
