Flink CDC Issue Import created FLINK-34775:
----------------------------------------------

             Summary: [Bug] oracle cdc logminer can't catch up the latest 
records when scn huge increment occured.
                 Key: FLINK-34775
                 URL: https://issues.apache.org/jira/browse/FLINK-34775
             Project: Flink
          Issue Type: Bug
          Components: Flink CDC
            Reporter: Flink CDC Issue Import


### Search before asking

- [X] I searched in the [issues|https://github.com/ververica/flink-cdc-connectors/issues] and found nothing similar.


### Flink version

1.16.0

### Flink CDC version

2.3.0

### Database and its version

- oracle 11g
- oracle 12c

### Minimal reproduce step

1. Create a simple CDC source table (connector = 'oracle-cdc').
2. There are no special requirements for the sink.
3. When the Oracle database instance's SCN increases by a huge amount, `LogMinerStreamingChangeEventSource` cannot catch up to the latest records in time.

The problem is that `LogMinerQueryResultProcessor` cannot report a more reasonable `lastProcessedScn` after processing the mining view's data. During cyclic processing, the first cycle obtains the correct last processed SCN, and `endScn` is reset to it; that value is then used as `startScn` in the next cycle. Unfortunately, if the next cycle's mining-view query returns no rows for the source table, `startScn` cannot move forward again in a short time.
Meanwhile the Oracle SCN has already jumped far ahead, so the source table's new data cannot be captured in time.

The main code in `LogMinerStreamingChangeEventSource`:
```java
final Scn lastProcessedScn = processor.getLastProcessedScn();
if (!lastProcessedScn.isNull()
        && lastProcessedScn.compareTo(endScn) < 0) {
    // If the last processed SCN is before the endScn we need to
    // use the last processed SCN as the
    // next starting point as the LGWR buffer didn't flush all
    // entries from memory to disk yet.
    endScn = lastProcessedScn;
}

if (transactionalBuffer.isEmpty()) {
    LOGGER.debug(
            "Buffer is empty, updating offset SCN to {}",
            endScn);
    offsetContext.setScn(endScn);
}
```
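The stall can be avoided by letting the offset advance to `endScn` whenever a mining cycle returns no rows for the captured tables. The following is only a minimal, hypothetical sketch of that idea (the `Scn` stand-in, the `nextStartScn` helper, and the `processedRows` parameter are illustrative, not actual Debezium/Flink CDC API):

```java
import java.math.BigInteger;

public class ScnAdvanceSketch {

    // Simplified stand-in for Debezium's Scn wrapper class.
    static final class Scn implements Comparable<Scn> {
        final BigInteger value;
        Scn(long v) { this.value = BigInteger.valueOf(v); }
        boolean isNull() { return value == null; }
        @Override public int compareTo(Scn o) { return value.compareTo(o.value); }
        @Override public String toString() { return value.toString(); }
    }

    /**
     * Returns the SCN to use as the next cycle's startScn.
     * processedRows == 0 means the mining-view query returned nothing
     * for the captured tables, so pinning startScn at the stale
     * lastProcessedScn would stall progress.
     */
    static Scn nextStartScn(Scn lastProcessedScn, Scn endScn, int processedRows) {
        if (processedRows == 0) {
            // Nothing was mined in this window: jump to endScn, otherwise
            // a huge SCN increment leaves the reader far behind.
            return endScn;
        }
        if (!lastProcessedScn.isNull() && lastProcessedScn.compareTo(endScn) < 0) {
            // LGWR may not have flushed everything yet; resume from the
            // last SCN that was actually processed.
            return lastProcessedScn;
        }
        return endScn;
    }

    public static void main(String[] args) {
        // First cycle: rows processed up to SCN 100 in a window ending at 150,
        // so the next cycle re-reads from 100.
        System.out.println(nextStartScn(new Scn(100), new Scn(150), 42));     // 100

        // Later cycle: the SCN jumped to 1,000,000 but no rows matched;
        // advance to the window end instead of staying stuck at 100.
        System.out.println(nextStartScn(new Scn(100), new Scn(1000000), 0));  // 1000000
    }
}
```

This mirrors what newer Debezium versions effectively do: only hold the offset back at `lastProcessedScn` when that cycle actually processed data.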
BTW: the implementation has changed since Debezium 1.6.x; `LogMinerStreamingChangeEventSource` has been optimized in debezium-connector-oracle.


### What did you expect to see?

Reset `startScn` correctly when a mining cycle returns no SCN scan records.

### What did you see instead?

The first cycle's `lastProcessedScn` remains stuck as `startScn` for a long time.

### Anything else?

_No response_

### Are you willing to submit a PR?

- [X] I'm willing to submit a PR!

---------------- Imported from GitHub ----------------
Url: https://github.com/apache/flink-cdc/issues/1940
Created by: [green1893|https://github.com/green1893]
Labels: bug
Created at: Fri Feb 24 14:32:22 CST 2023
State: open




--
This message was sent by Atlassian Jira
(v8.20.10#820010)
