leonardBang commented on code in PR #3619:
URL: https://github.com/apache/flink-cdc/pull/3619#discussion_r1821773381
##########
flink-cdc-connect/flink-cdc-source-connectors/flink-cdc-base/src/main/java/org/apache/flink/cdc/connectors/base/source/assigner/state/PendingSplitsStateSerializer.java:
##########
@@ -306,6 +322,59 @@ private StreamPendingSplitsState deserializeStreamPendingSplitsState(DataInputDe
// Utilities
// ------------------------------------------------------------------------------------------
+    private void writeSplitFinishedCheckpointIds(
+            Map<String, Long> splitFinishedCheckpointIds, DataOutputSerializer out)
+            throws IOException {
+        final int size = splitFinishedCheckpointIds.size();
+        out.writeInt(size);
+        for (Map.Entry<String, Long> splitFinishedCheckpointId :
+                splitFinishedCheckpointIds.entrySet()) {
+            out.writeUTF(splitFinishedCheckpointId.getKey());
+            out.writeLong(splitFinishedCheckpointId.getValue());
+        }
+    }
+
+    private void writeFinishedSplits(
+            Map<TableId, Set<String>> finishedSplits, DataOutputSerializer out) throws IOException {
+        final int size = finishedSplits.size();
+        out.writeInt(size);
+        for (Map.Entry<TableId, Set<String>> tableSplits : finishedSplits.entrySet()) {
+            out.writeUTF(tableSplits.getKey().toString());
+            out.writeInt(tableSplits.getValue().size());
+            for (String splitId : tableSplits.getValue()) {
+                out.writeUTF(splitId);
+            }
Review Comment:
I'm a little concerned about the current change, which may introduce higher memory
usage on the JM side. Could we avoid this design and reuse existing information instead?
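
For readers following along, here is a minimal sketch of the read side implied by the two writers above, just to make the serialized layout (and the extra per-split state it carries) concrete. The method names, the `HashMap`/`HashSet` choices, and the `TableId.parse` call are assumptions for illustration, not code taken from this PR:

```java
import org.apache.flink.core.memory.DataInputDeserializer;

import io.debezium.relational.TableId;

import java.io.IOException;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Reads back what writeSplitFinishedCheckpointIds wrote: an int count followed by
// (splitId UTF string, checkpointId long) pairs.
private Map<String, Long> readSplitFinishedCheckpointIds(DataInputDeserializer in)
        throws IOException {
    final int size = in.readInt();
    Map<String, Long> splitFinishedCheckpointIds = new HashMap<>(size);
    for (int i = 0; i < size; i++) {
        String splitId = in.readUTF();
        long checkpointId = in.readLong();
        splitFinishedCheckpointIds.put(splitId, checkpointId);
    }
    return splitFinishedCheckpointIds;
}

// Reads back what writeFinishedSplits wrote: an int table count, then per table a UTF
// table id, an int split count, and that many UTF split ids.
private Map<TableId, Set<String>> readFinishedSplits(DataInputDeserializer in)
        throws IOException {
    final int size = in.readInt();
    Map<TableId, Set<String>> finishedSplits = new HashMap<>(size);
    for (int i = 0; i < size; i++) {
        TableId tableId = TableId.parse(in.readUTF());
        int splitCount = in.readInt();
        Set<String> splitIds = new HashSet<>(splitCount);
        for (int j = 0; j < splitCount; j++) {
            splitIds.add(in.readUTF());
        }
        finishedSplits.put(tableId, splitIds);
    }
    return finishedSplits;
}
```

If this sketch matches the intent, every finished split keeps its id string in both maps plus a long checkpoint id on the JM, which is the growth the comment above is asking about.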
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]