scottxing commented on PR #2221:
URL: https://github.com/apache/paimon/pull/2221#issuecomment-2082807344

@JingsongLi @liming30 On Paimon 0.7: a large volume of data, about 100 million records, is ingested through Paimon CDC into a Paimon ODS table whose `changelog-producer` is set to `input`. I then start a Flink SQL job (configured with a `consumer-id`) that streams from this ODS table and inserts into a Paimon DWD table whose `changelog-producer` is `lookup`. After this job starts, its checkpoint stays stuck at 0% and never completes, so no snapshot can be committed. As a result, my other Flink SQL job reading the DWD table sees no data. The effect is that all of the data from one Paimon table must be completely written into the next table before it can move on to the table after that; data cannot flow from job to job as a continuous stream. How can this be solved?
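For reference, a minimal sketch of the pipeline described above, assuming a Paimon catalog is the current catalog; the table names, schema, consumer id, and checkpoint interval are placeholders, not taken from the actual job:

```sql
-- ODS table populated by Paimon CDC; emits its input records as changelog.
CREATE TABLE ods_orders (
    order_id BIGINT,
    amount   DECIMAL(10, 2),
    PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
    'changelog-producer' = 'input'
);

-- DWD table; changelog is produced via lookup at write time.
CREATE TABLE dwd_orders (
    order_id BIGINT,
    amount   DECIMAL(10, 2),
    PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
    'changelog-producer' = 'lookup'
);

-- Checkpoints drive Paimon snapshot commits; this is the step reported
-- as stuck at 0%.
SET 'execution.checkpointing.interval' = '1 min';

-- Streaming ODS -> DWD with a consumer id so read progress is tracked.
INSERT INTO dwd_orders
SELECT * FROM ods_orders /*+ OPTIONS('consumer-id' = 'dwd-consumer') */;
```

Since Paimon commits a snapshot only when a checkpoint of the writing job completes, any downstream job reading `dwd_orders` can see data only after this job's checkpoints succeed, which is why the stuck checkpoint blocks the whole chain.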

