luowanghaoyun opened a new issue, #4724: URL: https://github.com/apache/paimon/issues/4724
### Search before asking

- [X] I searched in the [issues](https://github.com/apache/paimon/issues) and found nothing similar.

### Paimon version

0.9.0

### Compute Engine

Flink

### Minimal reproduce step

```sql
CREATE TABLE T (pk INT, a INT, ts INT) WITH (
    'bucket' = '1',
    'primary-key' = 'pk',
    'full-compaction.delta-commits' = '1',
    'record-level.expire-time' = '1s',
    'record-level.time-field' = 'ts')
```

```sql
-- batch sql 1: no compaction yet, writes "dirty data"
INSERT INTO T VALUES (1, 1, CAST(NULL AS INT));

-- batch sql 2: triggers compaction
INSERT INTO T VALUES (2, 2, 2);
-- ERROR: Time field for record-level expire should not be null.
```

### What doesn't meet your expectations?

When a row of dirty data (a row whose record-level expire time field is null) is written into the L0 files, the subsequent compaction fails. This is a dangerous situation if there is no snapshot to roll back to. Should there be stricter constraints on this time field?

### Anything else?

_No response_

### Are you willing to submit a PR?

- [ ] I'm willing to submit a PR!
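As a possible stopgap until stricter constraints exist in Paimon itself (a sketch, not verified against Paimon's internals), the time field can be declared `NOT NULL` in the Flink DDL so that null timestamps are rejected at write time rather than poisoning L0 files and failing later during compaction:

```sql
-- Hypothetical variant of the table above: the NOT NULL constraint makes
-- Flink reject the null-timestamp insert up front.
CREATE TABLE T (pk INT, a INT, ts INT NOT NULL) WITH (
    'bucket' = '1',
    'primary-key' = 'pk',
    'full-compaction.delta-commits' = '1',
    'record-level.expire-time' = '1s',
    'record-level.time-field' = 'ts');

-- This insert now fails immediately at write time instead of
-- breaking the next compaction:
INSERT INTO T VALUES (1, 1, CAST(NULL AS INT));
```

This only guards the write path at the SQL layer; enforcing the constraint inside Paimon (e.g. validating at table creation that the configured `record-level.time-field` column is non-nullable) would be the more robust fix this issue asks about.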
