Tomccat3 opened a new issue, #5452: URL: https://github.com/apache/paimon/issues/5452
### Search before asking

- [x] I searched in the [issues](https://github.com/apache/paimon/issues) and found nothing similar.

### Paimon version

paimon 1.0.1

### Compute Engine

flink 1.20.0

### Minimal reproduce step

Hi, I'm new to Paimon. I ran into the following problem:

1. Create a table:

```
CREATE TABLE IF NOT EXISTS update_test (
    d_st_2_did STRING COMMENT 'device ID',
    d_dy_4_imppv BIGINT COMMENT 'impression PV',
    d_dy_4_clickpv BIGINT COMMENT 'click PV',
    d_dy_4_installpv BIGINT COMMENT 'download PV',
    os_install BIGINT COMMENT 'item system install count',
    os_install_ys BIGINT COMMENT 'yesterday system install count',
    os_soaring DOUBLE COMMENT 'yesterday system install count / 7-day average',
    activate_uv BIGINT COMMENT 'active UV',
    yesterday_2_data_count BIGINT COMMENT 'yesterday usage count',
    remain_count_2 BIGINT COMMENT 'day-2 retention count',
    remain_rate_2 DOUBLE COMMENT 'day-2 retention rate',
    yesterday_7_data_count BIGINT COMMENT '7-day usage count',
    remain_count_7 BIGINT COMMENT '7-day retention count',
    remain_rate_7 DOUBLE COMMENT '7-day retention rate',
    yesterday_30_data_count BIGINT COMMENT '30-day usage count',
    remain_count_30 BIGINT COMMENT '30-day retention count',
    remain_rate_30 DOUBLE COMMENT '30-day retention rate',
    remain_rate DOUBLE COMMENT 'overall retention rate',
    update_time TIMESTAMP(3) COMMENT 'update time',
    PRIMARY KEY (d_st_2_did) NOT ENFORCED -- primary key contains the device ID and date
) WITH (
    'bucket' = '8',
    'parquet.compression' = 'SNAPPY',
    'snapshot.time-retained' = '72 h',
    'bucket-key' = 'd_st_2_did',
    'changelog-producer' = 'full-compaction', -- use full-compaction to produce a complete changelog after each compaction
    'changelog-producer.compaction-interval' = '2 min', -- compaction interval
    'merge-engine' = 'partial-update',
    'partial-update.ignore-delete' = 'true',
    'fields.d_dy_4_imppv.aggregate-function' = 'sum',
    'fields.d_dy_4_clickpv.aggregate-function' = 'sum',
    'fields.d_dy_4_installpv.aggregate-function' = 'sum',
    'fields.update_time.sequence-group' = 'd_dy_4_imppv,d_dy_4_clickpv,d_dy_4_installpv',
    'sequence.field' = 'update_time'
);
```

2. Launch a Flink streaming job that updates d_dy_4_imppv, d_dy_4_clickpv and d_dy_4_installpv; these three fields are updated correctly (a minimal sketch of such a job is included at the end of this issue).

3. Launch another Flink batch job to fill in the remaining fields:

```
INSERT INTO item_features_all
SELECT
    itemid AS d_st_2_did,
    0 AS d_dy_4_imppv,
    0 AS d_dy_4_clickpv,
    0 AS d_dy_4_installpv,
    COALESCE(os_install, 0) AS os_install,
    COALESCE(os_install_ys, 0) AS os_install_ys,
    COALESCE(os_soaring, 0.0) AS os_soaring,
    COALESCE(activate_uv, 0) AS activate_uv,
    COALESCE(yesterday_2_data_count, 0) AS yesterday_2_data_count,
    COALESCE(remain_count_2, 0) AS remain_count_2,
    COALESCE(remain_rate_2, 0.0) AS remain_rate_2,
    COALESCE(yesterday_7_data_count, 0) AS yesterday_7_data_count,
    COALESCE(remain_count_7, 0) AS remain_count_7,
    COALESCE(remain_rate_7, 0.0) AS remain_rate_7,
    COALESCE(yesterday_30_data_count, 0) AS yesterday_30_data_count,
    COALESCE(remain_count_30, 0) AS remain_count_30,
    COALESCE(remain_rate_30, 0.0) AS remain_rate_30,
    COALESCE(remain_rate, 0.0) AS remain_rate,
    PROCTIME() AS update_time
FROM hive_table -- source is a Hive table
WHERE itemid IS NOT NULL;
```

4. Run `SELECT * FROM item_features_all;`.

5. The batch job did not update any data.

### What doesn't meet your expectations?

I expect all fields to be updated, not only by the streaming job but also by the Flink batch job.

### Anything else?

_No response_

### Are you willing to submit a PR?

- [ ] I'm willing to submit a PR!
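For reference, a minimal sketch of what the streaming job in step 2 might look like. The source table `kafka_events` and its columns (`itemid`, `imp_pv`, `click_pv`, `install_pv`, `event_time`) are hypothetical placeholders, not part of the original report; only the sink table and its columns come from the DDL in step 1.

```
-- hypothetical source table; stands in for whatever stream the real job reads
CREATE TEMPORARY TABLE kafka_events (
    itemid STRING,
    imp_pv BIGINT,
    click_pv BIGINT,
    install_pv BIGINT,
    event_time TIMESTAMP(3)
) WITH (
    'connector' = 'datagen' -- placeholder connector for the sketch
);

-- continuously upsert only the three PV fields plus the sequence field;
-- the remaining columns stay NULL, so the partial-update merge engine keeps their existing values
INSERT INTO update_test (d_st_2_did, d_dy_4_imppv, d_dy_4_clickpv, d_dy_4_installpv, update_time)
SELECT itemid, imp_pv, click_pv, install_pv, event_time
FROM kafka_events;
```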