Adamyuanyuan opened a new issue, #10280:
URL: https://github.com/apache/seatunnel/issues/10280

   ### Search before asking
   
   - [x] I had searched in the [issues](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22bug%22) and found no similar issues.
   
   
   ### What happened
   
   When running SeaTunnel on Flink in STREAMING mode with the Hive sink's `overwrite: true`, the final Hive partition/table directory may lose previously committed files and end up containing only a subset of the data (often only the files from the last checkpoint).
   
   ### SeaTunnel Version
   
   2.3.12
   
   ### SeaTunnel Config
   
   ```conf
   {
       "env" : {
           "execution.parallelism" : 1,
           "job.mode" : "STREAMING"
       },
       "source" : [
           {
               "url" : 
"jdbc:mysql://10.xx.xx:xxx/xxx?useUnicode=true&characterEncoding=UTF-8&useSSL=false",
               "driver" : "com.mysql.cj.jdbc.Driver",
               "user" : "bi",
               "password" : "******",
               "query" : "SELECT xxx FROM xxxWHERE 1=1",
               "partition_column" : "id",
               "partition_num" : 32,
               "split.size" : 20000,
               "fetch_size" : 5000,
               "result_table_name" : "result_table",
               "plugin_name" : "Jdbc"
           }
       ],
       "sink" : [
           {
               "table_name" : "xxx.xxx",
               "metastore_uri" : "thrift://bigdata-vm-xx-hslpl:9083",
               "hdfs_site_path" : "datasource-conf/xxx/hdfs-site.xml",
               "hive_site_path" : "datasource-conf/xxx/hive-site.xml",
               "krb5_path" : "datasource-conf/xxx/krb5.conf",
               "kerberos_principal" : "hive/[email protected]",
               "kerberos_keytab_path" : "datasource-conf/xxx/hive.keytab",
               "overwrite" : true,
               "source_table_name" : "source_table",
               "compress_codec" : "SNAPPY",
               "plugin_name" : "Hive"
           }
       ],
       "transform" : [
           {
               "source_table_name" : "result_table",
               "result_table_name" : "source_table",
               "query" : "select *, '2025-12-16' as `pt` from result_table",
               "plugin_name" : "Sql"
           }
       ]
   }
   ```
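   
   One way to make the symptom concrete is to list the target partition directory after each checkpoint and confirm that the file count resets instead of accumulating. The snippet below is a generic Hadoop FileSystem check; the warehouse path is only an example, not the actual table location.
   
   ```java
   // Lists the files currently present in a Hive partition directory so the
   // per-checkpoint file count can be compared over time. The default path
   // below is illustrative only.
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileStatus;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   
   public class PartitionFileCount {
       public static void main(String[] args) throws Exception {
           Path partition = new Path(args.length > 0
                   ? args[0]
                   : "/user/hive/warehouse/example.db/example_table/pt=2025-12-16");
           FileSystem fs = FileSystem.get(new Configuration());
           FileStatus[] files = fs.listStatus(partition);
           System.out.println(partition + " contains " + files.length + " file(s)");
           for (FileStatus f : files) {
               System.out.println("  " + f.getPath().getName() + " (" + f.getLen() + " bytes)");
           }
       }
   }
   ```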
   
   ### Running Command
   
   ```shell
   use flink API
   ```
   
   ### Error Exception
   
   ```log
   lost data.
   ```
   
   ### Zeta or Flink or Spark Version
   
   _No response_
   
   ### Java or Scala Version
   
   _No response_
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [x] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [x] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   

