yinbenrong opened a new issue, #9824:
URL: https://github.com/apache/seatunnel/issues/9824

   ### Search before asking
   
   - [x] I had searched in the [issues](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22bug%22) and found no similar issues.
   
   
   ### What happened
   
   Version: 2.3.11
   Engine: Zeta
   Config file:
   ```conf
   env {
     parallelism = 2
     job.mode = "STREAMING"
     checkpoint.interval = 1800000
     checkpoint.timeout = 1800000
   }
   
   source {
     MySQL-CDC {
       base-url = "jdbc:mysql://xxxx:3306/db_df_enterprise"
       username = "xx"
       password = "xx"
       table-names = ["db.table"]
       table-names-config = [
         {
           primaryKeys = ["id"]
           snapshotSplitColumn = "id"
           table = "db.table"
         }
       ]
       snapshot.fetch.size = 2048
       snapshot.split.size = 4096
       startup.mode = "initial"
       debezium.time.precision.mode = "connect"
       connectionTimeZone = "Asia/Shanghai"
     }
   }
   
   sink {
     Hudi {
       table_dfs_path = "hdfs://xx/user/hudi/seatunnel/"
       database = "db"
       table_name = "table"
       record_key_fields = "id"
       table_type = "MERGE_ON_READ"
       conf_files_path = "/etc/hadoop/conf/hdfs-site.xml;/etc/hadoop/conf/core-site.xml;/etc/hadoop/conf/yarn-site.xml"
       batch_size = 5000
       upsert_shuffle_parallelism = 2
       batch_interval_ms = 30000
       use.kerberos = true
       kerberos.principal = "xx"
       kerberos.principal.file = "xx"
       timestamp_semantics = "local"
       hoodie.cleaner.policy = "KEEP_LATEST_BY_HOURS"
       hoodie.cleaner.hours.retained = 845
     }
   }
   ```
   
   
   
   ### SeaTunnel Version
   
   2.3.11
   
   ### SeaTunnel Config
   
   ```conf
   none
   ```
   
   ### Running Command
   
   ```shell
   bin/seatunnel.sh --config config/v2.streaming.mysql2hudi.db_df_enterprise.t_enterprise.increment --async -DJvmOption="-Xms16G -Xmx16G"
   ```
   
   ### Error Exception
   
   ```log
   java.lang.RuntimeException: java.lang.RuntimeException: table db.table sink throw error
        at org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:302)
        at org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:70)
        at org.apache.seatunnel.engine.server.task.SeaTunnelTransformCollector.collect(SeaTunnelTransformCollector.java:39)
        at org.apache.seatunnel.engine.server.task.SeaTunnelTransformCollector.collect(SeaTunnelTransformCollector.java:27)
        at org.apache.seatunnel.engine.server.task.group.queue.IntermediateBlockingQueue.handleRecord(IntermediateBlockingQueue.java:75)
        at org.apache.seatunnel.engine.server.task.group.queue.IntermediateBlockingQueue.collect(IntermediateBlockingQueue.java:50)
        at org.apache.seatunnel.engine.server.task.flow.IntermediateQueueFlowLifeCycle.collect(IntermediateQueueFlowLifeCycle.java:51)
        at org.apache.seatunnel.engine.server.task.TransformSeaTunnelTask.collect(TransformSeaTunnelTask.java:72)
        at org.apache.seatunnel.engine.server.task.SeaTunnelTask.stateProcess(SeaTunnelTask.java:165)
        at org.apache.seatunnel.engine.server.task.TransformSeaTunnelTask.call(TransformSeaTunnelTask.java:77)
        at org.apache.seatunnel.engine.server.TaskExecutionService$BlockingWorker.run(TaskExecutionService.java:694)
        at org.apache.seatunnel.engine.server.TaskExecutionService$NamedTaskWrapper.run(TaskExecutionService.java:1023)
        at org.apache.seatunnel.api.tracing.MDCRunnable.run(MDCRunnable.java:43)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
   Caused by: java.lang.RuntimeException: table db.table sink throw error
        at org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.subSinkErrorCheck(MultiTableSinkWriter.java:140)
        at org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.write(MultiTableSinkWriter.java:192)
        at org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.write(MultiTableSinkWriter.java:47)
        at org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:268)
        ... 17 more
   Caused by: org.apache.seatunnel.connectors.seatunnel.hudi.exception.HudiConnectorException: ErrorCode:[COMMON-11], ErrorDescription:[Sink writer operation failed, such as (open, close) etc...] - Writing records to Hudi failed.
        at org.apache.seatunnel.connectors.seatunnel.hudi.sink.writer.HudiRecordWriter.writeRecord(HudiRecordWriter.java:123)
        at org.apache.seatunnel.connectors.seatunnel.hudi.sink.writer.HudiSinkWriter.write(HudiSinkWriter.java:75)
        at org.apache.seatunnel.connectors.seatunnel.hudi.sink.writer.HudiSinkWriter.write(HudiSinkWriter.java:40)
        at org.apache.seatunnel.api.sink.multitablesink.MultiTableWriterRunnable.run(MultiTableWriterRunnable.java:67)
        ... 6 more
   Caused by: org.apache.hudi.exception.HoodieUpsertException: Error upserting bucketType UPDATE for partition :0
        at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleUpsertPartition(BaseJavaCommitActionExecutor.java:250)
        at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleInsertPartition(BaseJavaCommitActionExecutor.java:256)
        at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.lambda$execute$0(BaseJavaCommitActionExecutor.java:123)
        at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
        at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.execute(BaseJavaCommitActionExecutor.java:119)
        at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.execute(BaseJavaCommitActionExecutor.java:70)
        at org.apache.hudi.table.action.commit.BaseWriteHelper.write(BaseWriteHelper.java:58)
        at org.apache.hudi.table.action.commit.JavaInsertCommitActionExecutor.execute(JavaInsertCommitActionExecutor.java:46)
        at org.apache.hudi.table.HoodieJavaCopyOnWriteTable.insert(HoodieJavaCopyOnWriteTable.java:108)
        at org.apache.hudi.table.HoodieJavaCopyOnWriteTable.insert(HoodieJavaCopyOnWriteTable.java:84)
        at org.apache.hudi.client.HoodieJavaWriteClient.insert(HoodieJavaWriteClient.java:137)
        at org.apache.seatunnel.connectors.seatunnel.hudi.sink.writer.HudiRecordWriter.executeWrite(HudiRecordWriter.java:172)
        at org.apache.seatunnel.connectors.seatunnel.hudi.sink.writer.HudiRecordWriter.flush(HudiRecordWriter.java:157)
        at org.apache.seatunnel.connectors.seatunnel.hudi.sink.writer.HudiRecordWriter.writeRecord(HudiRecordWriter.java:120)
        ... 9 more
   Caused by: org.apache.hudi.exception.HoodieException: org.apache.hudi.exception.HoodieException: org.apache.hudi.exception.HoodieUpsertException: Failed to close UpdateHandle
        at org.apache.hudi.table.action.commit.HoodieMergeHelper.runMerge(HoodieMergeHelper.java:151)
        at org.apache.hudi.table.HoodieTable.runMerge(HoodieTable.java:1099)
        at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleUpdateInternal(BaseJavaCommitActionExecutor.java:275)
        at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleUpdate(BaseJavaCommitActionExecutor.java:270)
        at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleUpsertPartition(BaseJavaCommitActionExecutor.java:243)
        ... 22 more
   Caused by: org.apache.hudi.exception.HoodieException: org.apache.hudi.exception.HoodieUpsertException: Failed to close UpdateHandle
        at org.apache.hudi.common.util.queue.SimpleExecutor.execute(SimpleExecutor.java:75)
        at org.apache.hudi.table.action.commit.HoodieMergeHelper.runMerge(HoodieMergeHelper.java:149)
        ... 26 more
   Caused by: org.apache.hudi.exception.HoodieUpsertException: Failed to close UpdateHandle
        at org.apache.hudi.io.HoodieMergeHandle.close(HoodieMergeHandle.java:455)
        at org.apache.hudi.table.action.commit.BaseMergeHelper$UpdateHandler.finish(BaseMergeHelper.java:59)
        at org.apache.hudi.table.action.commit.BaseMergeHelper$UpdateHandler.finish(BaseMergeHelper.java:44)
        at org.apache.hudi.common.util.queue.SimpleExecutor.execute(SimpleExecutor.java:72)
        ... 27 more
   Caused by: java.io.IOException: can not write ColumnIndex(null_pages:[false], min_values:[32 30 32 35 30 39 30 35 31 36 32 39 34 32 31 33 39], max_values:[32 30 32 35 30 39 30 35 31 36 33 30 30 39 37 33 32], boundary_order:ASCENDING, null_counts:[0])
        at org.apache.parquet.format.Util.write(Util.java:375)
        at org.apache.parquet.format.Util.writeColumnIndex(Util.java:69)
        at org.apache.parquet.hadoop.ParquetFileWriter.serializeColumnIndexes(ParquetFileWriter.java:1137)
        at org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:1100)
        at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:132)
        at org.apache.parquet.hadoop.ParquetWriter.close(ParquetWriter.java:319)
        at org.apache.hudi.io.hadoop.HoodieBaseParquetWriter.close(HoodieBaseParquetWriter.java:164)
        at org.apache.hudi.io.hadoop.HoodieAvroParquetWriter.close(HoodieAvroParquetWriter.java:88)
        at org.apache.hudi.io.HoodieMergeHandle.close(HoodieMergeHandle.java:431)
        ... 30 more
   Caused by: shaded.parquet.org.apache.thrift.transport.TTransportException: java.io.FileNotFoundException: File does not exist: /user/hudi/seatunnel/db/table/09a0f398-c87b-45a9-a210-925c0838b1e7-0_0-0-0_20250905163009732.parquet (inode 252738933) [Lease.  Holder: DFSClient_NONMAPREDUCE_61921914_245, pending creates: 1]
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2782)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.analyzeFileState(FSDirWriteFileOp.java:599)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.validateAddBlock(FSDirWriteFileOp.java:171)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2661)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:872)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:550)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
   
        at shaded.parquet.org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
        at shaded.parquet.org.apache.thrift.protocol.TCompactProtocol.writeByteDirect(TCompactProtocol.java:484)
        at shaded.parquet.org.apache.thrift.protocol.TCompactProtocol.writeByteDirect(TCompactProtocol.java:491)
        at shaded.parquet.org.apache.thrift.protocol.TCompactProtocol.writeFieldBeginInternal(TCompactProtocol.java:262)
        at shaded.parquet.org.apache.thrift.protocol.TCompactProtocol.writeFieldBegin(TCompactProtocol.java:244)
        at org.apache.parquet.format.InterningProtocol.writeFieldBegin(InterningProtocol.java:71)
        at org.apache.parquet.format.ColumnIndex$ColumnIndexStandardScheme.write(ColumnIndex.java:928)
        at org.apache.parquet.format.ColumnIndex$ColumnIndexStandardScheme.write(ColumnIndex.java:820)
        at org.apache.parquet.format.ColumnIndex.write(ColumnIndex.java:728)
        at org.apache.parquet.format.Util.write(Util.java:372)
        ... 38 more
   Caused by: java.io.FileNotFoundException: File does not exist: /user/hudi/seatunnel/db/table/09a0f398-c87b-45a9-a210-925c0838b1e7-0_0-0-0_20250905163009732.parquet (inode 252738933) [Lease.  Holder: DFSClient_NONMAPREDUCE_61921914_245, pending creates: 1]
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2782)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.analyzeFileState(FSDirWriteFileOp.java:599)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.validateAddBlock(FSDirWriteFileOp.java:171)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2661)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:872)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:550)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
   
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1088)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1865)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1668)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
   Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist: /user/hudi/seatunnel/db/table/09a0f398-c87b-45a9-a210-925c0838b1e7-0_0-0-0_20250905163009732.parquet (inode 252738933) [Lease.  Holder: DFSClient_NONMAPREDUCE_61921914_245, pending creates: 1]
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2782)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.analyzeFileState(FSDirWriteFileOp.java:599)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.validateAddBlock(FSDirWriteFileOp.java:171)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2661)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:872)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:550)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
   
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499)
        at org.apache.hadoop.ipc.Client.call(Client.java:1445)
        at org.apache.hadoop.ipc.Client.call(Client.java:1355)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy33.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:497)
        at sun.reflect.GeneratedMethodAccessor272.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy34.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1085)
   ```
   
   ### Zeta or Flink or Spark Version
   
   _No response_
   
   ### Java or Scala Version
   
   _No response_
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [x] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
