[ 
https://issues.apache.org/jira/browse/FLINK-37873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17955083#comment-17955083
 ] 

Xin Chen edited comment on FLINK-37873 at 5/30/25 3:13 AM:
-----------------------------------------------------------

I am using flink-sql-connector-hbase-1.4:
{code:java}
flink-sql-connector-hbase-1.4_1.12.2.jar
{code}
There are two main issues here:
 # Why does Flink's HBase connector not verify that the target table exists 
before it buffers data and prepares to flush into it? (A validation sketch 
follows this list.)
 # When the table does not exist, the task raises no error that affects its 
execution status. The HBase client only prints an error stack at the moment 
the connector is closed, and even that does not change the task status.
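
A minimal sketch of the up-front validation I would expect, assuming access to 
the HBase Connection that the sink creates in open() (illustrative only, using 
the unshaded HBase 1.4 client API for readability; this is not the actual 
connector code):
{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.TableNotFoundException;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

// Called once from open(): fail fast if the sink's target table is missing,
// instead of deferring the failure to the final flush in close().
static void validateTableExists(Connection connection, String hTableName)
        throws Exception {
    try (Admin admin = connection.getAdmin()) {
        if (!admin.tableExists(TableName.valueOf(hTableName))) {
            throw new TableNotFoundException(
                    "HBase table '" + hTableName + "' does not exist");
        }
    }
}
{code}
With a check like this, the job would fail immediately at task start instead 
of silently finishing.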

Reporting the task as successful is the critical flaw: it can mislead users 
into believing their data was inserted into HBase when nothing was written at 
all. (A sketch of the close() behavior I would expect follows.)
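
What I would expect instead is sketched below, under the assumption that the 
sink's close() currently lets the final flush failure go unreported (again 
illustrative, not the actual HBaseSinkFunction code):
{code:java}
@Override
public void close() throws Exception {
    if (mutator != null) {
        try {
            // Closing the BufferedMutator triggers the final flush of all
            // buffered mutations; this is where TableNotFoundException surfaces.
            mutator.close();
        } catch (IOException e) {
            // Rethrow rather than only logging, so the task transitions to
            // FAILED instead of FINISHED and the user actually sees the failure.
            throw new RuntimeException("Final flush to HBase failed on close", e);
        }
    }
}
{code}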



> [HBase connector] With buffer flush enabled, writing to a non-existent table 
> still shows the Flink task as successful
> ---------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-37873
>                 URL: https://issues.apache.org/jira/browse/FLINK-37873
>             Project: Flink
>          Issue Type: Bug
>            Reporter: Xin Chen
>            Priority: Major
>             Fix For: 1.16.2
>
>         Attachments: image-2025-05-30-11-02-54-974.png
>
>
>  If I enable buffer flushing, e.g. 'sink.buffer-flush.max-rows' = '1000', 
> but set the 'table-name' option to a table that definitely does not exist in 
> HBase, why does the Flink task not report an error? It shows successful 
> execution and there is no error log in the task; only when the connector is 
> closed is a stack trace printed.
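>
> For context, the same deferral can be reproduced with the plain HBase 1.4 
> client: with a large write buffer, mutate() only queues edits locally, so a 
> missing table is not detected until the buffered edits are flushed when the 
> mutator is closed. A minimal sketch (configuration and names are 
> placeholders, not taken from my cluster):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.BufferedMutator;
> import org.apache.hadoop.hbase.client.BufferedMutatorParams;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> import org.apache.hadoop.hbase.client.Put;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class MissingTableRepro {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = HBaseConfiguration.create();
>         try (Connection conn = ConnectionFactory.createConnection(conf);
>              BufferedMutator mutator = conn.getBufferedMutator(
>                      new BufferedMutatorParams(TableName.valueOf("mytable3"))
>                              .writeBufferSize(8 * 1024 * 1024))) {
>             Put put = new Put(Bytes.toBytes("200"));
>             put.addColumn(Bytes.toBytes("family1"), Bytes.toBytes("q1"),
>                     Bytes.toBytes("300"));
>             mutator.mutate(put); // succeeds: the Put is only buffered locally
>         } // close() flushes the buffer and only now hits TableNotFoundException
>     }
> }
> {code}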
>  
> The Flink SQL:
>  
> {code:java}
> tbEnv.executeSql("CREATE TABLE IF NOT EXISTS hTable (\n"
>         + "rowkey STRING,\n"
>         + "family1 ROW<q1 STRING,q2 STRING>,\n"
>         + "family2 ROW<q3 STRING,q4 STRING>,\n"
>         + "PRIMARY KEY (rowkey) NOT ENFORCED\n"
>         + ") WITH (\n"
>         + "'connector' = 'hbase-1.4',\n"
>         + "'table-name' = 'mytable3',\n" // mytable3 does not exist in HBase!!!
>         + "'sink.buffer-flush.max-rows' = '1000',\n"
>         + "'hbase.regionserver.kerberos.principal' = 'hbase/[email protected]',\n"
>         + "'hbase.master.kerberos.principal' = 'hbase/[email protected]',\n"
>         + "'hbase.security.authentication' = 'kerberos',\n"
>         + "'hbase.ipc.client.fallback-to-simple-auth-allowed' = 'true',\n"
>         + "'zookeeper.quorum' = 'hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local:2181'\n"
>         + "    )");
>
> String sql3 = "insert into hTable values ('200', ROW('300','AAA'), ROW('400','BBB'))";
> tbEnv.executeSql(sql3);
> {code}
> Flink task state:
>  
> !image-2025-05-30-11-02-54-974.png!
> Taskmanager.log:
> {code:java}
> 2025-05-30 09:11:53,825 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Establish 
> JobManager connection for job ffffffffb9bf812e0000000000000000.
> 2025-05-30 09:11:53,828 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Offer 
> reserved slots to the leader of job ffffffffb9bf812e0000000000000000.
> 2025-05-30 09:11:53,850 INFO  
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Activate 
> slot 4e07d1d05b97f8882c94c3372b11e540.
> 2025-05-30 09:11:53,855 INFO  
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Activate 
> slot 4e07d1d05b97f8882c94c3372b11e540.
> 2025-05-30 09:11:53,885 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Received 
> task Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' AS 
> EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW 
> _UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
> Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
> EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f), deploy into slot with 
> allocation id 4e07d1d05b97f8882c94c3372b11e540.
> 2025-05-30 09:11:53,895 INFO  org.apache.flink.runtime.taskmanager.Task       
>              [] - Source: Values(tuples=[[{ 0 }]]) -> 
> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS 
> EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
> Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
> EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) switched from CREATED to 
> DEPLOYING.
> 2025-05-30 09:11:53,898 INFO  org.apache.flink.runtime.taskmanager.Task       
>              [] - Loading JAR files for task Source: Values(tuples=[[{ 0 }]]) 
> -> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') 
> AS EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
> Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
> EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) [DEPLOYING].
> 2025-05-30 09:11:54,060 INFO  org.apache.flink.runtime.taskmanager.Task       
>              [] - Registering task at network: Source: Values(tuples=[[{ 0 
> }]]) -> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW 
> _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> 
> Sink: Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, 
> EXPR$1, EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) [DEPLOYING].
> 2025-05-30 09:11:54,084 INFO  
> org.apache.flink.streaming.runtime.tasks.StreamTask          [] - No state 
> backend has been configured, using default (Memory / JobManager) 
> MemoryStateBackend (data in heap memory / checkpoints to JobManager) 
> (checkpoints: 'null', savepoints: 'null', asynchronous: TRUE, maxStateSize: 
> 5242880)
> 2025-05-30 09:11:54,092 INFO  org.apache.flink.runtime.taskmanager.Task       
>              [] - Source: Values(tuples=[[{ 0 }]]) -> 
> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS 
> EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
> Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
> EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) switched from DEPLOYING 
> to RUNNING.
> 2025-05-30 09:11:54,178 WARN  org.apache.flink.metrics.MetricGroup            
>              [] - The operator name Sink: 
> Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
> EXPR$2]) exceeded the 80 characters length limit and was truncated.
> 2025-05-30 09:11:54,461 WARN  org.apache.flink.metrics.MetricGroup            
>              [] - The operator name Calc(select=[_UTF-16LE'100' AS EXPR$0, 
> (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW 
> _UTF-16LE'BBB') AS EXPR$2]) exceeded the 80 characters length limit and was 
> truncated.
> =============================================================================================================================================================
> 2025-05-30 09:11:54,494 INFO  
> org.apache.flink.connector.hbase.sink.HBaseSinkFunction      [] - start open 
> ...
> 2025-05-30 09:11:54,525 WARN  
> org.apache.flink.connector.hbase.util.HBaseConfigurationUtil [] - Could not 
> find HBase configuration via any of the supported methods (Flink 
> configuration, environment variables).
> 2025-05-30 09:11:54,678 INFO  
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper
>  [] - Process identifier=hconnection-0x1a4ac2a8 connecting to ZooKeeper 
> ensemble=hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local:2181
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:zookeeper.version=3.4.14-HDP-22.10.2-efc83fdf3946366b6cd1191a5af00dd26735cde9,
>  built on 09/06/2022 11:01 GMT
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:host.name=192.168.234.96
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:java.version=1.8.0_342
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:java.vendor=Bisheng
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.342.b07-0.oe2203.x86_64/jre
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:java.class.path=:xxxx
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:java.io.tmpdir=/tmp
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:java.compiler=<NA>
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:os.name=Linux
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:os.arch=amd64
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:os.version=5.10.0-60.66.0.91.oe2203.x86_64
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:user.name=root
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:user.home=/root
> 2025-05-30 09:11:54,684 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
> environment:user.dir=/cloud/data/hadoop/yarn/nodemanager/local/usercache/hadoop/appcache/application_1747795725599_0302/container_1747795725599_0302_01_000002
> 2025-05-30 09:11:54,685 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Initiating 
> client connection, 
> connectString=hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local:2181
>  sessionTimeout=90000 
> watcher=org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.zookeeper.PendingWatcher@4e2b3acc
> 2025-05-30 09:11:54,702 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - Client 
> successfully logged in.
> 2025-05-30 09:11:54,703 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT refresh 
> thread started.
> 2025-05-30 09:11:54,703 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT valid 
> starting at:        Fri May 30 09:11:54 CST 2025
> 2025-05-30 09:11:54,703 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT 
> expires:                  Sat May 31 09:11:54 CST 2025
> 2025-05-30 09:11:54,704 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT refresh 
> sleeping until: Sat May 31 05:19:36 CST 2025
> 2025-05-30 09:11:54,705 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.client.ZooKeeperSaslClient 
> [] - Client will use GSSAPI as SASL mechanism.
> 2025-05-30 09:11:54,706 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - Opening 
> socket connection to server 
> hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local/192.168.123.164:2181.
>  Will attempt to SASL-authenticate using Login Context section 'Client'
> 2025-05-30 09:11:54,707 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - Socket 
> connection established to 
> hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local/192.168.123.164:2181,
>  initiating session
> 2025-05-30 09:11:54,710 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - Session 
> establishment complete on server 
> hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local/192.168.123.164:2181,
>  sessionid = 0x1008fd670fd20a8, negotiated timeout = 90000
> 2025-05-30 09:11:54,922 INFO  
> org.apache.flink.connector.hbase.sink.HBaseSinkFunction      [] - end open.
> =====================================================================================================================================================
> 2025-05-30 09:11:55,211 ERROR 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncProcess [] 
> - Failed to get region location 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.TableNotFoundException: 
> Table 'mytable3' was not found, got: mytable2.
>       at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1345)
>  ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1221)
>  ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:496)
>  ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:436)
>  ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:246)
>  ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.BufferedMutatorImpl.close(BufferedMutatorImpl.java:186)
>  ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.connector.hbase.sink.HBaseSinkFunction.close(HBaseSinkFunction.java:216)
>  ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:41)
>  ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.close(AbstractUdfStreamOperator.java:109)
>  ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.lambda$closeOperator$5(StreamOperatorWrapper.java:213)
>  ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
>  ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.closeOperator(StreamOperatorWrapper.java:210)
>  ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.lambda$deferCloseOperatorToMailbox$3(StreamOperatorWrapper.java:185)
>  ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
>  [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) 
> [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.tryYield(MailboxExecutorImpl.java:97)
>  [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.quiesceTimeServiceAndCloseOperator(StreamOperatorWrapper.java:162)
>  [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:131)
>  [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:135)
>  [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:135)
>  [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.closeOperators(OperatorChain.java:439)
>  [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.afterInvoke(StreamTask.java:627)
>  [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:589)
>  [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:755) 
> [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at org.apache.flink.runtime.taskmanager.Task.run(Task.java:570) 
> [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
>       at java.lang.Thread.run(Thread.java:750) [?:1.8.0_342]
> 2025-05-30 09:11:55,220 INFO  
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation
>  [] - Closing zookeeper sessionid=0x1008fd670fd20a8
> 2025-05-30 09:11:55,221 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Session: 
> 0x1008fd670fd20a8 closed
> 2025-05-30 09:11:55,221 INFO  
> org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - 
> EventThread shut down for session: 0x1008fd670fd20a8
> 2025-05-30 09:11:55,227 INFO  org.apache.flink.runtime.taskmanager.Task       
>              [] - Source: Values(tuples=[[{ 0 }]]) -> 
> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS 
> EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
> Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
> EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) switched from RUNNING to 
> FINISHED.
> 2025-05-30 09:11:55,227 INFO  org.apache.flink.runtime.taskmanager.Task       
>              [] - Freeing task resources for Source: Values(tuples=[[{ 0 }]]) 
> -> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') 
> AS EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
> Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
> EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f).
> 2025-05-30 09:11:55,228 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - 
> Un-registering task and sending final execution state FINISHED to JobManager 
> for task Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' AS 
> EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW 
> _UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
> Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
> EXPR$2]) (1/1)#0 5675710f80e829cc299d53501cd9765f.
> 2025-05-30 09:11:55,267 INFO  
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Free slot 
> TaskSlot(index:0, state:ACTIVE, resource profile: 
> ResourceProfile{cpuCores=1.0000000000000000, taskHeapMemory=25.600mb 
> (26843542 bytes), taskOffHeapMemory=0 bytes, managedMemory=230.400mb 
> (241591914 bytes), networkMemory=64.000mb (67108864 bytes)}, allocationId: 
> 4e07d1d05b97f8882c94c3372b11e540, jobId: ffffffffb9bf812e0000000000000000).
> 2025-05-30 09:11:55,271 INFO  
> org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Remove job 
> ffffffffb9bf812e0000000000000000 from job leader monitoring. {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
