Re: flinkcdc: slave with the same server_uuid/server_id as this slave has connected to the master;

2022-02-16, by chengyanan1...@foxmail.com
Hello:
Flink CDC implements real-time MySQL synchronization on top of Debezium, and Debezium reads the MySQL binlog by acting as a slave server. By default, a random number between 5400 and 6400 is generated as the server-id of this Debezium client, and that id must be unique within the MySQL cluster. This error means a duplicate server-id exists, so I suggest you set the "server-id" option explicitly; it can be a single number or a range.
Also, when scan.incremental.snapshot.enabled is set to true (the default), a range is recommended: during the incremental snapshot phase the source reads in parallel, each parallel reader must also have a unique server-id, the snapshot parallelism is controlled by "parallelism.default", and the server-id range must be at least as large as that parallelism.
For details see:
https://ververica.github.io/flink-cdc-connectors/master/content/connectors/mysql-cdc.html#connector-options

in particular the explanations of server-id and scan.incremental.snapshot.enabled on that options page.
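The range constraint above can be sketched as a quick check (the helper names are hypothetical and not part of Flink CDC, which performs its own validation at startup):

```python
# Hypothetical helper illustrating the constraint above: with incremental
# snapshotting enabled, the configured server-id range must provide at
# least one unique id per parallel source reader.
def server_id_count(server_id: str) -> int:
    """Accept a single id such as '5400' or a range such as '5400-5404'."""
    if "-" in server_id:
        lo, hi = map(int, server_id.split("-"))
        return hi - lo + 1
    return 1

def covers_parallelism(server_id: str, parallelism: int) -> bool:
    return server_id_count(server_id) >= parallelism

print(covers_parallelism("5400-5404", 4))  # True: 5 ids cover 4 readers
print(covers_parallelism("5400", 4))       # False: one id cannot
```

So with "parallelism.default" = 4, configuring 'server-id' = '5400-5404' (or any range of at least four ids that are unique in the cluster) avoids the duplicate-server-id error.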



chengyanan1...@foxmail.com
 
From: maker_d...@foxmail.com
Sent: 2022-02-15 14:13
To: user-zh@flink.apache.org
Subject: flinkcdc: slave with the same server_uuid/server_id as this slave has connected to the master;
flink version: flink-1.13.5
cdc version: 2.1.1

While using Flink CDC to synchronize multiple tables I hit this error:
org.apache.flink.runtime.JobException: Recovery is suppressed by 
FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=3, 
backoffTimeMS=1)
at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:216)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:206)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:197)
at 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:682)
at 
org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:79)
at 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:435)
at sun.reflect.GeneratedMethodAccessor135.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:305)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:212)
at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at akka.actor.Actor.aroundReceive(Actor.scala:517)
at akka.actor.Actor.aroundReceive$(Actor.scala:515)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
at akka.actor.ActorCell.invoke(ActorCell.scala:561)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
at akka.dispatch.Mailbox.run(Mailbox.scala:225)
at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
exception
at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:223)
at 
org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
at 
org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
at 
org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:294)
at 
org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:69)
at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:66)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:423)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:204)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:684)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.executeInvoke(StreamTask.java:639)
at 

Unsubscribe

2022-02-16, by qhp...@hotmail.com




qhp...@hotmail.com


Flink does not trigger checkpoints

2022-02-16, by 董少杰
I create one table in Flink by reading a CSV file and another by consuming Kafka, join the two tables, and write the result to HDFS (Hudi). Once the CSV-reading task reaches the FINISHED state, checkpoints can no longer be triggered. Is there any way to make checkpointing work normally?
Flink version: 1.12.2.
Thanks!






董少杰
eric21...@163.com

Fwd: After creating a view in Flink, using OPTIONS in the SELECT statement fails

2022-02-16, by liangjinghong
Thanks for the reply. However, the lib directory of my deployment does not contain the package you mentioned. Which jar should I remove?

Jars in my lib directory:
flink-csv-1.13.0.jar
flink-dist_2.11-1.13.0.jar
flink-json-1.13.0.jar
flink-shaded-zookeeper-3.4.14.jar
flink-sql-connector-mysql-cdc-2.1.1.jar
flink-table_2.11-1.13.0.jar
flink-table-blink_2.11-1.13.0.jar
log4j-1.2-api-2.12.1.jar
log4j-api-2.12.1.jar
log4j-core-2.12.1.jar
log4j-slf4j-impl-2.12.1.jar
-----Original Message-----
From: godfrey he [mailto:godfre...@gmail.com]
Sent: 2022-02-16 16:47
To: user-zh
Subject: Re: After creating a view in Flink, using OPTIONS in the SELECT statement fails

Hi liangjinghong,

The cause is that the blink planner introduces a modified SqlTableRef class, while the legacy planner does not, so the (buggy) SqlTableRef from Calcite gets loaded instead.
Solution: if you only use the blink planner, you can remove the legacy planner jar from lib.

Best,
Godfrey


org.apache.flink.runtime.rpc.exceptions.FencingTokenException

2022-02-16, by casel.chen
Hello, I have a Flink 1.13.2 on native Kubernetes application job that hits the exception below. What could be causing it?


Starting kubernetes-application as a console application on host 
dc-ads-ptfz-nspos-sib-trans-sum-6d9dbf587b-tgbmx.
ERROR StatusLogger Reconfiguration failed: No configuration found for 
'135fbaa4' at 'null' in 'null'
ERROR StatusLogger Reconfiguration failed: No configuration found for 
'6c130c45' at 'null' in 'null'
ERROR StatusLogger Reconfiguration failed: No configuration found for 
'6babf3bf' at 'null' in 'null'
18:25:22.588 [flink-akka.actor.default-dispatcher-5] ERROR 
org.apache.flink.runtime.rest.handler.job.JobDetailsHandler - Unhandled 
exception.
org.apache.flink.runtime.rpc.exceptions.FencingTokenException: Fencing token 
mismatch: Ignoring message LocalFencedMessage(87e23829e16a630dcaad2c0851744d0c, 
LocalRpcInvocation(requestExecutionGraphInfo(JobID, Time))) because the fencing 
token 87e23829e16a630dcaad2c0851744d0c did not match the expected fencing token 
92e3b018bda11d4c4598f9335644496a.
at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:88)
 ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
 ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.actor.Actor.aroundReceive(Actor.scala:517) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.actor.Actor.aroundReceive$(Actor.scala:515) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.actor.ActorCell.invoke(ActorCell.scala:561) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.dispatch.Mailbox.run(Mailbox.scala:225) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.dispatch.Mailbox.exec(Mailbox.scala:235) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at 
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at 
akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) 
[flink-dist_2.12-1.13.2.jar:1.13.2]
at 
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) 
[flink-dist_2.12-1.13.2.jar:1.13.2]

Re: Confusion about CEP

2022-02-16, by yue ma
The images don't open. Could you share the code?

翼 之道 wrote on Wed, Feb 16, 2022 at 17:44:

> I wrote a demo that does a simple pattern match (code below), but every input record is treated as late.
>
>
> Every input record comes out through the late-data output, and no pattern matching is performed.
>
>
> Why is that? More complex patterns all worked for me; why does the simplest one not give the expected result?
>


Confusion about CEP

2022-02-16, by 翼 之道
I wrote a demo that does a simple pattern match (code below), but every input record is treated as late.
[cid:image006.png@01D82356.C91B7890]
[cid:image007.png@01D82356.C91B7890]

Every input record comes out through the late-data output, and no pattern matching is performed.

[cid:image008.png@01D82356.C91B7890]
Why is that? More complex patterns all worked for me; why does the simplest one not give the expected result?


Timestamps getting mixed up after batching records for processing and splitting them again

2022-02-16, by yidan zhao
As shown in the image, the situation is as follows:
Record A: ts1
Record B: ts2
Assume ts1 and ts2 are two event times belonging to different windows.
However, after adding batching logic in a process function, the two elements are packed into one List and sent to the next operator, which processes the batch and then splits it back into individual outputs.
At that point both A and B end up with event time ts2, which breaks the subsequent window operations.

I'd like to hear if anyone has a good way to solve this.

Two options I'm currently considering:
(1) regenerate the event timestamps after splitting the batch;
(2) switch the process function to a KeyedProcessFunction and put the window information into the keyBy fields.
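Option (1) can be sketched outside Flink as follows: carry each element's own event timestamp inside the batched records, and restore it when the batch is split, instead of letting every element inherit the timestamp of the batch element itself (a minimal sketch with hypothetical names, not Flink API code):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Record:
    value: str
    event_ts: int  # the element's own event time, kept inside the payload

def batch(records: List[Record]) -> List[Record]:
    # the downstream operator receives the whole list as a single element,
    # whose stream timestamp would be that of the last record (ts2)
    return records

def unbatch(batched: List[Record]) -> List[Tuple[str, int]]:
    # re-emit each element with its preserved timestamp, so later windows
    # assign A to ts1's window and B to ts2's window again
    return [(r.value, r.event_ts) for r in batched]

out = unbatch(batch([Record("A", 100), Record("B", 250)]))
print(out)  # [('A', 100), ('B', 250)]
```

In a DataStream job this corresponds to re-assigning timestamps after the split, e.g. with a timestamp assigner that reads the stored field.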


Re: Re: Flink SQL JDBC sink transaction commit question

2022-02-16, by casel.chen
If the MySQL connection is not in auto-commit mode, at which step is the transaction committed?

On 2022-02-16 10:24:39, "Michael Ran" wrote:
>As I recall, the MySQL JDBC driver defaults to auto-commit. Not sure about Phoenix.
>On 2022-02-15 13:25:07, "casel.chen" wrote:
>>I've recently been extending the Flink SQL JDBC connector to support Phoenix. While debugging I found that data is written through PhoenixStatement.executeBatch(), but because no transaction is committed, other sessions cannot see it.
>>In the source, PhoenixPreparedStatement.execute() calls executeMutation(statement), which checks connection.getAutoCommit() to decide whether to call connection.commit(). Back in PhoenixStatement.executeBatch(), flushIfNecessary() then checks connection.getAutoFlush() to decide whether to call connection.flush().
>>At first I did not append ;autocommit=true to the Phoenix JDBC URL, and the changed data was never committed. After adding ;autocommit=true, connection.commit() was executed and the data was committed successfully.
>>
>>A few questions:
>>1. Sinking into MySQL does not show this problem. Can the JDBC sink really behave differently across databases?
>>2. Where is the connection auto-flush parameter set, and how does it differ from auto-commit?
>>3. A flush is performed when the buffer is full, the flush interval fires, or a checkpoint runs, via JdbcBatchingOutputFormat.flush(), and I don't see any connection.commit() there either. How does the data get committed to the database? Without an explicit transaction, does executing statement.executeBatch() alone commit?
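The visibility effect described in this thread (batched writes invisible to other sessions until a commit) can be reproduced with any transactional database. Here is a minimal sketch using Python's sqlite3 as a stand-in for a JDBC connection; the file name is arbitrary:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (v INTEGER)")
writer.commit()  # make the table visible to other connections

writer.execute("INSERT INTO t VALUES (1)")  # batched write, not yet committed

reader = sqlite3.connect(path)  # a second session, like another client
before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

writer.commit()  # the explicit commit a non-autocommit driver requires
after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

print(before, after)  # prints: 0 1
```

With autocommit enabled, the driver issues this commit implicitly after each statement (or batch), which matches the observation that adding ;autocommit=true made the Phoenix writes visible.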


Re: After creating a view in Flink, using OPTIONS in the SELECT statement fails

2022-02-16, by godfrey he
Hi liangjinghong,

The cause is that the blink planner introduces a modified SqlTableRef class, while the legacy planner does not, so the (buggy) SqlTableRef from Calcite gets loaded instead.
Solution: if you only use the blink planner, you can remove the legacy planner jar from lib.

Best,
Godfrey

liangjinghong wrote on Mon, Feb 14, 2022 at 17:26:

> Hello everyone, the following code runs in my development environment, but after packaging and deploying it fails:
>
> Code:
>
> CREATE VIEW used_num_common
>
> (toolName,region,type,flavor,used_num)
>
> AS
>
> select info.toolName as toolName,r.regionName as
> region,f.type,f.flavor,count(1) as used_num from
>
> tbl_schedule_job/*+ OPTIONS('server-id'='1001-1031') */ job
>
> join
>
> tbl_schedule_task/*+ OPTIONS('server-id'='2001-2031') */ task
>
> on job.jobId = task.jobId
>
> join
>
> tbl_broker_node/*+ OPTIONS('server-id'='3001-3031') */  node
>
> on task.nodeId = node.id
>
> join
>
> tbl_app_info/*+ OPTIONS('server-id'='4001-4031') */ info
>
> on job.appId = info.appId
>
> join
>
> tbl_region r
>
> on node.region/*+ OPTIONS('server-id'='5001-5031') */ = r.region
>
> join
>
> tbl_flavor/*+ OPTIONS('server-id'='6001-6031') */  f
>
> on node.resourcesSpec = f.flavor
>
> where job.jobStatus in ('RUNNING','ERROR','INITING')
>
> and task.taskStatus in ('RUNNING','ERROR','INITING')
>
> and node.machineStatus <> 'DELETED'
>
> and toolName is not null
>
> group by info.toolName,r.regionName,f.type,f.flavor
>
> …
>
> The error after packaging and deployment:
>
> The main method caused an error: class org.apache.calcite.sql.SqlSyntax$6:
> SPECIAL
>
> 2022-02-08 13:33:39,350 WARN
> org.apache.flink.client.deployment.application.DetachedApplicationRunner []
> - Could not execute application:
>
> org.apache.flink.client.program.ProgramInvocationException: The main
> method caused an error: class org.apache.calcite.sql.SqlSyntax$6: SPECIAL
>
> at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
> ~[flink-dist_2.11-1.13.0.jar:1.13.0]
>
> at
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
> ~[flink-dist_2.11-1.13.0.jar:1.13.0]
>
> at
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
> ~[flink-dist_2.11-1.13.0.jar:1.13.0]
>
> at
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:84)
> ~[flink-dist_2.11-1.13.0.jar:1.13.0]
>
> at
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:70)
> ~[flink-dist_2.11-1.13.0.jar:1.13.0]
>
> at
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:102)
> ~[flink-dist_2.11-1.13.0.jar:1.13.0]
>
> at
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
> [?:?]
>
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> [?:?]
>
> at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
>
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
> [?:?]
>
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> [?:?]
>
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> [?:?]
>
> at java.lang.Thread.run(Thread.java:829) [?:?]
>
> Caused by: java.lang.UnsupportedOperationException: class
> org.apache.calcite.sql.SqlSyntax$6: SPECIAL
>
> at org.apache.calcite.util.Util.needToImplement(Util.java:1075)
> ~[flink-table_2.11-1.13.0.jar:1.13.0]
>
> at org.apache.calcite.sql.SqlSyntax$6.unparse(SqlSyntax.java:116)
> ~[flink-table_2.11-1.13.0.jar:1.13.0]
>
> at org.apache.calcite.sql.SqlOperator.unparse(SqlOperator.java:329)
> ~[flink-table_2.11-1.13.0.jar:1.13.0]
>
> at org.apache.calcite.sql.SqlDialect.unparseCall(SqlDialect.java:453)
> ~[flink-table_2.11-1.13.0.jar:1.13.0]
>
> at org.apache.calcite.sql.SqlCall.unparse(SqlCall.java:101)
> ~[flink-table_2.11-1.13.0.jar:1.13.0]
>
> at
> org.apache.calcite.sql.SqlJoin$SqlJoinOperator.unparse(SqlJoin.java:199)
> ~[flink-table_2.11-1.13.0.jar:1.13.0]
>
> at org.apache.calcite.sql.SqlDialect.unparseCall(SqlDialect.java:453)
> ~[flink-table_2.11-1.13.0.jar:1.13.0]
>
> at org.apache.calcite.sql.SqlCall.unparse(SqlCall.java:104)
> ~[flink-table_2.11-1.13.0.jar:1.13.0]
>
> at
> org.apache.calcite.sql.SqlJoin$SqlJoinOperator.unparse(SqlJoin.java:199)
> ~[flink-table_2.11-1.13.0.jar:1.13.0]
>
> at org.apache.calcite.sql.SqlDialect.unparseCall(SqlDialect.java:453)
> ~[flink-table_2.11-1.13.0.jar:1.13.0]
>
> at org.apache.calcite.sql.SqlCall.unparse(SqlCall.java:104)
> ~[flink-table_2.11-1.13.0.jar:1.13.0]
>
> at
> org.apache.calcite.sql.SqlJoin$SqlJoinOperator.unparse(SqlJoin.java:199)
> ~[flink-table_2.11-1.13.0.jar:1.13.0]
>
> at org.apache.calcite.sql.SqlDialect.unparseCall(SqlDialect.java:453)
> ~[flink-table_2.11-1.13.0.jar:1.13.0]
>
> at org.apache.calcite.sql.SqlCall.unparse(SqlCall.java:104)
>