We want to migrate our Flink SQL jobs, currently running on Hadoop YARN, to K8s, and switch the state storage medium from HDFS to object storage, so that the jobs can restore from their previous savepoints and the upgrade is transparent to users.
However, Flink state files contain absolute paths in their contents, so we cannot do this by simply copying the files physically.
From the official docs, the Flink State Processor API currently requires the operator uid and the state type as parameters in order to read state. The problem is that the uids of Flink SQL jobs are auto-generated, and we have no way of knowing the state types either. Is there an API that can traverse all the state saved under a directory and save it to a directory on another file system? It feels like the State Processor
Hello all, I've realized that my previous mail had an error which caused
invisible text, so I'm resending it below.
Hello all, I have found that the Flink SQL client doesn't work with
the *"partition by"* clause.
Is this a bug?
It's a bit weird, since when I execute the same SQL with
Hi Guozhen, our team also uses Flink as an ad-hoc query engine. Can we talk
about it?
Guozhen Yang wrote on Thu, Jul 20, 2023 at 11:58:
> Hi Rajat,
>
> We are using Apache Zeppelin as our entry point for submitting Flink
> ad-hoc queries (and Spark jobs, actually).
>
> It supports interactive queries, data
Hello all, I have found that the Flink SQL client doesn't work with the "partition
by" clause.
Is this a bug? It's a bit weird, since when I execute the same SQL with
tableEnv.executeSql(statement) in code it works as expected. Has anyone
tackled this kind of issue? I have tested on Flink version 1.16.1.
Thanks in
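For context, here is a minimal sketch of the kind of statement involved: a deduplication query using PARTITION BY inside an OVER window, which is the usual place this clause appears in Flink SQL. The table `orders` and its columns are hypothetical, made up purely for illustration:

```sql
-- Hypothetical deduplication query: keep the latest row per user_id.
-- Table `orders` and all column names are assumptions for this sketch.
SELECT order_id, user_id, proc_time
FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY proc_time DESC) AS rn
  FROM orders
)
WHERE rn = 1;
```

If a statement like this succeeds through `tableEnv.executeSql(...)` but fails in the SQL client, comparing the exact error message from the client would help narrow down whether it is a parsing or a planner issue.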
hi all,
Hi,
As the subject says: how can I implement a join on bounded streams with the DataStream API? When I call join, I am required to specify a window. How can I avoid that, or do I need to use the SQL
API instead?
Thanks,
鱼
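For what it's worth, a sketch of the SQL-API alternative being asked about: in batch execution mode, a plain equi-join over bounded inputs needs no window. The table and column names below are hypothetical:

```sql
-- Run the query in batch mode so bounded inputs are joined without windows.
SET 'execution.runtime-mode' = 'batch';

-- Hypothetical bounded tables `orders` and `users`; names are assumptions.
SELECT o.order_id, u.user_name
FROM orders AS o
JOIN users AS u
  ON o.user_id = u.user_id;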
That doesn't solve the root problem. The main issue is that we have a lot of jobs; the business requires retaining several thousand of them.
阿华田
a15733178...@163.com
On Jul 28, 2023 at 11:28, Shammon FY wrote:
Hi,
You can control how many jobs are retained via the `jobstore.max-capacity` and `jobstore.expiration-time` settings; see [1] for the details of these parameters.
[1]
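A sketch of how those two options might look in flink-conf.yaml. The values below are illustrative only, not recommendations:

```yaml
# Illustrative values: keep up to 5000 completed jobs in the job store,
# expiring entries after 24 hours (jobstore.expiration-time is in seconds).
jobstore.max-capacity: 5000
jobstore.expiration-time: 86400
```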