Hi,
I recently started working on a Flink job using the Elasticsearch SQL
connector. Our Elasticsearch cluster requires an SSL certificate to connect,
and there is no option to set the certificate in the current version of the
Elasticsearch SQL connector (Elasticsearch | Apache Flink
No, it isn't. MVP stands for Minimum Viable Product, i.e. a version that
implements only the core functionality and is then improved further based on
feedback from early users.
Best,
Zhanghao Chen
From: guanyq
Sent: Saturday, April 2, 2022 14:56
To: user-zh@flink.apache.org
Subject: flink 1.15
I watched the FFA (Flink Forward Asia) talk on unified stream and batch
processing. Flink 1.15 introduces an MVP version of dynamic table storage for
stream-batch unification.
Is the MVP version a paid edition?
Dear Flinkers:
As "CheckpointProperties#CHECKPOINT_RETAINED_ON_CANCELLATION" shows,
if Job stopped with JobStatus#FINISHED "CompletedCheckpointStore" will
discard all completed checkpoints.
My question is, why job on the FINISHED status the
CompletedCheckpointStore discard all completed
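For readers following the thread, here is a minimal sketch (assuming the
DataStream API; exact method names vary slightly across Flink versions) of how
externalized checkpoint retention on cancellation is typically enabled. As the
question above notes, checkpoints of a FINISHED job are still discarded:

import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RetainedCheckpointExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a checkpoint every 60 seconds.
        env.enableCheckpointing(60_000);

        // Retain externalized checkpoints when the job is cancelled.
        // When the job reaches FINISHED, completed checkpoints are still
        // discarded, which is the behaviour the question above is about.
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);

        // ... define sources, transformations and sinks, then env.execute() ...
    }
}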
Hi!
The main difference at the moment is the programming language and the APIs
used to interact with Flink.
The flink-kubernetes-operator uses Java and interacts with Flink using the
built-in (native) clients.
The other operators have been around since earlier Flink versions. They all
use
Hi
I recently started looking into Flink, more specifically the
flink-kubernetes-operator, so I only know a little about it. I found at least
three other Flink K8s operators developed by Lyft, Google, and Spotify. Could
someone please enlighten me on the differences between these third-party Flink
Hi,
Got it. It seems this way is not flexible enough, but still thanks so much for
your great support! Best wishes!
Regards && Thanks
Hunk
At 2022-04-02 16:34:29, "Qingsheng Ren" wrote:
>Hi,
>
>If the schema of the records is not fixed, I'm afraid you have to do it in a UDTF.
>
>Best,
>
Hi,
If the schema of the records is not fixed, I'm afraid you have to do it in a UDTF.
Best,
Qingsheng
> On Apr 2, 2022, at 15:45, wang <24248...@163.com> wrote:
>
> Hi,
>
> Thanks for your quick response!
>
> And I tried the format "raw", seems it only support single physical column,
> and in
In my view, stream processing emphasizes continuity, whether records are
handled one by one or batch by batch. For a custom SQL result set, the best way
I can think of to guarantee continuity is an auto-increment id (which implies
accepting only insert operations). Sorting and deduplicating within a batch is
of uncertain overall benefit unless every batch is strictly partitioned
(e.g. by date); filtering, however, does help.
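To make the auto-increment id idea above concrete, here is a small, purely
illustrative sketch (the JDBC URL, the "events" table and its columns are all
hypothetical) of polling a custom SQL result set by a monotonically increasing
id, so that it behaves like an insert-only, continuous stream:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class IncrementalIdPoller {
    public static void main(String[] args) throws Exception {
        long lastId = 0L; // in a real job this offset would be checkpointed
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/demo", "user", "password")) {
            while (true) {
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT id, payload FROM events WHERE id > ? ORDER BY id")) {
                    ps.setLong(1, lastId);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            lastId = rs.getLong("id");          // advance the offset
                            System.out.println(rs.getString("payload")); // hand the row downstream
                        }
                    }
                }
                Thread.sleep(1_000); // poll interval; only newly inserted rows are read
            }
        }
    }
}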
Michael Ran wrote on Fri, Apr 1, 2022 at 11:00:
> Custom SQL data sets were proposed back then, but the community rejected the
> approach. Functionally, though, we did implement custom SQL result sets and
> use them for joins and similar operations, which has some advantages for
> large data sets and for scenarios such as sorting and removing duplicates.
> On
Thanks for everyone's help. By executing:
helm repo add operator-rc3
https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-0.1.0-rc3
it now works correctly.
On 04/02/2022 15:31, Gyula Fóra wrote:
As Biao Geng said this will be much easier after the first preview release.
Hi,
Thanks for your quick response!
And I tried the format "raw", seems it only support single physical column, and
in our project reqiurement, there are more than one hundred columns in sink
table. So I need combine those columns into one string in a single UDF?
Thanks && Regards,
Hunk
As Biao Geng said this will be much easier after the first preview release.
It should become available on Monday if all works out :)
Until then you can also test our last release candidate, which will
hopefully become the release:
helm repo add operator-rc3
Hi Spoon,
The command in the current doc (helm install flink-kubernetes-operator
helm/flink-kubernetes-operator) should be executed under the repo's root dir
(e.g. ~/flink-kubernetes-operator/).
The community is working on making this process
Hi,
You can construct the final JSON string in your UDTF and write it to a Kafka
sink table with only one field, which is the entire JSON string built in the
UDTF, using the raw format [1] in the sink table:
CREATE TABLE TableSink (
  `final_json_string` STRING
) WITH (
  'connector' =
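As a companion to the DDL above, here is a minimal, illustrative UDTF sketch in
Java (the function name, input columns and JSON layout are placeholders; a real
implementation would serialize whatever columns your records actually have,
e.g. with a JSON library such as Jackson):

import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.annotation.FunctionHint;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

// Hypothetical UDTF that flattens its inputs into one JSON string column,
// matching the single STRING field of the raw-format sink sketched above.
@FunctionHint(output = @DataTypeHint("ROW<final_json_string STRING>"))
public class BuildJsonFunction extends TableFunction<Row> {

    public void eval(String name, Integer score) {
        // Plain string concatenation keeps the sketch self-contained;
        // a real UDTF would build the JSON from all source columns.
        String json = "{\"name\":\"" + name + "\",\"score\":" + score + "}";
        collect(Row.of(json));
    }
}

It can then be registered with
tableEnv.createTemporarySystemFunction("build_json", BuildJsonFunction.class)
and used in the INSERT INTO statement that feeds TableSink.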
Hi, I am trying it according to the official flink-kubernetes-operator
documentation. Following the description in 'Quick Start', when I execute the
command:
helm install flink-kubernetes-operator helm/flink-kubernetes-operator
it returns an error:
Error: INSTALLATION FAILED:
I watched the FFA (Flink Forward Asia) talk on unified stream and batch
processing. Flink 1.15 introduces an MVP version of dynamic table storage for
stream-batch unification.
Is the MVP version a paid edition?
Hi,
Thanks so much for your support!
But sorry to say I'm still confused about it. No matter what the UDF looks
like, the first thing I need to confirm is the type of 'content' in TableSink.
What should its type be? Should I use type Row, like this?
CREATE TABLE TableSink