Thank you both! I'll try to switch the scheduler to "AdaptiveBatchScheduler".
Best,
Irakli
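For anyone finding this thread later, switching the scheduler is a one-line configuration change. A minimal flink-conf.yaml sketch (option name as documented for recent Flink releases; verify against your version's configuration reference):

```
# flink-conf.yaml
# Select the adaptive batch scheduler explicitly (sketch; check your Flink version)
jobmanager.scheduler: AdaptiveBatch
```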
From: Junrui Lee
Sent: 05 March 2024 03:50
To: user
Subject: Re: Batch mode execution
Hello Irakli,
The error is due to the fact that the Adaptive Scheduler doesn’t
Hi, to unsubscribe from the user-zh@flink.apache.org mailing list, send an email with any content to user-zh-unsubscr...@flink.apache.org. For details on managing mailing-list subscriptions, see [1].
[1] https://flink.apache.org/zh/what-is-flink/community/
Best,
Shawn Huang
雷刚 wrote on Thu, Feb 29, 2024 at 14:41:
> Unsubscribe
Flink provides an end-to-end latency metric; see the documentation [1] to check whether it helps.
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.18/zh/docs/ops/metrics/#end-to-end-latency-tracking
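As a sketch of how that metric is switched on (latency tracking is disabled by default; the interval is in milliseconds — verify the option name for your Flink version):

```
# flink-conf.yaml
metrics.latency.interval: 60000   # emit latency markers every 60 s
```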
Best,
Shawn Huang
casel.chen wrote on Wed, Feb 21, 2024 at 15:31:
> A Flink SQL job consumes from Kafka the canal data coming from MySQL
>
Hi Gabriele,
Quick answer: You can use the built-in window operators which have been
integrated with state backends including RocksDB.
Thanks,
Zakelly
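To illustrate why the built-in windows pair well with state backends such as RocksDB: with a reduce-style (incremental) aggregation, the per-window state the backend must store is a single accumulated value rather than a buffer of all elements. A plain-Java sketch of that idea, with no Flink dependency (the class and method names are illustrative only):

```java
import java.util.List;

public class IncrementalAggSketch {
    // Mimics what Flink's ReduceFunction gives you inside a window:
    // the only per-window state is one accumulated value, updated as
    // each element arrives, instead of a buffer of all elements.
    static long reduceSum(List<Long> elementsInArrivalOrder) {
        long acc = 0L; // the single value the state backend would keep
        for (long e : elementsInArrivalOrder) {
            acc += e;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(reduceSum(List.of(1L, 2L, 3L, 4L))); // prints 10
    }
}
```

With a full window buffer, state grows linearly in the number of elements; with the incremental form above it stays constant per window.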
On Tue, Mar 5, 2024 at 10:33 AM Zhanghao Chen
wrote:
> Hi Gabriele,
>
> I'd recommend extending the existing window function whenever
Hello Irakli,
The error occurs because the Adaptive Scheduler doesn't support
batch jobs, as detailed in the Flink documentation [1]. When operating in
reactive mode, Flink automatically decides which type of scheduler to use.
For batch execution, the default scheduler is
Hi Gabriele,
I'd recommend extending the existing window functions whenever possible, as
Flink will automatically handle state management for you, with no need to be
concerned with state backend details. Incremental aggregation to reduce state
size also works out of the box if your usage can be
Hi,
java.util.Date has no corresponding standard SQL type, so Flink falls back to the RAW (structured) type for it. java.sql.Date, in contrast, maps to the SQL DATE type.
For details, see this table: https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/types/#data-type-extraction
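A small sketch of the practical workaround (plain Java, no Flink dependency; the field values are illustrative): converting java.util.Date to java.sql.Date before the value reaches type extraction lets Flink pick SQL DATE instead of RAW.

```java
public class DateMappingSketch {
    public static void main(String[] args) {
        // java.util.Date has no SQL counterpart, so Flink's type extraction
        // would fall back to RAW for this field.
        java.util.Date utilDate = new java.util.Date(86_400_000L); // 1970-01-02 UTC

        // java.sql.Date maps to SQL DATE; converting preserves the instant.
        java.sql.Date sqlDate = new java.sql.Date(utilDate.getTime());

        System.out.println(sqlDate.getTime()); // prints 86400000
    }
}
```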
--
Best!
Xuyang
On 2024-03-05 09:23:38, "ha.fen...@aisino.com" wrote:
> When converting a stream into a Table
Hello Irakli and thank you for your question.
I guess that somehow Flink enters the "reactive" mode while the adaptive
scheduler is not configured.
I would try two options to isolate your issue:
• Try forcing the scheduling mode
Hello,
I have a Flink job which processes a bounded number of events. Initially, I
was running the job in "STREAMING" mode, but I realized that running it in
"BATCH" mode is better, as I don't have to deal with the Watermark
Strategy. The job is reading the data from the Kafka topic
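For readers following along: the runtime mode mentioned above can be selected per job without code changes. A minimal configuration sketch (verify the option name against your Flink version's documentation):

```
# per-job configuration
# (equivalently on the command line: flink run -Dexecution.runtime-mode=BATCH ...)
execution.runtime-mode: BATCH
```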
It should be compatible. There is no compatibility matrix, but it is
compatible with most versions in use (at the various companies, users,
etc.).
Gyula
On Thu, Feb 29, 2024 at 6:21 AM 吴圣运 wrote:
> Hi,
>
> I'm using flink-operator-1.5.0 and I need to deploy it to Kubernetes 1.20.
> I
Dear Flink Community,
I am using Flink with the DataStream API and operators implemented as
RichFunctions. I know that Flink provides a set of window-based
operators with time-based semantics and tumbling/sliding windows.
By reading the Flink documentation, I understand that there is
Hi Arjun,
I have raised a Jira for this case and attached a patch:
https://issues.apache.org/jira/browse/FLINK-34565
-Surendra
On Wed, Feb 21, 2024 at 12:48 AM Surendra Singh Lilhore <
surendralilh...@apache.org> wrote:
> Hi Arjun,
>
> Yes, direct support for external configuration files