Hi, Вова.
Junrui is right. As far as I know, every time a SQL statement is re-executed, Flink
regenerates the plan, builds the JobGraph,
and runs the job again. There is no cache to speed up this process. The state
backend is used when your job is stopped
and you want to resume from its state.
Hi Вова,
In Flink, there is no built-in mechanism for caching SQL query results;
every query execution is independent, and results are not stored for future
queries. The StateBackend's role is to maintain operator state within
jobs, such as aggregations or windowing, which is critical for
Flink version: 1.18
The scenario is as follows:
Table A columns:
id, update_time (DATE type)
One row of data:
1, 2023-01-12
I need to keep rows where update_time + 1 year is later than the current date.
A simple SQL:
select
id,update_time
from A
where TIMESTAMPADD(YEAR,1,update_time) > CURRENT_DATE;
Result:
On 2024-01-11, the WHERE condition holds and this row is not filtered out;
on 2024-01-12, the SQL does not trigger any recomputation to filter this row out.
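The behavior described above follows from when the predicate is evaluated: in a streaming job, the WHERE clause runs once as each row passes through the filter operator, and nothing re-evaluates it for rows that were already emitted. A minimal Python sketch of that semantics (an illustration only, not Flink code; leap-day handling is omitted):

```python
from datetime import date

def one_year_later(d: date) -> date:
    # Calendar-aware "+1 year" (Feb 29 handling omitted for brevity)
    return d.replace(year=d.year + 1)

def passes_filter(update_time: date, today: date) -> bool:
    # The WHERE predicate: TIMESTAMPADD(YEAR, 1, update_time) > CURRENT_DATE
    return one_year_later(update_time) > today

row_update_time = date(2023, 1, 12)

# The predicate runs once, when the row flows through the filter:
print(passes_filter(row_update_time, date(2024, 1, 11)))  # True -> row kept
# On 2024-01-12 nothing re-evaluates the filter for the already-emitted row,
# even though the predicate would now be False:
print(passes_filter(row_update_time, date(2024, 1, 12)))  # False
```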
Hi, Tamir.
This is expected behavior. flink-connector-base is already included
in flink-dist, so we do not package it in the externalized connectors.
See this issue [1] for more details.
Best,
Hang
[1] https://issues.apache.org/jira/browse/FLINK-30400?filter=-1
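Since flink-connector-base ships inside flink-dist at runtime, one common approach (a sketch only; the version must match your own Flink distribution, and whether you need it at all depends on your build) is to declare the dependency explicitly with provided scope so it is available at compile time but not bundled into your job jar:

```xml
<!-- Hypothetical pom fragment: flink-connector-base is needed at compile
     time but is already provided by flink-dist on the cluster -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-base</artifactId>
  <version>1.18.0</version>
  <scope>provided</scope>
</dependency>
```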
Tamir Sagi
Hi community:
I'm working on a Flink cluster in YARN application mode, which is
authenticated by Kerberos.
It works well with the flink run and flink list commands, as follows:
./bin/flink run-application -t yarn-application
> ./examples/streaming/TopSpeedWindowing.jar
> ./bin/flink list -t
This worked perfectly, Xuyang, nice :)
Thanks!
On Thu, Jan 11, 2024 at 12:52 PM Xuyang wrote:
> Hi, Gyula.
> If you want Flink to fill the unspecified columns with NULL, you can try
> a SQL statement like the following:
> ```
> INSERT INTO Sink(a) SELECT a from Source
> ```
>
>
> --
> Best!
>
Hi
I updated the dynamodb connector to 4.2.0-1.18, but it does not provide
the flink-connector-base dependency, whereas 4.1.0-1.17 does. [1]
It appears in its pom only as a test-jar with test scope.
I'm working with a custom
org.apache.flink.connector.base.sink.writer.ElementConverter which
Hi, Gyula.
If you want Flink to fill the unspecified columns with NULL, you can try
a SQL statement like the following:
```
INSERT INTO Sink(a) SELECT a from Source
```
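For reference, this column-list insert behavior, where unspecified columns are filled with NULL, is standard SQL. A minimal illustration using SQLite as a stand-in (not Flink; table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Source (a TEXT)")
conn.execute("CREATE TABLE Sink (a TEXT, b TEXT)")
conn.execute("INSERT INTO Source VALUES ('x')")

# Column-list insert: the unspecified column b is filled with NULL
conn.execute("INSERT INTO Sink(a) SELECT a FROM Source")
print(conn.execute("SELECT a, b FROM Sink").fetchall())  # [('x', None)]
```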
--
Best!
Xuyang
On 2024-01-11 16:10:47, "Giannis Polyzos" wrote:
Hi Gyula,
to the best of my knowledge, this is not feasible
That matches what I'm seeing, thanks. I'll find time to look at that part of the source code.
吴先生
15951914...@163.com
Original message:
From: Zakelly Lan
Date: 2024-01-11 16:33
Subject: Re: flink-checkpoint issue
I looked at the code; the likely causes of this issue are:
1. Flink creates the chk directory first and only then writes the "Triggering checkpoint"
log line, so it is possible for the directory to be created without the trigger log
being written.
2. Job failure and triggering the next checkpoint run on two separate asynchronous
threads, so the chk-25548 directory may have been created first, then the job failed,
and the process exited before the "trigger 25548" log line was written.
After version 1.14.5 the code changed behavior 1: the log is written before the
directory is created, so this odd situation no longer occurs.
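The post-1.14.5 ordering described above can be sketched as follows (an illustrative Python simulation, not Flink's actual code; the function and directory names are invented):

```python
import logging
import tempfile
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkpoint")

def trigger_checkpoint(base: Path, chk_id: int) -> Path:
    # Post-1.14.5 ordering: write the log line first, then create the
    # directory, so a chk-N directory can never exist without its
    # "Triggering checkpoint N" log line having been written.
    log.info("Triggering checkpoint %d", chk_id)
    chk_dir = base / f"chk-{chk_id}"
    chk_dir.mkdir(parents=True)
    return chk_dir

with tempfile.TemporaryDirectory() as tmp:
    created = trigger_checkpoint(Path(tmp), 25548)
    print(created.name)  # chk-25548
```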
On Thu, Jan 11, 2024 at 3:03 PM 吴先生 <15951914...@163.com> wrote:
Hi Everyone,
I'm currently looking to understand the caching mechanism in Apache Flink
in general. As part of this exploration, I have a few questions related to
how Flink handles data caching, both in the context of SQL queries and more
broadly.
When I send a SQL query for example to
Hi Gyula,
To the best of my knowledge, this is not feasible, and you will have to do
something like *CAST(NULL AS STRING)* to insert null values manually.
Best,
Giannis
On Thu, Jan 11, 2024 at 9:58 AM Gyula Fóra wrote:
> Hi All!
>
> Is it possible to insert into a table without specifying all