Hi Kirti,
I think you can refer to doc [1] and create a table over your S3 file system
(put your S3 path in the `path` field), then submit jobs that write and read
data with S3.
You can refer to [2] if your jobs use the `DataStream` API.
[1]
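For illustration only (the table name, schema, format, and bucket path below are hypothetical, not taken from the original thread), such a filesystem table backed by S3 could look like:

```sql
-- Hypothetical example: a filesystem-connector table whose 'path' points at S3.
-- Writes and reads against this table then go directly to the bucket.
CREATE TABLE s3_orders (
    order_id BIGINT,
    amount   DOUBLE
) WITH (
    'connector' = 'filesystem',
    'path'      = 's3://my-bucket/orders/',  -- your S3 path goes here
    'format'    = 'csv'                      -- format is an assumption
);
```

This assumes the S3 filesystem plugin and credentials are already configured for the cluster.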
Hi Hangxiang,
Thanks for your reply.
Below is the key code for this issue. The main_stream table is the streaming
source; events arrive roughly every 500 ms to 1 s. Applying
assignTimestampsAndWatermarks(WatermarkStrategy.noWatermarks()) to t1minStream
and t5minStream currently avoids the failure this problem causes, but the
output then loses data.
Any other ideas would be much appreciated.
String t1minSql = "SELECT rowtime, key, id, AVG(num) OVER w_t1min AS avg_t1min
FROM
Thanks Shammon.
Is there any way to verify that the File Source reads files directly from S3?
Regards,
Kirti Dhar
From: Shammon FY
Sent: 25 September 2023 06:27
To: Kirti Dhar Upadhyay K
Cc: user@flink.apache.org
Subject: Re: Flink File Source: File read strategy
Hi Kirti,
I think the default
Hi,
You can set table.exec.state.ttl to control the state TTL of stateful operators. See [1] for more information.
[1]
https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/table/concepts/overview/#%e7%8a%b6%e6%80%81%e7%ae%a1%e7%90%86
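For example, in the SQL client (the one-hour value below is illustrative, not a recommendation):

```sql
-- Drop operator state that has been idle for one hour
SET 'table.exec.state.ttl' = '1 h';
```

Note that changing the TTL changes result semantics: expired state is simply gone, so late-arriving updates against it are no longer applied.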
Best,
Jane
On Thu, Sep 21, 2023 at 5:17 PM faronzz wrote:
> Try this
Hi,
Please send an email with any content to user-zh-unsubscr...@flink.apache.org
to unsubscribe from the user-zh@flink.apache.org mailing list;
you can refer to [1][2] for details on managing your subscription.
Best,
Hi Alexis,
If you create the OutputTag with the constructor `OutputTag(String id)`,
you need to make it an anonymous subclass for Flink to analyze the type
information. But if you use the constructor `OutputTag(String id,
TypeInformation typeInfo)`, you need not make it anonymous, as you
have provided the type information explicitly.
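To illustrate why the anonymous subclass matters, here is a self-contained sketch using a hypothetical `Tag` class as a stand-in for Flink's OutputTag (this is not Flink's actual implementation). Java erases generic type arguments at runtime, but an anonymous subclass records them in its generic superclass, where reflection can recover them:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

// Hypothetical stand-in for OutputTag, showing the "type token" mechanism.
class Tag<T> {
    private final String id;

    Tag(String id) {
        this.id = id;
    }

    // Returns the captured type argument, or null if the instance was
    // created without subclassing (the type argument was erased).
    Type capturedType() {
        Type sup = getClass().getGenericSuperclass();
        if (sup instanceof ParameterizedType) {
            return ((ParameterizedType) sup).getActualTypeArguments()[0];
        }
        return null;
    }
}

public class TypeTokenDemo {
    public static void main(String[] args) {
        // Anonymous subclass: the trailing {} makes the type recoverable.
        Tag<String> anonymous = new Tag<String>("late-data") {};
        // Plain instantiation: the type argument is erased at runtime.
        Tag<String> plain = new Tag<String>("late-data");

        System.out.println(anonymous.capturedType()); // class java.lang.String
        System.out.println(plain.capturedType());     // null
    }
}
```

This is why Flink can only analyze the element type when the tag is created as an anonymous subclass; with the two-argument constructor, the TypeInformation you pass makes reflection unnecessary.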
Hi, is this a SQL job or a DataStream job? Could you provide some key SQL or code that reproduces the issue?
On Sat, Sep 23, 2023 at 3:59 PM Phoes Huang wrote:
> Hi,
>
> Running locally on a single machine during development, I hit this problem. Has anyone encountered and solved it?
>
> 2023-09-23 13:52:03.989 INFO
> [flink-akka.actor.default-dispatcher-9][Execution.java:1445] - Interval
> Join (19/20)
>
Hi Kamal,
Indeed, Flink does not handle this exception. When this exception occurs,
the Flink job fails directly and internally keeps restarting, continuously
creating new files.
Personally, I think this logic can be optimized: when this exception
occurs, the file with the exception should be
Hi, Yifan.
Unfortunately, IIUC, we can currently only get the key and value types by
reading the related SQL code.
I think it would be useful if we could support SQL semantics for the
Processor API, but it would indeed take a lot of effort.
On Thu, Sep 21, 2023 at 12:05 PM Yifan He via user
wrote:
> Hi
After using the jemalloc memory allocator for a period of time, checkpoints
time out and the job gets stuck. Has anyone encountered this? Flink
version: 1.13.2, jemalloc version: 5.3.0
Hello,
Can you please explain why Flink is not able to handle the exception and
keeps creating files continuously without closing them?
Rgds,
Kamal
From: Kamal Mittal via user
Sent: 21 September 2023 07:58 AM
To: Feng Jin
Cc: user@flink.apache.org
Subject: RE: About Flink parquet format
Yes.
Hi Kirti,
I think the default file `Source` does not download files locally in Flink,
but reads them directly from S3. However, Flink also supports configuring
temporary directories through `io.tmp.dirs`. If it is a user-defined
source, it can be obtained from FlinkS3FileSystem. After the Flink
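For reference, a hedged sketch of how the temporary-directory option is set (the directory path below is illustrative, not from the original message):

```yaml
# flink-conf.yaml — io.tmp.dirs controls where Flink places temporary files;
# the path below is only an example
io.tmp.dirs: /mnt/flink/tmp
```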
A question: with Flink's two-phase commit for sink operators, at which phase of checkpointing is the pre-commit triggered? And what work does the pre-commit actually do?