Hi all,

Background:
We tested Flink reading data from Kafka and writing it to Hudi, with a checkpoint interval of 1 min and state.backend set to filesystem. The results are as follows:

Checkpoint latency when writing to Hudi:
Checkpoint latency when writing to Iceberg:

Question: the checkpoint data for the Hudi job is much larger than for Iceberg. How can we reduce the checkpoint latency when Flink writes to Hudi?
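For reference, a minimal Flink SQL sketch of this kind of setup (the table name, path, and option values below are assumptions, not taken from the original test). With Hudi, using MERGE_ON_READ with async compaction moves compaction work off the checkpoint path, which is one common way to reduce checkpoint latency:

```sql
-- Checkpoint every minute with a filesystem state backend, as in the test.
SET 'execution.checkpointing.interval' = '1min';
SET 'state.backend' = 'filesystem';

-- Hypothetical Hudi sink table.
CREATE TABLE hudi_sink (
  id   BIGINT,
  data STRING,
  ts   TIMESTAMP(3),
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs:///tmp/hudi_sink',     -- assumed path
  'table.type' = 'MERGE_ON_READ',
  'compaction.async.enabled' = 'true',  -- keep compaction out of checkpoints
  'write.tasks' = '4'                   -- assumed write parallelism
);
```

Whether this helps depends on the actual bottleneck (state size vs. flush/commit cost during the checkpoint), so it is worth checking the checkpoint details in the Flink UI first.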
博星
15868861...@163.com
1. For a left join, should we use the Flink SQL API or the DataStream API?

> From: <1227581...@qq.com.INVALID>
> Date: 2024-06-16 20:35
> To: user-zh
> Subject: Flink join [remainder garbled]
>
> 1. DWD [garbled] Kafka [garbled] DWD
> 2. Kafka [garbled]
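On point 1, both APIs can express a left join; in Flink SQL it is a single clause. A minimal sketch (table and column names here are assumptions for illustration):

```sql
-- Hypothetical tables: keep every order, with user columns
-- becoming NULL when no matching user exists.
SELECT o.order_id, o.amount, u.user_name
FROM orders AS o
LEFT JOIN users AS u
  ON o.user_id = u.user_id;
```

The DataStream API can do the same with a CoProcessFunction or interval join, but it requires hand-written state handling, so SQL is usually the simpler starting point.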
Hi Biao,
I agree with you that this exception is not very meaningful and can be
noisy in the JM logs, especially when running large-scale batch jobs in a
session cluster.
IIRC, there isn't currently a config to filter out or silence such exceptions
in batch mode, so I've created a JIRA ticket (
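Until such a config exists, one possible workaround is to raise the log level for the offending logger in Flink's conf/log4j.properties. This is only a sketch: the logger name below is an assumption and should be replaced with the class that actually emits the exception in your JM logs.

```
# Hypothetical: silence a noisy logger by raising its level to WARN.
logger.noisy.name = org.apache.flink.runtime.jobmaster.JobMaster
logger.noisy.level = WARN
```

The trade-off is that this suppresses all INFO-level messages from that class, not just the noisy exception.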
Hello, I am trying to do vector assembling with Flink 1.15, but I get the following. How can I solve it, please?

2024-06-16 03:47:24 DEBUG Main:114 - Assembled Data Table Schema: root
|-- tripId: INT
|-- stopId: INT
|-- routeId: INT
|-- stopSequence: INT
|-- speed: DOUBLE
|-- currentStatus: INT
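The message does not include the actual error, but a common pitfall when assembling feature vectors is mixing integer and double input columns. If that is the issue here, one hypothetical fix is to cast the INT columns to DOUBLE before assembling (column names are taken from the schema above; `trips` is an assumed source table name):

```sql
-- Hypothetical: cast INT columns to DOUBLE so all assembler
-- inputs share a numeric type. `trips` is an assumed table name.
SELECT
  CAST(tripId AS DOUBLE)        AS tripId,
  CAST(stopId AS DOUBLE)        AS stopId,
  CAST(routeId AS DOUBLE)       AS routeId,
  CAST(stopSequence AS DOUBLE)  AS stopSequence,
  speed,
  CAST(currentStatus AS DOUBLE) AS currentStatus
FROM trips;
```

If the problem persists, posting the full exception stack trace would make it much easier to diagnose.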