| From | <1227581...@qq.com.INVALID> |
| Date | 2024-06-16 21:08 |
| To | user-zh |
| Subject | Re: Flink real-time join question |
Hi,
Should this be done with the Flink SQL API or the DataStream API?
| From | <1227581...@qq.com.INVALID> |
| Date | 2024-06-16 20:35 |
| To | user-zh |
| Subject | Flink real-time join question |
Requirements:
1. The DWD layer is written to Kafka (DWD).
2. Kafka
3. Use Flink to consume Kafka topics 1 and 2, join them in Flink, and write the joined result back to Kafka as DWD 1.
> To: "user-zh"
>
> Date: Thursday, February 25, 2021
A question about Flink SQL Join.
| JasonLee |
| jasonlee1...@163.com |
??
??2021??02??25?? 14:40??Suhan ??
benchao??joinrocketmqflink??kafka
+ rocket mq
??flink?
------ Original message ------
From: "lxk7...@163.com" <017...@163.com>;
> Sent: Monday, July 6, 2020, 11:12 AM
> To: "user-zh"
> Subject: Re: [Flink join memory problem]
>
>
>
> A regular join is indeed like that; if the data volume is large, you can use an interval join or a temporal join instead.
>
> > On July 5, 2020 at 3:50 PM, 忝忝向仧 <153488...@qq.com> wrote:
> >
> > Hi,all:
> >
Hi:
Does an interval join also require a key?
Compared with a regular join, does an interval join still keep per-key state for each stream?
------ Original message ------
From:
A regular join is indeed like that; if the data volume is large, you can use an interval join or a temporal join instead.
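To make the state argument above concrete, here is a plain-Python sketch (not the Flink API; the eviction rule is a simplification) of why an interval join can bound per-key state while a regular join must retain every row ever seen:

```python
from collections import defaultdict

def regular_join_state(events):
    """Regular streaming join: every row ever seen stays in per-key state."""
    state = defaultdict(list)
    for key, ts in events:
        state[key].append(ts)  # nothing is ever evicted
    return sum(len(rows) for rows in state.values())

def interval_join_state(events, lower_bound, watermark_lag):
    """Interval join: a row can be dropped once the watermark guarantees
    no future row can still fall inside its join interval."""
    state = defaultdict(list)
    watermark = float("-inf")
    for key, ts in events:
        watermark = max(watermark, ts - watermark_lag)
        state[key].append(ts)
        # evict rows whose interval [t, t + lower_bound] is entirely behind the watermark
        state[key] = [t for t in state[key] if t + lower_bound >= watermark]
    return sum(len(rows) for rows in state.values())

events = [("user1", t) for t in range(100)]
print(regular_join_state(events))                                    # 100: grows without bound
print(interval_join_state(events, lower_bound=5, watermark_lag=3))   # 9: bounded by the interval
```

The interval join is keyed as well (state is still held per key); the difference is only that its state can be expired once the watermark passes the join interval.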
> On July 5, 2020 at 3:50 PM, 忝忝向仧 <153488...@qq.com> wrote:
>
> Hi,all:
>
> I saw this note in the JoinedStreams source code:
> In other words, the join is evaluated entirely in memory, so if one stream has too many key values it can cause an OOM.
> So what preventive measures are there?
> Scatter (split up) the side whose key has many values?
>
>
> Right now, the join is being evaluated in memory so you need to ensure that the number of elements per key does not get too high. Otherwise the JVM might crash.
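The "scatter the skewed side" idea raised in this thread is commonly implemented as key salting: the hot side's key gets a random suffix, and the other side is replicated once per suffix so matches are preserved. A plain-Python sketch (hypothetical helper names, not a Flink API):

```python
import random

NUM_SALTS = 4  # fan-out factor, chosen only for illustration

def salt_hot_side(record):
    """Rewrite a skewed key into one of NUM_SALTS random sub-keys, so the
    hot key's rows spread across several parallel partitions."""
    key, value = record
    return (f"{key}#{random.randrange(NUM_SALTS)}", value)

def replicate_other_side(record):
    """Duplicate the non-skewed side once per salt so every salted
    sub-key can still find its matching row."""
    key, value = record
    return [(f"{key}#{i}", value) for i in range(NUM_SALTS)]

salted_key, _ = salt_hot_side(("hot_user", 1))
print(salted_key)                              # e.g. "hot_user#2"
print(replicate_other_side(("hot_user", 99)))  # 4 copies, one per salt
```

After joining on the salted key, strip the "#i" suffix to recover the original key. The trade-off: the replicated side is amplified NUM_SALTS times, so this only pays off when one side is small or the skew is severe.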
Hi, all:
I saw this note in the JoinedStreams source code:
In other words, the join is evaluated entirely in memory, so if one stream has too many key values it can cause an OOM.
So what preventive measures are there?
Scatter (split up) the side whose key has many values?
"Right now, the join is being evaluated in memory so you need to ensure that the number of elements per key does not get too high. Otherwise the JVM might crash."
What about using the DataStream API?
| jimandlice |
| jimandl...@163.com |
Signature is customized by Netease Mail Master
On 2020-05-16 23:00, 1048262223 wrote:
Do it with the DataSet API.
------ Original message ------
From: "jimandlice"
Subject: A question about Flink joins.
ds1 and ds2 are both read from Kafka, using event time, with the watermark delayed by 3 s.
The windowed join produces no output?
Is output only produced when watermark_time >= window_end?
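The "no output" symptom usually means the watermark has not yet reached the window end: with a 3 s watermark delay, the watermark trails the largest event timestamp seen by 3 s, so a window only fires once an event arrives 3 s past its end. A plain-Python sketch of the firing rule (not the Flink API; tumbling windows and the fixed-lag watermark are simplifying assumptions):

```python
def fired_windows(event_ts, window_size=10, watermark_lag=3):
    """Tumbling event-time windows [start, start + window_size) fire only
    once watermark (= max event time seen - lag) >= window end."""
    fired, pending = [], set()
    max_ts = float("-inf")
    for ts in event_ts:
        pending.add(ts // window_size * window_size)  # window start for this event
        max_ts = max(max_ts, ts)
        watermark = max_ts - watermark_lag
        for start in sorted(pending):
            if watermark >= start + window_size:
                fired.append(start)
                pending.discard(start)
    return fired

print(fired_windows([1, 5, 9, 12]))  # []  — watermark only reaches 9, window [0, 10) never fires
print(fired_windows([1, 5, 9, 13]))  # [0] — watermark reaches 10 >= window end, window fires
```

So a join over ds1 and ds2 that "never outputs" is often just waiting for a late-enough event on both streams to advance the watermark past window_end.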