Re: Re: Split a stream into any number of streams

2019-09-17 Post by cai yi
For your scenario, it seems you could use Broadcast to broadcast your custom event rules and then join them with the data stream; the actual handling can then be done in a process function...

On 2019/9/17 at 4:52 PM, "venn" wrote: I'm afraid not: both side output and split need to know in advance how many streams to split into. For example, side output requires the tag to be defined first: val late = new OutputTag[LateDataEvent]("late")
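A rough sketch of this Broadcast idea, for illustration only: the Event and Rule types, the field names, and the route helper below are hypothetical, not from the thread. The main stream is connected to a broadcast rules stream, and routing happens inside a BroadcastProcessFunction, so new event types can arrive at runtime without rebuilding the job.

import org.apache.flink.api.common.state.MapStateDescriptor
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction
import org.apache.flink.streaming.api.scala._
import org.apache.flink.util.Collector

case class Event(eventType: String, payload: String)
case class Rule(eventType: String, handler: String)

// Broadcast state: eventType -> Rule, shared with every parallel task of the main stream.
val ruleStateDescriptor = new MapStateDescriptor[String, Rule](
  "rules", createTypeInformation[String], createTypeInformation[Rule])

def route(events: DataStream[Event], rules: DataStream[Rule]): DataStream[(String, Event)] = {
  val broadcastRules = rules.broadcast(ruleStateDescriptor)

  events
    .connect(broadcastRules)
    .process(new BroadcastProcessFunction[Event, Rule, (String, Event)] {
      override def processElement(
          event: Event,
          ctx: BroadcastProcessFunction[Event, Rule, (String, Event)]#ReadOnlyContext,
          out: Collector[(String, Event)]): Unit = {
        // Look up the (dynamically updatable) rule for this event type and tag the record.
        val rule = ctx.getBroadcastState(ruleStateDescriptor).get(event.eventType)
        if (rule != null) out.collect((rule.handler, event))
      }

      override def processBroadcastElement(
          rule: Rule,
          ctx: BroadcastProcessFunction[Event, Rule, (String, Event)]#Context,
          out: Collector[(String, Event)]): Unit = {
        // New or updated rules arrive on the broadcast side and are stored in broadcast state.
        ctx.getBroadcastState(ruleStateDescriptor).put(rule.eventType, rule)
      }
    })
}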

Re: Split a stream into any number of streams

2019-09-17 Post by Jun Zhang
Would a keyBy operation on the DataStream solve this?
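For what it's worth, a minimal sketch of this keyBy idea (the Event type and its eventType field are hypothetical): keying by the distinguishing field gives per-event-type processing without enumerating the types up front. Note that keyBy partitions a single stream by key rather than producing separate DataStream objects, which may or may not match what 王佩 needs.

import org.apache.flink.streaming.api.scala._

case class Event(eventType: String, payload: String)

def processPerType(events: DataStream[Event]): DataStream[String] =
  events
    .keyBy(_.eventType)                         // one logical partition per event type, open-ended
    .map(e => s"${e.eventType}: ${e.payload}")  // per-type logic; state/timers would go in a KeyedProcessFunction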

Re: Split a stream into any number of streams

2019-09-17 Post by 王佩
Here is the situation. Suppose there are 1000 event types (distinguished by a field, and new types keep being added), all in one Kafka topic. After Flink reads from Kafka we have a single DataStream, and I want to split each event type out into its own stream, so that every event type is a DataStream that can later be processed in its own way, e.g. DataStream async I/O, or a DataStream sink to Parquet.
1. With split...select, the call select(eventName) requires the event name to be a specific, known value.
2. With side output, the output tags have to be defined in advance; with 1000 event types (and more to come) that would mean defining 1000+ output tags.
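To make the second point concrete, a small sketch of what side output forces you to do (the Event type and the list of names below are made up): every OutputTag must exist when the job graph is built, so an open-ended set of event types means maintaining the full list by hand and redeploying for each new type.

import org.apache.flink.streaming.api.scala._

case class Event(eventType: String, payload: String)

// All event names must be known before the job starts; with 1000+ growing types,
// keeping this list complete is exactly the pain point described above.
val knownEventTypes = Seq("order", "click", "pay")
val tags: Map[String, OutputTag[Event]] =
  knownEventTypes.map(name => name -> OutputTag[Event](name)).toMap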

Re: Split a stream into any number of streams

2019-09-16 Post by cai yi
You can use Side Output: route the input stream to different custom OutputTags according to your needs, and then use DataStream.getSideOutput(outputTag) to retrieve the stream you want and process it!
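A minimal sketch of this Side Output route (Event, the two tags, and splitBySideOutput are illustrative names, not from the thread): a ProcessFunction emits each record to the tag for its type, and getSideOutput then hands back one DataStream per tag.

import org.apache.flink.streaming.api.functions.ProcessFunction
import org.apache.flink.streaming.api.scala._
import org.apache.flink.util.Collector

case class Event(eventType: String, payload: String)

val orderTag = OutputTag[Event]("order")
val clickTag = OutputTag[Event]("click")

def splitBySideOutput(events: DataStream[Event]): (DataStream[Event], DataStream[Event]) = {
  val main = events.process(new ProcessFunction[Event, Event] {
    override def processElement(e: Event,
                                ctx: ProcessFunction[Event, Event]#Context,
                                out: Collector[Event]): Unit = e.eventType match {
      case "order" => ctx.output(orderTag, e)   // routed to the "order" side output
      case "click" => ctx.output(clickTag, e)   // routed to the "click" side output
      case _       => out.collect(e)            // everything else stays on the main stream
    }
  })
  // Each side output comes back as its own DataStream for independent processing.
  (main.getSideOutput(orderTag), main.getSideOutput(clickTag))
}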

Re: Split a stream into any number of streams

2019-09-16 Post by Wesley Peng
Hi

on 2019/9/17 10:28, 王佩 wrote:
> // How should I do it?
> splitStream.select("productID1").print();

If I understand that correctly, you want something like a dynamic number of sinks?

regards

Re: Split a stream into any number of streams

2019-09-16 Post by Wesley Peng
on 2019/9/17 9:55, 王佩 wrote:
> I want to split a stream into any number of streams according to a field, and then process the split stream one by one.

I think that should be easily done. Refer to:
https://stackoverflow.com/questions/53588554/apache-flink-using-filter-or-split-to-split-a-stream
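The approach in that Stack Overflow link boils down to one filter per target stream; a minimal sketch under an assumed Event(eventType, payload) type:

import org.apache.flink.streaming.api.scala._

case class Event(eventType: String, payload: String)

def splitByFilter(events: DataStream[Event]): (DataStream[Event], DataStream[Event]) = {
  val orders = events.filter(_.eventType == "order")   // keeps only "order" events
  val clicks = events.filter(_.eventType == "click")   // keeps only "click" events
  (orders, clicks)
}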

Split a stream into any number of streams

2019-09-16 Post by 王佩
Hi All,

I want to split a stream into any number of streams according to a field, and then process the split streams one by one. Can this be achieved? What should I do?

Regards,
Pei