Re: How can I handle backpressure with event time.

2017-11-05 Thread yunfan123
It seems this can't support consuming multiple topics with different
deserialization schemas.
I use protobuf, and different topics map to different schemas.
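In the Flink releases current at the time, the consumer's `KeyedDeserializationSchema#deserialize` callback receives the topic name along with the raw bytes, so a single schema can dispatch to a per-topic protobuf parser. Below is a minimal, Flink-free sketch of that dispatch idea; the class, the `register` method, and the stand-in parsers are all hypothetical names for illustration, not Flink API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class TopicDispatch {
    // One parser per topic, mimicking a single deserialization schema
    // that covers several topics with different payload formats.
    private final Map<String, Function<byte[], Object>> parsers = new HashMap<>();

    public void register(String topic, Function<byte[], Object> parser) {
        parsers.put(topic, parser);
    }

    // In Flink this logic would live inside the schema's deserialize
    // method, which is handed the topic of each record.
    public Object deserialize(String topic, byte[] message) {
        Function<byte[], Object> parser = parsers.get(topic);
        if (parser == null) {
            throw new IllegalArgumentException("No parser for topic " + topic);
        }
        return parser.apply(message);
    }

    public static void main(String[] args) {
        TopicDispatch dispatch = new TopicDispatch();
        // Stand-ins for per-topic protobuf parsing (e.g. SchemaA.parseFrom).
        dispatch.register("topicA", bytes -> "A:" + new String(bytes));
        dispatch.register("topicB", bytes -> "B:" + new String(bytes));
        System.out.println(dispatch.deserialize("topicA", "x".getBytes()));
        System.out.println(dispatch.deserialize("topicB", "y".getBytes()));
    }
}
```

The emitted records would then need a common supertype (or a wrapper class) so the stream downstream has a single element type.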



--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/


Re: How can I handle backpressure with event time.

2017-05-25 Thread Eron Wright
Try setting the assigner on the Kafka consumer, rather than on the
DataStream:
https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/connectors/kafka.html#kafka-consumers-and-timestamp-extractionwatermark-emission

I believe this will produce a per-partition assigner and forward only the
minimum watermark across all partitions.
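The per-partition behaviour described above, where each partition tracks its own progress and the consumer forwards only the minimum watermark, can be sketched without Flink as follows. The class and method names are illustrative only, not Flink's; the `maxTimestamp - maxOutOfOrderness` rule mirrors what `BoundedOutOfOrdernessTimestampExtractor` does:

```java
import java.util.HashMap;
import java.util.Map;

public class MinWatermarkTracker {
    // Highest event timestamp seen per partition; the emitted watermark
    // is the minimum over partitions, so one slow (backpressured)
    // partition holds the whole watermark back.
    private final Map<Integer, Long> maxTimestampPerPartition = new HashMap<>();
    private final long maxOutOfOrdernessMs;

    public MinWatermarkTracker(long maxOutOfOrdernessMs, int numPartitions) {
        this.maxOutOfOrdernessMs = maxOutOfOrdernessMs;
        for (int p = 0; p < numPartitions; p++) {
            maxTimestampPerPartition.put(p, Long.MIN_VALUE);
        }
    }

    public void onRecord(int partition, long eventTimestamp) {
        maxTimestampPerPartition.merge(partition, eventTimestamp, Math::max);
    }

    public long currentWatermark() {
        long min = Long.MAX_VALUE;
        for (long ts : maxTimestampPerPartition.values()) {
            min = Math.min(min, ts);
        }
        // No watermark until every partition has seen at least one record.
        return min == Long.MIN_VALUE ? Long.MIN_VALUE : min - maxOutOfOrdernessMs;
    }

    public static void main(String[] args) {
        MinWatermarkTracker tracker = new MinWatermarkTracker(1000, 2);
        tracker.onRecord(0, 10_000);
        tracker.onRecord(1, 3_000); // the lagging partition
        // Watermark follows the slowest partition: 3000 - 1000 = 2000.
        System.out.println(tracker.currentWatermark());
    }
}
```

This is why assigning timestamps on the consumer (rather than after a rebalance on the DataStream) matters: the consumer still knows which partition each record came from, so it can keep these per-partition maxima separate.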

Hope this helps,
-Eron

On Thu, May 25, 2017 at 3:21 AM, yunfan123 <yunfanfight...@foxmail.com>
wrote:

> For example, I want to merge two Kafka topics (named topicA and topicB) by
> a specific key with a max timeout.
> I use event time and the class BoundedOutOfOrdernessTimestampExtractor to
> generate watermarks.
> When some partitions of topicA are delayed by backpressure and the delay
> exceeds my max timeout, none of the delayed partitions of topicA (nor the
> corresponding data in topicB) can be merged.
> What I want is that if backpressure happens, consumers consume based only
> on my event time.