On 2021/01/13 07:58, vinay.raic...@t-systems.com wrote:
> Not sure about your proposal regarding Point 3:
>
> * firstly how is it ensured that the stream is closed? If I understand
>   the doc correctly the stream will be established starting with the
>   latest timestamp (hmm... is it not a standard behaviour?) and will
>   never finish (UNBOUNDED),

On the first question of standard behaviour: the default is to start from the group offsets that are available in Kafka, i.e. the committed offsets of the configured consumer group. I think it's better to be explicit, though, and specify something like `EARLIEST` or `LATEST`.
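
To make that concrete, here is a minimal sketch with the current `FlinkKafkaConsumer` (broker address, topic, and group id are placeholders you'd replace with your own):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaStartPositionSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "my-group");                // placeholder

        FlinkKafkaConsumer<String> consumer =
            new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);

        // Default: resume from the committed offsets of the configured group.
        // consumer.setStartFromGroupOffsets();

        // Being explicit about the start position instead:
        consumer.setStartFromEarliest();
        // consumer.setStartFromLatest();
        // consumer.setStartFromTimestamp(1610000000000L); // epoch millis

        DataStream<String> stream = env.addSource(consumer);
        stream.print();

        env.execute("kafka-start-position-sketch");
    }
}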

And yes, the stream will start but never stop with this version of the Kafka connector. Only when you use the new `KafkaSource` can you also specify an end timestamp that will make the Kafka source shut down eventually.
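
For illustration, a sketch of such a bounded read with the new `KafkaSource` (the builder method names follow the unified source API as documented for recent Flink versions and may differ slightly in older releases; brokers, topic, group id, and the two timestamps are placeholders):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BoundedKafkaSourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        long startTs = 1610000000000L; // placeholder, epoch millis
        long endTs   = 1610500000000L; // placeholder, epoch millis

        KafkaSource<String> source = KafkaSource.<String>builder()
            .setBootstrapServers("localhost:9092")   // placeholder
            .setTopics("my-topic")                   // placeholder
            .setGroupId("my-group")                  // placeholder
            .setValueOnlyDeserializer(new SimpleStringSchema())
            .setStartingOffsets(OffsetsInitializer.timestamp(startTs))
            // Once every partition has been read up to the offsets for endTs,
            // the source finishes and the job can terminate.
            .setBounded(OffsetsInitializer.timestamp(endTs))
            .build();

        DataStream<String> stream =
            env.fromSource(source, WatermarkStrategy.noWatermarks(), "bounded-kafka");
        stream.print();

        env.execute("bounded-kafka-sketch");
    }
}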

> * secondly it is still not clear how to get the latest event at a given
>   time point in the past?

You are referring to getting a single record, correct? I don't think this is possible with Flink. All you can do is get a stream from Kafka that is potentially bounded by a start timestamp and/or end timestamp.
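
To sketch what that bounded-stream route could look like in practice (purely illustrative: the `epochMillis,payload` record layout and the connection settings below are made-up placeholders, and the selection of the "latest" record is ordinary DataStream logic you'd adapt to your schema):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LatestBeforeTimestampSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        long pointInTime = 1610000000000L; // time point of interest, epoch millis (placeholder)

        // Bounded slice of the topic: everything before pointInTime.
        KafkaSource<String> source = KafkaSource.<String>builder()
            .setBootstrapServers("localhost:9092")   // placeholder
            .setTopics("my-topic")                   // placeholder
            .setGroupId("my-group")                  // placeholder
            .setValueOnlyDeserializer(new SimpleStringSchema())
            .setStartingOffsets(OffsetsInitializer.earliest())
            .setBounded(OffsetsInitializer.timestamp(pointInTime))
            .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-until-t")
            // Assumed record layout "epochMillis,payload" -- adapt to your schema.
            .map(value -> {
                int comma = value.indexOf(',');
                return Tuple2.of(Long.parseLong(value.substring(0, comma)), value);
            })
            .returns(Types.TUPLE(Types.LONG, Types.STRING))
            // Route everything to a single reduce task and keep the record with
            // the largest embedded timestamp; each update is emitted, so the
            // last line printed is the latest record before pointInTime.
            .keyBy(t -> 0)
            .reduce((a, b) -> a.f0 >= b.f0 ? a : b)
            .print();

        env.execute("latest-before-timestamp-sketch");
    }
}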

Best,
Aljoscha
