Hi Robert,
sorry for the late reply. I just did a quick test, and this seems to work:
1. while the thread is blocked, checkpoints may expire, but once the thread
is no longer blocked, checkpointing continues
2. this preserves the message ordering
Thanks a lot!
Eleanore
On Tue, Dec 15, 2020 at 10:42 PM Rob
What you can also do is rely on Flink's backpressure mechanism: If the map
operator that validates the messages detects that the external system is
down, it blocks until the system is up again.
This effectively causes the whole streaming job to pause: the Kafka source
won't read new messages.
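The blocking idea can be sketched outside of Flink as a plain helper that the validating map() would call. This is a minimal model, not Flink API code: `isUp` stands in for whatever health check the real job would use, and the backoff value is an assumption.

```java
import java.util.function.BooleanSupplier;

// Sketch of the blocking-validation idea: if the external system is down,
// the map operator blocks (with a backoff) until it is reachable again.
// Holding the map() call open creates backpressure, which pauses the
// Kafka source upstream. Returns the number of backoff waits, mainly
// so the behavior is observable in a test.
public class BlockingValidator {
    static int awaitExternalSystem(BooleanSupplier isUp, long backoffMillis) {
        int waits = 0;
        while (!isUp.getAsBoolean()) {
            waits++;
            try {
                Thread.sleep(backoffMillis); // block the operator thread
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // let the task shut down
                return waits;
            }
        }
        return waits;
    }
}
```

Note that while the thread blocks here, checkpoint barriers cannot pass through the operator, which is why checkpoints may time out during the outage (as observed at the top of the thread).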
On T
Hi Guowei and Arvid,
Thanks for the suggestion. I wonder whether it would make sense (and be
possible) for the operator to produce a side-output message telling the
source to 'pause', and to feed that same side output back into the source
as a side input, based on which the source would pause and resume?
Thanks a lot!
El
Hi Eleanore,
if the external system is down, you could simply fail the job after a given
timeout (for example, using asyncIO). Then the job would restart according
to the configured restart policy.
If your state is rather small (and thus recovery time okay), you would
pretty much get your desired behavior. Th
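The fail-after-timeout idea can be modeled in plain Java (the real job would use Flink's Async I/O with its timeout setting; `queryExternal` here is a hypothetical stand-in for the actual lookup):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: wrap the external lookup in a future and give up after a
// deadline. In the Flink job, letting this exception propagate fails
// the task, and the restart policy then brings the job back up.
public class TimeoutLookup {
    static String lookupOrFail(Callable<String> queryExternal, long timeoutMs) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            return pool.submit(queryExternal).get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (Exception e) {
            // In the real job this failure triggers the restart policy.
            throw new RuntimeException("external system unavailable", e);
        } finally {
            pool.shutdownNow();
        }
    }
}
```

The trade-off mentioned above applies: with small state, recovery after such a failure is fast, so restart-on-timeout approximates a pause/resume.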
Hi, Eleanore
1. AFAIK, only the job itself can "pause". For example, the step that
queries the external system could pause when the external system is down.
2. Maybe you could try "iterate" and send failed messages back for retry
if you use the DataStream API.
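The feedback idea in point 2 can be modeled in plain Java: failed messages flow back into the processing loop until validation succeeds. In Flink this would be `DataStream.iterate()` / `closeWith()`; here a simple work queue illustrates the dataflow, and the retry cap is an assumption, not part of the original suggestion:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Model of the "iterate" feedback loop: messages that fail validation
// are re-enqueued (the feedback edge) and retried, up to maxRetries.
public class RetryLoop {
    static List<String> processWithRetry(List<String> input,
                                         Predicate<String> validate,
                                         int maxRetries) {
        Deque<String> queue = new ArrayDeque<>(input);
        Map<String, Integer> attempts = new HashMap<>();
        List<String> validated = new ArrayList<>();
        while (!queue.isEmpty()) {
            String msg = queue.poll();
            if (validate.test(msg)) {
                validated.add(msg);       // success: emit downstream
            } else if (attempts.merge(msg, 1, Integer::sum) < maxRetries) {
                queue.add(msg);           // feedback edge: retry later
            }
            // else: drop (or dead-letter) after maxRetries -- an assumption
        }
        return validated;
    }
}
```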
Best,
Guowei
On Mon, Nov 30, 2020
Hi experts,
Here is my use case: it's a stateless Flink streaming job for message
validation.
1. read from a Kafka topic
2. perform validation of the message, which requires querying an external
system
2a. the metadata from the external system is cached in memory
for 15 minutes
2b. there i
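The caching in step 2a can be sketched as a small in-memory cache with a 15-minute TTL. Names here are illustrative, and `nowMs` is passed in only for testability; a production job might instead use a library cache such as Guava's `CacheBuilder` with `expireAfterWrite`:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of step 2a: metadata from the external system is cached in
// memory and reloaded once an entry is older than 15 minutes.
public class MetadataCache {
    private static final long TTL_MS = 15 * 60 * 1000; // 15-minute TTL

    private static final class Entry {
        final String value;
        final long loadedAt;
        Entry(String value, long loadedAt) {
            this.value = value;
            this.loadedAt = loadedAt;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    String get(String key, Function<String, String> loadFromExternal, long nowMs) {
        Entry e = cache.get(key);
        if (e == null || nowMs - e.loadedAt >= TTL_MS) {
            // miss or expired: query the external system and refresh
            e = new Entry(loadFromExternal.apply(key), nowMs);
            cache.put(key, e);
        }
        return e.value;
    }
}
```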