Hi all,

We are currently exploring the various features of Flink and need some
clarification on the questions below.


   - I have a stateless Flink application where the source and sink are two
   different Kafka topics. Is there any benefit in adding checkpointing for
   this application? Will it help in some way with rewind and replay when
   restarting after a failure? (A rough sketch of what I mean by adding
   checkpointing is included after the questions.)

   - I have a stateful use case where events are processed based on a set
   of dynamic rules provided by an external system, say a Kafka source.
   The actual events are distinguishable by a key. A broadcast function is
   used to broadcast the dynamic rules and store them in Flink state (a
   rough sketch of this setup is also included after the questions).

   So my question is: is it efficient to process the incoming streams, per
   key, against these rules stored in Flink state (I am using RocksDB as
   the state backend)?

   What about using an external cache for this?

   Is Stateful Functions a good contender here?

   - Is there any benefit in using Apache Camel along with Flink?
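
For reference, this is roughly what I mean by adding checkpointing to the
stateless Kafka-to-Kafka job (the interval is just an example value):

    StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
    // checkpoint every 60s so the Kafka source offsets are snapshotted
    env.enableCheckpointing(60_000);
    env.getCheckpointConfig()
            .setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);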
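
And here is a rough sketch of the broadcast setup from the second question
(Event, Rule, Result and the stream variables are just placeholder names to
illustrate the pattern):

    MapStateDescriptor<String, Rule> ruleDescriptor =
            new MapStateDescriptor<>("dynamicRules", String.class, Rule.class);

    // ruleStream is the Kafka-backed stream of dynamic rules
    BroadcastStream<Rule> broadcastRules = ruleStream.broadcast(ruleDescriptor);

    eventStream
            .keyBy(Event::getKey)   // events are distinguishable by key
            .connect(broadcastRules)
            .process(new KeyedBroadcastProcessFunction<String, Event, Rule, Result>() {

                @Override
                public void processElement(Event event, ReadOnlyContext ctx,
                                            Collector<Result> out) throws Exception {
                    // evaluate the event against the rules kept in broadcast state
                    for (Map.Entry<String, Rule> entry :
                            ctx.getBroadcastState(ruleDescriptor).immutableEntries()) {
                        // per-key processing based on entry.getValue() ...
                    }
                }

                @Override
                public void processBroadcastElement(Rule rule, Context ctx,
                                                    Collector<Result> out) throws Exception {
                    // store / update the dynamic rule in Flink state
                    ctx.getBroadcastState(ruleDescriptor).put(rule.getId(), rule);
                }
            });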



Thanks
Jessy
