Hi there,

I am researching running one Flink job to support customized event-driven
workflow executions. The use case is to run various workflows that listen
to a set of Kafka topics and perform various RPC checks, with a user
traveling through multiple stages in a rule execution (workflow
execution). For example:

Kafka topic: user click stream
RPC checks:

   - whether the user is a member
   - whether the user has shown interest in signing up


workflows:

workflow 1: user click -> if the user is a member, do A, then do B
workflow 2: user click -> if the user has shown interest in signing up, do A;
otherwise wait 60 minutes and recheck; expire after 24 hours
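For concreteness, here is a minimal sketch of how such workflows could be captured as data, so that one job can load many of them; RuleDefinition, Stage, and all field names are my own assumptions, not an existing API:

import java.time.Duration;
import java.util.List;

// Hypothetical data model for a rule/workflow definition that a single job
// could load from config or a cache. Names and fields are assumptions.
public class RuleDefinition {
    public String ruleId;                 // e.g. "workflow-2"
    public String triggeringTopic;        // e.g. "user-click-stream"
    public List<Stage> stages;            // ordered stages the user moves through
    public Duration recheckInterval;      // e.g. 60 minutes for workflow 2
    public Duration expiry;               // e.g. 24 hours before the tuple is dropped

    public static class Stage {
        public String rpcCheck;           // e.g. "isMember", "hasShownSignupInterest"
        public String actionOnPass;       // e.g. "A", "B"
    }
}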

The goal, as I said, is to run workflow 1 and workflow 2 in one Flink job.

My initial thinking is described below.

The sources are a series of Kafka topics. All events go through a coMap
that does a cached lookup of the event -> rules mapping and fans out into
multiple {rule, user} tuples. Based on the rule definition and the stage
the user is in for a given rule, the job performs a series of async RPC
checks and side-outputs to various sinks.
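A rough sketch of that fan-out plus async-check stage, assuming a hypothetical ClickEvent type, a hypothetical RuleRegistry cache for the event -> rules mapping, and a hypothetical MembershipClient RPC stub that returns a CompletableFuture (side-output wiring omitted):

import java.util.Collections;
import java.util.concurrent.TimeUnit;

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;
import org.apache.flink.util.Collector;

public class FanOutAndCheck {

    // Fan out one click event into one {ruleId, userId} tuple per matching rule.
    // ClickEvent and RuleRegistry are hypothetical placeholders for the event
    // type and the cached event -> rules mapping.
    public static class FanOutFn extends RichFlatMapFunction<ClickEvent, Tuple2<String, String>> {
        private transient RuleRegistry registry;

        @Override
        public void open(Configuration parameters) {
            registry = RuleRegistry.load();   // assumed cache of event -> rules
        }

        @Override
        public void flatMap(ClickEvent event, Collector<Tuple2<String, String>> out) {
            for (String ruleId : registry.rulesFor(event.topic())) {
                out.collect(Tuple2.of(ruleId, event.userId()));
            }
        }
    }

    // Async RPC check per {rule, user} tuple; MembershipClient is a hypothetical
    // non-blocking RPC stub returning a CompletableFuture<Boolean>.
    public static class MembershipCheckFn
            extends RichAsyncFunction<Tuple2<String, String>, Tuple2<String, Boolean>> {
        private transient MembershipClient client;

        @Override
        public void open(Configuration parameters) {
            client = MembershipClient.create();
        }

        @Override
        public void asyncInvoke(Tuple2<String, String> ruleUser,
                                ResultFuture<Tuple2<String, Boolean>> resultFuture) {
            client.isMember(ruleUser.f1).thenAccept(isMember ->
                resultFuture.complete(
                    Collections.singleton(Tuple2.of(ruleUser.f0, isMember))));
        }
    }

    // Wiring: clicks -> fan-out -> async membership check (5 s timeout, capacity 100).
    public static DataStream<Tuple2<String, Boolean>> wire(DataStream<ClickEvent> clicks) {
        DataStream<Tuple2<String, String>> ruleUserTuples = clicks.flatMap(new FanOutFn());
        return AsyncDataStream.unorderedWait(
            ruleUserTuples, new MembershipCheckFn(), 5, TimeUnit.SECONDS, 100);
    }
}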

   - If a {rule, user} tuple needs to stay in operator state longer (1
   day), there should be a window following the async RPC checks, with a
   customized purge trigger that fires the tuples that pass and purges
   either passed or expired tuples (see the trigger sketch after this list).
   - If a {rule, user} tuple reaches a stage that waits for a Kafka event,
   it should be added to the cache and hooked back up with the coMap
   lookups near the sources (see the connected-stream sketch after this
   list).
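For the first point, a hedged sketch of such a purge trigger over a 24-hour window; CheckResult is a hypothetical POJO carrying the {rule, user} tuple plus a passed flag:

import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

// Fire-and-purge as soon as a check result arrives marked as passed,
// otherwise silently purge when the 24-hour window expires.
public class PassOrExpireTrigger extends Trigger<CheckResult, TimeWindow> {

    @Override
    public TriggerResult onElement(CheckResult element, long timestamp,
                                   TimeWindow window, TriggerContext ctx) {
        if (element.isPassed()) {
            // Emit and drop the tuple as soon as the RPC check passes.
            return TriggerResult.FIRE_AND_PURGE;
        }
        // Keep waiting; register a timer for the window end (the expiry).
        ctx.registerProcessingTimeTimer(window.maxTimestamp());
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        // Expired without passing: drop the state without emitting downstream.
        return TriggerResult.PURGE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) {
        ctx.deleteProcessingTimeTimer(window.maxTimestamp());
    }
}

It could be attached with something like checkResults.keyBy(...).window(TumblingProcessingTimeWindows.of(Time.hours(24))).trigger(new PassOrExpireTrigger()); a KeyedProcessFunction with timers would be an alternative way to hold a tuple for a day.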
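For the second point, a hedged sketch of hooking a waiting {rule, user} tuple back up with the event stream using connected streams and keyed state; I use a RichCoFlatMapFunction rather than a plain coMap so the waiting side can emit nothing, and ClickEvent is the same hypothetical event type as above:

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction;
import org.apache.flink.util.Collector;

// Keyed by userId: stream 1 carries new Kafka events, stream 2 carries
// {ruleId, userId} tuples that reached a stage waiting for such an event.
public class ResumeWaitingFn
        extends RichCoFlatMapFunction<ClickEvent, Tuple2<String, String>, Tuple2<String, String>> {

    private transient ListState<String> waitingRules;

    @Override
    public void open(Configuration parameters) {
        waitingRules = getRuntimeContext().getListState(
            new ListStateDescriptor<>("waiting-rules", String.class));
    }

    // A new event arrived for this user: resume every rule that was waiting on it.
    @Override
    public void flatMap1(ClickEvent event, Collector<Tuple2<String, String>> out) throws Exception {
        Iterable<String> rules = waitingRules.get();
        if (rules != null) {
            for (String ruleId : rules) {
                out.collect(Tuple2.of(ruleId, event.userId()));
            }
        }
        waitingRules.clear();
    }

    // A {rule, user} tuple reached a "wait for event" stage: park it in keyed state.
    @Override
    public void flatMap2(Tuple2<String, String> ruleUser, Collector<Tuple2<String, String>> out) throws Exception {
        waitingRules.add(ruleUser.f0);
    }
}

It would be wired with something like events.connect(waitingTuples).keyBy(e -> e.userId(), t -> t.f1).flatMap(new ResumeWaitingFn()).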


Does that make sense?

Thanks,
Chen
