Hello all,

First, I have a question that I posted on
http://stackoverflow.com/questions/37732978/join-two-streams-using-a-count-based-window
(joining two streams using a count-based window). I am re-posting it on the
mailing list in case some of you are not on SO.
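
In case the SO link does not load for you, here is a rough sketch of the
kind of thing I am asking about. This is only an illustration, not a
solution I am committed to: the pairwise matching, the buffer size N, the
socket sources and the ports are all placeholders I made up for the example.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.util.Collector;

import java.util.ArrayList;
import java.util.List;

public class CountWindowJoinSketch {

    private static final int N = 100; // count-window size (placeholder)

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // two input streams; sources/ports are just placeholders
        DataStream<String> left = env.socketTextStream("localhost", 9000);
        DataStream<String> right = env.socketTextStream("localhost", 9001);

        left.connect(right)
            .flatMap(new CoFlatMapFunction<String, String, Tuple2<String, String>>() {
                private final List<String> leftBuf = new ArrayList<>();
                private final List<String> rightBuf = new ArrayList<>();

                @Override
                public void flatMap1(String value, Collector<Tuple2<String, String>> out) {
                    leftBuf.add(value);
                    emitIfFull(out);
                }

                @Override
                public void flatMap2(String value, Collector<Tuple2<String, String>> out) {
                    rightBuf.add(value);
                    emitIfFull(out);
                }

                // once both sides have buffered N elements, join them
                // pairwise and clear the buffers (the "count-based window")
                private void emitIfFull(Collector<Tuple2<String, String>> out) {
                    if (leftBuf.size() >= N && rightBuf.size() >= N) {
                        for (int i = 0; i < N; i++) {
                            out.collect(Tuple2.of(leftBuf.get(i), rightBuf.get(i)));
                        }
                        leftBuf.clear();
                        rightBuf.clear();
                    }
                }
            })
            .print();

        env.execute("count-window join sketch");
    }
}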

In addition, I would like to know how Flink differs from other streaming
engines in the granularity at which data is transported and processed. To be
more precise, I am aware that Storm sends tuples over Netty (by filling up
queues) and that a Bolt's logic is executed per tuple. Spark employs
micro-batches to simulate streaming, and (I am not entirely certain) each
task processes one micro-batch at a time. What about Flink? How are tuples
transferred and processed? Any explanation and/or article/blog post/link is
more than welcome.

Thanks

-- 
Nikos R. Katsipoulakis,
Department of Computer Science
University of Pittsburgh
