Hi all,

I am trying to understand the essential differences between operators in
Flink and Spark, especially when using a keyed window followed by a reduce
operation.
In Flink we can write an application that logically separates these two
operators: after a keyed window I can apply the
.reduce()/.aggregate()/.fold()/.apply() functions [1].
In Spark we have the window/reduceByKeyAndWindow functions, which seem
less flexible in the operations that can be combined with a keyed window [2].
Moreover, when these two applications are deployed on a Flink and a Spark
cluster respectively, what are the differences between the physical
operators running in each cluster?
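To make the comparison concrete, here is a minimal plain-Java sketch (no
Flink or Spark on the classpath; the Event shape and window size are my
own assumptions) of the logical computation I mean in both cases: group
events by key, bucket them into fixed-size tumbling windows by timestamp,
and reduce the values within each (key, window) bucket:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KeyedWindowReduceSketch {
    // Hypothetical event shape: (key, timestamp, value).
    public record Event(String key, long ts, int value) {}

    // Assign each event to the tumbling window starting at
    // ts - (ts % windowSize), then reduce values per (key, windowStart).
    public static Map<String, Integer> keyedWindowReduce(List<Event> events,
                                                         long windowSize) {
        Map<String, Integer> result = new HashMap<>();
        for (Event e : events) {
            long windowStart = e.ts() - (e.ts() % windowSize);
            String bucket = e.key() + "@" + windowStart; // (key, windowStart)
            result.merge(bucket, e.value(), Integer::sum); // incremental reduce
        }
        return result;
    }

    public static void main(String[] args) {
        List<Event> events = List.of(
            new Event("a", 1L, 10), new Event("a", 3L, 20), // window [0,5) key a
            new Event("a", 7L, 5),                          // window [5,10) key a
            new Event("b", 2L, 1));                         // window [0,5) key b
        Map<String, Integer> result = keyedWindowReduce(events, 5L);
        System.out.println(result.get("a@0"));  // 30
        System.out.println(result.get("a@5"));  // 5
        System.out.println(result.get("b@0"));  // 1
    }
}
```

In Flink this would be keyBy(...).window(...).reduce(...); in Spark it
would be reduceByKeyAndWindow(...). My question is whether the physical
operators the two engines run for this differ in any essential way.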

[1]
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/windows.html#windows
[2]
https://spark.apache.org/docs/latest/streaming-programming-guide.html#window-operations

Thanks,
Felipe
--
Felipe Gutierrez
-- skype: felipe.o.gutierrez
-- https://felipeogutierrez.blogspot.com
