Hi A.V.

To add a few points to the previous two answers: there is no single answer
to this question. Besides the resources available, it depends on the size of
the messages you are dealing with and the complexity of your processing
logic. If you want to set those factors aside and look purely at the
performance of the framework, you can find benchmark programs online and
cross-check the results yourself. A very popular benchmark for stream
processing is linked below [1]; you can use it as a starting point for your
own tests.
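
For a very rough first check before running a full benchmark, something like
the sketch below is what I mean by testing the framework on its own. This is
only a hypothetical minimal example, not the Yahoo benchmark: a synthetic
source feeds a map operator that prints roughly how many events each subtask
handled per second. Real numbers will depend on your messages, your logic
and your cluster.

// Hypothetical minimal throughput sketch (not the Yahoo benchmark):
// a synthetic source feeds a map that prints events/sec per subtask.
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class ThroughputSketch {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

    env.addSource(new SourceFunction<Long>() {
          private volatile boolean running = true;
          @Override
          public void run(SourceContext<Long> ctx) {
            long i = 0;
            while (running) {
              ctx.collect(i++); // synthetic "event"; real input would be your messages
            }
          }
          @Override
          public void cancel() { running = false; }
        })
        .map(new RichMapFunction<Long, Long>() {
          private transient long count;
          private transient long lastLog;
          @Override
          public Long map(Long value) {
            count++;
            long now = System.currentTimeMillis();
            if (now - lastLog >= 1000) { // print a rough events/sec figure once per second
              System.out.println("events/sec in this subtask: " + count);
              count = 0;
              lastLog = now;
            }
            return value;
          }
        });

    env.execute("throughput-sketch");
  }
}

Of course this only measures a trivial pipeline; once your real
deserialization, state and sinks are in the job, the numbers will drop,
which is why the proof of concept on your own data matters most.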

Best,
Vino

[1]: https://github.com/yahoo/streaming-benchmarks

Michael Latta <mla...@technomage.com> wrote on Wed, Oct 23, 2019 at 11:05 PM:

> There are a lot of variables: how many cores are allocated, how much RAM,
> etc. There are companies doing billions of events per day and more.
>
> Tell your boss it has proven to have extremely flat horizontal scaling,
> meaning you can get it to process almost any volume given sufficient
> hardware.
>
> You will need to do a proof of concept on your data and your analysis to
> know how much hardware is required for your problem.
>
>
> Michael
>
> On Oct 23, 2019, at 7:24 AM, A. V. <aanvall...@live.nl> wrote:
>
>
> Hi,
>
> My boss wants to know how many events Flink can process, analyse, etc. per
> second. I can't find this in the documentation.
>
>
>
