The answer to your question is "it depends". If you build your cluster right,
size your messages right, and tune your producers right, you can achieve
near-real-time transport of terabytes of data a day.
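As a minimal sketch of what "tune your producers right" can mean in practice, the settings below are throughput-oriented knobs on the standard Java producer client. The values are illustrative assumptions, not recommendations; the right numbers depend on message size, latency budget, and durability needs.

```java
import java.util.Properties;

public class ProducerTuning {
    public static Properties tunedProducerConfig() {
        Properties props = new Properties();
        // Batch more records per request, trading a little latency for throughput.
        props.put("batch.size", "65536");        // bytes per partition batch (illustrative)
        props.put("linger.ms", "10");            // wait up to 10 ms to fill a batch
        // Compress batches to cut network and broker disk I/O.
        props.put("compression.type", "lz4");
        // acks=1 is faster than acks=all, but risks loss on leader failure.
        props.put("acks", "1");
        // Room to absorb bursts from the producing application.
        props.put("buffer.memory", "134217728"); // 128 MiB
        return props;
    }

    public static void main(String[] args) {
        Properties p = tunedProducerConfig();
        System.out.println(p.getProperty("compression.type"));
    }
}
```

Pass these properties to the KafkaProducer constructor alongside your bootstrap servers and serializers; the batching and compression settings are usually where the biggest throughput gains come from.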


Plenty of articles have been written about Kafka performance, e.g.:
https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines

Kind regards,

Liam Clarke

On Wed, 12 Sep. 2018, 7:32 pm Chanchal Chatterji, <
chanchal.chatte...@infosys.com> wrote:

> Hi,
>
> In the process of mainframe modernization, we are attempting to stream
> mainframe data to AWS Cloud using Kafka. We plan to use the Kafka
> 'Producer API' on the mainframe side and the 'Connector API' on the cloud
> side. Our data is processed by a module called 'Central dispatch' on the
> mainframe and then sent to Kafka. We want to know what rate of volume
> Kafka can handle. The other end of Kafka is connected to an AWS S3 bucket
> as a sink. Please provide this information, or help us connect with a
> relevant person who can help us understand this.
>
> Thanks and Regards
>
> Chanchal Chatterji
> Principal Consultant,
> Infosys Ltd.
> Electronic city Phase-1,
> Bangalore-560100
> Contacts : 9731141606/ 8105120766
>
>
