I'm assuming you are sending data in a continuous stream and not a
single large batch:

500 GB a day ≈ 21 GB an hour ≈ 6 MB a second.

A minimal 3-node cluster should handle that write rate. You also need
enough storage for a reasonable retention period (about 15 TB per
month of raw data, before replication).
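A quick back-of-the-envelope check of those numbers (assuming a
30-day retention window, which you'd adjust to your actual
requirement):

```python
# Back-of-the-envelope Kafka sizing for 500 GB of logs per day.
MB_PER_GB = 1024

daily_gb = 500
# Sustained write throughput the cluster must absorb:
throughput_mb_s = daily_gb * MB_PER_GB / (24 * 60 * 60)

# Storage needed for the retention window (raw, before replication;
# a replication factor of 3 would triple this):
retention_days = 30  # assumed retention window
storage_tb = daily_gb * retention_days / 1000

print(f"sustained throughput: {throughput_mb_s:.1f} MB/s")
print(f"storage for {retention_days}-day retention: {storage_tb:.0f} TB raw")
```

With a replication factor of 3, multiply the storage figure
accordingly and leave headroom for traffic spikes.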




On Thu, Jun 18, 2015 at 10:39 AM, Khanda.Rajat <rajat.kha...@igt.com> wrote:
> Hi,
> I have a requirement of transferring around 500 GB of logs from app server to 
> hdfs per day. What will be the ideal kafka cluster size?
>
> Thanks
> Rajat
