Hello,
I would like to ask whether there are any cases/examples in which Kafka has
been used in the backend of an ingestion pipeline for IoT data, with the goal
of making it scalable.

Briefly, my case works like this:
- a web API waits for data from IoT devices (10 million readings expected per
day). The data are not ordered in time; we may receive old data that a device
could not send previously.
- then the data are stored in a database to be processed
- a database job pulls and processes the data (computes the average, min, max,
over/under quota, adds the internal sensor id from the serial number, ...)
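To make the processing step above concrete, here is a minimal sketch of the
per-sensor aggregation (average, min, max, over/under quota counts, and the
serial-number-to-internal-id lookup). All names, thresholds, and the mapping
table are hypothetical, just to illustrate the job's logic:

```python
from statistics import mean

# Hypothetical mapping from device serial number to internal sensor id.
SERIAL_TO_SENSOR_ID = {"SN-001": 1, "SN-002": 2}

def aggregate(readings, low=10.0, high=30.0):
    """Group raw readings by serial number and compute the aggregates.

    `readings` is a list of dicts like {"serial": ..., "value": ...};
    `low`/`high` are assumed quota thresholds.
    """
    by_serial = {}
    for r in readings:
        by_serial.setdefault(r["serial"], []).append(r["value"])

    result = {}
    for serial, values in by_serial.items():
        sensor_id = SERIAL_TO_SENSOR_ID[serial]  # serial -> internal id
        result[sensor_id] = {
            "avg": mean(values),
            "min": min(values),
            "max": max(values),
            "over_quota": sum(v > high for v in values),
            "under_quota": sum(v < low for v in values),
        }
    return result
```

With Kafka, the same logic could run continuously in a consumer (or a Kafka
Streams application) instead of a periodic database job, but the aggregation
itself would stay the same.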

I'm wondering whether Kafka could be the right choice to replace the database
polling, and what benefits it would bring.
I would really appreciate an example or case study.

Thank you very much
Paolo
