Here we have a scenario in which we are considering using Kafka as a
fault-tolerant messaging system.

There is an external source that continuously generates data, and we are
thinking of deploying Kafka to store it. We assume the external source is
reliable enough that it will not crash or otherwise misbehave, since we
have no control over it. We also cannot deploy anything like a Kafka
producer on the source itself.
Therefore we plan to run a Kafka producer that receives the data from the
source, with the brokers serving as the producer's downstream store.
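To make the plan concrete, below is a rough sketch of the kind of producer
process we have in mind. The broker addresses, the topic name, and the
stdin "source" are only placeholders, and the durability settings are our
own assumption of what would be needed, not something we have verified:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Properties;

public class SourceForwarder {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder addresses
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Settings we assume we would need for durability: wait for all
        // in-sync replicas and avoid duplicates introduced by retries.
        props.put("acks", "all");
        props.put("enable.idempotence", "true");

        // Stand-in for the external feed: read lines from stdin.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             BufferedReader source = new BufferedReader(new InputStreamReader(System.in))) {
            String line;
            while ((line = source.readLine()) != null) {
                // "source-data" is a placeholder topic name
                producer.send(new ProducerRecord<>("source-data", line));
            }
            producer.flush(); // push anything still buffered in the accumulator
        }
    }
}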

We understand that the brokers are made fault-tolerant through replication.
How does Kafka handle failure of the producer? In particular, what happens
to data that is cached in the producer's accumulator but not yet sent to
the broker? Do we need to manage this inside the producer ourselves?
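For example, is something like the following shutdown hook the expected way
to drain the accumulator, or does the client already take care of this? The
timeout is an arbitrary assumption, and we realize this would not help if
the producer process crashes hard, which is the case we are most unsure
about:

import org.apache.kafka.clients.producer.KafkaProducer;

import java.time.Duration;

public class ProducerShutdown {
    // Our current guess at how to drain the accumulator on a clean shutdown.
    static void installDrainHook(KafkaProducer<String, String> producer) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            producer.flush();                       // send records still batched in the accumulator
            producer.close(Duration.ofSeconds(10)); // wait for in-flight requests (timeout is arbitrary)
        }));
    }
}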

Thanks in advance,
Xin
