Here is an interesting use case: upgrading a topology without any
downtime. Say the topology has only Kafka as a source, and two versions
of it are running in parallel (under different topology names, of course),
sharing the Kafka input load.

With the old Kafka spout, a rolling upgrade is not possible, because
partition assignment is derived from the number of tasks in the topology.

With the new Kafka spout, partition assignment is done externally by the
Kafka server. If I deploy two different topologies with the same *Kafka
consumer group id*, is it fair to assume that the load will be distributed
automatically across the topologies? Are there any corner cases to consider?

-- 
Regards,
Abhishek Agarwal
