Hi All,
We were able to run a stream processing application against a fairly decent
load of messages in a production environment.
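
For context, this is roughly how we wire up and start the instance today. This
is only a minimal sketch: the class name, topic names, broker address, and
application.id below are placeholders, and the exact builder classes may differ
depending on the Kafka Streams version.

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class StreamsApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        // application.id also acts as the consumer group id for the application
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // simple topology: read from one topic, transform, write to another
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(v -> v.toUpperCase())
             .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // close cleanly on JVM shutdown so offsets and local state are flushed
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}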

To make the system robust, suppose the stream processing application crashes:
is there a way to make it automatically restart and resume from the point at
which it crashed?

Also, is there a concept of running the same application in a cluster, where
if one instance fails another takes over, until we bring the failed streams
node back up?

If yes, are there any guidelines or a knowledge base we can look at to
understand how this would work?
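
To make the cluster question concrete, what we are hoping is that simply
starting the same jar (with the same application.id) on a second machine makes
it join the same group and take over the failed instance's partitions. We were
planning to watch this with a state listener, roughly as below (again just a
sketch, continuing from the code above and assuming KafkaStreams#setStateListener
is available in the version we run):

// register before streams.start(); logs rebalances as instances join or die
streams.setStateListener((newState, oldState) ->
        System.out.println("streams state: " + oldState + " -> " + newState));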

In Spark, the driver program distributes tasks across the various nodes of a
cluster; is there something similar in Kafka Streams?
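
Our current (possibly wrong) understanding is that parallelism in Kafka Streams
comes from partitions being mapped to tasks that are spread across threads and
instances, rather than from a central driver; at the moment we only set the
thread count per instance, e.g.:

// number of processing threads per instance; 4 is just an illustrative value
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);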

Thanks
Sachin
