Hi, Jason

This scenario is supported.
Set the broker config option auto.leader.rebalance.enable=false so that
leadership is never rebalanced automatically, and use the
kafka-preferred-replica-election.sh tool when you want to move leadership
back to the preferred replica yourself.
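
For example, a minimal sketch (the ZooKeeper address, topic name and
partition number are placeholders; this assumes the ZooKeeper-based tools):

  # server.properties on both brokers
  auto.leader.rebalance.enable=false

  # election.json - the partitions for which to re-elect the preferred leader
  {"partitions": [{"topic": "my-topic", "partition": 0}]}

  bin/kafka-preferred-replica-election.sh --zookeeper localhost:2181 \
      --path-to-json-file election.json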

If you want to move the leader from one host to another, use the
kafka-reassign-partitions.sh tool with the same replica list in a different
order (the first replica in the list becomes the preferred leader).
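
A sketch of such a reassignment (broker ids 1 and 2 and the topic name are
assumptions on my side):

  # reassign.json - same replicas, broker 2 listed first so it becomes the
  # preferred leader
  {"version": 1,
   "partitions": [{"topic": "my-topic", "partition": 0, "replicas": [2, 1]}]}

  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file reassign.json --execute

  # then run kafka-preferred-replica-election.sh so the new first replica
  # actually takes over leadership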

22.08.2016, 20:36, "Jason Aliyetti" <j.aliye...@gmail.com>:
> I have a use case that requires a 2 node deployment with a Kafka-backed
> service with the following constraints:
>
> - All data must be persisted to node 1. If node 1 fails (regardless of the
> status of node 2), then the system must stop.
> - If node 2 is up, then it must stay in synch with node 1.
> - If node 2 fails, then the service must not be disrupted, but as soon as it
> comes back up and rejoins the ISR it must stay in synch.
>
> The deployment is basically a primary node and a cold node with real time
> replication, but no failover to the cold node.
>
> To achieve this I am considering adding a broker-level configuration option
> that would prevent a broker from becoming a leader for any topic partition
> it hosts - this would allow me to enforce that the cold node never takes
> leadership for any topics. In conjunction with manipulating a topic's
> "min.insync.replicas" setting at runtime, I should be able to achieve the
> behavior desired (2 if both brokers up, 1 if the standby goes down).
>
> I know this sounds like an edge case, but does this sound like a
> reasonable approach? Are there any valid use cases around such a broker or
> topic level configuration (i.e. does this sound like a feature that would
> make sense to open a KIP against)?
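
For the min.insync.replicas part of your plan, the per-topic override can be
changed at runtime with kafka-configs.sh; a minimal sketch (the topic name is
just an example):

  # both brokers up: require both replicas to acknowledge writes
  bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
      --entity-type topics --entity-name my-topic \
      --add-config min.insync.replicas=2

  # standby down: relax to 1 so producers using acks=all keep working
  bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
      --entity-type topics --entity-name my-topic \
      --add-config min.insync.replicas=1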
