[ 
https://issues.apache.org/jira/browse/KAFKA-8727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Levani Kokhreidze resolved KAFKA-8727.
--------------------------------------
    Resolution: Duplicate

 Duplicate of KAFKA-6718

> Control over standby tasks host assignment
> ------------------------------------------
>
>                 Key: KAFKA-8727
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8727
>             Project: Kafka
>          Issue Type: New Feature
>          Components: streams
>            Reporter: Levani Kokhreidze
>            Priority: Minor
>
> *Motivation*
> As of now, a Kafka Streams user has no control over which host a standby 
> task is assigned to. In production deployments (especially on Kubernetes) 
> it's quite common to run multiple instances of the same Kafka Streams 
> application across more than one "cluster" in order to achieve high 
> availability of the system.
> For example, with 6 Kafka Streams instances deployed across two clusters, 
> we get 3 instances per cluster. With the current implementation, Kafka 
> Streams may create the standby task in the same cluster as the active 
> task. This is not optimal: in case of a cluster failure, recovery time 
> will be much longer. This is especially problematic for Kafka Streams 
> applications that manage large state.
>  
> *Possible Solution*
> It would be great if the Kafka Streams configuration allowed injecting 
> dynamic environment variables and using those variables to control where 
> a standby task is created.
> For example, suppose active task *1_1* runs on an instance with 
> environment variable *CLUSTER_ID: main01*; then the standby task for 
> *1_1* should be created on an instance where *CLUSTER_ID* *!=* *main01*.
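
A variant of this idea later shipped in Kafka Streams as rack-aware standby task assignment (KIP-708, the outcome of KAFKA-6718). As a sketch, assuming Kafka Streams 3.2+ and the `client.tag.` / `rack.aware.assignment.tags` configuration keys from that KIP (the tag name `cluster` and values `main01`/`main02` are illustrative), the scenario above could be expressed as:

```properties
# On instances in the first cluster (value is per-deployment, e.g. from an env var):
client.tag.cluster=main01
# On instances in the second cluster, set instead:
# client.tag.cluster=main02

# Ask the assignor to spread standby tasks across distinct values of the
# "cluster" tag, so a standby is placed in a different cluster than its
# active task whenever possible.
rack.aware.assignment.tags=cluster
num.standby.replicas=1
```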



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
