[ 
https://issues.apache.org/jira/browse/KAFKA-6703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Sasvari updated KAFKA-6703:
----------------------------------
    Description: 
Scenario:
 - the MM whitelist regexp matches multiple topics
 - the destination cluster has 5 brokers and multiple topics with replication factor 3
 - 2 brokers are shut down without partition reassignment
 - suppose a topic no longer has a leader because it was out of sync and the leader and the remaining replicas were hosted on the downed brokers
 - so we have 1 topic with some partitions whose leader is -1
 - the rest of the matched topics have 3 replicas with leaders

MM will not produce into any of the matched topics until:
 - the "orphaned" topic is removed, or
 - partition reassignment is carried out from the downed brokers (assuming you can turn them back on)
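
A rough reproduction sketch, assuming a local test cluster; the topic name, config file names, and ZooKeeper/broker addresses are placeholders, not taken from the actual environment:

{code}
# Create a topic whose only replicas live on brokers we will later shut down
bin/kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic testR1P2 --partitions 2 --replication-factor 1

# Shut down those brokers without reassignment; the affected partitions
# should now show "leader: -1" in the describe output
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic testR1P2

# Start MirrorMaker with a whitelist regexp matching all test topics;
# it makes no progress on any matched topic, not just the leaderless one
bin/kafka-mirror-maker.sh --consumer.config consumer.properties \
  --producer.config producer.properties --whitelist 'test.*'
{code}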

In the MirrorMaker logs, there are a lot of messages like the following ones:
{code}
[2018-03-22 19:55:32,522] DEBUG [Consumer clientId=consumer-1, 
groupId=console-consumer-43054] Coordinator discovery failed, refreshing 
metadata (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)

[2018-03-22 19:55:32,522] DEBUG [Consumer clientId=consumer-1, 
groupId=console-consumer-43054] Sending metadata request (type=MetadataRequest, 
topics=<ALL>) to node 192.168.1.102:9092 (id: 0 rack: null) 
(org.apache.kafka.clients.NetworkClient)

[2018-03-22 19:55:32,525] DEBUG Updated cluster metadata version 10 to 
Cluster(id = Y-qtoFP-RMq2uuVnkEKAAw, nodes = [192.168.1.102:9092 (id: 0 rack: 
null)], partitions = [Partition(topic = testR1P2, partition = 1, leader = none, 
replicas = [42], isr = [], offlineReplicas = [42]), Partition(topic = testR1P1, 
partition = 0, leader = 0, replicas = [0], isr = [0], offlineReplicas = []), 
Partition(topic = testAlive, partition = 0, leader = 0, replicas = [0], isr = 
[0], offlineReplicas = []), Partition(topic = testERRR, partition = 0, leader = 
0, replicas = [0], isr = [0], offlineReplicas = []), Partition(topic = 
testR1P2, partition = 0, leader = 0, replicas = [0], isr = [0], offlineReplicas 
= [])]) (org.apache.kafka.clients.Metadata)

[2018-03-22 19:55:32,525] DEBUG [Consumer clientId=consumer-1, 
groupId=console-consumer-43054] Sending FindCoordinator request to broker 
192.168.1.102:9092 (id: 0 rack: null) 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)

[2018-03-22 19:55:32,525] DEBUG [Consumer clientId=consumer-1, 
groupId=console-consumer-43054] Received FindCoordinator response 
ClientResponse(receivedTimeMs=1521744932525, latencyMs=0, disconnected=false, 
requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=1, 
clientId=consumer-1, correlationId=19), 
responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', 
error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)

[2018-03-22 19:55:32,526] DEBUG [Consumer clientId=consumer-1, 
groupId=console-consumer-43054] Group coordinator lookup failed: The 
coordinator is not available. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
{code}
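
A quick way to confirm the symptom is to count the repeated coordinator-lookup failures in the MirrorMaker log; the sample lines and log path below are stand-ins for a real log file:

{code}
# Write a few sample DEBUG lines standing in for a real MirrorMaker log
printf '%s\n' \
  'DEBUG ... Coordinator discovery failed, refreshing metadata ...' \
  'DEBUG ... Group coordinator lookup failed: The coordinator is not available.' \
  'DEBUG ... Coordinator discovery failed, refreshing metadata ...' \
  > /tmp/mirrormaker.log

# Count how often coordinator discovery failed
grep -c 'Coordinator discovery failed' /tmp/mirrormaker.log
{code}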

Interestingly, if MirrorMaker uses {{zookeeper.connect}} in its consumer 
properties file, then an OldConsumer is created, and it can make progress.
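
A minimal sketch of the two consumer configurations, assuming placeholder host names; only the commented-out new-consumer variant triggers the stall described above:

{code}
# consumer.properties for the old consumer (workaround): MM makes progress
zookeeper.connect=localhost:2181
group.id=mm-group

# consumer.properties for the new consumer (affected by this issue):
# bootstrap.servers=localhost:9092
# group.id=mm-group
{code}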



  was:
Scenario:
 - the MM whitelist regexp matches multiple topics
 - the destination cluster has 5 brokers and multiple topics with replication factor 3
 - 2 brokers are shut down without partition reassignment
 - suppose a topic no longer has a leader because it was out of sync and the leader and the remaining replicas were hosted on the downed brokers
 - so we have 1 topic with some partitions whose leader is -1
 - the rest of the matched topics have 3 replicas with leaders

MM will not produce into any of the matched topics until:
 - the "orphaned" topic is removed, or
 - partition reassignment is carried out from the downed brokers (assuming you can turn them back on)

In the MirrorMaker logs, there are a lot of messages like the following ones:
{code}
[2018-03-22 18:59:07,781] DEBUG [Consumer clientId=1-1, groupId=1] Sending 
FindCoordinator request to broker 192.168.1.102:9092 (id: 0 rack: null) 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-03-22 18:59:07,781] DEBUG [Consumer clientId=1-0, groupId=1] Sending 
FindCoordinator request to broker 192.168.1.102:9092 (id: 0 rack: null) 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-03-22 18:59:07,783] DEBUG [Consumer clientId=1-0, groupId=1] Received 
FindCoordinator response ClientResponse(receivedTimeMs=1521741547782, 
latencyMs=1, disconnected=false, 
requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=1, 
clientId=1-0, correlationId=71), 
responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', 
error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-03-22 18:59:07,783] DEBUG [Consumer clientId=1-1, groupId=1] Received 
FindCoordinator response ClientResponse(receivedTimeMs=1521741547782, 
latencyMs=1, disconnected=false, 
requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=1, 
clientId=1-1, correlationId=71), 
responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', 
error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-03-22 18:59:07,783] DEBUG [Consumer clientId=1-0, groupId=1] Group 
coordinator lookup failed: The coordinator is not available. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-03-22 18:59:07,783] DEBUG [Consumer clientId=1-1, groupId=1] Group 
coordinator lookup failed: The coordinator is not available. 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-03-22 18:59:07,783] DEBUG [Consumer clientId=1-0, groupId=1] Coordinator 
discovery failed, refreshing metadata 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-03-22 18:59:07,783] DEBUG [Consumer clientId=1-1, groupId=1] Coordinator 
discovery failed, refreshing metadata 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
{code}

Interestingly, if MirrorMaker uses {{zookeeper.connect}} in its consumer 
properties file, then an OldConsumer is created, and it can make progress.




> MirrorMaker cannot make progress when any matched topic from a whitelist 
> regexp has -1 leader
> ---------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-6703
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6703
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 1.1.0
>            Reporter: Attila Sasvari
>            Priority: Major
>
> Scenario:
>  - the MM whitelist regexp matches multiple topics
>  - the destination cluster has 5 brokers and multiple topics with replication factor 3
>  - 2 brokers are shut down without partition reassignment
>  - suppose a topic no longer has a leader because it was out of sync and the 
> leader and the remaining replicas were hosted on the downed brokers
>  - so we have 1 topic with some partitions whose leader is -1
>  - the rest of the matched topics have 3 replicas with leaders
> MM will not produce into any of the matched topics until:
>  - the "orphaned" topic is removed, or
>  - partition reassignment is carried out from the downed brokers (assuming 
> you can turn them back on)
> In the MirrorMaker logs, there are a lot of messages like the following ones:
> {code}
> [2018-03-22 19:55:32,522] DEBUG [Consumer clientId=consumer-1, 
> groupId=console-consumer-43054] Coordinator discovery failed, refreshing 
> metadata (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2018-03-22 19:55:32,522] DEBUG [Consumer clientId=consumer-1, 
> groupId=console-consumer-43054] Sending metadata request 
> (type=MetadataRequest, topics=<ALL>) to node 192.168.1.102:9092 (id: 0 rack: 
> null) (org.apache.kafka.clients.NetworkClient)
> [2018-03-22 19:55:32,525] DEBUG Updated cluster metadata version 10 to 
> Cluster(id = Y-qtoFP-RMq2uuVnkEKAAw, nodes = [192.168.1.102:9092 (id: 0 rack: 
> null)], partitions = [Partition(topic = testR1P2, partition = 1, leader = 
> none, replicas = [42], isr = [], offlineReplicas = [42]), Partition(topic = 
> testR1P1, partition = 0, leader = 0, replicas = [0], isr = [0], 
> offlineReplicas = []), Partition(topic = testAlive, partition = 0, leader = 
> 0, replicas = [0], isr = [0], offlineReplicas = []), Partition(topic = 
> testERRR, partition = 0, leader = 0, replicas = [0], isr = [0], 
> offlineReplicas = []), Partition(topic = testR1P2, partition = 0, leader = 0, 
> replicas = [0], isr = [0], offlineReplicas = [])]) 
> (org.apache.kafka.clients.Metadata)
> [2018-03-22 19:55:32,525] DEBUG [Consumer clientId=consumer-1, 
> groupId=console-consumer-43054] Sending FindCoordinator request to broker 
> 192.168.1.102:9092 (id: 0 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2018-03-22 19:55:32,525] DEBUG [Consumer clientId=consumer-1, 
> groupId=console-consumer-43054] Received FindCoordinator response 
> ClientResponse(receivedTimeMs=1521744932525, latencyMs=0, disconnected=false, 
> requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=1, 
> clientId=consumer-1, correlationId=19), 
> responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', 
> error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null))) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> [2018-03-22 19:55:32,526] DEBUG [Consumer clientId=consumer-1, 
> groupId=console-consumer-43054] Group coordinator lookup failed: The 
> coordinator is not available. 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
> {code}
> Interestingly, if MirrorMaker uses {{zookeeper.connect}} in its consumer 
> properties file, then an OldConsumer is created, and it can make progress.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
