Re: isr never update

2014-12-04 Thread Jun Rao
The state-change log does indicate an error. Is the issue easily
reproducible? If so, could you try this on the 0.8.2 beta release? We did
fix some controller-related issues in 0.8.2.

Thanks,

Jun

On Tue, Dec 2, 2014 at 1:46 AM, Shangan Chen chenshangan...@gmail.com
wrote:

 I checked the max lag and it was 0.

 I grepped the state-change logs for topic-partition [org.nginx,32] and
 extracted some entries related to broker 24 and broker 29 (the controller
 switched from broker 24 to broker 29):


- on broker 29 (current controller):


 [2014-11-22 06:20:20,377] TRACE Controller 29 epoch 7 changed state of
 replica 29 for partition [org.nginx,32] from OnlineReplica to OnlineReplica
 (state.change.logger)
 *[2014-11-22 06:20:20,650] TRACE Controller 29 epoch 7 sending
 become-leader LeaderAndIsr request
 (Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4) with correlationId 0
 to broker 29 for partition [org.nginx,32] (state.change.logger)*
 [2014-11-22 06:20:20,664] TRACE Broker 29 received LeaderAndIsr request

 (LeaderAndIsrInfo:(Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4),ReplicationFactor:2),AllReplicas:29,24)
 correlation id 0 from controller 29 epoch 7 for partition [org.nginx,32]
 (state.change.logger)
 *[2014-11-22 06:20:20,674] WARN Broker 29 received invalid LeaderAndIsr
 request with correlation id 0 from controller 29 epoch 7 with an older
 leader epoch 10 for partition [org.nginx,32], current leader epoch is 10
 (state.change.logger)*
 [2014-11-22 06:20:20,912] TRACE Controller 29 epoch 7 sending
 UpdateMetadata request
 (Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4) with correlationId 0
 to broker 23 for partition [org.nginx,32] (state.change.logger)
 *[2014-11-22 06:20:21,490] TRACE Controller 29 epoch 7 sending
 UpdateMetadata request
 (Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4) with correlationId 0
 to broker 29 for partition [org.nginx,32] (state.change.logger)*
 *[2014-11-22 06:20:21,945] TRACE Broker 29 cached leader info

 (LeaderAndIsrInfo:(Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4),ReplicationFactor:2),AllReplicas:29,24)
 for partition [org.nginx,32] in response to UpdateMetadata request sent by
 controller 29 epoch 7 with correlation id 0 (state.change.logger)*

 [2014-11-22 06:20:28,703] TRACE Broker 29 received LeaderAndIsr request

 (LeaderAndIsrInfo:(Leader:29,ISR:29,LeaderEpoch:11,ControllerEpoch:6),ReplicationFactor:2),AllReplicas:29,24)
 correlation id 4897 from controller 24 epoch 6 for partition [org.nginx,32]
 (state.change.logger)
 [2014-11-22 06:20:28,703] WARN Broker 29 received LeaderAndIsr request
 correlation id 4897 with an old controller epoch 6. Latest known controller
 epoch is 7 (state.change.logger)


 *analysis:*
 Controller 29 sent the become-leader LeaderAndIsr request with an old
 controller epoch, and broker 29 itself deemed the request invalid, as did
 the other brokers. Controller 29 then sent UpdateMetadata requests to all
 brokers, and the brokers cached leader info carrying the old controller
 epoch.

 *question:*
 When the controller sends a become-leader LeaderAndIsr request or an
 UpdateMetadata request, does it check the current controller epoch and
 leader epoch? It looks like the controller did not do any checking.
 Meanwhile, brokers reject a LeaderAndIsr request carrying an old
 controller epoch, but they still process an UpdateMetadata request and
 cache it.


- on broker 24 (previous controller)


 [2014-11-22 06:18:11,095] TRACE Controller 24 epoch 6 sending
 UpdateMetadata request
 (Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4) with correlationId
 4886 to broker 36 for partition [org.nginx,32] (state.change.logger)
 [2014-11-22 06:20:17,553] TRACE Controller 24 epoch 6 sending
 UpdateMetadata request
 (Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4) with correlationId
 4892 to broker 34 for partition [org.nginx,32] (state.change.logger)
 [2014-11-22 06:20:21,905] TRACE Controller 24 epoch 6 started leader
 election for partition [org.mobile_grouprecommend_userdeletedata,9]
 (state.change.logger)
 [2014-11-22 06:20:21,911] TRACE Controller 24 epoch 6 elected leader 21 for
 Offline partition [org.mobile_grouprecommend_userdeletedata,9]
 (state.change.logger)
 [2014-11-22 06:20:27,412] TRACE Controller 24 epoch 6 changed state of
 replica 24 for partition [org.nginx,32] from OnlineReplica to
 OfflineReplica (state.change.logger)
 *[2014-11-22 06:20:28,701] TRACE Controller 24 epoch 6 sending
 become-leader LeaderAndIsr request
 (Leader:29,ISR:29,LeaderEpoch:11,ControllerEpoch:6) with correlationId 4897
 to broker 29 for partition [org.nginx,32] (state.change.logger)*
 [2014-11-22 06:20:28,713] TRACE Controller 24 epoch 6 sending
 UpdateMetadata request (Leader:29,ISR:29,LeaderEpoch:11,ControllerEpoch:6)
 with correlationId 4897 to broker 23 for partition [org.nginx,32]
 (state.change.logger)


 *analysis:*
 Controller 24 and controller 29 were alive at the same time. Controller 24
 sent a become-leader LeaderAndIsr request to broker 29, and broker 29 found
 it carried an old controllerEpoch and did not process it.

Re: isr never update

2014-12-02 Thread Shangan Chen
I checked the max lag and it was 0.

I grepped the state-change logs for topic-partition [org.nginx,32] and
extracted some entries related to broker 24 and broker 29 (the controller
switched from broker 24 to broker 29):


   - on broker 29 (current controller):


[2014-11-22 06:20:20,377] TRACE Controller 29 epoch 7 changed state of
replica 29 for partition [org.nginx,32] from OnlineReplica to OnlineReplica
(state.change.logger)
*[2014-11-22 06:20:20,650] TRACE Controller 29 epoch 7 sending
become-leader LeaderAndIsr request
(Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4) with correlationId 0
to broker 29 for partition [org.nginx,32] (state.change.logger)*
[2014-11-22 06:20:20,664] TRACE Broker 29 received LeaderAndIsr request
(LeaderAndIsrInfo:(Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4),ReplicationFactor:2),AllReplicas:29,24)
correlation id 0 from controller 29 epoch 7 for partition [org.nginx,32]
(state.change.logger)
*[2014-11-22 06:20:20,674] WARN Broker 29 received invalid LeaderAndIsr
request with correlation id 0 from controller 29 epoch 7 with an older
leader epoch 10 for partition [org.nginx,32], current leader epoch is 10
(state.change.logger)*
[2014-11-22 06:20:20,912] TRACE Controller 29 epoch 7 sending
UpdateMetadata request
(Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4) with correlationId 0
to broker 23 for partition [org.nginx,32] (state.change.logger)
*[2014-11-22 06:20:21,490] TRACE Controller 29 epoch 7 sending
UpdateMetadata request
(Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4) with correlationId 0
to broker 29 for partition [org.nginx,32] (state.change.logger)*
*[2014-11-22 06:20:21,945] TRACE Broker 29 cached leader info
(LeaderAndIsrInfo:(Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4),ReplicationFactor:2),AllReplicas:29,24)
for partition [org.nginx,32] in response to UpdateMetadata request sent by
controller 29 epoch 7 with correlation id 0 (state.change.logger)*

[2014-11-22 06:20:28,703] TRACE Broker 29 received LeaderAndIsr request
(LeaderAndIsrInfo:(Leader:29,ISR:29,LeaderEpoch:11,ControllerEpoch:6),ReplicationFactor:2),AllReplicas:29,24)
correlation id 4897 from controller 24 epoch 6 for partition [org.nginx,32]
(state.change.logger)
[2014-11-22 06:20:28,703] WARN Broker 29 received LeaderAndIsr request
correlation id 4897 with an old controller epoch 6. Latest known controller
epoch is 7 (state.change.logger)


*analysis:*
Controller 29 sent the become-leader LeaderAndIsr request with an old
controller epoch, and broker 29 itself deemed the request invalid, as did
the other brokers. Controller 29 then sent UpdateMetadata requests to all
brokers, and the brokers cached leader info carrying the old controller
epoch.

*question:*
When the controller sends a become-leader LeaderAndIsr request or an
UpdateMetadata request, does it check the current controller epoch and
leader epoch? It looks like the controller did not do any checking.
Meanwhile, brokers reject a LeaderAndIsr request carrying an old controller
epoch, but they still process an UpdateMetadata request and cache it.
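
To make the question concrete, the fencing rule I would expect on *both*
request types is: drop anything stamped with a controller epoch older than
the latest one already seen. The python below is only an illustrative
sketch of that rule (the names are made up; it is not the actual broker
code):

class EpochFence(object):
    def __init__(self):
        # highest controller epoch this broker has seen so far
        self.latest_controller_epoch = -1

    def should_process(self, request_name, controller_epoch):
        if controller_epoch < self.latest_controller_epoch:
            print("rejecting %s: stale controller epoch %d < %d"
                  % (request_name, controller_epoch,
                     self.latest_controller_epoch))
            return False
        self.latest_controller_epoch = controller_epoch
        return True

fence = EpochFence()
fence.should_process("LeaderAndIsr", 7)    # accepted, epoch 7 recorded
fence.should_process("LeaderAndIsr", 6)    # rejected, as in the log above
fence.should_process("UpdateMetadata", 6)  # the same rule would reject this too

In the logs above, the stale LeaderAndIsr request from epoch 6 is rejected
this way, but the UpdateMetadata path apparently is not subject to the same
check.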


   - on broker 24 (previous controller)


[2014-11-22 06:18:11,095] TRACE Controller 24 epoch 6 sending
UpdateMetadata request
(Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4) with correlationId
4886 to broker 36 for partition [org.nginx,32] (state.change.logger)
[2014-11-22 06:20:17,553] TRACE Controller 24 epoch 6 sending
UpdateMetadata request
(Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4) with correlationId
4892 to broker 34 for partition [org.nginx,32] (state.change.logger)
[2014-11-22 06:20:21,905] TRACE Controller 24 epoch 6 started leader
election for partition [org.mobile_grouprecommend_userdeletedata,9]
(state.change.logger)
[2014-11-22 06:20:21,911] TRACE Controller 24 epoch 6 elected leader 21 for
Offline partition [org.mobile_grouprecommend_userdeletedata,9]
(state.change.logger)
[2014-11-22 06:20:27,412] TRACE Controller 24 epoch 6 changed state of
replica 24 for partition [org.nginx,32] from OnlineReplica to
OfflineReplica (state.change.logger)
*[2014-11-22 06:20:28,701] TRACE Controller 24 epoch 6 sending
become-leader LeaderAndIsr request
(Leader:29,ISR:29,LeaderEpoch:11,ControllerEpoch:6) with correlationId 4897
to broker 29 for partition [org.nginx,32] (state.change.logger)*
[2014-11-22 06:20:28,713] TRACE Controller 24 epoch 6 sending
UpdateMetadata request (Leader:29,ISR:29,LeaderEpoch:11,ControllerEpoch:6)
with correlationId 4897 to broker 23 for partition [org.nginx,32]
(state.change.logger)


*analysis:*
Controller 24 and controller 29 were alive at the same time. Controller 24
sent a become-leader LeaderAndIsr request to broker 29, and broker 29 found
it carried an old controllerEpoch and did not process it.
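
To see which broker actually owned the controller role at that point, and
what the latest persisted epoch was, both znodes can be read directly. A
small kazoo sketch, assuming the /kafka08 chroot shown elsewhere in this
thread and that the epoch is persisted at /kafka08/controller_epoch (the
usual layout, but treat that path as an assumption):

import json
from kazoo.client import KazooClient

zk = KazooClient(hosts="localhost:2181")
zk.start()

controller_data, _ = zk.get("/kafka08/controller")   # ephemeral owner znode
epoch_data, _ = zk.get("/kafka08/controller_epoch")  # persisted epoch

controller = json.loads(controller_data.decode("utf-8"))
print("active controller broker:", controller["brokerid"])
print("latest controller epoch:", int(epoch_data.decode("utf-8")))

zk.stop()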

*question:*

Can two controllers be alive at the same time? I think that should not
happen. Both controller 24 and controller 29 sent LeaderAndIsr requests to
the other brokers. While controller 24 has a newer LeaderEpoch
(LeaderAndIsrInfo:(Leader:29,ISR:29,LeaderEpoch:11,ControllerEpoch:6)),
controller 29 has 

Re: isr never update

2014-11-27 Thread Shangan Chen
my kafka version is kafka_2.10-0.8.1.1.jar

*state-change log:*

[2014-11-25 02:30:19,290] TRACE Controller 29 epoch 7 sending
UpdateMetadata request
(Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4) with correlationId
1803 to broker 20 for partition [org.nginx,32] (state.change.logger)

*controller log:*

[2014-11-22 09:17:02,327]  [org.nginx,32] -
(Leader:29,ISR:29,24,LeaderEpoch:10,ControllerEpoch:4)

*partition state in zookeeper:*

[zk: localhost:2181(CONNECTED) 4] get
/kafka08/brokers/topics/org.nginx/partitions/32/state
{"controller_epoch":6,"leader":29,"version":1,"leader_epoch":11,"isr":[29]}
cZxid = 0x5641824ee
ctime = Fri Oct 10 12:53:47 CST 2014
mZxid = 0x5a4c870b8
mtime = Sat Nov 22 06:20:27 CST 2014
pZxid = 0x5641824ee
cversion = 0
dataVersion = 19
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 75
numChildren = 0


Based on the above information, the controller and state-change logs have
the right information, but the partition state in zookeeper was not updated,
and no further update was ever attempted.
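
A quick way to list every partition of the topic whose cached state still
carries a stale controller epoch is to walk the partition-state znodes.
This is only an illustrative kazoo sketch; the topic name and epoch value
come from this thread, everything else is made up:

import json
from kazoo.client import KazooClient

TOPIC = "org.nginx"
LATEST_CONTROLLER_EPOCH = 7  # current controller epoch from the state-change log

zk = KazooClient(hosts="localhost:2181")
zk.start()

base = "/kafka08/brokers/topics/%s/partitions" % TOPIC
for partition in sorted(zk.get_children(base), key=int):
    data, stat = zk.get("%s/%s/state" % (base, partition))
    state = json.loads(data.decode("utf-8"))
    if state["controller_epoch"] < LATEST_CONTROLLER_EPOCH:
        # this partition's state was last written by an older controller
        print("[%s,%s] stale state: %s (znode version %d)"
              % (TOPIC, partition, state, stat.version))

zk.stop()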




On Tue, Nov 25, 2014 at 1:28 PM, Jun Rao jun...@gmail.com wrote:

 Which version of Kafka are you using? Any error in the controller and the
 state-change log?

 Thanks,

 Jun

 On Fri, Nov 21, 2014 at 5:59 PM, Shangan Chen chenshangan...@gmail.com
 wrote:

  In the initial state all replicas are in the ISR, but sometimes when I
  check the topic state, a replica can never rejoin the ISR even though it
  is actually synchronized. I saw in the log that the leader logged an
  expand-ISR request, but it did not take effect. I found an interesting
  thing: the shrink and expand requests happened just after the controller
  switch. I don't know whether that is related, and the controller log has
  been overwritten, so I cannot verify it. Is there anything I can do to
  trigger the ISR update? Currently I alter the partition state in
  zookeeper and it works, but it requires a lot of manual work because I
  have quite a lot of topics in my cluster. Some useful information is as
  follows.
 
  *my replica lag config for default:*
 
  replica.lag.time.max.ms=1
  replica.lag.max.messages=4000
 
  *controller info:*
 
  [zk: localhost:2181(CONNECTED) 4] get /kafka08/controller
  {"version":1,"brokerid":29,"timestamp":"1416608404008"}
  cZxid = 0x5a4c85923
  ctime = Sat Nov 22 06:20:04 CST 2014
  mZxid = 0x5a4c85923
  mtime = Sat Nov 22 06:20:04 CST 2014
  pZxid = 0x5a4c85923
  cversion = 0
  dataVersion = 0
  aclVersion = 0
  ephemeralOwner = 0x5477ba622cb6c7d
  dataLength = 55
  numChildren = 0
 
 
  *topic info:*
 
  Topic:org.nginx PartitionCount:48   ReplicationFactor:2 Configs:
  Topic: org.nginxPartition: 0Leader: 17  Replicas:
  17,32 Isr: 17,32
  Topic: org.nginxPartition: 1Leader: 18  Replicas:
  18,33 Isr: 18,33
  Topic: org.nginxPartition: 2Leader: 19  Replicas:
  19,34 Isr: 34,19
  Topic: org.nginxPartition: 3Leader: 20  Replicas:
  20,35 Isr: 35,20
  Topic: org.nginxPartition: 4Leader: 21  Replicas:
  21,36 Isr: 21,36
  Topic: org.nginxPartition: 5Leader: 22  Replicas:
  22,17 Isr: 17,22
  Topic: org.nginxPartition: 6Leader: 23  Replicas:
  23,18 Isr: 18,23
  Topic: org.nginxPartition: 7Leader: 24  Replicas:
  24,19 Isr: 24,19
  Topic: org.nginxPartition: 8Leader: 25  Replicas:
  25,20 Isr: 25,20
  Topic: org.nginxPartition: 9Leader: 26  Replicas:
  26,21 Isr: 26,21
  Topic: org.nginxPartition: 10   Leader: 27  Replicas:
  27,22 Isr: 27,22
  Topic: org.nginxPartition: 11   Leader: 28  Replicas:
  28,23 Isr: 28,23
  Topic: org.nginxPartition: 12   Leader: 29  Replicas:
  29,24 Isr: 29
  Topic: org.nginxPartition: 13   Leader: 30  Replicas:
  30,25 Isr: 30,25
  Topic: org.nginxPartition: 14   Leader: 31  Replicas:
  31,26 Isr: 26,31
  Topic: org.nginxPartition: 15   Leader: 32  Replicas:
  32,27 Isr: 27,32
  Topic: org.nginxPartition: 16   Leader: 33  Replicas:
  33,28 Isr: 33,28
  Topic: org.nginxPartition: 17   Leader: 34  Replicas:
  34,29 Isr: 29,34
  Topic: org.nginxPartition: 18   Leader: 35  Replicas:
  35,30 Isr: 30,35
  Topic: org.nginxPartition: 19   Leader: 36  Replicas:
  36,31 Isr: 31,36
  Topic: org.nginxPartition: 20   Leader: 17  Replicas:
  17,32 Isr: 17,32
  Topic: org.nginxPartition: 21   Leader: 18  Replicas:
  18,33 Isr: 18,33
  Topic: org.nginxPartition: 22   Leader: 19  Replicas:
  19,34 Isr: 34,19
  Topic: org.nginxPartition: 23   Leader: 20  Replicas:
  20,35 Isr: 35,20
  Topic: org.nginxPartition: 24   Leader: 21  Replicas:
  21,36 Isr: 21,36
  Topic: org.nginxPartition: 25 

Re: isr never update

2014-11-24 Thread Jun Rao
Which version of Kafka are you using? Any error in the controller and the
state-change log?

Thanks,

Jun

On Fri, Nov 21, 2014 at 5:59 PM, Shangan Chen chenshangan...@gmail.com
wrote:

 In the initial state all replicas are in the ISR, but sometimes when I
 check the topic state, a replica can never rejoin the ISR even though it
 is actually synchronized. I saw in the log that the leader logged an
 expand-ISR request, but it did not take effect. I found an interesting
 thing: the shrink and expand requests happened just after the controller
 switch. I don't know whether that is related, and the controller log has
 been overwritten, so I cannot verify it. Is there anything I can do to
 trigger the ISR update? Currently I alter the partition state in
 zookeeper and it works, but it requires a lot of manual work because I
 have quite a lot of topics in my cluster. Some useful information is as
 follows.

 *my replica lag config for default:*

 replica.lag.time.max.ms=1
 replica.lag.max.messages=4000

 *controller info:*

 [zk: localhost:2181(CONNECTED) 4] get /kafka08/controller
 {"version":1,"brokerid":29,"timestamp":"1416608404008"}
 cZxid = 0x5a4c85923
 ctime = Sat Nov 22 06:20:04 CST 2014
 mZxid = 0x5a4c85923
 mtime = Sat Nov 22 06:20:04 CST 2014
 pZxid = 0x5a4c85923
 cversion = 0
 dataVersion = 0
 aclVersion = 0
 ephemeralOwner = 0x5477ba622cb6c7d
 dataLength = 55
 numChildren = 0


 *topic info:*

 Topic:org.nginx PartitionCount:48   ReplicationFactor:2 Configs:
 Topic: org.nginxPartition: 0Leader: 17  Replicas:
 17,32 Isr: 17,32
 Topic: org.nginxPartition: 1Leader: 18  Replicas:
 18,33 Isr: 18,33
 Topic: org.nginxPartition: 2Leader: 19  Replicas:
 19,34 Isr: 34,19
 Topic: org.nginxPartition: 3Leader: 20  Replicas:
 20,35 Isr: 35,20
 Topic: org.nginxPartition: 4Leader: 21  Replicas:
 21,36 Isr: 21,36
 Topic: org.nginxPartition: 5Leader: 22  Replicas:
 22,17 Isr: 17,22
 Topic: org.nginxPartition: 6Leader: 23  Replicas:
 23,18 Isr: 18,23
 Topic: org.nginxPartition: 7Leader: 24  Replicas:
 24,19 Isr: 24,19
 Topic: org.nginxPartition: 8Leader: 25  Replicas:
 25,20 Isr: 25,20
 Topic: org.nginxPartition: 9Leader: 26  Replicas:
 26,21 Isr: 26,21
 Topic: org.nginxPartition: 10   Leader: 27  Replicas:
 27,22 Isr: 27,22
 Topic: org.nginxPartition: 11   Leader: 28  Replicas:
 28,23 Isr: 28,23
 Topic: org.nginxPartition: 12   Leader: 29  Replicas:
 29,24 Isr: 29
 Topic: org.nginxPartition: 13   Leader: 30  Replicas:
 30,25 Isr: 30,25
 Topic: org.nginxPartition: 14   Leader: 31  Replicas:
 31,26 Isr: 26,31
 Topic: org.nginxPartition: 15   Leader: 32  Replicas:
 32,27 Isr: 27,32
 Topic: org.nginxPartition: 16   Leader: 33  Replicas:
 33,28 Isr: 33,28
 Topic: org.nginxPartition: 17   Leader: 34  Replicas:
 34,29 Isr: 29,34
 Topic: org.nginxPartition: 18   Leader: 35  Replicas:
 35,30 Isr: 30,35
 Topic: org.nginxPartition: 19   Leader: 36  Replicas:
 36,31 Isr: 31,36
 Topic: org.nginxPartition: 20   Leader: 17  Replicas:
 17,32 Isr: 17,32
 Topic: org.nginxPartition: 21   Leader: 18  Replicas:
 18,33 Isr: 18,33
 Topic: org.nginxPartition: 22   Leader: 19  Replicas:
 19,34 Isr: 34,19
 Topic: org.nginxPartition: 23   Leader: 20  Replicas:
 20,35 Isr: 35,20
 Topic: org.nginxPartition: 24   Leader: 21  Replicas:
 21,36 Isr: 21,36
 Topic: org.nginxPartition: 25   Leader: 22  Replicas:
 22,17 Isr: 17,22
 Topic: org.nginxPartition: 26   Leader: 23  Replicas:
 23,18 Isr: 18,23
 Topic: org.nginxPartition: 27   Leader: 24  Replicas:
 24,19 Isr: 24,19
 Topic: org.nginxPartition: 28   Leader: 25  Replicas:
 25,20 Isr: 25,20
 Topic: org.nginxPartition: 29   Leader: 26  Replicas:
 26,21 Isr: 26,21
 Topic: org.nginxPartition: 30   Leader: 27  Replicas:
 27,22 Isr: 27,22
 Topic: org.nginxPartition: 31   Leader: 28  Replicas:
 28,23 Isr: 28,23
 Topic: org.nginxPartition: 32   Leader: 29  Replicas:
 29,24 Isr: 29
 Topic: org.nginxPartition: 33   Leader: 30  Replicas:
 30,25 Isr: 30,25
 Topic: org.nginxPartition: 34   Leader: 31  Replicas:
 31,26 Isr: 26,31
 Topic: org.nginxPartition: 35   Leader: 32  Replicas:
 32,27 Isr: 27,32
 Topic: org.nginxPartition: 36   Leader: 33  Replicas:
 33,28 Isr: 33,28
 Topic: org.nginxPartition: 37   Leader: 34  Replicas:
 34,29 Isr: 29,34
 

isr never update

2014-11-21 Thread Shangan Chen
In the initial state all replicas are in the ISR, but sometimes when I
check the topic state, a replica can never rejoin the ISR even though it
is actually synchronized. I saw in the log that the leader logged an
expand-ISR request, but it did not take effect. I found an interesting
thing: the shrink and expand requests happened just after the controller
switch. I don't know whether that is related, and the controller log has
been overwritten, so I cannot verify it. Is there anything I can do to
trigger the ISR update? Currently I alter the partition state in zookeeper
and it works, but it requires a lot of manual work because I have quite a
lot of topics in my cluster. Some useful information is as follows.
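
For reference, that manual edit could be scripted roughly as below instead
of being done topic by topic by hand. This is only a sketch: fix_state is a
placeholder for whatever corrected leader/isr values have been verified,
the paths assume my /kafka08 chroot, and writing these znodes behind the
running controller's back is strictly a last resort:

import json
from kazoo.client import KazooClient

def fix_state(state):
    # placeholder: return a corrected partition-state dict, or None to skip
    return None

zk = KazooClient(hosts="localhost:2181")
zk.start()

for topic in zk.get_children("/kafka08/brokers/topics"):
    base = "/kafka08/brokers/topics/%s/partitions" % topic
    for partition in zk.get_children(base):
        path = "%s/%s/state" % (base, partition)
        data, stat = zk.get(path)
        state = json.loads(data.decode("utf-8"))
        new_state = fix_state(state)
        if new_state is not None and new_state != state:
            # version=stat.version makes the write fail if the controller
            # has modified the znode since we read it
            zk.set(path, json.dumps(new_state).encode("utf-8"),
                   version=stat.version)
            print("updated [%s,%s]" % (topic, partition))

zk.stop()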

*my replica lag config for default:*

replica.lag.time.max.ms=1
replica.lag.max.messages=4000

*controller info:*

[zk: localhost:2181(CONNECTED) 4] get /kafka08/controller
{"version":1,"brokerid":29,"timestamp":"1416608404008"}
cZxid = 0x5a4c85923
ctime = Sat Nov 22 06:20:04 CST 2014
mZxid = 0x5a4c85923
mtime = Sat Nov 22 06:20:04 CST 2014
pZxid = 0x5a4c85923
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x5477ba622cb6c7d
dataLength = 55
numChildren = 0


*topic info:*

Topic:org.nginx PartitionCount:48   ReplicationFactor:2 Configs:
Topic: org.nginxPartition: 0Leader: 17  Replicas:
17,32 Isr: 17,32
Topic: org.nginxPartition: 1Leader: 18  Replicas:
18,33 Isr: 18,33
Topic: org.nginxPartition: 2Leader: 19  Replicas:
19,34 Isr: 34,19
Topic: org.nginxPartition: 3Leader: 20  Replicas:
20,35 Isr: 35,20
Topic: org.nginxPartition: 4Leader: 21  Replicas:
21,36 Isr: 21,36
Topic: org.nginxPartition: 5Leader: 22  Replicas:
22,17 Isr: 17,22
Topic: org.nginxPartition: 6Leader: 23  Replicas:
23,18 Isr: 18,23
Topic: org.nginxPartition: 7Leader: 24  Replicas:
24,19 Isr: 24,19
Topic: org.nginxPartition: 8Leader: 25  Replicas:
25,20 Isr: 25,20
Topic: org.nginxPartition: 9Leader: 26  Replicas:
26,21 Isr: 26,21
Topic: org.nginxPartition: 10   Leader: 27  Replicas:
27,22 Isr: 27,22
Topic: org.nginxPartition: 11   Leader: 28  Replicas:
28,23 Isr: 28,23
Topic: org.nginxPartition: 12   Leader: 29  Replicas:
29,24 Isr: 29
Topic: org.nginxPartition: 13   Leader: 30  Replicas:
30,25 Isr: 30,25
Topic: org.nginxPartition: 14   Leader: 31  Replicas:
31,26 Isr: 26,31
Topic: org.nginxPartition: 15   Leader: 32  Replicas:
32,27 Isr: 27,32
Topic: org.nginxPartition: 16   Leader: 33  Replicas:
33,28 Isr: 33,28
Topic: org.nginxPartition: 17   Leader: 34  Replicas:
34,29 Isr: 29,34
Topic: org.nginxPartition: 18   Leader: 35  Replicas:
35,30 Isr: 30,35
Topic: org.nginxPartition: 19   Leader: 36  Replicas:
36,31 Isr: 31,36
Topic: org.nginxPartition: 20   Leader: 17  Replicas:
17,32 Isr: 17,32
Topic: org.nginxPartition: 21   Leader: 18  Replicas:
18,33 Isr: 18,33
Topic: org.nginxPartition: 22   Leader: 19  Replicas:
19,34 Isr: 34,19
Topic: org.nginxPartition: 23   Leader: 20  Replicas:
20,35 Isr: 35,20
Topic: org.nginxPartition: 24   Leader: 21  Replicas:
21,36 Isr: 21,36
Topic: org.nginxPartition: 25   Leader: 22  Replicas:
22,17 Isr: 17,22
Topic: org.nginxPartition: 26   Leader: 23  Replicas:
23,18 Isr: 18,23
Topic: org.nginxPartition: 27   Leader: 24  Replicas:
24,19 Isr: 24,19
Topic: org.nginxPartition: 28   Leader: 25  Replicas:
25,20 Isr: 25,20
Topic: org.nginxPartition: 29   Leader: 26  Replicas:
26,21 Isr: 26,21
Topic: org.nginxPartition: 30   Leader: 27  Replicas:
27,22 Isr: 27,22
Topic: org.nginxPartition: 31   Leader: 28  Replicas:
28,23 Isr: 28,23
Topic: org.nginxPartition: 32   Leader: 29  Replicas:
29,24 Isr: 29
Topic: org.nginxPartition: 33   Leader: 30  Replicas:
30,25 Isr: 30,25
Topic: org.nginxPartition: 34   Leader: 31  Replicas:
31,26 Isr: 26,31
Topic: org.nginxPartition: 35   Leader: 32  Replicas:
32,27 Isr: 27,32
Topic: org.nginxPartition: 36   Leader: 33  Replicas:
33,28 Isr: 33,28
Topic: org.nginxPartition: 37   Leader: 34  Replicas:
34,29 Isr: 29,34
Topic: org.nginxPartition: 38   Leader: 35  Replicas:
35,30 Isr: 30,35
Topic: org.nginxPartition: 39   Leader: 36  Replicas:
36,31 Isr: 31,36
Topic: org.nginxPartition: 40   Leader: 17  Replicas:
17,32 Isr: 17,32
Topic: org.nginx