[ https://issues.apache.org/jira/browse/KAFKA-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947198#comment-16947198 ]

ASF GitHub Bot commented on KAFKA-8122:
---------------------------------------

mjsax commented on pull request #7470: KAFKA-8122: Fix Kafka Streams EOS 
integration test
URL: https://github.com/apache/kafka/pull/7470
 
 
   We tried to use `commitRequested` to synchronize the test progress; however, 
this mechanism is broken. Note that `context.commit();` is only a request 
that Kafka Streams should commit as soon as possible -- after `context.commit()` 
returns, only an internal flag is set and the commit has not been executed yet.
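   For illustration, the deferred-commit semantics can be modeled roughly as 
follows. This is a self-contained sketch, not the actual `StreamTask` code; the 
class and method names are made up for the example:

```java
// Hypothetical model of how a commit *request* differs from the commit itself.
// In Kafka Streams, ProcessorContext#commit() only sets a flag; the stream
// thread performs the real commit later in its processing loop.
public class CommitRequestModel {
    private boolean commitRequested = false;
    private int committedRecords = 0;
    private int processedRecords = 0;

    // Analogous to context.commit(): returns immediately, commits nothing.
    public void requestCommit() {
        commitRequested = true;
    }

    // Analogous to processing one more polled record.
    public void process() {
        processedRecords++;
    }

    // Analogous to the stream thread honoring the request on a later iteration.
    public void maybeCommit() {
        if (commitRequested) {
            committedRecords = processedRecords;
            commitRequested = false;
        }
    }

    public int getCommittedRecords() {
        return committedRecords;
    }

    public static void main(String[] args) {
        CommitRequestModel task = new CommitRequestModel();
        task.process();
        task.requestCommit();   // flag set, but nothing committed yet
        task.process();         // more records may still be processed ...
        task.maybeCommit();     // ... before the commit actually happens
        System.out.println("committed " + task.getCommittedRecords() + " records");
    }
}
```

   This is exactly the race the test hits: records processed between the request 
and the actual commit are included in the commit.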
   
   Hence, after the counter is increased to 2, there is no guarantee what 
happens next: we might commit, or we might `poll()` for new data first. If 
`writeInputData(uncommittedDataBeforeFailure);` executes before Kafka Streams 
calls `poll()` again, we might process more than 10 records per partition 
before we actually commit, and hence the test fails.
   
   A better approach is to read the committed output data before writing new 
input data, to make sure the new data is not part of the first transaction and 
stays uncommitted until we inject the error.
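   Concretely, "reading the committed output data" means consuming the output 
topic with read-committed isolation, so uncommitted transactional records are 
invisible to the verification step. A minimal sketch of the consumer 
configuration involved (group id and bootstrap address are made-up placeholders; 
`isolation.level` is the real consumer property):

```java
import java.util.Properties;

public class ReadCommittedConfig {
    // Builds consumer properties for verifying EOS output. With
    // isolation.level=read_committed the consumer only returns records from
    // committed transactions, so uncommitted data cannot leak into the check.
    // (The consumer default is read_uncommitted.)
    public static Properties verificationConsumerConfig(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("group.id", "eos-verification");       // hypothetical group id
        props.put("isolation.level", "read_committed");
        props.put("enable.auto.commit", "false");
        return props;
    }

    public static void main(String[] args) {
        Properties props = verificationConsumerConfig("localhost:9092");
        System.out.println(props.getProperty("isolation.level"));
    }
}
```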
   
   Call for review @guozhangwang @ableegoldman @abbccdda @cpettitt-confluent 
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky Test EosIntegrationTest#shouldNotViolateEosIfOneTaskFailsWithState
> ------------------------------------------------------------------------
>
>                 Key: KAFKA-8122
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8122
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams, unit tests
>    Affects Versions: 2.3.0, 2.4.0
>            Reporter: Matthias J. Sax
>            Assignee: Matthias J. Sax
>            Priority: Major
>              Labels: flaky-test
>             Fix For: 2.4.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3285/testReport/junit/org.apache.kafka.streams.integration/EosIntegrationTest/shouldNotViolateEosIfOneTaskFailsWithState/]
> {quote}java.lang.AssertionError: Expected: <[KeyValue(0, 0), KeyValue(0, 1), 
> KeyValue(0, 3), KeyValue(0, 6), KeyValue(0, 10), KeyValue(0, 15), KeyValue(0, 
> 21), KeyValue(0, 28), KeyValue(0, 36), KeyValue(0, 45)]> but: was 
> <[KeyValue(0, 0), KeyValue(0, 1), KeyValue(0, 3), KeyValue(0, 6), KeyValue(0, 
> 10), KeyValue(0, 15), KeyValue(0, 21), KeyValue(0, 28), KeyValue(0, 36), 
> KeyValue(0, 45), KeyValue(0, 55), KeyValue(0, 66), KeyValue(0, 78), 
> KeyValue(0, 91), KeyValue(0, 105)]> at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6) at 
> org.apache.kafka.streams.integration.EosIntegrationTest.checkResultPerKey(EosIntegrationTest.java:212)
>  at 
> org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState(EosIntegrationTest.java:414){quote}
> STDOUT
> {quote}[2019-03-17 01:19:51,971] INFO Created server with tickTime 800 
> minSessionTimeout 1600 maxSessionTimeout 16000 datadir 
> /tmp/kafka-10997967593034298484/version-2 snapdir 
> /tmp/kafka-5184295822696533708/version-2 
> (org.apache.zookeeper.server.ZooKeeperServer:174) [2019-03-17 01:19:51,971] 
> INFO binding to port /127.0.0.1:0 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:89) [2019-03-17 
> 01:19:51,973] INFO KafkaConfig values: advertised.host.name = null 
> advertised.listeners = null advertised.port = null 
> alter.config.policy.class.name = null 
> alter.log.dirs.replication.quota.window.num = 11 
> alter.log.dirs.replication.quota.window.size.seconds = 1 
> authorizer.class.name = auto.create.topics.enable = false 
> auto.leader.rebalance.enable = true background.threads = 10 broker.id = 0 
> broker.id.generation.enable = true broker.rack = null 
> client.quota.callback.class = null compression.type = producer 
> connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 
> 600000 connections.max.reauth.ms = 0 control.plane.listener.name = null 
> controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 
> controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 
> 30000 create.topic.policy.class.name = null default.replication.factor = 1 
> delegation.token.expiry.check.interval.ms = 3600000 
> delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null 
> delegation.token.max.lifetime.ms = 604800000 
> delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = 
> true fetch.purgatory.purge.interval.requests = 1000 
> group.initial.rebalance.delay.ms = 0 group.max.session.timeout.ms = 300000 
> group.max.size = 2147483647 group.min.session.timeout.ms = 0 host.name = 
> localhost inter.broker.listener.name = null inter.broker.protocol.version = 
> 2.2-IV1 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] 
> leader.imbalance.check.interval.seconds = 300 
> leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = 
> PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL 
> listeners = null log.cleaner.backoff.ms = 15000 
> log.cleaner.dedupe.buffer.size = 2097152 log.cleaner.delete.retention.ms = 
> 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 
> log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 
> 1.7976931348623157E308 log.cleaner.min.cleanable.ratio = 0.5 
> log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 
> log.cleanup.policy = [delete] log.dir = 
> /tmp/junit16020146621422955757/junit17406374597406011269 log.dirs = null 
> log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = 
> null log.flush.offset.checkpoint.interval.ms = 60000 
> log.flush.scheduler.interval.ms = 9223372036854775807 
> log.flush.start.offset.checkpoint.interval.ms = 60000 
> log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 
> log.message.downconversion.enable = true log.message.format.version = 2.2-IV1 
> log.message.timestamp.difference.max.ms = 9223372036854775807 
> log.message.timestamp.type = CreateTime log.preallocate = false 
> log.retention.bytes = -1 log.retention.check.interval.ms = 300000 
> log.retention.hours = 168 log.retention.minutes = null log.retention.ms = 
> null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null 
> log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms 
> = 60000 max.connections = 2147483647 max.connections.per.ip = 2147483647 
> max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots 
> = 1000 message.max.bytes = 1000000 metric.reporters = [] metrics.num.samples 
> = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 
> min.insync.replicas = 1 num.io.threads = 8 num.network.threads = 3 
> num.partitions = 1 num.recovery.threads.per.data.dir = 1 
> num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 
> offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 
> offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 
> offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 
> 10080 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 
> offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 
> password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 
> password.encoder.iterations = 4096 password.encoder.key.length = 128 
> password.encoder.keyfactory.algorithm = null password.encoder.old.secret = 
> null password.encoder.secret = null port = 0 principal.builder.class = null 
> producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = 
> -1 queued.max.requests = 500 quota.consumer.default = 9223372036854775807 
> quota.producer.default = 9223372036854775807 quota.window.num = 11 
> quota.window.size.seconds = 1 replica.fetch.backoff.ms = 1000 
> replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 
> replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 
> replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms 
> = 10000 replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms 
> = 30000 replication.quota.window.num = 11 
> replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 
> reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null 
> sasl.enabled.mechanisms = [GSSAPI] sasl.jaas.config = null 
> sasl.kerberos.kinit.cmd = /usr/bin/kinit 
> sasl.kerberos.min.time.before.relogin = 60000 
> sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name 
> = null sasl.kerberos.ticket.renew.jitter = 0.05 
> sasl.kerberos.ticket.renew.window.factor = 0.8 
> sasl.login.callback.handler.class = null sasl.login.class = null 
> sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds 
> = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter 
> = 0.05 sasl.mechanism.inter.broker.protocol = GSSAPI 
> sasl.server.callback.handler.class = null security.inter.broker.protocol = 
> PLAINTEXT socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 
> 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] 
> ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] 
> ssl.endpoint.identification.algorithm = https ssl.key.password = null 
> ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null 
> ssl.keystore.password = null ssl.keystore.type = JKS 
> ssl.principal.mapping.rules = [DEFAULT] ssl.protocol = TLS ssl.provider = 
> null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = 
> PKIX ssl.truststore.location = null ssl.truststore.password = null 
> ssl.truststore.type = JKS 
> transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000 
> transaction.max.timeout.ms = 900000 
> transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 
> transaction.state.log.load.buffer.size = 5242880 
> transaction.state.log.min.isr = 2 transaction.state.log.num.partitions = 50 
> transaction.state.log.replication.factor = 3 
> transaction.state.log.segment.bytes = 104857600 
> transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = 
> false zookeeper.connect = 127.0.0.1:40922 zookeeper.connection.timeout.ms = 
> null zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 
> 10000 zookeeper.set.acl = false zookeeper.sync.time.ms = 2000 
> (kafka.server.KafkaConfig:279) [2019-03-17 01:19:51,974] INFO starting 
> (kafka.server.KafkaServer:66) [2019-03-17 01:19:51,975] INFO Connecting to 
> zookeeper on 127.0.0.1:40922 (kafka.server.KafkaServer:66) [2019-03-17 
> 01:19:51,975] INFO [ZooKeeperClient] Initializing a new session to 
> 127.0.0.1:40922. (kafka.zookeeper.ZooKeeperClient:66) [2019-03-17 
> 01:19:51,976] INFO Initiating client connection, 
> connectString=127.0.0.1:40922 sessionTimeout=10000 
> watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@c49df6a 
> (org.apache.zookeeper.ZooKeeper:442) [2019-03-17 01:19:51,976] INFO Opening 
> socket connection to server localhost/127.0.0.1:40922. Will not attempt to 
> authenticate using SASL (unknown error) 
> (org.apache.zookeeper.ClientCnxn:1029) [2019-03-17 01:19:51,976] INFO 
> [ZooKeeperClient] Waiting until connected. 
> (kafka.zookeeper.ZooKeeperClient:66) [2019-03-17 01:19:51,977] INFO Socket 
> connection established to localhost/127.0.0.1:40922, initiating session 
> (org.apache.zookeeper.ClientCnxn:879) [2019-03-17 01:19:51,977] INFO Accepted 
> socket connection from /127.0.0.1:59496 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:215) [2019-03-17 
> 01:19:51,977] INFO Client attempting to establish new session at 
> /127.0.0.1:59496 (org.apache.zookeeper.server.ZooKeeperServer:949) 
> [2019-03-17 01:19:51,977] INFO Creating new log file: log.1 
> (org.apache.zookeeper.server.persistence.FileTxnLog:213) [2019-03-17 
> 01:19:51,978] INFO Established session 0x102ebdac1b40000 with negotiated 
> timeout 10000 for client /127.0.0.1:59496 
> (org.apache.zookeeper.server.ZooKeeperServer:694) [2019-03-17 01:19:51,978] 
> INFO Session establishment complete on server localhost/127.0.0.1:40922, 
> sessionid = 0x102ebdac1b40000, negotiated timeout = 10000 
> (org.apache.zookeeper.ClientCnxn:1303) [2019-03-17 01:19:51,979] INFO 
> [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient:66) [2019-03-17 
> 01:19:51,980] INFO Got user-level KeeperException when processing 
> sessionid:0x102ebdac1b40000 type:create cxid:0x2 zxid:0x3 txntype:-1 
> reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers 
> (org.apache.zookeeper.server.PrepRequestProcessor:653) [2019-03-17 
> 01:19:51,983] INFO SessionTrackerImpl exited loop! 
> (org.apache.zookeeper.server.SessionTrackerImpl:163) [2019-03-17 
> 01:19:51,988] INFO Got user-level KeeperException when processing 
> sessionid:0x102ebdac1b40000 type:create cxid:0x6 zxid:0x7 txntype:-1 
> reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config 
> (org.apache.zookeeper.server.PrepRequestProcessor:653) [2019-03-17 
> 01:19:51,988] INFO Got user-level KeeperException when processing 
> sessionid:0x102ebdac1b40000 type:create cxid:0x9 zxid:0xa txntype:-1 
> reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin 
> (org.apache.zookeeper.server.PrepRequestProcessor:653) [2019-03-17 
> 01:19:51,991] INFO Got user-level KeeperException when processing 
> sessionid:0x102ebdac1b40000 type:create cxid:0x15 zxid:0x15 txntype:-1 
> reqpath:n/a Error Path:/cluster Error:KeeperErrorCode = NoNode for /cluster 
> (org.apache.zookeeper.server.PrepRequestProcessor:653) [2019-03-17 
> 01:19:51,992] INFO Cluster ID = uGsrLrj_SQaCi6cpT07M4Q 
> (kafka.server.KafkaServer:66) [2019-03-17 01:19:51,992] WARN No 
> meta.properties file under dir 
> /tmp/junit16020146621422955757/junit17406374597406011269/meta.properties 
> (kafka.server.BrokerMetadataCheckpoint:70) [2019-03-17 01:19:51,994] INFO 
> KafkaConfig values: [identical to the dump above] 
> (kafka.server.KafkaConfig:279) [2019-03-17 01:19:51,996] INFO KafkaConfig 
> values: [identical to the dump above] 
> (kafka.server.KafkaConfig:279) [2019-03-17 01:19:51,999] INFO 
> [ThrottledChannelReaper-Fetch]: Starting 
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66) [2019-03-17 
> 01:19:51,999] INFO [ThrottledChannelReaper-Produce]: Starting 
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66) [2019-03-17 
> 01:19:51,999] INFO [ThrottledChannelReaper-Request]: Starting 
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66) [2019-03-17 
> 01:19:52,000] INFO Loading logs. (kafka.log.LogManager:66) [2019-03-17 
> 01:19:52,001] INFO Logs loading complete in 0 ms. (kafka.log.LogManager:66) 
> [2019-03-17 01:19:52,001] INFO Starting log cleanup with a period of 300000 
> ms. (kafka.log.LogManager:66) [2019-03-17 01:19:52,002] INFO Starting log 
> flusher with a default period of 9223372036854775807 ms. 
> (kafka.log.LogManager:66) [2019-03-17 01:19:52,002] INFO Starting the log 
> cleaner (kafka.log.LogCleaner:66) [2019-03-17 01:19:52,004] INFO 
> [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner:66) [2019-03-17 
> 01:19:52,024] INFO Awaiting socket connections on localhost:39982. 
> (kafka.network.Acceptor:66) [2019-03-17 01:19:52,027] INFO [SocketServer 
> brokerId=0] Created data-plane acceptor and processors for endpoint : EndPoint(localhost,0,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer:66)
> [2019-03-17 01:19:52,028] INFO [SocketServer brokerId=0] Started 1 acceptor threads for data-plane (kafka.network.SocketServer:66)
> [2019-03-17 01:19:52,028] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:19:52,029] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:19:52,029] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:19:52,029] INFO [ExpirationReaper-0-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:19:52,031] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler:66)
> [2019-03-17 01:19:52,031] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient:66)
> [2019-03-17 01:19:52,033] INFO Stat of the created znode at /brokers/ids/0 is: 24,24,1552785592032,1552785592032,1,0,0,72879868776546304,190,0,24 (kafka.zk.KafkaZkClient:66)
> [2019-03-17 01:19:52,033] INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(localhost,39982,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 24 (kafka.zk.KafkaZkClient:66)
> [2019-03-17 01:19:52,034] WARN No meta.properties file under dir /tmp/junit16020146621422955757/junit17406374597406011269/meta.properties (kafka.server.BrokerMetadataCheckpoint:70)
> [2019-03-17 01:19:52,086] INFO [ControllerEventThread controllerId=0] Starting (kafka.controller.ControllerEventManager$ControllerEventThread:66)
> [2019-03-17 01:19:52,087] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:19:52,087] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:19:52,088] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:19:52,088] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient:66)
> [2019-03-17 01:19:52,089] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator:66)
> [2019-03-17 01:19:52,089] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator:66)
> [2019-03-17 01:19:52,090] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager:66)
> [2019-03-17 01:19:52,090] INFO [Controller id=0] 0 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,090] INFO [Controller id=0] Registering handlers (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,091] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager:66)
> [2019-03-17 01:19:52,092] INFO [Controller id=0] Deleting log dir event notifications (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,092] INFO [Controller id=0] Deleting isr change notifications (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,093] INFO [Controller id=0] Initializing controller context (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,094] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator:66)
> [2019-03-17 01:19:52,094] INFO [Controller id=0] Initialized broker epochs cache: Map(0 -> 24) (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,094] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator:66)
> [2019-03-17 01:19:52,095] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager:66)
> [2019-03-17 01:19:52,098] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread:66)
> [2019-03-17 01:19:52,098] INFO [RequestSendThread controllerId=0] Starting (kafka.controller.RequestSendThread:66)
> [2019-03-17 01:19:52,099] INFO [Controller id=0] Partitions being reassigned: Map() (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,099] INFO [Controller id=0] Currently active brokers in the cluster: Set(0) (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,099] INFO [Controller id=0] Currently shutting brokers in the cluster: Set() (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,099] INFO [Controller id=0] Current list of topics in the cluster: Set() (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,099] INFO [Controller id=0] Fetching topic deletions in progress (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,100] INFO [Controller id=0] List of topics to be deleted: (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,100] INFO [SocketServer brokerId=0] Started data-plane processors for 1 acceptors (kafka.network.SocketServer:66)
> [2019-03-17 01:19:52,100] INFO [Controller id=0] List of topics ineligible for deletion: (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,101] INFO Kafka version: 2.3.0-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser:109)
> [2019-03-17 01:19:52,101] INFO Kafka commitId: 78d2111339e621ce (org.apache.kafka.common.utils.AppInfoParser:110)
> [2019-03-17 01:19:52,101] INFO [Controller id=0] Initializing topic deletion manager (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,101] INFO [KafkaServer id=0] started (kafka.server.KafkaServer:66)
> [2019-03-17 01:19:52,102] INFO [Controller id=0] Sending update metadata request (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,102] INFO [ReplicaStateMachine controllerId=0] Initializing replica state (kafka.controller.ReplicaStateMachine:66)
> [2019-03-17 01:19:52,102] INFO [ReplicaStateMachine controllerId=0] Triggering online replica state changes (kafka.controller.ReplicaStateMachine:66)
> [2019-03-17 01:19:52,102] INFO [ReplicaStateMachine controllerId=0] Started replica state machine with initial state -> Map() (kafka.controller.ReplicaStateMachine:66)
> [2019-03-17 01:19:52,102] INFO [RequestSendThread controllerId=0] Controller 0 connected to localhost:39982 (id: 0 rack: null) for sending state change requests (kafka.controller.RequestSendThread:66)
> [2019-03-17 01:19:52,103] INFO [PartitionStateMachine controllerId=0] Initializing partition state (kafka.controller.PartitionStateMachine:66)
> [2019-03-17 01:19:52,103] INFO [PartitionStateMachine controllerId=0] Triggering online partition state changes (kafka.controller.PartitionStateMachine:66)
> [2019-03-17 01:19:52,103] INFO KafkaConfig values:
> 	advertised.host.name = null
> 	advertised.listeners = null
> 	advertised.port = null
> 	alter.config.policy.class.name = null
> 	alter.log.dirs.replication.quota.window.num = 11
> 	alter.log.dirs.replication.quota.window.size.seconds = 1
> 	authorizer.class.name = 
> 	auto.create.topics.enable = false
> 	auto.leader.rebalance.enable = true
> 	background.threads = 10
> 	broker.id = 1
> 	broker.id.generation.enable = true
> 	broker.rack = null
> 	client.quota.callback.class = null
> 	compression.type = producer
> 	connection.failed.authentication.delay.ms = 100
> 	connections.max.idle.ms = 600000
> 	connections.max.reauth.ms = 0
> 	control.plane.listener.name = null
> 	controlled.shutdown.enable = true
> 	controlled.shutdown.max.retries = 3
> 	controlled.shutdown.retry.backoff.ms = 5000
> 	controller.socket.timeout.ms = 30000
> 	create.topic.policy.class.name = null
> 	default.replication.factor = 1
> 	delegation.token.expiry.check.interval.ms = 3600000
> 	delegation.token.expiry.time.ms = 86400000
> 	delegation.token.master.key = null
> 	delegation.token.max.lifetime.ms = 604800000
> 	delete.records.purgatory.purge.interval.requests = 1
> 	delete.topic.enable = true
> 	fetch.purgatory.purge.interval.requests = 1000
> 	group.initial.rebalance.delay.ms = 0
> 	group.max.session.timeout.ms = 300000
> 	group.max.size = 2147483647
> 	group.min.session.timeout.ms = 0
> 	host.name = localhost
> 	inter.broker.listener.name = null
> 	inter.broker.protocol.version = 2.2-IV1
> 	kafka.metrics.polling.interval.secs = 10
> 	kafka.metrics.reporters = []
> 	leader.imbalance.check.interval.seconds = 300
> 	leader.imbalance.per.broker.percentage = 10
> 	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
> 	listeners = null
> 	log.cleaner.backoff.ms = 15000
> 	log.cleaner.dedupe.buffer.size = 2097152
> 	log.cleaner.delete.retention.ms = 86400000
> 	log.cleaner.enable = true
> 	log.cleaner.io.buffer.load.factor = 0.9
> 	log.cleaner.io.buffer.size = 524288
> 	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
> 	log.cleaner.min.cleanable.ratio = 0.5
> 	log.cleaner.min.compaction.lag.ms = 0
> 	log.cleaner.threads = 1
> 	log.cleanup.policy = [delete]
> 	log.dir = /tmp/junit5742927953559521435/junit4694147204844585114
> 	log.dirs = null
> 	log.flush.interval.messages = 9223372036854775807
> 	log.flush.interval.ms = null
> 	log.flush.offset.checkpoint.interval.ms = 60000
> 	log.flush.scheduler.interval.ms = 9223372036854775807
> 	log.flush.start.offset.checkpoint.interval.ms = 60000
> 	log.index.interval.bytes = 4096
> 	log.index.size.max.bytes = 10485760
> 	log.message.downconversion.enable = true
> 	log.message.format.version = 2.2-IV1
> 	log.message.timestamp.difference.max.ms = 9223372036854775807
> 	log.message.timestamp.type = CreateTime
> 	log.preallocate = false
> 	log.retention.bytes = -1
> 	log.retention.check.interval.ms = 300000
> 	log.retention.hours = 168
> 	log.retention.minutes = null
> 	log.retention.ms = null
> 	log.roll.hours = 168
> 	log.roll.jitter.hours = 0
> 	log.roll.jitter.ms = null
> 	log.roll.ms = null
> 	log.segment.bytes = 1073741824
> 	log.segment.delete.delay.ms = 60000
> 	max.connections = 2147483647
> 	max.connections.per.ip = 2147483647
> 	max.connections.per.ip.overrides = 
> 	max.incremental.fetch.session.cache.slots = 1000
> 	message.max.bytes = 1000000
> 	metric.reporters = []
> 	metrics.num.samples = 2
> 	metrics.recording.level = INFO
> 	metrics.sample.window.ms = 30000
> 	min.insync.replicas = 1
> 	num.io.threads = 8
> 	num.network.threads = 3
> 	num.partitions = 1
> 	num.recovery.threads.per.data.dir = 1
> 	num.replica.alter.log.dirs.threads = null
> 	num.replica.fetchers = 1
> 	offset.metadata.max.bytes = 4096
> 	offsets.commit.required.acks = -1
> 	offsets.commit.timeout.ms = 5000
> 	offsets.load.buffer.size = 5242880
> 	offsets.retention.check.interval.ms = 600000
> 	offsets.retention.minutes = 10080
> 	offsets.topic.compression.codec = 0
> 	offsets.topic.num.partitions = 50
> 	offsets.topic.replication.factor = 1
> 	offsets.topic.segment.bytes = 104857600
> 	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
> 	password.encoder.iterations = 4096
> 	password.encoder.key.length = 128
> 	password.encoder.keyfactory.algorithm = null
> 	password.encoder.old.secret = null
> 	password.encoder.secret = null
> 	port = 0
> 	principal.builder.class = null
> 	producer.purgatory.purge.interval.requests = 1000
> 	queued.max.request.bytes = -1
> 	queued.max.requests = 500
> 	quota.consumer.default = 9223372036854775807
> 	quota.producer.default = 9223372036854775807
> 	quota.window.num = 11
> 	quota.window.size.seconds = 1
> 	replica.fetch.backoff.ms = 1000
> 	replica.fetch.max.bytes = 1048576
> 	replica.fetch.min.bytes = 1
> 	replica.fetch.response.max.bytes = 10485760
> 	replica.fetch.wait.max.ms = 500
> 	replica.high.watermark.checkpoint.interval.ms = 5000
> 	replica.lag.time.max.ms = 10000
> 	replica.socket.receive.buffer.bytes = 65536
> 	replica.socket.timeout.ms = 30000
> 	replication.quota.window.num = 11
> 	replication.quota.window.size.seconds = 1
> 	request.timeout.ms = 30000
> 	reserved.broker.max.id = 1000
> 	sasl.client.callback.handler.class = null
> 	sasl.enabled.mechanisms = [GSSAPI]
> 	sasl.jaas.config = null
> 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
> 	sasl.kerberos.min.time.before.relogin = 60000
> 	sasl.kerberos.principal.to.local.rules = [DEFAULT]
> 	sasl.kerberos.service.name = null
> 	sasl.kerberos.ticket.renew.jitter = 0.05
> 	sasl.kerberos.ticket.renew.window.factor = 0.8
> 	sasl.login.callback.handler.class = null
> 	sasl.login.class = null
> 	sasl.login.refresh.buffer.seconds = 300
> 	sasl.login.refresh.min.period.seconds = 60
> 	sasl.login.refresh.window.factor = 0.8
> 	sasl.login.refresh.window.jitter = 0.05
> 	sasl.mechanism.inter.broker.protocol = GSSAPI
> 	sasl.server.callback.handler.class = null
> 	security.inter.broker.protocol = PLAINTEXT
> 	socket.receive.buffer.bytes = 102400
> 	socket.request.max.bytes = 104857600
> 	socket.send.buffer.bytes = 102400
> 	ssl.cipher.suites = []
> 	ssl.client.auth = none
> 	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> 	ssl.endpoint.identification.algorithm = https
> 	ssl.key.password = null
> 	ssl.keymanager.algorithm = SunX509
> 	ssl.keystore.location = null
> 	ssl.keystore.password = null
> 	ssl.keystore.type = JKS
> 	ssl.principal.mapping.rules = [DEFAULT]
> 	ssl.protocol = TLS
> 	ssl.provider = null
> 	ssl.secure.random.implementation = null
> 	ssl.trustmanager.algorithm = PKIX
> 	ssl.truststore.location = null
> 	ssl.truststore.password = null
> 	ssl.truststore.type = JKS
> 	transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
> 	transaction.max.timeout.ms = 900000
> 	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
> 	transaction.state.log.load.buffer.size = 5242880
> 	transaction.state.log.min.isr = 2
> 	transaction.state.log.num.partitions = 50
> 	transaction.state.log.replication.factor = 3
> 	transaction.state.log.segment.bytes = 104857600
> 	transactional.id.expiration.ms = 604800000
> 	unclean.leader.election.enable = false
> 	zookeeper.connect = 127.0.0.1:40922
> 	zookeeper.connection.timeout.ms = null
> 	zookeeper.max.in.flight.requests = 10
> 	zookeeper.session.timeout.ms = 10000
> 	zookeeper.set.acl = false
> 	zookeeper.sync.time.ms = 2000
>  (kafka.server.KafkaConfig:279)
> [2019-03-17 01:19:52,104] INFO [PartitionStateMachine controllerId=0] Started partition state machine with initial state -> Map() (kafka.controller.PartitionStateMachine:66)
> [2019-03-17 01:19:52,105] INFO [Controller id=0] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,106] INFO [Controller id=0] Removing partitions Set() from the list of reassigned partitions in zookeeper (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,106] INFO [Controller id=0] No more partitions need to be reassigned. Deleting zk path /admin/reassign_partitions (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,106] INFO starting (kafka.server.KafkaServer:66)
> [2019-03-17 01:19:52,107] INFO Connecting to zookeeper on 127.0.0.1:40922 (kafka.server.KafkaServer:66)
> [2019-03-17 01:19:52,107] INFO [Controller id=0] Partitions undergoing preferred replica election: (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,107] INFO [Controller id=0] Partitions that completed preferred replica election: (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,107] INFO [ZooKeeperClient] Initializing a new session to 127.0.0.1:40922. (kafka.zookeeper.ZooKeeperClient:66)
> [2019-03-17 01:19:52,107] INFO [Controller id=0] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,108] INFO Initiating client connection, connectString=127.0.0.1:40922 sessionTimeout=10000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@10b62cc8 (org.apache.zookeeper.ZooKeeper:442)
> [2019-03-17 01:19:52,108] INFO [Controller id=0] Resuming preferred replica election for partitions: (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,108] INFO [Controller id=0] Starting preferred replica leader election for partitions (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,109] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient:66)
> [2019-03-17 01:19:52,109] INFO Got user-level KeeperException when processing sessionid:0x102ebdac1b40000 type:multi cxid:0x38 zxid:0x1c txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election (org.apache.zookeeper.server.PrepRequestProcessor:596)
> [2019-03-17 01:19:52,109] INFO Opening socket connection to server localhost/127.0.0.1:40922. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn:1029)
> [2019-03-17 01:19:52,110] INFO [Controller id=0] Starting the controller scheduler (kafka.controller.KafkaController:66)
> [2019-03-17 01:19:52,110] INFO Accepted socket connection from /127.0.0.1:59508 (org.apache.zookeeper.server.NIOServerCnxnFactory:215)
> [2019-03-17 01:19:52,110] INFO Socket connection established to localhost/127.0.0.1:40922, initiating session (org.apache.zookeeper.ClientCnxn:879)
> [2019-03-17 01:19:52,110] INFO Client attempting to establish new session at /127.0.0.1:59508 (org.apache.zookeeper.server.ZooKeeperServer:949)
> [2019-03-17 01:19:52,111] INFO Established session 0x102ebdac1b40001 with negotiated timeout 10000 for client /127.0.0.1:59508 (org.apache.zookeeper.server.ZooKeeperServer:694)
> [2019-03-17 01:19:52,111] INFO Session establishment complete on server localhost/127.0.0.1:40922, sessionid = 0x102ebdac1b40001, negotiated timeout = 10000 (org.apache.zookeeper.ClientCnxn:1303)
> [2019-03-17 01:19:52,112] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient:66)
> [2019-03-17 01:19:52,112] INFO Got user-level KeeperException when processing sessionid:0x102ebdac1b40001 type:create cxid:0x1 zxid:0x1e txntype:-1 reqpath:n/a Error Path:/consumers Error:KeeperErrorCode = NodeExists for /consumers (org.apache.zookeeper.server.PrepRequestProcessor:653)
> [2019-03-17 01:19:52,113] INFO Got user-level KeeperException when processing sessionid:0x102ebdac1b40001 type:create cxid:0x2 zxid:0x1f txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor:653)
> [2019-03-17 01:19:52,113] INFO Got user-level KeeperException when processing sessionid:0x102ebdac1b40001 type:create cxid:0x3 zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers/topics Error:KeeperErrorCode = NodeExists for /brokers/topics (org.apache.zookeeper.server.PrepRequestProcessor:653)
> [2019-03-17 01:19:52,114] INFO Got user-level KeeperException when processing sessionid:0x102ebdac1b40001 type:create cxid:0x4 zxid:0x21 txntype:-1 reqpath:n/a Error Path:/config/changes Error:KeeperErrorCode = NodeExists for /config/changes (org.apache.zookeeper.server.PrepRequestProcessor:653)
> [2019-03-17 01:19:52,114] INFO Got user-level KeeperException when processing sessionid:0x102ebdac1b40001 type:create cxid:0x5 zxid:0x22 txntype:-1 reqpath:n/a Error Path:/admin/delete_topics Error:KeeperErrorCode = NodeExists for /admin/delete_topics (org.apache.zookeeper.server.PrepRequestProcessor:653)
> [2019-03-17 01:19:52,115] INFO Got user-level KeeperException when processing sessionid:0x102ebdac1b40001 type:create cxid:0x6 zxid:0x23 txntype:-1 reqpath:n/a Error Path:/brokers/seqid Error:KeeperErrorCode = NodeExists for /brokers/seqid (org.apache.zookeeper.server.PrepRequestProcessor:653)
> [2019-03-17 01:19:52,116] INFO Got user-level KeeperException when processing sessionid:0x102ebdac1b40001 type:create cxid:0x7 zxid:0x24 txntype:-1 reqpath:n/a Error Path:/isr_change_notification Error:KeeperErrorCode = NodeExists for /isr_change_notification (org.apache.zookeeper.server.PrepRequestProcessor:653)
> [2019-03-17 01:19:52,116] INFO Got user-level KeeperException when processing sessionid:0x102ebdac1b40001 type:create cxid:0x8 zxid:0x25 txntype:-1 reqpath:n/a Error Path:/latest_producer_id_block Error:KeeperErrorCode = NodeExists for /latest_producer_id_block (org.apache.zookeeper.server.PrepRequestProcessor:653)
> [2019-03-17 01:19:52,117] INFO Got user-level KeeperException when processing sessionid:0x102ebdac1b40001 type:create cxid:0x9 zxid:0x26 txntype:-1 reqpath:n/a Error Path:/log_dir_event_notification Error:KeeperErrorCode = NodeExists for /log_dir_event_notification (org.apache.zookeeper.server.PrepRequestProcessor:653)
> [2019-03-17 01:19:52,117] INFO Got user-level KeeperException when processing sessionid:0x102ebdac1b40001 type:create cxid:0xa zxid:0x27 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor:653)
> [2019-03-17 01:19:52,118] INFO Got user-level KeeperException when processing sessionid:0x102ebdac1b40001 type:create cxid:0xb zxid:0x28 txntype:-1 reqpath:n/a Error Path:/config/clients Error:KeeperErrorCode = NodeExists for /config/clients (org.apache.zookeeper.server.PrepRequestProcessor:653)
> [2019-03-17 01:19:52,118] INFO Got user-level KeeperException when processing sessionid:0x102ebdac1b40001 type:create cxid:0xc zxid:0x29 txntype:-1 reqpath:n/a Error Path:/config/users Error:KeeperErrorCode = NodeExists for /config/users (org.apache.zookeeper.server.PrepRequestProcessor:653)
> [2019-03-17 01:19:52,119] INFO Got user-level KeeperException when processing sessionid:0x102ebdac1b40001 type:create cxid:0xd zxid:0x2a txntype:-1 reqpath:n/a Error Path:/config/brokers Error:KeeperErrorCode = NodeExists for /config/brokers (org.apache.zookeeper.server.PrepRequestProcessor:653)
> [2019-03-17 01:19:52,119] INFO Cluster ID = uGsrLrj_SQaCi6cpT07M4Q (kafka.server.KafkaServer:66)
> [2019-03-17 01:19:52,120] WARN No meta.properties file under dir /tmp/junit5742927953559521435/junit4694147204844585114/meta.properties (kafka.server.BrokerMetadataCheckpoint:70)
> [2019-03-17 01:19:52,122] INFO KafkaConfig values:
> 	advertised.host.name = null
> 	advertised.listeners = null
> 	advertised.port = null
> 	alter.config.policy.class.name = null
> 	alter.log.dirs.replication.quota.window.num = 11
> 	alter.log.dirs.replication.quota.window.size.seconds = 1
> 	authorizer.class.name = 
> 	auto.create.topics.enable = false
> 	auto.leader.rebalance.enable = true
> 	background.threads = 10
> 	broker.id = 1
> 	broker.id.generation.enable = true
> 	broker.rack = null
> 	client.quota.callback.class = null
> 	compression.type = producer
> 	connection.failed.authentication.delay.ms = 100
> 	connections.max.idle.ms = 600000
> 	connections.max.reauth.ms = 0
> 	control.plane.listener.name = null
> 	controlled.shutdown.enable = true
> 	controlled.shutdown.max.retries = 3
> 	controlled.shutdown.retry.backoff.ms = 5000
> 	controller.socket.timeout.ms = 30000
> 	create.topic.policy.class.name = null
> 	default.replication.factor = 1
> 	delegation.token.expiry.check.interval.ms = 3600000
> 	delegation.token.expiry.time.ms = 86400000
> 	delegation.token.master.key = null
> 	delegation.token.max.lifetime.ms = 604800000
> 	delete.records.purgatory.purge.interval.requests = 1
> 	delete.topic.enable = true
> 	fetch.purgatory.purge.interval.requests = 1000
> 	group.initial.rebalance.delay.ms = 0
> 	group.max.session.timeout.ms = 300000
> 	group.max.size = 2147483647
> 	group.min.session.timeout.ms = 0
> 	host.name = localhost
> 	inter.broker.listener.name = null
> 	inter.broker.protocol.version = 2.2-IV1
> 	kafka.metrics.polling.interval.secs = 10
> 	kafka.metrics.reporters = []
> 	leader.imbalance.check.interval.seconds = 300
> 	leader.imbalance.per.broker.percentage = 10
> 	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
> 	listeners = null
> 	log.cleaner.backoff.ms = 15000
> 	log.cleaner.dedupe.buffer.size = 2097152
> 	log.cleaner.delete.retention.ms = 86400000
> 	log.cleaner.enable = true
> 	log.cleaner.io.buffer.load.factor = 0.9
> 	log.cleaner.io.buffer.size = 524288
> 	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
>  ...[truncated 3464294 chars]... 
> tion.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
> 	at kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:388)
> 	at kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:315)
> 	at kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:225)
> 	at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:141)
> 	at kafka.controller.KafkaController$ControlledShutdown.doControlledShutdown(KafkaController.scala:1094)
> 	at kafka.controller.KafkaController$ControlledShutdown.$anonfun$handleProcess$1(KafkaController.scala:1056)
> 	at scala.util.Try$.apply(Try.scala:213)
> 	at kafka.controller.KafkaController$ControlledShutdown.handleProcess(KafkaController.scala:1056)
> 	at kafka.controller.PreemptableControllerEvent.process(KafkaController.scala:1809)
> 	at kafka.controller.PreemptableControllerEvent.process$(KafkaController.scala:1807)
> 	at kafka.controller.KafkaController$ControlledShutdown.process(KafkaController.scala:1047)
> 	at kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:95)
> 	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> 	at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
> 	at kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:95)
> 	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
> [2019-03-17 01:21:29,878] ERROR [Controller id=2 epoch=3] Controller 2 epoch 
> 3 failed to change state for partition __transaction_state-47 from 
> OnlinePartition to OnlinePartition (state.change.logger:76) 
> kafka.common.StateChangeFailedException: Failed to elect leader for partition 
> __transaction_state-47 under strategy 
> ControlledShutdownPartitionLeaderElectionStrategy at 
> kafka.controller.PartitionStateMachine.$anonfun$doElectLeaderForPartitions$9(PartitionStateMachine.scala:390)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:388)
>  at 
> kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:315)
>  at 
> kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:225)
>  at 
> kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:141)
>  at 
> kafka.controller.KafkaController$ControlledShutdown.doControlledShutdown(KafkaController.scala:1094)
>  at 
> kafka.controller.KafkaController$ControlledShutdown.$anonfun$handleProcess$1(KafkaController.scala:1056)
>  at scala.util.Try$.apply(Try.scala:213) at 
> kafka.controller.KafkaController$ControlledShutdown.handleProcess(KafkaController.scala:1056)
>  at 
> kafka.controller.PreemptableControllerEvent.process(KafkaController.scala:1809)
>  at 
> kafka.controller.PreemptableControllerEvent.process$(KafkaController.scala:1807)
>  at 
> kafka.controller.KafkaController$ControlledShutdown.process(KafkaController.scala:1047)
>  at 
> kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:95)
>  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at 
> kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31) at 
> kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:95)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89) 
> [2019-03-17 01:21:29,879] ERROR [Controller id=2 epoch=3] Controller 2 epoch 
> 3 failed to change state for partition __transaction_state-18 from 
> OnlinePartition to OnlinePartition (state.change.logger:76) 
> kafka.common.StateChangeFailedException: Failed to elect leader for partition 
> __transaction_state-18 under strategy 
> ControlledShutdownPartitionLeaderElectionStrategy at 
> kafka.controller.PartitionStateMachine.$anonfun$doElectLeaderForPartitions$9(PartitionStateMachine.scala:390)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:388)
>  at 
> kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:315)
>  at 
> kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:225)
>  at 
> kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:141)
>  at 
> kafka.controller.KafkaController$ControlledShutdown.doControlledShutdown(KafkaController.scala:1094)
>  at 
> kafka.controller.KafkaController$ControlledShutdown.$anonfun$handleProcess$1(KafkaController.scala:1056)
>  at scala.util.Try$.apply(Try.scala:213) at 
> kafka.controller.KafkaController$ControlledShutdown.handleProcess(KafkaController.scala:1056)
>  at 
> kafka.controller.PreemptableControllerEvent.process(KafkaController.scala:1809)
>  at 
> kafka.controller.PreemptableControllerEvent.process$(KafkaController.scala:1807)
>  at 
> kafka.controller.KafkaController$ControlledShutdown.process(KafkaController.scala:1047)
>  at 
> kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:95)
>  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at 
> kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31) at 
> kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:95)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89) 
> [2019-03-17 01:21:29,879] ERROR [Controller id=2 epoch=3] Controller 2 epoch 
> 3 failed to change state for partition __transaction_state-26 from 
> OnlinePartition to OnlinePartition (state.change.logger:76) 
> kafka.common.StateChangeFailedException: Failed to elect leader for partition 
> __transaction_state-26 under strategy 
> ControlledShutdownPartitionLeaderElectionStrategy at 
> kafka.controller.PartitionStateMachine.$anonfun$doElectLeaderForPartitions$9(PartitionStateMachine.scala:390)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:388)
>  at 
> kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:315)
>  at 
> kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:225)
>  at 
> kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:141)
>  at 
> kafka.controller.KafkaController$ControlledShutdown.doControlledShutdown(KafkaController.scala:1094)
>  at 
> kafka.controller.KafkaController$ControlledShutdown.$anonfun$handleProcess$1(KafkaController.scala:1056)
>  at scala.util.Try$.apply(Try.scala:213) at 
> kafka.controller.KafkaController$ControlledShutdown.handleProcess(KafkaController.scala:1056)
>  at 
> kafka.controller.PreemptableControllerEvent.process(KafkaController.scala:1809)
>  at 
> kafka.controller.PreemptableControllerEvent.process$(KafkaController.scala:1807)
>  at 
> kafka.controller.KafkaController$ControlledShutdown.process(KafkaController.scala:1047)
>  at 
> kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:95)
>  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at 
> kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31) at 
> kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:95)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89) 
> [2019-03-17 01:21:29,880] ERROR [Controller id=2 epoch=3] Controller 2 epoch 3 failed to change state for partition __transaction_state-36 from OnlinePartition to OnlinePartition (state.change.logger:76)
> kafka.common.StateChangeFailedException: Failed to elect leader for partition __transaction_state-36 under strategy ControlledShutdownPartitionLeaderElectionStrategy
>  at kafka.controller.PartitionStateMachine.$anonfun$doElectLeaderForPartitions$9(PartitionStateMachine.scala:390)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>  at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>  at kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:388)
>  at kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:315)
>  at kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:225)
>  at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:141)
>  at kafka.controller.KafkaController$ControlledShutdown.doControlledShutdown(KafkaController.scala:1094)
>  at kafka.controller.KafkaController$ControlledShutdown.$anonfun$handleProcess$1(KafkaController.scala:1056)
>  at scala.util.Try$.apply(Try.scala:213)
>  at kafka.controller.KafkaController$ControlledShutdown.handleProcess(KafkaController.scala:1056)
>  at kafka.controller.PreemptableControllerEvent.process(KafkaController.scala:1809)
>  at kafka.controller.PreemptableControllerEvent.process$(KafkaController.scala:1807)
>  at kafka.controller.KafkaController$ControlledShutdown.process(KafkaController.scala:1047)
>  at kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:95)
>  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>  at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
>  at kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:95)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
> [2019-03-17 01:21:29,880 through 01:21:29,886: the same ERROR and identical StateChangeFailedException stack trace repeated for partitions __transaction_state-5, -8, -16, -11, -40, -19, -27, -41, -1, -34, and -35]
> [2019-03-17 01:21:29,888] INFO [KafkaServer id=2] Remaining partitions to move: __transaction_state-42,__transaction_state-13,__transaction_state-46,__transaction_state-17,__transaction_state-34,__transaction_state-5,__transaction_state-38,__transaction_state-9,__transaction_state-26,__transaction_state-30,__transaction_state-1,__transaction_state-18,__transaction_state-22,__transaction_state-12,__transaction_state-45,__transaction_state-16,__transaction_state-49,__transaction_state-4,__transaction_state-37,__transaction_state-8,__transaction_state-41,__transaction_state-29,__transaction_state-0,__transaction_state-33,__transaction_state-21,__transaction_state-25,__transaction_state-11,__transaction_state-44,__transaction_state-15,__transaction_state-48,__transaction_state-3,__transaction_state-36,__transaction_state-7,__transaction_state-40,__transaction_state-28,__transaction_state-32,__transaction_state-20,__transaction_state-24,__transaction_state-10,__transaction_state-43,__transaction_state-14,__transaction_state-47,__transaction_state-2,__transaction_state-35,__transaction_state-6,__transaction_state-39,__transaction_state-27,__transaction_state-31,__transaction_state-19,__transaction_state-23 (kafka.server.KafkaServer:66)
> [2019-03-17 01:21:29,888] INFO [KafkaServer id=2] Error from controller: NONE (kafka.server.KafkaServer:66)
> [2019-03-17 01:21:30,060] WARN [AdminClient clientId=adminclient-263] Connection to node 0 (localhost/127.0.0.1:34298) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:725)
> [2019-03-17 01:21:30,963] WARN [AdminClient clientId=adminclient-263] Connection to node 0 (localhost/127.0.0.1:34298) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:725)
> [2019-03-17 01:21:31,866] WARN [AdminClient clientId=adminclient-263] Connection to node 0 (localhost/127.0.0.1:34298) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:725)
> [2019-03-17 01:21:32,869] WARN [AdminClient clientId=adminclient-263] Connection to node 0 (localhost/127.0.0.1:34298) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:725)
> [2019-03-17 01:21:34,072] WARN [AdminClient clientId=adminclient-263] Connection to node 0 (localhost/127.0.0.1:34298) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:725)
> [2019-03-17 01:21:34,889] WARN [KafkaServer id=2] Retrying controlled shutdown after the previous attempt failed... (kafka.server.KafkaServer:70)
> [2019-03-17 01:21:34,890] WARN [KafkaServer id=2] Proceeding to do an unclean shutdown as all the controlled shutdown attempts failed (kafka.server.KafkaServer:70)
> [2019-03-17 01:21:34,890] INFO [/config/changes-event-process-thread]: Shutting down (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread:66)
> [2019-03-17 01:21:34,890] INFO [/config/changes-event-process-thread]: Stopped (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread:66)
> [2019-03-17 01:21:34,891] INFO [/config/changes-event-process-thread]: Shutdown completed (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread:66)
> [2019-03-17 01:21:34,891] INFO [SocketServer brokerId=2] Stopping socket server request processors (kafka.network.SocketServer:66)
> [2019-03-17 01:21:34,896] INFO [SocketServer brokerId=2] Stopped socket server request processors (kafka.network.SocketServer:66)
> [2019-03-17 01:21:34,896] INFO [data-plane Kafka Request Handler on Broker 2], shutting down (kafka.server.KafkaRequestHandlerPool:66)
> [2019-03-17 01:21:34,897] INFO [data-plane Kafka Request Handler on Broker 2], shut down completely (kafka.server.KafkaRequestHandlerPool:66)
> [2019-03-17 01:21:34,898] INFO [KafkaApi-2] Shutdown complete. (kafka.server.KafkaApis:66)
> [2019-03-17 01:21:34,899] INFO [ExpirationReaper-2-topic]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:21:34,984] INFO [ExpirationReaper-2-topic]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:21:34,984] INFO [ExpirationReaper-2-topic]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:21:34,985] INFO [TransactionCoordinator id=2] Shutting down. (kafka.coordinator.transaction.TransactionCoordinator:66)
> [2019-03-17 01:21:34,986] INFO [ProducerId Manager 2]: Shutdown complete: last producerId assigned 2001 (kafka.coordinator.transaction.ProducerIdManager:66)
> [2019-03-17 01:21:34,986] INFO [Transaction State Manager 2]: Shutdown complete (kafka.coordinator.transaction.TransactionStateManager:66)
> [2019-03-17 01:21:34,987] INFO [Transaction Marker Channel Manager 2]: Shutting down (kafka.coordinator.transaction.TransactionMarkerChannelManager:66)
> [2019-03-17 01:21:34,987] INFO [Transaction Marker Channel Manager 2]: Stopped (kafka.coordinator.transaction.TransactionMarkerChannelManager:66)
> [2019-03-17 01:21:34,987] INFO [Transaction Marker Channel Manager 2]: Shutdown completed (kafka.coordinator.transaction.TransactionMarkerChannelManager:66)
> [2019-03-17 01:21:34,988] INFO [TransactionCoordinator id=2] Shutdown complete. (kafka.coordinator.transaction.TransactionCoordinator:66)
> [2019-03-17 01:21:34,988] INFO [GroupCoordinator 2]: Shutting down. (kafka.coordinator.group.GroupCoordinator:66)
> [2019-03-17 01:21:34,989] INFO [ExpirationReaper-2-Heartbeat]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:21:34,992] INFO [ExpirationReaper-2-Heartbeat]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:21:34,992] INFO [ExpirationReaper-2-Heartbeat]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:21:34,993] INFO [ExpirationReaper-2-Rebalance]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:21:35,175] INFO [ExpirationReaper-2-Rebalance]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:21:35,175] WARN [AdminClient clientId=adminclient-263] Connection to node 0 (localhost/127.0.0.1:34298) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:725)
> [2019-03-17 01:21:35,175] INFO [ExpirationReaper-2-Rebalance]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:21:35,176] INFO [GroupCoordinator 2]: Shutdown complete. (kafka.coordinator.group.GroupCoordinator:66)
> [2019-03-17 01:21:35,176] INFO [ReplicaManager broker=2] Shutting down (kafka.server.ReplicaManager:66)
> [2019-03-17 01:21:35,176] INFO [LogDirFailureHandler]: Shutting down (kafka.server.ReplicaManager$LogDirFailureHandler:66)
> [2019-03-17 01:21:35,177] INFO [LogDirFailureHandler]: Stopped (kafka.server.ReplicaManager$LogDirFailureHandler:66)
> [2019-03-17 01:21:35,177] INFO [LogDirFailureHandler]: Shutdown completed (kafka.server.ReplicaManager$LogDirFailureHandler:66)
> [2019-03-17 01:21:35,177] INFO [ReplicaFetcherManager on broker 2] shutting down (kafka.server.ReplicaFetcherManager:66)
> [2019-03-17 01:21:35,177] INFO [ReplicaFetcherManager on broker 2] shutdown completed (kafka.server.ReplicaFetcherManager:66)
> [2019-03-17 01:21:35,178] INFO [ReplicaAlterLogDirsManager on broker 2] shutting down (kafka.server.ReplicaAlterLogDirsManager:66)
> [2019-03-17 01:21:35,178] INFO [ReplicaAlterLogDirsManager on broker 2] shutdown completed (kafka.server.ReplicaAlterLogDirsManager:66)
> [2019-03-17 01:21:35,178] INFO [ExpirationReaper-2-Fetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:21:35,367] INFO [ExpirationReaper-2-Fetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:21:35,367] INFO [ExpirationReaper-2-Fetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:21:35,368] INFO [ExpirationReaper-2-Produce]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66)
> [2019-03-17 01:21:35,385] INFO [ExpirationReaper-2-Produce]: Stopped 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66) 
> [2019-03-17 01:21:35,385] INFO [ExpirationReaper-2-Produce]: Shutdown 
> completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66) 
> [2019-03-17 01:21:35,386] INFO [ExpirationReaper-2-DeleteRecords]: Shutting 
> down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66) 
> [2019-03-17 01:21:35,520] INFO [ExpirationReaper-2-DeleteRecords]: Stopped 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66) 
> [2019-03-17 01:21:35,520] INFO [ExpirationReaper-2-DeleteRecords]: Shutdown 
> completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66) 
> [2019-03-17 01:21:35,521] INFO [ExpirationReaper-2-ElectPreferredLeader]: 
> Shutting down 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66) 
> [2019-03-17 01:21:35,720] INFO [ExpirationReaper-2-ElectPreferredLeader]: 
> Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66) 
> [2019-03-17 01:21:35,720] INFO [ExpirationReaper-2-ElectPreferredLeader]: 
> Shutdown completed 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper:66) 
> [2019-03-17 01:21:36,178] WARN [AdminClient clientId=adminclient-263] 
> Connection to node 0 (localhost/127.0.0.1:34298) could not be established. 
> Broker may not be available. (org.apache.kafka.clients.NetworkClient:725) 
> [2019-03-17 01:21:36,338] INFO [ReplicaManager broker=2] Shut down completely 
> (kafka.server.ReplicaManager:66) [2019-03-17 01:21:36,339] INFO Shutting 
> down. (kafka.log.LogManager:66) [2019-03-17 01:21:36,339] INFO Shutting down 
> the log cleaner. (kafka.log.LogCleaner:66) [2019-03-17 01:21:36,340] INFO 
> [kafka-log-cleaner-thread-0]: Shutting down (kafka.log.LogCleaner:66) 
> [2019-03-17 01:21:36,340] INFO [kafka-log-cleaner-thread-0]: Stopped 
> (kafka.log.LogCleaner:66) [2019-03-17 01:21:36,340] INFO 
> [kafka-log-cleaner-thread-0]: Shutdown completed (kafka.log.LogCleaner:66) 
> [2019-03-17 01:21:36,342] INFO [ProducerStateManager 
> partition=singlePartitionOutputTopic-0] Writing producer snapshot at offset 
> 42 (kafka.log.ProducerStateManager:66) [2019-03-17 01:21:36,346] INFO 
> [ProducerStateManager partition=__transaction_state-12] Writing producer 
> snapshot at offset 121 (kafka.log.ProducerStateManager:66) [2019-03-17 
> 01:21:36,347] INFO [ProducerStateManager partition=__consumer_offsets-8] 
> Writing producer snapshot at offset 148 (kafka.log.ProducerStateManager:66) 
> [2019-03-17 01:21:36,351] INFO [ProducerStateManager 
> partition=appId-1-store-changelog-1] Writing producer snapshot at offset 22 
> (kafka.log.ProducerStateManager:66) [2019-03-17 01:21:36,356] INFO 
> [ProducerStateManager partition=__transaction_state-0] Writing producer 
> snapshot at offset 12 (kafka.log.ProducerStateManager:66) [2019-03-17 
> 01:21:36,371] INFO [ProducerStateManager partition=__transaction_state-11] 
> Writing producer snapshot at offset 71 (kafka.log.ProducerStateManager:66) 
> [2019-03-17 01:21:36,375] INFO [ProducerStateManager 
> partition=__transaction_state-1] Writing producer snapshot at offset 16 
> (kafka.log.ProducerStateManager:66) [2019-03-17 01:21:36,504] INFO Shutdown 
> complete. (kafka.log.LogManager:66) [2019-03-17 01:21:36,505] INFO 
> [ControllerEventThread controllerId=2] Shutting down 
> (kafka.controller.ControllerEventManager$ControllerEventThread:66) 
> [2019-03-17 01:21:36,505] INFO [ControllerEventThread controllerId=2] Stopped 
> (kafka.controller.ControllerEventManager$ControllerEventThread:66) 
> [2019-03-17 01:21:36,505] INFO [ControllerEventThread controllerId=2] 
> Shutdown completed 
> (kafka.controller.ControllerEventManager$ControllerEventThread:66) 
> [2019-03-17 01:21:36,506] INFO [PartitionStateMachine controllerId=2] Stopped 
> partition state machine (kafka.controller.PartitionStateMachine:66) 
> [2019-03-17 01:21:36,507] INFO [ReplicaStateMachine controllerId=2] Stopped 
> replica state machine (kafka.controller.ReplicaStateMachine:66) [2019-03-17 
> 01:21:36,507] INFO [RequestSendThread controllerId=2] Shutting down 
> (kafka.controller.RequestSendThread:66) [2019-03-17 01:21:36,507] INFO 
> [RequestSendThread controllerId=2] Stopped 
> (kafka.controller.RequestSendThread:66) [2019-03-17 01:21:36,507] INFO 
> [RequestSendThread controllerId=2] Shutdown completed 
> (kafka.controller.RequestSendThread:66) [2019-03-17 01:21:36,509] INFO 
> [Controller id=2] Resigned (kafka.controller.KafkaController:66) [2019-03-17 
> 01:21:36,509] INFO [ZooKeeperClient] Closing. 
> (kafka.zookeeper.ZooKeeperClient:66) [2019-03-17 01:21:36,510] INFO Processed 
> session termination for sessionid: 0x102ebdac1b40002 
> (org.apache.zookeeper.server.PrepRequestProcessor:487) [2019-03-17 
> 01:21:36,510] INFO Session: 0x102ebdac1b40002 closed 
> (org.apache.zookeeper.ZooKeeper:693) [2019-03-17 01:21:36,510] INFO Closed 
> socket connection for client /127.0.0.1:59518 which had sessionid 
> 0x102ebdac1b40002 (org.apache.zookeeper.server.NIOServerCnxn:1056) 
> [2019-03-17 01:21:36,510] INFO EventThread shut down for session: 
> 0x102ebdac1b40002 (org.apache.zookeeper.ClientCnxn:522) [2019-03-17 
> 01:21:36,510] INFO [ZooKeeperClient] Closed. 
> (kafka.zookeeper.ZooKeeperClient:66) [2019-03-17 01:21:36,512] INFO 
> [ThrottledChannelReaper-Fetch]: Shutting down 
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66) [2019-03-17 
> 01:21:37,180] WARN [AdminClient clientId=adminclient-263] Connection to node 
> 0 (localhost/127.0.0.1:34298) could not be established. Broker may not be 
> available. (org.apache.kafka.clients.NetworkClient:725) [2019-03-17 
> 01:21:37,249] INFO [ThrottledChannelReaper-Fetch]: Stopped 
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66) [2019-03-17 
> 01:21:37,249] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed 
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66) [2019-03-17 
> 01:21:37,250] INFO [ThrottledChannelReaper-Produce]: Shutting down 
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66) [2019-03-17 
> 01:21:38,183] WARN [AdminClient clientId=adminclient-263] Connection to node 
> 0 (localhost/127.0.0.1:34298) could not be established. Broker may not be 
> available. (org.apache.kafka.clients.NetworkClient:725) [2019-03-17 
> 01:21:38,249] INFO [ThrottledChannelReaper-Produce]: Stopped 
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66) [2019-03-17 
> 01:21:38,249] INFO [ThrottledChannelReaper-Produce]: Shutdown completed 
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66) [2019-03-17 
> 01:21:38,250] INFO [ThrottledChannelReaper-Request]: Shutting down 
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66) [2019-03-17 
> 01:21:39,186] WARN [AdminClient clientId=adminclient-263] Connection to node 
> 0 (localhost/127.0.0.1:34298) could not be established. Broker may not be 
> available. (org.apache.kafka.clients.NetworkClient:725) [2019-03-17 
> 01:21:39,249] INFO [ThrottledChannelReaper-Request]: Stopped 
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66) [2019-03-17 
> 01:21:39,249] INFO [ThrottledChannelReaper-Request]: Shutdown completed 
> (kafka.server.ClientQuotaManager$ThrottledChannelReaper:66) [2019-03-17 
> 01:21:39,250] INFO [SocketServer brokerId=2] Shutting down socket server 
> (kafka.network.SocketServer:66) [2019-03-17 01:21:39,274] INFO [SocketServer 
> brokerId=2] Shutdown completed (kafka.network.SocketServer:66) [2019-03-17 
> 01:21:39,277] INFO [KafkaServer id=2] shut down completed 
> (kafka.server.KafkaServer:66) [2019-03-17 01:21:39,293] INFO shutting down 
> (org.apache.zookeeper.server.ZooKeeperServer:502) [2019-03-17 01:21:39,294] 
> INFO Shutting down (org.apache.zookeeper.server.SessionTrackerImpl:226) 
> [2019-03-17 01:21:39,294] INFO Shutting down 
> (org.apache.zookeeper.server.PrepRequestProcessor:769) [2019-03-17 
> 01:21:39,294] INFO Shutting down 
> (org.apache.zookeeper.server.SyncRequestProcessor:208) [2019-03-17 
> 01:21:39,294] INFO PrepRequestProcessor exited loop! 
> (org.apache.zookeeper.server.PrepRequestProcessor:144) [2019-03-17 
> 01:21:39,295] INFO SyncRequestProcessor exited! 
> (org.apache.zookeeper.server.SyncRequestProcessor:186) [2019-03-17 
> 01:21:39,296] INFO shutdown of request processor complete 
> (org.apache.zookeeper.server.FinalRequestProcessor:403) [2019-03-17 
> 01:21:39,308] INFO NIOServerCnxn factory exited run method 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:242){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
