[ https://issues.apache.org/jira/browse/KAFKA-2271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593646#comment-14593646 ]

Jason Gustafson commented on KAFKA-2271:
----------------------------------------

This patch may or may not fix the problem, but the only apparent difference in 
the output above is the extra space after the random host.name field. Since the 
randomness of that field (and the advertised host name) didn't seem significant 
to the test, I just used a static string instead.

I also noticed a couple of other minor problems that could cause the test to fail.
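The failure mode described above can be sketched in a few lines. This is not the actual test code (the real test lives in KafkaConfigConfigDefTest.scala and round-trips a full broker config); it is a minimal, hypothetical illustration of why a randomly generated host.name value that happens to carry a trailing space makes two otherwise-identical Properties objects unequal, and why a static string sidesteps the problem:

```java
import java.util.Properties;

// Hypothetical sketch (not the patch itself): an equals() comparison of
// Properties fails when one host.name value carries a stray trailing space.
public class HostNameRoundTrip {
    public static void main(String[] args) {
        Properties expected = new Properties();
        Properties actual = new Properties();

        // A random value that happens to end in whitespace...
        expected.setProperty("host.name", "somehost ");
        // ...versus the value the round-trip produced.
        actual.setProperty("host.name", "somehost");
        System.out.println(expected.equals(actual)); // false: trailing space differs

        // With a static string on both sides, the randomness (and the
        // whitespace it can produce) is gone and the comparison is stable.
        expected.setProperty("host.name", "somehost");
        System.out.println(expected.equals(actual)); // true
    }
}
```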

> transient unit test failure in KafkaConfigConfigDefTest.testFromPropsToProps
> ----------------------------------------------------------------------------
>
>                 Key: KAFKA-2271
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2271
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: core
>            Reporter: Jun Rao
>            Assignee: Jason Gustafson
>         Attachments: KAFKA-2271.patch
>
>
> Saw the following transient failure in jenkins.
>     java.lang.AssertionError: expected:<{num.io.threads=2051678117, 
> log.dir=/tmp/log, num.network.threads=442579598, 
> offsets.topic.num.partitions=1996793767, log.cleaner.enable=true, 
> inter.broker.protocol.version=0.8.3.X, host.name=????????? , 
> log.cleaner.backoff.ms=2080497098, log.segment.delete.delay.ms=516834257, 
> controller.socket.timeout.ms=444411414, queued.max.requests=673019914, 
> controlled.shutdown.max.retries=1810738435, num.replica.fetchers=1160759331, 
> socket.request.max.bytes=1453815395, log.flush.interval.ms=762170329, 
> offsets.topic.replication.factor=1011, 
> log.flush.offset.checkpoint.interval.ms=923125288, 
> security.inter.broker.protocol=PLAINTEXT, 
> zookeeper.session.timeout.ms=413974606, metrics.sample.window.ms=1000, 
> offsets.topic.compression.codec=1, 
> zookeeper.connection.timeout.ms=2068179601, 
> fetch.purgatory.purge.interval.requests=1242197204, 
> log.retention.bytes=692466534, log.dirs=/tmp/logs,/tmp/logs2, 
> replica.fetch.min.bytes=1791426389, compression.type=lz4, 
> log.roll.jitter.ms=356707666, log.cleaner.threads=2, 
> replica.lag.time.max.ms=1073834162, advertised.port=4321, 
> max.connections.per.ip.overrides=127.0.0.1:2, 127.0.0.2:3, 
> socket.send.buffer.bytes=1319605180, metrics.num.samples=2, port=1234, 
> replica.fetch.wait.max.ms=321, log.segment.bytes=468671022, 
> log.retention.minutes=772707425, auto.create.topics.enable=true, 
> replica.socket.receive.buffer.bytes=1923367476, 
> log.cleaner.io.max.bytes.per.second=0.2, zookeeper.sync.time.ms=2072589946, 
> log.roll.jitter.hours=2106718330, log.retention.check.interval.ms=906922522, 
> reserved.broker.max.id=100, unclean.leader.election.enable=true, 
> advertised.listeners=PLAINTEXT://:2909, 
> log.cleaner.io.buffer.load.factor=1.0, 
> consumer.min.session.timeout.ms=422104288, log.retention.ms=1496447411, 
> replica.high.watermark.checkpoint.interval.ms=118464842, 
> log.cleanup.policy=delete, log.cleaner.dedupe.buffer.size=3145729, 
> offsets.commit.timeout.ms=2084609508, min.insync.replicas=963487957, 
> zookeeper.connect=127.0.0.1:2181, 
> leader.imbalance.per.broker.percentage=148038876, 
> log.index.interval.bytes=242075900, 
> leader.imbalance.check.interval.seconds=1376263302, 
> offsets.retention.minutes=1781435041, socket.receive.buffer.bytes=369224522, 
> log.cleaner.delete.retention.ms=898157008, 
> replica.socket.timeout.ms=493318414, num.partitions=2, 
> offsets.topic.segment.bytes=852590082, default.replication.factor=549663639, 
> log.cleaner.io.buffer.size=905972186, offsets.commit.required.acks=-1, 
> num.recovery.threads.per.data.dir=1012415473, log.retention.hours=1115262747, 
> replica.fetch.max.bytes=2041540755, log.roll.hours=115708840, 
> metric.reporters=, message.max.bytes=1234, 
> log.cleaner.min.cleanable.ratio=0.6, offsets.load.buffer.size=1818565888, 
> delete.topic.enable=true, listeners=PLAINTEXT://:9092, 
> offset.metadata.max.bytes=1563320007, 
> controlled.shutdown.retry.backoff.ms=1270013702, 
> max.connections.per.ip=359602609, consumer.max.session.timeout.ms=2124317921, 
> log.roll.ms=241126032, advertised.host.name=??????????, 
> log.flush.scheduler.interval.ms=1548906710, 
> auto.leader.rebalance.enable=false, 
> producer.purgatory.purge.interval.requests=1640729755, 
> controlled.shutdown.enable=false, log.index.size.max.bytes=1748380064, 
> log.flush.interval.messages=982245822, broker.id=15, 
> offsets.retention.check.interval.ms=593078788, 
> replica.fetch.backoff.ms=394858256, background.threads=124969300, 
> connections.max.idle.ms=554679959}> but was:<{num.io.threads=2051678117, 
> log.dir=/tmp/log, num.network.threads=442579598, 
> offsets.topic.num.partitions=1996793767, 
> inter.broker.protocol.version=0.8.3.X, log.cleaner.enable=true, 
> host.name=?????????, log.cleaner.backoff.ms=2080497098, 
> log.segment.delete.delay.ms=516834257, 
> controller.socket.timeout.ms=444411414, 
> controlled.shutdown.max.retries=1810738435, queued.max.requests=673019914, 
> num.replica.fetchers=1160759331, socket.request.max.bytes=1453815395, 
> log.flush.interval.ms=762170329, offsets.topic.replication.factor=1011, 
> log.flush.offset.checkpoint.interval.ms=923125288, 
> security.inter.broker.protocol=PLAINTEXT, 
> zookeeper.session.timeout.ms=413974606, metrics.sample.window.ms=1000, 
> offsets.topic.compression.codec=1, 
> zookeeper.connection.timeout.ms=2068179601, 
> fetch.purgatory.purge.interval.requests=1242197204, 
> log.retention.bytes=692466534, log.dirs=/tmp/logs,/tmp/logs2, 
> compression.type=lz4, replica.fetch.min.bytes=1791426389, 
> log.roll.jitter.ms=356707666, log.cleaner.threads=2, 
> replica.lag.time.max.ms=1073834162, advertised.port=4321, 
> max.connections.per.ip.overrides=127.0.0.1:2, 127.0.0.2:3, 
> socket.send.buffer.bytes=1319605180, metrics.num.samples=2, port=1234, 
> replica.fetch.wait.max.ms=321, log.segment.bytes=468671022, 
> log.retention.minutes=772707425, auto.create.topics.enable=true, 
> replica.socket.receive.buffer.bytes=1923367476, 
> log.cleaner.io.max.bytes.per.second=0.2, zookeeper.sync.time.ms=2072589946, 
> log.roll.jitter.hours=2106718330, log.retention.check.interval.ms=906922522, 
> reserved.broker.max.id=100, unclean.leader.election.enable=true, 
> advertised.listeners=PLAINTEXT://:2909, 
> log.cleaner.io.buffer.load.factor=1.0, 
> consumer.min.session.timeout.ms=422104288, log.retention.ms=1496447411, 
> replica.high.watermark.checkpoint.interval.ms=118464842, 
> log.cleanup.policy=delete, log.cleaner.dedupe.buffer.size=3145729, 
> offsets.commit.timeout.ms=2084609508, min.insync.replicas=963487957, 
> leader.imbalance.per.broker.percentage=148038876, 
> zookeeper.connect=127.0.0.1:2181, offsets.retention.minutes=1781435041, 
> leader.imbalance.check.interval.seconds=1376263302, 
> log.index.interval.bytes=242075900, socket.receive.buffer.bytes=369224522, 
> log.cleaner.delete.retention.ms=898157008, 
> replica.socket.timeout.ms=493318414, num.partitions=2, 
> offsets.topic.segment.bytes=852590082, default.replication.factor=549663639, 
> offsets.commit.required.acks=-1, log.cleaner.io.buffer.size=905972186, 
> num.recovery.threads.per.data.dir=1012415473, log.retention.hours=1115262747, 
> replica.fetch.max.bytes=2041540755, log.roll.hours=115708840, 
> metric.reporters=, message.max.bytes=1234, 
> offsets.load.buffer.size=1818565888, log.cleaner.min.cleanable.ratio=0.6, 
> delete.topic.enable=true, listeners=PLAINTEXT://:9092, 
> offset.metadata.max.bytes=1563320007, 
> controlled.shutdown.retry.backoff.ms=1270013702, 
> max.connections.per.ip=359602609, consumer.max.session.timeout.ms=2124317921, 
> log.roll.ms=241126032, advertised.host.name=??????????, 
> log.flush.scheduler.interval.ms=1548906710, 
> auto.leader.rebalance.enable=false, 
> producer.purgatory.purge.interval.requests=1640729755, 
> controlled.shutdown.enable=false, log.index.size.max.bytes=1748380064, 
> log.flush.interval.messages=982245822, broker.id=15, 
> offsets.retention.check.interval.ms=593078788, 
> replica.fetch.backoff.ms=394858256, background.threads=124969300, 
> connections.max.idle.ms=554679959}>
>         at org.junit.Assert.fail(Assert.java:92)
>         at org.junit.Assert.failNotEquals(Assert.java:689)
>         at org.junit.Assert.assertEquals(Assert.java:127)
>         at org.junit.Assert.assertEquals(Assert.java:146)
>         at 
> unit.kafka.server.KafkaConfigConfigDefTest.testFromPropsToProps(KafkaConfigConfigDefTest.scala:257)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)