ableegoldman commented on a change in pull request #10554:
URL: https://github.com/apache/kafka/pull/10554#discussion_r620794381
##########
File path: streams/src/test/java/org/apache/kafka/streams/integration/AdjustStreamThreadCountTest.java
##########
@@ -121,6 +125,21 @@ public void setup() {
);
}
+
+    private void publishDummyDataToTopic(final String inputTopic, final EmbeddedKafkaCluster cluster) {
+        final Properties props = new Properties();
+        props.put("acks", "all");
+        props.put("retries", 1);
+        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, cluster.bootstrapServers());
+        props.put(ProducerConfig.CLIENT_ID_CONFIG, "test-client");
+        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+        final KafkaProducer<String, String> dummyProducer = new KafkaProducer<>(props);
+        dummyProducer.send(new ProducerRecord<String, String>(inputTopic, Integer.toString(4), Integer.toString(4)));
Review comment:
It might be a good idea to send a slightly larger batch of data; for example, I think in other integration tests we sent something like 10,000 records. We don't necessarily need that many here, but Streams should be fast enough that we may as well do something like 1,000 - 5,000.
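
For concreteness, here is a rough sketch of what the helper could look like when sending a batch; the extra `numRecords` parameter, the try-with-resources, and the final `flush()` are just illustrative choices, not something this PR has to adopt:

```java
private void publishDummyDataToTopic(final String inputTopic,
                                     final EmbeddedKafkaCluster cluster,
                                     final int numRecords) {
    final Properties props = new Properties();
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, cluster.bootstrapServers());
    props.put(ProducerConfig.CLIENT_ID_CONFIG, "test-client");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringSerializer");
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringSerializer");

    // try-with-resources so the producer is always closed and the test doesn't leak clients
    try (final KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
        for (int i = 0; i < numRecords; i++) {
            producer.send(new ProducerRecord<>(inputTopic, Integer.toString(i), Integer.toString(i)));
        }
        // make sure everything is actually on the broker before the test proceeds
        producer.flush();
    }
}
```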
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]