showuon commented on code in PR #12347:
URL: https://github.com/apache/kafka/pull/12347#discussion_r921159244


##########
core/src/test/scala/unit/kafka/log/LogManagerTest.scala:
##########
@@ -638,6 +641,221 @@ class LogManagerTest {
     assertTrue(logManager.partitionsInitializing.isEmpty)
   }
 
+  private def appendRecordsToLog(time: MockTime, parentLogDir: File, partitionId: Int, brokerTopicStats: BrokerTopicStats, expectedSegmentsPerLog: Int): Unit = {
+    def createRecords = TestUtils.singletonRecords(value = "test".getBytes, timestamp = time.milliseconds)
+    val tpFile = new File(parentLogDir, s"$name-$partitionId")
+
+    val log = LogTestUtils.createLog(tpFile, logConfig, brokerTopicStats, time.scheduler, time, 0, 0,
+      5 * 60 * 1000, 60 * 60 * 1000, LogManager.ProducerIdExpirationCheckIntervalMs)
+
+    val numMessages = 20
+    try {
+      for (_ <- 0 until numMessages) {
+        log.appendAsLeader(createRecords, leaderEpoch = 0)
+      }
+
+      assertEquals(expectedSegmentsPerLog, log.numberOfSegments)

Review Comment:
   > how do we enforce the expected number of segments?
   
   We can be sure of the number of segments because we set `segment.bytes=1024` and each dummy record is 72 bytes, so we can predict how many segments will be created. I've updated the test and added comments.
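   To make the arithmetic explicit, here is a small sketch of how the expected segment count falls out of those two numbers. This is not code from the PR; `expectedSegments` is a hypothetical helper, and it assumes a segment rolls once the next record would push it past `segment.bytes`:
   
   ```scala
   object SegmentCountSketch {
     // Assumed sizes from the comment above: 72 bytes per dummy record,
     // segment.bytes = 1024.
     def expectedSegments(numRecords: Int, recordSize: Int, segmentBytes: Int): Int = {
       // How many whole records fit before the segment would exceed segment.bytes.
       val recordsPerSegment = segmentBytes / recordSize
       // Remaining records spill into additional segments.
       math.ceil(numRecords.toDouble / recordsPerSegment).toInt
     }
   
     def main(args: Array[String]): Unit = {
       // 1024 / 72 = 14 records per segment, so 20 records span 2 segments.
       println(expectedSegments(numRecords = 20, recordSize = 72, segmentBytes = 1024))
     }
   }
   ```
   
   Under these assumptions the test can assert a fixed `expectedSegmentsPerLog` without ever calling `log.roll()` explicitly.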
   
   > should we explicitly call log.roll()?
   
   No, I don't think we need `log.roll()` here, because we only need log segments filled with records for recovery. Besides, we don't want to update the recovery checkpoint, which would affect the remaining-segments metric results.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
