guanziyue commented on a change in pull request #3912:
URL: https://github.com/apache/hudi/pull/3912#discussion_r743348733



##########
File path: hudi-common/src/test/java/org/apache/hudi/common/functional/TestHoodieLogFormat.java
##########
@@ -385,6 +385,47 @@ public void testBasicWriteAndScan() throws IOException, URISyntaxException, Inte
     reader.close();
   }
 
+  @Test
+  public void testHugeLogFileWrite() throws IOException, URISyntaxException, InterruptedException {
+    Writer writer =
+        HoodieLogFormat.newWriterBuilder().onParentPath(partitionPath).withFileExtension(HoodieLogFile.DELTA_EXTENSION)
+            .withFileId("test-fileid1").overBaseCommit("100").withFs(fs).build();
+    Schema schema = getSimpleSchema();
+    List<IndexedRecord> records = SchemaTestUtil.generateTestRecords(0, 1000);
+    List<IndexedRecord> copyOfRecords = records.stream()
+        .map(record -> HoodieAvroUtils.rewriteRecord((GenericRecord) record, schema)).collect(Collectors.toList());
+    Map<HoodieLogBlock.HeaderMetadataType, String> header = new HashMap<>();
+    header.put(HoodieLogBlock.HeaderMetadataType.INSTANT_TIME, "100");
+    header.put(HoodieLogBlock.HeaderMetadataType.SCHEMA, getSimpleSchema().toString());
+    HoodieDataBlock dataBlock = getDataBlock(records, header);
+    long sizeOfOneBlock = dataBlock.getContent().get().length;
+    long writtenSize = 0;
+    int logBlockWrittenNum = 0;
+    // Append the same block repeatedly until the total written size exceeds Integer.MAX_VALUE (~2 GB).
+    while (writtenSize < Integer.MAX_VALUE) {
+      writer.appendBlock(dataBlock);
+      writtenSize += sizeOfOneBlock;
+      logBlockWrittenNum++;
+    }
+    writer.close();
+
+    Reader reader = HoodieLogFormat.newReader(fs, writer.getLogFile(), SchemaTestUtil.getSimpleSchema(), true, true);
+    assertTrue(reader.hasNext(), "We wrote a block, we should be able to read it");
+    HoodieLogBlock nextBlock = reader.next();
+    assertEquals(dataBlockType, nextBlock.getBlockType(), "The next block should be a data block");
+    HoodieDataBlock dataBlockRead = (HoodieDataBlock) nextBlock;
+    assertEquals(copyOfRecords.size(), dataBlockRead.getRecords().size(),
+        "Read records size should be equal to the written records size");
+    assertEquals(copyOfRecords, dataBlockRead.getRecords(),
+        "Both records lists should be the same. (ordering guaranteed)");
+    int logBlockReadNum = 1;
+    while (reader.hasNext()) {
+      reader.next();
+      logBlockReadNum++;
+    }
+    assertEquals(logBlockWrittenNum, logBlockReadNum, "All written log blocks should be correctly found");

Review comment:
       > can we also test the overflow scenario (failure case)? That's the actual fix, right?
   
   Finished. 
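   
   For anyone reading this thread later: the failure mode guarded against here is the classic one where a size or offset accumulated in an int wraps negative once the log file grows past Integer.MAX_VALUE (~2 GB). Below is a minimal, self-contained sketch of that pitfall in plain Java; it is not code from this PR, and the class and variable names are made up for the illustration:
   
       public class IntOffsetOverflowDemo {
         public static void main(String[] args) {
           long blockSize = 100L * 1024 * 1024; // pretend each appended block is ~100 MB
   
           int intOffset = 0;   // buggy: file offset tracked as an int
           long longOffset = 0; // fixed: file offset tracked as a long
           for (int i = 0; i < 25; i++) {  // 25 x 100 MB ~ 2.4 GB, past Integer.MAX_VALUE
             intOffset += blockSize;       // compound assignment narrows to int; wraps negative past 2 GB
             longOffset += blockSize;
           }
   
           System.out.println("int offset:  " + intOffset);   // prints a negative value: overflowed
           System.out.println("long offset: " + longOffset);  // prints 2621440000: correct
         }
       }
   
   The test above provokes the same condition end-to-end: it appends blocks until the file itself crosses Integer.MAX_VALUE, then verifies that every written block can still be read back.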




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

