[GitHub] [hudi] guanziyue commented on a change in pull request #3912: [HUDI-2665] Fix overflow of huge log file in HoodieLogFormatWriter

2021-11-05 Thread GitBox


guanziyue commented on a change in pull request #3912:
URL: https://github.com/apache/hudi/pull/3912#discussion_r742695171



##
File path: hudi-common/src/test/java/org/apache/hudi/common/functional/TestHoodieLogFormat.java
##
@@ -385,6 +385,47 @@ public void testBasicWriteAndScan() throws IOException, URISyntaxException, Inte
     reader.close();
   }
 
+  @Test
+  public void testHugeLogFileWrite() throws IOException, URISyntaxException, InterruptedException {
+    Writer writer =
+        HoodieLogFormat.newWriterBuilder().onParentPath(partitionPath).withFileExtension(HoodieLogFile.DELTA_EXTENSION)
+            .withFileId("test-fileid1").overBaseCommit("100").withFs(fs).build();
+    Schema schema = getSimpleSchema();
+    List<IndexedRecord> records = SchemaTestUtil.generateTestRecords(0, 1000);
+    List<IndexedRecord> copyOfRecords = records.stream()
+        .map(record -> HoodieAvroUtils.rewriteRecord((GenericRecord) record, schema)).collect(Collectors.toList());
+    Map<HoodieLogBlock.HeaderMetadataType, String> header = new HashMap<>();
+    header.put(HoodieLogBlock.HeaderMetadataType.INSTANT_TIME, "100");
+    header.put(HoodieLogBlock.HeaderMetadataType.SCHEMA, getSimpleSchema().toString());
+    HoodieDataBlock dataBlock = getDataBlock(records, header);
+    long sizeOfOneBlock = dataBlock.getContent().get().length;
+    long writtenSize = 0;
+    int logBlockWrittenNum = 0;
+    while (writtenSize < Integer.MAX_VALUE) {
+      writer.appendBlock(dataBlock);
+      writtenSize += sizeOfOneBlock;
+      logBlockWrittenNum++;
+    }
+    writer.close();
+
+    Reader reader = HoodieLogFormat.newReader(fs, writer.getLogFile(), SchemaTestUtil.getSimpleSchema(), true, true);
+    assertTrue(reader.hasNext(), "We wrote a block, we should be able to read it");
+    HoodieLogBlock nextBlock = reader.next();
+    assertEquals(dataBlockType, nextBlock.getBlockType(), "The next block should be a data block");
+    HoodieDataBlock dataBlockRead = (HoodieDataBlock) nextBlock;
+    assertEquals(copyOfRecords.size(), dataBlockRead.getRecords().size(),
+        "Read records size should be equal to the written records size");
+    assertEquals(copyOfRecords, dataBlockRead.getRecords(),
+        "Both records lists should be the same. (ordering guaranteed)");
+    int logBlockReadNum = 1;
+    while (reader.hasNext()) {
+      reader.next();
+      logBlockReadNum++;
+    }
+    assertEquals(logBlockWrittenNum, logBlockReadNum, "All written log should be correctly found");

Review comment:
   Finished.
   
   > can we also test the overflow scenario (failure case). that's the actual fix right.
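
The failure case in question is the classic untrustworthy int: a byte position or accumulated size tracked in an int stops being reliable past Integer.MAX_VALUE, about 2 GiB (java.io.DataOutputStream#size(), for instance, caps at that value, while plain int arithmetic wraps negative). A minimal standalone sketch of the wrap (illustration only, not code from the PR; all names here are made up):

    // Illustrates the overflow class HUDI-2665 guards against: position
    // bookkeeping in an int wraps negative past ~2 GiB, while the same
    // arithmetic in a long stays correct.
    public class PositionOverflowSketch {
      public static void main(String[] args) {
        long blockSize = 256L * 1024 * 1024; // pretend each appended block is 256 MiB
        int intPos = 0;                      // buggy bookkeeping: int position
        long longPos = 0L;                   // fixed bookkeeping: long position
        for (int i = 0; i < 9; i++) {        // 9 * 256 MiB > 2 GiB
          intPos += blockSize;               // compound assignment narrows and wraps
          longPos += blockSize;
        }
        System.out.println("int position:  " + intPos);  // -1879048192
        System.out.println("long position: " + longPos); // 2415919104
      }
    }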
   
   

[GitHub] [hudi] guanziyue commented on a change in pull request #3912: [HUDI-2665] Fix overflow of huge log file in HoodieLogFormatWriter

2021-11-04 Thread GitBox


guanziyue commented on a change in pull request #3912:
URL: https://github.com/apache/hudi/pull/3912#discussion_r743348733



##
File path: hudi-common/src/test/java/org/apache/hudi/common/functional/TestHoodieLogFormat.java
##
@@ -385,6 +385,47 @@ public void testBasicWriteAndScan() throws IOException, URISyntaxException, Inte
     reader.close();
   }
 
+  @Test
+  public void testHugeLogFileWrite() throws IOException, URISyntaxException, InterruptedException {
+    Writer writer =
+        HoodieLogFormat.newWriterBuilder().onParentPath(partitionPath).withFileExtension(HoodieLogFile.DELTA_EXTENSION)
+            .withFileId("test-fileid1").overBaseCommit("100").withFs(fs).build();
+    Schema schema = getSimpleSchema();
+    List<IndexedRecord> records = SchemaTestUtil.generateTestRecords(0, 1000);
+    List<IndexedRecord> copyOfRecords = records.stream()
+        .map(record -> HoodieAvroUtils.rewriteRecord((GenericRecord) record, schema)).collect(Collectors.toList());
+    Map<HoodieLogBlock.HeaderMetadataType, String> header = new HashMap<>();
+    header.put(HoodieLogBlock.HeaderMetadataType.INSTANT_TIME, "100");
+    header.put(HoodieLogBlock.HeaderMetadataType.SCHEMA, getSimpleSchema().toString());
+    HoodieDataBlock dataBlock = getDataBlock(records, header);
+    long sizeOfOneBlock = dataBlock.getContent().get().length;
+    long writtenSize = 0;
+    int logBlockWrittenNum = 0;
+    while (writtenSize < Integer.MAX_VALUE) {
+      writer.appendBlock(dataBlock);
+      writtenSize += sizeOfOneBlock;
+      logBlockWrittenNum++;
+    }
+    writer.close();
+
+    Reader reader = HoodieLogFormat.newReader(fs, writer.getLogFile(), SchemaTestUtil.getSimpleSchema(), true, true);
+    assertTrue(reader.hasNext(), "We wrote a block, we should be able to read it");
+    HoodieLogBlock nextBlock = reader.next();
+    assertEquals(dataBlockType, nextBlock.getBlockType(), "The next block should be a data block");
+    HoodieDataBlock dataBlockRead = (HoodieDataBlock) nextBlock;
+    assertEquals(copyOfRecords.size(), dataBlockRead.getRecords().size(),
+        "Read records size should be equal to the written records size");
+    assertEquals(copyOfRecords, dataBlockRead.getRecords(),
+        "Both records lists should be the same. (ordering guaranteed)");
+    int logBlockReadNum = 1;
+    while (reader.hasNext()) {
+      reader.next();
+      logBlockReadNum++;
+    }
+    assertEquals(logBlockWrittenNum, logBlockReadNum, "All written log should be correctly found");

Review comment:
   > can we also test the overflow scenario (failure case). that's the actual fix right.
   
   Finished. 
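
One way to make the intent of this test explicit would be to also assert that the file on disk really crossed the int boundary, so a future change to the record count cannot quietly shrink it below 2 GiB. A sketch of such an extra assertion (not part of the PR; it relies on the standard Hadoop FileSystem#getFileStatus API and the existing test fields):

    // Hypothetical extra assertion: the written log must actually exceed
    // Integer.MAX_VALUE bytes, otherwise the overflow path was never hit.
    long fileLen = fs.getFileStatus(writer.getLogFile().getPath()).getLen();
    assertTrue(fileLen > Integer.MAX_VALUE,
        "Log file should be larger than Integer.MAX_VALUE bytes");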




[GitHub] [hudi] guanziyue commented on a change in pull request #3912: [HUDI-2665] Fix overflow of huge log file in HoodieLogFormatWriter

2021-11-03 Thread GitBox


guanziyue commented on a change in pull request #3912:
URL: https://github.com/apache/hudi/pull/3912#discussion_r741670205



##
File path: hudi-common/src/main/java/org/apache/hudi/common/table/log/HoodieLogFormatWriter.java
##
@@ -148,10 +148,11 @@ public AppendResult appendBlocks(List<HoodieLogBlock> blocks) throws IOException
     HoodieLogFormat.LogFormatVersion currentLogFormatVersion =
         new HoodieLogFormatVersion(HoodieLogFormat.CURRENT_VERSION);
 
-    FSDataOutputStream outputStream = getOutputStream();
-    long startPos = outputStream.getPos();
+    FSDataOutputStream originalOutputStream = getOutputStream();

Review comment:
   Ummm. Yes, we could have a test that writes a huge log block as a check, but it may hurt UT performance a lot. I'm not sure a UT is necessary for such a rework of existing logic. Anyway, I'm glad to add one if it is compulsory.
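
If UT runtime is the blocker, the arithmetic itself can also be pinned down without materializing ~2 GiB on disk. A hedged sketch (hypothetical test, not from this PR; it exercises plain long-vs-int position math rather than HoodieLogFormatWriter internals):

    // Hypothetical micro-test of the position arithmetic: long arithmetic
    // carries past Integer.MAX_VALUE, and Math.toIntExact turns any lossy
    // narrowing into an explicit ArithmeticException instead of a silent wrap.
    @Test
    public void testLogPositionArithmeticOverflow() {
      long startPos = Integer.MAX_VALUE - 512L; // just below the int limit
      long blockBytes = 1024L;
      long endPos = startPos + blockBytes;      // safe in long arithmetic
      assertTrue(endPos > Integer.MAX_VALUE);
      assertThrows(ArithmeticException.class, () -> Math.toIntExact(endPos));
    }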

##
File path: hudi-common/src/main/java/org/apache/hudi/common/table/log/HoodieLogFormatWriter.java
##
@@ -148,10 +148,11 @@ public AppendResult appendBlocks(List<HoodieLogBlock> blocks) throws IOException
     HoodieLogFormat.LogFormatVersion currentLogFormatVersion =
         new HoodieLogFormatVersion(HoodieLogFormat.CURRENT_VERSION);
 
-    FSDataOutputStream outputStream = getOutputStream();
-    long startPos = outputStream.getPos();
+    FSDataOutputStream originalOutputStream = getOutputStream();

Review comment:
   Added.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



