zhangjun0x01 commented on a change in pull request #1704:
URL: https://github.com/apache/iceberg/pull/1704#discussion_r523890317



##########
File path: 
flink/src/test/java/org/apache/iceberg/flink/actions/TestRewriteDataFilesAction.java
##########
@@ -280,4 +280,91 @@ public void testRewriteLargeTableHasResiduals() throws IOException {
     // Assert the table records as expected.
     SimpleDataUtil.assertTableRecords(icebergTableUnPartitioned, expected);
   }
+
+  /**
+   * A test case to verify that we avoid compressing files repeatedly.
+   * <p>
+   * If a data file cannot be combined into a CombinedScanTask with other data files, the size of the
+   * CombinedScanTask's file list is 1, so we remove these CombinedScanTasks to avoid compressing the file repeatedly.
+   * <p>
+   * In this test case, we generate 3 data files and set targetSizeInBytes greater than the largest file size, so that
+   * the largest file cannot be combined into a CombinedScanTask with the other data files. The data file with the
+   * largest file size will therefore not be compressed.
+   * <p>
+   * For the same data, different file formats produce different file sizes. The file sizes of each format generated
+   * by the data in this test case are as follows:
+   * <p>
+   *   avro :
+   *  size  file
+   *  408 00000-0-5a218337-1742-4ed1-83d8-55e301da49b8-00001.avro
+   * 2390 00000-0-8f431924-ec8d-4957-a238-b8fe2b136210-00001.avro
+   *  408 00000-0-9c75bcc4-49f0-4722-9528-c1d5faa50fa7-00001.avro
+   *
+   * orc :
+   * size  file
+   * 1626 00000-0-260d42d1-f00f-4c5f-9628-5f41f6395093-00001.orc
+   *  331 00000-0-942fd38b-d7af-4ad2-a985-0e6ccdb4d8d3-00001.orc
+   *  333 00000-0-ad8f2c34-6cf7-43fe-990f-f8f6389d198e-00001.orc
+   *
+   * parquet :
+   * size  file
+   *  611 00000-0-84e1fd63-a840-4a23-983f-5247e9218cbe-00001.parquet
+   *  611 00000-0-91b070f0-7d17-487c-97ec-de0f0b09aa31-00001.parquet
+   * 2691 00000-0-e09c969d-d6ee-4a41-9e42-9dcbf42bc4e1-00001.parquet
+   *
+   * @throws IOException IOException
+   */
+  @Test
+  public void testRewriteAvoidRepeateCompress() throws IOException {
+    List<String> records = Lists.newArrayList();
+    List<Record> expected = Lists.newArrayList();
+    for (int i = 0; i < 500; i++) {
+      String data = String.valueOf(i) + "hello iceberg,hello flink";
+      records.add("(" + i + ",'" + data + "')");

Review comment:
       > 2. create a larger file by using FileAppender ( write few records 
until the file length exceed the given target file size).
   
   
   I took a look, and I think it may not be easy to implement. We would need to poll the file length through the `length` method until the file reaches the target size, and then close the appender.
   
   But for ORC, we cannot get the length of an open, appending file; we can only get the file length once the file is closed, which is exactly the opposite of what we need.
   
     The `org.apache.iceberg.orc.OrcFileAppender#length` method:
   
   ```java
     @Override
     public long length() {
       Preconditions.checkState(isClosed,
           "Cannot return length while appending to an open file.");
       return file.toInputFile().getLength();
     }
   ```
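   To make the grow-until-target idea concrete, here is a self-contained toy sketch. It does not use Iceberg's real classes: `ToyAppender` and `InMemoryAppender` are hypothetical stand-ins that mimic the `add`/`length`/`close` shape of Iceberg's `FileAppender`. The polling loop works for a Parquet- or Avro-like appender that can report its length mid-write; an ORC-like appender, which only knows its length after close, cannot support this pattern.
   
   ```java
   import java.util.ArrayList;
   import java.util.List;
   
   // Hypothetical stand-in for Iceberg's FileAppender (not the real API).
   interface ToyAppender {
     void add(String record);
     long length();  // an ORC-like appender would throw here while open
     void close();
   }
   
   // In-memory "Parquet/Avro-like" appender: length is available mid-write.
   class InMemoryAppender implements ToyAppender {
     private final List<String> rows = new ArrayList<>();
     private long bytes = 0;
   
     @Override public void add(String record) {
       rows.add(record);
       bytes += record.getBytes().length;
     }
   
     @Override public long length() {
       return bytes;
     }
   
     @Override public void close() { }
   }
   
   public class GrowToTarget {
     // Append records until the reported length reaches targetSizeInBytes,
     // then close the appender and return the final size.
     static long growTo(ToyAppender appender, long targetSizeInBytes) {
       long written;
       int i = 0;
       do {
         appender.add(i++ + "hello iceberg,hello flink");
         written = appender.length();
       } while (written < targetSizeInBytes);
       appender.close();
       return written;
     }
   
     public static void main(String[] args) {
       long size = growTo(new InMemoryAppender(), 1000);
       System.out.println(size >= 1000);  // prints "true"
     }
   }
   ```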
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


