THUMarkLau commented on code in PR #12476:
URL: https://github.com/apache/iotdb/pull/12476#discussion_r1632188800


##########
iotdb-core/datanode/src/main/java/org/apache/iotdb/db/storageengine/dataregion/wal/buffer/WALBuffer.java:
##########
@@ -521,8 +520,9 @@ public void run() {
           forceFlag, syncingBuffer.position(), syncingBuffer.capacity(), usedRatio * 100);
 
       // flush buffer to os
+      double compressionRate = 1.0;
       try {
-        currentWALFileWriter.write(syncingBuffer, info.metaData);
+        compressionRate = currentWALFileWriter.write(syncingBuffer, info.metaData);

Review Comment:
   <img width="910" alt="image" 
src="https://github.com/apache/iotdb/assets/37140360/2011b00e-464e-4da6-a9d7-ffd1609e6b85";>
   Here we obtain the compression rate so that the WAL disk usage of each
   MemTable can be updated accordingly in subsequent updates. As for why we do
   not use the actual amount of data written to disk, the reasons are:
   
   1. The original design only needs the approximate size of the WAL, not a
   precise byte count. I confirmed this with the original maintainer of the
   WAL (@HeimingZ).
   2. A WAL buffer contains multiple WAL entries belonging to multiple
   MemTables, and we cannot know the exact compressed size of each entry
   unless we design a compression algorithm specifically to support this
   requirement, which is clearly not worth doing.
   
   The current modification meets the original requirement and the change is
   small; tracking precise I/O sizes would only add work. Of course, I can add
   a TODO so that later maintainers can implement it if they want. A rough
   sketch of how the returned compression rate could be applied is shown below.


