Copilot commented on code in PR #16980:
URL: https://github.com/apache/iotdb/pull/16980#discussion_r2663928232


##########
iotdb-core/datanode/src/main/java/org/apache/iotdb/db/queryengine/execution/operator/process/DeviceViewIntoOperator.java:
##########
@@ -115,9 +128,15 @@ protected boolean processTsBlock(TsBlock inputTsBlock) {
     int readIndex = 0;
     while (readIndex < inputTsBlock.getPositionCount()) {
       int lastReadIndex = readIndex;
-      for (AbstractIntoOperator.InsertTabletStatementGenerator generator :
-          insertTabletStatementGenerators) {
-        lastReadIndex = Math.max(lastReadIndex, generator.processTsBlock(inputTsBlock, readIndex));
+
+      if (!insertTabletStatementGenerators.isEmpty()) {
+        InsertTabletStatementGenerator generatorOfCurrentDevice =
+            insertTabletStatementGenerators.get(insertTabletStatementGenerators.size() - 1);
+        int rowCountBeforeProcess = generatorOfCurrentDevice.getRowCount();
+        lastReadIndex =
+            Math.max(
+                lastReadIndex, generatorOfCurrentDevice.processTsBlock(inputTsBlock, readIndex));
+        batchedRowCount += generatorOfCurrentDevice.getRowCount() - rowCountBeforeProcess;

Review Comment:
   The change at lines 132-140 modifies the processing logic to only process 
data for the current device (the last generator in the list). However, this 
creates an inconsistency with the original behavior:
   
   **Original behavior (commented out code at lines 131-133):**
   - Iterated through ALL generators in insertTabletStatementGenerators
   - Each generator could process the input TsBlock
   - This made sense when each generator represented a different target path
   
   **New behavior:**
   - Only processes the LAST generator (lines 133-139)
   - Assumes this generator is for the current device
   
   **Problem:**
   This change appears to be related to the bug mentioned in comment ID 002. If 
insertTabletStatementGenerators accumulates generators from multiple devices 
(which it does when insertMultiTabletsStatement is null but the list is not 
null on device switch), then only processing the last generator means:
   
   1. Data for devices other than the current one won't be processed
   2. The assumption that "last generator = current device" is fragile and 
depends on the device switching logic being correct
   
   Additionally, the row count tracking at lines 135-139 only tracks rows for 
the current device generator, which might be correct for the new design but 
represents a significant behavioral change.
   
   The commented-out code should either be removed entirely or this change 
should be reconsidered to ensure it aligns with the intended behavior.
   ```suggestion
        for (InsertTabletStatementGenerator generator : insertTabletStatementGenerators) {
          int rowCountBeforeProcess = generator.getRowCount();
          lastReadIndex =
              Math.max(lastReadIndex, generator.processTsBlock(inputTsBlock, readIndex));
          batchedRowCount += generator.getRowCount() - rowCountBeforeProcess;
        }
   ```
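   To make the risk concrete, here is a minimal, self-contained sketch (hypothetical classes, not the real IoTDB types) of what happens when generators for several devices share one list but only the last generator is allowed to process input — a pending row for an earlier device is silently dropped:

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Simplified stand-in for InsertTabletStatementGenerator (hypothetical; the
   // real class tracks far more state). Each generator only accepts rows that
   // belong to its own device.
   class DeviceGenerator {
       final String device;
       int rowCount = 0;

       DeviceGenerator(String device) {
           this.device = device;
       }

       void process(String rowDevice) {
           if (device.equals(rowDevice)) {
               rowCount++;
           }
       }
   }

   public class LastGeneratorOnlyDemo {
       public static void main(String[] args) {
           // Generators for a previous device (A) and the current device (B)
           // accumulated in the same list, as described above.
           List<DeviceGenerator> generators = new ArrayList<>();
           generators.add(new DeviceGenerator("root.sg.A"));
           generators.add(new DeviceGenerator("root.sg.B"));

           // A row for device A arrives, but only the LAST generator runs:
           DeviceGenerator last = generators.get(generators.size() - 1);
           last.process("root.sg.A");

           System.out.println("A rows: " + generators.get(0).rowCount); // A rows: 0
           System.out.println("B rows: " + last.rowCount);              // B rows: 0
       }
   }
   ```

   Under the original iterate-all behavior, the generator for device A would have received the row; under last-only processing neither generator counts it.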



##########
iotdb-core/datanode/src/main/java/org/apache/iotdb/db/queryengine/execution/operator/process/DeviceViewIntoOperator.java:
##########
@@ -102,7 +110,12 @@ protected boolean processTsBlock(TsBlock inputTsBlock) {
           constructInsertMultiTabletsStatement(false);
       updateResultTsBlock();
 
-      insertTabletStatementGenerators = constructInsertTabletStatementGeneratorsByDevice(device);
+      if (insertMultiTabletsStatement != null || insertTabletStatementGenerators == null) {
+        insertTabletStatementGenerators = constructInsertTabletStatementGeneratorsByDevice(device);
+      } else {
+        insertTabletStatementGenerators.addAll(
+            constructInsertTabletStatementGeneratorsByDevice(device));
+      }

Review Comment:
   There's a potential issue with the logic flow when switching devices. At 
line 113, the condition checks if `insertMultiTabletsStatement != null || 
insertTabletStatementGenerators == null`. If insertMultiTabletsStatement is not 
null (meaning data needs to be flushed), the code recreates 
insertTabletStatementGenerators from scratch. However, if 
insertMultiTabletsStatement is null but insertTabletStatementGenerators is not 
null, it attempts to add to the existing list.
   
   The problem is that when a device changes, the old generators in 
insertTabletStatementGenerators are for the previous device(s), but now we're 
adding generators for the new device to that same list. This mixing of 
generators from different devices could lead to incorrect behavior when the 
list is later iterated (e.g., in findWritten method at lines 256-267, or when 
constructing InsertMultiTabletsStatement).
   
   Consider what happens:
   1. Process device A -> creates generators for A
   2. Switch to device B, no flush needed -> adds generators for B to the same 
list
   3. Later operations iterate over both A and B generators incorrectly
   
   The logic should either always recreate the list when switching devices, or 
maintain proper separation of generators by device.
   ```suggestion
         insertTabletStatementGenerators = constructInsertTabletStatementGeneratorsByDevice(device);
   ```
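   If accumulating pending generators across device switches is intentional, one way to "maintain proper separation of generators by device" is to key them by device instead of flattening everything into one list. A hedged sketch (`Generator` is a stand-in for InsertTabletStatementGenerator; this is not the current IoTDB design):

   ```java
   import java.util.ArrayList;
   import java.util.LinkedHashMap;
   import java.util.List;
   import java.util.Map;

   // Hypothetical alternative: bucket pending generators by device so a device
   // switch can never interleave state from two devices in one flat list.
   public class GeneratorsByDevice {
       static class Generator {
           final String device;
           Generator(String device) { this.device = device; }
       }

       private final Map<String, List<Generator>> generatorsByDevice = new LinkedHashMap<>();

       // On device switch, start (or extend) the bucket for the new device only.
       void onDeviceSwitch(String device, List<Generator> newGenerators) {
           generatorsByDevice
               .computeIfAbsent(device, d -> new ArrayList<>())
               .addAll(newGenerators);
       }

       // Lookups such as findWritten can then scan one device's bucket
       // instead of filtering a mixed list.
       List<Generator> generatorsFor(String device) {
           return generatorsByDevice.getOrDefault(device, List.of());
       }
   }
   ```

   A LinkedHashMap also preserves device order, so flushing all buckets still produces tablets in the order the devices were seen.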



##########
integration-test/src/test/java/org/apache/iotdb/db/it/selectinto/IoTDBSelectIntoIT.java:
##########
@@ -623,89 +623,89 @@ public void testDataTypeIncompatible() {
     // test INT32
     assertTestFail(
         "select s_int32 into root.sg_type.d_1(s_boolean) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_boolean[BOOLEAN]) is not compatible with the data type of source column (root.sg_type.d_0.s_int32[INT32]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_boolean] caused by [data type of root.sg_type.d_1.s_boolean is not consistent, registered type BOOLEAN, inserting type INT32, timestamp 0, value 0]");
     assertTestFail(
         "select s_int32 into root.sg_type.d_1(s_text) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_text[TEXT]) is not compatible with the data type of source column (root.sg_type.d_0.s_int32[INT32]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_text] caused by [data type of root.sg_type.d_1.s_text is not consistent, registered type TEXT, inserting type INT32, timestamp 0, value 0]");

     // test INT64
     assertTestFail(
         "select s_int64 into root.sg_type.d_1(s_int32) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_int32[INT32]) is not compatible with the data type of source column (root.sg_type.d_0.s_int64[INT64]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_int32] caused by [data type of root.sg_type.d_1.s_int32 is not consistent, registered type INT32, inserting type INT64, timestamp 0, value 0]");
     assertTestFail(
         "select s_int64 into root.sg_type.d_1(s_float) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_float[FLOAT]) is not compatible with the data type of source column (root.sg_type.d_0.s_int64[INT64]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_float] caused by [data type of root.sg_type.d_1.s_float is not consistent, registered type FLOAT, inserting type INT64, timestamp 0, value 0]");
     assertTestFail(
         "select s_int64 into root.sg_type.d_1(s_boolean) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_boolean[BOOLEAN]) is not compatible with the data type of source column (root.sg_type.d_0.s_int64[INT64]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_boolean] caused by [data type of root.sg_type.d_1.s_boolean is not consistent, registered type BOOLEAN, inserting type INT64, timestamp 0, value 0]");
     assertTestFail(
         "select s_int64 into root.sg_type.d_1(s_text) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_text[TEXT]) is not compatible with the data type of source column (root.sg_type.d_0.s_int64[INT64]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_text] caused by [data type of root.sg_type.d_1.s_text is not consistent, registered type TEXT, inserting type INT64, timestamp 0, value 0]");

     // test FLOAT
     assertTestFail(
         "select s_float into root.sg_type.d_1(s_int32) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_int32[INT32]) is not compatible with the data type of source column (root.sg_type.d_0.s_float[FLOAT]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_int32] caused by [data type of root.sg_type.d_1.s_int32 is not consistent, registered type INT32, inserting type FLOAT, timestamp 0, value 0.0]");
     assertTestFail(
         "select s_float into root.sg_type.d_1(s_int64) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_int64[INT64]) is not compatible with the data type of source column (root.sg_type.d_0.s_float[FLOAT]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_int64] caused by [data type of root.sg_type.d_1.s_int64 is not consistent, registered type INT64, inserting type FLOAT, timestamp 0, value 0.0]");
     assertTestFail(
         "select s_float into root.sg_type.d_1(s_boolean) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_boolean[BOOLEAN]) is not compatible with the data type of source column (root.sg_type.d_0.s_float[FLOAT]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_boolean] caused by [data type of root.sg_type.d_1.s_boolean is not consistent, registered type BOOLEAN, inserting type FLOAT, timestamp 0, value 0.0]");
     assertTestFail(
         "select s_float into root.sg_type.d_1(s_text) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_text[TEXT]) is not compatible with the data type of source column (root.sg_type.d_0.s_float[FLOAT]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_text] caused by [data type of root.sg_type.d_1.s_text is not consistent, registered type TEXT, inserting type FLOAT, timestamp 0, value 0.0]");

     // test DOUBLE
     assertTestFail(
         "select s_double into root.sg_type.d_1(s_int32) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_int32[INT32]) is not compatible with the data type of source column (root.sg_type.d_0.s_double[DOUBLE]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_int32] caused by [data type of root.sg_type.d_1.s_int32 is not consistent, registered type INT32, inserting type DOUBLE, timestamp 0, value 0.0]");
     assertTestFail(
         "select s_double into root.sg_type.d_1(s_int64) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_int64[INT64]) is not compatible with the data type of source column (root.sg_type.d_0.s_double[DOUBLE]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_int64] caused by [data type of root.sg_type.d_1.s_int64 is not consistent, registered type INT64, inserting type DOUBLE, timestamp 0, value 0.0]");
     assertTestFail(
         "select s_double into root.sg_type.d_1(s_float) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_float[FLOAT]) is not compatible with the data type of source column (root.sg_type.d_0.s_double[DOUBLE]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_float] caused by [data type of root.sg_type.d_1.s_float is not consistent, registered type FLOAT, inserting type DOUBLE, timestamp 0, value 0.0]");
     assertTestFail(
         "select s_double into root.sg_type.d_1(s_boolean) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_boolean[BOOLEAN]) is not compatible with the data type of source column (root.sg_type.d_0.s_double[DOUBLE]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_boolean] caused by [data type of root.sg_type.d_1.s_boolean is not consistent, registered type BOOLEAN, inserting type DOUBLE, timestamp 0, value 0.0]");
     assertTestFail(
         "select s_double into root.sg_type.d_1(s_text) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_text[TEXT]) is not compatible with the data type of source column (root.sg_type.d_0.s_double[DOUBLE]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_text] caused by [data type of root.sg_type.d_1.s_text is not consistent, registered type TEXT, inserting type DOUBLE, timestamp 0, value 0.0]");

     // test BOOLEAN
     assertTestFail(
         "select s_boolean into root.sg_type.d_1(s_int32) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_int32[INT32]) is not compatible with the data type of source column (root.sg_type.d_0.s_boolean[BOOLEAN]).");
+        "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_int32] caused by [data type of root.sg_type.d_1.s_int32 is not consistent, registered type INT32, inserting type BOOLEAN, timestamp 0, value true]");
     assertTestFail(
         "select s_boolean into root.sg_type.d_1(s_int64) from root.sg_type.d_0;",
-        "The data type of target path (root.sg_type.d_1.s_int64[INT64]) is not compatible with the data type of source column (root.sg_type.d_0.s_boolean[BOOLEAN]).");
+        "301: Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_int64] caused by [data type of root.sg_type.d_1.s_int64 is not consistent, registered type INT64, inserting type BOOLEAN, timestamp 0, value true]");

Review Comment:
   The expected error message at this line includes a "301: " prefix that is 
inconsistent with all other similar error messages in this test method. All 
other test cases expect error messages starting with "Error occurred while 
inserting tablets in SELECT INTO:", but this one expects "301: Error occurred 
while inserting tablets in SELECT INTO:". This inconsistency suggests either a 
copy-paste error or an unintentional test regression. The status code prefix 
should either be present in all cases or none.
   ```suggestion
           "Error occurred while inserting tablets in SELECT INTO: Fail to insert measurements [s_int64] caused by [data type of root.sg_type.d_1.s_int64 is not consistent, registered type INT64, inserting type BOOLEAN, timestamp 0, value true]");
   ```



##########
iotdb-core/datanode/src/main/java/org/apache/iotdb/db/queryengine/execution/operator/process/DeviceViewIntoOperator.java:
##########
@@ -102,7 +110,12 @@ protected boolean processTsBlock(TsBlock inputTsBlock) {
           constructInsertMultiTabletsStatement(false);
       updateResultTsBlock();

Review Comment:
   There's a potential issue where `updateResultTsBlock()` is called at line 
111 without checking if `resultTsBlockBuilder` is already full. If the builder 
becomes full during the update, subsequent device switches or updates may fail 
to add data properly. 
   
   Additionally, when the builder becomes full and returns a partial result via 
`tryToReturnPartialResult()` (which resets the builder), the code should ensure 
that the current device's result information is still properly recorded. 
Currently, when switching devices at line 108-111, `updateResultTsBlock()` is 
called for the previous device, but if this causes the builder to become full, 
the builder might be reset in `tryToReturnPartialResult()` before the update is 
complete.
   
   Consider adding a check before calling `updateResultTsBlock()` or ensuring 
that partial result handling happens at appropriate boundaries.
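   A minimal sketch of the suggested guard (all names here are hypothetical stand-ins, not the real TsBlockBuilder API): flush a full builder before appending the device that just finished, so a reset inside the flush can never discard rows belonging to a half-recorded update.
   
   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Toy builder with a fixed capacity, modeling the flush-before-append
   // ordering this comment proposes.
   public class GuardedBuilder {
       private final int capacity;
       private final List<String> rows = new ArrayList<>();
       final List<List<String>> emitted = new ArrayList<>();

       GuardedBuilder(int capacity) { this.capacity = capacity; }

       boolean isFull() { return rows.size() >= capacity; }

       // Analogue of tryToReturnPartialResult(): emit current rows and reset.
       void flush() {
           if (!rows.isEmpty()) {
               emitted.add(new ArrayList<>(rows));
               rows.clear();
           }
       }

       // Analogue of updateResultTsBlock(): make room first, then record,
       // so the new row is never caught in a mid-update reset.
       void recordDeviceResult(String deviceRow) {
           if (isFull()) {
               flush();
           }
           rows.add(deviceRow);
       }
   }
   ```
   
   With capacity 2, recording three device rows emits one full block of two rows and keeps the third row pending; no row is ever dropped by the reset.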



##########
iotdb-core/datanode/src/main/java/org/apache/iotdb/db/queryengine/plan/analyze/AnalyzeVisitor.java:
##########
@@ -2415,13 +2413,7 @@ private void analyzeInto(
       intoDeviceMeasurementIterator.nextDevice();
     }
     deviceViewIntoPathDescriptor.validate();
-
-    // fetch schema of target paths
-    long startTime = System.nanoTime();
-    ISchemaTree targetSchemaTree = schemaFetcher.fetchSchema(targetPathTree, true, context);
-    QueryPlanCostMetricSet.getInstance()
-        .recordPlanCost(SCHEMA_FETCHER, System.nanoTime() - startTime);
-    deviceViewIntoPathDescriptor.bindType(targetSchemaTree);
+    deviceViewIntoPathDescriptor.bindType();

Review Comment:
   This change removes the schema fetching and validation logic that was 
previously performed during query analysis. The removed code (lines 2416-2422 
in the original) performed:
   1. Schema fetching for target paths
   2. Type compatibility validation between source and target
   3. View writability checks
   4. Auto-cast compatibility validation
   
   By removing this early validation, type incompatibility errors are now 
deferred until insertion time (as evidenced by the test changes). This is a 
significant behavioral change that:
   - Delays error detection from planning phase to execution phase
   - Changes error messages from semantic validation errors to insertion errors
   - May lead to partial data insertion before errors are detected
   
   While this change might improve pipeline efficiency by avoiding schema 
fetches, it represents a notable shift in error handling strategy. This should 
be clearly documented in the PR description, and consideration should be given 
to whether deferred validation is acceptable for all use cases, particularly 
for operations that might partially succeed before failing.
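   To illustrate the trade-off, here is a self-contained sketch of what early validation does conceptually (a local enum and strict equality stand in for IoTDB's TSDataType and its real compatibility/auto-cast matrix): the check runs at analysis time and fails the query before any row is written, whereas the deferred approach surfaces the same mismatch only when the first tablet insert is attempted.
   
   ```java
   // Hypothetical, simplified planning-time type check; the real code
   // consulted the fetched target schema tree.
   public class EarlyTypeCheck {
       enum DataType { INT32, INT64, FLOAT, DOUBLE, BOOLEAN, TEXT }

       static void validateCompatible(String targetPath, DataType target, DataType source) {
           // Strict equality is a simplification: the real matrix also permits
           // auto-castable pairs such as INT32 -> INT64.
           if (target != source) {
               throw new IllegalStateException(
                   "The data type of target path (" + targetPath + "[" + target
                       + "]) is not compatible with the data type of source column ["
                       + source + "]");
           }
       }

       public static void main(String[] args) {
           try {
               validateCompatible(
                   "root.sg_type.d_1.s_boolean", DataType.BOOLEAN, DataType.INT32);
           } catch (IllegalStateException e) {
               // Planning-time failure: nothing has been inserted yet.
               System.out.println(e.getMessage());
           }
       }
   }
   ```
   
   Removing this step trades one schema fetch per query for the risk that a long-running SELECT INTO inserts many tablets before hitting the incompatible column.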



##########
iotdb-core/datanode/src/main/java/org/apache/iotdb/db/queryengine/execution/operator/process/DeviceViewIntoOperator.java:
##########
@@ -187,4 +216,53 @@ public long ramBytesUsed() {
         + MemoryEstimationHelper.getEstimatedSizeOfAccountableObject(operatorContext)
         + resultTsBlockBuilder.getRetainedSizeInBytes();
   }
+
+  @Override
+  protected InsertMultiTabletsStatement constructInsertMultiTabletsStatement(boolean needCheck) {
+    if (insertTabletStatementGenerators == null
+        || (needCheck && !existFullStatement(insertTabletStatementGenerators))) {
+      return null;
+    }
+
+    List<InsertTabletStatement> insertTabletStatementList = new ArrayList<>();
+    try {
+      if (child.hasNextWithTimer()
+          && batchedRowCount < CONFIG.getSelectIntoInsertTabletPlanRowLimit()) {
+        return null;
+      }
+    } catch (InterruptedException e) {
+      Thread.currentThread().interrupt();
+      throw new IntoProcessException(e.getMessage());
+    } catch (Exception e) {
+      throw new IntoProcessException(e.getMessage());
+    }
+    for (InsertTabletStatementGenerator generator : insertTabletStatementGenerators) {
+      if (!generator.isEmpty()) {
+        insertTabletStatementList.add(generator.constructInsertTabletStatement());
+      }
+    }
+    if (insertTabletStatementList.isEmpty()) {
+      return null;
+    }
+
+    InsertMultiTabletsStatement insertMultiTabletsStatement = new InsertMultiTabletsStatement();
+    insertMultiTabletsStatement.setInsertTabletStatementList(insertTabletStatementList);
+    batchedRowCount = 0;

Review Comment:
   The `batchedRowCount` is reset to 0 at line 250 after constructing an 
InsertMultiTabletsStatement. However, there's a potential issue with the flow:
   
   1. When `constructInsertMultiTabletsStatement` is called with 
`needCheck=false` (e.g., from `tryToReturnResultTsBlock` at line 153), it will 
construct a statement and reset `batchedRowCount` to 0
   2. But if the method returns null (line 231 or 245), `batchedRowCount` is 
NOT reset
   3. This means `batchedRowCount` could accumulate indefinitely if statements 
are never constructed
   
   However, looking at the logic more carefully, the counter tracks rows added 
since the last flush. When a statement is constructed and about to be executed, 
resetting makes sense. But the timing of the reset might be problematic:
   
   - The reset happens before the statement is actually executed (it's just 
constructed)
   - If execution fails, the count has already been reset
   - If `constructInsertMultiTabletsStatement` is called multiple times before 
execution (e.g., in device switching scenarios), the count could be reset 
prematurely
   
   Consider moving the reset to happen after successful execution, or ensuring 
it's reset only when the statement will definitely be executed.
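   A small sketch of the reset-after-success timing suggested above (hypothetical names; `execute` stands in for the actual statement dispatch): the counter is left intact while the statement is merely constructed, and the flushed rows are subtracted only once execution succeeds, so a failed execute leaves the count accurate for a retry.
   
   ```java
   // Toy counter modeling batchedRowCount with reset deferred until after
   // a successful execution.
   public class BatchCounter {
       private int batchedRowCount = 0;

       void addRows(int n) { batchedRowCount += n; }

       int pending() { return batchedRowCount; }

       void flush(Runnable execute) {
           int flushed = batchedRowCount;
           execute.run();                // may throw; counter untouched if so
           batchedRowCount -= flushed;   // subtract only after success
       }
   }
   ```
   
   Subtracting `flushed` rather than zeroing the field also stays correct if more rows were batched between construction and execution.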



##########
iotdb-core/datanode/src/main/java/org/apache/iotdb/db/queryengine/execution/operator/process/DeviceViewIntoOperator.java:
##########
@@ -187,4 +216,53 @@ public long ramBytesUsed() {
         + MemoryEstimationHelper.getEstimatedSizeOfAccountableObject(operatorContext)
         + resultTsBlockBuilder.getRetainedSizeInBytes();
   }
+
+  @Override
+  protected InsertMultiTabletsStatement constructInsertMultiTabletsStatement(boolean needCheck) {
+    if (insertTabletStatementGenerators == null
+        || (needCheck && !existFullStatement(insertTabletStatementGenerators))) {
+      return null;
+    }
+
+    List<InsertTabletStatement> insertTabletStatementList = new ArrayList<>();
+    try {
+      if (child.hasNextWithTimer()
+          && batchedRowCount < CONFIG.getSelectIntoInsertTabletPlanRowLimit()) {
+        return null;
+      }
+    } catch (InterruptedException e) {
+      Thread.currentThread().interrupt();
+      throw new IntoProcessException(e.getMessage());
+    } catch (Exception e) {
+      throw new IntoProcessException(e.getMessage());
+    }

Review Comment:
   The logic in this method has a potential issue with the ordering of checks. 
The method checks `child.hasNextWithTimer()` at line 229, which is a 
potentially blocking I/O operation that queries whether more data is available. 
This check happens inside the method that's called during `processTsBlock`, 
which means:
   
   1. It performs I/O checks even when `needCheck` is true and the statement is 
already full (line 223)
   2. The `child.hasNextWithTimer()` call could throw InterruptedException or 
other exceptions, which are caught and wrapped in IntoProcessException
   3. This introduces a dependency on the child operator's state within what 
should be a simple statement construction method
   
   A cleaner design would be to:
   - Check the `needCheck` and `existFullStatement` conditions first
   - Only perform the `child.hasNextWithTimer()` check if those pass
   - Consider whether this I/O operation belongs in this method at all, or if 
the decision to flush should be made at a higher level
   
   Additionally, the broad exception catch at line 236 (`catch (Exception e)`) 
masks potential programming errors and makes debugging difficult. Consider 
catching only expected exception types.
   ```suggestion
       if (insertTabletStatementGenerators == null) {
         return null;
       }

       boolean hasFullStatement = existFullStatement(insertTabletStatementGenerators);
       if (needCheck) {
         // When needCheck is true, we only proceed if there already exists a full statement.
         if (!hasFullStatement) {
           return null;
         }
       } else {
         // When needCheck is false, we may delay flushing to accumulate more rows
         // if the batch is not yet at the configured row limit and the child has more data.
         try {
           if (batchedRowCount < CONFIG.getSelectIntoInsertTabletPlanRowLimit()
               && child.hasNextWithTimer()) {
             return null;
           }
         } catch (InterruptedException e) {
           Thread.currentThread().interrupt();
           throw new IntoProcessException(e.getMessage(), e);
         } catch (Exception e) {
           throw new IntoProcessException(e.getMessage(), e);
         }
       }

       List<InsertTabletStatement> insertTabletStatementList = new ArrayList<>();
   ```



##########
iotdb-core/datanode/src/main/java/org/apache/iotdb/db/queryengine/execution/operator/process/DeviceViewIntoOperator.java:
##########
@@ -187,4 +216,53 @@ public long ramBytesUsed() {
         + MemoryEstimationHelper.getEstimatedSizeOfAccountableObject(operatorContext)
         + resultTsBlockBuilder.getRetainedSizeInBytes();
   }
+
+  @Override
+  protected InsertMultiTabletsStatement constructInsertMultiTabletsStatement(boolean needCheck) {
+    if (insertTabletStatementGenerators == null
+        || (needCheck && !existFullStatement(insertTabletStatementGenerators))) {
+      return null;
+    }
+
+    List<InsertTabletStatement> insertTabletStatementList = new ArrayList<>();
+    try {
+      if (child.hasNextWithTimer()
+          && batchedRowCount < CONFIG.getSelectIntoInsertTabletPlanRowLimit()) {
+        return null;
+      }
+    } catch (InterruptedException e) {
+      Thread.currentThread().interrupt();
+      throw new IntoProcessException(e.getMessage());
+    } catch (Exception e) {
+      throw new IntoProcessException(e.getMessage());
+    }
+    for (InsertTabletStatementGenerator generator : insertTabletStatementGenerators) {
+      if (!generator.isEmpty()) {
+        insertTabletStatementList.add(generator.constructInsertTabletStatement());
+      }
+    }
+    if (insertTabletStatementList.isEmpty()) {
+      return null;
+    }
+
+    InsertMultiTabletsStatement insertMultiTabletsStatement = new InsertMultiTabletsStatement();
+    insertMultiTabletsStatement.setInsertTabletStatementList(insertTabletStatementList);
+    batchedRowCount = 0;
+    return insertMultiTabletsStatement;
+  }
+
+  @Override
+  protected int findWritten(String device, String measurement) {
+    for (InsertTabletStatementGenerator generator : insertTabletStatementGenerators) {
+      if (!Objects.equals(generator.getDevice(), device)) {
+        continue;
+      }
+      int writtenCountInCurrentGenerator = generator.getWrittenCount(measurement);
+      if (writtenCountInCurrentGenerator >= 0) {
+        return writtenCountInCurrentGenerator;
+      }
+      continue;

Review Comment:
   The `continue` statement at line 264 is redundant: it is the last statement in the loop body, so whenever the `return` at line 262 is not taken, the loop proceeds to the next iteration anyway. Removing it improves code clarity.
   ```suggestion
   
   ```
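   For reference, a self-contained sketch of the loop without the trailing `continue` (`Generator` here is a stand-in for InsertTabletStatementGenerator; `getWrittenCount` is assumed to return -1 when the measurement is absent, as the original snippet implies, and the 0 fallback is an assumption since the method's tail is not shown in the diff):

   ```java
   import java.util.List;
   import java.util.Objects;

   public class FindWrittenDemo {
       static class Generator {
           final String device;
           final String measurement;
           final int written;

           Generator(String device, String measurement, int written) {
               this.device = device;
               this.measurement = measurement;
               this.written = written;
           }

           // -1 signals "this generator does not track that measurement".
           int getWrittenCount(String m) {
               return measurement.equals(m) ? written : -1;
           }
       }

       static int findWritten(List<Generator> generators, String device, String measurement) {
           for (Generator generator : generators) {
               if (!Objects.equals(generator.device, device)) {
                   continue;
               }
               int count = generator.getWrittenCount(measurement);
               if (count >= 0) {
                   return count;
               }
               // No trailing `continue` needed: the loop advances on its own.
           }
           return 0; // assumed fallback; not visible in the quoted diff
       }
   }
   ```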



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]