zhuzhurk commented on code in PR #21695:
URL: https://github.com/apache/flink/pull/21695#discussion_r1082917445


##########
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptivebatch/DefaultVertexParallelismAndInputInfosDeciderTest.java:
##########
@@ -542,6 +542,12 @@ public long getNumBytesProduced() {
             return producedBytes;
         }
 
+        @Override
+        public long getNumBytesProduced(
+                IndexRange partitionIndexRange, IndexRange subpartitionIndexRange) {
+            return producedBytes;

Review Comment:
   It's better to throw an `UnsupportedOperationException` instead of returning a wrong value.
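
   For illustration, a minimal sketch of what that suggestion could look like in the test stub (the exception message is illustrative, not taken from the PR):

   ```java
   @Override
   public long getNumBytesProduced(
           IndexRange partitionIndexRange, IndexRange subpartitionIndexRange) {
       // Fail fast instead of silently returning a value that ignores the
       // requested partition/subpartition ranges.
       throw new UnsupportedOperationException(
               "getNumBytesProduced(IndexRange, IndexRange) is not expected to be called in this test.");
   }
   ```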



##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/AllToAllBlockingResultInfo.java:
##########
@@ -119,4 +120,17 @@ public List<Long> getAggregatedSubpartitionBytes() {
         checkState(aggregatedSubpartitionBytes != null, "Not all partition infos are ready");
         return Collections.unmodifiableList(aggregatedSubpartitionBytes);
     }
+
+    @Override
+    public long getExecutionVertexInputNumBytes(
+            IndexRange partitionIndexRange, IndexRange subpartitionIndexRange) {

Review Comment:
   I think @wanglijie95 means that we should check that `partitionIndexRange.startIndex == 0` and `partitionIndexRange.endIndex == numOfPartitions - 1`. We should also check that `subpartitionIndexRange.endIndex < numOfSubpartitions`.
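
   For illustration, the checks being asked for might look roughly like the sketch below at the top of the overridden method. The accessors `getStartIndex()`/`getEndIndex()` on `IndexRange` and the `numOfPartitions`/`numOfSubpartitions` fields are assumptions about the surrounding class, and `checkState` is assumed to be the statically imported `org.apache.flink.util.Preconditions#checkState` already used above; none of this is copied from the PR:

   ```java
   @Override
   public long getExecutionVertexInputNumBytes(
           IndexRange partitionIndexRange, IndexRange subpartitionIndexRange) {
       // For an all-to-all edge, every consumer task reads all partitions, so the
       // requested partition range should span [0, numOfPartitions - 1].
       checkState(
               partitionIndexRange.getStartIndex() == 0
                       && partitionIndexRange.getEndIndex() == numOfPartitions - 1,
               "For all-to-all edges, the partition range must cover all partitions.");
       // The requested subpartition range must stay within the known subpartitions.
       checkState(
               subpartitionIndexRange.getEndIndex() < numOfSubpartitions,
               "Subpartition end index %s exceeds numOfSubpartitions %s.",
               subpartitionIndexRange.getEndIndex(),
               numOfSubpartitions);

       // Sum the aggregated bytes over the requested subpartition range.
       long numBytes = 0;
       for (int i = subpartitionIndexRange.getStartIndex();
               i <= subpartitionIndexRange.getEndIndex();
               i++) {
           numBytes += getAggregatedSubpartitionBytes().get(i);
       }
       return numBytes;
   }
   ```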


