zhuzhurk commented on a change in pull request #13284:
URL: https://github.com/apache/flink/pull/13284#discussion_r485065322



##########
File path: flink-runtime/src/test/java/org/apache/flink/runtime/minicluster/MiniClusterITCase.java
##########
@@ -400,16 +400,23 @@ public void testJobWithAnOccasionallyFailingSenderVertex() throws Exception {
                try (final MiniCluster miniCluster = new MiniCluster(cfg)) {
                        miniCluster.start();
 
+                       // putting sender and receiver vertex in the same slot sharing group is required
+                       // to ensure all senders can be deployed. Otherwise this case can fail if the
+                       // expected failing sender is not deployed.

Review comment:
       Yes, by default each JobVertex is in its own slot sharing group, which is aligned with the previous behavior for a null slot sharing group before #13321 was applied.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

