siddhantsangwan commented on code in PR #8915:
URL: https://github.com/apache/ozone/pull/8915#discussion_r2409487422
##########
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerReplication.java:
##########
@@ -157,6 +170,68 @@ void pushUnknownContainer() throws Exception {
ReplicationSupervisor::getReplicationFailureCount);
}
+ /**
+ * Provides stream of different container sizes for tests.
+ */
+ public static Stream<Arguments> sizeProvider() {
+ return Stream.of(
+ Arguments.of("Normal 2MB", 2L * 1024L * 1024L),
+ Arguments.of("Overallocated 6MB", 6L * 1024L * 1024L)
+ );
+ }
+
+ /**
+ * Tests push replication of a container with over-allocated size.
+ * The target datanode will need to reserve double the container size,
+ * which is greater than the configured max container size.
+ */
+ @ParameterizedTest(name = "for {0}")
+ @MethodSource("sizeProvider")
+ void testPushWithOverAllocatedContainer(String testName, Long containerSize)
Review Comment:
Yes, what I'm saying is that creating a container that crosses its max size
limit can be flaky in the first place. A container is supposed to transition
from open to closed state as it approaches the 5 GB limit, precisely to
prevent it from crossing that limit. But the open-to-closed transition takes
some time, so depending on timing the test _could_ occasionally fail. It's
good enough for now, though.
You can check
`org.apache.hadoop.ozone.container.common.impl.HddsDispatcher#sendCloseContainerActionIfNeeded`
to understand this better.
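To make the timing issue concrete, here is a minimal, self-contained sketch (not the actual Ozone implementation; the class name, the 90% threshold, and the field names are all illustrative assumptions) of why a container can overshoot its max size: the dispatcher only *requests* a close when usage crosses a threshold, while the state change itself happens asynchronously, so writes can still land on an OPEN container in the meantime.

```java
// Hedged sketch of the close-container race described above.
// Not Ozone code: threshold and names are illustrative assumptions.
public class CloseActionSketch {
  static final long MAX_SIZE = 5L * 1024 * 1024 * 1024; // 5 GB limit
  // Assumed threshold fraction; the real value used by
  // HddsDispatcher#sendCloseContainerActionIfNeeded may differ.
  static final double CLOSE_THRESHOLD = 0.9;

  enum State { OPEN, CLOSING, CLOSED }

  long usedBytes;
  State state = State.OPEN;
  boolean closeActionSent;

  // Mirrors the idea of sendCloseContainerActionIfNeeded: the write is
  // accepted first, and crossing the threshold only *sends* a close
  // action; the actual open -> closed transition happens later.
  void writeChunk(long bytes) {
    if (state == State.CLOSED) {
      throw new IllegalStateException("container already closed");
    }
    usedBytes += bytes; // accepted while the container is still OPEN
    if (!closeActionSent && usedBytes >= CLOSE_THRESHOLD * MAX_SIZE) {
      closeActionSent = true; // close is asynchronous, not applied here
    }
  }

  public static void main(String[] args) {
    CloseActionSketch c = new CloseActionSketch();
    // Write ~4.6 GB: crosses the 90% threshold, close action is sent...
    c.writeChunk((long) (4.6 * 1024 * 1024 * 1024));
    // ...but before the close lands, another write overshoots 5 GB.
    c.writeChunk(1L * 1024 * 1024 * 1024);
    System.out.println(c.usedBytes > MAX_SIZE); // true: over-allocated
    System.out.println(c.state);                // still OPEN
  }
}
```

This is why the over-allocated case in the test can exist at all, and also why asserting exact sizes around the limit would be timing-sensitive.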
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]