dawidwys commented on a change in pull request #34:
URL: https://github.com/apache/flink-benchmarks/pull/34#discussion_r724221931



##########
File path: 
src/main/java/org/apache/flink/benchmark/CheckpointingTimeBenchmark.java
##########
@@ -65,37 +61,20 @@
 import java.util.concurrent.Executors;
 import java.util.function.Function;
 
-import static java.util.concurrent.TimeUnit.MILLISECONDS;
 import static java.util.concurrent.TimeUnit.SECONDS;
 import static 
org.apache.flink.api.common.eventtime.WatermarkStrategy.noWatermarks;
 
 /**
  * The test verifies that the debloating kicks in and properly downsizes 
buffers. In the end the
  * checkpoint should take ~2(number of rebalance) * DEBLOATING_TARGET.
- *
- * <p>Some info about the chosen numbers:

Review comment:
       I removed the note, as the numbers are not straightforward to calculate 
and are hard to keep in sync. Moreover, the calculations so far were "wrong" 
regarding the record size: with `1b` of payload, a record actually takes `~29b` 
on the wire.
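A quick sketch of why the old numbers drifted. The `~28b` per-record framing overhead is an assumption inferred from the `1b -> ~29b` figure above, not a value read from the Flink source; the class and method names are illustrative only:

```java
// Illustrates how serialization overhead skews "records per segment" estimates.
// OVERHEAD_BYTES is an assumption derived from "1b payload == ~29b record".
public class RecordSizeSketch {
    static final int OVERHEAD_BYTES = 28; // assumed per-record framing overhead

    // How many records of the given payload fit into one memory segment.
    static long recordsPerSegment(long segmentBytes, long payloadBytes) {
        return segmentBytes / (payloadBytes + OVERHEAD_BYTES);
    }

    public static void main(String[] args) {
        // A 256b segment holds ~8 one-byte-payload records, not 256.
        System.out.println(recordsPerSegment(256, 1));
    }
}
```

With overhead dominating small payloads, any note that hard-codes "records per buffer" goes stale as soon as the payload or segment size changes, which is the motivation for dropping it.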

##########
File path: 
src/main/java/org/apache/flink/benchmark/CheckpointingTimeBenchmark.java
##########
@@ -65,37 +61,20 @@
 import java.util.concurrent.Executors;
 import java.util.function.Function;
 
-import static java.util.concurrent.TimeUnit.MILLISECONDS;
 import static java.util.concurrent.TimeUnit.SECONDS;
 import static 
org.apache.flink.api.common.eventtime.WatermarkStrategy.noWatermarks;
 
 /**
  * The test verifies that the debloating kicks in and properly downsizes 
buffers. In the end the
  * checkpoint should take ~2(number of rebalance) * DEBLOATING_TARGET.
- *
- * <p>Some info about the chosen numbers:
- *
- * <ul>
- *   <li>The minimal memory segment size is decreased (256b) so that the 
scaling possibility is
- *       higher. Memory segments start with 4kb
- *   <li>A memory segment of the minimal size fits ~3 records (of size 64b), 
each record takes ~1ms
- *       to be processed by the sink
- *   <li>We have 2 (exclusive buffers) * 4 (parallelism) + 8 floating = 64 
buffers per gate, with
- *       300 ms debloating target and ~1ms/record processing speed, we can 
buffer 300/64 = ~4.5
- *       records in a buffer after debloating which means the size of a buffer 
is slightly above the
- *       minimal memory segment size.
- *   <li>The buffer debloating target of 300ms means a checkpoint should take 
~2(number of
- *       exchanges)*300ms=~600ms
- * </ul>
  */
 @OutputTimeUnit(SECONDS)
-@Warmup(iterations = 4)
 public class CheckpointingTimeBenchmark extends BenchmarkBase {
     public static final int JOB_PARALLELISM = 4;
-    public static final MemorySize START_MEMORY_SEGMENT_SIZE = 
MemorySize.parse("4 kb");

Review comment:
       I increased the memory segment size to widen the range in which buffer 
debloating can operate. After debloating, a fully backpressured pipeline has 
buffers of `~1000-2000b`.
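As a rough sanity check of the `~1000-2000b` figure, one can combine numbers that appear elsewhere in this thread: the sink spins `~200µs` per record, the removed javadoc mentioned 64 buffers per gate, and a record is on the order of `29-64b`. All of these are assumptions for illustration, not values read from the final code:

```java
// Back-of-the-envelope estimate of the debloated buffer size.
// All constants are assumptions taken from this review thread.
public class DebloatSketch {
    // Records that fit in one buffer if the debloating target's worth of
    // data is spread evenly across all buffers of a gate.
    static double recordsPerBuffer(double targetMs, double msPerRecord, int buffersPerGate) {
        return (targetMs / msPerRecord) / buffersPerGate;
    }

    public static void main(String[] args) {
        double perBuffer = recordsPerBuffer(300, 0.2, 64); // ~23.4 records
        System.out.println(perBuffer * 29 + "b .. " + perBuffer * 64 + "b");
    }
}
```

That lands at roughly `680-1500b` per buffer, the same ballpark as the observed `~1000-2000b`, and comfortably above a small minimal segment size.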

##########
File path: 
src/main/java/org/apache/flink/benchmark/CheckpointingTimeBenchmark.java
##########
@@ -284,8 +262,9 @@ protected int getNumberOfSlotsPerTaskManager() {
      */
     public static class SlowDiscardSink<T> implements SinkFunction<T> {
         @Override
-        public void invoke(T value, Context context) throws Exception {
-            Thread.sleep(1);
+        public void invoke(T value, Context context) {
+            final long startTime = System.nanoTime();
+            while (System.nanoTime() - startTime < 200_000) {}

Review comment:
       I replaced `Thread.sleep` with busy waiting, which improves saturation of 
the network exchanges. With `Thread.sleep(1)`, most of the unaligned checkpoints 
had no persisted in-flight data.
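A standalone sketch of the busy-wait technique used in the diff. The helper name is illustrative; the `200_000ns` budget matches the diff above:

```java
// Busy waiting: spin on System.nanoTime() instead of parking the thread.
// Unlike Thread.sleep(1) (whose real granularity is often 1-15ms and which
// yields the CPU), the spin keeps the thread runnable, so upstream buffers
// stay full and backpressure is sustained.
public class BusyWaitSketch {
    // Spin until at least `nanos` nanoseconds elapse; returns actual elapsed time.
    static long busyWaitNanos(long nanos) {
        final long start = System.nanoTime();
        while (System.nanoTime() - start < nanos) {
            // deliberately empty: burn CPU rather than sleep
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        System.out.println("spun for " + busyWaitNanos(200_000L) + "ns");
    }
}
```

The trade-off is one fully occupied core per sink subtask, which is acceptable in a benchmark whose goal is precisely to keep the exchanges saturated.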




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]