DonalEvans commented on a change in pull request #7429:
URL: https://github.com/apache/geode/pull/7429#discussion_r821889667
##########
File path:
geode-for-redis/src/distributedTest/java/org/apache/geode/redis/internal/commands/executor/string/StringsDUnitTest.java
##########
@@ -253,49 +252,23 @@ public void strLen_returnsStringLengthWhileConcurrentlyUpdatingValues() {
}
@Test
- public void givenBucketsMoveDuringAppend_thenDataIsNotLost() throws Exception {
- AtomicBoolean running = new AtomicBoolean(true);
-
- List<String> hashtags = new ArrayList<>();
- hashtags.add(clusterStartUp.getKeyOnServer("append", 1));
- hashtags.add(clusterStartUp.getKeyOnServer("append", 2));
- hashtags.add(clusterStartUp.getKeyOnServer("append", 3));
-
- Runnable task1 = () -> appendPerformAndVerify(1, hashtags.get(0), running);
- Runnable task2 = () -> appendPerformAndVerify(2, hashtags.get(1), running);
- Runnable task3 = () -> appendPerformAndVerify(3, hashtags.get(2), running);
-
- Future<Void> future1 = executor.runAsync(task1);
- Future<Void> future2 = executor.runAsync(task2);
- Future<Void> future3 = executor.runAsync(task3);
-
- for (int i = 0; i < 100 && running.get(); i++) {
- clusterStartUp.moveBucketForKey(hashtags.get(i % hashtags.size()));
- GeodeAwaitility.await().during(Duration.ofMillis(200)).until(() -> true);
- }
-
- running.set(false);
-
- future1.get();
- future2.get();
- future3.get();
+ public void givenBucketsMoveAndPrimarySwitches_thenNoDuplicateAppendsOccur() {
+ String KEY = "APPEND";
+ AtomicInteger counter = new AtomicInteger(0);
+
+ new ConcurrentLoopingThreads(1000,
Review comment:
I think that for this test we might prefer not to use
`ConcurrentLoopingThreads`, because `runWithAction()` makes each thread do one
iteration, then wait for the action, then do another iteration, and so on. That
keeps the timing of the two threads far more restricted than the original
approach, where the time at which an APPEND started was independent of what the
bucket-moving thread was currently doing, so the original approach could hit
small timing windows much more effectively.
We also probably don't want to do the same number of APPEND operations as
bucket moves: a bucket move takes significantly longer than an APPEND, so the
APPEND thread spends most of its time waiting for the bucket-move thread to
finish the current move.
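Something roughly along these lines is closer to what I have in mind (purely a
sketch, not a required implementation: it reuses the helpers already in this
test class, `jedisCluster`, `clusterStartUp`, `executor`,
`server1`/`server2`/`server3` and `verifyAppendResult()`, and the iteration
counts are arbitrary):
```java
@Test
public void givenBucketsMoveAndPrimarySwitches_thenNoDuplicateAppendsOccur() throws Exception {
  String key = "APPEND";
  AtomicBoolean running = new AtomicBoolean(true);
  AtomicInteger counter = new AtomicInteger(0);

  // APPEND worker loops freely, so the start of each APPEND is independent of
  // whatever the bucket-moving thread happens to be doing at that moment
  Future<Void> appender = executor.runAsync(() -> {
    while (running.get()) {
      int i = counter.getAndIncrement();
      jedisCluster.append(key, "-" + key + "-" + i + "-");
    }
  });

  // Bucket moves and primary switches run on their own cadence; 100 is arbitrary,
  // the point is that it is far fewer iterations than the APPEND worker performs
  for (int i = 0; i < 100; i++) {
    clusterStartUp.moveBucketForKey(key);
    clusterStartUp.switchPrimaryForKey(key, server1, server2, server3);
  }

  running.set(false);
  appender.get();

  verifyAppendResult(key, counter.get());
}
```
That way the APPEND thread is never gated on the bucket-move loop and performs
many more APPENDs than there are moves, which is what lets it hit the narrow
timing windows during a move.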
##########
File path:
geode-for-redis/src/distributedTest/java/org/apache/geode/redis/internal/commands/executor/string/StringsDUnitTest.java
##########
@@ -253,49 +252,23 @@ public void strLen_returnsStringLengthWhileConcurrentlyUpdatingValues() {
}
@Test
- public void givenBucketsMoveDuringAppend_thenDataIsNotLost() throws Exception {
- AtomicBoolean running = new AtomicBoolean(true);
-
- List<String> hashtags = new ArrayList<>();
- hashtags.add(clusterStartUp.getKeyOnServer("append", 1));
- hashtags.add(clusterStartUp.getKeyOnServer("append", 2));
- hashtags.add(clusterStartUp.getKeyOnServer("append", 3));
-
- Runnable task1 = () -> appendPerformAndVerify(1, hashtags.get(0), running);
- Runnable task2 = () -> appendPerformAndVerify(2, hashtags.get(1), running);
- Runnable task3 = () -> appendPerformAndVerify(3, hashtags.get(2), running);
-
- Future<Void> future1 = executor.runAsync(task1);
- Future<Void> future2 = executor.runAsync(task2);
- Future<Void> future3 = executor.runAsync(task3);
-
- for (int i = 0; i < 100 && running.get(); i++) {
- clusterStartUp.moveBucketForKey(hashtags.get(i % hashtags.size()));
- GeodeAwaitility.await().during(Duration.ofMillis(200)).until(() -> true);
- }
-
- running.set(false);
-
- future1.get();
- future2.get();
- future3.get();
+ public void givenBucketsMoveAndPrimarySwitches_thenNoDuplicateAppendsOccur() {
+ String KEY = "APPEND";
+ AtomicInteger counter = new AtomicInteger(0);
+
+ new ConcurrentLoopingThreads(1000,
+ i -> {
+ String appendString = "-" + KEY + "-" + i + "-";
+ jedisCluster.append(KEY, appendString);
+ counter.incrementAndGet();
+ },
+ i -> clusterStartUp.moveBucketForKey(KEY)).runWithAction(() -> {
+ clusterStartUp.switchPrimaryForKey(KEY, server1, server2, server3);
+ verifyAppendResult(KEY, counter.get());
Review comment:
The `verifyAppendResult()` method still contains the special logic for
tolerating duplicated appends, so that logic needs to be removed in order to
confirm that this fix actually prevents appends from being duplicated.
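For illustration, a strict version of that check could look something like this
(sketch only, assuming the `-KEY-i-` chunk format the test appends; the existing
method may be structured differently, and `assertThat` here is AssertJ):
```java
private void verifyAppendResult(String key, int appendCount) {
  // Rebuild the exact value that appendCount non-duplicated appends would produce
  StringBuilder expected = new StringBuilder();
  for (int i = 0; i < appendCount; i++) {
    expected.append("-").append(key).append("-").append(i).append("-");
  }

  // Exact equality: a duplicated (or lost) append changes the stored value
  assertThat(jedisCluster.get(key)).isEqualTo(expected.toString());
}
```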