ctubbsii commented on code in PR #5853:
URL: https://github.com/apache/accumulo/pull/5853#discussion_r2326237393


##########
core/src/test/java/org/apache/accumulo/core/crypto/CryptoTest.java:
##########
@@ -539,14 +540,18 @@ private void testMultipleThreads(Scope scope) throws Exception {
 
     var executor = Executors.newCachedThreadPool();
 
-    List<Future<Boolean>> verifyFutures = new ArrayList<>();
+    final int numTasks = 32;
+    List<Future<Boolean>> verifyFutures = new ArrayList<>(numTasks);
+    CountDownLatch startLatch = new CountDownLatch(numTasks);

Review Comment:
   > 3. your suggested approach does not ensure that all tasks are actually at the same point in their operation like the current approach does
   
   True. It only ensures that *as many tasks as possible* are at the same point, based on the size of the executor's thread pool. If there are always enough threads to run all the tasks, then your original code is fine. It's just tricky to guarantee that the executor actually has enough threads... you have to be very careful about which executor service you use, and account for nuances in its behavior across environments (number of available CPUs, etc.) in order to avoid deadlock. I'm just not sure we need to do that much work here. The simpler solution I suggested seems sufficient, because it guarantees the test cannot deadlock. And, of course, we can still make a best effort to ask for enough threads in the thread pool... but we don't get deadlocked if, for some reason, we can't make use of all the ones we asked for at the same time.
   
   It's fine with me if you want to put in the effort to ensure that deadlock doesn't occur... so long as you are aware of the risks.
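   
   Roughly, the kind of best-effort pattern I'm describing could look something like the sketch below (the class name, timeout value, and `verifyOnce` placeholder are made up for illustration; this is not the actual CryptoTest code):
   
   ```java
   import java.util.ArrayList;
   import java.util.List;
   import java.util.concurrent.CountDownLatch;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.Future;
   import java.util.concurrent.TimeUnit;
   
   public class LatchStartSketch {
   
     // Hypothetical stand-in for the real per-task verification work.
     static boolean verifyOnce() {
       return true;
     }
   
     public static void main(String[] args) throws Exception {
       final int numTasks = 32;
       ExecutorService executor = Executors.newCachedThreadPool();
       CountDownLatch startLatch = new CountDownLatch(numTasks);
       List<Future<Boolean>> verifyFutures = new ArrayList<>(numTasks);
   
       for (int i = 0; i < numTasks; i++) {
         verifyFutures.add(executor.submit(() -> {
           // Announce arrival first, then wait a bounded time for the others.
           // If the pool can't run all tasks at once, some awaits time out and
           // those tasks proceed anyway: the test may be less concurrent, but
           // it can never deadlock on the latch.
           startLatch.countDown();
           startLatch.await(30, TimeUnit.SECONDS);
           return verifyOnce();
         }));
       }
   
       for (Future<Boolean> f : verifyFutures) {
         if (!f.get()) {
           throw new AssertionError("verification failed");
         }
       }
       executor.shutdownNow();
     }
   }
   ```
   
   The key point is that each task counts down *before* awaiting, and the await is bounded, so correctness never depends on the pool being large enough to run every task at once.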



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
