gnodet commented on code in PR #22287:
URL: https://github.com/apache/camel/pull/22287#discussion_r2999956105


##########
test-infra/camel-test-infra-common/src/main/java/org/apache/camel/test/infra/common/services/ContainerEnvironmentUtil.java:
##########
@@ -114,6 +114,9 @@ public static String containerName(Class cls) {
             if (annotation.serviceImplementationAlias().length > 0) {
                 name += "-" + annotation.serviceImplementationAlias()[0];
             }
+            // Append PID to avoid Docker container name conflicts when multiple
+            // modules run tests in parallel (e.g., via mvnd with multiple threads)
+            name += "-" + ProcessHandle.current().pid();

Review Comment:
   To clarify further: the singleton is **per-JVM**, not cross-JVM.
   
   **Within a JVM** (e.g., all test classes in `camel-elasticsearch`):
   - `SingletonServiceHolder.INSTANCE` is a static field — one instance per classloader
   - `store.computeIfAbsent("elastic", ...)` in JUnit's root `ExtensionContext.Store` ensures `initialize()` is called exactly once
   - All test classes share the same container — same PID, same container name, no conflict
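   To illustrate the per-JVM behavior, here is a minimal sketch (hypothetical names; the real holder lives in `camel-test-infra`, and JUnit's root `ExtensionContext.Store` is modeled by a plain `ConcurrentHashMap`):

   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.atomic.AtomicInteger;

   // Illustrative sketch, not the actual camel-test-infra code.
   public class SingletonStoreSketch {
       // Static field: one instance per classloader, i.e. effectively per JVM
       static final Map<String, Object> ROOT_STORE = new ConcurrentHashMap<>();
       static final AtomicInteger INIT_COUNT = new AtomicInteger();

       static Object getOrInitialize(String key) {
           // computeIfAbsent guarantees the initializer runs exactly once per key,
           // no matter how many test classes ask for the same service
           return ROOT_STORE.computeIfAbsent(key, k -> {
               INIT_COUNT.incrementAndGet(); // stands in for starting the container
               return new Object();
           });
       }

       public static void main(String[] args) {
           Object a = getOrInitialize("elastic");
           Object b = getOrInitialize("elastic");
           System.out.println(a == b);           // same instance for all callers
           System.out.println(INIT_COUNT.get()); // initialized exactly once
       }
   }
   ```

   A second JVM gets a fresh `ROOT_STORE`, which is why the per-JVM guarantee does nothing across daemons.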
   
   **Across JVMs** (e.g., `camel-elasticsearch` vs `camel-elasticsearch-rest-client` in separate mvnd daemons):
   - Each JVM has its own static holder, its own JUnit root context
   - Each creates its own Docker container independently
   - The containers were **never shared** — without the PID fix, the second JVM simply crashed with a `409 Conflict`
   
   So there was no actual container sharing happening across JVMs before either — it was just failing. The PID suffix makes it work by giving each JVM its own uniquely-named container.
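   The naming behavior the diff relies on can be seen in isolation (illustrative sketch; `containerName` here is a stand-in for the patched method):

   ```java
   // Sketch of the PID-suffix naming from the diff above.
   public class ContainerNameSketch {
       static String containerName(String base) {
           // Within one JVM the PID is constant, so the name is stable across
           // test classes; a second JVM gets a different PID and a different name.
           return base + "-" + ProcessHandle.current().pid();
       }

       public static void main(String[] args) {
           String n1 = containerName("elastic");
           String n2 = containerName("elastic");
           System.out.println(n1.equals(n2));           // stable within one JVM
           System.out.println(n1.startsWith("elastic-"));
       }
   }
   ```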
   
   _Claude Code on behalf of Guillaume Nodet_



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
