Hello David,
Is the full build:
* CPU bound,
* memory bound, or
* IOPS bound?
You can tell from top/iostat metrics captured while it runs.
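For example (a rough sketch using the standard procps/sysstat tools;
the sampling interval and log names are placeholders), both can be
captured alongside the build:

    top -b -d 5 > top.log &       # batch mode: CPU/memory snapshot every 5 s
    iostat -x 5 > iostat.log &    # extended per-device stats: r/s, w/s, %util

High user/system CPU with little idle time points at CPU bound; a high
iowait figure, or a device sitting near 100 in iostat's %util column,
points at IOPS bound.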
If the latter, a good test would be to run the build on a gp3-type EBS
volume (baseline 3,000 IOPS, up to 16,000 IOPS) and see if that
improves things.
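To try that without rebuilding the instance, an existing volume can be
converted in place with the AWS CLI (sketch only; the volume ID is
hypothetical, and anything above the gp3 baseline of 3,000 IOPS and
125 MB/s throughput has to be requested explicitly):

    aws ec2 modify-volume --volume-id vol-0123456789abcdef0 \
        --volume-type gp3 --iops 6000 --throughput 250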
Alright, here are some build times for the runs I've done. Results are quite
varied.
c1.xlarge : [INFO] Total time: 03:59 h
t2.medium : [INFO] Total time: 05:05 h
t2.xlarge : [INFO] Total time: 02:30 h
t3.medium : [INFO] Total time: 03:21 h
my.laptop : [INFO] Total time: 01:56 h
FYI, I updated the script yesterday so it uses --fail-never instead of
--fail-at-end.
It seems that even though our jobs specify --fail-at-end, the behavior
we're actually getting is --fail-never.
With --fail-at-end the build skips most of the modules and ends early,
hence the 37-minute build time.
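For reference, these are standard Maven options, not anything specific
to our scripts: --fail-at-end (-fae) skips every module that depends
on a failed one and fails the build at the end, while --fail-never
(-fn) keeps building everything and never fails the overall run:

    mvn clean install --fail-at-end    # skip dependents of failed modules, fail at the end
    mvn clean install --fail-never     # build all modules, never fail the run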
Hi,
I've been doing some tests. Parallelism does not save us much unless
the tests spend a lot of time waiting (the metrics module, for
example, and a couple of other modules).
Otherwise the CPU of the Jenkins slave is too overloaded to get any
benefit. As David mentioned yesterday on Slack, they are 12+ years
old.
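For anyone reproducing the parallel runs: I assume this means Maven's
standard threaded builds, e.g.:

    mvn -T 1C clean install    # one build thread per available core
    mvn -T 4 clean install     # or a fixed thread count

If the slave's CPUs are already saturated, the extra threads just
contend for the same cores.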
Thanks for your time and detailed testing!
Regards,
Richard
On Thursday, 13.10.2022 at 17:10 +0200, Alex The Rocker wrote:
> [+1] (non binding)
[+1] (non binding)
Tested TomEE+ 8.0.13 with our web apps in VMs, including embedded
ActiveMQ, servlets, JAX-RS, JAX-WS, JMS, and WebSockets, on Linux
CentOS 7.9 with IBM Semeru 17.0.4 as the Java runtime.
Also tested in container-based services with the same stack.
No problems found!