All,

Our startup time in the TomEE 8.0.0 milestones is much slower than in previous releases. We've historically had great startup speed, something we've worked on, and it's been a point of pride for the project. It's a core part of the "Be small, be certified, be Tomcat" slogan.
We definitely need to put some work into this again.

I created a very rough prototype that allows us to benchmark TomEE versions against each other and see if we are trending up or down. I ran it on all our webprofile and plus versions from TomEE 1.0.0 to 8.0.0-M3 (build 1136).

 - https://issues.apache.org/jira/secure/attachment/12969800/startup-times.png
 - https://issues.apache.org/jira/secure/attachment/12969802/sort-by-total.txt
 - https://issues.apache.org/jira/secure/attachment/12969801/sort-by-startup.txt

I dumped all the data into a JIRA issue so none of us have to search email for it in 5 years' time.

 - https://issues.apache.org/jira/browse/TOMEE-2528

Historical data is great, but the effort was put in with the hope we could make some improvements. A few opportunities come to mind.

# Performance Testing in the Build

The trick with automating performance tests is that you need a reliable way to pass/fail without human interaction, or you haven't achieved much. You can't put in a fixed number like "must be faster than 5 seconds," as everyone's hardware is different.

Scientific method to the rescue: we could introduce a control group and measure against that. The test could measure the speed of the current code plus the previous two releases, and assert that the current code is not slower than the previous two releases by some margin. The test is then asserting, simply, "we can't be slower than we were before," which works on any hardware.

This will likely cause intermittent build failures, but it will also help us squash performance issues early. Failures would be most likely to happen on library upgrades. This also means doing library upgrades the day before a release would become much more of a no-no, which is perhaps not a bad thing.

# Experimenting with Performance Improvements

Tools like JMeter, Grinder, etc. don't help at all here, as they test throughput. Tools like YourKit, JProfiler, etc. are great for seeing where time is spent in code and can be used to look for optimizations.
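To make the control-group idea concrete, here is a rough sketch of the pass/fail logic. This is purely illustrative, not the prototype's actual code; the class, method names, and the 10% margin are all assumptions. A real harness would feed in measured startup times for the current build and the two previous releases.

```java
// Hypothetical sketch of a control-group startup regression check.
// None of these names come from the actual TomEE build.
public class StartupRegressionCheck {

    /**
     * Pass if the current build's startup time is no more than
     * `margin` times the fastest of the control (prior release) times.
     * This avoids fixed thresholds, so it works on any hardware.
     */
    static boolean withinMargin(double currentMillis, double[] controlMillis, double margin) {
        double fastestControl = Double.MAX_VALUE;
        for (double t : controlMillis) {
            fastestControl = Math.min(fastestControl, t);
        }
        return currentMillis <= fastestControl * margin;
    }

    public static void main(String[] args) {
        // Example: current build starts in 4.2s; the previous two
        // releases started in 4.0s and 4.4s. Allow a 10% margin
        // over the faster control release (4.0s * 1.10 = 4.4s).
        double[] controls = {4000, 4400};
        boolean ok = withinMargin(4200, controls, 1.10);
        System.out.println(ok ? "PASS" : "FAIL");
    }
}
```

The margin absorbs normal run-to-run noise; tightening it trades fewer missed regressions for more intermittent failures.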
In the end, however, they still take hours, and it isn't always clear if you're moving forward. With this technique of A/B perf testing, we can race the previous code against our code with performance ideas and get bottom-line feedback very quickly. You can test multiple builds, so you can try a few ideas out at once and see which one moves the needle the most.

If this tool is cleaned up enough and made very simple and command-line friendly, I think it could turn us all into performance-minded contributors.

# Revisiting TomEE Embedded

Already in the data, we can see 50% of our "new startup" time is spent extracting the server from the tar.gz. We could speed things up by 50% for our build and everyone's builds just by skipping that step.

We have always had a TomEE Embedded distribution, but we've approached it with a completely different mindset than regular TomEE, and therefore it has less functionality and simply could not compete with a TomEE zip. I believe we took a wrong turn at the start.

In a plain Tomcat zip, the server is started with an incredibly small classpath, basically just what is in the bin/ dir. Tomcat will then load everything in the lib/ dir into a new classloader, grab a class from that new classloader, and tell it to finish the job.

It would be possible for us to write an "embedded" version that does exactly this, but using jars from the local Maven repo, which would result in us being able to start/stop many Tomcat/TomEE instances in one JVM. We would have a slightly different version of the Tomcat bootstrap jar, but everything after that point would be 100% the same.

The embedded version would start 2x faster than a "remote" version but have the same functionality and cost us very little in terms of maintenance.

-- 
David Blevins
http://twitter.com/dblevins
http://www.tomitribe.com
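P.S. For anyone unfamiliar with the bootstrap pattern described above, here is a minimal sketch: a tiny launcher classpath builds a new classloader from a lib/ dir (or, in the proposed embedded variant, from jars in the local Maven repo) and reflectively hands off to a class loaded from it. The class and method names below are illustrative, not the actual Tomcat bootstrap code.

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only; not the real org.apache.catalina bootstrap.
public class MiniBootstrap {

    /** Build a classloader containing every jar in the given directory. */
    static ClassLoader createServerClassLoader(File libDir) throws Exception {
        List<URL> urls = new ArrayList<>();
        File[] jars = libDir.listFiles((dir, name) -> name.endsWith(".jar"));
        if (jars != null) {
            for (File jar : jars) {
                urls.add(jar.toURI().toURL());
            }
        }
        // Parent is the tiny bootstrap classpath; the server lives here.
        return new URLClassLoader(urls.toArray(new URL[0]),
                MiniBootstrap.class.getClassLoader());
    }

    public static void main(String[] args) throws Exception {
        File libDir = new File(args.length > 0 ? args[0] : "lib");
        if (!libDir.isDirectory()) {
            System.out.println("No lib/ directory found; nothing to bootstrap.");
            return;
        }
        ClassLoader server = createServerClassLoader(libDir);
        // Grab a class from the new loader and tell it to finish the job.
        // "Catalina" and "start" here stand in for the real entry point.
        Class<?> entry = server.loadClass("org.apache.catalina.startup.Catalina");
        Object catalina = entry.getDeclaredConstructor().newInstance();
        entry.getMethod("start").invoke(catalina);
    }
}
```

The key property is that nothing above the hand-off point depends on where the jars came from, which is what would let an embedded launcher reuse everything after the bootstrap step unchanged.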