CI tooling might help here if we can run the tests on a dedicated agent (or at
least one where only a single perf test runs at a time). Without a dedicated
agent, running the tests repeatedly might help smooth out noisy-neighbor
effects.
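
For what it's worth, JMH itself can help with the repetition: a benchmark can
request several warmup iterations, measurement iterations, and independent JVM
forks so that no single noisy run dominates the score. A minimal sketch (the
class name and counts below are illustrative placeholders, not tuned
recommendations):

    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Fork;
    import org.openjdk.jmh.annotations.Measurement;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.annotations.Warmup;

    // Hypothetical benchmark shape: the extra forks and iterations trade
    // wall-clock time for resilience against a noisy shared agent.
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @Warmup(iterations = 5)
    @Measurement(iterations = 10)
    @Fork(5) // five independent JVMs; outliers from neighbors average out
    @State(Scope.Benchmark)
    public class NoisyNeighborBenchmark {

        @Benchmark
        public long timestamp() {
            return System.nanoTime();
        }
    }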

Matt Sicker

> On Oct 4, 2021, at 02:48, Ralph Goers <[email protected]> wrote:
> 
> Of course, running the benchmarks under Jenkins or as GitHub Actions would
> be almost useless since there would be no way to control what other
> workloads were running at the same time.
> 
> Ralph
> 
>> On Oct 4, 2021, at 12:39 AM, Ralph Goers <[email protected]> wrote:
>> 
>> If they can be run in Jenkins or GitHub Actions then there is hardware
>> available. However, we would have no idea what hardware the test is
>> running on, although the test could probably find a way to figure it out.
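>> 
>> For example, something along these lines could record the environment
>> next to each run (a rough sketch; the class name is made up):
>> 
>>     import java.lang.management.ManagementFactory;
>> 
>>     // Illustrative probe: log the environment alongside each benchmark
>>     // run so results from unknown CI hardware can be bucketed later.
>>     public class HardwareProbe {
>>         public static void main(String[] args) {
>>             System.out.println("os.name=" + System.getProperty("os.name"));
>>             System.out.println("os.arch=" + System.getProperty("os.arch"));
>>             System.out.println("java.version=" + System.getProperty("java.version"));
>>             System.out.println("cpus=" + Runtime.getRuntime().availableProcessors());
>>             System.out.println("maxHeapBytes=" + Runtime.getRuntime().maxMemory());
>>             System.out.println("jvmArgs=" + ManagementFactory.getRuntimeMXBean().getInputArguments());
>>         }
>>     }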
>> 
>> I don’t know of other tooling.
>> 
>> Ralph
>> 
>>> On Oct 4, 2021, at 12:22 AM, Volkan Yazıcı <[email protected]> wrote:
>>> 
>>> Hello,
>>> 
>>> log4j-perf is nicely populated with various JMH benchmarks, yet running
>>> them requires manual action. Not to mention that drawing comparisons
>>> between runs on varying Log4j, Java, OS, CPU, and concurrency
>>> configurations is close to impossible. I am in search of an F/OSS tool to
>>> facilitate such performance tests on a regular basis, e.g., once a week.
>>> In particular, the recent performance crusade Carter embarked on,
>>> triggered by Ceki's Log4j-vs-Logback comparison, is a tangible example
>>> showing the necessity of such a performance test bed. In this context, I
>>> need some suggestions on the following:
>>> 
>>> 1. Are there any (F/OSS?) tools that one can employ to run certain
>>> benchmarks, store the results, and generate reports comparing them with
>>> earlier runs? (A sketch of what I have in mind follows after this list.)
>>> 2. Can Apache provide us VMs to run this tool on?
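>>> 
>>> As a sketch of the first point, assuming JMH's bundled Runner API and an
>>> illustrative include pattern: the JSON output could be archived per run
>>> and fed to whatever comparison or reporting tool we settle on.
>>> 
>>>     import org.openjdk.jmh.results.format.ResultFormatType;
>>>     import org.openjdk.jmh.runner.Runner;
>>>     import org.openjdk.jmh.runner.RunnerException;
>>>     import org.openjdk.jmh.runner.options.Options;
>>>     import org.openjdk.jmh.runner.options.OptionsBuilder;
>>> 
>>>     // Illustrative driver: runs the selected benchmarks and stores the
>>>     // scores as JSON so successive (e.g., weekly) runs can be diffed.
>>>     public class ScheduledBenchmarkRun {
>>>         public static void main(String[] args) throws RunnerException {
>>>             Options options = new OptionsBuilder()
>>>                     .include("org.apache.logging.log4j.perf.*") // pattern is an assumption
>>>                     .resultFormat(ResultFormatType.JSON)
>>>                     .result("jmh-result.json")
>>>                     .build();
>>>             new Runner(options).run();
>>>         }
>>>     }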
>>> 
>>> 
>>> Kind regards.
