I think it depends on the goal of running these benchmarks. Ideally, we 
would run them on the same dedicated machine(s) with the same 
configuration every time, but I’m not sure that is achievable in our current 
infrastructure reality. 

On the other hand, IIRC, the initial goal of benchmarks like Nexmark was to 
quickly detect any major regressions, especially between releases; that use case is 
not so sensitive to ideal conditions. And here we have a field for improvement.

—
Alexey

> On 13 Sep 2022, at 22:57, Kenneth Knowles <k...@apache.org> wrote:
> 
> Good idea. I'm curious about our current benchmarks. Some of them run on 
> clusters, but I think some of them are running locally and just being noisy. 
> Perhaps this could improve that. (or if they are running on local Spark/Flink 
> then maybe the results are not really meaningful anyhow)
> 
> On Tue, Sep 13, 2022 at 2:54 AM Moritz Mack <mm...@talend.com 
> <mailto:mm...@talend.com>> wrote:
> Hi team,
> 
>  
> 
> I’m looking for some help to set up infrastructure to periodically run Java 
> microbenchmarks (JMH).
> 
> Results of these runs will be added to our community metrics (InfluxDB) to 
> help us track performance, see [1]. 
> 
>  
> 
> To prevent noisy runs this would require a dedicated Jenkins machine that 
> runs at most one job (benchmark) at a time. Benchmark runs take quite some 
> time, but on the other hand they don’t have to run very frequently (once a 
> week should be fine initially).
> 
>  
> 
> Thanks so much,
> 
> Moritz
> 
>  
> 
> [1] https://github.com/apache/beam/pull/23041 
> <https://github.com/apache/beam/pull/23041>
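
For readers unfamiliar with JMH, a benchmark of the kind discussed here looks roughly like the sketch below. This is not taken from the Beam PR; the class and field names are illustrative, and it assumes the `org.openjdk.jmh:jmh-core` dependency plus the JMH annotation processor on the classpath.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

/**
 * A minimal JMH microbenchmark sketch. JMH forks a fresh JVM, runs warmup
 * iterations, and then reports the steady-state average time per invocation,
 * which is why a dedicated, otherwise idle machine matters for stable numbers.
 */
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class StringConcatBenchmark {

  private final String left = "hello";
  private final String right = "world";

  @Benchmark
  public String concat() {
    // Returning the result prevents the JVM from dead-code-eliminating the work.
    return left + right;
  }
}
```

Each such run produces per-benchmark scores that can then be pushed to InfluxDB, as proposed above.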
> 
