On Wed, Jun 25, 2025 at 5:03 PM Jakub Kicinski <k...@kernel.org> wrote:
>
> On Wed, 25 Jun 2025 16:45:49 -0700 Mina Almasry wrote:
> > Thank you for merging this. Kind of a noob question: does this merge
> > mean that nipa will run this on newly submitted patches already? Or do
> > I/someone need to do something to enable that? I've been clicking
> > through the contest results for new patches like so:
> >
> > https://netdev.bots.linux.dev/contest.html?pw-n=0&branch=net-next-2025-06-25--21-00
> >
> > But I don't see this benchmark being run anywhere. I looked for docs
> > that already cover this but I couldn't find any.
>
> Right now to add a new TARGET one needs to have SSH access to the
> systems that run the tests :( The process of adding a runner is not
> automated. But this will probably need even more work because it's
> a performance test. We'd need some way of tracking numerical values
> and detecting regressions?
>

I actually did what you suggested earlier: the test reports the perf
numbers but always succeeds.
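
Roughly this shape, i.e. a minimal sketch of the "report but always
succeed" idea, not the merged test; the helper below is a hypothetical
stand-in for reading the benchmark module's output:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in: the real test would get this number from
 * the in-kernel benchmark's reported output. */
static long read_fast_path_insns(void)
{
	return 120; /* placeholder value */
}

int main(void)
{
	long insns = read_fast_path_insns();

	/* Report the measurement so reviewers can eyeball the logs... */
	printf("page_pool fast path: %ld instructions per op\n", insns);

	/* ...but always succeed, so run-to-run noise never turns CI red. */
	return EXIT_SUCCESS;
}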

What I'm hoping to do is:

1. Have nipa always run the benchmark (or at least on patches that
touch pp code, if that's possible), with the test always succeeding.
2. The pp reviewers can check the contest results to manually spot
regressions. That's still a win because it saves us the time of
cherry-picking series and running the tests ourselves (or asking
submitters to do that).
3. If we notice that the results are stable between runs, we can
change the test to actually fail/warn when it detects a regression
(e.g. fail if the fast path takes more than some threshold number of
instructions; see the sketch after this list).
4. If we notice that the results are too noisy, we can improve the
now-merged benchmark to make it more consistent.
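
For step 3, the check could look something like the below. Again only
a sketch: the command-line interface and the instruction budget are
assumptions, not what's in the merged test.

#include <stdio.h>
#include <stdlib.h>

/* Assumed placeholder budget; a real threshold would come from the
 * stable baseline we observe in step 2. */
#define FAST_PATH_INSN_BUDGET 150

int main(int argc, char **argv)
{
	long insns;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <fast-path-insns>\n", argv[0]);
		return EXIT_FAILURE;
	}

	insns = strtol(argv[1], NULL, 10);
	printf("page_pool fast path: %ld instructions per op\n", insns);

	/* Once results are proven stable, exceeding the budget fails CI. */
	if (insns > FAST_PATH_INSN_BUDGET) {
		fprintf(stderr, "FAIL: fast path over budget (%d)\n",
			FAST_PATH_INSN_BUDGET);
		return EXIT_FAILURE;
	}

	puts("PASS");
	return EXIT_SUCCESS;
}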

FWIW, when I run the benchmark, I get very repeatable results across
runs, especially when measuring the fast path, but nipa's mileage may
vary.

-- 
Thanks,
Mina
