Hi!

Mathieu Othacehe <othac...@gnu.org> writes:

> I just pushed support for computing and displaying metrics in Cuirass. I
> started with two metrics:
>
> * Builds per day
> * Average evaluation speed per specification.
>
> Those metrics can now be seen at:
>
> https://ci.guix.gnu.org/metrics
>
> and are updated every hour.

This is very cool, thumbs up!

> I plan to add more metrics such as:
>
> - Evaluation completion percentage.
> - Evaluation completion speed.
> - Failed evaluations percentage.
> - Pending builds per day.

That’d be awesome.

As discussed on IRC, builds per day should be compared to new
derivations per day.  For example, if on a given day there are 100 new
derivations and we only manage to build 10 of them, we have a problem.
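To make the comparison concrete, here is a rough sketch (in Python, since I don't have the Cuirass schema at hand; the data and names below are hypothetical, not the actual Cuirass tables) of such a ratio of daily counts:

```python
from collections import Counter
from datetime import date

# Hypothetical per-event timestamps: the day each derivation was first
# seen, and the day each build finished.
new_derivations = [date(2020, 9, 14)] * 100
finished_builds = [date(2020, 9, 14)] * 10

per_day_derivations = Counter(new_derivations)
per_day_builds = Counter(finished_builds)

for day, n_drv in sorted(per_day_derivations.items()):
    n_built = per_day_builds.get(day, 0)
    # A ratio well below 1 means we are accumulating unbuilt derivations.
    print(f"{day}: {n_built}/{n_drv} built ({n_built / n_drv:.0%})")
```

With the numbers from the example above this would report "10/100 built (10%)", i.e. a growing backlog.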

BTW, in cuirass.log I noticed this:

--8<---------------cut here---------------start------------->8---
2020-09-14T21:16:21 Updating metric average-eval-duration-per-spec (guix-modular-master) to 414.8085106382979.
2020-09-14T21:16:21 Failed to compute metric average-10-last-eval-duration-per-spec (kernel-updates).
2020-09-14T21:16:21 Failed to compute metric average-100-last-eval-duration-per-spec (kernel-updates).
2020-09-14T21:16:21 Failed to compute metric average-eval-duration-per-spec (kernel-updates).
2020-09-14T21:16:21 Failed to compute metric average-10-last-eval-duration-per-spec (staging-staging).
2020-09-14T21:16:21 Failed to compute metric average-100-last-eval-duration-per-spec (staging-staging).
2020-09-14T21:16:21 Failed to compute metric average-eval-duration-per-spec (staging-staging).
2020-09-14T21:16:21 Failed to compute metric average-10-last-eval-duration-per-spec (version-1.0.1).
2020-09-14T21:16:21 Failed to compute metric average-100-last-eval-duration-per-spec (version-1.0.1).
2020-09-14T21:16:21 Failed to compute metric average-eval-duration-per-spec (version-1.0.1).
2020-09-14T21:16:21 Failed to compute metric average-10-last-eval-duration-per-spec (version-1.1.0).
2020-09-14T21:16:21 Failed to compute metric average-100-last-eval-duration-per-spec (version-1.1.0).
2020-09-14T21:16:21 Failed to compute metric average-eval-duration-per-spec (version-1.1.0).
2020-09-14T21:16:21 Failed to compute metric average-10-last-eval-duration-per-spec (wip-desktop).
2020-09-14T21:16:21 Failed to compute metric average-100-last-eval-duration-per-spec (wip-desktop).
2020-09-14T21:16:21 Failed to compute metric average-eval-duration-per-spec (wip-desktop).
--8<---------------cut here---------------end--------------->8---

Perhaps it can’t compute an average yet for these jobsets?
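That would be consistent with the usual failure mode of an average: with zero completed evaluations for a jobset there is nothing to divide by.  A minimal guard (a Python sketch of the general idea, not the actual Cuirass code) would be:

```python
def safe_average(durations):
    """Return the mean of `durations`, or None when there is no data yet."""
    if not durations:
        # No completed evaluations for this jobset: the metric is
        # undefined rather than an error.
        return None
    return sum(durations) / len(durations)

print(safe_average([400.0, 420.0, 424.4]))  # some data: prints the mean
print(safe_average([]))                     # no data yet: prints None
```

Logging "metric undefined" instead of "failed to compute" in that case would make the log less alarming.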

Thanks!

Ludo’.