Hey Ludo,

> As discussed on IRC, builds per day should be compared to new
> derivations per day.  For example, if on a day there’s 100 new
> derivations and we only manage to build 10 of them, we have a problem.

I added this line to the chart, and sadly the two curves do not overlap :(
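
For illustration, here is a back-of-the-envelope sketch of that
comparison (hypothetical Python, just to make the arithmetic concrete;
Cuirass itself is written in Guile), using the numbers from your
example:

  def build_coverage(new_derivations: int, builds: int) -> float:
      """Fraction of a day's new derivations that were actually built."""
      return builds / new_derivations if new_derivations else 1.0

  # Example from the quote: 100 new derivations, only 10 built.
  ratio = build_coverage(new_derivations=100, builds=10)
  print(f"coverage: {ratio:.0%}")  # -> coverage: 10%, a growing backlog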

> 2020-09-14T21:16:21 Failed to compute metric average-eval-duration-per-spec (version-1.1.0).
> 2020-09-14T21:16:21 Failed to compute metric average-10-last-eval-duration-per-spec (wip-desktop).
> 2020-09-14T21:16:21 Failed to compute metric average-100-last-eval-duration-per-spec (wip-desktop).
> 2020-09-14T21:16:21 Failed to compute metric average-eval-duration-per-spec (wip-desktop).
>
> Perhaps it can’t compute an average yet for these jobsets?

Yes, as soon as those evaluations are repaired, we should be able to
compute those metrics. I chose to keep the error messages as a
reminder.

I added various other metrics and updated the "/metrics" page. Once we
have a better overview, we should consider adding thresholds on those
metrics.
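
Just to sketch the idea (hypothetical Python, not an existing Cuirass
feature; the metric names mirror the log above, the limits are made
up):

  THRESHOLDS = {
      # metric name -> maximum acceptable value, in seconds (made up)
      "average-eval-duration-per-spec": 3600,
      "average-10-last-eval-duration-per-spec": 1800,
  }

  def check(metric: str, value: float) -> None:
      limit = THRESHOLDS.get(metric)
      if limit is not None and value > limit:
          print(f"ALERT: {metric} = {value} exceeds threshold {limit}")

  check("average-eval-duration-per-spec", 5000.0)  # -> prints an alert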

Closing this one!

Thanks,

Mathieu

-- 
https://othacehe.org


