Hi,

Mathieu Othacehe <othac...@gnu.org> wrote:

>> As discussed on IRC, builds per day should be compared to new
>> derivations per day.  For example, if on a day there’s 100 new
>> derivations and we only manage to build 10 of them, we have a problem.
>
> I added this line, and they sadly do not overlap :(

It seems less bad than I thought though, and the rendering is pretty.
:-)
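
Concretely, the comparison boils down to a simple ratio; here is a
rough Guile sketch (not actual Cuirass code, just the arithmetic from
the example above):

  (define (built-ratio new-derivations builds-completed)
    ;; Fraction of a day's new derivations that we managed to build.
    ;; With 100 new derivations and only 10 builds, that's 1/10.
    (if (zero? new-derivations)
        #f                                ;nothing new that day
        (/ builds-completed new-derivations)))

  (built-ratio 100 10) => 1/10

Anything that stays below 1 over a sustained period means we are
falling behind.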

>> 2020-09-14T21:16:21 Failed to compute metric average-eval-duration-per-spec (version-1.1.0).
>> 2020-09-14T21:16:21 Failed to compute metric average-10-last-eval-duration-per-spec (wip-desktop).
>> 2020-09-14T21:16:21 Failed to compute metric average-100-last-eval-duration-per-spec (wip-desktop).
>> 2020-09-14T21:16:21 Failed to compute metric average-eval-duration-per-spec (wip-desktop).
>>
>> Perhaps it can’t compute an average yet for these jobsets?
>
> Yes, as soon as those evaluations are repaired, we should be able to
> compute those metrics.  I chose to keep the error messages as a
> reminder.

Makes sense.
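
For the record, the failure is what one would expect from averaging
over zero evaluations; a guard along these lines (a sketch, not the
actual metric code) would yield "no value" instead of an error:

  (use-modules (srfi srfi-1))

  (define (average-duration durations)
    ;; Mean of DURATIONS, or #f when a jobset has no completed
    ;; evaluation yet.
    (if (null? durations)
        #f
        (/ (reduce + 0 durations) (length durations))))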

> I added various other metrics and updated the "/metrics" page. Once we
> have a better view, we should think of adding thresholds on those
> metrics.

Excellent.
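
When we get there, a threshold check could be as simple as this sketch
(the metric name and the alerting mechanism are made up for
illustration):

  (define (check-threshold name value threshold)
    ;; Warn when a metric falls below its minimal acceptable value.
    (when (and value (< value threshold))
      (format (current-error-port)
              "metric ~a is at ~a, below threshold ~a~%"
              name value threshold)))

  (check-threshold 'built-ratio 1/10 9/10)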

Thanks a lot for closing this gap!

Ludo’.
