On 10 February 2011 05:41, David Gilbert <david.gilb...@linaro.org> wrote:

> On 10 February 2011 13:14, Mirsad Vojnikovic
> <mirsad.vojniko...@linaro.org> wrote:
> >
> >
> > On 10 February 2011 04:30, David Gilbert <david.gilb...@linaro.org> wrote:
>
> >> OK, there were a few cases I was thinking of here:
> >>  1) A batch of new machines arrives in the data centre; they are
> >> apparently identical - you want to run a benchmark on them all and
> >> make sure the variance between them is within the expected range.
> >>  2) Some upgrade (e.g. a new kernel or a new Linaro release) has been
> >> rolled out to a set of machines - do they still all behave as expected?
> >>  3) You've got a test whose results seem to vary wildly from run to
> >> run - is it consistent across machines in the farm?
> >
> > OK, I understand better now. For me this is still at the test result
> > level, i.e. the dashboard (launch-control) should produce this kind of
> > report. I cannot see where this fits in at the scheduler level. Once we
> > provide the ability to run jobs on specific boards, it should be easy to
> > retrieve all the needed test reports from the dashboard.
>
> My only reason for thinking it was a scheduling issue is that you need
> a way to cause the same test to happen on all the machines in a group;
> not necessarily at the same time.
>

Ah, you are correct - that's excellent. A user story in the scheduler should
then look something like this: "Dave wants to rerun a previous test job from
the test job history". Comments?

>
> Dave
> >
> >>
> >> Note this set of requirements comes from using a similar testing farm.
> >>
> >> Dave
> >
> >
>