Writing a lot of performance tests creates a problem: the suite takes a long
time to run. By their nature, performance tests each need to run for a
relatively long time to produce meaningful results, so I doubt that writing
lots of different performance tests can scale. (Maybe we can find ways to
eliminate noise in very short tests, but that might be a research project.)
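For what it's worth, the usual trick for stabilizing very short tests is to
repeat the measurement many times and take the minimum, since jitter from
the scheduler, GC, and caches only ever adds time. A rough Python sketch,
where workload() is just a made-up stand-in:

    import timeit

    def workload():
        # Hypothetical stand-in for whatever the real test exercises.
        sum(i * i for i in range(10000))

    # The minimum of many noisy samples is a fairly stable estimate of
    # the true cost, since interference only ever inflates a sample.
    samples = timeit.repeat(workload, number=1, repeat=50)
    print("best of %d runs: %.6f s" % (len(samples), min(samples)))

Whether that holds up for the tests we actually care about is exactly the
research question.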
Well, we learn what works and what doesn't as we write more tests. A factor
like run length is something we learn about over time as we experiment. My
whole point here is to provide an easy way for devs to experiment; we
currently have nothing like that available.
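To make "easy" concrete, here's a rough sketch of the kind of low-friction
harness I have in mind. Nothing like this exists in our tree today; every
name here is made up for illustration:

    import time

    def perftest(iterations=10):
        # Run the decorated test repeatedly and report the best time.
        def decorator(fn):
            def run():
                best = float("inf")
                for _ in range(iterations):
                    start = time.perf_counter()
                    fn()
                    best = min(best, time.perf_counter() - start)
                print("%s: %.6f s (best of %d)"
                      % (fn.__name__, best, iterations))
                return best
            return run
        return decorator

    @perftest(iterations=20)
    def test_example():
        # Hypothetical workload; a real test would exercise product code.
        sum(range(100000))

    if __name__ == "__main__":
        test_example()

The point is that writing a new test should take a few lines, not a few
hours of harness setup.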
What the tests run on and how they integrate into our existing testing
infrastructure is an engineering problem we can solve.
One other thing to keep in mind if we're going to start doing performance
tests differently is https://bugzilla.mozilla.org/show_bug.cgi?id=846166.
Basically, Chris suggests using eideticker much more heavily for performance
tests.
Eideticker is interesting, but it's also not very pliable. We'd love to have
eideticker tests running for metro, but the odds of that happening anytime
soon are slim given the overhead of getting it set up. I imagine that making
changes or adding tests isn't very easy either.
Something like eideticker is great as a research project, or as a tool owned
by a dedicated team that augments it over time and produces data sets we can
use. But I seriously doubt that devs on m-c will ever be able to spend a few
hours writing an eideticker test and then check it in.
Jim