On 2 Aug 2001 12:32:03 GMT esteemed Ben Bucksch did hold forth thusly:
> What I'm missing here are good, old, normal bugs. We have a lot of bugs,
> places where Mozilla behaves unexpected, weird or plain wrong. Some are
> minor and can be ignored, others are highly visible or are very
> annoying. Some of them we might not even realize anymore, because we got
> to know them and instinctively work around them.
I think we need a metric that gives us an idea of the MTBF (mean time between failures) for a typical user. E.g., if 10,000 random web users ran Moz, visiting whatever web sites they wanted, what bugs would they hit in the course of n hours of web surfing? Of those bugs, which would be significant problems for the users?
Bugs in Bugzilla just don't cut it as a standard of how buggy Moz is for practical purposes. Bugzilla is important, but it is not sufficient for determining how buggy Moz is at any given point in time. Measuring the results of real users hitting web pages they chose would be a more revealing measure, IMO.
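To make the metric concrete, the idea could be boiled down to something like the sketch below: total user-hours divided by significant failures observed. All the numbers and names here are invented for illustration, not actual Mozilla data.

```python
# Rough sketch of the proposed metric: MTBF estimated from a pool
# of user surfing sessions. Numbers are made up for illustration.

def mtbf_hours(total_user_hours, significant_failures):
    """Mean time between significant failures across the user pool."""
    if significant_failures == 0:
        return float("inf")  # no failures observed in the sample
    return total_user_hours / significant_failures

# e.g. 10,000 users surfing 5 hours each, 2,500 significant bugs hit:
print(mtbf_hours(10_000 * 5, 2_500))  # -> 20.0 hours between failures
```

The hard part, of course, is not the arithmetic but collecting the failure counts from real surfing sessions rather than from Bugzilla.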
However, the "real users with real web pages" measure is also not adequate. What such a measure misses is the quality of new features that existing web pages may not yet use. So some other ways to test the quality of the newer features must also be used.
One thing that might be helpful for testing the newer features would be to find a way to analyse pages out there and spot the ones that use newer tags, XML, and other newer stuff. A sort of web bot that crawled through pages looking for CSS, XML, and the like would be useful for finding pages that would give Moz a more advanced workout.
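The detection step of such a bot might look something like this sketch. The fetching/crawling machinery is left out, and the patterns and feature names are just my guesses at what "newer stuff" to look for; it only shows how a page's markup could be classified once retrieved.

```python
# Hypothetical sketch: classify a page's markup by which "advanced"
# features it uses, so a crawler could flag pages that would give
# Moz a harder workout. Patterns are illustrative, not exhaustive.
import re

FEATURE_PATTERNS = {
    "css": re.compile(r"<link[^>]+stylesheet|<style\b|\bstyle\s*=", re.I),
    "xml": re.compile(r"<\?xml|\bxmlns\s*=", re.I),
    "newer_tags": re.compile(r"<(iframe|object|abbr)\b", re.I),
}

def interesting_features(html):
    """Return the set of feature names the page appears to use."""
    return {name for name, pat in FEATURE_PATTERNS.items()
            if pat.search(html)}

page = '<?xml version="1.0"?><html><head><style>p{color:red}</style></head>'
print(sorted(interesting_features(page)))  # -> ['css', 'xml']
```

A crawler wrapped around this could simply record the URLs whose feature set is non-empty and feed them to testers as a more demanding test corpus.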