Hi,
 
Consider software applications in general. Any application has
a) a number of bugs that exist in the code, and
b) a number of bugs that will actually be encountered during the lifetime of the application, i.e. the number of bugs that a user/customer actually sees and hence may report.
 
Any ideas/opinions/guesses on what the ratio of these two bug counts might be, generally speaking?
Any pointers to studies, theories, models, ...?
 
I know very well that this is a vague, general question with no "right" answer. But I hope it's reasonably clear what I'm shooting for, and I would appreciate your thoughts on it. The question has real-world importance when trying to estimate the value of a tool based on how many bugs it found in some application, since many of the found bugs would never have shown up in real use anyway.
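To make the intended use of the ratio concrete, here is a rough back-of-the-envelope sketch. All names and numbers below are made-up assumptions for illustration, not data:

    # Hypothetical illustration: discount a tool's reported bug count by the
    # assumed fraction of latent bugs that users would ever actually encounter.

    bugs_found_by_tool = 200   # assumed: bugs the tool reports in the code
    encounter_ratio = 0.25     # assumed: fraction of latent bugs ever hit in real use

    # Expected number of tool-found bugs that would actually have affected a user.
    effectively_valuable_bugs = bugs_found_by_tool * encounter_ratio
    print(effectively_valuable_bugs)   # 50.0 under these made-up numbers

This simple per-bug weighting obviously ignores severity and which specific bugs the tool tends to find, but it shows why the ratio matters for valuing the tool.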
 
Thank you very much!
 
- Henrik
 