Some thoughts on quality
We're working now on AOO 4.0.1, to fix defects in AOO 4.0.0. The fact that we're doing this, and there are no arguments against it, shows that we value quality. I'd like to take this a step further, and see what we can learn from the defects in AOO 4.0.0 and what we can do going forward to improve.

Quality, in the end, is a process, not a state of grace. We improve by working smarter, not working harder. The goal should be to learn and improve, as individuals and as a community.

Every regression that made it into 4.0.0 was added there by a programmer, and the defect went undetected by testers. This is not to assign blame. It just means that we're all human. We know that. We all make mistakes. I make mistakes. A quality process is not about becoming perfect, but about acknowledging that we make mistakes and that certain formal and informal practices are needed to prevent and detect them.

But enough generalities. I'm hoping you'll join me in examining the 32 confirmed 4.0.0 regression defects and answering a few questions:

1) What caused the bug? What was the root cause? Note: "programmer error" is not really a cause. We should ask what caused the error.

2) What can we do to prevent bugs like this from being checked in?

3) Why wasn't the bug found during testing? Was it not covered by any existing test case? Was a test case run but the defect not recognized? Was the defect introduced into the software after the tests had already been executed?

4) What can we do to ensure that bugs like this are caught during testing?

So two basic questions -- what went wrong, and how can we prevent it in the future -- looked at from the perspective of programmers and testers. If we can keep these questions in mind, and try to answer them, we may be able to find some patterns that can lead to some process changes for AOO 4.1.
You can find the 4.0.0 regressions in Bugzilla here:

https://issues.apache.org/ooo/buglist.cgi?cmdtype=dorem&remaction=run&namedcmd=400_regressions&sharer_id=248521&list_id=80834

Regards,

-Rob

- To unsubscribe, e-mail: qa-unsubscr...@openoffice.apache.org For additional commands, e-mail: qa-h...@openoffice.apache.org
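Rob's call for fact-based analysis could start with a simple tally of the regression list. Bugzilla's buglist.cgi can usually export a query as CSV (via an added ctype=csv parameter); the sketch below, with made-up rows standing in for the real 4.0.0 data, shows how such an export could be grouped by component to look for hotspots.

```python
# Sketch: tally regressions by component from a Bugzilla CSV export.
# The rows below are illustrative placeholders, not real 4.0.0 data.
import csv
import io
from collections import Counter

SAMPLE_CSV = """bug_id,component,short_desc
123001,Writer,Crash when saving .docx with comments
123002,Calc,Chart labels lost after reload
123003,Writer,Autocorrect stops working after undo
123004,UI,Sidebar icons missing on high-DPI displays
"""

def regressions_by_component(csv_text):
    """Count regression reports per component to spot hotspots."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["component"] for row in reader)

if __name__ == "__main__":
    for component, count in regressions_by_component(SAMPLE_CSV).most_common():
        print(f"{component}: {count}")
```

A real run would fetch the saved search above with ctype=csv appended and feed the response text to the same function.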
Re: Some thoughts on quality
Dear Rob

The 4.0 release was too ambitious - we should advance in smaller steps.

Nothing compares to general public testing - betas and release candidates should not be avoided.

TestLink cases should be less comprehensive (in terms of feature coverage) and more stress-testing oriented.

Regards,
Edwin
Re: Some thoughts on quality
I strongly believe that one of the things that went wrong is our limited possibility to retest (due to resources). When I look at our current manual test cases, a lot of those could be automated, e.g. with a simple UI macro; that would enable us to run these test cases with every build. It may sound like a dream, but where I come from we did that every night, and it caught a lot of regression bugs and side effects.

A simple start is to request that every bug fix is issued with at least one test case (automated or manual).

rgds
jan I.
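janI's two suggestions, nightly automated runs and one test case per bug fix, could be wired together with a small registry that maps each Bugzilla issue to at least one automated check. This is a hedged sketch, not AOO's actual infrastructure; the bug ID and the check body are invented for illustration.

```python
# Sketch: a registry tying each bug fix to at least one automated check,
# so the whole suite can be replayed on every nightly build.

REGRESSION_CHECKS = {}

def regression_test(bug_id):
    """Decorator: register a check function under a Bugzilla issue ID."""
    def register(func):
        REGRESSION_CHECKS[bug_id] = func
        return func
    return register

@regression_test("123001")  # hypothetical issue number
def check_docx_roundtrip():
    # Placeholder for a real check, e.g. driving the office suite via a
    # UI macro or UNO call and comparing the saved document afterwards.
    return True

def run_all():
    """Run every registered check; return the IDs whose checks failed."""
    return [bug_id for bug_id, check in REGRESSION_CHECKS.items() if not check()]
```

A nightly job would just call run_all() and report any returned IDs as reopened regressions.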
Re: Some thoughts on quality
I apologize in advance if my note was not clear. I'm not at all interested in off-the-cuff opinions. We all have our opinions. But I'm only interested in fact-based analysis of the actual regressions reported in BZ. Specifically: what caused the actual defects that ended up in 4.0.0, and what could have been done to prevent them.

General recommendations, like "more time", not backed by specific analysis, are not very useful. And remember, there will never be enough time to improve quality with a suboptimal process. The goal should be (IMHO) to improve the process, i.e., work smarter, not harder.

Regards,

-Rob
Re: Some thoughts on quality
On Wed, Aug 14, 2013 at 1:55 PM, janI j...@apache.org wrote:

> I strongly believe that one of the things that went wrong is our limited
> possibility to retest (due to resources). When I look at our current manual

I wonder about that as well. That's one reason it would be good to know how many of the confirmed regressions were introduced late in the release process, and thus missed coverage in our full test pass.

> test cases, a lot of those could be automated, e.g. with a simple UI macro;
> that would enable us to run these test cases with every build. It may sound
> like a dream, but where I come from we did that every night, and it caught
> a lot of regression bugs and side effects.

This raises the question: Is the functionality of the regressions covered by our test cases? Or are they covered but we didn't execute them? Or did we execute them but not recognize the defect? I don't know (yet).

> A simple start is to request that every bug fix is issued with at least one
> test case (automated or manual).

Often there is, though this information lives in Bugzilla. One thing we did on another (non open source) project was to mark defects in our bug-tracking system that should become test cases. Not every bug did. For example, a defect report to fix a misspelling in the UI would not lead to a new test case. But many would.

Regards,

-Rob
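The practice Rob describes, marking fixed defects that should become test cases, can be sketched as a simple triage rule. The keyword list and the triviality heuristic below are assumptions for illustration, not an actual Bugzilla convention.

```python
# Sketch: decide which fixed bugs should be flagged as test-case
# candidates. The heuristic and field names are assumptions.

TRIVIAL_HINTS = ("typo", "misspelling", "string change")

def needs_test_case(bug):
    """Flag a fixed bug as a test-case candidate unless it looks trivial."""
    summary = bug.get("summary", "").lower()
    if any(hint in summary for hint in TRIVIAL_HINTS):
        return False  # e.g. a UI misspelling fix needs no regression test
    return bug.get("resolution") == "FIXED"

# Hypothetical fixed-bug records for illustration:
fixed_bugs = [
    {"id": 1, "summary": "Crash on paste in Calc", "resolution": "FIXED"},
    {"id": 2, "summary": "Typo in Options dialog", "resolution": "FIXED"},
    {"id": 3, "summary": "Hang opening large .odt", "resolution": "WONTFIX"},
]

candidates = [b["id"] for b in fixed_bugs if needs_test_case(b)]
# candidates -> [1]: the crash fix gets a regression test, the typo fix
# does not, and the unresolved hang is not a candidate yet.
```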
Re: Some thoughts on quality
On 08/14/2013 09:01 PM, Raphael Bircher wrote:

On 14.08.13 20:21, Rob Weir wrote:

On Wed, Aug 14, 2013 at 1:36 PM, Edwin Sharp el...@mail-page.com wrote:

Dear Rob
The 4.0 release was too ambitious - we should advance in smaller steps. Nothing compares to general public testing - betas and release candidates should not be avoided. TestLink cases should be less comprehensive (in terms of feature coverage) and more stress-testing oriented.

The number to consider here is how many defects were found and fixed during the 4.0.0 testing, before general public users had access. I assume it was quite substantial. If so, the TestLink usage was effective. In other words, we might have found fewer bugs without using it.

In general, my feeling is that it's too early to do a retrospective and ask "what was good, what needs to be improved?" (to say it with SCRUM words ;-) ) just after the first major release. We should look at the BZ query you have mentioned and see if there are one or more hotspots that should be improved fast. That's it.

This is important to keep in mind: we want to prevent or find more bugs, but we're not starting from zero. We're starting from a process that does a lot of things right.

I like the idea of a public beta. But consider the numbers. The 40 or so regressions that were reported came from an install base (based on download figures since 4.0.0 was released) of around 3 million users. Realistically, can we expect anywhere near that number in a public beta? Or is it more likely that a beta program has 10,000 users or fewer? I don't know the answer here. But certainly a well-publicized and widely used beta will find more than a beta used by just a few hundred users.

The public beta is from my point of view really important. Even if you have only 10,000 downloads of a beta, you normally have very experienced users there, like power users from companies. They provide really valuable feedback. So from my point of view, this is one of the most important changes we have to make: for every feature release, a beta version. And don't forget, people are really happy to do beta tests, but many of them are maybe not willing to follow a mailing list. A public beta release is of course not the golden solution, but it could activate some power users who give us the feedback we want and need.

So, +1 for going this way. After the 4.1 release is done we can see if this was much better - and ask ourselves why? :-)

Marcus
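Rob's install-base argument can be made concrete with a toy model (my assumption, not from the thread): if each user independently hits and reports a given defect with probability p, then n users surface it with probability 1 - (1 - p)^n. Rare defects that are near-certain to show up across 3 million users can easily slip past a 10,000-user beta.

```python
# Toy model, purely illustrative: probability that at least one of n
# users hits and reports a defect that any single user hits with
# probability p.
def detection_probability(p, n):
    return 1.0 - (1.0 - p) ** n

# A fairly obscure defect: 1 user in 100,000 encounters it.
p = 1e-5
print(detection_probability(p, 10_000))     # ~0.10 with a 10k-user beta
print(detection_probability(p, 3_000_000))  # ~1.00 with 3M installed users
```

This is why a beta catches the common regressions but the long tail still tends to arrive only after general release, which supports both Raphael's "do a beta" and Rob's "analyze what slipped through anyway".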