Hi,

regarding position _B) below: one should point out that the milestone will be delayed by more than 2-3 days if something is wrong with it. Creating a CWS, fixing the bug, rebuilding, repackaging, and re-running the automated tests can delay the milestone by a week or even more.

Another question is what criteria we establish for a bug that is severe enough to stop a milestone. Certainly not the ones we use for major and minor releases; this is just a development milestone, after all. And a milestone that is bad for, say, using Writer in any meaningful way can still be a perfect milestone to base a new Calc CWS on, if it contains urgently needed new stuff.

Any criterion for such a milestone-stopper bug will necessarily be "subjective" in a way, at least anything short of requiring that all automated tests pass. The latter would be a nice criterion if it didn't take days to verify; I think we would become too inflexible if we adopted it.

I would plead for keeping the current way of releasing development milestones, that is, do the smoketest and then announce it. Maybe we should enhance it with a "milestone is really, really bad, please don't use it, QA will not accept anything based on it" notification mechanism.

+1 for _A)

Heiner


Jörg Jahnke wrote:
Hi,

the P1 issue i67982, which was found on milestone m179 of SRC680, raised the question of whether there might be cases where it is useful to re-open a milestone that has already been announced as "ready for CWS usage", in order to integrate a P1 bugfix. As this matter does not affect only a few people at Sun, I am trying to restart this discussion here on the OOo list.

I will try to summarize some of the points that were already brought up in the recent discussion (ruthlessly copying & pasting from other people's emails ;-)). Please excuse that this mail gets quite long as a result. When replying, it might be useful to keep only the parts that seem relevant to you.


_A) Reasons to always fix the issues on the next milestone_
After a milestone is announced on [EMAIL PROTECTED], developers start working on that milestone almost immediately: they create child workspaces (CWSs) or resync their existing CWS against it. In doing so they rely on the CVS tags being fixed.

What could be a reason to re-open an existing milestone?
1) A milestone could pass the smoketest but nevertheless contain issues rendering it useless for some of the stakeholders. Examples: the current issue i67982 causing Writer to crash on redlining or tables, or build issues making a milestone completely unusable (build breaks) for OOo community developers.
2) A milestone could contain code integrated "by accident" that is not allowed to be in the code line, for example license-protected code not allowed to be distributed.

Ad 1) Developer perspective: for those not already working on that milestone it makes no difference whether they wait for a rebuild or for a new milestone. For those who are already working on that milestone, a re-open would cause additional trouble if they need to pick up the new fix. Getting it from a new milestone would be a standard task with normal tooling support ('cwsresync'). Getting it from the same milestone requires manual work (throw away your solver, get the new one, rebuild if necessary).

QA perspective: for serious QA you cannot accept that milestone as a base for any CWS. No one knows for sure what state such a CWS would be in: does it already contain the late fix, or not? Of course you would gain a testable master milestone, but what is the difference between waiting for a rebuild of milestone x and waiting for milestone x+1 containing just that one fix? Technically you would win nothing.

To summarize: no benefit for developers, more work for some developers, no real benefit for QA -> no solution at all. Scenario 1 does not require re-opening an existing milestone.

Ad 2) Although we have not had this situation yet, there may be cases where we are required to undo master commits regardless of all negative consequences.

What would be the consequences of fixing bugs on an existing milestone (in contrast to fixing them ASAP for the next build)?
- More work for developers (see above).
- Ambiguity about the state of such a milestone and derived work. We have no versioning within milestones, so no one could tell whether something based on a redone milestone was done before or after the fix (see 'QA perspective' above).
- Ambiguity about when a milestone is really ready for use. At the moment everyone can rely on the announcement mails. If we start redoing milestones, when can a developer be sure a milestone is good? I, for example, would stop creating CWSs against the latest milestone and take the one before instead, just to be sure I do not have to redo my work.
- Another question that comes up: what kind of P1 issue is severe enough to redo a build? And who decides that?

What would be the consequences of always fixing bugs in the next available milestone?
- Clear rules. What has been announced as finished is finished; no one will touch it again, neither inside nor outside Sun.
- There may be milestones known to be partly unusable. That's not new; we have already had situations where CWS owners were forced to resync their CWS before QA because a certain milestone had bugs preventing proper testing of that CWS. Of course we should communicate these cases as early as possible to avoid unnecessary work for CWS owners and QA people.


_B) Wait with the "ready for CWS usage" announcement until QA approval_
QA does not want to release products/builds with open P1 issues that break major functionality in SO/OOo. They want a build whose test results are comparable with those of other builds, where the number of errors goes down from one version to the next, and where regressions can be found quickly, without searching through hundreds of new errors.

Developers sometimes want to get a build quickly so they can open a new CWS or resync an older CWS to use new functionality. But OTOH resyncing against an unusable milestone could be a waste of time.

So why not wait until automated testing has the results for a build?
What are the consequences?
- Automated testing takes 2-3 days.
- Development will get a new build for resyncs 2-3 days later.
Is this acceptable?

In the beginning this delay might be difficult to handle. But once the process has been running for a while, the delay might only matter for a few critical CWSs or features.


Feedback is welcome. What do you think? Do you have other ideas how to solve the problem?

Jörg

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

