BTW, I've just noticed that the vote mail does not include a link to
the source tag.

So I don't see how anyone could have checked the contents of the source
archive against the code repo.
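
For reference, the kind of check I'd expect a reviewer to run is roughly the
following (a minimal sketch in Python; the archive name, clone directory and
tag name below are placeholders, since the vote mail doesn't say which tag the
artifacts were built from):

#!/usr/bin/env python3
# Sketch only: compare the file list of a release source archive against
# the files recorded at the corresponding tag in the code repo.
# ARCHIVE, REPO_DIR and TAG are placeholders, not the actual artifacts.

import subprocess
import sys
import zipfile

ARCHIVE = "netbeans-html4j-1.5.1-source.zip"   # placeholder archive name
REPO_DIR = "incubator-netbeans-html4j"         # placeholder local clone
TAG = "release-1.5.1"                          # placeholder tag name

def archive_files(path):
    """File paths inside the source archive, top-level directory stripped."""
    with zipfile.ZipFile(path) as zf:
        names = [n for n in zf.namelist() if not n.endswith("/")]
    return {n.split("/", 1)[1] if "/" in n else n for n in names}

def tag_files(repo, tag):
    """File paths recorded in the git tree that the tag points to."""
    out = subprocess.check_output(
        ["git", "-C", repo, "ls-tree", "-r", "--name-only", tag],
        text=True,
    )
    return set(out.splitlines())

def main():
    in_archive = archive_files(ARCHIVE)
    in_tag = tag_files(REPO_DIR, TAG)

    for name in sorted(in_tag - in_archive):
        print("missing from archive:", name)
    for name in sorted(in_archive - in_tag):
        print("not in tag:", name)

    # Matching file lists are only the first step; the contents still
    # need to be diffed (or checksummed) before casting a vote.
    sys.exit(0 if in_archive == in_tag else 1)

if __name__ == "__main__":
    main()

Of course none of that can be done without knowing which tag to compare
against.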

On 18 November 2017 at 20:41, sebb <seb...@gmail.com> wrote:
> On 18 November 2017 at 18:28, Jaroslav Tulach <jaroslav.tul...@gmail.com> 
> wrote:
>> 2017-11-17 22:59 GMT+01:00 sebb <seb...@gmail.com>:
>>
>>> On 17 November 2017 at 21:13, Jaroslav Tulach <jaroslav.tul...@gmail.com>
>>> wrote:
>>> > 17. 11. 2017 v 12:48, sebb <seb...@gmail.com>:
>>> >
>>> >>> On 16 November 2017 at 22:19, Jaroslav Tulach <jaroslav.tul...@gmail.com> wrote:
>>> >>> 72h is gone and (I think) we still need one more vote. C'mon it's a
>>> formality (version 1.5.1 is better than 1.5 and 1.5 was approved). Somebody
>>> please help us move on.
>>> >>
>>> >> Sorry to interrupt, but release approval is never a formality.
>>> >
>>> > As expected... I knew my comment would provoke a reaction. Too bad it
>>> > didn't provoke a binding vote...
>>> >
>>> >> Each release must be separately approved.
>>> >> Things can go wrong when assembling the release artifacts.
>>> >> For example, files can be omitted or spurious files can be included
>>> >> (did the RM use a clean workspace?)
>>> >
>>> > The release is prepared by a Jenkins job:
>>> > https://builds.apache.org/view/Incubator%20Projects/job/incubator-netbeans-html4j-release/
>>> > The reason is simple: I don't trust myself not to make some stupid
>>> > mistake, so I automate as much as I can. Thanks to that I can remain
>>> > convinced that release 1.5.1 is better than the previous version 1.5
>>> > and that neither of them contains any spurious files.
>>>
>>> Whilst automation helps to reduce errors, it cannot guarantee to eliminate
>>> them.
>>>
>>
>> Right. That is why there are human reviews.
>>
>>
>>> Can you prove that there are no bugs in the release script?
>>> Or in any of the libraries that it depends on?
>>>
>>
>> I am not trying to, but can you tell me the probability that a third
>> human reviewer will find an error when:
>>
>> - the script is the same as in the previous version
>> - the previous version was found OK by all human reviewers and approved
>> - the new version has already been successfully reviewed by two reviewers
>
> That was basically the scenario I mentioned in my previous comment.
>
>> I put my bet on the probability being extremely low and called the
>> remaining review a formality. Looks like I was the lucky winner (this time).
>
> My point is that the reviews are vital, and should not be dismissed as
> a mere formality.
> That route leads to complacency and potential errors (which is what
> happened in the example I mentioned).
>
>> -jt
