On 01/11/16 13:20, Samuli Seppänen wrote:
> Hi,
> 
> On 01/11/2016 13:10, Gert Doering wrote:
>> Hi,
>>
>> On Mon, Oct 31, 2016 at 11:55:08PM +0100, David Sommerseth wrote:
>>> How long will users be willing to wait?  I'd be really surprised if 2.4
>>> is out the door before Christmas 2016.  When also considering we've said
>>> that 2.4_alpha was soon ready for about 1 year or so before it really
>>> got released, I'd even say I am overly optimistic.
>>
>> If we could spend our time on a somewhat focused work on 2.4, instead
>> of exhausting ourselves discussing feature backports to 2.3, this could
>> happen faster.
>>
>> And more *focused* work: less rounds of review, buildbot explosions, etc.,
>> which eats up everyone else's precious time.
> 
> It would be good to:
> 
> 1) Catch the issues before buildbot
> 
> In my recent INSTALL-win32.txt removal patch the breakage could have 
> been avoided if I had been less sloppy and had actually tried to build 
> the thing before sending in the patch.
> 
> A basic "make check" goes a long way, but I intend to de-bashify the 
> Vagrant integration soonish, so that we can merge it. This should make 
> it easier for developers to catch runtime errors before sending in patches:
> 
> <https://github.com/OpenVPN/openvpn/pull/45>
> 
> 2) Have buildbot catch issues before code is in Git "master"
> 
> This could be solved by tracking an "experimental" or "buildslave" 
> branch in Git, which would be basically Git "master" preview with forced 
> update (history rewrite) option. Normal people should never use this 
> branch, as the history would get rewritten whenever a patch would have 
> to be rolled back. This would help ensure that Git "master" is always in 
> a good shape.
> 
> That said, doing 1) properly should help mitigate the risk of Git 
> "master" breaking often. The less trivial issues will not be found on a 
> single developer machine anyways.
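
To make the "buildslave" idea a bit more concrete, here is a rough sketch
(in Python, which is what the buildbot side speaks anyway) of the
promotion step: master is only ever fast-forwarded to the throw-away
branch once the buildbots are green.  The branch and remote names are
placeholders, and the buildbot status check is just a stub:

    #!/usr/bin/env python3
    # Sketch only: promote the throw-away "buildslave" branch to "master"
    # once the buildbots report green.  Names are placeholders.
    import subprocess

    REMOTE = "origin"
    CANDIDATE = "buildslave"   # history may be rewritten here
    STABLE = "master"          # must only ever move forward

    def git(*args):
        subprocess.run(("git",) + args, check=True)

    def buildbots_are_green(branch):
        # Placeholder: ask the buildbot master whether all builders are
        # green for the tip of 'branch'.  Always "no" until wired up.
        return False

    def promote():
        git("fetch", REMOTE)
        if not buildbots_are_green(CANDIDATE):
            print("buildslave is not green, leaving master alone")
            return
        # Fast-forward only; if this fails, the branches have diverged
        # and a human needs to look at it.
        git("checkout", STABLE)
        git("merge", "--ff-only", "%s/%s" % (REMOTE, CANDIDATE))
        git("push", REMOTE, STABLE)

    if __name__ == "__main__":
        promote()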

I thought we initially set up buildbot to *identify* such build errors
before a *release*, since we had a few releases that broke in similar
ways.  IIRC, the argument that surfaced back then was along the lines
that it is too much to expect developers to test-build with all
reasonable configuration options.  So if that has changed over the
years, I have surely missed that memo.

To avoid this, we *must* automate it somehow; otherwise things either
won't improve, or people will simply avoid contributing because the
process is too heavy - which some already complain about today.

From the topics for next week's meeting I see proposals for using
patchwork (funnily enough, I suggested this many years ago, but it was
considered too much back then).  Hook that up to something which runs
thorough tests on some buildbots (without spamming everyone with every
failure) and reports the results back to the patch - that can truly
help.  On the other hand, we need to ensure we have some way of
controlling who can trigger such builds.  I don't think random patches
sent to the ML by lesser-known contributors should be kicked off
automatically (for security reasons); those will need a quick review
and a confirmation that they are safe to test.  Once that process gives
all green lights, we can do the proper review and apply the patch if it
is ACKed.
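
As a very rough sketch of that gating, assuming a Patchwork instance
with its REST API (the URL, the field names and the trigger function
below are placeholders, not a description of any real setup):

    #!/usr/bin/env python3
    # Sketch only: poll Patchwork for new patches and hand them to the
    # buildbots only when the submitter is on a list of known
    # contributors; everything else waits for a human "safe to test"
    # ack.  URL, field names and trigger_build() are placeholders.
    import requests

    PATCHWORK_API = "https://patchwork.example.org/api/patches/?state=new"
    KNOWN_CONTRIBUTORS = {
        "dev1@example.net",    # people whose patches may be built
        "dev2@example.net",    # automatically
    }

    def trigger_build(patch):
        # Placeholder: in reality this would poke the buildbot master,
        # e.g. through a "try" scheduler or a change hook.
        print("would trigger builds for patch %s" % patch["id"])

    for patch in requests.get(PATCHWORK_API).json():
        submitter = patch.get("submitter", {}).get("email", "")
        if submitter in KNOWN_CONTRIBUTORS:
            trigger_build(patch)
        else:
            print("patch %s from %s needs a manual 'safe to test' first"
                  % (patch["id"], submitter))

The important part is the whitelist check; the mechanics around it can
be whatever the infrastructure makes easiest.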

With that said ... Having the master branch always in perfect shape is
a nice goal, but it is utopian to believe we will ever fully reach it.
We will need to accept, whether we like it or not, that git master will
explode from time to time.  When that happens, we fix it as soon as
possible (preferably within the same working day) and move on.  No
reason to make more fuss about it.

All of us whom I consider core developers have, from time to time, sent
patches which were applied and which exploded things in various ways.
So there is no need to point fingers at anyone in particular, because
this is how things have been and will continue to be.  Some periods are
better than others.  But we do what we always do: we fix it and move on.

[...snip...]

>> I still think the timeline "end of 2016" should be doable - there's
>> some reasoning to meet that: it will make the next Debian release.
> 
> If Debian 9 is frozen by the end of the year, then that is a good goal.

Yes, that is a good goal.  And we *must* reach that one, IMO.


-- 
kind regards,

David Sommerseth
OpenVPN Technologies, Inc

