On Mar 30, 2011, at 9:19 AM, Robert Muir wrote:

> On Wed, Mar 30, 2011 at 8:22 AM, Grant Ingersoll <[email protected]> wrote:
>> (Long post, please bear with me and please read!)
>> 
>> Now that we have the release done (I'm working through the publication 
>> process now), I want to start the process of thinking about how we can 
>> improve the release process.  As I see it, building the artifacts and 
>> checking the legal items are now almost completely automated and testable at 
>> earlier stages in the game.
>> 
> 
> Thanks for writing this up. Here is my major beef, with two concrete suggestions:
> 
> It seems the current process is that we all develop and develop, and at
> some point we agree we want to try to release. At that point it's the
> RM's job to "polish a turd", and no serious community participation
> takes place until an RC is actually produced: so it's a chicken-and-egg
> thing, perhaps with the RM even declaring publicly "I don't expect this
> to actually pass, I'm just building this to make you guys look at it."
> 
> I think it's probably hard/impossible to force people to review this
> stuff before an RC; for some reason a VOTE seems to be the only thing
> that makes people take it seriously.
> 
> But what we can do is ask ourselves: how did the codebase become a
> turd in the first place? Because at one point we released off the code
> and the packaging was correct, there were no javadocs warnings, there
> were no licensing issues, etc.
> 
> So I think an important step would be to try to make more of this
> "continuous": in other words, we did all the work to fix up the
> codebase to make it releasable, so let's implement things to enforce
> that it stays this way. It seems we did this for some things (e.g. code
> correctness with the unit tests and licensing with the license
> checker), but there is more to do.
> 
> A. Implement the hudson-patch capability to vote -1 on patches that
> break things as soon as they go up on the JIRA issues. This is really
> early feedback, and I think it will go a long way.

+1.  I asked on [email protected] if there was any "standard" way of doing this, or if 
there is a place someone can point me to so I can get this going.
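Even before any "standard" tooling turns up, the checks such a job would run could be scripted. A hypothetical sketch (the ant target names and the JIRA "-1" comment step are assumptions on my part, not real infrastructure):

```shell
# Hypothetical sketch of a hudson-patch style check. The ant targets and the
# JIRA "-1" step are assumptions, not an existing setup.
run_step() {
  name="$1"; shift
  if "$@" > /dev/null 2>&1; then
    echo "PASS: $name"
  else
    echo "FAIL: $name"   # here the job would comment -1 on the JIRA issue
    return 1
  fi
}

check_patch() {
  patch_file="$1"
  run_step apply    patch -p0 --dry-run -i "$patch_file" &&
  run_step compile  ant compile &&
  run_step tests    ant test &&
  run_step javadocs ant javadocs
}
```

Each step short-circuits on failure, so a patch that doesn't even apply never wastes a full test run.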


> B. Increase the scope of our 'ant test'/hudson runs to check more
> things. For example, it would be nice if they failed on javadocs
> warnings. It's insane if you think about it: we go to a ton of effort
> to implement really cruel and picky unit tests to verify the
> correctness of our code, but you can break the packaging and
> documentation almost completely and the build still passes.

+1 on failing on javadocs.
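Until it's wired into the build properly, even a crude log scan would catch this. A minimal sketch of the idea (the warning pattern and log path are made up for illustration; the real JDK javadoc output may need a tighter regex):

```shell
# Minimal sketch: fail if a javadoc build log contains warning lines.
# The "warning" pattern and log path are illustrative assumptions.
fail_on_javadoc_warnings() {
  if grep -q "warning" "$1"; then
    echo "FAIL: javadoc warnings found in $1"
    return 1
  fi
  echo "OK: no javadoc warnings in $1"
}

# Demo on a fabricated log:
printf 'Foo.java:10: warning: no @param for x\n' > /tmp/javadoc.log
fail_on_javadoc_warnings /tmp/javadoc.log || echo "(build would fail here)"
```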

Also, what about code coverage?  We run all this Clover stuff, but how do we 
incorporate that into our dev. cycle?
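One way to fold coverage into the cycle would be a threshold gate that fails the build when coverage drops below some floor (I believe Clover itself ships Ant-side support for threshold checks). A sketch of the idea, where the minimum and the one-line "coverage: NN" summary format are assumptions for illustration; real Clover reports are XML/HTML and would need real parsing:

```shell
# Sketch of a coverage gate. MIN_COVERAGE and the "coverage: NN" summary
# format are illustrative assumptions, not Clover's actual output.
MIN_COVERAGE=70

check_coverage() {
  pct=$(sed -n 's/^coverage: \([0-9][0-9]*\).*/\1/p' "$1")
  if [ -z "$pct" ]; then
    echo "FAIL: no coverage figure found in $1"
    return 1
  fi
  if [ "$pct" -lt "$MIN_COVERAGE" ]; then
    echo "FAIL: coverage ${pct}% is below ${MIN_COVERAGE}%"
    return 1
  fi
  echo "OK: coverage ${pct}%"
}

# Demo on a fabricated summary:
echo "coverage: 63" > /tmp/coverage.txt
check_coverage /tmp/coverage.txt || echo "(build would fail here)"
```

A gate like this turns the Clover runs from passive reporting into something the build actually enforces.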

> 
> Anyway, we spend a lot of time on trying to make our code correct, but
> our build is a bit messy. I know that if we took the time we spend on
> search performance and correctness, and applied even 1% of that effort
> to our build system to make it fast, picky, and cleaner, we would be
> in much better shape as a development team, with a faster
> compile/test/debug cycle to boot... I think there is a lot of
> low-hanging fruit here, and this thread has encouraged me to
> revisit the build and try to straighten some of this out.

Yeah, our build is a bit messy: lots of recursion.  I'm still not totally happy 
with how license checking is hooked in.



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
