> On Jul 25, 2018, at 10:48 AM, Chris Lambertus <c...@apache.org> wrote:
> 
> On-demand resources are certainly being considered (and we had these in the 
> past,) but I will point out that ephemeral (“on-demand”) cloud builds are in 
> direct opposition to some of the points brought up by Allen in the other 
> jenkins storage thread, in that they tend to rely on persistent object 
> storage in their workspaces to improve the efficiency of their builds. 
> Perhaps this would be less of an issue with an on-demand instance which would 
> theoretically have no resource contention?

        Likely. 

        A lot of work went into greatly reducing the amount of time Hadoop 
spent in the build queue and running on the nodes. It was “the big one,” but I 
feel like that’s no longer true, or at least much harder to prove.  I estimate 
we shaved days off of the queue compared to 5 years ago.  Part of that was 
keeping caches, since the ‘Hadoop’ queue nodes were large.  But I feel like 
significantly more work went into taking the “stupidity” out of the CI jobs.

Two examples:

        * For source changes, only build and unit test the parts of the tree 
that a patch actually touches.  E.g., a patch that changes code in module A 
should only see module A’s unit tests run.  Let the nightlies sort out any 
inter-module brokenness post-commit.

        * If a patch is for documentation, only run mvn site.  If a patch is 
for shell code, only run shellcheck and the relevant unit tests.  Running the 
Java unit tests in either case is pointless.

        Building everything every time is a waste of time for modularized 
source trees.
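
        Yetus does this selection far more thoroughly than anything I could 
sketch in an email, but the rough idea in CI-job terms is something like the 
following. The git usage, file layout, and module mapping are all invented here 
for illustration; this is not how the Hadoop jobs were actually wired up:

        # Rough sketch only: pick checks based on what a patch touches.
        # Assumes the patch is the HEAD commit of a Maven multi-module tree
        # whose top-level directories map 1:1 to Maven modules.
        CHANGED=$(git diff --name-only HEAD~1)

        if ! echo "${CHANGED}" | grep -qvE '\.md$|^src/site/'; then
          # Documentation-only patch: just build the site.
          mvn site
        elif ! echo "${CHANGED}" | grep -qvE '\.sh$'; then
          # Shell-only patch: lint the scripts; the Java unit tests add nothing.
          shellcheck ${CHANGED}
        else
          # Source patch: run unit tests only in the touched modules and let
          # the nightly full build catch any inter-module breakage.
          # (Assumes dependency modules are already available as snapshots.)
          MODULES=$(echo "${CHANGED}" | cut -d/ -f1 | sort -u | tr '\n' ',' | sed 's/,$//')
          mvn -pl "${MODULES}" test
        fi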

        Combined with the walls put up around the Docker containers (e.g., 
limits on how many processes can be launched at one time, memory limits, etc.), 
I personally felt much better that, other than disk space, the Hadoop jobs were 
exemplary citizens compared to the pre-Yetus days.
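
        For reference, those kinds of walls can be expressed directly on the 
container at launch time. The image name, script, and numbers below are made up 
for illustration and are not what the Hadoop jobs actually used:

        # Illustrative values only.
        docker run --rm \
          --memory=4g --memory-swap=4g \
          --pids-limit=1000 \
          --cpu-shares=512 \
          hadoop-build-env ./run-tests.sh
        # --memory/--memory-swap: hard RAM cap with no additional swap allowed
        # --pids-limit:  caps processes/threads so a runaway test can't fork-bomb the node
        # --cpu-shares:  deprioritizes the container relative to others on the host
        # "hadoop-build-env" and "./run-tests.sh" are hypothetical names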
