We already do automated compile testing for Scala 2.11, similar to what we
do for the various Hadoop versions:

https://amplab.cs.berkeley.edu/jenkins/view/Spark-QA-Compile/
https://amplab.cs.berkeley.edu/jenkins/view/Spark-QA-Compile/job/Spark-master-Scala211-Compile/buildTimeTrend


If you look, this build takes 7-10 minutes, so adding it to every new PR
is a nontrivial increase. It has also broken only once in the last few
months (despite many patches going in) - a pretty low failure rate. For
scenarios like this it's better to test asynchronously, and simply revert
a patch immediately if it's found to break 2.11.
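
To make that concrete, here is a minimal sketch (in Python) of what such
an asynchronous post-commit check could look like. This is illustrative
only - the version-switch script and Maven flags are the documented Spark
2.11 build steps, but the actual Jenkins job configuration may differ:

    #!/usr/bin/env python
    # Sketch of an async post-commit Scala 2.11 compile check; not the
    # actual Jenkins job. Assumes Spark's documented 2.11 build steps
    # (dev/change-scala-version.sh and build/mvn).
    import subprocess
    import sys

    def compiles_against_scala_211():
        steps = [
            ["./dev/change-scala-version.sh", "2.11"],
            ["build/mvn", "-Dscala-2.11", "-DskipTests", "clean",
             "compile"],
        ]
        for cmd in steps:
            if subprocess.call(cmd) != 0:
                return False
        return True

    if __name__ == "__main__":
        # A nonzero exit fails the Jenkins build and notifies the list;
        # a committer can then revert the offending patch.
        sys.exit(0 if compiles_against_scala_211() else 1)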

Put another way - we typically have 1000 patches or more per release. Even
at one Jenkins run per patch, 7 minutes * 1000 = 7,000 minutes, or roughly
5 days of aggregate developer waiting time. Compare that to the few times
where we'd have to revert a patch and ask someone to resubmit (which takes
at most an hour)... it's not worth it.
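
For anyone who wants to check the arithmetic, here it is spelled out (the
numbers are the rough figures from this thread, not measured data):

    # Back-of-the-envelope cost comparison, using the figures above.
    patches_per_release = 1000
    gate_minutes_per_patch = 7     # extra 2.11 compile on every PR build
    breakages_per_release = 2      # observed: about once in a few months
    revert_minutes = 60            # revert + resubmit, roughly an hour

    gating_cost = patches_per_release * gate_minutes_per_patch  # 7000 min
    async_cost = breakages_per_release * revert_minutes         # 120 min

    print(gating_cost / 60.0 / 24.0)  # ~4.9 days of aggregate wait time
    print(async_cost / 60.0)          # 2.0 hours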

- Patrick

On Mon, Oct 12, 2015 at 8:24 AM, Sean Owen <so...@cloudera.com> wrote:

> There are many Jenkins jobs besides the pull request builder that
> build against various Hadoop combinations, for example, in the
> background. Is there an obstacle to building vs 2.11 on both Maven and
> SBT this way?
>
> On Mon, Oct 12, 2015 at 2:55 PM, Iulian Dragoș
> <iulian.dra...@typesafe.com> wrote:
> > Anything that can be done by a machine should be done by a machine. I am
> > not
> > sure we have enough data to say it's only once or twice per release, and
> > even if we were to issue a PR for each breakage, it's additional load on
> > committers and reviewers, not to mention our own work. I personally don't
> > see how 2-3 minutes of compute time per PR can justify hours of work plus
> > reviews.
>
