+1 on it, Mark. Compatibility between minor (X.something) or subminor (X.Y.something) releases shouldn't be an issue.
There are multiple examples of not following this simple principle in the world of OSS, and that, arguably, makes adoption a royal pain in the behind. Version numbers are cheap, so why not push what's planned for 0.8.1 out as 0.9, and earmark 0.9 as 1.0 (if we are planning on making some API compatibility promises at that point in time)? While not ideal, it would give us a bit more breathing room to stabilize the S[p,h]ark stack. It would also be a clear signal to downstream projects whether a major upgrade is needed or not, which helps with expectations management.

Cos

On Mon, Oct 28, 2013 at 03:22PM, Mark Hamstra wrote:
> Or more to the point: What is our commitment to backward compatibility in
> point releases?
>
> Many Java developers will come to a library or platform versioned as x.y.z
> with the expectation that if their own code worked well using x.y.(z-1) as
> a dependency, then moving up to x.y.z will be painless and trivial. That
> is not looking like it will be the case for Spark 0.8.0 and 0.8.1.
>
> We only need to look at Shark as an example of code built with a dependency
> on Spark to see the problem. Shark 0.8.0 works with Spark 0.8.0. Shark
> 0.8.0 does not build with Spark 0.8.1-SNAPSHOT. Presumably that lack of
> backwards compatibility will continue into the eventual release of Spark
> 0.8.1, and that makes life hard on developers using Spark and Shark. For
> example, a developer using the released version of Shark but wanting to
> pick up the bug fixes in Spark doesn't have a good option anymore since
> 0.8.1-SNAPSHOT (or the eventual 0.8.1 release) doesn't work, and moving to
> the wild and woolly development on the master branches of Spark and Shark
> is not a good idea for someone trying to develop production code. In other
> words, all of the bug fixes in Spark 0.8.1 are not accessible to this
> developer until such time as there are available 0.8.1-compatible versions
> of Shark and anything else built on Spark that this developer is using.
>
> The only other option is trying to cherry-pick commits from, e.g., Shark
> 0.9.0-SNAPSHOT into Shark 0.8.0 until Shark 0.8.0 has been brought up to a
> point where it works with Spark 0.8.1. But an application developer
> shouldn't need to do that just to get the bug fixes in Spark 0.8.1, and it
> is not immediately obvious just which Shark commits are necessary and
> sufficient to produce a correct, Spark-0.8.1-compatible version of Shark
> (indeed, there is no guarantee that such a thing is even possible.) Right
> now, I believe that 67626ae3eb6a23efc504edf5aedc417197f072cf,
> 488930f5187264d094810f06f33b5b5a2fde230a and
> bae19222b3b221946ff870e0cee4dba0371dea04 are necessary to get Shark to work
> with Spark 0.8.1-SNAPSHOT, but that those commits are not sufficient (Shark
> builds against Spark 0.8.1-SNAPSHOT with those cherry-picks, but I'm still
> seeing runtime errors.)
>
> In short, this is not a good situation, and we probably need a real 0.8
> maintenance branch that maintains backward compatibility with 0.8.0,
> because (at least to me) the current branch-0.8 of Spark looks more like
> another active development branch (in addition to the master and scala-2.10
> branches) than it does a maintenance branch.
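
To make the downstream side of this concrete, here is a minimal sketch of the kind of build pin Mark is describing; the sbt coordinates below are illustrative assumptions, not taken from the actual Shark build. Under strict point-release compatibility, bumping only the last digit should be a drop-in rebuild, while an 0.8.x -> 0.9.0 jump advertises possible API changes.

    // Sketch of a downstream project's sbt dependency on a Spark point release.
    // Group/artifact IDs and versions here are illustrative assumptions only.
    libraryDependencies += "org.apache.spark" % "spark-core_2.9.3" % "0.8.0"
    // With point-release compatibility, changing "0.8.0" to "0.8.1" should
    // rebuild with no code changes; moving to "0.9.0" signals a potentially
    // breaking upgrade that the developer has to plan for.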
