Perhaps I'm being optimistic, but I would assume that Spark users keep up to date and are not so embedded in (or invested in) specific versions of Hadoop/HBase that they warrant the 0.8.x code stream. So I would expect us to update master to 1.0.x and not necessarily patch everything back to 0.8.x; perhaps backport only as requested. A quick survey of the current major distributors (Hortonworks, Cloudera, MapR) shows that all of them have moved to supporting Spark 1.0.x in their latest distributions.
On Thu, Aug 21, 2014 at 4:09 PM, Josh Wills <[email protected]> wrote:
> Hey all,
>
> I think it's about time to roll out the next Crunch release; we've fixed
> ~50 issues since our last release, which is our typical cadence. Among the
> remaining open issues, I'd like to get CRUNCH-425 and CRUNCH-436 (both
> Crunch-on-Spark fixes) committed to both the 0.8 and master branches.
>
> I'm debating what to do with CRUNCH-410, which upgrades Spark to 1.0.0 (and
> soon, 1.0.2 to get a few Spark fixes.) On the one hand, I'm tempted to only
> commit it to master and not 0.8, which would keep with our practice of
> leaving 0.8 against "old" versions (HBase 0.94, etc.) and only bringing
> version changes into master (and the new releases against it.) The rub, of
> course, is that the API changes in Spark between Spark 0.9.0 and 1.0.0 will
> mean that patching things against both versions will be a bit of a hassle.
> Very much open to suggestions on what people think the right course of
> action is here.
>
> Thanks!
> J
>
> --
> Director of Data Science
> Cloudera <http://www.cloudera.com>
> Twitter: @josh_wills <http://twitter.com/josh_wills>
