+1 on a longer release cycle on a regular schedule, and on more maintenance releases.

_____________________________
From: Mark Hamstra <m...@clearstorydata.com>
Sent: Tuesday, September 27, 2016 2:01 PM
Subject: Re: [discuss] Spark 2.x release cadence
To: Reynold Xin <r...@databricks.com>
Cc: <dev@spark.apache.org>


+1

And I'll dare say that for those with Spark in production, it is more important 
that maintenance releases come out in a timely fashion than that new features 
arrive a month sooner or later.

On Tue, Sep 27, 2016 at 12:06 PM, Reynold Xin <r...@databricks.com> wrote:
We are 2 months past releasing Spark 2.0.0, an important milestone for the 
project. Spark 2.0.0 deviated from the regular release cadence we had for the 
1.x line (it took 6 months), and we never explicitly discussed what the release 
cadence should look like for 2.x. Thus this email.

During Spark 1.x, we made a new 1.x feature release roughly every three months 
(e.g. 1.5.0 came out three months after 1.4.0). Development happened primarily 
in the first two months, a release branch was cut at the end of month two, and 
the last month was reserved for QA and release preparation.

During 2.0.0 development, I really enjoyed the longer release cycle because 
there were a lot of major changes happening, and the extra time was critical 
for thinking through architectural changes as well as API design. While I don't 
expect the same degree of drastic change in a 2.x feature release, I do think 
it would make sense to increase the length of the release cycle so we can make 
better designs.

My strawman proposal is to maintain a regular release cadence, as we did in 
Spark 1.x, but increase the cycle from 3 months to 4 months. This effectively 
gives us ~50% more time to develop, since the two-month development window 
grows to three months (in reality it'd be slightly less than 50%, because a 
longer dev period also means longer QA). As for maintenance releases, I think 
those should still be cut on demand, as in Spark 1.x, but more aggressively.
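
A rough back-of-the-envelope sketch (in Python) of where the ~50% figure comes 
from; the one-month QA window here is just an assumption carried over from the 
1.x schedule, not part of the proposal:

    # Hypothetical numbers: assumes the QA window stays at roughly one month.
    old_cycle, qa = 3, 1        # Spark 1.x: ~3-month cycle, ~1 month QA
    new_cycle = 4               # proposed 2.x cycle
    old_dev = old_cycle - qa    # 2 months of development
    new_dev = new_cycle - qa    # 3 months of development
    print((new_dev - old_dev) / old_dev)  # 0.5 -> ~50% more dev time
    # If QA also grows with the longer cycle, the gain is somewhat below 50%.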

To put this into perspective, 4-month cycle means we will release Spark 2.1.0 
at the end of Nov or early Dec (and branch cut / code freeze at the end of Oct).

I am curious what others think.




