Hey All,

I wanted to announce the Spark 1.1 release window:

June 1 - Merge window opens
July 25 - Cut-off for new pull requests
August 1 - Merge window closes (code freeze), QA period starts
August 15+ - RC's and voting
This is consistent with the "3 month" release cycle we are targeting. I'd really encourage people submitting larger features to do so during the month of June, as features submitted closer to the window closing could end up getting pushed into the next release.

I wanted to reflect a bit as well on the 1.0 release. First, thanks to everyone who was involved in this release. It was the largest release ever and it's something we should all be proud of.

In the 1.0 release, we cleaned up and consolidated several parts of the Spark code base. In particular, we consolidated the previously fragmented process of submitting Spark jobs across a wide variety of environments {YARN/Mesos/Standalone, Windows/Unix, Python/Java/Scala}. We also brought the three language APIs into much closer alignment. These were difficult (but critical) tasks toward having a stable deployment environment on which higher-level libraries can build. These cross-cutting changes also carried an associated test burden, resulting in an extended QA period.

The 1.1, 1.2, 1.3 family of releases are intended to be smaller releases, and I'd like to deliver them with very predictable timing to the community. This will mean being fairly strict about freezes and investing in QA infrastructure to allow us to get through voting more quickly.

With 1.0 shipped, now is a great time to catch up on code reviews and look at outstanding patches. Despite the large queue, we've actually been consistently merging/closing about 80% of proposed PRs, which is definitely good (for instance, we have 170 outstanding out of 950 proposed), but there remain a lot of people waiting on reviews, and it's something everyone can help with!

Thanks again to everyone involved. Looking forward to more great releases!

- Patrick