Hey Holden,

Thanks for bringing this up!  I think we usually cut patch releases when
there are enough fixes to justify it, sometimes just a few weeks after the
release.  I guess if we're at three months, Spark 2.1.0 was a pretty good
release :)

That said, it is probably time. I was about to start thinking about 2.2 as
well (we are a little past the posted code-freeze deadline), so I'm happy
to push the buttons, etc. (the process is described well here
<http://spark.apache.org/release-process.html> if you are curious). I would
love help watching JIRA, posting burn-down updates on issues, and shepherding
in any critical patches.  Feel free to ping me offline if you'd like to
coordinate.

Unless there are any objections, how about we aim for an RC of 2.1.1 on
Monday and I'll also plan to cut branch-2.2 then?  (I'll send a separate
email on this as well).

Michael

On Mon, Mar 13, 2017 at 1:40 PM, Holden Karau <hol...@pigscanfly.ca> wrote:

> I'd be happy to do the work of coordinating a 2.1.1 release if that's a
> thing a committer can do (I think the release coordinator for the most
> recent Arrow release was a committer and the final publish step took a PMC
> member to upload but other than that I don't remember any issues).
>
> On Mon, Mar 13, 2017 at 1:05 PM Sean Owen <so...@cloudera.com> wrote:
>
>> It seems reasonable to me, in that other x.y.1 releases have followed ~2
>> months after the x.y.0 release and it's been about 3 months since 2.1.0.
>>
>> Related: creating releases is tough work, so I feel kind of bad voting
>> for someone else to do that much work. Would it make sense to deputize
>> another release manager to help get out just the maintenance releases? This
>> may in turn mean maintenance branches last longer. Experienced hands can
>> continue to manage new minor and major releases as they require more
>> coordination.
>>
>> I know most of the release process is written down; I know it's also
>> still going to be work to make it 100% documented. Eventually it'll be
>> necessary to make sure it's entirely codified anyway.
>>
>> Not pushing for it myself, just noting I had heard this brought up in
>> side conversations before.
>>
>>
>> On Mon, Mar 13, 2017 at 7:07 PM Holden Karau <hol...@pigscanfly.ca>
>> wrote:
>>
>> Hi Spark Devs,
>>
>> Spark 2.1 has been out since end of December
>> <http://apache-spark-developers-list.1001551.n3.nabble.com/ANNOUNCE-Announcing-Apache-Spark-2-1-0-td20390.html>
>> and we've got quite a few fixes merged for 2.1.1
>> <https://issues.apache.org/jira/browse/SPARK-18281?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%202.1.1%20ORDER%20BY%20updated%20DESC%2C%20priority%20DESC%2C%20created%20ASC>
>> .
>>
>> On the Python side, one thing I'd like to see go out in a patch release
>> is a packaging fix (now merged) before we upload to PyPI & Conda. We also
>> have the usual batch of fixes, like toLocalIterator support for large
>> DataFrames in PySpark.
>>
>> I've chatted with Felix & Shivaram, who seem to think the R side is
>> close to being in good shape for a 2.1.1 release to submit to CRAN (if
>> I've misspoken, my apologies). The two outstanding issues being tracked
>> for R are SPARK-18817 and SPARK-19237.
>>
>> Looking at the other components quickly it seems like structured
>> streaming could also benefit from a patch release.
>>
>> What do others think? Are there any issues people are actively targeting
>> for 2.1.1? Is this too early to be considering a patch release?
>>
>> Cheers,
>>
>> Holden
>> --
>> Cell : 425-233-8271
>> Twitter: https://twitter.com/holdenkarau
>>
>> --
> Cell : 425-233-8271 <(425)%20233-8271>
> Twitter: https://twitter.com/holdenkarau
>