Which new functionalities are you referring to? In Spark SQL, most of the
major features in Spark 3.0, such as adaptive query execution, are
difficult and time-consuming to backport. Releasing a new version is not
hard, but backporting, reviewing, and maintaining these features is very
time-consuming.
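
(For reference, AQE in Spark 3.0 is gated behind runtime SQL configs; a
minimal Scala sketch of what users get there, assuming a local 3.0
session, which is the machinery a 2.x backport would have to replicate:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("aqe-demo")
      .master("local[*]")
      .getOrCreate()

    // AQE is off by default in 3.0; when enabled, it re-optimizes query
    // plans at runtime using shuffle statistics.
    spark.conf.set("spark.sql.adaptive.enabled", "true")
    // e.g. coalesce small shuffle partitions after a stage completes
    spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")

)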

Which old APIs are broken? If the impact is big, we should add them back,
per our earlier discussion:
http://apache-spark-developers-list.1001551.n3.nabble.com/Proposal-Modification-to-Spark-s-Semantic-Versioning-Policy-td28938.html

Thanks,

Xiao


On Fri, Jun 12, 2020 at 2:38 PM Holden Karau <hol...@pigscanfly.ca> wrote:

> Hi Folks,
>
> As we're getting closer to Spark 3, I'd like to revisit a Spark 2.5
> release. Spark 3 brings a number of important changes, and by its nature
> is not backward compatible. I think we'd all like as smooth an upgrade
> experience to Spark 3 as possible, and I believe that a Spark 2 release
> that includes some of the new functionality, while continuing to support
> the older APIs and the current Scala version, would make the upgrade path
> smoother.
>
> This pattern is not uncommon in other Hadoop ecosystem projects, like
> Hadoop itself and HBase.
>
> I know that Ryan Blue has indicated he is already going to be maintaining
> something like that internally at Netflix, and we'll be doing the same
> thing at Apple. It seems like having a transitional release could benefit
> the community by easing migrations and avoiding duplicated work.
>
> I want to be clear that I'm volunteering to do the work of managing a 2.5
> release, so hopefully this wouldn't create any substantial burden on the
> community.
>
> Cheers,
>
> Holden
> --
> Twitter: https://twitter.com/holdenkarau
> Books (Learning Spark, High Performance Spark, etc.):
> https://amzn.to/2MaRAG9
> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
>

