I mainly mean:
- [SPARK-35801] Row-level operations in Data Source V2
- [SPARK-37166] Storage Partitioned Join
for which the PRs:
- https://github.com/apache/spark/pull/35395
- https://github.com/apache/spark/pull/35657
are actively being reviewed. There seem to be ongoing PRs for other SPIPs as well, but I'm not involved in those, so I'm not sure whether they are intended for the 3.3 release.

Chao

On Mon, Mar 14, 2022 at 8:53 PM Xiao Li <gatorsm...@gmail.com> wrote:
>
> Could you please list which features we want to finish before the branch cut?
> How long will they take?
>
> Xiao
>
> Chao Sun <sunc...@apache.org> wrote on Mon, Mar 14, 2022 at 13:30:
>>
>> Hi Max,
>>
>> As there is still some ongoing work on the above listed SPIPs, can we
>> still merge them after the branch cut?
>>
>> Thanks,
>> Chao
>>
>> On Mon, Mar 14, 2022 at 6:12 AM Maxim Gekk
>> <maxim.g...@databricks.com.invalid> wrote:
>>>
>>> Hi All,
>>>
>>> Since there are no actual blockers for Spark 3.3.0 and no significant
>>> objections, I am going to cut branch-3.3 after March 15th at 00:00 PST.
>>> Please let us know if you have any concerns about that.
>>>
>>> Best regards,
>>> Max Gekk
>>>
>>>
>>> On Thu, Mar 3, 2022 at 9:44 PM Maxim Gekk <maxim.g...@databricks.com> wrote:
>>>>
>>>> Hello All,
>>>>
>>>> I would like to bring to the table the topic of the new Spark release
>>>> 3.3. According to the public schedule at
>>>> https://spark.apache.org/versioning-policy.html, we planned to start the
>>>> code freeze and release branch cut on March 15th, 2022. Since this date is
>>>> coming soon, I would like to draw your attention to the topic and gather
>>>> any objections that you might have.
>>>>
>>>> Below is the list of ongoing and active SPIPs:
>>>>
>>>> Spark SQL:
>>>> - [SPARK-31357] DataSourceV2: Catalog API for view metadata
>>>> - [SPARK-35801] Row-level operations in Data Source V2
>>>> - [SPARK-37166] Storage Partitioned Join
>>>>
>>>> Spark Core:
>>>> - [SPARK-20624] Add better handling for node shutdown
>>>> - [SPARK-25299] Use remote storage for persisting shuffle data
>>>>
>>>> PySpark:
>>>> - [SPARK-26413] RDD Arrow Support in Spark Core and PySpark
>>>>
>>>> Kubernetes:
>>>> - [SPARK-36057] Support Customized Kubernetes Schedulers
>>>>
>>>> We should probably finish any remaining work for Spark 3.3, switch to
>>>> QA mode, cut a branch, and keep everything on track. I would like to
>>>> volunteer to help drive this process.
>>>>
>>>> Best regards,
>>>> Max Gekk

---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org