Late +1, thank you, Dongjoon!

> On May 19, 2021, at 10:47 AM, Jungtaek Lim <kabhwan.opensou...@gmail.com> 
> wrote:
> 
> Late +1 here as well, thanks for volunteering!
> 
> On Wed, May 19, 2021 at 11:24 AM, 郑瑞峰 <ruife...@foxmail.com> wrote:
> late +1. thanks Dongjoon!
> 
> 
> ------------------ Original Message ------------------
> From: "Dongjoon Hyun" <dongjoon.h...@gmail.com>;
> Sent: Wednesday, May 19, 2021, 1:29 AM
> To: "Wenchen Fan" <cloud0...@gmail.com>;
> Cc: "Xiao Li" <lix...@databricks.com>; "Kent Yao" <yaooq...@gmail.com>;
> "John Zhuge" <jzh...@apache.org>; "Hyukjin Kwon" <gurwls...@gmail.com>;
> "Holden Karau" <hol...@pigscanfly.ca>; "Takeshi Yamamuro" <linguin....@gmail.com>;
> "dev" <dev@spark.apache.org>; "Yuming Wang" <wgy...@gmail.com>;
> Subject: Re: Apache Spark 3.1.2 Release?
> 
> Thank you all! I'll start to prepare.
> 
> Bests,
> Dongjoon.
> 
> On Tue, May 18, 2021 at 12:53 AM Wenchen Fan <cloud0...@gmail.com> wrote:
> +1, thanks!
> 
> On Tue, May 18, 2021 at 1:37 PM Xiao Li <lix...@databricks.com> wrote:
> +1 Thanks, Dongjoon!
> 
> Xiao
> 
> 
> 
> On Mon, May 17, 2021 at 8:45 PM Kent Yao <yaooq...@gmail.com> wrote:
> +1. thanks Dongjoon
> 
> Kent Yao 
> @ Data Science Center, Hangzhou Research Institute, NetEase Corp.
> a spark enthusiast
> kyuubi <https://github.com/yaooqinn/kyuubi> is a unified multi-tenant JDBC
> interface for large-scale data processing and analytics, built on top of
> Apache Spark <http://spark.apache.org/>.
> spark-authorizer <https://github.com/yaooqinn/spark-authorizer> is a Spark SQL
> extension which provides SQL Standard Authorization for Apache Spark.
> spark-postgres <https://github.com/yaooqinn/spark-postgres> is a library for
> reading data from and transferring data to Postgres / Greenplum with Spark
> SQL and DataFrames, 10~100x faster.
> itatchi <https://github.com/yaooqinn/spark-func-extras> is a library that brings
> useful functions from various modern database management systems to Apache Spark.
> 
> 
> 
> On 05/18/2021 10:57, John Zhuge <jzh...@apache.org> wrote:
> +1, thanks Dongjoon!
> 
> On Mon, May 17, 2021 at 7:50 PM Yuming Wang <wgy...@gmail.com> wrote:
> +1.
> 
> On Tue, May 18, 2021 at 9:06 AM Hyukjin Kwon <gurwls...@gmail.com> wrote:
> +1, thanks for driving this
> 
> On Tue, 18 May 2021, 09:33 Holden Karau <hol...@pigscanfly.ca> wrote:
> +1 and thanks for volunteering to be the RM :)
> 
> On Mon, May 17, 2021 at 4:09 PM Takeshi Yamamuro <linguin....@gmail.com> wrote:
> Thank you, Dongjoon~ sgtm, too.
> 
> On Tue, May 18, 2021 at 7:34 AM Cheng Su <chen...@fb.com.invalid> wrote:
> +1 for a new release, thanks Dongjoon!
> 
> Cheng Su
> 
> On 5/17/21, 2:44 PM, "Liang-Chi Hsieh" <vii...@gmail.com> wrote:
> 
>     +1 sounds good. Thanks Dongjoon for volunteering on this!
> 
> 
>     Liang-Chi
> 
> 
>     Dongjoon Hyun-2 wrote
>     > Hi, All.
>     > 
>     > Since the Apache Spark 3.1.1 tag creation (Feb 21),
>     > 172 new patches, including 9 correctness patches and 4 K8s patches,
>     > have arrived at branch-3.1.
>     > 
>     > Shall we make a new release, Apache Spark 3.1.2, as the second
>     > release in the 3.1 line?
>     > I'd like to volunteer for the release manager for Apache Spark 3.1.2.
>     > I'm thinking about starting the first RC next week.
>     > 
>     > $ git log --oneline v3.1.1..HEAD | wc -l
>     >      172
>     > 
>     > # Known correctness issues
>     > SPARK-34534     New protocol FetchShuffleBlocks in OneForOneBlockFetcher
>     > lead to data loss or correctness
>     > SPARK-34545     PySpark Python UDF return inconsistent results when
>     > applying 2 UDFs with different return type to 2 columns together
>     > SPARK-34681     Full outer shuffled hash join when building left side
>     > produces wrong result
>     > SPARK-34719     fail if the view query has duplicated column names
>     > SPARK-34794     Nested higher-order functions broken in DSL
>     > SPARK-34829     transform_values return identical values when it's used
>     > with udf that returns reference type
>     > SPARK-34833     Apply right-padding correctly for correlated subqueries
>     > SPARK-35381     Fix lambda variable name issues in nested DataFrame
>     > functions in R APIs
>     > SPARK-35382     Fix lambda variable name issues in nested DataFrame
>     > functions in Python APIs
>     > 
>     > # Notable K8s patches since K8s GA
>     > SPARK-34674    Close SparkContext after the Main method has finished
>     > SPARK-34948    Add ownerReference to executor configmap to fix leakages
>     > SPARK-34820    add apt-update before gnupg install
>     > SPARK-34361    In case of downscaling avoid killing of executors already
>     > known by the scheduler backend in the pod allocator
>     > 
>     > Bests,
>     > Dongjoon.
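
The patch count in the proposal comes from a plain `git log` tag-range query. As a minimal, self-contained sketch of that pattern, the following builds a throwaway repository (the path and commit messages are made up for illustration; they are not from the real branch-3.1) and runs the same command shape:

```shell
# Illustration only: create a disposable repo so the tag-range count
# from the proposal can be run end to end.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "Preparing Spark release v3.1.1"
git tag v3.1.1                       # mark the release point
git commit -q --allow-empty -m "[SPARK-00001] hypothetical backport one"
git commit -q --allow-empty -m "[SPARK-00002] hypothetical backport two"
# Same command shape as in the proposal: commits since the v3.1.1 tag.
git log --oneline v3.1.1..HEAD | wc -l   # -> 2 (the two commits after the tag)
```

The `v3.1.1..HEAD` range selects commits reachable from HEAD but not from the tag, so the tagged release commit itself is excluded from the count.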
> 
> 
> 
> 
> 
> 
> 
> 
> 
> -- 
> ---
> Takeshi Yamamuro
> -- 
> Twitter: https://twitter.com/holdenkarau
> Books (Learning Spark, High Performance Spark, etc.): https://amzn.to/2MaRAG9
> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
> 
> -- 
> John Zhuge
> 
> 
