Hi all,
I am going to prepare the release of 3.0.1 RC1, with the help of Wenchen.




------------------ Original Message ------------------
From: "Jason Moore" <jason.mo...@quantium.com.au.INVALID>
Sent: Thursday, 30 July 2020, 10:35 AM
To: "dev" <dev@spark.apache.org>
Subject: Re: [DISCUSS] Apache Spark 3.0.1 Release



  
Hi all,
Discussion around 3.0.1 seems to have trickled away. What was blocking
the release process kicking off? I can see some unresolved bugs raised
against 3.0.0, but conversely there were quite a few critical correctness
fixes waiting to be released.

Cheers,

Jason.
  
From: Takeshi Yamamuro <linguin....@gmail.com>
Date: Wednesday, 15 July 2020 at 9:00 am
To: Shivaram Venkataraman <shiva...@eecs.berkeley.edu>
Cc: "dev@spark.apache.org" <dev@spark.apache.org>
Subject: Re: [DISCUSS] Apache Spark 3.0.1 Release

> Just wanted to check if there are any blockers that we are still waiting
> for to start the new release process.

I don't see any on-going blocker in my area.
Thanks for the notification.

Bests,
Takeshi
   
On Wed, Jul 15, 2020 at 4:03 AM Dongjoon Hyun <dongjoon.h...@gmail.com> wrote:

Hi, Yi.

  
Could you explain why you think that is a blocker? For the given example from
the JIRA description,

    spark.udf.register("key", udf((m: Map[String, String]) => m.keys.head.toInt))
    Seq(Map("1" -> "one", "2" -> "two")).toDF("a").createOrReplaceTempView("t")
    checkAnswer(sql("SELECT key(a) AS k FROM t GROUP BY key(a)"), Row(1) :: Nil)
  
Apache Spark 3.0.0 seems to work like the following.

    scala> spark.version
    res0: String = 3.0.0

    scala> spark.udf.register("key", udf((m: Map[String, String]) => m.keys.head.toInt))
    res1: org.apache.spark.sql.expressions.UserDefinedFunction = SparkUserDefinedFunction($Lambda$1958/948653928@5d6bed7b,IntegerType,List(Some(class[value[0]: map<string,string>])),None,false,true)

    scala> Seq(Map("1" -> "one", "2" -> "two")).toDF("a").createOrReplaceTempView("t")

    scala> sql("SELECT key(a) AS k FROM t GROUP BY key(a)").collect
    res3: Array[org.apache.spark.sql.Row] = Array([1])
 
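[Editor's note: for readers without a spark-shell handy, the semantics being checked above (grouping rows by the result of a UDF and projecting that same expression) can be mimicked in plain Python. This is an illustrative sketch only; `key_udf` and `rows` are names invented here, not from the thread or from Spark's API.]

```python
# Mimic: SELECT key(a) AS k FROM t GROUP BY key(a)
# where key(m) returns the first map key as an int.

def key_udf(m):
    # Rough analogue of udf((m: Map[String, String]) => m.keys.head.toInt)
    return int(next(iter(m)))

rows = [{"1": "one", "2": "two"}]  # the single-row table t(a) from the thread

# Group rows by key_udf(a), one output row per group.
groups = {}
for a in rows:
    groups.setdefault(key_udf(a), []).append(a)

result = sorted(groups)  # the projected column k, one value per group
print(result)  # [1], matching Spark's Array([1]) above
```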
  
Could you provide a reproducible example?

Bests,
Dongjoon.

On Tue, Jul 14, 2020 at 10:04 AM Yi Wu <yi...@databricks.com> wrote:

This could probably be a blocker: https://issues.apache.org/jira/browse/SPARK-32307

On Tue, Jul 14, 2020 at 11:13 PM Sean Owen <sro...@gmail.com> wrote:

https://issues.apache.org/jira/browse/SPARK-32234 ?

On Tue, Jul 14, 2020 at 9:57 AM Shivaram Venkataraman
<shiva...@eecs.berkeley.edu> wrote:
>
> Hi all
>
> Just wanted to check if there are any blockers that we are still waiting for to start the new release process.
>
> Thanks
> Shivaram
>

 
  
--
Takeshi Yamamuro