[jira] [Resolved] (SPARK-38595) add support to load json file in case in-sensitive way

2022-03-18 Thread TANG ZHAO (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TANG ZHAO resolved SPARK-38595.
---
Resolution: Duplicate

duplicate to 

> add support to load json file in case in-sensitive way
> --
>
> Key: SPARK-38595
> URL: https://issues.apache.org/jira/browse/SPARK-38595
> Project: Spark
>  Issue Type: New Feature
>  Components: Input/Output, Spark Core, SQL
>Affects Versions: 3.1.0, 3.1.1, 3.1.2, 3.2.0, 3.2.1
>Reporter: TANG ZHAO
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-38601) AttributeError: module 'databricks.koalas' has no attribute 'DateOffset'

2022-03-18 Thread Hyukjin Kwon (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509142#comment-17509142
 ] 

Hyukjin Kwon commented on SPARK-38601:
--

[~prakharsandhu] does it work if you use {{pd.DateOffset}}? cc [~XinrongM] FYI

> AttributeError: module 'databricks.koalas' has no attribute 'DateOffset'
> 
>
> Key: SPARK-38601
> URL: https://issues.apache.org/jira/browse/SPARK-38601
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 3.0.0
>Reporter: Prakhar Sandhu
>Priority: Major
>
> I am working on replacing the pandas library with the Koalas library in my 
> Python repo in VS Code, but the Koalas module does not seem to have a 
> DateOffset() class similar to the one pandas has.
> I tried this:
> {code:python}
> import databricks.koalas as ks 
> kdf["date_col_2"] = kdf["date_col_1"] - ks.DateOffset(months=cycle_info_gap)
>  {code}
> It results in the error below:
> {code:java}
> AttributeError: module 'databricks.koalas' has no attribute 'DateOffset' 
> {code}
> Is there any alternative for this in Koalas?
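A minimal sketch of the suggestion above, using plain pandas (DateOffset is defined in pandas itself and was never re-exported by Koalas; it is shown here on a pandas Series, since whether a Koalas column accepts it directly in arithmetic is the open question of this ticket):

```python
import pandas as pd

# pd.DateOffset comes from pandas; Koalas has no ks.DateOffset equivalent.
s = pd.Series(pd.to_datetime(["2022-03-18", "2022-06-30"]))
shifted = s - pd.DateOffset(months=2)  # calendar-aware month arithmetic
print(shifted.dt.strftime("%Y-%m-%d").tolist())  # ['2022-01-18', '2022-04-30']
```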






[jira] [Updated] (SPARK-38596) add support to load json file in case in-sensitive way

2022-03-18 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon updated SPARK-38596:
-
Target Version/s:   (was: 3.2.1)

> add support to load json file in case in-sensitive way
> --
>
> Key: SPARK-38596
> URL: https://issues.apache.org/jira/browse/SPARK-38596
> Project: Spark
>  Issue Type: New Feature
>  Components: Input/Output, Spark Core, SQL
>Affects Versions: 3.1.0, 3.1.1, 3.1.2, 3.2.0, 3.2.1
>Reporter: TANG ZHAO
>Priority: Major
>







[jira] [Resolved] (SPARK-38596) add support to load json file in case in-sensitive way

2022-03-18 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon resolved SPARK-38596.
--
Resolution: Duplicate

> add support to load json file in case in-sensitive way
> --
>
> Key: SPARK-38596
> URL: https://issues.apache.org/jira/browse/SPARK-38596
> Project: Spark
>  Issue Type: New Feature
>  Components: Input/Output, Spark Core, SQL
>Affects Versions: 3.1.0, 3.1.1, 3.1.2, 3.2.0, 3.2.1
>Reporter: TANG ZHAO
>Priority: Major
>







[jira] [Updated] (SPARK-38595) add support to load json file in case in-sensitive way

2022-03-18 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon updated SPARK-38595:
-
Target Version/s:   (was: 3.2.1)

> add support to load json file in case in-sensitive way
> --
>
> Key: SPARK-38595
> URL: https://issues.apache.org/jira/browse/SPARK-38595
> Project: Spark
>  Issue Type: New Feature
>  Components: Input/Output, Spark Core, SQL
>Affects Versions: 3.1.0, 3.1.1, 3.1.2, 3.2.0, 3.2.1
>Reporter: TANG ZHAO
>Priority: Major
>







[jira] [Resolved] (SPARK-38568) Upgrade ZSTD-JNI to 1.5.2-2

2022-03-18 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon resolved SPARK-38568.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Issue resolved by pull request 35877
[https://github.com/apache/spark/pull/35877]

> Upgrade ZSTD-JNI to 1.5.2-2
> ---
>
> Key: SPARK-38568
> URL: https://issues.apache.org/jira/browse/SPARK-38568
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.4.0
>Reporter: Yuming Wang
>Priority: Major
> Fix For: 3.4.0
>
>







[jira] [Assigned] (SPARK-38568) Upgrade ZSTD-JNI to 1.5.2-2

2022-03-18 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon reassigned SPARK-38568:


Assignee: Yuming Wang

> Upgrade ZSTD-JNI to 1.5.2-2
> ---
>
> Key: SPARK-38568
> URL: https://issues.apache.org/jira/browse/SPARK-38568
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.4.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
> Fix For: 3.4.0
>
>







[jira] [Created] (SPARK-38603) Qualified star selection produces duplicated common columns after join then alias

2022-03-18 Thread Yves Li (Jira)
Yves Li created SPARK-38603:
---

 Summary: Qualified star selection produces duplicated common 
columns after join then alias
 Key: SPARK-38603
 URL: https://issues.apache.org/jira/browse/SPARK-38603
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.2.0
 Environment: OS: Ubuntu 18.04.5 LTS
Scala version: 2.12.15
Reporter: Yves Li


When joining two DataFrames and then aliasing the result, selecting columns 
from the resulting Dataset by a qualified star produces duplicates of the 
joined columns.
{code:scala}
scala> val df1 = Seq((1, 10), (2, 20)).toDF("a", "x")
df1: org.apache.spark.sql.DataFrame = [a: int, x: int]

scala> val df2 = Seq((2, 200), (3, 300)).toDF("a", "y")
df2: org.apache.spark.sql.DataFrame = [a: int, y: int]

scala> val joined = df1.join(df2, "a").alias("joined")
joined: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [a: int, x: 
int ... 1 more field]

scala> joined.select("*").show()
+---+---+---+
|  a|  x|  y|
+---+---+---+
|  2| 20|200|
+---+---+---+

scala> joined.select("joined.*").show()
+---+---+---+---+
|  a|  a|  x|  y|
+---+---+---+---+
|  2|  2| 20|200|
+---+---+---+---+

scala> joined.select("*").select("joined.*").show()
+---+---+---+
|  a|  x|  y|
+---+---+---+
|  2| 20|200|
+---+---+---+ {code}
> This appears to have been introduced by SPARK-34527, leading to some 
> surprising behaviour. An earlier version, such as Spark 3.0.2, produces the 
> same output for all three {{show()}} calls.






[jira] [Commented] (SPARK-38194) Make memory overhead factor configurable

2022-03-18 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509129#comment-17509129
 ] 

Apache Spark commented on SPARK-38194:
--

User 'Kimahriman' has created a pull request for this issue:
https://github.com/apache/spark/pull/35912

> Make memory overhead factor configurable
> 
>
> Key: SPARK-38194
> URL: https://issues.apache.org/jira/browse/SPARK-38194
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes, Mesos, YARN
>Affects Versions: 3.4.0
>Reporter: Adam Binford
>Assignee: Adam Binford
>Priority: Major
> Fix For: 3.4.0
>
>
> Currently, if the memory overhead is not provided for a YARN job, it defaults 
> to 10% of the respective driver/executor memory. This 10% is hard-coded, and 
> the only way to increase the memory overhead is to set an exact absolute value. 
> We have seen more than 10% of memory being used, and it would be helpful to be 
> able to set the default overhead factor so that the overhead doesn't need to 
> be pre-calculated for each driver/executor memory size. 
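The overhead computation being made configurable can be sketched as follows (the 384 MiB floor matches Spark's documented minimum overhead; the factor parameter is the piece this ticket exposes as a setting, and the helper name here is purely illustrative):

```python
def memory_overhead_mib(container_memory_mib: int,
                        overhead_factor: float = 0.1,
                        minimum_mib: int = 384) -> int:
    # Resource managers size the overhead as max(factor * memory, floor).
    # The ticket makes the hard-coded 0.1 factor configurable so large heaps
    # don't require hand-computing an absolute overhead value.
    return max(int(container_memory_mib * overhead_factor), minimum_mib)

print(memory_overhead_mib(8192))        # 819  (default 10% factor)
print(memory_overhead_mib(8192, 0.2))   # 1638 (raised factor)
print(memory_overhead_mib(1024))        # 384  (floor applies)
```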






[jira] [Created] (SPARK-38602) Upgrade Kafka to 3.1.1

2022-03-18 Thread Dongjoon Hyun (Jira)
Dongjoon Hyun created SPARK-38602:
-

 Summary: Upgrade Kafka to 3.1.1
 Key: SPARK-38602
 URL: https://issues.apache.org/jira/browse/SPARK-38602
 Project: Spark
  Issue Type: Bug
  Components: Build
Affects Versions: 3.3.0
Reporter: Dongjoon Hyun









[jira] [Updated] (SPARK-38308) Select of a stream of window expressions fails

2022-03-18 Thread Bruce Robbins (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruce Robbins updated SPARK-38308:
--
Affects Version/s: 3.4.0

> Select of a stream of window expressions fails
> --
>
> Key: SPARK-38308
> URL: https://issues.apache.org/jira/browse/SPARK-38308
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.1.3, 3.2.1, 3.3.0, 3.4.0
>Reporter: Bruce Robbins
>Priority: Major
>
> The following query fails:
> {noformat}
> val df = spark.range(0, 20).map { x =>
>   (x % 4, x + 1, x + 2)
> }.toDF("a", "b", "c")
> import org.apache.spark.sql.expressions._
> val w = Window.partitionBy("a").orderBy("b")
> val selectExprs = Stream(
>   sum("c").over(w.rowsBetween(Window.unboundedPreceding, 
> Window.currentRow)).as("sumc"),
>   avg("c").over(w.rowsBetween(Window.unboundedPreceding, 
> Window.currentRow)).as("avgc")
> )
> df.select(selectExprs: _*).show(false)
> {noformat}
> It fails with the following error:
> {noformat}
> org.apache.spark.sql.AnalysisException: Resolved attribute(s) avgc#23 missing 
> from c#16L,a#14L,b#15L,sumc#21L in operator !Project [c#16L, a#14L, b#15L, 
> sumc#21L, sumc#21L, avgc#23].;
> {noformat}
> If you change the Stream of window expressions to a Vector or List, the query 
> succeeds.






[jira] [Updated] (SPARK-38600) Include unit into the sql string of TIMESTAMPADD/DIFF

2022-03-18 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk updated SPARK-38600:
-
Issue Type: Bug  (was: Improvement)

> Include unit into the sql string of TIMESTAMPADD/DIFF 
> --
>
> Key: SPARK-38600
> URL: https://issues.apache.org/jira/browse/SPARK-38600
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Max Gekk
>Assignee: Max Gekk
>Priority: Major
> Fix For: 3.3.0
>
>
> After https://github.com/apache/spark/pull/35805, the sql method no longer 
> includes the unit. This ticket aims to override the sql method in the 
> TIMESTAMPADD and TIMESTAMPDIFF expressions and prepend the unit.






[jira] [Resolved] (SPARK-38600) Include unit into the sql string of TIMESTAMPADD/DIFF

2022-03-18 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk resolved SPARK-38600.
--
Fix Version/s: 3.3.0
   Resolution: Fixed

Issue resolved by pull request 35911
[https://github.com/apache/spark/pull/35911]

> Include unit into the sql string of TIMESTAMPADD/DIFF 
> --
>
> Key: SPARK-38600
> URL: https://issues.apache.org/jira/browse/SPARK-38600
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Max Gekk
>Assignee: Max Gekk
>Priority: Major
> Fix For: 3.3.0
>
>
> After https://github.com/apache/spark/pull/35805, the sql method no longer 
> includes the unit. This ticket aims to override the sql method in the 
> TIMESTAMPADD and TIMESTAMPDIFF expressions and prepend the unit.






[jira] [Updated] (SPARK-38600) Include unit into the sql string of TIMESTAMPADD/DIFF

2022-03-18 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk updated SPARK-38600:
-
Affects Version/s: 3.3.0

> Include unit into the sql string of TIMESTAMPADD/DIFF 
> --
>
> Key: SPARK-38600
> URL: https://issues.apache.org/jira/browse/SPARK-38600
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Max Gekk
>Assignee: Max Gekk
>Priority: Major
> Fix For: 3.3.0
>
>
> After https://github.com/apache/spark/pull/35805, the sql method no longer 
> includes the unit. This ticket aims to override the sql method in the 
> TIMESTAMPADD and TIMESTAMPDIFF expressions and prepend the unit.






[jira] [Commented] (SPARK-38597) Enable resource limited spark k8s IT in GA

2022-03-18 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508823#comment-17508823
 ] 

Apache Spark commented on SPARK-38597:
--

User 'Yikun' has created a pull request for this issue:
https://github.com/apache/spark/pull/35830

> Enable resource limited spark k8s IT in GA
> --
>
> Key: SPARK-38597
> URL: https://issues.apache.org/jira/browse/SPARK-38597
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes, Project Infra
>Affects Versions: 3.4.0
>Reporter: Yikun Jiang
>Priority: Major
>







[jira] [Assigned] (SPARK-38597) Enable resource limited spark k8s IT in GA

2022-03-18 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-38597:


Assignee: Apache Spark

> Enable resource limited spark k8s IT in GA
> --
>
> Key: SPARK-38597
> URL: https://issues.apache.org/jira/browse/SPARK-38597
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes, Project Infra
>Affects Versions: 3.4.0
>Reporter: Yikun Jiang
>Assignee: Apache Spark
>Priority: Major
>







[jira] [Assigned] (SPARK-38597) Enable resource limited spark k8s IT in GA

2022-03-18 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-38597:


Assignee: (was: Apache Spark)

> Enable resource limited spark k8s IT in GA
> --
>
> Key: SPARK-38597
> URL: https://issues.apache.org/jira/browse/SPARK-38597
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes, Project Infra
>Affects Versions: 3.4.0
>Reporter: Yikun Jiang
>Priority: Major
>







[jira] [Commented] (SPARK-38597) Enable resource limited spark k8s IT in GA

2022-03-18 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508824#comment-17508824
 ] 

Apache Spark commented on SPARK-38597:
--

User 'Yikun' has created a pull request for this issue:
https://github.com/apache/spark/pull/35830

> Enable resource limited spark k8s IT in GA
> --
>
> Key: SPARK-38597
> URL: https://issues.apache.org/jira/browse/SPARK-38597
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes, Project Infra
>Affects Versions: 3.4.0
>Reporter: Yikun Jiang
>Priority: Major
>







[jira] [Created] (SPARK-38601) AttributeError: module 'databricks.koalas' has no attribute 'DateOffset'

2022-03-18 Thread Prakhar Sandhu (Jira)
Prakhar Sandhu created SPARK-38601:
--

 Summary: AttributeError: module 'databricks.koalas' has no 
attribute 'DateOffset'
 Key: SPARK-38601
 URL: https://issues.apache.org/jira/browse/SPARK-38601
 Project: Spark
  Issue Type: Bug
  Components: PySpark
Affects Versions: 3.0.0
Reporter: Prakhar Sandhu


I am working on replacing the pandas library with the Koalas library in my 
Python repo in VS Code, but the Koalas module does not seem to have a 
DateOffset() class similar to the one pandas has.

I tried this:
{code:python}
import databricks.koalas as ks 
kdf["date_col_2"] = kdf["date_col_1"] - ks.DateOffset(months=cycle_info_gap)
 {code}
It results in the error below:
{code:java}
AttributeError: module 'databricks.koalas' has no attribute 'DateOffset' {code}
Is there any alternative for this in Koalas?






[jira] [Commented] (SPARK-38600) Include unit into the sql string of TIMESTAMPADD/DIFF

2022-03-18 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508744#comment-17508744
 ] 

Apache Spark commented on SPARK-38600:
--

User 'MaxGekk' has created a pull request for this issue:
https://github.com/apache/spark/pull/35911

> Include unit into the sql string of TIMESTAMPADD/DIFF 
> --
>
> Key: SPARK-38600
> URL: https://issues.apache.org/jira/browse/SPARK-38600
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Max Gekk
>Assignee: Max Gekk
>Priority: Major
>
> After https://github.com/apache/spark/pull/35805, the sql method no longer 
> includes the unit. This ticket aims to override the sql method in the 
> TIMESTAMPADD and TIMESTAMPDIFF expressions and prepend the unit.






[jira] [Assigned] (SPARK-38600) Include unit into the sql string of TIMESTAMPADD/DIFF

2022-03-18 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-38600:


Assignee: Max Gekk  (was: Apache Spark)

> Include unit into the sql string of TIMESTAMPADD/DIFF 
> --
>
> Key: SPARK-38600
> URL: https://issues.apache.org/jira/browse/SPARK-38600
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Max Gekk
>Assignee: Max Gekk
>Priority: Major
>
> After https://github.com/apache/spark/pull/35805, the sql method no longer 
> includes the unit. This ticket aims to override the sql method in the 
> TIMESTAMPADD and TIMESTAMPDIFF expressions and prepend the unit.






[jira] [Assigned] (SPARK-38600) Include unit into the sql string of TIMESTAMPADD/DIFF

2022-03-18 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-38600:


Assignee: Apache Spark  (was: Max Gekk)

> Include unit into the sql string of TIMESTAMPADD/DIFF 
> --
>
> Key: SPARK-38600
> URL: https://issues.apache.org/jira/browse/SPARK-38600
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Max Gekk
>Assignee: Apache Spark
>Priority: Major
>
> After https://github.com/apache/spark/pull/35805, the sql method no longer 
> includes the unit. This ticket aims to override the sql method in the 
> TIMESTAMPADD and TIMESTAMPDIFF expressions and prepend the unit.






[jira] [Created] (SPARK-38600) Include unit into the sql string of TIMESTAMPADD/DIFF

2022-03-18 Thread Max Gekk (Jira)
Max Gekk created SPARK-38600:


 Summary: Include unit into the sql string of TIMESTAMPADD/DIFF 
 Key: SPARK-38600
 URL: https://issues.apache.org/jira/browse/SPARK-38600
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.4.0
Reporter: Max Gekk
Assignee: Max Gekk


After https://github.com/apache/spark/pull/35805, the sql method no longer 
includes the unit. This ticket aims to override the sql method in the 
TIMESTAMPADD and TIMESTAMPDIFF expressions and prepend the unit.
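The intended shape of the regenerated SQL can be illustrated with a tiny sketch (illustrative only; the actual fix overrides the Scala sql method on the expressions, and this helper name is hypothetical):

```python
def timestampadd_sql(unit: str, quantity_sql: str, ts_sql: str) -> str:
    # Illustrative helper (not Spark code): the fix prepends the unit so the
    # generated SQL round-trips as TIMESTAMPADD(HOUR, 1, ts) rather than
    # dropping the HOUR argument.
    return f"TIMESTAMPADD({unit}, {quantity_sql}, {ts_sql})"

print(timestampadd_sql("HOUR", "1", "ts"))  # TIMESTAMPADD(HOUR, 1, ts)
```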






[jira] [Updated] (SPARK-38599) support load json file in case-insensitive way

2022-03-18 Thread TANG ZHAO (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TANG ZHAO updated SPARK-38599:
--
Affects Version/s: (was: 3.0.0)
   (was: 3.1.0)
   (was: 3.0.1)
   (was: 3.0.2)
   (was: 3.2.0)
   (was: 3.0.3)
   (was: 3.2.1)

> support load json file in case-insensitive way
> --
>
> Key: SPARK-38599
> URL: https://issues.apache.org/jira/browse/SPARK-38599
> Project: Spark
>  Issue Type: New Feature
>  Components: Input/Output, SQL
>Affects Versions: 3.1.1
>Reporter: TANG ZHAO
>Priority: Major
>
> The task is to load JSON files into a DataFrame.
>  
> Currently we use this method:
> // textfile is an RDD[String], read from JSON files
> val table = spark.table(hiveTableName)
> val hiveSchema = table.schema
> var df = spark.read.option("mode", 
> "DROPMALFORMED").schema(hiveSchema).json(textfile)
>  
> The problem is that the fields in hiveSchema are all lower-case, whereas 
> the fields in the JSON string are upper-case.
> For example:
> hive schema:
> (id  bigint,  name string)
>  
> json string:
> {"Id":123, "Name":"Tom"}
>  
> In this case, the JSON string will not be loaded into the DataFrame.
> I have to use the schema of the Hive table; that is a pre-condition imposed 
> by a business requirement.
> Currently I have to transform the keys in the JSON string to lower case, like 
> \{"id":123, "name":"Tom"}
>  
> But I was wondering whether there is any better solution for this issue.
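Until case-insensitive field matching is supported by the reader, the key-lowering workaround described above can be sketched in plain Python (hypothetical helper name; in the Scala pipeline this would be a map over the RDD[String] before spark.read.json):

```python
import json

def lowercase_top_level_keys(line: str) -> str:
    # Workaround sketch: rewrite each JSON record's top-level keys to lower
    # case so they match the lower-cased Hive schema. Nested keys untouched.
    record = json.loads(line)
    return json.dumps({k.lower(): v for k, v in record.items()})

print(lowercase_top_level_keys('{"Id":123, "Name":"Tom"}'))
# {"id": 123, "name": "Tom"}
```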






[jira] [Commented] (SPARK-38599) support load json file in case-insensitive way

2022-03-18 Thread TANG ZHAO (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508719#comment-17508719
 ] 

TANG ZHAO commented on SPARK-38599:
---

I'd like to contribute to this issue.

> support load json file in case-insensitive way
> --
>
> Key: SPARK-38599
> URL: https://issues.apache.org/jira/browse/SPARK-38599
> Project: Spark
>  Issue Type: New Feature
>  Components: Input/Output, SQL
>Affects Versions: 3.1.1
>Reporter: TANG ZHAO
>Priority: Major
>
> The task is to load JSON files into a DataFrame.
>  
> Currently we use this method:
> // textfile is an RDD[String], read from JSON files
> val table = spark.table(hiveTableName)
> val hiveSchema = table.schema
> var df = spark.read.option("mode", 
> "DROPMALFORMED").schema(hiveSchema).json(textfile)
>  
> The problem is that the fields in hiveSchema are all lower-case, whereas 
> the fields in the JSON string are upper-case.
> For example:
> hive schema:
> (id  bigint,  name string)
>  
> json string:
> {"Id":123, "Name":"Tom"}
>  
> In this case, the JSON string will not be loaded into the DataFrame.
> I have to use the schema of the Hive table; that is a pre-condition imposed 
> by a business requirement.
> Currently I have to transform the keys in the JSON string to lower case, like 
> \{"id":123, "name":"Tom"}
>  
> But I was wondering whether there is any better solution for this issue.






[jira] [Resolved] (SPARK-38598) add support to load json file in case in-sensitive way

2022-03-18 Thread TANG ZHAO (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TANG ZHAO resolved SPARK-38598.
---
Resolution: Duplicate

> add support to load json file in case in-sensitive way
> --
>
> Key: SPARK-38598
> URL: https://issues.apache.org/jira/browse/SPARK-38598
> Project: Spark
>  Issue Type: New Feature
>  Components: Input/Output, Spark Core, SQL
>Affects Versions: 3.1.0, 3.1.1, 3.1.2, 3.2.0, 3.2.1
>Reporter: TANG ZHAO
>Priority: Major
>







[jira] [Updated] (SPARK-38599) support load json file in case-insensitive way

2022-03-18 Thread TANG ZHAO (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TANG ZHAO updated SPARK-38599:
--
Description: 
The task is to load JSON files into a DataFrame.

 

Currently we use this method:

// textfile is rdd[string], read from json files

val table = spark.table(hiveTableName)
val hiveSchema = table.schema
var df = spark.read.option("mode", 
"DROPMALFORMED").schema(hiveSchema).json(textfile)

 

The problem is that the fields in hiveSchema are all lower-case, whereas the 
fields in the JSON string are upper-case.

For example:

hive schema:

(id  bigint,  name string)

 

json string

{"Id":123, "Name":"Tom"}

 

In this case, the JSON string will not be loaded into the DataFrame.

I have to use the schema of the Hive table; that is a pre-condition imposed by a 
business requirement.

Currently I have to transform the keys in the JSON string to lower case, like 
\{"id":123, "name":"Tom"}

But I was wondering whether there is any better solution for this issue.

  was:
The task is to load JSON files into a Hive table.

 

Currently we use this method:

// textfile is rdd[string], read from json files

val table = spark.table(hiveTableName)
val hiveSchema = table.schema
var df = spark.read.option("mode", 
"DROPMALFORMED").schema(hiveSchema).json(textfile)

 

The problem is that the fields in hiveSchema are all lower-case, whereas the 
fields in the JSON string are upper-case.

For example:

 


> support load json file in case-insensitive way
> --
>
> Key: SPARK-38599
> URL: https://issues.apache.org/jira/browse/SPARK-38599
> Project: Spark
>  Issue Type: New Feature
>  Components: Input/Output, SQL
>Affects Versions: 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.1.0, 3.1.1, 3.2.0, 3.2.1
>Reporter: TANG ZHAO
>Priority: Major
>
> The task is to load JSON files into a DataFrame.
>  
> Currently we use this method:
> // textfile is an RDD[String], read from JSON files
> val table = spark.table(hiveTableName)
> val hiveSchema = table.schema
> var df = spark.read.option("mode", 
> "DROPMALFORMED").schema(hiveSchema).json(textfile)
>  
> The problem is that the fields in hiveSchema are all lower-case, whereas 
> the fields in the JSON string are upper-case.
> For example:
> hive schema:
> (id  bigint,  name string)
>  
> json string:
> {"Id":123, "Name":"Tom"}
>  
> In this case, the JSON string will not be loaded into the DataFrame.
> I have to use the schema of the Hive table; that is a pre-condition imposed 
> by a business requirement.
> Currently I have to transform the keys in the JSON string to lower case, like 
> \{"id":123, "name":"Tom"}
>  
> But I was wondering whether there is any better solution for this issue.






[jira] [Updated] (SPARK-38599) support load json file in case-insensitive way

2022-03-18 Thread TANG ZHAO (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TANG ZHAO updated SPARK-38599:
--
Description: 
The task is to load JSON files into a Hive table.

 

Currently we use this method:

// textfile is rdd[string], read from json files

val table = spark.table(hiveTableName)
val hiveSchema = table.schema
var df = spark.read.option("mode", 
"DROPMALFORMED").schema(hiveSchema).json(textfile)

 

The problem is that the fields in hiveSchema are all lower-case, whereas the 
fields in the JSON string are upper-case.

For example:

 

> support load json file in case-insensitive way
> --
>
> Key: SPARK-38599
> URL: https://issues.apache.org/jira/browse/SPARK-38599
> Project: Spark
>  Issue Type: New Feature
>  Components: Input/Output, SQL
>Affects Versions: 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.1.0, 3.1.1, 3.2.0, 3.2.1
>Reporter: TANG ZHAO
>Priority: Major
>
> The task is to load JSON files into a Hive table.
>  
> Currently we use this method:
> // textfile is an RDD[String], read from JSON files
> val table = spark.table(hiveTableName)
> val hiveSchema = table.schema
> var df = spark.read.option("mode", 
> "DROPMALFORMED").schema(hiveSchema).json(textfile)
>  
> The problem is that the fields in hiveSchema are all lower-case, whereas 
> the fields in the JSON string are upper-case.
> For example:
>  
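
The workaround the reporter describes elsewhere in this thread (lower-casing the keys of each JSON record before applying the lower-case Hive schema) can be sketched as follows. This is an illustrative, self-contained sketch in plain Python rather than the reporter's Spark/Scala pipeline; the helper name `lower_keys` is ours, not part of any Spark API:

```python
import json

def lower_keys(obj):
    """Recursively lower-case every dict key in a parsed JSON value."""
    if isinstance(obj, dict):
        return {k.lower(): lower_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [lower_keys(v) for v in obj]
    return obj  # scalars (str, int, bool, None) pass through unchanged

# One raw JSON line, as it would appear in the RDD[String]
line = '{"Id": 123, "Name": "Tom"}'
normalized = json.dumps(lower_keys(json.loads(line)))
print(normalized)  # {"id": 123, "name": "Tom"}
```

In the Spark pipeline this transformation would be applied per record (e.g. via a `map` over the RDD) before handing the strings to `spark.read.schema(hiveSchema).json(...)`, so that the keys match the lower-case Hive field names. Note that values are left untouched; only keys are normalized.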






[jira] [Created] (SPARK-38599) support load json file in case-insensitive way

2022-03-18 Thread TANG ZHAO (Jira)
TANG ZHAO created SPARK-38599:
-

 Summary: support load json file in case-insensitive way
 Key: SPARK-38599
 URL: https://issues.apache.org/jira/browse/SPARK-38599
 Project: Spark
  Issue Type: New Feature
  Components: Input/Output, SQL
Affects Versions: 3.2.1, 3.2.0, 3.1.1, 3.1.0, 3.0.3, 3.0.2, 3.0.1, 3.0.0
Reporter: TANG ZHAO









[jira] [Created] (SPARK-38598) add support to load json file in case in-sensitive way

2022-03-18 Thread TANG ZHAO (Jira)
TANG ZHAO created SPARK-38598:
-

 Summary: add support to load json file in case in-sensitive way
 Key: SPARK-38598
 URL: https://issues.apache.org/jira/browse/SPARK-38598
 Project: Spark
  Issue Type: New Feature
  Components: Input/Output, Spark Core, SQL
Affects Versions: 3.2.1, 3.2.0, 3.1.2, 3.1.1, 3.1.0
Reporter: TANG ZHAO









[jira] [Created] (SPARK-38597) Enable resource limited spark k8s IT in GA

2022-03-18 Thread Yikun Jiang (Jira)
Yikun Jiang created SPARK-38597:
---

 Summary: Enable resource limited spark k8s IT in GA
 Key: SPARK-38597
 URL: https://issues.apache.org/jira/browse/SPARK-38597
 Project: Spark
  Issue Type: Bug
  Components: Kubernetes, Project Infra
Affects Versions: 3.4.0
Reporter: Yikun Jiang









[jira] [Updated] (SPARK-38593) Incorporate numRowsDroppedByWatermark metric from SessionWindowStateStoreRestoreExec into StateOperatorProgress

2022-03-18 Thread Jungtaek Lim (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim updated SPARK-38593:
-
Fix Version/s: 3.3.0

> Incorporate numRowsDroppedByWatermark metric from 
> SessionWindowStateStoreRestoreExec into StateOperatorProgress
> ---
>
> Key: SPARK-38593
> URL: https://issues.apache.org/jira/browse/SPARK-38593
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 3.3.0
>Reporter: L. C. Hsieh
>Assignee: Apache Spark
>Priority: Major
> Fix For: 3.3.0, 3.4.0
>
>
> Although we added `numRowsDroppedByWatermark` to 
> `SessionWindowStateStoreRestoreExec`, currently only `StateStoreWriter` 
> metrics are collected into `StateOperatorProgress`. So if we want 
> `numRowsDroppedByWatermark` from `SessionWindowStateStoreRestoreExec` to be 
> usable from a streaming listener, we need to incorporate 
> `SessionWindowStateStoreRestoreExec` into `StateOperatorProgress`.






[jira] [Assigned] (SPARK-38593) Incorporate numRowsDroppedByWatermark metric from SessionWindowStateStoreRestoreExec into StateOperatorProgress

2022-03-18 Thread Jungtaek Lim (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim reassigned SPARK-38593:


Assignee: Jungtaek Lim  (was: Apache Spark)

> Incorporate numRowsDroppedByWatermark metric from 
> SessionWindowStateStoreRestoreExec into StateOperatorProgress
> ---
>
> Key: SPARK-38593
> URL: https://issues.apache.org/jira/browse/SPARK-38593
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 3.3.0
>Reporter: L. C. Hsieh
>Assignee: Jungtaek Lim
>Priority: Major
> Fix For: 3.3.0, 3.4.0
>
>
> Although we added `numRowsDroppedByWatermark` to 
> `SessionWindowStateStoreRestoreExec`, currently only `StateStoreWriter` 
> metrics are collected into `StateOperatorProgress`. So if we want 
> `numRowsDroppedByWatermark` from `SessionWindowStateStoreRestoreExec` to be 
> usable from a streaming listener, we need to incorporate 
> `SessionWindowStateStoreRestoreExec` into `StateOperatorProgress`.






[jira] [Resolved] (SPARK-38593) Incorporate numRowsDroppedByWatermark metric from SessionWindowStateStoreRestoreExec into StateOperatorProgress

2022-03-18 Thread Jungtaek Lim (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim resolved SPARK-38593.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Issue resolved by pull request 35909
[https://github.com/apache/spark/pull/35909]

> Incorporate numRowsDroppedByWatermark metric from 
> SessionWindowStateStoreRestoreExec into StateOperatorProgress
> ---
>
> Key: SPARK-38593
> URL: https://issues.apache.org/jira/browse/SPARK-38593
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 3.3.0
>Reporter: L. C. Hsieh
>Assignee: Apache Spark
>Priority: Major
> Fix For: 3.4.0
>
>
> Although we added `numRowsDroppedByWatermark` to 
> `SessionWindowStateStoreRestoreExec`, currently only `StateStoreWriter` 
> metrics are collected into `StateOperatorProgress`. So if we want 
> `numRowsDroppedByWatermark` from `SessionWindowStateStoreRestoreExec` to be 
> usable from a streaming listener, we need to incorporate 
> `SessionWindowStateStoreRestoreExec` into `StateOperatorProgress`.






[jira] [Created] (SPARK-38596) add support to load json file in case in-sensitive way

2022-03-18 Thread TANG ZHAO (Jira)
TANG ZHAO created SPARK-38596:
-

 Summary: add support to load json file in case in-sensitive way
 Key: SPARK-38596
 URL: https://issues.apache.org/jira/browse/SPARK-38596
 Project: Spark
  Issue Type: New Feature
  Components: Input/Output, Spark Core, SQL
Affects Versions: 3.2.1, 3.2.0, 3.1.2, 3.1.1, 3.1.0
Reporter: TANG ZHAO









[jira] [Created] (SPARK-38595) add support to load json file in case in-sensitive way

2022-03-18 Thread TANG ZHAO (Jira)
TANG ZHAO created SPARK-38595:
-

 Summary: add support to load json file in case in-sensitive way
 Key: SPARK-38595
 URL: https://issues.apache.org/jira/browse/SPARK-38595
 Project: Spark
  Issue Type: New Feature
  Components: Input/Output, Spark Core, SQL
Affects Versions: 3.2.1, 3.2.0, 3.1.2, 3.1.1, 3.1.0
Reporter: TANG ZHAO









[jira] [Updated] (SPARK-38585) Simplify the code of TreeNode.clone()

2022-03-18 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-38585:
-
Description: 
SPARK-28057 adds {{forceCopy}} to private {{mapChildren}} method in 
{{TreeNode}} to realize the {{clone()}} method.

After SPARK-34989, the call corresponding to {{forceCopy=false}} is changed to 
use {{{}withNewChildren{}}}, and {{forceCopy}} always true and the private 
{{mapChildren}} only used by {{clone()}} method.

  was:SPARK-28057 adds `forceCopy` arg to `mapChildren` method in `TreeNode` to 
realize the object clone(), and after SPARK-34989, the call corresponding to 
`forceCopy=false` is changed to use `withNewChildren`, and `forceCopy`  always 
true.


> Simplify the code of TreeNode.clone()
> -
>
> Key: SPARK-38585
> URL: https://issues.apache.org/jira/browse/SPARK-38585
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Minor
>
> SPARK-28057 adds a {{forceCopy}} argument to the private {{mapChildren}} 
> method in {{TreeNode}} to implement the {{clone()}} method.
> After SPARK-34989, the call corresponding to {{forceCopy=false}} was changed 
> to use {{withNewChildren}}; {{forceCopy}} is now always true, and the private 
> {{mapChildren}} is used only by the {{clone()}} method.






[jira] [Updated] (SPARK-38585) Simplify the code of TreeNode.clone()

2022-03-18 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-38585:
-
Summary: Simplify the code of TreeNode.clone()  (was:  Remove the constant 
arg `forceCopy` for the private method `mapChildren` in `TreeNode`)

> Simplify the code of TreeNode.clone()
> -
>
> Key: SPARK-38585
> URL: https://issues.apache.org/jira/browse/SPARK-38585
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Minor
>
> SPARK-28057 adds a `forceCopy` arg to the `mapChildren` method in `TreeNode` 
> to implement object clone(); after SPARK-34989, the call corresponding to 
> `forceCopy=false` was changed to use `withNewChildren`, and `forceCopy` is 
> always true.






[jira] [Assigned] (SPARK-38594) Change to use `NettyUtils` to create `EventLoop` and `ChannelClass` in RBackend

2022-03-18 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-38594:


Assignee: (was: Apache Spark)

> Change to use `NettyUtils` to create `EventLoop` and `ChannelClass` in 
> RBackend
> ---
>
> Key: SPARK-38594
> URL: https://issues.apache.org/jira/browse/SPARK-38594
> Project: Spark
>  Issue Type: Improvement
>  Components: R
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Minor
>







[jira] [Commented] (SPARK-38594) Change to use `NettyUtils` to create `EventLoop` and `ChannelClass` in RBackend

2022-03-18 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508642#comment-17508642
 ] 

Apache Spark commented on SPARK-38594:
--

User 'LuciferYang' has created a pull request for this issue:
https://github.com/apache/spark/pull/35910

> Change to use `NettyUtils` to create `EventLoop` and `ChannelClass` in 
> RBackend
> ---
>
> Key: SPARK-38594
> URL: https://issues.apache.org/jira/browse/SPARK-38594
> Project: Spark
>  Issue Type: Improvement
>  Components: R
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Minor
>







[jira] [Assigned] (SPARK-38594) Change to use `NettyUtils` to create `EventLoop` and `ChannelClass` in RBackend

2022-03-18 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-38594:


Assignee: Apache Spark

> Change to use `NettyUtils` to create `EventLoop` and `ChannelClass` in 
> RBackend
> ---
>
> Key: SPARK-38594
> URL: https://issues.apache.org/jira/browse/SPARK-38594
> Project: Spark
>  Issue Type: Improvement
>  Components: R
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Assignee: Apache Spark
>Priority: Minor
>







[jira] [Created] (SPARK-38594) Change to use `NettyUtils` to create `EventLoop` and `ChannelClass` in RBackend

2022-03-18 Thread Yang Jie (Jira)
Yang Jie created SPARK-38594:


 Summary: Change to use `NettyUtils` to create `EventLoop` and 
`ChannelClass` in RBackend
 Key: SPARK-38594
 URL: https://issues.apache.org/jira/browse/SPARK-38594
 Project: Spark
  Issue Type: Improvement
  Components: R
Affects Versions: 3.4.0
Reporter: Yang Jie









[jira] [Resolved] (SPARK-38583) to_timestamp should allow numeric types

2022-03-18 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon resolved SPARK-38583.
--
Fix Version/s: 3.3.0
   Resolution: Fixed

Issue resolved by pull request 35887
[https://github.com/apache/spark/pull/35887]

> to_timestamp should allow numeric types
> ---
>
> Key: SPARK-38583
> URL: https://issues.apache.org/jira/browse/SPARK-38583
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: Hyukjin Kwon
>Assignee: Hyukjin Kwon
>Priority: Major
> Fix For: 3.3.0
>
>
> SPARK-38240 mistakenly disallowed numeric types in to_timestamp. We should 
> allow them again:
> {code}
> spark.range(1).selectExpr("to_timestamp(id)").show()
> {code}
> *Before*
> {code}
> +---+
> |   to_timestamp(id)|
> +---+
> |1970-01-01 09:00:00|
> +---+
> {code}
> *After*
> {code}
> +-+
> | to_timestamp(id)|
> +-+
> | null|
> +-+
> {code}






[jira] [Assigned] (SPARK-38583) to_timestamp should allow numeric types

2022-03-18 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon reassigned SPARK-38583:


Assignee: Hyukjin Kwon

> to_timestamp should allow numeric types
> ---
>
> Key: SPARK-38583
> URL: https://issues.apache.org/jira/browse/SPARK-38583
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: Hyukjin Kwon
>Assignee: Hyukjin Kwon
>Priority: Major
>
> SPARK-38240 mistakenly disallowed numeric types in to_timestamp. We should 
> allow them again:
> {code}
> spark.range(1).selectExpr("to_timestamp(id)").show()
> {code}
> *Before*
> {code}
> +---+
> |   to_timestamp(id)|
> +---+
> |1970-01-01 09:00:00|
> +---+
> {code}
> *After*
> {code}
> +-+
> | to_timestamp(id)|
> +-+
> | null|
> +-+
> {code}


