[jira] [Resolved] (SPARK-39724) Remove duplicate `.setAccessible(true)` in `kvstore.KVTypeInfo`

2022-07-09 Thread Huaxin Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-39724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huaxin Gao resolved SPARK-39724.

Fix Version/s: 3.4.0
 Assignee: Yang Jie
   Resolution: Fixed

> Remove duplicate `.setAccessible(true)` in `kvstore.KVTypeInfo`
> ----------------------------------------------------------------
>
> Key: SPARK-39724
> URL: https://issues.apache.org/jira/browse/SPARK-39724
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Minor
> Fix For: 3.4.0
>
>
> {code:java}
>     for (Method m : type.getDeclaredMethods()) {
>       KVIndex idx = m.getAnnotation(KVIndex.class);
>       if (idx != null) {
>         checkIndex(idx, indices);
>         Preconditions.checkArgument(m.getParameterTypes().length == 0,
>           "Annotated method %s::%s should not have any parameters.",
>           type.getName(), m.getName());
>         m.setAccessible(true);
>         indices.put(idx.value(), idx);
>         m.setAccessible(true);  // duplicate of the call above
>         accessors.put(idx.value(), new MethodAccessor(m));
>       }
>     } {code}
> The above code calls `.setAccessible(true)` twice on the same method; the second call is redundant.
>  
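
For reference, a minimal sketch of the cleanup (which of the two calls is dropped makes no behavioral difference; a single call before the accessor is registered suffices):

{code:java}
// One setAccessible(true) is enough before the method is wrapped in an
// accessor; the duplicate call can simply be dropped.
m.setAccessible(true);
indices.put(idx.value(), idx);
accessors.put(idx.value(), new MethodAccessor(m));
{code}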






[jira] [Resolved] (SPARK-38714) Interval multiplication error

2022-07-09 Thread Pablo Langa Blanco (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pablo Langa Blanco resolved SPARK-38714.

Resolution: Resolved

> Interval multiplication error
> -----------------------------
>
> Key: SPARK-38714
> URL: https://issues.apache.org/jira/browse/SPARK-38714
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.3.0
> Environment: branch-3.3,  Java 8
>  
>Reporter: chong
>Priority: Major
>
> Codegen fails with an error when multiplying an interval by a decimal.
>  
> $SPARK_HOME/bin/spark-shell
>  
> import org.apache.spark.sql.Row
> import java.time.Duration
> import java.time.Period
> import org.apache.spark.sql.types._
> val data = Seq(Row(new java.math.BigDecimal("123456789.11")))
> val schema = StructType(Seq(
>   StructField("c1", DecimalType(9, 2))
> ))
> val df = spark.createDataFrame(spark.sparkContext.parallelize(data), schema)
> df.selectExpr("interval '100' second * c1").show(false)
> The errors are:
> *java.lang.AssertionError: assertion failed:*
> Decimal$DecimalIsFractional
> while compiling: 
> during phase: globalPhase=terminal, enteringPhase=jvm
> library version: version 2.12.15
> compiler version: version 2.12.15
> reconstructed args: -classpath -Yrepl-class-based -Yrepl-outdir 
> /tmp/spark-83a0cda4-dd0b-472e-ad8b-a4b33b85f613/repl-06489815-5366-4aa0-9419-f01abda8d041
> last tree to typer: TypeTree(class Byte)
> tree position: line 6 of 
> tree tpe: Byte
> symbol: (final abstract) class Byte in package scala
> symbol definition: final abstract class Byte extends (a ClassSymbol)
> symbol package: scala
> symbol owners: class Byte
> call site: constructor $eval in object $eval in package $line21
> == Source file context for tree position ==
> 3
> 4 object $eval {
> 5 lazy val $result = 
> $line21.$read.INSTANCE.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.res0
> 6 lazy val $print: _root_.java.lang.String = {
> 7 $line21.$read.INSTANCE.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw
> 8
> 9 ""
> at 
> scala.reflect.internal.SymbolTable.throwAssertionError(SymbolTable.scala:185)
> at scala.reflect.internal.Symbols$Symbol.completeInfo(Symbols.scala:1525)
> at scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1514)
> at scala.reflect.internal.Symbols$Symbol.flatOwnerInfo(Symbols.scala:2353)
> at 
> scala.reflect.internal.Symbols$ClassSymbol.companionModule0(Symbols.scala:3346)
> at 
> scala.reflect.internal.Symbols$ClassSymbol.companionModule(Symbols.scala:3348)
> at 
> scala.reflect.internal.Symbols$ModuleClassSymbol.sourceModule(Symbols.scala:3487)
> at 
> scala.reflect.internal.Symbols.$anonfun$forEachRelevantSymbols$1$adapted(Symbols.scala:3802)
> at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
> at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
> at scala.reflect.internal.Symbols.markFlagsCompleted(Symbols.scala:3799)
> at scala.reflect.internal.Symbols.markFlagsCompleted$(Symbols.scala:3805)
> at scala.reflect.internal.SymbolTable.markFlagsCompleted(SymbolTable.scala:28)
> at 
> scala.reflect.internal.pickling.UnPickler$Scan.finishSym$1(UnPickler.scala:324)
> at 
> scala.reflect.internal.pickling.UnPickler$Scan.readSymbol(UnPickler.scala:342)
> at 
> scala.reflect.internal.pickling.UnPickler$Scan.readSymbolRef(UnPickler.scala:645)
> at 
> scala.reflect.internal.pickling.UnPickler$Scan.readType(UnPickler.scala:413)
> at 
> scala.reflect.internal.pickling.UnPickler$Scan.$anonfun$readSymbol$10(UnPickler.scala:357)
> at scala.reflect.internal.pickling.UnPickler$Scan.at(UnPickler.scala:188)
> at 
> scala.reflect.internal.pickling.UnPickler$Scan.readSymbol(UnPickler.scala:357)
> at 
> scala.reflect.internal.pickling.UnPickler$Scan.$anonfun$run$1(UnPickler.scala:96)
> at scala.reflect.internal.pickling.UnPickler$Scan.run(UnPickler.scala:88)
> at scala.reflect.internal.pickling.UnPickler.unpickle(UnPickler.scala:47)
> at 
> scala.tools.nsc.symtab.classfile.ClassfileParser.unpickleOrParseInnerClasses(ClassfileParser.scala:1186)
> at 
> scala.tools.nsc.symtab.classfile.ClassfileParser.parseClass(ClassfileParser.scala:468)
> at 
> scala.tools.nsc.symtab.classfile.ClassfileParser.$anonfun$parse$2(ClassfileParser.scala:161)
> at 
> scala.tools.nsc.symtab.classfile.ClassfileParser.$anonfun$parse$1(ClassfileParser.scala:147)
> at 
> scala.tools.nsc.symtab.classfile.ClassfileParser.parse(ClassfileParser.scala:130)
> at 
> scala.tools.nsc.symtab.SymbolLoaders$ClassfileLoader.doComplete(SymbolLoaders.scala:343)
> at 
> scala.tools.nsc.symtab.SymbolLoaders$SymbolLoader.complete(SymbolLoaders.scala:250)
> at 
> scala.tools.nsc.symtab.SymbolLoaders$SymbolLoader.load(SymbolLoaders.scala:269)
> at scala.reflect.internal.Symbols$Symbol.exists

[jira] [Commented] (SPARK-38714) Interval multiplication error

2022-07-09 Thread Pablo Langa Blanco (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17564619#comment-17564619
 ] 

Pablo Langa Blanco commented on SPARK-38714:


I have tested it on master and branch-3.3 and it is fixed.

> Interval multiplication error
> -----------------------------
>
> Key: SPARK-38714
> URL: https://issues.apache.org/jira/browse/SPARK-38714
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.3.0
> Environment: branch-3.3,  Java 8
>  
>Reporter: chong
>Priority: Major
>
> Codegen fails with an error when multiplying an interval by a decimal.
>  
> $SPARK_HOME/bin/spark-shell
>  
> import org.apache.spark.sql.Row
> import java.time.Duration
> import java.time.Period
> import org.apache.spark.sql.types._
> val data = Seq(Row(new java.math.BigDecimal("123456789.11")))
> val schema = StructType(Seq(
>   StructField("c1", DecimalType(9, 2))
> ))
> val df = spark.createDataFrame(spark.sparkContext.parallelize(data), schema)
> df.selectExpr("interval '100' second * c1").show(false)
> The errors are:
> *java.lang.AssertionError: assertion failed:*
> Decimal$DecimalIsFractional
> while compiling: 
> during phase: globalPhase=terminal, enteringPhase=jvm
> library version: version 2.12.15
> compiler version: version 2.12.15
> reconstructed args: -classpath -Yrepl-class-based -Yrepl-outdir 
> /tmp/spark-83a0cda4-dd0b-472e-ad8b-a4b33b85f613/repl-06489815-5366-4aa0-9419-f01abda8d041
> last tree to typer: TypeTree(class Byte)
> tree position: line 6 of 
> tree tpe: Byte
> symbol: (final abstract) class Byte in package scala
> symbol definition: final abstract class Byte extends (a ClassSymbol)
> symbol package: scala
> symbol owners: class Byte
> call site: constructor $eval in object $eval in package $line21
> == Source file context for tree position ==
> 3
> 4 object $eval {
> 5 lazy val $result = 
> $line21.$read.INSTANCE.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.res0
> 6 lazy val $print: _root_.java.lang.String = {
> 7 $line21.$read.INSTANCE.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw.$iw
> 8
> 9 ""
> at 
> scala.reflect.internal.SymbolTable.throwAssertionError(SymbolTable.scala:185)
> at scala.reflect.internal.Symbols$Symbol.completeInfo(Symbols.scala:1525)
> at scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1514)
> at scala.reflect.internal.Symbols$Symbol.flatOwnerInfo(Symbols.scala:2353)
> at 
> scala.reflect.internal.Symbols$ClassSymbol.companionModule0(Symbols.scala:3346)
> at 
> scala.reflect.internal.Symbols$ClassSymbol.companionModule(Symbols.scala:3348)
> at 
> scala.reflect.internal.Symbols$ModuleClassSymbol.sourceModule(Symbols.scala:3487)
> at 
> scala.reflect.internal.Symbols.$anonfun$forEachRelevantSymbols$1$adapted(Symbols.scala:3802)
> at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
> at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
> at scala.reflect.internal.Symbols.markFlagsCompleted(Symbols.scala:3799)
> at scala.reflect.internal.Symbols.markFlagsCompleted$(Symbols.scala:3805)
> at scala.reflect.internal.SymbolTable.markFlagsCompleted(SymbolTable.scala:28)
> at 
> scala.reflect.internal.pickling.UnPickler$Scan.finishSym$1(UnPickler.scala:324)
> at 
> scala.reflect.internal.pickling.UnPickler$Scan.readSymbol(UnPickler.scala:342)
> at 
> scala.reflect.internal.pickling.UnPickler$Scan.readSymbolRef(UnPickler.scala:645)
> at 
> scala.reflect.internal.pickling.UnPickler$Scan.readType(UnPickler.scala:413)
> at 
> scala.reflect.internal.pickling.UnPickler$Scan.$anonfun$readSymbol$10(UnPickler.scala:357)
> at scala.reflect.internal.pickling.UnPickler$Scan.at(UnPickler.scala:188)
> at 
> scala.reflect.internal.pickling.UnPickler$Scan.readSymbol(UnPickler.scala:357)
> at 
> scala.reflect.internal.pickling.UnPickler$Scan.$anonfun$run$1(UnPickler.scala:96)
> at scala.reflect.internal.pickling.UnPickler$Scan.run(UnPickler.scala:88)
> at scala.reflect.internal.pickling.UnPickler.unpickle(UnPickler.scala:47)
> at 
> scala.tools.nsc.symtab.classfile.ClassfileParser.unpickleOrParseInnerClasses(ClassfileParser.scala:1186)
> at 
> scala.tools.nsc.symtab.classfile.ClassfileParser.parseClass(ClassfileParser.scala:468)
> at 
> scala.tools.nsc.symtab.classfile.ClassfileParser.$anonfun$parse$2(ClassfileParser.scala:161)
> at 
> scala.tools.nsc.symtab.classfile.ClassfileParser.$anonfun$parse$1(ClassfileParser.scala:147)
> at 
> scala.tools.nsc.symtab.classfile.ClassfileParser.parse(ClassfileParser.scala:130)
> at 
> scala.tools.nsc.symtab.SymbolLoaders$ClassfileLoader.doComplete(SymbolLoaders.scala:343)
> at 
> scala.tools.nsc.symtab.SymbolLoaders$SymbolLoader.complete(SymbolLoaders.scala:250)
> at 
> scala.tools.nsc.symtab.SymbolLoaders$S

[jira] [Assigned] (SPARK-39728) Test for parity of SQL functions between Python and JVM DataFrame API's

2022-07-09 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-39728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-39728:


Assignee: Apache Spark

> Test for parity of SQL functions between Python and JVM DataFrame API's
> ------------------------------------------------------------------------
>
> Key: SPARK-39728
> URL: https://issues.apache.org/jira/browse/SPARK-39728
> Project: Spark
>  Issue Type: Improvement
>  Components: PySpark, Tests
>Affects Versions: 3.3.0
>Reporter: Andrew Ray
>Assignee: Apache Spark
>Priority: Minor
>
> Add a unit test that compares the list of Python DataFrame functions 
> available in pyspark.sql.functions with those available in the Scala/Java 
> DataFrame API in org.apache.spark.sql.functions.
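
A hypothetical sketch of how the JVM side of such a check can be enumerated (the class and set names are illustrative; in the actual test the Python list would come from dir(pyspark.sql.functions) rather than the placeholder used here):

{code:java}
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class SqlFunctionParitySketch {
  public static void main(String[] args) throws Exception {
    // Collect the public static forwarders generated for the Scala object
    // org.apache.spark.sql.functions, skipping synthetic members.
    Set<String> jvmFunctions = new TreeSet<>();
    for (Method m : Class.forName("org.apache.spark.sql.functions").getMethods()) {
      if (Modifier.isStatic(m.getModifiers()) && !m.getName().contains("$")) {
        jvmFunctions.add(m.getName());
      }
    }

    // Placeholder only: the real test would build this set on the Python
    // side from dir(pyspark.sql.functions).
    Set<String> pythonFunctions = new HashSet<>(Arrays.asList("col", "lit", "sum"));

    jvmFunctions.removeAll(pythonFunctions);
    System.out.println("JVM functions with no Python counterpart: " + jvmFunctions);
  }
}
{code}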






[jira] [Assigned] (SPARK-39728) Test for parity of SQL functions between Python and JVM DataFrame API's

2022-07-09 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-39728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-39728:


Assignee: (was: Apache Spark)

> Test for parity of SQL functions between Python and JVM DataFrame API's
> ------------------------------------------------------------------------
>
> Key: SPARK-39728
> URL: https://issues.apache.org/jira/browse/SPARK-39728
> Project: Spark
>  Issue Type: Improvement
>  Components: PySpark, Tests
>Affects Versions: 3.3.0
>Reporter: Andrew Ray
>Priority: Minor
>
> Add a unit test that compares the list of Python DataFrame functions 
> available in pyspark.sql.functions with those available in the Scala/Java 
> DataFrame API in org.apache.spark.sql.functions.






[jira] [Commented] (SPARK-39728) Test for parity of SQL functions between Python and JVM DataFrame API's

2022-07-09 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-39728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17564618#comment-17564618
 ] 

Apache Spark commented on SPARK-39728:
--

User 'aray' has created a pull request for this issue:
https://github.com/apache/spark/pull/37144

> Test for parity of SQL functions between Python and JVM DataFrame API's
> ------------------------------------------------------------------------
>
> Key: SPARK-39728
> URL: https://issues.apache.org/jira/browse/SPARK-39728
> Project: Spark
>  Issue Type: Improvement
>  Components: PySpark, Tests
>Affects Versions: 3.3.0
>Reporter: Andrew Ray
>Priority: Minor
>
> Add a unit test that compares the list of Python DataFrame functions 
> available in pyspark.sql.functions with those available in the Scala/Java 
> DataFrame API in org.apache.spark.sql.functions.






[jira] [Updated] (SPARK-39728) Test for parity of SQL functions between Python and JVM DataFrame API's

2022-07-09 Thread Andrew Ray (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-39728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Ray updated SPARK-39728:
---
Priority: Minor  (was: Major)

> Test for parity of SQL functions between Python and JVM DataFrame API's
> ------------------------------------------------------------------------
>
> Key: SPARK-39728
> URL: https://issues.apache.org/jira/browse/SPARK-39728
> Project: Spark
>  Issue Type: Improvement
>  Components: PySpark, Tests
>Affects Versions: 3.3.0
>Reporter: Andrew Ray
>Priority: Minor
>
> Add a unit test that compares the list of Python DataFrame functions 
> available in pyspark.sql.functions with those available in the Scala/Java 
> DataFrame API in org.apache.spark.sql.functions.






[jira] [Created] (SPARK-39728) Test for parity of SQL functions between Python and JVM DataFrame API's

2022-07-09 Thread Andrew Ray (Jira)
Andrew Ray created SPARK-39728:
--

 Summary: Test for parity of SQL functions between Python and JVM 
DataFrame API's
 Key: SPARK-39728
 URL: https://issues.apache.org/jira/browse/SPARK-39728
 Project: Spark
  Issue Type: Improvement
  Components: PySpark, Tests
Affects Versions: 3.3.0
Reporter: Andrew Ray


Add a unit test that compares the list of Python DataFrame functions available 
in pyspark.sql.functions with those available in the Scala/Java DataFrame API 
in org.apache.spark.sql.functions.






[jira] [Comment Edited] (SPARK-24815) Structured Streaming should support dynamic allocation

2022-07-09 Thread Santokh Singh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-24815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17564614#comment-17564614
 ] 

Santokh Singh edited comment on SPARK-24815 at 7/9/22 7:45 PM:
---

Very interested in this feature. With the {{mapGroupsWithState}} API in 
structured streaming, or generic state management and sharing state across 
executors, would externalizing the state help? I am aware of RocksDB being one way.


was (Author: JIRAUSER292561):
Pretty much interested in this feature. With {{mapGroupsWithState}} api in 
structured streaming, or generic state management and sharing state across 
executors, would externalizing state help? I am aware rocksDB being one way.

> Structured Streaming should support dynamic allocation
> ------------------------------------------------------
>
> Key: SPARK-24815
> URL: https://issues.apache.org/jira/browse/SPARK-24815
> Project: Spark
>  Issue Type: Improvement
>  Components: Scheduler, Spark Core, Structured Streaming
>Affects Versions: 2.3.1
>Reporter: Karthik Palaniappan
>Priority: Minor
>
> For batch jobs, dynamic allocation is very useful for adding and removing 
> containers to match the actual workload. On multi-tenant clusters, it ensures 
> that a Spark job is taking no more resources than necessary. In cloud 
> environments, it enables autoscaling.
> However, if you set spark.dynamicAllocation.enabled=true and run a structured 
> streaming job, the batch dynamic allocation algorithm kicks in. It requests 
> more executors when the task backlog reaches a certain size, and removes 
> executors when they sit idle for a certain period of time.
> Quick thoughts:
> 1) Dynamic allocation should be pluggable, rather than hardcoded to a 
> particular implementation in SparkContext.scala (this should be a separate 
> JIRA).
> 2) We should make a structured streaming algorithm that's separate from the 
> batch algorithm. Eventually, continuous processing might need its own 
> algorithm.
> 3) Spark should print a warning if you run a structured streaming job when 
> Core's dynamic allocation is enabled.
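
To make the failure mode concrete, a sketch of the setup being described, illustration only, not a fix (the class name, app name, source, and sink are arbitrary):

{code:java}
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class StreamingWithBatchDynamicAllocation {
  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession.builder()
        .appName("streaming-dra-demo")
        // Enables the batch-oriented algorithm; there is currently no
        // streaming-aware variant to opt into.
        .config("spark.dynamicAllocation.enabled", "true")
        .config("spark.dynamicAllocation.minExecutors", "1")
        .config("spark.dynamicAllocation.maxExecutors", "20")
        .getOrCreate();

    // Executors are requested on task backlog and released after
    // spark.dynamicAllocation.executorIdleTimeout, with no awareness of
    // micro-batch cadence or of state held on the executors.
    Dataset<Row> stream = spark.readStream().format("rate").load();
    StreamingQuery query = stream.writeStream().format("console").start();
    query.awaitTermination();
  }
}
{code}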






[jira] [Commented] (SPARK-24815) Structured Streaming should support dynamic allocation

2022-07-09 Thread Santokh Singh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-24815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17564614#comment-17564614
 ] 

Santokh Singh commented on SPARK-24815:
---

Pretty much interested in this feature. With the {{mapGroupsWithState}} API in 
structured streaming, or generic state management and sharing state across 
executors, would externalizing state help? I am aware of RocksDB being one way.

> Structured Streaming should support dynamic allocation
> ------------------------------------------------------
>
> Key: SPARK-24815
> URL: https://issues.apache.org/jira/browse/SPARK-24815
> Project: Spark
>  Issue Type: Improvement
>  Components: Scheduler, Spark Core, Structured Streaming
>Affects Versions: 2.3.1
>Reporter: Karthik Palaniappan
>Priority: Minor
>
> For batch jobs, dynamic allocation is very useful for adding and removing 
> containers to match the actual workload. On multi-tenant clusters, it ensures 
> that a Spark job is taking no more resources than necessary. In cloud 
> environments, it enables autoscaling.
> However, if you set spark.dynamicAllocation.enabled=true and run a structured 
> streaming job, the batch dynamic allocation algorithm kicks in. It requests 
> more executors when the task backlog reaches a certain size, and removes 
> executors when they sit idle for a certain period of time.
> Quick thoughts:
> 1) Dynamic allocation should be pluggable, rather than hardcoded to a 
> particular implementation in SparkContext.scala (this should be a separate 
> JIRA).
> 2) We should make a structured streaming algorithm that's separate from the 
> batch algorithm. Eventually, continuous processing might need its own 
> algorithm.
> 3) Spark should print a warning if you run a structured streaming job when 
> Core's dynamic allocation is enabled.






[jira] [Assigned] (SPARK-39727) Upgrade joda-time from 2.10.13 to 2.10.14

2022-07-09 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-39727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-39727:


Assignee: Apache Spark

> Upgrade joda-time from 2.10.13 to 2.10.14
> -----------------------------------------
>
> Key: SPARK-39727
> URL: https://issues.apache.org/jira/browse/SPARK-39727
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.3.0
>Reporter: BingKun Pan
>Assignee: Apache Spark
>Priority: Minor
>
> joda-time 2.10.14 has been released; it includes the latest TZ database, 
> 2022agtz.
> release notes: https://www.joda.org/joda-time/changes-report.html#a2.10.14






[jira] [Assigned] (SPARK-39727) Upgrade joda-time from 2.10.13 to 2.10.14

2022-07-09 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-39727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-39727:


Assignee: (was: Apache Spark)

> Upgrade joda-time from 2.10.13 to 2.10.14
> -----------------------------------------
>
> Key: SPARK-39727
> URL: https://issues.apache.org/jira/browse/SPARK-39727
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.3.0
>Reporter: BingKun Pan
>Priority: Minor
>
> joda-time 2.10.14 has been released; it includes the latest TZ database, 
> 2022agtz.
> release notes: https://www.joda.org/joda-time/changes-report.html#a2.10.14






[jira] [Commented] (SPARK-39727) Upgrade joda-time from 2.10.13 to 2.10.14

2022-07-09 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-39727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17564573#comment-17564573
 ] 

Apache Spark commented on SPARK-39727:
--

User 'panbingkun' has created a pull request for this issue:
https://github.com/apache/spark/pull/37143

> Upgrade joda-time from 2.10.13 to 2.10.14
> -----------------------------------------
>
> Key: SPARK-39727
> URL: https://issues.apache.org/jira/browse/SPARK-39727
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.3.0
>Reporter: BingKun Pan
>Priority: Minor
>
> joda-time 2.10.14 has been released; it includes the latest TZ database, 
> 2022agtz.
> release notes: https://www.joda.org/joda-time/changes-report.html#a2.10.14






[jira] [Created] (SPARK-39727) Upgrade joda-time from 2.10.13 to 2.10.14

2022-07-09 Thread BingKun Pan (Jira)
BingKun Pan created SPARK-39727:
---

 Summary: Upgrade joda-time from 2.10.13 to 2.10.14
 Key: SPARK-39727
 URL: https://issues.apache.org/jira/browse/SPARK-39727
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.3.0
Reporter: BingKun Pan


joda-time 2.10.14 has been released; it includes the latest TZ database, 
2022agtz.

release notes: https://www.joda.org/joda-time/changes-report.html#a2.10.14






[jira] [Updated] (SPARK-39720) Implement tableExists/getTable in SparkR for 3L namespace

2022-07-09 Thread Ruifeng Zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-39720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruifeng Zheng updated SPARK-39720:
--
Summary: Implement tableExists/getTable in SparkR for 3L namespace  (was: 
Make createTable/cacheTable/uncacheTable/refreshTable/tableExists in SparkR 
support 3L namespace)

> Implement tableExists/getTable in SparkR for 3L namespace
> ---------------------------------------------------------
>
> Key: SPARK-39720
> URL: https://issues.apache.org/jira/browse/SPARK-39720
> Project: Spark
>  Issue Type: Sub-task
>  Components: R
>Affects Versions: 3.4.0
>Reporter: Ruifeng Zheng
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org