[jira] [Updated] (SPARK-49599) Upgrade snappy-java to 1.1.10.7

2024-09-11 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-49599:
-
Priority: Minor  (was: Major)

> Upgrade snappy-java to 1.1.10.7
> ---
>
> Key: SPARK-49599
> URL: https://issues.apache.org/jira/browse/SPARK-49599
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-49599) Upgrade snappy-java to 1.1.10.7

2024-09-11 Thread Yang Jie (Jira)
Yang Jie created SPARK-49599:


 Summary: Upgrade snappy-java to 1.1.10.7
 Key: SPARK-49599
 URL: https://issues.apache.org/jira/browse/SPARK-49599
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Updated] (SPARK-49518) Use build-helper-maven-plugin to manage the code for volcano

2024-09-04 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-49518:
-
Component/s: Kubernetes
 (was: Spark Core)

> Use build-helper-maven-plugin to manage the code for volcano
> 
>
> Key: SPARK-49518
> URL: https://issues.apache.org/jira/browse/SPARK-49518
> Project: Spark
>  Issue Type: Improvement
>  Components: Build, Kubernetes
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>  Labels: pull-request-available
>
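
The change amounts to Maven configuration: build-helper-maven-plugin's {{add-source}} goal can register an extra source root (e.g. the volcano sources) only when the relevant profile is active. A hypothetical sketch of such a configuration — the path, execution id, and module layout are illustrative, not taken from the actual PR:

{code:xml}
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>add-volcano-sources</id>
      <phase>generate-sources</phase>
      <goals>
        <goal>add-source</goal>
      </goals>
      <configuration>
        <sources>
          <!-- Hypothetical path: compiled only when the profile is enabled -->
          <source>volcano/src/main/scala</source>
        </sources>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}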







[jira] [Resolved] (SPARK-49483) Upgrade `commons-lang3` to 3.17.0

2024-09-01 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-49483.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47948
[https://github.com/apache/spark/pull/47948]

>  Upgrade `commons-lang3` to 3.17.0
> --
>
> Key: SPARK-49483
> URL: https://issues.apache.org/jira/browse/SPARK-49483
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Resolved] (SPARK-49455) Refactor `StagingInMemoryTableCatalog` to override the non-deprecated functions

2024-09-01 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-49455.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47924
[https://github.com/apache/spark/pull/47924]

> Refactor `StagingInMemoryTableCatalog` to override the non-deprecated 
> functions
> ---
>
> Key: SPARK-49455
> URL: https://issues.apache.org/jira/browse/SPARK-49455
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL, Tests
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Assigned] (SPARK-49455) Refactor `StagingInMemoryTableCatalog` to override the non-deprecated functions

2024-09-01 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-49455:


Assignee: Yang Jie

> Refactor `StagingInMemoryTableCatalog` to override the non-deprecated 
> functions
> ---
>
> Key: SPARK-49455
> URL: https://issues.apache.org/jira/browse/SPARK-49455
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL, Tests
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Comment Edited] (SPARK-49460) NPE error in EmptyRelationExec.cleanupResources()

2024-08-30 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-49460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17878279#comment-17878279
 ] 

Yang Jie edited comment on SPARK-49460 at 8/31/24 2:14 AM:
---

Issue resolved by pull request 47931

[https://github.com/apache/spark/pull/47931]


was (Author: luciferyang):
Fixed by https://github.com/apache/spark/pull/47931

> NPE error in EmptyRelationExec.cleanupResources()
> -
>
> Key: SPARK-49460
> URL: https://issues.apache.org/jira/browse/SPARK-49460
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 4.0.0, 3.5.2, 3.5.3
>Reporter: Ziqi Liu
>Assignee: Ziqi Liu
>Priority: Major
>  Labels: pull-request-available
>
> This bug was introduced in [https://github.com/apache/spark/pull/46830]: 
> {{cleanupResources}} might be executed on the executor, where {{logical}} is 
> null.
>  
> A simple repro:
> {code:java}
> spark.sql("create table t1left (a int, b int);")
> spark.sql("insert into t1left values (1, 1), (2,2), (3,3);")
> spark.sql("create table t1right (a int, b int);")
> spark.sql("create table t1empty (a int, b int);")
> spark.sql("insert into t1right values (2,20), (4, 40);")
> spark.sql("""
>   |with leftT as (
>   |  with erp as (
>   |select
>   |  *
>   |from
>   |  t1left
>   |  join t1empty on t1left.a = t1empty.a
>   |  join t1right on t1left.a = t1right.a
>   |  )
>   |  SELECT
>   |CASE
>   |  WHEN COUNT(*) = 0 THEN 4
>   |  ELSE NULL
>   |END AS a
>   |  FROM
>   |erp
>   |  HAVING
>   |COUNT(*) = 0
>   |)
>   |select
>   |  /*+ MERGEJOIN(t1right) */
>   |  *
>   |from
>   |  leftT
>   |  join t1right on leftT.a = t1right.a""".stripMargin).collect() {code}
>  
> error stacktrace:
> {code:java}
> 24/08/29 17:52:08 ERROR TaskSetManager: Task 0 in stage 9.0 failed 1 times; 
> aborting job
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
> stage 9.0 failed 1 times, most recent failure: Lost task 0.0 in stage 9.0 
> (TID 10) (192.168.3.181 executor driver): java.lang.NullPointerException: 
> Cannot invoke 
> "org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.foreach(scala.Function1)"
>  because the return value of 
> "org.apache.spark.sql.execution.EmptyRelationExec.logical()" is null
>         at 
> org.apache.spark.sql.execution.EmptyRelationExec.cleanupResources(EmptyRelationExec.scala:86)
>         at 
> org.apache.spark.sql.execution.SparkPlan.$anonfun$cleanupResources$1(SparkPlan.scala:571)
>         at 
> org.apache.spark.sql.execution.SparkPlan.$anonfun$cleanupResources$1$adapted(SparkPlan.scala:571)
>         at scala.collection.immutable.Vector.foreach(Vector.scala:2124)
> 
>         at 
> org.apache.spark.sql.execution.SparkPlan.cleanupResources(SparkPlan.scala:571)
>         at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage8.processNext(Unknown
>  Source)
>         at 
> org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>         at 
> org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:50)
>         at 
> org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:388)
>         at 
> org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:901)
>         at 
> org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:901)
>         at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:374)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:338)
>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
>         at 
> org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:171)
>         at org.apache.spark.scheduler.Task.run(Task.scala:146)
>         at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$5(Executor.scala:644)
>  {code}
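
The defensive pattern behind the fix (the actual change is in the linked PR) is to guard the plan traversal, since {{logical}} is only populated on the driver and executor-side copies may see null. A minimal Java sketch of that pattern — the names here are illustrative stand-ins, not Spark's real signatures:

```java
import java.util.function.Consumer;

// EmptyRelationExec.cleanupResources() dereferenced `logical` unconditionally;
// on executors that reference can be null, producing the NPE above. The guard
// below tolerates the null instead of traversing it.
public class CleanupSketch {
    // Returns true only when the (possibly null) plan was actually visited.
    static boolean cleanupResources(Object logical, Consumer<Object> visit) {
        if (logical == null) {
            return false; // executor-side copy: nothing to traverse
        }
        visit.accept(logical); // driver-side copy: traverse as before
        return true;
    }

    public static void main(String[] args) {
        // The executor-side call no longer throws NullPointerException:
        System.out.println(cleanupResources(null, p -> {}));          // false
        System.out.println(cleanupResources("logicalPlan", p -> {})); // true
    }
}
```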






[jira] [Resolved] (SPARK-49460) NPE error in EmptyRelationExec.cleanupResources()

2024-08-30 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-49460.
--
Resolution: Fixed

Fixed by https://github.com/apache/spark/pull/47931

> NPE error in EmptyRelationExec.cleanupResources()
> -
>
> Key: SPARK-49460
> URL: https://issues.apache.org/jira/browse/SPARK-49460
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 4.0.0, 3.5.2, 3.5.3
>Reporter: Ziqi Liu
>Assignee: Ziqi Liu
>Priority: Major
>  Labels: pull-request-available
>
> (Description, repro, and stack trace are identical to the quoted text in the 
> SPARK-49460 comment above.)






[jira] [Assigned] (SPARK-49460) NPE error in EmptyRelationExec.cleanupResources()

2024-08-30 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-49460:


Assignee: Ziqi Liu

> NPE error in EmptyRelationExec.cleanupResources()
> -
>
> Key: SPARK-49460
> URL: https://issues.apache.org/jira/browse/SPARK-49460
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 4.0.0, 3.5.2, 3.5.3
>Reporter: Ziqi Liu
>Assignee: Ziqi Liu
>Priority: Major
>  Labels: pull-request-available
>
> (Description, repro, and stack trace are identical to the quoted text in the 
> SPARK-49460 comment above.)






[jira] [Assigned] (SPARK-49119) Fix the inconsistency of syntax `show columns` between v1 and v2

2024-08-30 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-49119:


Assignee: BingKun Pan

> Fix the inconsistency of syntax `show columns` between v1 and v2
> 
>
> Key: SPARK-49119
> URL: https://issues.apache.org/jira/browse/SPARK-49119
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Resolved] (SPARK-49119) Fix the inconsistency of syntax `show columns` between v1 and v2

2024-08-30 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-49119.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47628
[https://github.com/apache/spark/pull/47628]

> Fix the inconsistency of syntax `show columns` between v1 and v2
> 
>
> Key: SPARK-49119
> URL: https://issues.apache.org/jira/browse/SPARK-49119
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Assigned] (SPARK-49457) Remove uncommon curl option --retry-all-errors

2024-08-29 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-49457:


Assignee: Cheng Pan

> Remove uncommon curl option --retry-all-errors
> --
>
> Key: SPARK-49457
> URL: https://issues.apache.org/jira/browse/SPARK-49457
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Cheng Pan
>Assignee: Cheng Pan
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Resolved] (SPARK-49457) Remove uncommon curl option --retry-all-errors

2024-08-29 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-49457.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47926
[https://github.com/apache/spark/pull/47926]

> Remove uncommon curl option --retry-all-errors
> --
>
> Key: SPARK-49457
> URL: https://issues.apache.org/jira/browse/SPARK-49457
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Cheng Pan
>Assignee: Cheng Pan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Created] (SPARK-49455) Refactor `StagingInMemoryTableCatalog` to override the non-deprecated functions

2024-08-28 Thread Yang Jie (Jira)
Yang Jie created SPARK-49455:


 Summary: Refactor `StagingInMemoryTableCatalog` to override the 
non-deprecated functions
 Key: SPARK-49455
 URL: https://issues.apache.org/jira/browse/SPARK-49455
 Project: Spark
  Issue Type: Improvement
  Components: SQL, Tests
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Created] (SPARK-49446) Upgrade jetty to 11.0.23

2024-08-28 Thread Yang Jie (Jira)
Yang Jie created SPARK-49446:


 Summary: Upgrade jetty to 11.0.23
 Key: SPARK-49446
 URL: https://issues.apache.org/jira/browse/SPARK-49446
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Created] (SPARK-49440) Clean up unused LogKey definitions

2024-08-27 Thread Yang Jie (Jira)
Yang Jie created SPARK-49440:


 Summary: Clean up unused LogKey definitions
 Key: SPARK-49440
 URL: https://issues.apache.org/jira/browse/SPARK-49440
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Updated] (SPARK-49334) `str_to_map` should check whether the `collation` values of all parameter types are the same

2024-08-21 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-49334:
-
Parent: SPARK-46837
Issue Type: Sub-task  (was: Improvement)

> `str_to_map` should check whether the `collation` values of all parameter 
> types are the same
> 
>
> Key: SPARK-49334
> URL: https://issues.apache.org/jira/browse/SPARK-49334
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Assigned] (SPARK-49327) Upgrade commons-compress to 1.27.1

2024-08-21 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-49327:


Assignee: BingKun Pan

> Upgrade commons-compress to 1.27.1
> --
>
> Key: SPARK-49327
> URL: https://issues.apache.org/jira/browse/SPARK-49327
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Critical
>  Labels: pull-request-available
>







[jira] [Resolved] (SPARK-49327) Upgrade commons-compress to 1.27.1

2024-08-21 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-49327.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47821
[https://github.com/apache/spark/pull/47821]

> Upgrade commons-compress to 1.27.1
> --
>
> Key: SPARK-49327
> URL: https://issues.apache.org/jira/browse/SPARK-49327
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Created] (SPARK-49335) Upgrade maven to 3.9.9

2024-08-21 Thread Yang Jie (Jira)
Yang Jie created SPARK-49335:


 Summary: Upgrade maven to 3.9.9
 Key: SPARK-49335
 URL: https://issues.apache.org/jira/browse/SPARK-49335
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Resolved] (SPARK-49075) Upgrade JUnit5-related dependencies to the latest version

2024-08-19 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-49075.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47560
[https://github.com/apache/spark/pull/47560]

> Upgrade JUnit5-related dependencies to the latest version
> 
>
> Key: SPARK-49075
> URL: https://issues.apache.org/jira/browse/SPARK-49075
> Project: Spark
>  Issue Type: Improvement
>  Components: Build, Tests
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Commented] (SPARK-49297) Flaky test: `SPARK-46957: Migrated shuffle files should be able to cleanup from executor` with Java 21

2024-08-19 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-49297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17875033#comment-17875033
 ] 

Yang Jie commented on SPARK-49297:
--

The Maven daily test with Java 21 also failed, but the error was different:

- [https://github.com/apache/spark/actions/runs/10454868804/job/28948522185]
{code:java}
- SPARK-46957: Migrated shuffle files should be able to cleanup from executor 
*** FAILED ***
18848  java.io.UncheckedIOException: java.nio.file.NoSuchFileException: 
/home/runner/work/spark/spark/core/target/tmp/spark-87f59bc6-b996-42cd-9775-2f704b67f773/executor-e0a030d4-434b-46b3-bdbf-81d4908bb0f5/blockmgr-d88ed5dd-c1c8-4713-9433-10694e736a8e/3a
18849  at 
java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:87)
18850  at 
java.base/java.nio.file.FileTreeIterator.hasNext(FileTreeIterator.java:103)
18851  at java.base/java.util.Iterator.forEachRemaining(Iterator.java:132)
18852  at 
java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1939)
18853  at 
java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
18854  at 
java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
18855  at 
java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
18856  at 
java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
18857  at 
java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
18858  at org.apache.commons.io.FileUtils.toList(FileUtils.java:3025)
18859  ...
18860  Cause: java.nio.file.NoSuchFileException: 
/home/runner/work/spark/spark/core/target/tmp/spark-87f59bc6-b996-42cd-9775-2f704b67f773/executor-e0a030d4-434b-46b3-bdbf-81d4908bb0f5/blockmgr-d88ed5dd-c1c8-4713-9433-10694e736a8e/3a
18861  at 
java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
18862  at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
18863  at 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
18864  at 
java.base/sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
18865  at 
java.base/sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:171)
18866  at 
java.base/sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
18867  at java.base/java.nio.file.Files.readAttributes(Files.java:1853)
18868  at 
java.base/java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:226)
18869  at java.base/java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:277)
18870  at java.base/java.nio.file.FileTreeWalker.next(FileTreeWalker.java:374)
18871  ... {code}
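
The trace shows {{FileUtils.toList}} walking a block-manager temp directory while migrated shuffle files are being deleted concurrently, so the walk races with deletion and hits {{NoSuchFileException}}. A deletion-tolerant directory walk — an illustrative sketch of the failure mode and a workaround, not the actual fix — skips entries that vanish mid-traversal via {{visitFileFailed}}:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.List;

// Files.walk()/FileTreeIterator propagate NoSuchFileException when a file is
// removed between the directory listing and the stat call. walkFileTree with
// an overridden visitFileFailed lets the traversal skip vanished entries
// instead of aborting the whole walk.
public class TolerantWalk {
    static List<Path> listFiles(Path root) throws IOException {
        List<Path> found = new ArrayList<>();
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path f, BasicFileAttributes a) {
                found.add(f);
                return FileVisitResult.CONTINUE;
            }
            @Override
            public FileVisitResult visitFileFailed(Path f, IOException e) {
                // Entry deleted concurrently: skip it rather than fail.
                return FileVisitResult.CONTINUE;
            }
        });
        return found;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("tolerant-walk");
        Files.createFile(dir.resolve("survivor"));
        System.out.println(listFiles(dir).size()); // 1
    }
}
```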

> Flaky test: `SPARK-46957: Migrated shuffle files should be able to cleanup 
> from executor` with Java 21
> --
>
> Key: SPARK-49297
> URL: https://issues.apache.org/jira/browse/SPARK-49297
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> * [https://github.com/apache/spark/actions/runs/10446893635/job/28925140545]
> {code:java}
> [info] - SPARK-46957: Migrated shuffle files should be able to cleanup from 
> executor *** FAILED *** (35 seconds, 200 milliseconds)
> 15718[info]   0 was not greater than or equal to 4 
> (BlockManagerDecommissionIntegrationSuite.scala:423)
> 15719[info]   org.scalatest.exceptions.TestFailedException:
> 15720[info]   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:472)
> 15721[info]   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:471)
> 15722[info]   at 
> org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1231)
> 15723[info]   at 
> org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:1295)
> 15724[info]   at 
> org.apache.spark.storage.BlockManagerDecommissionIntegrationSuite.$anonfun$new$10(BlockManagerDecommissionIntegrationSuite.scala:423)
> 15725[info]   at 
> org.scalatest.enablers.Timed$$anon$1.timeoutAfter(Timed.scala:127)
> 15726[info]   at 
> org.scalatest.concurrent.TimeLimits$.failAfterImpl(TimeLimits.scala:282)
> 15727[info]   at 
> org.scalatest.concurrent.TimeLimits.failAfter(TimeLimits.scala:231)
> 15728[info]   at 
> org.scalatest.concurrent.TimeLimits.failAfter$(TimeLimits.scala:230)
> 15729[info]   at 
> org.apache.spark.SparkFunSuite.failAfter(SparkFunSuite.scala:69)
> 15730[info]   at 
> org.apache.spark.SparkFunSuite.$anonfun$test$2(SparkFunSuite.scala:155)
> 15731[info]   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
> 15732[info]   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
> 15733[info]   at

[jira] [Resolved] (SPARK-48505) Simplify the implementation of Utils#isG1GC

2024-08-19 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48505.
--
Resolution: Won't Fix

> Simplify the implementation of Utils#isG1GC
> ---
>
> Key: SPARK-48505
> URL: https://issues.apache.org/jira/browse/SPARK-48505
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-49297) Flaky test: `SPARK-46957: Migrated shuffle files should be able to cleanup from executor` with Java 21

2024-08-18 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-49297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17874784#comment-17874784
 ] 

Yang Jie commented on SPARK-49297:
--

From 2024-08-16 to 2024-08-19, this test case failed 3 times in the daily Java 21 test runs over those 4 days.

> Flaky test: `SPARK-46957: Migrated shuffle files should be able to cleanup 
> from executor` with Java 21
> --
>
> Key: SPARK-49297
> URL: https://issues.apache.org/jira/browse/SPARK-49297
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> * [https://github.com/apache/spark/actions/runs/10446893635/job/28925140545]
> {code:java}
> [info] - SPARK-46957: Migrated shuffle files should be able to cleanup from 
> executor *** FAILED *** (35 seconds, 200 milliseconds)
> [info]   0 was not greater than or equal to 4 (BlockManagerDecommissionIntegrationSuite.scala:423)
> [info]   org.scalatest.exceptions.TestFailedException:
> [info]   at org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:472)
> [info]   at org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:471)
> [info]   at org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1231)
> [info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:1295)
> [info]   at org.apache.spark.storage.BlockManagerDecommissionIntegrationSuite.$anonfun$new$10(BlockManagerDecommissionIntegrationSuite.scala:423)
> [info]   at org.scalatest.enablers.Timed$$anon$1.timeoutAfter(Timed.scala:127)
> [info]   at org.scalatest.concurrent.TimeLimits$.failAfterImpl(TimeLimits.scala:282)
> [info]   at org.scalatest.concurrent.TimeLimits.failAfter(TimeLimits.scala:231)
> [info]   at org.scalatest.concurrent.TimeLimits.failAfter$(TimeLimits.scala:230)
> [info]   at org.apache.spark.SparkFunSuite.failAfter(SparkFunSuite.scala:69)
> [info]   at org.apache.spark.SparkFunSuite.$anonfun$test$2(SparkFunSuite.scala:155)
> [info]   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
> [info]   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
> [info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
> [info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
> [info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
> [info]   at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:226)
> [info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:227)
> [info]   at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:224)
> [info]   at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:236)
> [info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
> [info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:236)
> [info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:218)
> [info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:69)
> [info]   at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
> [info]   at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
> [info]   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:69)
> [info]   at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:269)
> [info]   at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
> [info]   at scala.collection.immutable.List.foreach(List.scala:334)
> [info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
> [info]   at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
> [info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
> [info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:269)
> [info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:268)
> [info]   at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1564)
> [info]   at org.scalatest.Suite.run(Suite.scala:1114)
> [info]   at org.scalatest.Suite.run$(Suite.scala:1096)
> [info]   at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1564)
> [info]   at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:273)
> [info]   at org.scalatest.SuperEngine.runImpl(E

[jira] [Created] (SPARK-49297) Flaky test: `SPARK-46957: Migrated shuffle files should be able to cleanup from executor` with Java 21

2024-08-18 Thread Yang Jie (Jira)
Yang Jie created SPARK-49297:


 Summary: Flaky test: `SPARK-46957: Migrated shuffle files should 
be able to cleanup from executor` with Java 21
 Key: SPARK-49297
 URL: https://issues.apache.org/jira/browse/SPARK-49297
 Project: Spark
  Issue Type: Bug
  Components: Tests
Affects Versions: 4.0.0
Reporter: Yang Jie


* [https://github.com/apache/spark/actions/runs/10446893635/job/28925140545]

{code:java}
[info] - SPARK-46957: Migrated shuffle files should be able to cleanup from 
executor *** FAILED *** (35 seconds, 200 milliseconds)
[info]   0 was not greater than or equal to 4 (BlockManagerDecommissionIntegrationSuite.scala:423)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:472)
[info]   at org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:471)
[info]   at org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1231)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:1295)
[info]   at org.apache.spark.storage.BlockManagerDecommissionIntegrationSuite.$anonfun$new$10(BlockManagerDecommissionIntegrationSuite.scala:423)
[info]   at org.scalatest.enablers.Timed$$anon$1.timeoutAfter(Timed.scala:127)
[info]   at org.scalatest.concurrent.TimeLimits$.failAfterImpl(TimeLimits.scala:282)
[info]   at org.scalatest.concurrent.TimeLimits.failAfter(TimeLimits.scala:231)
[info]   at org.scalatest.concurrent.TimeLimits.failAfter$(TimeLimits.scala:230)
[info]   at org.apache.spark.SparkFunSuite.failAfter(SparkFunSuite.scala:69)
[info]   at org.apache.spark.SparkFunSuite.$anonfun$test$2(SparkFunSuite.scala:155)
[info]   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:226)
[info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:227)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:224)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:236)
[info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:236)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:218)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:69)
[info]   at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
[info]   at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
[info]   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:69)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:269)
[info]   at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
[info]   at scala.collection.immutable.List.foreach(List.scala:334)
[info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info]   at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
[info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:269)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:268)
[info]   at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1564)
[info]   at org.scalatest.Suite.run(Suite.scala:1114)
[info]   at org.scalatest.Suite.run$(Suite.scala:1096)
[info]   at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1564)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:273)
[info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:273)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:272)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:69)
[info]   at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
[info]   at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
[info]   at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:2

[jira] [Created] (SPARK-49292) Upgrade dropwizard metrics to 4.2.27

2024-08-18 Thread Yang Jie (Jira)
Yang Jie created SPARK-49292:


 Summary: Upgrade dropwizard metrics to 4.2.27
 Key: SPARK-49292
 URL: https://issues.apache.org/jira/browse/SPARK-49292
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Updated] (SPARK-49260) Should not prepend the classes path of sql/core module in Spark Connect Shell

2024-08-16 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-49260:
-
Component/s: Connect
 (was: Spark Core)

> Should not prepend the classes path of sql/core module in Spark Connect Shell 
> --
>
> Key: SPARK-49260
> URL: https://issues.apache.org/jira/browse/SPARK-49260
> Project: Spark
>  Issue Type: Improvement
>  Components: Connect
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>







[jira] [Created] (SPARK-49260) Should not prepend the classes path of sql/core module in Spark Connect Shell

2024-08-16 Thread Yang Jie (Jira)
Yang Jie created SPARK-49260:


 Summary: Should not prepend the classes path of sql/core module in 
Spark Connect Shell 
 Key: SPARK-49260
 URL: https://issues.apache.org/jira/browse/SPARK-49260
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Resolved] (SPARK-49240) Add `scalastyle` and `checkstyle` rules to avoid `URL` constructors

2024-08-14 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-49240.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47762
[https://github.com/apache/spark/pull/47762]

> Add `scalastyle` and `checkstyle` rules to avoid `URL` constructors
> ---
>
> Key: SPARK-49240
> URL: https://issues.apache.org/jira/browse/SPARK-49240
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Assigned] (SPARK-49234) Upgrade `xz` to `1.10`

2024-08-14 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-49234:


Assignee: Dongjoon Hyun

> Upgrade `xz` to `1.10`
> --
>
> Key: SPARK-49234
> URL: https://issues.apache.org/jira/browse/SPARK-49234
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Resolved] (SPARK-49234) Upgrade `xz` to `1.10`

2024-08-14 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-49234.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47750
[https://github.com/apache/spark/pull/47750]

> Upgrade `xz` to `1.10`
> --
>
> Key: SPARK-49234
> URL: https://issues.apache.org/jira/browse/SPARK-49234
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Commented] (SPARK-49228) Investigate ExternalAppendOnlyUnsafeRowArrayBenchmark

2024-08-13 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-49228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17873390#comment-17873390
 ] 

Yang Jie commented on SPARK-49228:
--

I can also reproduce this issue locally. The stack trace for the long-running Runnable is as follows:

 
{code:java}
"main" #1 prio=5 os_prio=31 cpu=177053.75ms elapsed=179.28s 
tid=0x00013000d200 nid=0x2803 runnable  [0x00016db1d000]
   java.lang.Thread.State: RUNNABLE
    at 
java.util.stream.ReferencePipeline$2$1.accept(java.base@17.0.11/ReferencePipeline.java:178)
    at 
java.util.HashMap$KeySpliterator.forEachRemaining(java.base@17.0.11/HashMap.java:1707)
    at 
java.util.stream.AbstractPipeline.copyInto(java.base@17.0.11/AbstractPipeline.java:509)
    at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(java.base@17.0.11/AbstractPipeline.java:499)
    at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(java.base@17.0.11/ReduceOps.java:921)
    at 
java.util.stream.AbstractPipeline.evaluate(java.base@17.0.11/AbstractPipeline.java:234)
    at 
java.util.stream.LongPipeline.reduce(java.base@17.0.11/LongPipeline.java:498)
    at 
java.util.stream.LongPipeline.sum(java.base@17.0.11/LongPipeline.java:456)
    at 
org.apache.spark.memory.TaskMemoryManager.acquireExecutionMemory(TaskMemoryManager.java:219)
    - locked <0x0007084449e8> (a org.apache.spark.memory.TaskMemoryManager)
    at 
org.apache.spark.memory.TaskMemoryManager.allocatePage(TaskMemoryManager.java:348)
    at 
org.apache.spark.memory.MemoryConsumer.allocatePage(MemoryConsumer.java:117)
    at 
org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.acquireNewPageIfNecessary(UnsafeExternalSorter.java:437)
    at 
org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.allocateMemoryForRecordIfNecessary(UnsafeExternalSorter.java:456)
    at 
org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.insertRecord(UnsafeExternalSorter.java:491)
    at 
org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.$anonfun$add$2(ExternalAppendOnlyUnsafeRowArray.scala:151)
    at 
org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.$anonfun$add$2$adapted(ExternalAppendOnlyUnsafeRowArray.scala:145)
    at 
org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray$$Lambda$888/0x00b0015a3918.apply(Unknown
 Source)
    at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:619)
    at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:617)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:935)
    at 
org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.add(ExternalAppendOnlyUnsafeRowArray.scala:145)
    at 
org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArrayBenchmark$.$anonfun$testAgainstRawArrayBuffer$6(ExternalAppendOnlyUnsafeRowArrayBenchmark.scala:112)
    at 
org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArrayBenchmark$.$anonfun$testAgainstRawArrayBuffer$6$adapted(ExternalAppendOnlyUnsafeRowArrayBenchmark.scala:112)
    at 
org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArrayBenchmark$$$Lambda$879/0x00b00159bbf8.apply(Unknown
 Source)
    at scala.collection.immutable.VectorStatics$.foreachRec(Vector.scala:2124)
    at scala.collection.immutable.Vector.foreach(Vector.scala:2130)
    at 
org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArrayBenchmark$.$anonfun$testAgainstRawArrayBuffer$5(ExternalAppendOnlyUnsafeRowArrayBenchmark.scala:112)
    at 
org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArrayBenchmark$$$Lambda$878/0x00b00159b800.apply$mcVJ$sp(Unknown
 Source)
    at scala.runtime.java8.JFunction1$mcVJ$sp.apply(JFunction1$mcVJ$sp.scala:18)
    at 
scala.collection.immutable.NumericRange.foreach$mVc$sp(NumericRange.scala:115)
    at 
org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArrayBenchmark$.$anonfun$testAgainstRawArrayBuffer$4(ExternalAppendOnlyUnsafeRowArrayBenchmark.scala:107)
    at 
org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArrayBenchmark$$$Lambda$301/0x00b00127c000.apply$mcVI$sp(Unknown
 Source)
    at 
org.apache.spark.benchmark.Benchmark.$anonfun$addCase$1(Benchmark.scala:77)
    at 
org.apache.spark.benchmark.Benchmark.$anonfun$addCase$1$adapted(Benchmark.scala:75)
    at 
org.apache.spark.benchmark.Benchmark$$Lambda$300/0x00b0012778b0.apply(Unknown
 Source)
    at org.apache.spark.benchmark.Benchmark.measure(Benchmark.scala:140)
    at org.apache.spark.benchmark.Benchmark.$anonfun$run$1(Benchmark.scala:106)
    at 
org.apache.spark.benchmark.Benchmark$$Lambda$865/0x00b00158c610.apply(Unknown
 Source)
    at 
scala.collection.StrictOptimizedIterableOps.map(StrictOptimizedIterableOps.scala:100)
    at 
scala.collection.StrictOptimizedIterableOps.map$(StrictOptimizedIterableOps.scala:87)
    at scala.collection.mutable.ArrayBuffer.map(ArrayBuffer.scala:42)
    at org.apache.

[jira] [Created] (SPARK-49221) sbt compilation warning: `Regular tasks always evaluate task dependencies (.value) regardless of if expressions`

2024-08-13 Thread Yang Jie (Jira)
Yang Jie created SPARK-49221:


 Summary: sbt compilation warning: `Regular tasks always evaluate 
task dependencies (.value) regardless of if expressions`
 Key: SPARK-49221
 URL: https://issues.apache.org/jira/browse/SPARK-49221
 Project: Spark
  Issue Type: Improvement
  Components: Project Infra
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-sbt/project/SparkBuild.scala:1554:77: 
value lookup of `/` inside an `if` expression
[warn] 
[warn] problem: `/.value` is inside an `if` expression of a regular task.
[warn]   Regular tasks always evaluate task dependencies (`.value`) regardless 
of `if` expressions.
[warn] solution:
[warn]   1. Use a conditional task `Def.taskIf(...)` to evaluate it when the 
`if` predicate is true or false.
[warn]   2. Or turn the task body into a single `if` expression; the task is 
then auto-converted to a conditional task. 
[warn]   3. Or make the static evaluation explicit by declaring `/.value` 
outside the `if` expression.
[warn]   4. If you still want to force the static lookup, you may annotate the 
task lookup with `@sbtUnchecked`, e.g. `(/.value: @sbtUnchecked)`.
[warn]   5. Add `import sbt.dsl.LinterLevel.Ignore` to your build file to 
disable all task linting.
[warn]     
[warn]         val replClasspathes = (LocalProject("connect-client-jvm") / 
Compile / dependencyClasspath)
 {code}
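Of the suggested fixes, the first (a conditional task via `Def.taskIf`) is usually the least invasive. A minimal sketch, assuming hypothetical key names rather than Spark's actual `SparkBuild.scala` definitions:

```scala
// build.sbt sketch -- the keys below are illustrative stand-ins, not Spark's build.
// Def.taskIf turns the body into a conditional task: the task dependency in each
// branch is evaluated only when that branch is taken, instead of being evaluated
// eagerly the way a plain `.value` inside an `if` expression is.
val includeConnectRepl = settingKey[Boolean]("include connect client classes in the REPL")
val replClasspaths = taskKey[Seq[java.io.File]]("extra classpath entries for the REPL")

replClasspaths := Def.taskIf {
  if (includeConnectRepl.value) {
    // Looked up only when the predicate is true, so the linter warning goes away.
    (LocalProject("connect-client-jvm") / Compile / dependencyClasspath).value.map(_.data)
  } else {
    Seq.empty[java.io.File]
  }
}.value
```

This silences the warning because `dependencyClasspath` is no longer a static dependency of the task when the predicate is false. `Def.taskIf` requires sbt 1.4 or newer.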






[jira] [Resolved] (SPARK-49187) Upgrade slf4j to 2.0.16

2024-08-12 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-49187.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47720
[https://github.com/apache/spark/pull/47720]

> Upgrade slf4j to 2.0.16
> ---
>
> Key: SPARK-49187
> URL: https://issues.apache.org/jira/browse/SPARK-49187
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Resolved] (SPARK-49206) Add `Environment Variables` table to Master `EnvironmentPage`

2024-08-12 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-49206.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47714
[https://github.com/apache/spark/pull/47714]

> Add `Environment Variables` table to Master `EnvironmentPage`
> -
>
> Key: SPARK-49206
> URL: https://issues.apache.org/jira/browse/SPARK-49206
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, UI
>Affects Versions: 4.0.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Assigned] (SPARK-49206) Add `Environment Variables` table to Master `EnvironmentPage`

2024-08-12 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-49206:


Assignee: Dongjoon Hyun

> Add `Environment Variables` table to Master `EnvironmentPage`
> -
>
> Key: SPARK-49206
> URL: https://issues.apache.org/jira/browse/SPARK-49206
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, UI
>Affects Versions: 4.0.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (SPARK-49187) Upgrade slf4j to 2.0.16

2024-08-12 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-49187:
-
Summary: Upgrade slf4j to 2.0.16  (was: Upgrade slf4j to 2.0.15)

> Upgrade slf4j to 2.0.16
> ---
>
> Key: SPARK-49187
> URL: https://issues.apache.org/jira/browse/SPARK-49187
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>







[jira] [Updated] (SPARK-49178) `Row#getSeq` exhibits a performance regression between master and Spark 3.5 with Scala 2.12

2024-08-09 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-49178:
-
Summary: `Row#getSeq` exhibits a performance regression between master and 
Spark 3.5 with Scala 2.12  (was: `Row#getSeq` exhibits a performance regression 
between master and 3.5.)

> `Row#getSeq` exhibits a performance regression between master and Spark 3.5 
> with Scala 2.12
> ---
>
> Key: SPARK-49178
> URL: https://issues.apache.org/jira/browse/SPARK-49178
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> object GetSeqBenchmark extends SqlBasedBenchmark {
>   import spark.implicits._
>   def testRowGetSeq(valuesPerIteration: Int, arraySize: Int): Unit = {
> val data = (0 until arraySize).toArray
> val row = Seq(data).toDF().collect().head
> val benchmark = new Benchmark(
>   s"Test get seq with $arraySize from row",
>   valuesPerIteration,
>   output = output)
> benchmark.addCase("Get Seq") { _: Int =>
>   for (_ <- 0L until valuesPerIteration) {
> val ret = row.getSeq(0)
>   }
> }
> benchmark.run()
>   }
>   override def runBenchmarkSuite(mainArgs: Array[String]): Unit = {
> val valuesPerIteration = 10
> testRowGetSeq(valuesPerIteration, 10)
> testRowGetSeq(valuesPerIteration, 100)
> testRowGetSeq(valuesPerIteration, 1000)
> testRowGetSeq(valuesPerIteration, 1)
> testRowGetSeq(valuesPerIteration, 10)
>   }
> } {code}
>  
> branch-3.5
> {code:java}
> OpenJDK 64-Bit Server VM 1.8.0_422-b05 on Linux 5.15.0-1068-azure
> AMD EPYC 7763 64-Core Processor
> Test get seq with 10 from row:            Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
> ------------------------------------------------------------------------------------------------------------------------
> Get Seq                                               1              1           0        194.8           5.1       1.0X
>
> OpenJDK 64-Bit Server VM 1.8.0_422-b05 on Linux 5.15.0-1068-azure
> AMD EPYC 7763 64-Core Processor
> Test get seq with 100 from row:           Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
> ------------------------------------------------------------------------------------------------------------------------
> Get Seq                                               1              1           0         96.8          10.3       1.0X
>
> OpenJDK 64-Bit Server VM 1.8.0_422-b05 on Linux 5.15.0-1068-azure
> AMD EPYC 7763 64-Core Processor
> Test get seq with 1000 from row:          Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
> ------------------------------------------------------------------------------------------------------------------------
> Get Seq                                               1              1           0         97.0          10.3       1.0X
>
> OpenJDK 64-Bit Server VM 1.8.0_422-b05 on Linux 5.15.0-1068-azure
> AMD EPYC 7763 64-Core Processor
> Test get seq with 1 from row:         Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
> ------------------------------------------------------------------------------------------------------------------------
> Get Seq                                               1              1           0         96.8          10.3       1.0X
>
> OpenJDK 64-Bit Server VM 1.8.0_422-b05 on Linux 5.15.0-1068-azure
> AMD EPYC 7763 64-Core Processor
> Test get seq with 10 from row:        Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
> ------------------------------------------------------------------------------------------------------------------------
> Get Seq                                               1              1           0         96.9          10.3       1.0X {code}
> master
> {code:java}
> OpenJDK 64-Bit Server VM 17.0.12+7-LTS on Linux 6.5.0-1025-azure
> AMD EPYC 7763 64-Core Processor
> Test get seq with 10 from row:            Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
> ------------------------------------------------------------------------------------------------------------------------
> Get Seq                                               9             10           0         10.5          94.8       1.0X
>
> OpenJDK 64-Bit Server VM 17.0.12+7-LTS on Linux 6.5.0-1025-azure
> AMD EPYC 7763 64-Core Processor
> Test get seq with 100 from row:           Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
>

[jira] [Created] (SPARK-49187) Upgrade slf4j to 2.0.15

2024-08-09 Thread Yang Jie (Jira)
Yang Jie created SPARK-49187:


 Summary: Upgrade slf4j to 2.0.15
 Key: SPARK-49187
 URL: https://issues.apache.org/jira/browse/SPARK-49187
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Created] (SPARK-49178) `Row#getSeq` exhibits a performance regression between master and 3.5.

2024-08-09 Thread Yang Jie (Jira)
Yang Jie created SPARK-49178:


 Summary: `Row#getSeq` exhibits a performance regression between 
master and 3.5.
 Key: SPARK-49178
 URL: https://issues.apache.org/jira/browse/SPARK-49178
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
object GetSeqBenchmark extends SqlBasedBenchmark {
  import spark.implicits._

  def testRowGetSeq(valuesPerIteration: Int, arraySize: Int): Unit = {

val data = (0 until arraySize).toArray
val row = Seq(data).toDF().collect().head

val benchmark = new Benchmark(
  s"Test get seq with $arraySize from row",
  valuesPerIteration,
  output = output)

benchmark.addCase("Get Seq") { _: Int =>

  for (_ <- 0L until valuesPerIteration) {
val ret = row.getSeq(0)
  }
}

benchmark.run()
  }

  override def runBenchmarkSuite(mainArgs: Array[String]): Unit = {
val valuesPerIteration = 10
testRowGetSeq(valuesPerIteration, 10)
testRowGetSeq(valuesPerIteration, 100)
testRowGetSeq(valuesPerIteration, 1000)
testRowGetSeq(valuesPerIteration, 1)
testRowGetSeq(valuesPerIteration, 10)
  }
} {code}
 

branch-3.5
{code:java}
OpenJDK 64-Bit Server VM 1.8.0_422-b05 on Linux 5.15.0-1068-azure
AMD EPYC 7763 64-Core Processor
Test get seq with 10 from row:            Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------
Get Seq                                               1              1           0        194.8           5.1       1.0X

OpenJDK 64-Bit Server VM 1.8.0_422-b05 on Linux 5.15.0-1068-azure
AMD EPYC 7763 64-Core Processor
Test get seq with 100 from row:           Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------
Get Seq                                               1              1           0         96.8          10.3       1.0X

OpenJDK 64-Bit Server VM 1.8.0_422-b05 on Linux 5.15.0-1068-azure
AMD EPYC 7763 64-Core Processor
Test get seq with 1000 from row:          Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------
Get Seq                                               1              1           0         97.0          10.3       1.0X

OpenJDK 64-Bit Server VM 1.8.0_422-b05 on Linux 5.15.0-1068-azure
AMD EPYC 7763 64-Core Processor
Test get seq with 1 from row:         Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------
Get Seq                                               1              1           0         96.8          10.3       1.0X

OpenJDK 64-Bit Server VM 1.8.0_422-b05 on Linux 5.15.0-1068-azure
AMD EPYC 7763 64-Core Processor
Test get seq with 10 from row:        Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------
Get Seq                                               1              1           0         96.9          10.3       1.0X {code}
master
{code:java}
OpenJDK 64-Bit Server VM 17.0.12+7-LTS on Linux 6.5.0-1025-azure
AMD EPYC 7763 64-Core Processor
Test get seq with 10 from row:            Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------
Get Seq                                               9             10           0         10.5          94.8       1.0X

OpenJDK 64-Bit Server VM 17.0.12+7-LTS on Linux 6.5.0-1025-azure
AMD EPYC 7763 64-Core Processor
Test get seq with 100 from row:           Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------
Get Seq                                              65             65           1          1.5         646.4       1.0X

OpenJDK 64-Bit Server VM 17.0.12+7-LTS on Linux 6.5.0-1025-azure
AMD EPYC 7763 64-Core Processor
Test get seq with 1000 from row:          Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------
Get Seq                                             614

[jira] [Updated] (SPARK-49155) Use more appropriate parameter type to construct `GenericArrayData`

2024-08-07 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-49155:
-
Summary: Use more appropriate parameter type to construct 
`GenericArrayData`  (was: should prioritize using an Array of Any|AnyRef as a 
parameter to construct GenericArrayData)

> Use more appropriate parameter type to construct `GenericArrayData`
> ---
>
> Key: SPARK-49155
> URL: https://issues.apache.org/jira/browse/SPARK-49155
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Created] (SPARK-49155) should prioritize using an Array of Any|AnyRef as a parameter to construct GenericArrayData

2024-08-07 Thread Yang Jie (Jira)
Yang Jie created SPARK-49155:


 Summary: should prioritize using an Array of Any|AnyRef as a 
parameter to construct GenericArrayData
 Key: SPARK-49155
 URL: https://issues.apache.org/jira/browse/SPARK-49155
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Resolved] (SPARK-49077) Remove bouncycastle-related test dependencies from hive-thriftserver module

2024-08-01 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-49077.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47563
[https://github.com/apache/spark/pull/47563]

> Remove bouncycastle-related test dependencies from hive-thriftserver module
> ---
>
> Key: SPARK-49077
> URL: https://issues.apache.org/jira/browse/SPARK-49077
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL, Tests
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> After SPARK-49066 merged, other than `OrcEncryptionSuite`, the test cases for 
> writing Orc data no longer require the use of `FakeKeyProvider`. As a result, 
> `hive-thriftserver` no longer needs these test dependencies.






[jira] [Assigned] (SPARK-49077) Remove bouncycastle-related test dependencies from hive-thriftserver module

2024-08-01 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-49077:


Assignee: Yang Jie

> Remove bouncycastle-related test dependencies from hive-thriftserver module
> ---
>
> Key: SPARK-49077
> URL: https://issues.apache.org/jira/browse/SPARK-49077
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL, Tests
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
>
> After SPARK-49066 merged, other than `OrcEncryptionSuite`, the test cases for 
> writing Orc data no longer require the use of `FakeKeyProvider`. As a result, 
> `hive-thriftserver` no longer needs these test dependencies.






[jira] [Assigned] (SPARK-49076) Fix the outdated logical plan name in `AstBuilder's` comments

2024-08-01 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-49076:


Assignee: BingKun Pan

> Fix the outdated logical plan name in `AstBuilder's` comments
> -
>
> Key: SPARK-49076
> URL: https://issues.apache.org/jira/browse/SPARK-49076
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Critical
>  Labels: pull-request-available
>







[jira] [Resolved] (SPARK-49076) Fix the outdated logical plan name in `AstBuilder's` comments

2024-08-01 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-49076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-49076.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47562
[https://github.com/apache/spark/pull/47562]

> Fix the outdated logical plan name in `AstBuilder's` comments
> -
>
> Key: SPARK-49076
> URL: https://issues.apache.org/jira/browse/SPARK-49076
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Created] (SPARK-49077) Remove bouncycastle-related test dependencies from hive-thriftserver module

2024-07-31 Thread Yang Jie (Jira)
Yang Jie created SPARK-49077:


 Summary: Remove bouncycastle-related test dependencies from 
hive-thriftserver module
 Key: SPARK-49077
 URL: https://issues.apache.org/jira/browse/SPARK-49077
 Project: Spark
  Issue Type: Improvement
  Components: SQL, Tests
Affects Versions: 4.0.0
Reporter: Yang Jie


After SPARK-49066 merged, other than `OrcEncryptionSuite`, the test cases for 
writing Orc data no longer require the use of `FakeKeyProvider`. As a result, 
`hive-thriftserver` no longer needs these test dependencies.






[jira] [Assigned] (SPARK-48964) Fix the discrepancy between implementation, comment and documentation of option recursive.fields.max.depth in ProtoBuf connector

2024-07-31 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48964:


Assignee: Wei Guo

> Fix the discrepancy between implementation, comment and documentation of 
> option recursive.fields.max.depth in ProtoBuf connector
> 
>
> Key: SPARK-48964
> URL: https://issues.apache.org/jira/browse/SPARK-48964
> Project: Spark
>  Issue Type: Documentation
>  Components: Connect
>Affects Versions: 3.5.0, 4.0.0, 3.5.1, 3.5.2, 3.5.3
>Reporter: Yuchen Liu
>Assignee: Wei Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> After the three PRs ([https://github.com/apache/spark/pull/38922,] 
> [https://github.com/apache/spark/pull/40011,] 
> [https://github.com/apache/spark/pull/40141]) working on the same option, 
> some legacy comments and documentation have not been updated to match the 
> latest implementation. This task should consolidate them. Below is the 
> correct description of the behavior.
> The `recursive.fields.max.depth` parameter can be specified in the 
> from_protobuf options to control the maximum allowed recursion depth for a 
> field. Setting `recursive.fields.max.depth` to 1 drops all recursive fields, 
> setting it to 2 allows it to be recursed once, and setting it to 3 allows it 
> to be recursed twice. Attempting to set `recursive.fields.max.depth` to a 
> value greater than 10 is not allowed. If `recursive.fields.max.depth` is 
> set to a value smaller than 1, recursive fields are not permitted. The 
> default value of the option is -1. If a protobuf record has more depth for 
> recursive fields than the allowed value, it will be truncated and some fields 
> may be discarded. This check is based on the fully qualified field type. The 
> SQL schema for the protobuf message
> {code:java}
> message Person { string name = 1; Person bff = 2 }{code}
> will vary based on the value of `recursive.fields.max.depth`.
> {code:java}
> 1: struct<name: string>
> 2: struct<name: string, bff: struct<name: string>>
> 3: struct<name: string, bff: struct<name: string, bff: struct<name: string>>>
> {code}
>  
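As a minimal sketch of how the option above is passed to `from_protobuf` (the descriptor file path, input column name, and DataFrame `df` are assumptions for illustration, not from the issue):

{code:java}
import scala.jdk.CollectionConverters._
import org.apache.spark.sql.protobuf.functions.from_protobuf

// Depth 2: Person.bff is allowed to recurse once; deeper occurrences
// of the recursive field are truncated out of the resulting schema.
val options = Map("recursive.fields.max.depth" -> "2").asJava
val parsed = df.select(
  from_protobuf(df("value"), "Person", "/path/to/person.desc", options).as("person"))
{code}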






[jira] [Resolved] (SPARK-48964) Fix the discrepancy between implementation, comment and documentation of option recursive.fields.max.depth in ProtoBuf connector

2024-07-31 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48964.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47458
[https://github.com/apache/spark/pull/47458]

> Fix the discrepancy between implementation, comment and documentation of 
> option recursive.fields.max.depth in ProtoBuf connector
> 
>
> Key: SPARK-48964
> URL: https://issues.apache.org/jira/browse/SPARK-48964
> Project: Spark
>  Issue Type: Documentation
>  Components: Connect
>Affects Versions: 3.5.0, 4.0.0, 3.5.1, 3.5.2, 3.5.3
>Reporter: Yuchen Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> After the three PRs ([https://github.com/apache/spark/pull/38922,] 
> [https://github.com/apache/spark/pull/40011,] 
> [https://github.com/apache/spark/pull/40141]) working on the same option, 
> some legacy comments and documentation have not been updated to match the 
> latest implementation. This task should consolidate them. Below is the 
> correct description of the behavior.
> The `recursive.fields.max.depth` parameter can be specified in the 
> from_protobuf options to control the maximum allowed recursion depth for a 
> field. Setting `recursive.fields.max.depth` to 1 drops all recursive fields, 
> setting it to 2 allows it to be recursed once, and setting it to 3 allows it 
> to be recursed twice. Attempting to set `recursive.fields.max.depth` to a 
> value greater than 10 is not allowed. If `recursive.fields.max.depth` is 
> set to a value smaller than 1, recursive fields are not permitted. The 
> default value of the option is -1. If a protobuf record has more depth for 
> recursive fields than the allowed value, it will be truncated and some fields 
> may be discarded. This check is based on the fully qualified field type. The 
> SQL schema for the protobuf message
> {code:java}
> message Person { string name = 1; Person bff = 2 }{code}
> will vary based on the value of `recursive.fields.max.depth`.
> {code:java}
> 1: struct<name: string>
> 2: struct<name: string, bff: struct<name: string>>
> 3: struct<name: string, bff: struct<name: string, bff: struct<name: string>>>
> {code}
>  






[jira] [Updated] (SPARK-48829) Upgrade `RoaringBitmap` to 1.2.1

2024-07-29 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-48829:
-
Summary:  Upgrade `RoaringBitmap` to 1.2.1  (was:  Upgrade `RoaringBitmap` 
to 1.2.0)

>  Upgrade `RoaringBitmap` to 1.2.1
> -
>
> Key: SPARK-48829
> URL: https://issues.apache.org/jira/browse/SPARK-48829
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Resolved] (SPARK-48829) Upgrade `RoaringBitmap` to 1.2.0

2024-07-29 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48829.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47247
[https://github.com/apache/spark/pull/47247]

>  Upgrade `RoaringBitmap` to 1.2.0
> -
>
> Key: SPARK-48829
> URL: https://issues.apache.org/jira/browse/SPARK-48829
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Commented] (SPARK-49055) Investigate OrcEncryptionSuite UnsatisfiedLinkError on AppleSilicon MacOS environment

2024-07-29 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-49055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17869524#comment-17869524
 ] 

Yang Jie commented on SPARK-49055:
--

Yes, this did not cause the tests to fail, so what I initially wanted to 
clarify was: whether there is a need to install additional libs, or use a 
specific release version of Hadoop 3.4.0.

> Investigate OrcEncryptionSuite UnsatisfiedLinkError on AppleSilicon MacOS 
> environment
> -
>
> Key: SPARK-49055
> URL: https://issues.apache.org/jira/browse/SPARK-49055
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
> Attachments: image-2024-07-30-12-59-02-278.png
>
>
> {code}
> git reset --hard 49b4c3bc9c09325de941dfaf41e4fd3a4a4c345f // 
> [SPARK-45393][BUILD] Upgrade Hadoop to 3.4.0
> build/sbt clean "sql/testOnly 
> org.apache.spark.sql.execution.datasources.orc.OrcEncryptionSuite"
> ...
> [info] OrcEncryptionSuite:
> 12:42:55.441 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 12:42:57.950 WARN org.apache.hadoop.crypto.OpensslCipher: Failed to load 
> OpenSSL Cipher.
> java.lang.UnsatisfiedLinkError: 'boolean 
> org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl()'
>   at org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl(Native 
> Method)
>   at 
> org.apache.hadoop.crypto.OpensslCipher.<init>(OpensslCipher.java:86)
>   at 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.<init>(OpensslAesCtrCryptoCodec.java:36)
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
>   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
> [info] - Write and read an encrypted file (2 seconds, 486 milliseconds)
> [info] - Write and read an encrypted table (402 milliseconds)
> [info] - SPARK-35325: Write and read encrypted nested columns (299 
> milliseconds)
> [info] - SPARK-35992: Write and read fully-encrypted columns with default 
> masking (623 milliseconds)
> 12:42:59.856 WARN 
> org.apache.spark.sql.execution.datasources.orc.OrcEncryptionSuite: 
> = POSSIBLE THREAD LEAK IN SUITE 
> o.a.s.sql.execution.datasources.orc.OrcEncryptionSuite, threads: rpc-boss-3-1 
> (daemon=true), Thread-17 (daemon=true), ForkJoinPool.commonPool-worker-2 
> (daemon=true), shuffle-boss-6-1 (daemon=true), 
> ForkJoinPool.commonPool-worker-1 (daemon=true), Thread-18 (daemon=true), 
> ForkJoinPool.commonPool-worker-3 (daemon=true) =
> [info] Run completed in 5 seconds, 291 milliseconds.
> [info] Total number of tests run: 4
> [info] Suites: completed 1, aborted 0
> [info] Tests: succeeded 4, failed 0, canceled 0, ignored 0, pending 0
> [info] All tests passed.
> {code}






[jira] [Comment Edited] (SPARK-49055) Fix OrcEncryptionSuite failure on AppleSilicon MacOS environment

2024-07-29 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-49055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17869518#comment-17869518
 ] 

Yang Jie edited comment on SPARK-49055 at 7/30/24 4:59 AM:
---

I apologize for providing misleading information. I just reviewed the recent GA 
test logs and found that this is not a Mac-only issue:
 - [https://github.com/apache/spark/actions/runs/10155611310/job/28082653276]

 

!image-2024-07-30-12-59-02-278.png!


was (Author: luciferyang):
I apologize for providing misleading information. I just reviewed the recent GA 
test logs and I found that this is not a Mac Only issue:

- https://github.com/apache/spark/actions/runs/10155611310/job/28082653276

![image](https://github.com/user-attachments/assets/d82393a4-1fbd-4b41-a2c0-8f99e7ed3c8c)

> Fix OrcEncryptionSuite failure on AppleSilicon MacOS environment
> 
>
> Key: SPARK-49055
> URL: https://issues.apache.org/jira/browse/SPARK-49055
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 4.0.0
> Environment: MacOS on AppleSilicon
>Reporter: Yang Jie
>Priority: Major
> Attachments: image-2024-07-30-12-59-02-278.png
>
>
> {code}
> git reset --hard 49b4c3bc9c09325de941dfaf41e4fd3a4a4c345f // 
> [SPARK-45393][BUILD] Upgrade Hadoop to 3.4.0
> build/sbt clean "sql/testOnly 
> org.apache.spark.sql.execution.datasources.orc.OrcEncryptionSuite"
> ...
> [info] OrcEncryptionSuite:
> 12:42:55.441 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 12:42:57.950 WARN org.apache.hadoop.crypto.OpensslCipher: Failed to load 
> OpenSSL Cipher.
> java.lang.UnsatisfiedLinkError: 'boolean 
> org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl()'
>   at org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl(Native 
> Method)
>   at 
> org.apache.hadoop.crypto.OpensslCipher.<init>(OpensslCipher.java:86)
>   at 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.<init>(OpensslAesCtrCryptoCodec.java:36)
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
>   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
> [info] - Write and read an encrypted file (2 seconds, 486 milliseconds)
> [info] - Write and read an encrypted table (402 milliseconds)
> [info] - SPARK-35325: Write and read encrypted nested columns (299 
> milliseconds)
> [info] - SPARK-35992: Write and read fully-encrypted columns with default 
> masking (623 milliseconds)
> 12:42:59.856 WARN 
> org.apache.spark.sql.execution.datasources.orc.OrcEncryptionSuite: 
> = POSSIBLE THREAD LEAK IN SUITE 
> o.a.s.sql.execution.datasources.orc.OrcEncryptionSuite, threads: rpc-boss-3-1 
> (daemon=true), Thread-17 (daemon=true), ForkJoinPool.commonPool-worker-2 
> (daemon=true), shuffle-boss-6-1 (daemon=true), 
> ForkJoinPool.commonPool-worker-1 (daemon=true), Thread-18 (daemon=true), 
> ForkJoinPool.commonPool-worker-3 (daemon=true) =
> [info] Run completed in 5 seconds, 291 milliseconds.
> [info] Total number of tests run: 4
> [info] Suites: completed 1, aborted 0
> [info] Tests: succeeded 4, failed 0, canceled 0, ignored 0, pending 0
> [info] All tests passed.
> {code}






[jira] [Commented] (SPARK-49055) Fix OrcEncryptionSuite failure on AppleSilicon MacOS environment

2024-07-29 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-49055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17869518#comment-17869518
 ] 

Yang Jie commented on SPARK-49055:
--

I apologize for providing misleading information. I just reviewed the recent GA 
test logs and found that this is not a Mac-only issue:

- https://github.com/apache/spark/actions/runs/10155611310/job/28082653276

![image](https://github.com/user-attachments/assets/d82393a4-1fbd-4b41-a2c0-8f99e7ed3c8c)

> Fix OrcEncryptionSuite failure on AppleSilicon MacOS environment
> 
>
> Key: SPARK-49055
> URL: https://issues.apache.org/jira/browse/SPARK-49055
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 4.0.0
> Environment: MacOS on AppleSilicon
>Reporter: Yang Jie
>Priority: Major
>
> {code}
> git reset --hard 49b4c3bc9c09325de941dfaf41e4fd3a4a4c345f // 
> [SPARK-45393][BUILD] Upgrade Hadoop to 3.4.0
> build/sbt clean "sql/testOnly 
> org.apache.spark.sql.execution.datasources.orc.OrcEncryptionSuite"
> ...
> [info] OrcEncryptionSuite:
> 12:42:55.441 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 12:42:57.950 WARN org.apache.hadoop.crypto.OpensslCipher: Failed to load 
> OpenSSL Cipher.
> java.lang.UnsatisfiedLinkError: 'boolean 
> org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl()'
>   at org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl(Native 
> Method)
>   at 
> org.apache.hadoop.crypto.OpensslCipher.<init>(OpensslCipher.java:86)
>   at 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.<init>(OpensslAesCtrCryptoCodec.java:36)
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
>   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
> [info] - Write and read an encrypted file (2 seconds, 486 milliseconds)
> [info] - Write and read an encrypted table (402 milliseconds)
> [info] - SPARK-35325: Write and read encrypted nested columns (299 
> milliseconds)
> [info] - SPARK-35992: Write and read fully-encrypted columns with default 
> masking (623 milliseconds)
> 12:42:59.856 WARN 
> org.apache.spark.sql.execution.datasources.orc.OrcEncryptionSuite: 
> = POSSIBLE THREAD LEAK IN SUITE 
> o.a.s.sql.execution.datasources.orc.OrcEncryptionSuite, threads: rpc-boss-3-1 
> (daemon=true), Thread-17 (daemon=true), ForkJoinPool.commonPool-worker-2 
> (daemon=true), shuffle-boss-6-1 (daemon=true), 
> ForkJoinPool.commonPool-worker-1 (daemon=true), Thread-18 (daemon=true), 
> ForkJoinPool.commonPool-worker-3 (daemon=true) =
> [info] Run completed in 5 seconds, 291 milliseconds.
> [info] Total number of tests run: 4
> [info] Suites: completed 1, aborted 0
> [info] Tests: succeeded 4, failed 0, canceled 0, ignored 0, pending 0
> [info] All tests passed.
> {code}






[jira] [Created] (SPARK-49012) ThriftServerQueryTestSuite failed during the Maven daily test

2024-07-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-49012:


 Summary: ThriftServerQueryTestSuite failed during the Maven daily 
test
 Key: SPARK-49012
 URL: https://issues.apache.org/jira/browse/SPARK-49012
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
- sql-on-files.sql *** FAILED ***
22911  "" did not contain "Exception" Exception did not match for query #6
22912  CREATE TABLE sql_on_files.test_orc USING ORC AS SELECT 1, expected: , 
but got: java.sql.SQLException
22913  org.apache.hive.service.cli.HiveSQLException: Error running query: 
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
stage 8542.0 failed 1 times, most recent failure: Lost task 0.0 in stage 8542.0 
(TID 8594) (localhost executor driver): java.lang.NoClassDefFoundError: 
org/bouncycastle/jce/provider/BouncyCastleProvider
22914   at 
test.org.apache.spark.sql.execution.datasources.orc.FakeKeyProvider$Factory.createProvider(FakeKeyProvider.java:127)
22915   at 
org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
22916   at 
org.apache.hadoop.crypto.key.KeyProviderFactory.getProviders(KeyProviderFactory.java:68)
22917   at 
org.apache.orc.impl.HadoopShimsCurrent.createKeyProvider(HadoopShimsCurrent.java:97)
22918   at 
org.apache.orc.impl.HadoopShimsCurrent.getHadoopKeyProvider(HadoopShimsCurrent.java:131)
22919   at 
org.apache.orc.impl.CryptoUtils$HadoopKeyProviderFactory.create(CryptoUtils.java:158)
22920   at org.apache.orc.impl.CryptoUtils.getKeyProvider(CryptoUtils.java:141)
22921   at org.apache.orc.impl.WriterImpl.setupEncryption(WriterImpl.java:1015)
22922   at org.apache.orc.impl.WriterImpl.<init>(WriterImpl.java:164)
22923   at org.apache.orc.OrcFile.createWriter(OrcFile.java:1078)
22924   at 
org.apache.spark.sql.execution.datasources.orc.OrcOutputWriter.<init>(OrcOutputWriter.scala:49)
22925   at 
org.apache.spark.sql.execution.datasources.orc.OrcFileFormat$$anon$1.newInstance(OrcFileFormat.scala:89)
22926   at 
org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:180)
22927   at 
org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:165)
22928   at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:391)
22929   at 
org.apache.spark.sql.execution.datasources.WriteFilesExec.$anonfun$doExecuteWrite$1(WriteFiles.scala:107)
22930   at 
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:901)
22931   at 
org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:901)
22932   at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
22933   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:374)
22934   at org.apache.spark.rdd.RDD.iterator(RDD.scala:338)
22935   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
22936   at 
org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:171)
22937   at org.apache.spark.scheduler.Task.run(Task.scala:146)
22938   at 
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$5(Executor.scala:644)
22939   at 
org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
22940   at 
org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
22941   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:99)
22942   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:647)
22943   at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
22944   at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
22945   at java.base/java.lang.Thread.run(Thread.java:840)
22946  Caused by: java.lang.ClassNotFoundException: 
org.bouncycastle.jce.provider.BouncyCastleProvider
22947   at 
java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641)
22948   at 
java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
22949   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:525)
22950   ... 32 more {code}
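The missing `BouncyCastleProvider` comes from a test-scoped dependency that the hive-thriftserver module no longer declares after SPARK-49077. As a sketch of the kind of Maven declaration involved — the artifact coordinates assume the JDK18+ line of BouncyCastle and a `bouncycastle.version` property, which are illustrative, not taken from Spark's actual pom:

{code:xml}
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcprov-jdk18on</artifactId>
  <version>${bouncycastle.version}</version>
  <scope>test</scope>
</dependency>
{code}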






[jira] [Created] (SPARK-48975) Remove unnecessary `ScalaReflectionLock` definition from `protobuf`

2024-07-23 Thread Yang Jie (Jira)
Yang Jie created SPARK-48975:


 Summary: Remove unnecessary `ScalaReflectionLock` definition from 
`protobuf`
 Key: SPARK-48975
 URL: https://issues.apache.org/jira/browse/SPARK-48975
 Project: Spark
  Issue Type: Improvement
  Components: Protobuf
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Resolved] (SPARK-48893) Add some examples for linearRegression built-in functions

2024-07-22 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48893.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47343
[https://github.com/apache/spark/pull/47343]

> Add some examples for linearRegression built-in functions
> -
>
> Key: SPARK-48893
> URL: https://issues.apache.org/jira/browse/SPARK-48893
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Assignee: Wei Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Assigned] (SPARK-48864) Refactor `HiveQuerySuite` and fix bug

2024-07-12 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48864:


Assignee: BingKun Pan

> Refactor `HiveQuerySuite` and fix bug
> -
>
> Key: SPARK-48864
> URL: https://issues.apache.org/jira/browse/SPARK-48864
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL, Tests
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
>  Labels: pull-request-available
>







[jira] [Resolved] (SPARK-48864) Refactor `HiveQuerySuite` and fix bug

2024-07-12 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48864.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47293
[https://github.com/apache/spark/pull/47293]

> Refactor `HiveQuerySuite` and fix bug
> -
>
> Key: SPARK-48864
> URL: https://issues.apache.org/jira/browse/SPARK-48864
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL, Tests
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Updated] (SPARK-48876) Upgrade Guava used by the connect module to 33.2.1-jre

2024-07-11 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-48876:
-
Component/s: Build
 (was: Connect)

> Upgrade Guava used by the connect module to 33.2.1-jre
> --
>
> Key: SPARK-48876
> URL: https://issues.apache.org/jira/browse/SPARK-48876
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>







[jira] [Created] (SPARK-48876) Upgrade Guava used by the connect module to 33.2.1-jre

2024-07-11 Thread Yang Jie (Jira)
Yang Jie created SPARK-48876:


 Summary: Upgrade Guava used by the connect module to 33.2.1-jre
 Key: SPARK-48876
 URL: https://issues.apache.org/jira/browse/SPARK-48876
 Project: Spark
  Issue Type: Improvement
  Components: Connect
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Resolved] (SPARK-48866) Fix hints of valid charset in the error message of INVALID_PARAMETER_VALUE.CHARSET

2024-07-11 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48866.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47295
[https://github.com/apache/spark/pull/47295]

> Fix hints of valid charset in the error message of 
> INVALID_PARAMETER_VALUE.CHARSET
> --
>
> Key: SPARK-48866
> URL: https://issues.apache.org/jira/browse/SPARK-48866
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Kent Yao
>Assignee: Kent Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Resolved] (SPARK-48826) Upgrade `fasterxml.jackson` to 2.17.2

2024-07-09 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48826.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47241
[https://github.com/apache/spark/pull/47241]

> Upgrade `fasterxml.jackson` to 2.17.2
> -
>
> Key: SPARK-48826
> URL: https://issues.apache.org/jira/browse/SPARK-48826
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Assignee: Wei Guo
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Created] (SPARK-48840) Remove the check for the existence of ./dev/free_disk_space_container

2024-07-08 Thread Yang Jie (Jira)
Yang Jie created SPARK-48840:


 Summary: Remove the check for the existence of 
./dev/free_disk_space_container
 Key: SPARK-48840
 URL: https://issues.apache.org/jira/browse/SPARK-48840
 Project: Spark
  Issue Type: Improvement
  Components: Project Infra
Affects Versions: 4.0.0
Reporter: Yang Jie


`./dev/free_disk_space_container` has already been backported to branch-3.4 and 
branch-3.5 through https://github.com/apache/spark/pull/45624 and 
https://github.com/apache/spark/pull/43381, so there is no need to check its 
existence before execution.






[jira] [Resolved] (SPARK-48720) Align the command ALTER TABLE ... UNSET TBLPROPERTIES ... in v1 and v2

2024-07-07 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48720.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47097
[https://github.com/apache/spark/pull/47097]

> Align the command ALTER TABLE ... UNSET TBLPROPERTIES ... in v1 and v2 
> ---
>
> Key: SPARK-48720
> URL: https://issues.apache.org/jira/browse/SPARK-48720
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Assigned] (SPARK-48720) Align the command ALTER TABLE ... UNSET TBLPROPERTIES ... in v1 and v2

2024-07-07 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48720:


Assignee: BingKun Pan

> Align the command ALTER TABLE ... UNSET TBLPROPERTIES ... in v1 and v2 
> ---
>
> Key: SPARK-48720
> URL: https://issues.apache.org/jira/browse/SPARK-48720
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Resolved] (SPARK-48805) Replace calls to bridged APIs based on SparkSession#sqlContext with SparkSession API

2024-07-04 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48805.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47210
[https://github.com/apache/spark/pull/47210]

> Replace calls to bridged APIs based on SparkSession#sqlContext with 
> SparkSession API
> 
>
> Key: SPARK-48805
> URL: https://issues.apache.org/jira/browse/SPARK-48805
> Project: Spark
>  Issue Type: Improvement
>  Components: Examples, ML, SQL, Structured Streaming
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> In the internal code of Spark, there are instances where, despite having a 
> SparkSession instance, the bridged APIs based on SparkSession#sqlContext are 
> still used. So we can make some simplifications:
> 1. `SparkSession#sqlContext#read` -> `SparkSession#read`
> ```scala
> /**
>    * Returns a [[DataFrameReader]] that can be used to read non-streaming 
> data in as a
>    * `DataFrame`.
>    * {{{
>    *   sqlContext.read.parquet("/path/to/file.parquet")
>    *   sqlContext.read.schema(schema).json("/path/to/file.json")
>    * }}}
>    *
>    * @group genericdata
>    * @since 1.4.0
>    */
>   def read: DataFrameReader = sparkSession.read
> ```
> 2. `SparkSession#sqlContext#setConf` -> `SparkSession#conf#set`
> ```scala
>   /**
>    * Set the given Spark SQL configuration property.
>    *
>    * @group config
>    * @since 1.0.0
>    */
>   def setConf(key: String, value: String): Unit = {
>     sparkSession.conf.set(key, value)
>   }
> ```
> 3. `SparkSession#sqlContext#getConf` -> `SparkSession#conf#get`
> ```scala
> /**
>    * Return the value of Spark SQL configuration property for the given key.
>    *
>    * @group config
>    * @since 1.0.0
>    */
>   def getConf(key: String): String = {
>     sparkSession.conf.get(key)
>   }
> ```
> 4. `SparkSession#sqlContext#createDataFrame` -> `SparkSession#createDataFrame`
> ```scala
> /**
>    * Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
>    *
>    * @group dataframes
>    * @since 1.3.0
>    */
>   def createDataFrame[A <: Product : TypeTag](rdd: RDD[A]): DataFrame = {
>     sparkSession.createDataFrame(rdd)
>   }
> ```
> 5. `SparkSession#sqlContext#sessionState` -> `SparkSession#sessionState`
> ```scala
> private[sql] def sessionState: SessionState = sparkSession.sessionState
> ```
> 6. `SparkSession#sqlContext#sharedState` -> `SparkSession#sharedState`
> ```scala
> private[sql] def sharedState: SharedState = sparkSession.sharedState
> ```
> 7. `SparkSession#sqlContext#streams` -> `SparkSession#streams`
> ```
> /**
>    * Returns a `StreamingQueryManager` that allows managing all the
>    * [[org.apache.spark.sql.streaming.StreamingQuery StreamingQueries]] 
> active on `this` context.
>    *
>    * @since 2.0.0
>    */
>   def streams: StreamingQueryManager = sparkSession.streams
> ```
> 8. `SparkSession#sqlContext#uncacheTable` -> `SparkSession#catalog#uncacheTable`
> ```
> /**
>    * Removes the specified table from the in-memory cache.
>    * @group cachemgmt
>    * @since 1.3.0
>    */
>   def uncacheTable(tableName: String): Unit = {
>     sparkSession.catalog.uncacheTable(tableName)
>   }
> ```






[jira] [Assigned] (SPARK-48805) Replace calls to bridged APIs based on SparkSession#sqlContext with SparkSession API

2024-07-04 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48805:


Assignee: Yang Jie

> Replace calls to bridged APIs based on SparkSession#sqlContext with 
> SparkSession API
> 
>
> Key: SPARK-48805
> URL: https://issues.apache.org/jira/browse/SPARK-48805
> Project: Spark
>  Issue Type: Improvement
>  Components: Examples, ML, SQL, Structured Streaming
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
>
> In the internal code of Spark, there are instances where, despite having a 
> SparkSession instance, the bridged APIs based on SparkSession#sqlContext are 
> still used. So we can make some simplifications:
> 1. `SparkSession#sqlContext#read` -> `SparkSession#read`
> ```scala
> /**
>    * Returns a [[DataFrameReader]] that can be used to read non-streaming 
> data in as a
>    * `DataFrame`.
>    * {{{
>    *   sqlContext.read.parquet("/path/to/file.parquet")
>    *   sqlContext.read.schema(schema).json("/path/to/file.json")
>    * }}}
>    *
>    * @group genericdata
>    * @since 1.4.0
>    */
>   def read: DataFrameReader = sparkSession.read
> ```
> 2. `SparkSession#sqlContext#setConf` -> `SparkSession#conf#set`
> ```scala
>   /**
>    * Set the given Spark SQL configuration property.
>    *
>    * @group config
>    * @since 1.0.0
>    */
>   def setConf(key: String, value: String): Unit = {
>     sparkSession.conf.set(key, value)
>   }
> ```
> 3. `SparkSession#sqlContext#getConf` -> `SparkSession#conf#get`
> ```scala
> /**
>    * Return the value of Spark SQL configuration property for the given key.
>    *
>    * @group config
>    * @since 1.0.0
>    */
>   def getConf(key: String): String = {
>     sparkSession.conf.get(key)
>   }
> ```
> 4. `SparkSession#sqlContext#createDataFrame` -> `SparkSession#createDataFrame`
> ```scala
> /**
>    * Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
>    *
>    * @group dataframes
>    * @since 1.3.0
>    */
>   def createDataFrame[A <: Product : TypeTag](rdd: RDD[A]): DataFrame = {
>     sparkSession.createDataFrame(rdd)
>   }
> ```
> 5. `SparkSession#sqlContext#sessionState` -> `SparkSession#sessionState`
> ```scala
> private[sql] def sessionState: SessionState = sparkSession.sessionState
> ```
> 6. `SparkSession#sqlContext#sharedState` -> `SparkSession#sharedState`
> ```scala
> private[sql] def sharedState: SharedState = sparkSession.sharedState
> ```
> 7. `SparkSession#sqlContext#streams` -> `SparkSession#streams`
> ```
> /**
>    * Returns a `StreamingQueryManager` that allows managing all the
>    * [[org.apache.spark.sql.streaming.StreamingQuery StreamingQueries]] 
> active on `this` context.
>    *
>    * @since 2.0.0
>    */
>   def streams: StreamingQueryManager = sparkSession.streams
> ```
> 8. `SparkSession#sqlContext#uncacheTable` -> `SparkSession#catalog#uncacheTable`
> ```
> /**
>    * Removes the specified table from the in-memory cache.
>    * @group cachemgmt
>    * @since 1.3.0
>    */
>   def uncacheTable(tableName: String): Unit = {
>     sparkSession.catalog.uncacheTable(tableName)
>   }
> ```






[jira] [Created] (SPARK-48805) Replace calls to bridged APIs based on SparkSession#sqlContext with SparkSession API

2024-07-04 Thread Yang Jie (Jira)
Yang Jie created SPARK-48805:


 Summary: Replace calls to bridged APIs based on 
SparkSession#sqlContext with SparkSession API
 Key: SPARK-48805
 URL: https://issues.apache.org/jira/browse/SPARK-48805
 Project: Spark
  Issue Type: Improvement
  Components: Examples, ML, SQL, Structured Streaming
Affects Versions: 4.0.0
Reporter: Yang Jie


In the internal code of Spark, there are instances where, despite having a 
SparkSession instance, the bridged APIs based on SparkSession#sqlContext are 
still used. So we can make some simplifications:


1. `SparkSession#sqlContext#read` -> `SparkSession#read`


```scala
/**
   * Returns a [[DataFrameReader]] that can be used to read non-streaming data 
in as a
   * `DataFrame`.
   * {{{
   *   sqlContext.read.parquet("/path/to/file.parquet")
   *   sqlContext.read.schema(schema).json("/path/to/file.json")
   * }}}
   *
   * @group genericdata
   * @since 1.4.0
   */
  def read: DataFrameReader = sparkSession.read
```

2. `SparkSession#sqlContext#setConf` -> `SparkSession#conf#set`


```scala
  /**
   * Set the given Spark SQL configuration property.
   *
   * @group config
   * @since 1.0.0
   */
  def setConf(key: String, value: String): Unit = {
    sparkSession.conf.set(key, value)
  }
```


3. `SparkSession#sqlContext#getConf` -> `SparkSession#conf#get`

```scala
/**
   * Return the value of Spark SQL configuration property for the given key.
   *
   * @group config
   * @since 1.0.0
   */
  def getConf(key: String): String = {
    sparkSession.conf.get(key)
  }
```

4. `SparkSession#sqlContext#createDataFrame` -> `SparkSession#createDataFrame`

```scala
/**
   * Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
   *
   * @group dataframes
   * @since 1.3.0
   */
  def createDataFrame[A <: Product : TypeTag](rdd: RDD[A]): DataFrame = {
    sparkSession.createDataFrame(rdd)
  }
```

5. `SparkSession#sqlContext#sessionState` -> `SparkSession#sessionState`

```scala
private[sql] def sessionState: SessionState = sparkSession.sessionState
```

6. `SparkSession#sqlContext#sharedState` -> `SparkSession#sharedState`

```scala
private[sql] def sharedState: SharedState = sparkSession.sharedState
```

7. `SparkSession#sqlContext#streams` -> `SparkSession#streams`


```
/**
   * Returns a `StreamingQueryManager` that allows managing all the
   * [[org.apache.spark.sql.streaming.StreamingQuery StreamingQueries]] active 
on `this` context.
   *
   * @since 2.0.0
   */
  def streams: StreamingQueryManager = sparkSession.streams
```

8. `SparkSession#sqlContext#uncacheTable` -> `SparkSession#catalog#uncacheTable`

```
/**
   * Removes the specified table from the in-memory cache.
   * @group cachemgmt
   * @since 1.3.0
   */
  def uncacheTable(tableName: String): Unit = {
    sparkSession.catalog.uncacheTable(tableName)
  }
```
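The pattern behind all eight replacements is the same: the bridged method is a one-line forward to the session the caller already holds. A small self-contained sketch (plain Scala with made-up `Session` and `Bridge` classes standing in for `SparkSession` and `SQLContext`, not Spark's actual types) of why the extra hop can be dropped:

```scala
// Minimal model of the bridge being removed: every SQLContext-style method
// only delegates to the underlying session, so callers that already hold the
// session can call it directly.
class Session {
  private val settings = scala.collection.mutable.Map.empty[String, String]
  def set(key: String, value: String): Unit = settings(key) = value
  def get(key: String): String = settings(key)
  lazy val sqlContext: Bridge = new Bridge(this)
}

// Stand-in for SQLContext: a one-line forward per method, nothing more.
class Bridge(session: Session) {
  def setConf(key: String, value: String): Unit = session.set(key, value)
  def getConf(key: String): String = session.get(key)
}

object BridgeDemo {
  def main(args: Array[String]): Unit = {
    val spark = new Session
    spark.sqlContext.setConf("spark.sql.shuffle.partitions", "8") // bridged call
    spark.set("spark.sql.ansi.enabled", "true")                   // direct call
    // Both paths read and write the same underlying store, so the bridged
    // spelling adds no behavior over the direct one.
    assert(spark.get("spark.sql.shuffle.partitions") == "8")
    assert(spark.sqlContext.getConf("spark.sql.ansi.enabled") == "true")
    println("same store: " + spark.get("spark.sql.shuffle.partitions"))
  }
}
```

Since the delegation is pure, each replacement in the list above is a mechanical rewrite with no behavior change.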






[jira] [Resolved] (SPARK-48765) Enhance default value evaluation for SPARK_IDENT_STRING

2024-07-01 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48765.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47160
[https://github.com/apache/spark/pull/47160]

> Enhance default value evaluation for SPARK_IDENT_STRING
> ---
>
> Key: SPARK-48765
> URL: https://issues.apache.org/jira/browse/SPARK-48765
> Project: Spark
>  Issue Type: Improvement
>  Components: Deploy
>Affects Versions: 3.4.3
>Reporter: Cheng Pan
>Assignee: Cheng Pan
>Priority: Major
> Fix For: 4.0.0
>
>







[jira] [Assigned] (SPARK-48765) Enhance default value evaluation for SPARK_IDENT_STRING

2024-07-01 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48765:


Assignee: Cheng Pan

> Enhance default value evaluation for SPARK_IDENT_STRING
> ---
>
> Key: SPARK-48765
> URL: https://issues.apache.org/jira/browse/SPARK-48765
> Project: Spark
>  Issue Type: Improvement
>  Components: Deploy
>Affects Versions: 3.4.3
>Reporter: Cheng Pan
>Assignee: Cheng Pan
>Priority: Major
>







[jira] [Assigned] (SPARK-48732) Cleanup deprecated api usage related to JdbcDialect.compileAggregate

2024-06-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48732:


Assignee: Wei Guo

> Cleanup deprecated api usage related to JdbcDialect.compileAggregate
> 
>
> Key: SPARK-48732
> URL: https://issues.apache.org/jira/browse/SPARK-48732
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Assignee: Wei Guo
>Priority: Major
>







[jira] [Resolved] (SPARK-48732) Cleanup deprecated api usage related to JdbcDialect.compileAggregate

2024-06-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48732.
--
Resolution: Fixed

Resolved by https://github.com/apache/spark/pull/47070

> Cleanup deprecated api usage related to JdbcDialect.compileAggregate
> 
>
> Key: SPARK-48732
> URL: https://issues.apache.org/jira/browse/SPARK-48732
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Priority: Major
>







[jira] [Updated] (SPARK-48691) Upgrade `scalatest` related dependencies to the 3.2.19 series

2024-06-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-48691:
-
Summary: Upgrade `scalatest` related dependencies to the 3.2.19 series  
(was: Upgrade `scalatest` related dependencies to the 3.2.18 series)

> Upgrade `scalatest` related dependencies to the 3.2.19 series
> -
>
> Key: SPARK-48691
> URL: https://issues.apache.org/jira/browse/SPARK-48691
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Assigned] (SPARK-48691) Upgrade `scalatest` related dependencies to the 3.2.19 series

2024-06-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48691:


Assignee: Wei Guo

> Upgrade `scalatest` related dependencies to the 3.2.19 series
> -
>
> Key: SPARK-48691
> URL: https://issues.apache.org/jira/browse/SPARK-48691
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Assignee: Wei Guo
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Resolved] (SPARK-48691) Upgrade `scalatest` related dependencies to the 3.2.19 series

2024-06-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48691.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47065
[https://github.com/apache/spark/pull/47065]

> Upgrade `scalatest` related dependencies to the 3.2.19 series
> -
>
> Key: SPARK-48691
> URL: https://issues.apache.org/jira/browse/SPARK-48691
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Assignee: Wei Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Resolved] (SPARK-48724) Fix incorrect conf settings of ignoreCorruptFiles related tests case in ParquetQuerySuite

2024-06-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48724.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47088
[https://github.com/apache/spark/pull/47088]

> Fix incorrect conf settings of ignoreCorruptFiles related tests case in 
> ParquetQuerySuite
> -
>
> Key: SPARK-48724
> URL: https://issues.apache.org/jira/browse/SPARK-48724
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Assignee: Wei Guo
>Priority: Minor
> Fix For: 4.0.0
>
>
> The code is as follows:
> {code:java}
> withSQLConf(SQLConf.IGNORE_CORRUPT_FILES.key -> sqlConf) {
>   withSQLConf(SQLConf.IGNORE_CORRUPT_FILES.key -> "false") {
>     // ...
>   }
> }{code}
> The inner withSQLConf(SQLConf.IGNORE_CORRUPT_FILES.key -> "false") will
> overwrite the outer configuration, making it impossible to test the situation
> where sqlConf is true.
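The shadowing described above can be reproduced with a plain dynamic-scoping helper (a hypothetical `withConf`, not Spark's actual test utility): whatever the outer `sqlConf` value is, the body only ever observes the inner `"false"` binding, which is why the `true` case was never exercised.

```scala
// Hypothetical model of nested withSQLConf: a dynamically scoped setting.
// `withConf` binds a value for the duration of `body`, then restores the
// previous binding, so the innermost binding always shadows the outer one.
object NestedConfDemo {
  private var conf = Map.empty[String, String]

  def withConf[T](key: String, value: String)(body: => T): T = {
    val saved = conf
    conf = conf.updated(key, value)
    try body finally conf = saved
  }

  def current(key: String): Option[String] = conf.get(key)

  def main(args: Array[String]): Unit = {
    for (outer <- Seq("true", "false")) {
      val seen = withConf("ignoreCorruptFiles", outer) { // outer value (sqlConf)
        withConf("ignoreCorruptFiles", "false") {        // inner value shadows it
          current("ignoreCorruptFiles")
        }
      }
      // Regardless of the outer value, the body only ever sees "false".
      assert(seen == Some("false"))
    }
    println("inner binding always wins")
  }
}
```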






[jira] [Assigned] (SPARK-48724) Fix incorrect conf settings of ignoreCorruptFiles related tests case in ParquetQuerySuite

2024-06-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48724:


Assignee: Wei Guo

> Fix incorrect conf settings of ignoreCorruptFiles related tests case in 
> ParquetQuerySuite
> -
>
> Key: SPARK-48724
> URL: https://issues.apache.org/jira/browse/SPARK-48724
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Assignee: Wei Guo
>Priority: Minor
>
> The code is as follows:
> {code:java}
> withSQLConf(SQLConf.IGNORE_CORRUPT_FILES.key -> sqlConf) {
>   withSQLConf(SQLConf.IGNORE_CORRUPT_FILES.key -> "false") {
>     // ...
>   }
> }{code}
> The inner withSQLConf(SQLConf.IGNORE_CORRUPT_FILES.key -> "false") will
> overwrite the outer configuration, making it impossible to test the situation
> where sqlConf is true.






[jira] [Resolved] (SPARK-48692) Upgrade `rocksdbjni` to 9.2.1

2024-06-24 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48692.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 46146
[https://github.com/apache/spark/pull/46146]

>  Upgrade `rocksdbjni` to 9.2.1
> --
>
> Key: SPARK-48692
> URL: https://issues.apache.org/jira/browse/SPARK-48692
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Assigned] (SPARK-48661) Upgrade RoaringBitmap to 1.1.0

2024-06-20 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48661:


Assignee: Wei Guo

> Upgrade RoaringBitmap to 1.1.0
> --
>
> Key: SPARK-48661
> URL: https://issues.apache.org/jira/browse/SPARK-48661
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Assignee: Wei Guo
>Priority: Minor
>







[jira] [Resolved] (SPARK-48661) Upgrade RoaringBitmap to 1.1.0

2024-06-20 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48661.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 47020
[https://github.com/apache/spark/pull/47020]

> Upgrade RoaringBitmap to 1.1.0
> --
>
> Key: SPARK-48661
> URL: https://issues.apache.org/jira/browse/SPARK-48661
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Assignee: Wei Guo
>Priority: Minor
> Fix For: 4.0.0
>
>







[jira] [Resolved] (SPARK-48585) Make `JdbcDialect.classifyException` throw out the original exception

2024-06-17 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48585.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 46937
[https://github.com/apache/spark/pull/46937]

> Make `JdbcDialect.classifyException` throw out the original exception
> -
>
> Key: SPARK-48585
> URL: https://issues.apache.org/jira/browse/SPARK-48585
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Assigned] (SPARK-48585) Make `JdbcDialect.classifyException` throw out the original exception

2024-06-17 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48585:


Assignee: BingKun Pan

> Make `JdbcDialect.classifyException` throw out the original exception
> -
>
> Key: SPARK-48585
> URL: https://issues.apache.org/jira/browse/SPARK-48585
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Critical
>  Labels: pull-request-available
>







[jira] [Assigned] (SPARK-48615) Perf improvement for parsing hex string

2024-06-16 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48615:


Assignee: Kent Yao

> Perf improvement for parsing hex string
> ---
>
> Key: SPARK-48615
> URL: https://issues.apache.org/jira/browse/SPARK-48615
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Kent Yao
>Assignee: Kent Yao
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> Hex Comparison
> OpenJDK 64-Bit Server VM 17.0.10+0 on Mac OS X 14.5
> Apple M2 Max
> Cardinality 100:    Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
> Apache                       5050           5100          86         0.2        5050.1       1.0X
> Spark                        3822           3840          30         0.3        3821.6       1.3X
> Java                         2462           2522          87         0.4        2462.1       2.1X
>
> OpenJDK 64-Bit Server VM 17.0.10+0 on Mac OS X 14.5
> Apple M2 Max
> Cardinality 200:    Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
> Apache                      10020          10828        1154         0.2        5010.1       1.0X
> Spark                        6875           6966         144         0.3        3437.7       1.5X
> Java                         4999           5092          89         0.4        2499.3       2.0X
>
> OpenJDK 64-Bit Server VM 17.0.10+0 on Mac OS X 14.5
> Apple M2 Max
> Cardinality 400:    Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
> Apache                      20090          20433         433         0.2        5022.5       1.0X
> Spark                       13389          13620         229         0.3        3347.2       1.5X
> Java                        10023          10069          42         0.4        2505.6       2.0X
>
> OpenJDK 64-Bit Server VM 17.0.10+0 on Mac OS X 14.5
> Apple M2 Max
> Cardinality 800:    Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
> Apache                      40277          43453        2755         0.2        5034.7       1.0X
> Spark                       27145          27380         311         0.3        3393.1       1.5X
> Java                        19980          21198        1473         0.4        2497.5       2.0X
> {code}
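The quoted benchmark compares three hex-string decoders ("Apache", "Spark", and "Java"). As a rough illustration of the kind of technique such speedups rely on, here is a minimal, stdlib-only table-lookup decoder; the class and method names are illustrative and this is not Spark's actual implementation:

```java
import java.util.Arrays;

public class HexDecode {
    // Value of each ASCII character as a hex digit; -1 marks non-hex characters.
    private static final byte[] DIGITS = new byte[128];
    static {
        Arrays.fill(DIGITS, (byte) -1);
        for (char c = '0'; c <= '9'; c++) DIGITS[c] = (byte) (c - '0');
        for (char c = 'a'; c <= 'f'; c++) DIGITS[c] = (byte) (c - 'a' + 10);
        for (char c = 'A'; c <= 'F'; c++) DIGITS[c] = (byte) (c - 'A' + 10);
    }

    /** Decodes an even-length hex string, one table lookup per character. */
    public static byte[] unhex(String s) {
        byte[] out = new byte[s.length() / 2];
        for (int i = 0; i < out.length; i++) {
            char h = s.charAt(2 * i), l = s.charAt(2 * i + 1);
            int hi = h < 128 ? DIGITS[h] : -1;
            int lo = l < 128 ? DIGITS[l] : -1;
            if (hi < 0 || lo < 0) throw new IllegalArgumentException("not a hex string: " + s);
            out[i] = (byte) ((hi << 4) | lo);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(unhex("Cafe01"))); // prints [-54, -2, 1]
    }
}
```

A single array lookup per character avoids branching on case and range checks per digit, which is typically where the per-row time goes in such microbenchmarks.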






[jira] [Resolved] (SPARK-48615) Perf improvement for parsing hex string

2024-06-16 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48615.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 46972
[https://github.com/apache/spark/pull/46972]

> Perf improvement for parsing hex string
> ---
>
> Key: SPARK-48615
> URL: https://issues.apache.org/jira/browse/SPARK-48615
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Kent Yao
>Assignee: Kent Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> {code:java}
> Hex Comparison
> OpenJDK 64-Bit Server VM 17.0.10+0 on Mac OS X 14.5
> Apple M2 Max
> Cardinality 100:    Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
> Apache                       5050           5100          86         0.2        5050.1       1.0X
> Spark                        3822           3840          30         0.3        3821.6       1.3X
> Java                         2462           2522          87         0.4        2462.1       2.1X
>
> OpenJDK 64-Bit Server VM 17.0.10+0 on Mac OS X 14.5
> Apple M2 Max
> Cardinality 200:    Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
> Apache                      10020          10828        1154         0.2        5010.1       1.0X
> Spark                        6875           6966         144         0.3        3437.7       1.5X
> Java                         4999           5092          89         0.4        2499.3       2.0X
>
> OpenJDK 64-Bit Server VM 17.0.10+0 on Mac OS X 14.5
> Apple M2 Max
> Cardinality 400:    Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
> Apache                      20090          20433         433         0.2        5022.5       1.0X
> Spark                       13389          13620         229         0.3        3347.2       1.5X
> Java                        10023          10069          42         0.4        2505.6       2.0X
>
> OpenJDK 64-Bit Server VM 17.0.10+0 on Mac OS X 14.5
> Apple M2 Max
> Cardinality 800:    Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
> Apache                      40277          43453        2755         0.2        5034.7       1.0X
> Spark                       27145          27380         311         0.3        3393.1       1.5X
> Java                        19980          21198        1473         0.4        2497.5       2.0X
> {code}






[jira] [Resolved] (SPARK-48626) Change the scope of object LogKeys as private in Spark

2024-06-14 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48626.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 46983
[https://github.com/apache/spark/pull/46983]

> Change the scope of object LogKeys as private in Spark
> --
>
> Key: SPARK-48626
> URL: https://issues.apache.org/jira/browse/SPARK-48626
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 4.0.0
>Reporter: Gengliang Wang
>Assignee: Gengliang Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Assigned] (SPARK-48612) Cleanup deprecated api usage related to commons-pool2

2024-06-13 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48612:


Assignee: Yang Jie

> Cleanup deprecated api usage related to commons-pool2
> -
>
> Key: SPARK-48612
> URL: https://issues.apache.org/jira/browse/SPARK-48612
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Resolved] (SPARK-48612) Cleanup deprecated api usage related to commons-pool2

2024-06-13 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48612.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 46967
[https://github.com/apache/spark/pull/46967]

> Cleanup deprecated api usage related to commons-pool2
> -
>
> Key: SPARK-48612
> URL: https://issues.apache.org/jira/browse/SPARK-48612
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Resolved] (SPARK-48604) Replace deprecated classes and methods of arrow-vector called in Spark

2024-06-13 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48604.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 46961
[https://github.com/apache/spark/pull/46961]

> Replace deprecated classes and methods of arrow-vector called in Spark
> --
>
> Key: SPARK-48604
> URL: https://issues.apache.org/jira/browse/SPARK-48604
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Assignee: Wei Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> There are some deprecated classes and methods in arrow-vector called in 
> Spark; we need to replace them:
>  * ArrowType.Decimal(precision, scale)






[jira] [Assigned] (SPARK-48604) Replace deprecated classes and methods of arrow-vector called in Spark

2024-06-13 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48604:


Assignee: Wei Guo

> Replace deprecated classes and methods of arrow-vector called in Spark
> --
>
> Key: SPARK-48604
> URL: https://issues.apache.org/jira/browse/SPARK-48604
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Assignee: Wei Guo
>Priority: Major
>  Labels: pull-request-available
>
> There are some deprecated classes and methods in arrow-vector called in 
> Spark; we need to replace them:
>  * ArrowType.Decimal(precision, scale)






[jira] [Created] (SPARK-48612) Cleanup deprecated api usage related to commons-pool2

2024-06-12 Thread Yang Jie (Jira)
Yang Jie created SPARK-48612:


 Summary: Cleanup deprecated api usage related to commons-pool2
 Key: SPARK-48612
 URL: https://issues.apache.org/jira/browse/SPARK-48612
 Project: Spark
  Issue Type: Improvement
  Components: Structured Streaming
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Resolved] (SPARK-48609) Upgrade `scala-xml` to 2.3

2024-06-12 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48609.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 46964
[https://github.com/apache/spark/pull/46964]

> Upgrade `scala-xml` to 2.3
> --
>
> Key: SPARK-48609
> URL: https://issues.apache.org/jira/browse/SPARK-48609
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Assigned] (SPARK-48609) Upgrade `scala-xml` to 2.3

2024-06-12 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48609:


Assignee: BingKun Pan

> Upgrade `scala-xml` to 2.3
> --
>
> Key: SPARK-48609
> URL: https://issues.apache.org/jira/browse/SPARK-48609
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
>  Labels: pull-request-available
>







[jira] [Assigned] (SPARK-48583) Replace deprecated classes and methods of commons-io called in Spark

2024-06-12 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-48583:


Assignee: Wei Guo

> Replace deprecated classes and methods of commons-io called in Spark
> 
>
> Key: SPARK-48583
> URL: https://issues.apache.org/jira/browse/SPARK-48583
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Assignee: Wei Guo
>Priority: Major
>  Labels: pull-request-available
>
> There are some deprecated classes and methods in commons-io called in Spark; 
> we need to replace them:
>  * writeStringToFile(final File file, final String data)
>  * CountingInputStream
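For the first item, a JDK-only replacement sketch (not necessarily the exact change the PR made): the deprecated two-argument `FileUtils.writeStringToFile(File, String)` left the charset implicit, while `java.nio.file.Files.writeString` names it explicitly.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class WriteStringDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        // The deprecated FileUtils.writeStringToFile(file, data) overload used the
        // platform default charset; the JDK equivalent makes the charset explicit.
        Files.writeString(tmp, "hello, spark", StandardCharsets.UTF_8);
        System.out.println(Files.readString(tmp, StandardCharsets.UTF_8)); // prints hello, spark
        Files.delete(tmp);
    }
}
```

For the second item, a dependency-free substitute for a counting stream can be a small `FilterInputStream` that tracks bytes read; the details depend on how the call sites use the count.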






[jira] [Resolved] (SPARK-48583) Replace deprecated classes and methods of commons-io called in Spark

2024-06-12 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48583.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 46935
[https://github.com/apache/spark/pull/46935]

> Replace deprecated classes and methods of commons-io called in Spark
> 
>
> Key: SPARK-48583
> URL: https://issues.apache.org/jira/browse/SPARK-48583
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Wei Guo
>Assignee: Wei Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> There are some deprecated classes and methods in commons-io called in Spark; 
> we need to replace them:
>  * writeStringToFile(final File file, final String data)
>  * CountingInputStream






[jira] [Created] (SPARK-48595) Cleanup deprecated api usage related to commons-compress

2024-06-11 Thread Yang Jie (Jira)
Yang Jie created SPARK-48595:


 Summary: Cleanup deprecated api usage related to commons-compress
 Key: SPARK-48595
 URL: https://issues.apache.org/jira/browse/SPARK-48595
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Resolved] (SPARK-48551) Perf improvement for escapePathName

2024-06-11 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48551.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 46894
[https://github.com/apache/spark/pull/46894]

> Perf improvement for escapePathName
> ---
>
> Key: SPARK-48551
> URL: https://issues.apache.org/jira/browse/SPARK-48551
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Kent Yao
>Assignee: Kent Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Created] (SPARK-48582) Bump `braces` from 3.0.2 to 3.0.3 in /ui-test

2024-06-10 Thread Yang Jie (Jira)
Yang Jie created SPARK-48582:


 Summary: Bump `braces` from 3.0.2 to 3.0.3 in /ui-test
 Key: SPARK-48582
 URL: https://issues.apache.org/jira/browse/SPARK-48582
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Resolved] (SPARK-48563) Upgrade pickle to 1.5

2024-06-10 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-48563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-48563.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 46913
[https://github.com/apache/spark/pull/46913]

> Upgrade pickle to 1.5
> -
>
> Key: SPARK-48563
> URL: https://issues.apache.org/jira/browse/SPARK-48563
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Created] (SPARK-48563) Upgrade pickle to 1.5

2024-06-07 Thread Yang Jie (Jira)
Yang Jie created SPARK-48563:


 Summary: Upgrade pickle to 1.5
 Key: SPARK-48563
 URL: https://issues.apache.org/jira/browse/SPARK-48563
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 4.0.0
Reporter: Yang Jie








