[jira] [Commented] (SPARK-41464) Implement DataFrame.to

2022-12-14 Thread Ruifeng Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17647902#comment-17647902
 ] 

Ruifeng Zheng commented on SPARK-41464:
---

[~beliefer] Would you like to take this one as well?

> Implement DataFrame.to
> --
>
> Key: SPARK-41464
> URL: https://issues.apache.org/jira/browse/SPARK-41464
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Ruifeng Zheng
>Priority: Major
>
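For context, `DataFrame.to(schema)` (added in Spark 3.4) returns a new DataFrame whose columns are reordered and cast to match a target schema. A minimal pure-Python sketch of that reorder-and-cast semantics; the `rows`/`schema` shapes here are illustrative, not Spark's actual API:

```python
def to_schema(rows, schema):
    """Reorder and cast each row (a dict) to match the target schema.

    `schema` is an ordered mapping of column name -> casting function,
    mimicking how DataFrame.to projects and casts columns.
    """
    return [{name: cast(row[name]) for name, cast in schema.items()}
            for row in rows]

rows = [{"id": "1", "name": "a"}, {"id": "2", "name": "b"}]
schema = {"name": str, "id": int}  # new column order; "id" cast to int
result = to_schema(rows, schema)
# result == [{"name": "a", "id": 1}, {"name": "b", "id": 2}]
```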




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-41453) Implement DataFrame.subtract

2022-12-14 Thread Ruifeng Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17647901#comment-17647901
 ] 

Ruifeng Zheng commented on SPARK-41453:
---

[~beliefer] Hi, Jiaan, would you like to give it a try?

> Implement DataFrame.subtract
> 
>
> Key: SPARK-41453
> URL: https://issues.apache.org/jira/browse/SPARK-41453
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect
>Affects Versions: 3.4.0
>Reporter: Tom van Bussel
>Priority: Major
>
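For reference, Spark's `DataFrame.subtract` follows SQL `EXCEPT DISTINCT` semantics: rows of the left frame that do not appear in the right frame, with duplicates removed. A plain-Python sketch of that behavior (rows modeled as tuples; not the actual Spark implementation):

```python
def subtract(left, right):
    """EXCEPT DISTINCT semantics: rows of `left` absent from `right`,
    with duplicates removed and first-seen order preserved."""
    right_set = set(right)
    seen = set()
    out = []
    for row in left:
        if row not in right_set and row not in seen:
            seen.add(row)
            out.append(row)
    return out

a = [(1, "x"), (2, "y"), (2, "y"), (3, "z")]
b = [(3, "z")]
# subtract(a, b) -> [(1, "x"), (2, "y")]
```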







[jira] [Commented] (SPARK-41472) Implement the rest of string/binary functions

2022-12-14 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17647892#comment-17647892
 ] 

Apache Spark commented on SPARK-41472:
--

User 'zhengruifeng' has created a pull request for this issue:
https://github.com/apache/spark/pull/39071

> Implement the rest of string/binary functions
> -
>
> Key: SPARK-41472
> URL: https://issues.apache.org/jira/browse/SPARK-41472
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Xinrong Meng
>Priority: Major
>
> Implement the rest of string/binary functions.
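As a rough illustration of the kind of semantics this covers: Spark SQL's `encode`/`decode` pair converts between strings and binary in a given charset. The equivalent behavior in plain Python (illustrative only; the actual work here is wiring these functions through Spark Connect):

```python
def encode(s: str, charset: str) -> bytes:
    """Mirror of Spark SQL's encode(str, charset): string -> binary."""
    return s.encode(charset)

def decode(b: bytes, charset: str) -> str:
    """Mirror of Spark SQL's decode(bin, charset): binary -> string."""
    return b.decode(charset)

# Round-trip through UTF-8, as the SQL functions would.
assert decode(encode("Spark", "UTF-8"), "UTF-8") == "Spark"
```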






[jira] [Commented] (SPARK-41472) Implement the rest of string/binary functions

2022-12-14 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17647891#comment-17647891
 ] 

Apache Spark commented on SPARK-41472:
--

User 'zhengruifeng' has created a pull request for this issue:
https://github.com/apache/spark/pull/39071

> Implement the rest of string/binary functions
> -
>
> Key: SPARK-41472
> URL: https://issues.apache.org/jira/browse/SPARK-41472
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Xinrong Meng
>Priority: Major
>
> Implement the rest of string/binary functions.






[jira] [Assigned] (SPARK-41525) Improve onNewSnapshots to use unique list of known executor IDs and PVC names

2022-12-14 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun reassigned SPARK-41525:
-

Assignee: Dongjoon Hyun

> Improve onNewSnapshots to use unique list of known executor IDs and PVC names
> -
>
> Key: SPARK-41525
> URL: https://issues.apache.org/jira/browse/SPARK-41525
> Project: Spark
>  Issue Type: Sub-task
>  Components: Kubernetes
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Minor
>
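The improvement amounts to deduplicating the known executor-ID and PVC-name lists before they are reused. An order-preserving dedup sketched in Python (the actual change lives in the Scala `onNewSnapshots` handler of the Kubernetes executor pod allocator):

```python
def unique(items):
    """Order-preserving deduplication, e.g. for known executor IDs
    or PVC names collected across pod snapshots."""
    return list(dict.fromkeys(items))

# Duplicates from overlapping snapshots collapse to one entry each.
assert unique(["exec-1", "exec-2", "exec-1", "exec-3"]) == ["exec-1", "exec-2", "exec-3"]
```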







[jira] [Assigned] (SPARK-41515) PVC-oriented executor pod allocation

2022-12-14 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun reassigned SPARK-41515:
-

Assignee: Dongjoon Hyun

> PVC-oriented executor pod allocation
> 
>
> Key: SPARK-41515
> URL: https://issues.apache.org/jira/browse/SPARK-41515
> Project: Spark
>  Issue Type: New Feature
>  Components: Kubernetes
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
>  Labels: releasenotes
>







[jira] [Updated] (SPARK-41525) Improve onNewSnapshots to use unique list of known executor IDs and PVC names

2022-12-14 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-41525:
--
Parent: SPARK-41515
Issue Type: Sub-task  (was: Improvement)

> Improve onNewSnapshots to use unique list of known executor IDs and PVC names
> -
>
> Key: SPARK-41525
> URL: https://issues.apache.org/jira/browse/SPARK-41525
> Project: Spark
>  Issue Type: Sub-task
>  Components: Kubernetes
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Priority: Minor
>







[jira] [Commented] (SPARK-41525) Improve onNewSnapshots to use unique list of known executor IDs and PVC names

2022-12-14 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17647888#comment-17647888
 ] 

Apache Spark commented on SPARK-41525:
--

User 'dongjoon-hyun' has created a pull request for this issue:
https://github.com/apache/spark/pull/39070

> Improve onNewSnapshots to use unique list of known executor IDs and PVC names
> -
>
> Key: SPARK-41525
> URL: https://issues.apache.org/jira/browse/SPARK-41525
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Priority: Minor
>







[jira] [Assigned] (SPARK-41525) Improve onNewSnapshots to use unique list of known executor IDs and PVC names

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41525:


Assignee: Apache Spark

> Improve onNewSnapshots to use unique list of known executor IDs and PVC names
> -
>
> Key: SPARK-41525
> URL: https://issues.apache.org/jira/browse/SPARK-41525
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Assignee: Apache Spark
>Priority: Minor
>







[jira] [Updated] (SPARK-41525) Improve onNewSnapshots to use unique list of known executor IDs and PVC names

2022-12-14 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-41525:
--
Summary: Improve onNewSnapshots to use unique list of known executor IDs 
and PVC names  (was: Use unique list of known executor IDs and PVC names in 
onNewSnapshots)

> Improve onNewSnapshots to use unique list of known executor IDs and PVC names
> -
>
> Key: SPARK-41525
> URL: https://issues.apache.org/jira/browse/SPARK-41525
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Priority: Minor
>







[jira] [Assigned] (SPARK-41525) Improve onNewSnapshots to use unique list of known executor IDs and PVC names

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41525:


Assignee: (was: Apache Spark)

> Improve onNewSnapshots to use unique list of known executor IDs and PVC names
> -
>
> Key: SPARK-41525
> URL: https://issues.apache.org/jira/browse/SPARK-41525
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Priority: Minor
>







[jira] [Created] (SPARK-41525) Use unique list of executor IDs and PVC names in onNewSnapshots

2022-12-14 Thread Dongjoon Hyun (Jira)
Dongjoon Hyun created SPARK-41525:
-

 Summary: Use unique list of executor IDs and PVC names in 
onNewSnapshots
 Key: SPARK-41525
 URL: https://issues.apache.org/jira/browse/SPARK-41525
 Project: Spark
  Issue Type: Improvement
  Components: Kubernetes
Affects Versions: 3.4.0
Reporter: Dongjoon Hyun









[jira] [Updated] (SPARK-41525) Use unique list of known executor IDs and PVC names in onNewSnapshots

2022-12-14 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-41525:
--
Summary: Use unique list of known executor IDs and PVC names in 
onNewSnapshots  (was: Use unique list of executor IDs and PVC names in 
onNewSnapshots)

> Use unique list of known executor IDs and PVC names in onNewSnapshots
> -
>
> Key: SPARK-41525
> URL: https://issues.apache.org/jira/browse/SPARK-41525
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Priority: Minor
>







[jira] [Commented] (SPARK-41037) Fix pandas_udf so that an array-of-MapType return type works properly.

2022-12-14 Thread Hyukjin Kwon (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17647878#comment-17647878
 ] 

Hyukjin Kwon commented on SPARK-41037:
--

This seems to be fixed in https://issues.apache.org/jira/browse/ARROW-17832

> Fix pandas_udf so that an array-of-MapType return type works properly.
> -
>
> Key: SPARK-41037
> URL: https://issues.apache.org/jira/browse/SPARK-41037
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 3.4.0
>Reporter: Haejoon Lee
>Priority: Major
>
> The current behavior of `pandas_udf` is not working properly for the case 
> below:
> {code:java}
> df = spark.createDataFrame([["Testing_123", 9.6],["123_Testing", 
> 10.4]]).toDF("Text", "Double")
> @pandas_udf(ArrayType(MapType(StringType(), StringType())))
> def test_udf(x : pd.Series) -> pd.Series:
>     return x.transform(lambda x: [{"string" : y} for y in x.split("_")])
> >>> df.withColumn("test_col", test_udf("Text")).show()
> 22/11/08 12:22:51 ERROR Executor: Exception in task 10.0 in stage 2.0 (TID 
> 15)1]
> org.apache.spark.api.python.PythonException: Traceback (most recent call 
> last):
>   File "pyarrow/array.pxi", line 1037, in pyarrow.lib.Array.from_pandas
>   File "pyarrow/array.pxi", line 313, in pyarrow.lib.array
>   File "pyarrow/array.pxi", line 83, in pyarrow.lib._ndarray_to_array
>   File "pyarrow/error.pxi", line 123, in pyarrow.lib.check_status
> pyarrow.lib.ArrowTypeError: Could not convert {'string': '123'} with type 
> dict: was not a sequence or recognized null for conversion to list type    at 
> org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561)
>  ~[spark-core_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at 
> org.apache.spark.sql.execution.python.PythonArrowOutput$$anon$1.read(PythonArrowOutput.scala:118)
>  ~[spark-sql_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at 
> org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514)
>  ~[spark-core_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at 
> org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
>  ~[spark-core_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491) 
> ~[scala-library-2.12.17.jar:?]
>     at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) 
> ~[scala-library-2.12.17.jar:?]
>     at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown
>  Source) ~[?:?]
>     at 
> org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>  ~[spark-sql_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at 
> org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
>  ~[spark-sql_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at 
> org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:364)
>  ~[spark-sql_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at 
> org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:888) 
> ~[spark-core_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at 
> org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:888)
>  ~[spark-core_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) 
> ~[spark-core_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364) 
> ~[spark-core_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:328) 
> ~[spark-core_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92) 
> ~[spark-core_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at 
> org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) 
> ~[spark-core_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at org.apache.spark.scheduler.Task.run(Task.scala:139) 
> ~[spark-core_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
>  ~[spark-core_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1491) 
> ~[spark-core_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557) 
> ~[spark-core_2.12-3.4.0-SNAPSHOT.jar:3.4.0-SNAPSHOT]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>  ~[?:?]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>  ~[?:?]
>     at java.lang.Thread.run(Thread.java:833) 

[jira] [Commented] (SPARK-41524) Expose SQL confs and extraOptions separately in o.a.s.sql.execution.streaming.state.RocksDBConf

2022-12-14 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17647869#comment-17647869
 ] 

Apache Spark commented on SPARK-41524:
--

User 'huanliwang-db' has created a pull request for this issue:
https://github.com/apache/spark/pull/39069

> Expose SQL confs and extraOptions separately in 
> o.a.s.sql.execution.streaming.state.RocksDBConf
> ---
>
> Key: SPARK-41524
> URL: https://issues.apache.org/jira/browse/SPARK-41524
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 3.4.0
>Reporter: Huanli Wang
>Priority: Minor
>
> Currently *StateStoreConf* is consumed via
> [*{{confs}}*|https://github.com/huanliwang-db/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStoreConf.scala#L77-L78],
> which merges both SQL confs and extraOptions into a single map. Config names
> for extraOptions shouldn't have to follow the SQL conf name prefix, because
> they are not bound to the SQL conf context.
> After differentiating SQL confs and extraOptions in {*}StateStoreConf{*}, we
> should be able to support more use cases for operator-level configs via
> {*}extraOptions{*}.






[jira] [Assigned] (SPARK-41524) Expose SQL confs and extraOptions separately in o.a.s.sql.execution.streaming.state.RocksDBConf

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41524:


Assignee: (was: Apache Spark)

> Expose SQL confs and extraOptions separately in 
> o.a.s.sql.execution.streaming.state.RocksDBConf
> ---
>
> Key: SPARK-41524
> URL: https://issues.apache.org/jira/browse/SPARK-41524
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 3.4.0
>Reporter: Huanli Wang
>Priority: Minor
>
> Currently *StateStoreConf* is consumed via
> [*{{confs}}*|https://github.com/huanliwang-db/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStoreConf.scala#L77-L78],
> which merges both SQL confs and extraOptions into a single map. Config names
> for extraOptions shouldn't have to follow the SQL conf name prefix, because
> they are not bound to the SQL conf context.
> After differentiating SQL confs and extraOptions in {*}StateStoreConf{*}, we
> should be able to support more use cases for operator-level configs via
> {*}extraOptions{*}.






[jira] [Commented] (SPARK-41524) Expose SQL confs and extraOptions separately in o.a.s.sql.execution.streaming.state.RocksDBConf

2022-12-14 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17647868#comment-17647868
 ] 

Apache Spark commented on SPARK-41524:
--

User 'huanliwang-db' has created a pull request for this issue:
https://github.com/apache/spark/pull/39069

> Expose SQL confs and extraOptions separately in 
> o.a.s.sql.execution.streaming.state.RocksDBConf
> ---
>
> Key: SPARK-41524
> URL: https://issues.apache.org/jira/browse/SPARK-41524
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 3.4.0
>Reporter: Huanli Wang
>Priority: Minor
>
> Currently *StateStoreConf* is consumed via
> [*{{confs}}*|https://github.com/huanliwang-db/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStoreConf.scala#L77-L78],
> which merges both SQL confs and extraOptions into a single map. Config names
> for extraOptions shouldn't have to follow the SQL conf name prefix, because
> they are not bound to the SQL conf context.
> After differentiating SQL confs and extraOptions in {*}StateStoreConf{*}, we
> should be able to support more use cases for operator-level configs via
> {*}extraOptions{*}.






[jira] [Assigned] (SPARK-41524) Expose SQL confs and extraOptions separately in o.a.s.sql.execution.streaming.state.RocksDBConf

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41524:


Assignee: Apache Spark

> Expose SQL confs and extraOptions separately in 
> o.a.s.sql.execution.streaming.state.RocksDBConf
> ---
>
> Key: SPARK-41524
> URL: https://issues.apache.org/jira/browse/SPARK-41524
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 3.4.0
>Reporter: Huanli Wang
>Assignee: Apache Spark
>Priority: Minor
>
> Currently *StateStoreConf* is consumed via
> [*{{confs}}*|https://github.com/huanliwang-db/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStoreConf.scala#L77-L78],
> which merges both SQL confs and extraOptions into a single map. Config names
> for extraOptions shouldn't have to follow the SQL conf name prefix, because
> they are not bound to the SQL conf context.
> After differentiating SQL confs and extraOptions in {*}StateStoreConf{*}, we
> should be able to support more use cases for operator-level configs via
> {*}extraOptions{*}.






[jira] [Resolved] (SPARK-41271) Parameterized SQL

2022-12-14 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk resolved SPARK-41271.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

Issue resolved by pull request 38864
[https://github.com/apache/spark/pull/38864]

> Parameterized SQL
> -
>
> Key: SPARK-41271
> URL: https://issues.apache.org/jira/browse/SPARK-41271
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Max Gekk
>Assignee: Max Gekk
>Priority: Major
> Fix For: 3.4.0
>
>
> Enhance the Spark SQL API with support for parameterized SQL statements to 
> improve security and reusability. Application developers will be able to 
> write SQL with parameter markers whose values will be passed separately from 
> the SQL code and interpreted as literals. This will help prevent SQL 
> injection attacks for applications that generate SQL based on a user’s 
> selections, which is often done via a user interface.
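The injection-safety argument above can be made concrete: because parameter values are interpreted strictly as literals, user input can never alter the SQL structure. Below is a simplified string-rewriting sketch of that guarantee; Spark itself binds parameters in the analyzer rather than by text substitution, and the marker syntax shown is illustrative:

```python
import re

def bind_params(sql: str, args: dict) -> str:
    """Sketch of parameter binding: each :name marker is replaced by a
    safely quoted literal. (Spark binds parameters in the analyzer,
    not by rewriting the SQL text as done here.)"""
    def quote(v):
        if isinstance(v, (int, float)):
            return str(v)
        # Double any embedded single quotes so the value stays a literal.
        return "'" + str(v).replace("'", "''") + "'"
    return re.sub(r":(\w+)", lambda m: quote(args[m.group(1)]), sql)

q = bind_params("SELECT * FROM t WHERE name = :name",
                {"name": "O'Brien; DROP TABLE t"})
# The malicious quote is escaped, not executed:
# SELECT * FROM t WHERE name = 'O''Brien; DROP TABLE t'
```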






[jira] [Updated] (SPARK-41524) Expose SQL confs and extraOptions separately in o.a.s.sql.execution.streaming.state.RocksDBConf

2022-12-14 Thread Huanli Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huanli Wang updated SPARK-41524:

Priority: Minor  (was: Major)

> Expose SQL confs and extraOptions separately in 
> o.a.s.sql.execution.streaming.state.RocksDBConf
> ---
>
> Key: SPARK-41524
> URL: https://issues.apache.org/jira/browse/SPARK-41524
> Project: Spark
>  Issue Type: Improvement
>  Components: Structured Streaming
>Affects Versions: 3.4.0
>Reporter: Huanli Wang
>Priority: Minor
>
> Currently *StateStoreConf* is consumed via
> [*{{confs}}*|https://github.com/huanliwang-db/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStoreConf.scala#L77-L78],
> which merges both SQL confs and extraOptions into a single map. Config names
> for extraOptions shouldn't have to follow the SQL conf name prefix, because
> they are not bound to the SQL conf context.
> After differentiating SQL confs and extraOptions in {*}StateStoreConf{*}, we
> should be able to support more use cases for operator-level configs via
> {*}extraOptions{*}.






[jira] [Created] (SPARK-41524) Expose SQL confs and extraOptions separately in o.a.s.sql.execution.streaming.state.RocksDBConf

2022-12-14 Thread Huanli Wang (Jira)
Huanli Wang created SPARK-41524:
---

 Summary: Expose SQL confs and extraOptions separately in 
o.a.s.sql.execution.streaming.state.RocksDBConf
 Key: SPARK-41524
 URL: https://issues.apache.org/jira/browse/SPARK-41524
 Project: Spark
  Issue Type: Improvement
  Components: Structured Streaming
Affects Versions: 3.4.0
Reporter: Huanli Wang


Currently *StateStoreConf* is consumed via
[*{{confs}}*|https://github.com/huanliwang-db/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStoreConf.scala#L77-L78],
which merges both SQL confs and extraOptions into a single map. Config names
for extraOptions shouldn't have to follow the SQL conf name prefix, because
they are not bound to the SQL conf context.

After differentiating SQL confs and extraOptions in {*}StateStoreConf{*}, we
should be able to support more use cases for operator-level configs via
{*}extraOptions{*}.
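One way to picture the proposed split: partition the combined map by whether a key carries the SQL conf name prefix. The prefix string and key names below are illustrative, not Spark's exact rule:

```python
def split_confs(all_confs: dict, sql_prefix: str = "spark.sql."):
    """Partition a combined conf map into SQL confs vs. extraOptions
    by key prefix. Both prefix and keys here are hypothetical examples."""
    sql_confs = {k: v for k, v in all_confs.items() if k.startswith(sql_prefix)}
    extra = {k: v for k, v in all_confs.items() if not k.startswith(sql_prefix)}
    return sql_confs, extra

sql_confs, extra = split_confs({
    "spark.sql.streaming.stateStore.rocksdb.compactOnCommit": "true",
    "operatorLevelSetting": "42",  # no SQL prefix needed once separated
})
assert "operatorLevelSetting" in extra
```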






[jira] [Assigned] (SPARK-41434) Support LambdaFunction expression

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41434:


Assignee: Apache Spark

> Support LambdaFunction expression
> --
>
> Key: SPARK-41434
> URL: https://issues.apache.org/jira/browse/SPARK-41434
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Ruifeng Zheng
>Assignee: Apache Spark
>Priority: Major
>
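For context, `LambdaFunction` expressions are what back Spark's higher-order array functions, e.g. `F.transform(col, lambda x: x + 1)` in PySpark. The semantics of `transform` sketched in plain Python (illustrative; the ticket's work is representing the lambda in the Connect protocol):

```python
def transform(arr, f):
    """Semantics of Spark SQL's higher-order transform(array, lambda):
    apply the lambda to every element of the array."""
    return [f(x) for x in arr]

assert transform([1, 2, 3], lambda x: x + 1) == [2, 3, 4]
```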







[jira] [Commented] (SPARK-41434) Support LambdaFunction expression

2022-12-14 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17647863#comment-17647863
 ] 

Apache Spark commented on SPARK-41434:
--

User 'zhengruifeng' has created a pull request for this issue:
https://github.com/apache/spark/pull/39068

> Support LambdaFunction expression
> --
>
> Key: SPARK-41434
> URL: https://issues.apache.org/jira/browse/SPARK-41434
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Ruifeng Zheng
>Priority: Major
>







[jira] [Assigned] (SPARK-41434) Support LambdaFunction expression

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41434:


Assignee: (was: Apache Spark)

> Support LambdaFunction expression
> --
>
> Key: SPARK-41434
> URL: https://issues.apache.org/jira/browse/SPARK-41434
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Ruifeng Zheng
>Priority: Major
>







[jira] [Updated] (SPARK-41521) Upgrade kubernetes-client to 6.3.0

2022-12-14 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-41521:
--
Summary: Upgrade kubernetes-client to 6.3.0  (was: Upgrade fabric8io - 
kubernetes-client from 6.2.0 to 6.3.0)

> Upgrade kubernetes-client to 6.3.0
> --
>
> Key: SPARK-41521
> URL: https://issues.apache.org/jira/browse/SPARK-41521
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.4.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
> Fix For: 3.4.0
>
>







[jira] [Resolved] (SPARK-41521) Upgrade fabric8io - kubernetes-client from 6.2.0 to 6.3.0

2022-12-14 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun resolved SPARK-41521.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Issue resolved by pull request 39065
[https://github.com/apache/spark/pull/39065]

> Upgrade fabric8io - kubernetes-client from 6.2.0 to 6.3.0
> -
>
> Key: SPARK-41521
> URL: https://issues.apache.org/jira/browse/SPARK-41521
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.4.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
> Fix For: 3.4.0
>
>







[jira] [Assigned] (SPARK-41521) Upgrade fabric8io - kubernetes-client from 6.2.0 to 6.3.0

2022-12-14 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun reassigned SPARK-41521:
-

Assignee: BingKun Pan

> Upgrade fabric8io - kubernetes-client from 6.2.0 to 6.3.0
> -
>
> Key: SPARK-41521
> URL: https://issues.apache.org/jira/browse/SPARK-41521
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.4.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
>







[jira] [Commented] (SPARK-41522) GA dependencies test failed

2022-12-14 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17647846#comment-17647846
 ] 

Yang Jie commented on SPARK-41522:
--

also cc [~dongjoon] 

> GA dependencies test failed
> --
>
> Key: SPARK-41522
> URL: https://issues.apache.org/jira/browse/SPARK-41522
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Major
>
> {code:java}
> Error: ] Some problems were encountered while processing the POMs:
> 58[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-sketch_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 59[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-kvstore_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 60[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-network-common_2.12:3.4.0-SNAPSHOT: Could not find 
> artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 61[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-network-shuffle_2.12:3.4.0-SNAPSHOT: Could not find 
> artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 62[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-unsafe_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 63[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-tags_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 64[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-catalyst_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 65[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-sql_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 66[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-hive_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 67[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-token-provider-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could 
> not find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 21, column 11
> 68[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-streaming-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could not 
> find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 21, column 11
> 69[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-streaming-kafka-0-10-assembly_2.12:3.4.0-SNAPSHOT: 
> Could not find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT 
> and 'parent.relativePath' points at wrong local POM @ line 21, column 11
> 70[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-sql-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could not find 
> artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 21, column 11
> 71[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-avro_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 21, column 11
> 72[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-connect_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 73[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-connect-common_2.12:3.4.0-SNAPSHOT: Could not find 
> artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 13
> 74[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-protobuf_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> 

[jira] [Commented] (SPARK-41523) `protoc-jar-maven-plugin` should uniformly use `protoc-jar-maven-plugin.version` as the version

2022-12-14 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17647842#comment-17647842
 ] 

Apache Spark commented on SPARK-41523:
--

User 'LuciferYang' has created a pull request for this issue:
https://github.com/apache/spark/pull/39066

> `protoc-jar-maven-plugin` should uniformly use 
> `protoc-jar-maven-plugin.version` as the version 
> 
>
> Key: SPARK-41523
> URL: https://issues.apache.org/jira/browse/SPARK-41523
> Project: Spark
>  Issue Type: Improvement
>  Components: Protobuf
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Major
> Attachments: image-2022-12-15-12-53-02-818.png
>
>
> [https://github.com/apache/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/connector/protobuf/pom.xml#L161-L167]
>  
> !image-2022-12-15-12-53-02-818.png!
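
For readers unfamiliar with the Maven pattern the issue title proposes, a minimal sketch of pinning the plugin version through a single shared property might look like the fragment below. This is an illustration only: the property name follows the issue title, while the plugin coordinates and version number are assumptions, not taken from the Spark POM.

```xml
<!-- Sketch only: declare the version once in <properties>, then reference
     it everywhere the plugin is configured, so all modules stay in sync. -->
<properties>
  <protoc-jar-maven-plugin.version>3.11.4</protoc-jar-maven-plugin.version>
</properties>

<build>
  <plugins>
    <plugin>
      <groupId>com.github.os72</groupId>
      <artifactId>protoc-jar-maven-plugin</artifactId>
      <!-- Uses the shared property instead of a hard-coded version. -->
      <version>${protoc-jar-maven-plugin.version}</version>
    </plugin>
  </plugins>
</build>
```

With this layout, bumping the plugin means changing one property rather than hunting for every hard-coded `<version>` element.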






[jira] [Assigned] (SPARK-41523) `protoc-jar-maven-plugin` should uniformly use `protoc-jar-maven-plugin.version` as the version

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41523:


Assignee: Apache Spark

> `protoc-jar-maven-plugin` should uniformly use 
> `protoc-jar-maven-plugin.version` as the version 
> 
>
> Key: SPARK-41523
> URL: https://issues.apache.org/jira/browse/SPARK-41523
> Project: Spark
>  Issue Type: Improvement
>  Components: Protobuf
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Assignee: Apache Spark
>Priority: Major
> Attachments: image-2022-12-15-12-53-02-818.png
>
>
> [https://github.com/apache/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/connector/protobuf/pom.xml#L161-L167]
>  
> !image-2022-12-15-12-53-02-818.png!






[jira] [Assigned] (SPARK-41523) `protoc-jar-maven-plugin` should uniformly use `protoc-jar-maven-plugin.version` as the version

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41523:


Assignee: (was: Apache Spark)

> `protoc-jar-maven-plugin` should uniformly use 
> `protoc-jar-maven-plugin.version` as the version 
> 
>
> Key: SPARK-41523
> URL: https://issues.apache.org/jira/browse/SPARK-41523
> Project: Spark
>  Issue Type: Improvement
>  Components: Protobuf
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Major
> Attachments: image-2022-12-15-12-53-02-818.png
>
>
> [https://github.com/apache/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/connector/protobuf/pom.xml#L161-L167]
>  
> !image-2022-12-15-12-53-02-818.png!






[jira] [Commented] (SPARK-41523) `protoc-jar-maven-plugin` should uniformly use `protoc-jar-maven-plugin.version` as the version

2022-12-14 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17647841#comment-17647841
 ] 

Apache Spark commented on SPARK-41523:
--

User 'LuciferYang' has created a pull request for this issue:
https://github.com/apache/spark/pull/39066

> `protoc-jar-maven-plugin` should uniformly use 
> `protoc-jar-maven-plugin.version` as the version 
> 
>
> Key: SPARK-41523
> URL: https://issues.apache.org/jira/browse/SPARK-41523
> Project: Spark
>  Issue Type: Improvement
>  Components: Protobuf
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Major
> Attachments: image-2022-12-15-12-53-02-818.png
>
>
> [https://github.com/apache/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/connector/protobuf/pom.xml#L161-L167]
>  
> !image-2022-12-15-12-53-02-818.png!






[jira] [Updated] (SPARK-41523) `protoc-jar-maven-plugin` should uniformly use `protoc-jar-maven-plugin.version` as the version

2022-12-14 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-41523:
-
Attachment: image-2022-12-15-12-53-02-818.png

> `protoc-jar-maven-plugin` should uniformly use 
> `protoc-jar-maven-plugin.version` as the version 
> 
>
> Key: SPARK-41523
> URL: https://issues.apache.org/jira/browse/SPARK-41523
> Project: Spark
>  Issue Type: Improvement
>  Components: Protobuf
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Major
> Attachments: image-2022-12-15-12-53-02-818.png
>
>
> [https://github.com/apache/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/connector/protobuf/pom.xml#L161-L167]
>  
> !image-2022-12-15-12-53-02-818.png!






[jira] [Updated] (SPARK-41523) `protoc-jar-maven-plugin` should uniformly use `protoc-jar-maven-plugin.version` as the version

2022-12-14 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-41523:
-
Description: 
[https://github.com/apache/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/connector/protobuf/pom.xml#L161-L167]

 

!image-2022-12-15-12-53-02-818.png!

  was:
[https://github.com/apache/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/connector/protobuf/pom.xml#L161-L167]

 

!image-2022-12-15-12-52-10-474.png!


> `protoc-jar-maven-plugin` should uniformly use 
> `protoc-jar-maven-plugin.version` as the version 
> 
>
> Key: SPARK-41523
> URL: https://issues.apache.org/jira/browse/SPARK-41523
> Project: Spark
>  Issue Type: Improvement
>  Components: Protobuf
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Major
> Attachments: image-2022-12-15-12-53-02-818.png
>
>
> [https://github.com/apache/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/connector/protobuf/pom.xml#L161-L167]
>  
> !image-2022-12-15-12-53-02-818.png!






[jira] [Created] (SPARK-41523) `protoc-jar-maven-plugin` should uniformly use `protoc-jar-maven-plugin.version` as the version

2022-12-14 Thread Yang Jie (Jira)
Yang Jie created SPARK-41523:


 Summary: `protoc-jar-maven-plugin` should uniformly use 
`protoc-jar-maven-plugin.version` as the version 
 Key: SPARK-41523
 URL: https://issues.apache.org/jira/browse/SPARK-41523
 Project: Spark
  Issue Type: Improvement
  Components: Protobuf
Affects Versions: 3.4.0
Reporter: Yang Jie


[https://github.com/apache/spark/blob/7671bc975f2d88ab929e4982abfe3e166fa72e35/connector/protobuf/pom.xml#L161-L167]

 

!image-2022-12-15-12-52-10-474.png!






[jira] [Updated] (SPARK-41522) GA dependencies test failed

2022-12-14 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-41522:
-
Description: 
{code:java}
Error: ] Some problems were encountered while processing the POMs:
58[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-sketch_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
59[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-kvstore_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
60[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-network-common_2.12:3.4.0-SNAPSHOT: Could not find 
artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 22, column 11
61[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-network-shuffle_2.12:3.4.0-SNAPSHOT: Could not find 
artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 22, column 11
62[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-unsafe_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
63[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-tags_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
64[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-catalyst_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
65[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-sql_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
66[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-hive_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
67[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-token-provider-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could not 
find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 21, column 11
68[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-streaming-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could not find 
artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 21, column 11
69[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-streaming-kafka-0-10-assembly_2.12:3.4.0-SNAPSHOT: Could 
not find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 21, column 11
70[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-sql-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could not find 
artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 21, column 11
71[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-avro_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 21, column 11
72[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-connect_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
73[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-connect-common_2.12:3.4.0-SNAPSHOT: Could not find 
artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 22, column 13
74[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-protobuf_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 21, column 11
75[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-ganglia-lgpl_2.12:3.4.0-SNAPSHOT: Could not find 
artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 20, column 11
76[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-streaming-kinesis-asl_2.12:3.4.0-SNAPSHOT: Could not 
find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 

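When Maven reports "Non-resolvable parent POM" for every module like the log above, the usual cause is that `org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT` is missing from the local repository. A hedged sketch of one common recovery, assuming a Spark source checkout with its bundled `build/mvn` wrapper (whether this is the actual fix for this CI failure is not established here):

```shell
# Sketch under assumptions: run from the root of a Spark checkout.
# --non-recursive (-N) installs only the top-level parent POM into the
# local repository, so child modules such as spark-sketch can resolve
# org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT afterwards.
./build/mvn --non-recursive -DskipTests install
```

After the parent POM is installed, re-running the dependency check in the same environment should get past the parent-resolution step.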
[jira] [Commented] (SPARK-41522) GA dependencies test failed

2022-12-14 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17647831#comment-17647831
 ] 

Yang Jie commented on SPARK-41522:
--

cc [~gurwls223] 

> GA dependencies test failed
> --
>
> Key: SPARK-41522
> URL: https://issues.apache.org/jira/browse/SPARK-41522
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Major
>
> {code:java}
> Error: ] Some problems were encountered while processing the POMs:
> 58[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-sketch_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 59[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-kvstore_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 60[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-network-common_2.12:3.4.0-SNAPSHOT: Could not find 
> artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 61[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-network-shuffle_2.12:3.4.0-SNAPSHOT: Could not find 
> artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 62[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-unsafe_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 63[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-tags_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 64[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-catalyst_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 65[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-sql_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 66[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-hive_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 67[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-token-provider-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could 
> not find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 21, column 11
> 68[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-streaming-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could not 
> find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 21, column 11
> 69[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-streaming-kafka-0-10-assembly_2.12:3.4.0-SNAPSHOT: 
> Could not find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT 
> and 'parent.relativePath' points at wrong local POM @ line 21, column 11
> 70[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-sql-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could not find 
> artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 21, column 11
> 71[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-avro_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 21, column 11
> 72[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-connect_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 11
> 73[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-connect-common_2.12:3.4.0-SNAPSHOT: Could not find 
> artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
> 'parent.relativePath' points at wrong local POM @ line 22, column 13
> 74[FATAL] Non-resolvable parent POM for 
> org.apache.spark:spark-protobuf_2.12:3.4.0-SNAPSHOT: Could not find artifact 
> org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT 

[jira] [Updated] (SPARK-41522) GA dependencies test failed

2022-12-14 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-41522:
-
Description: 
{code:java}
Error: ] Some problems were encountered while processing the POMs:
58[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-sketch_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
59[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-kvstore_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
60[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-network-common_2.12:3.4.0-SNAPSHOT: Could not find 
artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 22, column 11
61[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-network-shuffle_2.12:3.4.0-SNAPSHOT: Could not find 
artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 22, column 11
62[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-unsafe_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
63[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-tags_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
64[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-catalyst_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
65[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-sql_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
66[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-hive_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
67[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-token-provider-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could not 
find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 21, column 11
68[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-streaming-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could not find 
artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 21, column 11
69[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-streaming-kafka-0-10-assembly_2.12:3.4.0-SNAPSHOT: Could 
not find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 21, column 11
70[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-sql-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could not find 
artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 21, column 11
71[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-avro_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 21, column 11
72[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-connect_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
73[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-connect-common_2.12:3.4.0-SNAPSHOT: Could not find 
artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 22, column 13
74[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-protobuf_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 21, column 11
75[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-ganglia-lgpl_2.12:3.4.0-SNAPSHOT: Could not find 
artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 20, column 11
76[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-streaming-kinesis-asl_2.12:3.4.0-SNAPSHOT: Could not 
find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 

[jira] [Created] (SPARK-41522) GA dependencies test failed

2022-12-14 Thread Yang Jie (Jira)
Yang Jie created SPARK-41522:


 Summary: GA dependencies test failed
 Key: SPARK-41522
 URL: https://issues.apache.org/jira/browse/SPARK-41522
 Project: Spark
  Issue Type: Bug
  Components: Project Infra
Affects Versions: 3.4.0
Reporter: Yang Jie


{code:java}
Error: ] Some problems were encountered while processing the POMs:
58[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-sketch_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
59[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-kvstore_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
60[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-network-common_2.12:3.4.0-SNAPSHOT: Could not find 
artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 22, column 11
61[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-network-shuffle_2.12:3.4.0-SNAPSHOT: Could not find 
artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 22, column 11
62[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-unsafe_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
63[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-tags_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
64[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-catalyst_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
65[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-sql_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
66[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-hive_2.12:3.4.0-SNAPSHOT: Could not find artifact 
org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 22, column 11
67[FATAL] Non-resolvable parent POM for 
org.apache.spark:spark-token-provider-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could not 
find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 
'parent.relativePath' points at wrong local POM @ line 21, column 11
68 [FATAL] Non-resolvable parent POM for org.apache.spark:spark-streaming-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could not find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 21, column 11
69 [FATAL] Non-resolvable parent POM for org.apache.spark:spark-streaming-kafka-0-10-assembly_2.12:3.4.0-SNAPSHOT: Could not find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 21, column 11
70 [FATAL] Non-resolvable parent POM for org.apache.spark:spark-sql-kafka-0-10_2.12:3.4.0-SNAPSHOT: Could not find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 21, column 11
71 [FATAL] Non-resolvable parent POM for org.apache.spark:spark-avro_2.12:3.4.0-SNAPSHOT: Could not find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 21, column 11
72 [FATAL] Non-resolvable parent POM for org.apache.spark:spark-connect_2.12:3.4.0-SNAPSHOT: Could not find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 22, column 11
73 [FATAL] Non-resolvable parent POM for org.apache.spark:spark-connect-common_2.12:3.4.0-SNAPSHOT: Could not find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 22, column 13
74 [FATAL] Non-resolvable parent POM for org.apache.spark:spark-protobuf_2.12:3.4.0-SNAPSHOT: Could not find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 21, column 11
75 [FATAL] Non-resolvable parent POM for org.apache.spark:spark-ganglia-lgpl_2.12:3.4.0-SNAPSHOT: Could not find artifact org.apache.spark:spark-parent_2.12:pom:3.4.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 20, column 11
76 [FATAL] Non-resolvable parent POM for 

[jira] [Assigned] (SPARK-41521) Upgrade fabric8io - kubernetes-client from 6.2.0 to 6.3.0

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41521:


Assignee: Apache Spark

> Upgrade fabric8io - kubernetes-client from 6.2.0 to 6.3.0
> -
>
> Key: SPARK-41521
> URL: https://issues.apache.org/jira/browse/SPARK-41521
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.4.0
>Reporter: BingKun Pan
>Assignee: Apache Spark
>Priority: Minor
>







[jira] [Assigned] (SPARK-41521) Upgrade fabric8io - kubernetes-client from 6.2.0 to 6.3.0

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41521:


Assignee: (was: Apache Spark)

> Upgrade fabric8io - kubernetes-client from 6.2.0 to 6.3.0
> -
>
> Key: SPARK-41521
> URL: https://issues.apache.org/jira/browse/SPARK-41521
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.4.0
>Reporter: BingKun Pan
>Priority: Minor
>







[jira] [Commented] (SPARK-41521) Upgrade fabric8io - kubernetes-client from 6.2.0 to 6.3.0

2022-12-14 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17647795#comment-17647795
 ] 

Apache Spark commented on SPARK-41521:
--

User 'panbingkun' has created a pull request for this issue:
https://github.com/apache/spark/pull/39065

> Upgrade fabric8io - kubernetes-client from 6.2.0 to 6.3.0
> -
>
> Key: SPARK-41521
> URL: https://issues.apache.org/jira/browse/SPARK-41521
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.4.0
>Reporter: BingKun Pan
>Priority: Minor
>







[jira] [Created] (SPARK-41521) Upgrade fabric8io - kubernetes-client from 6.2.0 to 6.3.0

2022-12-14 Thread BingKun Pan (Jira)
BingKun Pan created SPARK-41521:
---

 Summary: Upgrade fabric8io - kubernetes-client from 6.2.0 to 6.3.0
 Key: SPARK-41521
 URL: https://issues.apache.org/jira/browse/SPARK-41521
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.4.0
Reporter: BingKun Pan









[jira] [Commented] (SPARK-41520) Split AND_OR TreePattern to separate AND and OR TreePatterns

2022-12-14 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17647793#comment-17647793
 ] 

Apache Spark commented on SPARK-41520:
--

User 'kelvinjian-db' has created a pull request for this issue:
https://github.com/apache/spark/pull/39064

> Split AND_OR TreePattern to separate AND and OR TreePatterns
> 
>
> Key: SPARK-41520
> URL: https://issues.apache.org/jira/browse/SPARK-41520
> Project: Spark
>  Issue Type: Improvement
>  Components: Optimizer
>Affects Versions: 3.3.1
>Reporter: Kelvin Jiang
>Priority: Major
>







[jira] [Assigned] (SPARK-41520) Split AND_OR TreePattern to separate AND and OR TreePatterns

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41520:


Assignee: Apache Spark

> Split AND_OR TreePattern to separate AND and OR TreePatterns
> 
>
> Key: SPARK-41520
> URL: https://issues.apache.org/jira/browse/SPARK-41520
> Project: Spark
>  Issue Type: Improvement
>  Components: Optimizer
>Affects Versions: 3.3.1
>Reporter: Kelvin Jiang
>Assignee: Apache Spark
>Priority: Major
>







[jira] [Assigned] (SPARK-41520) Split AND_OR TreePattern to separate AND and OR TreePatterns

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41520:


Assignee: (was: Apache Spark)

> Split AND_OR TreePattern to separate AND and OR TreePatterns
> 
>
> Key: SPARK-41520
> URL: https://issues.apache.org/jira/browse/SPARK-41520
> Project: Spark
>  Issue Type: Improvement
>  Components: Optimizer
>Affects Versions: 3.3.1
>Reporter: Kelvin Jiang
>Priority: Major
>







[jira] [Resolved] (SPARK-41438) Implement DataFrame.colRegex

2022-12-14 Thread Ruifeng Zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruifeng Zheng resolved SPARK-41438.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Issue resolved by pull request 39035
[https://github.com/apache/spark/pull/39035]

> Implement DataFrame.colRegex
> -
>
> Key: SPARK-41438
> URL: https://issues.apache.org/jira/browse/SPARK-41438
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Ruifeng Zheng
>Assignee: Ruifeng Zheng
>Priority: Major
> Fix For: 3.4.0
>
>







[jira] [Assigned] (SPARK-41438) Implement DataFrame.colRegex

2022-12-14 Thread Ruifeng Zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruifeng Zheng reassigned SPARK-41438:
-

Assignee: Ruifeng Zheng

> Implement DataFrame.colRegex
> -
>
> Key: SPARK-41438
> URL: https://issues.apache.org/jira/browse/SPARK-41438
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, PySpark
>Affects Versions: 3.4.0
>Reporter: Ruifeng Zheng
>Assignee: Ruifeng Zheng
>Priority: Major
>







[jira] [Created] (SPARK-41520) Split AND_OR TreePattern to separate AND and OR TreePatterns

2022-12-14 Thread Kelvin Jiang (Jira)
Kelvin Jiang created SPARK-41520:


 Summary: Split AND_OR TreePattern to separate AND and OR 
TreePatterns
 Key: SPARK-41520
 URL: https://issues.apache.org/jira/browse/SPARK-41520
 Project: Spark
  Issue Type: Improvement
  Components: Optimizer
Affects Versions: 3.3.1
Reporter: Kelvin Jiang









[jira] [Commented] (SPARK-41519) Pin versions-maven-plugin version

2022-12-14 Thread John Zhuge (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17647711#comment-17647711
 ] 

John Zhuge commented on SPARK-41519:


{noformat}
[ERROR]
java.nio.file.NoSuchFileException: /Users/jzhuge/Repos/upstream-spark/avro
    at sun.nio.fs.UnixException.translateToIOException (UnixException.java:86)
    at sun.nio.fs.UnixException.rethrowAsIOException (UnixException.java:102)
    at sun.nio.fs.UnixException.rethrowAsIOException (UnixException.java:107)
    at sun.nio.fs.UnixFileSystemProvider.newByteChannel 
(UnixFileSystemProvider.java:214)
    at java.nio.file.Files.newByteChannel (Files.java:361)
    at java.nio.file.Files.newByteChannel (Files.java:407)
    at java.nio.file.spi.FileSystemProvider.newInputStream 
(FileSystemProvider.java:384)
    at java.nio.file.Files.newInputStream (Files.java:152)
    at org.codehaus.plexus.util.xml.XmlReader.<init> (XmlReader.java:129)
    at org.codehaus.plexus.util.xml.XmlStreamReader.<init> (XmlStreamReader.java:67)
    at org.codehaus.plexus.util.ReaderFactory.newXmlReader 
(ReaderFactory.java:122)
    at org.codehaus.mojo.versions.api.PomHelper.readXmlFile 
(PomHelper.java:1498)
    at org.codehaus.mojo.versions.AbstractVersionsUpdaterMojo.process 
(AbstractVersionsUpdaterMojo.java:326)
    at org.codehaus.mojo.versions.SetMojo.execute (SetMojo.java:381)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
(DefaultBuildPluginManager.java:137)
    at org.apache.maven.lifecycle.internal.MojoExecutor.doExecute2 
(MojoExecutor.java:370)
    at org.apache.maven.lifecycle.internal.MojoExecutor.doExecute 
(MojoExecutor.java:351)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:215)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:171)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:163)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
    at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:56)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:294)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
    at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
    at org.apache.maven.cli.MavenCli.execute (MavenCli.java:960)
    at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:293)
    at org.apache.maven.cli.MavenCli.main (MavenCli.java:196)
    at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke 
(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:498)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
(Launcher.java:282)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch 
(Launcher.java:225)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
(Launcher.java:406)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main 
(Launcher.java:347){noformat}

> Pin versions-maven-plugin version
> -
>
> Key: SPARK-41519
> URL: https://issues.apache.org/jira/browse/SPARK-41519
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.4.0
>Reporter: John Zhuge
>Priority: Minor
>
> `versions-maven-plugin` release `2.14.0` broke the following command in 
> Spark:
> {noformat}
> build/mvn versions:set -DnewVersion=3.4.0-jz-0 -DgenerateBackupPoms=false
> {noformat}
> See [https://github.com/mojohaus/versions/issues/848].
> The plugin will be fixed in 2.14.1. However, it may be desirable to pin the 
> plugin version.






[jira] [Created] (SPARK-41519) Pin versions-maven-plugin version

2022-12-14 Thread John Zhuge (Jira)
John Zhuge created SPARK-41519:
--

 Summary: Pin versions-maven-plugin version
 Key: SPARK-41519
 URL: https://issues.apache.org/jira/browse/SPARK-41519
 Project: Spark
  Issue Type: Task
  Components: Project Infra
Affects Versions: 3.4.0
Reporter: John Zhuge


`versions-maven-plugin` release `2.14.0` broke the following command in 
Spark:
{noformat}
build/mvn versions:set -DnewVersion=3.4.0-jz-0 -DgenerateBackupPoms=false
{noformat}
See [https://github.com/mojohaus/versions/issues/848].

The plugin will be fixed in 2.14.1. However, it may be desirable to pin the 
plugin version.
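Until the fixed 2.14.1 release is available, the plugin version could be pinned in the parent POM so builds stop resolving the broken 2.14.0. A minimal sketch — the `2.13.0` version and the `<pluginManagement>` placement are illustrative assumptions, not what Spark actually committed:

```xml
<!-- Sketch: pin versions-maven-plugin under pluginManagement so that
     `mvn versions:set` resolves a known-good release instead of the
     latest (broken) 2.14.0. The 2.13.0 version is assumed here for
     illustration only. -->
<build>
  <pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>versions-maven-plugin</artifactId>
        <version>2.13.0</version>
      </plugin>
    </plugins>
  </pluginManagement>
</build>
```

A fully-qualified goal invocation also selects a specific plugin version without touching the POM, e.g. `build/mvn org.codehaus.mojo:versions-maven-plugin:2.13.0:set -DnewVersion=3.4.0-jz-0 -DgenerateBackupPoms=false`.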






[jira] [Commented] (SPARK-41518) Assign a name to the error class _LEGACY_ERROR_TEMP_2422

2022-12-14 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17647596#comment-17647596
 ] 

Apache Spark commented on SPARK-41518:
--

User 'MaxGekk' has created a pull request for this issue:
https://github.com/apache/spark/pull/39061

> Assign a name to the error class _LEGACY_ERROR_TEMP_2422
> 
>
> Key: SPARK-41518
> URL: https://issues.apache.org/jira/browse/SPARK-41518
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Max Gekk
>Assignee: Max Gekk
>Priority: Major
>
> Choose a proper name for the legacy error class _LEGACY_ERROR_TEMP_2422, 
> double-check that there are tests for the error class, and improve the error 
> message format.






[jira] [Assigned] (SPARK-41518) Assign a name to the error class _LEGACY_ERROR_TEMP_2422

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41518:


Assignee: Max Gekk  (was: Apache Spark)

> Assign a name to the error class _LEGACY_ERROR_TEMP_2422
> 
>
> Key: SPARK-41518
> URL: https://issues.apache.org/jira/browse/SPARK-41518
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Max Gekk
>Assignee: Max Gekk
>Priority: Major
>
> Choose a proper name for the legacy error class _LEGACY_ERROR_TEMP_2422, 
> double-check that there are tests for the error class, and improve the error 
> message format.






[jira] [Assigned] (SPARK-41518) Assign a name to the error class _LEGACY_ERROR_TEMP_2422

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41518:


Assignee: Apache Spark  (was: Max Gekk)

> Assign a name to the error class _LEGACY_ERROR_TEMP_2422
> 
>
> Key: SPARK-41518
> URL: https://issues.apache.org/jira/browse/SPARK-41518
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Max Gekk
>Assignee: Apache Spark
>Priority: Major
>
> Choose a proper name for the legacy error class _LEGACY_ERROR_TEMP_2422, 
> double-check that there are tests for the error class, and improve the error 
> message format.






[jira] [Created] (SPARK-41518) Assign a name to the error class _LEGACY_ERROR_TEMP_2422

2022-12-14 Thread Max Gekk (Jira)
Max Gekk created SPARK-41518:


 Summary: Assign a name to the error class _LEGACY_ERROR_TEMP_2422
 Key: SPARK-41518
 URL: https://issues.apache.org/jira/browse/SPARK-41518
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.4.0
Reporter: Max Gekk
Assignee: Max Gekk


Choose a proper name for the legacy error class _LEGACY_ERROR_TEMP_2422, 
double-check that there are tests for the error class, and improve the error 
message format.






[jira] [Comment Edited] (SPARK-41517) Failed to find name hashes for org.apache.spark.connect.proto.LocalRelation

2022-12-14 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17647097#comment-17647097
 ] 

Yang Jie edited comment on SPARK-41517 at 12/14/22 1:15 PM:


cc [~gurwls223] , 

Can you reproduce this issue? It seems the `dev/mima` result is nevertheless 
successful.

 


was (Author: luciferyang):
cc [~gurwls223] 

> Failed to find name hashes for org.apache.spark.connect.proto.LocalRelation
> ---
>
> Key: SPARK-41517
> URL: https://issues.apache.org/jira/browse/SPARK-41517
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Major
>
> run "dev/mina" locally, then there are java.lang.RuntimeException
> {code:java}
> [error] java.lang.RuntimeException: Failed to find name hashes for 
> org.apache.spark.connect.proto.LocalRelation
> [error] scala.sys.package$.error(package.scala:30)
> [error] 
> sbt.internal.inc.AnalysisCallback.nameHashesForCompanions(Incremental.scala:962)
> [error] sbt.internal.inc.AnalysisCallback.analyzeClass(Incremental.scala:969)
> [error] 
> sbt.internal.inc.AnalysisCallback.$anonfun$addProductsAndDeps$4(Incremental.scala:992)
> [error] 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
> [error] scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
> [error] scala.collection.TraversableLike.map(TraversableLike.scala:286)
> [error] scala.collection.TraversableLike.map$(TraversableLike.scala:279)
> [error] 
> scala.collection.mutable.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:50)
> [error] scala.collection.SetLike.map(SetLike.scala:105)
> [error] scala.collection.SetLike.map$(SetLike.scala:105)
> [error] scala.collection.mutable.AbstractSet.map(Set.scala:50)
> [error] 
> sbt.internal.inc.AnalysisCallback.$anonfun$addProductsAndDeps$1(Incremental.scala:992)
> [error] 
> scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196)
> [error] 
> scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194)
> [error] scala.collection.Iterator.foreach(Iterator.scala:943)
> [error] scala.collection.Iterator.foreach$(Iterator.scala:943)
> [error] scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
> [error] scala.collection.IterableLike.foreach(IterableLike.scala:74)
> [error] scala.collection.IterableLike.foreach$(IterableLike.scala:73)
> [error] scala.collection.AbstractIterable.foreach(Iterable.scala:56)
> [error] scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199)
> [error] scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192)
> [error] scala.collection.AbstractTraversable.foldLeft(Traversable.scala:108)
> [error] 
> sbt.internal.inc.AnalysisCallback.addProductsAndDeps(Incremental.scala:985)
> [error] sbt.internal.inc.AnalysisCallback.getAnalysis(Incremental.scala:919)
> [error] 
> sbt.internal.inc.AnalysisCallback.getCycleResultOnce(Incremental.scala:910)
> [error] sbt.internal.inc.Incremental$$anon$2.run(Incremental.scala:464)
> [error] 
> sbt.internal.inc.IncrementalCommon$CycleState.next(IncrementalCommon.scala:116)
> [error] 
> sbt.internal.inc.IncrementalCommon$$anon$1.next(IncrementalCommon.scala:56)
> [error] 
> sbt.internal.inc.IncrementalCommon$$anon$1.next(IncrementalCommon.scala:52)
> [error] sbt.internal.inc.IncrementalCommon.cycle(IncrementalCommon.scala:263)
> [error] 
> sbt.internal.inc.Incremental$.$anonfun$incrementalCompile$8(Incremental.scala:418)
> [error] 
> sbt.internal.inc.Incremental$.withClassfileManager(Incremental.scala:506)
> [error] 
> sbt.internal.inc.Incremental$.incrementalCompile(Incremental.scala:405)
> [error] sbt.internal.inc.Incremental$.apply(Incremental.scala:171)
> [error] 
> sbt.internal.inc.IncrementalCompilerImpl.compileInternal(IncrementalCompilerImpl.scala:534)
> [error] 
> sbt.internal.inc.IncrementalCompilerImpl.$anonfun$compileIncrementally$1(IncrementalCompilerImpl.scala:488)
> [error] 
> sbt.internal.inc.IncrementalCompilerImpl.handleCompilationError(IncrementalCompilerImpl.scala:332)
> [error] 
> sbt.internal.inc.IncrementalCompilerImpl.compileIncrementally(IncrementalCompilerImpl.scala:425)
> [error] 
> sbt.internal.inc.IncrementalCompilerImpl.compile(IncrementalCompilerImpl.scala:137)
> [error] sbt.Defaults$.compileIncrementalTaskImpl(Defaults.scala:2363)
> [error] sbt.Defaults$.$anonfun$compileIncrementalTask$2(Defaults.scala:2313)
> [error] 
> sbt.internal.server.BspCompileTask$.$anonfun$compute$1(BspCompileTask.scala:30)
> [error] sbt.internal.io.Retry$.apply(Retry.scala:46)
> [error] sbt.internal.io.Retry$.apply(Retry.scala:28)
> [error] sbt.internal.io.Retry$.apply(Retry.scala:23)
> [error] sbt.internal.server.BspCompileTask$.compute(BspCompileTask.scala:30)
> [error] 

[jira] [Commented] (SPARK-41517) Failed to find name hashes for org.apache.spark.connect.proto.LocalRelation

2022-12-14 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17647097#comment-17647097
 ] 

Yang Jie commented on SPARK-41517:
--

cc [~gurwls223] 

> Failed to find name hashes for org.apache.spark.connect.proto.LocalRelation
> ---
>
> Key: SPARK-41517
> URL: https://issues.apache.org/jira/browse/SPARK-41517
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Major
>
> run "dev/mina" locally, then there are java.lang.RuntimeException
> {code:java}
> [error] java.lang.RuntimeException: Failed to find name hashes for 
> org.apache.spark.connect.proto.LocalRelation
> [error] scala.sys.package$.error(package.scala:30)
> [error] 
> sbt.internal.inc.AnalysisCallback.nameHashesForCompanions(Incremental.scala:962)
> [error] sbt.internal.inc.AnalysisCallback.analyzeClass(Incremental.scala:969)
> [error] 
> sbt.internal.inc.AnalysisCallback.$anonfun$addProductsAndDeps$4(Incremental.scala:992)
> [error] 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
> [error] scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
> [error] scala.collection.TraversableLike.map(TraversableLike.scala:286)
> [error] scala.collection.TraversableLike.map$(TraversableLike.scala:279)
> [error] 
> scala.collection.mutable.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:50)
> [error] scala.collection.SetLike.map(SetLike.scala:105)
> [error] scala.collection.SetLike.map$(SetLike.scala:105)
> [error] scala.collection.mutable.AbstractSet.map(Set.scala:50)
> [error] 
> sbt.internal.inc.AnalysisCallback.$anonfun$addProductsAndDeps$1(Incremental.scala:992)
> [error] 
> scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196)
> [error] 
> scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194)
> [error] scala.collection.Iterator.foreach(Iterator.scala:943)
> [error] scala.collection.Iterator.foreach$(Iterator.scala:943)
> [error] scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
> [error] scala.collection.IterableLike.foreach(IterableLike.scala:74)
> [error] scala.collection.IterableLike.foreach$(IterableLike.scala:73)
> [error] scala.collection.AbstractIterable.foreach(Iterable.scala:56)
> [error] scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199)
> [error] scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192)
> [error] scala.collection.AbstractTraversable.foldLeft(Traversable.scala:108)
> [error] 
> sbt.internal.inc.AnalysisCallback.addProductsAndDeps(Incremental.scala:985)
> [error] sbt.internal.inc.AnalysisCallback.getAnalysis(Incremental.scala:919)
> [error] 
> sbt.internal.inc.AnalysisCallback.getCycleResultOnce(Incremental.scala:910)
> [error] sbt.internal.inc.Incremental$$anon$2.run(Incremental.scala:464)
> [error] 
> sbt.internal.inc.IncrementalCommon$CycleState.next(IncrementalCommon.scala:116)
> [error] 
> sbt.internal.inc.IncrementalCommon$$anon$1.next(IncrementalCommon.scala:56)
> [error] 
> sbt.internal.inc.IncrementalCommon$$anon$1.next(IncrementalCommon.scala:52)
> [error] sbt.internal.inc.IncrementalCommon.cycle(IncrementalCommon.scala:263)
> [error] 
> sbt.internal.inc.Incremental$.$anonfun$incrementalCompile$8(Incremental.scala:418)
> [error] 
> sbt.internal.inc.Incremental$.withClassfileManager(Incremental.scala:506)
> [error] 
> sbt.internal.inc.Incremental$.incrementalCompile(Incremental.scala:405)
> [error] sbt.internal.inc.Incremental$.apply(Incremental.scala:171)
> [error] 
> sbt.internal.inc.IncrementalCompilerImpl.compileInternal(IncrementalCompilerImpl.scala:534)
> [error] 
> sbt.internal.inc.IncrementalCompilerImpl.$anonfun$compileIncrementally$1(IncrementalCompilerImpl.scala:488)
> [error] 
> sbt.internal.inc.IncrementalCompilerImpl.handleCompilationError(IncrementalCompilerImpl.scala:332)
> [error] 
> sbt.internal.inc.IncrementalCompilerImpl.compileIncrementally(IncrementalCompilerImpl.scala:425)
> [error] 
> sbt.internal.inc.IncrementalCompilerImpl.compile(IncrementalCompilerImpl.scala:137)
> [error] sbt.Defaults$.compileIncrementalTaskImpl(Defaults.scala:2363)
> [error] sbt.Defaults$.$anonfun$compileIncrementalTask$2(Defaults.scala:2313)
> [error] 
> sbt.internal.server.BspCompileTask$.$anonfun$compute$1(BspCompileTask.scala:30)
> [error] sbt.internal.io.Retry$.apply(Retry.scala:46)
> [error] sbt.internal.io.Retry$.apply(Retry.scala:28)
> [error] sbt.internal.io.Retry$.apply(Retry.scala:23)
> [error] sbt.internal.server.BspCompileTask$.compute(BspCompileTask.scala:30)
> [error] sbt.Defaults$.$anonfun$compileIncrementalTask$1(Defaults.scala:2311)
> [error] scala.Function1.$anonfun$compose$1(Function1.scala:49)
> [error] 
> 

[jira] [Created] (SPARK-41517) Failed to find name hashes for org.apache.spark.connect.proto.LocalRelation

2022-12-14 Thread Yang Jie (Jira)
Yang Jie created SPARK-41517:


 Summary: Failed to find name hashes for 
org.apache.spark.connect.proto.LocalRelation
 Key: SPARK-41517
 URL: https://issues.apache.org/jira/browse/SPARK-41517
 Project: Spark
  Issue Type: Bug
  Components: Project Infra
Affects Versions: 3.4.0
Reporter: Yang Jie


run "dev/mina" locally, then there are java.lang.RuntimeException
{code:java}
[error] java.lang.RuntimeException: Failed to find name hashes for 
org.apache.spark.connect.proto.LocalRelation
[error] scala.sys.package$.error(package.scala:30)
[error] 
sbt.internal.inc.AnalysisCallback.nameHashesForCompanions(Incremental.scala:962)
[error] sbt.internal.inc.AnalysisCallback.analyzeClass(Incremental.scala:969)
[error] 
sbt.internal.inc.AnalysisCallback.$anonfun$addProductsAndDeps$4(Incremental.scala:992)
[error] 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
[error] scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
[error] scala.collection.TraversableLike.map(TraversableLike.scala:286)
[error] scala.collection.TraversableLike.map$(TraversableLike.scala:279)
[error] 
scala.collection.mutable.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:50)
[error] scala.collection.SetLike.map(SetLike.scala:105)
[error] scala.collection.SetLike.map$(SetLike.scala:105)
[error] scala.collection.mutable.AbstractSet.map(Set.scala:50)
[error] 
sbt.internal.inc.AnalysisCallback.$anonfun$addProductsAndDeps$1(Incremental.scala:992)
[error] 
scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196)
[error] 
scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194)
[error] scala.collection.Iterator.foreach(Iterator.scala:943)
[error] scala.collection.Iterator.foreach$(Iterator.scala:943)
[error] scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
[error] scala.collection.IterableLike.foreach(IterableLike.scala:74)
[error] scala.collection.IterableLike.foreach$(IterableLike.scala:73)
[error] scala.collection.AbstractIterable.foreach(Iterable.scala:56)
[error] scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199)
[error] scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192)
[error] scala.collection.AbstractTraversable.foldLeft(Traversable.scala:108)
[error] 
sbt.internal.inc.AnalysisCallback.addProductsAndDeps(Incremental.scala:985)
[error] sbt.internal.inc.AnalysisCallback.getAnalysis(Incremental.scala:919)
[error] 
sbt.internal.inc.AnalysisCallback.getCycleResultOnce(Incremental.scala:910)
[error] sbt.internal.inc.Incremental$$anon$2.run(Incremental.scala:464)
[error] 
sbt.internal.inc.IncrementalCommon$CycleState.next(IncrementalCommon.scala:116)
[error] 
sbt.internal.inc.IncrementalCommon$$anon$1.next(IncrementalCommon.scala:56)
[error] 
sbt.internal.inc.IncrementalCommon$$anon$1.next(IncrementalCommon.scala:52)
[error] sbt.internal.inc.IncrementalCommon.cycle(IncrementalCommon.scala:263)
[error] 
sbt.internal.inc.Incremental$.$anonfun$incrementalCompile$8(Incremental.scala:418)
[error] 
sbt.internal.inc.Incremental$.withClassfileManager(Incremental.scala:506)
[error] sbt.internal.inc.Incremental$.incrementalCompile(Incremental.scala:405)
[error] sbt.internal.inc.Incremental$.apply(Incremental.scala:171)
[error] 
sbt.internal.inc.IncrementalCompilerImpl.compileInternal(IncrementalCompilerImpl.scala:534)
[error] 
sbt.internal.inc.IncrementalCompilerImpl.$anonfun$compileIncrementally$1(IncrementalCompilerImpl.scala:488)
[error] 
sbt.internal.inc.IncrementalCompilerImpl.handleCompilationError(IncrementalCompilerImpl.scala:332)
[error] 
sbt.internal.inc.IncrementalCompilerImpl.compileIncrementally(IncrementalCompilerImpl.scala:425)
[error] 
sbt.internal.inc.IncrementalCompilerImpl.compile(IncrementalCompilerImpl.scala:137)
[error] sbt.Defaults$.compileIncrementalTaskImpl(Defaults.scala:2363)
[error] sbt.Defaults$.$anonfun$compileIncrementalTask$2(Defaults.scala:2313)
[error] 
sbt.internal.server.BspCompileTask$.$anonfun$compute$1(BspCompileTask.scala:30)
[error] sbt.internal.io.Retry$.apply(Retry.scala:46)
[error] sbt.internal.io.Retry$.apply(Retry.scala:28)
[error] sbt.internal.io.Retry$.apply(Retry.scala:23)
[error] sbt.internal.server.BspCompileTask$.compute(BspCompileTask.scala:30)
[error] sbt.Defaults$.$anonfun$compileIncrementalTask$1(Defaults.scala:2311)
[error] scala.Function1.$anonfun$compose$1(Function1.scala:49)
[error] 
sbt.internal.util.$tilde$greater.$anonfun$$u2219$1(TypeFunctions.scala:62)
[error] sbt.std.Transform$$anon$4.work(Transform.scala:68)
[error] sbt.Execute.$anonfun$submit$2(Execute.scala:282)
[error] sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:23)
[error] sbt.Execute.work(Execute.scala:291)
[error] sbt.Execute.$anonfun$submit$1(Execute.scala:282)
[error] 
sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions.scala:265)
[error] 

[jira] [Assigned] (SPARK-41508) Assign name to _LEGACY_ERROR_TEMP_1179 and unwrap the existing SparkThrowable

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41508:


Assignee: (was: Apache Spark)

>  Assign name to _LEGACY_ERROR_TEMP_1179 and unwrap the existing SparkThrowable
> --
>
> Key: SPARK-41508
> URL: https://issues.apache.org/jira/browse/SPARK-41508
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-41508) Assign name to _LEGACY_ERROR_TEMP_1179 and unwrap the existing SparkThrowable

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41508:


Assignee: Apache Spark

>  Assign name to _LEGACY_ERROR_TEMP_1179 and unwrap the existing SparkThrowable
> --
>
> Key: SPARK-41508
> URL: https://issues.apache.org/jira/browse/SPARK-41508
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Assignee: Apache Spark
>Priority: Major
>







[jira] [Commented] (SPARK-41508) Assign name to _LEGACY_ERROR_TEMP_1179 and unwrap the existing SparkThrowable

2022-12-14 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17647087#comment-17647087
 ] 

Apache Spark commented on SPARK-41508:
--

User 'LuciferYang' has created a pull request for this issue:
https://github.com/apache/spark/pull/39063

>  Assign name to _LEGACY_ERROR_TEMP_1179 and unwrap the existing SparkThrowable
> --
>
> Key: SPARK-41508
> URL: https://issues.apache.org/jira/browse/SPARK-41508
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Major
>







[jira] [Commented] (SPARK-41516) Allow jdbc dialects to override the query used to create a table

2022-12-14 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17647038#comment-17647038
 ] 

Apache Spark commented on SPARK-41516:
--

User 'huangxiaopingRD' has created a pull request for this issue:
https://github.com/apache/spark/pull/39062

> Allow jdbc dialects to override the query used to create a table
> 
>
> Key: SPARK-41516
> URL: https://issues.apache.org/jira/browse/SPARK-41516
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.4.0
>Reporter: xiaoping.huang
>Priority: Major
>







[jira] [Assigned] (SPARK-41516) Allow jdbc dialects to override the query used to create a table

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41516:


Assignee: (was: Apache Spark)

> Allow jdbc dialects to override the query used to create a table
> 
>
> Key: SPARK-41516
> URL: https://issues.apache.org/jira/browse/SPARK-41516
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.4.0
>Reporter: xiaoping.huang
>Priority: Major
>







[jira] [Commented] (SPARK-41516) Allow jdbc dialects to override the query used to create a table

2022-12-14 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17647037#comment-17647037
 ] 

Apache Spark commented on SPARK-41516:
--

User 'huangxiaopingRD' has created a pull request for this issue:
https://github.com/apache/spark/pull/39062

> Allow jdbc dialects to override the query used to create a table
> 
>
> Key: SPARK-41516
> URL: https://issues.apache.org/jira/browse/SPARK-41516
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.4.0
>Reporter: xiaoping.huang
>Priority: Major
>







[jira] [Assigned] (SPARK-41516) Allow jdbc dialects to override the query used to create a table

2022-12-14 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-41516:


Assignee: Apache Spark

> Allow jdbc dialects to override the query used to create a table
> 
>
> Key: SPARK-41516
> URL: https://issues.apache.org/jira/browse/SPARK-41516
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.4.0
>Reporter: xiaoping.huang
>Assignee: Apache Spark
>Priority: Major
>







[jira] [Created] (SPARK-41516) Allow jdbc dialects to override the query used to create a table

2022-12-14 Thread xiaoping.huang (Jira)
xiaoping.huang created SPARK-41516:
--

 Summary: Allow jdbc dialects to override the query used to create a table
 Key: SPARK-41516
 URL: https://issues.apache.org/jira/browse/SPARK-41516
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core, SQL
Affects Versions: 3.4.0
Reporter: xiaoping.huang
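For illustration, a minimal sketch of what such a dialect hook might look like. The trait and method names here are hypothetical (they are not Spark's actual `JdbcDialect` API, which is defined by the linked pull request); the point is only the shape of the extension: the dialect, not the generic JDBC path, builds the CREATE TABLE statement.

```scala
// Hypothetical sketch: a dialect-level hook for building the CREATE TABLE
// statement. Names are illustrative, not Spark's real JdbcDialect API.
trait SqlDialect {
  // Default query, roughly what a generic JDBC write path would emit.
  def createTableQuery(table: String, schemaDdl: String, options: String): String =
    s"CREATE TABLE $table ($schemaDdl) $options".trim
}

// A dialect overriding the query, e.g. for an engine that requires an
// engine-specific clause that the generic statement cannot express.
object ClickHouseLikeDialect extends SqlDialect {
  override def createTableQuery(table: String, schemaDdl: String, options: String): String =
    s"CREATE TABLE $table ($schemaDdl) ENGINE = MergeTree() $options".trim
}
```

With a hook like this, callers simply ask the resolved dialect for the statement, so engine-specific syntax no longer needs special-casing at the call site.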









[jira] [Resolved] (SPARK-41514) Add `PVC-oriented executor pod allocation` section and revise config name

2022-12-14 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-41514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun resolved SPARK-41514.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Issue resolved by pull request 39058
[https://github.com/apache/spark/pull/39058]

> Add `PVC-oriented executor pod allocation` section and revise config name
> -
>
> Key: SPARK-41514
> URL: https://issues.apache.org/jira/browse/SPARK-41514
> Project: Spark
>  Issue Type: Sub-task
>  Components: Documentation, Kubernetes
>Affects Versions: 3.4.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
> Fix For: 3.4.0
>
>







[jira] [Comment Edited] (SPARK-41497) Accumulator undercounting in the case of retry task with rdd cache

2022-12-14 Thread Mridul Muralidharan (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-41497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17646972#comment-17646972
 ] 

Mridul Muralidharan edited comment on SPARK-41497 at 12/14/22 8:03 AM:
---

[~Ngone51] Agree, that is what I was not sure of (whether we can detect this 
scenario involving accumulators that might be updated subsequently). Note 
that updates to the same accumulator can happen both before and after a cache 
in user code - so we might only be able to catch the scenario where there are 
no accumulators.
If I am not wrong, SQL makes very heavy use of accumulators, so most stages 
will end up having them anyway - right?

I would expect this scenario (even without accumulators) to be infrequent 
enough that the cost of extra recomputation might be fine.


was (Author: mridulm80):
[~Ngone51] Agree, that is what I was not sure of.
I would expect this scenario (even without accumulator) to be fairly low 
frequency enough that the cost of extra recomputation might be fine.
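To make the failure mode concrete, here is a minimal, Spark-free simulation (the `AccumulatorCacheDemo` object and its members are hypothetical stand-ins, not Spark's actual cache or accumulator machinery): a cached partition result short-circuits the compute closure, so the accumulator update inside it runs only on the first, cache-missing attempt.

```scala
import scala.collection.mutable

// Hypothetical, Spark-free simulation of the accumulator-undercount hazard.
// A cached "partition" result short-circuits the compute closure, so the
// accumulator update inside it runs only on the first (cache-miss) attempt.
object AccumulatorCacheDemo {
  var acc: Long = 0L                                   // stands in for a LongAccumulator
  private val cache = mutable.Map.empty[Int, Seq[Int]] // stands in for the rdd cache

  // The task function: updates the accumulator as a side effect of computing.
  private def compute(partition: Int): Seq[Int] = {
    acc += 100
    (0 until 10).map(_ + 1)
  }

  // One task attempt: reuses the cached block when present, like a retried
  // task loading the rdd cache instead of re-running the task function.
  def runAttempt(partition: Int): Seq[Int] =
    cache.getOrElseUpdate(partition, compute(partition))
}
```

Running two attempts against the same partition leaves `acc` at 100 rather than 200: the first attempt computes and records 100, the "retry" hits the cache and skips the update. That is the undercount described in the issue quoted below.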

> Accumulator undercounting in the case of retry task with rdd cache
> --
>
> Key: SPARK-41497
> URL: https://issues.apache.org/jira/browse/SPARK-41497
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.4.8, 3.0.3, 3.1.3, 3.2.2, 3.3.1
>Reporter: wuyi
>Priority: Major
>
> Accumulator could be undercounted when the retried task has an rdd cache. See 
> the example below; you can also find a complete and reproducible example at 
> [https://github.com/apache/spark/compare/master...Ngone51:spark:fix-acc]
>   
> {code:scala}
> test("SPARK-XXX") {
>   // Set up a cluster with 2 executors
>   val conf = new SparkConf()
>     .setMaster("local-cluster[2, 1, 1024]").setAppName("TaskSchedulerImplSuite")
>   sc = new SparkContext(conf)
>   // Set up a custom task scheduler. The scheduler will fail the first task
>   // attempt of the job submitted below. In particular, the first attempt
>   // would succeed in its computation (accumulator accounting, result caching)
>   // but fail to report its success status due to a concurrent executor loss.
>   // The second task attempt would succeed.
>   taskScheduler = setupSchedulerWithCustomStatusUpdate(sc)
>   val myAcc = sc.longAccumulator("myAcc")
>   // Create an rdd with only one partition so there's only one task, and
>   // specify the storage level MEMORY_ONLY_2 so that the rdd result will be
>   // cached on both executors.
>   val rdd = sc.parallelize(0 until 10, 1).mapPartitions { iter =>
>     myAcc.add(100)
>     iter.map(x => x + 1)
>   }.persist(StorageLevel.MEMORY_ONLY_2)
>   // This will pass since the second task attempt will succeed
>   assert(rdd.count() === 10)
>   // This will fail because `myAcc.add(100)` won't be executed during the
>   // second task attempt: the second attempt loads the rdd cache directly
>   // instead of executing the task function, so `myAcc.add(100)` is skipped.
>   assert(myAcc.value === 100)
> } {code}
>  
> We could also hit this issue with decommissioning even if the rdd has only 
> one copy. For example, decommissioning could migrate the rdd cache block to 
> another executor (the result is effectively the same as having 2 copies), and 
> the decommissioned executor could be lost before the task reports its success 
> status to the driver. 
>  
> And the issue is more complicated to fix than expected. I have tried several 
> fixes, but none of them is ideal:
> Option 1: Clean up any rdd cache related to the failed task: in practice, 
> this option can already fix the issue in most cases. However, theoretically, 
> an rdd cache could be reported to the driver right after the driver cleans up 
> the failed task's caches due to asynchronous communication, so this option 
> can't resolve the issue thoroughly;
> Option 2: Disallow rdd cache reuse across task attempts for the same task: 
> this option can fix the issue 100%. The problem is that it also affects cases 
> where the rdd cache could safely be reused across attempts (e.g., when there 
> is no accumulator operation in the task), which could cause a performance 
> regression;
> Option 3: Introduce an accumulator cache: first, this requires a new 
> framework for supporting accumulator caching; second, the driver would need 
> extra logic to distinguish whether the cached accumulator value should be 
> reported to the user, to avoid overcounting. For example, in the case of a 
> task retry, the value should be reported; however, in the case of rdd cache 
> reuse, the value shouldn't be reported (should it?);
> Option 4: Do task success validation when a task tries to load the rdd 
> cache: this approach defines that an rdd cache is only valid/accessible if the task has