yaooqinn commented on code in PR #40437:
URL: https://github.com/apache/spark/pull/40437#discussion_r1152809362
##
sql/core/src/main/scala/org/apache/spark/sql/catalyst/analysis/KeepCommandOutputWithHive.scala:
##
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundati
wangyum commented on code in PR #40601:
URL: https://github.com/apache/spark/pull/40601#discussion_r1152803208
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##
@@ -424,6 +428,8 @@ class Analyzer(override val catalogManager: CatalogManager)
yaooqinn commented on PR #40437:
URL: https://github.com/apache/spark/pull/40437#issuecomment-1489781086
I am not sure why we must stay consistent with Hive in such a case:
1. this is just output from the command line interface, not a programming
API.
2. the `hive` CLI itself is a
itholic commented on PR #40525:
URL: https://github.com/apache/spark/pull/40525#issuecomment-1489771881
CI passed. cc @HyukjinKwon @ueshin @xinrong-meng @zhengruifeng PTAL when you
find some time.
I summarized the key changes in the PR description for review.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
zsxwing commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1152772092
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##
@@ -980,3 +1022,65 @@ object StreamingDeduplicateExec {
private val E
wangyum opened a new pull request, #40601:
URL: https://github.com/apache/spark/pull/40601
### What changes were proposed in this pull request?
This PR changes the result type of string +/- interval from string type to
timestamp type.
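The coercion change described in this entry can be sketched with a toy helper (a hypothetical model, not Spark's actual `TypeCoercion` code): previously `string +/- interval` kept string as the result type; with the change, the string operand is resolved as a timestamp.

```python
def add_sub_result_type(left: str, right: str, string_as_timestamp: bool = True) -> str:
    """Toy model of the result type of `left +/- right` (not Spark's code)."""
    operands = {left, right}
    if operands == {"string", "interval"}:
        # Old behavior: the result stayed a string; the change described
        # above resolves it to timestamp instead.
        return "timestamp" if string_as_timestamp else "string"
    if "interval" in operands:
        # datetime +/- interval keeps the datetime operand's type
        # (a simplification; real coercion has more cases).
        return left if right == "interval" else right
    return left
```

For example, `add_sub_result_type("string", "interval")` yields `"timestamp"` under the new behavior, and `"string"` when the flag is off.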
### Why are the changes needed?
LuciferYang commented on PR #40598:
URL: https://github.com/apache/spark/pull/40598#issuecomment-1489742010
cc @sadikovi @srowen @HyukjinKwon
LuciferYang commented on PR #40597:
URL: https://github.com/apache/spark/pull/40597#issuecomment-1489741560
Thanks @HyukjinKwon @sadikovi
LuciferYang commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1152742646
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
HyukjinKwon closed pull request #40599: [SPARK-42907][TESTS][FOLLOWUP] Avro
functions doctest cleanup
URL: https://github.com/apache/spark/pull/40599
anishshri-db commented on PR #40600:
URL: https://github.com/apache/spark/pull/40600#issuecomment-1489721186
@HeartSaVioR - PTAL when you get a chance. Thx
HyukjinKwon commented on PR #40599:
URL: https://github.com/apache/spark/pull/40599#issuecomment-1489721124
Merged to master and branch-3.4.
anishshri-db opened a new pull request, #40600:
URL: https://github.com/apache/spark/pull/40600
### What changes were proposed in this pull request?
Add option to skip commit coordinator as part of StreamingWrite API for DSv2
sources/sinks. This option was already present as part of the B
zhengruifeng opened a new pull request, #40599:
URL: https://github.com/apache/spark/pull/40599
### What changes were proposed in this pull request?
Avro functions doctest cleanup, remove unused `print`
### Why are the changes needed?
those lines were only used to investigate the logs
HyukjinKwon closed pull request #40597: [SPARK-42971][CORE] Change to print
`workdir` if `appDirs` is null when worker handle `WorkDirCleanup` event
URL: https://github.com/apache/spark/pull/40597
HyukjinKwon commented on PR #40597:
URL: https://github.com/apache/spark/pull/40597#issuecomment-1489687927
Merged to master and branch-3.4.
LuciferYang opened a new pull request, #40598:
URL: https://github.com/apache/spark/pull/40598
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
itholic commented on PR #39702:
URL: https://github.com/apache/spark/pull/39702#issuecomment-1489668136
@MaxGekk Can you take a look when you find some time?
LuciferYang commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1152707836
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
LuciferYang commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1152702218
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
ulysses-you commented on code in PR #40589:
URL: https://github.com/apache/spark/pull/40589#discussion_r1152689151
##
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveRulesHolder.scala:
##
@@ -26,5 +26,6 @@ import org.apache.spark.sql.execution.SparkPlan
sadikovi commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1152687590
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
beliefer commented on PR #40355:
URL: https://github.com/apache/spark/pull/40355#issuecomment-1489622921
@hvanhovell Could you take a review?
beliefer commented on code in PR #40563:
URL: https://github.com/apache/spark/pull/40563#discussion_r1152684039
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/CollectionExpressionsSuite.scala:
##
@@ -1855,50 +1855,6 @@ class CollectionExpressionsSuite e
wangyum closed pull request #40294: [SPARK-40610][SQL] Support unwrap date type
to string type
URL: https://github.com/apache/spark/pull/40294
wangyum commented on PR #40294:
URL: https://github.com/apache/spark/pull/40294#issuecomment-1489605259
Closing it, because this change may cause a potential data issue. Users can
set `spark.sql.legacy.typeCoercion.datetimeToString.enabled` to `true` to
restore the old behavior.
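The legacy switch mentioned in this comment can be set in a SQL session (flag name taken verbatim from the comment; its default is assumed to be `false`):

```sql
-- Restore the old datetime-to-string coercion behavior:
SET spark.sql.legacy.typeCoercion.datetimeToString.enabled=true;
```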
LuciferYang commented on PR #40597:
URL: https://github.com/apache/spark/pull/40597#issuecomment-1489600591
cc @HyukjinKwon @sadikovi
Hisoka-X commented on code in PR #40564:
URL: https://github.com/apache/spark/pull/40564#discussion_r1152663238
##
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/IntegrationTestUtils.scala:
##
@@ -57,6 +57,12 @@ object IntegrationTestUtils {
LuciferYang opened a new pull request, #40597:
URL: https://github.com/apache/spark/pull/40597
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
LuciferYang commented on code in PR #36677:
URL: https://github.com/apache/spark/pull/36677#discussion_r1152661979
##
core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala:
##
@@ -516,7 +516,8 @@ private[deploy] class Worker(
val cleanupFuture: concurrent.Futu
cloud-fan commented on code in PR #40437:
URL: https://github.com/apache/spark/pull/40437#discussion_r1152653649
##
sql/core/src/main/scala/org/apache/spark/sql/catalyst/analysis/KeepCommandOutputWithHive.scala:
##
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundat
panbingkun opened a new pull request, #40596:
URL: https://github.com/apache/spark/pull/40596
### What changes were proposed in this pull request?
This PR aims to upgrade buf from 1.15.1 to 1.16.0.
### Why are the changes needed?
Release Notes: https://github.com/bufbuild/buf/relea
cloud-fan commented on code in PR #40589:
URL: https://github.com/apache/spark/pull/40589#discussion_r1152649900
##
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveRulesHolder.scala:
##
@@ -26,5 +26,6 @@ import org.apache.spark.sql.execution.SparkPlan
*
cloud-fan commented on code in PR #40589:
URL: https://github.com/apache/spark/pull/40589#discussion_r1152649299
##
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveRulesHolder.scala:
##
@@ -26,5 +26,6 @@ import org.apache.spark.sql.execution.SparkPlan
*
ueshin commented on PR #40594:
URL: https://github.com/apache/spark/pull/40594#issuecomment-1489567987
#40595
ueshin opened a new pull request, #40595:
URL: https://github.com/apache/spark/pull/40595
### What changes were proposed in this pull request?
Reuses `pyspark.sql.tests.test_arrow` test cases.
### Why are the changes needed?
`test_arrow` is also helpful because it contain
beliefer commented on code in PR #40563:
URL: https://github.com/apache/spark/pull/40563#discussion_r1152648264
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -5056,128 +4950,45 @@ case class ArrayCompact(child: Express
cloud-fan commented on code in PR #40116:
URL: https://github.com/apache/spark/pull/40116#discussion_r1152647858
##
sql/core/src/main/scala/org/apache/spark/sql/SQLImplicits.scala:
##
@@ -45,7 +45,7 @@ abstract class SQLImplicits extends LowPrioritySQLImplicits {
}
// Pr
beliefer commented on code in PR #40563:
URL: https://github.com/apache/spark/pull/40563#discussion_r1152647435
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -1400,120 +1400,24 @@ case class ArrayContains(left: Express
beliefer commented on PR #40291:
URL: https://github.com/apache/spark/pull/40291#issuecomment-1489556667
> Is that #40415?
It is https://github.com/apache/spark/pull/40358
sadikovi commented on code in PR #36677:
URL: https://github.com/apache/spark/pull/36677#discussion_r1152636135
##
core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala:
##
@@ -516,7 +516,8 @@ private[deploy] class Worker(
val cleanupFuture: concurrent.Future[
HyukjinKwon commented on PR #40594:
URL: https://github.com/apache/spark/pull/40594#issuecomment-1489522478
It has a conflict with branch-3.4. Would you mind creating a backport please?
LuciferYang commented on code in PR #36677:
URL: https://github.com/apache/spark/pull/36677#discussion_r1152618683
##
core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala:
##
@@ -516,7 +516,8 @@ private[deploy] class Worker(
val cleanupFuture: concurrent.Futu
HyukjinKwon closed pull request #40594: [SPARK-42970][CONNECT][PYTHON][TESTS]
Reuse pyspark.sql.tests.test_arrow test cases
URL: https://github.com/apache/spark/pull/40594
HyukjinKwon commented on PR #40594:
URL: https://github.com/apache/spark/pull/40594#issuecomment-1489520920
Merged to master and branch-3.4.
github-actions[bot] commented on PR #39102:
URL: https://github.com/apache/spark/pull/39102#issuecomment-1489512808
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
amaliujia commented on code in PR #40581:
URL: https://github.com/apache/spark/pull/40581#discussion_r1152606377
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -482,27 +482,66 @@ class SparkConnectPlanner(val sess
srowen commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1152590107
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
sadikovi commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1152583895
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
sadikovi commented on code in PR #36677:
URL: https://github.com/apache/spark/pull/36677#discussion_r1152581161
##
core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala:
##
@@ -516,7 +516,8 @@ private[deploy] class Worker(
val cleanupFuture: concurrent.Future[
ueshin opened a new pull request, #40594:
URL: https://github.com/apache/spark/pull/40594
### What changes were proposed in this pull request?
Reuses `pyspark.sql.tests.test_arrow` test cases.
### Why are the changes needed?
`test_arrow` is also helpful because it contain
zhenlineo commented on code in PR #40581:
URL: https://github.com/apache/spark/pull/40581#discussion_r1152474793
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -482,27 +482,66 @@ class SparkConnectPlanner(val sess
zhenlineo commented on code in PR #40581:
URL: https://github.com/apache/spark/pull/40581#discussion_r1152469860
##
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/IntegrationTestUtils.scala:
##
@@ -43,7 +45,25 @@ object IntegrationTestUtils
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1152453603
##
connector/connect/common/src/main/protobuf/spark/connect/commands.proto:
##
@@ -177,3 +179,97 @@ message WriteOperationV2 {
// (Optional) A condition for overwrit
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1152452932
##
connector/connect/common/src/main/protobuf/spark/connect/commands.proto:
##
@@ -177,3 +179,97 @@ message WriteOperationV2 {
// (Optional) A condition for overwrit
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1152451343
##
python/pyspark/sql/connect/streaming/query.py:
##
@@ -0,0 +1,173 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license a
amaliujia commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1152441819
##
connector/connect/common/src/main/protobuf/spark/connect/commands.proto:
##
@@ -177,3 +179,97 @@ message WriteOperationV2 {
// (Optional) A condition for overwr
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1152436270
##
connector/connect/common/src/main/protobuf/spark/connect/commands.proto:
##
@@ -177,3 +179,97 @@ message WriteOperationV2 {
// (Optional) A condition for overwrit
MaxGekk opened a new pull request, #40593:
URL: https://github.com/apache/spark/pull/40593
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How was
hvanhovell closed pull request #40590: [SPARK-42631][CONNECT][FOLLOW-UP] Expose
Column.expr to extensions
URL: https://github.com/apache/spark/pull/40590
hvanhovell commented on PR #40590:
URL: https://github.com/apache/spark/pull/40590#issuecomment-1489181163
Merging.
amaliujia commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1152388067
##
connector/connect/common/src/main/protobuf/spark/connect/commands.proto:
##
@@ -177,3 +179,97 @@ message WriteOperationV2 {
// (Optional) A condition for overwr
amaliujia commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1152387534
##
connector/connect/common/src/main/protobuf/spark/connect/commands.proto:
##
@@ -177,3 +179,97 @@ message WriteOperationV2 {
// (Optional) A condition for overwr
amaliujia commented on code in PR #40581:
URL: https://github.com/apache/spark/pull/40581#discussion_r1152385742
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -482,27 +482,66 @@ class SparkConnectPlanner(val sess
amaliujia commented on PR #40590:
URL: https://github.com/apache/spark/pull/40590#issuecomment-1489129059
LGTM
jiangxb1987 opened a new pull request, #40592:
URL: https://github.com/apache/spark/pull/40592
### What changes were proposed in this pull request?
This PR fixes a bug where `SparkListenerTaskStart` can have `stageAttemptId =
-1` when a task is launched after the stage is cancelled. A
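The failure mode can be modeled with a toy lookup (hypothetical helper names, not Spark's scheduler code): once a stage is cancelled it is no longer tracked, so the `-1` sentinel leaked into the task-start event; one plausible fix is to fall back to the last known attempt id.

```python
def task_start_attempt_id(active_stages: dict, stage_id: int) -> int:
    """Buggy lookup: yields the -1 sentinel once the stage is cancelled."""
    return active_stages.get(stage_id, -1)

def task_start_attempt_id_fixed(active_stages: dict,
                                latest_attempts: dict,
                                stage_id: int) -> int:
    """Fixed lookup: falls back to the last known attempt id (0 if none)."""
    if stage_id in active_stages:
        return active_stages[stage_id]
    return latest_attempts.get(stage_id, 0)
```

With stage 2 cancelled after attempt 1, the buggy path reports `-1` while the fixed path reports `1`.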
WweiL commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1152323338
##
python/pyspark/sql/connect/streaming/query.py:
##
@@ -0,0 +1,173 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agr
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1152315438
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -1969,6 +2014,136 @@ class SparkConnectPlanner(val sess
WweiL commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1152307065
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -1969,6 +2014,136 @@ class SparkConnectPlanner(val sessio
WweiL commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1152296460
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -1969,6 +2014,136 @@ class SparkConnectPlanner(val sessio
hvanhovell commented on PR #40291:
URL: https://github.com/apache/spark/pull/40291#issuecomment-1489025120
Is that https://github.com/apache/spark/pull/40415?
zhenlineo commented on code in PR #40581:
URL: https://github.com/apache/spark/pull/40581#discussion_r1152274921
##
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/IntegrationTestUtils.scala:
##
@@ -43,7 +45,27 @@ object IntegrationTestUtils
paul-laffon-dd opened a new pull request, #40591:
URL: https://github.com/apache/spark/pull/40591
### What changes were proposed in this pull request?
The exit code is already available in the `stop(exitCode: Int)` function of
the SparkContext; it can only be propagated to
sunchao commented on code in PR #39950:
URL: https://github.com/apache/spark/pull/39950#discussion_r1152231701
##
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/ParquetFooterReader.java:
##
@@ -17,23 +17,53 @@
package org.apache.spark.sql.execution.
tomvanbussel opened a new pull request, #40590:
URL: https://github.com/apache/spark/pull/40590
### What changes were proposed in this pull request?
This PR is a follow-up to https://github.com/apache/spark/pull/40234, which
makes it possible for extensions to create custom `Dataset`s and
zhenlineo commented on code in PR #40564:
URL: https://github.com/apache/spark/pull/40564#discussion_r1152212099
##
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/IntegrationTestUtils.scala:
##
@@ -57,6 +57,12 @@ object IntegrationTestUtils
Hisoka-X commented on code in PR #40564:
URL: https://github.com/apache/spark/pull/40564#discussion_r1152172912
##
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/RemoteSparkSession.scala:
##
@@ -58,10 +58,12 @@ object SparkConnectServerUtils
MaxGekk closed pull request #40565: [SPARK-42873][SQL] Define Spark SQL types
as keywords
URL: https://github.com/apache/spark/pull/40565
MaxGekk commented on PR #40565:
URL: https://github.com/apache/spark/pull/40565#issuecomment-1488877352
Merging to master. Thank you, @cloud-fan for review.
MaxGekk commented on PR #40565:
URL: https://github.com/apache/spark/pull/40565#issuecomment-1488876848
Highly likely, the failing GA check `continuous-integration/appveyor/pr` is
not related to my changes. I am going to merge this PR.
zhenlineo commented on code in PR #40564:
URL: https://github.com/apache/spark/pull/40564#discussion_r1152081937
##
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/RemoteSparkSession.scala:
##
@@ -58,10 +58,12 @@ object SparkConnectServerUtil
infoankitp commented on code in PR #40563:
URL: https://github.com/apache/spark/pull/40563#discussion_r1151970771
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/CollectionExpressionsSuite.scala:
##
@@ -1855,50 +1855,6 @@ class CollectionExpressionsSuite
infoankitp commented on code in PR #40563:
URL: https://github.com/apache/spark/pull/40563#discussion_r1151952607
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -1400,120 +1400,24 @@ case class ArrayContains(left: Expre
VindhyaG commented on code in PR #40553:
URL: https://github.com/apache/spark/pull/40553#discussion_r1151950076
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -883,6 +883,129 @@ class Dataset[T] private[sql](
println(showString(numRows, truncate, verti
ryan-johnson-databricks commented on code in PR #40300:
URL: https://github.com/apache/spark/pull/40300#discussion_r1151946878
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/SupportsMetadataColumns.java:
##
@@ -48,11 +47,22 @@ public interface SupportsMetad
VindhyaG commented on code in PR #40553:
URL: https://github.com/apache/spark/pull/40553#discussion_r1151938443
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -535,6 +535,159 @@ class Dataset[T] private[sql] (
}
}
+ /**
+ *
VindhyaG commented on code in PR #40553:
URL: https://github.com/apache/spark/pull/40553#discussion_r1151936705
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -883,6 +883,129 @@ class Dataset[T] private[sql](
println(showString(numRows, truncate, verti
yabola closed pull request #40495: test reading footer within file range
URL: https://github.com/apache/spark/pull/40495
Kwafoor commented on code in PR #40294:
URL: https://github.com/apache/spark/pull/40294#discussion_r1151924388
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/UnwrapCastInBinaryComparison.scala:
##
@@ -133,6 +133,11 @@ object UnwrapCastInBinaryComparison e
ulysses-you commented on PR #40589:
URL: https://github.com/apache/spark/pull/40589#issuecomment-1488464221
cc @cloud-fan @dongjoon-hyun @yaooqinn
ulysses-you opened a new pull request, #40589:
URL: https://github.com/apache/spark/pull/40589
### What changes were proposed in this pull request?
Add `injectQueryStageOptimizerRule` public method in `SparkSessionExtensions`
### Why are the changes needed?
Provid
yaooqinn opened a new pull request, #40588:
URL: https://github.com/apache/spark/pull/40588
### What changes were proposed in this pull request?
This PR redirects '42P07' SQL state to table not found according to the doc
-
https://www.postgresql.org/docs/14/errcodes-ap
tamama commented on PR #37206:
URL: https://github.com/apache/spark/pull/37206#issuecomment-1488285148
> > > > We intend to fall back to Spark-3.3.1 Scala-2.12 (instead of Scala 2.13)
> > >
> > >
> > > @tamama Using Scala 2.12 can a
beliefer commented on code in PR #40563:
URL: https://github.com/apache/spark/pull/40563#discussion_r1151672500
##
connector/connect/common/src/test/resources/query-tests/explain-results/function_array_append.explain:
##
@@ -1,2 +1,2 @@
-Project [array_append(e#0, 1) AS array_ap
beliefer commented on code in PR #40563:
URL: https://github.com/apache/spark/pull/40563#discussion_r1151668147
##
connector/connect/common/src/test/resources/query-tests/explain-results/function_array_append.explain:
##
@@ -1,2 +1,2 @@
-Project [array_append(e#0, 1) AS array_ap
zhengruifeng commented on PR #40582:
URL: https://github.com/apache/spark/pull/40582#issuecomment-1488273071
thank you for reviews, merged into master
zhengruifeng closed pull request #40582: [SPARK-42954][PYTHON][CONNECT] Add
`YearMonthIntervalType` to PySpark and Spark Connect Python Client
URL: https://github.com/apache/spark/pull/40582
MaxGekk commented on code in PR #40565:
URL: https://github.com/apache/spark/pull/40565#discussion_r1151655717
##
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##
@@ -993,14 +993,34 @@ colPosition
: position=FIRST | position=AFTER after
peter-toth commented on code in PR #40268:
URL: https://github.com/apache/spark/pull/40268#discussion_r1151559781
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala:
##
@@ -113,15 +114,13 @@ object ConstantPropagation extends Rule[LogicalPla