HyukjinKwon commented on code in PR #39098:
URL: https://github.com/apache/spark/pull/39098#discussion_r1058130837
##
python/pyspark/pandas/generic.py:
##
@@ -748,7 +748,7 @@ def to_csv(
2012-02-29 12:00:00,US,2
2012-03-31 12:00:00,JP,3
->>>
itholic opened a new pull request, #39260:
URL: https://github.com/apache/spark/pull/39260
### What changes were proposed in this pull request?
This PR proposes to assign the name "NOT_A_PARTITIONED_TABLE" to
_LEGACY_ERROR_TEMP_1249.
### Why are the changes needed?
Ngone51 commented on PR #39011:
URL: https://github.com/apache/spark/pull/39011#issuecomment-1366436959
Thanks @mridulm @dongjoon-hyun
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
beliefer commented on PR #39251:
URL: https://github.com/apache/spark/pull/39251#issuecomment-1366435201
@HyukjinKwon @zhengruifeng @grundprinzip Thank you!
AngersZh commented on PR #39259:
URL: https://github.com/apache/spark/pull/39259#issuecomment-1366432024
ping @cloud-fan @HyukjinKwon
cloud-fan commented on PR #39097:
URL: https://github.com/apache/spark/pull/39097#issuecomment-1366430660
I'm fixing the root cause at https://github.com/apache/spark/pull/39248
cloud-fan commented on PR #39248:
URL: https://github.com/apache/spark/pull/39248#issuecomment-1366430564
cc @viirya @HyukjinKwon @gengliangwang @allisonwang-db
AngersZh opened a new pull request, #39259:
URL: https://github.com/apache/spark/pull/39259
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
cloud-fan commented on code in PR #39248:
URL: https://github.com/apache/spark/pull/39248#discussion_r1058115431
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/InterpretedMutableProjection.scala:
##
@@ -117,10 +111,6 @@ object
LuciferYang commented on PR #39250:
URL: https://github.com/apache/spark/pull/39250#issuecomment-1366426696
for example
https://github.com/apache/spark/blob/87a235c2143449bd8da0acee4ec3cd3155bb/sql/core/src/test/java/test/org/apache/spark/sql/JavaDatasetSuite.java#L168
cloud-fan commented on code in PR #39248:
URL: https://github.com/apache/spark/pull/39248#discussion_r1058114484
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/InterpretedMutableProjection.scala:
##
@@ -36,18 +36,12 @@ class
itholic opened a new pull request, #39258:
URL: https://github.com/apache/spark/pull/39258
### What changes were proposed in this pull request?
This PR proposes to assign the name "MALFORMED_CSV_RECORD" to
_LEGACY_ERROR_TEMP_2149.
### Why are the changes needed?
HeartSaVioR closed pull request #39247: [SPARK-41733][SQL][SS] Apply
tree-pattern based pruning for the rule ResolveWindowTime
URL: https://github.com/apache/spark/pull/39247
HeartSaVioR commented on PR #39247:
URL: https://github.com/apache/spark/pull/39247#issuecomment-1366416569
Thanks! Merging to master.
HeartSaVioR commented on PR #39247:
URL: https://github.com/apache/spark/pull/39247#issuecomment-1366416497
https://github.com/HeartSaVioR/spark/runs/10328690654
Looks like GA failed to pull the test result; the build succeeded.
HyukjinKwon closed pull request #39257: [MINOR][CONNECT] Regenerate Protobuf
for Python
URL: https://github.com/apache/spark/pull/39257
HyukjinKwon commented on PR #39257:
URL: https://github.com/apache/spark/pull/39257#issuecomment-1366414887
Sorry, it's my bad.
Merged to master to fix up the build.
HyukjinKwon opened a new pull request, #39257:
URL: https://github.com/apache/spark/pull/39257
### What changes were proposed in this pull request?
The Python side of the Protobuf definitions is out of sync. This PR regenerates them.
### Why are the changes needed?
To fix the build.
zhengruifeng commented on code in PR #39246:
URL: https://github.com/apache/spark/pull/39246#discussion_r1058097335
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -328,6 +329,16 @@ class
ulysses-you commented on code in PR #39220:
URL: https://github.com/apache/spark/pull/39220#discussion_r1058097111
##
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala:
##
@@ -143,29 +141,11 @@ case class
zhengruifeng commented on PR #39251:
URL: https://github.com/apache/spark/pull/39251#issuecomment-1366411747
merged into master, thank you @beliefer !
zhengruifeng closed pull request #39251: [SPARK-41736][CONNECT][PYTHON]
`pyspark_types_to_proto_types` should supports `ArrayType`
URL: https://github.com/apache/spark/pull/39251
zhengruifeng commented on code in PR #39254:
URL: https://github.com/apache/spark/pull/39254#discussion_r1058094822
##
python/pyspark/sql/connect/group.py:
##
@@ -97,36 +97,46 @@ def agg(self, *exprs: Union[Column, Dict[str, str]]) ->
"DataFrame":
),
HyukjinKwon closed pull request #39252: [SPARK-41734][CONNECT] Add a parent
message for Catalog
URL: https://github.com/apache/spark/pull/39252
HyukjinKwon commented on PR #39252:
URL: https://github.com/apache/spark/pull/39252#issuecomment-1366407948
Merged to master.
grundprinzip commented on code in PR #39254:
URL: https://github.com/apache/spark/pull/39254#discussion_r1058092052
##
python/pyspark/sql/connect/group.py:
##
@@ -97,36 +97,46 @@ def agg(self, *exprs: Union[Column, Dict[str, str]]) ->
"DataFrame":
),
LuciferYang commented on PR #39235:
URL: https://github.com/apache/spark/pull/39235#issuecomment-1366407361
Thanks @MaxGekk
LuciferYang commented on code in PR #39255:
URL: https://github.com/apache/spark/pull/39255#discussion_r1058091610
##
pom.xml:
##
@@ -827,8 +827,7 @@
com.google.protobuf
protobuf-java
-${protobuf.hadoopDependency.version}
-
MaxGekk closed pull request #39235: [SPARK-41729][CORE][SQL] Rename
`_LEGACY_ERROR_TEMP_0011` to
`UNSUPPORTED_FEATURE.COMBINATION_QUERY_RESULT_CLAUSES`
URL: https://github.com/apache/spark/pull/39235
MaxGekk commented on PR #39235:
URL: https://github.com/apache/spark/pull/39235#issuecomment-1366406803
+1, LGTM. Merging to master.
Thank you, @LuciferYang.
grundprinzip commented on code in PR #39254:
URL: https://github.com/apache/spark/pull/39254#discussion_r1058089769
##
python/pyspark/sql/connect/group.py:
##
@@ -97,36 +97,46 @@ def agg(self, *exprs: Union[Column, Dict[str, str]]) ->
"DataFrame":
),
grundprinzip commented on code in PR #39254:
URL: https://github.com/apache/spark/pull/39254#discussion_r1058089550
##
python/pyspark/sql/connect/group.py:
##
@@ -97,36 +97,46 @@ def agg(self, *exprs: Union[Column, Dict[str, str]]) ->
"DataFrame":
),
grundprinzip commented on code in PR #39254:
URL: https://github.com/apache/spark/pull/39254#discussion_r1058089392
##
python/pyspark/sql/connect/group.py:
##
@@ -97,36 +97,46 @@ def agg(self, *exprs: Union[Column, Dict[str, str]]) ->
"DataFrame":
),
LuciferYang commented on code in PR #39255:
URL: https://github.com/apache/spark/pull/39255#discussion_r1058080832
##
pom.xml:
##
@@ -827,8 +827,7 @@
com.google.protobuf
protobuf-java
-${protobuf.hadoopDependency.version}
-
cloud-fan commented on code in PR #39220:
URL: https://github.com/apache/spark/pull/39220#discussion_r1058086587
##
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala:
##
@@ -143,29 +141,11 @@ case class
zhengruifeng commented on code in PR #39254:
URL: https://github.com/apache/spark/pull/39254#discussion_r1058086316
##
python/pyspark/sql/connect/group.py:
##
@@ -97,36 +97,46 @@ def agg(self, *exprs: Union[Column, Dict[str, str]]) ->
"DataFrame":
),
zhengruifeng commented on code in PR #39254:
URL: https://github.com/apache/spark/pull/39254#discussion_r1058085725
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -1061,6 +1068,81 @@ class
grundprinzip commented on code in PR #39254:
URL: https://github.com/apache/spark/pull/39254#discussion_r1058085692
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -1061,6 +1068,81 @@ class
zhengruifeng commented on code in PR #39254:
URL: https://github.com/apache/spark/pull/39254#discussion_r1058084926
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -1061,6 +1068,81 @@ class
grundprinzip commented on PR #39256:
URL: https://github.com/apache/spark/pull/39256#issuecomment-1366397023
R: @HyukjinKwon @zhengruifeng
grundprinzip opened a new pull request, #39256:
URL: https://github.com/apache/spark/pull/39256
### What changes were proposed in this pull request?
This PR mixes the client ID into the cache for the SparkSessions on the
server. This is necessary to allow concurrent SparkSessions from different clients.
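The idea of keying the server-side session cache by client as well as user can be sketched in plain Python. The names here are hypothetical, not Spark Connect's actual API:

```python
# Minimal sketch: a session cache keyed by (user_id, client_id), so two
# clients belonging to the same user get isolated sessions instead of
# colliding on a user-only key.
class SessionCache:
    def __init__(self):
        self._sessions = {}  # (user_id, client_id) -> session object

    def get_or_create(self, user_id, client_id, factory):
        key = (user_id, client_id)
        if key not in self._sessions:
            self._sessions[key] = factory()
        return self._sessions[key]

cache = SessionCache()
s1 = cache.get_or_create("alice", "client-A", object)
s2 = cache.get_or_create("alice", "client-B", object)
s3 = cache.get_or_create("alice", "client-A", object)
assert s1 is s3          # the same client reuses its session
assert s1 is not s2      # a different client of the same user is isolated
```

With a user-only key, `client-A` and `client-B` above would have shared one session, which is exactly the collision the composite key avoids.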
ulysses-you commented on PR #39220:
URL: https://github.com/apache/spark/pull/39220#issuecomment-1366395718
@cloud-fan addressed all comments
LuciferYang opened a new pull request, #39255:
URL: https://github.com/apache/spark/pull/39255
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
cloud-fan commented on code in PR #39099:
URL: https://github.com/apache/spark/pull/39099#discussion_r1058077238
##
sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala:
##
@@ -374,7 +374,7 @@ final class Decimal extends Ordered[Decimal] with
Serializable {
LuciferYang commented on code in PR #39250:
URL: https://github.com/apache/spark/pull/39250#discussion_r1058073382
##
sql/core/src/test/java/test/org/apache/spark/sql/JavaBeanDeserializationSuite.java:
##
@@ -590,9 +590,9 @@ public Item call(Item item1, Item item2) throws
srowen commented on PR #39215:
URL: https://github.com/apache/spark/pull/39215#issuecomment-1366386151
Seems OK to me; un-mark it as Draft to let the tests run.
LuciferYang commented on code in PR #39110:
URL: https://github.com/apache/spark/pull/39110#discussion_r1058072078
##
core/src/main/protobuf/org/apache/spark/status/protobuf/store_types.proto:
##
@@ -390,3 +390,38 @@ message SQLExecutionUIData {
repeated int64 stages = 11;
techaddict commented on PR #39110:
URL: https://github.com/apache/spark/pull/39110#issuecomment-1366384358
@gengliangwang addressed all the comments
grundprinzip commented on code in PR #39091:
URL: https://github.com/apache/spark/pull/39091#discussion_r1058070997
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -907,6 +907,38 @@ def test_random_split(self):
grundprinzip commented on code in PR #39091:
URL: https://github.com/apache/spark/pull/39091#discussion_r1058070139
##
connector/connect/common/src/main/protobuf/spark/connect/base.proto:
##
@@ -181,6 +184,18 @@ message ExecutePlanResponse {
string metric_type = 3;
grundprinzip commented on code in PR #39246:
URL: https://github.com/apache/spark/pull/39246#discussion_r1058068989
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -328,6 +329,16 @@ class
grundprinzip commented on code in PR #39252:
URL: https://github.com/apache/spark/pull/39252#discussion_r1058067646
##
connector/connect/common/src/main/protobuf/spark/connect/relations.proto:
##
@@ -70,35 +70,7 @@ message Relation {
StatDescribe describe = 102;
//
grundprinzip commented on code in PR #39252:
URL: https://github.com/apache/spark/pull/39252#discussion_r1058067556
##
connector/connect/common/src/main/protobuf/spark/connect/catalog.proto:
##
@@ -24,6 +24,41 @@ import "spark/connect/types.proto";
option java_multiple_files =
grundprinzip commented on code in PR #39254:
URL: https://github.com/apache/spark/pull/39254#discussion_r1058067328
##
python/pyspark/sql/connect/group.py:
##
@@ -97,36 +97,46 @@ def agg(self, *exprs: Union[Column, Dict[str, str]]) ->
"DataFrame":
),
grundprinzip commented on code in PR #39254:
URL: https://github.com/apache/spark/pull/39254#discussion_r1058066260
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -1061,6 +1068,81 @@ class
grundprinzip commented on code in PR #39254:
URL: https://github.com/apache/spark/pull/39254#discussion_r1058066012
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -1061,6 +1068,81 @@ class
LuciferYang commented on code in PR #39215:
URL: https://github.com/apache/spark/pull/39215#discussion_r1058061532
##
project/MimaExcludes.scala:
##
@@ -129,7 +129,16 @@ object MimaExcludes {
zhengruifeng commented on PR #39254:
URL: https://github.com/apache/spark/pull/39254#issuecomment-1366372126
cc @HyukjinKwon @grundprinzip
zhengruifeng opened a new pull request, #39254:
URL: https://github.com/apache/spark/pull/39254
### What changes were proposed in this pull request?
Implement `GroupedData.{min, max, avg, sum}`
### Why are the changes needed?
TLDR, `df.groupby().min` != `df.groupby().agg(min)`
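The distinction can be illustrated in plain Python (a toy model of the semantics, not Spark's implementation): `GroupedData.min()` applies `min` to every non-grouping column, while `agg(min(col))` aggregates only the named column.

```python
from collections import defaultdict

rows = [
    {"k": "a", "x": 1, "y": 10},
    {"k": "a", "x": 3, "y": 5},
    {"k": "b", "x": 2, "y": 7},
]

def grouped_min(rows, key):
    """Like df.groupby(key).min(): min of every other (numeric) column."""
    groups = defaultdict(list)
    for r in rows:
        groups[r[key]].append(r)
    cols = [c for c in rows[0] if c != key]
    return {k: {c: min(r[c] for r in g) for c in cols} for k, g in groups.items()}

def grouped_agg(rows, key, col, fn):
    """Like df.groupby(key).agg(fn(col)): aggregate a single named column."""
    groups = defaultdict(list)
    for r in rows:
        groups[r[key]].append(r)
    return {k: fn(r[col] for r in g) for k, g in groups.items()}

assert grouped_min(rows, "k") == {"a": {"x": 1, "y": 5}, "b": {"x": 2, "y": 7}}
assert grouped_agg(rows, "k", "x", min) == {"a": 1, "b": 2}
```

The first call produces one aggregated value per remaining column; the second produces exactly one column, which is why the two spellings are not interchangeable.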
LuciferYang commented on code in PR #39215:
URL: https://github.com/apache/spark/pull/39215#discussion_r1058059669
##
project/MimaExcludes.scala:
##
@@ -129,7 +129,27 @@ object MimaExcludes {
beliefer commented on PR #39251:
URL: https://github.com/apache/spark/pull/39251#issuecomment-1366364200
ping @zhengruifeng @grundprinzip @amaliujia
cloud-fan commented on code in PR #39220:
URL: https://github.com/apache/spark/pull/39220#discussion_r1058051128
##
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala:
##
@@ -143,29 +141,10 @@ case class
packyan commented on PR #39021:
URL: https://github.com/apache/spark/pull/39021#issuecomment-1366356581
Could anyone give me some suggestions?
HeartSaVioR commented on PR #39253:
URL: https://github.com/apache/spark/pull/39253#issuecomment-1366355228
The change is identical to #39245 except an import (a one-line diff). I'll
merge this once CI passes.
HeartSaVioR opened a new pull request, #39253:
URL: https://github.com/apache/spark/pull/39253
This PR ports back #39245 to branch-3.3.
### What changes were proposed in this pull request?
This PR proposes to apply tree-pattern based pruning for the rule
SessionWindowing, to
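Tree-pattern based pruning lets a rule skip entire plan subtrees that cannot contain the node it rewrites. A rough Python sketch of the mechanism (the flag and class names are made up; Catalyst's real implementation differs):

```python
from enum import Flag, auto

class TreePattern(Flag):
    NONE = 0
    SESSION_WINDOW = auto()
    TIME_WINDOW = auto()

class PlanNode:
    def __init__(self, pattern, children=()):
        self.children = list(children)
        # Each node caches the union of its own pattern and its children's,
        # so a rule can test the whole subtree in O(1).
        self.patterns = pattern
        for c in self.children:
            self.patterns |= c.patterns

def apply_rule(plan, required, rewrite):
    if not (plan.patterns & required):
        return plan  # prune: the pattern cannot occur anywhere in this subtree
    plan.children = [apply_rule(c, required, rewrite) for c in plan.children]
    return rewrite(plan)

calls = []
tree = PlanNode(TreePattern.NONE, [PlanNode(TreePattern.NONE)])
apply_rule(tree, TreePattern.SESSION_WINDOW, lambda n: (calls.append(n), n)[1])
assert calls == []  # no SESSION_WINDOW anywhere, so the rewrite never ran
```

The payoff is that a rule like SessionWindowing pays only a bitmask check per query instead of a full tree traversal when no session window is present.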
HeartSaVioR commented on PR #39247:
URL: https://github.com/apache/spark/pull/39247#issuecomment-1366352458
The only change is rebasing the fix due to the conflict with #39245. I'm
going to merge once CI passes.
cloud-fan commented on code in PR #39220:
URL: https://github.com/apache/spark/pull/39220#discussion_r1058042355
##
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala:
##
@@ -143,29 +141,10 @@ case class
HeartSaVioR commented on PR #39245:
URL: https://github.com/apache/spark/pull/39245#issuecomment-1366349270
It conflicts with branch-3.3 (probably 3.2 as well). I'll create a new PR
for backport.
cloud-fan commented on code in PR #39220:
URL: https://github.com/apache/spark/pull/39220#discussion_r1058042058
##
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveExplainSuite.scala:
##
@@ -102,21 +102,15 @@ class HiveExplainSuite extends QueryTest with
cloud-fan commented on code in PR #39220:
URL: https://github.com/apache/spark/pull/39220#discussion_r1058041588
##
sql/core/src/test/scala/org/apache/spark/sql/util/DataFrameCallbackSuite.scala:
##
@@ -217,10 +217,10 @@ class DataFrameCallbackSuite extends QueryTest
HeartSaVioR closed pull request #39245: [SPARK-41732][SQL][SS] Apply
tree-pattern based pruning for the rule SessionWindowing
URL: https://github.com/apache/spark/pull/39245
HeartSaVioR commented on PR #39245:
URL: https://github.com/apache/spark/pull/39245#issuecomment-1366348248
Thanks, merging to master/3.3/3.2!
cloud-fan commented on code in PR #39220:
URL: https://github.com/apache/spark/pull/39220#discussion_r1058041258
##
sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala:
##
@@ -1217,7 +1230,7 @@ class AdaptiveQueryExecSuite
cloud-fan commented on code in PR #39240:
URL: https://github.com/apache/spark/pull/39240#discussion_r1058037999
##
connector/connect/common/src/main/protobuf/spark/connect/relations.proto:
##
@@ -379,9 +379,10 @@ message Sample {
// (Optional) The random seed.
optional
LuciferYang commented on code in PR #39192:
URL: https://github.com/apache/spark/pull/39192#discussion_r1058036782
##
core/src/main/protobuf/org/apache/spark/status/protobuf/store_types.proto:
##
@@ -390,3 +390,214 @@ message SQLExecutionUIData {
repeated int64 stages = 11;
HyukjinKwon commented on PR #39239:
URL: https://github.com/apache/spark/pull/39239#issuecomment-1366342520
Let me take a look
HyukjinKwon commented on PR #39252:
URL: https://github.com/apache/spark/pull/39252#issuecomment-1366341521
cc @zhengruifeng and @grundprinzip FYI
HyukjinKwon opened a new pull request, #39252:
URL: https://github.com/apache/spark/pull/39252
### What changes were proposed in this pull request?
This PR proposes to add a parent Protobuf message for Catalog (see
https://github.com/apache/spark/pull/39214#discussion_r1057439608).
LuciferYang commented on code in PR #39110:
URL: https://github.com/apache/spark/pull/39110#discussion_r1058034241
##
core/src/main/scala/org/apache/spark/status/protobuf/RDDOperationGraphWrapperSerializer.scala:
##
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software
cloud-fan commented on code in PR #39133:
URL: https://github.com/apache/spark/pull/39133#discussion_r1058032279
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/unresolved.scala:
##
@@ -136,6 +136,30 @@ object UnresolvedTableValuedFunction {
}
}
+/**
LuciferYang commented on code in PR #39226:
URL: https://github.com/apache/spark/pull/39226#discussion_r1058030818
##
core/src/main/scala/org/apache/spark/status/AppStatusStore.scala:
##
@@ -733,6 +734,15 @@ private[spark] class AppStatusStore(
def close(): Unit = {
cloud-fan commented on code in PR #39202:
URL: https://github.com/apache/spark/pull/39202#discussion_r1058030669
##
core/src/main/scala/org/apache/spark/internal/config/History.scala:
##
@@ -79,6 +79,21 @@ private[spark] object History {
.stringConf
.createOptional
beliefer opened a new pull request, #39251:
URL: https://github.com/apache/spark/pull/39251
### What changes were proposed in this pull request?
Currently, `pyspark_types_to_proto_types` is used to transform PySpark
data types into Protobuf data types.
However, it does not support the array type.
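Supporting `ArrayType` in such a converter comes down to recursing into the element type. A simplified stand-in (the type classes and the nested-dict output are stand-ins for PySpark's types and Spark Connect's proto messages, not the real API):

```python
# Toy data-type model mirroring the shape of PySpark's type tree.
class IntegerType: pass
class StringType: pass

class ArrayType:
    def __init__(self, element_type, contains_null=True):
        self.element_type = element_type
        self.contains_null = contains_null

def to_proto_types(dt):
    """Map a toy type tree to a nested-dict stand-in for a proto message."""
    if isinstance(dt, IntegerType):
        return {"integer": {}}
    if isinstance(dt, StringType):
        return {"string": {}}
    if isinstance(dt, ArrayType):
        return {"array": {
            "element_type": to_proto_types(dt.element_type),  # recurse
            "contains_null": dt.contains_null,
        }}
    raise TypeError(f"Unsupported type: {type(dt).__name__}")

# Nested arrays fall out of the recursion for free.
assert to_proto_types(ArrayType(IntegerType())) == {
    "array": {"element_type": {"integer": {}}, "contains_null": True}
}
```

Without the `ArrayType` branch, the converter hits the `TypeError` fallback, which is the gap this PR fills.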
LuciferYang commented on code in PR #39226:
URL: https://github.com/apache/spark/pull/39226#discussion_r1058024767
##
core/src/main/scala/org/apache/spark/status/AppStatusStore.scala:
##
@@ -733,6 +734,15 @@ private[spark] class AppStatusStore(
def close(): Unit = {
thejdeep commented on code in PR #36165:
URL: https://github.com/apache/spark/pull/36165#discussion_r1058019046
##
core/src/main/protobuf/org/apache/spark/status/protobuf/store_types.proto:
##
@@ -100,11 +100,21 @@ message TaskDataWrapper {
int64
LuciferYang commented on PR #39235:
URL: https://github.com/apache/spark/pull/39235#issuecomment-1366320342
[9d74522](https://github.com/apache/spark/pull/39235/commits/9d7452237e3febaf3ba6e8384db30aebb325b34b)
remove `_LEGACY_ERROR_TEMP_0011` from `error-classes.json`.
panbingkun commented on code in PR #39192:
URL: https://github.com/apache/spark/pull/39192#discussion_r1058009462
##
core/src/main/scala/org/apache/spark/status/protobuf/StageDataWrapperSerializer.scala:
##
@@ -0,0 +1,622 @@
+/*
+ * Licensed to the Apache Software Foundation
tedyu commented on PR #39250:
URL: https://github.com/apache/spark/pull/39250#issuecomment-1366305206
@srowen
I have covered `JavaBeanDeserializationSuite`
HyukjinKwon closed pull request #39243:
[SPARK-41697][CONNECT][TESTS][FOLLOW-UP] Disable test_toDF_with_schema_string
back, and fix test_freqItems
URL: https://github.com/apache/spark/pull/39243
HyukjinKwon commented on PR #39243:
URL: https://github.com/apache/spark/pull/39243#issuecomment-1366301882
Merged to master.
panbingkun commented on code in PR #39192:
URL: https://github.com/apache/spark/pull/39192#discussion_r1058004492
##
core/src/main/scala/org/apache/spark/status/protobuf/StageDataWrapperSerializer.scala:
##
@@ -0,0 +1,622 @@
+/*
+ * Licensed to the Apache Software Foundation
HyukjinKwon closed pull request #39225: [SPARK-41654][CONNECT][TESTS] Enable
doctests for pyspark.sql.connect.window
URL: https://github.com/apache/spark/pull/39225
HyukjinKwon commented on PR #39225:
URL: https://github.com/apache/spark/pull/39225#issuecomment-1366301196
Merged to master.
panbingkun commented on code in PR #39192:
URL: https://github.com/apache/spark/pull/39192#discussion_r1058002785
##
core/src/main/protobuf/org/apache/spark/status/protobuf/store_types.proto:
##
@@ -390,3 +390,214 @@ message SQLExecutionUIData {
repeated int64 stages = 11;