cloud-fan commented on PR #38358:
URL: https://github.com/apache/spark/pull/38358#issuecomment-1308354718
thanks, merging to 3.3/3.2!
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
amaliujia commented on code in PR #38566:
URL: https://github.com/apache/spark/pull/38566#discussion_r1017553189
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/command/SparkConnectCommandPlanner.scala:
##
@@ -79,6 +85,32 @@ class
amaliujia commented on code in PR #38566:
URL: https://github.com/apache/spark/pull/38566#discussion_r1017551706
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/command/SparkConnectCommandPlanner.scala:
##
@@ -79,6 +85,32 @@ class
cloud-fan commented on PR #38573:
URL: https://github.com/apache/spark/pull/38573#issuecomment-1308348049
shall we add the python client API in this PR as well?
cloud-fan commented on code in PR #38566:
URL: https://github.com/apache/spark/pull/38566#discussion_r1017549065
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/command/SparkConnectCommandPlanner.scala:
##
@@ -79,6 +85,32 @@ class
cloud-fan closed pull request #38475: [SPARK-40992][CONNECT] Support
toDF(columnNames) in Connect DSL
URL: https://github.com/apache/spark/pull/38475
cloud-fan commented on PR #38475:
URL: https://github.com/apache/spark/pull/38475#issuecomment-1308344423
thanks, merging to master!
wangyum commented on code in PR #38577:
URL: https://github.com/apache/spark/pull/38577#discussion_r1017537591
##
dev/make-distribution.sh:
##
@@ -161,7 +161,7 @@ fi
# Build uber fat JAR
cd "$SPARK_HOME"
-export MAVEN_OPTS="${MAVEN_OPTS:--Xss128m -Xmx4g -Xmx4g
HyukjinKwon closed pull request #38570: [SPARK-41056][R] Fix new R_LIBS_SITE
behavior introduced in R 4.2
URL: https://github.com/apache/spark/pull/38570
HyukjinKwon commented on PR #38570:
URL: https://github.com/apache/spark/pull/38570#issuecomment-1308338748
Merged to master.
HyukjinKwon commented on PR #38570:
URL: https://github.com/apache/spark/pull/38570#issuecomment-1308337531
Merged to master.
zhengchenyu commented on PR #37949:
URL: https://github.com/apache/spark/pull/37949#issuecomment-1308320736
@dongjoon-hyun @srowen Can you please review this PR?
cloud-fan commented on PR #38511:
URL: https://github.com/apache/spark/pull/38511#issuecomment-1308318576
cc @viirya @sigmod @hvanhovell
amaliujia commented on PR #38573:
URL: https://github.com/apache/spark/pull/38573#issuecomment-1308314938
R: @cloud-fan
LuciferYang commented on code in PR #38577:
URL: https://github.com/apache/spark/pull/38577#discussion_r1017502280
##
dev/make-distribution.sh:
##
@@ -161,7 +161,7 @@ fi
# Build uber fat JAR
cd "$SPARK_HOME"
-export MAVEN_OPTS="${MAVEN_OPTS:--Xss128m -Xmx4g -Xmx4g
LuciferYang commented on PR #38577:
URL: https://github.com/apache/spark/pull/38577#issuecomment-1308298746
> As you mentioned in the PR description, did you hit this issue only on the
`master` branch?
Let me double check this
> Which java version are you using now?
test on
viirya commented on code in PR #38577:
URL: https://github.com/apache/spark/pull/38577#discussion_r1017499034
##
dev/make-distribution.sh:
##
@@ -161,7 +161,7 @@ fi
# Build uber fat JAR
cd "$SPARK_HOME"
-export MAVEN_OPTS="${MAVEN_OPTS:--Xss128m -Xmx4g -Xmx4g
pan3793 commented on PR #38577:
URL: https://github.com/apache/spark/pull/38577#issuecomment-1308297255
I reproduced the issue w/ openjdk 1.8.0_332 macos-aarch64
dongjoon-hyun commented on PR #38577:
URL: https://github.com/apache/spark/pull/38577#issuecomment-1308293543
To @LuciferYang ,
- As you mentioned in the PR description, did you hit this issue only on the
`master` branch?
- Which java version are you using now?
amaliujia commented on code in PR #38566:
URL: https://github.com/apache/spark/pull/38566#discussion_r1017494208
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/command/SparkConnectCommandPlanner.scala:
##
@@ -79,6 +85,32 @@ class
dongjoon-hyun commented on PR #38577:
URL: https://github.com/apache/spark/pull/38577#issuecomment-1308290786
cc @viirya too
zhengruifeng commented on code in PR #38506:
URL: https://github.com/apache/spark/pull/38506#discussion_r1017492795
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -317,7 +319,83 @@ def unionByName(self, other: "DataFrame",
allowMissingColumns: bool = False) ->
if
zhengruifeng commented on code in PR #38566:
URL: https://github.com/apache/spark/pull/38566#discussion_r1017490854
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/command/SparkConnectCommandPlanner.scala:
##
@@ -79,6 +85,32 @@ class
cloud-fan commented on PR #38491:
URL: https://github.com/apache/spark/pull/38491#issuecomment-1308284185
thanks, merging to master!
cloud-fan closed pull request #38491: [SPARK-41058][CONNECT] Remove unused
import in commands.proto
URL: https://github.com/apache/spark/pull/38491
cloud-fan commented on code in PR #38490:
URL: https://github.com/apache/spark/pull/38490#discussion_r101748
##
core/src/main/resources/error/error-classes.json:
##
@@ -668,6 +668,23 @@
}
}
},
+ "LOCATION_ALREADY_EXISTS" : {
+"message" : [
+ "Cannot
amaliujia commented on code in PR #38546:
URL: https://github.com/apache/spark/pull/38546#discussion_r1017484449
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -139,11 +139,9 @@ def columns(self) -> List[str]:
if self._plan is None:
return []
LuciferYang commented on code in PR #38575:
URL: https://github.com/apache/spark/pull/38575#discussion_r1017484528
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:
##
@@ -2330,7 +2330,7 @@ class DataFrameSuite extends QueryTest
new File(uuid,
zhengruifeng commented on code in PR #38546:
URL: https://github.com/apache/spark/pull/38546#discussion_r1017482919
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -139,11 +139,9 @@ def columns(self) -> List[str]:
if self._plan is None:
return []
LuciferYang commented on PR #38091:
URL: https://github.com/apache/spark/pull/38091#issuecomment-1308279804
finally fixed, thanks @wankunde
zhengruifeng opened a new pull request, #38578:
URL: https://github.com/apache/spark/pull/38578
### What changes were proposed in this pull request?
Implement `DataFrame.crosstab` and `DataFrame.stat.crosstab`
### Why are the changes needed?
for api coverage
###
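For context on what this PR implements: `crosstab` computes a pair-wise frequency (contingency) table of two columns. A minimal Python sketch of those semantics (the sample rows are made up, and this is not Spark's implementation):

```python
from collections import Counter

# Made-up sample rows of two columns: (key, value)
rows = [("a", 1), ("a", 2), ("b", 1), ("a", 1)]

# A contingency table has one row per distinct key, one column per
# distinct value, and cells holding co-occurrence counts.
counts = Counter(rows)
keys = sorted({k for k, _ in rows})
vals = sorted({v for _, v in rows})
table = {k: {v: counts[(k, v)] for v in vals} for k in keys}
print(table)  # {'a': {1: 2, 2: 1}, 'b': {1: 1, 2: 0}}
```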
LuciferYang commented on PR #38577:
URL: https://github.com/apache/spark/pull/38577#issuecomment-1308270119
cc @HyukjinKwon @wangyum @dongjoon-hyun @srowen @pan3793 @panbingkun
Can you reproduce the issue?
LuciferYang opened a new pull request, #38577:
URL: https://github.com/apache/spark/pull/38577
### What changes were proposed in this pull request?
Run
```
dev/make-distribution.sh --tgz -Phadoop-3 -Phadoop-cloud -Pmesos -Pyarn -Pkinesis-asl -Phive-thriftserver
```
panbingkun commented on PR #38545:
URL: https://github.com/apache/spark/pull/38545#issuecomment-1308245910
cc @srowen
itholic opened a new pull request, #38576:
URL: https://github.com/apache/spark/pull/38576
### What changes were proposed in this pull request?
This PR proposes to rename `UNSUPPORTED_CORRELATED_REFERENCE` to
`CORRELATED_REFERENCE`.
### Why are the changes needed?
itholic commented on code in PR #38575:
URL: https://github.com/apache/spark/pull/38575#discussion_r1017450594
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:
##
@@ -2330,7 +2330,7 @@ class DataFrameSuite extends QueryTest
new File(uuid,
WweiL commented on code in PR #38503:
URL: https://github.com/apache/spark/pull/38503#discussion_r1017447511
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/FlatMapGroupsInPandasWithStateSuite.scala:
##
@@ -240,25 +240,30 @@ class FlatMapGroupsInPandasWithStateSuite
WweiL commented on code in PR #38503:
URL: https://github.com/apache/spark/pull/38503#discussion_r1017447320
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationSuite.scala:
##
@@ -190,20 +190,25 @@ class StreamingDeduplicationSuite extends
itholic opened a new pull request, #38575:
URL: https://github.com/apache/spark/pull/38575
### What changes were proposed in this pull request?
The original PR to introduce the error class `PATH_NOT_FOUND` was reverted
since it breaks the tests in different test env.
This
AmplabJenkins commented on PR #38535:
URL: https://github.com/apache/spark/pull/38535#issuecomment-1308229533
Can one of the admins verify this patch?
gengliangwang commented on PR #38567:
URL: https://github.com/apache/spark/pull/38567#issuecomment-1308227478
> Or part of a larger set of changes
Yes, I created Jira https://issues.apache.org/jira/browse/SPARK-41053, and
this one is just the beginning.
> Maintaining state
gengliangwang commented on PR #38567:
URL: https://github.com/apache/spark/pull/38567#issuecomment-1308222974
> It seems to not match what is described in the pr description or add
equivalent functionality which was reverted
I believe it covers the reverted PR
WweiL commented on code in PR #38503:
URL: https://github.com/apache/spark/pull/38503#discussion_r1017435393
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationsSuite.scala:
##
@@ -188,17 +194,26 @@ class UnsupportedOperationsSuite extends
alex-balikov commented on code in PR #38503:
URL: https://github.com/apache/spark/pull/38503#discussion_r1017413562
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationSuite.scala:
##
@@ -190,20 +190,25 @@ class StreamingDeduplicationSuite extends
19Serhii99 opened a new pull request, #38574:
URL: https://github.com/apache/spark/pull/38574
### What changes were proposed in this pull request?
There's a problem with submitting Spark jobs to a K8s cluster: the library
generates and reuses the same name for config maps (for drivers
mridulm commented on PR #38064:
URL: https://github.com/apache/spark/pull/38064#issuecomment-1308159592
The test failures look to be due to the memory requirement for the test
'collect data with single partition larger than 2GB bytes array limit' being
too high, causing OOM.
amaliujia opened a new pull request, #38573:
URL: https://github.com/apache/spark/pull/38573
### What changes were proposed in this pull request?
1. support `def selectExpr(exprs: String*)` in Connect DSL.
2. Server side supports translating expressions given as strings.
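As a rough illustration of "expressions in strings": each string expression is parsed and evaluated against a row's columns. This Python sketch only mimics the idea (Spark uses its SQL parser, not `eval`; the function name and sample data here are made up):

```python
# Hypothetical sketch of selectExpr-style projection: every string
# expression is evaluated with the row's columns in scope.
rows = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]

def select_expr(rows, *exprs):
    # eval stands in for a real expression parser, for illustration only
    return [tuple(eval(e, {"__builtins__": {}}, dict(row)) for e in exprs)
            for row in rows]

print(select_expr(rows, "a + 1", "b * 2"))  # [(2, 4), (4, 8)]
```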
itholic opened a new pull request, #38572:
URL: https://github.com/apache/spark/pull/38572
### What changes were proposed in this pull request?
This PR proposes to rename `_LEGACY_ERROR_TEMP_2420` to
`NESTED_AGGREGATE_FUNCTION`
### Why are the changes needed?
We
ulysses-you commented on code in PR #38558:
URL: https://github.com/apache/spark/pull/38558#discussion_r1017369252
##
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveSparkPlanExec.scala:
##
@@ -209,6 +209,19 @@ case class AdaptiveSparkPlanExec(
srielau commented on code in PR #38569:
URL: https://github.com/apache/spark/pull/38569#discussion_r1017369097
##
core/src/main/resources/error/error-classes.json:
##
@@ -469,6 +469,11 @@
"Grouping sets size cannot be greater than "
]
},
+ "GROUP_BY_AGGREGATE" :
LuciferYang commented on PR #38550:
URL: https://github.com/apache/spark/pull/38550#issuecomment-1308146770
Thanks @srowen
AngersZh commented on code in PR #34815:
URL: https://github.com/apache/spark/pull/34815#discussion_r1017365581
##
sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/CliSuite.scala:
##
@@ -620,4 +620,17 @@ class CliSuite extends SparkFunSuite with
srowen commented on PR #38550:
URL: https://github.com/apache/spark/pull/38550#issuecomment-1308144543
Merged to master
srowen closed pull request #38550: [SPARK-41039][BUILD] Upgrade
`scala-parallel-collections` to 1.0.4 for Scala 2.13
URL: https://github.com/apache/spark/pull/38550
AngersZh opened a new pull request, #38571:
URL: https://github.com/apache/spark/pull/38571
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How was
cloud-fan commented on code in PR #38557:
URL: https://github.com/apache/spark/pull/38557#discussion_r1017361159
##
sql/core/src/main/scala/org/apache/spark/sql/execution/dynamicpruning/RowLevelOperationRuntimeGroupFiltering.scala:
##
@@ -89,10 +88,8 @@ case class
amaliujia commented on PR #38491:
URL: https://github.com/apache/spark/pull/38491#issuecomment-1308133299
LGTM
HyukjinKwon opened a new pull request, #38570:
URL: https://github.com/apache/spark/pull/38570
### What changes were proposed in this pull request?
This PR proposes to keep the `R_LIBS_SITE` as was. It has been changed from
R 4.2.
### Why are the changes needed?
To keep
itholic commented on code in PR #38569:
URL: https://github.com/apache/spark/pull/38569#discussion_r1017351734
##
core/src/main/resources/error/error-classes.json:
##
@@ -469,6 +469,11 @@
"Grouping sets size cannot be greater than "
]
},
+ "GROUP_BY_AGGREGATE" :
aokolnychyi commented on code in PR #38557:
URL: https://github.com/apache/spark/pull/38557#discussion_r1017352098
##
sql/core/src/main/scala/org/apache/spark/sql/execution/dynamicpruning/RowLevelOperationRuntimeGroupFiltering.scala:
##
@@ -89,10 +88,8 @@ case class
LuciferYang commented on code in PR #38569:
URL: https://github.com/apache/spark/pull/38569#discussion_r1017347010
##
core/src/main/resources/error/error-classes.json:
##
@@ -469,6 +469,11 @@
"Grouping sets size cannot be greater than "
]
},
+
LuciferYang commented on code in PR #38569:
URL: https://github.com/apache/spark/pull/38569#discussion_r1017346760
##
core/src/main/resources/error/error-classes.json:
##
@@ -469,6 +469,11 @@
"Grouping sets size cannot be greater than "
]
},
+
amaliujia commented on PR #38491:
URL: https://github.com/apache/spark/pull/38491#issuecomment-1308120960
@dengziming I missed this PR because there was no JIRA created under
https://issues.apache.org/jira/browse/SPARK-39375 (I was monitoring the work
happening there).
Since you
mridulm commented on PR #38567:
URL: https://github.com/apache/spark/pull/38567#issuecomment-1308118224
Also note that we cannot avoid parsing event files at the history server with a
DB generated at the driver, unless the configs match for both (retained stages,
tasks, queries, etc.): particularly
cloud-fan commented on PR #38491:
URL: https://github.com/apache/spark/pull/38491#issuecomment-1308116015
@amaliujia can you take a look?
mridulm commented on PR #38567:
URL: https://github.com/apache/spark/pull/38567#issuecomment-1308113372
To comment on the proposal in the description, based on past prototypes I have
worked on/seen:
Maintaining state at the driver in a disk-backed store and copying that to DFS
has a few things
itholic commented on PR #38569:
URL: https://github.com/apache/spark/pull/38569#issuecomment-1308111274
cc @MaxGekk @srielau FYI
itholic opened a new pull request, #38569:
URL: https://github.com/apache/spark/pull/38569
### What changes were proposed in this pull request?
This PR proposes to rename `_LEGACY_ERROR_TEMP_2424` to `GROUP_BY_AGGREGATE`
### Why are the changes needed?
To use proper
mridulm commented on PR #38567:
URL: https://github.com/apache/spark/pull/38567#issuecomment-1308103370
This PR mostly adds the ability to use a disk-backed store in addition to the
in-memory store.
It does not seem to match what is described in the PR description - is this WIP?
LuciferYang commented on PR #38507:
URL: https://github.com/apache/spark/pull/38507#issuecomment-130810
GA passed, @MaxGekk could you help to review this pr again, thanks
LuciferYang commented on PR #38550:
URL: https://github.com/apache/spark/pull/38550#issuecomment-1308100945
Ran all UTs with Maven against Scala 2.13 plus this PR; they passed.
maryannxue commented on code in PR #38558:
URL: https://github.com/apache/spark/pull/38558#discussion_r1017328200
##
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveSparkPlanExec.scala:
##
@@ -209,6 +209,19 @@ case class AdaptiveSparkPlanExec(
cloud-fan commented on code in PR #38263:
URL: https://github.com/apache/spark/pull/38263#discussion_r1017320620
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/maskExpressions.scala:
##
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software
cloud-fan commented on code in PR #38557:
URL: https://github.com/apache/spark/pull/38557#discussion_r1017318193
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala:
##
@@ -320,6 +320,9 @@ abstract class Optimizer(catalogManager:
bersprockets commented on PR #38565:
URL: https://github.com/apache/spark/pull/38565#issuecomment-1308083533
@HyukjinKwon Thanks!
cloud-fan commented on code in PR #34815:
URL: https://github.com/apache/spark/pull/34815#discussion_r1017317387
##
sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/CliSuite.scala:
##
@@ -620,4 +620,17 @@ class CliSuite extends SparkFunSuite with
cloud-fan commented on PR #38419:
URL: https://github.com/apache/spark/pull/38419#issuecomment-1308082603
ceil/floor also take a second parameter for the number of digits to retain.
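For reference, the two-argument form rounds at a given decimal scale, e.g. `ceil(3.1411, 2)` yields `3.15`. A minimal Python sketch of those semantics (not Spark's implementation, which operates on decimals):

```python
import math

def ceil_scale(x, scale=0):
    # Scale up, take the ceiling, then scale back down -
    # mirrors the ceil(expr, targetScale) behavior described above.
    factor = 10 ** scale
    return math.ceil(x * factor) / factor

print(ceil_scale(3.1411, 2))   # 3.15
print(ceil_scale(3.1411, -1))  # 10.0 (negative scale rounds to tens)
```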
viirya commented on code in PR #38557:
URL: https://github.com/apache/spark/pull/38557#discussion_r1017317077
##
sql/core/src/main/scala/org/apache/spark/sql/execution/dynamicpruning/RowLevelOperationRuntimeGroupFiltering.scala:
##
@@ -66,7 +65,7 @@ case class
viirya commented on code in PR #38557:
URL: https://github.com/apache/spark/pull/38557#discussion_r1017316276
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala:
##
@@ -320,6 +320,9 @@ abstract class Optimizer(catalogManager: CatalogManager)
HyukjinKwon commented on PR #38565:
URL: https://github.com/apache/spark/pull/38565#issuecomment-1308081567
Merged to master, branch-3.3, and branch-3.2.
Yikun closed pull request #21: [SPARK-40569][TESTS] Add smoke test in
standalone cluster for spark-docker
URL: https://github.com/apache/spark-docker/pull/21
Yikun commented on PR #21:
URL: https://github.com/apache/spark-docker/pull/21#issuecomment-1308072699
Merged to master
Yikun commented on PR #21:
URL: https://github.com/apache/spark-docker/pull/21#issuecomment-1308072327
```
testing/run_tests.sh --image-url ghcr.io/yikun/spark-docker/spark:python3 --scala-version 2.12 --spark-version 3.3.0
===> Smoke test for
```
AmplabJenkins commented on PR #38541:
URL: https://github.com/apache/spark/pull/38541#issuecomment-1308071934
Can one of the admins verify this patch?
HyukjinKwon closed pull request #38565: [SPARK-41035][SQL] Don't patch foldable
children of aggregate functions in `RewriteDistinctAggregates`
URL: https://github.com/apache/spark/pull/38565
Narcasserun opened a new pull request, #38568:
URL: https://github.com/apache/spark/pull/38568
### What changes were proposed in this pull request?
Reuse variables from declared procfs files instead of duplicating code.
### Why are the changes needed?
The cost of looking up the config is
cloud-fan commented on code in PR #38557:
URL: https://github.com/apache/spark/pull/38557#discussion_r1017300121
##
sql/core/src/main/scala/org/apache/spark/sql/execution/dynamicpruning/RowLevelOperationRuntimeGroupFiltering.scala:
##
@@ -89,10 +88,8 @@ case class
zhengruifeng commented on PR #38318:
URL: https://github.com/apache/spark/pull/38318#issuecomment-1308066292
thanks @cloud-fan @HyukjinKwon @amaliujia for reviews!
ulysses-you commented on code in PR #38558:
URL: https://github.com/apache/spark/pull/38558#discussion_r1017299064
##
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveSparkPlanExec.scala:
##
@@ -209,6 +209,19 @@ case class AdaptiveSparkPlanExec(
cloud-fan closed pull request #38318: [SPARK-40852][CONNECT][PYTHON] Introduce
`StatFunction` in proto and implement `DataFrame.summary`
URL: https://github.com/apache/spark/pull/38318
gengliangwang closed pull request #38542: Revert "[SPARK-38550][SQL][CORE] Use
a disk-based store to save more debug information for live UI"
URL: https://github.com/apache/spark/pull/38542
cloud-fan commented on PR #38318:
URL: https://github.com/apache/spark/pull/38318#issuecomment-1308063610
thanks, merging to master!