gengliangwang commented on code in PR #39508:
URL: https://github.com/apache/spark/pull/39508#discussion_r1080920518
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveReferencesInAggregate.scala:
##
@@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Soft
LuciferYang opened a new pull request, #39650:
URL: https://github.com/apache/spark/pull/39650
### What changes were proposed in this pull request?
This PR aims to add a null check before the `ContinuousWriteRDD#compute` function
closes `dataWriter`, to avoid an NPE.
### Why are the chang
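The guard described above is a standard defensive-cleanup pattern. A minimal plain-Java sketch (all names here are illustrative stand-ins, not the actual `ContinuousWriteRDD` code):

```java
// Minimal sketch of the guard described above: if the task fails before the
// writer is initialized, the cleanup path must not call close() on null.
// WriterTask and DataWriter are illustrative, not the real Spark classes.
class WriterTask {
    interface DataWriter { void close(); }

    private DataWriter dataWriter = null; // may never be initialized
    int closeCalls = 0;

    void cleanup() {
        // Null check before close() so a task that failed before
        // dataWriter was assigned does not throw a NullPointerException.
        if (dataWriter != null) {
            dataWriter.close();
            closeCalls++;
        }
    }
}
```

Without the `!= null` check, any task that errors out before the writer is created would mask the original failure with an NPE from the cleanup path.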
LuciferYang commented on PR #39650:
URL: https://github.com/apache/spark/pull/39650#issuecomment-1396609145
cc @HeartSaVioR
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
LuciferYang commented on code in PR #39650:
URL: https://github.com/apache/spark/pull/39650#discussion_r1080946019
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousWriteRDD.scala:
##
@@ -88,7 +88,7 @@ class ContinuousWriteRDD(var prev: RDD
itholic opened a new pull request, #39651:
URL: https://github.com/apache/spark/pull/39651
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How was
LuciferYang commented on code in PR #39642:
URL: https://github.com/apache/spark/pull/39642#discussion_r1081018834
##
sql/core/src/main/scala/org/apache/spark/sql/streaming/progress.scala:
##
@@ -125,14 +127,16 @@ class StateOperatorProgress private[sql](
* @since 2.1.0
*/
antonipp commented on code in PR #38376:
URL: https://github.com/apache/spark/pull/38376#discussion_r1073566530
##
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/DepsTestsSuite.scala:
##
@@ -239,6 +239,41 @@ private[spar
antonipp commented on code in PR #38376:
URL: https://github.com/apache/spark/pull/38376#discussion_r1073572202
##
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/DepsTestsSuite.scala:
##
@@ -239,6 +239,41 @@ private[spar
EnricoMi commented on code in PR #39640:
URL: https://github.com/apache/spark/pull/39640#discussion_r1081026350
##
sql/core/src/main/scala/org/apache/spark/sql/KeyValueGroupedDataset.scala:
##
@@ -171,6 +171,84 @@ class KeyValueGroupedDataset[K, V] private[sql](
flatMapGrou
cloud-fan commented on code in PR #39640:
URL: https://github.com/apache/spark/pull/39640#discussion_r1081074462
##
sql/core/src/test/java/test/org/apache/spark/sql/JavaDatasetSuite.java:
##
@@ -330,6 +331,18 @@ public void testGroupBy() {
Encoders.STRING());
Asser
cloud-fan commented on code in PR #39640:
URL: https://github.com/apache/spark/pull/39640#discussion_r1081076961
##
sql/core/src/test/java/test/org/apache/spark/sql/JavaDatasetSuite.java:
##
@@ -387,7 +400,27 @@ public void testGroupBy() {
},
Encoders.STRING());
EnricoMi commented on code in PR #39640:
URL: https://github.com/apache/spark/pull/39640#discussion_r1081087009
##
sql/core/src/test/java/test/org/apache/spark/sql/JavaDatasetSuite.java:
##
@@ -330,6 +331,18 @@ public void testGroupBy() {
Encoders.STRING());
Assert
EnricoMi commented on code in PR #39640:
URL: https://github.com/apache/spark/pull/39640#discussion_r1081093035
##
sql/core/src/test/java/test/org/apache/spark/sql/JavaDatasetSuite.java:
##
@@ -387,7 +400,27 @@ public void testGroupBy() {
},
Encoders.STRING());
-
cloud-fan commented on code in PR #39640:
URL: https://github.com/apache/spark/pull/39640#discussion_r1081097523
##
sql/core/src/test/java/test/org/apache/spark/sql/JavaDatasetSuite.java:
##
@@ -387,7 +400,27 @@ public void testGroupBy() {
},
Encoders.STRING());
peter-toth opened a new pull request, #39652:
URL: https://github.com/apache/spark/pull/39652
### What changes were proposed in this pull request?
This is a follow-up PR to https://github.com/apache/spark/pull/38034. It
relaxes `multiTransformDown()`'s `rule` parameter type to accept any
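The benefit of this kind of signature relaxation can be sketched in plain Java (a hypothetical `expand` helper, not the actual `multiTransformDown` signature): widening a parameter from a concrete collection type to an interface lets callers return any implementation of it.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.function.Function;

// Hypothetical illustration of the relaxation: accepting the List interface
// (rather than a concrete class such as ArrayList) means the alternatives
// function may return any List implementation.
class Expand {
    static List<Integer> expand(List<Integer> xs,
                                Function<Integer, List<Integer>> alternatives) {
        List<Integer> out = new ArrayList<>();
        for (int x : xs) {
            out.addAll(alternatives.apply(x)); // any List subtype is fine
        }
        return out;
    }

    // A caller supplying a LinkedList of alternatives is accepted as-is.
    static List<Integer> demo() {
        return expand(List.of(1, 2), n -> new LinkedList<>(List.of(n, n * 10)));
    }
}
```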
peter-toth commented on PR #39652:
URL: https://github.com/apache/spark/pull/39652#issuecomment-1396823289
@cloud-fan, could you please take a look at this small improvement?
codecov-commenter commented on PR #39647:
URL: https://github.com/apache/spark/pull/39647#issuecomment-1396839380
#
[Codecov](https://codecov.io/gh/apache/spark/pull/39647?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Soft
panbingkun commented on PR #39275:
URL: https://github.com/apache/spark/pull/39275#issuecomment-1396852540
> https://user-images.githubusercontent.com/1097932/213345430-088ace51-e8ab-4f2b-9097-0184ab94efb8.png
>
> @panbingkun there are 7 usages from live entities, while there are
panbingkun commented on code in PR #39275:
URL: https://github.com/apache/spark/pull/39275#discussion_r1081174525
##
core/src/main/scala/org/apache/spark/status/protobuf/PoolDataSerializer.scala:
##
@@ -34,7 +33,7 @@ class PoolDataSerializer extends ProtobufSerDe[PoolData] {
EnricoMi commented on code in PR #39640:
URL: https://github.com/apache/spark/pull/39640#discussion_r1081178784
##
sql/core/src/test/java/test/org/apache/spark/sql/JavaDatasetSuite.java:
##
@@ -387,7 +400,27 @@ public void testGroupBy() {
},
Encoders.STRING());
-
panbingkun commented on code in PR #39275:
URL: https://github.com/apache/spark/pull/39275#discussion_r1081187372
##
core/src/main/scala/org/apache/spark/status/protobuf/StageDataWrapperSerializer.scala:
##
@@ -393,10 +393,8 @@ class StageDataWrapperSerializer extends
ProtobufS
HyukjinKwon opened a new pull request, #39653:
URL: https://github.com/apache/spark/pull/39653
### What changes were proposed in this pull request?
This PR proposes to enable pushing down the limit through Python UDFs by
disabling `PushProjectionThroughLimit` and `CollapseProject` if
HyukjinKwon commented on code in PR #39653:
URL: https://github.com/apache/spark/pull/39653#discussion_r1081229852
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala:
##
@@ -1006,7 +1006,7 @@ object CollapseProject extends Rule[LogicalPlan] wi
HyukjinKwon commented on code in PR #39585:
URL: https://github.com/apache/spark/pull/39585#discussion_r1081248787
##
python/pyspark/sql/connect/functions.py:
##
@@ -2350,8 +2356,21 @@ def unwrap_udt(col: "ColumnOrName") -> Column:
unwrap_udt.__doc__ = pysparkfuncs.unwrap_udt._
HyukjinKwon commented on PR #39649:
URL: https://github.com/apache/spark/pull/39649#issuecomment-1396979534
Merged to master.
HyukjinKwon closed pull request #39649: [SPARK-42111][SQL][TESTS] Mark
`Orc*FilterSuite/OrcV*SchemaPruningSuite` as `ExtendedSQLTest`
URL: https://github.com/apache/spark/pull/39649
HyukjinKwon closed pull request #39639: [SPARK-42080][PYTHON][DOCS] Add
guideline for PySpark errors
URL: https://github.com/apache/spark/pull/39639
srowen commented on PR #39190:
URL: https://github.com/apache/spark/pull/39190#issuecomment-1397061274
It makes sense to me. I don't know a lot about this code, so I hesitate to
review it. Does this only affect display metrics? I'm just wondering why it
hadn't caused a problem before. Maybe i
srowen commented on PR #39190:
URL: https://github.com/apache/spark/pull/39190#issuecomment-1397061880
Or maybe more to the point, do you have a concrete example of how this
arises in Spark?
tedyu opened a new pull request, #39654:
URL: https://github.com/apache/spark/pull/39654
### What changes were proposed in this pull request?
This PR adds `ioe` to the warning log of `finalizeShuffleMerge`.
### Why are the changes needed?
With `ioe` logged, users would have more c
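The motivation here is the usual practice of attaching the caught exception to the warning rather than discarding it. A hedged sketch (illustrative names, not the actual `RemoteBlockPushResolver` code):

```java
import java.io.IOException;

// Sketch of the logging change described above: include the caught
// IOException in the warning text instead of dropping it, so the root
// cause of the failed merge finalization is visible to operators.
class MergeLogging {
    static String warnMessage(String appId, IOException ioe) {
        return "Exception finalizing shuffle merge for application " + appId
            + ": " + ioe.getMessage();
    }

    static String demo() {
        return warnMessage("app-001", new IOException("connection reset"));
    }
}
```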
tedyu commented on PR #39654:
URL: https://github.com/apache/spark/pull/39654#issuecomment-1397097053
cc @mridulm
dongjoon-hyun commented on PR #39649:
URL: https://github.com/apache/spark/pull/39649#issuecomment-1397128418
Thank you, @HyukjinKwon !
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081446977
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/AliasAwareOutputExpression.scala:
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Fou
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081447485
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/AliasAwareOutputExpression.scala:
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Fou
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081448596
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/AliasAwareOutputExpression.scala:
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Fou
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081448247
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/AliasAwareOutputExpression.scala:
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Fou
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081449075
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/AliasAwareOutputExpression.scala:
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Fou
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081449349
##
sql/core/src/main/scala/org/apache/spark/sql/execution/AliasAwareOutputExpression.scala:
##
@@ -74,18 +73,4 @@ trait AliasAwareOutputPartitioning extends
AliasAw
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081449673
##
sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala:
##
@@ -1314,6 +1313,135 @@ class PlannerSuite extends SharedSparkSession with
Adaptive
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081450369
##
sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala:
##
@@ -1314,6 +1313,135 @@ class PlannerSuite extends SharedSparkSession with
Adaptive
dongjoon-hyun commented on code in PR #38376:
URL: https://github.com/apache/spark/pull/38376#discussion_r1081451704
##
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStepSuite.scala:
##
@@ -353,3 +381,16 @@ class BasicDri
dongjoon-hyun commented on code in PR #38376:
URL: https://github.com/apache/spark/pull/38376#discussion_r1081452390
##
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStepSuite.scala:
##
@@ -353,3 +381,16 @@ class BasicDri
peter-toth commented on PR #37525:
URL: https://github.com/apache/spark/pull/37525#issuecomment-1397176778
I've rebased the PR on https://github.com/apache/spark/pull/39652, which is
not yet merged, so there is an extra commit
(https://github.com/apache/spark/pull/37525/commits/59646bbc26476
cloud-fan commented on PR #39652:
URL: https://github.com/apache/spark/pull/39652#issuecomment-1397201326
thanks, merging to master!
cloud-fan closed pull request #39652: [SPARK-40599][SQL] Relax multiTransform
rule type to allow alternatives to be any kinds of Seq
URL: https://github.com/apache/spark/pull/39652
peter-toth commented on PR #39652:
URL: https://github.com/apache/spark/pull/39652#issuecomment-1397202187
Thanks for the quick review!
EnricoMi commented on PR #39640:
URL: https://github.com/apache/spark/pull/39640#issuecomment-1397205310
All changes done, all tests green.
peter-toth commented on PR #37525:
URL: https://github.com/apache/spark/pull/37525#issuecomment-1397205575
> I've rebased the PR on #39652, which is not yet merged, so there is an
extra commit
([59646bb](https://github.com/apache/spark/commit/59646bbc26476ec957fd7bff8cbae317791dc228))
in th
xkrogen commented on code in PR #39592:
URL: https://github.com/apache/spark/pull/39592#discussion_r1081513455
##
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##
@@ -304,6 +304,14 @@ object SQLConf {
.stringConf
.createOptional
+ val PLAN
cloud-fan commented on code in PR #39592:
URL: https://github.com/apache/spark/pull/39592#discussion_r1081550033
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/rules/RuleExecutor.scala:
##
@@ -151,12 +152,15 @@ abstract class RuleExecutor[TreeType <: TreeNode[_]]
dongjoon-hyun closed pull request #39651: [SPARK-42113][PS][INFRA] Upgrade
pandas to 1.5.3
URL: https://github.com/apache/spark/pull/39651
antonipp commented on code in PR #38376:
URL: https://github.com/apache/spark/pull/38376#discussion_r1081559422
##
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStepSuite.scala:
##
@@ -353,3 +381,16 @@ class BasicDriverFe
aokolnychyi commented on PR #38005:
URL: https://github.com/apache/spark/pull/38005#issuecomment-1397327118
I've updated this PR and its description so it is ready for another look.
dtenedor commented on code in PR #39592:
URL: https://github.com/apache/spark/pull/39592#discussion_r1081597693
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala:
##
@@ -227,7 +227,7 @@ object LogicalPlanIntegrity {
* this method ch
antonipp commented on code in PR #38376:
URL: https://github.com/apache/spark/pull/38376#discussion_r1073834555
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala:
##
@@ -168,27 +168,27 @@ private[spark] class Ba
dongjoon-hyun opened a new pull request, #39655:
URL: https://github.com/apache/spark/pull/39655
### What changes were proposed in this pull request?
This PR aims to mark `ColumnarBatchSuite` as `ExtendedSQLTest`
### Why are the changes needed?
### Does this PR introd
dongjoon-hyun commented on PR #39655:
URL: https://github.com/apache/spark/pull/39655#issuecomment-1397414001
Could you review this, @huaxingao ?
dongjoon-hyun commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081658172
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -848,13 +848,13 @@ public void registerExecutor(Strin
dongjoon-hyun commented on PR #39655:
URL: https://github.com/apache/spark/pull/39655#issuecomment-1397423585
Thank you, @huaxingao !
tedyu commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081661106
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -848,13 +848,13 @@ public void registerExecutor(String appId,
srielau commented on PR #38419:
URL: https://github.com/apache/spark/pull/38419#issuecomment-1397428887
What is the result type? Does it match the input?
srielau commented on code in PR #38419:
URL: https://github.com/apache/spark/pull/38419#discussion_r1081669031
##
sql/core/src/test/resources/sql-tests/inputs/trunc.sql:
##
@@ -0,0 +1,136 @@
+-- trunc decimal
Review Comment:
Can you add some tests for the result type, specia
srielau commented on code in PR #38419:
URL: https://github.com/apache/spark/pull/38419#discussion_r1081675876
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala:
##
@@ -1432,6 +1681,53 @@ case class Logarithm(left: Expression, right:
dongjoon-hyun commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081684212
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -815,7 +815,7 @@ public MergeStatuses
finalizeShuffl
tedyu commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081692479
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -815,7 +815,7 @@ public MergeStatuses
finalizeShuffleMerge(F
jchen5 commented on PR #39375:
URL: https://github.com/apache/spark/pull/39375#issuecomment-1397459943
Definitely, I added some more tests. This is the set of factors I tested:
- Subquery type:
- Eligible for DecorrelateInnerQuery: Scalar, lateral join
- Not supported: EXISTS (new
gengliangwang commented on code in PR #39508:
URL: https://github.com/apache/spark/pull/39508#discussion_r1081741822
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveReferencesInAggregate.scala:
##
@@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Soft
gengliangwang commented on code in PR #39508:
URL: https://github.com/apache/spark/pull/39508#discussion_r1081742355
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveReferencesInAggregate.scala:
##
@@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Soft
gengliangwang commented on code in PR #39508:
URL: https://github.com/apache/spark/pull/39508#discussion_r1081743950
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveReferencesInAggregate.scala:
##
@@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Soft
rithwik-db commented on code in PR #39637:
URL: https://github.com/apache/spark/pull/39637#discussion_r1081766800
##
python/pyspark/ml/torch/tests/test_distributor.py:
##
@@ -286,6 +326,23 @@ def tearDown(self) -> None:
os.unlink(self.tempFile.name)
self.spark.
dongjoon-hyun commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081772125
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -815,7 +815,7 @@ public MergeStatuses
finalizeShuffl
viirya commented on code in PR #38005:
URL: https://github.com/apache/spark/pull/38005#discussion_r1081781360
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/v2Commands.scala:
##
@@ -274,6 +274,120 @@ case class ReplaceData(
}
}
+/**
+ * Writes a
dongjoon-hyun commented on PR #39655:
URL: https://github.com/apache/spark/pull/39655#issuecomment-1397531305
Merged to master for Apache Spark 3.4.0.
dongjoon-hyun closed pull request #39655: [SPARK-42116][SQL][TESTS] Mark
`ColumnarBatchSuite` as `ExtendedSQLTest`
URL: https://github.com/apache/spark/pull/39655
chaoqin-li1123 commented on PR #39647:
URL: https://github.com/apache/spark/pull/39647#issuecomment-1397535787
The test failure seems unrelated
(https://github.com/chaoqin-li1123/spark/actions/runs/3956602101/jobs/6776029863#step:11:1317)
viirya commented on code in PR #38005:
URL: https://github.com/apache/spark/pull/38005#discussion_r1081796690
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2Exec.scala:
##
@@ -477,6 +507,73 @@ object DataWritingSparkTask extends Loggi
rithwik-db commented on code in PR #39637:
URL: https://github.com/apache/spark/pull/39637#discussion_r1081814743
##
python/pyspark/ml/torch/tests/test_distributor.py:
##
@@ -288,6 +288,13 @@ def test_local_training_succeeds(self) -> None:
if cuda_env_var:
rithwik-db commented on code in PR #39299:
URL: https://github.com/apache/spark/pull/39299#discussion_r1081819229
##
python/pyspark/ml/torch/distributor.py:
##
@@ -72,6 +77,19 @@ def get_conf_boolean(sc: SparkContext, key: str,
default_value: str) -> bool:
)
+def get_l
aokolnychyi commented on code in PR #38005:
URL: https://github.com/apache/spark/pull/38005#discussion_r1081825442
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2Exec.scala:
##
@@ -477,6 +507,73 @@ object DataWritingSparkTask extends
aokolnychyi commented on code in PR #38005:
URL: https://github.com/apache/spark/pull/38005#discussion_r1081828234
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/v2Commands.scala:
##
@@ -274,6 +274,120 @@ case class ReplaceData(
}
}
+/**
+ * Wri
grundprinzip commented on code in PR #39541:
URL: https://github.com/apache/spark/pull/39541#discussion_r1081830731
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/SparkSession.scala:
##
@@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (AS
aokolnychyi commented on code in PR #38005:
URL: https://github.com/apache/spark/pull/38005#discussion_r1081850206
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/v2Commands.scala:
##
@@ -274,6 +274,120 @@ case class ReplaceData(
}
}
+/**
+ * Wri
huaxingao commented on PR #38005:
URL: https://github.com/apache/spark/pull/38005#issuecomment-1397613359
+1 LGTM
gengliangwang commented on PR #39275:
URL: https://github.com/apache/spark/pull/39275#issuecomment-1397656825
Thanks, merging to master
gengliangwang closed pull request #39275: [SPARK-41759][CORE] Use `weakIntern`
on string values in create new objects during deserialization
URL: https://github.com/apache/spark/pull/39275
hvanhovell closed pull request #39541: [SPARK-42043][CONNECT] Scala Client
Result with E2E Tests
URL: https://github.com/apache/spark/pull/39541
hvanhovell commented on PR #39541:
URL: https://github.com/apache/spark/pull/39541#issuecomment-1397666200
Merged
aokolnychyi commented on PR #32921:
URL: https://github.com/apache/spark/pull/32921#issuecomment-1397703601
Hi, @LorenzoMartini! I am not sure how much the `SupportsRuntimeFiltering` API
will be helpful for built-in sources because Spark treats them in a special
way. For instance, `PushDownUtil
dongjoon-hyun commented on code in PR #39541:
URL: https://github.com/apache/spark/pull/39541#discussion_r1081945632
##
connector/connect/client/jvm/pom.xml:
##
@@ -47,6 +47,12 @@
+
Review Comment:
Hi, @hvanhovell .
I believe this is not a
dongjoon-hyun commented on code in PR #39541:
URL: https://github.com/apache/spark/pull/39541#discussion_r1081947954
##
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/RemoteSparkSession.scala:
##
@@ -0,0 +1,198 @@
+/*
+ * Licensed to the Apa
allisonwang-db opened a new pull request, #39656:
URL: https://github.com/apache/spark/pull/39656
### What changes were proposed in this pull request?
This PR adds two new built-in table-valued functions in the table function
registry: `inline` and `inline_outer`.
### Why a
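The row semantics of these generators can be modeled roughly in plain Java (a sketch of the behavior, not the Spark implementation): `inline` expands each array of structs into one row per struct, while `inline_outer` additionally emits a null row for an empty array.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Plain-Java analogue of the described generator semantics (not the actual
// Spark code). Each inner list models an array<struct<a:int, b:string>>.
class InlineDemo {
    static final List<List<Map.Entry<Integer, String>>> DATA =
        List.of(List.of(Map.entry(1, "a"), Map.entry(2, "b")), List.of());

    static List<Map.Entry<Integer, String>> inline(
            List<List<Map.Entry<Integer, String>>> arrays) {
        List<Map.Entry<Integer, String>> rows = new ArrayList<>();
        for (List<Map.Entry<Integer, String>> arr : arrays) {
            rows.addAll(arr); // an empty array contributes no rows
        }
        return rows;
    }

    static List<Map.Entry<Integer, String>> inlineOuter(
            List<List<Map.Entry<Integer, String>>> arrays) {
        List<Map.Entry<Integer, String>> rows = new ArrayList<>();
        for (List<Map.Entry<Integer, String>> arr : arrays) {
            if (arr.isEmpty()) {
                rows.add(null); // empty array still yields one (null) row
            } else {
                rows.addAll(arr);
            }
        }
        return rows;
    }
}
```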
mridulm commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081952766
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -815,7 +815,7 @@ public MergeStatuses
finalizeShuffleMerge
tedyu commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081954269
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -815,7 +815,7 @@ public MergeStatuses
finalizeShuffleMerge(F