xwu99 commented on PR #33941:
URL: https://github.com/apache/spark/pull/33941#issuecomment-1103533768
> Support a basic reuse policy (perhaps EXEC_CORES_EQUAL since I think that
was your original use case) and allow users to specify their own, i.e., a config
that perhaps loads all policies, like t
LuciferYang commented on code in PR #36237:
URL: https://github.com/apache/spark/pull/36237#discussion_r853779042
##
core/src/main/scala/org/apache/spark/status/KVUtils.scala:
##
@@ -100,6 +100,33 @@ private[spark] object KVUtils extends Logging {
}
}
+ /** Counts the
yaooqinn commented on PR #36222:
URL: https://github.com/apache/spark/pull/36222#issuecomment-1103527220
thanks, @srowen @mridulm @HyukjinKwon for the check, merged to
master/3.3/3.2/3.1/3.0
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
bkosaraju commented on PR #24801:
URL: https://github.com/apache/spark/pull/24801#issuecomment-1103526132
@etspaceman Can this PR be merged? Thanks, as this can unblock some
local and integration testing pieces for Kinesis streaming.
yaooqinn closed pull request #36222: [SPARK-38922][Core] TaskLocation.apply
throw NullPointerException
URL: https://github.com/apache/spark/pull/36222
xwu99 commented on code in PR #33941:
URL: https://github.com/apache/spark/pull/33941#discussion_r853770991
##
core/src/main/scala/org/apache/spark/scheduler/SparkListener.scala:
##
@@ -283,8 +283,9 @@ case class SparkListenerApplicationEnd(time: Long) extends
SparkListenerEven
PavithraRamachandran opened a new pull request, #36278:
URL: https://github.com/apache/spark/pull/36278
### What changes were proposed in this pull request?
Make the stage ID, which is currently displayed as static text in the DAG, a
navigable link to the particular stage page.
cloud-fan commented on code in PR #36238:
URL: https://github.com/apache/spark/pull/36238#discussion_r853768026
##
core/src/main/scala/org/apache/spark/executor/Executor.scala:
##
@@ -264,16 +290,26 @@ private[spark] class Executor(
decommissioned = true
}
+ private[e
xwu99 commented on code in PR #33941:
URL: https://github.com/apache/spark/pull/33941#discussion_r853761272
##
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala:
##
@@ -518,11 +540,25 @@ private[spark] class ExecutorAllocationManager(
numExecutorsTarget +
cloud-fan closed pull request #36276: [SPARK-38962][SQL] Fix wrong computeStats
at DataSourceV2Relation
URL: https://github.com/apache/spark/pull/36276
cloud-fan commented on PR #36276:
URL: https://github.com/apache/spark/pull/36276#issuecomment-1103506965
thanks, merging to master!
cloud-fan commented on code in PR #36117:
URL: https://github.com/apache/spark/pull/36117#discussion_r853755920
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlanDistinctKeys.scala:
##
@@ -29,6 +29,12 @@ import
org.apache.spark.sql.internal.S
AmplabJenkins commented on PR #36252:
URL: https://github.com/apache/spark/pull/36252#issuecomment-1103504970
Can one of the admins verify this patch?
MaxGekk commented on code in PR #36259:
URL: https://github.com/apache/spark/pull/36259#discussion_r853747055
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala:
##
@@ -416,13 +437,20 @@ object QueryParsingErrors extends QueryErrorsBase {
}
ivoson commented on code in PR #36259:
URL: https://github.com/apache/spark/pull/36259#discussion_r853727398
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala:
##
@@ -416,13 +437,20 @@ object QueryParsingErrors extends QueryErrorsBase {
}
beliefer opened a new pull request, #36277:
URL: https://github.com/apache/spark/pull/36277
### What changes were proposed in this pull request?
This PR backports https://github.com/apache/spark/pull/35531 and
https://github.com/apache/spark/pull/35041 to branch-3.3
### Why are
cloud-fan commented on code in PR #32298:
URL: https://github.com/apache/spark/pull/32298#discussion_r853713177
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala:
##
@@ -663,11 +663,13 @@ case class UnresolvedWith(
*
cloud-fan commented on code in PR #32298:
URL: https://github.com/apache/spark/pull/32298#discussion_r853712433
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/MergeScalarSubqueries.scala:
##
@@ -0,0 +1,382 @@
+/*
+ * Licensed to the Apache Software Founda
ulysses-you commented on code in PR #36276:
URL: https://github.com/apache/spark/pull/36276#discussion_r853710938
##
sql/catalyst/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DataSourceV2Relation.scala:
##
@@ -80,7 +80,7 @@ case class DataSourceV2Relation(
ulysses-you commented on PR #36276:
URL: https://github.com/apache/spark/pull/36276#issuecomment-1103451109
cc @cloud-fan
ulysses-you opened a new pull request, #36276:
URL: https://github.com/apache/spark/pull/36276
### What changes were proposed in this pull request?
Use `Scan` to match `SupportsReportStatistics`.
### Why are the changes needed?
The interface `SupportsReportStatist
ulysses-you commented on code in PR #36253:
URL: https://github.com/apache/spark/pull/36253#discussion_r853708410
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsReportUniqueKeys.java:
##
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundati
ulysses-you commented on code in PR #36117:
URL: https://github.com/apache/spark/pull/36117#discussion_r853706929
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlanDistinctKeys.scala:
##
@@ -29,6 +29,12 @@ import
org.apache.spark.sql.internal
gengliangwang commented on code in PR #36274:
URL: https://github.com/apache/spark/pull/36274#discussion_r853703104
##
core/src/main/scala/org/apache/spark/internal/config/Status.scala:
##
@@ -73,8 +73,8 @@ private[spark] object Status {
val DISK_STORE_DIR_FOR_STATUS =
weixiuli commented on PR #36162:
URL: https://github.com/apache/spark/pull/36162#issuecomment-1103442022
cc @Ngone51 @mridulm Could you please help take a look when you have time?
Thanks.
anchovYu commented on PR #36241:
URL: https://github.com/apache/spark/pull/36241#issuecomment-1103441494
The cherrypick PR to 3.3: https://github.com/apache/spark/pull/36275
anchovYu commented on PR #36275:
URL: https://github.com/apache/spark/pull/36275#issuecomment-1103440641
Hi @MaxGekk , this is the cherry-picked PR. Thank you!
xwu99 commented on code in PR #33941:
URL: https://github.com/apache/spark/pull/33941#discussion_r853701291
##
core/src/main/scala/org/apache/spark/resource/ResourceProfileManager.scala:
##
@@ -77,13 +85,27 @@ private[spark] class ResourceProfileManager(sparkConf:
SparkConf,
linhongliu-db commented on code in PR #36274:
URL: https://github.com/apache/spark/pull/36274#discussion_r853701285
##
core/src/main/scala/org/apache/spark/internal/config/Status.scala:
##
@@ -73,8 +73,8 @@ private[spark] object Status {
val DISK_STORE_DIR_FOR_STATUS =
anchovYu opened a new pull request, #36275:
URL: https://github.com/apache/spark/pull/36275
Backport to 3.3:
Closes #36241 from anchovYu/ansi-error-improve.
Authored-by: Xinyi Yu
Signed-off-by: Max Gekk
(cherry picked from commit f76b3e766f79b4c2d4f1ecffaad25aeb962336b7)
beliefer commented on PR #36258:
URL: https://github.com/apache/spark/pull/36258#issuecomment-1103439332
@HyukjinKwon @cloud-fan Thank you.
gengliangwang commented on code in PR #36274:
URL: https://github.com/apache/spark/pull/36274#discussion_r853699656
##
core/src/main/scala/org/apache/spark/internal/config/Status.scala:
##
@@ -73,8 +73,8 @@ private[spark] object Status {
val DISK_STORE_DIR_FOR_STATUS =
gengliangwang commented on PR #36274:
URL: https://github.com/apache/spark/pull/36274#issuecomment-1103437742
cc @LinhongLiu @cloud-fan @dongjoon-hyun
gengliangwang opened a new pull request, #36274:
URL: https://github.com/apache/spark/pull/36274
### What changes were proposed in this pull request?
* Add the conf `spark.appStatusStore.diskStoreDir` in configuration.md
* This diagnostic API requires setting `spark.appStatu
xwu99 commented on code in PR #33941:
URL: https://github.com/apache/spark/pull/33941#discussion_r853695878
##
core/src/main/scala/org/apache/spark/resource/ResourceProfileManager.scala:
##
@@ -77,13 +85,27 @@ private[spark] class ResourceProfileManager(sparkConf:
SparkConf,
cloud-fan commented on code in PR #32298:
URL: https://github.com/apache/spark/pull/32298#discussion_r853685821
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/MergeScalarSubqueries.scala:
##
@@ -0,0 +1,382 @@
+/*
+ * Licensed to the Apache Software Founda
zhengruifeng commented on PR #36181:
URL: https://github.com/apache/spark/pull/36181#issuecomment-1103419993
@xinrong-databricks I think we should keep in line with pandas on
dealing with NAs, and end users do not need to know the internal details about
converting NAs to nulls.
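As a minimal illustration of the pandas NA semantics referenced here (plain pandas, not pyspark.pandas; the series values are made up for the example):

```python
import numpy as np
import pandas as pd

# In pandas, both np.nan and None are treated as missing (NA) values;
# end users work with NA-aware operations rather than storage details
# such as how NAs are represented internally.
s = pd.Series([1.0, np.nan, None, 4.0])

print(s.isna().tolist())  # which entries are missing
print(s.sum())            # NA values are skipped by default
```

pyspark.pandas aims to match this behavior, so users should not need to care that NAs become nulls in the underlying Spark columns.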
HyukjinKwon commented on code in PR #36245:
URL: https://github.com/apache/spark/pull/36245#discussion_r853683619
##
sql/core/src/main/scala/org/apache/spark/sql/execution/BaseScriptTransformationExec.scala:
##
@@ -262,7 +262,8 @@ trait BaseScriptTransformationExec extends Unary
HyukjinKwon commented on PR #36245:
URL: https://github.com/apache/spark/pull/36245#issuecomment-1103417894
cc @AngersZh FYI.
HyukjinKwon commented on code in PR #36253:
URL: https://github.com/apache/spark/pull/36253#discussion_r853683206
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsReportUniqueKeys.java:
##
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundati
cloud-fan closed pull request #36268: [SPARK-37575][SQL][FOLLOWUP] Update the
migration guide for added legacy flag for the breaking change of write null
value in csv to unquoted empty string
URL: https://github.com/apache/spark/pull/36268
cloud-fan commented on PR #36268:
URL: https://github.com/apache/spark/pull/36268#issuecomment-1103415783
thanks, merging to master/3.3!
cloud-fan commented on code in PR #36117:
URL: https://github.com/apache/spark/pull/36117#discussion_r853680161
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlanDistinctKeys.scala:
##
@@ -29,6 +29,12 @@ import
org.apache.spark.sql.internal.S
LuciferYang commented on code in PR #36261:
URL: https://github.com/apache/spark/pull/36261#discussion_r853680010
##
sql/core/src/test/scala/org/apache/spark/sql/execution/python/PythonForeachWriterSuite.scala:
##
@@ -102,11 +102,15 @@ class PythonForeachWriterSuite extends Spar
dongjoon-hyun commented on PR #36271:
URL: https://github.com/apache/spark/pull/36271#issuecomment-1103391957
+1, LGTM.
zhengruifeng commented on code in PR #36246:
URL: https://github.com/apache/spark/pull/36246#discussion_r853669412
##
python/pyspark/pandas/series.py:
##
@@ -2209,15 +2219,43 @@ def _interpolate(
) * null_index_forward + last_non_null_forward
fill_cond = ~F.i
zhengruifeng commented on code in PR #36246:
URL: https://github.com/apache/spark/pull/36246#discussion_r853668488
##
python/pyspark/pandas/generic.py:
##
@@ -3259,6 +3260,10 @@ def interpolate(
Maximum number of consecutive NaNs to fill. Must be greater than
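The `limit` parameter being documented here mirrors pandas' own `interpolate(limit=...)`; a quick plain-pandas sketch of what it does (the series values are made up):

```python
import numpy as np
import pandas as pd

# `limit` caps how many consecutive NaNs interpolate() will fill;
# it must be greater than 0.
s = pd.Series([1.0, np.nan, np.nan, np.nan, 5.0])

print(s.interpolate().tolist())         # all three NaNs filled linearly
print(s.interpolate(limit=2).tolist())  # only the first two NaNs filled
```

With `limit=2`, the third consecutive NaN is left unfilled.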
panbingkun opened a new pull request, #36273:
URL: https://github.com/apache/spark/pull/36273
### What changes were proposed in this pull request?
Added an exception to be thrown in SparkConf.validateSettings if the initial
memory (set by "spark.executor.extraJavaOptions=-Xms{XXX}G")
HyukjinKwon closed pull request #36271: [SPARK-38844][PYTHON][TESTS][FOLLOW-UP]
Test pyspark.pandas.tests.test_generic_functions
URL: https://github.com/apache/spark/pull/36271
HyukjinKwon commented on PR #36271:
URL: https://github.com/apache/spark/pull/36271#issuecomment-1103375342
Thanks all!
HyukjinKwon commented on PR #36271:
URL: https://github.com/apache/spark/pull/36271#issuecomment-1103375136
All pyspark tests passed.
Merged to master.
HyukjinKwon closed pull request #36270: [SPARK-38956][TESTS] Fix
FAILED_EXECUTE_UDF test case on Java 17
URL: https://github.com/apache/spark/pull/36270
HyukjinKwon commented on PR #36270:
URL: https://github.com/apache/spark/pull/36270#issuecomment-1103372759
Merged to master.
HyukjinKwon closed pull request #36258: [SPARK-37613][SQL][FOLLOWUP] Supplement
docs for regr_count
URL: https://github.com/apache/spark/pull/36258
HyukjinKwon commented on PR #36258:
URL: https://github.com/apache/spark/pull/36258#issuecomment-1103372145
Merged to master and branch-3.3.
ulysses-you commented on code in PR #36117:
URL: https://github.com/apache/spark/pull/36117#discussion_r853662688
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlanDistinctKeys.scala:
##
@@ -29,6 +29,12 @@ import
org.apache.spark.sql.internal
zhengruifeng commented on PR #36257:
URL: https://github.com/apache/spark/pull/36257#issuecomment-1103367501
Thanks @HyukjinKwon for reviewing!
HyukjinKwon closed pull request #36255: [SPARK-38828][PYTHON] Remove
TimestampNTZ type Python support in Spark 3.3
URL: https://github.com/apache/spark/pull/36255
HyukjinKwon commented on PR #36255:
URL: https://github.com/apache/spark/pull/36255#issuecomment-1103357946
Merged to master and branch-3.3.
cc @MaxGekk @gengliangwang Python side is ready for hiding timestamp ntz.
HyukjinKwon closed pull request #36257: [SPARK-38943][PYTHON] EWM support
ignore_na
URL: https://github.com/apache/spark/pull/36257
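PR #36257 adds `ignore_na` support to pandas-on-Spark EWM. A minimal plain-pandas sketch of the semantics, assuming the pandas-on-Spark parameter mirrors pandas' own `ewm(ignore_na=...)` (the series values are made up):

```python
import numpy as np
import pandas as pd

# ignore_na controls how NaNs affect the exponential weights:
#   ignore_na=True  -> weights are based on relative positions of the
#                      non-NaN values, as if the NaNs were absent
#   ignore_na=False -> weights are based on absolute positions (default)
s = pd.Series([1.0, np.nan, 3.0])

print(s.ewm(alpha=0.5, ignore_na=True).mean().iloc[-1])   # ~2.3333
print(s.ewm(alpha=0.5, ignore_na=False).mean().iloc[-1])  # 2.6
```

With `ignore_na=False`, the NaN still widens the gap between the two observations, so the older value gets a smaller weight.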
HyukjinKwon commented on PR #36257:
URL: https://github.com/apache/spark/pull/36257#issuecomment-1103355723
Merged to master.
HyukjinKwon commented on code in PR #36261:
URL: https://github.com/apache/spark/pull/36261#discussion_r853658325
##
sql/core/src/test/scala/org/apache/spark/sql/execution/python/PythonForeachWriterSuite.scala:
##
@@ -102,11 +102,15 @@ class PythonForeachWriterSuite extends Spar
zhengruifeng commented on PR #36246:
URL: https://github.com/apache/spark/pull/36246#issuecomment-1103354556
@itholic Sure! After
https://github.com/apache/spark/pull/36271#pullrequestreview-946491209 gets
merged, I will rebase this PR to trigger the build. Thanks!
itholic commented on code in PR #36215:
URL: https://github.com/apache/spark/pull/36215#discussion_r853657776
##
python/pyspark/pandas/tests/test_series.py:
##
@@ -1749,14 +1759,22 @@ def test_drop(self):
with self.assertRaisesRegex(ValueError, msg):
psser.
morvenhuang commented on PR #36212:
URL: https://github.com/apache/spark/pull/36212#issuecomment-1103345856
@dtenedor Thank you for the patience and the help.
beliefer commented on PR #35041:
URL: https://github.com/apache/spark/pull/35041#issuecomment-1103345609
@cloud-fan Thank you very much! @MaxGekk @jiangxb1987 Thank you too.
itholic commented on code in PR #36246:
URL: https://github.com/apache/spark/pull/36246#discussion_r853654254
##
python/pyspark/pandas/generic.py:
##
@@ -3259,6 +3260,10 @@ def interpolate(
Maximum number of consecutive NaNs to fill. Must be greater than
itholic commented on PR #36246:
URL: https://github.com/apache/spark/pull/36246#issuecomment-1103341881
Seems like the CI builder is fixed now. Could you rebase to master?
allisonwang-db opened a new pull request, #36272:
URL: https://github.com/apache/spark/pull/36272
### What changes were proposed in this pull request?
This PR uses multipart identifiers when parsing table-valued functions.
### Why are the changes needed?
To make table-valued
itholic commented on PR #36083:
URL: https://github.com/apache/spark/pull/36083#issuecomment-1103322576
Thanks! Let me create the ticket right after completing this PR.
zhengruifeng commented on PR #36271:
URL: https://github.com/apache/spark/pull/36271#issuecomment-1103321116
great catch! thanks @HyukjinKwon !
Yikun commented on PR #36083:
URL: https://github.com/apache/spark/pull/36083#issuecomment-1103318782
@itholic Fine for me; actually, I didn't mean that the above scripts should
generate the doc completely (inline code), because we had some specific notes
based on the doc.
You could just regard it
itholic commented on PR #36083:
URL: https://github.com/apache/spark/pull/36083#issuecomment-1103309286
@Yikun Yeah, I tried it at first, but it was a bit tricky since the rules
for the RST format that Sphinx checks are way stricter than I thought when
building a document, so I decided to man
github-actions[bot] closed pull request #34062: [SPARK-36819][SQL] Don't insert
redundant filters in case static partition pruning can be done
URL: https://github.com/apache/spark/pull/34062
github-actions[bot] commented on PR #33446:
URL: https://github.com/apache/spark/pull/33446#issuecomment-1103285936
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] commented on PR #35083:
URL: https://github.com/apache/spark/pull/35083#issuecomment-1103285907
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
HyukjinKwon commented on PR #36271:
URL: https://github.com/apache/spark/pull/36271#issuecomment-1103264269
Build link: https://github.com/HyukjinKwon/spark/runs/6086874477
HyukjinKwon commented on code in PR #36127:
URL: https://github.com/apache/spark/pull/36127#discussion_r853600923
##
python/pyspark/pandas/tests/test_generic_functions.py:
##
@@ -0,0 +1,124 @@
+#
Review Comment:
https://github.com/apache/spark/pull/36271
HyukjinKwon opened a new pull request, #36271:
URL: https://github.com/apache/spark/pull/36271
### What changes were proposed in this pull request?
This is a minor followup of https://github.com/apache/spark/pull/36127 that
actually activates the tests added.
### Why are the ch
huaxingao commented on PR #36264:
URL: https://github.com/apache/spark/pull/36264#issuecomment-1103262457
cc @cloud-fan
huaxingao commented on code in PR #36264:
URL: https://github.com/apache/spark/pull/36264#discussion_r853597550
##
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/connector/SupportsPushDownCatalystFilters.scala:
##
@@ -35,7 +35,7 @@ trait SupportsPushDownCatalystFilter
HyukjinKwon commented on code in PR #36127:
URL: https://github.com/apache/spark/pull/36127#discussion_r853597396
##
python/pyspark/pandas/tests/test_generic_functions.py:
##
@@ -0,0 +1,124 @@
+#
Review Comment:
Oh, actually we should add this file into
https://github.com/a
dongjoon-hyun commented on PR #36270:
URL: https://github.com/apache/spark/pull/36270#issuecomment-1103253160
cc @MaxGekk
dongjoon-hyun commented on code in PR #36220:
URL: https://github.com/apache/spark/pull/36220#discussion_r853570987
##
sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala:
##
@@ -418,4 +418,20 @@ class QueryExecutionErrorsSuite extends QueryTest
williamhyun opened a new pull request, #36270:
URL: https://github.com/apache/spark/pull/36270
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
xinrong-databricks commented on code in PR #36269:
URL: https://github.com/apache/spark/pull/36269#discussion_r853557693
##
python/pyspark/pandas/tests/test_series.py:
##
@@ -260,6 +255,12 @@ def test_rename_axis(self):
psser.rename_axis(index=str.upper).sort_index(
AmplabJenkins commented on PR #36260:
URL: https://github.com/apache/spark/pull/36260#issuecomment-1103244085
Can one of the admins verify this patch?
xinrong-databricks opened a new pull request, #36269:
URL: https://github.com/apache/spark/pull/36269
### What changes were proposed in this pull request?
Test anchor frame for in-place `Series.rename_axis`.
### Why are the changes needed?
As a follow-up for https://github.com/ap
dtenedor commented on PR #36212:
URL: https://github.com/apache/spark/pull/36212#issuecomment-1103230094
@morvenhuang Note, I made a small update to `ResolveDefaultColumns` to fix
the two tests you mentioned in [1]. Thanks for pointing that out; it gives us a
chance to improve that code.
maryannxue commented on code in PR #36238:
URL: https://github.com/apache/spark/pull/36238#discussion_r853536305
##
core/src/main/scala/org/apache/spark/executor/Executor.scala:
##
@@ -264,16 +290,26 @@ private[spark] class Executor(
decommissioned = true
}
+ private[
anchovYu commented on code in PR #36241:
URL: https://github.com/apache/spark/pull/36241#discussion_r853530179
##
sql/core/src/test/resources/sql-tests/results/string-functions.sql.out:
##
@@ -1,5 +1,5 @@
-- Automatically generated by SQLQueryTestSuite
--- Number of queries: 14
mridulm commented on PR #35683:
URL: https://github.com/apache/spark/pull/35683#issuecomment-1103207562
Let us add shuffle service enabled == false as well, until this is supported
in this context.
mridulm commented on PR #36222:
URL: https://github.com/apache/spark/pull/36222#issuecomment-1103205833
Thanks for clarifying, looks good to me.
+CC @tgravescs - you might be interested in this behavior.
bersprockets commented on code in PR #36230:
URL: https://github.com/apache/spark/pull/36230#discussion_r853513127
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala:
##
@@ -144,6 +144,8 @@ object EliminateOuterJoin extends Rule[LogicalPlan] with
anchovYu commented on PR #36268:
URL: https://github.com/apache/spark/pull/36268#issuecomment-1103202176
Hi @cloud-fan , this is the migration guide update follow-up. Could you
review? Thanks!