beliefer commented on code in PR #36593:
URL: https://github.com/apache/spark/pull/36593#discussion_r877803218
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/jdbc/JDBCCatalog.scala:
##
@@ -32,11 +35,14 @@ import org.apache.spark.sql.jdbc.{JdbcDialect, J
wangyum commented on PR #36588:
URL: https://github.com/apache/spark/pull/36588#issuecomment-1132514681
A case from production:
![image](https://user-images.githubusercontent.com/5399861/169463931-65bfd0c0-1759-4f9d-8a0a-66b32463b76a.png)
cloud-fan commented on code in PR #36608:
URL: https://github.com/apache/spark/pull/36608#discussion_r877763059
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Covariance.scala:
##
@@ -34,7 +34,7 @@ abstract class Covariance(val left: Expressio
amaliujia commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877754283
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -97,8 +97,18 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
LuciferYang commented on PR #36616:
URL: https://github.com/apache/spark/pull/36616#issuecomment-1132495258
This PR mainly focuses on `Parquet`. If this is acceptable, I will change ORC in another PR.
cloud-fan commented on code in PR #36608:
URL: https://github.com/apache/spark/pull/36608#discussion_r877745569
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Covariance.scala:
##
@@ -69,7 +69,7 @@ abstract class Covariance(val left: Expressio
cloud-fan commented on code in PR #36614:
URL: https://github.com/apache/spark/pull/36614#discussion_r877742782
##
docs/sql-ref-ansi-compliance.md:
##
@@ -28,10 +28,10 @@ The casting behaviours are defined as store assignment
rules in the standard.
When `spark.sql.storeAssig
cloud-fan commented on code in PR #36614:
URL: https://github.com/apache/spark/pull/36614#discussion_r877742454
##
docs/sql-ref-ansi-compliance.md:
##
@@ -28,10 +28,10 @@ The casting behaviours are defined as store assignment
rules in the standard.
When `spark.sql.storeAssig
LuciferYang commented on PR #36616:
URL: https://github.com/apache/spark/pull/36616#issuecomment-1132482921
Will update the PR description later.
cloud-fan commented on PR #36615:
URL: https://github.com/apache/spark/pull/36615#issuecomment-1132481952
Good catch!
This is a long-standing issue. The type coercion for decimal types is really
messy as it's not bound to `Expression.resolved`. Changing the rule order does
fix this s
LuciferYang opened a new pull request, #36616:
URL: https://github.com/apache/spark/pull/36616
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How was thi
HyukjinKwon commented on PR #36486:
URL: https://github.com/apache/spark/pull/36486#issuecomment-1132476982
I haven't taken a close look but seems fine from a cursory look. Should be
good to go.
zhengruifeng commented on PR #36486:
URL: https://github.com/apache/spark/pull/36486#issuecomment-1132454863
cc @HyukjinKwon @xinrong-databricks @itholic would you mind taking a look
when you have some time? Thanks.
manuzhang commented on PR #36615:
URL: https://github.com/apache/spark/pull/36615#issuecomment-1132451324
cc @gengliangwang @cloud-fan @turboFei
manuzhang opened a new pull request, #36615:
URL: https://github.com/apache/spark/pull/36615
### What changes were proposed in this pull request?
When analyzing, apply WidenSetOperationTypes after other rules.
### Why are the changes needed?
The following SQL returns 1.00 whi
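The reproducing query is cut off by the digest. As a hedged, generic illustration of what the `WidenSetOperationTypes` rule does (widening both branches of a set operation to a common type), not necessarily the PR's failing case:
```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch assuming a local session; it only shows the general widening
// behaviour, not the specific bug this PR addresses.
val spark = SparkSession.builder().master("local[1]").appName("widen-demo").getOrCreate()

// INT unioned with DECIMAL(3,2) is widened to a common decimal type,
// so the integer literal 1 comes back rendered as 1.00.
spark.sql("SELECT 1 AS a UNION ALL SELECT CAST(1.00 AS DECIMAL(3,2)) AS a").show()
```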
HyukjinKwon commented on PR #36589:
URL: https://github.com/apache/spark/pull/36589#issuecomment-1132433602
Merged to master and branch-3.3.
HyukjinKwon closed pull request #36589: [SPARK-39218][SS][PYTHON] Make
foreachBatch streaming query stop gracefully
URL: https://github.com/apache/spark/pull/36589
gengliangwang commented on PR #36614:
URL: https://github.com/apache/spark/pull/36614#issuecomment-1132430487
cc @tanvn as well. Thanks for pointing it out!
gengliangwang opened a new pull request, #36614:
URL: https://github.com/apache/spark/pull/36614
### What changes were proposed in this pull request?
1. Remove the Experimental notation in ANSI SQL compliance doc
2. Update the description of `spark.sql.ansi.enabled`, since t
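For reference, the two configurations the doc covers can be set as below; a minimal sketch, and the noted defaults are the documented Spark 3.x defaults, not values changed by this PR:
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[1]").getOrCreate()

// ANSI mode: stricter, SQL-standard behaviour (e.g. arithmetic overflow and
// invalid casts raise errors instead of returning null). Default: false.
spark.conf.set("spark.sql.ansi.enabled", "true")

// Store assignment policy applied when inserting into a table whose column
// types differ from the query output. Default: ANSI (others: LEGACY, STRICT).
spark.conf.set("spark.sql.storeAssignmentPolicy", "ANSI")
```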
dongjoon-hyun commented on PR #36358:
URL: https://github.com/apache/spark/pull/36358#issuecomment-1132427215
Thank you so much, @zwangsheng .
amaliujia commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877706031
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -367,24 +377,40 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
amaliujia commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877705884
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -97,8 +97,18 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
zwangsheng closed pull request #36358: [SPARK-39023] [K8s] Add Executor Pod
inter-pod anti-affinity
URL: https://github.com/apache/spark/pull/36358
zwangsheng commented on PR #36358:
URL: https://github.com/apache/spark/pull/36358#issuecomment-1132423502
> Hi, @zwangsheng . Thank you for making a PR.
However, the Apache Spark community wants to avoid feature duplication like
this.
The proposed feature is already delivered to many pro
Ngone51 commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r877700761
##
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:
##
@@ -769,6 +785,25 @@ private[spark] class TaskSetManager(
}
}
+ def setTaskRecordsA
beliefer commented on PR #36608:
URL: https://github.com/apache/spark/pull/36608#issuecomment-1132419433
ping @cloud-fan
gengliangwang commented on PR #27590:
URL: https://github.com/apache/spark/pull/27590#issuecomment-1132410903
@tanvn nice catch!
@cloud-fan Yes I will update the docs on 3.2 and above
HyukjinKwon commented on code in PR #36599:
URL: https://github.com/apache/spark/pull/36599#discussion_r877689895
##
python/pyspark/pandas/series.py:
##
@@ -6239,13 +6239,19 @@ def argsort(self) -> "Series":
ps.concat([psser, self.loc[self.isnull()].spark.transform(
cloud-fan commented on PR #27590:
URL: https://github.com/apache/spark/pull/27590#issuecomment-1132399814
I think we can remove the experimental mark now. What do you think?
@gengliangwang
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877685094
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -367,24 +377,40 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877684804
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -367,24 +377,40 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877684610
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -367,24 +377,40 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
ulysses-you commented on PR #34785:
URL: https://github.com/apache/spark/pull/34785#issuecomment-1132397474
Looks correct to me. BTW, since Spark 3.3, `RebalancePartitions` supports
specifying the initialNumPartition, so the demo code can be:
```scala
val optNumPartitions = if (numPartiti
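  // (The demo above is cut off by the digest; what follows is a hedged
  // reconstruction of what such a snippet could look like. `clustering`,
  // `query` and `numPartitions` are assumed names, and the Spark 3.3
  // constructor is assumed to accept the optional partition count as its
  // third argument, as the comment suggests.)
  import org.apache.spark.sql.catalyst.expressions.Expression
  import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, RebalancePartitions}

  def rebalance(clustering: Seq[Expression], query: LogicalPlan, numPartitions: Int): LogicalPlan = {
    // A non-positive value means "let AQE pick the number of partitions".
    val optNumPartitions = if (numPartitions > 0) Some(numPartitions) else None
    RebalancePartitions(clustering, query, optNumPartitions)
  }
  ```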
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877684350
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -367,24 +377,40 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877684235
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -97,8 +97,18 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877682958
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -97,8 +97,18 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
Yikun commented on code in PR #36599:
URL: https://github.com/apache/spark/pull/36599#discussion_r877673383
##
python/pyspark/pandas/series.py:
##
@@ -6239,13 +6239,19 @@ def argsort(self) -> "Series":
ps.concat([psser, self.loc[self.isnull()].spark.transform(lambda
Yikun commented on code in PR #36599:
URL: https://github.com/apache/spark/pull/36599#discussion_r877669800
##
python/pyspark/pandas/series.py:
##
@@ -6239,13 +6239,19 @@ def argsort(self) -> "Series":
ps.concat([psser, self.loc[self.isnull()].spark.transform(lambda
LuciferYang commented on code in PR #36611:
URL: https://github.com/apache/spark/pull/36611#discussion_r877674754
##
core/src/main/scala/org/apache/spark/util/Utils.scala:
##
@@ -308,28 +308,7 @@ private[spark] object Utils extends Logging {
* newly created, and is not marke
LuciferYang commented on code in PR #36611:
URL: https://github.com/apache/spark/pull/36611#discussion_r877674586
##
core/src/main/scala/org/apache/spark/util/Utils.scala:
##
@@ -339,9 +318,7 @@ private[spark] object Utils extends Logging {
def createTempDir(
root: Str
yaooqinn commented on PR #36592:
URL: https://github.com/apache/spark/pull/36592#issuecomment-1132373620
thanks, merged to master
yaooqinn closed pull request #36592: [SPARK-39221][SQL] Make sensitive
information be redacted correctly for thrift server job/stage tab
URL: https://github.com/apache/spark/pull/36592
LuciferYang commented on PR #36611:
URL: https://github.com/apache/spark/pull/36611#issuecomment-1132373369
> Yeah, I think we'd better fix `Utils.createTempDir`.
Yeah ~ now this PR only changes one file and achieves the goal.
huaxingao commented on PR #34785:
URL: https://github.com/apache/spark/pull/34785#issuecomment-1132364552
cc @cloud-fan
HyukjinKwon commented on code in PR #36599:
URL: https://github.com/apache/spark/pull/36599#discussion_r877664483
##
python/pyspark/pandas/series.py:
##
@@ -6239,13 +6239,19 @@ def argsort(self) -> "Series":
ps.concat([psser, self.loc[self.isnull()].spark.transform(
huaxingao commented on PR #34785:
URL: https://github.com/apache/spark/pull/34785#issuecomment-1132363307
Thanks @aokolnychyi for the proposal. I agree that we should support both
strictly required distribution and best effort distribution. For best effort
distribution, if user doesn't requ
beliefer commented on code in PR #36330:
URL: https://github.com/apache/spark/pull/36330#discussion_r877653817
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/util/V2ExpressionSQLBuilder.java:
##
@@ -228,4 +244,18 @@ protected String visitSQLFunction(String funcName
HyukjinKwon commented on code in PR #36589:
URL: https://github.com/apache/spark/pull/36589#discussion_r877653141
##
python/pyspark/sql/tests/test_streaming.py:
##
@@ -592,6 +592,18 @@ def collectBatch(df, id):
if q:
q.stop()
+def test_streami
zsxwing commented on code in PR #36589:
URL: https://github.com/apache/spark/pull/36589#discussion_r877648746
##
python/pyspark/sql/tests/test_streaming.py:
##
@@ -592,6 +592,18 @@ def collectBatch(df, id):
if q:
q.stop()
+def test_streaming_f
HyukjinKwon commented on code in PR #36589:
URL: https://github.com/apache/spark/pull/36589#discussion_r877647489
##
python/pyspark/sql/tests/test_streaming.py:
##
@@ -592,6 +592,18 @@ def collectBatch(df, id):
if q:
q.stop()
+def test_streami
zsxwing commented on code in PR #36589:
URL: https://github.com/apache/spark/pull/36589#discussion_r877642301
##
python/pyspark/sql/tests/test_streaming.py:
##
@@ -592,6 +592,18 @@ def collectBatch(df, id):
if q:
q.stop()
+def test_streaming_f
HyukjinKwon commented on PR #36611:
URL: https://github.com/apache/spark/pull/36611#issuecomment-1132340311
Yeah, I think we'd better fix `Utils.createTempDir`.
github-actions[bot] closed pull request #34785: [SPARK-37523][SQL] Support
optimize skewed partitions in Distribution and Ordering if numPartitions is not
specified
URL: https://github.com/apache/spark/pull/34785
github-actions[bot] closed pull request #35049: [SPARK-37757][BUILD] Enable
Spark test scheduled job on ARM runner
URL: https://github.com/apache/spark/pull/35049
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL ab
github-actions[bot] closed pull request #35402: [SPARK-37536][SQL] Allow for
API user to disable Shuffle on Local Mode
URL: https://github.com/apache/spark/pull/35402
github-actions[bot] commented on PR #35424:
URL: https://github.com/apache/spark/pull/35424#issuecomment-1132318749
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
dongjoon-hyun commented on PR #36004:
URL: https://github.com/apache/spark/pull/36004#issuecomment-1132318738
Thank you, @eejbyfeldt , @cloud-fan , @srowen !
cc @MaxGekk
srowen commented on PR #36004:
URL: https://github.com/apache/spark/pull/36004#issuecomment-1132316150
Merged to master/3.3
srowen closed pull request #36004: [SPARK-38681][SQL] Support nested generic
case classes
URL: https://github.com/apache/spark/pull/36004
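For illustration, the kind of shape the PR title refers to is a generic case class nested inside another generic case class; a hedged sketch based on the title only, with made-up class names:
```scala
import org.apache.spark.sql.SparkSession

// Hypothetical classes for illustration; not taken from the PR.
case class Inner[T](value: T)
case class Outer[T](inner: Inner[T]) // a generic case class nested in another

val spark = SparkSession.builder().master("local[1]").getOrCreate()
import spark.implicits._

// Deriving an encoder for Outer[Int] requires resolving the type parameter
// through the nested Inner[T]; supporting this is what the PR title describes.
val ds = Seq(Outer(Inner(42))).toDS()
ds.show()
```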
dongjoon-hyun commented on PR #36004:
URL: https://github.com/apache/spark/pull/36004#issuecomment-1132295581
Thank you, @eejbyfeldt .
cc @srowen
hai-tao-1 commented on PR #36606:
URL: https://github.com/apache/spark/pull/36606#issuecomment-1132271280
The PR test fails with ```[error] spark-core: Failed binary compatibility
check against org.apache.spark:spark-core_2.12:3.2.0! Found 9 potential
problems (filtered 924)```. Anyone coul
dongjoon-hyun commented on PR #36597:
URL: https://github.com/apache/spark/pull/36597#issuecomment-1132159846
Merged to master. I added you to the Apache Spark contributor group and
assigned SPARK-39225 to you, @hai-tao-1 .
Welcome to the Apache Spark community.
dongjoon-hyun closed pull request #36597: [SPARK-39225][CORE] Support
`spark.history.fs.update.batchSize`
URL: https://github.com/apache/spark/pull/36597
amaliujia commented on PR #36586:
URL: https://github.com/apache/spark/pull/36586#issuecomment-1132121981
R: @cloud-fan this PR is ready for review.
huaxingao opened a new pull request, #34785:
URL: https://github.com/apache/spark/pull/34785
### What changes were proposed in this pull request?
Support optimize skewed partitions in Distribution and Ordering if
numPartitions is not specified
### Why are the changes needed?
otterc commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877366840
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1885,6 +1885,14 @@ private[spark] class DAGScheduler(
mapOutputTracker.
aokolnychyi commented on PR #34785:
URL: https://github.com/apache/spark/pull/34785#issuecomment-1132014116
Thanks for the PR, @huaxingao. I think it is a great feature and it would be
awesome to get it done.
I spent some time thinking about this and have a few questions/proposals.
mridulm commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877361726
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1885,6 +1885,14 @@ private[spark] class DAGScheduler(
mapOutputTracker.
MaxGekk commented on PR #36603:
URL: https://github.com/apache/spark/pull/36603#issuecomment-1132003245
@panbingkun Could you backport this to branch-3.3, please.
MaxGekk closed pull request #36603: [SPARK-39163][SQL] Throw an exception w/
error class for an invalid bucket file
URL: https://github.com/apache/spark/pull/36603
otterc commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877340985
##
core/src/test/scala/org/apache/spark/storage/ShuffleBlockFetcherIteratorSuite.scala:
##
@@ -1786,4 +1786,32 @@ class ShuffleBlockFetcherIteratorSuite extends
SparkFun
hai-tao-1 commented on PR #36597:
URL: https://github.com/apache/spark/pull/36597#issuecomment-1131988355
> Thank you for the updates, @hai-tao-1 . Yes, the only remaining comment is the
test case.
>
> > We need a test case for the configuration. Please check the corner cases
especially.
otterc commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877329983
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1885,6 +1885,14 @@ private[spark] class DAGScheduler(
mapOutputTracker.
LuciferYang commented on PR #36611:
URL: https://github.com/apache/spark/pull/36611#issuecomment-1131979146
It seems that this change is big. Another way to keep a single `createTempDir`
is to let `Utils.createTempDir` call `JavaUtils.createTempDir`. Is this
acceptable?
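A minimal sketch of the delegation the comment proposes, assuming `JavaUtils` exposes (or would expose) a `createTempDir(root, namePrefix)` helper as the comment implies; the actual signatures in the PR may differ:
```scala
import java.io.File
import org.apache.spark.network.util.JavaUtils
import org.apache.spark.util.ShutdownHookManager

// Keep a single implementation: the Scala-side helper delegates directory
// creation to the Java utility and only adds the shutdown-hook cleanup.
def createTempDir(
    root: String = System.getProperty("java.io.tmpdir"),
    namePrefix: String = "spark"): File = {
  val dir = JavaUtils.createTempDir(root, namePrefix) // assumed helper, per the comment
  ShutdownHookManager.registerShutdownDeleteDir(dir)
  dir
}
```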
akpatnam25 commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877316473
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1885,6 +1885,14 @@ private[spark] class DAGScheduler(
mapOutputTracker.
akpatnam25 commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877316296
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1885,6 +1885,14 @@ private[spark] class DAGScheduler(
mapOutputTracker.
nkronenfeld opened a new pull request, #36613:
URL: https://github.com/apache/spark/pull/36613
### What changes were proposed in this pull request?
This PR simply adds typed select methods to Dataset up to the max Tuple size
of 22.
This has been bugging me for years, so I final
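For context, a hedged sketch of what a typed `select` gives you compared with the untyped one; the column names and case class are made up, and Spark already offers typed `select` for small arities, which the PR extends up to 22 columns:
```scala
import org.apache.spark.sql.{Dataset, SparkSession}

case class Person(name: String, age: Int, city: String)

val spark = SparkSession.builder().master("local[1]").getOrCreate()
import spark.implicits._

val people = Seq(Person("Ada", 36, "London")).toDS()

// Untyped select loses the element type: the result is a DataFrame.
val untyped = people.select($"name", $"age")

// Typed select keeps it: Dataset[(String, Int)].
val typed: Dataset[(String, Int)] = people.select($"name".as[String], $"age".as[Int])
```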
vli-databricks commented on PR #36584:
URL: https://github.com/apache/spark/pull/36584#issuecomment-1131948634
Yes, the purpose is ease of migration; I removed the change to `functions.scala`
to limit the scope to Spark SQL only.
MaxGekk commented on PR #36584:
URL: https://github.com/apache/spark/pull/36584#issuecomment-1131939361
How about adding the function to other APIs like `first()` in
- PySpark:
https://github.com/apache/spark/blob/b63674ea5f746306a96ab8c39c23a230a6cb9566/sql/core/src/main/scala/org/apache/s
mridulm commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877275914
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1885,6 +1885,14 @@ private[spark] class DAGScheduler(
mapOutputTracker.
mridulm commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877273610
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1885,6 +1885,14 @@ private[spark] class DAGScheduler(
mapOutputTracker.
MaxGekk commented on code in PR #36580:
URL: https://github.com/apache/spark/pull/36580#discussion_r877269334
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala:
##
@@ -1971,4 +1971,10 @@ object QueryExecutionErrors extends QueryErrorsBase {
MaxGekk closed pull request #36612: [SPARK-39234][SQL] Code clean up in
SparkThrowableHelper.getMessage
URL: https://github.com/apache/spark/pull/36612
MaxGekk commented on PR #36612:
URL: https://github.com/apache/spark/pull/36612#issuecomment-1131917484
+1, LGTM. Merging to master.
Thank you, @gengliangwang and @cloud-fan for review.
vli-databricks commented on PR #36584:
URL: https://github.com/apache/spark/pull/36584#issuecomment-1131915487
@MaxGekk please review and help me merge this.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL abov
mridulm commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877251054
##
core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala:
##
@@ -4342,6 +4342,56 @@ class DAGSchedulerSuite extends SparkFunSuite with
TempLocalSparkCo
tanvn commented on PR #27590:
URL: https://github.com/apache/spark/pull/27590#issuecomment-1131855914
@gengliangwang @dongjoon-hyun
Hi, I have a question.
In Spark 3.2.1, are `spark.sql.ansi.enabled` and
`spark.sql.storeAssignmentPolicy` still considered experimental options?
I
dongjoon-hyun commented on PR #36377:
URL: https://github.com/apache/spark/pull/36377#issuecomment-1131851884
Thank you for the decision to revert, @cloud-fan and @AngersZh.
cloud-fan closed pull request #35850: [SPARK-38529][SQL] Prevent
GeneratorNestedColumnAliasing to be applied to non-Explode generators
URL: https://github.com/apache/spark/pull/35850
cloud-fan commented on PR #35850:
URL: https://github.com/apache/spark/pull/35850#issuecomment-1131827928
thanks, merging to master/3.3!
cloud-fan commented on code in PR #35850:
URL: https://github.com/apache/spark/pull/35850#discussion_r877169574
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/NestedColumnAliasing.scala:
##
@@ -321,6 +321,38 @@ object GeneratorNestedColumnAliasing {
cloud-fan commented on code in PR #36295:
URL: https://github.com/apache/spark/pull/36295#discussion_r877141680
##
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCV2Suite.scala:
##
@@ -203,6 +204,245 @@ class JDBCV2Suite extends QueryTest with
SharedSparkSession with Expl
cloud-fan commented on code in PR #36593:
URL: https://github.com/apache/spark/pull/36593#discussion_r877123725
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/jdbc/JDBCCatalog.scala:
##
@@ -32,11 +35,14 @@ import org.apache.spark.sql.jdbc.{JdbcDialect,
cloud-fan commented on code in PR #36593:
URL: https://github.com/apache/spark/pull/36593#discussion_r877113355
##
sql/core/src/main/scala/org/apache/spark/sql/catalyst/util/V2ExpressionBuilder.scala:
##
@@ -201,6 +203,14 @@ class V2ExpressionBuilder(
None
}
cloud-fan commented on code in PR #36593:
URL: https://github.com/apache/spark/pull/36593#discussion_r877115137
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala:
##
@@ -744,6 +744,14 @@ object DataSourceStrategy
PushableColu