amaliujia commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877754283
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -97,8 +97,18 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
LuciferYang commented on PR #36616:
URL: https://github.com/apache/spark/pull/36616#issuecomment-1132495258
This PR mainly focuses on `Parquet`. If this is acceptable, I will change ORC in another PR.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
cloud-fan commented on code in PR #36608:
URL: https://github.com/apache/spark/pull/36608#discussion_r877745569
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Covariance.scala:
##
@@ -69,7 +69,7 @@ abstract class Covariance(val left:
cloud-fan commented on code in PR #36614:
URL: https://github.com/apache/spark/pull/36614#discussion_r877742782
##
docs/sql-ref-ansi-compliance.md:
##
@@ -28,10 +28,10 @@ The casting behaviours are defined as store assignment
rules in the standard.
When
cloud-fan commented on code in PR #36614:
URL: https://github.com/apache/spark/pull/36614#discussion_r877742454
##
docs/sql-ref-ansi-compliance.md:
##
@@ -28,10 +28,10 @@ The casting behaviours are defined as store assignment
rules in the standard.
When
LuciferYang commented on PR #36616:
URL: https://github.com/apache/spark/pull/36616#issuecomment-1132482921
will update pr description later
cloud-fan commented on PR #36615:
URL: https://github.com/apache/spark/pull/36615#issuecomment-1132481952
Good catch!
This is a long-standing issue. The type coercion for decimal types is really messy, as it's not bound to `Expression.resolved`. Changing the rule order does fix this.
LuciferYang opened a new pull request, #36616:
URL: https://github.com/apache/spark/pull/36616
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How was
HyukjinKwon commented on PR #36486:
URL: https://github.com/apache/spark/pull/36486#issuecomment-1132476982
I haven't taken a close look but seems fine from a cursory look. Should be
good to go.
zhengruifeng commented on PR #36486:
URL: https://github.com/apache/spark/pull/36486#issuecomment-1132454863
cc @HyukjinKwon @xinrong-databricks @itholic would you mind taking a look
when you have some time, thanks
manuzhang commented on PR #36615:
URL: https://github.com/apache/spark/pull/36615#issuecomment-1132451324
cc @gengliangwang @cloud-fan @turboFei
manuzhang opened a new pull request, #36615:
URL: https://github.com/apache/spark/pull/36615
### What changes were proposed in this pull request?
When analyzing, apply WidenSetOperationTypes after other rules.
### Why are the changes needed?
The following SQL returns 1.00
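The repro SQL itself is truncated in this excerpt. As a hedged, hypothetical illustration only (not the PR's actual repro), the kind of decimal widening that `WidenSetOperationTypes` performs can be seen in a union of decimals with different scales:

```scala
// Hypothetical illustration, assuming a running SparkSession `spark`.
// In a UNION, DECIMAL(3,1) and DECIMAL(5,2) are widened to the common
// type DECIMAL(5,2), so the literal 1.0 can be displayed as 1.00.
spark.sql(
  """SELECT CAST(1.0 AS DECIMAL(3, 1)) AS v
    |UNION
    |SELECT CAST(1.00 AS DECIMAL(5, 2)) AS v""".stripMargin
).show()
```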
HyukjinKwon commented on PR #36589:
URL: https://github.com/apache/spark/pull/36589#issuecomment-1132433602
Merged to master and branch-3.3.
HyukjinKwon closed pull request #36589: [SPARK-39218][SS][PYTHON] Make
foreachBatch streaming query stop gracefully
URL: https://github.com/apache/spark/pull/36589
gengliangwang commented on PR #36614:
URL: https://github.com/apache/spark/pull/36614#issuecomment-1132430487
cc @tanvn as well. Thanks for pointing it out!
gengliangwang opened a new pull request, #36614:
URL: https://github.com/apache/spark/pull/36614
### What changes were proposed in this pull request?
1. Remove the Experimental notation in ANSI SQL compliance doc
2. Update the description of `spark.sql.ansi.enabled`, since
dongjoon-hyun commented on PR #36358:
URL: https://github.com/apache/spark/pull/36358#issuecomment-1132427215
Thank you so much, @zwangsheng .
amaliujia commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877706031
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -367,24 +377,40 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
amaliujia commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877705884
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -97,8 +97,18 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
zwangsheng closed pull request #36358: [SPARK-39023] [K8s] Add Executor Pod
inter-pod anti-affinity
URL: https://github.com/apache/spark/pull/36358
zwangsheng commented on PR #36358:
URL: https://github.com/apache/spark/pull/36358#issuecomment-1132423502
> Hi, @zwangsheng . Thank you for making a PR.
However, Apache Spark community wants to avoid feature duplications like
this.
The proposed feature is already delivered to many
Ngone51 commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r877700761
##
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:
##
@@ -769,6 +785,25 @@ private[spark] class TaskSetManager(
}
}
+ def
beliefer commented on PR #36608:
URL: https://github.com/apache/spark/pull/36608#issuecomment-1132419433
ping @cloud-fan
gengliangwang commented on PR #27590:
URL: https://github.com/apache/spark/pull/27590#issuecomment-1132410903
@tanvn nice catch!
@cloud-fan Yes I will update the docs on 3.2 and above
HyukjinKwon commented on code in PR #36599:
URL: https://github.com/apache/spark/pull/36599#discussion_r877689895
##
python/pyspark/pandas/series.py:
##
@@ -6239,13 +6239,19 @@ def argsort(self) -> "Series":
ps.concat([psser,
cloud-fan commented on PR #27590:
URL: https://github.com/apache/spark/pull/27590#issuecomment-1132399814
I think we can remove the experimental mark now. What do you think?
@gengliangwang
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877685094
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -367,24 +377,40 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877684804
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -367,24 +377,40 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877684610
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -367,24 +377,40 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
ulysses-you commented on PR #34785:
URL: https://github.com/apache/spark/pull/34785#issuecomment-1132397474
Looks correct to me. BTW, after Spark 3.3, `RebalancePartitions` supports
specifying the `initialNumPartition`, so the demo code can be:
```scala
val optNumPartitions = if
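```

The snippet above is cut off mid-expression, and the original code is not recoverable from this excerpt. A hedged sketch of the shape such code might take (the name `userSpecifiedNumPartitions` is hypothetical):

```scala
// Hedged sketch only; `userSpecifiedNumPartitions` is a hypothetical name.
// With no user-specified value, None lets AQE choose the partition number.
val optNumPartitions: Option[Int] =
  if (userSpecifiedNumPartitions > 0) Some(userSpecifiedNumPartitions)
  else None
```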
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877684350
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -367,24 +377,40 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877684235
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -97,8 +97,18 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r877682958
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -97,8 +97,18 @@ class CatalogImpl(sparkSession: SparkSession) extends
Catalog {
Yikun commented on code in PR #36599:
URL: https://github.com/apache/spark/pull/36599#discussion_r877673383
##
python/pyspark/pandas/series.py:
##
@@ -6239,13 +6239,19 @@ def argsort(self) -> "Series":
ps.concat([psser,
Yikun commented on code in PR #36599:
URL: https://github.com/apache/spark/pull/36599#discussion_r877669800
##
python/pyspark/pandas/series.py:
##
@@ -6239,13 +6239,19 @@ def argsort(self) -> "Series":
ps.concat([psser,
LuciferYang commented on code in PR #36611:
URL: https://github.com/apache/spark/pull/36611#discussion_r877674754
##
core/src/main/scala/org/apache/spark/util/Utils.scala:
##
@@ -308,28 +308,7 @@ private[spark] object Utils extends Logging {
* newly created, and is not
LuciferYang commented on code in PR #36611:
URL: https://github.com/apache/spark/pull/36611#discussion_r877674586
##
core/src/main/scala/org/apache/spark/util/Utils.scala:
##
@@ -339,9 +318,7 @@ private[spark] object Utils extends Logging {
def createTempDir(
root:
yaooqinn commented on PR #36592:
URL: https://github.com/apache/spark/pull/36592#issuecomment-1132373620
thanks, merged to master
yaooqinn closed pull request #36592: [SPARK-39221][SQL] Make sensitive
information be redacted correctly for thrift server job/stage tab
URL: https://github.com/apache/spark/pull/36592
LuciferYang commented on PR #36611:
URL: https://github.com/apache/spark/pull/36611#issuecomment-1132373369
> Yeah, I think we should better fix `Utils.createTempDir`.
Yeah ~ now this PR only changes one file and achieves the goal
huaxingao commented on PR #34785:
URL: https://github.com/apache/spark/pull/34785#issuecomment-1132364552
cc @cloud-fan
HyukjinKwon commented on code in PR #36599:
URL: https://github.com/apache/spark/pull/36599#discussion_r877664483
##
python/pyspark/pandas/series.py:
##
@@ -6239,13 +6239,19 @@ def argsort(self) -> "Series":
ps.concat([psser,
huaxingao commented on PR #34785:
URL: https://github.com/apache/spark/pull/34785#issuecomment-1132363307
Thanks @aokolnychyi for the proposal. I agree that we should support both
strictly required distribution and best effort distribution. For best effort
distribution, if user doesn't
beliefer commented on code in PR #36330:
URL: https://github.com/apache/spark/pull/36330#discussion_r877653817
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/util/V2ExpressionSQLBuilder.java:
##
@@ -228,4 +244,18 @@ protected String visitSQLFunction(String
HyukjinKwon commented on code in PR #36589:
URL: https://github.com/apache/spark/pull/36589#discussion_r877653141
##
python/pyspark/sql/tests/test_streaming.py:
##
@@ -592,6 +592,18 @@ def collectBatch(df, id):
if q:
q.stop()
+def
zsxwing commented on code in PR #36589:
URL: https://github.com/apache/spark/pull/36589#discussion_r877648746
##
python/pyspark/sql/tests/test_streaming.py:
##
@@ -592,6 +592,18 @@ def collectBatch(df, id):
if q:
q.stop()
+def
HyukjinKwon commented on code in PR #36589:
URL: https://github.com/apache/spark/pull/36589#discussion_r877647489
##
python/pyspark/sql/tests/test_streaming.py:
##
@@ -592,6 +592,18 @@ def collectBatch(df, id):
if q:
q.stop()
+def
zsxwing commented on code in PR #36589:
URL: https://github.com/apache/spark/pull/36589#discussion_r877642301
##
python/pyspark/sql/tests/test_streaming.py:
##
@@ -592,6 +592,18 @@ def collectBatch(df, id):
if q:
q.stop()
+def
HyukjinKwon commented on PR #36611:
URL: https://github.com/apache/spark/pull/36611#issuecomment-1132340311
Yeah, I think we should better fix `Utils.createTempDir`.
github-actions[bot] closed pull request #34785: [SPARK-37523][SQL] Support
optimize skewed partitions in Distribution and Ordering if numPartitions is not
specified
URL: https://github.com/apache/spark/pull/34785
github-actions[bot] closed pull request #35049: [SPARK-37757][BUILD] Enable
Spark test scheduled job on ARM runner
URL: https://github.com/apache/spark/pull/35049
github-actions[bot] closed pull request #35402: [SPARK-37536][SQL] Allow for
API user to disable Shuffle on Local Mode
URL: https://github.com/apache/spark/pull/35402
github-actions[bot] commented on PR #35424:
URL: https://github.com/apache/spark/pull/35424#issuecomment-1132318749
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
dongjoon-hyun commented on PR #36004:
URL: https://github.com/apache/spark/pull/36004#issuecomment-1132318738
Thank you, @eejbyfeldt , @cloud-fan , @srowen !
cc @MaxGekk
srowen commented on PR #36004:
URL: https://github.com/apache/spark/pull/36004#issuecomment-1132316150
Merged to master/3.3
srowen closed pull request #36004: [SPARK-38681][SQL] Support nested generic
case classes
URL: https://github.com/apache/spark/pull/36004
dongjoon-hyun commented on PR #36004:
URL: https://github.com/apache/spark/pull/36004#issuecomment-1132295581
Thank you, @eejbyfeldt .
cc @srowen
hai-tao-1 commented on PR #36606:
URL: https://github.com/apache/spark/pull/36606#issuecomment-1132271280
The PR test fails with ```[error] spark-core: Failed binary compatibility
check against org.apache.spark:spark-core_2.12:3.2.0! Found 9 potential
problems (filtered 924)```. Anyone
dongjoon-hyun commented on PR #36597:
URL: https://github.com/apache/spark/pull/36597#issuecomment-1132159846
Merged to master. I added you to the Apache Spark contributor group and
assigned SPARK-39225 to you, @hai-tao-1 .
Welcome to the Apache Spark community.
dongjoon-hyun closed pull request #36597: [SPARK-39225][CORE] Support
`spark.history.fs.update.batchSize`
URL: https://github.com/apache/spark/pull/36597
amaliujia commented on PR #36586:
URL: https://github.com/apache/spark/pull/36586#issuecomment-1132121981
R: @cloud-fan this PR is ready to review.
huaxingao opened a new pull request, #34785:
URL: https://github.com/apache/spark/pull/34785
### What changes were proposed in this pull request?
Support optimize skewed partitions in Distribution and Ordering if
numPartitions is not specified
### Why are the changes needed?
otterc commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877366840
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1885,6 +1885,14 @@ private[spark] class DAGScheduler(
mapOutputTracker.
aokolnychyi commented on PR #34785:
URL: https://github.com/apache/spark/pull/34785#issuecomment-1132014116
Thanks for the PR, @huaxingao. I think it is a great feature and it would be
awesome to get it done.
I spent some time thinking about this and have a few questions/proposals.
mridulm commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877361726
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1885,6 +1885,14 @@ private[spark] class DAGScheduler(
mapOutputTracker.
MaxGekk commented on PR #36603:
URL: https://github.com/apache/spark/pull/36603#issuecomment-1132003245
@panbingkun Could you backport this to branch-3.3, please.
MaxGekk closed pull request #36603: [SPARK-39163][SQL] Throw an exception w/
error class for an invalid bucket file
URL: https://github.com/apache/spark/pull/36603
otterc commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877340985
##
core/src/test/scala/org/apache/spark/storage/ShuffleBlockFetcherIteratorSuite.scala:
##
@@ -1786,4 +1786,32 @@ class ShuffleBlockFetcherIteratorSuite extends
hai-tao-1 commented on PR #36597:
URL: https://github.com/apache/spark/pull/36597#issuecomment-1131988355
> Thank you for updates, @hai-tao-1 . Yes, the only remaining comment is the
test case.
>
> > We need a test case for the configuration. Please check the corner cases
otterc commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877329983
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1885,6 +1885,14 @@ private[spark] class DAGScheduler(
mapOutputTracker.
LuciferYang commented on PR #36611:
URL: https://github.com/apache/spark/pull/36611#issuecomment-1131979146
It seems that this change is big. Another way to keep one `createTempDir`
is to let `Utils.createTempDir` call `JavaUtils.createTempDir`. Is this
acceptable?
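A hedged sketch of the delegation idea being discussed, not the actual patch; it assumes `JavaUtils.createTempDir(root, namePrefix)` exists in `org.apache.spark.network.util`, as in the Spark codebase:

```scala
import java.io.File
import org.apache.spark.network.util.JavaUtils
import org.apache.spark.util.ShutdownHookManager

// Scala-side createTempDir delegating to JavaUtils, so only one
// implementation remains; shutdown-hook cleanup stays on the Scala side.
def createTempDir(
    root: String = System.getProperty("java.io.tmpdir"),
    namePrefix: String = "spark"): File = {
  val dir = JavaUtils.createTempDir(root, namePrefix)
  ShutdownHookManager.registerShutdownDeleteDir(dir)
  dir
}
```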
akpatnam25 commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877316473
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1885,6 +1885,14 @@ private[spark] class DAGScheduler(
mapOutputTracker.
akpatnam25 commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877316296
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1885,6 +1885,14 @@ private[spark] class DAGScheduler(
mapOutputTracker.
nkronenfeld opened a new pull request, #36613:
URL: https://github.com/apache/spark/pull/36613
### What changes were proposed in this pull request?
This PR simply adds typed select methods to Dataset up to the max Tuple size
of 22.
This has been bugging me for years, so I
vli-databricks commented on PR #36584:
URL: https://github.com/apache/spark/pull/36584#issuecomment-1131948634
Yes, the purpose is ease of migration; I removed the change to `functions.scala`
to limit the scope to Spark SQL only.
MaxGekk commented on PR #36584:
URL: https://github.com/apache/spark/pull/36584#issuecomment-1131939361
How about adding the function to other APIs like `first()` in
- PySpark:
mridulm commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877275914
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1885,6 +1885,14 @@ private[spark] class DAGScheduler(
mapOutputTracker.
mridulm commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877273610
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1885,6 +1885,14 @@ private[spark] class DAGScheduler(
mapOutputTracker.
MaxGekk commented on code in PR #36580:
URL: https://github.com/apache/spark/pull/36580#discussion_r877269334
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala:
##
@@ -1971,4 +1971,10 @@ object QueryExecutionErrors extends QueryErrorsBase {
MaxGekk closed pull request #36612: [SPARK-39234][SQL] Code clean up in
SparkThrowableHelper.getMessage
URL: https://github.com/apache/spark/pull/36612
MaxGekk commented on PR #36612:
URL: https://github.com/apache/spark/pull/36612#issuecomment-1131917484
+1, LGTM. Merging to master.
Thank you, @gengliangwang and @cloud-fan for review.
vli-databricks commented on PR #36584:
URL: https://github.com/apache/spark/pull/36584#issuecomment-1131915487
@MaxGekk please review and help me merge this.
mridulm commented on code in PR #36601:
URL: https://github.com/apache/spark/pull/36601#discussion_r877251054
##
core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala:
##
@@ -4342,6 +4342,56 @@ class DAGSchedulerSuite extends SparkFunSuite with
tanvn commented on PR #27590:
URL: https://github.com/apache/spark/pull/27590#issuecomment-1131855914
@gengliangwang @dongjoon-hyun
Hi, I have a question.
In Spark 3.2.1, are `spark.sql.ansi.enabled` and
`spark.sql.storeAssignmentPolicy` still considered experimental options?
dongjoon-hyun commented on PR #36377:
URL: https://github.com/apache/spark/pull/36377#issuecomment-1131851884
Thank you for the reverting decision, @cloud-fan and @AngersZh .
cloud-fan closed pull request #35850: [SPARK-38529][SQL] Prevent
GeneratorNestedColumnAliasing to be applied to non-Explode generators
URL: https://github.com/apache/spark/pull/35850
cloud-fan commented on PR #35850:
URL: https://github.com/apache/spark/pull/35850#issuecomment-1131827928
thanks, merging to master/3.3!
cloud-fan commented on code in PR #35850:
URL: https://github.com/apache/spark/pull/35850#discussion_r877169574
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/NestedColumnAliasing.scala:
##
@@ -321,6 +321,38 @@ object GeneratorNestedColumnAliasing {
cloud-fan commented on code in PR #36295:
URL: https://github.com/apache/spark/pull/36295#discussion_r877141680
##
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCV2Suite.scala:
##
@@ -203,6 +204,245 @@ class JDBCV2Suite extends QueryTest with
SharedSparkSession with
cloud-fan commented on code in PR #36295:
URL: https://github.com/apache/spark/pull/36295#discussion_r877141680
##
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCV2Suite.scala:
##
@@ -203,6 +204,245 @@ class JDBCV2Suite extends QueryTest with
SharedSparkSession with
cloud-fan commented on code in PR #36593:
URL: https://github.com/apache/spark/pull/36593#discussion_r877123725
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/jdbc/JDBCCatalog.scala:
##
@@ -32,11 +35,14 @@ import org.apache.spark.sql.jdbc.{JdbcDialect,
cloud-fan commented on code in PR #36593:
URL: https://github.com/apache/spark/pull/36593#discussion_r877113355
##
sql/core/src/main/scala/org/apache/spark/sql/catalyst/util/V2ExpressionBuilder.scala:
##
@@ -201,6 +203,14 @@ class V2ExpressionBuilder(
None
}
cloud-fan commented on code in PR #36593:
URL: https://github.com/apache/spark/pull/36593#discussion_r877113355
##
sql/core/src/main/scala/org/apache/spark/sql/catalyst/util/V2ExpressionBuilder.scala:
##
@@ -201,6 +203,14 @@ class V2ExpressionBuilder(
None
}
cloud-fan commented on code in PR #36593:
URL: https://github.com/apache/spark/pull/36593#discussion_r877115137
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala:
##
@@ -744,6 +744,14 @@ object DataSourceStrategy
cloud-fan commented on code in PR #36593:
URL: https://github.com/apache/spark/pull/36593#discussion_r877113355
##
sql/core/src/main/scala/org/apache/spark/sql/catalyst/util/V2ExpressionBuilder.scala:
##
@@ -201,6 +203,14 @@ class V2ExpressionBuilder(
None
}
gengliangwang commented on PR #36475:
URL: https://github.com/apache/spark/pull/36475#issuecomment-1131731396
@dtenedor after a closer look, I think we can resolve this in a simpler way.
I made a PR on your repo: https://github.com/dtenedor/spark/pull/4
You can merge it on your repo if
AmplabJenkins commented on PR #36597:
URL: https://github.com/apache/spark/pull/36597#issuecomment-1131715634
Can one of the admins verify this patch?