LuciferYang commented on code in PR #43049:
URL: https://github.com/apache/spark/pull/43049#discussion_r1333907049
##
connector/avro/pom.xml:
##
@@ -70,12 +70,10 @@
org.apache.spark
spark-tags_${scala.binary.version}
-
Review Comment:
It seems that
xiongbo-sjtu commented on PR #43021:
URL: https://github.com/apache/spark/pull/43021#issuecomment-1730841001
@jiangxb1987 Good suggestion! I've updated the unit test accordingly.
Please merge this fix to mainline. If possible, please help patch v3.3.1
and above.
Thanks!
HyukjinKwon commented on code in PR #43049:
URL: https://github.com/apache/spark/pull/43049#discussion_r1333903491
##
pom.xml:
##
@@ -3648,13 +3646,14 @@
+
Review Comment:
Nesting a comment doesn't work, so I took it out.
dongjoon-hyun commented on PR #43048:
URL: https://github.com/apache/spark/pull/43048#issuecomment-1730837488
Could you review this, @LuciferYang ?
HyukjinKwon commented on code in PR #43049:
URL: https://github.com/apache/spark/pull/43049#discussion_r1333901878
##
dev/mima:
##
@@ -24,9 +24,9 @@ set -e
FWDIR="$(cd "`dirname "$0"`"/..; pwd)"
cd "$FWDIR"
-SPARK_PROFILES=${1:-"-Pscala-2.13 -Pmesos -Pkubernetes -Pyarn
HyukjinKwon commented on PR #43049:
URL: https://github.com/apache/spark/pull/43049#issuecomment-1730836146
Sure, I am fine
dongjoon-hyun commented on PR #43049:
URL: https://github.com/apache/spark/pull/43049#issuecomment-1730835682
@HyukjinKwon and @LuciferYang .
If the proposed removal is only 53 lines, I believe this could be a cleaner
way, and we can recover it easily.
To recover only
LuciferYang commented on code in PR #43049:
URL: https://github.com/apache/spark/pull/43049#discussion_r1333898761
##
dev/mima:
##
@@ -24,9 +24,9 @@ set -e
FWDIR="$(cd "`dirname "$0"`"/..; pwd)"
cd "$FWDIR"
-SPARK_PROFILES=${1:-"-Pscala-2.13 -Pmesos -Pkubernetes -Pyarn
itholic opened a new pull request, #43051:
URL: https://github.com/apache/spark/pull/43051
### What changes were proposed in this pull request?
Similar to https://github.com/apache/spark/pull/42955, this PR proposes to
correct the message for Spark ML only tests from Spark Connect.
dongjoon-hyun commented on code in PR #43049:
URL: https://github.com/apache/spark/pull/43049#discussion_r1333898023
##
pom.xml:
##
@@ -3648,23 +3646,6 @@
-
- scala-2.13
Review Comment:
+1
dongjoon-hyun commented on PR #43049:
URL: https://github.com/apache/spark/pull/43049#issuecomment-1730832844
BTW, it's surprising to me that the diff size is very small.
[screenshot: 2023-09-21 at 10 42 52]
LuciferYang commented on code in PR #43049:
URL: https://github.com/apache/spark/pull/43049#discussion_r1333897015
##
pom.xml:
##
@@ -3648,23 +3646,6 @@
-
- scala-2.13
Review Comment:
+1
zhengruifeng commented on PR #43045:
URL: https://github.com/apache/spark/pull/43045#issuecomment-1730831739
Please hold off for a while; we may need to rename `lambda functions`, which
sounds too "technology"-oriented.
HyukjinKwon commented on code in PR #43049:
URL: https://github.com/apache/spark/pull/43049#discussion_r1333896833
##
pom.xml:
##
@@ -3648,23 +3646,6 @@
-
- scala-2.13
Review Comment:
I will comment this out.
dongjoon-hyun commented on code in PR #43049:
URL: https://github.com/apache/spark/pull/43049#discussion_r1333896190
##
pom.xml:
##
@@ -3648,23 +3646,6 @@
-
- scala-2.13
Review Comment:
We still need this because we need a way to switch back to 2.13
zhengruifeng commented on PR #43011:
URL: https://github.com/apache/spark/pull/43011#issuecomment-1730830602
> > > What is the purpose of "lambda function"? All others are type-specific
or "functionality"-specific. But lambda is "technology".
> >
> >
> > lambda functions were
HyukjinKwon commented on code in PR #43049:
URL: https://github.com/apache/spark/pull/43049#discussion_r1333895946
##
connector/avro/pom.xml:
##
@@ -70,12 +70,10 @@
org.apache.spark
spark-tags_${scala.binary.version}
-
Review Comment:
Do we need to
dongjoon-hyun commented on code in PR #43049:
URL: https://github.com/apache/spark/pull/43049#discussion_r1333894185
##
dev/mima:
##
@@ -24,9 +24,9 @@ set -e
FWDIR="$(cd "`dirname "$0"`"/..; pwd)"
cd "$FWDIR"
-SPARK_PROFILES=${1:-"-Pscala-2.13 -Pmesos -Pkubernetes -Pyarn
LuciferYang commented on code in PR #43005:
URL: https://github.com/apache/spark/pull/43005#discussion_r1333894733
##
.github/workflows/build_and_test.yml:
##
@@ -649,11 +649,11 @@ jobs:
if [ -f ./dev/free_disk_space_container ]; then
dongjoon-hyun commented on PR #43049:
URL: https://github.com/apache/spark/pull/43049#issuecomment-1730826061
I have the same concern. I'd like to keep these because of Scala 3.
dongjoon-hyun opened a new pull request, #43050:
URL: https://github.com/apache/spark/pull/43050
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
LuciferYang commented on code in PR #43049:
URL: https://github.com/apache/spark/pull/43049#discussion_r1333889711
##
connector/avro/pom.xml:
##
@@ -70,12 +70,10 @@
org.apache.spark
spark-tags_${scala.binary.version}
-
Review Comment:
Are we sure
HyukjinKwon commented on PR #43049:
URL: https://github.com/apache/spark/pull/43049#issuecomment-1730819584
and I am intentionally using the same JIRA to make it easier to manage
together (e.g., reverting).
cc @dongjoon-hyun and @LuciferYang FYI
HyukjinKwon commented on PR #43049:
URL: https://github.com/apache/spark/pull/43049#issuecomment-1730819193
I didn't clean up `dev/lint-scala` because the Scala profile was there even
before that PR.
HyukjinKwon opened a new pull request, #43049:
URL: https://github.com/apache/spark/pull/43049
### What changes were proposed in this pull request?
This PR is a follow-up of https://github.com/apache/spark/pull/43008 that
cleans up Scala-version-specific comments, and `scala-2.13`
LuciferYang commented on code in PR #43005:
URL: https://github.com/apache/spark/pull/43005#discussion_r1333883366
##
.github/workflows/build_and_test.yml:
##
@@ -649,11 +649,11 @@ jobs:
if [ -f ./dev/free_disk_space_container ]; then
LuciferYang commented on code in PR #43005:
URL: https://github.com/apache/spark/pull/43005#discussion_r1333882096
##
docs/building-spark.md:
##
@@ -27,7 +27,7 @@ license: |
## Apache Maven
The Maven-based build is the build of reference for Apache Spark.
-Building Spark
dongjoon-hyun opened a new pull request, #43048:
URL: https://github.com/apache/spark/pull/43048
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
srielau commented on PR #43011:
URL: https://github.com/apache/spark/pull/43011#issuecomment-1730804920
> > What is the purpose of "lambda function"? All others are type-specific
or "functionality"-specific. But lambda is "technology".
>
> lambda functions were already exposed to end
LuciferYang commented on PR #43005:
URL: https://github.com/apache/spark/pull/43005#issuecomment-1730804346
I'll update this PR later.
LuciferYang commented on PR #43008:
URL: https://github.com/apache/spark/pull/43008#issuecomment-1730803156
Thanks all ~
dongjoon-hyun commented on PR #43047:
URL: https://github.com/apache/spark/pull/43047#issuecomment-1730803036
Merged to master for Apache Spark 4.0.0.
dongjoon-hyun commented on PR #43046:
URL: https://github.com/apache/spark/pull/43046#issuecomment-1730802906
Merged to master for Apache Spark 4.0.0.
LuciferYang commented on PR #43047:
URL: https://github.com/apache/spark/pull/43047#issuecomment-1730802930
Thanks @HyukjinKwon ~
dongjoon-hyun closed pull request #43046: [MINOR][PYTHON][TESTS] Add init file
to pyspark.ml.deepspeed.tests
URL: https://github.com/apache/spark/pull/43046
LuciferYang commented on code in PR #43008:
URL: https://github.com/apache/spark/pull/43008#discussion_r1333875577
##
docs/_plugins/copy_api_dirs.rb:
##
@@ -26,8 +26,8 @@
curr_dir = pwd
cd("..")
-puts "Running 'build/sbt -Pkinesis-asl clean compile unidoc' from
dongjoon-hyun closed pull request #43047: [SPARK-44113][INFRA][FOLLOW-UP]
Remove Scala 2.13 scheduled jobs
URL: https://github.com/apache/spark/pull/43047
HyukjinKwon opened a new pull request, #43047:
URL: https://github.com/apache/spark/pull/43047
### What changes were proposed in this pull request?
This PR is a followup of https://github.com/apache/spark/pull/43008 that
removes the leftover scheduled GitHub Actions build for Scala
zhengruifeng commented on PR #43045:
URL: https://github.com/apache/spark/pull/43045#issuecomment-1730792686
This PR makes categories follow expression descriptions, instead of
`functions.scala`:
one major difference is that `lambda function` is in group `collect
function` in
HyukjinKwon commented on code in PR #43008:
URL: https://github.com/apache/spark/pull/43008#discussion_r1333866152
##
docs/_plugins/copy_api_dirs.rb:
##
@@ -26,8 +26,8 @@
curr_dir = pwd
cd("..")
-puts "Running 'build/sbt -Pkinesis-asl clean compile unidoc' from
HyukjinKwon commented on PR #43011:
URL: https://github.com/apache/spark/pull/43011#issuecomment-1730788674
yeah, I mean individual `ExpressionDescription`.
zhengruifeng commented on PR #43011:
URL: https://github.com/apache/spark/pull/43011#issuecomment-1730787450
@HyukjinKwon this page is not built from `functions.scala`, but from the
groups specified in expression definitions, like
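For context, a minimal sketch of what such a group declaration looks like (a
hypothetical stub class, not an actual Spark expression; in Spark the annotation
sits on real `Expression` subclasses such as those in `collectionOperations.scala`):

```scala
import org.apache.spark.sql.catalyst.expressions.ExpressionDescription

// Hypothetical stub for illustration: the `group` field of the annotation is
// what the SQL built-in function docs use to place a function on a category page.
@ExpressionDescription(
  usage = "_FUNC_(expr) - Hypothetical usage text.",
  examples = """
    Examples:
      > SELECT _FUNC_(array(1, 2, 3));
       3
  """,
  group = "collection_funcs",
  since = "4.0.0")
class MyDocumentedExpressionStub
```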
zhengruifeng commented on PR #43045:
URL: https://github.com/apache/spark/pull/43045#issuecomment-1730781367
cc @HyukjinKwon
HyukjinKwon opened a new pull request, #43046:
URL: https://github.com/apache/spark/pull/43046
### What changes were proposed in this pull request?
This PR proposes to add `__init__.py` file to `pyspark.ml.deepspeed.tests`
### Why are the changes needed?
To make
zhengruifeng commented on PR #43045:
URL: https://github.com/apache/spark/pull/43045#issuecomment-1730775047
In `functions.py`, I think only the comments are incorrect; I guess we can
leave it alone?
dongjoon-hyun commented on PR #43005:
URL: https://github.com/apache/spark/pull/43005#issuecomment-1730773583
#43008 is merged. Shall we resume this?
dongjoon-hyun commented on PR #43008:
URL: https://github.com/apache/spark/pull/43008#issuecomment-1730771484
Merged to master. Thank you, @LuciferYang and all.
dongjoon-hyun closed pull request #43008: [SPARK-44113][BUILD][INFRA][DOCS]
Drop support for Scala 2.12
URL: https://github.com/apache/spark/pull/43008
zhengruifeng opened a new pull request, #43045:
URL: https://github.com/apache/spark/pull/43045
### What changes were proposed in this pull request?
re-org python function categories
### Why are the changes needed?
should be consistent with SQL function groups
###
HyukjinKwon commented on code in PR #43033:
URL: https://github.com/apache/spark/pull/43033#discussion_r1333849657
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -159,7 +159,7 @@ object Size {
4
""",
zhengruifeng commented on code in PR #43033:
URL: https://github.com/apache/spark/pull/43033#discussion_r1333848817
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -159,7 +159,7 @@ object Size {
4
""",
HyukjinKwon closed pull request #43031: [SPARK-45251][CONNECT] Add client_type
field for FetchErrorDetails
URL: https://github.com/apache/spark/pull/43031
HyukjinKwon commented on PR #43031:
URL: https://github.com/apache/spark/pull/43031#issuecomment-1730765889
Merged to master.
zhengruifeng commented on code in PR #43033:
URL: https://github.com/apache/spark/pull/43033#discussion_r1333848369
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala:
##
@@ -1271,7 +1271,7 @@ case class Pow(left: Expression, right:
HyukjinKwon commented on code in PR #43033:
URL: https://github.com/apache/spark/pull/43033#discussion_r1333848154
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala:
##
@@ -1271,7 +1271,7 @@ case class Pow(left: Expression, right:
HyukjinKwon closed pull request #43033: [SPARK-45253][SQL][DOCS] Correct the
group of `ShiftLeft` and `ArraySize`
URL: https://github.com/apache/spark/pull/43033
HyukjinKwon commented on PR #43033:
URL: https://github.com/apache/spark/pull/43033#issuecomment-1730764638
Merged to master.
HyukjinKwon closed pull request #42994: [SPARK-43433][PS] Match `GroupBy.nth`
behavior to the latest Pandas
URL: https://github.com/apache/spark/pull/42994
HyukjinKwon commented on PR #42994:
URL: https://github.com/apache/spark/pull/42994#issuecomment-1730762760
Merged to master.
pan3793 commented on code in PR #43008:
URL: https://github.com/apache/spark/pull/43008#discussion_r1333842930
##
dev/deps/spark-deps-hadoop-3-hive-2.3:
##
@@ -131,14 +132,16 @@ jettison/1.5.4//jettison-1.5.4.jar
itholic commented on code in PR #43024:
URL: https://github.com/apache/spark/pull/43024#discussion_r1333842788
##
dev/requirements.txt:
##
@@ -34,7 +35,9 @@ pydata_sphinx_theme
ipython
nbsphinx
numpydoc
-jinja2<3.0.0
+# Commented this because we need the latest version of
itholic closed pull request #43024: [SPARK-45246][BUILD][PS] Encourage using
latest `jinja2` other than documentation build
URL: https://github.com/apache/spark/pull/43024
LuciferYang commented on code in PR #43008:
URL: https://github.com/apache/spark/pull/43008#discussion_r1333842194
##
docs/building-spark.md:
##
@@ -28,7 +28,7 @@ license: |
The Maven-based build is the build of reference for Apache Spark.
Building Spark using Maven
LuciferYang commented on code in PR #43008:
URL: https://github.com/apache/spark/pull/43008#discussion_r1333841049
##
dev/deps/spark-deps-hadoop-3-hive-2.3:
##
@@ -131,14 +132,16 @@ jettison/1.5.4//jettison-1.5.4.jar
panbingkun opened a new pull request, #43044:
URL: https://github.com/apache/spark/pull/43044
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
pan3793 commented on code in PR #43008:
URL: https://github.com/apache/spark/pull/43008#discussion_r1333840310
##
docs/building-spark.md:
##
@@ -28,7 +28,7 @@ license: |
The Maven-based build is the build of reference for Apache Spark.
Building Spark using Maven requires
HyukjinKwon closed pull request #42949: [SPARK-45093][CONNECT][PYTHON] Error
reporting for addArtifacts query
URL: https://github.com/apache/spark/pull/42949
HyukjinKwon commented on PR #42949:
URL: https://github.com/apache/spark/pull/42949#issuecomment-1730753709
Merged to master.
pan3793 commented on code in PR #43008:
URL: https://github.com/apache/spark/pull/43008#discussion_r1333838783
##
dev/deps/spark-deps-hadoop-3-hive-2.3:
##
@@ -131,14 +132,16 @@ jettison/1.5.4//jettison-1.5.4.jar
cloud-fan commented on code in PR #42950:
URL: https://github.com/apache/spark/pull/42950#discussion_r1333838758
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1903,13 +1903,20 @@ private[spark] class DAGScheduler(
case smt:
HyukjinKwon commented on PR #43011:
URL: https://github.com/apache/spark/pull/43011#issuecomment-1730749736
Just to be clear, this documentation is generated automatically from the current
documentation. If the grouping is wrong or needs to be fixed, we should fix the
main documentation in `functions.scala`.
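As a rough sketch of what fixing the grouping there involves (a hypothetical
wrapper, not the actual `functions.scala` source): each entry's scaladoc carries
a `@group` tag, and the generated API docs categorize functions by it.

```scala
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.abs

object MyFunctions {
  /**
   * Hypothetical wrapper mirroring the documentation style of `functions.scala`:
   * the `@group` scaladoc tag is what places the function under a category
   * such as "math_funcs" in the generated API docs.
   *
   * @group math_funcs
   * @since 4.0.0
   */
  def absolute(e: Column): Column = abs(e)
}
```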
HyukjinKwon commented on code in PR #43024:
URL: https://github.com/apache/spark/pull/43024#discussion_r1333832303
##
dev/requirements.txt:
##
@@ -34,7 +35,9 @@ pydata_sphinx_theme
ipython
nbsphinx
numpydoc
-jinja2<3.0.0
+# Commented this because we need the latest version
zhengruifeng commented on PR #43033:
URL: https://github.com/apache/spark/pull/43033#issuecomment-1730742391
also cc @cloud-fan @HyukjinKwon
dongjoon-hyun closed pull request #43036: [SPARK-45257][CORE] Enable
`spark.eventLog.compress` by default
URL: https://github.com/apache/spark/pull/43036
dongjoon-hyun commented on PR #43036:
URL: https://github.com/apache/spark/pull/43036#issuecomment-1730741728
The last commit is a doc-only change and the previous commit passed `core`
module unit tests.
[screenshot: 2023-09-21 at 8 08 12]
cloud-fan closed pull request #43010: [SPARK-41086][SQL] Use DataFrame ID to
semantically validate CollectMetrics
URL: https://github.com/apache/spark/pull/43010
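For readers unfamiliar with `CollectMetrics`: it is the plan node created by
`Dataset.observe`. A minimal sketch of the user-facing API (assuming a local
session; the DataFrame-ID-based validation added by the PR happens inside the
analyzer):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().master("local[*]").appName("observe-demo").getOrCreate()

// observe() attaches a named CollectMetrics node to the plan; the metrics are
// computed as a side effect when the query runs.
val observed = spark.range(100)
  .observe("stats", count(lit(1)).as("rows"), sum("id").as("total"))
observed.collect()

spark.stop()
```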
LuciferYang commented on PR #43008:
URL: https://github.com/apache/spark/pull/43008#issuecomment-1730740750
> Could you re-trigger the failed pipeline, @LuciferYang ?
GA passed
cloud-fan commented on PR #43010:
URL: https://github.com/apache/spark/pull/43010#issuecomment-1730740595
thanks, merging to master!
zhengruifeng commented on code in PR #43010:
URL: https://github.com/apache/spark/pull/43010#discussion_r1333829755
##
python/pyspark/sql/connect/plan.py:
##
@@ -1192,6 +1192,7 @@ def plan(self, session: "SparkConnectClient") ->
proto.Relation:
assert self._child is
itholic opened a new pull request, #43043:
URL: https://github.com/apache/spark/pull/43043
### What changes were proposed in this pull request?
This PR proposes to change the default value of `numeric_only` in the related
functions.
### Why are the changes needed?
dongjoon-hyun commented on code in PR #43036:
URL: https://github.com/apache/spark/pull/43036#discussion_r1333813862
##
docs/core-migration-guide.md:
##
@@ -22,6 +22,10 @@ license: |
* Table of contents
{:toc}
+## Upgrading from Core 3.4 to 4.0
+
+- Since Spark 4.4, Spark
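A minimal sketch of restoring the previous behavior after this default change
(assuming the configuration names implied by the PR title; event logging itself
must also be enabled):

```scala
import org.apache.spark.SparkConf

// With event log compression on by default, setting the flag to false
// restores the previous (uncompressed) behavior.
val conf = new SparkConf()
  .setAppName("event-log-example")
  .set("spark.eventLog.enabled", "true")
  .set("spark.eventLog.dir", "/tmp/spark-events") // hypothetical path
  .set("spark.eventLog.compress", "false")
```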
ulysses-you commented on PR #43026:
URL: https://github.com/apache/spark/pull/43026#issuecomment-1730709993
thanks, merged to master
ulysses-you closed pull request #43026: [SPARK-45244][TESTS] Correct spelling
in VolcanoTestsSuite
URL: https://github.com/apache/spark/pull/43026
amaliujia commented on code in PR #43010:
URL: https://github.com/apache/spark/pull/43010#discussion_r1333802817
##
python/pyspark/sql/connect/plan.py:
##
@@ -1192,6 +1192,7 @@ def plan(self, session: "SparkConnectClient") ->
proto.Relation:
assert self._child is not
amaliujia commented on code in PR #43010:
URL: https://github.com/apache/spark/pull/43010#discussion_r1333800854
##
python/pyspark/sql/connect/plan.py:
##
@@ -1192,6 +1192,7 @@ def plan(self, session: "SparkConnectClient") ->
proto.Relation:
assert self._child is not
chenyu-opensource commented on PR #43028:
URL: https://github.com/apache/spark/pull/43028#issuecomment-1730704040
> No, I mean, is there any downside to just hard coding it? I don't love
exposing more configs if it's not clear when one would ever change it
@srowen I have changed it
ueshin commented on PR #43042:
URL: https://github.com/apache/spark/pull/43042#issuecomment-1730701468
cc @dtenedor @allisonwang-db
ueshin opened a new pull request, #43042:
URL: https://github.com/apache/spark/pull/43042
### What changes were proposed in this pull request?
Refactors `ResolveFunctions` analyzer rule to delay making lateral join when
table arguments are used.
- Delay making lateral join
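For orientation, a hedged sketch of the call shape this rule resolves (the UDTF
and table names are hypothetical and would need to be registered/created first):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("tvf-demo").getOrCreate()

// A table-valued function invoked with a TABLE(...) argument; per the PR,
// the analyzer now defers planning the underlying lateral join while
// resolving such calls.
spark.sql("SELECT * FROM parse_logs(TABLE(SELECT * FROM raw_logs))").show()
```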
beliefer commented on PR #41860:
URL: https://github.com/apache/spark/pull/41860#issuecomment-1730681120
@maheshk114 Could you rebase this PR?
chenyu-opensource commented on PR #43028:
URL: https://github.com/apache/spark/pull/43028#issuecomment-1730679712
> No, I mean, is there any downside to just hard coding it? I don't love
exposing more configs if it's not clear when one would ever change it
This is to prevent anyone
ulysses-you commented on PR #42967:
URL: https://github.com/apache/spark/pull/42967#issuecomment-1730679064
thanks for review, merged to master
ulysses-you closed pull request #42967: [SPARK-45191][SQL]
InMemoryTableScanExec simpleStringWithNodeId adds columnar info
URL: https://github.com/apache/spark/pull/42967
yaooqinn commented on code in PR #43036:
URL: https://github.com/apache/spark/pull/43036#discussion_r1333788232
##
docs/core-migration-guide.md:
##
@@ -22,6 +22,10 @@ license: |
* Table of contents
{:toc}
+## Upgrading from Core 3.4 to 4.0
+
+- Since Spark 4.4, Spark will
srowen commented on PR #43028:
URL: https://github.com/apache/spark/pull/43028#issuecomment-1730675157
No, I mean, is there any downside to just hard coding it? I don't love
exposing more configs if it's not clear when one would ever change it