itholic opened a new pull request, #38650:
URL: https://github.com/apache/spark/pull/38650
### What changes were proposed in this pull request?
This PR proposes to rename `UNSUPPORTED_EMPTY_LOCATION` to
`INVALID_EMPTY_LOCATION`.
### Why are the changes needed?
Error class
itholic commented on PR #38650:
URL: https://github.com/apache/spark/pull/38650#issuecomment-1313235817
cc @MaxGekk @srielau
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
HeartSaVioR commented on code in PR #38503:
URL: https://github.com/apache/spark/pull/38503#discussion_r1021172211
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/FlatMapGroupsInPandasWithStateSuite.scala:
##
@@ -240,25 +240,30 @@ class FlatMapGroupsInPandasWithStateSu
yaooqinn opened a new pull request, #37355:
URL: https://github.com/apache/spark/pull/37355
### What changes were proposed in this pull request?
This ticket intends to add query hints for cache behaviors, so users can
perform actions like use/skip/cache/uncache, etc. o
wankunde opened a new pull request, #38649:
URL: https://github.com/apache/spark/pull/38649
### What changes were proposed in this pull request?
We can optimize the query `SELECT * FROM tab WHERE trim(addr) LIKE ANY ('5000',
'5001', '5002', '5003', '5004')` to `SELECT * FROM t
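Since none of those patterns contain a wildcard (`%` or `_`), each `LIKE` arm reduces to an equality test, so the whole `LIKE ANY` list collapses to an `IN` list. A minimal Python sketch of that rewrite (hypothetical helper names, not Spark's actual optimizer rule):

```python
# Sketch of the LIKE ANY simplification: a LIKE pattern containing no
# wildcard characters ('%' or '_') matches exactly one string, so a
# wildcard-free "col LIKE ANY (p1, p2, ...)" is equivalent to
# "col IN (p1, p2, ...)". Helper names here are illustrative only.
def is_wildcard_free(pattern: str) -> bool:
    return "%" not in pattern and "_" not in pattern

def rewrite_like_any(column: str, patterns: list[str]) -> str:
    if all(is_wildcard_free(p) for p in patterns):
        values = ", ".join(f"'{p}'" for p in patterns)
        return f"{column} IN ({values})"
    # Leave the predicate unchanged when any pattern has a wildcard.
    values = ", ".join(f"'{p}'" for p in patterns)
    return f"{column} LIKE ANY ({values})"

print(rewrite_like_any("trim(addr)", ["5000", "5001", "5002", "5003", "5004"]))
# trim(addr) IN ('5000', '5001', '5002', '5003', '5004')
```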
HeartSaVioR commented on code in PR #38503:
URL: https://github.com/apache/spark/pull/38503#discussion_r1021160270
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationsSuite.scala:
##
@@ -188,17 +194,26 @@ class UnsupportedOperationsSuite ex
viirya commented on code in PR #38626:
URL: https://github.com/apache/spark/pull/38626#discussion_r1021159125
##
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkOptimizer.scala:
##
@@ -51,8 +51,10 @@ class SparkOptimizer(
Batch("Optimize Metadata Only Query", On
cloud-fan commented on PR #38648:
URL: https://github.com/apache/spark/pull/38648#issuecomment-1313214393
cc @MaxGekk @zsxwing
cloud-fan opened a new pull request, #38648:
URL: https://github.com/apache/spark/pull/38648
### What changes were proposed in this pull request?
Spark is an open system where users can use plugins in different places.
When people hit a bug, it may not come from Spark, but fro
itholic opened a new pull request, #38647:
URL: https://github.com/apache/spark/pull/38647
### What changes were proposed in this pull request?
This PR proposes to integrate `UNSCALED_VALUE_TOO_LARGE_FOR_PRECISION` into
`NUMERIC_VALUE_OUT_OF_RANGE`, and also improve the error message.
HeartSaVioR commented on PR #38384:
URL: https://github.com/apache/spark/pull/38384#issuecomment-1313207941
We can't add reviewers who aren't committers on the project. The
best bet is to let them know via cc and quickly leave a review comment, which
will register them to the revi
mridulm commented on PR #38622:
URL: https://github.com/apache/spark/pull/38622#issuecomment-1313206633
I will let @tgravescs take a look at this - I don't have as much context as
he does.
mridulm commented on PR #38064:
URL: https://github.com/apache/spark/pull/38064#issuecomment-1313204753
+CC @Yikun, @HyukjinKwon - any suggestions for @liuzqt's [query
above](https://github.com/apache/spark/pull/38064#issuecomment-1311348015) ?
Thanks
mridulm commented on code in PR #38441:
URL: https://github.com/apache/spark/pull/38441#discussion_r1021139605
##
core/src/main/scala/org/apache/spark/internal/config/package.scala:
##
@@ -2024,6 +2024,15 @@ package object config {
.stringConf
.createOptional
+
cloud-fan commented on code in PR #38626:
URL: https://github.com/apache/spark/pull/38626#discussion_r1021139971
##
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkOptimizer.scala:
##
@@ -51,8 +51,10 @@ class SparkOptimizer(
Batch("Optimize Metadata Only Query",
cloud-fan commented on code in PR #38356:
URL: https://github.com/apache/spark/pull/38356#discussion_r1021136795
##
sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala:
##
@@ -220,6 +220,23 @@ class PartitionedWriteSuite extends QueryTest with
Share
itholic commented on PR #38646:
URL: https://github.com/apache/spark/pull/38646#issuecomment-1313185281
cc @srielau @MaxGekk
itholic opened a new pull request, #38646:
URL: https://github.com/apache/spark/pull/38646
### What changes were proposed in this pull request?
This PR proposes to improve error message for
`UNRESOLVED_MAP_KEY.WITHOUT_SUGGESTION`.
### Why are the changes needed?
Prin
MaxGekk commented on PR #38645:
URL: https://github.com/apache/spark/pull/38645#issuecomment-1313181313
Thank you, @dongjoon-hyun for the quick fix.
EnricoMi commented on code in PR #38356:
URL: https://github.com/apache/spark/pull/38356#discussion_r1021131560
##
sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala:
##
@@ -220,6 +220,23 @@ class PartitionedWriteSuite extends QueryTest with
Shared
dongjoon-hyun closed pull request #38645: [SPARK-41109][SQL][FOLLOWUP] Fix
Scalastyle
URL: https://github.com/apache/spark/pull/38645
dongjoon-hyun commented on PR #38645:
URL: https://github.com/apache/spark/pull/38645#issuecomment-1313178651
Thank you, @MaxGekk. I manually verified this PR. Merged to master to
recover the CIs (including PR builders).
```
$ dev/scalastyle
Using SPARK_LOCAL_IP=localhost
SLF4J:
dongjoon-hyun commented on code in PR #38615:
URL: https://github.com/apache/spark/pull/38615#discussion_r1021129372
##
sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala:
##
@@ -30,6 +30,7 @@ import org.apache.commons.io.FileUtils
import org.apache.spark.{Accum
dongjoon-hyun commented on PR #38645:
URL: https://github.com/apache/spark/pull/38645#issuecomment-1313176225
cc @panbingkun , too
dongjoon-hyun commented on PR #38615:
URL: https://github.com/apache/spark/pull/38615#issuecomment-1313174514
I made a PR.
- https://github.com/apache/spark/pull/38645
dongjoon-hyun opened a new pull request, #38645:
URL: https://github.com/apache/spark/pull/38645
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### H
EnricoMi commented on code in PR #38356:
URL: https://github.com/apache/spark/pull/38356#discussion_r1021126305
##
sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala:
##
@@ -220,6 +220,23 @@ class PartitionedWriteSuite extends QueryTest with
Shared
huaxingao commented on PR #38628:
URL: https://github.com/apache/spark/pull/38628#issuecomment-1313171254
@kazuyukitanimura Thanks for working on this!
I took a look at how Iceberg handles FLBA. For the Iceberg type
`Types.FixedType`, the underlying Parquet type is `fixed_len_byte_array`
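For readers unfamiliar with the term: `fixed_len_byte_array` (FLBA) is the Parquet physical type whose defining property is that every value occupies exactly the declared number of bytes. A minimal Python sketch of that invariant (illustrative only, not Iceberg or Spark code):

```python
# Illustrative only: Parquet's fixed_len_byte_array (FLBA) stores every
# value in exactly `length` bytes; anything else violates the schema.
def check_flba(value: bytes, length: int) -> bytes:
    if len(value) != length:
        raise ValueError(
            f"FLBA({length}) requires exactly {length} bytes, got {len(value)}")
    return value

print(check_flba(b"\x00\x01\x02\x03", 4))  # b'\x00\x01\x02\x03'
```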
cloud-fan commented on code in PR #38627:
URL: https://github.com/apache/spark/pull/38627#discussion_r1021124144
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -441,11 +441,14 @@ class SparkConnectPlanner(session: SparkS
LuciferYang commented on PR #38589:
URL: https://github.com/apache/spark/pull/38589#issuecomment-1313170801
Thanks @dongjoon-hyun. As mentioned earlier, I will keep looking for a more
suitable `ReservedCodeCacheSize` to make the build and test process more stable
and fast
dongjoon-hyun commented on PR #38615:
URL: https://github.com/apache/spark/pull/38615#issuecomment-1313170892
Hi, @MaxGekk. It seems that this PR didn't pass CI properly. Could you
check once more?
cloud-fan commented on code in PR #38627:
URL: https://github.com/apache/spark/pull/38627#discussion_r1021122825
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -441,11 +441,14 @@ class SparkConnectPlanner(session: SparkS
dongjoon-hyun closed pull request #38589: [SPARK-41087][BUILD] Remove
duplicated `-Xmx4g` from `dev/make-distribution.sh` and make `build/mvn` use
the same JAVA_OPTS
URL: https://github.com/apache/spark/pull/38589
cloud-fan commented on code in PR #38631:
URL: https://github.com/apache/spark/pull/38631#discussion_r1021120668
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -170,6 +170,8 @@ message Expression {
message Alias {
Expression expr = 1;
-
grundprinzip commented on code in PR #38627:
URL: https://github.com/apache/spark/pull/38627#discussion_r1021117249
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -441,11 +441,14 @@ class SparkConnectPlanner(session: Spa
grundprinzip commented on code in PR #38627:
URL: https://github.com/apache/spark/pull/38627#discussion_r1021116483
##
connector/connect/src/test/scala/org/apache/spark/sql/connect/planner/SparkConnectProtoSuite.scala:
##
@@ -148,29 +147,25 @@ class SparkConnectProtoSuite extend
MaxGekk closed pull request #38600: [SPARK-41098][SQL] Rename
`GROUP_BY_POS_REFERS_AGG_EXPR` to `GROUP_BY_POS_AGGREGATE`
URL: https://github.com/apache/spark/pull/38600
MaxGekk commented on PR #38600:
URL: https://github.com/apache/spark/pull/38600#issuecomment-1313159759
+1, LGTM. Merging to master.
Thank you, @itholic.
dongjoon-hyun closed pull request #38643: [SPARK-41091][BUILD][3.2] Fix Docker
release tool for branch-3.2
URL: https://github.com/apache/spark/pull/38643
dongjoon-hyun commented on PR #38643:
URL: https://github.com/apache/spark/pull/38643#issuecomment-1313159023
Since this is unrelated to GitHub Actions CI, I merged it to branch-3.2.
Please let me know if there is another release blocker.
rangadi commented on PR #38384:
URL: https://github.com/apache/spark/pull/38384#issuecomment-1313158714
@HeartSaVioR could you add @SandishKumarHN to the reviewers? I am not able
to add.
rangadi commented on PR #38384:
URL: https://github.com/apache/spark/pull/38384#issuecomment-1313157909
@SandishKumarHN could you take a look?
MaxGekk closed pull request #38615: [SPARK-41109][SQL] Rename the error class
_LEGACY_ERROR_TEMP_1216 to INVALID_LIKE_PATTERN
URL: https://github.com/apache/spark/pull/38615
MaxGekk commented on PR #38615:
URL: https://github.com/apache/spark/pull/38615#issuecomment-1313155169
+1, LGTM. Merging to master.
Thank you, @panbingkun and @srielau for review.
LuciferYang commented on PR #38589:
URL: https://github.com/apache/spark/pull/38589#issuecomment-1313153448
The test is divided into two parts:
1. Bare metal server, `Intel(R) Xeon(R) Gold 6271C CPU @ 2.60GHz` with Java
8u352, test command as follows:
```
build/mvn clean install
sunchao commented on PR #38643:
URL: https://github.com/apache/spark/pull/38643#issuecomment-1313150540
> [SPARK-37474](https://issues.apache.org/jira/browse/SPARK-37474) is Apache
Spark 3.3.0. That cannot be the root cause of any issues at branch-3.2. Given
that, I'm wondering if you want
itholic commented on PR #38644:
URL: https://github.com/apache/spark/pull/38644#issuecomment-1313150235
cc @MaxGekk @srielau
itholic opened a new pull request, #38644:
URL: https://github.com/apache/spark/pull/38644
### What changes were proposed in this pull request?
This PR proposes to rename `OUT_OF_DECIMAL_TYPE_RANGE` to
`NUMERIC_OUT_OF_SUPPORTED_RANGE`,
and also improve its error message.
grundprinzip commented on code in PR #38631:
URL: https://github.com/apache/spark/pull/38631#discussion_r1021107215
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -170,6 +170,8 @@ message Expression {
message Alias {
Expression expr = 1;
dongjoon-hyun closed pull request #38641: [SPARK-41126][K8S] `entrypoint.sh`
should use its WORKDIR instead of `/tmp` directory
URL: https://github.com/apache/spark/pull/38641
dongjoon-hyun commented on PR #38641:
URL: https://github.com/apache/spark/pull/38641#issuecomment-1313138482
Thank you so much! Merged to master.
dongjoon-hyun commented on code in PR #38641:
URL: https://github.com/apache/spark/pull/38641#discussion_r1021102742
##
resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh:
##
@@ -41,11 +41,11 @@ if [ -z "$JAVA_HOME" ]; then
fi
SPARK_CLASSPATH="$SPAR
viirya commented on PR #38641:
URL: https://github.com/apache/spark/pull/38641#issuecomment-1313137510
Makes sense to me.
dongjoon-hyun commented on code in PR #38641:
URL: https://github.com/apache/spark/pull/38641#discussion_r1021102455
##
resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh:
##
@@ -41,11 +41,11 @@ if [ -z "$JAVA_HOME" ]; then
fi
SPARK_CLASSPATH="$SPAR
cloud-fan commented on code in PR #38595:
URL: https://github.com/apache/spark/pull/38595#discussion_r1021102010
##
core/src/main/resources/error/error-classes.json:
##
@@ -933,6 +933,11 @@
],
"sqlState" : "42000"
},
+ "TEMP_VIEW_DOES_NOT_BELONG_TO_A_DATABASE" : {
viirya commented on code in PR #38641:
URL: https://github.com/apache/spark/pull/38641#discussion_r1021102101
##
resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh:
##
@@ -41,11 +41,11 @@ if [ -z "$JAVA_HOME" ]; then
fi
SPARK_CLASSPATH="$SPARK_CLASS
dongjoon-hyun commented on PR #38589:
URL: https://github.com/apache/spark/pull/38589#issuecomment-1313135223
If you verified it, it should be okay. So, could you describe your test
environment from 4 days ago in a little more detail, @LuciferYang?
cloud-fan commented on code in PR #38595:
URL: https://github.com/apache/spark/pull/38595#discussion_r1021101689
##
core/src/main/resources/error/error-classes.json:
##
@@ -933,6 +933,11 @@
],
"sqlState" : "42000"
},
+ "TEMP_VIEW_DOES_NOT_BELONG_TO_A_DATABASE" : {
cloud-fan commented on code in PR #38356:
URL: https://github.com/apache/spark/pull/38356#discussion_r1021099206
##
sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala:
##
@@ -220,6 +220,23 @@ class PartitionedWriteSuite extends QueryTest with
Share
cloud-fan commented on code in PR #38356:
URL: https://github.com/apache/spark/pull/38356#discussion_r1021098036
##
sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala:
##
@@ -220,6 +220,23 @@ class PartitionedWriteSuite extends QueryTest with
Share
dongjoon-hyun commented on PR #38641:
URL: https://github.com/apache/spark/pull/38641#issuecomment-1313130296
Could you review this please when you have some time, @viirya ?
ulysses-you commented on code in PR #38619:
URL: https://github.com/apache/spark/pull/38619#discussion_r1021091927
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/InjectRuntimeFilter.scala:
##
@@ -99,7 +99,7 @@ object InjectRuntimeFilter extends Rule[Logic
LuciferYang commented on PR #38589:
URL: https://github.com/apache/spark/pull/38589#issuecomment-1313122325
> maven test all passed
> Did you run the full Maven testing with this configuration?
Yes. 4 days ago, I checked the full UTs with this configuration and all tests
passed.
sunchao commented on PR #38643:
URL: https://github.com/apache/spark/pull/38643#issuecomment-1313112767
cc @HyukjinKwon @srowen @zero323, who were involved in #34728. I'm not
familiar with R at all, so please check whether this is actually doing the
correct thing.
sunchao opened a new pull request, #38643:
URL: https://github.com/apache/spark/pull/38643
### What changes were proposed in this pull request?
This tries to fix `do-release-docker.sh` for branch-3.2.
### Why are the changes needed?
Currently the following
amaliujia commented on code in PR #38638:
URL: https://github.com/apache/spark/pull/38638#discussion_r1021075427
##
connector/connect/src/main/protobuf/spark/connect/base.proto:
##
@@ -48,6 +72,9 @@ message Request {
// The logical plan to be executed / analyzed.
Plan plan
cloud-fan commented on code in PR #38619:
URL: https://github.com/apache/spark/pull/38619#discussion_r1021071321
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/InjectRuntimeFilter.scala:
##
@@ -99,7 +99,7 @@ object InjectRuntimeFilter extends Rule[Logical
cloud-fan commented on code in PR #38495:
URL: https://github.com/apache/spark/pull/38495#discussion_r1021070978
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala:
##
@@ -722,18 +722,15 @@ private[spark] class HiveExternalCatalog(conf: SparkConf,
ha
cloud-fan commented on code in PR #38495:
URL: https://github.com/apache/spark/pull/38495#discussion_r1021070839
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala:
##
@@ -722,18 +722,15 @@ private[spark] class HiveExternalCatalog(conf: SparkConf,
ha
cloud-fan commented on code in PR #38495:
URL: https://github.com/apache/spark/pull/38495#discussion_r1021070153
##
sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertSuite.scala:
##
@@ -894,12 +895,14 @@ class InsertSuite extends QueryTest with
TestHiveSingleton with Befo
cloud-fan commented on code in PR #38495:
URL: https://github.com/apache/spark/pull/38495#discussion_r1021069901
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala:
##
@@ -609,6 +609,19 @@ private[hive] class HiveClientImpl(
shim.alterTable(cli
cloud-fan commented on code in PR #38495:
URL: https://github.com/apache/spark/pull/38495#discussion_r1021069335
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClient.scala:
##
@@ -127,6 +127,9 @@ private[hive] trait HiveClient {
*/
def alterTable(dbName:
cloud-fan commented on code in PR #38638:
URL: https://github.com/apache/spark/pull/38638#discussion_r1021068631
##
connector/connect/src/main/protobuf/spark/connect/base.proto:
##
@@ -48,6 +72,9 @@ message Request {
// The logical plan to be executed / analyzed.
Plan plan
zhengruifeng commented on PR #38621:
URL: https://github.com/apache/spark/pull/38621#issuecomment-1313087872
thank you @HyukjinKwon @amaliujia @cloud-fan for the reviews
cloud-fan commented on code in PR #38627:
URL: https://github.com/apache/spark/pull/38627#discussion_r1021067557
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -441,11 +441,14 @@ class SparkConnectPlanner(session: SparkS
cloud-fan closed pull request #38621: [SPARK-4][CONNECT][PYTHON] Implement
`DataFrame.show`
URL: https://github.com/apache/spark/pull/38621
cloud-fan commented on PR #38621:
URL: https://github.com/apache/spark/pull/38621#issuecomment-1313086271
thanks, merging to master!
cloud-fan commented on code in PR #38621:
URL: https://github.com/apache/spark/pull/38621#discussion_r1021065768
##
connector/connect/src/main/protobuf/spark/connect/relations.proto:
##
@@ -253,6 +254,23 @@ message Repartition {
bool shuffle = 3;
}
+// Compose the string r
amaliujia commented on code in PR #38627:
URL: https://github.com/apache/spark/pull/38627#discussion_r1021065389
##
connector/connect/src/test/scala/org/apache/spark/sql/connect/planner/SparkConnectProtoSuite.scala:
##
@@ -148,29 +147,25 @@ class SparkConnectProtoSuite extends P
amaliujia commented on code in PR #38642:
URL: https://github.com/apache/spark/pull/38642#discussion_r1021063230
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -207,6 +208,18 @@ def test_range(self):
.equals(self.spark.range(start=0, end=10, step=3
amaliujia commented on code in PR #38642:
URL: https://github.com/apache/spark/pull/38642#discussion_r1021063033
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -207,6 +208,18 @@ def test_range(self):
.equals(self.spark.range(start=0, end=10, step=3
cloud-fan closed pull request #38571: [SPARK-37555][TEST][FOLLOWUP] Increase
timeout of CLI test `spark-sql should pass last unclosed comment to backend`
URL: https://github.com/apache/spark/pull/38571
cloud-fan commented on PR #38571:
URL: https://github.com/apache/spark/pull/38571#issuecomment-1313079538
thanks, merging to master!
cloud-fan commented on code in PR #38595:
URL: https://github.com/apache/spark/pull/38595#discussion_r1021062648
##
core/src/main/resources/error/error-classes.json:
##
@@ -933,6 +933,11 @@
],
"sqlState" : "42000"
},
+ "TEMP_VIEW_DOES_NOT_BELONG_TO_A_DATABASE" : {
amaliujia opened a new pull request, #38642:
URL: https://github.com/apache/spark/pull/38642
### What changes were proposed in this pull request?
This PR adds `CreateGlobalTempView` and `CreateOrReplaceGlobalTempView` to
Python DataFrame API.
Meanwhile, this PR extends
dongjoon-hyun commented on PR #38589:
URL: https://github.com/apache/spark/pull/38589#issuecomment-1313078598
BTW, @LuciferYang. Did you run the full Maven test suite with this
configuration? `dev/make-distribution.sh` is simply for building, while
`build/mvn` should support testing. Please ru
cloud-fan commented on code in PR #38404:
URL: https://github.com/apache/spark/pull/38404#discussion_r1021061558
##
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##
@@ -319,6 +319,7 @@ query
insertInto
: INSERT OVERWRITE TABLE? multipa
cloud-fan closed pull request #38632: [SPARK-41116][CONNECT] Input relation can
be optional for Project in Connect proto
URL: https://github.com/apache/spark/pull/38632
cloud-fan commented on PR #38632:
URL: https://github.com/apache/spark/pull/38632#issuecomment-1313072362
thanks, merging to master!
cloud-fan commented on code in PR #38631:
URL: https://github.com/apache/spark/pull/38631#discussion_r1021059774
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -170,6 +170,8 @@ message Expression {
message Alias {
Expression expr = 1;
-
dongjoon-hyun opened a new pull request, #38641:
URL: https://github.com/apache/spark/pull/38641
…
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
amaliujia commented on code in PR #38595:
URL: https://github.com/apache/spark/pull/38595#discussion_r1021050959
##
core/src/main/resources/error/error-classes.json:
##
@@ -933,6 +933,11 @@
],
"sqlState" : "42000"
},
+ "TEMP_VIEW_DOES_NOT_BELONG_TO_A_DATABASE" : {
AngersZh commented on PR #38571:
URL: https://github.com/apache/spark/pull/38571#issuecomment-1313050303
> @AngersZh can you fill the PR description? Then we can merge it.
done
cloud-fan commented on code in PR #38595:
URL: https://github.com/apache/spark/pull/38595#discussion_r1021045529
##
core/src/main/resources/error/error-classes.json:
##
@@ -933,6 +933,11 @@
],
"sqlState" : "42000"
},
+ "TEMP_VIEW_DOES_NOT_BELONG_TO_A_DATABASE" : {
itholic commented on code in PR #38576:
URL: https://github.com/apache/spark/pull/38576#discussion_r1021045305
##
core/src/main/resources/error/error-classes.json:
##
@@ -1277,6 +1277,11 @@
"A correlated outer name reference within a subquery expression body
was not
cloud-fan commented on code in PR #38595:
URL: https://github.com/apache/spark/pull/38595#discussion_r1021045167
##
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##
@@ -2162,6 +2162,16 @@ object SQLConf {
.booleanConf
.createWithDefault(f
cloud-fan commented on code in PR #38595:
URL: https://github.com/apache/spark/pull/38595#discussion_r1021044871
##
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##
@@ -2162,6 +2162,16 @@ object SQLConf {
.booleanConf
.createWithDefault(f