gengliangwang opened a new pull request, #38997:
URL: https://github.com/apache/spark/pull/38997
### What changes were proposed in this pull request?
Handle TimestampNTZ in `Cast.canUpCast`:
* Date and timestamp types can up cast to TimestampNTZ.
* TimestampNTZ can up cast
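For reference, a minimal sketch of the up-cast rules stated above (the exact rule matrix is defined by the PR itself, so treat these expectations as assumptions):
```scala
import org.apache.spark.sql.catalyst.expressions.Cast
import org.apache.spark.sql.types.{DateType, TimestampNTZType, TimestampType}

// Checks illustrating the description above: date and timestamp should be
// considered safe up-casts to timestamp_ntz once the change lands.
assert(Cast.canUpCast(DateType, TimestampNTZType))
assert(Cast.canUpCast(TimestampType, TimestampNTZType))
```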
ahmed-mahran opened a new pull request, #38996:
URL: https://github.com/apache/spark/pull/38996
### What changes were proposed in this pull request?
A follow-up on https://github.com/apache/spark/pull/38966 to update relevant
documentation and remove redundant sort key.
ulysses-you commented on code in PR #38779:
URL: https://github.com/apache/spark/pull/38779#discussion_r1044175147
##
core/pom.xml:
##
@@ -616,6 +621,48 @@
+
+<groupId>org.apache.maven.plugins</groupId>
+<artifactId>maven-shade-plugin</artifactId>
Review Comment:
@ge
Ngone51 commented on PR #38995:
URL: https://github.com/apache/spark/pull/38995#issuecomment-1343967442
> What about renaming IsolatedRpcEndpoint to IsolatedThreadSafeRpcEndpoint
simply?
I think this would breach the original design that the author argued for at
https://github.com/apache/s
HyukjinKwon commented on PR #38967:
URL: https://github.com/apache/spark/pull/38967#issuecomment-1343959972
Oops, this is apparently used. When I run the commands below:
```bash
./build/mvn -Phive -DskipTests clean package
./python/run-tests --module pyspark-connect -p 1
```
LuciferYang commented on code in PR #38944:
URL: https://github.com/apache/spark/pull/38944#discussion_r1044163985
##
connector/connect/common/pom.xml:
##
@@ -0,0 +1,225 @@
+
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+
shuyouZZ commented on code in PR #38983:
URL: https://github.com/apache/spark/pull/38983#discussion_r1044159282
##
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala:
##
@@ -996,6 +996,21 @@ private[history] class FsHistoryProvider(conf: SparkConf,
cloc
LuciferYang commented on code in PR #38944:
URL: https://github.com/apache/spark/pull/38944#discussion_r1044157198
##
connector/connect/common/pom.xml:
##
@@ -0,0 +1,225 @@
+
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+
HyukjinKwon commented on PR #38994:
URL: https://github.com/apache/spark/pull/38994#issuecomment-1343943663
I prefer to merge https://github.com/apache/spark/pull/38991 first but
please don't bother. I don't mind resolving conflicts 👍
Ngone51 opened a new pull request, #38995:
URL: https://github.com/apache/spark/pull/38995
### What changes were proposed in this pull request?
This PR introduces a new layer `IsolatedThreadSafeRpcEndpoint` to extend
`IsolatedRpcEndpoint` and changes all the endpoints whic
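For context, a rough sketch of what such a layer could look like (this is an assumption based on the truncated description above, not the PR's actual code):
```scala
import org.apache.spark.rpc.IsolatedRpcEndpoint

// Sketch: pin message delivery to a single dedicated thread so the endpoint
// is effectively thread-safe without additional locking. The real trait name
// and semantics are defined in the PR.
trait IsolatedThreadSafeRpcEndpoint extends IsolatedRpcEndpoint {
  final override def threadCount(): Int = 1
}
```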
toujours33 commented on code in PR #38711:
URL: https://github.com/apache/spark/pull/38711#discussion_r1044148832
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1178,8 +1178,13 @@ private[spark] class DAGScheduler(
listenerBus.post(SparkListene
dongjoon-hyun commented on PR #38982:
URL: https://github.com/apache/spark/pull/38982#issuecomment-1343935891
Thank you, @Yikun. It seems that your test works, but the linter job failed with the
SparkR issue again.
- https://github.com/Yikun/spark/actions/runs/3655117402/jobs/6176152494
HyukjinKwon commented on PR #38994:
URL: https://github.com/apache/spark/pull/38994#issuecomment-1343931366
cc @amaliujia
HyukjinKwon opened a new pull request, #38994:
URL: https://github.com/apache/spark/pull/38994
### What changes were proposed in this pull request?
This PR proposes to resolve the circular imports workarounds
### Why are the changes needed?
For better readability and
idealspark opened a new pull request, #38993:
URL: https://github.com/apache/spark/pull/38993
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
infoankitp commented on code in PR #38865:
URL: https://github.com/apache/spark/pull/38865#discussion_r1044143098
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -4600,3 +4600,92 @@ case class ArrayExcept(left: Expressio
HyukjinKwon commented on PR #38992:
URL: https://github.com/apache/spark/pull/38992#issuecomment-1343924920
cc @zhengruifeng
HyukjinKwon opened a new pull request, #38992:
URL: https://github.com/apache/spark/pull/38992
### What changes were proposed in this pull request?
This PR proposes to document the correct way of running Spark Connect tests
with `--parallelism 1` option in `./python/run-tests` script.
Yikun commented on PR #38982:
URL: https://github.com/apache/spark/pull/38982#issuecomment-1343923114
For the PySpark failure, let's see whether
https://github.com/Yikun/spark/pull/193/commits/3840beb42877335efd3bc6089c99bce5287b3079
works or not:
https://github.com/Yikun/spark/actions/runs/365511
dongjoon-hyun commented on PR #38982:
URL: https://github.com/apache/spark/pull/38982#issuecomment-1343920950
As I mentioned
[here](https://github.com/apache/spark/pull/38982#issuecomment-1343437210),
it's irrelevant to this PR and a known issue. You can ignore that, @pan3793 .
dongjoon-hyun commented on PR #38991:
URL: https://github.com/apache/spark/pull/38991#issuecomment-1343920074
Thank you, @HyukjinKwon !
HyukjinKwon commented on PR #38991:
URL: https://github.com/apache/spark/pull/38991#issuecomment-1343919107
cc @grundprinzip @hvanhovell @dongjoon-hyun @amaliujia @zhengruifeng
@xinrong-meng FYI
HyukjinKwon opened a new pull request, #38991:
URL: https://github.com/apache/spark/pull/38991
### What changes were proposed in this pull request?
This PR proposes to:
- Print out the correct error message when dependencies are not installed
for `pyspark.sql.connect`
- Igno
jiaoqingbo opened a new pull request, #38990:
URL: https://github.com/apache/spark/pull/38990
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
mridulm commented on code in PR #38876:
URL: https://github.com/apache/spark/pull/38876#discussion_r1044131362
##
core/src/main/scala/org/apache/spark/storage/BlockManager.scala:
##
@@ -637,9 +637,11 @@ private[spark] class BlockManager(
def reregister(): Unit = {
// TOD
mridulm commented on code in PR #38876:
URL: https://github.com/apache/spark/pull/38876#discussion_r1044130961
##
core/src/main/scala/org/apache/spark/storage/BlockManager.scala:
##
@@ -637,9 +637,11 @@ private[spark] class BlockManager(
def reregister(): Unit = {
// TOD
shenjiayu17 commented on PR #38534:
URL: https://github.com/apache/spark/pull/38534#issuecomment-1343908559
Hi @wangyum. I'm very interested in this optimization of partial aggregation. But why
does it need these child-node limits? Do they have some influence on functionality or
performance?
dengziming commented on code in PR #38984:
URL: https://github.com/apache/spark/pull/38984#discussion_r1044127806
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -305,7 +305,11 @@ class SparkConnectPlanner(session:
amaliujia commented on code in PR #38984:
URL: https://github.com/apache/spark/pull/38984#discussion_r1044125168
##
python/pyspark/sql/connect/proto_converter.py:
##
@@ -0,0 +1,62 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
amaliujia commented on code in PR #38984:
URL: https://github.com/apache/spark/pull/38984#discussion_r1044124554
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -305,7 +305,11 @@ class SparkConnectPlanner(session:
boneanxs commented on PR #38980:
URL: https://github.com/apache/spark/pull/38980#issuecomment-1343903807
@cloud-fan @rdblue @dongjoon-hyun @steveloughran could you please take a
look?
amaliujia commented on code in PR #38979:
URL: https://github.com/apache/spark/pull/38979#discussion_r1044122515
##
connector/connect/common/src/main/protobuf/spark/connect/relations.proto:
##
@@ -304,6 +305,24 @@ message LocalRelation {
// Local collection data serialized in
pan3793 commented on PR #38982:
URL: https://github.com/apache/spark/pull/38982#issuecomment-1343879880
@dongjoon-hyun the pyspark and lint jobs on this PR fail consistently, and I see they
also failed on previous commits. Sorry, I'm not familiar with Python; cc
@zhengruifeng @Yikun, would you please
pan3793 commented on PR #38989:
URL: https://github.com/apache/spark/pull/38989#issuecomment-1343878019
cc @srowen @LuciferYang
pan3793 opened a new pull request, #38989:
URL: https://github.com/apache/spark/pull/38989
### What changes were proposed in this pull request?
Correctly transform the SPI services for Yarn Shuffle Service by configuring
`ServicesResourceTransformer`.
### Why are the ch
gengliangwang commented on code in PR #38988:
URL: https://github.com/apache/spark/pull/38988#discussion_r1044100198
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala:
##
@@ -1280,6 +1280,16 @@ case class Cast(
}
}
+ // Whether Spark
zhengruifeng commented on PR #38979:
URL: https://github.com/apache/spark/pull/38979#issuecomment-1343866751
@HyukjinKwon @cloud-fan @amaliujia @grundprinzip @hvanhovell
gengliangwang opened a new pull request, #38988:
URL: https://github.com/apache/spark/pull/38988
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### H
LuciferYang commented on PR #38954:
URL: https://github.com/apache/spark/pull/38954#issuecomment-1343860910
friendly ping @MaxGekk
rangadi commented on code in PR #38922:
URL: https://github.com/apache/spark/pull/38922#discussion_r1044083377
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufOptions.scala:
##
@@ -38,6 +38,12 @@ private[sql] class ProtobufOptions(
val parse
yabola commented on PR #38882:
URL: https://github.com/apache/spark/pull/38882#issuecomment-1343836630
@gengliangwang I added a new UT and changed decode to carefully decode each
parameter. I think it aligns with the previous behavior and is more accurate (I reuse
[decodeURLParameter](https://
SandishKumarHN commented on code in PR #38922:
URL: https://github.com/apache/spark/pull/38922#discussion_r1044079922
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufOptions.scala:
##
@@ -38,6 +38,12 @@ private[sql] class ProtobufOptions(
va
HeartSaVioR commented on code in PR #38517:
URL: https://github.com/apache/spark/pull/38517#discussion_r1044068977
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/AsyncProgressTrackingMicroBatchExecution.scala:
##
@@ -0,0 +1,282 @@
+/*
+ * Licensed to the Apa
HeartSaVioR commented on code in PR #38517:
URL: https://github.com/apache/spark/pull/38517#discussion_r1044075012
##
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/AsyncProgressTrackingMicroBatchExecutionSuite.scala:
##
@@ -0,0 +1,1865 @@
+/*
+ * Licensed to t
Ngone51 commented on code in PR #38702:
URL: https://github.com/apache/spark/pull/38702#discussion_r1044071110
##
core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:
##
@@ -1849,6 +1849,68 @@ abstract class AppStatusListenerSuite extends
SparkFunSuite with
beliefer commented on PR #38874:
URL: https://github.com/apache/spark/pull/38874#issuecomment-1343813646
OK, here's another way: `ArrayCompact` can reuse `ArrayFilter` and implement
`RuntimeReplaceable`.
```sql
> SELECT filter(array(1, 2, 3, null), x -> x IS NOT NULL);
 [1,2,3]
```
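To make the suggestion concrete, here is a rough sketch of an `ArrayCompact` built on `ArrayFilter` via `RuntimeReplaceable` (names and helper wiring are illustrative only, not the PR's final code):
```scala
import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.catalyst.trees.UnaryLike
import org.apache.spark.sql.types.ArrayType

// array_compact(arr) rewritten at analysis time into filter(arr, x -> x IS NOT NULL),
// so the expression needs no dedicated evaluation or codegen of its own.
case class ArrayCompact(child: Expression)
  extends RuntimeReplaceable with UnaryLike[Expression] {

  // Lambda variable bound to the array's element type once the child is resolved.
  private lazy val lv = NamedLambdaVariable(
    "arg", child.dataType.asInstanceOf[ArrayType].elementType, nullable = true)
  private lazy val notNull = LambdaFunction(IsNotNull(lv), Seq(lv))

  // The expression Spark actually evaluates in place of array_compact.
  override lazy val replacement: Expression = ArrayFilter(child, notNull)

  override protected def withNewChildInternal(newChild: Expression): ArrayCompact =
    copy(child = newChild)
}
```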
pan3793 commented on code in PR #38985:
URL: https://github.com/apache/spark/pull/38985#discussion_r1044069338
##
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/KubernetesConfSuite.scala:
##
@@ -241,14 +241,14 @@ class KubernetesConfSuite extends Sp
cloud-fan commented on code in PR #38776:
URL: https://github.com/apache/spark/pull/38776#discussion_r1044068904
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##
@@ -1761,6 +1763,114 @@ class Analyzer(override val catalogManager:
CatalogM
Ngone51 commented on PR #38702:
URL: https://github.com/apache/spark/pull/38702#issuecomment-1343811023
> Btw, do you also want to remove the if (event.taskInfo == null) { check at the
beginning of onTaskEnd?
@mridulm Since the latest PR fix doesn't involve the metrics, I think we
can
rangadi commented on code in PR #38922:
URL: https://github.com/apache/spark/pull/38922#discussion_r1044066697
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufOptions.scala:
##
@@ -38,6 +38,12 @@ private[sql] class ProtobufOptions(
val parse
rangadi commented on code in PR #38922:
URL: https://github.com/apache/spark/pull/38922#discussion_r1044064987
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufOptions.scala:
##
@@ -38,6 +38,12 @@ private[sql] class ProtobufOptions(
val parse
rangadi commented on code in PR #38922:
URL: https://github.com/apache/spark/pull/38922#discussion_r1044065935
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufOptions.scala:
##
@@ -38,6 +38,12 @@ private[sql] class ProtobufOptions(
val parse
rangadi commented on code in PR #38922:
URL: https://github.com/apache/spark/pull/38922#discussion_r1044065444
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufOptions.scala:
##
@@ -38,6 +38,12 @@ private[sql] class ProtobufOptions(
val parse
Ngone51 commented on code in PR #38702:
URL: https://github.com/apache/spark/pull/38702#discussion_r1044064944
##
core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:
##
@@ -1849,6 +1849,68 @@ abstract class AppStatusListenerSuite extends
SparkFunSuite with
Ngone51 commented on code in PR #38702:
URL: https://github.com/apache/spark/pull/38702#discussion_r1044064601
##
core/src/main/scala/org/apache/spark/status/AppStatusListener.scala:
##
@@ -689,7 +689,15 @@ private[spark] class AppStatusListener(
if (metricsDelta != null)
zhengruifeng commented on code in PR #38979:
URL: https://github.com/apache/spark/pull/38979#discussion_r1044063886
##
python/pyspark/sql/connect/plan.py:
##
@@ -167,21 +169,38 @@ def _repr_html_(self) -> str:
class LocalRelation(LogicalPlan):
-"""Creates a LocalRelatio
zhengruifeng commented on PR #38979:
URL: https://github.com/apache/spark/pull/38979#issuecomment-1343806000
Difference in casting: this PR leverages `Dataset.to(schema)` to cast data types,
which is very different from PySpark's approach, which relies on
[the `_acceptable_types` list]
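For illustration, a tiny sketch of the `Dataset.to(schema)` casting path mentioned above (the data and target schema here are made up for the example):
```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

object DatasetToSchemaSketch extends App {
  val spark = SparkSession.builder().master("local[1]").getOrCreate()
  import spark.implicits._

  val df = Seq((1, "a"), (2, "b")).toDF("id", "name")
  val target = StructType(Seq(
    StructField("id", LongType),    // int -> bigint handled as a safe up-cast
    StructField("name", StringType)
  ))

  // Let the engine reconcile the rows with the requested schema instead of
  // validating element types on the client, as described above.
  df.to(target).printSchema()
  spark.stop()
}
```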
Ngone51 commented on code in PR #38711:
URL: https://github.com/apache/spark/pull/38711#discussion_r1044062231
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -1178,8 +1178,13 @@ private[spark] class DAGScheduler(
listenerBus.post(SparkListenerTa
Ngone51 commented on code in PR #38711:
URL: https://github.com/apache/spark/pull/38711#discussion_r1044060202
##
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##
@@ -383,8 +383,8 @@ private[spark] class DAGScheduler(
/**
* Called by the TaskSetManage
pan3793 commented on code in PR #38985:
URL: https://github.com/apache/spark/pull/38985#discussion_r1044056418
##
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/KubernetesConfSuite.scala:
##
@@ -241,14 +241,14 @@ class KubernetesConfSuite extends Sp
SandishKumarHN commented on code in PR #38922:
URL: https://github.com/apache/spark/pull/38922#discussion_r1044056150
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufOptions.scala:
##
@@ -38,6 +38,12 @@ private[sql] class ProtobufOptions(
va
vinodkc commented on code in PR #38146:
URL: https://github.com/apache/spark/pull/38146#discussion_r1044051578
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/maskExpressions.scala:
##
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation
vinodkc commented on code in PR #38146:
URL: https://github.com/apache/spark/pull/38146#discussion_r1044051385
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/maskExpressions.scala:
##
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation
vinodkc commented on code in PR #38146:
URL: https://github.com/apache/spark/pull/38146#discussion_r1044051062
##
sql/core/src/test/resources/sql-tests/inputs/string-functions.sql:
##
@@ -58,6 +58,69 @@ SELECT substring('Spark SQL' from 5);
SELECT substring('Spark SQL' from -3)
sandeep-katta commented on PR #38874:
URL: https://github.com/apache/spark/pull/38874#issuecomment-1343778634
> Basically, the implementation looks good. But we can use
`RuntimeReplaceable` to simplify this PR.
>
> `ArrayCompact` can reuse `ArrayRemove` and implement `RuntimeReplaceab
LuciferYang commented on code in PR #38985:
URL: https://github.com/apache/spark/pull/38985#discussion_r1044047853
##
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/KubernetesConfSuite.scala:
##
@@ -241,14 +241,14 @@ class KubernetesConfSuite extend
pan3793 commented on PR #38985:
URL: https://github.com/apache/spark/pull/38985#issuecomment-1343774719
> It seems this is to fix a bad case caused by the way users use it?
That's right.
> Is it necessary for Spark to do fault tolerance?
The change is small, and I think it's valuable.
shuyouZZ commented on code in PR #38983:
URL: https://github.com/apache/spark/pull/38983#discussion_r1044044735
##
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala:
##
@@ -996,6 +996,21 @@ private[history] class FsHistoryProvider(conf: SparkConf,
cloc
srielau commented on code in PR #38576:
URL: https://github.com/apache/spark/pull/38576#discussion_r1044041728
##
core/src/main/resources/error/error-classes.json:
##
@@ -1443,6 +1443,11 @@
"A correlated outer name reference within a subquery expression body
was not
wankunde commented on code in PR #38649:
URL: https://github.com/apache/spark/pull/38649#discussion_r1044038097
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala:
##
@@ -762,10 +762,40 @@ object LikeSimplification extends Rule[LogicalPlan]
LuciferYang commented on PR #38985:
URL: https://github.com/apache/spark/pull/38985#issuecomment-1343768447
It seems this is to fix a bad case caused by the way users use it? The current
`lang3` version used by Spark does not trigger this issue, right? I don't know
how many similar bad cases will b
Yikun commented on code in PR #38985:
URL: https://github.com/apache/spark/pull/38985#discussion_r1044003100
##
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/KubernetesConfSuite.scala:
##
@@ -241,14 +241,14 @@ class KubernetesConfSuite extends Spar
LuciferYang commented on PR #38874:
URL: https://github.com/apache/spark/pull/38874#issuecomment-1343752740
> Basically, the implementation looks good. But we can use
`RuntimeReplaceable` to simplify this PR.
>
> `ArrayCompact` can reuse `ArrayRemove` and implement `RuntimeReplaceable
LuciferYang commented on code in PR #38874:
URL: https://github.com/apache/spark/pull/38874#discussion_r1044002198
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -4600,3 +4600,57 @@ case class ArrayExcept(left: Expressi
SandishKumarHN commented on code in PR #38922:
URL: https://github.com/apache/spark/pull/38922#discussion_r1044001778
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufOptions.scala:
##
@@ -38,6 +38,12 @@ private[sql] class ProtobufOptions(
va
LuciferYang commented on code in PR #38865:
URL: https://github.com/apache/spark/pull/38865#discussion_r1044000749
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -4600,3 +4600,92 @@ case class ArrayExcept(left: Expressi
LuciferYang commented on PR #38974:
URL: https://github.com/apache/spark/pull/38974#issuecomment-1343747647
This PR can fix the compile issue reported by @steveloughran on the dev mailing
list, but should we wait until Hadoop 3.4 is upgraded? What do you
think? @HyukjinKwon @dongjoon-hyun @s
zhengruifeng commented on code in PR #38973:
URL: https://github.com/apache/spark/pull/38973#discussion_r1043999011
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -309,6 +310,24 @@ class SparkConnectPlanner(sessio
beliefer commented on PR #38874:
URL: https://github.com/apache/spark/pull/38874#issuecomment-1343734796
> @beliefer would you mind also help reviewing? Thanks
Thank you for you ping.
pan3793 commented on PR #38982:
URL: https://github.com/apache/spark/pull/38982#issuecomment-1343734626
Re-triggered.
beliefer commented on PR #38874:
URL: https://github.com/apache/spark/pull/38874#issuecomment-1343731612
Basically, the implementation looks good. But we can use
`RuntimeReplaceable` to simplify this PR.
`ArrayCompact` can reuse `ArrayRemove` and implement `RuntimeReplaceable`.
```
pan3793 commented on code in PR #38985:
URL: https://github.com/apache/spark/pull/38985#discussion_r1043983985
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesConf.scala:
##
@@ -275,7 +275,7 @@ private[spark] object KubernetesConf {
pan3793 commented on code in PR #38985:
URL: https://github.com/apache/spark/pull/38985#discussion_r1043983005
##
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/KubernetesConfSuite.scala:
##
@@ -241,14 +241,14 @@ class KubernetesConfSuite extends Sp
rangadi commented on code in PR #38922:
URL: https://github.com/apache/spark/pull/38922#discussion_r1043982706
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufOptions.scala:
##
@@ -38,6 +38,12 @@ private[sql] class ProtobufOptions(
val parse
beliefer commented on code in PR #38874:
URL: https://github.com/apache/spark/pull/38874#discussion_r1043982018
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -4600,3 +4600,57 @@ case class ArrayExcept(left: Expression,
rangadi commented on code in PR #38922:
URL: https://github.com/apache/spark/pull/38922#discussion_r1043915051
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufOptions.scala:
##
@@ -38,6 +38,12 @@ private[sql] class ProtobufOptions(
val parse
beliefer commented on PR #38874:
URL: https://github.com/apache/spark/pull/38874#issuecomment-1343723426
@sandeep-katta Could you update the PR description to include the syntax, arguments,
examples, and which mainstream databases support array_append?
Please refer to https://github.c
beliefer commented on code in PR #38865:
URL: https://github.com/apache/spark/pull/38865#discussion_r1043977011
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -4600,3 +4600,92 @@ case class ArrayExcept(left: Expression,
AmplabJenkins commented on PR #38946:
URL: https://github.com/apache/spark/pull/38946#issuecomment-1343716380
Can one of the admins verify this patch?
AmplabJenkins commented on PR #38947:
URL: https://github.com/apache/spark/pull/38947#issuecomment-1343716355
Can one of the admins verify this patch?
wForget commented on PR #38871:
URL: https://github.com/apache/spark/pull/38871#issuecomment-1343712073
Thanks @planga82, cc @HyukjinKwon @dongjoon-hyun Could you please take a
look?