srowen commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153736049
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
ueshin opened a new pull request, #40612:
URL: https://github.com/apache/spark/pull/40612
### What changes were proposed in this pull request?
Fixes the comparison of the results with Arrow optimization enabled/disabled.
### Why are the changes needed?
In `test_arrow`, there
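The kind of check such a test performs can be sketched with a small order-insensitive comparison helper. This is an illustrative stand-in, not the actual pyspark test code; the helper name and shape are assumptions:

```python
from collections import Counter

def assert_same_rows(rows_arrow, rows_no_arrow):
    """Toy model of comparing the same query's results with Arrow
    optimization enabled vs. disabled, ignoring row order.
    """
    # Counter treats each row list as a multiset, so duplicates must match too.
    assert Counter(rows_arrow) == Counter(rows_no_arrow), (
        f"results differ: {rows_arrow} vs {rows_no_arrow}"
    )

# Same rows in a different order still compare equal.
assert_same_rows([(1, "a"), (2, "b")], [(2, "b"), (1, "a")])
```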
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1153629770
##
connector/connect/common/src/main/protobuf/spark/connect/commands.proto:
##
@@ -177,3 +179,97 @@ message WriteOperationV2 {
// (Optional) A condition for overwrit
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1153813344
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -1969,6 +2014,136 @@ class SparkConnectPlanner(val sess
gengliangwang commented on PR #40592:
URL: https://github.com/apache/spark/pull/40592#issuecomment-1491064274
Merging to master/3.4/3.3/3.2
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
gengliangwang closed pull request #40592: [SPARK-42967][CORE][3.2][3.3][3.4]
Fix SparkListenerTaskStart.stageAttemptId when a task is started after the
stage is cancelled
URL: https://github.com/apache/spark/pull/40592
HyukjinKwon commented on PR #40612:
URL: https://github.com/apache/spark/pull/40612#issuecomment-1491101671
Merged to master and branch-3.4.
HyukjinKwon commented on PR #40595:
URL: https://github.com/apache/spark/pull/40595#issuecomment-1491102080
Merged to branch-3.4.
HyukjinKwon closed pull request #40612: [SPARK-42969][CONNECT][TESTS] Fix the
comparison the result with Arrow optimization enabled/disabled
URL: https://github.com/apache/spark/pull/40612
HyukjinKwon closed pull request #40595:
[SPARK-42970][CONNECT][PYTHON][TESTS][3.4] Reuse pyspark.sql.tests.test_arrow
test cases
URL: https://github.com/apache/spark/pull/40595
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153889706
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:
##
@@ -1742,6 +1742,8 @@ class DataFrameSuite extends QueryTest
Seq(Row(2, 1, 2), Row(1
HyukjinKwon commented on code in PR #40608:
URL: https://github.com/apache/spark/pull/40608#discussion_r1153890797
##
python/pyspark/sql/dataframe.py:
##
@@ -706,6 +706,25 @@ def explain(
assert self._sc._jvm is not None
print(self._sc._jvm.PythonSQLUtils.expl
HeartSaVioR commented on PR #40561:
URL: https://github.com/apache/spark/pull/40561#issuecomment-1491105348
> What is the decision about batch support?
I just added support for batch in the latest commit. It needs more test
coverage for batch query support, so that's why we have new
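The semantics under discussion in this thread (deduplication within a watermark) can be sketched in plain Python. This is a simplified model of the behavior, not Spark's actual stateful operator:

```python
class DedupWithinWatermark:
    """Toy model of watermark-bounded streaming deduplication.

    A key is dropped as a duplicate only while its first-seen event is
    still within the watermark delay; once the watermark passes, the
    key's state is evicted and the key can be emitted again.
    """

    def __init__(self, delay):
        self.delay = delay
        self.seen = {}        # key -> expiration timestamp
        self.watermark = 0

    def process(self, key, event_time):
        # Advance the watermark (max event time seen minus the delay).
        self.watermark = max(self.watermark, event_time - self.delay)
        # Evict state for keys whose expiration fell behind the watermark.
        self.seen = {k: e for k, e in self.seen.items() if e > self.watermark}
        if key in self.seen:
            return None  # duplicate within the watermark window
        self.seen[key] = event_time + self.delay
        return (key, event_time)

dedup = DedupWithinWatermark(delay=10)
assert dedup.process("a", 100) == ("a", 100)  # first occurrence: emitted
assert dedup.process("a", 105) is None        # duplicate within delay: dropped
assert dedup.process("a", 200) == ("a", 200)  # state expired: emitted again
```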
HyukjinKwon commented on code in PR #40591:
URL: https://github.com/apache/spark/pull/40591#discussion_r1153892668
##
core/src/main/scala/org/apache/spark/scheduler/SparkListener.scala:
##
@@ -289,7 +289,8 @@ case class SparkListenerApplicationStart(
driverAttributes: Optio
itholic commented on PR #39937:
URL: https://github.com/apache/spark/pull/39937#issuecomment-1491110788
Test passed. @MaxGekk could you take a look when you find some time?
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153895088
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Soft
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153895796
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Soft
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153896106
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Soft
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153896257
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Soft
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153896951
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Soft
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153897632
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Soft
zhengruifeng commented on PR #40612:
URL: https://github.com/apache/spark/pull/40612#issuecomment-1491119721
LGTM
WweiL commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1153899173
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -2120,7 +2130,6 @@ class SparkConnectPlanner(val session:
LuciferYang commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r115394
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
github-actions[bot] commented on PR #39130:
URL: https://github.com/apache/spark/pull/39130#issuecomment-1491126263
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] closed pull request #39102: [SPARK-41555][SQL] Multi
sparkSession should share single SQLAppStatusStore
URL: https://github.com/apache/spark/pull/39102
github-actions[bot] closed pull request #38732: [SPARK-41210][K8S] Window based
executor failure tracking mechanism
URL: https://github.com/apache/spark/pull/38732
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153904044
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Soft
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1153904387
##
python/pyspark/sql/connect/readwriter.py:
##
@@ -37,7 +37,7 @@
from pyspark.sql.connect._typing import ColumnOrName, OptionalPrimitiveType
from pyspark.sql
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153906537
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Soft
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1153908437
##
python/pyspark/sql/connect/session.py:
##
@@ -489,10 +495,6 @@ def sparkContext(self) -> Any:
def streams(self) -> Any:
raise NotImplementedError("stre
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1153908962
##
python/pyspark/sql/connect/session.py:
##
@@ -14,6 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
rangadi commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153914621
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Software
rangadi commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153915130
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Software
rangadi commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153915713
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Software
rangadi commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153915858
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Software
LuciferYang commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153917326
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
sadikovi commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153918589
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
srowen commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153918687
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
srowen commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153918827
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
LuciferYang commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153920360
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
LuciferYang commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153925550
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
LuciferYang opened a new pull request, #40613:
URL: https://github.com/apache/spark/pull/40613
This reverts commit 5cb5d1fa66ad9d6e94beb17d3fda3a8f220bc371.
### What changes were proposed in this pull request?
### Why are the changes needed?
### Do
LuciferYang commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153926757
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
lucaspompeun opened a new pull request, #40614:
URL: https://github.com/apache/spark/pull/40614
### What changes were proposed in this pull request?
Correction of code highlights in SQL protobuf documentation.
old version:
![image](https://user-images.githubusercont
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153928232
##
core/src/main/scala/org/apache/spark/util/Utils.scala:
##
@@ -330,7 +351,9 @@ private[spark] object Utils extends Logging {
def createTempDir(
root: St
cloud-fan commented on PR #40601:
URL: https://github.com/apache/spark/pull/40601#issuecomment-1491166217
The change makes sense, but I'd say this is a legacy feature and the
existing behavior doesn't make sense at all. For string +/- interval, the
string can be a timestamp, timestamp_ntz and
cloud-fan commented on PR #40601:
URL: https://github.com/apache/spark/pull/40601#issuecomment-1491166746
Or we should probably fail it in ANSI mode, cc @gengliangwang
lucaspompeun commented on PR #40614:
URL: https://github.com/apache/spark/pull/40614#issuecomment-1491167649
I have corrected the problem that caused the build error in the GitHub workflow.
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153930483
##
core/src/main/scala/org/apache/spark/util/Utils.scala:
##
@@ -320,7 +320,28 @@ private[spark] object Utils extends Logging {
* newly created, and is not mark
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153930902
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Soft
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153931344
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Soft
RyanBerti opened a new pull request, #40615:
URL: https://github.com/apache/spark/pull/40615
### What changes were proposed in this pull request?
This PR adds a new dependency on the datasketches-java project, and provides
3 new functions which utilize Datasketches HllSketch and Union ins
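The sketch/union/estimate pattern behind such functions can be illustrated with a k-minimum-values (KMV) sketch. Note this is not the HyperLogLog algorithm DataSketches' HllSketch uses; it is a deliberately simpler mergeable distinct-count estimator with the same API shape:

```python
import hashlib

class KMVSketch:
    """K-minimum-values sketch: a simple mergeable distinct-count estimator."""

    def __init__(self, k=256):
        self.k = k
        self.values = set()  # the k smallest normalized hash values seen

    @staticmethod
    def _hash(item):
        h = hashlib.sha256(str(item).encode()).digest()
        return int.from_bytes(h[:8], "big") / 2**64  # uniform in [0, 1)

    def add(self, item):
        self.values.add(self._hash(item))
        if len(self.values) > self.k:
            self.values.remove(max(self.values))

    def union(self, other):
        # Merging two sketches: keep the k smallest hashes of the combined set.
        merged = KMVSketch(self.k)
        merged.values = set(sorted(self.values | other.values)[: self.k])
        return merged

    def estimate(self):
        if len(self.values) < self.k:
            return float(len(self.values))  # exact below saturation
        return (self.k - 1) / max(self.values)

s1, s2 = KMVSketch(), KMVSketch()
for i in range(5000):
    s1.add(i)
for i in range(2500, 7500):
    s2.add(i)
est = s1.union(s2).estimate()  # true distinct count is 7500
assert abs(est - 7500) / 7500 < 0.25
```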
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153932172
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Soft
LuciferYang closed pull request #40598: [SPARK-42974][CORE] Restore
`Utils#createTempDir` use `ShutdownHookManager#registerShutdownDeleteDir` to
cleanup tempDir
URL: https://github.com/apache/spark/pull/40598
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153934642
##
core/src/main/scala/org/apache/spark/util/Utils.scala:
##
@@ -320,7 +320,28 @@ private[spark] object Utils extends Logging {
* newly created, and is not mark
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153935588
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##
@@ -980,3 +1022,65 @@ object StreamingDeduplicateExec {
private v
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153935792
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##
@@ -980,3 +1022,65 @@ object StreamingDeduplicateExec {
private v
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153936004
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##
@@ -980,3 +1022,65 @@ object StreamingDeduplicateExec {
private v
yaooqinn commented on PR #40602:
URL: https://github.com/apache/spark/pull/40602#issuecomment-1491192780
cc @cloud-fan @HyukjinKwon thanks
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153951716
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##
@@ -980,3 +1022,65 @@ object StreamingDeduplicateExec {
private v
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153955764
##
common/network-common/src/test/java/org/apache/spark/network/StreamTestHelper.java:
##
@@ -49,7 +49,7 @@ private static ByteBuffer createBuffer(int bufSize) {
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153956628
##
common/network-shuffle/src/test/java/org/apache/spark/network/shuffle/ExternalBlockHandlerSuite.java:
##
@@ -125,7 +125,7 @@ private void checkDiagnosisResult(
hvanhovell commented on code in PR #40610:
URL: https://github.com/apache/spark/pull/40610#discussion_r1153957184
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/SparkResult.scala:
##
@@ -45,7 +45,7 @@ private[sql] class SparkResult[T](
priv
hvanhovell commented on code in PR #40610:
URL: https://github.com/apache/spark/pull/40610#discussion_r1153957374
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/SparkResult.scala:
##
@@ -134,24 +134,41 @@ private[sql] class SparkResult[T](
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153957701
##
common/network-shuffle/src/test/java/org/apache/spark/network/shuffle/TestShuffleDataContext.java:
##
@@ -47,8 +47,9 @@ public TestShuffleDataContext(int numLoca
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153957872
##
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java:
##
@@ -243,7 +243,9 @@ protected void serviceInit(Configuration external
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153958391
##
core/src/test/java/test/org/apache/spark/Java8RDDAPISuite.java:
##
@@ -246,7 +246,7 @@ public void mapPartitions() {
@Test
public void sequenceFile() thr
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153958604
##
core/src/test/java/test/org/apache/spark/JavaAPISuite.java:
##
@@ -93,7 +94,7 @@ public class JavaAPISuite implements Serializable {
@Before
public void se
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153958746
##
streaming/src/test/java/test/org/apache/spark/streaming/JavaAPISuite.java:
##
@@ -1476,7 +1476,7 @@ public void testCheckpointMasterRecovery() throws
Interrupte
gengliangwang commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153964890
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache So
gengliangwang commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153965249
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##
@@ -980,3 +1022,65 @@ object StreamingDeduplicateExec {
private
wangyum commented on PR #40601:
URL: https://github.com/apache/spark/pull/40601#issuecomment-1491229091
+1 for fail it in ANSI mode.
gengliangwang commented on PR #40601:
URL: https://github.com/apache/spark/pull/40601#issuecomment-1491232046
> My suggestion is don't touch it to keep legacy workloads running. We
should update the SQL queries to not use String so extensively.
+1, totally agree!
Hisoka-X commented on code in PR #40609:
URL: https://github.com/apache/spark/pull/40609#discussion_r1153973500
##
sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala:
##
@@ -625,6 +625,20 @@ class QueryExecutionErrorsSuite
}
}
+ test("
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153973985
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Soft
Hisoka-X commented on code in PR #40609:
URL: https://github.com/apache/spark/pull/40609#discussion_r1153975175
##
sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala:
##
@@ -625,6 +625,20 @@ class QueryExecutionErrorsSuite
}
}
+ test("
cloud-fan commented on code in PR #40545:
URL: https://github.com/apache/spark/pull/40545#discussion_r1153976307
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategy.scala:
##
@@ -220,9 +220,20 @@ object FileSourceStrategy extends Strategy wit
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153976547
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##
@@ -980,3 +1022,65 @@ object StreamingDeduplicateExec {
private v
cloud-fan commented on code in PR #40602:
URL: https://github.com/apache/spark/pull/40602#discussion_r1153979914
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -398,10 +398,24 @@ abstract class JdbcDialect extends Serializable with
Logging {
hvanhovell commented on code in PR #40610:
URL: https://github.com/apache/spark/pull/40610#discussion_r1153980266
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/SparkResult.scala:
##
@@ -134,24 +134,41 @@ private[sql] class SparkResult[T](
cloud-fan commented on code in PR #40602:
URL: https://github.com/apache/spark/pull/40602#discussion_r1153980662
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/DB2Dialect.scala:
##
@@ -113,8 +114,9 @@ private object DB2Dialect extends JdbcDialect {
// scalastyle:off lin
cloud-fan commented on PR #32987:
URL: https://github.com/apache/spark/pull/32987#issuecomment-1491247967
On second thought, I think the idea is valid. If a subexpression
will be evaluated at least once, and likely more than once due to conditional
branches, it should be benefici
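The intuition above can be sketched as lazy, at-most-once evaluation of the common subexpression. This is a toy model of the idea, not Spark's codegen:

```python
class Lazy:
    """Evaluate a (sub)expression at most once, on first use."""

    def __init__(self, fn):
        self.fn = fn
        self.evaluated = False
        self.value = None

    def get(self):
        if not self.evaluated:
            self.value = self.fn()
            self.evaluated = True
        return self.value

calls = []

def expensive(x):
    calls.append(x)  # record each real evaluation
    return x * x

def conditional_expr(x, flag):
    # The subexpression appears in both branches; hoisting it into a
    # Lazy slot means it runs once per row, not once per reference,
    # and not at all on branches that never touch it.
    sub = Lazy(lambda: expensive(x))
    if flag:
        return sub.get() + sub.get()  # referenced twice, evaluated once
    return sub.get()

assert conditional_expr(3, True) == 18
assert calls == [3]  # evaluated exactly once despite two references
```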
hvanhovell commented on code in PR #40611:
URL: https://github.com/apache/spark/pull/40611#discussion_r1153984573
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/arrow/ArrowSerializer.scala:
##
@@ -0,0 +1,529 @@
+/*
+ * Licensed to the Apache S
dongjoon-hyun commented on code in PR #40589:
URL: https://github.com/apache/spark/pull/40589#discussion_r1153984929
##
sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala:
##
@@ -500,6 +500,22 @@ class SparkSessionExtensionSuite extends SparkFunSuite
yaooqinn commented on PR #40583:
URL: https://github.com/apache/spark/pull/40583#issuecomment-1491249761
cc @cloud-fan @HyukjinKwon
dongjoon-hyun commented on code in PR #40589:
URL: https://github.com/apache/spark/pull/40589#discussion_r1153985168
##
sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala:
##
@@ -1161,3 +1177,12 @@ object AddLimit extends Rule[LogicalPlan] {
case
LuciferYang commented on PR #40610:
URL: https://github.com/apache/spark/pull/40610#issuecomment-1491250312
```
2023-03-30T16:09:39.936Z [info] - Dataset result destructive iterator *** FAILED *** (84 milliseconds)
2023-03-30T16:09:39.9382605Z
```
LuciferYang commented on PR #40605:
URL: https://github.com/apache/spark/pull/40605#issuecomment-1491252507
GA passed
dongjoon-hyun commented on code in PR #40589:
URL: https://github.com/apache/spark/pull/40589#discussion_r1153988178
##
sql/core/src/main/scala/org/apache/spark/sql/SparkSessionExtensions.scala:
##
@@ -111,11 +112,12 @@ class SparkSessionExtensions {
type FunctionDescription
dongjoon-hyun commented on code in PR #40589:
URL: https://github.com/apache/spark/pull/40589#discussion_r1153989800
##
sql/core/src/main/scala/org/apache/spark/sql/SparkSessionExtensions.scala:
##
@@ -111,11 +112,12 @@ class SparkSessionExtensions {
type FunctionDescription