wbo4958 commented on code in PR #45232:
URL: https://github.com/apache/spark/pull/45232#discussion_r1513987011
##
python/pyspark/resource/profile.py:
##
@@ -114,14 +122,26 @@ def id(self) -> int:
int
A unique id of this :class:`ResourceProfile`
mridulm commented on code in PR #45240:
URL: https://github.com/apache/spark/pull/45240#discussion_r1513970260
##
core/src/main/scala/org/apache/spark/internal/config/package.scala:
##
@@ -117,6 +117,14 @@ package object config {
.bytesConf(ByteUnit.MiB)
panbingkun opened a new pull request, #45400:
URL: https://github.com/apache/spark/pull/45400
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How was
mridulm commented on PR #45240:
URL: https://github.com/apache/spark/pull/45240#issuecomment-1980260288
I would like to understand the use case better here - it is still unclear to
me what characteristics you are aiming for with this PR.
Reduction in OOM is mentioned [as a
mihailom-db commented on code in PR #45383:
URL: https://github.com/apache/spark/pull/45383#discussion_r1513956640
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala:
##
@@ -764,6 +782,91 @@ abstract class TypeCoercionBase {
}
}
+
mihailom-db commented on code in PR #45383:
URL: https://github.com/apache/spark/pull/45383#discussion_r1513955577
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala:
##
@@ -764,6 +782,91 @@ abstract class TypeCoercionBase {
}
}
+
panbingkun opened a new pull request, #45399:
URL: https://github.com/apache/spark/pull/45399
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How was
mihailom-db commented on code in PR #45383:
URL: https://github.com/apache/spark/pull/45383#discussion_r1513941920
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala:
##
@@ -958,14 +1062,16 @@ object TypeCoercion extends TypeCoercionBase {
panbingkun commented on code in PR #45368:
URL: https://github.com/apache/spark/pull/45368#discussion_r1513941395
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/V2SessionCatalog.scala:
##
@@ -156,15 +156,6 @@ class V2SessionCatalog(catalog:
yaooqinn commented on PR #45396:
URL: https://github.com/apache/spark/pull/45396#issuecomment-1980207462
Thank you all. Merged to master
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the
yaooqinn closed pull request #45396: [SPARK-47293][CORE] Build batchSchema with
sparkSchema instead of append one by one
URL: https://github.com/apache/spark/pull/45396
uros-db commented on code in PR #45382:
URL: https://github.com/apache/spark/pull/45382#discussion_r1513909856
##
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java:
##
@@ -343,19 +346,33 @@ public boolean contains(final UTF8String substring) {
cloud-fan closed pull request #45389: [SPARK-46835][SQL][Collations] Join
support for non-binary collations
URL: https://github.com/apache/spark/pull/45389
cloud-fan commented on PR #45389:
URL: https://github.com/apache/spark/pull/45389#issuecomment-1980164987
thanks, merging to master!
HyukjinKwon commented on PR #45366:
URL: https://github.com/apache/spark/pull/45366#issuecomment-1980160010
https://github.com/HyukjinKwon/spark/actions/runs/8167770830
It should work now, I believe, but let me wait for the test result.
AngersZh commented on PR #45398:
URL: https://github.com/apache/spark/pull/45398#issuecomment-1980144655
> @AngersZh I guess you are changing an outdated codebase... This feature
has been supported since #34542 (Spark 3.3)
Yea...didn't see the change
AngersZh closed pull request #45398: [SPARK-47294][SQL]
OptimizeSkewInRebalanceRepartitions should support
ProjectExec(_,ShuffleQueryStageExec)
URL: https://github.com/apache/spark/pull/45398
erenavsarogullari commented on code in PR #45234:
URL: https://github.com/apache/spark/pull/45234#discussion_r1513862684
##
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/QueryStageExec.scala:
##
@@ -148,6 +148,18 @@ abstract class QueryStageExec extends
pan3793 commented on code in PR #45327:
URL: https://github.com/apache/spark/pull/45327#discussion_r1513846126
##
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java:
##
@@ -36,6 +38,7 @@
* of the file format).
*/
public final class
yaooqinn commented on PR #45384:
URL: https://github.com/apache/spark/pull/45384#issuecomment-1980118542
Thank you @dongjoon-hyun
dongjoon-hyun commented on PR #41805:
URL: https://github.com/apache/spark/pull/41805#issuecomment-1980114512
Ya, @LuciferYang is right.
To @Midhunpottammal , you need SPARK-43831 for Java 21 support.
ulysses-you commented on PR #45398:
URL: https://github.com/apache/spark/pull/45398#issuecomment-1980095300
@AngersZh I guess you are changing an outdated codebase... This feature
has been supported since https://github.com/apache/spark/pull/34542 (Spark 3.3)
doki23 commented on PR #45181:
URL: https://github.com/apache/spark/pull/45181#issuecomment-1980094365
> I don't think it fixes the issue completely and there are some problems
with the solution. I believe a proper solution is in the following comment:
[#45181
sweisdb commented on PR #45394:
URL: https://github.com/apache/spark/pull/45394#issuecomment-1980063354
@mridulm At its core, using AES-CTR mode without authentication is insecure
because someone can change RPC contents by simply XORing the ciphertext. This
can be demonstrated by modifying
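The malleability described above can be sketched with a toy XOR keystream standing in for AES-CTR's keystream (a hypothetical illustration, not Spark's RPC code; the bit-flipping attack works identically on real CTR, since CTR is also plaintext XOR keystream):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream standing in for AES-CTR's counter-mode keystream:
    # block i = SHA-256(key || nonce || i). For illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Stream ciphers encrypt and decrypt with the same XOR operation.
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key, nonce = b"secret-key", b"nonce"
plaintext = b"transfer $0000100 to alice"
ct = xor_encrypt(key, nonce, plaintext)

# Attacker flips bits WITHOUT knowing the key: XOR the ciphertext with
# (old_bytes XOR new_bytes) at the target offset (the amount field, bytes 9-16).
delta = bytes(a ^ b for a, b in zip(b"$0000100", b"$9999999"))
forged = ct[:9] + bytes(c ^ d for c, d in zip(ct[9:17], delta)) + ct[17:]

print(xor_encrypt(key, nonce, forged))  # b'transfer $9999999 to alice'
```

An authenticated mode like GCM defeats this: the forged ciphertext would fail tag verification before the modified plaintext is ever used.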
AngersZh commented on PR #45398:
URL: https://github.com/apache/spark/pull/45398#issuecomment-1980056417
ping @ulysses-you @yaooqinn
AngersZh opened a new pull request, #45398:
URL: https://github.com/apache/spark/pull/45398
### What changes were proposed in this pull request?
Currently, OptimizeSkewInRebalanceRepartitions only supports matching
ShuffleQueryStageExec
```
plan transformUp {
case
wForget commented on code in PR #45373:
URL: https://github.com/apache/spark/pull/45373#discussion_r1513786488
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -4483,6 +4478,17 @@ class Dataset[T] private[sql](
}
}
+ /** Returns an optimized plan
wForget opened a new pull request, #45397:
URL: https://github.com/apache/spark/pull/45397
### What changes were proposed in this pull request?
Add ConvertCommandResultToLocalRelation rule.
### Why are the changes needed?
### Does this PR introduce
zwangsheng opened a new pull request, #45396:
URL: https://github.com/apache/spark/pull/45396
### What changes were proposed in this pull request?
Simplify the building process of `batchSchema` by passing
`sparkSchema.fields` instead of adding fields one by one.
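The shape of that simplification can be sketched with hypothetical stand-ins (not the actual writer API): build the batch schema from all fields at once rather than appending in a loop.

```python
class BatchSchema:
    # Hypothetical stand-in for the batch schema builder, for illustration.
    def __init__(self, fields=None):
        self.fields = list(fields or [])

    def add_field(self, field):
        self.fields.append(field)

spark_fields = ["id", "name", "ts"]

# Before: append fields one by one.
batch_one_by_one = BatchSchema()
for f in spark_fields:
    batch_one_by_one.add_field(f)

# After: construct the batch schema from all fields in one call.
batch_bulk = BatchSchema(spark_fields)

assert batch_bulk.fields == batch_one_by_one.fields
```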
zhuqi-lucas commented on code in PR #45314:
URL: https://github.com/apache/spark/pull/45314#discussion_r1513776816
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/HasPartitionSize.java:
##
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation
HyukjinKwon closed pull request #45395: [SPARK-47277][3.5] PySpark util
function assertDataFrameEqual should not support streaming DF
URL: https://github.com/apache/spark/pull/45395
HyukjinKwon commented on PR #45395:
URL: https://github.com/apache/spark/pull/45395#issuecomment-1980009033
Merged to branch-3.5.
yaooqinn closed pull request #45384: [SPARK-47280][SQL] Remove timezone
limitation for ORACLE TIMESTAMP WITH TIMEZONE
URL: https://github.com/apache/spark/pull/45384
yaooqinn commented on PR #45384:
URL: https://github.com/apache/spark/pull/45384#issuecomment-1980006732
Thanks for the review @cloud-fan.
Merged to master
ueshin commented on code in PR #45378:
URL: https://github.com/apache/spark/pull/45378#discussion_r1513759920
##
python/pyspark/sql/profiler.py:
##
@@ -224,6 +224,54 @@ def dump(id: int) -> None:
for id in sorted(code_map.keys()):
dump(id)
+
cloud-fan closed pull request #45357: [SPARK-47247][SQL] Use smaller target
size when coalescing partitions with exploding joins
URL: https://github.com/apache/spark/pull/45357
melin commented on PR #38202:
URL: https://github.com/apache/spark/pull/38202#issuecomment-1979982318
cc @zwangsheng
cloud-fan commented on PR #45357:
URL: https://github.com/apache/spark/pull/45357#issuecomment-1979976679
thanks for the review, merging to master!
doki23 commented on PR #45181:
URL: https://github.com/apache/spark/pull/45181#issuecomment-1979974166
> I don't think it fixes the issue completely and there are some problems
with the solution. I believe a proper solution is in the following comment:
[#45181
mridulm closed pull request #45390: [SPARK-47146][CORE][3.5] Possible thread
leak when doing sort merge join
URL: https://github.com/apache/spark/pull/45390
mridulm commented on PR #45390:
URL: https://github.com/apache/spark/pull/45390#issuecomment-1979973804
Merged to branch-3.5 and branch-3.4
Thanks for fixing this @JacobZheng0927 !
wForget commented on code in PR #45373:
URL: https://github.com/apache/spark/pull/45373#discussion_r1513732088
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -4483,6 +4478,17 @@ class Dataset[T] private[sql](
}
}
+ /** Returns an optimized plan
doki23 commented on PR #45181:
URL: https://github.com/apache/spark/pull/45181#issuecomment-1979968619
Maybe
[this](https://github.com/apache/spark/pull/45181#issuecomment-1969241145) is
the proper solution.
But we need to find all the children of logicalPlan that are cached:
```scala
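The truncated snippet above gestures at collecting the cached descendants of a logical plan. A minimal sketch of that traversal over a toy plan tree (hypothetical node class standing in for Spark's LogicalPlan, in Python rather than Scala):

```python
class PlanNode:
    # Hypothetical stand-in for Spark's LogicalPlan, for illustration only.
    def __init__(self, name, children=(), cached=False):
        self.name = name
        self.children = list(children)
        self.cached = cached

def collect_cached(plan):
    """Return every node in the subtree (root included) that is cached."""
    found = [plan] if plan.cached else []
    for child in plan.children:
        found.extend(collect_cached(child))
    return found

scan = PlanNode("scan", cached=True)
filt = PlanNode("filter", [scan])
join = PlanNode("join", [filt, PlanNode("scan2", cached=True)])

print([n.name for n in collect_cached(join)])  # ['scan', 'scan2']
```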
zhuqi-lucas commented on code in PR #45314:
URL: https://github.com/apache/spark/pull/45314#discussion_r1513735868
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/HasPartitionSize.java:
##
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation
panbingkun closed pull request #45387: [SPARK-47283][PYSPARK][DOCS] Remove
Spark version drop down to the PySpark doc site
URL: https://github.com/apache/spark/pull/45387
LuciferYang commented on PR #41805:
URL: https://github.com/apache/spark/pull/41805#issuecomment-1979957804
@Midhunpottammal Spark 3.5 has not announced support for Java 21; this
feature is likely to be released in Spark 4.0 :)
HyukjinKwon commented on PR #45366:
URL: https://github.com/apache/spark/pull/45366#issuecomment-1979952811
https://github.com/HyukjinKwon/spark/actions/runs/8165935950/job/22323872479
yaooqinn commented on PR #45384:
URL: https://github.com/apache/spark/pull/45384#issuecomment-1979951059
cc @cloud-fan @dongjoon-hyun, thanks
doki23 commented on code in PR #45181:
URL: https://github.com/apache/spark/pull/45181#discussion_r1513722604
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -3878,6 +3880,8 @@ class Dataset[T] private[sql](
*/
def persist(newLevel: StorageLevel):
wForget commented on code in PR #45373:
URL: https://github.com/apache/spark/pull/45373#discussion_r1513716970
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -4483,6 +4478,17 @@ class Dataset[T] private[sql](
}
}
+ /** Returns an optimized plan
mridulm commented on PR #45394:
URL: https://github.com/apache/spark/pull/45394#issuecomment-1979944365
It is not clear to me why we should be making this change, what the benefits
are, and what the current limitations are.
Note that Spark 4.0 supports TLS - so if this is still required in
cloud-fan commented on PR #45388:
URL: https://github.com/apache/spark/pull/45388#issuecomment-1979942297
late LGTM
xinrong-meng commented on code in PR #45378:
URL: https://github.com/apache/spark/pull/45378#discussion_r1513714435
##
python/pyspark/sql/profiler.py:
##
@@ -224,6 +224,54 @@ def dump(id: int) -> None:
for id in sorted(code_map.keys()):
dump(id)
cloud-fan commented on code in PR #45373:
URL: https://github.com/apache/spark/pull/45373#discussion_r1513713801
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -4483,6 +4478,17 @@ class Dataset[T] private[sql](
}
}
+ /** Returns an optimized plan
yaooqinn commented on PR #45388:
URL: https://github.com/apache/spark/pull/45388#issuecomment-1979937660
merged to master. thanks @ulysses-you @HyukjinKwon.
yaooqinn closed pull request #45388: [SPARK-47285][SQL] AdaptiveSparkPlanExec
should always use the context.session
URL: https://github.com/apache/spark/pull/45388
doki23 commented on code in PR #45181:
URL: https://github.com/apache/spark/pull/45181#discussion_r1513704138
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -193,10 +193,12 @@ private[sql] object Dataset {
*/
@Stable
class Dataset[T] private[sql](
-
wForget commented on code in PR #45373:
URL: https://github.com/apache/spark/pull/45373#discussion_r1513704269
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -4483,6 +4478,17 @@ class Dataset[T] private[sql](
}
}
+ /** Returns an optimized plan
doki23 commented on code in PR #45181:
URL: https://github.com/apache/spark/pull/45181#discussion_r1513702086
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -193,10 +193,12 @@ private[sql] object Dataset {
*/
@Stable
class Dataset[T] private[sql](
-
cloud-fan commented on code in PR #45125:
URL: https://github.com/apache/spark/pull/45125#discussion_r1513701849
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/RewriteWithExpression.scala:
##
@@ -34,7 +34,7 @@ import
cloud-fan commented on code in PR #45125:
URL: https://github.com/apache/spark/pull/45125#discussion_r1513701628
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/RewriteWithExpression.scala:
##
@@ -34,7 +34,7 @@ import
cloud-fan commented on code in PR #45125:
URL: https://github.com/apache/spark/pull/45125#discussion_r1513701474
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala:
##
@@ -328,6 +328,34 @@ abstract class Optimizer(catalogManager:
HyukjinKwon commented on code in PR #45378:
URL: https://github.com/apache/spark/pull/45378#discussion_r1513693941
##
python/pyspark/sql/profiler.py:
##
@@ -224,6 +224,54 @@ def dump(id: int) -> None:
for id in sorted(code_map.keys()):
dump(id)
+
rangadi commented on code in PR #44323:
URL: https://github.com/apache/spark/pull/44323#discussion_r1513647366
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamingSymmetricHashJoinHelper.scala:
##
@@ -198,11 +198,15 @@ object
dongjoon-hyun commented on PR #45391:
URL: https://github.com/apache/spark/pull/45391#issuecomment-1979890992
Thank you, but we already have ongoing work for this. Let me close this,
@neilramaswamy .
- #45365
dongjoon-hyun closed pull request #45391: [WIP][BUILD] Upgrade RocksDB version
to 8.11.3
URL: https://github.com/apache/spark/pull/45391
neilramaswamy commented on PR #45391:
URL: https://github.com/apache/spark/pull/45391#issuecomment-1979889470
JDK 17 run: https://github.com/neilramaswamy/nr-spark/actions/runs/8164820755
WweiL opened a new pull request, #45395:
URL: https://github.com/apache/spark/pull/45395
### What changes were proposed in this pull request?
Backport https://github.com/apache/spark/pull/45380 to branch-3.5
The handy util function should not support streaming
sweisdb opened a new pull request, #45394:
URL: https://github.com/apache/spark/pull/45394
### What changes were proposed in this pull request?
The high-level issue is that Apache Spark's RPC encryption uses
unauthenticated CTR. We want to switch to GCM.
The complication is
github-actions[bot] closed pull request #43841: [SPARK-45954][SQL] Remove
redundant shuffles
URL: https://github.com/apache/spark/pull/43841
WweiL commented on PR #45380:
URL: https://github.com/apache/spark/pull/45380#issuecomment-1979850219
@HyukjinKwon Sure
HyukjinKwon closed pull request #45375: [SPARK-44746][PYTHON] Add more Python
UDTF documentation for functions that accept input tables
URL: https://github.com/apache/spark/pull/45375
HyukjinKwon commented on PR #45375:
URL: https://github.com/apache/spark/pull/45375#issuecomment-1979844986
Merged to master.
HyukjinKwon commented on code in PR #45373:
URL: https://github.com/apache/spark/pull/45373#discussion_r1513644737
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -4483,6 +4478,17 @@ class Dataset[T] private[sql](
}
}
+ /** Returns an optimized
HyukjinKwon commented on code in PR #45373:
URL: https://github.com/apache/spark/pull/45373#discussion_r1513644462
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -4483,6 +4478,17 @@ class Dataset[T] private[sql](
}
}
+ /** Returns an optimized
HyukjinKwon commented on code in PR #45373:
URL: https://github.com/apache/spark/pull/45373#discussion_r1513638446
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -655,7 +649,8 @@ class Dataset[T] private[sql](
* @group basic
* @since 2.4.0
*/
-
HyukjinKwon commented on PR #45380:
URL: https://github.com/apache/spark/pull/45380#issuecomment-1979831889
Merged to master.
@WweiL would you mind opening a backporting PR to branch-3.5?
HyukjinKwon closed pull request #45380: [SPARK-47277] PySpark util function
assertDataFrameEqual should not support streaming DF
URL: https://github.com/apache/spark/pull/45380
HyukjinKwon commented on PR #45366:
URL: https://github.com/apache/spark/pull/45366#issuecomment-1979825939
https://github.com/HyukjinKwon/spark/actions/runs/8164584039
anishshri-db commented on code in PR #45360:
URL: https://github.com/apache/spark/pull/45360#discussion_r1513631712
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDB.scala:
##
@@ -246,25 +246,35 @@ class RocksDB(
HyukjinKwon closed pull request #45393: [SPARK-47251][PYTHON][FOLLOWUP] Use
__name__ instead of string representation
URL: https://github.com/apache/spark/pull/45393
HyukjinKwon commented on PR #45393:
URL: https://github.com/apache/spark/pull/45393#issuecomment-1979820319
Merged to master.
sahnib commented on code in PR #45376:
URL: https://github.com/apache/spark/pull/45376#discussion_r1513626298
##
common/utils/src/main/resources/error/error-classes.json:
##
@@ -125,6 +125,12 @@
],
"sqlState" : "428FR"
},
+
anishshri-db commented on code in PR #45360:
URL: https://github.com/apache/spark/pull/45360#discussion_r1513624528
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDB.scala:
##
@@ -246,25 +246,35 @@ class RocksDB(
jingz-db commented on code in PR #45341:
URL: https://github.com/apache/spark/pull/45341#discussion_r1513621273
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StateTypesEncoderUtils.scala:
##
@@ -86,3 +88,53 @@ object StateTypesEncoder {
new
anishshri-db commented on code in PR #45341:
URL: https://github.com/apache/spark/pull/45341#discussion_r1513618200
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/TransformWithMapStateSuite.scala:
##
@@ -0,0 +1,392 @@
+/*
+ * Licensed to the Apache Software
sahnib commented on code in PR #45360:
URL: https://github.com/apache/spark/pull/45360#discussion_r1513594152
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDB.scala:
##
@@ -246,25 +246,35 @@ class RocksDB(
anishshri-db commented on code in PR #45376:
URL: https://github.com/apache/spark/pull/45376#discussion_r1513595185
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/TransformWithStateWatermarkSuite.scala:
##
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software
anishshri-db commented on code in PR #45376:
URL: https://github.com/apache/spark/pull/45376#discussion_r1513594533
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/TransformWithStateWatermarkSuite.scala:
##
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software
anishshri-db commented on code in PR #45376:
URL: https://github.com/apache/spark/pull/45376#discussion_r1513593565
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/TransformWithStateWatermarkSuite.scala:
##
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software
anishshri-db commented on code in PR #45376:
URL: https://github.com/apache/spark/pull/45376#discussion_r1513592033
##
sql/core/src/main/scala/org/apache/spark/sql/KeyValueGroupedDataset.scala:
##
@@ -676,6 +678,43 @@ class KeyValueGroupedDataset[K, V] private[sql](
)
}
anishshri-db commented on code in PR #45376:
URL: https://github.com/apache/spark/pull/45376#discussion_r1513590003
##
sql/core/src/main/scala/org/apache/spark/sql/KeyValueGroupedDataset.scala:
##
@@ -676,6 +678,43 @@ class KeyValueGroupedDataset[K, V] private[sql](
)
}
allisonwang-db commented on PR #45375:
URL: https://github.com/apache/spark/pull/45375#issuecomment-1979764609
Looks good! Also cc @ueshin and @HyukjinKwon
anishshri-db commented on code in PR #45376:
URL: https://github.com/apache/spark/pull/45376#discussion_r1513587938
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/TransformWithStateWatermarkSuite.scala:
##
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software
anishshri-db commented on code in PR #45376:
URL: https://github.com/apache/spark/pull/45376#discussion_r1513586387
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/EventTimeWatermarkExec.scala:
##
@@ -129,3 +129,37 @@ case class EventTimeWatermarkExec(
anishshri-db commented on code in PR #45376:
URL: https://github.com/apache/spark/pull/45376#discussion_r1513585434
##
sql/core/src/main/scala/org/apache/spark/sql/KeyValueGroupedDataset.scala:
##
@@ -676,6 +678,43 @@ class KeyValueGroupedDataset[K, V] private[sql](
)
}
anishshri-db commented on code in PR #45376:
URL: https://github.com/apache/spark/pull/45376#discussion_r1513584948
##
sql/core/src/main/scala/org/apache/spark/sql/KeyValueGroupedDataset.scala:
##
@@ -676,6 +678,43 @@ class KeyValueGroupedDataset[K, V] private[sql](
)
}
anishshri-db commented on code in PR #45376:
URL: https://github.com/apache/spark/pull/45376#discussion_r1513581167
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/EventTimeWatermark.scala:
##
@@ -40,7 +41,8 @@ object EventTimeWatermark {
case class