panbingkun commented on PR #45326:
URL: https://github.com/apache/spark/pull/45326#issuecomment-1990682229
@dongjoon-hyun
To be more `reliable`, I will not convert this PR from `draft` to `review` until
GA can run successfully.
I have tried the test `ClientStreamingQuerySuite` several
zhengruifeng opened a new pull request, #45472:
URL: https://github.com/apache/spark/pull/45472
### What changes were proposed in this pull request?
Factor session-related tests out of `test_connect_basic`
### Why are the changes needed?
for testing parallelism
###
yaooqinn commented on PR #45385:
URL: https://github.com/apache/spark/pull/45385#issuecomment-1990595991
Instead of handling such a special case here, the JVM already provides helpful
arguments for dealing with OutOfMemoryError.
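For illustration, a sketch of the kind of JVM options likely meant here (the flags are standard HotSpot options, but whether the author had exactly these in mind is an assumption; the jar name and dump path are placeholders):

```shell
# Have the JVM fail fast (or capture a heap dump) on OOM instead of
# limping along, passed to Spark processes via extraJavaOptions:
spark-submit \
  --conf "spark.executor.extraJavaOptions=-XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/executor.hprof" \
  --conf "spark.driver.extraJavaOptions=-XX:+CrashOnOutOfMemoryError" \
  app.jar
```

With `-XX:+ExitOnOutOfMemoryError` the executor dies on the first OOM and Spark's normal executor-loss handling takes over, rather than special-casing the error in application code.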
--
This is an automated message from the Apache Git Service.
To
HeartSaVioR closed pull request #45360: [SPARK-47250][SS] Add additional
validations and NERF changes for RocksDB state provider and use of column
families
URL: https://github.com/apache/spark/pull/45360
HeartSaVioR commented on PR #45360:
URL: https://github.com/apache/spark/pull/45360#issuecomment-1990520156
Thanks! Merging to master.
WweiL commented on PR #45448:
URL: https://github.com/apache/spark/pull/45448#issuecomment-1990155844
Closing this since https://github.com/apache/spark/pull/45468 has the same
change
WweiL closed pull request #45448: [SPARK-47332][SS][Connect] Remove not needed
logic in PythonStreamingRunner
URL: https://github.com/apache/spark/pull/45448
WweiL commented on code in PR #45468:
URL: https://github.com/apache/spark/pull/45468#discussion_r1520814035
##
core/src/main/scala/org/apache/spark/api/python/StreamingPythonRunner.scala:
##
@@ -68,17 +68,11 @@ private[spark] class StreamingPythonRunner(
HeartSaVioR closed pull request #45023: [SPARK-46962][SS][PYTHON] Add interface
for python streaming data source API and implement python worker to run python
streaming data source
URL: https://github.com/apache/spark/pull/45023
HeartSaVioR commented on PR #45023:
URL: https://github.com/apache/spark/pull/45023#issuecomment-1990058421
Thanks! Merging to master.
HeartSaVioR commented on code in PR #45023:
URL: https://github.com/apache/spark/pull/45023#discussion_r1520802905
##
python/pyspark/sql/datasource.py:
##
@@ -426,6 +426,10 @@ def read(self, partition: InputPartition) ->
Iterator[Union[Tuple, Row]]:
in the final
panbingkun commented on code in PR #44665:
URL: https://github.com/apache/spark/pull/44665#discussion_r1520791983
##
python/pyspark/sql/functions/builtin.py:
##
@@ -15534,19 +15532,7 @@ def to_csv(col: "ColumnOrName", options:
Optional[Dict[str, str]] = None) -> Col
|
yaooqinn opened a new pull request, #45471:
URL: https://github.com/apache/spark/pull/45471
### What changes were proposed in this pull request?
This PR supports TimestampNTZ for DB2 `TIMESTAMP WITH TIME ZONE` when the
`preferTimestampNTZ` option is set to true by users
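A minimal usage sketch of the option described above (the connection URL and table name are placeholders; this assumes the standard Spark JDBC reader options and a reachable DB2 instance):

```scala
// Read a DB2 table; with preferTimestampNTZ set, timestamp columns are
// mapped to TimestampNTZType instead of session-local TimestampType.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:db2://host:50000/sample") // placeholder URL
  .option("dbtable", "events")                   // placeholder table
  .option("preferTimestampNTZ", "true")
  .load()
```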
panbingkun commented on PR #45469:
URL: https://github.com/apache/spark/pull/45469#issuecomment-1989938050
The PR related to spark-website is here:
https://github.com/apache/spark-website/pull/508
yaooqinn commented on PR #45440:
URL: https://github.com/apache/spark/pull/45440#issuecomment-1989935613
Merged to master
Thank you @cloud-fan @allisonwang-db
yaooqinn closed pull request #45440: [SPARK-46043][SQL][FOLLOWUP] do not
resolve v2 table provider with custom session catalog
URL: https://github.com/apache/spark/pull/45440
srielau opened a new pull request, #45470:
URL: https://github.com/apache/spark/pull/45470
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
yaooqinn commented on PR #45469:
URL: https://github.com/apache/spark/pull/45469#issuecomment-1989889832
Merged to master.
Thank you all.
yaooqinn closed pull request #45469: [MINOR][DOCS] Remove the extra text on
page sql-error-conditions-sqlstates
URL: https://github.com/apache/spark/pull/45469
panbingkun commented on PR #45469:
URL: https://github.com/apache/spark/pull/45469#issuecomment-1989795305
cc @itholic @zhengruifeng @HyukjinKwon
panbingkun commented on PR #45469:
URL: https://github.com/apache/spark/pull/45469#issuecomment-1989791290
This issue has existed since version `3.4.0`. After this PR, I will submit
a patch to fix the doc in `spark-website`.
panbingkun opened a new pull request, #45469:
URL: https://github.com/apache/spark/pull/45469
### What changes were proposed in this pull request?
This PR aims to remove the extra text `.. include:: /shared/replacements.md`
on page `sql-error-conditions-sqlstates.md`.
### Why are
HeartSaVioR commented on code in PR #45341:
URL: https://github.com/apache/spark/pull/45341#discussion_r1520711461
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MapStateImpl.scala:
##
@@ -73,24 +74,31 @@ class MapStateImpl[K, V](
}
/** Get the map
HeartSaVioR commented on code in PR #45341:
URL: https://github.com/apache/spark/pull/45341#discussion_r1520713399
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MapStateImpl.scala:
##
@@ -73,24 +74,31 @@ class MapStateImpl[K, V](
}
/** Get the map
jingz-db commented on code in PR #45341:
URL: https://github.com/apache/spark/pull/45341#discussion_r1520699470
##
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/MapStateSuite.scala:
##
@@ -67,6 +67,7 @@ class MapStateSuite extends StateVariableSuiteBase
MaxGekk commented on code in PR #44665:
URL: https://github.com/apache/spark/pull/44665#discussion_r1520694548
##
python/pyspark/sql/functions/builtin.py:
##
@@ -15534,19 +15532,7 @@ def to_csv(col: "ColumnOrName", options:
Optional[Dict[str, str]] = None) -> Col
|
HeartSaVioR commented on code in PR #45341:
URL: https://github.com/apache/spark/pull/45341#discussion_r1520688468
##
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/MapStateSuite.scala:
##
@@ -67,6 +67,7 @@ class MapStateSuite extends
nchammas commented on PR #45461:
URL: https://github.com/apache/spark/pull/45461#issuecomment-1989744426
Ah, the test failure is due to the generated error documentation that is
checked in to git.
#44971 will eliminate this kind of maintenance headache. (Also, look at that
diff
anishshri-db commented on code in PR #45051:
URL: https://github.com/apache/spark/pull/45051#discussion_r1520644679
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TransformWithStateExec.scala:
##
@@ -163,6 +249,16 @@ case class TransformWithStateExec(
anishshri-db commented on code in PR #45051:
URL: https://github.com/apache/spark/pull/45051#discussion_r1520644342
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StatefulProcessorHandleImpl.scala:
##
@@ -121,6 +123,46 @@ class StatefulProcessorHandleImpl(
panbingkun commented on PR #45326:
URL: https://github.com/apache/spark/pull/45326#issuecomment-1989711132
> It seems that one of `ClientStreamingQuerySuite` test hangs due to the
independent flakiness. Could you re-trigger it when it fails?
Sure, let me continue to observe and
MaxGekk commented on code in PR #45462:
URL: https://github.com/apache/spark/pull/45462#discussion_r1520631768
##
common/utils/src/main/resources/error/error-classes.json:
##
@@ -3004,6 +3004,12 @@
],
"sqlState" : "2200E"
},
+ "NULL_QUERY_STRING_EXECUTE_IMMEDIATE"
panbingkun commented on PR #45452:
URL: https://github.com/apache/spark/pull/45452#issuecomment-1989710105
> Please let me know if this is ready, @panbingkun .
Yeah, it's ready
xinrong-meng commented on PR #45461:
URL: https://github.com/apache/spark/pull/45461#issuecomment-1989702036
LGTM after fixing the test, thanks!
anishshri-db commented on code in PR #45341:
URL: https://github.com/apache/spark/pull/45341#discussion_r1520618069
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MapStateImpl.scala:
##
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation
jingz-db commented on code in PR #45341:
URL: https://github.com/apache/spark/pull/45341#discussion_r1520613982
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MapStateImpl.scala:
##
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
HyukjinKwon closed pull request #45411: [SPARK-47309][SQL][XML] Add schema
inference unit tests
URL: https://github.com/apache/spark/pull/45411
HyukjinKwon commented on PR #45411:
URL: https://github.com/apache/spark/pull/45411#issuecomment-1989688316
Merged to master.
allisonwang-db commented on code in PR #45468:
URL: https://github.com/apache/spark/pull/45468#discussion_r1520588014
##
core/src/main/scala/org/apache/spark/api/python/StreamingPythonRunner.scala:
##
@@ -68,17 +68,11 @@ private[spark] class StreamingPythonRunner(
HyukjinKwon commented on PR #45468:
URL: https://github.com/apache/spark/pull/45468#issuecomment-1989660024
I'm fine with this change but would defer to @ueshin
HyukjinKwon commented on PR #45448:
URL: https://github.com/apache/spark/pull/45448#issuecomment-1989657160
seems the test failure is related
HyukjinKwon closed pull request #45460: [SPARK-47341][Connect] Replace commands
with relations in a few tests in SparkConnectClientSuite
URL: https://github.com/apache/spark/pull/45460
HyukjinKwon commented on PR #45460:
URL: https://github.com/apache/spark/pull/45460#issuecomment-1989654626
Merged to master.
HyukjinKwon commented on PR #45461:
URL: https://github.com/apache/spark/pull/45461#issuecomment-1989652642
I think the test failure is related:
```
[info] - Error classes match with document *** FAILED *** (145 milliseconds)
[info] "...one of the DataFrame[s] but Spark is
```
HyukjinKwon closed pull request #45436: [SPARK-47327][SQL] Fix thread safety
issue in ICU Collator
URL: https://github.com/apache/spark/pull/45436
HyukjinKwon commented on PR #45436:
URL: https://github.com/apache/spark/pull/45436#issuecomment-1989650161
Merged to master.
wbo4958 commented on PR #45232:
URL: https://github.com/apache/spark/pull/45232#issuecomment-1989639850
Hi @grundprinzip, Could you help review it again?
ueshin commented on code in PR #45468:
URL: https://github.com/apache/spark/pull/45468#discussion_r1520516470
##
core/src/main/scala/org/apache/spark/SparkEnv.scala:
##
@@ -141,20 +141,22 @@ class SparkEnv (
pythonExec: String,
workerModule: String,
anishshri-db commented on code in PR #45051:
URL: https://github.com/apache/spark/pull/45051#discussion_r1520520070
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TransformWithStateExec.scala:
##
@@ -103,8 +116,12 @@ case class TransformWithStateExec(
anishshri-db commented on code in PR #45051:
URL: https://github.com/apache/spark/pull/45051#discussion_r1520519676
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StatefulProcessorHandleImpl.scala:
##
@@ -121,6 +123,46 @@ class StatefulProcessorHandleImpl(
allisonwang-db commented on PR #45468:
URL: https://github.com/apache/spark/pull/45468#issuecomment-1989589046
cc @ueshin @HyukjinKwon
allisonwang-db opened a new pull request, #45468:
URL: https://github.com/apache/spark/pull/45468
### What changes were proposed in this pull request?
This PR adds an extra config to env.createPythonWorker to make daemon mode
configurable to give more flexibility when
erenavsarogullari commented on code in PR #45234:
URL: https://github.com/apache/spark/pull/45234#discussion_r1520503365
##
sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala:
##
@@ -897,6 +897,138 @@ class AdaptiveQueryExecSuite
}
anishshri-db commented on code in PR #45051:
URL: https://github.com/apache/spark/pull/45051#discussion_r1520472228
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StatefulProcessorHandleImpl.scala:
##
@@ -121,6 +123,46 @@ class StatefulProcessorHandleImpl(
allisonwang-db commented on code in PR #45023:
URL: https://github.com/apache/spark/pull/45023#discussion_r1520455477
##
sql/core/src/main/scala/org/apache/spark/sql/execution/python/PythonStreamingSourceRunner.scala:
##
@@ -0,0 +1,208 @@
+/*
+ * Licensed to the Apache Software
allisonwang-db commented on code in PR #45023:
URL: https://github.com/apache/spark/pull/45023#discussion_r1520450074
##
python/pyspark/sql/datasource.py:
##
@@ -298,6 +320,133 @@ def read(self, partition: InputPartition) ->
Iterator[Union[Tuple, Row]]:
...
+class
jingz-db opened a new pull request, #45467:
URL: https://github.com/apache/spark/pull/45467
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
ahshahid commented on code in PR #45343:
URL: https://github.com/apache/spark/pull/45343#discussion_r1520395436
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSelfJoinSuite.scala:
##
@@ -498,4 +559,70 @@ class DataFrameSelfJoinSuite extends QueryTest with
shujingyang-db commented on code in PR #45411:
URL: https://github.com/apache/spark/pull/45411#discussion_r1520393576
##
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/xml/XmlInferSchemaSuite.scala:
##
@@ -0,0 +1,296 @@
+/*
+ * Licensed to the Apache
shujingyang-db commented on code in PR #45411:
URL: https://github.com/apache/spark/pull/45411#discussion_r1520394446
##
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/xml/TestXmlData.scala:
##
@@ -68,4 +68,444 @@ private[xml] trait TestXmlData {
f(dir)
yhosny opened a new pull request, #45466:
URL: https://github.com/apache/spark/pull/45466
### What changes were proposed in this pull request?
Convert JsonFunctionsSuite.scala to an XML equivalent. Note that XML doesn't
implement all JSON functions, like json_tuple,
chaoqin-li1123 commented on code in PR #45023:
URL: https://github.com/apache/spark/pull/45023#discussion_r1520368942
##
sql/core/src/test/scala/org/apache/spark/sql/execution/python/PythonStreamingDataSourceSuite.scala:
##
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache
ahshahid commented on code in PR #45343:
URL: https://github.com/apache/spark/pull/45343#discussion_r1520323348
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSelfJoinSuite.scala:
##
@@ -498,4 +559,70 @@ class DataFrameSelfJoinSuite extends QueryTest with
ahshahid commented on code in PR #45343:
URL: https://github.com/apache/spark/pull/45343#discussion_r1520314613
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSelfJoinSuite.scala:
##
@@ -498,4 +559,70 @@ class DataFrameSelfJoinSuite extends QueryTest with
anishshri-db commented on code in PR #45360:
URL: https://github.com/apache/spark/pull/45360#discussion_r1520296947
##
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/RocksDBSuite.scala:
##
@@ -582,7 +636,7 @@ class RocksDBSuite extends
anishshri-db commented on code in PR #45360:
URL: https://github.com/apache/spark/pull/45360#discussion_r1520296736
##
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/RocksDBSuite.scala:
##
@@ -536,6 +536,67 @@ class RocksDBSuite extends
sunchao commented on code in PR #45267:
URL: https://github.com/apache/spark/pull/45267#discussion_r1520267117
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala:
##
@@ -635,6 +636,22 @@ trait ShuffleSpec {
*/
def
GideonPotok commented on PR #45453:
URL: https://github.com/apache/spark/pull/45453#issuecomment-1989191094
> @GideonPotok - I think that better approach for benchmarking collation
track is to start with the basics. e.g. unit benchmarks against
`CollationFactory` +`UTF8String`. E.g. what
sunchao commented on code in PR #45267:
URL: https://github.com/apache/spark/pull/45267#discussion_r1520187304
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/functions/ReducibleFunction.java:
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software
jingz-db commented on code in PR #45341:
URL: https://github.com/apache/spark/pull/45341#discussion_r1520256936
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MapStateImpl.scala:
##
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
chaoqin-li1123 commented on code in PR #45023:
URL: https://github.com/apache/spark/pull/45023#discussion_r1520229021
##
python/pyspark/sql/datasource.py:
##
@@ -298,6 +320,133 @@ def read(self, partition: InputPartition) ->
Iterator[Union[Tuple, Row]]:
...
+class
dongjoon-hyun commented on PR #45408:
URL: https://github.com/apache/spark/pull/45408#issuecomment-1989116494
Could you do the following to re-generate the golden files, @ted-jenks ?
```
SPARK_GENERATE_GOLDEN_FILES=1 build/sbt "sql/testOnly
org.apache.spark.sql.SQLQueryTestSuite"
```
dongjoon-hyun commented on PR #45451:
URL: https://github.com/apache/spark/pull/45451#issuecomment-1989111676
Thank you, @panbingkun .
dongjoon-hyun closed pull request #45451: [SPARK-47339][BUILD] Upgrade
checkStyle to `10.14.0`
URL: https://github.com/apache/spark/pull/45451
dongjoon-hyun commented on PR #45326:
URL: https://github.com/apache/spark/pull/45326#issuecomment-1989108153
It seems that one of `ClientStreamingQuerySuite` test hangs due to the
independent flakiness. Could you re-trigger it when it fails?
```
[info] *** Test still running after 3
```
dongjoon-hyun commented on PR #45459:
URL: https://github.com/apache/spark/pull/45459#issuecomment-1989103412
Merged to master. Thank you all.
dongjoon-hyun closed pull request #45459:
[SPARK-45245][CONNECT][TESTS][FOLLOW-UP] Remove unneeded Matchers trait in the
test
URL: https://github.com/apache/spark/pull/45459
dongjoon-hyun commented on PR #45463:
URL: https://github.com/apache/spark/pull/45463#issuecomment-1989098700
Thank you, @cashmand and all. Merged to master.
dongjoon-hyun closed pull request #45463: [SPARK-45827][SQL][FOLLOWUP] Fix for
collation
URL: https://github.com/apache/spark/pull/45463
rayhondo closed pull request #45465: Raykim/iceberg 150
URL: https://github.com/apache/spark/pull/45465
rayhondo opened a new pull request, #45465:
URL: https://github.com/apache/spark/pull/45465
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
johnnywalker commented on code in PR #45410:
URL: https://github.com/apache/spark/pull/45410#discussion_r1520132359
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala:
##
@@ -153,12 +153,12 @@ object JDBCRDD extends Logging {
*/
class
peter-toth commented on code in PR #45343:
URL: https://github.com/apache/spark/pull/45343#discussion_r1520115101
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSelfJoinSuite.scala:
##
@@ -498,4 +559,70 @@ class DataFrameSelfJoinSuite extends QueryTest with
peter-toth commented on code in PR #45343:
URL: https://github.com/apache/spark/pull/45343#discussion_r1520078411
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSelfJoinSuite.scala:
##
@@ -498,4 +559,70 @@ class DataFrameSelfJoinSuite extends QueryTest with
EnricoMi commented on PR #45464:
URL: https://github.com/apache/spark/pull/45464#issuecomment-1988911223
> @EnricoMi this looks much simpler than my previous attempt #38357
Thanks for the pointer! I have a PR for driver log support in the pipeline.
EnricoMi commented on code in PR #45464:
URL: https://github.com/apache/spark/pull/45464#discussion_r1520050341
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesExecutorBackend.scala:
##
@@ -28,6 +28,46 @@ import
peter-toth commented on code in PR #45343:
URL: https://github.com/apache/spark/pull/45343#discussion_r1520049076
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSelfJoinSuite.scala:
##
@@ -498,4 +559,70 @@ class DataFrameSelfJoinSuite extends QueryTest with
EnricoMi commented on code in PR #45464:
URL: https://github.com/apache/spark/pull/45464#discussion_r1519987571
##
docs/configuration.md:
##
@@ -1627,15 +1627,13 @@ Apart from these, the following properties are also
available, and may be useful
dbatomic commented on PR #45453:
URL: https://github.com/apache/spark/pull/45453#issuecomment-1988798774
@GideonPotok - I think that better approach for benchmarking collation track
is to start with the basics. e.g. unit benchmarks against `CollationFactory`
+`UTF8String`. E.g. what is the
stevomitric commented on code in PR #45421:
URL: https://github.com/apache/spark/pull/45421#discussion_r1519971782
##
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java:
##
@@ -378,13 +378,6 @@ public boolean matchAt(final UTF8String s, int pos) {
miland-db commented on PR #45423:
URL: https://github.com/apache/spark/pull/45423#issuecomment-1988766106
Thank you! @MaxGekk and thank you @HyukjinKwon for the comments
MaxGekk commented on code in PR #45407:
URL: https://github.com/apache/spark/pull/45407#discussion_r1519942137
##
sql/api/src/main/scala/org/apache/spark/sql/catalyst/util/SparkIntervalUtils.scala:
##
@@ -131,24 +131,21 @@ trait SparkIntervalUtils {
*/
def
sahnib commented on code in PR #45051:
URL: https://github.com/apache/spark/pull/45051#discussion_r1519943584
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TransformWithStateExec.scala:
##
@@ -103,8 +116,12 @@ case class TransformWithStateExec(
val
pan3793 commented on code in PR #45464:
URL: https://github.com/apache/spark/pull/45464#discussion_r1519873684
##
docs/configuration.md:
##
@@ -1627,15 +1627,13 @@ Apart from these, the following properties are also
available, and may be useful
pan3793 commented on PR #45464:
URL: https://github.com/apache/spark/pull/45464#issuecomment-1988639547
@EnricoMi this looks much simpler than my previous attempt
https://github.com/apache/spark/pull/38357