itholic commented on code in PR #42388:
URL: https://github.com/apache/spark/pull/42388#discussion_r1286623971
##
python/pyspark/pandas/tests/computation/test_compute.py:
##
@@ -101,16 +101,9 @@ def test_mode(self):
with self.assertRaises(ValueError):
LuciferYang commented on code in PR #42378:
URL: https://github.com/apache/spark/pull/42378#discussion_r1286623320
##
core/src/main/scala/org/apache/spark/deploy/history/EventLogFileCompactor.scala:
##
@@ -158,6 +159,8 @@ class EventLogFileCompactor(
)
}
itholic commented on code in PR #42369:
URL: https://github.com/apache/spark/pull/42369#discussion_r1286614995
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -1732,6 +1732,23 @@ def to(self, schema: StructType) -> "DataFrame":
to.__doc__ = PySparkDataFrame.to.__doc__
HyukjinKwon commented on code in PR #42369:
URL: https://github.com/apache/spark/pull/42369#discussion_r1286607344
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -1732,6 +1732,23 @@ def to(self, schema: StructType) -> "DataFrame":
to.__doc__ =
zhengruifeng commented on code in PR #42388:
URL: https://github.com/apache/spark/pull/42388#discussion_r1286598403
##
python/pyspark/pandas/tests/computation/test_compute.py:
##
@@ -101,16 +101,9 @@ def test_mode(self):
with self.assertRaises(ValueError):
zhengruifeng commented on code in PR #42388:
URL: https://github.com/apache/spark/pull/42388#discussion_r1286597113
##
python/pyspark/pandas/tests/computation/test_compute.py:
##
@@ -101,16 +101,9 @@ def test_mode(self):
with self.assertRaises(ValueError):
itholic commented on PR #42388:
URL: https://github.com/apache/spark/pull/42388#issuecomment-1668907825
cc @zhengruifeng
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
itholic commented on code in PR #42369:
URL: https://github.com/apache/spark/pull/42369#discussion_r1286592428
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -1732,6 +1732,23 @@ def to(self, schema: StructType) -> "DataFrame":
to.__doc__ = PySparkDataFrame.to.__doc__
lvyanquan commented on code in PR #42380:
URL: https://github.com/apache/spark/pull/42380#discussion_r1286588941
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala:
##
@@ -286,7 +286,14 @@ class JacksonParser(
}
case
itholic opened a new pull request, #42388:
URL: https://github.com/apache/spark/pull/42388
### What changes were proposed in this pull request?
This PR proposes to enable tests for pandas API on Spark with Spark Connect
### Why are the changes needed?
To increase
HyukjinKwon commented on code in PR #42380:
URL: https://github.com/apache/spark/pull/42380#discussion_r1286568544
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala:
##
@@ -286,7 +286,14 @@ class JacksonParser(
}
case
lvyanquan commented on PR #42380:
URL: https://github.com/apache/spark/pull/42380#issuecomment-1668865559
Sure. A test case was added under `How was this patch tested?`.
HyukjinKwon commented on code in PR #42369:
URL: https://github.com/apache/spark/pull/42369#discussion_r1286567129
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -1732,6 +1732,23 @@ def to(self, schema: StructType) -> "DataFrame":
to.__doc__ =
yaooqinn commented on PR #42295:
URL: https://github.com/apache/spark/pull/42295#issuecomment-1668862618
Hi @liangyu-1, please also take care of the CI
yaooqinn commented on PR #42295:
URL: https://github.com/apache/spark/pull/42295#issuecomment-1668845632
LGTM, please update the PR description according to the code updates so far
HeartSaVioR closed pull request #42378: [SPARK-44703][CORE] Log eventLog
rewrite duration when compact old event log files
URL: https://github.com/apache/spark/pull/42378
liangyu-1 commented on PR #42295:
URL: https://github.com/apache/spark/pull/42295#issuecomment-1668845332
I moved the ApplicationMaster instantiation and assignment inside the doAs
block, rebuilt the project, and tested it on my cluster; the shutdown hook
thread now has the correct
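The fix described above relies on the JVM behavior that a thread snapshots its security context when it is created, so a shutdown hook created inside `UserGroupInformation.doAs` runs as the intended user. A loose Python analogy (not Hadoop's actual `UserGroupInformation` API; all names here are illustrative) of that creation-time snapshot:

```python
import threading

# Simulated "current user", standing in for the JVM security context.
_current_user = "default"

def do_as(user, body):
    """Run `body` with the simulated user set, like a doAs block."""
    global _current_user
    saved, _current_user = _current_user, user
    try:
        return body()
    finally:
        _current_user = saved

def make_hook():
    """Create a 'shutdown hook' thread; it snapshots the user at creation."""
    user = _current_user  # snapshot at thread-creation time, as the JVM does
    seen = []
    t = threading.Thread(target=lambda: seen.append(user))
    t.start()
    t.join()
    return seen[0]

# A hook created inside do_as sees the proxy user; one created outside does not.
```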
hvanhovell opened a new pull request, #42387:
URL: https://github.com/apache/spark/pull/42387
### What changes were proposed in this pull request?
This PR adds the `udf` (with a return type), and `callUDF` functions to
`functions.scala` for the Spark Connect Scala Client.
### Why
HeartSaVioR commented on PR #42378:
URL: https://github.com/apache/spark/pull/42378#issuecomment-1668844958
Thanks, merging to master!
zhengruifeng commented on PR #42353:
URL: https://github.com/apache/spark/pull/42353#issuecomment-1668841085
merged to master and branch-3.5
zhengruifeng closed pull request #42353: [SPARK-44005][PYTHON] Improve error
messages for regular Python UDTFs that return non-tuple values
URL: https://github.com/apache/spark/pull/42353
zhengruifeng commented on PR #42385:
URL: https://github.com/apache/spark/pull/42385#issuecomment-1668839667
cc @HyukjinKwon
LuciferYang commented on PR #42360:
URL: https://github.com/apache/spark/pull/42360#issuecomment-1668838807
Merged into master and branch-3.5
LuciferYang closed pull request #42360: [SPARK-44689][CONNECT] Make the
exception handling of function `SparkConnectPlanner#unpackScalarScalaUDF` more
universal
URL: https://github.com/apache/spark/pull/42360
LuciferYang commented on PR #42167:
URL: https://github.com/apache/spark/pull/42167#issuecomment-1668837855
Merged into master. Thanks @HyukjinKwon @zhengruifeng @wangyum
LuciferYang closed pull request #42167: [SPARK-44554][INFRA] Make Python linter
related checks pass of branch-3.3/3.4 daily testing
URL: https://github.com/apache/spark/pull/42167
LuciferYang commented on PR #42370:
URL: https://github.com/apache/spark/pull/42370#issuecomment-1668836650
Thanks @dongjoon-hyun ~
pan3793 commented on code in PR #42336:
URL: https://github.com/apache/spark/pull/42336#discussion_r1286544346
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveFileFormat.scala:
##
@@ -122,6 +130,23 @@ class HiveFileFormat(fileSinkConf: FileSinkDesc)
bogao007 commented on PR #42386:
URL: https://github.com/apache/spark/pull/42386#issuecomment-1668820419
> @bogao007 PTAL since this touches a couple of streaming classes.
Nice, thanks for the change!
sunchao commented on PR #42324:
URL: https://github.com/apache/spark/pull/42324#issuecomment-1668811184
Thanks! merged to master/branch-3.4/branch-3.5
sunchao closed pull request #42324: [SPARK-44641][SQL] Incorrect result in
certain scenarios when SPJ is not triggered
URL: https://github.com/apache/spark/pull/42324
hvanhovell closed pull request #42367: [SPARK-43429][CONNECT] Add Default &
Active SparkSession for Scala Client
URL: https://github.com/apache/spark/pull/42367
hvanhovell commented on PR #42367:
URL: https://github.com/apache/spark/pull/42367#issuecomment-1668807299
Merging to master/3.5
HeartSaVioR closed pull request #42354: [SPARK-44683][SS] Logging level isn't
passed to RocksDB state store provider correctly
URL: https://github.com/apache/spark/pull/42354
HeartSaVioR commented on PR #42354:
URL: https://github.com/apache/spark/pull/42354#issuecomment-1668804815
Thanks! Merging to master/3.5!
HyukjinKwon closed pull request #42371: [SPARK-44694][PYTHON][CONNECT] Refactor
active sessions and expose them as an API
URL: https://github.com/apache/spark/pull/42371
HyukjinKwon commented on PR #42371:
URL: https://github.com/apache/spark/pull/42371#issuecomment-1668799750
Merged to master and branch-3.5.
hvanhovell commented on PR #42386:
URL: https://github.com/apache/spark/pull/42386#issuecomment-1668795001
@bogao007 PTAL since this touches a couple of streaming classes.
hvanhovell opened a new pull request, #42386:
URL: https://github.com/apache/spark/pull/42386
### What changes were proposed in this pull request?
This PR deduplicates the following classes:
- `org.apache.spark.sql.SaveMode`
-
anchovYu commented on code in PR #42276:
URL: https://github.com/apache/spark/pull/42276#discussion_r1286515285
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveLateralColumnAliasReference.scala:
##
@@ -131,95 +131,97 @@ object
itholic commented on code in PR #42332:
URL: https://github.com/apache/spark/pull/42332#discussion_r1286515194
##
python/pyspark/testing/utils.py:
##
@@ -464,23 +467,42 @@ def assertDataFrameEqual(
raise PySparkAssertionError(
anchovYu commented on code in PR #42276:
URL: https://github.com/apache/spark/pull/42276#discussion_r1286512577
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveLateralColumnAliasReference.scala:
##
@@ -131,95 +131,97 @@ object
anchovYu commented on code in PR #42276:
URL: https://github.com/apache/spark/pull/42276#discussion_r1286512367
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveLateralColumnAliasReference.scala:
##
@@ -131,95 +131,97 @@ object
anchovYu commented on code in PR #42276:
URL: https://github.com/apache/spark/pull/42276#discussion_r1285473731
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveLateralColumnAliasReference.scala:
##
@@ -131,138 +131,166 @@ object
utkarsh39 opened a new pull request, #42385:
URL: https://github.com/apache/spark/pull/42385
### What changes were proposed in this pull request?
PythonRunner, a utility that executes Python UDFs in Spark, uses two threads
in a producer-consumer model today. This multi-threading
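The two-thread producer-consumer model mentioned in the PR description can be sketched in plain Python (a minimal illustration with made-up names; it is not Spark's actual `PythonRunner` implementation): one thread feeds input rows through a bounded queue while a second thread consumes and processes them.

```python
import queue
import threading

def run_pipeline(rows):
    """Process rows with a writer thread and a reader thread."""
    q = queue.Queue(maxsize=4)  # bounded buffer between the two threads
    results = []
    SENTINEL = object()         # end-of-stream marker

    def producer():
        # Writer thread: feeds input rows downstream.
        for row in rows:
            q.put(row)
        q.put(SENTINEL)

    def consumer():
        # Reader thread: consumes and processes output.
        while True:
            item = q.get()
            if item is SENTINEL:
                break
            results.append(item * 2)  # stand-in for per-row UDF evaluation

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```

The bounded queue gives backpressure: the producer blocks once the buffer is full, which is the usual reason for splitting the work across two threads in this model.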
bogao007 commented on code in PR #42384:
URL: https://github.com/apache/spark/pull/42384#discussion_r1286500284
##
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/streaming/ClientStreamingQuerySuite.scala:
##
@@ -352,6 +352,26 @@ class ClientStreamingQuerySuite
hvanhovell commented on PR #42384:
URL: https://github.com/apache/spark/pull/42384#issuecomment-1668759250
cc @bogao007 PTAL
hvanhovell opened a new pull request, #42384:
URL: https://github.com/apache/spark/pull/42384
### What changes were proposed in this pull request?
This PR adds `Dataset.dropDuplicatesWithinWatermark` to the Spark Connect
Scala Client.
### Why are the changes needed?
Increase
itholic commented on code in PR #40370:
URL: https://github.com/apache/spark/pull/40370#discussion_r1286496031
##
python/docs/source/migration_guide/pyspark_upgrade.rst:
##
@@ -28,6 +28,7 @@ Upgrading from PySpark 3.5 to 4.0
* In Spark 4.0, ``Series.append`` has been removed
itholic commented on code in PR #40370:
URL: https://github.com/apache/spark/pull/40370#discussion_r1286495087
##
python/pyspark/pandas/tests/frame/test_reindexing.py:
##
@@ -854,7 +879,8 @@ def test_sample(self):
class FrameReidexingTests(FrameReindexingMixin,
itholic commented on code in PR #42371:
URL: https://github.com/apache/spark/pull/42371#discussion_r1286494110
##
python/pyspark/errors/error_classes.py:
##
@@ -622,6 +622,11 @@
"No active Spark session found. Please create a new Spark session before
running the code."
github-actions[bot] closed pull request #28488: [SPARK-29083][CORE] Prefetch
elements in rdd.toLocalIterator
URL: https://github.com/apache/spark/pull/28488
github-actions[bot] closed pull request #40608:
[SPARK-35198][CONNECT][CORE][PYTHON][SQL] Add support for calling debugCodegen
from Python & Java
URL: https://github.com/apache/spark/pull/40608
github-actions[bot] commented on PR #40949:
URL: https://github.com/apache/spark/pull/40949#issuecomment-1668735341
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
HyukjinKwon commented on PR #42378:
URL: https://github.com/apache/spark/pull/42378#issuecomment-1668725457
cc @HeartSaVioR FYI
srielau commented on code in PR #40474:
URL: https://github.com/apache/spark/pull/40474#discussion_r1286477051
##
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala:
##
@@ -567,6 +568,135 @@ class SparkSqlAstBuilder extends AstBuilder {
}
}
+
srielau commented on code in PR #40474:
URL: https://github.com/apache/spark/pull/40474#discussion_r1286473766
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TableOutputResolver.scala:
##
@@ -34,6 +35,36 @@ import
amaliujia commented on code in PR #42363:
URL: https://github.com/apache/spark/pull/42363#discussion_r1286462131
##
sql/api/src/main/scala/org/apache/spark/sql/catalyst/util/QuotingUtils.scala:
##
@@ -37,6 +49,18 @@ object QuotingUtils {
}
}
+ def quoted(namespace:
ueshin commented on code in PR #42371:
URL: https://github.com/apache/spark/pull/42371#discussion_r1286461234
##
python/pyspark/sql/connect/session.py:
##
@@ -93,14 +94,13 @@
from pyspark.sql.connect.udtf import UDTFRegistration
-# `_active_spark_session` stores the
HyukjinKwon commented on PR #42373:
URL: https://github.com/apache/spark/pull/42373#issuecomment-1668701756
Oh I just cherry-picked this because it seems fairly minor but I don't mind
reverting this out of branch-3.5
dongjoon-hyun commented on PR #42373:
URL: https://github.com/apache/spark/pull/42373#issuecomment-1668692419
Thank you. Is this Apache Spark 3.5-only bug fix?
HyukjinKwon commented on code in PR #42332:
URL: https://github.com/apache/spark/pull/42332#discussion_r1286455108
##
python/pyspark/testing/utils.py:
##
@@ -464,23 +467,42 @@ def assertDataFrameEqual(
raise PySparkAssertionError(
HyukjinKwon commented on code in PR #42332:
URL: https://github.com/apache/spark/pull/42332#discussion_r1286452469
##
python/pyspark/testing/utils.py:
##
@@ -464,23 +467,42 @@ def assertDataFrameEqual(
raise PySparkAssertionError(
HyukjinKwon closed pull request #42373: [MINOR][UI] Increasing the number of
significant digits for Fraction Cached of RDD
URL: https://github.com/apache/spark/pull/42373
HyukjinKwon commented on PR #42373:
URL: https://github.com/apache/spark/pull/42373#issuecomment-1668687126
Merged to master and branch-3.5.
HyukjinKwon closed pull request #41712: [SPARK-44132][SQL] Materialize `Stream`
of join column names to avoid codegen failure
URL: https://github.com/apache/spark/pull/41712
HyukjinKwon commented on PR #41712:
URL: https://github.com/apache/spark/pull/41712#issuecomment-1668686373
Merged to master and branch-3.5.
HyukjinKwon commented on PR #42321:
URL: https://github.com/apache/spark/pull/42321#issuecomment-1668686189
Let's fix up the linter
(https://github.com/vicennial/spark/actions/runs/5789336753/job/15690218462);
otherwise this should be good to go.
siying commented on PR #42354:
URL: https://github.com/apache/spark/pull/42354#issuecomment-1668686101
CC @HeartSaVioR
HyukjinKwon commented on code in PR #42371:
URL: https://github.com/apache/spark/pull/42371#discussion_r1286447366
##
python/pyspark/sql/connect/session.py:
##
@@ -93,14 +94,13 @@
from pyspark.sql.connect.udtf import UDTFRegistration
-# `_active_spark_session` stores
heyihong commented on code in PR #42363:
URL: https://github.com/apache/spark/pull/42363#discussion_r1286443119
##
sql/api/src/main/scala/org/apache/spark/sql/catalyst/util/QuotingUtils.scala:
##
@@ -37,6 +49,18 @@ object QuotingUtils {
}
}
+ def quoted(namespace:
juliuszsompolski commented on PR #42355:
URL: https://github.com/apache/spark/pull/42355#issuecomment-1668675937
@hvanhovell
juliuszsompolski commented on code in PR #42355:
URL: https://github.com/apache/spark/pull/42355#discussion_r1286442702
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/config/Connect.scala:
##
@@ -82,7 +93,7 @@ object Connect {
"Set to 0 for
juliuszsompolski commented on code in PR #42355:
URL: https://github.com/apache/spark/pull/42355#discussion_r1286441954
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/execution/ExecuteResponseObserver.scala:
##
@@ -85,12 +85,18 @@ private[connect] class
heyihong commented on code in PR #42363:
URL: https://github.com/apache/spark/pull/42363#discussion_r1286441314
##
sql/api/src/main/scala/org/apache/spark/sql/catalyst/util/QuotingUtils.scala:
##
@@ -37,6 +49,18 @@ object QuotingUtils {
}
}
+ def quoted(namespace:
juliuszsompolski commented on PR #42355:
URL: https://github.com/apache/spark/pull/42355#issuecomment-1668673016
@hvanhovell
> Is the ExecuteGrpcResponseSender thread safe? For example is detach() safe?
detach() is synchronized on the executeObserver it's attached to, like the
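The thread-safety pattern described in the reply (detach synchronizing on the observer it is attached to) can be sketched as follows; class and field names are hypothetical stand-ins, not the actual `ExecuteGrpcResponseSender`/`ExecuteResponseObserver` code:

```python
import threading

class ResponseObserver:
    """Holds at most one attached sender; guards changes with its own lock."""
    def __init__(self):
        self.lock = threading.Lock()
        self.sender = None

class ResponseSender:
    def __init__(self, observer):
        self.observer = observer

    def attach(self):
        # Attach under the observer's lock so attach/detach cannot race.
        with self.observer.lock:
            self.observer.sender = self

    def detach(self):
        # Synchronized on the observer it is attached to, mirroring the
        # synchronized detach() described above.
        with self.observer.lock:
            if self.observer.sender is self:
                self.observer.sender = None
```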
monkeyboy123 commented on PR #42376:
URL: https://github.com/apache/spark/pull/42376#issuecomment-1668672646
gently ping @viirya Could you help me to review it? also cc @cloud-fan
hvanhovell closed pull request #42368: [SPARK-44692][CONNECT][SQL] Move
Trigger(s) to sql/api
URL: https://github.com/apache/spark/pull/42368
hvanhovell commented on PR #42368:
URL: https://github.com/apache/spark/pull/42368#issuecomment-1668666826
Merging to master/3.5
ueshin commented on code in PR #42371:
URL: https://github.com/apache/spark/pull/42371#discussion_r1286422331
##
python/pyspark/sql/connect/session.py:
##
@@ -93,14 +94,13 @@
from pyspark.sql.connect.udtf import UDTFRegistration
-# `_active_spark_session` stores the
ueshin commented on code in PR #42371:
URL: https://github.com/apache/spark/pull/42371#discussion_r1286413587
##
python/pyspark/sql/connect/session.py:
##
@@ -628,20 +664,18 @@ def is_stopped(self) -> bool:
"""
return self.client.is_closed
-@classmethod
vinodkc commented on PR #42380:
URL: https://github.com/apache/spark/pull/42380#issuecomment-1668628765
Can you please add a test case for millisecond & microsecond precision
checks?
HyukjinKwon closed pull request #42266: [SPARK-44575][SQL][CONNECT] Implement
basic error translation
URL: https://github.com/apache/spark/pull/42266
HyukjinKwon commented on PR #42266:
URL: https://github.com/apache/spark/pull/42266#issuecomment-1668604341
Merged to master and branch-3.5.
HyukjinKwon closed pull request #42267: [SPARK-43606][PS] Remove `Int64Index` &
`Float64Index`
URL: https://github.com/apache/spark/pull/42267
HyukjinKwon commented on PR #42267:
URL: https://github.com/apache/spark/pull/42267#issuecomment-1668602519
Merged to master.
allisonwang-db commented on PR #42353:
URL: https://github.com/apache/spark/pull/42353#issuecomment-1668587368
cc @ueshin @HyukjinKwon
dongjoon-hyun commented on PR #42381:
URL: https://github.com/apache/spark/pull/42381#issuecomment-1668567470
Merged to master for Apache Spark 4.0.
dongjoon-hyun closed pull request #42381: [SPARK-44707][K8S] Use INFO log in
`ExecutorPodsWatcher.onClose` if `SparkContext` is stopped
URL: https://github.com/apache/spark/pull/42381
dongjoon-hyun commented on code in PR #42381:
URL: https://github.com/apache/spark/pull/42381#discussion_r1286375435
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsWatchSnapshotSource.scala:
##
@@ -86,12 +86,20 @@ class