grundprinzip commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1562045410
##
python/pyspark/sql/connect/session.py:
##
@@ -1034,6 +1034,20 @@ def profile(self) -> Profile:
profile.__doc__ = PySparkSession.profile.__doc__
+
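The hunk above touches the line `profile.__doc__ = PySparkSession.profile.__doc__`, a plain-Python pattern for keeping the Spark Connect implementation's docs in sync with the classic API. A minimal sketch of that pattern, with hypothetical stand-in classes (`ClassicSession`/`ConnectSession` are not real PySpark names):

```python
# Sketch of the docstring-inheritance pattern seen in
# python/pyspark/sql/connect/session.py: the Connect-side method
# borrows the classic API's docstring instead of duplicating it,
# so help() shows identical documentation for both implementations.
# ClassicSession and ConnectSession are hypothetical stand-ins.

class ClassicSession:
    def profile(self):
        """Return the profiler collector of this session."""

class ConnectSession:
    def profile(self):
        # Reimplemented for Spark Connect; docs are borrowed below.
        pass

# Copy the documentation rather than maintaining two copies of the text.
ConnectSession.profile.__doc__ = ClassicSession.profile.__doc__

print(ConnectSession.profile.__doc__)
```

Function `__doc__` attributes are writable, which is what makes this one-liner work without decorators or metaclasses.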
HyukjinKwon commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1562042451
##
python/pyspark/sql/connect/session.py:
##
@@ -1034,6 +1034,20 @@ def profile(self) -> Profile:
profile.__doc__ = PySparkSession.profile.__doc__
+
panbingkun commented on PR #46000:
URL: https://github.com/apache/spark/pull/46000#issuecomment-2051022992
> `WriteInputFormatTestDataGenerator` is called by `test_readwrite.py`. Even
though the CI has passed, I still want to confirm that putting it into test.jar
will not have a negative
HyukjinKwon commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1562041299
##
python/pyspark/sql/connect/session.py:
##
@@ -1034,6 +1034,20 @@ def profile(self) -> Profile:
profile.__doc__ = PySparkSession.profile.__doc__
+
zhengruifeng commented on PR #46023:
URL: https://github.com/apache/spark/pull/46023#issuecomment-2051011882
It will need separate PRs for 3.4/3.5.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to
panbingkun commented on code in PR #45957:
URL: https://github.com/apache/spark/pull/45957#discussion_r1562034403
##
common/utils/src/main/scala/org/apache/spark/internal/Logging.scala:
##
@@ -105,9 +105,10 @@ trait Logging {
val context = new java.util.HashMap[String,
zhengruifeng opened a new pull request, #46023:
URL: https://github.com/apache/spark/pull/46023
### What changes were proposed in this pull request?
`DataFrameWriterV2.overwrite` fails with invalid plan
### Why are the changes needed?
bug fix
### Does this PR
zhengruifeng commented on code in PR #46023:
URL: https://github.com/apache/spark/pull/46023#discussion_r1562034412
##
python/pyspark/sql/tests/test_readwriter.py:
##
@@ -252,6 +252,11 @@ def test_create_without_provider(self):
):
mridulm commented on PR #46014:
URL: https://github.com/apache/spark/pull/46014#issuecomment-2051007024
My concern with adding `3.4.3` was that it would typically mean it is
available in `3.5.x` - but it won't be, except for specific versions of `3.5`.
Should we document it as such?
gengliangwang commented on code in PR #45957:
URL: https://github.com/apache/spark/pull/45957#discussion_r1562024837
##
common/utils/src/main/scala/org/apache/spark/internal/Logging.scala:
##
@@ -105,9 +105,10 @@ trait Logging {
val context = new
gengliangwang closed pull request #45975: [SPARK-47792][CORE] Make the value of
MDC can support `null` & cannot be `MessageWithContext`
URL: https://github.com/apache/spark/pull/45975
gengliangwang commented on PR #45975:
URL: https://github.com/apache/spark/pull/45975#issuecomment-2050993550
Thanks, merging to master
grundprinzip commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1562015569
##
python/pyspark/sql/connect/session.py:
##
@@ -1034,6 +1034,20 @@ def profile(self) -> Profile:
profile.__doc__ = PySparkSession.profile.__doc__
+
HyukjinKwon commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1562014004
##
python/pyspark/cloudpickle/cloudpickle.py:
##
@@ -1461,7 +1461,7 @@ def dump(obj, file, protocol=None, buffer_callback=None):
Pickler(file,
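The `dump(obj, file, protocol=None, buffer_callback=None)` signature in this cloudpickle hunk mirrors the standard-library `pickle` API. A minimal round-trip sketch using stdlib `pickle` (not cloudpickle itself), just to illustrate the contract:

```python
import io
import pickle

# Round-trip an object through an in-memory file, as cloudpickle's
# dump(obj, file, protocol=None, ...) does via a Pickler under the hood.
buf = io.BytesIO()
pickle.dump({"a": 1, "b": [2, 3]}, buf, protocol=pickle.HIGHEST_PROTOCOL)

buf.seek(0)
restored = pickle.load(buf)
print(restored)  # {'a': 1, 'b': [2, 3]}
```

cloudpickle extends this interface to objects plain `pickle` rejects (lambdas, locally defined classes), which is why PySpark vendors it.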
beliefer commented on PR #45982:
URL: https://github.com/apache/spark/pull/45982#issuecomment-2050974804
@yaooqinn @dongjoon-hyun @bjornjorgensen Thank you!
grundprinzip commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1562012354
##
python/pyspark/cloudpickle/cloudpickle.py:
##
@@ -1461,7 +1461,7 @@ def dump(obj, file, protocol=None, buffer_callback=None):
Pickler(file,
HeartSaVioR commented on PR #45960:
URL: https://github.com/apache/spark/pull/45960#issuecomment-2050965926
Thanks! Merging to master.
HyukjinKwon commented on code in PR #46019:
URL: https://github.com/apache/spark/pull/46019#discussion_r1562007347
##
core/src/main/scala/org/apache/spark/api/python/WriteInputFormatTestDataGenerator.scala:
##
@@ -104,6 +105,7 @@ private[python] class
HeartSaVioR commented on PR #45960:
URL: https://github.com/apache/spark/pull/45960#issuecomment-2050965820
The CI failure happened in a known flaky suite - SparkSessionE2ESuite.
HeartSaVioR closed pull request #45960: [SPARK-47784][SS] Merge TTLMode and
TimeoutMode into a single TimeMode.
URL: https://github.com/apache/spark/pull/45960
HyukjinKwon commented on code in PR #46019:
URL: https://github.com/apache/spark/pull/46019#discussion_r1562007477
##
core/src/main/scala/org/apache/spark/api/python/WriteInputFormatTestDataGenerator.scala:
##
@@ -104,6 +105,7 @@ private[python] class
HyukjinKwon commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1562004935
##
python/pyspark/sql/connect/session.py:
##
@@ -1034,6 +1034,20 @@ def profile(self) -> Profile:
profile.__doc__ = PySparkSession.profile.__doc__
+
HyukjinKwon commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1562005836
##
python/pyspark/cloudpickle/cloudpickle.py:
##
@@ -1461,7 +1461,7 @@ def dump(obj, file, protocol=None, buffer_callback=None):
Pickler(file,
HyukjinKwon commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1562005257
##
python/pyspark/sql/connect/session.py:
##
@@ -1034,6 +1034,20 @@ def profile(self) -> Profile:
profile.__doc__ = PySparkSession.profile.__doc__
+
grundprinzip commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1562002375
##
python/pyspark/cloudpickle/cloudpickle.py:
##
@@ -1461,7 +1461,7 @@ def dump(obj, file, protocol=None, buffer_callback=None):
Pickler(file,
grundprinzip commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1562001832
##
python/pyspark/sql/tests/connect/streaming/test_parity_foreach_batch.py:
##
@@ -30,33 +30,73 @@ def
HyukjinKwon commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1562000962
##
python/pyspark/sql/tests/connect/streaming/test_parity_foreach_batch.py:
##
@@ -30,33 +30,73 @@ def
HyukjinKwon commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1562000621
##
python/pyspark/cloudpickle/cloudpickle.py:
##
@@ -1461,7 +1461,7 @@ def dump(obj, file, protocol=None, buffer_callback=None):
Pickler(file,
yaooqinn commented on PR #46013:
URL: https://github.com/apache/spark/pull/46013#issuecomment-2050953196
> Many of the workloads will just fail in the parser/analyzer and the
failures are not related to data quality issues. Also, there is no easy
workaround like "try_*" functions.
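The `try_*` family referenced above (e.g. `try_cast`, `try_divide`) returns NULL instead of raising when ANSI mode would fail at runtime. A rough Python analogue of that contract, with a hypothetical `try_cast_int` helper (not a PySpark API):

```python
from typing import Optional

def try_cast_int(value: str) -> Optional[int]:
    # Mirrors the contract of SQL try_cast(... AS INT): return None
    # (NULL) for malformed input instead of raising. This is the easy
    # workaround available for ANSI runtime errors; as the comment
    # notes, parser/analyzer failures have no such escape hatch.
    try:
        return int(value)
    except ValueError:
        return None

print(try_cast_int("42"))    # 42
print(try_cast_int("oops"))  # None
```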
grundprinzip commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1561992506
##
python/pyspark/sql/connect/session.py:
##
@@ -1034,6 +1034,20 @@ def profile(self) -> Profile:
profile.__doc__ = PySparkSession.profile.__doc__
+
gengliangwang commented on PR #46013:
URL: https://github.com/apache/spark/pull/46013#issuecomment-2050941358
> And what about the default values of the ANSI-related sub-configurations,
such as
spark.sql.ansi.enforceReservedKeywords
spark.sql.ansi.relationPrecedence
HyukjinKwon commented on code in PR #46002:
URL: https://github.com/apache/spark/pull/46002#discussion_r1561991509
##
python/pyspark/sql/connect/session.py:
##
@@ -1034,6 +1034,20 @@ def profile(self) -> Profile:
profile.__doc__ = PySparkSession.profile.__doc__
+
yaooqinn commented on PR #46013:
URL: https://github.com/apache/spark/pull/46013#issuecomment-2050928965
https://spark.apache.org/docs/latest/sql-ref-ansi-compliance.html should be
updated too.
And what about the default values of the ANSI-related sub-configurations,
such as
-
LuciferYang commented on PR #46000:
URL: https://github.com/apache/spark/pull/46000#issuecomment-2050911774
`WriteInputFormatTestDataGenerator` is called by `test_readwrite.py`. Even
though the CI has passed, I still want to confirm that putting it into test.jar
will not have a negative
dongjoon-hyun commented on PR #45983:
URL: https://github.com/apache/spark/pull/45983#issuecomment-2050905018
Thank you for the info. :)
dongjoon-hyun commented on PR #46019:
URL: https://github.com/apache/spark/pull/46019#issuecomment-2050904083
Thank you for the swift update.
panbingkun commented on code in PR #46019:
URL: https://github.com/apache/spark/pull/46019#discussion_r1561973227
##
connector/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:
##
@@ -40,6 +40,7 @@ import org.apache.spark.internal.Logging
*
dongjoon-hyun commented on PR #46013:
URL: https://github.com/apache/spark/pull/46013#issuecomment-2050903327
Yes, I want to have AS-IS implementation and to define clear boundary.
panbingkun commented on code in PR #46019:
URL: https://github.com/apache/spark/pull/46019#discussion_r1561971976
##
connector/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:
##
@@ -40,6 +40,7 @@ import org.apache.spark.internal.Logging
*
dongjoon-hyun commented on code in PR #46019:
URL: https://github.com/apache/spark/pull/46019#discussion_r1561971654
##
connector/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:
##
@@ -40,6 +40,7 @@ import org.apache.spark.internal.Logging
panbingkun commented on code in PR #46019:
URL: https://github.com/apache/spark/pull/46019#discussion_r1561971448
##
core/src/main/scala/org/apache/spark/api/python/WriteInputFormatTestDataGenerator.scala:
##
@@ -56,33 +57,39 @@ case class TestWritable(var str: String, var int:
cloud-fan commented on PR #46013:
URL: https://github.com/apache/spark/pull/46013#issuecomment-2050899887
Yeah, there are still some gaps between Spark ANSI mode and the SQL standard,
but overall ANSI is still a better default than no ANSI at all.
LuciferYang commented on code in PR #46019:
URL: https://github.com/apache/spark/pull/46019#discussion_r1561970178
##
core/src/main/scala/org/apache/spark/api/python/WriteInputFormatTestDataGenerator.scala:
##
@@ -56,33 +57,39 @@ case class TestWritable(var str: String, var
panbingkun commented on code in PR #46019:
URL: https://github.com/apache/spark/pull/46019#discussion_r1561970075
##
connector/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:
##
@@ -40,6 +40,7 @@ import org.apache.spark.internal.Logging
*
dongjoon-hyun commented on code in PR #46019:
URL: https://github.com/apache/spark/pull/46019#discussion_r1561969234
##
connector/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:
##
@@ -40,6 +40,7 @@ import org.apache.spark.internal.Logging
LuciferYang commented on code in PR #46019:
URL: https://github.com/apache/spark/pull/46019#discussion_r1561969284
##
connector/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:
##
@@ -40,6 +40,7 @@ import org.apache.spark.internal.Logging
dongjoon-hyun commented on code in PR #46019:
URL: https://github.com/apache/spark/pull/46019#discussion_r1561968756
##
connector/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:
##
@@ -40,6 +40,7 @@ import org.apache.spark.internal.Logging
cloud-fan commented on code in PR #46019:
URL: https://github.com/apache/spark/pull/46019#discussion_r1561966672
##
connector/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:
##
@@ -40,6 +40,7 @@ import org.apache.spark.internal.Logging
*
panbingkun commented on PR #46019:
URL: https://github.com/apache/spark/pull/46019#issuecomment-2050891561
> BTW, to @panbingkun , we need to wait for this kind of change because we
need to collect more opinions than a normal PR. It will take at least 72 hours
in general.
Okay,
cloud-fan commented on code in PR #45946:
URL: https://github.com/apache/spark/pull/45946#discussion_r1561965680
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationFactory.java:
##
@@ -202,6 +202,22 @@ public static StringSearch getStringSearch(
panbingkun opened a new pull request, #46022:
URL: https://github.com/apache/spark/pull/46022
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
No.
### How was
cxzl25 commented on PR #45983:
URL: https://github.com/apache/spark/pull/45983#issuecomment-2050882315
Thanks dongjoon.
> which decompiler did you use for you screenshot
I use jd-gui.
https://github.com/java-decompiler/jd-gui
yaooqinn commented on PR #46013:
URL: https://github.com/apache/spark/pull/46013#issuecomment-2050882987
Thank you @dongjoon-hyun for raising that thread.
dongjoon-hyun commented on PR #46019:
URL: https://github.com/apache/spark/pull/46019#issuecomment-2050880237
BTW, to @panbingkun , we need to wait for this kind of change because we
need to collect more opinions than a normal PR. It will take at least 72 hours
in general.
dongjoon-hyun commented on PR #46013:
URL: https://github.com/apache/spark/pull/46013#issuecomment-2050875592
Thank you for the feedback, @yaooqinn .
Yes, I totally understand the situation. The reason why I raise this issue
is to make a decision (go/no-go) for this item.
yaooqinn commented on PR #46013:
URL: https://github.com/apache/spark/pull/46013#issuecomment-2050867779
I have added the above link and issues to SPARK-4.
I didn't continue to audit ANSI compatibility as SPARK-46374 didn't get much
attention.
TakawaAkirayo commented on code in PR #45367:
URL: https://github.com/apache/spark/pull/45367#discussion_r1561947997
##
core/src/main/scala/org/apache/spark/internal/config/package.scala:
##
@@ -1014,6 +1014,15 @@ package object config {
.timeConf(TimeUnit.NANOSECONDS)
wForget commented on code in PR #45589:
URL: https://github.com/apache/spark/pull/45589#discussion_r1561935953
##
sql/core/src/main/scala/org/apache/spark/sql/catalyst/util/V2ExpressionBuilder.scala:
##
@@ -187,57 +187,57 @@ class V2ExpressionBuilder(e: Expression, isPredicate:
wForget commented on code in PR #45589:
URL: https://github.com/apache/spark/pull/45589#discussion_r1561934026
##
sql/core/src/main/scala/org/apache/spark/sql/catalyst/util/V2ExpressionBuilder.scala:
##
@@ -187,8 +187,9 @@ class V2ExpressionBuilder(e: Expression, isPredicate:
dongjoon-hyun commented on PR #46013:
URL: https://github.com/apache/spark/pull/46013#issuecomment-2050839565
Ya, could you add them all to the PR description and SPARK-4, @yaooqinn ?
yaooqinn commented on PR #46013:
URL: https://github.com/apache/spark/pull/46013#issuecomment-2050839295
https://dev.mysql.com/doc/refman/8.3/en/sql-mode.html
It seems that ANSI mode stays OFF in MySQL 8
yaooqinn commented on PR #46013:
URL: https://github.com/apache/spark/pull/46013#issuecomment-2050833640
There are still many behaviors incompatible w/ ANSI standard, such as
SPARK-46374
yaooqinn commented on PR #45982:
URL: https://github.com/apache/spark/pull/45982#issuecomment-2050828484
Merged to master
Thank you @beliefer @dongjoon-hyun @bjornjorgensen
yaooqinn closed pull request #45982: [SPARK-47795][K8S][DOCS] Supplement the
doc of job schedule for K8S
URL: https://github.com/apache/spark/pull/45982
yaooqinn closed pull request #46006: [SPARK-47813][SQL] Replace
getArrayDimension with updateExtraColumnMeta
URL: https://github.com/apache/spark/pull/46006
itholic commented on PR #46021:
URL: https://github.com/apache/spark/pull/46021#issuecomment-2050822266
cc @HyukjinKwon FYI
itholic opened a new pull request, #46021:
URL: https://github.com/apache/spark/pull/46021
### What changes were proposed in this pull request?
This PR proposes to add missing warnings for deprecated features
### Why are the changes needed?
Some APIs will be
HeartSaVioR commented on code in PR #45937:
URL: https://github.com/apache/spark/pull/45937#discussion_r1561908874
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/TransformWithListStateSuite.scala:
##
@@ -307,7 +307,10 @@ class TransformWithListStateSuite extends
panbingkun commented on PR #45957:
URL: https://github.com/apache/spark/pull/45957#issuecomment-2050798569
cc @gengliangwang
dongjoon-hyun commented on code in PR #46014:
URL: https://github.com/apache/spark/pull/46014#discussion_r1561825080
##
docs/security.md:
##
@@ -169,6 +175,12 @@ The following table describes the different options
available for configuring th
2.2.0
+
+
dongjoon-hyun commented on code in PR #46014:
URL: https://github.com/apache/spark/pull/46014#discussion_r1561803406
##
docs/security.md:
##
@@ -169,6 +175,12 @@ The following table describes the different options
available for configuring th
2.2.0
+
+
dongjoon-hyun commented on PR #46015:
URL: https://github.com/apache/spark/pull/46015#issuecomment-2050796202
To @mridulm , I prefer to pass CI. :)
HyukjinKwon commented on PR #46019:
URL: https://github.com/apache/spark/pull/46019#issuecomment-2050795814
cc @HeartSaVioR too
panbingkun commented on code in PR #45957:
URL: https://github.com/apache/spark/pull/45957#discussion_r1561900138
##
common/utils/src/main/scala/org/apache/spark/internal/LogKey.scala:
##
@@ -110,7 +121,8 @@ object LogKey extends Enumeration {
val REDUCE_ID = Value
val
dongjoon-hyun commented on PR #46019:
URL: https://github.com/apache/spark/pull/46019#issuecomment-2050793224
cc @cloud-fan , @HyukjinKwon , @LuciferYang , @zhengruifeng, @yaooqinn , too
dongjoon-hyun commented on PR #46016:
URL: https://github.com/apache/spark/pull/46016#issuecomment-2050791776
Merged to master for Apache Spark 4.0.0.
dongjoon-hyun closed pull request #46016: [SPARK-47541][SQL][FOLLOWUP] Fix
`AnsiTypeCoercion` to handle ArrayType
URL: https://github.com/apache/spark/pull/46016
HyukjinKwon closed pull request #45988: [SPARK-47174][CONNECT][SS][1/2] Server
side SparkConnectListenerBusListener for Client side streaming query listener
URL: https://github.com/apache/spark/pull/45988
panbingkun commented on code in PR #45957:
URL: https://github.com/apache/spark/pull/45957#discussion_r1561897003
##
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala:
##
@@ -857,7 +858,7 @@ private[spark] class ApplicationMaster(
HyukjinKwon commented on PR #45988:
URL: https://github.com/apache/spark/pull/45988#issuecomment-2050790536
Merged to master.
panbingkun commented on code in PR #45957:
URL: https://github.com/apache/spark/pull/45957#discussion_r1561894980
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/DriverCommandFeatureStep.scala:
##
@@ -24,7 +24,8 @@ import
harshmotw-db commented on code in PR #46017:
URL: https://github.com/apache/spark/pull/46017#discussion_r1561892813
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/ExpressionTypeCheckingSuite.scala:
##
@@ -747,6 +748,18 @@ class ExpressionTypeCheckingSuite
harshmotw-db commented on code in PR #46011:
URL: https://github.com/apache/spark/pull/46011#discussion_r1561890832
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala:
##
@@ -822,6 +822,7 @@ object FunctionRegistry {
// Variant
harshmotw-db commented on code in PR #46011:
URL: https://github.com/apache/spark/pull/46011#discussion_r1561889662
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala:
##
@@ -822,6 +822,7 @@ object FunctionRegistry {
// Variant
harshmotw-db commented on code in PR #46017:
URL: https://github.com/apache/spark/pull/46017#discussion_r1561887921
##
common/utils/src/main/resources/error/error-classes.json:
##
@@ -914,6 +914,11 @@
"The must be between (current value =
)."
]
},
HyukjinKwon closed pull request #46018: [SPARK-47824][PS] Fix nondeterminism in
pyspark.pandas.series.asof
URL: https://github.com/apache/spark/pull/46018
HyukjinKwon commented on PR #46018:
URL: https://github.com/apache/spark/pull/46018#issuecomment-2050768062
Merged to master.
HyukjinKwon commented on code in PR #46007:
URL: https://github.com/apache/spark/pull/46007#discussion_r1561880330
##
python/pyspark/sql/session.py:
##
@@ -1756,6 +1763,13 @@ def table(self, tableName: str) -> DataFrame:
---
:class:`DataFrame`
+
HyukjinKwon commented on code in PR #46007:
URL: https://github.com/apache/spark/pull/46007#discussion_r1561880283
##
python/pyspark/sql/session.py:
##
@@ -1630,6 +1630,13 @@ def sql(
---
:class:`DataFrame`
+Notes
+-
+In
gene-db commented on PR #45826:
URL: https://github.com/apache/spark/pull/45826#issuecomment-2050758504
> Yeah, would be great if we can file a JIRA. then I will edit the title,
and link the Pr.
Here is the new jira: https://issues.apache.org/jira/browse/SPARK-47826
Thanks!
HyukjinKwon commented on PR #45826:
URL: https://github.com/apache/spark/pull/45826#issuecomment-2050753028
Yeah, would be great if we can file a JIRA. then I will edit the title, and
link the Pr.
github-actions[bot] commented on PR #44197:
URL: https://github.com/apache/spark/pull/44197#issuecomment-2050753239
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] commented on PR #1:
URL: https://github.com/apache/spark/pull/1#issuecomment-2050753214
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
HyukjinKwon commented on code in PR #45977:
URL: https://github.com/apache/spark/pull/45977#discussion_r1561875139
##
python/pyspark/sql/datasource.py:
##
@@ -469,6 +501,188 @@ def stop(self) -> None:
...
+class SimpleInputPartition(InputPartition):
+def
HyukjinKwon closed pull request #46020: [MINOR][PS] Use expression instead of a
string column in Series.asof
URL: https://github.com/apache/spark/pull/46020
HyukjinKwon commented on PR #46020:
URL: https://github.com/apache/spark/pull/46020#issuecomment-2050749076
Oh, it's a duplicate of https://github.com/apache/spark/pull/46018. Let me
merge that instead
HyukjinKwon opened a new pull request, #46020:
URL: https://github.com/apache/spark/pull/46020
### What changes were proposed in this pull request?
This PR proposes to use expression instead of a string column in Series.asof
### Why are the changes needed?
It's better to
HyukjinKwon closed pull request #45941: [SPARK-47811][PYTHON][CONNECT][TESTS]
Run ML tests for pyspark-connect package
URL: https://github.com/apache/spark/pull/45941