github-actions[bot] commented on PR #44537:
URL: https://github.com/apache/spark/pull/44537#issuecomment-2130548277
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] commented on PR #45099:
URL: https://github.com/apache/spark/pull/45099#issuecomment-2130548267
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
zeotuan commented on PR #46724:
URL: https://github.com/apache/spark/pull/46724#issuecomment-2130535309
@gengliangwang Please help review this. I will merge this after #46739
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
zml1206 opened a new pull request, #46741:
URL: https://github.com/apache/spark/pull/46741
### What changes were proposed in this pull request?
Refactor the `RewriteWithExpression` logic to support related nested `WITH`
expressions.
Generate `Project` order:
1. internally nested
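The truncated description above mentions generating one `Project` per nesting level, innermost first. A minimal, hypothetical Python sketch of that ordering idea (editor's illustration only, not Spark's actual `RewriteWithExpression` rule; all names are invented):

```python
# Hypothetical sketch: given common-expression definitions that may reference
# each other, emit one "Project" layer per definition so that every inner
# (referenced) definition is materialized before the outer one that uses it.

def project_order(defs):
    """defs: {alias: set of aliases it references}. Returns aliases in the
    order their Project layers must be generated (innermost first)."""
    order, seen = [], set()

    def visit(alias):
        if alias in seen:
            return
        seen.add(alias)
        for dep in defs.get(alias, ()):  # materialize dependencies first
            visit(dep)
        order.append(alias)

    for alias in defs:
        visit(alias)
    return order

# "outer" is defined in terms of "inner", so "inner"'s layer comes first
print(project_order({"outer": {"inner"}, "inner": set()}))  # ['inner', 'outer']
```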
viirya commented on PR #46696:
URL: https://github.com/apache/spark/pull/46696#issuecomment-2130496440
Looks good to me.
GideonPotok closed pull request #46404: [WIP][SPARK-47353][SQL] Enable
collation support for the Mode expression using a scala TreeMap (RB Tree)
URL: https://github.com/apache/spark/pull/46404
dongjoon-hyun commented on PR #46706:
URL: https://github.com/apache/spark/pull/46706#issuecomment-2130495121
Merged to master only because this was defined as `Improvement`.
https://github.com/apache/spark/assets/9700541/e6901f63-cfb8-491b-9368-a840493befdc
GideonPotok closed pull request #46670: [WIP] Don't review: E2e
URL: https://github.com/apache/spark/pull/46670
dongjoon-hyun closed pull request #46706: [SPARK-48394][CORE] Cleanup
mapIdToMapIndex on mapoutput unregister
URL: https://github.com/apache/spark/pull/46706
dongjoon-hyun commented on PR #46739:
URL: https://github.com/apache/spark/pull/46739#issuecomment-2130487265
Pending CIs.
dongjoon-hyun closed pull request #46728: [SPARK-48407][SQL][DOCS] Teradata:
Document Type Conversion rules between Spark SQL and teradata
URL: https://github.com/apache/spark/pull/46728
dongjoon-hyun commented on PR #46641:
URL: https://github.com/apache/spark/pull/46641#issuecomment-2130485187
Merged to master for Apache Spark 4.0.0.
Thank you, @bozhang2820 and all.
dongjoon-hyun closed pull request #46641: [SPARK-48325][CORE] Always specify
messages in ExecutorRunner.killProcess
URL: https://github.com/apache/spark/pull/46641
cloud-fan commented on PR #46696:
URL: https://github.com/apache/spark/pull/46696#issuecomment-2130363916
can we also revert https://github.com/apache/spark/pull/46562 in this PR?
leovegas commented on code in PR #46678:
URL: https://github.com/apache/spark/pull/46678#discussion_r1613994258
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcUtils.scala:
##
@@ -62,10 +62,10 @@ object OrcUtils extends Logging {
val
leovegas commented on code in PR #46678:
URL: https://github.com/apache/spark/pull/46678#discussion_r1613992518
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcFileOperator.scala:
##
@@ -125,16 +125,4 @@ private[hive] object OrcFileOperator extends Logging {
cloud-fan commented on code in PR #46580:
URL: https://github.com/apache/spark/pull/46580#discussion_r1613945984
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveIdentifierClause.scala:
##
@@ -20,19 +20,23 @@ package
JoshRosen commented on code in PR #46697:
URL: https://github.com/apache/spark/pull/46697#discussion_r1613915291
##
core/src/main/scala/org/apache/spark/api/python/SerDeUtil.scala:
##
@@ -104,12 +104,40 @@ private[spark] object SerDeUtil extends Logging {
}
}
+ /**
+
JoshRosen commented on code in PR #46705:
URL: https://github.com/apache/spark/pull/46705#discussion_r1613904865
##
core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:
##
@@ -328,16 +328,15 @@ private[spark] object TaskMetrics extends Logging {
*/
def
JoshRosen commented on code in PR #46705:
URL: https://github.com/apache/spark/pull/46705#discussion_r1613896716
##
core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:
##
@@ -328,16 +328,15 @@ private[spark] object TaskMetrics extends Logging {
*/
def
JoshRosen commented on code in PR #46705:
URL: https://github.com/apache/spark/pull/46705#discussion_r1613891588
##
core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:
##
@@ -328,16 +328,15 @@ private[spark] object TaskMetrics extends Logging {
*/
def
JoshRosen commented on code in PR #46736:
URL: https://github.com/apache/spark/pull/46736#discussion_r1613884389
##
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/HashedRelation.scala:
##
@@ -409,9 +407,6 @@ private[joins] class UnsafeHashedRelation(
val
wayneguow commented on code in PR #46731:
URL: https://github.com/apache/spark/pull/46731#discussion_r1613880314
##
sql/core/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveSessionCatalog.scala:
##
@@ -90,8 +91,8 @@ class ResolveSessionCatalog(val catalogManager:
cloud-fan commented on PR #46594:
URL: https://github.com/apache/spark/pull/46594#issuecomment-2130135726
can we add a test? Basically, any non-foldable default column value, such as
`current_date()`, can trigger this bug.
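A hedged Python sketch of the failure mode hinted at above (purely illustrative, not Spark's code; the function names are invented): a column default like `current_date()` must be re-evaluated per insert, not constant-folded once.

```python
# Illustrative only: contrast eagerly folding a non-foldable default
# (capturing today's date once, at definition time) with keeping the
# expression and evaluating it at each use.
import datetime

def make_default_wrong():
    # folds the "expression" eagerly: every future row gets this same date
    captured = datetime.date.today()
    return lambda: captured

def make_default_right():
    # keeps the expression and evaluates it per row/insert
    return lambda: datetime.date.today()

wrong, right = make_default_wrong(), make_default_right()
# wrong() is frozen at definition time; right() tracks the current date
```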
cloud-fan commented on PR #46594:
URL: https://github.com/apache/spark/pull/46594#issuecomment-2130132833
Can you retry the github action job? Seems flaky
gengliangwang commented on code in PR #46634:
URL: https://github.com/apache/spark/pull/46634#discussion_r1613860819
##
common/utils/src/main/scala/org/apache/spark/internal/README.md:
##
Review Comment:
I will wait for @mridulm's response until this weekend.
wayneguow commented on code in PR #46731:
URL: https://github.com/apache/spark/pull/46731#discussion_r1613124606
##
sql/core/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveSessionCatalog.scala:
##
@@ -90,8 +91,8 @@ class ResolveSessionCatalog(val catalogManager:
eason-yuchen-liu opened a new pull request, #46740:
URL: https://github.com/apache/spark/pull/46740
### What changes were proposed in this pull request?
This PR adds a test for the `dropDuplicatesWithinWatermark` API, which was
previously missing.
### Why are the changes needed?
gengliangwang commented on PR #46739:
URL: https://github.com/apache/spark/pull/46739#issuecomment-2130009429
cc @panbingkun @zeotuan
gengliangwang opened a new pull request, #46739:
URL: https://github.com/apache/spark/pull/46739
### What changes were proposed in this pull request?
This PR aims to migrate `logInfo` calls with variables in the Core module to
the structured logging framework.
### Why are the
gengliangwang closed pull request #46735: [SPARK-47579][SQL][FOLLOWUP] Restore
the `--help` print format of spark sql shell
URL: https://github.com/apache/spark/pull/46735
gengliangwang commented on PR #46735:
URL: https://github.com/apache/spark/pull/46735#issuecomment-2129973290
Merging to master.
GideonPotok commented on PR #46597:
URL: https://github.com/apache/spark/pull/46597#issuecomment-2129923419
@uros-db I forgot but should I add collation support to
`org.apache.spark.sql.catalyst.expressions.aggregate.PandasMode`?
The only difference will be
1. Support for null
cloud-fan commented on code in PR #46440:
URL: https://github.com/apache/spark/pull/46440#discussion_r1613651414
##
connector/connect/common/src/test/resources/query-tests/explain-results/function_shiftleft.explain:
##
@@ -1,2 +1,2 @@
-Project [shiftleft(cast(b#0 as int), 2) AS
nikolamand-db commented on PR #46580:
URL: https://github.com/apache/spark/pull/46580#issuecomment-2129615940
> I think this fixes https://issues.apache.org/jira/browse/SPARK-46625 as
well. Can we add a test to verify?
Checked locally, seems like these changes don't resolve the
davidm-db commented on code in PR #46665:
URL: https://github.com/apache/spark/pull/46665#discussion_r1613161561
##
sql/api/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##
@@ -42,6 +42,29 @@ options { tokenVocab = SqlBaseLexer; }
public boolean
uros-db commented on code in PR #46682:
URL: https://github.com/apache/spark/pull/46682#discussion_r1613448325
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationAwareUTF8String.java:
##
@@ -34,6 +34,155 @@
* Utility class for collation-aware
uros-db commented on code in PR #46682:
URL: https://github.com/apache/spark/pull/46682#discussion_r1613447350
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationAwareUTF8String.java:
##
@@ -34,6 +34,155 @@
* Utility class for collation-aware
johanl-db commented on code in PR #46734:
URL: https://github.com/apache/spark/pull/46734#discussion_r1613401422
##
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala:
##
@@ -396,8 +396,9 @@ case class AlterTableChangeColumnCommand(
val newDataSchema
uros-db commented on code in PR #46682:
URL: https://github.com/apache/spark/pull/46682#discussion_r1613408156
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationAwareUTF8String.java:
##
@@ -34,6 +34,155 @@
* Utility class for collation-aware
uros-db commented on code in PR #46682:
URL: https://github.com/apache/spark/pull/46682#discussion_r1613407352
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationAwareUTF8String.java:
##
@@ -34,6 +34,155 @@
* Utility class for collation-aware
uros-db commented on code in PR #46682:
URL: https://github.com/apache/spark/pull/46682#discussion_r1613404598
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationAwareUTF8String.java:
##
@@ -34,6 +34,155 @@
* Utility class for collation-aware
uros-db commented on code in PR #46682:
URL: https://github.com/apache/spark/pull/46682#discussion_r1613401443
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationAwareUTF8String.java:
##
@@ -34,6 +34,155 @@
* Utility class for collation-aware
davidm-db commented on code in PR #46665:
URL: https://github.com/apache/spark/pull/46665#discussion_r1613389337
##
sql/api/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##
@@ -42,6 +42,29 @@ options { tokenVocab = SqlBaseLexer; }
public boolean
uros-db commented on code in PR #46700:
URL: https://github.com/apache/spark/pull/46700#discussion_r1613391451
##
common/unsafe/src/test/java/org/apache/spark/unsafe/types/CollationSupportSuite.java:
##
@@ -17,15 +17,136 @@
package org.apache.spark.unsafe.types;
import
davidm-db commented on code in PR #46665:
URL: https://github.com/apache/spark/pull/46665#discussion_r1613386426
##
sql/api/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##
@@ -42,6 +42,29 @@ options { tokenVocab = SqlBaseLexer; }
public boolean
davidm-db commented on code in PR #46665:
URL: https://github.com/apache/spark/pull/46665#discussion_r1613384389
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/SqlScriptingLogicalOperators.scala:
##
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software
zhengruifeng commented on code in PR #46738:
URL: https://github.com/apache/spark/pull/46738#discussion_r1613379691
##
python/pyspark/sql/types.py:
##
@@ -120,9 +120,8 @@ def __eq__(self, other: Any) -> bool:
def __ne__(self, other: Any) -> bool:
return not
zhengruifeng opened a new pull request, #46738:
URL: https://github.com/apache/spark/pull/46738
### What changes were proposed in this pull request?
1, refactor `TypeName` to support parameterized datatypes
2, remove redundant simpleString/jsonValue methods, since they are type name
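A hedged Python sketch of the refactoring idea (class and method names are simplified stand-ins, not PySpark's actual internals): a single `typeName` that parameterized datatypes can override makes separate `simpleString`/`jsonValue` overrides redundant.

```python
# Illustrative only: simple types derive typeName from the class name, while
# parameterized datatypes embed their parameters; simpleString and jsonValue
# just delegate to typeName instead of duplicating logic per type.

class DataType:
    def typeName(self) -> str:
        # default: class name without the "Type" suffix, lower-cased
        return type(self).__name__.removesuffix("Type").lower()

    def simpleString(self) -> str:
        return self.typeName()

    def jsonValue(self) -> str:
        return self.typeName()


class StringType(DataType):
    pass


class DecimalType(DataType):
    def __init__(self, precision: int = 10, scale: int = 0):
        self.precision, self.scale = precision, scale

    def typeName(self) -> str:
        # a parameterized datatype carries its parameters in the type name
        return f"decimal({self.precision},{self.scale})"


print(StringType().typeName())        # string
print(DecimalType(10, 2).typeName())  # decimal(10,2)
```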
uros-db commented on code in PR #46700:
URL: https://github.com/apache/spark/pull/46700#discussion_r1613207707
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationAwareUTF8String.java:
##
@@ -147,6 +162,45 @@ public static String toLowerCase(final String
uros-db commented on code in PR #46597:
URL: https://github.com/apache/spark/pull/46597#discussion_r1613363048
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Mode.scala:
##
@@ -74,16 +90,29 @@ case class Mode(
if (buffer.isEmpty) {
uros-db commented on code in PR #46597:
URL: https://github.com/apache/spark/pull/46597#discussion_r1613361676
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Mode.scala:
##
@@ -74,16 +90,29 @@ case class Mode(
if (buffer.isEmpty) {
uros-db commented on code in PR #46597:
URL: https://github.com/apache/spark/pull/46597#discussion_r1613358004
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Mode.scala:
##
@@ -74,16 +90,29 @@ case class Mode(
if (buffer.isEmpty) {
uros-db commented on code in PR #46597:
URL: https://github.com/apache/spark/pull/46597#discussion_r1613351378
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Mode.scala:
##
@@ -48,6 +49,21 @@ case class Mode(
override def inputTypes:
imtzer commented on PR #45911:
URL: https://github.com/apache/spark/pull/45911#issuecomment-2129285181
> > same problem when using spark operator, it's weird why the code does not
throw anything when configmap is not created
>
> When using spark-submit, there is no error output in
guixiaowen commented on PR #46620:
URL: https://github.com/apache/spark/pull/46620#issuecomment-2129261156
> Hi, @guixiaowen I need some time to think about it as it might break some
existing workloads.
>
> Meantime, you can
>
> * Update the PR desc for better readability
>
stefankandic opened a new pull request, #46737:
URL: https://github.com/apache/spark/pull/46737
### What changes were proposed in this pull request?
Fix breaking change in `fromJson` method by having default param values.
### Why are the changes needed?
In
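A minimal Python sketch of the compatibility technique described in this PR (the signature and parameter names here are hypothetical, not the actual `fromJson` API): giving newly added parameters default values keeps older call sites working unchanged.

```python
# Illustrative only: new parameters default to None, so pre-existing
# one-argument callers are unaffected by the extended signature.

def from_json(json_dict, field_path=None, collations_map=None):
    collations_map = collations_map or {}
    return {
        "type": json_dict.get("type"),
        "collation": collations_map.get(field_path),
    }

# old call style still works alongside the new optional arguments
assert from_json({"type": "string"})["type"] == "string"
```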
yaooqinn commented on PR #46620:
URL: https://github.com/apache/spark/pull/46620#issuecomment-2129183148
Hi, @guixiaowen I need some time to think about it as it might break some
existing workloads.
Meantime, you can
- Update the PR desc for better readability
- Update the PR
ulysses-you closed pull request #46440: [SPARK-48168][SQL] Add bitwise shifting
operators support
URL: https://github.com/apache/spark/pull/46440
ulysses-you commented on PR #46440:
URL: https://github.com/apache/spark/pull/46440#issuecomment-2129180382
thanks, merged to master(4.0.0)
uros-db commented on code in PR #46682:
URL: https://github.com/apache/spark/pull/46682#discussion_r1613221638
##
common/unsafe/src/test/java/org/apache/spark/unsafe/types/CollationSupportSuite.java:
##
@@ -610,8 +610,42 @@ public void testFindInSet() throws SparkException {
dbatomic commented on code in PR #46665:
URL: https://github.com/apache/spark/pull/46665#discussion_r1613213985
##
sql/api/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##
@@ -42,6 +42,29 @@ options { tokenVocab = SqlBaseLexer; }
public boolean
uros-db commented on code in PR #46599:
URL: https://github.com/apache/spark/pull/46599#discussion_r1612779931
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala:
##
@@ -279,6 +280,7 @@ abstract class Optimizer(catalogManager: CatalogManager)
zhengruifeng commented on PR #46733:
URL: https://github.com/apache/spark/pull/46733#issuecomment-2129082503
merged to master
zhengruifeng closed pull request #46733: [SPARK-48412][PYTHON] Refactor data
type json parse
URL: https://github.com/apache/spark/pull/46733
LuciferYang commented on code in PR #46736:
URL: https://github.com/apache/spark/pull/46736#discussion_r1613184450
##
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/HashedRelation.scala:
##
@@ -409,9 +407,6 @@ private[joins] class UnsafeHashedRelation(
val
olaky commented on code in PR #46734:
URL: https://github.com/apache/spark/pull/46734#discussion_r1613167082
##
sql/api/src/main/scala/org/apache/spark/sql/types/DataType.scala:
##
@@ -408,6 +408,37 @@ object DataType {
}
}
+ /**
+ * Check if `from` is equal to
LuciferYang commented on PR #46736:
URL: https://github.com/apache/spark/pull/46736#issuecomment-2129074503
> How about removing the `TODO(josh)` at L412
[done](https://github.com/apache/spark/pull/9127/files#diff-127291a0287f790755be5473765ea03eb65f8b58b9ec0760955f124e21e3452f)
zml1206 commented on PR #46499:
URL: https://github.com/apache/spark/pull/46499#issuecomment-2129064493
> Can we make `With` nested as well?
I have thought about it for a long time, and I can change the logic of
withRewrite. The `alias` generated by the lowest layer with is in
yaooqinn commented on PR #46736:
URL: https://github.com/apache/spark/pull/46736#issuecomment-2129046316
How about removing the `TODO(josh)` at L412
yabola commented on PR #46713:
URL: https://github.com/apache/spark/pull/46713#issuecomment-2129043469
@cloud-fan could you take a look, thank you~ This is useful in a shared SQL
cluster, as it makes it easier to control SQL workloads.
The picture below shows that the cores used are consistent.
guixiaowen commented on PR #46668:
URL: https://github.com/apache/spark/pull/46668#issuecomment-2129040807
@LuciferYang Can you review this?
davidm-db commented on code in PR #46665:
URL: https://github.com/apache/spark/pull/46665#discussion_r1613136311
##
sql/api/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##
@@ -42,6 +42,29 @@ options { tokenVocab = SqlBaseLexer; }
public boolean
yaooqinn commented on PR #46704:
URL: https://github.com/apache/spark/pull/46704#issuecomment-2129013813
Merged to master. Thank you @panbingkun
yaooqinn closed pull request #46704: [SPARK-48409][BUILD][TESTS] Upgrade MySQL
& Postgres & Mariadb docker image version
URL: https://github.com/apache/spark/pull/46704
LuciferYang commented on code in PR #46736:
URL: https://github.com/apache/spark/pull/46736#discussion_r1613125740
##
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/HashedRelation.scala:
##
@@ -396,8 +396,6 @@ private[joins] class UnsafeHashedRelation(
val
LuciferYang opened a new pull request, #46736:
URL: https://github.com/apache/spark/pull/46736
### What changes were proposed in this pull request?
This pr remove an outdated TODO from `UnsafeHashedRelation`:
```
// TODO(josh): This needs to be revisited before we merge this
panbingkun commented on code in PR #46704:
URL: https://github.com/apache/spark/pull/46704#discussion_r1613116527
##
connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/MariaDBKrbIntegrationSuite.scala:
##
@@ -38,7 +38,7 @@ class
panbingkun commented on PR #46704:
URL: https://github.com/apache/spark/pull/46704#issuecomment-2128980038
cc @dongjoon-hyun @yaooqinn
panbingkun commented on code in PR #46704:
URL: https://github.com/apache/spark/pull/46704#discussion_r1613092135
##
connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresKrbIntegrationSuite.scala:
##
@@ -38,7 +38,7 @@ class
wayneguow commented on code in PR #46731:
URL: https://github.com/apache/spark/pull/46731#discussion_r1613088183
##
common/utils/src/main/resources/error/error-conditions.json:
##
Review Comment:
> We seem to lack a UT case related to `_LEGACY_ERROR_TEMP_1054`
zhengruifeng commented on code in PR #46733:
URL: https://github.com/apache/spark/pull/46733#discussion_r1613083917
##
python/pyspark/sql/types.py:
##
@@ -1756,13 +1756,45 @@ def toJson(self, zone_id: str = "UTC") -> str:
TimestampNTZType,
NullType,
VariantType,
zhengruifeng commented on PR #46733:
URL: https://github.com/apache/spark/pull/46733#issuecomment-2128905041
> Is this just refactoring or causing any behaviour change?
this is just refactoring
panbingkun commented on PR #46731:
URL: https://github.com/apache/spark/pull/46731#issuecomment-2128896002
> cc @MaxGekk @panbingkun FYI
also cc @cloud-fan
LuciferYang commented on PR #46695:
URL: https://github.com/apache/spark/pull/46695#issuecomment-2128895356
Merged into master for Spark 4.0. Thanks @panbingkun @dongjoon-hyun
@hasnain-db and @pan3793 ~
LuciferYang closed pull request #46695: [SPARK-48384][BUILD] Exclude
`io.netty:netty-tcnative-boringssl-static` from `zookeeper`
URL: https://github.com/apache/spark/pull/46695
yaooqinn commented on PR #46735:
URL: https://github.com/apache/spark/pull/46735#issuecomment-2128887663
cc @gengliangwang thanks
HyukjinKwon commented on code in PR #46733:
URL: https://github.com/apache/spark/pull/46733#discussion_r1613069878
##
python/pyspark/sql/types.py:
##
@@ -1756,13 +1756,45 @@ def toJson(self, zone_id: str = "UTC") -> str:
TimestampNTZType,
NullType,
VariantType,
+
yaooqinn opened a new pull request, #46735:
URL: https://github.com/apache/spark/pull/46735
### What changes were proposed in this pull request?
Restore the print format of spark sql shell
### Why are the changes needed?
bugfix
### Does this PR introduce
HyukjinKwon commented on PR #46733:
URL: https://github.com/apache/spark/pull/46733#issuecomment-2128885962
Is this just refactoring or causing any behaviour change?
liangyouze commented on PR #45911:
URL: https://github.com/apache/spark/pull/45911#issuecomment-2128884766
> same problem when using spark operator, it's weird why the code does not
throw anything when configmap is not created
When using spark-submit, there is no output in the
panbingkun commented on code in PR #46731:
URL: https://github.com/apache/spark/pull/46731#discussion_r1613064843
##
common/utils/src/main/resources/error/error-conditions.json:
##
Review Comment:
We seem to lack a UT case related to `_LEGACY_ERROR_TEMP_1054`