MaxGekk commented on code in PR #41241:
URL: https://github.com/apache/spark/pull/41241#discussion_r1200040672
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala:
##
@@ -188,8 +188,11 @@ private[sql] object QueryParsingErrors extends QueryErrors
MaxGekk closed pull request #41242: [SPARK-43598][SQL] Assign a name to the
error class _LEGACY_ERROR_TEMP_2400
URL: https://github.com/apache/spark/pull/41242
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
MaxGekk commented on PR #41242:
URL: https://github.com/apache/spark/pull/41242#issuecomment-1556632758
+1, LGTM. Merging to master.
Thank you, @beliefer.
advancedxy commented on code in PR #41201:
URL: https://github.com/apache/spark/pull/41201#discussion_r1200028494
##
core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala:
##
@@ -414,6 +414,9 @@ private[spark] class SparkSubmit extends Logging {
// directory too
LuciferYang commented on code in PR #41192:
URL: https://github.com/apache/spark/pull/41192#discussion_r112396
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/CatalystDataToProtobuf.scala:
##
@@ -26,14 +26,14 @@ import org.apache.spark.sql.types.{BinaryTyp
LuciferYang commented on code in PR #41192:
URL: https://github.com/apache/spark/pull/41192#discussion_r1199973589
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -1452,29 +1452,29 @@ class SparkConnectPlanner(val
HyukjinKwon closed pull request #41245: [MINOR][DOCS] Fix wrong reference
URL: https://github.com/apache/spark/pull/41245
HyukjinKwon commented on PR #41245:
URL: https://github.com/apache/spark/pull/41245#issuecomment-1556565071
Merged to master.
Hisoka-X opened a new pull request, #41251:
URL: https://github.com/apache/spark/pull/41251
### What changes were proposed in this pull request?
Hive support CREATE TABLE LIKE FILE statement in
https://issues.apache.org/jira/browse/HIVE-26395 .
So this PR brings `CREATE TABLE LI
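For context, the Hive statement added in HIVE-26395 creates a table whose schema is inferred from an existing data file. A sketch of the statement shape (table name and path are illustrative, not from the PR):

```sql
-- Infer the table schema from an existing Parquet file
-- (table name and file path are illustrative)
CREATE TABLE users LIKE FILE PARQUET '/warehouse/data/users.parquet';
```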
rangadi commented on code in PR #41192:
URL: https://github.com/apache/spark/pull/41192#discussion_r1199966873
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/protobuf/functions.scala:
##
@@ -45,12 +53,36 @@ object functions {
messageName: String,
HyukjinKwon commented on PR #41250:
URL: https://github.com/apache/spark/pull/41250#issuecomment-1556542346
@zhengruifeng @MaxGekk @vicennial @hvanhovell PTAL
HyukjinKwon opened a new pull request, #41250:
URL: https://github.com/apache/spark/pull/41250
### What changes were proposed in this pull request?
This PR implements `SparkSession.addArtifact(s)`. The logic is basically
translated from Scala (https://github.com/apache/spark/pull/4025
zhengruifeng commented on PR #41248:
URL: https://github.com/apache/spark/pull/41248#issuecomment-1556531491
@dongjoon-hyun thank you for the reviews.
merged to master
zhengruifeng closed pull request #41248: [SPARK-43601][INFRA] Remove the upper
bound of `matplotlib` in requirements
URL: https://github.com/apache/spark/pull/41248
turboFei commented on code in PR #41201:
URL: https://github.com/apache/spark/pull/41201#discussion_r1199937113
##
core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala:
##
@@ -414,6 +414,9 @@ private[spark] class SparkSubmit extends Logging {
// directory too.
turboFei commented on PR #41181:
URL: https://github.com/apache/spark/pull/41181#issuecomment-1556452364
> And putting and locating extra configuration files in SPARK_HOME/conf is
also a suggested way from our docs, so is this step necessary?
I think it is necessary.
Hadoop and
LuciferYang commented on PR #41249:
URL: https://github.com/apache/spark/pull/41249#issuecomment-1556440769
Test first, will update pr description later
LuciferYang opened a new pull request, #41249:
URL: https://github.com/apache/spark/pull/41249
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How was thi
dongjoon-hyun closed pull request #41247: [SPARK-43600][K8S][DOCS] Update K8s
doc to recommend K8s 1.24+
URL: https://github.com/apache/spark/pull/41247
dongjoon-hyun commented on PR #41247:
URL: https://github.com/apache/spark/pull/41247#issuecomment-1556435985
Thank you so much, @zhengruifeng !
Merged to master.
Stove-hust commented on code in PR #40412:
URL: https://github.com/apache/spark/pull/40412#discussion_r1199893005
##
core/src/main/scala/org/apache/spark/storage/DiskBlockManager.scala:
##
@@ -273,7 +273,7 @@ private[spark] class DiskBlockManager(
Utils.getConfiguredLocal
itholic commented on code in PR #41211:
URL: https://github.com/apache/spark/pull/41211#discussion_r1199889980
##
python/pyspark/pandas/tests/data_type_ops/test_date_ops.py:
##
@@ -61,6 +63,10 @@ def test_add(self):
for psser in self.pssers:
self.assertRais
dongjoon-hyun commented on PR #41247:
URL: https://github.com/apache/spark/pull/41247#issuecomment-1556424038
Could you review this PR, @zhengruifeng ?
dongjoon-hyun commented on PR #41248:
URL: https://github.com/apache/spark/pull/41248#issuecomment-1556423778
Then, Perfect! Thank you!
zhengruifeng commented on PR #41248:
URL: https://github.com/apache/spark/pull/41248#issuecomment-1556423040
> Hi, @zhengruifeng. Does it mean we need to add a lower-bound instead?

I guess we don't need a lower-bound:
1, we don't use a lower-bound in
[CI](https://gith
LuciferYang commented on PR #41226:
URL: https://github.com/apache/spark/pull/41226#issuecomment-1556417112
late LGTM
zhengruifeng opened a new pull request, #41248:
URL: https://github.com/apache/spark/pull/41248
### What changes were proposed in this pull request?
Remove the upper bound of `matplotlib` in requirements
### Why are the changes needed?
1, actually, `matplotlib` is not pinned
beliefer commented on code in PR #41242:
URL: https://github.com/apache/spark/pull/41242#discussion_r1199860069
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala:
##
@@ -85,7 +85,7 @@ trait CheckAnalysis extends PredicateHelper with
Looku
dongjoon-hyun opened a new pull request, #41247:
URL: https://github.com/apache/spark/pull/41247
**Default K8s Version**
- EKS: v1.26 (Default)
- GKE: v1.24 (Stable), v1.25 (Regular), v1.27 (Rapid)
**End Of Support**
| K8s | AKS | GKE | EKS |
| --- | --- | --- | --- |
HyukjinKwon commented on code in PR #40256:
URL: https://github.com/apache/spark/pull/40256#discussion_r1199853838
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/ArtifactManager.scala:
##
@@ -0,0 +1,305 @@
+/*
+ * Licensed to the Apache Softwa
github-actions[bot] commented on PR #38703:
URL: https://github.com/apache/spark/pull/38703#issuecomment-1556343378
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
HyukjinKwon commented on code in PR #40256:
URL: https://github.com/apache/spark/pull/40256#discussion_r1199852797
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/ArtifactManager.scala:
##
@@ -0,0 +1,305 @@
+/*
+ * Licensed to the Apache Softwa
dongjoon-hyun closed pull request #41246: [SPARK-43599][CONNECT][BUILD] Upgrade
buf to v1.19.0
URL: https://github.com/apache/spark/pull/41246
justaparth commented on PR #41108:
URL: https://github.com/apache/spark/pull/41108#issuecomment-1556292297
> Where is the information loss or overflow? Java code generated by Protobuf
for a uint32 field also returns an `int`, not `long`.
Sorry, I didn't get a chance to reply to this un
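The point under discussion is that protobuf's generated Java code maps a `uint32` field to a plain `int` getter, so values above `Integer.MAX_VALUE` come back as negative ints. A minimal sketch of the effect, using a plain `int` rather than an actual generated protobuf class:

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        // A uint32 value above Integer.MAX_VALUE, e.g. 4294967295 (2^32 - 1),
        // is returned by the generated Java getter as the int -1.
        int raw = (int) 4294967295L;

        // Reinterpreting the same 32 bits as unsigned recovers the value.
        long unsigned = Integer.toUnsignedLong(raw);

        System.out.println(raw);       // negative when the uint32 exceeds 2^31 - 1
        System.out.println(unsigned);  // the original unsigned value
    }
}
```

No bits are lost either way; the question is whether consumers reinterpret the `int` as unsigned or widen it to a `long` up front.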
Kimahriman commented on code in PR #34558:
URL: https://github.com/apache/spark/pull/34558#discussion_r1199788038
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/higherOrderFunctions.scala:
##
@@ -130,6 +134,23 @@ case class LambdaFunction(
override
mridulm commented on code in PR #40412:
URL: https://github.com/apache/spark/pull/40412#discussion_r1199782358
##
core/src/main/scala/org/apache/spark/storage/DiskBlockManager.scala:
##
@@ -273,7 +273,7 @@ private[spark] class DiskBlockManager(
Utils.getConfiguredLocalDir
panbingkun opened a new pull request, #41246:
URL: https://github.com/apache/spark/pull/41246
### What changes were proposed in this pull request?
The pr aims to upgrade buf from 1.18.0 to 1.19.0
### Why are the changes needed?
Release Notes: https://github.com/bufbuild/buf/relea
Hisoka-X commented on code in PR #40632:
URL: https://github.com/apache/spark/pull/40632#discussion_r1199748644
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala:
##
@@ -134,54 +135,58 @@ class JacksonParser(
// List([str_a_1,null])
Hisoka-X commented on code in PR #40632:
URL: https://github.com/apache/spark/pull/40632#discussion_r1199748611
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala:
##
@@ -119,8 +119,9 @@ class JacksonParser(
} else {
new NoopFilters
panbingkun commented on code in PR #41214:
URL: https://github.com/apache/spark/pull/41214#discussion_r1199746634
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala:
##
@@ -407,8 +407,8 @@ private[sql] object QueryParsingErrors extends QueryErrors
panbingkun commented on code in PR #41236:
URL: https://github.com/apache/spark/pull/41236#discussion_r1199739194
##
core/src/main/resources/error/error-classes.json:
##
@@ -5627,4 +5627,4 @@
"Failed to get block , which is not a shuffle block"
]
}
-}
+}
Review C
panbingkun commented on code in PR #41245:
URL: https://github.com/apache/spark/pull/41245#discussion_r1199738326
##
streaming/src/main/scala/org/apache/spark/streaming/Checkpoint.scala:
##
@@ -173,7 +173,7 @@ object Checkpoint extends Logging {
// to find classes, which
panbingkun commented on code in PR #41245:
URL: https://github.com/apache/spark/pull/41245#discussion_r1199738222
##
core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala:
##
@@ -560,7 +560,7 @@ private[spark] object SparkHadoopUtil extends Logging {
* Create a f
panbingkun commented on code in PR #41245:
URL: https://github.com/apache/spark/pull/41245#discussion_r1199738085
##
connector/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:
##
@@ -39,15 +39,15 @@ import org.apache.spark.kafka010.KafkaConfi
panbingkun opened a new pull request, #41245:
URL: https://github.com/apache/spark/pull/41245
### What changes were proposed in this pull request?
The pr aims to fix wrong reference, include:
- Hadoop doc related
- Kafka doc related
- Other
### Why are the changes needed?
MaxGekk commented on code in PR #40632:
URL: https://github.com/apache/spark/pull/40632#discussion_r1199718277
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala:
##
@@ -119,8 +119,9 @@ class JacksonParser(
} else {
new NoopFilters
MaxGekk commented on code in PR #40970:
URL: https://github.com/apache/spark/pull/40970#discussion_r1199716559
##
core/src/main/resources/error/error-classes.json:
##
@@ -5627,4 +5642,4 @@
"Failed to get block , which is not a shuffle block"
]
}
-}
+}
Review Comm
MaxGekk commented on PR #40970:
URL: https://github.com/apache/spark/pull/40970#issuecomment-1556111345
> I want to land this first.
OK. Let's modify the PR's title and its description according to your actual changes.