olaky commented on PR #36753:
URL: https://github.com/apache/spark/pull/36753#issuecomment-1148258363
I cherry-picked 583a9c75bbb35387169d4f0cf763ef566d899954
olaky commented on PR #36762:
URL: https://github.com/apache/spark/pull/36762#issuecomment-1148259858
@dongjoon-hyun thanks a lot for picking this up for me!
AngersZh commented on PR #35612:
URL: https://github.com/apache/spark/pull/35612#issuecomment-1148234463
Looks like the latest failed test is not related to this PR.
shiyuhang0 commented on PR #21990:
URL: https://github.com/apache/spark/pull/21990#issuecomment-1148248826
Why not port it to Spark < 3?
beliefer commented on code in PR #36776:
URL: https://github.com/apache/spark/pull/36776#discussion_r890808786
sql/core/src/main/scala/org/apache/spark/sql/catalyst/util/V2ExpressionBuilder.scala:
@@ -55,8 +55,13 @@ class V2ExpressionBuilder(
LuciferYang commented on PR #36732:
URL: https://github.com/apache/spark/pull/36732#issuecomment-1148250861
> No, we wouldn't backport this, that's more change. Does this offer any benefit? I'm not sure it's more readable even.
If the readability is not improved, let me close this.
zhengruifeng commented on PR #35250:
URL: https://github.com/apache/spark/pull/35250#issuecomment-1148255084
@cloud-fan Sure, let me update this PR.
huaxingao commented on code in PR #36776:
URL: https://github.com/apache/spark/pull/36776#discussion_r890797844
sql/core/src/main/scala/org/apache/spark/sql/catalyst/util/V2ExpressionBuilder.scala:
@@ -55,8 +55,13 @@ class V2ExpressionBuilder(
ArvinZheng commented on code in PR #35484:
URL: https://github.com/apache/spark/pull/35484#discussion_r890843318
connector/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/consumer/KafkaDataConsumer.scala:
@@ -298,9 +296,10 @@ private[kafka010] class
Ngone51 commented on code in PR #36716:
URL: https://github.com/apache/spark/pull/36716#discussion_r890949860
core/src/main/scala/org/apache/spark/deploy/master/ApplicationInfo.scala:
@@ -65,7 +66,70 @@ private[spark] class ApplicationInfo(
MaxGekk commented on PR #36703:
URL: https://github.com/apache/spark/pull/36703#issuecomment-1148502630
@cloud-fan Could you resolve the conflicts, please?
LuciferYang closed pull request #36732:
[SPARK-39345][CORE][SQL][DSTREAM][ML][MESOS][SS] Replace `filter(!condition)`
with `filterNot(condition)`
URL: https://github.com/apache/spark/pull/36732
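For reference, a minimal Scala sketch (not taken from the PR's diff) of the purely mechanical rewrite the title describes:
```scala
val xs = Seq(1, 2, 3, 4)
val viaFilter    = xs.filter(x => !(x % 2 == 0))   // original style
val viaFilterNot = xs.filterNot(x => x % 2 == 0)   // proposed style
assert(viaFilter == viaFilterNot)                  // both keep the odd elements
```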
Ngone51 commented on code in PR #36716:
URL: https://github.com/apache/spark/pull/36716#discussion_r890938856
core/src/main/scala/org/apache/spark/deploy/ExecutorDescription.scala:
@@ -25,10 +25,13 @@ package org.apache.spark.deploy
wangyum commented on PR #36784:
URL: https://github.com/apache/spark/pull/36784#issuecomment-1148459572
cc @yaooqinn @pan3793
AngersZh commented on PR #35612:
URL: https://github.com/apache/spark/pull/35612#issuecomment-1148499208
> @AngersZh can you retrigger the tests?
GA passed now
gengliangwang closed pull request #36174: [SPARK-34659][UI] Fix wrong
application ID when reverse proxy URL contains "proxy" or "history"
URL: https://github.com/apache/spark/pull/36174
gengliangwang closed pull request #34970: [DO NOT MERGE] investigate test
failures if we test ANSI mode in github actions
URL: https://github.com/apache/spark/pull/34970
AngersZh opened a new pull request, #36786:
URL: https://github.com/apache/spark/pull/36786
### What changes were proposed in this pull request?
In the current code, when we use `spark-sql` with `-e` or `-f`, or use `Ctrl+C` to close the `spark-sql` session, the Hive session resource directory is left behind.
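A minimal sketch of one possible cleanup approach, assuming a hypothetical helper and directory path; it is not the fix proposed in this PR:
```scala
import java.nio.file.{Files, Path}
import scala.util.Try

object SessionResourceCleanup {
  // Best-effort removal of a per-session resource directory when the JVM exits.
  def register(resourceDir: Path): Unit = {
    Runtime.getRuntime.addShutdownHook(new Thread(() => {
      Try {
        // Walk in reverse order so children are deleted before their parents.
        Files.walk(resourceDir)
          .sorted(java.util.Comparator.reverseOrder[Path]())
          .forEach(p => Files.deleteIfExists(p))
      }
    }))
  }
}
```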
Ngone51 commented on PR #36716:
URL: https://github.com/apache/spark/pull/36716#issuecomment-1148335486
cc @tgravescs for review
Ngone51 commented on code in PR #36716:
URL: https://github.com/apache/spark/pull/36716#discussion_r890912848
core/src/main/scala/org/apache/spark/deploy/ApplicationDescription.scala:
@@ -19,23 +19,28 @@ package org.apache.spark.deploy
MaxGekk commented on PR #36753:
URL: https://github.com/apache/spark/pull/36753#issuecomment-1148505223
I guess the failure is not related to the PR's changes:
```
[info] - check simplified (tpcds-v1.4/q4) *** FAILED *** (945 milliseconds)
[info] Plans did not match:
```
cloud-fan commented on code in PR #36745:
URL: https://github.com/apache/spark/pull/36745#discussion_r890902366
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
@@ -2881,6 +2881,15 @@ object SQLConf {
MaxGekk commented on PR #36753:
URL: https://github.com/apache/spark/pull/36753#issuecomment-1148509294
+1, LGTM. Merging to 3.2 and trying to merge to 3.1/3.0.
Thank you, @olaky and @JoshRosen @dongjoon-hyun for review.
Borjianamin98 commented on PR #36781:
URL: https://github.com/apache/spark/pull/36781#issuecomment-1148284070
> @Borjianamin98 Could you please add a test?
I agree. I added a test for this. This is my first experience participating
in the Spark project and I hope I did well. :)
cloud-fan commented on code in PR #35612:
URL: https://github.com/apache/spark/pull/35612#discussion_r890851508
sql/hive-thriftserver/src/main/java/org/apache/hive/service/server/HiveServer2.java:
@@ -259,7 +260,7 @@ static class HelpOptionExecutor implements
ulysses-you opened a new pull request, #36785:
URL: https://github.com/apache/spark/pull/36785
### What changes were proposed in this pull request?
Change AliasAwareOutputExpression to use expressions rather than attributes to track whether we can normalize. So the aliased
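A minimal, illustrative sketch of the idea (class and method names are hypothetical, not the PR's code): track aliases by the aliased child expression instead of only by attribute, so an expression such as `a + 1` in `a + 1 AS x` can be normalized to `x` later:
```scala
import org.apache.spark.sql.catalyst.expressions.{Alias, Attribute, Expression}

class ExpressionAliasMap(projectList: Seq[Expression]) {
  // Map from the canonicalized aliased expression to the attribute it is exposed as.
  private val aliasMap: Map[Expression, Attribute] = projectList.collect {
    case a: Alias => a.child.canonicalized -> a.toAttribute
  }.toMap

  // Replace any sub-expression that was aliased in the project list with its output attribute.
  def normalize(expr: Expression): Expression = expr.transform {
    case e if aliasMap.contains(e.canonicalized) => aliasMap(e.canonicalized)
  }
}
```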
Ngone51 commented on code in PR #36716:
URL: https://github.com/apache/spark/pull/36716#discussion_r890982027
core/src/main/scala/org/apache/spark/deploy/master/Master.scala:
@@ -725,26 +729,38 @@ private[deploy] class Master(
MaxGekk commented on PR #36780:
URL: https://github.com/apache/spark/pull/36780#issuecomment-1148499879
+1, LGTM. Merging to master.
Thank you, @vli-databricks and @gengliangwang @HyukjinKwon for review.
cloud-fan commented on PR #35612:
URL: https://github.com/apache/spark/pull/35612#issuecomment-1148292841
@AngersZh can you retrigger the tests?
Resol1992 commented on PR #30317:
URL: https://github.com/apache/spark/pull/30317#issuecomment-1148301891
Hi @constzhou,
Recently the same issue also occurred for me; could I talk with you about this issue?
HyukjinKwon commented on PR #36784:
URL: https://github.com/apache/spark/pull/36784#issuecomment-1148308085
cc @wangyum FYI
cloud-fan commented on PR #36662:
URL: https://github.com/apache/spark/pull/36662#issuecomment-1148337126
thanks, merging to master/3.3!
cloud-fan closed pull request #36662: [SPARK-39286][DOC] Update documentation
for the decode function
URL: https://github.com/apache/spark/pull/36662
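An illustrative use of the `decode` function whose documentation this PR updates, assuming a running SparkSession named `spark`:
```scala
spark.sql("SELECT decode(2, 1, 'one', 2, 'two', 'other')").show()
// expected single-row result: two
```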
Ngone51 commented on code in PR #36716:
URL: https://github.com/apache/spark/pull/36716#discussion_r890973724
core/src/main/scala/org/apache/spark/deploy/master/ExecutorResourceDescription.scala:
@@ -0,0 +1,32 @@
chenzhx commented on code in PR #36663:
URL: https://github.com/apache/spark/pull/36663#discussion_r890253947
sql/core/src/main/scala/org/apache/spark/sql/catalyst/util/V2ExpressionBuilder.scala:
@@ -259,6 +259,55 @@ class V2ExpressionBuilder(
MaxGekk closed pull request #36780: [SPARK-39392][SQL] Refine ANSI error
messages for try_* function hints
URL: https://github.com/apache/spark/pull/36780
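For context, a hedged example of the kind of `try_*` alternative these ANSI error hints point to, assuming a SparkSession named `spark`:
```scala
spark.sql("SELECT try_divide(1, 0)").show()
// returns a single NULL row instead of raising a divide-by-zero error under ANSI mode
```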
MaxGekk commented on PR #36780:
URL: https://github.com/apache/spark/pull/36780#issuecomment-1148501765
@vli-databricks Could you backport the changes to branch-3.3, please?
gengliangwang commented on code in PR #36745:
URL: https://github.com/apache/spark/pull/36745#discussion_r891067965
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
@@ -2881,6 +2881,15 @@ object SQLConf {
MaxGekk commented on PR #36753:
URL: https://github.com/apache/spark/pull/36753#issuecomment-1148512960
@olaky The changes cause conflicts in branch-3.1. Could you open PRs with backports to 3.1 and 3.0, please?
AngersZh commented on PR #36786:
URL: https://github.com/apache/spark/pull/36786#issuecomment-1148527000
ping @cloud-fan @wangyum
olaky commented on PR #36386:
URL: https://github.com/apache/spark/pull/36386#issuecomment-1148609532
I am facing the same issues here: https://github.com/apache/spark/pull/36753
Ngone51 commented on code in PR #36716:
URL: https://github.com/apache/spark/pull/36716#discussion_r891166646
core/src/main/scala/org/apache/spark/deploy/client/StandaloneAppClient.scala:
@@ -299,9 +300,10 @@ private[spark] class StandaloneAppClient(
cloud-fan commented on code in PR #36693:
URL: https://github.com/apache/spark/pull/36693#discussion_r891293373
core/src/main/resources/error/error-classes.json:
@@ -333,7 +332,7 @@ (SECOND_FUNCTION_ARGUMENT_NOT_INTEGER message)
cloud-fan commented on code in PR #36693:
URL: https://github.com/apache/spark/pull/36693#discussion_r891312375
core/src/test/scala/org/apache/spark/SparkFunSuite.scala:
@@ -264,6 +264,87 @@ abstract class SparkFunSuite
Ngone51 commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r891312743
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:
@@ -1217,6 +1260,71 @@ private[spark] class TaskSetManager(
Ngone51 commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r891311964
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:
@@ -1217,6 +1260,71 @@ private[spark] class TaskSetManager(
tgravescs commented on PR #36716:
URL: https://github.com/apache/spark/pull/36716#issuecomment-1148763057
> The feature is enabled when dynamic allocation enabled in standalone cluster.
So last time I checked, dynamic allocation in standalone mode had issues. Has this been
Ngone51 commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r891318626
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala:
@@ -863,6 +872,29 @@ private[spark] class TaskSchedulerImpl(
wangyum commented on PR #36787:
URL: https://github.com/apache/spark/pull/36787#issuecomment-1148767712
2.7.2 will throw a runtime exception:
```
22:38:20.734 ERROR org.apache.spark.util.Utils: Aborting task
java.lang.RuntimeException: Overflow of newLength.
Ngone51 commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r891322239
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:
@@ -1217,6 +1260,71 @@ private[spark] class TaskSetManager(
LuciferYang commented on PR #36694:
URL: https://github.com/apache/spark/pull/36694#issuecomment-1148767278
> Is it redundant because of the parent POM?
Yes
> yeah maybe but I don't think it hurts anything and it's 2 lines
So just leave these as they are?
Ngone51 commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r891337273
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:
@@ -1108,45 +1164,48 @@ private[spark] class TaskSetManager(
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r891337356
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
@@ -97,8 +98,18 @@ class CatalogImpl(sparkSession: SparkSession) extends Catalog {
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r891336456
sql/core/src/main/scala/org/apache/spark/sql/catalog/interface.scala:
@@ -64,15 +65,34 @@ class Database(
srielau commented on code in PR #36693:
URL: https://github.com/apache/spark/pull/36693#discussion_r891349846
core/src/main/resources/error/error-classes.json:
@@ -333,7 +332,7 @@ (SECOND_FUNCTION_ARGUMENT_NOT_INTEGER message)
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r891346881
sql/core/src/test/scala/org/apache/spark/sql/internal/CatalogSuite.scala:
@@ -553,4 +570,100 @@ class CatalogSuite extends SharedSparkSession with AnalysisTest
gengliangwang commented on code in PR #36703:
URL: https://github.com/apache/spark/pull/36703#discussion_r891136404
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala:
@@ -1792,15 +1792,16 @@ class AstBuilder extends
HeartSaVioR closed pull request #35484: [SPARK-38181][SS][DOCS] Update comments
in KafkaDataConsumer.scala
URL: https://github.com/apache/spark/pull/35484
AmplabJenkins commented on PR #36784:
URL: https://github.com/apache/spark/pull/36784#issuecomment-1148680767
Can one of the admins verify this patch?
AmplabJenkins commented on PR #36778:
URL: https://github.com/apache/spark/pull/36778#issuecomment-1148680929
Can one of the admins verify this patch?
Ngone51 commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r891307188
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:
@@ -1217,6 +1260,71 @@ private[spark] class TaskSetManager(
cloud-fan commented on code in PR #36693:
URL: https://github.com/apache/spark/pull/36693#discussion_r891314795
core/src/test/scala/org/apache/spark/SparkFunSuite.scala:
@@ -264,6 +264,87 @@ abstract class SparkFunSuite
Ngone51 commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r891314239
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:
@@ -1217,6 +1260,71 @@ private[spark] class TaskSetManager(
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r891340370
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
@@ -117,14 +128,44 @@ class CatalogImpl(sparkSession: SparkSession) extends Catalog {
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r891341003
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
@@ -117,14 +128,44 @@ class CatalogImpl(sparkSession: SparkSession) extends Catalog {
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r891344191
sql/core/src/test/scala/org/apache/spark/sql/internal/CatalogSuite.scala:
@@ -290,7 +304,8 @@ class CatalogSuite extends SharedSparkSession with AnalysisTest {
cloud-fan closed pull request #35612: [SPARK-38289][SQL] Refactor SQL CLI exit
code to make it more clear
URL: https://github.com/apache/spark/pull/35612
Ngone51 commented on code in PR #36716:
URL: https://github.com/apache/spark/pull/36716#discussion_r891129088
core/src/test/scala/org/apache/spark/deploy/JsonProtocolSuite.scala:
@@ -107,11 +107,11 @@ object JsonConstants {
Ngone51 commented on code in PR #36716:
URL: https://github.com/apache/spark/pull/36716#discussion_r891178775
core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala:
@@ -530,6 +535,87 @@ class MasterSuite extends SparkFunSuite
gengliangwang commented on PR #36745:
URL: https://github.com/apache/spark/pull/36745#issuecomment-1148625244
I am merging this one to master now. We can have a new DS API for this later.
MaxGekk closed pull request #36753: [SPARK-39259][SQL][3.2] Evaluate timestamps
consistently in subqueries
URL: https://github.com/apache/spark/pull/36753
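An illustrative check of the consistency this backport targets (assuming a SparkSession named `spark`): every `now()` in a single query, including inside a subquery, should evaluate to the same instant:
```scala
spark.sql("SELECT now() = (SELECT now()) AS same_instant").show()
// expected: true
```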
srowen commented on code in PR #36499:
URL: https://github.com/apache/spark/pull/36499#discussion_r891287048
sql/core/src/main/scala/org/apache/spark/sql/jdbc/TeradataDialect.scala:
@@ -96,4 +97,29 @@ private case object TeradataDialect extends JdbcDialect {
cloud-fan commented on code in PR #36693:
URL: https://github.com/apache/spark/pull/36693#discussion_r891309492
core/src/test/scala/org/apache/spark/SparkFunSuite.scala:
@@ -264,6 +264,87 @@ abstract class SparkFunSuite
Ngone51 commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r891324885
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:
@@ -1217,6 +1260,71 @@ private[spark] class TaskSetManager(
srowen commented on code in PR #36737:
URL: https://github.com/apache/spark/pull/36737#discussion_r891330435
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
@@ -3963,8 +3966,10 @@ object TimeWindowing extends Rule[LogicalPlan] {
ulysses-you commented on PR #36785:
URL: https://github.com/apache/spark/pull/36785#issuecomment-1148578665
cc @cloud-fan @prakharjain09
HeartSaVioR commented on code in PR #36704:
URL: https://github.com/apache/spark/pull/36704#discussion_r891139790
connector/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaMicroBatchSourceSuite.scala:
@@ -666,9 +667,10 @@ abstract class
HeartSaVioR commented on PR #35484:
URL: https://github.com/apache/spark/pull/35484#issuecomment-1148596916
Thanks! Merging to master.
olaky commented on PR #36386:
URL: https://github.com/apache/spark/pull/36386#issuecomment-1148616709
So the only change in the plan I can see that makes the test fail is that
the last plan node has a source filename in it now, for example
`Scan parquet default.web_site
AmplabJenkins commented on PR #36781:
URL: https://github.com/apache/spark/pull/36781#issuecomment-1148680852
Can one of the admins verify this patch?
cloud-fan commented on code in PR #36703:
URL: https://github.com/apache/spark/pull/36703#discussion_r891256881
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala:
@@ -1792,15 +1792,16 @@ class AstBuilder extends
cloud-fan commented on code in PR #36693:
URL: https://github.com/apache/spark/pull/36693#discussion_r891292604
core/src/main/resources/error/error-classes.json:
@@ -157,8 +157,7 @@
cloud-fan commented on code in PR #36693:
URL: https://github.com/apache/spark/pull/36693#discussion_r891290994
core/src/main/java/org/apache/spark/SparkThrowable.java:
@@ -36,6 +36,10 @@ public interface SparkThrowable {
cloud-fan commented on code in PR #36693:
URL: https://github.com/apache/spark/pull/36693#discussion_r891304223
core/src/main/scala/org/apache/spark/ErrorInfo.scala:
@@ -73,18 +73,20 @@ private[spark] object SparkThrowableHelper {
cloud-fan commented on code in PR #36693:
URL: https://github.com/apache/spark/pull/36693#discussion_r891320687
sql/catalyst/src/main/scala/org/apache/spark/sql/AnalysisException.scala:
@@ -36,13 +36,31 @@ class AnalysisException protected[sql] (
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r891339983
sql/core/src/main/scala/org/apache/spark/sql/catalog/interface.scala:
@@ -55,7 +55,8 @@ class Database(
cloud-fan commented on PR #35612:
URL: https://github.com/apache/spark/pull/35612#issuecomment-1148543123
thanks, merging to master!
gengliangwang closed pull request #36745: [SPARK-39359][SQL] Restrict DEFAULT
columns to allowlist of supported data source types
URL: https://github.com/apache/spark/pull/36745
olaky commented on PR #36753:
URL: https://github.com/apache/spark/pull/36753#issuecomment-1148633785
Merging is blocked because of a test failure that also surfaces in
https://github.com/apache/spark/pull/36386
cloud-fan commented on code in PR #36704:
URL: https://github.com/apache/spark/pull/36704#discussion_r891224222
connector/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaMicroBatchSourceSuite.scala:
@@ -666,9 +667,10 @@ abstract class
cloud-fan commented on code in PR #36693:
URL: https://github.com/apache/spark/pull/36693#discussion_r891297431
core/src/main/java/org/apache/spark/memory/SparkOutOfMemoryError.java:
@@ -39,11 +39,17 @@ public SparkOutOfMemoryError(OutOfMemoryError e) {
cloud-fan commented on code in PR #36693:
URL: https://github.com/apache/spark/pull/36693#discussion_r891301726
core/src/main/scala/org/apache/spark/ErrorInfo.scala:
@@ -73,18 +73,20 @@ private[spark] object SparkThrowableHelper {
cloud-fan commented on code in PR #36693:
URL: https://github.com/apache/spark/pull/36693#discussion_r891302240
core/src/main/scala/org/apache/spark/ErrorInfo.scala:
@@ -98,6 +100,29 @@ private[spark] object SparkThrowableHelper {
Ngone51 commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r891316334
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:
@@ -800,6 +814,10 @@ private[spark] class TaskSetManager(
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r891334793
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala:
@@ -2185,6 +2185,11 @@ object QueryCompilationErrors extends
Ngone51 commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r891334508
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:
@@ -1217,6 +1260,71 @@ private[spark] class TaskSetManager(
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r891335856
sql/core/src/main/scala/org/apache/spark/sql/catalog/interface.scala:
@@ -64,15 +65,34 @@ class Database(
Ngone51 commented on code in PR #36162:
URL: https://github.com/apache/spark/pull/36162#discussion_r891335474
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:
@@ -1217,6 +1260,71 @@ private[spark] class TaskSetManager(