sandip-db opened a new pull request, #44329:
URL: https://github.com/apache/spark/pull/44329
### What changes were proposed in this pull request?
Add TimestampNTZType support in XML data source.
### Why are the changes needed?
To bring parity with the JSON/CSV data sources.
### Does this PR introduce _any_ user-facing change?
yaooqinn commented on PR #44326:
URL: https://github.com/apache/spark/pull/44326#issuecomment-1853403770
cc @dongjoon-hyun @cloud-fan @ulysses-you
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
allisonwang-db commented on code in PR #44190:
URL: https://github.com/apache/spark/pull/44190#discussion_r1424951174
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2Exec.scala:
##
@@ -85,6 +86,11 @@ case class CreateTableAsSelectExec(
HyukjinKwon closed pull request #44313: [SPARK-46379][PS][TESTS] Reorganize
`FrameInterpolateTests`
URL: https://github.com/apache/spark/pull/44313
HyukjinKwon commented on PR #44313:
URL: https://github.com/apache/spark/pull/44313#issuecomment-1853381109
Merged to master.
viirya commented on code in PR #44119:
URL: https://github.com/apache/spark/pull/44119#discussion_r1424923942
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -4129,6 +4129,37 @@ class Dataset[T] private[sql](
new DataFrameWriterV2[T](table, this)
}
viirya commented on code in PR #44119:
URL: https://github.com/apache/spark/pull/44119#discussion_r1424921268
##
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriterV2.scala:
##
@@ -167,6 +173,229 @@ final class DataFrameWriterV2[T] private[sql](table:
String, ds:
viirya commented on code in PR #44119:
URL: https://github.com/apache/spark/pull/44119#discussion_r1424920459
##
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriterV2.scala:
##
@@ -167,6 +173,229 @@ final class DataFrameWriterV2[T] private[sql](table:
String, ds:
viirya commented on code in PR #44119:
URL: https://github.com/apache/spark/pull/44119#discussion_r1424912156
##
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriterV2.scala:
##
@@ -167,6 +173,229 @@ final class DataFrameWriterV2[T] private[sql](table:
String, ds:
viirya commented on code in PR #44119:
URL: https://github.com/apache/spark/pull/44119#discussion_r1424911090
##
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriterV2.scala:
##
@@ -167,6 +173,229 @@ final class DataFrameWriterV2[T] private[sql](table:
String, ds:
viirya commented on code in PR #44119:
URL: https://github.com/apache/spark/pull/44119#discussion_r1424909883
##
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriterV2.scala:
##
@@ -343,3 +572,84 @@ trait CreateTableWriter[T] extends
viirya commented on code in PR #44119:
URL: https://github.com/apache/spark/pull/44119#discussion_r1424909535
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -4129,6 +4129,37 @@ class Dataset[T] private[sql](
new DataFrameWriterV2[T](table, this)
}
viirya commented on code in PR #44119:
URL: https://github.com/apache/spark/pull/44119#discussion_r1424908372
##
sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala:
##
@@ -4129,6 +4129,37 @@ class Dataset[T] private[sql](
new DataFrameWriterV2[T](table, this)
}
zhengruifeng commented on PR #44313:
URL: https://github.com/apache/spark/pull/44313#issuecomment-1853340010
cc @HyukjinKwon
huanccwang commented on PR #44317:
URL: https://github.com/apache/spark/pull/44317#issuecomment-1853334315
> Your IDE seems to have reformatted Dataset.scala such that several hundred
lines appear changed. It would be difficult to tell what your change is. Also,
your Dataset.scala no
anchovYu opened a new pull request, #44328:
URL: https://github.com/apache/spark/pull/44328
### What changes were proposed in this pull request?
When the SQL conf `spark.sql.legacy.keepCommandOutputSchema` is set to true:
Before:
```
// suppose there is a xyyu-db-with-hyphen
```
bersprockets commented on PR #44317:
URL: https://github.com/apache/spark/pull/44317#issuecomment-1853329716
Your IDE seems to have reformatted Dataset.scala such that several hundred
lines appear changed. It would be difficult to tell what your change is. Also,
your Dataset.scala no
cloud-fan commented on code in PR #44184:
URL: https://github.com/apache/spark/pull/44184#discussion_r1424890931
##
sql/core/src/test/resources/sql-tests/inputs/mode.sql:
##
@@ -0,0 +1,112 @@
+-- Test data.
+CREATE OR REPLACE TEMPORARY VIEW basic_pays AS SELECT * FROM VALUES
panbingkun commented on PR #44208:
URL: https://github.com/apache/spark/pull/44208#issuecomment-1853325778
I don't think a four-part approach will necessarily solve the above problem,
because in our testing we found another jar, `org.slf4j#slf4j-api#1.7.30`,
with a similar problem that
cloud-fan commented on code in PR #44184:
URL: https://github.com/apache/spark/pull/44184#discussion_r1424888935
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala:
##
@@ -1954,6 +1954,29 @@ private[sql] object QueryCompilationErrors extends
cloud-fan commented on code in PR #44184:
URL: https://github.com/apache/spark/pull/44184#discussion_r1424887678
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Mode.scala:
##
@@ -18,22 +18,23 @@
package
cloud-fan commented on code in PR #44184:
URL: https://github.com/apache/spark/pull/44184#discussion_r1424887283
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Mode.scala:
##
@@ -146,8 +116,99 @@ case class Mode(
override def
cloud-fan commented on code in PR #44184:
URL: https://github.com/apache/spark/pull/44184#discussion_r1424885592
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Mode.scala:
##
@@ -18,22 +18,23 @@
package
LuciferYang commented on code in PR #44327:
URL: https://github.com/apache/spark/pull/44327#discussion_r1424881010
##
common/network-common/src/main/java/org/apache/spark/network/util/RocksDBProvider.java:
##
@@ -100,7 +100,11 @@ public static RocksDB initRockDB(File dbFile,
LuciferYang commented on code in PR #44327:
URL: https://github.com/apache/spark/pull/44327#discussion_r1424880029
##
common/network-common/src/main/java/org/apache/spark/network/util/LevelDBProvider.java:
##
@@ -80,7 +80,12 @@ public static DB initLevelDB(File dbFile,
LuciferYang opened a new pull request, #44327:
URL: https://github.com/apache/spark/pull/44327
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
mridulm commented on PR #44315:
URL: https://github.com/apache/spark/pull/44315#issuecomment-1853302762
Thanks for continuing to work on this @robert3005, I know you have worked on
getting this in multiple times.
I had reviewed a past version of this, but unfortunately I am not very
mridulm commented on PR #43954:
URL: https://github.com/apache/spark/pull/43954#issuecomment-1853299046
Ah, interesting - I had not looked at barrier stage in as much detail; my
initial observation was it worked fine, but you are right - this does break the
assumption.
mridulm commented on code in PR #44264:
URL: https://github.com/apache/spark/pull/44264#discussion_r1424868406
##
common/network-common/src/main/java/org/apache/spark/network/ssl/SSLFactory.java:
##
@@ -391,13 +391,16 @@ private static TrustManager[] defaultTrustManagers(File
mridulm commented on PR #44264:
URL: https://github.com/apache/spark/pull/44264#issuecomment-1853298110
Merged to master.
Thanks for adding this @hasnain-db !
mridulm closed pull request #44264: [SPARK-46132][CORE] Support key password
for JKS keys for RPC SSL
URL: https://github.com/apache/spark/pull/44264
mridulm commented on code in PR #44264:
URL: https://github.com/apache/spark/pull/44264#discussion_r1424867167
##
common/network-common/src/main/java/org/apache/spark/network/ssl/SSLFactory.java:
##
@@ -391,13 +391,16 @@ private static TrustManager[] defaultTrustManagers(File
LuciferYang commented on code in PR #44270:
URL: https://github.com/apache/spark/pull/44270#discussion_r1424841874
##
sql/hive/src/test/java/org/apache/spark/sql/hive/JavaMetastoreDataSourcesSuite.java:
##
@@ -67,7 +67,8 @@ public void setUp() throws IOException {
List
HyukjinKwon commented on PR #44163:
URL: https://github.com/apache/spark/pull/44163#issuecomment-1853261270
Mind retriggering https://github.com/shujingyang-db/spark/runs/19583242928
please?
junyuc25 commented on code in PR #44211:
URL: https://github.com/apache/spark/pull/44211#discussion_r1424821501
##
connector/kinesis-asl-assembly/pom.xml:
##
@@ -62,12 +62,18 @@
      <groupId>com.google.protobuf</groupId>
      <artifactId>protobuf-java</artifactId>
-      <version>2.6.1</version>
-
+      <scope>compile</scope>
+
+
cloud-fan commented on PR #44187:
URL: https://github.com/apache/spark/pull/44187#issuecomment-1853219614
Hi @beliefer, thanks for taking the time to clean up the Spark codebase! However,
I have a bit of concern about this kind of invasive change. The new API is
good and we should definitely
yaooqinn commented on code in PR #41262:
URL: https://github.com/apache/spark/pull/41262#discussion_r1424806347
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##
@@ -1080,7 +1077,7 @@ class Analyzer(override val catalogManager:
yaooqinn opened a new pull request, #44326:
URL: https://github.com/apache/spark/pull/44326
### What changes were proposed in this pull request?
This PR adds `query.resolved` as a pattern guard when HiveAnalysis converts
InsertIntoStatement to InsertIntoHiveTable.
LuciferYang commented on PR #44311:
URL: https://github.com/apache/spark/pull/44311#issuecomment-1853193629
> Yea it's better to have a JIRA ticket for it.
+1
beliefer commented on PR #44184:
URL: https://github.com/apache/spark/pull/44184#issuecomment-1853133866
@peter-toth Please take a review if you have time.
ulysses-you commented on code in PR #44013:
URL: https://github.com/apache/spark/pull/44013#discussion_r1423311867
##
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveRuleContext.scala:
##
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software
itholic opened a new pull request, #44325:
URL: https://github.com/apache/spark/pull/44325
### What changes were proposed in this pull request?
This PR proposes to add an information of `isocalendar` into migration guide
### Why are the changes needed?
We
WweiL closed pull request #44319: [DO-NOT-REVIEW] Streaming UI without
numRemovedRow change
URL: https://github.com/apache/spark/pull/44319
shujingyang-db commented on code in PR #44163:
URL: https://github.com/apache/spark/pull/44163#discussion_r1424733989
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/xml/StaxXmlParser.scala:
##
@@ -604,6 +606,24 @@ class XmlTokenizer(
return Some(str)
neilramaswamy opened a new pull request, #44323:
URL: https://github.com/apache/spark/pull/44323
### What changes were proposed in this pull request?
In these changes, we modify the stream-stream state removal logic to trigger
and drop state for one side of a stream-stream join by
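The general idea behind dropping join state can be sketched as follows (a hedged illustration under assumed semantics, not Spark's actual implementation; all names are made up):

```python
# Hedged sketch: once the watermark passes a buffered row's event time, that
# row can no longer match any future row from the other side, so its state
# can be dropped from the store.
def evict_state(state_rows, watermark):
    """state_rows: list of (event_time, row) pairs buffered for one join side."""
    kept = [(ts, row) for ts, row in state_rows if ts >= watermark]
    dropped = [(ts, row) for ts, row in state_rows if ts < watermark]
    return kept, dropped

kept, dropped = evict_state([(5, "a"), (12, "b")], watermark=10)
```

Here the row with event time 5 falls behind the watermark of 10 and is evicted, while the row at 12 stays buffered.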
cloud-fan commented on code in PR #44190:
URL: https://github.com/apache/spark/pull/44190#discussion_r1424659710
##
sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2Suite.scala:
##
@@ -725,8 +725,127 @@ class DataSourceV2Suite extends QueryTest with
cloud-fan commented on code in PR #44190:
URL: https://github.com/apache/spark/pull/44190#discussion_r1424657051
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2Exec.scala:
##
@@ -85,6 +86,11 @@ case class CreateTableAsSelectExec(
cloud-fan commented on code in PR #44190:
URL: https://github.com/apache/spark/pull/44190#discussion_r1424656357
##
common/utils/src/main/resources/error/error-classes.json:
##
@@ -887,6 +887,11 @@
],
"sqlState" : "42K02"
},
+ "DATA_SOURCE_TABLE_SCHEMA_MISMATCH" :
cloud-fan commented on PR #44311:
URL: https://github.com/apache/spark/pull/44311#issuecomment-1852932321
Yea it's better to have a JIRA ticket for it.
cloud-fan commented on PR #44302:
URL: https://github.com/apache/spark/pull/44302#issuecomment-1852930395
thanks, merging to master/3.5!
cloud-fan closed pull request #44302: [SPARK-46370][SQL] Fix bug when querying
from table after changing column defaults
URL: https://github.com/apache/spark/pull/44302
HyukjinKwon commented on PR #44170:
URL: https://github.com/apache/spark/pull/44170#issuecomment-1852887774
Merged to master.
HyukjinKwon closed pull request #44170: [SPARK-46253][PYTHON] Plan Python data
source read using MapInArrow
URL: https://github.com/apache/spark/pull/44170
WweiL commented on PR #44306:
URL: https://github.com/apache/spark/pull/44306#issuecomment-1852872828
Use empty string as default instead for test failures
https://github.com/WweiL/oss-spark/actions/runs/7186087215/job/19570752262
utkarsh39 opened a new pull request, #44321:
URL: https://github.com/apache/spark/pull/44321
### What changes were proposed in this pull request?
`AccumulableInfo` is one of the top heap consumers in driver's heap dumps
for stages with many tasks. For a stage with a large number
WweiL closed pull request #44320: [DO-NOT-MERGE] working on 3.5
URL: https://github.com/apache/spark/pull/44320
WweiL commented on PR #44319:
URL: https://github.com/apache/spark/pull/44319#issuecomment-1852850655
Also I checked the parent of the most recent change to
StreamingQueryStatisticsPage, https://github.com/apache/spark/pull/43666, it's
still not working.
So I think the issue comes
WweiL commented on PR #44319:
URL: https://github.com/apache/spark/pull/44319#issuecomment-1852847950
Also verified the same change works with 3.5
https://github.com/apache/spark/pull/44320
WweiL opened a new pull request, #44320:
URL: https://github.com/apache/spark/pull/44320
https://github.com/apache/spark/assets/10248890/c6396399-3260-4d2d-82c0-fc1b93cb0c33
https://github.com/apache/spark/assets/10248890/ab469ac2-76c9-4ea1-8729-5f5283756696
dbatomic commented on code in PR #44316:
URL: https://github.com/apache/spark/pull/44316#discussion_r1424536555
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveInlineTables.scala:
##
@@ -33,12 +34,14 @@ import
WweiL commented on PR #44319:
URL: https://github.com/apache/spark/pull/44319#issuecomment-1852730606
This is the error log
https://github.com/apache/spark/assets/10248890/cd477d48-a45e-4ee9-b45e-06dc1dbeb9d9
WweiL commented on PR #44319:
URL: https://github.com/apache/spark/pull/44319#issuecomment-1852728485
https://github.com/apache/spark/assets/10248890/fdb78c92-2d6f-41a9-ba23-3068d128caa8
https://github.com/apache/spark/assets/10248890/642aa6c3-7728-43c7-8a11-cbf79c4362c5
--
WweiL opened a new pull request, #44319:
URL: https://github.com/apache/spark/pull/44319
shujingyang-db opened a new pull request, #44318:
URL: https://github.com/apache/spark/pull/44318
### What changes were proposed in this pull request?
In XML, elements typically consist of a name and a value, with the value
enclosed between the opening and closing tags. But
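The structure being described, an element's value enclosed between its opening and closing tags, can be seen with a plain standard-library example (unrelated to the Spark XML reader itself; the sample document is made up):

```python
# A minimal XML fragment: each element has a name (its tag) and a value (its
# text) enclosed between the opening and closing tags.
import xml.etree.ElementTree as ET

root = ET.fromstring("<book><title>Spark</title><pages>300</pages></book>")
for child in root:
    print(child.tag, "->", child.text)
```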
MaxGekk commented on PR #44314:
URL: https://github.com/apache/spark/pull/44314#issuecomment-1852614913
> For my understanding, the following minor PR was the actual foundation of
this PR, right?
Before this PR and https://github.com/apache/spark/pull/44311, we had only
those
MaxGekk commented on code in PR #44314:
URL: https://github.com/apache/spark/pull/44314#discussion_r1424440273
##
sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala:
##
@@ -2638,10 +2638,14 @@ class SQLQuerySuite extends QueryTest with
SharedSparkSession with
cloud-fan commented on code in PR #44316:
URL: https://github.com/apache/spark/pull/44316#discussion_r1424433276
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveInlineTables.scala:
##
@@ -33,12 +34,14 @@ import
dtenedor commented on PR #44302:
URL: https://github.com/apache/spark/pull/44302#issuecomment-1852588610
@cloud-fan all tests are passing now:
![image](https://github.com/apache/spark/assets/99207096/c22520b2-334c-446d-8f65-7edcaa44b7c1)
srielau commented on code in PR #44314:
URL: https://github.com/apache/spark/pull/44314#discussion_r1424416253
##
sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala:
##
@@ -2638,10 +2638,14 @@ class SQLQuerySuite extends QueryTest with
SharedSparkSession with
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424416324
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/ExecutorJVMProfiler.scala:
##
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424416052
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/ExecutorJVMProfiler.scala:
##
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424414904
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/ExecutorJVMProfiler.scala:
##
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424413358
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/ExecutorJVMProfiler.scala:
##
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424412268
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/ExecutorJVMProfiler.scala:
##
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424411503
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/ExecutorJVMProfiler.scala:
##
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424410457
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/ExecutorJVMProfiler.scala:
##
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424409883
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/ExecutorJVMProfiler.scala:
##
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software
MaxGekk commented on PR #44314:
URL: https://github.com/apache/spark/pull/44314#issuecomment-1852572628
@cloud-fan @dongjoon-hyun @LuciferYang @beliefer @srielau Could you review
this PR, please? After this PR, all `AnalysisException` and its sub-classes
will be migrated onto error
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424409198
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/ExecutorJVMProfiler.scala:
##
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424407928
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/ExecutorProfilerPlugin.scala:
##
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424407504
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/ExecutorProfilerPlugin.scala:
##
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424406829
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/ExecutorProfilerPlugin.scala:
##
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424404258
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/package.scala:
##
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424405666
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/package.scala:
##
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424403116
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/package.scala:
##
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424402439
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/package.scala:
##
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424401384
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/package.scala:
##
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424398819
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/package.scala:
##
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424398324
##
connector/profiler/src/main/scala/org/apache/spark/executor/profiler/package.scala:
##
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation
cloud-fan commented on PR #44310:
URL: https://github.com/apache/spark/pull/44310#issuecomment-1852552188
thanks for the review, merging to master!
dongjoon-hyun commented on code in PR #44021:
URL: https://github.com/apache/spark/pull/44021#discussion_r1424396432
##
connector/profiler/README.md:
##
@@ -0,0 +1,109 @@
+# Spark JVM Profiler Plugin
+
+## Build
+
+To build
+```
+ ./build/mvn clean package -DskipTests
+```
cloud-fan closed pull request #44310: [SPARK-46378][SQL] Still remove Sort
after converting Aggregate to Project
URL: https://github.com/apache/spark/pull/44310
dongjoon-hyun commented on PR #44284:
URL: https://github.com/apache/spark/pull/44284#issuecomment-1852541764
Thank you so much, @viirya. I'll dig more into this area with more test cases.
dongjoon-hyun closed pull request #44284: [SPARK-46353][CORE] Refactor to
improve `RegisterWorker` unit test coverage
URL: https://github.com/apache/spark/pull/44284
dongjoon-hyun commented on code in PR #44284:
URL: https://github.com/apache/spark/pull/44284#discussion_r1424370626
##
core/src/main/scala/org/apache/spark/deploy/master/Master.scala:
##
@@ -676,6 +652,45 @@ private[deploy] class Master(
logInfo(f"Recovery complete in
dongjoon-hyun commented on code in PR #44284:
URL: https://github.com/apache/spark/pull/44284#discussion_r1424370626
##
core/src/main/scala/org/apache/spark/deploy/master/Master.scala:
##
@@ -676,6 +652,45 @@ private[deploy] class Master(
logInfo(f"Recovery complete in
HyukjinKwon closed pull request #43985: [SPARK-46075][CONNECT] Improvements to
SparkConnectSessionManager
URL: https://github.com/apache/spark/pull/43985
HyukjinKwon commented on PR #43985:
URL: https://github.com/apache/spark/pull/43985#issuecomment-1852508167
Merged to master.
viirya commented on code in PR #44284:
URL: https://github.com/apache/spark/pull/44284#discussion_r1424356459
##
core/src/main/scala/org/apache/spark/deploy/master/Master.scala:
##
@@ -676,6 +652,45 @@ private[deploy] class Master(
logInfo(f"Recovery complete in
viirya commented on code in PR #44284:
URL: https://github.com/apache/spark/pull/44284#discussion_r1424356010
##
core/src/main/scala/org/apache/spark/deploy/master/Master.scala:
##
@@ -676,6 +652,45 @@ private[deploy] class Master(
logInfo(f"Recovery complete in