Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/22907
What if there is a FetchFailure and Spark reruns some tasks in the previously
succeeded shuffle map stage? That will be a new ShuffleMapStage, and we will
still be double-counting the accumulators.
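As a rough illustration of the concern (the names and numbers are made up; this is not the PR's code), an accumulator updated inside a map-side transformation receives another copy of a partition's updates every time that task is rerun:

```scala
// Sketch of the double-counting risk: an accumulator updated inside a
// transformation is incremented on every task run, so a rerun of an
// already-succeeded shuffle map task adds its update a second time unless
// the driver deduplicates per (stage, partition).
import org.apache.spark.sql.SparkSession

object AccumulatorRetrySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[2]").appName("sketch").getOrCreate()
    val sc = spark.sparkContext
    val recordsRead = sc.longAccumulator("recordsRead")

    val data = sc.parallelize(1 to 1000, 4)
      .map { x => recordsRead.add(1); (x % 10, x) } // update inside a transformation
      .reduceByKey(_ + _)                           // shuffle map stage boundary

    data.count()
    // If a FetchFailure forced some map tasks to rerun, recordsRead.value could
    // exceed 1000, because each rerun adds its partition's count again.
    println(s"recordsRead = ${recordsRead.value}")
    spark.stop()
  }
}
```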
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/20303
@cloud-fan @gatorsmile , are you ready to start reviewing this? I can bring
this up to date.
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/20303
@aaron-aa , the committers agreed to start reviewing the code after 2.4
release.
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/21754
This LGTM as a fix. However, ideally we should also support reusing an
exchange used in different joins. There is no need to shuffle-write the same
table twice; we just need to read it differently.
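As a rough illustration (the tables, keys, and local config are made up for the sketch), here is a query shape where the same table feeds two different joins; `explain()` shows whether the planner emits two separate Exchange nodes over that table or reuses one:

```scala
// Illustrative sketch only: the same table `t` participates in two joins.
// Broadcast joins are disabled so the shuffles are visible in the plan.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[2]")
  .appName("exchange-reuse-sketch")
  .config("spark.sql.autoBroadcastJoinThreshold", "-1")
  .getOrCreate()
import spark.implicits._

val t    = Seq((1, 10), (2, 20), (3, 30)).toDF("k", "v")
val dimA = Seq((1, "a"), (2, "b")).toDF("k", "nameA")
val dimB = Seq((2, "x"), (3, "y")).toDF("k", "nameB")

val joinA = t.join(dimA, "k")
val joinB = t.join(dimB, "k")

// Look for Exchange vs. ReusedExchange over t's data in the physical plan.
joinA.union(joinB).explain(true)
```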
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/20303
Jenkins, retest this please.
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/20303
cc @cloud-fan , @gatorsmile , @yhuai
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/20303
[SPARK-23128][SQL] A new approach to do adaptive execution in Spark SQL
## What changes were proposed in this pull request?
This is joint work with @yucai , @gczsjdy , @chenghao-intel
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/19681#discussion_r158226032
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLAppStatusListener.scala
---
@@ -0,0 +1,366 @@
+/*
+ * Licensed
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/19877#discussion_r154833757
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
@@ -1832,6 +1832,27 @@ class DAGSchedulerSuite extends
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/19877
cc @vanzin
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/19877
[SPARK-22681] Accumulator should only be updated once for each task in result
stage
## What changes were proposed in this pull request?
As the doc says "For accumulator updates performed i
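For context, the guarantee quoted above (from the accumulator documentation) applies only to updates performed inside actions; updates made in transformations can be re-applied when tasks are re-executed. A minimal sketch contrasting the two patterns, assuming a spark-shell session where `sc` is already available:

```scala
// Illustrative contrast between the two accumulator usage patterns.
val acc = sc.longAccumulator("counter")
val rdd = sc.parallelize(1 to 100, 4)

// Inside an action: Spark guarantees each task's update is applied only once,
// even if the task is restarted.
rdd.foreach(_ => acc.add(1))

// Inside a transformation: the update can be applied more than once if the
// task or stage is re-executed, which is the double-counting risk above.
val mapped = rdd.map { x => acc.add(1); x }
mapped.count()
println(acc.value)
```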
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/19755
Can you please show the UI before and after the change?
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/18169
@gatorsmile , why was this reverted? Are you going to open another PR to
fix it?
Github user carsonwang closed the pull request at:
https://github.com/apache/spark/pull/17535
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/17535
This fix will be included in #17540
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/17540#discussion_r114238664
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SQLExecution.scala ---
@@ -73,21 +99,35 @@ object SQLExecution
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/17540
Yes, that's reasonable. I was asking because I noticed `withNewExecutionId`
was added in the `hiveResultString` method, so it should have been fixed.
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/17540#discussion_r114235457
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SQLExecution.scala ---
@@ -73,21 +99,35 @@ object SQLExecution
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/17540#discussion_r114234728
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SQLExecution.scala ---
@@ -73,21 +99,35 @@ object SQLExecution
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/17540
Hi @rdblue , just wanted to confirm that this also fixes #17535, so we
should have the UI when executing queries in the Spark SQL CLI?
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/17535
Yes, it is closely related, but these are two different scenarios of adding
`SQLExecution.withNewExecutionId`.
Now some tests fail because `withNewExecutionId` is called twice.
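A simplified, hypothetical sketch of why calling it twice trips the tests (this is not Spark's actual code): a guard that installs a per-thread execution id refuses to run if a caller further up the stack has already installed one, which is what "called twice" means here.

```scala
import java.util.concurrent.atomic.AtomicLong

// Hypothetical guard illustrating the nesting problem.
object ExecutionIdGuard {
  private val ids = new AtomicLong(0)
  private val current = new ThreadLocal[Option[Long]] {
    override def initialValue(): Option[Long] = None
  }

  def withNewExecutionId[T](body: => T): T = current.get() match {
    case Some(id) =>
      // A second, nested call fails because an id is already installed.
      throw new IllegalStateException(s"execution id $id is already set on this thread")
    case None =>
      current.set(Some(ids.getAndIncrement()))
      try body finally current.set(None)
  }
}

// ExecutionIdGuard.withNewExecutionId { ExecutionIdGuard.withNewExecutionId { 42 } } // throws
```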
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/17535
[SPARK-20222][SQL] Bring back the Spark SQL UI when executing queries in
Spark SQL CLI
## What changes were proposed in this pull request?
There is no Spark SQL UI when executing queries
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/16952
@gatorsmile @cloud-fan @yhuai , can you help review and merge this minor
one-line fix? The code change itself is straightforward.
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/17009
Thanks @cloud-fan . `driver accumulators don't belong to this execution` is
more appropriate. I'll update the wording.
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/17009#discussion_r102656311
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/ui/SQLListenerSuite.scala
---
@@ -147,6 +147,10 @@ class SQLListenerSuite extends
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/17009
cc @cloud-fan @zsxwing
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/16952
cc @yhuai
Github user carsonwang commented on the issue:
https://github.com/apache/spark/pull/16952
test this please
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/16952
[SPARK-19620][SQL]Fix incorrect exchange coordinator id in the physical plan
## What changes were proposed in this pull request?
When adaptive execution is enabled, an exchange coordinator
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/16419
[MINOR][DOC]Fix doc of ForeachWriter to use writeStream
## What changes were proposed in this pull request?
Fix the document of `ForeachWriter` to use `writeStream` instead of `write
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/12029#issuecomment-202772769
Just noticed another minor issue in the picture. It seems the container Id
is too long to fit in the black rectangle.
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/12029#issuecomment-202767219
After applying the patch, the event timeline can be shown without
problems. Picture attached.
![afterpatch](https://cloud.githubusercontent.com/assets/9278199
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/12029#issuecomment-202746825
cc @sarutak
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/12029
[SPARK-14232][WebUI] Fix event timeline display issue when an executor is
removed with a multi-line reason.
## What changes were proposed in this pull request?
The event timeline doesn't
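A plausible shape of the issue, as an illustrative sketch rather than the actual patch: the executor-removal reason is embedded into the timeline's generated markup, so a reason containing newlines needs to be flattened before it is rendered.

```scala
// Illustrative sketch only (not the patch itself): flatten a multi-line
// removal reason before embedding it into the timeline tooltip markup.
def sanitizeReason(reason: String): String =
  reason.replace("\r", " ").replace("\n", " ").trim

val reason  = "Container killed by YARN for exceeding memory limits.\n5.0 GB of 5 GB used."
val tooltip = s"Executor 3 removed: ${sanitizeReason(reason)}"
println(tooltip)
```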
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11813#discussion_r56784342
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala ---
@@ -66,14 +66,16 @@ private[spark] class ApplicationMaster
Github user carsonwang closed the pull request at:
https://github.com/apache/spark/pull/11813
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/11813
[SPARK-13889][YARN][Branch-1.6] Fix the calculation of the max number of
executor failures
## What changes were proposed in this pull request?
Backport #11713 to 1.6.
The max number
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/11713#issuecomment-197923149
Thanks @srowen . There is no integer overflow in 1.6, but the max number of
executor failures is also 3 if dynamic allocation is enabled. It should use
Int.MaxValue
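For context, the arithmetic in question can be demonstrated in a few lines (the defaults are paraphrased from memory and should be treated as an assumption): with dynamic allocation the executor upper bound can default to `Int.MaxValue`, and doubling it overflows to a negative value, so the `max(3, ...)` fallback always yields 3.

```scala
// Minimal demonstration of the overflow with illustrative values.
val maxExecutors = Int.MaxValue
val doubled = maxExecutors * 2                 // wraps around to -2
val maxNumExecutorFailures = math.max(3, doubled)
println(doubled)                               // -2
println(maxNumExecutorFailures)                // 3, instead of something proportional to maxExecutors
```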
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/11813#issuecomment-198215160
cc @srowen
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/11713#issuecomment-197105407
Without this patch, the application with dynamic allocation enabled will
fail when only 3 executors are lost.
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11713#discussion_r56270809
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala ---
@@ -73,7 +73,8 @@ private[spark] class ApplicationMaster
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/11713
[SPARK-13889][YARN] Fix integer overflow when calculating the max number of
executor failures
## What changes were proposed in this pull request?
The max number of executor failure before
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/11090#issuecomment-183801352
@srowen The 20-second improvement is the difference in the stage time,
i.e. before the patch the stage runs in 1.6 min; with this patch it runs in 1.2 min.
It takes
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/11090#issuecomment-180652254
Reusing a `Calendar` object when the method will be called frequently is
recommended by "Effective Java", as mentioned
[here](http://www.informit.co
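A generic sketch of the reuse pattern being advocated (not the actual `DateTimeUtils` change): since `Calendar` is mutable and not thread-safe, cache one instance per thread and reset it before each use instead of allocating a new one on every call.

```scala
import java.util.{Calendar, TimeZone}

// Generic per-thread Calendar reuse; names are illustrative.
object CalendarPool {
  private val threadLocalCalendar = new ThreadLocal[Calendar] {
    override def initialValue(): Calendar =
      Calendar.getInstance(TimeZone.getTimeZone("UTC"))
  }

  def toMillis(year: Int, month: Int, day: Int): Long = {
    val c = threadLocalCalendar.get()
    c.clear()                    // reset the reused instance's state
    c.set(year, month - 1, day)  // Calendar months are 0-based
    c.getTimeInMillis
  }
}

// Example: CalendarPool.toMillis(1997, 1, 1)
```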
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/10634#issuecomment-180659073
@JoshRosen , do you have any further comments?
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/10634#issuecomment-180659170
/cc @cloud-fan @andrewor14 , did you guys see spill size > 0 when the UI
was introduced? Can you take a look at this fix?
Github user carsonwang closed the pull request at:
https://github.com/apache/spark/pull/11071
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/11090
[SPARK-13185][SQL] Reuse Calendar object in DateTimeUtils.StringToDate
method to improve performance
The Java `Calendar` object is expensive to create. I have a subquery like
this: `SELECT
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/11090#issuecomment-180233554
/cc @srowen @rxin
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/11071#issuecomment-180172808
retest this please
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/11071#discussion_r51972422
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -55,10 +56,19 @@ object DateTimeUtils
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/11071#issuecomment-180207538
I have a subquery like this: `SELECT a, b, c FROM table UV WHERE
(datediff(UV.visitDate, '1997-01-01')>=0 AND datediff(UV.visitDate,
'2015-01-01')<=0)`
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/11071
[SPARK-13185][SQL] Improve the performance of DateTimeUtils by reusing
TimeZone and Calendar objects
It is expensive to create Java TimeZone and Calendar objects in each method
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/10994#issuecomment-179567757
I think we should also reuse a Calendar object in each thread. It is
expensive to create Java Calendar and TimeZone objects each time the method is
called. I
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/10994#issuecomment-179587937
OK. I will do that.
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10634#discussion_r50648300
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/UnsafeKVExternalSorter.java
---
@@ -125,7 +125,8 @@ public UnsafeKVExternalSorter
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/10634#issuecomment-174398084
retest this please
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/10634#issuecomment-174421284
@JoshRosen , I now also update `diskBytesSpilled`. Previously it was not
updated for aggregation. Please help review this.
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10634#discussion_r50659285
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeExternalSorter.java
---
@@ -202,6 +201,7 @@ public long spill(long size
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/10634#issuecomment-174422121
retest this please
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/10405#issuecomment-166184298
Thanks for catching this. I think the null check here is necessary, and it
seems the code that actually passes a null taskMetrics is from `TaskSetManager`,
line 796
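As a generic illustration of the defensive pattern under discussion (the type and field below are placeholders, not the listener's real code):

```scala
// Placeholder type; illustrates tolerating a null metrics object from the
// scheduler instead of dereferencing it.
case class FakeTaskMetrics(bytesRead: Long)

def bytesReadOrZero(metrics: FakeTaskMetrics): Long =
  Option(metrics).map(_.bytesRead).getOrElse(0L)

// bytesReadOrZero(null) == 0L ; bytesReadOrZero(FakeTaskMetrics(42)) == 42L
```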
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10352#discussion_r48114578
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/HistoryServer.scala ---
@@ -115,7 +117,17 @@ class HistoryServer(
}
def
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/10352#issuecomment-165705920
retest this please
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/10352#issuecomment-165635982
retest this please
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/10352
[SPARK-12399] Display correct error message when accessing REST API with an
unknown app Id
I got an exception when accessing the below REST API with an unknown
application Id.
`/api/v1
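A generic sketch of the intended behavior (not the actual HistoryServer code; the ids below are made up): map an unknown application id to a 404 with a readable message instead of surfacing an internal exception to the REST client.

```scala
// Illustrative lookup: Left carries an HTTP status and message, Right a UI handle.
def loadAppUi(appId: String, known: Set[String]): Either[(Int, String), String] =
  if (known.contains(appId)) Right(s"ui-for-$appId")
  else Left((404, s"unknown app: $appId"))

// loadAppUi("application_000_unknown", Set("application_000_0001"))
//   == Left((404, "unknown app: application_000_unknown"))
```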
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/10061#issuecomment-160889037
Hi @JoshRosen, the execution IDs are from the static `SQLExecution` object,
so I think they are always unique. Yes, previously each `SQLContext` had its
own
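A minimal sketch of why a single static source of ids stays unique across contexts (illustrative, not the actual `SQLExecution` code):

```scala
import java.util.concurrent.atomic.AtomicLong

// One JVM-wide counter hands out execution ids, so they remain unique even
// when several SQLContext instances share the same listener.
object ExecutionIds {
  private val next = new AtomicLong(0)
  def nextExecutionId(): Long = next.getAndIncrement()
}
```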
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9991#issuecomment-160858339
> It's a bit different but not in the way @carsonwang explained; whether
you use the hook or handle SparkListenerApplicationEnd, the listener will be
cleared w
Github user carsonwang closed the pull request at:
https://github.com/apache/spark/pull/9991
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9991#issuecomment-160876864
Close this and resubmit #10061
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10061#discussion_r46245965
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -1263,6 +1264,8 @@ object SQLContext {
*/
@transient private
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9991#discussion_r46245206
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/ui/SQLListenerSuite.scala
---
@@ -343,6 +343,8 @@ class SQLListenerMemoryLeakSuite
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/10061
[SPARK-11206] Support SQL UI on the history server (resubmit)
On the live web UI, there is a SQL tab which provides valuable information
for the SQL query. But once the workload is finished, we
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9991#issuecomment-160494777
@vanzin , I wrapped the calls to the hooks with
`Utils.tryLogNonFatalError`. I didn't clean up the `SQLListener` after an
application end event because another
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9991#issuecomment-160495888
@zsxwing , do you have any further comments regarding how the `SQLListener`
is cleaned up?
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9991#issuecomment-160242651
retest this please
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9991#discussion_r46014967
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/ui/SQLListenerSuite.scala
---
@@ -343,6 +343,8 @@ class SQLListenerMemoryLeakSuite
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9991#issuecomment-160055041
The original purpose of this PR is to fix the `SQLListenerMemoryLeakSuite`
test failure. This can be resolved by clearing `SQLContext.sqlListener` before
the test
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9991#discussion_r45952482
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/ui/SQLListenerSuite.scala
---
@@ -343,6 +343,8 @@ class SQLListenerMemoryLeakSuite
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/9991
[SPARK-11206] (Followup) Fix SQLListenerMemoryLeakSuite test error
A followup to #9297, fixing the SQLListenerMemoryLeakSuite test error. The
[failure](https://amplab.cs.berkeley.edu/jenkins/job
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9297#issuecomment-159825154
I just submitted #9991 to fix the test failure. Details are described in
the new PR. Thanks all!
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9991#discussion_r45948964
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/ui/SQLListenerSuite.scala
---
@@ -343,6 +343,8 @@ class SQLListenerMemoryLeakSuite
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9297#issuecomment-158264556
retest this please
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9297#discussion_r45288401
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/SparkListener.scala ---
@@ -131,6 +135,17 @@ case class SparkListenerApplicationEnd(time: Long
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9297#discussion_r45288842
--- Diff: core/src/main/scala/org/apache/spark/ui/SparkUI.scala ---
@@ -127,6 +130,11 @@ private[spark] object SparkUI {
val DEFAULT_RETAINED_STAGES
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9297#discussion_r45309477
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLListener.scala ---
@@ -193,38 +214,39 @@ private[sql] class SQLListener(conf
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9297#discussion_r45309470
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLListener.scala ---
@@ -193,38 +214,39 @@ private[sql] class SQLListener(conf
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9297#discussion_r45309655
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/metric/SQLMetrics.scala
---
@@ -104,21 +104,39 @@ private class LongSQLMetricParam(val
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9297#issuecomment-157977639
Thanks @vanzin and @chenghao-intel for reviewing. Just pushed updates to
address the comments.
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9297#discussion_r45309519
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/metric/SQLMetrics.scala
---
@@ -91,7 +91,7 @@ private[sql] class LongSQLMetric private
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9297#issuecomment-156287223
Jenkins, retest this please.
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9297#issuecomment-156339821
Hi @vanzin, I updated the code to address your comments. Thank you!
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9297#issuecomment-156296483
retest this please
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9297#issuecomment-156309871
retest this please
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9297#discussion_r44615239
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -96,6 +114,7 @@ private[spark] object JsonProtocol
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9297#discussion_r44622811
--- Diff: core/src/main/scala/org/apache/spark/ui/SparkUI.scala ---
@@ -150,7 +151,14 @@ private[spark] object SparkUI {
appName: String
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9297#discussion_r44614968
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -96,6 +114,7 @@ private[spark] object JsonProtocol
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9297#issuecomment-155348070
Jenkins, retest this please.
Github user carsonwang commented on the pull request:
https://github.com/apache/spark/pull/9297#issuecomment-152536961
@vanzin Thanks a lot for the comment. This sounds great and is very
helpful. I agree it is not a good idea to move more stuff to the core. The
underlying code
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/9297
[SPARK-11206] Support SQL UI on the history server
On the live web UI, there is a SQL tab which provides valuable information
for the SQL query. But once the workload is finished, we won't see
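A rough sketch of the general mechanism, assuming the Spark 2.x listener API (the event class below is illustrative and not necessarily the PR's exact classes): SQL-specific events extend `SparkListenerEvent`, get written to the event log by the live application, and are replayed by the history server to rebuild the SQL tab.

```scala
import org.apache.spark.scheduler.{SparkListener, SparkListenerEvent}

// Illustrative SQL execution event that would be serialized to the event log.
case class IllustrativeSQLExecutionStart(
    executionId: Long,
    description: String,
    physicalPlanDescription: String,
    time: Long)
  extends SparkListenerEvent

// Illustrative listener used during history-server replay.
class IllustrativeSQLHistoryListener extends SparkListener {
  override def onOtherEvent(event: SparkListenerEvent): Unit = event match {
    case e: IllustrativeSQLExecutionStart =>
      // Accumulate whatever is needed to render the SQL tab for this execution.
      println(s"replayed SQL execution ${e.executionId}: ${e.description}")
    case _ => // ignore non-SQL events
  }
}
```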
Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/9297#discussion_r43218812
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -504,9 +542,28 @@ private[spark] object JsonProtocol {
case