Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17620#discussion_r11204
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -539,7 +539,7 @@ private[deploy] class Master(
private def
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17617
Jenkins, retest this please.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17588
Ping @srowen @ajbozarth do you have any further comment? Thanks.
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/17617
[SPARK-20244][Core] Handle get bytesRead from different thread in Hadoop RDD
## What changes were proposed in this pull request?
Hadoop FileSystem's statistics is based on thread-local
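A minimal sketch of the thread-local behavior in question (illustrative only, not the PR's code; the file path is hypothetical, and `FileSystem.getAllStatistics`/`getThreadStatistics` are standard Hadoop APIs):

```scala
import scala.collection.JavaConverters._
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object ThreadLocalStatsSketch {
  // Sum of bytes read as seen by the *calling* thread's statistics bucket.
  private def totalBytesRead: Long =
    FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics.getBytesRead).sum

  def main(args: Array[String]): Unit = {
    val fs = FileSystem.getLocal(new Configuration())
    val in = fs.open(new Path("/tmp/sample.txt")) // hypothetical readable file
    in.read(new Array[Byte](1024))
    in.close()

    // The thread that performed the read sees a non-zero counter...
    println(s"reader thread: $totalBytesRead")

    // ...but another thread querying the same API sees 0, because the
    // counters are accumulated in per-thread buckets.
    val t = new Thread(new Runnable {
      override def run(): Unit = println(s"other thread: $totalBytesRead")
    })
    t.start()
    t.join()
  }
}
```

This is why a bytes-read callback captured in the Hadoop RDD needs careful handling when records are consumed from a thread other than the one that read them.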
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17582
@tgravescs sorry for the confusion.
>if base URL's ACL (spark.acls.enable) is enabled but user A has no view
permission. User "A" cannot see the app list but could still a
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r110804952
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r110802758
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17495#discussion_r110799889
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/FsHistoryProviderSuite.scala
---
@@ -571,6 +572,34 @@ class FsHistoryProviderSuite extends
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r110797585
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17495#discussion_r110794078
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -320,14 +321,15 @@ private[history] class FsHistoryProvider
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/17588
[SPARK-20275][UI] Do not display "Completed" column for in-progress
applications
## What changes were proposed in this pull request?
Current HistoryServer will display comp
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17580
I would say this is not a Spark program; it is purely Kafka producer code. Maintaining a Kafka producer example in Spark is not a good choice; this is legacy code. Because of the API
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17580
What is the purpose of adding this example? I think we already have a `KafkaWordCountProducer` for the convenience of the Kafka streaming example, and we could use that to send events to Kafka. I
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/17582
[SPARK-20239][Core] Improve HistoryServer's ACL mechanism
## What changes were proposed in this pull request?
The current SHS (Spark History Server) has two different ACLs:
* ACL
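For context, a sketch of the two ACL layers the description refers to; the config keys are standard Spark settings, the values are illustrative:

```scala
import org.apache.spark.SparkConf

object ShsAclSketch {
  // Illustrative values only. `spark.history.ui.acls.enable` governs the
  // history server's own UI, while the per-application view ACLs below are
  // recorded by each application when it runs and replayed by the SHS.
  val conf = new SparkConf()
    .set("spark.history.ui.acls.enable", "true")
    .set("spark.acls.enable", "true")
    .set("spark.ui.view.acls", "userA,userB")
}
```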
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17558
@wangyum , the fix in your PR is more like a bug fix, whereas the comment above is actually a feature request; these two things do not completely match. I would suggest focusing
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17558
@wangyum what if the task requires that jar? From your fix, what I see is that you catch the exception and log a warning instead, but if that task requires the jar, will you fix
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17495
Ping @vanzin @tgravescs again. Sorry to bother you, and I really appreciate your time.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
I see. The current code leverages the `SparkListenerBlockUpdated` event to calculate memory usage; let me try to investigate the feasibility of using `taskEnd.taskMetrics.updatedBlocks`, to see
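For reference, a minimal sketch of the listener approach mentioned above (illustrative, not the PR's code; the naive accumulation overcounts blocks that are updated more than once):

```scala
import scala.collection.mutable
import org.apache.spark.scheduler.{SparkListener, SparkListenerBlockUpdated}

// Tracks a rough per-executor storage-memory figure from block update events.
class MemoryUsageListener extends SparkListener {
  private val memUsedByExecutor =
    mutable.Map.empty[String, Long].withDefaultValue(0L)

  override def onBlockUpdated(event: SparkListenerBlockUpdated): Unit = {
    val info = event.blockUpdatedInfo
    // Naive: adds the reported in-memory size of every update; a real
    // implementation must track per-block state to handle updates and drops.
    memUsedByExecutor(info.blockManagerId.executorId) += info.memSize
  }
}
```

Such a listener would be registered with `sc.addSparkListener(new MemoryUsageListener)`.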
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
Thanks @squito .
Regarding showing memory usage in the history server: my major concern is that putting so many block update events into the event log will significantly increase the file size
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17551
@barnardb only in Spark standalone mode is the HistoryServer embedded into the Master process, for convenience, IIRC. You can always start a standalone HistoryServer process.
Also
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17551
Agree with @srowen ; the proposed solution overlaps with the key functionality of the history server. Usually we should stop the app and release its resources as soon as the application finishes
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17401#discussion_r109914191
--- Diff:
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ExternalShuffleBlockHandler.java
---
@@ -176,7 +176,8 @@ private void
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17401#discussion_r109916009
--- Diff:
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleServiceMetrics.java
---
@@ -0,0 +1,123 @@
+/*
+ * Licensed
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17401#discussion_r109918033
--- Diff:
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleServiceMetrics.java
---
@@ -0,0 +1,123 @@
+/*
+ * Licensed
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17401#discussion_r109914966
--- Diff:
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
---
@@ -184,7 +204,7 @@ protected void serviceInit
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17480
Also CC @tgravescs @vanzin to help review; they may have more thoughts :).
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r109870073
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r109869769
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
Thanks @squito , thanks so much for your review; I just addressed the comments you mentioned.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109846490
--- Diff: core/src/main/scala/org/apache/spark/ui/exec/ExecutorsPage.scala
---
@@ -115,8 +115,9 @@ private[spark] object ExecutorsPage {
val
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109282721
--- Diff: core/src/main/scala/org/apache/spark/status/api/v1/api.scala ---
@@ -111,7 +115,11 @@ class RDDDataDistribution private[spark](
val
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109282694
--- Diff: core/src/main/scala/org/apache/spark/status/api/v1/api.scala ---
@@ -75,7 +75,11 @@ class ExecutorSummary private[spark](
val
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109282643
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockManagerMasterEndpoint.scala
---
@@ -276,7 +276,8 @@ class BlockManagerMasterEndpoint
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r109282268
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,9 @@ private[spark] class ExecutorAllocationManager
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17498
IMHO this is still an unnecessary fix. I doubt whether users really get confused without your fix. You can always correct me, since I stand on the side of developers :).
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17495
@tgravescs @vanzin , would you please help review this PR? Thanks a lot.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109271875
--- Diff:
core/src/main/scala/org/apache/spark/storage/StorageStatusListener.scala ---
@@ -74,8 +74,11 @@ class StorageStatusListener(conf: SparkConf
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17480
@witgo thanks for your explanation. But AFAIK if the AM gets restarted, it will honor the initial executor number when launching executors, so once executors are launched, the stage should be able to get executed
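For reference, the "initial executor number" refers to the dynamic allocation settings; a sketch with illustrative values:

```scala
import org.apache.spark.SparkConf

object DynamicAllocSketch {
  val conf = new SparkConf()
    .set("spark.dynamicAllocation.enabled", "true")
    .set("spark.dynamicAllocation.initialExecutors", "2") // honored again on AM restart
    .set("spark.dynamicAllocation.minExecutors", "1")
}
```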
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
@squito , AFAIK we don't record block update events in the history server, so we cannot calculate the used memory from the event log.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109128494
--- Diff:
core/src/main/scala/org/apache/spark/storage/StorageStatusListener.scala ---
@@ -74,8 +74,11 @@ class StorageStatusListener(conf: SparkConf
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17495#discussion_r109125950
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -320,14 +321,15 @@ private[history] class FsHistoryProvider
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/17495
[SPARK-20172] Add file permission check when listing files in
FsHistoryProvider
## What changes were proposed in this pull request?
In the current Spark HistoryServer we expect
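A sketch of one way to do such a permission check with the Hadoop API (using `FileSystem.access`, available since Hadoop 2.6; the PR's actual approach may differ):

```scala
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.fs.permission.FsAction
import org.apache.hadoop.security.AccessControlException

object PermissionCheckSketch {
  // Returns false instead of letting an unreadable event log fail the scan later.
  def isReadable(fs: FileSystem, path: Path): Boolean =
    try {
      fs.access(path, FsAction.READ) // throws if the SHS user lacks read permission
      true
    } catch {
      case _: AccessControlException => false
    }
}
```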
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17220
Well, I still saw "tungsten-sort" in branch 2.1 and master
(https://github.com/apache/spark/blob/branch-2.1/core/src/main/scala/org/apache/spark/SparkEnv.scala#L320).
Can you
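For reference, the mapping at that line (paraphrased from branch-2.1's `SparkEnv.scala`) still accepts "tungsten-sort" as an alias for the sort-based shuffle manager:

```scala
// Paraphrased from branch-2.1 SparkEnv.scala; both names map to the same class.
val shortShuffleMgrNames = Map(
  "sort" -> classOf[org.apache.spark.shuffle.sort.SortShuffleManager].getName,
  "tungsten-sort" -> classOf[org.apache.spark.shuffle.sort.SortShuffleManager].getName)
```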
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17220
What's the meaning of "has been deleted in Spark 2.1.0"? I think the reason mentioned above is quite clear.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17113
Thanks @tgravescs , no problem.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17480
Would you please elaborate on the problem you met? That would make it easier to understand your scenario and your fix.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17479#discussion_r108875608
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/executorspage-template.html
---
@@ -24,7 +24,7 @@ Summary
RDD
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17479#discussion_r108862872
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/executorspage-template.html
---
@@ -24,7 +24,7 @@ Summary
RDD
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
Hi @squito , would you please review the code again? Thanks a lot.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17388#discussion_r108580909
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -485,12 +485,17 @@ object SparkSubmit extends CommandLineUtils
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17458
There are many changes related to the UI here, and we don't have much unit test coverage for this part, so I'm afraid these changes may potentially introduce regressions.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17458#discussion_r108418582
--- Diff: core/src/main/scala/org/apache/spark/ui/UIUtils.scala ---
@@ -317,7 +317,7 @@ private[spark] object UIUtils extends Logging {
def
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17430
The current fix LGTM, pending a committer's green light.
CC @srowen .
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17387
@yaooqinn normally such a big behavior change requires a design doc and thorough discussion. It is not a good idea to push a bunch of code silently without any discussion.
Besides, we're
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
Jenkins, retest this please.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17401#discussion_r108123141
--- Diff:
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
---
@@ -166,6 +170,23 @@ protected void serviceInit
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17401#discussion_r108120305
--- Diff:
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
---
@@ -166,6 +170,23 @@ protected void serviceInit
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17401#discussion_r108119959
--- Diff:
resource-managers/yarn/src/test/scala/org/apache/spark/network/yarn/YarnShuffleServiceMetricsSuite.scala
---
@@ -0,0 +1,74
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17401#discussion_r108119704
--- Diff:
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleServiceMetrics.java
---
@@ -0,0 +1,123 @@
+/*
+ * Licensed
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
Thanks @squito , I will change some event log files to test non-zero off-heap memory.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17430#discussion_r108092473
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -190,6 +190,7 @@ private[deploy] class SparkSubmitArguments(args
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17388
Thanks @vanzin for your comments. Yes, remote jars are currently also not added to the client's classpath.
Any further comments?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17335
Sorry about that, @vanzin. I just updated the description; please review again. Thanks a lot.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
Jenkins, retest this please.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
The Jenkins test was killed abruptly with signal -9.
Jenkins, retest this please.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17401#discussion_r107840363
--- Diff:
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleServiceMetrics.java
---
@@ -0,0 +1,118 @@
+/*
+ * Licensed
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17113
Jenkins, retest this please.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17401#discussion_r107838311
--- Diff:
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
---
@@ -166,6 +170,23 @@ protected void serviceInit
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17401#discussion_r107838144
--- Diff:
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleServiceMetrics.java
---
@@ -0,0 +1,118 @@
+/*
+ * Licensed
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17401
Actually we follow two-space indentation for Java code in Spark; would you please change the format?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17335
Thanks @vanzin , I agree with you. The scenario @subrotosanyal mentioned is a little specialized, so this problem might be better handled outside of Spark.
Sure, I will update
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17335
I'm not sure if I understand your scenario correctly. In your case the Spark application is embedded into your own application, and your application keeps working after Spark is stopped. And because
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17335
@subrotosanyal would you please elaborate more on this:
> Resource Manager expires the tokens of an application after a certain
period of time lead to expiration of the token wh
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/10506
Thanks @srowen , I think the fix is OK; at least it should be no worse than the previous code.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17335
I have no idea about that issue; the description is quite vague ("Resource Manager cancels the Delegation Token after 10 minutes of shutting down the spark context."). I'm not quite sure th
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17113#discussion_r107604884
--- Diff: docs/configuration.md ---
@@ -1411,6 +1411,15 @@ Apart from these, the following properties are also
available, and may be useful
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
Jenkins, retest this please.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17335
To broaden this issue a bit: currently on the driver side (client mode), issued delegation tokens are not added into the current UGI, which makes follow-up HDFS/metastore/HBase communication still use the TGT
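A minimal sketch of the idea (illustrative; `freshTokens` stands in for however the delegation tokens were obtained):

```scala
import org.apache.hadoop.security.{Credentials, UserGroupInformation}

object TokenMergeSketch {
  // Merge newly issued delegation tokens into the current UGI so that later
  // HDFS/metastore/HBase calls can pick them up instead of relying on the TGT.
  def mergeIntoCurrentUser(freshTokens: Credentials): Unit =
    UserGroupInformation.getCurrentUser.addCredentials(freshTokens)
}
```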
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r107583524
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2767,3 +2767,24 @@ private[spark] class CircularBuffer(sizeInBytes: Int
= 10240
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17388
@vanzin @tgravescs @mridulm do you think it is necessary to add the additional jars and the main jar to the classloader in YARN cluster mode?
In my case I run Spark with HBase in a secure cluster, so
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17387
Does Kerberos authentication really work in non-YARN cluster modes? AFAIK I don't see any code that ships delegation tokens to executors other than on YARN.
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/17388
[SPARK-20059][YARN] Use the correct classloader for HBaseCredentialProvider
## What changes were proposed in this pull request?
Currently we use the system classloader to find HBase jars
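A sketch of the classloader fix the title describes (illustrative; Spark's own helper for this pattern is `Utils.getContextOrSparkClassLoader`):

```scala
object HBaseClassLoaderSketch {
  // Resolve HBase classes through the context classloader (which can see
  // user/extra jars) instead of the system classloader.
  def loadHBaseConfiguration(): Class[_] = {
    val loader = Option(Thread.currentThread().getContextClassLoader)
      .getOrElse(getClass.getClassLoader)
    Class.forName("org.apache.hadoop.hbase.HBaseConfiguration", true, loader)
  }
}
```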
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17300
The fix LGTM; I think it is nice to have such a topology priority. CC @mridulm .
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17342
CC @vanzin @tgravescs , can you please also review this PR? Thanks.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r107326694
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val sparkContext
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
Hi All, thanks a lot for your comments.
Here is the UI after the change:
![screen shot 2017-03-22 at 11 10 22 am](https://cloud.githubusercontent.com/assets/850797/24180547/7180b5ac
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17335
Ping @vanzin , mind reviewing again? Thanks a lot.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17335
@yaooqinn , you only need one principal (for example principal "f...@example.com") to get authenticated with different services; the configurations for Hive and the NN mentioned abo
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17335
@yaooqinn , I pushed another way to fix this issue; I think the HDFS folder owner should be the right user (the proxy user).
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r107075363
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val sparkContext
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r107064584
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val sparkContext
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r107064456
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2767,3 +2767,24 @@ private[spark] class CircularBuffer(sizeInBytes: Int
= 10240
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17334
"Change the exception log to add RDD id of the related the block".
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17312
IMO "(2 executors)" should be enough :).
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
Thanks @tgravescs and @squito for your comments. Based on @tgravescs 's point, it looks like making them a table column is more appropriate.
So I will revert back to using a column and combine
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17335
Thanks @yaooqinn , that's really an issue here. That was my concern when I had this fix: since we wrap the whole `SessionState.start` with the real user, it means all the operations inside
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17312
I think it is good to have such a count, but this `(2)` alone is a little strange; maybe we could change the description so people will know the meaning of this `(2)`.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17334
Can we change the title to actually reflect what we did in the PR?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
Changed the UI according to @CodingCat 's comment.
![screen shot 2017-03-20 at 7 48 27 pm](https://cloud.githubusercontent.com/assets/850797/24098696/ceb431d6-0da6-11e7-9d01-5bdd5cb54613
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r106882606
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/SparkListener.scala ---
@@ -87,8 +87,13 @@ case class
SparkListenerEnvironmentUpdate