[GitHub] spark pull request: [SPARK-8392] Improve the efficiency
Github user andrewor14 commented on a diff in the pull request: https://github.com/apache/spark/pull/6839#discussion_r32655781

--- Diff: core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---

```
@@ -70,6 +70,13 @@ private[ui] class RDDOperationCluster(val id: String, private var _name: String)
   def getAllNodes: Seq[RDDOperationNode] = {
     _childNodes ++ _childClusters.flatMap(_.childNodes)
   }
+
+  /** Return all the nodes which are cached. */
+  def getCachedNodes: Seq[RDDOperationNode] = {
+    val cachedNodes = _childNodes.filter(_.cached)
+    _childClusters.foreach(cluster => cachedNodes ++= cluster._childNodes.filter(_.cached))
```

--- End diff --

style:

```
_childClusters.foreach { cluster =>
  cachedNodes ++= cluster._childNodes.filter(_.cached)
}
```

---

If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.

---

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user SparkQA commented on the pull request: https://github.com/apache/spark/pull/6839#issuecomment-112896107

[Test build #35049 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/35049/consoleFull) for PR 6839 at commit [`f98728b`](https://github.com/apache/spark/commit/f98728bdbef0d3388f36928dccd573fa15bc6536).
Github user andrewor14 commented on a diff in the pull request: https://github.com/apache/spark/pull/6839#discussion_r32656672

--- Diff: core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---

```
@@ -70,6 +70,13 @@ private[ui] class RDDOperationCluster(val id: String, private var _name: String)
   def getAllNodes: Seq[RDDOperationNode] = {
     _childNodes ++ _childClusters.flatMap(_.childNodes)
   }
+
+  /** Return all the nodes which are cached. */
+  def getCachedNodes: Seq[RDDOperationNode] = {
+    val cachedNodes = _childNodes.filter(_.cached)
+    _childClusters.foreach(cluster => cachedNodes ++= cluster._childNodes.filter(_.cached))
```

--- End diff --

also, another way to rewrite this would be:

```
_childNodes.filter(_.cached) ++ _childClusters.flatMap(_.getCachedNodes)
```

I think it's both more concise and easier to read
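A minimal sketch of that suggested recursive one-liner, using hypothetical simplified stand-ins for `RDDOperationNode` and `RDDOperationCluster` (not the real Spark classes), to show that it collects cached nodes across all nesting levels:

```scala
// Hypothetical, simplified stand-ins for illustration only.
case class Node(id: Int, cached: Boolean)

class Cluster(val childNodes: Seq[Node], val childClusters: Seq[Cluster]) {
  // The reviewer's suggested form: filter the current level,
  // then recurse into each sub-cluster.
  def getCachedNodes: Seq[Node] =
    childNodes.filter(_.cached) ++ childClusters.flatMap(_.getCachedNodes)
}

object Demo {
  val leaf = new Cluster(Seq(Node(3, cached = true), Node(4, cached = false)), Nil)
  val root = new Cluster(Seq(Node(1, cached = true), Node(2, cached = false)), Seq(leaf))

  def main(args: Array[String]): Unit = {
    // Cached nodes from both levels are returned; non-cached ones are skipped.
    println(root.getCachedNodes.map(_.id)) // List(1, 3)
  }
}
```

Note that this recurses into nested sub-clusters, whereas the patch as written only looks one level down into `cluster._childNodes`.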
Github user SparkQA commented on the pull request: https://github.com/apache/spark/pull/6839#issuecomment-112927192

[Test build #35049 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/35049/console) for PR 6839 at commit [`f98728b`](https://github.com/apache/spark/commit/f98728bdbef0d3388f36928dccd573fa15bc6536).

* This patch **passes all tests**.
* This patch merges cleanly.
* This patch adds no public classes.
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/6839#issuecomment-112927231

Merged build finished. Test PASSed.
Github user andrewor14 commented on the pull request: https://github.com/apache/spark/pull/6839#issuecomment-112900896

Approach looks fine to me. Once you address the comments I'll merge this.
Github user andrewor14 commented on a diff in the pull request: https://github.com/apache/spark/pull/6839#discussion_r32656904

--- Diff: core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---

```
@@ -70,6 +70,13 @@ private[ui] class RDDOperationCluster(val id: String, private var _name: String)
   def getAllNodes: Seq[RDDOperationNode] = {
     _childNodes ++ _childClusters.flatMap(_.childNodes)
   }
+
+  /** Return all the nodes which are cached. */
+  def getCachedNodes: Seq[RDDOperationNode] = {
+    val cachedNodes = _childNodes.filter(_.cached)
+    _childClusters.foreach(cluster => cachedNodes ++= cluster._childNodes.filter(_.cached))
```

--- End diff --

I see, is it because we clone fewer nodes? AFAIK `++` on ArrayBuffer actually clones the entire thing first
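A small sketch of the distinction the reviewer is pointing at: on a mutable `ArrayBuffer`, `++` builds a fresh buffer containing copies of both operands, while `++=` appends into the receiver in place. (This illustrates the general collection semantics, not a measurement of the Spark code itself.)

```scala
import scala.collection.mutable.ArrayBuffer

object BufferCopyDemo {
  def main(args: Array[String]): Unit = {
    val a = ArrayBuffer(1, 2, 3)
    val b = ArrayBuffer(4, 5)

    // `++` allocates a brand-new buffer: elements of both operands are copied into it.
    val combined = a ++ b
    assert(combined == ArrayBuffer(1, 2, 3, 4, 5))
    assert(a == ArrayBuffer(1, 2, 3)) // the left operand is untouched

    // `++=` appends in place: only `b`'s elements are copied, and `a` is mutated.
    a ++= b
    assert(a == ArrayBuffer(1, 2, 3, 4, 5))
  }
}
```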
Github user andrewor14 commented on a diff in the pull request: https://github.com/apache/spark/pull/6839#discussion_r32656910

--- Diff: core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---

```
@@ -70,6 +70,13 @@ private[ui] class RDDOperationCluster(val id: String, private var _name: String)
   def getAllNodes: Seq[RDDOperationNode] = {
     _childNodes ++ _childClusters.flatMap(_.childNodes)
   }
+
+  /** Return all the nodes which are cached. */
+  def getCachedNodes: Seq[RDDOperationNode] = {
+    val cachedNodes = _childNodes.filter(_.cached)
+    _childClusters.foreach(cluster => cachedNodes ++= cluster._childNodes.filter(_.cached))
+    cachedNodes
```

--- End diff --

another way to rewrite this would be:

```
_childNodes.filter(_.cached) ++ _childClusters.flatMap(_.getCachedNodes)
```

I think it's both more concise and easier to read
Github user andrewor14 commented on the pull request: https://github.com/apache/spark/pull/6839#issuecomment-112895403

Hi @XuTingjun, can you update the title to something more specific: "RDDOperationGraph: getting cached nodes is slow" or something?
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/6839#issuecomment-112895667

Merged build started.
Github user andrewor14 commented on the pull request: https://github.com/apache/spark/pull/6839#issuecomment-112895459

add to whitelist
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/6839#issuecomment-112895599

Merged build triggered.
Github user andrewor14 commented on a diff in the pull request: https://github.com/apache/spark/pull/6839#discussion_r32656406

--- Diff: core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---

```
@@ -70,6 +70,13 @@ private[ui] class RDDOperationCluster(val id: String, private var _name: String)
   def getAllNodes: Seq[RDDOperationNode] = {
     _childNodes ++ _childClusters.flatMap(_.childNodes)
   }
+
+  /** Return all the nodes which are cached. */
+  def getCachedNodes: Seq[RDDOperationNode] = {
+    val cachedNodes = _childNodes.filter(_.cached)
+    _childClusters.foreach(cluster => cachedNodes ++= cluster._childNodes.filter(_.cached))
```

--- End diff --

maybe I'm missing something, but why is this faster? You're still iterating through all the nodes in the end, so the complexity doesn't change.
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/6839#issuecomment-112407985

Yeah, I think expanding all the nodes and then filtering every node is slow and costs memory.
Github user WangTaoTheTonic commented on a diff in the pull request: https://github.com/apache/spark/pull/6839#discussion_r32518242

--- Diff: core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---

```
@@ -70,6 +70,13 @@ private[ui] class RDDOperationCluster(val id: String, private var _name: String)
   def getAllNodes: Seq[RDDOperationNode] = {
     _childNodes ++ _childClusters.flatMap(_.childNodes)
   }
+
+  /** Return all the node which are cached. */
```

--- End diff --

Nit: Return all the node`s`
Github user srowen commented on a diff in the pull request: https://github.com/apache/spark/pull/6839#discussion_r32513468

--- Diff: core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---

```
@@ -70,6 +70,16 @@ private[ui] class RDDOperationCluster(val id: String, private var _name: String)
   def getAllNodes: Seq[RDDOperationNode] = {
     _childNodes ++ _childClusters.flatMap(_.childNodes)
   }
+
+  /** Return all the node which are cached. */
+  def getCachedNode: Seq[RDDOperationNode] = {
```

--- End diff --

Also: `getCachedNodes`
Github user srowen commented on a diff in the pull request: https://github.com/apache/spark/pull/6839#discussion_r32513428

--- Diff: core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---

```
@@ -70,6 +70,16 @@ private[ui] class RDDOperationCluster(val id: String, private var _name: String)
   def getAllNodes: Seq[RDDOperationNode] = {
     _childNodes ++ _childClusters.flatMap(_.childNodes)
   }
+
+  /** Return all the node which are cached. */
+  def getCachedNode: Seq[RDDOperationNode] = {
+    var cachedNodes = new ListBuffer[RDDOperationNode]
```

--- End diff --

Can be a `val`, and can add the initial elements straight away with just `ListBuffer(cachedNodes:_*)`. Below you have a missing space before `for`, but better than a `for` loop, why not

```
_childClusters.foreach(cluster => cachedNodes ++= cluster._childNodes.filter(_.cached))
```

? Unless I overlook something, that works. Also, can you explain why you think this is slow to begin with? Because nodes are expanded, then filtered? Your JIRAs have been missing this, so please add clearer motivation.
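Putting the reviewer's points together, the method would look roughly like the sketch below: a `val` `ListBuffer` seeded with the splat form, then `foreach` with `++=` instead of an explicit `for` loop. The `OpNode`/`OpCluster` types here are hypothetical simplifications of the real Spark classes, and this keeps the patch's one-level-deep behavior.

```scala
import scala.collection.mutable.ListBuffer

// Hypothetical toy versions of the node/cluster types, for illustration only.
case class OpNode(name: String, cached: Boolean)
class OpCluster(val _childNodes: Seq[OpNode], val _childClusters: Seq[OpCluster] = Nil)

object CachedNodesSketch {
  def getCachedNodes(c: OpCluster): Seq[OpNode] = {
    // A val ListBuffer, seeded with this level's cached nodes via `: _*`.
    val cachedNodes = ListBuffer(c._childNodes.filter(_.cached): _*)
    // foreach + `++=` in place of a for loop, as suggested in the review.
    c._childClusters.foreach(cluster => cachedNodes ++= cluster._childNodes.filter(_.cached))
    cachedNodes
  }
}
```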
Github user JoshRosen commented on the pull request: https://github.com/apache/spark/pull/6839#issuecomment-112469240

/cc @andrewor14 for review.
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/6839#issuecomment-112360016

Can one of the admins verify this patch?
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/6839

[SPARK-8392] Improve the efficiency

```
def getAllNodes: Seq[RDDOperationNode] = {
  _childNodes ++ _childClusters.flatMap(_.childNodes)
}
```

When `_childClusters` has so many nodes, the process will hang. I think we can improve the efficiency here.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/XuTingjun/spark DAGImprove

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/6839.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #6839

commit 81f9fd247a4ce8c69636495ade2f130f6fa6aa6f
Author: xutingjun xuting...@huawei.com
Date: 2015-06-16T09:09:14Z

    put the filter inside
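The motivation behind the change ("put the filter inside") is replacing the pre-existing pattern of materializing every node via `getAllNodes` and filtering afterwards with filtering at each level, so non-cached nodes are never copied into an intermediate collection. A minimal sketch of the two shapes, using hypothetical simplified types rather than the real Spark classes:

```scala
// Hypothetical simplified types, for illustration only.
case class RddNode(id: Int, cached: Boolean)

class RddCluster(val childNodes: Seq[RddNode], val childClusters: Seq[RddCluster]) {
  // Pre-existing shape: materialize every child node first, filter later.
  def getAllNodes: Seq[RddNode] =
    childNodes ++ childClusters.flatMap(_.childNodes)

  // Shape of the change: keep only cached nodes at each level, so the
  // non-cached ones never enter the intermediate collection.
  def getCachedNodes: Seq[RddNode] =
    childNodes.filter(_.cached) ++ childClusters.flatMap(_.childNodes.filter(_.cached))
}
```

Both forms visit every node once, which is the point the reviewers raise: the asymptotic complexity is unchanged, and the savings, if any, come from copying fewer elements.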