[ https://issues.apache.org/jira/browse/SPARK-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14167144#comment-14167144 ]
Nan Zhu commented on SPARK-2962:
--------------------------------

Hi, [~mrid...@yahoo-inc.com], I think this has been fixed in https://github.com/apache/spark/pull/1313/files:

{code:title=TaskSetManager.scala|borderStyle=solid}
if (tasks(index).preferredLocations == Nil) {
  addTo(pendingTasksWithNoPrefs)
}
{code}

Now only tasks without an explicit preference are added to pendingTasksWithNoPrefs, and NO_PREF tasks are always scheduled after NODE_LOCAL ones. A simplified sketch of the resulting queue ordering is appended after the quoted description below.

> Suboptimal scheduling in spark
> ------------------------------
>
>                 Key: SPARK-2962
>                 URL: https://issues.apache.org/jira/browse/SPARK-2962
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.1.0
>         Environment: All
>            Reporter: Mridul Muralidharan
>
> In findTask, irrespective of the 'locality' specified, pendingTasksWithNoPrefs are always scheduled with PROCESS_LOCAL.
> pendingTasksWithNoPrefs contains tasks which currently do not have any alive locations - but which could come in 'later': particularly relevant when the spark app is just coming up and containers are still being added.
> This causes a large number of non-node-local tasks to be scheduled, incurring significant network transfers in the cluster when running with non-trivial datasets.
> The comment "// Look for no-pref tasks after rack-local tasks since they can run anywhere." in the method code is misleading: locality levels start from PROCESS_LOCAL and go down to ANY, so no-pref tasks get scheduled much before rack-local ones.
> Also note that currentLocalityIndex is reset to the taskLocality returned by this method - so returning PROCESS_LOCAL as the level will trigger wait times again. (This was relevant before the recent change to the scheduler, and might be again depending on the resolution of this issue.)
> Found as part of writing a test for SPARK-2931.
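
For anyone following along, here is a minimal, self-contained Scala sketch of the queue ordering discussed above. It is illustrative only: the object name LocalitySketch and the flat per-level queues are assumptions for the example, the field names merely mirror TaskSetManager's, and the dequeue logic is simplified (no delay scheduling, no per-executor/host/rack maps); it is not the actual implementation.

{code:title=LocalitySketch.scala|borderStyle=solid}
import scala.collection.mutable.ArrayBuffer

// Simplified model of the locality ordering after the fix:
// NO_PREF sits between NODE_LOCAL and RACK_LOCAL, so no-pref tasks
// are only dequeued once the process- and node-local queues are empty
// (or the allowed locality has relaxed past NODE_LOCAL).
object LocalitySketch {
  object TaskLocality extends Enumeration {
    val PROCESS_LOCAL, NODE_LOCAL, NO_PREF, RACK_LOCAL, ANY = Value
  }
  import TaskLocality._

  // One flat queue per level; the real scheduler keys these by executor/host/rack.
  val pendingTasksForExecutor = ArrayBuffer[Int]()
  val pendingTasksForHost     = ArrayBuffer[Int]()
  val pendingTasksWithNoPrefs = ArrayBuffer[Int]()  // tasks with preferredLocations == Nil
  val pendingTasksForRack     = ArrayBuffer[Int]()
  val allPendingTasks         = ArrayBuffer[Int]()

  // Dequeue the next task whose locality level is allowed by maxLocality,
  // trying the most local queues first.
  def findTask(maxLocality: TaskLocality.Value): Option[(Int, TaskLocality.Value)] = {
    val queues = Seq(
      pendingTasksForExecutor -> PROCESS_LOCAL,
      pendingTasksForHost     -> NODE_LOCAL,
      pendingTasksWithNoPrefs -> NO_PREF,      // after NODE_LOCAL, before RACK_LOCAL
      pendingTasksForRack     -> RACK_LOCAL,
      allPendingTasks         -> ANY)
    queues.collectFirst {
      case (queue, level) if level <= maxLocality && queue.nonEmpty =>
        (queue.remove(0), level)
    }
  }

  def main(args: Array[String]): Unit = {
    pendingTasksWithNoPrefs += 1   // a task with no preferred locations
    pendingTasksForHost += 2       // a node-local task
    println(findTask(NODE_LOCAL))  // Some((2,NODE_LOCAL)): node-local task wins
    println(findTask(NODE_LOCAL))  // None: the no-pref task waits for NO_PREF level
    println(findTask(NO_PREF))     // Some((1,NO_PREF))
  }
}
{code}

Before the PR, tasks whose preferred locations were simply not yet alive also landed in pendingTasksWithNoPrefs and were handed out at PROCESS_LOCAL level, which is the suboptimal scheduling this report describes.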