[ https://issues.apache.org/jira/browse/SPARK-17759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Eren Avsarogullari updated SPARK-17759:
---------------------------------------
Description:

If _spark.scheduler.allocation.file_ defines duplicate pools, all of them are created when _SparkContext_ is initialized, but only one of them is ever used; the others are redundant. This redundant pool creation needs to be fixed.

*Code to Reproduce*:
{code:java}
val conf = new SparkConf().setAppName("spark-fairscheduler").setMaster("local")
conf.set("spark.scheduler.mode", "FAIR")
conf.set("spark.scheduler.allocation.file", "src/main/resources/fairscheduler-duplicate-pools.xml")
val sc = new SparkContext(conf)
{code}

*fairscheduler-duplicate-pools.xml*:
The following sample shows only two copies each of default and duplicate_pool1, but the same applies to N copies of default and/or any other duplicated pools.
{code:xml}
<allocations>
  <pool name="default">
    <minShare>0</minShare>
    <weight>1</weight>
    <schedulingMode>FAIR</schedulingMode>
  </pool>
  <pool name="default">
    <minShare>0</minShare>
    <weight>1</weight>
    <schedulingMode>FAIR</schedulingMode>
  </pool>
  <pool name="duplicate_pool1">
    <minShare>1</minShare>
    <weight>1</weight>
    <schedulingMode>FAIR</schedulingMode>
  </pool>
  <pool name="duplicate_pool1">
    <minShare>2</minShare>
    <weight>2</weight>
    <schedulingMode>FAIR</schedulingMode>
  </pool>
</allocations>
{code}

*Debug Screenshot*:
With this file, Pool.schedulableQueue (a ConcurrentLinkedQueue[Schedulable]) ends up with 4 pools (default, default, duplicate_pool1, duplicate_pool1), while Pool.schedulableNameToSchedulable (a ConcurrentHashMap[String, Schedulable]) holds only default and duplicate_pool1 because the pool name is the map key. One copy each of default and duplicate_pool1 is therefore redundant and lives on only in Pool.schedulableQueue. Please see the *attached screenshots*.
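A possible direction for the fix is for SchedulableBuilder to skip a pool whose name is already registered instead of unconditionally appending it to the queue. The following is a minimal Scala sketch of that idea using simplified stand-in classes (FairPool, RootPool, addIfAbsent are hypothetical names, not Spark's real org.apache.spark.scheduler types):

```scala
import java.util.concurrent.{ConcurrentHashMap, ConcurrentLinkedQueue}

// Simplified stand-in for a fair-scheduler pool (hypothetical class,
// not Spark's real Pool/Schedulable hierarchy).
final case class FairPool(name: String, minShare: Int, weight: Int)

final class RootPool {
  val schedulableQueue = new ConcurrentLinkedQueue[FairPool]()
  val schedulableNameToSchedulable = new ConcurrentHashMap[String, FairPool]()

  // Only enqueue the pool if its name is not already registered, so the
  // queue and the name map stay consistent: the first definition wins and
  // later duplicates are ignored.
  def addIfAbsent(pool: FairPool): Boolean = {
    val existing = schedulableNameToSchedulable.putIfAbsent(pool.name, pool)
    if (existing == null) {
      schedulableQueue.add(pool)
      true
    } else {
      false
    }
  }
}
```

With a check like this, parsing the XML above would leave exactly one default and one duplicate_pool1 entry in both collections (a warning could also be logged for each skipped duplicate).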
> SchedulableBuilder should avoid to create duplicate fair scheduler-pools.
> -------------------------------------------------------------------------
>
>                 Key: SPARK-17759
>                 URL: https://issues.apache.org/jira/browse/SPARK-17759
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler
>    Affects Versions: 2.1.0
>            Reporter: Eren Avsarogullari
>         Attachments: duplicate_pools.png, duplicate_pools2.png

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)