-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51929/#review149681
-----------------------------------------------------------
src/main/java/org/apache/aurora/scheduler/scheduling/SchedulingModule.java (lines 93 - 96)
<https://reviews.apache.org/r/51929/#comment217361>

Regarding your notes in the RB description: I don't see a problem if we set this to a slightly higher value such as `10` or `15`. It seems like we will maintain the basic task group round-robin scheduling fairness even with slightly larger batch sizes, so I am ok with bumping the value.


src/main/java/org/apache/aurora/scheduler/scheduling/TaskGroups.java (lines 173 - 174)
<https://reviews.apache.org/r/51929/#comment217365>

If I understand things correctly, this line could have a performance bug in it: `batchWorker.execute` acquires the global storage lock before calling the `taskScheduler`. For the latter, we use the following definition:

```
this.taskScheduler = (store, taskIds) -> {
  settings.rateLimiter.acquire();
  return taskScheduler.schedule(store, taskIds);
};
```

In combination, we will be throttled by the `rateLimiter` while holding the storage lock. Instead, we should try to acquire the rate limiter within the `Runnable` in `startGroup`, before calling `batchWorker.execute`, so that the global lock is not held longer than absolutely necessary.


- Stephan Erb


On Sept. 16, 2016, 11:53 p.m., Maxim Khutornenko wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/51929/
> -----------------------------------------------------------
> 
> (Updated Sept. 16, 2016, 11:53 p.m.)
> 
> 
> Review request for Aurora, Joshua Cohen, Stephan Erb, and Zameer Manji.
> 
> 
> Repository: aurora
> 
> 
> Description
> -------
> 
> This is phase 2 of the scheduling perf improvement effort started in
> https://reviews.apache.org/r/51759/.
> 
> We can now take a configurable number of task IDs from a given `TaskGroup`
> per scheduling round. The idea is to go deeper into the offer queue and
> assign more than one task if possible.
> This approach delivers substantially better MTTA and still ensures fairness
> across multiple `TaskGroups`. We have observed an almost linear improvement
> in MTTA (4x+ with 5 tasks per round), which suggests that
> `max_tasks_per_schedule_attempt` can be set even higher if the majority of
> cluster jobs have a large number of instances and/or large update batch
> sizes.
> 
> As far as single-round perf goes, we can consider the following two
> worst-case scenarios:
> - master: a single task scheduling attempt fails after trying all offers in
>   the queue
> - this patch: N tasks are launched with the very last N offers in the
>   queue, plus `(N x single_task_launch_latency)` of launch overhead
> 
> Assuming that matching N tasks against M offers takes exactly the same time
> as matching 1 task against M offers (as they all share the same
> `TaskGroup`), the only measurable difference comes from the additional
> `N x single_task_launch_latency` overhead. Based on real cluster
> observations, the `single_task_launch_latency` is less than 1% of a single
> task scheduling attempt, which is far less than the savings from the
> avoided additional scheduling rounds.
> 
> As far as jmh results go, the new approach (batching + multiple tasks per
> round) is only slightly more demanding (~8%).
> Both results though are MUCH higher than the real cluster perf, which just
> confirms we are not bound by CPU time here:
> 
> Master:
> ```
> Benchmark                                                                    Mode  Cnt      Score     Error  Units
> SchedulingBenchmarks.InsufficientResourcesSchedulingBenchmark.runBenchmark  thrpt   10  17126.183 ± 488.425  ops/s
> ```
> 
> This patch:
> ```
> Benchmark                                                                    Mode  Cnt      Score     Error  Units
> SchedulingBenchmarks.InsufficientResourcesSchedulingBenchmark.runBenchmark  thrpt   10  15838.051 ± 187.890  ops/s
> ```
> 
> 
> Diffs
> -----
> 
>   src/jmh/java/org/apache/aurora/benchmark/SchedulingBenchmarks.java 6f1cbfbc4510a037cffc95fee54f62f463d2b534 
>   src/main/java/org/apache/aurora/scheduler/filter/AttributeAggregate.java 87b9e1928ab2d44668df1123f32ffdc4197c0c70 
>   src/main/java/org/apache/aurora/scheduler/scheduling/SchedulingModule.java 664bc6cf964ede2473a4463e58bcdbcb65bc7413 
>   src/main/java/org/apache/aurora/scheduler/scheduling/TaskGroup.java 5d319557057e27fd5fc6d3e553e9ca9139399c50 
>   src/main/java/org/apache/aurora/scheduler/scheduling/TaskGroups.java d390c07522d22e43d79ce4370985f3643ef021ca 
>   src/main/java/org/apache/aurora/scheduler/scheduling/TaskScheduler.java 207d38d1ddfd373892602218a98c1daaf4a1325f 
>   src/main/java/org/apache/aurora/scheduler/state/TaskAssigner.java 7f7b4358ef05c0f0d0e14daac1a5c25488467dc9 
>   src/test/java/org/apache/aurora/scheduler/events/NotifyingSchedulingFilterTest.java ece476b918e6f2c128039e561eea23a94d8ed396 
>   src/test/java/org/apache/aurora/scheduler/filter/AttributeAggregateTest.java 209f9298a1d55207b9b41159f2ab366f92c1eb70 
>   src/test/java/org/apache/aurora/scheduler/filter/SchedulingFilterImplTest.java 0cf23df9f373c0d9b27e55a12adefd5f5fd81ba5 
>   src/test/java/org/apache/aurora/scheduler/http/AbstractJettyTest.java c1c3eca4a6e6c88dab6b1c69fae3e2f290b58039 
>   src/test/java/org/apache/aurora/scheduler/preemptor/PreemptionVictimFilterTest.java ee5c6528af89cc62a35fdb314358c489556d8131 
>   src/test/java/org/apache/aurora/scheduler/preemptor/PreemptorImplTest.java 98048fabc00f233925b6cca015c2525980556e2b 
>   src/test/java/org/apache/aurora/scheduler/preemptor/PreemptorModuleTest.java 2c3e5f32c774be07a5fa28c8bcf3b9a5d88059a1 
>   src/test/java/org/apache/aurora/scheduler/scheduling/TaskGroupsTest.java 88729626de5fa87b45472792c59cc0ff1ade3e93 
>   src/test/java/org/apache/aurora/scheduler/scheduling/TaskSchedulerImplTest.java a4e87d2216401f344dca64d69b945de7bcf8159a 
>   src/test/java/org/apache/aurora/scheduler/state/TaskAssignerImplTest.java b4d27f69ad5d4cce03da9f04424dc35d30e8af29 
> 
> Diff: https://reviews.apache.org/r/51929/diff/
> 
> 
> Testing
> -------
> 
> All types of testing including deploying to test and production clusters.
> 
> 
> Thanks,
> 
> Maxim Khutornenko
> 
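[Editor's sketch] The batching idea in the description above (take up to `max_tasks_per_schedule_attempt` task IDs from one `TaskGroup` per round, while keeping round-robin fairness across groups) can be illustrated in isolation. This is a hypothetical simplification, not the actual `TaskGroups` code: `Group` and `nextBatch` are stand-in names, and a plain deque stands in for Aurora's group queue.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

public class BatchTakeSketch {
    // Hypothetical stand-in for a TaskGroup: a name plus its pending task IDs.
    public static final class Group {
        final String name;
        final Deque<String> taskIds;

        public Group(String name, List<String> ids) {
            this.name = name;
            this.taskIds = new ArrayDeque<>(ids);
        }
    }

    // Take up to maxTasksPerScheduleAttempt IDs from the group at the head of
    // the round-robin queue, then rotate the group to the back if it still has
    // pending work. Every group gets a turn before any group gets a second
    // batch, which is the fairness property the description relies on.
    public static List<String> nextBatch(Deque<Group> groups, int maxTasksPerScheduleAttempt) {
        Group group = groups.poll();
        List<String> batch = new ArrayList<>();
        while (batch.size() < maxTasksPerScheduleAttempt && !group.taskIds.isEmpty()) {
            batch.add(group.taskIds.poll());
        }
        if (!group.taskIds.isEmpty()) {
            groups.offer(group);  // rotate to the back of the queue
        }
        return batch;
    }

    public static void main(String[] args) {
        Deque<Group> groups = new ArrayDeque<>(Arrays.asList(
                new Group("a", Arrays.asList("a/1", "a/2", "a/3")),
                new Group("b", Arrays.asList("b/1"))));
        System.out.println(nextBatch(groups, 2));  // [a/1, a/2]
        System.out.println(nextBatch(groups, 2));  // [b/1]  -- "b" gets a turn before "a" continues
        System.out.println(nextBatch(groups, 2));  // [a/3]
    }
}
```

With `maxTasksPerScheduleAttempt = 1` this degenerates to the pre-patch one-task-per-round behavior, which is why a modest bump (e.g. `10` or `15`, as discussed in the comment above) keeps the same fairness shape while improving MTTA.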
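[Editor's sketch] Stephan's lock-ordering concern can also be shown in isolation. The names below are hypothetical stand-ins (a `ReentrantLock` for Aurora's global storage lock, an event list instead of a real Guava `RateLimiter`); the point is only the ordering: blocking on the throttle while the lock is held stalls every other storage user, whereas throttling first keeps the critical section as short as possible.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class LockOrderingSketch {
    // Stand-in for Aurora's global storage lock.
    public static final ReentrantLock storageLock = new ReentrantLock();
    // Records whether throttling happened inside or outside the lock.
    public static final List<String> events = new ArrayList<>();

    // Stand-in for settings.rateLimiter.acquire(): a real limiter may block
    // here; we just record whether the storage lock is held at that moment.
    static void acquireRateLimit() {
        events.add(storageLock.isHeldByCurrentThread()
                ? "throttled-while-holding-lock"
                : "throttled-before-lock");
    }

    // Problematic ordering: the batch worker takes the lock first, then the
    // task scheduler lambda blocks on the rate limiter inside it.
    static void scheduleThrottleInsideLock() {
        storageLock.lock();
        try {
            acquireRateLimit();  // may block while the global lock is held
            events.add("scheduled");
        } finally {
            storageLock.unlock();
        }
    }

    // Suggested ordering: throttle in the startGroup Runnable first, so the
    // global lock is held only for the actual scheduling work.
    static void scheduleThrottleBeforeLock() {
        acquireRateLimit();      // may block, but no lock is held yet
        storageLock.lock();
        try {
            events.add("scheduled");
        } finally {
            storageLock.unlock();
        }
    }

    public static void main(String[] args) {
        scheduleThrottleInsideLock();
        scheduleThrottleBeforeLock();
        System.out.println(events);
    }
}
```

In the first variant any thread waiting for the storage lock is also paying for the rate-limiter wait; in the second, only the scheduling thread itself is delayed.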