Github user jerrypeng commented on a diff in the pull request:

    https://github.com/apache/storm/pull/2400#discussion_r148923218
  
    --- Diff: docs/Resource_Aware_Scheduler_overview.md ---
    @@ -243,58 +243,81 @@ http://dl.acm.org/citation.cfm?id=2814808
     <div id='Specifying-Topology-Prioritization-Strategy'/>
     ### Specifying Topology Prioritization Strategy
     
    -The order of scheduling is a pluggable interface in which a user could 
define a strategy that prioritizes topologies.  For a user to define his or her 
own prioritization strategy, he or she needs to implement the 
ISchedulingPriorityStrategy interface.  A user can set the scheduling priority 
strategy by setting the *Config.RESOURCE_AWARE_SCHEDULER_PRIORITY_STRATEGY* to 
point to the class that implements the strategy. For instance:
    +The order of scheduling and eviction is determined by a pluggable interface in which the cluster owner can define how topologies should be scheduled.  To define a custom prioritization strategy, the owner needs to implement the ISchedulingPriorityStrategy interface.  A user can set the scheduling priority strategy by setting `DaemonConfig.RESOURCE_AWARE_SCHEDULER_PRIORITY_STRATEGY` to point to the class that implements the strategy. For instance:
     ```
         resource.aware.scheduler.priority.strategy: 
"org.apache.storm.scheduler.resource.strategies.priority.DefaultSchedulingPriorityStrategy"
     ```
    -A default strategy will be provided.  The following explains how the 
default scheduling priority strategy works.
    +
    +Topologies are scheduled starting at the beginning of the list returned by this plugin.  If there are not enough resources to schedule a topology, others are evicted starting at the end of the list.  Eviction stops when there are no lower-priority topologies left to evict.
     
     **DefaultSchedulingPriorityStrategy**
     
    -The order of scheduling should be based on the distance between a user’s 
current resource allocation and his or her guaranteed allocation.  We should 
prioritize the users who are the furthest away from their resource guarantee. 
The difficulty of this problem is that a user may have multiple resource 
guarantees, and another user can have another set of resource guarantees, so 
how can we compare them in a fair manner?  Let's use the average percentage of 
resource guarantees satisfied as a method of comparison.
    +In the past, the order of scheduling was based on the distance between a user’s current resource allocation and his or her guaranteed allocation.
    +
    +We currently use a slightly different approach. We simulate scheduling the highest-priority topology for each user and score that topology for each resource using the formula
    +
    +```
    +(Requested + Assigned - Guaranteed)/Available
    +```
    +
    +Where
    +
    + * `Requested` is the resource requested by this topology (or an approximation of it for complex requests like shared memory).
    + * `Assigned` is the resources already assigned by the simulation.
    + * `Guaranteed` is the resource guarantee for this user.
    + * `Available` is the amount of that resource currently available in the cluster.
     
    -For example:
    +This yields a negative score for requests that fit within the guarantee and a positive score for requests that do not.
     
    -|User|Resource Guarantee|Resource Allocated|
    -|----|------------------|------------------|
    -|A|<10 CPU, 50GB>|<2 CPU, 40 GB>|
    -|B|< 20 CPU, 25GB>|<15 CPU, 10 GB>|
    +To combine different resources, the maximum of all the individual resource scores is used.  This guarantees that if a user would exceed the guarantee for a single resource, it would not be offset by being under guarantee on any other resources.
     
    -User A’s average percentage satisfied of resource guarantee: 
    +For example:
     
    -(2/10+40/50)/2  = 0.5
    +Assume we have to schedule the following topologies.
     
    -User B’s average percentage satisfied of resource guarantee: 
    +|ID|User|CPU|Memory|Priority|
    +|---|----|---|------|-------|
    +|A-1|A|100|1,000|1|
    +|A-2|A|100|1,000|10|
    +|B-1|B|100|1,000|1|
    +|B-2|B|100|1,000|10|
     
    -(15/20+10/25)/2  = 0.575
    +The cluster as a whole has 300 CPU and 4,000 Memory.
     
    -Thus, in this example User A has a smaller average percentage of his or 
her resource guarantee satisfied than User B.  Thus, User A should get priority 
to be allocated more resource, i.e., schedule a topology submitted by User A.
    +User A is guaranteed 100 CPU and 1,000 Memory.  User B is guaranteed 200 CPU and 1,500 Memory.  The score for each user's most important topology (the one with the lowest priority number) would be:
    +
    +```
    +A-1 Score = max(CPU: (100 + 0 - 100)/300, MEM: (1,000 + 0 - 1,000)/4,000) 
= 0
    +B-1 Score = max(CPU: (100 + 0 - 200)/300, MEM: (1,000 + 0 - 1,500)/4,000) 
= -0.125
    +``` 
     
    -When scheduling, RAS sorts users by the average percentage satisfied of 
resource guarantee and schedule topologies from users based on that ordering 
starting from the users with the lowest average percentage satisfied of 
resource guarantee.  When a user’s resource guarantee is completely 
satisfied, the user’s average percentage satisfied of resource guarantee will 
be greater than or equal to 1.
    +`B-1` has the lowest score, so it would be the highest priority topology to schedule. In the next round, the scores would be:
     
    -<div id='Specifying-Eviction-Strategy'/>
    -### Specifying Eviction Strategy
    -The eviction strategy is used when there are not enough free resources in 
the cluster to schedule new topologies. If the cluster is full, we need a 
mechanism to evict topologies so that user resource guarantees can be met and 
additional resource can be shared fairly among users. The strategy for evicting 
topologies is also a pluggable interface in which the user can implement his or 
her own topology eviction strategy.  For a user to implement his or her own 
eviction strategy, he or she needs to implement the IEvictionStrategy Interface 
and set *Config.RESOURCE_AWARE_SCHEDULER_EVICTION_STRATEGY* to point to the 
implemented strategy class. For instance:
     ```
    -    resource.aware.scheduler.eviction.strategy: 
"org.apache.storm.scheduler.resource.strategies.eviction.DefaultEvictionStrategy"
    +A-1 Score = max(CPU: (100 + 0 - 100)/200, MEM: (1,000 + 0 - 1,000)/3,000) 
= 0
    +B-2 Score = max(CPU: (100 + 100 - 200)/200, MEM: (1,000 + 1,000 - 
1,500)/3,000) = 0.167
     ```
    -A default eviction strategy is provided.  The following explains how the 
default topology eviction strategy works
     
    -**DefaultEvictionStrategy**
    +`A-1` has the lowest score now so it would be the next higest priority 
topology to schedule.
    --- End diff --
    
    misspelling "higest"

