Fix a few typos detected by the checkpatch script:
WARNING: 'intialized' may be misspelled - perhaps 'initialized'?
WARNING: 'Substract' may be misspelled - perhaps 'Subtract'?

Signed-off-by: Wen Yang <[email protected]>
CC: Ingo Molnar <[email protected]>
CC: Peter Zijlstra <[email protected]>
CC: [email protected]
---
 kernel/sched/fair.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f19aa66f9b15..dbcb0dd7332e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -703,9 +703,9 @@ void init_entity_runnable_average(struct sched_entity *se)
        memset(sa, 0, sizeof(*sa));
 
        /*
-        * Tasks are intialized with full load to be seen as heavy tasks until
+        * Tasks are initialized with full load to be seen as heavy tasks until
         * they get a chance to stabilize to their real load level.
-        * Group entities are intialized with zero load to reflect the fact that
+        * Group entities are initialized with zero load to reflect the fact that
         * nothing has been attached to the task group yet.
         */
        if (entity_is_task(se))
@@ -3977,8 +3977,8 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
        /*
         * When dequeuing a sched_entity, we must:
         *   - Update loads to have both entity and cfs_rq synced with now.
-        *   - Substract its load from the cfs_rq->runnable_avg.
-        *   - Substract its previous weight from cfs_rq->load.weight.
+        *   - Subtract its load from the cfs_rq->runnable_avg.
+        *   - Subtract its previous weight from cfs_rq->load.weight.
         *   - For group entity, update its weight to reflect the new share
         *     of its group cfs_rq.
         */
-- 
2.19.1
