Re: [PATCH RESEND v9 00/10] sched: consolidation of CPU capacity and usage
On 24 February 2015 at 12:29, Morten Rasmussen wrote:
> On Tue, Feb 24, 2015 at 10:38:29AM +, Vincent Guittot wrote:
>> On 23 February 2015 at 16:45, Morten Rasmussen wrote:
>> > On Fri, Feb 20, 2015 at 02:54:09PM +, Vincent Guittot wrote:
>> >> On 20 February 2015 at 15:35, Morten Rasmussen wrote:
>> >> > On Fri, Feb 20, 2015 at 02:13:21PM +, Vincent Guittot wrote:
>> >> >> On 20 February 2015 at 12:52, Morten Rasmussen wrote:
>> >> >> > On Fri, Feb 20, 2015 at 11:34:47AM +, Peter Zijlstra wrote:
>> >> >> >> On Thu, Feb 19, 2015 at 12:49:40PM +, Morten Rasmussen wrote:
>> >> >> >>
>> >> >> >> > Also, it is still not clear why patch 10 uses relative capacity
>> >> >> >> > reduction instead of absolute capacity available to CFS tasks.
>> >> >> >>
>> >> >> >> As present in your asymmetric big and small systems? Yes, it would
>> >> >> >> be unfortunate to migrate a task to an idle small core when the big
>> >> >> >> core is still faster, even if reduced by rt/irq work.
>> >> >> >
>> >> >> > Yes, exactly. I don't think it would cause any harm for symmetric
>> >> >> > cases to use absolute capacity instead. Am I missing something?
>> >> >>
>> >> >> If absolute capacity is used, we will trigger an active load balance
>> >> >> from little to big core each time a little core has got one task and
>> >> >> a big core is idle, whereas we only want to trigger an active
>> >> >> migration if the src_cpu's capacity that is available for the cfs
>> >> >> tasks is significantly reduced by rt tasks.
>> >> >>
>> >> >> I can mix absolute and relative tests by first testing that the
>> >> >> capacity of the src is reduced and then ensuring that the dst_cpu
>> >> >> has more absolute capacity than the src_cpu.
>> >> >
>> >> > If we use absolute capacity and check if the source cpu is fully
>> >> > utilized, wouldn't that work? We want to migrate the task if it is
>> >>
>> >> we want to trigger the migration before the cpu is fully utilized by
>> >> rt/irq (which almost never occurs)
>> >
>> > I meant fully utilized by rt/irq and cfs tasks, sorry. Essentially,
>> > get_cpu_usage() ~= capacity_of(). If get_cpu_usage() is significantly
>> > smaller than capacity_of(), which may be reduced by rt/irq utilization,
>> > there are still spare cycles and it is not strictly required to migrate
>> > tasks away using active LB. But tasks would be moved away if they are
>> > being allowed less cpu time due to rt/irq (get_cpu_usage() >=
>> > capacity_of()). Wouldn't that work? Or do you want to migrate tasks
>> > regardless of whether there are still spare cycles available on the
>> > cpu doing rt/irq work?
>>
>> In fact, we can see perf improvement even if the cpu is not fully used
>> by thread and interrupts, because the task becomes significantly
>> preempted by interrupts.
>
> Unless the tasks are the consumers of those interrupts, then it would
> harm performance to migrate them away :) I get your point though. Could
> we have a short comment stating the intentions so we don't forget in a
> couple of months?

I will add more details in the commit log.

>> > The advantage of comparing get_cpu_usage() with capacity_of() is that
>> > it would work for migrating cpu-intensive tasks away from a little cpu
>> > on big.LITTLE as well. Then we don't need another almost identical
>> > check for that purpose :)
>>
>> I understand your point, but the patch becomes inefficient for part of
>> the issue that it was originally trying to solve if we compare
>> get_cpu_usage with capacity_of. So we will probably need to add a few
>> more tests for the issue you point out above.
>
> Right. If your goal is to avoid preemptions and not just make sure that
> cpus aren't fully utilized, then my proposal isn't sufficient. We will
> have to add another condition to solve the big.LITTLE capacity thing
> later. In fact, we already have that somewhere deep down in the pile of
> patches I posted some weeks ago.
>
>> >> > currently being restricted by the available capacity (due to rt/irq
>> >> > work, being a little cpu, or both) and if there is a destination cpu
>> >> > with more absolute capacity available. No?
>> >>
>> >> yes, so the relative capacity (cpu_capacity vs cpu_capacity_orig)
>> >> enables us to know if the cpu is significantly used by irq/rt, so it's
>> >> worth doing an active load balance of the task. Then the absolute
>> >> comparison of the cpu_capacity of src_cpu vs the cpu_capacity of
>> >> dst_cpu checks that the dst_cpu is a better choice.
>> >>
>> >> something like:
>> >> if (check_cpu_capacity(src_rq, sd) &&
>> >>     (capacity_of(src_cpu) * sd->imbalance_pct < capacity_of(dst_cpu) * 100))
>> >>         return 1;
>> >
>> > It should solve the big.LITTLE issue. Though I would prefer the
>> > get_cpu_usage() ~= capacity_of() approach, as it could even improve
>> > performance on big.LITTLE.
>>
>> ok. IMHO, it's worth having a dedicated patch for this issue.
>
> Fine by me as long as we get the extra check you proposed above to fix
> the big.LITTLE issue.

ok
Re: [PATCH RESEND v9 00/10] sched: consolidation of CPU capacity and usage
On Tue, Feb 24, 2015 at 10:38:29AM +, Vincent Guittot wrote:
> On 23 February 2015 at 16:45, Morten Rasmussen wrote:
> > On Fri, Feb 20, 2015 at 02:54:09PM +, Vincent Guittot wrote:
> >> On 20 February 2015 at 15:35, Morten Rasmussen wrote:
> >> > On Fri, Feb 20, 2015 at 02:13:21PM +, Vincent Guittot wrote:
> >> >> On 20 February 2015 at 12:52, Morten Rasmussen wrote:
> >> >> > On Fri, Feb 20, 2015 at 11:34:47AM +, Peter Zijlstra wrote:
> >> >> >> On Thu, Feb 19, 2015 at 12:49:40PM +, Morten Rasmussen wrote:
> >> >> >>
> >> >> >> > Also, it is still not clear why patch 10 uses relative capacity
> >> >> >> > reduction instead of absolute capacity available to CFS tasks.
> >> >> >>
> >> >> >> As present in your asymmetric big and small systems? Yes, it would
> >> >> >> be unfortunate to migrate a task to an idle small core when the big
> >> >> >> core is still faster, even if reduced by rt/irq work.
> >> >> >
> >> >> > Yes, exactly. I don't think it would cause any harm for symmetric
> >> >> > cases to use absolute capacity instead. Am I missing something?
> >> >>
> >> >> If absolute capacity is used, we will trigger an active load balance
> >> >> from little to big core each time a little core has got one task and
> >> >> a big core is idle, whereas we only want to trigger an active
> >> >> migration if the src_cpu's capacity that is available for the cfs
> >> >> tasks is significantly reduced by rt tasks.
> >> >>
> >> >> I can mix absolute and relative tests by first testing that the
> >> >> capacity of the src is reduced and then ensuring that the dst_cpu
> >> >> has more absolute capacity than the src_cpu.
> >> >
> >> > If we use absolute capacity and check if the source cpu is fully
> >> > utilized, wouldn't that work? We want to migrate the task if it is
> >>
> >> we want to trigger the migration before the cpu is fully utilized by
> >> rt/irq (which almost never occurs)
> >
> > I meant fully utilized by rt/irq and cfs tasks, sorry. Essentially,
> > get_cpu_usage() ~= capacity_of(). If get_cpu_usage() is significantly
> > smaller than capacity_of(), which may be reduced by rt/irq utilization,
> > there are still spare cycles and it is not strictly required to migrate
> > tasks away using active LB. But tasks would be moved away if they are
> > being allowed less cpu time due to rt/irq (get_cpu_usage() >=
> > capacity_of()). Wouldn't that work? Or do you want to migrate tasks
> > regardless of whether there are still spare cycles available on the
> > cpu doing rt/irq work?
>
> In fact, we can see perf improvement even if the cpu is not fully used
> by thread and interrupts, because the task becomes significantly
> preempted by interrupts.

Unless the tasks are the consumers of those interrupts, then it would
harm performance to migrate them away :) I get your point though. Could
we have a short comment stating the intentions so we don't forget in a
couple of months?

> > The advantage of comparing get_cpu_usage() with capacity_of() is that
> > it would work for migrating cpu-intensive tasks away from a little cpu
> > on big.LITTLE as well. Then we don't need another almost identical
> > check for that purpose :)
>
> I understand your point, but the patch becomes inefficient for part of
> the issue that it was originally trying to solve if we compare
> get_cpu_usage with capacity_of. So we will probably need to add a few
> more tests for the issue you point out above.

Right. If your goal is to avoid preemptions and not just make sure that
cpus aren't fully utilized, then my proposal isn't sufficient. We will
have to add another condition to solve the big.LITTLE capacity thing
later. In fact, we already have that somewhere deep down in the pile of
patches I posted some weeks ago.

> >> > currently being restricted by the available capacity (due to rt/irq
> >> > work, being a little cpu, or both) and if there is a destination cpu
> >> > with more absolute capacity available. No?
> >>
> >> yes, so the relative capacity (cpu_capacity vs cpu_capacity_orig)
> >> enables us to know if the cpu is significantly used by irq/rt, so it's
> >> worth doing an active load balance of the task. Then the absolute
> >> comparison of the cpu_capacity of src_cpu vs the cpu_capacity of
> >> dst_cpu checks that the dst_cpu is a better choice.
> >>
> >> something like:
> >> if (check_cpu_capacity(src_rq, sd) &&
> >>     (capacity_of(src_cpu) * sd->imbalance_pct < capacity_of(dst_cpu) * 100))
> >>         return 1;
> >
> > It should solve the big.LITTLE issue. Though I would prefer the
> > get_cpu_usage() ~= capacity_of() approach, as it could even improve
> > performance on big.LITTLE.
>
> ok. IMHO, it's worth having a dedicated patch for this issue.

Fine by me as long as we get the extra check you proposed above to fix
the big.LITTLE issue.

Morten
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH RESEND v9 00/10] sched: consolidation of CPU capacity and usage
On 23 February 2015 at 16:45, Morten Rasmussen wrote:
> On Fri, Feb 20, 2015 at 02:54:09PM +, Vincent Guittot wrote:
>> On 20 February 2015 at 15:35, Morten Rasmussen wrote:
>> > On Fri, Feb 20, 2015 at 02:13:21PM +, Vincent Guittot wrote:
>> >> On 20 February 2015 at 12:52, Morten Rasmussen wrote:
>> >> > On Fri, Feb 20, 2015 at 11:34:47AM +, Peter Zijlstra wrote:
>> >> >> On Thu, Feb 19, 2015 at 12:49:40PM +, Morten Rasmussen wrote:
>> >> >>
>> >> >> > Also, it is still not clear why patch 10 uses relative capacity
>> >> >> > reduction instead of absolute capacity available to CFS tasks.
>> >> >>
>> >> >> As present in your asymmetric big and small systems? Yes, it would
>> >> >> be unfortunate to migrate a task to an idle small core when the big
>> >> >> core is still faster, even if reduced by rt/irq work.
>> >> >
>> >> > Yes, exactly. I don't think it would cause any harm for symmetric
>> >> > cases to use absolute capacity instead. Am I missing something?
>> >>
>> >> If absolute capacity is used, we will trigger an active load balance
>> >> from little to big core each time a little core has got one task and
>> >> a big core is idle, whereas we only want to trigger an active
>> >> migration if the src_cpu's capacity that is available for the cfs
>> >> tasks is significantly reduced by rt tasks.
>> >>
>> >> I can mix absolute and relative tests by first testing that the
>> >> capacity of the src is reduced and then ensuring that the dst_cpu
>> >> has more absolute capacity than the src_cpu.
>> >
>> > If we use absolute capacity and check if the source cpu is fully
>> > utilized, wouldn't that work? We want to migrate the task if it is
>>
>> we want to trigger the migration before the cpu is fully utilized by
>> rt/irq (which almost never occurs)
>
> I meant fully utilized by rt/irq and cfs tasks, sorry. Essentially,
> get_cpu_usage() ~= capacity_of(). If get_cpu_usage() is significantly
> smaller than capacity_of(), which may be reduced by rt/irq utilization,
> there are still spare cycles and it is not strictly required to migrate
> tasks away using active LB. But tasks would be moved away if they are
> being allowed less cpu time due to rt/irq (get_cpu_usage() >=
> capacity_of()). Wouldn't that work? Or do you want to migrate tasks
> regardless of whether there are still spare cycles available on the
> cpu doing rt/irq work?

In fact, we can see perf improvement even if the cpu is not fully used
by thread and interrupts, because the task becomes significantly
preempted by interrupts.

> The advantage of comparing get_cpu_usage() with capacity_of() is that
> it would work for migrating cpu-intensive tasks away from a little cpu
> on big.LITTLE as well. Then we don't need another almost identical
> check for that purpose :)

I understand your point, but the patch becomes inefficient for part of
the issue that it was originally trying to solve if we compare
get_cpu_usage with capacity_of. So we will probably need to add a few
more tests for the issue you point out above.

>> > currently being restricted by the available capacity (due to rt/irq
>> > work, being a little cpu, or both) and if there is a destination cpu
>> > with more absolute capacity available. No?
>>
>> yes, so the relative capacity (cpu_capacity vs cpu_capacity_orig)
>> enables us to know if the cpu is significantly used by irq/rt, so it's
>> worth doing an active load balance of the task. Then the absolute
>> comparison of the cpu_capacity of src_cpu vs the cpu_capacity of
>> dst_cpu checks that the dst_cpu is a better choice.
>>
>> something like:
>> if (check_cpu_capacity(src_rq, sd) &&
>>     (capacity_of(src_cpu) * sd->imbalance_pct < capacity_of(dst_cpu) * 100))
>>         return 1;
>
> It should solve the big.LITTLE issue. Though I would prefer the
> get_cpu_usage() ~= capacity_of() approach, as it could even improve
> performance on big.LITTLE.

ok. IMHO, it's worth having a dedicated patch for this issue.

Vincent
Re: [PATCH RESEND v9 00/10] sched: consolidation of CPU capacity and usage
On Fri, Feb 20, 2015 at 02:54:09PM +, Vincent Guittot wrote:
> On 20 February 2015 at 15:35, Morten Rasmussen wrote:
> > On Fri, Feb 20, 2015 at 02:13:21PM +, Vincent Guittot wrote:
> >> On 20 February 2015 at 12:52, Morten Rasmussen wrote:
> >> > On Fri, Feb 20, 2015 at 11:34:47AM +, Peter Zijlstra wrote:
> >> >> On Thu, Feb 19, 2015 at 12:49:40PM +, Morten Rasmussen wrote:
> >> >>
> >> >> > Also, it is still not clear why patch 10 uses relative capacity
> >> >> > reduction instead of absolute capacity available to CFS tasks.
> >> >>
> >> >> As present in your asymmetric big and small systems? Yes, it would
> >> >> be unfortunate to migrate a task to an idle small core when the big
> >> >> core is still faster, even if reduced by rt/irq work.
> >> >
> >> > Yes, exactly. I don't think it would cause any harm for symmetric
> >> > cases to use absolute capacity instead. Am I missing something?
> >>
> >> If absolute capacity is used, we will trigger an active load balance
> >> from little to big core each time a little core has got one task and
> >> a big core is idle, whereas we only want to trigger an active
> >> migration if the src_cpu's capacity that is available for the cfs
> >> tasks is significantly reduced by rt tasks.
> >>
> >> I can mix absolute and relative tests by first testing that the
> >> capacity of the src is reduced and then ensuring that the dst_cpu
> >> has more absolute capacity than the src_cpu.
> >
> > If we use absolute capacity and check if the source cpu is fully
> > utilized, wouldn't that work? We want to migrate the task if it is
>
> we want to trigger the migration before the cpu is fully utilized by
> rt/irq (which almost never occurs)

I meant fully utilized by rt/irq and cfs tasks, sorry. Essentially,
get_cpu_usage() ~= capacity_of(). If get_cpu_usage() is significantly
smaller than capacity_of(), which may be reduced by rt/irq utilization,
there are still spare cycles and it is not strictly required to migrate
tasks away using active LB. But tasks would be moved away if they are
being allowed less cpu time due to rt/irq (get_cpu_usage() >=
capacity_of()). Wouldn't that work? Or do you want to migrate tasks
regardless of whether there are still spare cycles available on the cpu
doing rt/irq work?

The advantage of comparing get_cpu_usage() with capacity_of() is that
it would work for migrating cpu-intensive tasks away from a little cpu
on big.LITTLE as well. Then we don't need another almost identical
check for that purpose :)

> > currently being restricted by the available capacity (due to rt/irq
> > work, being a little cpu, or both) and if there is a destination cpu
> > with more absolute capacity available. No?
>
> yes, so the relative capacity (cpu_capacity vs cpu_capacity_orig)
> enables us to know if the cpu is significantly used by irq/rt, so it's
> worth doing an active load balance of the task. Then the absolute
> comparison of the cpu_capacity of src_cpu vs the cpu_capacity of
> dst_cpu checks that the dst_cpu is a better choice.
>
> something like:
> if (check_cpu_capacity(src_rq, sd) &&
>     (capacity_of(src_cpu) * sd->imbalance_pct < capacity_of(dst_cpu) * 100))
>         return 1;

It should solve the big.LITTLE issue. Though I would prefer the
get_cpu_usage() ~= capacity_of() approach, as it could even improve
performance on big.LITTLE.
Re: [PATCH RESEND v9 00/10] sched: consolidation of CPU capacity and usage
On 20 February 2015 at 15:35, Morten Rasmussen wrote:
> On Fri, Feb 20, 2015 at 02:13:21PM +, Vincent Guittot wrote:
>> On 20 February 2015 at 12:52, Morten Rasmussen wrote:
>> > On Fri, Feb 20, 2015 at 11:34:47AM +, Peter Zijlstra wrote:
>> >> On Thu, Feb 19, 2015 at 12:49:40PM +, Morten Rasmussen wrote:
>> >>
>> >> > Also, it still not clear why patch 10 uses relative capacity reduction
>> >> > instead of absolute capacity available to CFS tasks.
>> >>
>> >> As present in your asymmetric big and small systems? Yes it would be
>> >> unfortunate to migrate a task to an idle small core when the big core is
>> >> still faster, even if reduced by rt/irq work.
>> >
>> > Yes, exactly. I don't think it would cause any harm for symmetric cases
>> > to use absolute capacity instead. Am I missing something?
>>
>> If absolute capacity is used, we will trig an active load balance from
>> little to big core each time a little has got 1 task and a big core is
>> idle whereas we only want to trig an active migration is the src_cpu's
>> capacity that is available for the cfs task is significantly reduced
>> by rt tasks.
>>
>> I can mix absolute and relative tests by 1st testing that the capacity
>> of the src is reduced and then ensure that the dst_cpu has more
>> absolute capacity than src_cpu
>
> If we use absolute capacity and check if the source cpu is fully
> utilized, wouldn't that work? We want to migrate the task if it is

we want to trigger the migration before the cpu is fully utilized by
rt/irq (which almost never happens)

> currently being restricted by the available capacity (due to rt/irq
> work, being a little cpu, or both) and if there is a destination cpu
> with more absolute capacity available. No?

yes, so the relative capacity (cpu_capacity vs cpu_capacity_orig)
enables us to know whether the cpu is significantly used by irq/rt, so
it's worth doing an active load balance of the task.

Then the absolute comparison of the cpu_capacity of src_cpu vs the
cpu_capacity of dst_cpu checks that dst_cpu is a better choice,
something like:

if ((check_cpu_capacity(src_rq, sd)) &&
    (capacity_of(src_cpu)*sd->imbalance_pct < capacity_of(dst_cpu)*100))
        return 1;
Re: [PATCH RESEND v9 00/10] sched: consolidation of CPU capacity and usage
On Fri, Feb 20, 2015 at 02:13:21PM +, Vincent Guittot wrote:
> On 20 February 2015 at 12:52, Morten Rasmussen wrote:
> > On Fri, Feb 20, 2015 at 11:34:47AM +, Peter Zijlstra wrote:
> >> On Thu, Feb 19, 2015 at 12:49:40PM +, Morten Rasmussen wrote:
> >>
> >> > Also, it still not clear why patch 10 uses relative capacity reduction
> >> > instead of absolute capacity available to CFS tasks.
> >>
> >> As present in your asymmetric big and small systems? Yes it would be
> >> unfortunate to migrate a task to an idle small core when the big core is
> >> still faster, even if reduced by rt/irq work.
> >
> > Yes, exactly. I don't think it would cause any harm for symmetric cases
> > to use absolute capacity instead. Am I missing something?
>
> If absolute capacity is used, we will trig an active load balance from
> little to big core each time a little has got 1 task and a big core is
> idle whereas we only want to trig an active migration is the src_cpu's
> capacity that is available for the cfs task is significantly reduced
> by rt tasks.
>
> I can mix absolute and relative tests by 1st testing that the capacity
> of the src is reduced and then ensure that the dst_cpu has more
> absolute capacity than src_cpu

If we use absolute capacity and check if the source cpu is fully
utilized, wouldn't that work? We want to migrate the task if it is
currently being restricted by the available capacity (due to rt/irq
work, being a little cpu, or both) and if there is a destination cpu
with more absolute capacity available. No?
Re: [PATCH RESEND v9 00/10] sched: consolidation of CPU capacity and usage
On 20 February 2015 at 12:52, Morten Rasmussen wrote:
> On Fri, Feb 20, 2015 at 11:34:47AM +, Peter Zijlstra wrote:
>> On Thu, Feb 19, 2015 at 12:49:40PM +, Morten Rasmussen wrote:
>>
>> > Also, it still not clear why patch 10 uses relative capacity reduction
>> > instead of absolute capacity available to CFS tasks.
>>
>> As present in your asymmetric big and small systems? Yes it would be
>> unfortunate to migrate a task to an idle small core when the big core is
>> still faster, even if reduced by rt/irq work.
>
> Yes, exactly. I don't think it would cause any harm for symmetric cases
> to use absolute capacity instead. Am I missing something?

If absolute capacity is used, we will trigger an active load balance
from little to big core each time a little has got 1 task and a big
core is idle, whereas we only want to trigger an active migration if
the src_cpu's capacity that is available for the cfs task is
significantly reduced by rt tasks.

I can mix absolute and relative tests by first testing that the
capacity of the src is reduced and then ensuring that the dst_cpu has
more absolute capacity than src_cpu.
Re: [PATCH RESEND v9 00/10] sched: consolidation of CPU capacity and usage
On Fri, Feb 20, 2015 at 11:34:47AM +, Peter Zijlstra wrote:
> On Thu, Feb 19, 2015 at 12:49:40PM +, Morten Rasmussen wrote:
>
> > Also, it still not clear why patch 10 uses relative capacity reduction
> > instead of absolute capacity available to CFS tasks.
>
> As present in your asymmetric big and small systems? Yes it would be
> unfortunate to migrate a task to an idle small core when the big core is
> still faster, even if reduced by rt/irq work.

Yes, exactly. I don't think it would cause any harm for symmetric cases
to use absolute capacity instead. Am I missing something?
Re: [PATCH RESEND v9 00/10] sched: consolidation of CPU capacity and usage
On Thu, Feb 19, 2015 at 12:49:40PM +, Morten Rasmussen wrote:

> Also, it still not clear why patch 10 uses relative capacity reduction
> instead of absolute capacity available to CFS tasks.

As present in your asymmetric big and small systems? Yes it would be
unfortunate to migrate a task to an idle small core when the big core is
still faster, even if reduced by rt/irq work.
Re: [PATCH RESEND v9 00/10] sched: consolidation of CPU capacity and usage
On Thu, Jan 15, 2015 at 10:09:20AM +, Vincent Guittot wrote:
> This patchset consolidates several changes in the capacity and the usage
> tracking of the CPU. It provides a frequency invariant metric of the usage
> of CPUs and generally improves the accuracy of load/usage tracking in the
> scheduler. The frequency invariant metric is the foundation required for
> the consolidation of cpufreq and implementation of a fully invariant load
> tracking.
> These are currently WIP and require several changes to the load balancer
> (including how it will use and interprets load and capacity metrics) and
> extensive validation. The frequency invariance is done with
> arch_scale_freq_capacity and this patchset doesn't provide the backends of
> the function which are architecture dependent.
>
> As discussed at LPC14, Morten and I have consolidated our changes into a
> single patchset to make it easier to review and merge.

I'm happy with patches 1, 3, 5, 6, and 7. Add my acked-by if you like :)
The last few need buy-in from somebody running SMT systems, I think.

Also, it is still not clear why patch 10 uses relative capacity
reduction instead of absolute capacity available to CFS tasks.

Morten
[PATCH RESEND v9 00/10] sched: consolidation of CPU capacity and usage
This patchset consolidates several changes in the capacity and the usage
tracking of the CPU. It provides a frequency invariant metric of the usage
of CPUs and generally improves the accuracy of load/usage tracking in the
scheduler. The frequency invariant metric is the foundation required for
the consolidation of cpufreq and implementation of a fully invariant load
tracking. These are currently WIP and require several changes to the load
balancer (including how it will use and interpret load and capacity
metrics) and extensive validation. The frequency invariance is done with
arch_scale_freq_capacity and this patchset doesn't provide the backends of
the function, which are architecture dependent.

As discussed at LPC14, Morten and I have consolidated our changes into a
single patchset to make it easier to review and merge.

During load balance, the scheduler evaluates the number of tasks that a
group of CPUs can handle. The current method assumes that tasks have a
fixed load of SCHED_LOAD_SCALE and CPUs have a default capacity of
SCHED_CAPACITY_SCALE. This assumption generates wrong decisions by creating
ghost cores or by removing real ones when the original capacity of CPUs is
different from the default SCHED_CAPACITY_SCALE. With this patch set, we no
longer try to evaluate the number of available cores based on the
group_capacity; instead, we evaluate the usage of a group and compare it
with its capacity. This patchset mainly replaces the old capacity_factor
method by a new one and keeps the general policy almost unchanged. These
new metrics will also be used in later patches.

The CPU usage is based on a running time tracking version of the current
implementation of the load average tracking. I also have a version that is
based on the new implementation proposal [1], but I haven't provided the
patches and results as [1] is still under review. I can provide changes on
top of [1] to change how CPU usage is computed and to adapt to the new
mechanism.
Change since V8:
- reorder patches

Change since V7:
- add freq invariance for usage tracking
- add freq invariance for scale_rt
- update comments and commits' message
- fix init of utilization_avg_contrib
- fix prefer_sibling

Change since V6:
- add group usage tracking
- fix some commits' messages
- minor fix like comments and argument order

Change since V5:
- remove patches that have been merged since v5 : patches 01, 02, 03, 04, 05, 07
- update commit log and add more details on the purpose of the patches
- fix/remove useless code with the rebase on patchset [2]
- remove capacity_orig in sched_group_capacity as it is not used
- move code in the right patch
- add some helper function to factorize code

Change since V4:
- rebase to manage conflicts with changes in selection of busiest group

Change since V3:
- add usage_avg_contrib statistic which sums the running time of tasks on a rq
- use usage_avg_contrib instead of runnable_avg_sum for cpu_utilization
- fix replacement power by capacity
- update some comments

Change since V2:
- rebase on top of capacity renaming
- fix wake_affine statistic update
- rework nohz_kick_needed
- optimize the active migration of a task from CPU with reduced capacity
- rename group_activity by group_utilization and remove unused total_utilization
- repair SD_PREFER_SIBLING and use it for SMT level
- reorder patchset to gather patches with same topics

Change since V1:
- add 3 fixes
- correct some commit messages
- replace capacity computation by activity
- take into account current cpu capacity

[1] https://lkml.org/lkml/2014/10/10/131
[2] https://lkml.org/lkml/2014/7/25/589

Morten Rasmussen (2):
  sched: Track group sched_entity usage contributions
  sched: Make sched entity usage tracking scale-invariant

Vincent Guittot (8):
  sched: add utilization_avg_contrib
  sched: remove frequency scaling from cpu_capacity
  sched: make scale_rt invariant with frequency
  sched: add per rq cpu_capacity_orig
  sched: get CPU's usage statistic
  sched: replace capacity_factor by usage
  sched: add SD_PREFER_SIBLING for SMT level
  sched: move cfs task on a CPU with higher capacity

 include/linux/sched.h |  21 ++-
 kernel/sched/core.c   |  15 +-
 kernel/sched/debug.c  |  12 +-
 kernel/sched/fair.c   | 369 --
 kernel/sched/sched.h  |  15 +-
 5 files changed, 276 insertions(+), 156 deletions(-)

--
1.9.1