Re: [PATCH] sched: dynamically update the root-domain span/online maps
* Andrew Morton <[EMAIL PROTECTED]> wrote:

> > i'm reluctant to apply it without test results, unless we have a
> > very clear picture of what happened on Andrew's box and how this
> > updated patch resolves that problem. (or once Andrew tests your
> > patch and deems it OK.)
>
> hm, due to any of these patches?
>
> Seems OK now - resume-from-RAM actually resumes.

cool, thanks! I've added it to sched-devel.git and have updated it.

	Ingo
Re: [PATCH] sched: dynamically update the root-domain span/online maps
Ingo Molnar wrote:
> well since i reverted the original patch, there's no regression. The
> question is, do we know whether this new patch works fine wrt. s2ram?

Hi Ingo,

I included the same patches into 2.6.23.9-rt13, and someone reported that
suspend-to-ram failed for them. I've included Greg's updates into a
pre-release of -rt14 and sent that to the reporter. I'm waiting on a
response before releasing -rt14, although I did just get a response from
Andrew Morton saying that the updated patch fixed his box.

-- Steve
Re: [PATCH] sched: dynamically update the root-domain span/online maps
On Tue, 18 Dec 2007 11:48:00 +0100 Ingo Molnar <[EMAIL PROTECTED]> wrote:

> * Gregory Haskins <[EMAIL PROTECTED]> wrote:
>
> > http://marc.info/?l=linux-mm-commits&m=119793598429477&w=2
> >
> > I have confirmed that it builds and boots clean, and it passes
> > checkpatch. However, my test machine seems to be having problems with
> > suspend-to-ram that are unrelated to this patch that prevent me from
> > verifying the fix entirely. If Gautham or Andrew could confirm that it
> > resolves their suspend-to-ram issue, I would be most appreciative.
>
> i'm reluctant to apply it without test results, unless we have a very
> clear picture of what happened on Andrew's box and how this updated
> patch resolves that problem. (or once Andrew tests your patch and deems
> it OK.)

Seems OK now - resume-from-RAM actually resumes.
Re: [PATCH] sched: dynamically update the root-domain span/online maps
* Gregory Haskins <[EMAIL PROTECTED]> wrote:

> Hi Steven,
>
> I posted a suspend-to-ram fix to sched-devel earlier today:
>
> http://lkml.org/lkml/2007/12/17/445
>
> This fix should also be applied to -rt as I introduced the same
> regression there. Here is a version of the fix for 23-rt13. I can
> submit a version for 24-rc5-rt1 at your request.

well since i reverted the original patch, there's no regression. The
question is, do we know whether this new patch works fine wrt. s2ram?

	Ingo
Re: [PATCH] sched: dynamically update the root-domain span/online maps
* Gregory Haskins <[EMAIL PROTECTED]> wrote:

> http://marc.info/?l=linux-mm-commits&m=119793598429477&w=2
>
> I have confirmed that it builds and boots clean, and it passes
> checkpatch. However, my test machine seems to be having problems with
> suspend-to-ram that are unrelated to this patch that prevent me from
> verifying the fix entirely. If Gautham or Andrew could confirm that it
> resolves their suspend-to-ram issue, I would be most appreciative.

i'm reluctant to apply it without test results, unless we have a very
clear picture of what happened on Andrew's box and how this updated
patch resolves that problem. (or once Andrew tests your patch and deems
it OK.)

	Ingo
Re: [PATCH] sched: dynamically update the root-domain span/online maps
Gregory Haskins wrote:
> Hi Steven,
>
> I posted a suspend-to-ram fix to sched-devel earlier today:
>
> http://lkml.org/lkml/2007/12/17/445
>
> This fix should also be applied to -rt as I introduced the same
> regression there. Here is a version of the fix for 23-rt13. I can
> submit a version for 24-rc5-rt1 at your request.

Thanks Gregory,

I'll put this into the 2.6.23.11-rt14 queue. I have someone who tells me
that suspend-to-RAM breaks. I'll have him try the new update and let me
know whether it fixes his issue before releasing it. I'll also let
everyone know the results.

-- Steve
[PATCH] sched: dynamically update the root-domain span/online maps
Hi Steven,

I posted a suspend-to-ram fix to sched-devel earlier today:

http://lkml.org/lkml/2007/12/17/445

This fix should also be applied to -rt as I introduced the same
regression there. Here is a version of the fix for 23-rt13. I can
submit a version for 24-rc5-rt1 at your request.

Regards,
-Greg

-

The baseline code statically builds the span maps when the domain is
formed. Previous attempts at dynamically updating the maps caused a
suspend-to-ram regression, which should now be fixed.

Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Gautham R Shenoy <[EMAIL PROTECTED]>
---

 kernel/sched.c | 28
 1 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 244c4b5..95b8c99 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -281,8 +281,6 @@ struct rt_rq {
  * exclusive cpuset is created, we also create and attach a new root-domain
  * object.
  *
- * By default the system creates a single root-domain with all cpus as
- * members (mimicking the global state we have today).
  */
 struct root_domain {
 	atomic_t refcount;
@@ -300,6 +298,10 @@ struct root_domain {
 #endif
 };
 
+/*
+ * By default the system creates a single root-domain with all cpus as
+ * members (mimicking the global state we have today).
+ */
 static struct root_domain def_root_domain;
 
 #endif
@@ -6066,6 +6068,10 @@ static void rq_attach_root(struct rq *rq, struct root_domain *rd)
 	atomic_inc(&rd->refcount);
 	rq->rd = rd;
 
+	cpu_set(rq->cpu, rd->span);
+	if (cpu_isset(rq->cpu, cpu_online_map))
+		cpu_set(rq->cpu, rd->online);
+
 	for (class = sched_class_highest; class; class = class->next) {
 		if (class->join_domain)
 			class->join_domain(rq);
@@ -6074,12 +6080,12 @@ static void rq_attach_root(struct rq *rq, struct root_domain *rd)
 	spin_unlock_irqrestore(&rq->lock, flags);
 }
 
-static void init_rootdomain(struct root_domain *rd, const cpumask_t *map)
+static void init_rootdomain(struct root_domain *rd)
 {
 	memset(rd, 0, sizeof(*rd));
 
-	rd->span = *map;
-	cpus_and(rd->online, rd->span, cpu_online_map);
+	cpus_clear(rd->span);
+	cpus_clear(rd->online);
 
 	cpupri_init(&rd->cpupri);
 
@@ -6087,13 +6093,11 @@ static void init_rootdomain(struct root_domain *rd, const cpumask_t *map)
 
 static void init_defrootdomain(void)
 {
-	cpumask_t cpus = CPU_MASK_ALL;
-
-	init_rootdomain(&def_root_domain, &cpus);
+	init_rootdomain(&def_root_domain);
 
 	atomic_set(&def_root_domain.refcount, 1);
 }
 
-static struct root_domain *alloc_rootdomain(const cpumask_t *map)
+static struct root_domain *alloc_rootdomain(void)
 {
 	struct root_domain *rd;
 
@@ -6101,7 +6105,7 @@ static struct root_domain *alloc_rootdomain(const cpumask_t *map)
 	if (!rd)
 		return NULL;
 
-	init_rootdomain(rd, map);
+	init_rootdomain(rd);
 
 	return rd;
 }
@@ -6523,7 +6527,7 @@ static int build_sched_domains(const cpumask_t *cpu_map)
 	sched_group_nodes_bycpu[first_cpu(*cpu_map)] = sched_group_nodes;
 #endif
 
-	rd = alloc_rootdomain(cpu_map);
+	rd = alloc_rootdomain();
 	if (!rd) {
 		printk(KERN_WARNING "Cannot alloc root domain\n");
 		return -ENOMEM;
@@ -7021,7 +7025,6 @@ void __init sched_init(void)
 #ifdef CONFIG_SMP
 		rq->sd = NULL;
 		rq->rd = NULL;
-		rq_attach_root(rq, &def_root_domain);
 		rq->active_balance = 0;
 		rq->next_balance = jiffies;
 		rq->push_cpu = 0;
@@ -7030,6 +7033,7 @@ void __init sched_init(void)
 		INIT_LIST_HEAD(&rq->migration_queue);
 		rq->rt.highest_prio = MAX_RT_PRIO;
 		rq->rt.overloaded = 0;
+		rq_attach_root(rq, &def_root_domain);
 #endif
 		atomic_set(&rq->nr_iowait, 0);
[PATCH] sched: dynamically update the root-domain span/online maps
Hi Ingo,

The following patch applies to sched-devel to replace the root-domain
patch that was reverted as noted here:

http://marc.info/?l=linux-mm-commits&m=119793598429477&w=2

I have confirmed that it builds and boots clean, and it passes
checkpatch. However, my test machine seems to be having problems with
suspend-to-ram that are unrelated to this patch that prevent me from
verifying the fix entirely. If Gautham or Andrew could confirm that it
resolves their suspend-to-ram issue, I would be most appreciative.

Regards,
-Greg

--

sched: dynamically update the root-domain span/online maps

The baseline code statically builds the span maps when the domain is
formed. Previous attempts at dynamically updating the maps caused a
suspend-to-ram regression, which should now be fixed.

Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Gautham R Shenoy <[EMAIL PROTECTED]>
---

 kernel/sched.c | 31 +++
 1 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index dc6fb24..48cfec1 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -359,8 +359,6 @@ struct rt_rq {
  * exclusive cpuset is created, we also create and attach a new root-domain
  * object.
  *
- * By default the system creates a single root-domain with all cpus as
- * members (mimicking the global state we have today).
  */
 struct root_domain {
 	atomic_t refcount;
@@ -375,6 +373,10 @@ struct root_domain {
 	atomic_t rto_count;
 };
 
+/*
+ * By default the system creates a single root-domain with all cpus as
+ * members (mimicking the global state we have today).
+ */
 static struct root_domain def_root_domain;
 
 #endif
@@ -5854,6 +5856,9 @@ static void rq_attach_root(struct rq *rq, struct root_domain *rd)
 			class->leave_domain(rq);
 		}
 
+		cpu_clear(rq->cpu, old_rd->span);
+		cpu_clear(rq->cpu, old_rd->online);
+
 		if (atomic_dec_and_test(&old_rd->refcount))
 			kfree(old_rd);
 	}
@@ -5861,6 +5866,10 @@ static void rq_attach_root(struct rq *rq, struct root_domain *rd)
 	atomic_inc(&rd->refcount);
 	rq->rd = rd;
 
+	cpu_set(rq->cpu, rd->span);
+	if (cpu_isset(rq->cpu, cpu_online_map))
+		cpu_set(rq->cpu, rd->online);
+
 	for (class = sched_class_highest; class; class = class->next) {
 		if (class->join_domain)
 			class->join_domain(rq);
@@ -5869,23 +5878,21 @@ static void rq_attach_root(struct rq *rq, struct root_domain *rd)
 	spin_unlock_irqrestore(&rq->lock, flags);
 }
 
-static void init_rootdomain(struct root_domain *rd, const cpumask_t *map)
+static void init_rootdomain(struct root_domain *rd)
 {
 	memset(rd, 0, sizeof(*rd));
 
-	rd->span = *map;
-	cpus_and(rd->online, rd->span, cpu_online_map);
+	cpus_clear(rd->span);
+	cpus_clear(rd->online);
 }
 
 static void init_defrootdomain(void)
 {
-	cpumask_t cpus = CPU_MASK_ALL;
-
-	init_rootdomain(&def_root_domain, &cpus);
+	init_rootdomain(&def_root_domain);
 
 	atomic_set(&def_root_domain.refcount, 1);
 }
 
-static struct root_domain *alloc_rootdomain(const cpumask_t *map)
+static struct root_domain *alloc_rootdomain(void)
 {
 	struct root_domain *rd;
 
@@ -5893,7 +5900,7 @@ static struct root_domain *alloc_rootdomain(const cpumask_t *map)
 	if (!rd)
 		return NULL;
 
-	init_rootdomain(rd, map);
+	init_rootdomain(rd);
 
 	return rd;
 }
@@ -6314,7 +6321,7 @@ static int build_sched_domains(const cpumask_t *cpu_map)
 	sched_group_nodes_bycpu[first_cpu(*cpu_map)] = sched_group_nodes;
 #endif
 
-	rd = alloc_rootdomain(cpu_map);
+	rd = alloc_rootdomain();
 	if (!rd) {
 		printk(KERN_WARNING "Cannot alloc root domain\n");
 		return -ENOMEM;
@@ -6886,7 +6893,6 @@ void __init sched_init(void)
 #ifdef CONFIG_SMP
 		rq->sd = NULL;
 		rq->rd = NULL;
-		rq_attach_root(rq, &def_root_domain);
 		rq->active_balance = 0;
 		rq->next_balance = jiffies;
 		rq->push_cpu = 0;
@@ -6895,6 +6901,7 @@ void __init sched_init(void)
 		INIT_LIST_HEAD(&rq->migration_queue);
 		rq->rt.highest_prio = MAX_RT_PRIO;
 		rq->rt.overloaded = 0;
+		rq_attach_root(rq, &def_root_domain);
 #endif
 		atomic_set(&rq->nr_iowait, 0);
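For anyone following along, here is a minimal stand-alone userspace sketch of the idea (illustration only, not code from the patch). It models the dynamic scheme the patch introduces: each runqueue adds its CPU to the root domain's span/online maps when it attaches and removes it again when it detaches, instead of the maps being copied in wholesale when the domain is created. The names root_domain, rq_attach_root() and cpu_online_map mirror the kernel's, but the uint64_t bitmask "cpumasks" and the main() driver are invented for the example.

/*
 * Stand-alone userspace model of the patch's idea, NOT kernel code.
 * Cpumasks are modelled as plain 64-bit bitmasks.
 */
#include <stdio.h>
#include <stdint.h>

struct root_domain {
	int refcount;
	uint64_t span;		/* CPUs that are members of this domain */
	uint64_t online;	/* subset of span that is currently online */
};

struct rq {
	int cpu;
	struct root_domain *rd;
};

static uint64_t cpu_online_map = 0xf;	/* pretend CPUs 0-3 are online */

/*
 * Membership is maintained incrementally at attach/detach time rather
 * than being fixed when the root domain is created.
 */
static void rq_attach_root(struct rq *rq, struct root_domain *rd)
{
	struct root_domain *old_rd = rq->rd;

	if (old_rd) {
		/* leaving: drop this CPU from the old domain's maps */
		old_rd->span   &= ~(1ull << rq->cpu);
		old_rd->online &= ~(1ull << rq->cpu);
		old_rd->refcount--;
	}

	rq->rd = rd;
	rd->refcount++;

	/* joining: add this CPU, marking it online only if it really is */
	rd->span |= 1ull << rq->cpu;
	if (cpu_online_map & (1ull << rq->cpu))
		rd->online |= 1ull << rq->cpu;
}

int main(void)
{
	struct root_domain def_root_domain = { .refcount = 1 };
	struct rq rqs[4] = { { .cpu = 0 }, { .cpu = 1 },
			     { .cpu = 2 }, { .cpu = 3 } };

	/* boot-time equivalent: every runqueue joins the default domain */
	for (int i = 0; i < 4; i++)
		rq_attach_root(&rqs[i], &def_root_domain);

	printf("span=%#llx online=%#llx refcount=%d\n",
	       (unsigned long long)def_root_domain.span,
	       (unsigned long long)def_root_domain.online,
	       def_root_domain.refcount);
	return 0;
}

Compiled as C99 and run, it prints "span=0xf online=0xf refcount=5", i.e. all four model CPUs end up in both maps once their runqueues have attached to the default root domain.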