* Con Kolivas <[EMAIL PROTECTED]> wrote:

> > as a summary: i think your numbers demonstrate it nicely that the
> > shorter 'timeslice length' that both CFS and SD utilize does not
> > have a measurable negative impact on your workload. To measure the
> > total impact of 'timeslicing' you might want to try the exact same
> > workload with a much higher 'timeslice length' of say 400 msecs,
> > via:
> >
> >   echo 400000000 > /proc/sys/kernel/sched_granularity_ns   # on CFS
> >   echo 400 > /proc/sys/kernel/rr_interval                  # on SD
>
> I thought that the effective "timeslice" on CFS was double the
> sched_granularity_ns, so wouldn't this make the effective timeslice
> double what you're setting SD to? [...]
The two settings are not really comparable. The "effective timeslice is
double the granularity" rule i mentioned before is really a
special-case: it is only true for a fully undisturbed, 100% CPU-using
_two-task_ workload, and only if the workload would not reschedule on
its own anyway. That is clearly not the case here: if you look at the
vmstat output provided by Michael you'll see that all 3 schedulers
rescheduled this workload at around 1000/sec, i.e. roughly 1 msec per
scheduling atom. (But i'd agree that to be on the safe side the
context-switch rate has to be monitored, and if it seems too high on
SD, the rr_interval should be increased.)

> [...] Anyway the difference between 400 and 800ms timeslices is
> unlikely to be significant so I don't mind.

even on a totally idle system there's at least a 10 Hz 'background
noise' of various housekeeping activities, so any setting above 100
msecs rarely has any practical effect.

	Ingo
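P.S.: a minimal sketch of the monitoring step above, in case it helps.
The "cs" column position assumes vmstat's default output format, and
what counts as "too high" depends on the workload being measured --
nothing below is taken from Michael's actual test setup:

  # average the context-switch rate ("cs", the 12th column of vmstat's
  # default output) over 10 one-second samples; NR > 3 skips the two
  # header lines plus the first since-boot summary line:
  vmstat 1 11 | awk 'NR > 3 { sum += $12; n++ } END { if (n) print sum / n }'

  # if the observed rate looks too high on SD, increase its timeslice:
  echo 400 > /proc/sys/kernel/rr_interval	# on SD, in msecs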