Peter Chubb <[EMAIL PROTECTED]> wrote:
>
> Timing data on threads at present is pretty crude: when the timer
> interrupt occurs, a tick is added to either system time or user time
> for the currently running thread. Thus in an unpatched kernel one can
> distinguish three timed states: on-CPU in userspace, on-CPU in system
> space, and not running.
>
> The actual number of states is much larger. A thread can be on a
> runqueue or the expired queue (i.e., ready to run but not running),
> sleeping on a semaphore or on a futex, having its time stolen to
> service an interrupt, etc., etc.
>
> This patch adds per-state timers to each struct task_struct, so that
> time in all these states can be tracked. This patch contains the core
> code to do the timing and to initialise the timers. Subsequent patches
> enable the code (by adding Kconfig options) and add hooks to track
> state changes.
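To make the state machine concrete, here is a minimal sketch of what
such per-state accounting could look like. The state names, the msa
struct, and msa_set_state() are illustrative guesses, not code from the
patch; sched_clock() is assumed as the timestamp source, and kernel
headers (linux/sched.h, linux/types.h) are assumed to be in scope:

/*
 * Hedged sketch, not the actual patch: one plausible way to
 * structure per-state timing for a task.
 */
enum msa_thread_state {
	MSA_ONCPU_USER,		/* running in userspace */
	MSA_ONCPU_SYS,		/* running in the kernel */
	MSA_ONRUNQUEUE,		/* runnable, waiting for a CPU */
	MSA_ONEXPIREDQUEUE,	/* runnable, on the expired array */
	MSA_SEM_SLEEP,		/* blocked on a semaphore */
	MSA_FUTEX_SLEEP,	/* blocked on a futex */
	MSA_INTERRUPTED,	/* time stolen by interrupt service */
	MSA_NR_STATES
};

/* Hypothetical per-task fields, embedded in struct task_struct: */
struct msa {
	enum msa_thread_state state;	/* current state */
	u64 last_change;		/* timestamp of last transition */
	u64 timers[MSA_NR_STATES];	/* accumulated time per state */
};

/*
 * Called at every state transition, e.g. from the context-switch
 * and wakeup paths that the later hook patches would instrument.
 */
static void msa_set_state(struct msa *msa, enum msa_thread_state new_state)
{
	u64 now = sched_clock();	/* cheap per-CPU nanosecond clock */

	/* charge the interval since the last transition to the old state */
	msa->timers[msa->state] += now - msa->last_change;
	msa->last_change = now;
	msa->state = new_state;
}

On this sketch the per-transition cost is one sched_clock() read plus a
few arithmetic operations, which is presumably where any overhead
question would focus.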
Why does the kernel need this feature? Have you any numbers on the overhead? The preempt_disable() in sys_msa() seems odd.
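For reference, a hedged sketch of what sys_msa() might look like if it
copies the calling task's timers out to userspace, building on the
illustrative msa struct above. The signature, the current->msa field,
and the placement of preempt_disable() are guesses about the patch, not
quotes from it:

/* Illustrative only; assumes linux/preempt.h, linux/string.h and
 * the era's uaccess header for copy_to_user(). */
asmlinkage long sys_msa(u64 __user *buf)
{
	u64 snap[MSA_NR_STATES];

	/*
	 * One plausible reason for the preempt_disable(): keep the
	 * task on this CPU so a context switch cannot update the
	 * timers halfway through the snapshot.
	 */
	preempt_disable();
	memcpy(snap, current->msa.timers, sizeof(snap));
	preempt_enable();

	/* copy_to_user() may fault and sleep, so it must run preemptible */
	if (copy_to_user(buf, snap, sizeof(snap)))
		return -EFAULT;
	return 0;
}

Whether a snapshot of current's own counters actually needs that
protection is presumably what looks odd about it.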