Thomas De Schampheleire wrote:
Now, while this can take down the processor when idle, it will quite
possibly be woken up again by the scheduler very shortly. So that's
where the story of the power-aware scheduler comes in. We could for
example envision a strategy that works opposite to load balancing.

At least in the sense that we normally try to load balance across physical
processors, pipelines, etc.

Instead of trying to balance, try to assign as many tasks as possible to
the currently available (active) processors, and only when a certain
condition is reached (I'm not sure yet which parameters matter, but I
think CPU load and run-queue length are of importance, right?) would a
sleeping processor be woken up and assigned a task.

Perhaps as an example: when system utilization increases, bring additional
power-manageable CPU resources into a higher power/performance state.
The dispatcher could also try to avoid running threads on CPUs in a lower
power/performance state.


For us_drv.c, if I understand correctly, it currently only supports
CPU frequency scaling, and no power states. Is that correct?

I believe that's correct. Sarito can probably comment further.

It says that it is not DDI-compatible. What are the implications of this?

This would most likely be true if the driver depended on interfaces or
structures that fall outside the stable DDI. The implication is that those
interfaces or structures could change in the core kernel, which could break
the driver. So even though such a driver is implemented as a loadable kernel
module, it cannot be decoupled from the core kernel.

There do seem to be us_attach() and us_detach() methods. What exactly
does the us stand for?

"UltraSPARC" I would imagine? Sarito?

Can these functions be used as an alternative to cpu_add_unit() and
_del_unit() for example?

us_attach()/us_detach() are the driver entry points invoked when each
instance of the device (in this case, a CPU) attaches or detaches. I'm going
to defer to Sarito to comment on when this happens in relation to
cpu_add_unit() etc... :)

Thanks,
-Eric
_______________________________________________
opensolaris-code mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/opensolaris-code