On 7/10/07, Bryan Cantrill <bmc at eng.sun.com> wrote:
>
> On Tue, Jul 10, 2007 at 11:49:48AM -0600, Bruce Shaw wrote:
> > >The kstat facility has long been in need of a serious overhaul; this
> > >presents a good opportunity to do this work.
> >
> > I want in on this.  I use kstat to derive CPU information for net-snmp.
> > If we're opening the API, I've got some suggestions.
>
> Excellent.  So do I.  ;)  To me, this would be an incredibly valuable
> project, and it would allow us to get the benefits of a "CPUfs" besides.
> And I think this might be one of those instances where we want to at least
> perform the thought experiment of breaking the API (especially the
> in-kernel API, which is unspeakably broken in that it exports far too much
> implementation).  Not that we necessarily need to break the API in order
> to move kstat forward, just that this is a situation where we want to
> at least understand what we might be able to do if we were to start with
> a clean(er) sheet of paper...
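(As a point of reference for the API discussion: a userland kstat consumer
today looks roughly like the minimal sketch below.  It reads per-CPU idle
time from the cpu:N:sys named kstats; the statistic name is an assumption
from recent builds, and error handling is trimmed.  Note that the consumer
walks the raw kc_chain and matches module/name strings by hand, which is
part of what makes the current API feel so exposed.)

    #include <kstat.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
            kstat_ctl_t *kc;
            kstat_t *ksp;

            if ((kc = kstat_open()) == NULL) {
                    perror("kstat_open");
                    return (1);
            }

            /* Walk the whole kstat chain, picking out cpu:N:sys. */
            for (ksp = kc->kc_chain; ksp != NULL; ksp = ksp->ks_next) {
                    kstat_named_t *idle;

                    if (strcmp(ksp->ks_module, "cpu") != 0 ||
                        strcmp(ksp->ks_name, "sys") != 0)
                            continue;
                    if (kstat_read(kc, ksp, NULL) == -1)
                            continue;

                    /* cpu_nsec_idle: assumed to be a uint64 named stat. */
                    idle = kstat_data_lookup(ksp, "cpu_nsec_idle");
                    if (idle != NULL)
                            (void) printf("cpu %d idle nsec: %llu\n",
                                ksp->ks_instance,
                                (unsigned long long)idle->value.ui64);
            }

            (void) kstat_close(kc);
            return (0);
    }

(Build with cc file.c -lkstat.)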
I have been thinking about this, and one of the things I don't understand
is how the scheduler selects which CMT virtual CPU, core, or socket to put
a process or thread on.  Could someone give me a quick explanation of how
that works and how it relates to kstat?  (Or point me at a reference
book/link where I can find the info.)

Looking down the road, I can also see a further silicon layer: "core
clusters" that share a data bus, with the buses tied together by an on-chip
crossbar.  (This may or may not come to fruition, but as core counts scale,
we will probably see (cc-)NUMA designs on a single piece of silicon.)

I ask because different workloads will have different requirements: some
may perform best with core/cache affinity, while others would be better
served by "striping" processes/threads across cores/sockets.  How do we
account for this?

--brian

P.S. - What I'm after is a stable interface to get this system information
from the command line (i.e., scriptable); the sketch below shows the kind
of information I mean.  I understand that this may require the API and the
knobs to be hammered out first.
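Today that topology can be dug out of the cpu_info kstats by hand, along
the lines of the rough sketch below.  It assumes the chip_id and core_id
named statistics that recent builds export (and assumes they are
KSTAT_DATA_LONG, so adjust the union member if your bits differ), and it
skips error handling.  A stable, documented equivalent of this that a
script can rely on is what I am hoping falls out of the overhaul.

    #include <kstat.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
            kstat_ctl_t *kc;
            kstat_t *ksp;

            if ((kc = kstat_open()) == NULL) {
                    perror("kstat_open");
                    return (1);
            }

            /* Walk the chain and report each CPU's chip/core placement. */
            for (ksp = kc->kc_chain; ksp != NULL; ksp = ksp->ks_next) {
                    kstat_named_t *chip, *core;

                    if (strcmp(ksp->ks_module, "cpu_info") != 0)
                            continue;
                    if (kstat_read(kc, ksp, NULL) == -1)
                            continue;

                    /* chip_id/core_id: assumed long-typed named stats. */
                    chip = kstat_data_lookup(ksp, "chip_id");
                    core = kstat_data_lookup(ksp, "core_id");
                    if (chip != NULL && core != NULL)
                            (void) printf("cpu %d: chip %ld, core %ld\n",
                                ksp->ks_instance,
                                chip->value.l, core->value.l);
            }

            (void) kstat_close(kc);
            return (0);
    }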
