* Andi Kleen <[EMAIL PROTECTED]> wrote:

> > hm, i'm not convinced about this one. It increases the code size a
> > bit
>
> A tiny bit (<200 bytes), and the wait_for/sleep_on refactor patch in
> the series saves over 1K, so I should have some room for a code-size
> increase. Overall it will still be considerably smaller.

there's no forced dependency between those two patches :-) So for now
i've applied the one that saves text and skipped the one that bloats
it.

> > and it's a sched.c local hack. If so, then this should be done at a
> > generic infrastructure level - lots of other code (VFS, networking,
> > etc.) could benefit from it i suspect - and then it should be
> > .config-urable as well.
>
> Unfortunately not -- for this to work (especially for inlining) you
> need to #include the files implementing the sub calls. Except for the
> scheduler that is pretty uncommon, unfortunately. Also, which call
> target is the common one is typically much less clear than it is with
> sched_fair vs. the other scheduling classes.
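[The optimization under discussion follows a common devirtualization
pattern; a minimal sketch, assuming sched.c #includes sched_fair.c and
using put_prev_task() as the example hook -- illustrative only, not
the actual patch:]

	/*
	 * Hypothetical sketch: test for the common scheduling class
	 * and make a direct call the compiler can inline; fall back
	 * to the generic indirect call otherwise. This works inside
	 * sched.c only because it #includes sched_fair.c, which makes
	 * put_prev_task_fair() visible for inlining.
	 */
	static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
	{
		if (likely(prev->sched_class == &fair_sched_class))
			put_prev_task_fair(rq, prev);	/* direct, inlinable */
		else
			prev->sched_class->put_prev_task(rq, prev);
	}

[The trade-off is exactly the one debated above: the extra compare and
direct-call path cost some text size, and they only pay off when the
tested class really is the overwhelmingly common target.]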
some workloads would call sched_fair uncommon too.

To me this seems like a workaround for the lack of a particular
hardware feature.

> > Then the benefit might become measurable too.
>
> It might have been measurable if the context switch itself were
> measurable at all. Unfortunately the lmbench3 lat_ctx test I tried
> fluctuated by over 50% on its own. I suppose it would be possible to
> instrument the kernel itself to measure cycles. Would that convince
> you?

dunno, it would depend on the numbers. But really, in most workloads
we do a lot more VFS indirect calls than scheduler indirect calls. So
if this were an issue i'd really suggest attacking it in a generic
way.

	Ingo
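[For reference, the kind of in-kernel cycle instrumentation Andi
alludes to could look roughly like this -- a hypothetical sketch
against the 2.6-era scheduler, assuming get_cycles() from
<asm/timex.h>; the accumulator names and the placement around
pick_next_task() are illustrative, not from the thread:]

	#include <asm/timex.h>	/* cycles_t, get_cycles() */

	static u64 pick_cycles, pick_count;	/* illustrative accumulators */

	/* in schedule(), around the indirect-call-heavy path: */
	cycles_t t0 = get_cycles();
	next = pick_next_task(rq, prev);
	pick_cycles += get_cycles() - t0;
	pick_count++;
	/*
	 * pick_cycles / pick_count gives the average cost in cycles;
	 * export it via /proc or a periodic printk and compare the
	 * patched vs. unpatched kernel.
	 */

[Counting cycles inside the kernel sidesteps the run-to-run noise that
swamps lat_ctx, though, as Ingo notes, whether the resulting numbers
would justify the change is a separate question.]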