On 05/15/2014 07:57 PM, Peter Zijlstra wrote:
[snip]
>>
>> It's like:
>>
>> /cgroup/cpu/l1/l2/l3/l4/l5/l6/A
>>
>> about level 7, the issue can not be solved any more.
>
> That's pretty retarded and yeah, that's way past the point where things
> make sense. You might be lucky and have l1-5 as empty/pointless
> hierarchy so the effective depth is less and then things will work, but
> *shees*..
Exactly, that's a simulation of the cgroup topology set up by libvirt; it
really doesn't make sense... more torture than deployment, but they do
build things like that...

> [snip]
>> I'm not sure which account will turn out to be huge when the group
>> gets deeper; the load accumulation suffers a discount when passing
>> up, doesn't it?
>>
>
> It'll use 20 bits for precision instead of 10, so it gives a little more
> 'room' for deeper hierarchies/big cpu-count.

Got it :)

> All assuming you're running 64bit kernels of course.

Yes, it's 64bit. I tried the test with this feature on, but it doesn't
seem to address the issue...

But we found one difference when the group gets deeper: the tasks of
that group gather on one CPU more often. Sometimes all the dbench
instances were running on the same CPU; this doesn't happen for the l1
group, which may explain why dbench can no longer get more than 100%
CPU.

But why the gathering happens when the group gets deeper is unclear...
will try to figure it out :)

Regards,
Michael Wang
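[Editor's note: a toy model of the precision point above. On 64-bit
kernels the load weights are scaled by an extra SCHED_LOAD_RESOLUTION
(2^10) on top of NICE_0_LOAD (1024 = 2^10), giving roughly 20 bits of
fixed-point precision instead of 10. The simple divide-per-level model
below is a hypothetical illustration, not the actual fair.c math, but
it shows why a deep hierarchy truncates a 10-bit share to zero while a
20-bit share survives:]

```python
def propagate_share(resolution_bits, depth, cpus=4):
    """Divide one group's share by `cpus` at each of `depth` hierarchy
    levels, using truncating integer fixed-point arithmetic (a toy
    stand-in for per-level share distribution, not real kernel code)."""
    share = 1 << resolution_bits  # full weight in fixed point
    for _ in range(depth):
        share //= cpus            # each level spreads load across CPUs
    return share

for bits in (10, 20):
    for depth in (3, 7):
        print(f"{bits}-bit, depth {depth}: share = "
              f"{propagate_share(bits, depth)}")
# With 10 bits, the share truncates to 0 by depth 7 (1024 / 4^7 = 0);
# with 20 bits, depth 7 still leaves 64 units of resolution.
```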