>>>> +void memcg_charge_kmem_nofail(struct mem_cgroup *memcg, u64 size)
>>>> {
>>>> +	struct res_counter *fail_res;
>>>> +
>>>> +	/*
>>>> +	 * FIXME -- strictly speaking, this value should _also_
>>>> +	 * be charged into kmem counter. But since res_counter_charge
>>>> +	 * is sub-optimal (takes locks) AND we do not care much
>>>> +	 * about kmem limits (at least for now) we can just directly
>>>> +	 * charge into mem counter.
>>>> +	 */
>>>
>>> Please charge kmem too. As I've already told you it should not make any
>>> difference in terms of performance, because we already have a bottleneck
>>> of the same bandwidth.
>>>
>>> Anyway, if we see any performance degradation, I will convert
>>> mem_cgroup->kmem to a percpu counter.
>>
>> No, let's do it vice-versa -- first you fix the locking, then I update
>> this code.
>
> I don't understand why, because you provide no arguments and keep
> ignoring my reasoning why I think charging kmem along with res is OK,
> which is one paragraph above.
The bandwidth of the bottleneck doesn't look the same -- the res counters
in question are not in one cache line, and adding one more (btw, do we
have swap accounting turned on by default?) will not go unnoticed.

Yet again -- I don't mind changing this and charging TCP into kmem too;
I'll do it, but only after this charging becomes fast enough.

-- Pavel

_______________________________________________
Devel mailing list
Devel@openvz.org
https://lists.openvz.org/mailman/listinfo/devel