On 23.02.2015 21:54, Oleg Nesterov wrote:
> On 02/23, Oleg Nesterov wrote:
>>
>> On 02/23, Heinrich Schuchardt wrote:
>>>
>>> +static int memory_hotplug_callback(struct notifier_block *self,
>>> +                              unsigned long action, void *arg)
>>> +{
>>> +   switch (action) {
>>> +   case MEM_ONLINE:
>>> +           /*
>>> +            * If memory was added, try to maximize the number of allowed
>>> +            * threads.
>>> +            */
>>> +           set_max_threads(UINT_MAX);
>>> +           break;
>>> +   case MEM_OFFLINE:
>>> +           /*
>>> +            * If memory was removed, try to keep current value.
>>> +            */
>>> +           set_max_threads(max_threads);
>>> +           break;
>>> +   }
>>
>> can't understand... set_max_threads() added by 1/4 ignore its argument.
>> Why does it need "int max_threads_suggested" then?
> 
> OOPS sorry, missed 2/4 ;)
> 
>> And it changes the swapper/0's rlimits. This is pointless after we fork
>> /sbin/init.

So should writing to /proc/sys/kernel/threads-max update the limits of
all processes?

>>
>> It seems to me these patches need some cleanups. Plus I am not sure the
>> kernel should update max_threads automatically, we have the "threads-max"
>> sysctl.

The idea in the original version of fork_init is to choose max_threads
such that the memory needed to store the per-thread meta-information of
max_threads threads takes up at most 1/8th of total memory.

Somebody adding or removing memory will not necessarily update
/proc/sys/kernel/threads-max.

This means that if I remove 90% of the memory, max_threads still allows
so many threads to be created that their meta-information alone can
occupy all of the remaining memory.

With patch 4/4 max_threads is automatically reduced in this case.

Best regards

Heinrich