On Friday 10 January 2014 07:53 PM, Andrew Morton wrote:
> On Thu, 5 Dec 2013 12:13:16 -0500 Santosh Shilimkar 
> <santosh.shilim...@ti.com> wrote:
> 
>> On Thursday 05 December 2013 11:59 AM, Tejun Heo wrote:
>>> Hello,
>>>
>>> On Thu, Dec 05, 2013 at 03:12:30PM +0200, Grygorii Strashko wrote:
>>>> I'll try to provide more technical details here.
>>>> As Santosh mentioned in previous e-mails, it's not easy to simply
>>>> get rid of using MAX_NUMNODES:
>>>> 1) we introduce a new interface, memblock_allocX;
>>>> 2) our interface uses the memblock APIs __next_free_mem_range_rev()
>>>>    and __next_free_mem_range();
>>>> 3) __next_free_mem_range_rev() and __next_free_mem_range() use MAX_NUMNODES;
>>>> 4) __next_free_mem_range_rev() and __next_free_mem_range() are also used
>>>>    standalone, outside of our interface, as part of *for_each_free_mem_range*
>>>>    or for_each_mem_pfn_range, etc.
>>>>
>>>> Point [4] means we need to find and correct all the places where the
>>>> memblock APIs are used and where MAX_NUMNODES is expected as an input
>>>> parameter.  The major problem is that a simple "grep" will not work,
>>>> because the memblock API calls are hidden inside other MM modules and
>>>> it's not always clear what will be passed as input parameters to the
>>>> APIs of those modules (for example
>>>> sparse_memory_present_with_active_regions() or sparse.c).
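
As an aside, here is a minimal userspace sketch of the pattern described
above (struct mem_range, next_free_range() and for_each_free_range() are
simplified stand-ins for illustration, not the kernel's actual definitions):
the "match any node" sentinel lives inside the iteration macro, so call
sites depend on MAX_NUMNODES without ever spelling it out, which is why a
plain grep for the constant misses them.

/*
 * Simplified userspace model of the memblock iterator convention under
 * discussion; names and values are illustrative, not the kernel's.
 */
#include <stdio.h>

#define MAX_NUMNODES	4	/* sentinel historically used as "any node" */
#define NUMA_NO_NODE	(-1)	/* preferred "no node specified" sentinel */

struct mem_range {
	unsigned long start, end;
	int nid;
};

static struct mem_range ranges[] = {
	{ 0x0000, 0x4000, 0 },
	{ 0x8000, 0xc000, 1 },
};

/* Core iterator: returns the next matching index + 1, or -1 when done. */
static int next_free_range(int idx, int nid, unsigned long *start,
			   unsigned long *end)
{
	for (; idx < (int)(sizeof(ranges) / sizeof(ranges[0])); idx++) {
		if (nid == MAX_NUMNODES || ranges[idx].nid == nid) {
			*start = ranges[idx].start;
			*end = ranges[idx].end;
			return idx + 1;
		}
	}
	return -1;
}

/* Convenience macro: the sentinel is hidden here, not at the call site. */
#define for_each_free_range(i, start, end)				\
	for (i = next_free_range(0, MAX_NUMNODES, &(start), &(end));	\
	     i >= 0;							\
	     i = next_free_range(i, MAX_NUMNODES, &(start), &(end)))

int main(void)
{
	unsigned long start, end;
	int i;

	for_each_free_range(i, start, end)
		printf("free range [0x%lx-0x%lx]\n", start, end);
	return 0;
}

Converting the sentinel therefore means touching the iterator, the macro,
and every caller of the macro, not just the lines that literally mention
MAX_NUMNODES.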
>>>
>>> Isn't that kinda trivial to work around?  Make those functions accept
>>> both MAX_NUMNODES and NUMA_NO_NODE but emit a warning on MAX_NUMNODES
>>> (preferably reasonably throttled).  Given the history of the API, we'd
>>> probably want to keep such a warning around for an extended period of
>>> time, but that's what we'd need to do no matter what.
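
For what it's worth, a rough userspace sketch of that compatibility
approach (normalize_nid(), range_matches() and the fprintf()-based warning
are made-up placeholders; in the kernel the throttled warning would be
something like pr_warn_once() or a ratelimited WARN()):

/*
 * Sketch of the suggested transition: keep accepting the legacy
 * MAX_NUMNODES wildcard, but normalize it to NUMA_NO_NODE and complain
 * once so offending callers can be found and converted over time.
 * All names here are illustrative, not the kernel's actual code.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_NUMNODES	4
#define NUMA_NO_NODE	(-1)

static int normalize_nid(int nid)
{
	static bool warned;

	if (nid == MAX_NUMNODES) {
		if (!warned) {
			fprintf(stderr,
				"caller passed MAX_NUMNODES, expected NUMA_NO_NODE\n");
			warned = true;
		}
		return NUMA_NO_NODE;	/* keep legacy callers working */
	}
	return nid;
}

/* Any-node lookups behave the same whichever sentinel the caller used. */
static bool range_matches(int range_nid, int requested_nid)
{
	requested_nid = normalize_nid(requested_nid);
	return requested_nid == NUMA_NO_NODE || range_nid == requested_nid;
}

int main(void)
{
	/* Legacy call site: still works, but triggers the one-time warning. */
	printf("legacy: %d\n", range_matches(1, MAX_NUMNODES));
	/* Converted call site: no warning. */
	printf("converted: %d\n", range_matches(1, NUMA_NO_NODE));
	return 0;
}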
>>>
>> Looks like a good idea.
>>
>>>> As a result, the WIP patch I did, which was posted by Santosh,
>>>> illustrates the probable size and complexity of the change.
>>>
>>> Again, I don't really mind the order in which things happen, but I
>>> don't think it's a good idea to spread misuse with a new API.  You
>>> gotta deal with it one way or the other.
>>>
>>>> Sorry, but the question here is not "to do or not to do?", but rather
>>>> "how to do it?", taking into account the complexity and state of the
>>>> current MM code.  For example, would it be OK if I work around the
>>>> issue as in the attached patch?
>>>
>>> Well, it's more a question of when.  It's not really a technically
>>> difficult task, and all I'm saying is it had better be sooner rather
>>> than later.
>>>
>> Fair enough. Based on your suggestion, we will see if we can proceed
>> with point 4), accepting both MAX_NUMNODES and NUMA_NO_NODE.
>>
>> Thanks for the suggestion.
> 
> So where do we now stand with this MAX_NUMNODES-vs-NUMA_NO_NODE mess? 
> Is the conversion to NUMA_NO_NODE in current linux-next completed and
> nicely tested?
> 
From all the reports so far, there were actually only 3 places in x86
code using MAX_NUMNODES, and the fix for those is already in your queue.
So I guess we are good on that aspect now.

Regards,
Santosh
