On 03/27/2015 01:50 AM, David Rientjes wrote:
> On Thu, 26 Mar 2015, Andrey Ryabinin wrote:
> 
>>> +static void check_element(mempool_t *pool, void *element)
>>> +{
>>> +       /* Mempools backed by slab allocator */
>>> +       if (pool->free == mempool_free_slab || pool->free == mempool_kfree)
>>> +               __check_element(pool, element, ksize(element));
>>> +
>>> +       /* Mempools backed by page allocator */
>>> +       if (pool->free == mempool_free_pages) {
>>> +               int order = (int)(long)pool->pool_data;
>>> +               void *addr = page_address(element);
>>> +
>>> +               __check_element(pool, addr, 1UL << (PAGE_SHIFT + order));
>>>         }
>>>  }
>>>
>>> -static void poison_slab_element(mempool_t *pool, void *element)
>>> +static void __poison_element(void *element, size_t size)
>>>  {
>>> -       if (pool->alloc == mempool_alloc_slab ||
>>> -           pool->alloc == mempool_kmalloc) {
>>> -               size_t size = ksize(element);
>>> -               u8 *obj = element;
>>> +       u8 *obj = element;
>>> +
>>> +       memset(obj, POISON_FREE, size - 1);
>>> +       obj[size - 1] = POISON_END;
>>> +}
>>> +
>>> +static void poison_element(mempool_t *pool, void *element)
>>> +{
>>> +       /* Mempools backed by slab allocator */
>>> +       if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
>>> +               __poison_element(element, ksize(element));
>>> +
>>> +       /* Mempools backed by page allocator */
>>> +       if (pool->alloc == mempool_alloc_pages) {
>>> +               int order = (int)(long)pool->pool_data;
>>> +               void *addr = page_address(element);
>>>
>>> -               memset(obj, POISON_FREE, size - 1);
>>> -               obj[size - 1] = POISON_END;
>>> +               __poison_element(addr, 1UL << (PAGE_SHIFT + order));
>>
>> I think it would be better to use kernel_map_pages() here and in
>> check_element().
> 
> Hmm, interesting suggestion.
> 
>> This implies that poison_element()/check_element() has to be moved out of
>> CONFIG_DEBUG_SLAB || CONFIG_SLUB_DEBUG_ON ifdef (keeping only slab
>> poisoning under this ifdef).
> 
> The mempool poisoning introduced here is really its own poisoning built on 
> top of whatever the mempool allocator is.  Otherwise, it would have called 
> into the slab subsystem to do the poisoning and include any allocated 
> space beyond the object size itself. 

Perhaps that would be a good thing to do. I mean, it makes sense to check
the redzone for corruption as well.
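
Something like the sketch below is what I have in mind. Note that
kmem_cache_validate_object() is purely hypothetical; the slab subsystem
doesn't export such a per-object debug hook today, so it would have to be
added first:

	static void check_element(mempool_t *pool, void *element)
	{
		/* Mempools backed by a slab cache: let the slab code
		 * validate the whole object, redzones included, instead
		 * of open-coding the poison check here.
		 */
		if (pool->free == mempool_free_slab)
			kmem_cache_validate_object(pool->pool_data,
						   element); /* hypothetical */
	}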

> Mempool poisoning is agnostic to the
> underlying memory, just like the chain of elements is; mempools don't even
> store the size.
> 
> We don't have a need to set PAGE_EXT_DEBUG_POISON on these pages sitting 
> in the reserved pool, nor do we have a need to do kmap_atomic() since it's 
> already mapped and must be mapped to be on the reserved pool, which is 
> handled by mempool_free().
> 

Well, yes. But that applies only to architectures that don't have
ARCH_SUPPORTS_DEBUG_PAGEALLOC. The remaining architectures would only
benefit from this, since kernel_map_pages() could potentially find more bugs.
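
For the page-backed case I'm thinking of something like this untested
sketch. The element is the struct page * returned by mempool_alloc_pages(),
so on free we could unmap (or poison, depending on the arch) the pages
while they sit in the reserved pool, and map them back on allocation:

	static void poison_element(mempool_t *pool, void *element)
	{
		/* Mempools backed by page allocator */
		if (pool->alloc == mempool_alloc_pages) {
			int order = (int)(long)pool->pool_data;

			/* Unmap (or poison) the pages while unused. */
			kernel_map_pages((struct page *)element, 1 << order, 0);
		}
	}

	static void check_element(mempool_t *pool, void *element)
	{
		/* Mempools backed by page allocator */
		if (pool->free == mempool_free_pages) {
			int order = (int)(long)pool->pool_data;

			/* Map the pages back (on arches that poison instead
			 * of unmapping, this also verifies the poison
			 * pattern) before handing the element out.
			 */
			kernel_map_pages((struct page *)element, 1 << order, 1);
		}
	}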


