Re: [Devel] [RFC PATCH 1/2] autofs: set compat flag on sbi when daemon uses 32bit addressation

2017-09-14 Thread Stanislav Kinsburskiy
On 14.09.2017 02:38, Ian Kent wrote: > On 01/09/17 19:21, Stanislav Kinsburskiy wrote: >> Signed-off-by: Stanislav Kinsburskiy >> --- >> fs/autofs4/autofs_i.h | 3 +++ >> fs/autofs4/dev-ioctl.c | 3 +++ >> fs/autofs4/inode.c | 4 +++- >> 3 files changed, 9 insertions(+), 1 deletion(-

Re: [Devel] [RFC PATCH 1/2] autofs: set compat flag on sbi when daemon uses 32bit addressation

2017-09-14 Thread Ian Kent
On 14/09/17 17:24, Stanislav Kinsburskiy wrote: > > > On 14.09.2017 02:38, Ian Kent wrote: >> On 01/09/17 19:21, Stanislav Kinsburskiy wrote: >>> Signed-off-by: Stanislav Kinsburskiy >>> --- >>> fs/autofs4/autofs_i.h | 3 +++ >>> fs/autofs4/dev-ioctl.c | 3 +++ >>> fs/autofs4/inode.c |

Re: [Devel] [RFC PATCH 1/2] autofs: set compat flag on sbi when daemon uses 32bit addressation

2017-09-14 Thread Stanislav Kinsburskiy
On 14.09.2017 13:29, Ian Kent wrote: > On 14/09/17 17:24, Stanislav Kinsburskiy wrote: >> >> >> On 14.09.2017 02:38, Ian Kent wrote: >>> On 01/09/17 19:21, Stanislav Kinsburskiy wrote: Signed-off-by: Stanislav Kinsburskiy --- fs/autofs4/autofs_i.h | 3 +++ fs/autofs4/dev-ioctl

Re: [Devel] [RFC PATCH 1/2] autofs: set compat flag on sbi when daemon uses 32bit addressation

2017-09-14 Thread Ian Kent
On 14/09/17 19:39, Stanislav Kinsburskiy wrote: > > > On 14.09.2017 13:29, Ian Kent wrote: >> On 14/09/17 17:24, Stanislav Kinsburskiy wrote: >>> >>> >>> On 14.09.2017 02:38, Ian Kent wrote: On 01/09/17 19:21, Stanislav Kinsburskiy wrote: > Signed-off-by: Stanislav Kinsburskiy > --- >

Re: [Devel] [RFC PATCH 1/2] autofs: set compat flag on sbi when daemon uses 32bit addressation

2017-09-14 Thread Stanislav Kinsburskiy
On 14.09.2017 13:45, Ian Kent wrote: > On 14/09/17 19:39, Stanislav Kinsburskiy wrote: >> >> >> On 14.09.2017 13:29, Ian Kent wrote: >>> On 14/09/17 17:24, Stanislav Kinsburskiy wrote: On 14.09.2017 02:38, Ian Kent wrote: > On 01/09/17 19:21, Stanislav Kinsburskiy wrote: >> Signed-o

[Devel] [PATCH rh7 05/39] mm/mempool: avoid KASAN marking mempool poison checks as use-after-free

2017-09-14 Thread Andrey Ryabinin
From: Matthew Dawson When removing an element from the mempool, mark it as unpoisoned in KASAN before verifying its contents for SLUB/SLAB debugging. Otherwise KASAN will flag the reads that check the element's poison pattern (written at free time) as use-after-free reads. Signed-off-by: Matthew Dawson Acked-by:
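
A minimal sketch of the ordering this establishes in mm/mempool.c; the helper names and the omission of the gfp flags argument are simplifications from memory, not a verbatim copy of the patch:

static void *remove_element(mempool_t *pool)
{
	void *element = pool->elements[--pool->curr_nr];

	BUG_ON(pool->curr_nr < 0);
	/*
	 * Unpoison first: the element was poisoned by KASAN when it entered
	 * the pool, so the SLUB/SLAB poison check below would otherwise be
	 * reported as a use-after-free read.
	 */
	kasan_unpoison_element(pool, element);
	check_element(pool, element);
	return element;
}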

[Devel] [PATCH rh7 04/39] mm, kasan: SLAB support

2017-09-14 Thread Andrey Ryabinin
From: Alexander Potapenko Add KASAN hooks to SLAB allocator. This patch is based on the "mm: kasan: unified support for SLUB and SLAB allocators" patch originally prepared by Dmitry Chernenkov. Signed-off-by: Alexander Potapenko Cc: Christoph Lameter Cc: Pekka Enberg Cc: David Rientjes Cc:

[Devel] [PATCH rh7 12/39] lib/stackdepot: avoid to return 0 handle

2017-09-14 Thread Andrey Ryabinin
From: Joonsoo Kim Recently, we allowed saving a stacktrace whose hash value is 0. This causes a problem: stackdepot could return 0 even on success, and a user of stackdepot cannot distinguish success from failure, so we need to solve this. In this patch, 1 bit is added to the ha
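
As far as I recall, the fix reserves one bit of the packed handle as a "valid" marker, roughly as below; the field and macro names are assumptions based on lib/stackdepot.c of that era:

union handle_parts {
	depot_stack_handle_t handle;
	struct {
		u32 slabindex : STACK_ALLOC_INDEX_BITS;
		u32 offset    : STACK_ALLOC_OFFSET_BITS;
		u32 valid     : 1;	/* always set, so a real handle can never be 0 */
	};
};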

[Devel] [PATCH rh7 19/39] mm/kasan, slub: don't disable interrupts when object leaves quarantine

2017-09-14 Thread Andrey Ryabinin
SLUB doesn't require disabled interrupts to call ___cache_free(). Link: http://lkml.kernel.org/r/1470062715-14077-3-git-send-email-aryabi...@virtuozzo.com Signed-off-by: Andrey Ryabinin Acked-by: Alexander Potapenko Cc: Dmitry Vyukov Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds
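
A hedged sketch of the resulting logic in mm/kasan/quarantine.c (simplified from memory): interrupts are only disabled around ___cache_free() when the kernel is built with SLAB.

static void qlink_free(struct qlist_node *qlink, struct kmem_cache *cache)
{
	void *object = qlink_to_object(qlink, cache);
	unsigned long flags;

	if (IS_ENABLED(CONFIG_SLAB))
		local_irq_save(flags);

	___cache_free(cache, object, _THIS_IP_);

	if (IS_ENABLED(CONFIG_SLAB))
		local_irq_restore(flags);
}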

[Devel] [PATCH rh7 09/39] mm, kasan: fix compilation for CONFIG_SLAB

2017-09-14 Thread Andrey Ryabinin
From: Alexander Potapenko Add the missing argument to set_track(). Fixes: cd11016e5f52 ("mm, kasan: stackdepot implementation. Enable stackdepot for SLAB") Signed-off-by: Alexander Potapenko Cc: Andrey Konovalov Cc: Christoph Lameter Cc: Dmitry Vyukov Cc: Andrey Ryabinin Cc: Steven Rostedt

[Devel] [PATCH rh7 11/39] lib/stackdepot.c: allow the stack trace hash to be zero

2017-09-14 Thread Andrey Ryabinin
From: Alexander Potapenko Do not bail out from depot_save_stack() if the stack trace has zero hash. Initially depot_save_stack() silently dropped stack traces with zero hashes; however, there's actually no point in reserving this zero value. Reported-by: Joonsoo Kim Signed-off-by: Alexander Pota

[Devel] [PATCH rh7 03/39] kasan: various fixes in documentation

2017-09-14 Thread Andrey Ryabinin
From: Andrey Konovalov [a...@linux-foundation.org: coding-style fixes] Signed-off-by: Andrey Konovalov Cc: Andrey Ryabinin Cc: Dmitry Vyukov Cc: Alexander Potapenko Cc: Konstantin Serebryany Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds https://jira.sw.ru/browse/PSBM-69081 (c

[Devel] [PATCH rh7 24/39] x86, kasan, ftrace: Put APIC interrupt handlers into .irqentry.text

2017-09-14 Thread Andrey Ryabinin
From: Alexander Potapenko Dmitry Vyukov has reported unexpected KASAN stackdepot growth: https://github.com/google/kasan/issues/36 ... which is caused by the APIC handlers not being present in .irqentry.text: When building with CONFIG_FUNCTION_GRAPH_TRACER=y or CONFIG_KASAN=y, put the APIC i

[Devel] [PATCH rh7 29/39] kasan: update kasan_global for gcc 7

2017-09-14 Thread Andrey Ryabinin
From: Dmitry Vyukov kasan_global struct is part of compiler/runtime ABI. gcc revision 241983 has added a new field to kasan_global struct. Update kernel definition of kasan_global struct to include the new field. Without this patch KASAN is broken with gcc 7. Link: http://lkml.kernel.org/r/1

[Devel] [PATCH rh7 13/39] mm, kasan: don't call kasan_krealloc() from ksize().

2017-09-14 Thread Andrey Ryabinin
From: Alexander Potapenko Instead of calling kasan_krealloc(), which replaces the memory allocation stack ID (if stack depot is used), just unpoison the whole memory chunk. Signed-off-by: Alexander Potapenko Acked-by: Andrey Ryabinin Cc: Andrey Konovalov Cc: Dmitry Vyukov Cc: Christoph Lamet

[Devel] [PATCH rh7 10/39] mm: kasan: initial memory quarantine implementation

2017-09-14 Thread Andrey Ryabinin
From: Alexander Potapenko Quarantine isolates freed objects in a separate queue. The objects are returned to the allocator later, which helps to detect use-after-free errors. When the object is freed, its state changes from KASAN_STATE_ALLOC to KASAN_STATE_QUARANTINE. The object is poisoned an
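
The following is a toy user-space model of the mechanism described above, not the kernel implementation (which uses per-CPU qlists and shadow-memory state): freed objects are parked on a FIFO and only handed back to the allocator once the FIFO grows past a limit, so a use-after-free touches memory that is still poisoned and detectable.

#include <stdlib.h>

struct qnode { struct qnode *next; void *obj; size_t size; };

static struct qnode *q_head, *q_tail;
static size_t q_bytes;
#define Q_LIMIT (1 << 20)

static void quarantine_reduce(void)
{
	/* Return the oldest objects to the allocator until under the limit. */
	while (q_bytes > Q_LIMIT / 2 && q_head) {
		struct qnode *n = q_head;

		q_head = n->next;
		if (!q_head)
			q_tail = NULL;
		q_bytes -= n->size;
		free(n->obj);		/* the "real" free, delayed until now */
		free(n);
	}
}

/* Called instead of free(): park the object in the quarantine FIFO. */
void quarantine_put(void *obj, size_t size)
{
	struct qnode *n = malloc(sizeof(*n));

	if (!n) {
		free(obj);		/* fall back to an immediate free */
		return;
	}
	n->obj = obj;
	n->size = size;
	n->next = NULL;
	if (q_tail)
		q_tail->next = n;
	else
		q_head = n;
	q_tail = n;
	q_bytes += size;
	if (q_bytes > Q_LIMIT)
		quarantine_reduce();
}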

[Devel] [PATCH rh7 17/39] lib/stackdepot.c: use __GFP_NOWARN for stack allocations

2017-09-14 Thread Andrey Ryabinin
From: "Kirill A. Shutemov" This (large, atomic) allocation attempt can fail. We expect and handle that, so avoid the scary warning. Link: http://lkml.kernel.org/r/20160720151905.gb19...@node.shutemov.name Cc: Andrey Ryabinin Cc: Alexander Potapenko Cc: Michal Hocko Cc: Rik van Riel Cc: Davi

[Devel] [PATCH rh7 08/39] mm, kasan: stackdepot implementation. Enable stackdepot for SLAB

2017-09-14 Thread Andrey Ryabinin
From: Alexander Potapenko Implement the stack depot and provide CONFIG_STACKDEPOT. Stack depot will allow KASAN to store allocation/deallocation stack traces for memory chunks. The stack traces are stored in a hash table and referenced by handles which reside in the kasan_alloc_meta and kasan_free
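
A hedged usage sketch of the API as it looked in this series (the save_stack_trace()/struct stack_trace era; signatures recalled from memory, treat them as assumptions), mirroring how KASAN records an allocation stack:

#include <linux/kernel.h>
#include <linux/stacktrace.h>
#include <linux/stackdepot.h>

static depot_stack_handle_t save_stack(gfp_t flags)
{
	unsigned long entries[16];
	struct stack_trace trace = {
		.entries	= entries,
		.max_entries	= ARRAY_SIZE(entries),
		.skip		= 2,	/* drop the save_stack() frames themselves */
	};

	save_stack_trace(&trace);
	/* Returns a small 32-bit handle; identical stacks share one copy. */
	return depot_save_stack(&trace, flags);
}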

[Devel] [PATCH rh7 25/39] kasan: remove the unnecessary WARN_ONCE from quarantine.c

2017-09-14 Thread Andrey Ryabinin
From: Alexander Potapenko It's quite unlikely that the user will have so little memory that the per-CPU quarantines won't fit into the given fraction of the available memory. Even in that case he won't be able to do anything with the information given in the warning. Link: http://lkml.kernel.org/r/1

[Devel] [PATCH rh7 16/39] mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB

2017-09-14 Thread Andrey Ryabinin
From: Alexander Potapenko For KASAN builds: - switch SLUB allocator to using stackdepot instead of storing the allocation/deallocation stacks in the objects; - change the freelist hook so that parts of the freelist can be put into the quarantine. [aryabi...@virtuozzo.com: fixes] Link:

[Devel] [PATCH rh7 27/39] kcov: do not instrument lib/stackdepot.c

2017-09-14 Thread Andrey Ryabinin
From: Alexander Potapenko There's no point in collecting coverage from lib/stackdepot.c, as it is not a function of syscall inputs. Disabling kcov instrumentation for that file will reduce the coverage noise level. Link: http://lkml.kernel.org/r/1474640972-104131-1-git-send-email-gli...@google

[Devel] [PATCH rh7 01/39] kasan: show gcc version requirements in Kconfig and Documentation

2017-09-14 Thread Andrey Ryabinin
From: Joe Perches The documentation shows a need for gcc > 4.9.2, but it's really >=. The Kconfig entries don't show required versions, so add them. Correct a latter/later typo too. Also mention that gcc 5 is required to catch out-of-bounds accesses to global and stack variables. Signed-off-by: Jo

[Devel] [PATCH rh7 15/39] kasan/quarantine: fix bugs on qlist_move_cache()

2017-09-14 Thread Andrey Ryabinin
From: Joonsoo Kim There are two bugs in qlist_move_cache(). One is that the qlist's tail isn't set properly: curr->next can be NULL, since it is a singly linked list, and a NULL tail is invalid if there is at least one item on the qlist. The other is that if the cache is matched, qlist_put() is called and it w
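
A hedged sketch of the fixed approach (helper names qlist_put() and qlink_to_cache() as I recall them from mm/kasan/quarantine.c): drain "from" completely and re-queue every node on the proper list, so the tail pointer and curr->next are always handled consistently.

static void qlist_move_cache(struct qlist_head *from, struct qlist_head *to,
			     struct kmem_cache *cache)
{
	struct qlist_node *curr;

	if (unlikely(qlist_empty(from)))
		return;

	curr = from->head;
	qlist_init(from);			/* "from" is rebuilt below */
	while (curr) {
		struct qlist_node *next = curr->next;	/* save before re-queueing */
		struct kmem_cache *obj_cache = qlink_to_cache(curr);

		if (obj_cache == cache)
			qlist_put(to, curr, obj_cache->size);
		else
			qlist_put(from, curr, obj_cache->size);

		curr = next;
	}
}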

[Devel] [PATCH rh7 26/39] mm, mempolicy: task->mempolicy must be NULL before dropping final reference

2017-09-14 Thread Andrey Ryabinin
From: David Rientjes KASAN allocates memory from the page allocator as part of kmem_cache_free(), and that can reference current->mempolicy through any number of allocation functions. It needs to be NULL'd out before the final reference is dropped to prevent a use-after-free bug: BUG: K
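
A hedged sketch of the ordering the changelog asks for (not the exact upstream diff; locking detail is an assumption): clear the task field before the final mpol_put(), so any allocation done by KASAN inside kmem_cache_free() sees NULL rather than a mempolicy that is in the middle of being freed.

#ifdef CONFIG_NUMA
	struct mempolicy *pol;

	task_lock(tsk);
	pol = tsk->mempolicy;
	tsk->mempolicy = NULL;		/* NULL'd out before the final reference drop */
	task_unlock(tsk);
	mpol_put(pol);
#endif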

[Devel] [PATCH rh7 30/39] kasan: eliminate long stalls during quarantine reduction

2017-09-14 Thread Andrey Ryabinin
From: Dmitry Vyukov Currently we dedicate 1/32 of RAM for quarantine and then reduce it by 1/4 of total quarantine size. This can be a significant amount of memory. For example, with 4GB of RAM total quarantine size is 128MB and it is reduced by 32MB at a time. With 128GB of RAM total quaranti

[Devel] [PATCH rh7 21/39] mm/kasan: get rid of ->state in struct kasan_alloc_meta

2017-09-14 Thread Andrey Ryabinin
The state of an object is currently tracked in two places - shadow memory, and the ->state field in struct kasan_alloc_meta. We can get rid of the latter. This will save us a little bit of memory. Also, this allows us to move the free stack into struct kasan_alloc_meta without increasing memory consumption

[Devel] [PATCH rh7 22/39] kasan: improve double-free reports

2017-09-14 Thread Andrey Ryabinin
Currently we just dump the stack in case of a double-free bug. Let's dump all the info about the object that we have. [aryabi...@virtuozzo.com: change double free message per Alexander] Link: http://lkml.kernel.org/r/1470153654-30160-1-git-send-email-aryabi...@virtuozzo.com Link: http://lkml.kernel.org/

[Devel] [PATCH rh7 07/39] arch, ftrace: for KASAN put hard/soft IRQ entries into separate sections

2017-09-14 Thread Andrey Ryabinin
From: Alexander Potapenko KASAN needs to know whether the allocation happens in an IRQ handler. This lets us strip everything below the IRQ entry point to reduce the number of unique stack traces that need to be stored. Move the definition of __irq_entry to <linux/interrupt.h> so that the users don't need to pull in
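
A sketch of the pieces involved (definitions recalled from memory; treat exact locations as assumptions). __irq_entry places a handler into the dedicated .irqentry.text section, and KASAN can then compare stack-trace entries against the section bounds to drop everything below the IRQ entry point:

#define __irq_entry  __attribute__((__section__(".irqentry.text")))

/* Section bounds come from the linker script via asm/sections.h. */
static inline bool in_irqentry_text(unsigned long ptr)
{
	return (ptr >= (unsigned long)&__irqentry_text_start &&
		ptr <  (unsigned long)&__irqentry_text_end) ||
	       (ptr >= (unsigned long)&__softirqentry_text_start &&
		ptr <  (unsigned long)&__softirqentry_text_end);
}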

[Devel] [PATCH rh7 23/39] kasan: avoid overflowing quarantine size on low memory systems

2017-09-14 Thread Andrey Ryabinin
From: Alexander Potapenko If the total amount of memory assigned to quarantine is less than the amount of memory assigned to per-cpu quarantines, |new_quarantine_size| may overflow. Instead, set it to zero. [a...@linux-foundation.org: cleanup: use WARN_ONCE return value] Link: http://lkml.kern
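
A hedged sketch of the guarded computation (macro and variable names as I recall mm/kasan/quarantine.c; WARN_ONCE() returns true when its condition fired, which is the cleanup the changelog mentions):

	unsigned long new_quarantine_size;
	/* QUARANTINE_FRACTION is the 1/32-of-RAM fraction mentioned elsewhere in the series. */
	unsigned long total_size = (READ_ONCE(totalram_pages) << PAGE_SHIFT) /
				   QUARANTINE_FRACTION;
	unsigned long percpu_quarantines = QUARANTINE_PERCPU_SIZE * num_online_cpus();

	if (WARN_ONCE(total_size < percpu_quarantines,
		      "Too little memory, disabling global KASAN quarantine.\n"))
		new_quarantine_size = 0;
	else
		new_quarantine_size = total_size - percpu_quarantines;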

[Devel] [PATCH rh7 18/39] mm/kasan: fix corruptions and false positive reports

2017-09-14 Thread Andrey Ryabinin
Once an object is put into quarantine, we no longer own it, i.e. the object could leave the quarantine and be reallocated. So having the set_track() call after quarantine_put() may corrupt slab objects. BUG kmalloc-4096 (Not tainted): Poison overwritten

[Devel] [PATCH rh7 20/39] mm/kasan: get rid of ->alloc_size in struct kasan_alloc_meta

2017-09-14 Thread Andrey Ryabinin
The size of a slab object is already stored in cache->object_size. Note that kmalloc() internally rounds up the size of an allocation, so object_size may not be equal to alloc_size, but usually we don't need to know the exact size of an allocated object. In case we need that information, we can still figure it
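
A short illustration of the point above (the sizes follow the usual kmalloc cache geometry and are shown as an assumption, not a guarantee):

	void *p = kmalloc(100, GFP_KERNEL);	/* requested alloc_size = 100 */
	/*
	 * The object comes from the kmalloc-128 cache, so:
	 *   cache->object_size == 128  (what this patch relies on)
	 *   ksize(p)           == 128  (rounded-up usable size)
	 * The exact value 100 is not stored anywhere once ->alloc_size is gone.
	 */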

[Devel] [PATCH rh7 06/39] mm, kasan: add GFP flags to KASAN API

2017-09-14 Thread Andrey Ryabinin
From: Alexander Potapenko Add GFP flags to KASAN hooks for future patches to use. This patch is based on the "mm: kasan: unified support for SLUB and SLAB allocators" patch originally prepared by Dmitry Chernenkov. Signed-off-by: Alexander Potapenko Cc: Christoph Lameter Cc: Pekka Enberg Cc:

[Devel] [PATCH rh7 14/39] kasan: add newline to messages

2017-09-14 Thread Andrey Ryabinin
From: Dmitry Vyukov Currently GPF messages with KASAN look as follows: kasan: GPF could be caused by NULL-ptr deref or user memory accessgeneral protection fault: [#1] SMP DEBUG_PAGEALLOC KASAN Add newlines. Link: http://lkml.kernel.org/r/1467294357-98002-1-git-send-email-dvyu...@goog

[Devel] [PATCH rh7 02/39] Documentation: kasan: fix a typo

2017-09-14 Thread Andrey Ryabinin
From: Wang Long Fix a couple of typos in the kasan document. Signed-off-by: Wang Long Signed-off-by: Jonathan Corbet https://jira.sw.ru/browse/PSBM-69081 (cherry picked from commit f66fa08bf9e59b1231aba9e3c2ec28dcf08f0389) Signed-off-by: Andrey Ryabinin --- Documentation/kasan.txt | 2 +- 1

[Devel] [PATCH rh7 28/39] lib/stackdepot.c: bump stackdepot capacity from 16MB to 128MB

2017-09-14 Thread Andrey Ryabinin
From: Dmitry Vyukov KASAN uses stackdepot to memorize stacks for all kmalloc/kfree calls. Current stackdepot capacity is 16MB (1024 top level entries x 4 pages on second level). Size of each stack is (num_frames + 3) * sizeof(long). Which gives us ~84K stacks. This capacity was chosen empirical
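
Back-of-the-envelope math behind the numbers quoted above; the ~22-frame average is an assumption chosen to be consistent with the quoted ~84K figure, not a value from the patch:

	/*
	 * old capacity : 1024 top-level entries * 4 pages * 4096 B = 16 MB
	 * per stack    : (num_frames + 3) * sizeof(long)
	 *                ~ (22 + 3) * 8 B                          = ~200 B
	 * stacks       : 16 MB / ~200 B                            = ~84K
	 * new capacity : 128 MB, i.e. roughly 8x as many stacks    = ~670K
	 */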

[Devel] [PATCH rh7 39/39] module: Fix load_module() error path

2017-09-14 Thread Andrey Ryabinin
From: Peter Zijlstra The load_module() error path frees a module but forgets to take it out of the mod_tree, leaving a dangling entry in the tree, causing havoc. Cc: Mathieu Desnoyers Reported-by: Arthur Marsh Tested-by: Arthur Marsh Fixes: 93c2e105f6bc ("module: Optimize __module_address() us

[Devel] [PATCH rh7 33/39] rbtree: Make lockless searches non-fatal

2017-09-14 Thread Andrey Ryabinin
From: Peter Zijlstra Change the insert and erase code such that lockless searches are non-fatal. In and of itself an rbtree cannot be correctly searched while in-modification; we can, however, provide weaker guarantees that will allow the rbtree to be used in conjunction with other techniques, suc

[Devel] [PATCH rh7 37/39] rbtree: Implement generic latch_tree

2017-09-14 Thread Andrey Ryabinin
From: Peter Zijlstra Implement a latched RB-tree in order to get unconditional RCU/lockless lookups. Cc: Oleg Nesterov Cc: Michel Lespinasse Cc: Andrea Arcangeli Cc: David Woodhouse Cc: Rik van Riel Cc: Mathieu Desnoyers Cc: "Paul E. McKenney" Signed-off-by: Peter Zijlstra (Intel) Signed
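
A hedged usage sketch of the API this patch introduces, in the style of the address-range lookup it was written for; the ops and function names are as I recall include/linux/rbtree_latch.h, so treat the exact signatures as assumptions:

#include <linux/rbtree_latch.h>

struct my_obj {
	unsigned long addr, size;
	struct latch_tree_node node;
};

/* Ordering for insertion (writers are serialized externally, e.g. by a mutex). */
static __always_inline bool my_less(struct latch_tree_node *a,
				    struct latch_tree_node *b)
{
	return container_of(a, struct my_obj, node)->addr <
	       container_of(b, struct my_obj, node)->addr;
}

/* Range comparison for lookup: 0 means the key falls inside this object. */
static __always_inline int my_comp(void *key, struct latch_tree_node *n)
{
	unsigned long addr = (unsigned long)key;
	struct my_obj *obj = container_of(n, struct my_obj, node);

	if (addr < obj->addr)
		return -1;
	if (addr >= obj->addr + obj->size)
		return 1;
	return 0;
}

static const struct latch_tree_ops my_ops = { .less = my_less, .comp = my_comp };
static struct latch_tree_root my_root;

/* Writers:  latch_tree_insert(&obj->node, &my_root, &my_ops);
 *           latch_tree_erase(&obj->node, &my_root, &my_ops);
 * Lockless readers (under RCU):
 *           struct latch_tree_node *n =
 *                   latch_tree_find((void *)addr, &my_root, &my_ops);
 */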

[Devel] [PATCH rh7 34/39] seqlock: Better document raw_write_seqcount_latch()

2017-09-14 Thread Andrey Ryabinin
From: Peter Zijlstra Improve the documentation of the latch technique as used in the current timekeeping code, such that it can be readily employed elsewhere. Borrow from the comments in timekeeping and replace those with a reference to this more generic comment. Cc: Andrea Arcangeli Cc: David
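
The latch pattern the improved comment describes, paraphrased from memory as a sketch (two copies of the data plus a sequence counter whose low bit tells readers which copy is stable); the modify()/data_query() callbacks stand in for whatever structure is being latched:

/* writer side */
void latch_modify(struct latch_struct *latch, ...)
{
	smp_wmb();	/* make the previous data[1] update visible */
	latch->seq++;
	smp_wmb();	/* publish the new sequence before touching data[0] */
	modify(latch->data[0], ...);

	smp_wmb();	/* make the data[0] update visible */
	latch->seq++;
	smp_wmb();	/* publish the new sequence before touching data[1] */
	modify(latch->data[1], ...);
}

/* reader side */
struct entry *latch_query(struct latch_struct *latch, ...)
{
	struct entry *entry;
	unsigned seq, idx;

	do {
		seq = lockless_dereference(latch->seq);	/* see patches 35/36 */

		idx = seq & 0x01;	/* odd seq: data[0] may be in flux, use data[1] */
		entry = data_query(latch->data[idx], ...);

		smp_rmb();		/* read_seqcount_retry() */
	} while (seq != latch->seq);

	return entry;
}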

[Devel] [PATCH rh7 38/39] module: Optimize __module_address() using a latched RB-tree

2017-09-14 Thread Andrey Ryabinin
From: Peter Zijlstra Currently __module_address() is using a linear search through all modules in order to find the module corresponding to the provided address. With a lot of modules this can take a lot of time. One of the users of this is kernel_text_address() which is employed in many stack u

[Devel] [PATCH rh7 32/39] kasan: fix races in quarantine_remove_cache()

2017-09-14 Thread Andrey Ryabinin
From: Dmitry Vyukov quarantine_remove_cache() frees all pending objects that belong to the cache before we destroy the cache itself. However, there are currently two ways in which it can fail to do so. First, another thread can hold some of the objects from the cache in its temp list in quarant

[Devel] [PATCH rh7 31/39] kasan: drain quarantine of memcg slab objects

2017-09-14 Thread Andrey Ryabinin
From: Greg Thelen Per memcg slab accounting and kasan have a problem with kmem_cache destruction. - kmem_cache_create() allocates a kmem_cache, which is used for allocations from processes running in root (top) memcg. - Processes running in non root memcg and allocating with either __GFP_

[Devel] [PATCH rh7 35/39] rcu: Move lockless_dereference() out of rcupdate.h

2017-09-14 Thread Andrey Ryabinin
From: Peter Zijlstra I want to use lockless_dereference() from seqlock.h, which would mean including rcupdate.h from it; however, rcupdate.h already includes seqlock.h. Avoid this by moving lockless_dereference() into compiler.h. This is somewhat tricky since it uses smp_read_barrier_depends() wh
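
For reference, the macro being moved looked roughly like this at the time (reconstructed from memory, so treat the details as an assumption):

#define lockless_dereference(p) \
({ \
	typeof(p) _________p1 = READ_ONCE(p); \
	smp_read_barrier_depends(); /* Dependency order vs. the load of p above. */ \
	(_________p1); \
})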

[Devel] [PATCH rh7 36/39] seqlock: Introduce raw_read_seqcount_latch()

2017-09-14 Thread Andrey Ryabinin
From: Peter Zijlstra Because with latches there is a strict data dependency on the seq load, we can avoid the rmb in favour of a read_barrier_depends. Suggested-by: Ingo Molnar Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Rusty Russell https://jira.sw.ru/browse/PSBM-69081 (cherry pick