Hi Bruce,

As we discussed before, this patch series is intended to fix a potential
kernel crash caused by an invalid mm context from worker_create. The
problem was introduced by commit efb5fea2, part of a continuing effort to
optimize find_vma() by avoiding potentially expensive rbtree walks when
locating a vma on faults. The crash looks like this:

CPU: 4 PID: 26 Comm: kworker/�#< Not tainted 3.14.22-WR7.0.0.0_standard #1
task: ffffffc03dd8cd00 ti: ffffffc03ddcc000 task.ti: ffffffc03ddcc000
PC is at 0x3c3e4402
LR is at 0x800000003c3e4402
pc : [<000000003c3e4402>] lr : [<800000003c3e4402>] pstate: 600001c5
sp : ffffffc03ddcfbf0
x29: ffffffc03ddcfbf0 x28: ffffffc000692128 
x27: 0000000000000000 x26: ffffffc000a79000 
x25: ffffffc00095c0d8 x24: ffffffc00092bcc0 
x23: 800000003c23a202 x22: ffffffc03ddcc000 
x21: ffffffc03dd8cd00 x20: ffffffc03dd8cd00 
x19: ffffffc03dce9600 x18: 0000000000000000 
x17: 0000007f86f1b180 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000c00 
x13: ffffffc039e8b580 x12: ffffffc00081b050 
x11: 0000000000000400 x10: ffffffc000000000 
x9 : ffffffc03ddcfbf0 x8 : ffffffc03dd8d240 
x7 : 0000000000080068 x6 : 0000000000000007 
x5 : ffffffc03c159678 x4 : 0000000000000001 
x3 : 0000000000000002 x2 : ffffffc03aae0c00 
x1 : ffffffc03dd8cd00 x0 : ffffffc03c159600 

Process kworker/�#< (pid: 26, stack limit = 0xffffffc03ddcc058)
Stack: (0xffffffc03ddcfbf0 to 0xffffffc03ddd0000)
fbe0:                                     3ddcfc10 ffffffc0 00691cc0 ffffffc0
fc00: 3ff5dcc0 ffffffc0 3dce9600 ffffffc0 3ddcfdb0 ffffffc0 00692128 ffffffc0
fc20: 3dc70080 ffffffc0 3dc700b0 ffffffc0 3ff5d918 ffffffc0 3ff5d900 ffffffc0
fc40: 3ddcc000 ffffffc0 fffffef7 00000000 009b4b33 ffffffc0 00808158 ffffffc0
fc60: 00000001 00000000 00000000 00000000 00809940 ffffffc0 009b4b71 ffffffc0
fc80: 0092bcc0 ffffffc0 3dce9600 ffffffc0 3dc5c240 ffffffc0 009c0f48 ffffffc0
fca0: 00804e48 ffffffc0 3dc70080 ffffffc0 000b069c ffffffc0 00000000 00000000
fcc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
fce0: 00809940 ffffffc0 009b4b71 ffffffc0 0092bcc0 ffffffc0 3dcabaf0 ffffffc0
fd00: 00000000 00000000 00000000 00000000 00000003 00000000 00000001 00000000
fd20: 00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000000
fd40: 00000000 00000000 00000003 00000000 3ddcfd80 ffffffc0 000d5ea4 ffffffc0
fd60: 3dcabad8 ffffffc0 00000000 00000000 00000003 00000000 00000000 00000000
fd80: 3ddcfda0 ffffffc0 006957c8 ffffffc0 3ff5d900 ffffffc0 3dcabae0 ffffffc0
fda0: 3ddcfdc0 ffffffc0 000b0940 ffffffc0 3ddcfdc0 ffffffc0 000b0944 ffffffc0
fdc0: 3ddcfe30 ffffffc0 000b7300 ffffffc0 3dc5c240 ffffffc0 009c0f48 ffffffc0
fde0: 00804e48 ffffffc0 3dc70080 ffffffc0 000b069c ffffffc0 00000000 00000000
fe00: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
fe20: 00804e48 ffffffc0 3dc70080 ffffffc0 00000000 00000000 00083950 ffffffc0
fe40: 000b7224 ffffffc0 3dc5c240 ffffffc0 00000000 00000000 00000000 00000000
fe60: 00000000 00000000 3dc5c240 ffffffc0 00000000 00000000 00000000 00000000
fe80: 00000000 00000000 3dc70080 ffffffc0 00000000 00000000 00000000 00000000
fea0: 3ddcfea0 ffffffc0 3ddcfea0 ffffffc0 00000000 ffffffc0 00000000 00000000
fec0: 3ddcfec0 ffffffc0 3ddcfec0 ffffffc0 00000000 00000000 00000000 00000000
fee0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ff00: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ff20: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ff40: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ff60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ff80: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ffa0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
ffc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000005 00000000
ffe0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Call trace:
[<000000003c3e4402>] 0x3c3e4402
[<ffffffc000691cbc>] __schedule+0x23c/0x678
[<ffffffc000692124>] schedule+0x2c/0x80
[<ffffffc0000b0940>] worker_thread+0x2a4/0x3c8
[<ffffffc0000b72fc>] kthread+0xd8/0xf0
Code: bad PC value

So I think 3.14.x still needs these patches as well; they can be
cherry-picked from mainline using the commit IDs quoted below.

Remaining Changes (diffstat):
-----------------------------
 kernel/workqueue.c          |  253 ++++++++++++++-----------------------------
 kernel/workqueue_internal.h |    2 +

Cheers,
Zumeng

commit 5fdfd3872c0faea4a9b60ad29c7d1a2500875c6c
Author: Lai Jiangshan <la...@cn.fujitsu.com>
Date:   Tue May 20 17:46:27 2014 +0800

    workqueue: use manager lock only to protect worker_idr
    
    worker_idr is highly bound to managers and is always/only accessed in
    manager lock context. So we don't need pool->lock for it.
    
    Signed-off-by: Lai Jiangshan <la...@cn.fujitsu.com>
    Signed-off-by: Tejun Heo <t...@kernel.org>
    (cherry picked from commit 9625ab1727743f6a164df26b7b1eeeced7380b42)
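
The locking rule this patch establishes could be sketched in userspace
roughly like this (ordinary pthreads code, not the kernel implementation;
the table and all names here are illustrative): every reader and writer of
the worker table already runs in manager context, so one manager mutex is
enough and no second pool->lock-style lock is needed.

```c
#include <assert.h>
#include <pthread.h>

#define MAX_WORKERS 8

/* Stand-in for worker_idr: guarded only by the "manager" mutex,
 * because every access path already holds it. */
struct pool {
    pthread_mutex_t manager_mutex;
    int worker_ids[MAX_WORKERS];
    int nr_workers;
};

static int pool_add_worker(struct pool *p, int id)
{
    int ret = -1;

    pthread_mutex_lock(&p->manager_mutex);
    if (p->nr_workers < MAX_WORKERS) {
        p->worker_ids[p->nr_workers++] = id;
        ret = 0;
    }
    pthread_mutex_unlock(&p->manager_mutex);
    return ret;
}

static int pool_has_worker(struct pool *p, int id)
{
    int i, found = 0;

    pthread_mutex_lock(&p->manager_mutex);
    for (i = 0; i < p->nr_workers; i++)
        if (p->worker_ids[i] == id)
            found = 1;
    pthread_mutex_unlock(&p->manager_mutex);
    return found;
}
```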

commit 2517907f496b1ef8b2a2d169cc65524797826c8b
Author: Lai Jiangshan <la...@cn.fujitsu.com>
Date:   Tue May 20 17:46:28 2014 +0800

    workqueue: destroy_worker() should destroy idle workers only
    
    We used to have a CPU-online failure path in which a worker was
    created for the CPU coming online and, if the online operation
    failed, shut down without ever being started. That behavior has
    since changed: the first worker is now created and started at the
    same time for the CPU coming online.
    
    This means the code already ensures that destroy_worker() destroys
    only idle workers, and we don't want to allow it to destroy any
    non-idle worker in the future; that would be buggy and extremely
    hard to verify. So we make destroy_worker() explicitly refuse to
    destroy anything but idle workers.
    
    Since destroy_worker() destroys only idle workers, this patch does not
    change any functionality. We just need to update the comments and the
    sanity check code.
    
    In the sanity check code, we will refuse to destroy the worker
    if !(worker->flags & WORKER_IDLE).
    
    A worker that has entered idle has necessarily been started, so we
    remove the "worker->flags & WORKER_STARTED" check. After this
    removal, WORKER_STARTED is completely unneeded, so we remove
    WORKER_STARTED too.
    
    In the comments for create_worker(), "Create a new worker which is bound..."
    is changed to "... which is attached..." because we now call this
    behavior attaching.
    
    tj: Minor description / comment updates.
    
    Signed-off-by: Lai Jiangshan <la...@cn.fujitsu.com>
    Signed-off-by: Tejun Heo <t...@kernel.org>
    (cherry picked from commit 73eb7fe73ae303996187fff38b1c162f1df0e9d1)
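
A minimal sketch of the narrowed sanity check (flag values and names here
are illustrative, not the kernel's): destroy_worker() now refuses anything
that is not idle, and the WORKER_STARTED test is gone along with the flag.

```c
#include <assert.h>

#define WORKER_IDLE 0x1   /* illustrative flag value */

struct worker { unsigned int flags; };

/* After this patch the only sanity check left is "is it idle?";
 * the old WORKER_STARTED test (and the flag itself) is removed. */
static int destroy_worker(struct worker *w)
{
    if (!(w->flags & WORKER_IDLE))
        return -1;        /* the kernel would warn and bail out */
    w->flags = 0;         /* tear-down stand-in */
    return 0;
}
```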

commit c73f6d0eae4a192b59ee8348bf1fd86ad8635e11
Author: Lai Jiangshan <la...@cn.fujitsu.com>
Date:   Tue May 20 17:46:29 2014 +0800

    workqueue: async worker destruction
    
    worker destruction includes these parts of code:
        adjust pool's stats
        remove the worker from idle list
        detach the worker from the pool
        kthread_stop() to wait for the worker's task exit
        free the worker struct
    
    There is no essential work to do after kthread_stop(), which means
    destroy_worker() doesn't need to wait for the worker's task to
    exit. So we can remove kthread_stop() and free the worker struct
    on the worker's exit path.
    
    However, put_unbound_pool() still needs to sync with all the
    workers' destruction before destroying the pool; otherwise, the
    exiting workers may access the already-invalid pool.
    
    So we also move the "detach the worker" code to the exit path and
    let put_unbound_pool() sync with it via detach_completion.
    
    The "detach the worker" code is wrapped in a new function,
    worker_detach_from_pool(). Although worker_detach_from_pool() is
    called only once (from worker_thread()) after this patch, we wrap
    it for these reasons:
    
      1) The "detach the worker" code is not short enough to open-code
         in worker_thread().
      2) The name worker_detach_from_pool() is self-documenting, and we
         add some comments above the function.
      3) It will be shared by the rescuer in a later patch, which lets
         rescuers and normal workers use the same attach/detach
         framework.
    
    The worker ID is freed when detaching, which happens before the
    worker is fully dead, so a dying worker's ID may be reused for a
    new worker. The dying worker's task name is therefore changed to
    "worker/dying" to avoid two or more workers having the same name.
    
    Since "detach the worker" is moved out from destroy_worker(),
    destroy_worker() doesn't require manager_mutex, so the
    "lockdep_assert_held(&pool->manager_mutex)" in destroy_worker() is
    removed, and destroy_worker() is not protected by manager_mutex in
    put_unbound_pool().
    
    tj: Minor description updates.
    
    Signed-off-by: Lai Jiangshan <la...@cn.fujitsu.com>
    Signed-off-by: Tejun Heo <t...@kernel.org>
    (cherry picked from commit 60f5a4bcf852b5dec698b08cd34efc302ea72f2b)
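
The new shutdown ordering can be sketched in userspace with pthreads
standing in for the kernel's completion API (names and structure here are
illustrative): each dying worker detaches itself, and the last one out
completes detach_completion, which put_unbound_pool() waits on instead of
calling kthread_stop().

```c
#include <pthread.h>

/* Minimal completion built from a mutex + condvar, mimicking the
 * kernel's struct completion. */
struct completion {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    int done;
};

static void complete(struct completion *c)
{
    pthread_mutex_lock(&c->lock);
    c->done = 1;
    pthread_cond_broadcast(&c->cond);
    pthread_mutex_unlock(&c->lock);
}

static void wait_for_completion(struct completion *c)
{
    pthread_mutex_lock(&c->lock);
    while (!c->done)
        pthread_cond_wait(&c->cond, &c->lock);
    pthread_mutex_unlock(&c->lock);
}

struct pool {
    pthread_mutex_t lock;
    int nr_workers;
    struct completion *detach_completion; /* set by the pool destroyer */
};

/* Called on the worker's own exit path; the last worker to detach
 * wakes whoever is waiting to destroy the pool. */
static void worker_detach_from_pool(struct pool *p)
{
    struct completion *done = NULL;

    pthread_mutex_lock(&p->lock);
    if (--p->nr_workers == 0)
        done = p->detach_completion;
    pthread_mutex_unlock(&p->lock);

    if (done)
        complete(done);
}
```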

commit ddc0fbfbb38448a91f1df0936e1e1b8a3580b5f2
Author: Lai Jiangshan <la...@cn.fujitsu.com>
Date:   Tue May 20 17:46:30 2014 +0800

    workqueue: destroy worker directly in the idle timeout handler
    
    Since destroy_worker() no longer needs to sleep or hold
    manager_mutex, it can be called directly from the idle timeout
    handler. This lets us remove POOL_MANAGE_WORKERS and
    maybe_destroy_worker() and simplify manage_workers().
    
    With POOL_MANAGE_WORKERS removed, worker_thread() no longer needs
    to test whether it should manage after processing work items, so
    we can remove that test branch.
    
    Signed-off-by: Lai Jiangshan <la...@cn.fujitsu.com>
    (cherry picked from commit 3347fc9f36e7e5d3ebe504fc4034745b5d8971d3)

commit cdc88667d3dc37b7b96ba81a42c3df2748011669
Author: Lai Jiangshan <la...@cn.fujitsu.com>
Date:   Tue May 20 17:46:31 2014 +0800

    workqueue: separate iteration role from worker_idr
    
    worker_idr has two duties: iterating over the attached workers and
    allocating worker IDs. These duties don't have to be tied
    together; we can separate them and use a list for tracking the
    attached workers and iterating over them.
    
    Before this separation, rescuer workers couldn't be added to
    worker_idr: they can't allocate an ID dynamically, because ID
    allocation depends on memory allocation, which a rescuer must not
    depend on.
    
    After the separation, we can easily add rescuer workers to the
    list for iteration without any memory allocation. This is required
    when we attach the rescuer worker to the pool in a later patch.
    
    tj: Minor description updates.
    
    Signed-off-by: Lai Jiangshan <la...@cn.fujitsu.com>
    Signed-off-by: Tejun Heo <t...@kernel.org>
    (cherry picked from commit da028469ba173e9c634b6ecf80bb0c69c7d1024d)
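
The split duties might be sketched like this (illustrative names; a singly
linked list stands in for the kernel's list_head): iteration now walks a
plain list, so a rescuer that never allocated an ID can still be attached
without any memory allocation.

```c
#include <stddef.h>

struct worker {
    int id;               /* -1 for a rescuer: no ID was allocated */
    struct worker *next;  /* stand-in for the pool's workers list */
};

struct pool { struct worker *workers; };

/* Attaching is just a list insertion -- no allocation can fail. */
static void attach_worker(struct pool *p, struct worker *w)
{
    w->next = p->workers;
    p->workers = w;
}

static int count_attached(const struct pool *p)
{
    int n = 0;
    const struct worker *w;

    for (w = p->workers; w; w = w->next)
        n++;
    return n;
}
```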

commit 5d4c1dcb75c3d35ced37abe14eff78b50a26b0b8
Author: Lai Jiangshan <la...@cn.fujitsu.com>
Date:   Tue May 20 17:46:32 2014 +0800

    workqueue: convert worker_idr to worker_ida
    
    We no longer iterate workers via worker_idr and worker_idr is used
    only for allocating/freeing ID, so we can convert it to worker_ida.
    
    By using ida_simple_get/remove(), worker_ida doesn't require external
    synchronization, so we don't need manager_mutex to protect it and the
    ID-removal code is allowed to be moved out from
    worker_detach_from_pool().
    
    In a later patch, worker_detach_from_pool() will be used in rescuers
    which don't have IDs, so we move the ID-removal code out from
    worker_detach_from_pool() into worker_thread().
    
    tj: Minor description updates.
    
    Signed-off-by: Lai Jiangshan <la...@cn.fujitsu.com>
    Signed-off-by: Tejun Heo <t...@kernel.org>
    (cherry picked from commit 7cda9aae0596d871a8d7a6888d7b447c60e5ab30)
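
The property the patch relies on can be sketched with a toy bitmap
allocator (purely illustrative and far simpler than the real ida): the
allocator takes its own internal lock, so callers such as create_worker()
need no external mutex around ID alloc/free.

```c
#include <pthread.h>

#define IDA_MAX 32

/* Toy analogue of the ida_simple_get()/ida_simple_remove() pattern:
 * internally synchronized, so no caller-side locking is needed. */
struct simple_ida {
    pthread_mutex_t lock;  /* internal; callers never take it */
    unsigned int used;     /* bit i set => ID i in use */
};

static int ida_get(struct simple_ida *ida)
{
    int i, id = -1;

    pthread_mutex_lock(&ida->lock);
    for (i = 0; i < IDA_MAX; i++) {
        if (!(ida->used & (1u << i))) {
            ida->used |= 1u << i;
            id = i;
            break;
        }
    }
    pthread_mutex_unlock(&ida->lock);
    return id;             /* -1 if the space is exhausted */
}

static void ida_remove(struct simple_ida *ida, int id)
{
    pthread_mutex_lock(&ida->lock);
    ida->used &= ~(1u << id);
    pthread_mutex_unlock(&ida->lock);
}
```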

commit b485a459e5370c52cdf64707bf839d3e26963495
Author: Lai Jiangshan <la...@cn.fujitsu.com>
Date:   Tue May 20 17:46:33 2014 +0800

    workqueue: narrow the protection range of manager_mutex
    
    In create_worker(), as pool->worker_ida now uses
    ida_simple_get()/ida_simple_put() and doesn't require external
    synchronization, it doesn't need manager_mutex.
    
    The struct worker allocation and kthread allocation are not
    visible to anyone before the worker is attached, so they don't
    need manager_mutex either.
    
    The above operations are before the attaching operation which attaches
    the worker to the pool. Between attaching and starting the worker, the
    worker is already attached to the pool, so the cpu hotplug will handle
    cpu-binding for the worker correctly and we don't need the
    manager_mutex after attaching.
    
    The conclusion is that only the attaching operation needs manager_mutex,
    so we narrow the protection section of manager_mutex in create_worker().
    
    Some comments about manager_mutex are removed, because we will rename
    it to attach_mutex and add worker_attach_to_pool() later which will be
    self-explanatory.
    
    tj: Minor description updates.
    
    Signed-off-by: Lai Jiangshan <la...@cn.fujitsu.com>
    Signed-off-by: Tejun Heo <t...@kernel.org>
    (cherry picked from commit 4d757c5c81edba2052aae10d5b36dfcb9902b141)
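
Putting the series together, the narrowed create_worker() flow might look
like this in a userspace sketch (all names illustrative): ID and struct
allocation run unlocked, and only the attach step takes the mutex.

```c
#include <pthread.h>
#include <stdlib.h>

struct worker { int id; struct worker *next; };

struct pool {
    pthread_mutex_t attach_mutex;  /* guards only the attach step */
    struct worker *workers;
    int next_id;                   /* stand-in for worker_ida */
};

static struct worker *create_worker(struct pool *p)
{
    struct worker *w;

    /* ID and struct allocation: nothing is visible to others yet,
     * so no lock is taken (the real worker_ida is internally
     * synchronized; an atomic stands in for that here). */
    w = calloc(1, sizeof(*w));
    if (!w)
        return NULL;
    w->id = __atomic_fetch_add(&p->next_id, 1, __ATOMIC_RELAXED);

    /* Only attaching publishes the worker: lock just this section. */
    pthread_mutex_lock(&p->attach_mutex);
    w->next = p->workers;
    p->workers = w;
    pthread_mutex_unlock(&p->attach_mutex);

    return w;
}
```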

-- 
_______________________________________________
linux-yocto mailing list
linux-yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/linux-yocto
