Re: [REGRESSION][3.8.-rc1][ INFO: possible circular locking dependency detected ]
On Sun, 23 Dec 2012 at 13:34, Christian Kujau wrote:
> On Sat, 22 Dec 2012 at 16:28, Maciej Rutecki wrote:
> > Got during suspend to disk:
>
> I got a similar message on a powerpc G4 system, right after bootup (no
> suspend involved):
>
> http://nerdbynature.de/bits/3.8.0-rc1/

FWIW, this is still present with 3.8.0-rc2.

C.

> [ 97.803049] ======================================================
> [ 97.803051] [ INFO: possible circular locking dependency detected ]
> [ 97.803059] 3.8.0-rc1-dirty #2 Not tainted
> [ 97.803060] -------------------------------------------------------
> [ 97.803066] kworker/0:1/235 is trying to acquire lock:
> [ 97.803097]  ((fb_notifier_list).rwsem){.+.+.+}, at: [] __blocking_notifier_call_chain+0x44/0x88
> [ 97.803099]
> [ 97.803099] but task is already holding lock:
> [ 97.803110]  (console_lock){+.+.+.}, at: [] console_callback+0x20/0x194
> [ 97.803112]
> [ 97.803112] which lock already depends on the new lock.
>
> ...and on it goes. Please see the URL above for the whole dmesg and
> .config.
>
> @Li Zhong: I have applied your fix for the "MAX_STACK_TRACE_ENTRIES too
>            low" warning[0] to 3.8-rc1 (hence the -dirty flag), but in the
>            backtrace "ret_from_kernel_thread" shows up again. FWIW, your
>            patch helped to make the "MAX_STACK_TRACE_ENTRIES too low"
>            warning go away in 3.7.0-rc7 and it did not re-appear ever
>            since.
>
> Thanks,
> Christian.
>
> [0] http://lkml.indiana.edu/hypermail/linux/kernel/1211.3/01917.html
>
> > [ 269.784867] [ INFO: possible circular locking dependency detected ]
> > [ 269.784869] 3.8.0-rc1 #1 Not tainted
> > [ 269.784870] -------------------------------------------------------
> > [ 269.784871] kworker/u:3/56 is trying to acquire lock:
> > [ 269.784878]  ((fb_notifier_list).rwsem){.+.+.+}, at: [] __blocking_notifier_call_chain+0x49/0x80
> > [ 269.784879]
> > [ 269.784879] but task is already holding lock:
> > [ 269.784884]  (console_lock){+.+.+.}, at: [] i915_drm_freeze+0x9e/0xbb
> > [ 269.784884]
> > [ 269.784884] which lock already depends on the new lock.
> > [ 269.784884]
> > [ 269.784885]
> > [ 269.784885] the existing dependency chain (in reverse order) is:
> > [ 269.784887]
> > [ 269.784887] -> #1 (console_lock){+.+.+.}:
> > [ 269.784890]        [] lock_acquire+0x95/0x105
> > [ 269.784893]        [] console_lock+0x59/0x5b
> > [ 269.784897]        [] register_con_driver+0x36/0x128
> > [ 269.784899]        [] take_over_console+0x1e/0x45
> > [ 269.784903]        [] fbcon_takeover+0x56/0x98
> > [ 269.784906]        [] fbcon_event_notify+0x2c1/0x5ea
> > [ 269.784909]        [] notifier_call_chain+0x67/0x92
> > [ 269.784911]        [] __blocking_notifier_call_chain+0x5f/0x80
> > [ 269.784912]        [] blocking_notifier_call_chain+0xf/0x11
> > [ 269.784915]        [] fb_notifier_call_chain+0x16/0x18
> > [ 269.784917]        [] register_framebuffer+0x20a/0x26e
> > [ 269.784920]        [] drm_fb_helper_single_fb_probe+0x1ce/0x297
> > [ 269.784922]        [] drm_fb_helper_initial_config+0x1d7/0x1ef
> > [ 269.784924]        [] intel_fbdev_init+0x6f/0x82
> > [ 269.784927]        [] i915_driver_load+0xa9e/0xc78
> > [ 269.784929]        [] drm_get_pci_dev+0x165/0x26d
> > [ 269.784931]        [] i915_pci_probe+0x60/0x69
> > [ 269.784933]        [] local_pci_probe+0x39/0x61
> > [ 269.784935]        [] pci_device_probe+0xba/0xe0
> > [ 269.784938]        [] driver_probe_device+0x99/0x1c4
> > [ 269.784940]        [] __driver_attach+0x4e/0x6f
> > [ 269.784942]        [] bus_for_each_dev+0x52/0x84
> > [ 269.784944]        [] driver_attach+0x19/0x1b
> > [ 269.784946]        [] bus_add_driver+0xdf/0x203
> > [ 269.784948]        [] driver_register+0x8e/0x114
> > [ 269.784952]        [] __pci_register_driver+0x5d/0x62
> > [ 269.784953]        [] drm_pci_init+0x81/0xe6
> > [ 269.784957]        [] i915_init+0x66/0x68
> > [ 269.784959]        [] do_one_initcall+0x7a/0x136
> > [ 269.784962]        [] kernel_init+0x141/0x296
> > [ 269.784964]        [] ret_from_fork+0x7c/0xb0
> > [ 269.784966]
> > [ 269.784966] -> #0 ((fb_notifier_list).rwsem){.+.+.+}:
> > [ 269.784967]        [] __lock_acquire+0xa7e/0xddd
> > [ 269.784969]        [] lock_acquire+0x95/0x105
> > [ 269.784971]        [] down_read+0x34/0x43
> > [ 269.784973]        [] __blocking_notifier_call_chain+0x49/0x80
> > [ 269.784975]        [] blocking_notifier_call_chain+0xf/0x11
> > [ 269.784977]        [] fb_notifier_call_chain+0x16/0x18
> > [ 269.784979]        [] fb_set_suspend+0x22/0x4d
> > [ 269.784981]        [] intel_fbdev_set_suspend+0x20/0x22
> > [ 269.784983]        [] i915_drm_freeze+0xab/0xbb
> > [ 269.784985]        [] i915_pm_freeze+0x3d/0x41
> > [ 269.784987]        [] pci_pm_freeze+0x65/0x8d
> > [ 269.784990]        [] dpm_run_callback.isra.3+0x27/0x56
> > [ 269.784993]        []
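For readers following the report above: lockdep records, at runtime, an edge "lock B was acquired while lock A was held" for every observed pair, and warns when a new acquisition would close a cycle in that dependency graph. The following is only a toy userspace model of that idea (the `LockGraph` class is hypothetical, not kernel code), using the two lock names from the splat:

```python
# Toy model of lockdep's dependency tracking: record "held -> acquired"
# edges and flag any acquisition that would close a cycle in the graph.
class LockGraph:
    def __init__(self):
        self.edges = {}  # lock name -> set of locks acquired while it was held

    def acquire(self, held, new):
        """Record that `new` is acquired while `held` is held.
        Returns a warning string if this would create a circular dependency."""
        if self._reachable(new, held):
            return ("possible circular locking dependency: "
                    f"{held} -> {new}, but {new} -> ... -> {held} is already recorded")
        self.edges.setdefault(held, set()).add(new)
        return None

    def _reachable(self, src, dst):
        # Depth-first search: is dst reachable from src via recorded edges?
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(self.edges.get(node, ()))
        return False

g = LockGraph()
# Boot: fbcon takeover runs under the fb notifier rwsem and takes console_lock.
assert g.acquire("(fb_notifier_list).rwsem", "console_lock") is None
# Later: console_callback holds console_lock and calls the notifier chain.
warning = g.acquire("console_lock", "(fb_notifier_list).rwsem")
print(warning)
```

The real implementation tracks lock *classes* rather than instances and validates far more states, but the cycle check is the core of the "circular locking dependency" message.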
Re: [REGRESSION][3.8.-rc1][ INFO: possible circular locking dependency detected ]
On Thu, Dec 27, 2012 at 08:03:24AM -0500, Peter Hurley wrote:
> On Thu, 2012-12-27 at 16:36 +0800, Shawn Guo wrote:
> > On Wed, Dec 26, 2012 at 10:34:39AM +0800, Shawn Guo wrote:
> > > It seems that I'm running into the same locking issue. My setup is:
> > >
> > > - i.MX28 (ARM)
> > > - v3.8-rc1
> > > - mxs_defconfig
> > - The warning is seen when LCD is blanking
> >
> > The warning disappears after reverting patch daee779 (console: implement
> > lockdep support for console_lock). Is it suggesting that the mxs
> > frame buffer driver (drivers/video/mxsfb.c) is doing something bad?
> >
> > Shawn
> >
> > > [ 602.229899] ======================================================
> > > [ 602.229905] [ INFO: possible circular locking dependency detected ]
> > > [ 602.229926] 3.8.0-rc1-3-gde4ae7f #767 Not tainted
> > > [ 602.229933] -------------------------------------------------------
> > > [ 602.229951] kworker/0:1/21 is trying to acquire lock:
> > > [ 602.230037]  ((fb_notifier_list).rwsem){.+.+.+}, at: [] __blocking_notifier_call_chain+0x2c/0x60
>
> You want this patch https://patchwork.kernel.org/patch/1757061/

Thanks for the pointer, Peter. It does fix the problem for me.

Shawn
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [REGRESSION][3.8.-rc1][ INFO: possible circular locking dependency detected ]
On Thu, 2012-12-27 at 16:36 +0800, Shawn Guo wrote:
> On Wed, Dec 26, 2012 at 10:34:39AM +0800, Shawn Guo wrote:
> > It seems that I'm running into the same locking issue. My setup is:
> >
> > - i.MX28 (ARM)
> > - v3.8-rc1
> > - mxs_defconfig
> - The warning is seen when LCD is blanking
>
> The warning disappears after reverting patch daee779 (console: implement
> lockdep support for console_lock). Is it suggesting that the mxs
> frame buffer driver (drivers/video/mxsfb.c) is doing something bad?
>
> Shawn
>
> > [ 602.229899] ======================================================
> > [ 602.229905] [ INFO: possible circular locking dependency detected ]
> > [ 602.229926] 3.8.0-rc1-3-gde4ae7f #767 Not tainted
> > [ 602.229933] -------------------------------------------------------
> > [ 602.229951] kworker/0:1/21 is trying to acquire lock:
> > [ 602.230037]  ((fb_notifier_list).rwsem){.+.+.+}, at: [] __blocking_notifier_call_chain+0x2c/0x60

You want this patch https://patchwork.kernel.org/patch/1757061/
Re: [REGRESSION][3.8.-rc1][ INFO: possible circular locking dependency detected ]
On Wed, Dec 26, 2012 at 10:34:39AM +0800, Shawn Guo wrote:
> It seems that I'm running into the same locking issue. My setup is:
>
> - i.MX28 (ARM)
> - v3.8-rc1
> - mxs_defconfig
- The warning is seen when LCD is blanking

The warning disappears after reverting patch daee779 (console: implement
lockdep support for console_lock). Is it suggesting that the mxs
frame buffer driver (drivers/video/mxsfb.c) is doing something bad?

Shawn

> [ 602.229899] ======================================================
> [ 602.229905] [ INFO: possible circular locking dependency detected ]
> [ 602.229926] 3.8.0-rc1-3-gde4ae7f #767 Not tainted
> [ 602.229933] -------------------------------------------------------
> [ 602.229951] kworker/0:1/21 is trying to acquire lock:
> [ 602.230037]  ((fb_notifier_list).rwsem){.+.+.+}, at: [] __blocking_notifier_call_chain+0x2c/0x60
> [ 602.230047]
> [ 602.230047] but task is already holding lock:
> [ 602.230090]  (console_lock){+.+.+.}, at: [] console_callback+0xc/0x12c
> [ 602.230098]
> [ 602.230098] which lock already depends on the new lock.
> [ 602.230098]
> [ 602.230104]
> [ 602.230104] the existing dependency chain (in reverse order) is:
> [ 602.230126]
> [ 602.230126] -> #1 (console_lock){+.+.+.}:
> [ 602.230174]        [] lock_acquire+0x9c/0x124
> [ 602.230205]        [] console_lock+0x58/0x6c
> [ 602.230250]        [] register_con_driver+0x38/0x138
> [ 602.230284]        [] take_over_console+0x18/0x44
> [ 602.230314]        [] fbcon_takeover+0x64/0xc8
> [ 602.230352]        [] notifier_call_chain+0x44/0x80
> [ 602.230386]        [] __blocking_notifier_call_chain+0x48/0x60
> [ 602.230416]        [] blocking_notifier_call_chain+0x18/0x20
> [ 602.230459]        [] register_framebuffer+0x170/0x250
> [ 602.230492]        [] mxsfb_probe+0x574/0x738
> [ 602.230528]        [] platform_drv_probe+0x14/0x18
> [ 602.230556]        [] driver_probe_device+0x78/0x20c
> [ 602.230583]        [] __driver_attach+0x94/0x98
> [ 602.230610]        [] bus_for_each_dev+0x54/0x7c
> [ 602.230636]        [] bus_add_driver+0x180/0x250
> [ 602.230662]        [] driver_register+0x78/0x144
> [ 602.230690]        [] do_one_initcall+0x30/0x16c
> [ 602.230721]        [] kernel_init+0xf4/0x290
> [ 602.230756]        [] ret_from_fork+0x14/0x2c
> [ 602.230781]
> [ 602.230781] -> #0 ((fb_notifier_list).rwsem){.+.+.+}:
> [ 602.230825]        [] __lock_acquire+0x1354/0x19b0
> [ 602.230854]        [] lock_acquire+0x9c/0x124
> [ 602.230895]        [] down_read+0x3c/0x4c
> [ 602.230933]        [] __blocking_notifier_call_chain+0x2c/0x60
> [ 602.230962]        [] blocking_notifier_call_chain+0x18/0x20
> [ 602.230997]        [] fb_blank+0x34/0x98
> [ 602.231024]        [] fbcon_blank+0x1dc/0x27c
> [ 602.231065]        [] do_blank_screen+0x1b0/0x268
> [ 602.231093]        [] console_callback+0x68/0x12c
> [ 602.231121]        [] process_one_work+0x1a8/0x560
> [ 602.231145]        [] worker_thread+0x160/0x480
> [ 602.231180]        [] kthread+0xa4/0xb0
> [ 602.231210]        [] ret_from_fork+0x14/0x2c
> [ 602.231218]
> [ 602.231218] other info that might help us debug this:
> [ 602.231218]
> [ 602.231225]  Possible unsafe locking scenario:
> [ 602.231225]
> [ 602.231230]        CPU0                    CPU1
> [ 602.231235]        ----                    ----
> [ 602.231249]   lock(console_lock);
> [ 602.231263]                                lock((fb_notifier_list).rwsem);
> [ 602.231275]                                lock(console_lock);
> [ 602.231287]   lock((fb_notifier_list).rwsem);
> [ 602.231292]
> [ 602.231292]  *** DEADLOCK ***
> [ 602.231292]
> [ 602.231305] 3 locks held by kworker/0:1/21:
> [ 602.231345]  #0:  (events){.+.+.+}, at: [] process_one_work+0x128/0x560
> [ 602.231388]  #1:  (console_work){+.+...}, at: [] process_one_work+0x128/0x560
> [ 602.231430]  #2:  (console_lock){+.+.+.}, at: [] console_callback+0xc/0x12c
> [ 602.231437]
> [ 602.231437] stack backtrace:
> [ 602.231491] [] (unwind_backtrace+0x0/0xf0) from [] (print_circular_bug+0x254/0x2a0)
> [ 602.231547] [] (print_circular_bug+0x254/0x2a0) from [] (__lock_acquire+0x1354/0x19b0)
> [ 602.231596] [] (__lock_acquire+0x1354/0x19b0) from [] (lock_acquire+0x9c/0x124)
> [ 602.231640] [] (lock_acquire+0x9c/0x124) from [] (down_read+0x3c/0x4c)
> [ 602.231694] [] (down_read+0x3c/0x4c) from [] (__blocking_notifier_call_chain+0x2c/0x60)
> [ 602.231741] [] (__blocking_notifier_call_chain+0x2c/0x60) from [] (blocking_notifier_call_chain+0x18/0x20)
> [ 602.231791] [] (blocking_notifier_call_chain+0x18/0x20) from [] (fb_blank+0x34/0x98)
> [ 602.231836] [] (fb_blank+0x34/0x98) from [] (fbcon_blank+0x1dc/0x27c)
> [ 602.231886] [] (fbcon_blank+0x1dc/0x27c) from [] (do_blank_screen+0x1b0/0x268)
> [ 602.231931] [] (do_blank_screen+0x1b0/0x268) from [] (console_callback+0x68/0x12c)
> [ 602.231970] [] (console_callback+0x68/0x12c)
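The "Possible unsafe locking scenario" table in the report above is the classic ABBA inversion: the blanking path takes console_lock and then the fb notifier rwsem, while the framebuffer-registration path takes them in the opposite order. The generic cure, independent of how the referenced patch actually resolves it, is to impose one global acquisition order on the two locks. A hedged userspace illustration (plain Python locks standing in for the kernel primitives):

```python
import threading

console_lock = threading.Lock()
fb_notifier_rwsem = threading.Lock()  # stand-in for the kernel rwsem

results = []

def blank_screen():
    # Models console_callback -> fb_blank: console_lock first, rwsem second.
    with console_lock:
        with fb_notifier_rwsem:
            results.append("blank")

def register_fb():
    # Models the registration/takeover path, reordered so that it follows
    # the same global order instead of rwsem-then-console_lock.
    with console_lock:
        with fb_notifier_rwsem:
            results.append("register")

threads = [threading.Thread(target=blank_screen),
           threading.Thread(target=register_fb)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # both paths complete; no ABBA wait is possible
```

With a single order, no thread can ever hold the second lock while waiting for the first, so the cycle lockdep complains about cannot form. Had `register_fb` taken `fb_notifier_rwsem` before `console_lock`, the two threads could each grab one lock and block forever on the other.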
Re: [REGRESSION][3.8.-rc1][ INFO: possible circular locking dependency detected ]
On Sun, 2012-12-23 at 13:34 -0800, Christian Kujau wrote:
> On Sat, 22 Dec 2012 at 16:28, Maciej Rutecki wrote:
> > Got during suspend to disk:
>
> I got a similar message on a powerpc G4 system, right after bootup (no
> suspend involved):
>
> http://nerdbynature.de/bits/3.8.0-rc1/
>
> [ 97.803049] ======================================================
> [ 97.803051] [ INFO: possible circular locking dependency detected ]
> [ 97.803059] 3.8.0-rc1-dirty #2 Not tainted
> [ 97.803060] -------------------------------------------------------
> [ 97.803066] kworker/0:1/235 is trying to acquire lock:
> [ 97.803097]  ((fb_notifier_list).rwsem){.+.+.+}, at: [] __blocking_notifier_call_chain+0x44/0x88
> [ 97.803099]
> [ 97.803099] but task is already holding lock:
> [ 97.803110]  (console_lock){+.+.+.}, at: [] console_callback+0x20/0x194
> [ 97.803112]
> [ 97.803112] which lock already depends on the new lock.
>
> ...and on it goes. Please see the URL above for the whole dmesg and
> .config.
>
> @Li Zhong: I have applied your fix for the "MAX_STACK_TRACE_ENTRIES too
>            low" warning[0] to 3.8-rc1 (hence the -dirty flag), but in the
>            backtrace "ret_from_kernel_thread" shows up again. FWIW, your
>            patch helped to make the "MAX_STACK_TRACE_ENTRIES too low"
>            warning go away in 3.7.0-rc7 and it did not re-appear ever
>            since.

The patch fixing the "MAX_STACK_TRACE_ENTRIES too low" warning clears the
stack back chain at "ret_from_kernel_thread", so I think it's fine to see
it on the top of the stack.

Thanks,
Zhong

> Thanks,
> Christian.
>
> [0] http://lkml.indiana.edu/hypermail/linux/kernel/1211.3/01917.html
>
> > [ 269.784867] [ INFO: possible circular locking dependency detected ]
> > [ 269.784869] 3.8.0-rc1 #1 Not tainted
> > [ 269.784870] -------------------------------------------------------
> > [ 269.784871] kworker/u:3/56 is trying to acquire lock:
> > [ 269.784878]  ((fb_notifier_list).rwsem){.+.+.+}, at: [] __blocking_notifier_call_chain+0x49/0x80
> > [ 269.784879]
> > [ 269.784879] but task is already holding lock:
> > [ 269.784884]  (console_lock){+.+.+.}, at: [] i915_drm_freeze+0x9e/0xbb
> > [ 269.784884]
> > [ 269.784884] which lock already depends on the new lock.
> > [ 269.784884]
> > [ 269.784885]
> > [ 269.784885] the existing dependency chain (in reverse order) is:
> > [ 269.784887]
> > [ 269.784887] -> #1 (console_lock){+.+.+.}:
> > [ 269.784890]        [] lock_acquire+0x95/0x105
> > [ 269.784893]        [] console_lock+0x59/0x5b
> > [ 269.784897]        [] register_con_driver+0x36/0x128
> > [ 269.784899]        [] take_over_console+0x1e/0x45
> > [ 269.784903]        [] fbcon_takeover+0x56/0x98
> > [ 269.784906]        [] fbcon_event_notify+0x2c1/0x5ea
> > [ 269.784909]        [] notifier_call_chain+0x67/0x92
> > [ 269.784911]        [] __blocking_notifier_call_chain+0x5f/0x80
> > [ 269.784912]        [] blocking_notifier_call_chain+0xf/0x11
> > [ 269.784915]        [] fb_notifier_call_chain+0x16/0x18
> > [ 269.784917]        [] register_framebuffer+0x20a/0x26e
> > [ 269.784920]        [] drm_fb_helper_single_fb_probe+0x1ce/0x297
> > [ 269.784922]        [] drm_fb_helper_initial_config+0x1d7/0x1ef
> > [ 269.784924]        [] intel_fbdev_init+0x6f/0x82
> > [ 269.784927]        [] i915_driver_load+0xa9e/0xc78
> > [ 269.784929]        [] drm_get_pci_dev+0x165/0x26d
> > [ 269.784931]        [] i915_pci_probe+0x60/0x69
> > [ 269.784933]        [] local_pci_probe+0x39/0x61
> > [ 269.784935]        [] pci_device_probe+0xba/0xe0
> > [ 269.784938]        [] driver_probe_device+0x99/0x1c4
> > [ 269.784940]        [] __driver_attach+0x4e/0x6f
> > [ 269.784942]        [] bus_for_each_dev+0x52/0x84
> > [ 269.784944]        [] driver_attach+0x19/0x1b
> > [ 269.784946]        [] bus_add_driver+0xdf/0x203
> > [ 269.784948]        [] driver_register+0x8e/0x114
> > [ 269.784952]        [] __pci_register_driver+0x5d/0x62
> > [ 269.784953]        [] drm_pci_init+0x81/0xe6
> > [ 269.784957]        [] i915_init+0x66/0x68
> > [ 269.784959]        [] do_one_initcall+0x7a/0x136
> > [ 269.784962]        [] kernel_init+0x141/0x296
> > [ 269.784964]        [] ret_from_fork+0x7c/0xb0
> > [ 269.784966]
> > [ 269.784966] -> #0 ((fb_notifier_list).rwsem){.+.+.+}:
> > [ 269.784967]        [] __lock_acquire+0xa7e/0xddd
> > [ 269.784969]        [] lock_acquire+0x95/0x105
> > [ 269.784971]        [] down_read+0x34/0x43
> > [ 269.784973]        [] __blocking_notifier_call_chain+0x49/0x80
> > [ 269.784975]        [] blocking_notifier_call_chain+0xf/0x11
> > [ 269.784977]        [] fb_notifier_call_chain+0x16/0x18
> > [ 269.784979]        [] fb_set_suspend+0x22/0x4d
> > [ 269.784981]        [] intel_fbdev_set_suspend+0x20/0x22
> > [ 269.784983]        [] i915_drm_freeze+0xab/0xbb
> > [ 269.784985]        [] i915_pm_freeze+0x3d/0x41
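Li Zhong's remark above refers to how the stack unwinder works on powerpc: each frame stores a back-chain pointer to its caller's frame, and the walk stops when it reads a NULL chain. Zeroing the back chain at "ret_from_kernel_thread" therefore gives kernel threads a clean, deterministic top of stack. A toy model of such a walk (the `walk_stack` helper, frame layout, and addresses are all hypothetical, for illustration only):

```python
# Toy model of a back-chain stack walk: each frame records its function name
# and the address of the caller's frame; a zero back chain ends the walk.
def walk_stack(frames, start):
    """frames: dict of frame-addr -> (function_name, back_chain_addr)."""
    trace = []
    addr = start
    while addr:  # a cleared (zero) back chain terminates the unwind
        name, back = frames[addr]
        trace.append(name)
        addr = back
    return trace

frames = {
    0x100: ("console_callback", 0x200),
    0x200: ("worker_thread", 0x300),
    # ret_from_kernel_thread clears its back chain, so the walk stops here
    # instead of wandering into stale stack contents and flooding lockdep
    # with bogus entries.
    0x300: ("ret_from_kernel_thread", 0x0),
}
print(walk_stack(frames, 0x100))
# ['console_callback', 'worker_thread', 'ret_from_kernel_thread']
```

This is why seeing "ret_from_kernel_thread" at the top of every kernel-thread backtrace is expected rather than a sign the fix regressed.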
Re: [REGRESSION][3.8.-rc1][ INFO: possible circular locking dependency detected ]
It seems that I'm running into the same locking issue. My setup is:

 - i.MX28 (ARM)
 - v3.8-rc1
 - mxs_defconfig

Shawn

[  602.229899] ======================================================
[  602.229905] [ INFO: possible circular locking dependency detected ]
[  602.229926] 3.8.0-rc1-3-gde4ae7f #767 Not tainted
[  602.229933] -------------------------------------------------------
[  602.229951] kworker/0:1/21 is trying to acquire lock:
[  602.230037]  ((fb_notifier_list).rwsem){.+.+.+}, at: [<c0041f34>] __blocking_notifier_call_chain+0x2c/0x60
[  602.230047]
[  602.230047] but task is already holding lock:
[  602.230090]  (console_lock){+.+.+.}, at: [<c02a1d60>] console_callback+0xc/0x12c
[  602.230098]
[  602.230098] which lock already depends on the new lock.
[  602.230098]
[  602.230104]
[  602.230104] the existing dependency chain (in reverse order) is:
[  602.230126]
[  602.230126] -> #1 (console_lock){+.+.+.}:
[  602.230174]        [<c005cb20>] lock_acquire+0x9c/0x124
[  602.230205]        [<c001dc78>] console_lock+0x58/0x6c
[  602.230250]        [<c029ea60>] register_con_driver+0x38/0x138
[  602.230284]        [<c02a0018>] take_over_console+0x18/0x44
[  602.230314]        [<c027bc80>] fbcon_takeover+0x64/0xc8
[  602.230352]        [<c0041c94>] notifier_call_chain+0x44/0x80
[  602.230386]        [<c0041f50>] __blocking_notifier_call_chain+0x48/0x60
[  602.230416]        [<c0041f80>] blocking_notifier_call_chain+0x18/0x20
[  602.230459]        [<c0275efc>] register_framebuffer+0x170/0x250
[  602.230492]        [<c02837f4>] mxsfb_probe+0x574/0x738
[  602.230528]        [<c02b276c>] platform_drv_probe+0x14/0x18
[  602.230556]        [<c02b14cc>] driver_probe_device+0x78/0x20c
[  602.230583]        [<c02b16f4>] __driver_attach+0x94/0x98
[  602.230610]        [<c02afdb4>] bus_for_each_dev+0x54/0x7c
[  602.230636]        [<c02b0d14>] bus_add_driver+0x180/0x250
[  602.230662]        [<c02b1bb8>] driver_register+0x78/0x144
[  602.230690]        [<c00087c8>] do_one_initcall+0x30/0x16c
[  602.230721]        [<c0428fcc>] kernel_init+0xf4/0x290
[  602.230756]        [<c000e9c8>] ret_from_fork+0x14/0x2c
[  602.230781]
[  602.230781] -> #0 ((fb_notifier_list).rwsem){.+.+.+}:
[  602.230825]        [<c005bfa0>] __lock_acquire+0x1354/0x19b0
[  602.230854]        [<c005cb20>] lock_acquire+0x9c/0x124
[  602.230895]        [<c0430148>] down_read+0x3c/0x4c
[  602.230933]        [<c0041f34>] __blocking_notifier_call_chain+0x2c/0x60
[  602.230962]        [<c0041f80>] blocking_notifier_call_chain+0x18/0x20
[  602.230997]        [<c0274a78>] fb_blank+0x34/0x98
[  602.231024]        [<c027c7b8>] fbcon_blank+0x1dc/0x27c
[  602.231065]        [<c029f194>] do_blank_screen+0x1b0/0x268
[  602.231093]        [<c02a1dbc>] console_callback+0x68/0x12c
[  602.231121]        [<c00368c0>] process_one_work+0x1a8/0x560
[  602.231145]        [<c0036fd8>] worker_thread+0x160/0x480
[  602.231180]        [<c003c040>] kthread+0xa4/0xb0
[  602.231210]        [<c000e9c8>] ret_from_fork+0x14/0x2c
[  602.231218]
[  602.231218] other info that might help us debug this:
[  602.231218]
[  602.231225]  Possible unsafe locking scenario:
[  602.231225]
[  602.231230]        CPU0                    CPU1
[  602.231235]        ----                    ----
[  602.231249]   lock(console_lock);
[  602.231263]                                lock((fb_notifier_list).rwsem);
[  602.231275]                                lock(console_lock);
[  602.231287]   lock((fb_notifier_list).rwsem);
[  602.231292]
[  602.231292]  *** DEADLOCK ***
[  602.231292]
[  602.231305] 3 locks held by kworker/0:1/21:
[  602.231345]  #0:  (events){.+.+.+}, at: [<c0036840>] process_one_work+0x128/0x560
[  602.231388]  #1:  (console_work){+.+...}, at: [<c0036840>] process_one_work+0x128/0x560
[  602.231430]  #2:  (console_lock){+.+.+.}, at: [<c02a1d60>] console_callback+0xc/0x12c
[  602.231437]
[  602.231437] stack backtrace:
[  602.231491] [<c0013e58>] (unwind_backtrace+0x0/0xf0) from [<c042b0e4>] (print_circular_bug+0x254/0x2a0)
[  602.231547] [<c042b0e4>] (print_circular_bug+0x254/0x2a0) from [<c005bfa0>] (__lock_acquire+0x1354/0x19b0)
[  602.231596] [<c005bfa0>] (__lock_acquire+0x1354/0x19b0) from [<c005cb20>] (lock_acquire+0x9c/0x124)
[  602.231640] [<c005cb20>] (lock_acquire+0x9c/0x124) from [<c0430148>] (down_read+0x3c/0x4c)
[  602.231694] [<c0430148>] (down_read+0x3c/0x4c) from [<c0041f34>] (__blocking_notifier_call_chain+0x2c/0x60)
[  602.231741] [<c0041f34>] (__blocking_notifier_call_chain+0x2c/0x60) from [<c0041f80>] (blocking_notifier_call_chain+0x18/0x20)
[  602.231791] [<c0041f80>] (blocking_notifier_call_chain+0x18/0x20) from [<c0274a78>] (fb_blank+0x34/0x98)
[  602.231836] [<c0274a78>] (fb_blank+0x34/0x98) from [<c027c7b8>] (fbcon_blank+0x1dc/0x27c)
[  602.231886] [<c027c7b8>] (fbcon_blank+0x1dc/0x27c) from [<c029f194>] (do_blank_screen+0x1b0/0x268)
[  602.231931] [<c029f194>] (do_blank_screen+0x1b0/0x268) from [<c02a1dbc>] (console_callback+0x68/0x12c)
[  602.231970] [<c02a1dbc>] (console_callback+0x68/0x12c) from [<c00368c0>] (process_one_work+0x1a8/0x560)
[  602.232010] [<c00368c0>] (process_one_work+0x1a8/0x560) from [<c0036fd8>] (worker_thread+0x160/0x480)
[  602.232054] [<c0036fd8>] (worker_thread+0x160/0x480) from [<c003c040>] (kthread+0xa4/0xb0)
[  602.232100] [<c003c040>] (kthread+0xa4/0xb0) from [<c000e9c8>] (ret_from_fork+0x14/0x2c)

On Sat, Dec 22, 2012 at 04:28:26PM +0100, Maciej Rutecki wrote:
> Got during suspend to disk:
>
> [  269.784867] [ INFO: possible circular locking dependency detected ]
> [  269.784869] 3.8.0-rc1 #1 Not tainted
> [  269.784870]
Re: [REGRESSION][3.8.-rc1][ INFO: possible circular locking dependency detected ]
On Sat, 22 Dec 2012 at 16:28, Maciej Rutecki wrote:
> Got during suspend to disk:

I got a similar message on a powerpc G4 system, right after bootup (no
suspend involved):

http://nerdbynature.de/bits/3.8.0-rc1/

[   97.803049] ======================================================
[   97.803051] [ INFO: possible circular locking dependency detected ]
[   97.803059] 3.8.0-rc1-dirty #2 Not tainted
[   97.803060] -------------------------------------------------------
[   97.803066] kworker/0:1/235 is trying to acquire lock:
[   97.803097]  ((fb_notifier_list).rwsem){.+.+.+}, at: [<c00606a0>] __blocking_notifier_call_chain+0x44/0x88
[   97.803099]
[   97.803099] but task is already holding lock:
[   97.803110]  (console_lock){+.+.+.}, at: [<c03b9fd0>] console_callback+0x20/0x194
[   97.803112]
[   97.803112] which lock already depends on the new lock.

...and on it goes. Please see the URL above for the whole dmesg and
.config.

@Li Zhong: I have applied your fix for the "MAX_STACK_TRACE_ENTRIES too
low" warning[0] to 3.8-rc1 (hence the -dirty flag), but in the backtrace
"ret_from_kernel_thread" shows up again. FWIW, your patch helped to make
the "MAX_STACK_TRACE_ENTRIES too low" warning go away in 3.7.0-rc7 and
it did not re-appear ever since.

Thanks,
Christian.

[0] http://lkml.indiana.edu/hypermail/linux/kernel/1211.3/01917.html

> [  269.784867] [ INFO: possible circular locking dependency detected ]
> [  269.784869] 3.8.0-rc1 #1 Not tainted
> [  269.784870] -------------------------------------------------------
> [  269.784871] kworker/u:3/56 is trying to acquire lock:
> [  269.784878]  ((fb_notifier_list).rwsem){.+.+.+}, at: [<81062a1d>] __blocking_notifier_call_chain+0x49/0x80
> [  269.784879]
> [  269.784879] but task is already holding lock:
> [  269.784884]  (console_lock){+.+.+.}, at: [<812ee4ce>] i915_drm_freeze+0x9e/0xbb
> [  269.784884]
> [  269.784884] which lock already depends on the new lock.
> [  269.784884]
> [  269.784885]
> [  269.784885] the existing dependency chain (in reverse order) is:
> [  269.784887]
> [  269.784887] -> #1 (console_lock){+.+.+.}:
> [  269.784890]        [<810890e4>] lock_acquire+0x95/0x105
> [  269.784893]        [<810405a1>] console_lock+0x59/0x5b
> [  269.784897]        [<812ba125>] register_con_driver+0x36/0x128
> [  269.784899]        [<812bb27e>] take_over_console+0x1e/0x45
> [  269.784903]        [<81257a04>] fbcon_takeover+0x56/0x98
> [  269.784906]        [<8125b857>] fbcon_event_notify+0x2c1/0x5ea
> [  269.784909]        [<8149a211>] notifier_call_chain+0x67/0x92
> [  269.784911]        [<81062a33>] __blocking_notifier_call_chain+0x5f/0x80
> [  269.784912]        [<81062a63>] blocking_notifier_call_chain+0xf/0x11
> [  269.784915]        [<8124e85e>] fb_notifier_call_chain+0x16/0x18
> [  269.784917]        [<812505d7>] register_framebuffer+0x20a/0x26e
> [  269.784920]        [<812d3ca0>] drm_fb_helper_single_fb_probe+0x1ce/0x297
> [  269.784922]        [<812d3f40>] drm_fb_helper_initial_config+0x1d7/0x1ef
> [  269.784924]        [<8132cee2>] intel_fbdev_init+0x6f/0x82
> [  269.784927]        [<812f22f6>] i915_driver_load+0xa9e/0xc78
> [  269.784929]        [<812e020c>] drm_get_pci_dev+0x165/0x26d
> [  269.784931]        [<812ee8da>] i915_pci_probe+0x60/0x69
> [  269.784933]        [<8123fe8e>] local_pci_probe+0x39/0x61
> [  269.784935]        [<812400f5>] pci_device_probe+0xba/0xe0
> [  269.784938]        [<8133d3b6>] driver_probe_device+0x99/0x1c4
> [  269.784940]        [<8133d52f>] __driver_attach+0x4e/0x6f
> [  269.784942]        [<8133bae1>] bus_for_each_dev+0x52/0x84
> [  269.784944]        [<8133cec6>] driver_attach+0x19/0x1b
> [  269.784946]        [<8133cb65>] bus_add_driver+0xdf/0x203
> [  269.784948]        [<8133dad3>] driver_register+0x8e/0x114
> [  269.784952]        [<8123f581>] __pci_register_driver+0x5d/0x62
> [  269.784953]        [<812e0395>] drm_pci_init+0x81/0xe6
> [  269.784957]        [<81af7612>] i915_init+0x66/0x68
> [  269.784959]        [<810020b4>] do_one_initcall+0x7a/0x136
> [  269.784962]        [<8147ceaa>] kernel_init+0x141/0x296
> [  269.784964]        [<8149c7bc>] ret_from_fork+0x7c/0xb0
> [  269.784966]
> [  269.784966] -> #0 ((fb_notifier_list).rwsem){.+.+.+}:
> [  269.784967]        [<81088955>] __lock_acquire+0xa7e/0xddd
> [  269.784969]        [<810890e4>] lock_acquire+0x95/0x105
> [  269.784971]        [<81495092>] down_read+0x34/0x43
> [  269.784973]        [<81062a1d>] __blocking_notifier_call_chain+0x49/0x80
> [  269.784975]        [<81062a63>] blocking_notifier_call_chain+0xf/0x11
> [  269.784977]        [<8124e85e>] fb_notifier_call_chain+0x16/0x18
> [  269.784979]        [<8124ec47>] fb_set_suspend+0x22/0x4d
> [  269.784981]        [] intel_fbdev_set_suspend+0x20/0x22
> [  269.784983]        [] i915_drm_freeze+0xab/0xbb
> [  269.784985]        [] i915_pm_freeze+0x3d/0x41
> [  269.784987]        [] pci_pm_freeze+0x65/0x8d
> [  269.784990]        [] dpm_run_callback.isra.3+0x27/0x56
> [  269.784993]        [] __device_suspend+0x136/0x1b1
> [  269.784995]        [] async_suspend+0x1a/0x58
> [  269.784997]        [] async_run_entry_fn+0xa4/0x17c
> [  269.785000]        [] process_one_work+0x1cf/0x38e
> [  269.785002]        [] worker_thread+0x12e/0x1cc
> [  269.785004]        [] kthread+0xac/0xb4
> [  269.785006]        []