Re: [PATCH] lockdep: Introduce CONFIG_LOCKDEP_LARGE

2020-08-18 Thread Tetsuo Handa
Peter, Ingo and Will, would you answer (Q1) and (Q2)?

On 2020/08/18 21:02, Dmitry Vyukov wrote:
> On Tue, Aug 18, 2020 at 1:07 PM Tetsuo Handa
>  wrote:
>>
>> On 2020/08/18 18:57, Dmitry Vyukov wrote:
>>> On Tue, Aug 4, 2020 at 4:36 AM Tetsuo Handa
>>>  wrote:

>>>> Hello, Peter, Ingo and Will.
>>>>
>>>> (Q1) Can we change the capacity using kernel config?
>>>>
>>>> (Q2) If we can change the capacity, is it OK to specify these constants
>>>>      independently? (In other words, is there inter-dependency among
>>>>      these constants?)
>>>
>>>
>>> I think we should do this.
>>> syzbot uses a very beefy kernel config and very broad load.
>>> We are hitting "BUG: MAX_LOCKDEP_ENTRIES too low!" for the past 428
>>> days and already hit it 96K times. It's just harming overall kernel
>>> testing:
>>> https://syzkaller.appspot.com/bug?id=3d97ba93fb3566000c1c59691ea427370d33ea1b
>>>
>>> I think it's better if exact values are not hardcoded, but rather
>>> specified in the config. Today we are switching from 4K to 8K, but as
>>> we enable more configs and learn to reach more code, we may need 16K.
>>
>> In the short term, increasing the capacity would be fine. But for the
>> long term, I have doubts.
>>
>> As we enable more configs and observe more locks held within one boot,
>> I suspect that it becomes more and more timing-dependent and difficult
>> to hold all locks that can generate a lockdep warning.
>>
>>>
>>>
>>>> (Q3) Do you think that we can extend lockdep to be used as a tool for
>>>>      auditing locks held in kernel space and rebuilding the lock
>>>>      dependency map in user space?
>>>
>>> This looks like lots of work. Also unpleasant dependencies on
>>> user-space. If there is a user-space component, it will need to be
>>> deployed to _all_ of kernel testing systems and for all users of
>>> syzkaller. And it will also be a dependency for reproducers. Currently
>>> one can run a C reproducer and get the error messages from LOCKDEP. It
>>> seems that a user-space component will make it significantly more
>>> complicated.
>>
>> My suggestion is to detach lockdep's warnings from real-time alarming.
>>
>> Since not all locks are always held (e.g. some locks are held only when
>> some threshold is exceeded), requiring that all locks be held within one
>> boot sounds difficult. Such a requirement results in flaky bisection,
>> like "Fix bisection: failed" in
>> https://syzkaller.appspot.com/bug?id=b23ec126241ad0d86628de6eb5c1cff57d282632 .
>>
>> Then, I wish we could build non-realtime alarming based on all locks
>> held across all boots of each vmlinux file.
> 
> Unless I am missing something, the deployment/maintenance story for this
> (for syzbot, syzkaller users, other kernel testing, reproducer
> extraction, bisection, reproducer hermeticity) is quite complicated. I
> don't see it outweighing any potential benefit in reporting quality.

What I'm imagining is: do not try to judge lock dependency problems within
syzkaller (or other kernel testing) kernels. That is, no reproducer for
lock dependency problems and no bisection for lock dependency problems.
Utilize their resources for gathering only, and build the lock dependency
map (like kcov data) in some dedicated userspace component, as sketched below.
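
For illustration, the exported per-boot record could be as simple as one
entry per observed dependency edge. A minimal sketch (the struct layout
and field names are hypothetical; no such interface exists in lockdep
today):

#include <linux/types.h>

/* Hypothetical per-boot record, one per observed "A held -> B acquired"
 * edge, dumped KCOV-style so a userspace tool can merge logs across
 * boots of the same vmlinux. */
struct lockdep_edge_record {
	u64 prev_class_key;	/* lock class already held */
	u64 next_class_key;	/* lock class acquired under it */
	u32 ctx_flags;		/* hardirq/softirq/process context bits */
	u32 nr_entries;		/* number of valid slots in trace[] */
	u64 trace[16];		/* backtrace of the acquisition site */
};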

> 
> I also don't see how it will improve reproducer/bisection quality: to
> confirm the presence of a bug we still need to trigger all cycle edges
> within a single run anyway; it does not have to be a single VM, but it
> still needs to be a single test case. And this "having all edges
> within a single test case" seems to be the root problem. I don't see
> how this proposal addresses it.
> 
>>>> On 2020/07/25 14:23, Tetsuo Handa wrote:
>>>>>> Also somebody may use it to _reduce_ size of the table for a smaller
>>>>>> kernel.
>>>>>
>>>>> Maybe. But my feeling is that it is very rare that the kernel actually
>>>>> deadlocks as soon as lockdep warns of the possibility of deadlock.
>>>>>
>>>>> Since syzbot runs many instances in parallel, a lot of CPU resources
>>>>> are spent checking the same dependency tree. However, the possibility
>>>>> of deadlock can be warned about only for locks held within each kernel
>>>>> boot, and it is impossible to hold all locks within one kernel boot.
>>>>>
>>>>> Then, it might be nice if lockdep could audit only "which lock was
>>>>> held from which context, with what backtrace" and export that log
>>>>> like KCOV data (instead of evaluating the possibility of deadlock),
>>>>> and rebuild the whole dependency map (and evaluate the possibility of
>>>>> deadlock) across multiple kernel boots in userspace.

>>



Re: [PATCH] lockdep: Introduce CONFIG_LOCKDEP_LARGE

2020-08-18 Thread Dmitry Vyukov
On Tue, Aug 18, 2020 at 1:07 PM Tetsuo Handa
 wrote:
>
> On 2020/08/18 18:57, Dmitry Vyukov wrote:
> > On Tue, Aug 4, 2020 at 4:36 AM Tetsuo Handa
> >  wrote:
> >>
> >> Hello, Peter, Ingo and Will.
> >>
> >> (Q1) Can we change the capacity using kernel config?
> >>
> >> (Q2) If we can change the capacity, is it OK to specify these constants
> >>  independently? (In other words, is there inter-dependency among
> >>  these constants?)
> >
> >
> > I think we should do this.
> > syzbot uses a very beefy kernel config and very broad load.
> > We are hitting "BUG: MAX_LOCKDEP_ENTRIES too low!" for the past 428
> > days and already hit it 96K times. It's just harming overall kernel
> > testing:
> > https://syzkaller.appspot.com/bug?id=3d97ba93fb3566000c1c59691ea427370d33ea1b
> >
> > I think it's better if exact values are not hardcoded, but rather
> > specified in the config. Today we are switching from 4K to 8K, but as
> > we enable more configs and learn to reach more code, we may need 16K.
>
> In the short term, increasing the capacity would be fine. But for the
> long term, I have doubts.
>
> As we enable more configs and observe more locks held within one boot,
> I suspect that it becomes more and more timing-dependent and difficult
> to hold all locks that can generate a lockdep warning.
>
> >
> >
> >> (Q3) Do you think that we can extend lockdep to be used as a tool for
> >>      auditing locks held in kernel space and rebuilding the lock
> >>      dependency map in user space?
> >
> > This looks like lots of work. Also unpleasant dependencies on
> > user-space. If there is a user-space component, it will need to be
> > deployed to _all_ of kernel testing systems and for all users of
> > syzkaller. And it will also be a dependency for reproducers. Currently
> > one can run a C reproducer and get the error messages from LOCKDEP. It
> > seems that a user-space component will make it significantly more
> > complicated.
>
> My suggestion is to detach lockdep's warnings from real-time alarming.
>
> Since not all locks are always held (e.g. some locks are held only when
> some threshold is exceeded), requiring that all locks be held within one
> boot sounds difficult. Such a requirement results in flaky bisection,
> like "Fix bisection: failed" in
> https://syzkaller.appspot.com/bug?id=b23ec126241ad0d86628de6eb5c1cff57d282632 .
>
> Then, I wish we could build non-realtime alarming based on all locks
> held across all boots of each vmlinux file.

Unless I am missing something, the deployment/maintenance story for this
(for syzbot, syzkaller users, other kernel testing, reproducer
extraction, bisection, reproducer hermeticity) is quite complicated. I
don't see it outweighing any potential benefit in reporting quality.

I also don't see how it will improve reproducer/bisection quality: to
confirm the presence of a bug we still need to trigger all cycle edges
within a single run anyway; it does not have to be a single VM, but it
still needs to be a single test case. And this "having all edges
within a single test case" seems to be the root problem. I don't see
how this proposal addresses it.

> >> On 2020/07/25 14:23, Tetsuo Handa wrote:
> >>>> Also somebody may use it to _reduce_ size of the table for a smaller
> >>>> kernel.
> >>>
> >>> Maybe. But my feeling is that it is very rare that the kernel actually
> >>> deadlocks as soon as lockdep warns of the possibility of deadlock.
> >>>
> >>> Since syzbot runs many instances in parallel, a lot of CPU resources
> >>> are spent checking the same dependency tree. However, the possibility
> >>> of deadlock can be warned about only for locks held within each kernel
> >>> boot, and it is impossible to hold all locks within one kernel boot.
> >>>
> >>> Then, it might be nice if lockdep could audit only "which lock was
> >>> held from which context, with what backtrace" and export that log
> >>> like KCOV data (instead of evaluating the possibility of deadlock),
> >>> and rebuild the whole dependency map (and evaluate the possibility of
> >>> deadlock) across multiple kernel boots in userspace.
> >>
>


Re: [PATCH] lockdep: Introduce CONFIG_LOCKDEP_LARGE

2020-08-18 Thread Tetsuo Handa
On 2020/08/18 18:57, Dmitry Vyukov wrote:
> On Tue, Aug 4, 2020 at 4:36 AM Tetsuo Handa
>  wrote:
>>
>> Hello, Peter, Ingo and Will.
>>
>> (Q1) Can we change the capacity using kernel config?
>>
>> (Q2) If we can change the capacity, is it OK to specify these constants
>>  independently? (In other words, is there inter-dependency among
>>  these constants?)
> 
> 
> I think we should do this.
> syzbot uses a very beefy kernel config and very broad load.
> We are hitting "BUG: MAX_LOCKDEP_ENTRIES too low!" for the past 428
> days and already hit it 96K times. It's just harming overall kernel
> testing:
> https://syzkaller.appspot.com/bug?id=3d97ba93fb3566000c1c59691ea427370d33ea1b
> 
> I think it's better if exact values are not hardcoded, but rather
> specified in the config. Today we are switching from 4K to 8K, but as
> we enable more configs and learn to reach more code, we may need 16K.

In the short term, increasing the capacity would be fine. But for the
long term, I have doubts.

As we enable more configs and observe more locks held within one boot,
I suspect that it becomes more and more timing-dependent and difficult
to hold all locks that can generate a lockdep warning.

> 
> 
>> (Q3) Do you think that we can extend lockdep to be used as a tool for
>>      auditing locks held in kernel space and rebuilding the lock
>>      dependency map in user space?
> 
> This looks like lots of work. Also unpleasant dependencies on
> user-space. If there is a user-space component, it will need to be
> deployed to _all_ of kernel testing systems and for all users of
> syzkaller. And it will also be a dependency for reproducers. Currently
> one can run a C reproducer and get the error messages from LOCKDEP. It
> seems that a user-space component will make it significantly more
> complicated.

My suggestion is to detach lockdep's warnings from real-time alarming.

Since not all locks are always held (e.g. some locks are held only when
some threshold is exceeded), requiring that all locks be held within one
boot sounds difficult. Such a requirement results in flaky bisection,
like "Fix bisection: failed" in
https://syzkaller.appspot.com/bug?id=b23ec126241ad0d86628de6eb5c1cff57d282632 .

Then, I wish we could build non-realtime alarming based on all locks held
across all boots of each vmlinux file.

> 
> 
>> On 2020/07/25 14:23, Tetsuo Handa wrote:
>>>> Also somebody may use it to _reduce_ size of the table for a smaller
>>>> kernel.
>>>
>>> Maybe. But my feeling is that it is very rare that the kernel actually
>>> deadlocks as soon as lockdep warns of the possibility of deadlock.
>>>
>>> Since syzbot runs many instances in parallel, a lot of CPU resources
>>> are spent checking the same dependency tree. However, the possibility
>>> of deadlock can be warned about only for locks held within each kernel
>>> boot, and it is impossible to hold all locks within one kernel boot.
>>>
>>> Then, it might be nice if lockdep could audit only "which lock was
>>> held from which context, with what backtrace" and export that log
>>> like KCOV data (instead of evaluating the possibility of deadlock),
>>> and rebuild the whole dependency map (and evaluate the possibility of
>>> deadlock) across multiple kernel boots in userspace.
>>



Re: [PATCH] lockdep: Introduce CONFIG_LOCKDEP_LARGE

2020-08-18 Thread Dmitry Vyukov
On Tue, Aug 4, 2020 at 4:36 AM Tetsuo Handa
 wrote:
>
> Hello, Peter, Ingo and Will.
>
> (Q1) Can we change the capacity using kernel config?
>
> (Q2) If we can change the capacity, is it OK to specify these constants
>  independently? (In other words, is there inter-dependency among
>  these constants?)


I think we should do this.
syzbot uses a very beefy kernel config and very broad load.
We are hitting "BUG: MAX_LOCKDEP_ENTRIES too low!" for the past 428
days and already hit it 96K times. It's just harming overall kernel
testing:
https://syzkaller.appspot.com/bug?id=3d97ba93fb3566000c1c59691ea427370d33ea1b

I think it's better if exact values are not hardcoded, but rather
specified in the config. Today we are switching from 4K to 8K, but as
we enable more configs and learn to reach more code, we may need 16K.


> (Q3) Do you think that we can extend lockdep to be used as a tool for
>      auditing locks held in kernel space and rebuilding the lock
>      dependency map in user space?

This looks like lots of work. Also unpleasant dependencies on
user-space. If there is a user-space component, it will need to be
deployed to _all_ of kernel testing systems and for all users of
syzkaller. And it will also be a dependency for reproducers. Currently
one can run a C reproducer and get the error messages from LOCKDEP. It
seems that a user-space component will make it significantly more
complicated.


> On 2020/07/25 14:23, Tetsuo Handa wrote:
> >> Also somebody may use it to _reduce_ size of the table for a smaller 
> >> kernel.
> >
> > Maybe. But my feeling is that it is very rare that the kernel actually
> > deadlocks as soon as lockdep warns of the possibility of deadlock.
> >
> > Since syzbot runs many instances in parallel, a lot of CPU resources
> > are spent checking the same dependency tree. However, the possibility
> > of deadlock can be warned about only for locks held within each kernel
> > boot, and it is impossible to hold all locks within one kernel boot.
> >
> > Then, it might be nice if lockdep could audit only "which lock was
> > held from which context, with what backtrace" and export that log
> > like KCOV data (instead of evaluating the possibility of deadlock),
> > and rebuild the whole dependency map (and evaluate the possibility of
> > deadlock) across multiple kernel boots in userspace.
>


Re: [PATCH] lockdep: Introduce CONFIG_LOCKDEP_LARGE

2020-08-03 Thread Tetsuo Handa
Hello, Peter, Ingo and Will.

(Q1) Can we change the capacity using kernel config?

(Q2) If we can change the capacity, is it OK to specify these constants
 independently? (In other words, is there inter-dependency among
 these constants?)

(Q3) Do you think that we can extend lockdep to be used as a tool for
     auditing locks held in kernel space and rebuilding the lock
     dependency map in user space?
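
For reference, the constants that (Q1) and (Q2) refer to are currently
hardcoded in kernel/locking/lockdep_internals.h. As far as I can tell,
the only explicit inter-dependencies are the derived macros at the end;
the !LOCKDEP_SMALL defaults look like this:

#define MAX_LOCKDEP_ENTRIES	32768UL		/* dependency list entries */
#define MAX_LOCKDEP_CHAINS_BITS	16
#define MAX_STACK_TRACE_ENTRIES	524288UL	/* saved backtrace words */
#define STACK_TRACE_HASH_SIZE	16384

/* Derived from the above rather than configured separately: */
#define MAX_LOCKDEP_CHAINS	(1UL << MAX_LOCKDEP_CHAINS_BITS)
#define MAX_LOCKDEP_CHAIN_HLOCKS	(MAX_LOCKDEP_CHAINS*5)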

On 2020/07/25 14:23, Tetsuo Handa wrote:
>> Also somebody may use it to _reduce_ size of the table for a smaller kernel.
> 
> Maybe. But my feeling is that it is very rare that the kernel actually
> deadlocks as soon as lockdep warns of the possibility of deadlock.
>
> Since syzbot runs many instances in parallel, a lot of CPU resources
> are spent checking the same dependency tree. However, the possibility
> of deadlock can be warned about only for locks held within each kernel
> boot, and it is impossible to hold all locks within one kernel boot.
>
> Then, it might be nice if lockdep could audit only "which lock was
> held from which context, with what backtrace" and export that log
> like KCOV data (instead of evaluating the possibility of deadlock),
> and rebuild the whole dependency map (and evaluate the possibility of
> deadlock) across multiple kernel boots in userspace.



Re: [PATCH] lockdep: Introduce CONFIG_LOCKDEP_LARGE

2020-07-24 Thread Tetsuo Handa
On 2020/07/25 13:48, Dmitry Vyukov wrote:
>> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
>> index 29a8de4..85ba7eb 100644
>> --- a/kernel/locking/lockdep.c
>> +++ b/kernel/locking/lockdep.c
>> @@ -1349,7 +1349,11 @@ static int add_lock_to_list(struct lock_class *this,
>>  /*
>>   * For good efficiency of modular, we use power of 2
>>   */
>> +#ifdef CONFIG_LOCKDEP_LARGE
>> +#define MAX_CIRCULAR_QUEUE_SIZE 8192UL
>> +#else
>>  #define MAX_CIRCULAR_QUEUE_SIZE 4096UL
> 
> Maybe this number should be the config value? So that we don't ever
> return here to introduce "VERY_LARGE" :)

They can be "tiny, small, medium, compact, large and huge". Yeah, it's
a joke. :-)

> Also somebody may use it to _reduce_ size of the table for a smaller kernel.

Maybe. But my feeling is that it is very rare that the kernel actually
deadlocks as soon as lockdep warns of the possibility of deadlock.

Since syzbot runs many instances in parallel, a lot of CPU resources are
spent checking the same dependency tree. However, the possibility of
deadlock can be warned about only for locks held within each kernel boot,
and it is impossible to hold all locks within one kernel boot.

Then, it might be nice if lockdep could audit only "which lock was held
from which context, with what backtrace" and export that log like KCOV
data (instead of evaluating the possibility of deadlock), and rebuild the
whole dependency map (and evaluate the possibility of deadlock) across
multiple kernel boots in userspace, as in the sketch below.
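
As a rough sketch of that userspace side (the one-"prev next"-pair-per-line
log format is hypothetical; lockdep exports nothing like it today), a tool
could merge the per-boot logs and search the merged graph for cycles:

/* lockdep-merge.c: merge hypothetical per-boot edge logs (one
 * "prev_class next_class" pair per line) and DFS for a cycle. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_CLASSES 4096

static char *names[MAX_CLASSES];
static int nr_names;
static char edge[MAX_CLASSES][MAX_CLASSES]; /* merged adjacency matrix */
static char state[MAX_CLASSES];             /* 0=unvisited 1=on stack 2=done */

static int intern(const char *s)
{
	int i;

	for (i = 0; i < nr_names; i++)
		if (!strcmp(names[i], s))
			return i;
	if (nr_names == MAX_CLASSES)
		exit(1);
	names[nr_names] = strdup(s);
	return nr_names++;
}

static int has_cycle(int v)
{
	int w;

	state[v] = 1;
	for (w = 0; w < nr_names; w++)
		if (edge[v][w] &&
		    (state[w] == 1 || (state[w] == 0 && has_cycle(w))))
			return 1;
	state[v] = 2;
	return 0;
}

int main(int argc, char *argv[])
{
	char a[256], b[256];
	int i;

	for (i = 1; i < argc; i++) {	/* one log file per kernel boot */
		FILE *fp = fopen(argv[i], "r");

		if (!fp)
			continue;
		while (fscanf(fp, "%255s %255s", a, b) == 2)
			edge[intern(a)][intern(b)] = 1;
		fclose(fp);
	}
	for (i = 0; i < nr_names; i++)
		if (state[i] == 0 && has_cycle(i))
			printf("possible deadlock: cycle reachable from %s\n",
			       names[i]);
	return 0;
}

A real tool would also need the irq-context bits that lockdep tracks; the
point is only that cycle evaluation does not have to happen inside the
tested kernel.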

> 
>> +#endif
>>  #define CQ_MASK (MAX_CIRCULAR_QUEUE_SIZE-1)



Re: [PATCH] lockdep: Introduce CONFIG_LOCKDEP_LARGE

2020-07-24 Thread Dmitry Vyukov
On Sat, Jul 25, 2020 at 3:30 AM Tetsuo Handa
 wrote:
>
> Since syzkaller keeps running various test cases until the kernel crashes,
> it tends to examine more locking dependencies than normal systems do.
> As a result, syzbot is reporting that fuzz testing was terminated
> due to hitting the upper limits lockdep can track [1] [2] [3].
>
> Like CONFIG_LOCKDEP_SMALL which halves the upper limits, let's introduce
> CONFIG_LOCKDEP_LARGE which doubles the upper limits.
>
> [1] 
> https://syzkaller.appspot.com/bug?id=3d97ba93fb3566000c1c59691ea427370d33ea1b
> [2] 
> https://syzkaller.appspot.com/bug?id=381cb436fe60dc03d7fd2a092b46d7f09542a72a
> [3] 
> https://syzkaller.appspot.com/bug?id=a588183ac34c1437fc0785e8f220e88282e5a29f
>
> Reported-by: syzbot 
> Reported-by: syzbot 
> Reported-by: syzbot 
> Signed-off-by: Tetsuo Handa 
> ---
>  kernel/locking/lockdep.c           | 4 ++++
>  kernel/locking/lockdep_internals.h | 5 +++++
>  lib/Kconfig.debug                  | 8 ++++++++
>  3 files changed, 17 insertions(+)
>
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 29a8de4..85ba7eb 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -1349,7 +1349,11 @@ static int add_lock_to_list(struct lock_class *this,
>  /*
>   * For good efficiency of modular, we use power of 2
>   */
> +#ifdef CONFIG_LOCKDEP_LARGE
> +#define MAX_CIRCULAR_QUEUE_SIZE 8192UL
> +#else
>  #define MAX_CIRCULAR_QUEUE_SIZE 4096UL

Maybe this number should be the config value? So that we don't ever
return here to introduce "VERY_LARGE" :)
Also somebody may use it to _reduce_ size of the table for a smaller kernel.
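
For example (just a sketch; the option name and range below are invented),
the exponent could come straight from Kconfig, which keeps the
power-of-two requirement and CQ_MASK intact:

config LOCKDEP_CIRCULAR_QUEUE_BITS
	int "Bitsize of the BFS circular queue"
	depends on LOCKDEP
	range 10 16
	default 12
	help
	  The lock dependency engine's BFS queue holds 1 << this value
	  entries. Raise it for fuzzing setups that exhaust the queue, or
	  lower it for a smaller kernel.

with, in kernel/locking/lockdep.c:

#define MAX_CIRCULAR_QUEUE_SIZE	(1UL << CONFIG_LOCKDEP_CIRCULAR_QUEUE_BITS)
#define CQ_MASK			(MAX_CIRCULAR_QUEUE_SIZE - 1)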

> +#endif
>  #define CQ_MASK (MAX_CIRCULAR_QUEUE_SIZE-1)
>
>  /*
> diff --git a/kernel/locking/lockdep_internals.h 
> b/kernel/locking/lockdep_internals.h
> index baca699..00a3ec3 100644
> --- a/kernel/locking/lockdep_internals.h
> +++ b/kernel/locking/lockdep_internals.h
> @@ -93,6 +93,11 @@ enum {
>  #define MAX_LOCKDEP_CHAINS_BITS 15
>  #define MAX_STACK_TRACE_ENTRIES 262144UL
>  #define STACK_TRACE_HASH_SIZE 8192
> +#elif defined(CONFIG_LOCKDEP_LARGE)
> +#define MAX_LOCKDEP_ENTRIES 65536UL
> +#define MAX_LOCKDEP_CHAINS_BITS 17
> +#define MAX_STACK_TRACE_ENTRIES 1048576UL
> +#define STACK_TRACE_HASH_SIZE 32768
>  #else
>  #define MAX_LOCKDEP_ENTRIES 32768UL
>
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index 9ad9210..69ba624 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -1266,6 +1266,14 @@ config LOCKDEP
>  config LOCKDEP_SMALL
> bool
>
> +config LOCKDEP_LARGE
> +   bool "Use larger buffer for tracking more locking dependencies"
> +   depends on LOCKDEP && !LOCKDEP_SMALL
> +   help
> +     If you say Y here, the upper limits the lock dependency engine uses
> +     will be doubled. Useful for fuzz testing, which tends to test more
> +     complicated dependencies than normal systems.
> +
>  config DEBUG_LOCKDEP
> bool "Lock dependency engine debugging"
> depends on DEBUG_KERNEL && LOCKDEP
> --
> 1.8.3.1
>


[PATCH] lockdep: Introduce CONFIG_LOCKDEP_LARGE

2020-07-24 Thread Tetsuo Handa
Since syzkaller keeps running various test cases until the kernel crashes,
it tends to examine more locking dependencies than normal systems do.
As a result, syzbot is reporting that fuzz testing was terminated
due to hitting the upper limits lockdep can track [1] [2] [3].

Like CONFIG_LOCKDEP_SMALL which halves the upper limits, let's introduce
CONFIG_LOCKDEP_LARGE which doubles the upper limits.

[1] 
https://syzkaller.appspot.com/bug?id=3d97ba93fb3566000c1c59691ea427370d33ea1b
[2] 
https://syzkaller.appspot.com/bug?id=381cb436fe60dc03d7fd2a092b46d7f09542a72a
[3] 
https://syzkaller.appspot.com/bug?id=a588183ac34c1437fc0785e8f220e88282e5a29f

Reported-by: syzbot 
Reported-by: syzbot 
Reported-by: syzbot 
Signed-off-by: Tetsuo Handa 
---
 kernel/locking/lockdep.c           | 4 ++++
 kernel/locking/lockdep_internals.h | 5 +++++
 lib/Kconfig.debug                  | 8 ++++++++
 3 files changed, 17 insertions(+)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 29a8de4..85ba7eb 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1349,7 +1349,11 @@ static int add_lock_to_list(struct lock_class *this,
 /*
  * For good efficiency of modular, we use power of 2
  */
+#ifdef CONFIG_LOCKDEP_LARGE
+#define MAX_CIRCULAR_QUEUE_SIZE 8192UL
+#else
 #define MAX_CIRCULAR_QUEUE_SIZE 4096UL
+#endif
 #define CQ_MASK (MAX_CIRCULAR_QUEUE_SIZE-1)
 
 /*
diff --git a/kernel/locking/lockdep_internals.h 
b/kernel/locking/lockdep_internals.h
index baca699..00a3ec3 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -93,6 +93,11 @@ enum {
 #define MAX_LOCKDEP_CHAINS_BITS 15
 #define MAX_STACK_TRACE_ENTRIES 262144UL
 #define STACK_TRACE_HASH_SIZE 8192
+#elif defined(CONFIG_LOCKDEP_LARGE)
+#define MAX_LOCKDEP_ENTRIES 65536UL
+#define MAX_LOCKDEP_CHAINS_BITS 17
+#define MAX_STACK_TRACE_ENTRIES 1048576UL
+#define STACK_TRACE_HASH_SIZE 32768
 #else
 #define MAX_LOCKDEP_ENTRIES 32768UL
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 9ad9210..69ba624 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1266,6 +1266,14 @@ config LOCKDEP
 config LOCKDEP_SMALL
bool
 
+config LOCKDEP_LARGE
+   bool "Use larger buffer for tracking more locking dependencies"
+   depends on LOCKDEP && !LOCKDEP_SMALL
+   help
+     If you say Y here, the upper limits the lock dependency engine uses
+     will be doubled. Useful for fuzz testing, which tends to test more
+     complicated dependencies than normal systems.
+
 config DEBUG_LOCKDEP
bool "Lock dependency engine debugging"
depends on DEBUG_KERNEL && LOCKDEP
-- 
1.8.3.1