On Tue, Feb 28, 2017 at 07:15:47PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> > + /*
> > + * Each work of workqueue might run in a different context,
> > + * thanks to concurrency support of workqueue. So we have to
> > + * distinguish each work to avoid false positive.
On Sun, Mar 05, 2017 at 12:08:45PM +0900, Byungchul Park wrote:
> On Fri, Mar 03, 2017 at 09:14:16AM +0100, Peter Zijlstra wrote:
> >
> > Now the problem with the above condition is that it makes reports
> > harder to decipher, because by avoiding adding redundant links to our
> > graph we lose
On Fri, Mar 03, 2017 at 08:50:03AM +0900, Byungchul Park wrote:
> On Thu, Mar 02, 2017 at 06:39:49AM -0800, Matthew Wilcox wrote:
> > On Thu, Mar 02, 2017 at 01:45:35PM +0900, byungchul.park wrote:
> > > From: Matthew Wilcox [mailto:wi...@infradead.org]
> > > > On Tue, Feb 28, 2017 at 07:15:47PM +0100, Peter Zijlstra wrote:
On Fri, Mar 03, 2017 at 10:13:38AM +0100, Peter Zijlstra wrote:
> On Fri, Mar 03, 2017 at 09:14:16AM +0100, Peter Zijlstra wrote:
>
> Two boots + a make defconfig, the first didn't have the redundant bit
> in, the second did (full diff below still includes the reclaim rework,
> because that was st
On Fri, Mar 03, 2017 at 09:14:16AM +0100, Peter Zijlstra wrote:
> On Fri, Mar 03, 2017 at 09:17:37AM +0900, Byungchul Park wrote:
> > On Thu, Mar 02, 2017 at 02:40:31PM +0100, Peter Zijlstra wrote:
>
> > > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > > index a95e5d1..7baea89 100644
On Fri, Mar 03, 2017 at 09:14:16AM +0100, Peter Zijlstra wrote:
> That said; I'd be fairly interested in numbers on how many links this
> avoids, I'll go make a check_redundant() version of the above and put a
> proper counter in so I can see what it does for a regular boot etc..
Two boots + a make defconfig, the first didn't have the redundant bit in, the second did.
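For reference, a sketch of how such a check_redundant() could sit on lockdep's existing forward-BFS machinery; the helper and counter names (__bfs_forwards, class_equal, nr_redundant*) mirror existing lockdep conventions but should be read as assumptions, not the final patch:

	/*
	 * Sketch only: before linking prev -> next, search the existing
	 * graph forward from prev. If next is already reachable, the new
	 * direct link adds no information and can be skipped (and counted).
	 */
	static noinline int check_redundant(struct lock_list *root,
					    struct lock_class *target,
					    struct lock_list **target_entry)
	{
		debug_atomic_inc(nr_redundant_checks);

		/* __bfs_forwards() returns 0 when a match was found. */
		return __bfs_forwards(root, target, class_equal, target_entry);
	}

The caller would then treat "found" as "dependency already implied": bump nr_redundant and return success without adding the direct link.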
On Fri, Mar 03, 2017 at 10:13:38AM +0100, Peter Zijlstra wrote:
> On Fri, Mar 03, 2017 at 09:14:16AM +0100, Peter Zijlstra wrote:
>
> > That said; I'd be fairly interested in numbers on how many links this
> > avoids, I'll go make a check_redundant() version of the above and put a
> > proper counter in so I can see what it does for a regular boot etc..
On Fri, Mar 03, 2017 at 09:17:37AM +0900, Byungchul Park wrote:
> On Thu, Mar 02, 2017 at 02:40:31PM +0100, Peter Zijlstra wrote:
> > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > index a95e5d1..7baea89 100644
> > --- a/kernel/locking/lockdep.c
> > +++ b/kernel/locking/lockdep.c
On Thu, Mar 02, 2017 at 02:40:31PM +0100, Peter Zijlstra wrote:
> [*] A while ago someone, and I cannot find the email just now, asked if
> we could not implement the RECLAIM_FS inversion stuff with a 'fake' lock
It looks interesting to me.
> like we use for other things like workqueues etc. I th
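The 'fake' lock trick mentioned here is the standard lockdep annotation pattern: a lockdep_map with no real lock behind it, held across reclaim and touched from every allocation site that might enter reclaim. A hedged sketch, with all names invented for illustration:

	static struct lockdep_map reclaim_fs_map =
		STATIC_LOCKDEP_MAP_INIT("reclaim_fs", &reclaim_fs_map);

	static void enter_reclaim(void)
	{
		lock_map_acquire(&reclaim_fs_map);	/* held across reclaim */
	}

	static void leave_reclaim(void)
	{
		lock_map_release(&reclaim_fs_map);
	}

	static void alloc_may_reclaim(void)
	{
		/* Acquire+release records "allocation site -> reclaim". */
		lock_map_acquire(&reclaim_fs_map);
		lock_map_release(&reclaim_fs_map);
	}

An acquire immediately followed by a release never blocks, but is enough for lockdep to record the ordering in its dependency graph.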
On Thu, Mar 02, 2017 at 02:40:31PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 01, 2017 at 01:28:43PM +0100, Peter Zijlstra wrote:
> > On Wed, Mar 01, 2017 at 02:43:23PM +0900, Byungchul Park wrote:
>
> > > It's an optimization, but a very essential and important one.
>
> Since it's not for correctness, please put it in a separate patch with a
On Thu, Mar 02, 2017 at 02:41:03PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> > diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> > index a6c8db1..7890661 100644
> > --- a/lib/Kconfig.debug
> > +++ b/lib/Kconfig.debug
> > @@ -1042,6 +1042,19 @@ config DEBUG_LOCK_ALLOC
On Thu, Mar 02, 2017 at 06:39:49AM -0800, Matthew Wilcox wrote:
> On Thu, Mar 02, 2017 at 01:45:35PM +0900, byungchul.park wrote:
> > From: Matthew Wilcox [mailto:wi...@infradead.org]
> > > On Tue, Feb 28, 2017 at 07:15:47PM +0100, Peter Zijlstra wrote:
> > > > (And we should not be returning to userspace with locks held anyway --
On Wed, Mar 01, 2017 at 01:28:43PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 01, 2017 at 02:43:23PM +0900, Byungchul Park wrote:
> > It's an optimization, but a very essential and important one.
Since it's not for correctness, please put it in a separate patch with a
good Changelog, the be
On Thu, Mar 02, 2017 at 01:45:35PM +0900, byungchul.park wrote:
> From: Matthew Wilcox [mailto:wi...@infradead.org]
> > On Tue, Feb 28, 2017 at 07:15:47PM +0100, Peter Zijlstra wrote:
> > > (And we should not be returning to userspace with locks held anyway --
> > > lockdep already has a check for that).
On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index a6c8db1..7890661 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -1042,6 +1042,19 @@ config DEBUG_LOCK_ALLOC
>spin_lock_init()/mutex_init()/etc., or w
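The hunk under discussion adds a new config symbol below DEBUG_LOCK_ALLOC. Its rough shape would be something like the following; the prompt and help text are an approximation for orientation, not the patch's verbatim wording:

	config LOCKDEP_CROSSRELEASE
		bool "Lock debugging: enable cross-context checking of locks"
		depends on PROVE_LOCKING
		help
		  This makes lockdep handle crosslocks, i.e. lock-like objects
		  such as completions and page locks that can be released by a
		  context other than the one that acquired them.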
> Subject: Re: [PATCH v5 06/13] lockdep: Implement crossrelease feature
>
> On Tue, Feb 28, 2017 at 07:15:47PM +0100, Peter Zijlstra wrote:
> > (And we should not be returning to userspace with locks held anyway --
On Tue, Feb 28, 2017 at 07:15:47PM +0100, Peter Zijlstra wrote:
> (And we should not be returning to userspace with locks held anyway --
> lockdep already has a check for that).
Don't we return to userspace with page locks held, e.g. during async direct I/O?
On Wed, Mar 01, 2017 at 02:17:07PM +0900, Byungchul Park wrote:
> > > +void lock_commit_crosslock(struct lockdep_map *lock)
> > > +{
> > > + struct cross_lock *xlock;
> > > + unsigned long flags;
> > > +
> > > + if (!current->xhlocks)
> > > + return;
> > > +
> > > + if (unlikely(current->lo
On Wed, Mar 01, 2017 at 02:43:23PM +0900, Byungchul Park wrote:
> On Tue, Feb 28, 2017 at 02:40:18PM +0100, Peter Zijlstra wrote:
> > > +static int commit_xhlocks(struct cross_lock *xlock)
> > > +{
> > > + struct task_struct *curr = current;
> > > + struct hist_lock *xhlock_c = xhlock_curr(curr);
>
On Wed, Mar 01, 2017 at 11:45:48AM +0100, Peter Zijlstra wrote:
> On Wed, Mar 01, 2017 at 01:40:33PM +0900, Byungchul Park wrote:
>
> > Right. I decided to force MAX_XHLOCKS_NR to be power of 2 and everything
> > became easy. Thank you very much.
>
> Something like:
>
> BUILD_BUG_ON(MAX_XHLOCKS_NR & (MAX_XHLOCKS_NR - 1));
On Wed, Mar 01, 2017 at 11:43:28AM +0100, Peter Zijlstra wrote:
> On Wed, Mar 01, 2017 at 04:21:28PM +0900, Byungchul Park wrote:
> > On Tue, Feb 28, 2017 at 07:15:47PM +0100, Peter Zijlstra wrote:
> > > On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> > > > + /*
> > > > +
On Wed, Mar 01, 2017 at 01:40:33PM +0900, Byungchul Park wrote:
> Right. I decided to force MAX_XHLOCKS_NR to be power of 2 and everything
> became easy. Thank you very much.
Something like:
BUILD_BUG_ON(MAX_XHLOCKS_NR & (MAX_XHLOCKS_NR - 1));
Should enforce that, I think.
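The expression works because a power of two has exactly one bit set, so x & (x - 1) is nonzero for anything else; BUILD_BUG_ON() then breaks the build for any non-power-of-2 size. A standalone (userspace) equivalent of the check:

	#include <assert.h>

	#define MAX_XHLOCKS_NR 64UL

	/* Compile-time failure if MAX_XHLOCKS_NR is not a power of 2. */
	static_assert((MAX_XHLOCKS_NR & (MAX_XHLOCKS_NR - 1)) == 0,
		      "MAX_XHLOCKS_NR must be a power of 2");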
On Wed, Mar 01, 2017 at 04:21:28PM +0900, Byungchul Park wrote:
> On Tue, Feb 28, 2017 at 07:15:47PM +0100, Peter Zijlstra wrote:
> > On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> > > + /*
> > > + * Each work of workqueue might run in a different context,
> > > + * thanks to concurrency support of workqueue. So we have to
> > > + * distinguish each work to avoid false positive.
On Tue, Feb 28, 2017 at 07:15:47PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> > + /*
> > + * Each work of workqueue might run in a different context,
> > + * thanks to concurrency support of workqueue. So we have to
> > + * distinguish each work to avoid false positive.
On Tue, Feb 28, 2017 at 01:49:06PM +0100, Peter Zijlstra wrote:
> On Tue, Feb 28, 2017 at 01:45:07PM +0100, Peter Zijlstra wrote:
> > On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> > > + /*
> > > + * struct held_lock does not have an indicator whether in nmi.
> > > + */
> > > +
On Wed, Mar 01, 2017 at 02:17:07PM +0900, Byungchul Park wrote:
> On Tue, Feb 28, 2017 at 04:49:00PM +0100, Peter Zijlstra wrote:
> > On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> >
> > > +struct cross_lock {
> > > + /*
> > > + * When more than one acquisition of crosslocks are overlapped,
> > > + * we do actual commit only when ref == 0.
On Tue, Feb 28, 2017 at 02:40:18PM +0100, Peter Zijlstra wrote:
> > +static int commit_xhlocks(struct cross_lock *xlock)
> > +{
> > + struct task_struct *curr = current;
> > + struct hist_lock *xhlock_c = xhlock_curr(curr);
> > + struct hist_lock *xhlock = xhlock_c;
> > +
> > + do {
> > +
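From the fragments visible in the thread, the walk appears to run once around the per-task history ring, newest entry first. A hedged sketch of that shape; commit_xhlock() and xhlock_prev() are stand-ins for whatever the real patch does at each slot:

	static int commit_xhlocks(struct cross_lock *xlock)
	{
		struct task_struct *curr = current;
		struct hist_lock *xhlock_c = xhlock_curr(curr);
		struct hist_lock *xhlock = xhlock_c;

		do {
			/*
			 * Record "crosslock -> historical lock" so that a
			 * wait on the crosslock can later be ordered against
			 * locks held before its release.
			 */
			if (!commit_xhlock(xlock, xhlock))	/* assumed helper */
				return 0;
			xhlock = xhlock_prev(curr, xhlock);
		} while (xhlock != xhlock_c);	/* stop after one full circle */

		return 1;
	}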
On Tue, Feb 28, 2017 at 07:29:02PM +0100, Peter Zijlstra wrote:
> On Tue, Feb 28, 2017 at 10:24:44PM +0900, Byungchul Park wrote:
> > On Tue, Feb 28, 2017 at 02:10:12PM +0100, Peter Zijlstra wrote:
>
> > > > +/* For easy access to xhlock */
> > > > +#define xhlock(t, i)	((t)->xhlocks + (i))
On Tue, Feb 28, 2017 at 04:49:00PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
>
> > +struct cross_lock {
> > + /*
> > + * When more than one acquisition of crosslocks are overlapped,
> > + * we do actual commit only when ref == 0.
> > +
On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> + /*
> + * Each work of workqueue might run in a different context,
> + * thanks to concurrency support of workqueue. So we have to
> + * distinguish each work to avoid false positive.
> + *
> + * TODO: W
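One way to realize what this comment asks for, purely as a sketch of the idea (the series' eventual mechanism may differ): stamp every history entry with a per-task generation that is bumped whenever a new work starts, and never connect entries across generations:

	/* Hypothetical: a generation counter separating work contexts. */
	static inline void crossrelease_work_start(struct task_struct *t)
	{
		t->hist_gen++;		/* assumed new task_struct field */
	}

	static inline bool xhlock_same_context(struct task_struct *t,
					       struct hist_lock *xhlock)
	{
		/* Entries recorded by an earlier work must not be linked. */
		return xhlock->hist_gen == t->hist_gen;
	}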
On Tue, Feb 28, 2017 at 10:24:44PM +0900, Byungchul Park wrote:
> On Tue, Feb 28, 2017 at 02:10:12PM +0100, Peter Zijlstra wrote:
> > > +/* For easy access to xhlock */
> > > +#define xhlock(t, i) ((t)->xhlocks + (i))
> > > +#define xhlock_prev(t, l)	xhlock(t, idx_prev((l) - (t)
On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> +struct cross_lock {
> + /*
> + * When more than one acquisition of crosslocks are overlapped,
> + * we do actual commit only when ref == 0.
> + */
> + atomic_t ref;
That comment doesn't seem right, should th
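For what ref appears intended to do (a plausible reading; the comment's exact meaning is what's being questioned here): count overlapping acquisitions of the crosslock and run the commit when the count drops back to zero. A sketch under that assumption, with xlock_get()/xlock_put() invented for illustration:

	static void xlock_get(struct cross_lock *xlock)
	{
		atomic_inc(&xlock->ref);	/* one more overlapping acquisition */
	}

	static void xlock_put(struct cross_lock *xlock)
	{
		/* Commit once the last overlapping acquisition is released. */
		if (atomic_dec_and_test(&xlock->ref))
			commit_xhlocks(xlock);	/* helper named in this thread */
	}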
On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> + /*
> + * If the previous in held_locks can create a proper dependency
> + * with a target crosslock, then we can skip committing this,
> + * since "the target crosslock -> the previous lock" and
> + * "the pr
On Tue, Feb 28, 2017 at 02:10:12PM +0100, Peter Zijlstra wrote:
> > +#ifdef CONFIG_LOCKDEP_CROSSRELEASE
> > +
> > +#define idx(t) ((t)->xhlock_idx)
> > +#define idx_prev(i)	((i) ? (i) - 1 : MAX_XHLOCKS_NR - 1)
> > +#define idx_next(i)	(((i) + 1) % MAX_XHLOCKS_NR)
On Tue, Feb 28, 2017 at 02:35:21PM +0100, Peter Zijlstra wrote:
> On Tue, Feb 28, 2017 at 10:28:20PM +0900, Byungchul Park wrote:
> > On Tue, Feb 28, 2017 at 02:05:13PM +0100, Peter Zijlstra wrote:
> > > On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> > > > +#define MAX_XHLOCKS_NR 64UL
On Tue, Feb 28, 2017 at 01:45:07PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> > + /*
> > + * struct held_lock does not have an indicator whether in nmi.
> > + */
> > + int nmi;
>
> Do we really need this? Lockdep doesn't really know about NMI context,
On Tue, Feb 28, 2017 at 10:28:20PM +0900, Byungchul Park wrote:
> On Tue, Feb 28, 2017 at 02:05:13PM +0100, Peter Zijlstra wrote:
> > On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> > > +#define MAX_XHLOCKS_NR 64UL
> >
> > > +#ifdef CONFIG_LOCKDEP_CROSSRELEASE
> > > + if (tsk->xhlocks) {
On Tue, Feb 28, 2017 at 02:05:13PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> > +#define MAX_XHLOCKS_NR 64UL
>
> > +#ifdef CONFIG_LOCKDEP_CROSSRELEASE
> > + if (tsk->xhlocks) {
> > + void *tmp = tsk->xhlocks;
> > + /* Disable crossrelease for current */
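The fragment suggests the usual detach-before-free pattern: clear the pointer first so that locks taken inside the teardown path itself are not recorded into the buffer being freed. Completing it under that assumption:

	#ifdef CONFIG_LOCKDEP_CROSSRELEASE
		if (tsk->xhlocks) {
			void *tmp = tsk->xhlocks;
			/* Disable crossrelease for current */
			tsk->xhlocks = NULL;
			kfree(tmp);	/* assumed: safe now, history is detached */
		}
	#endif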
On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> +#define MAX_XHLOCKS_NR 64UL
> +#ifdef CONFIG_LOCKDEP_CROSSRELEASE
> + if (tsk->xhlocks) {
> + void *tmp = tsk->xhlocks;
> + /* Disable crossrelease for current */
> + tsk->xhlocks = NULL;
> +
> +#ifdef CONFIG_LOCKDEP_CROSSRELEASE
> +
> +#define idx(t) ((t)->xhlock_idx)
> +#define idx_prev(i) ((i) ? (i) - 1 : MAX_XHLOCKS_NR - 1)
> +#define idx_next(i) (((i) + 1) % MAX_XHLOCKS_NR)
Note that:
#define idx_prev(i) (((i) - 1) % MAX_XHLOCKS_NR)
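The point of the note: once MAX_XHLOCKS_NR is a power of two, the ternary in idx_prev() is unnecessary, because unsigned wraparound followed by the modulo already lands on the last slot. With i == 0, (i - 1) is UINT_MAX, and UINT_MAX % 64 == 63 since 64 divides 2^32. A standalone check:

	#include <assert.h>

	#define MAX_XHLOCKS_NR	64U
	#define idx_prev(i)	(((i) - 1) % MAX_XHLOCKS_NR)

	int main(void)
	{
		unsigned int i = 0;

		/* Wraps to the last slot without an explicit ternary. */
		assert(idx_prev(i) == MAX_XHLOCKS_NR - 1);
		return 0;
	}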
On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> + /*
> + * struct held_lock does not have an indicator whether in nmi.
> + */
> + int nmi;
Do we really need this? Lockdep doesn't really know about NMI context,
so it's weird to now partially introduce it.
On Wed, Jan 18, 2017 at 10:17:32PM +0900, Byungchul Park wrote:
> + /*
> + * Each work of workqueue might run in a different context,
> + * thanks to concurrency support of workqueue. So we have to
> + * distinguish each work to avoid false positive.
> + *
> + * TODO: W