On Wed, Oct 24, 2018 at 09:43:54AM +1100, NeilBrown wrote:
> This took longer than I had wanted, due to various reasons - sorry.
> And I'm now posting it in a merge window, which is not ideal. I don't
> expect it to be included in this merge window and I won't be at all
> impatient for review,
Thanks, applying.
--b.
On Tue, Oct 02, 2018 at 12:08:48PM +0200, Gustavo A. R. Silva wrote:
> Replace "fallthru" with a proper "fall through" annotation.
>
> Also, add an annotation where it is expected to fall through.
>
> These fixes are part of the ongoing efforts to enabling
>
Thanks, applying for 4.20.
--b.
On Tue, Sep 25, 2018 at 11:22:53AM +, YueHaibing wrote:
> Fixes gcc '-Wunused-but-set-variable' warning:
>
> fs/nfsd/vfs.c: In function 'nfsd_create':
> fs/nfsd/vfs.c:1279:16: warning:
> variable 'dirp' set but not used [-Wunused-but-set-variable]
>
>
On Wed, Sep 05, 2018 at 06:55:15AM -0400, Jeff Layton wrote:
> There is no requirement for a filesystem to flush data on close(). In
> fact, most local filesystems do not. NFS does, but that's because it has
> to in order to provide close-to-open cache consistency semantics.
And these days even
On Tue, Sep 04, 2018 at 06:23:48PM +0200, Rogier Wolff wrote:
> On Tue, Sep 04, 2018 at 12:12:03PM -0400, J. Bruce Fields wrote:
> > Well, I think the point was that in the above examples you'd prefer that
> > the read just fail--no need to keep the data. A bit marking the fi
On Tue, Sep 04, 2018 at 11:44:20AM -0400, Jeff Layton wrote:
> On Tue, 2018-09-04 at 22:56 +0800, 焦晓冬 wrote:
> > A practical and concrete example may be,
> > A disk cleaner program that first searches for garbage files that won't be
> > used
> > anymore and save the list in a file
Apologies, I messed up my pull request. The result was just a few
commits mentioned in the tag description but missing from the tip of the
branch, and I thought simplest was to make a new tag and a new pull
request with the additional commits, I hope that works OK. Thanks to
Chuck for noticing
):
nfsd: constify write_op[]
Gustavo A. R. Silva (2):
nfsd: Mark expected switch fall-through
nfsd: use true and false for boolean values
J. Bruce Fields (9):
nfsd4: extend reclaim period for reclaiming clients
nfsd4: return default lease period
nfsd: fix NFSv4
On Mon, Aug 20, 2018 at 01:02:21PM +0200, Martin Wilck wrote:
> On Wed, 2018-08-08 at 14:29 -0400, J. Bruce Fields wrote:
> > On Wed, Aug 08, 2018 at 12:47:22PM -0400, Jeff Layton wrote:
> > > On Wed, 2018-08-08 at 11:51 +1000, NeilBrown wrote:
> > > > If you have
On Sat, Aug 18, 2018 at 12:51:14AM -0500, Eric W. Biederman wrote:
> I was dismayed when I saw the syzbot report triggered someone to remove
> themselves from MAINTAINERS.
You're talking about my patch? I think you misread it, I'm not removing
myself from MAINTAINERS.
--b.
On Fri, Aug 17, 2018 at 01:22:31PM -0500, Eric W. Biederman wrote:
> Dmitry Vyukov writes:
>
> > On Wed, Aug 15, 2018 at 9:01 PM, Eric W. Biederman
> > wrote:
> >> Dmitry Vyukov writes:
> >>
> >>> On Tue, Aug 14, 2018 at 12:11 PM, J. Bruce F
On Tue, Aug 14, 2018 at 01:50:20PM -0700, Dmitry Vyukov wrote:
> On Tue, Aug 14, 2018 at 12:11 PM, J. Bruce Fields
> wrote:
> > On Mon, Aug 13, 2018 at 06:33:02AM -0700, syzbot wrote:
> >> syzbot has found a reproducer for the following crash on:
> >>
> >>
On Wed, Aug 15, 2018 at 10:11:20AM +1000, Stephen Rothwell wrote:
> Hi Bruce,
>
> On Tue, 14 Aug 2018 13:50:20 -0700 Dmitry Vyukov wrote:
> >
> > On Tue, Aug 14, 2018 at 12:11 PM, J. Bruce Fields
> > wrote:
> > > On Mon, Aug 13, 2018 at 06:33:02AM -0700, syz
On Mon, Aug 13, 2018 at 06:33:02AM -0700, syzbot wrote:
> syzbot has found a reproducer for the following crash on:
>
> HEAD commit: 5ed5da74de9e Add linux-next specific files for 20180813
> git tree: linux-next
I fetched linux-next but don't have 5ed5da74de9e.
I'm also not sure why
This version looks correct to me, and simpler. I'll be curious to hear
whatever you learn from testing!
--b.
On Tue, Aug 14, 2018 at 01:56:51PM +1000, NeilBrown wrote:
>
> V2, which added wake_non_conflicts() was more broken than V1 - as
> Bruce explained there is no transitivity in the
On Sat, Aug 11, 2018 at 07:56:25AM -0400, Jeff Layton wrote:
> FWIW, I did a bit of testing with lockperf tests that I had written on
> an earlier rework of this code:
>
> https://git.samba.org/jlayton/linux.git/?p=jlayton/lockperf.git;a=summary
>
>
> The posix01 and flock01 tests in there
On Sat, Aug 11, 2018 at 07:51:13AM -0400, Jeff Layton wrote:
> On Thu, 2018-08-09 at 20:29 -0400, J. Bruce Fields wrote:
> > On Fri, Aug 10, 2018 at 08:12:43AM +1000, NeilBrown wrote:
> > > On Thu, Aug 09 2018, J. Bruce Fields wrote:
> > >
> > > > I thi
On Fri, Aug 10, 2018 at 01:17:14PM +1000, NeilBrown wrote:
> On Thu, Aug 09 2018, J. Bruce Fields wrote:
>
> > On Fri, Aug 10, 2018 at 11:50:58AM +1000, NeilBrown wrote:
> >> You're good at this game!
> >
> > Everybody's got to have a hobby, mine is pat
On Fri, Aug 10, 2018 at 11:50:58AM +1000, NeilBrown wrote:
> You're good at this game!
Everybody's got to have a hobby, mine is pathological posix locking
cases
> So, because a locker with the same "owner" gets a free pass, you can
> *never* say that any lock which conflicts with A also
On Fri, Aug 10, 2018 at 09:56:07AM +1000, NeilBrown wrote:
> On Thu, Aug 09 2018, J. Bruce Fields wrote:
> > If we only cared about the former, and only in simple cases, we could
> > walk the entire list and skip waking up only the locks that conflict
> > with the first one
On Fri, Aug 10, 2018 at 09:40:35AM +1000, NeilBrown wrote:
> caller_fl is first and sys_fl is second.
>
> if sys_fl, the second, is a read lock, and caller_fl, the first, is a
> write lock, they clearly conflict but any other lock that conflict
> with caller_fl (The write lock) would *not*
On Fri, Aug 10, 2018 at 08:19:26AM +1000, NeilBrown wrote:
> On Thu, Aug 09 2018, J. Bruce Fields wrote:
> > I think you could simplify the code a lot by maintaining the tree so
> > that it always satisfies the condition that waiters are always strictly
> > "weaker&q
On Fri, Aug 10, 2018 at 08:12:43AM +1000, NeilBrown wrote:
> On Thu, Aug 09 2018, J. Bruce Fields wrote:
>
> > I think there's also a problem with multiple tasks sharing the same
> > lock owner.
> >
> > So, all locks are exclusive locks for the same range. We have
I think there's also a problem with multiple tasks sharing the same
lock owner.
So, all locks are exclusive locks for the same range. We have four
tasks. Tasks 1 and 4 share the same owner, the others' owners are
distinct.
- Task 1 gets a lock.
- Task 2 gets a conflicting lock.
On Thu, Aug 09, 2018 at 12:04:41PM +1000, NeilBrown wrote:
> When we find an existing lock which conflicts with a request,
> and the request wants to wait, we currently add the request
> to a list. When the lock is removed, the whole list is woken.
> This can cause the thundering-herd problem.
>
On Thu, Aug 09, 2018 at 12:04:41PM +1000, NeilBrown wrote:
> In a future patch we will need to differentiate between conflicts that
> are "transitive" and those that aren't.
> A "transitive" conflict is defined as one where any lock that
> conflicts with the first (newly requested) lock would
On Wed, Aug 08, 2018 at 06:50:06PM -0400, Jeff Layton wrote:
> That seems like a legit problem.
>
> One possible fix might be to have the waiter on (1,2) walk down the
> entire subtree and wake up any waiter that is waiting on a lock that
> doesn't conflict with the lock on which it's waiting.
>
On Wed, Aug 08, 2018 at 04:09:12PM -0400, J. Bruce Fields wrote:
> On Wed, Aug 08, 2018 at 03:54:45PM -0400, J. Bruce Fields wrote:
> > On Wed, Aug 08, 2018 at 11:51:07AM +1000, NeilBrown wrote:
> > > If you have a many-core machine, and have many threads all wanting to
> &g
On Wed, Aug 08, 2018 at 03:54:45PM -0400, J. Bruce Fields wrote:
> On Wed, Aug 08, 2018 at 11:51:07AM +1000, NeilBrown wrote:
> > If you have a many-core machine, and have many threads all wanting to
> > briefly lock a given file (udev is known to do this), you can get quite
>
On Wed, Aug 08, 2018 at 11:51:07AM +1000, NeilBrown wrote:
> If you have a many-core machine, and have many threads all wanting to
> briefly lock a given file (udev is known to do this), you can get quite
> poor performance.
>
> When one thread releases a lock, it wakes up all other threads that
>
On Wed, Aug 08, 2018 at 06:47:34AM -0400, Jeff Layton wrote:
> On Wed, 2018-08-08 at 11:51 +1000, NeilBrown wrote:
> > struct file lock contains an 'fl_next' pointer which
> > is used to point to the lock that this request is blocked
> > waiting for. So rename it to fl_blocker.
> >
> > The
On Wed, Aug 08, 2018 at 12:47:22PM -0400, Jeff Layton wrote:
> On Wed, 2018-08-08 at 11:51 +1000, NeilBrown wrote:
> > If you have a many-core machine, and have many threads all wanting to
> > briefly lock a given file (udev is known to do this), you can get quite
> > poor performance.
> >
> >
On Tue, Aug 07, 2018 at 01:02:16PM +0800, Rong Chen wrote:
>
>
> On 08/01/2018 07:46 PM, J. Bruce Fields wrote:
> > On Fri, Jul 27, 2018 at 08:22:25AM +0800, Ye Xiaolong wrote:
> > > On 07/16, Ye Xiaolong wrote:
> > > > On 07/04, Huang, Ying wrote:
OK, applied for for-4.19, thanks.
--b.
On Wed, Aug 01, 2018 at 07:44:05PM -0500, Gustavo A. R. Silva wrote:
> Return statements in functions returning bool should use true or false
> instead of an integer value.
>
> This issue was detected with the help of Coccinelle.
>
> Signed-off-by: Gustavo
On Thu, Aug 02, 2018 at 01:00:26PM +, Nixiaoming wrote:
> On Wednesday, August 01, 2018 11:18 PM , J. Bruce Fields wrote:
> >On Mon, Jul 23, 2018 at 09:57:11AM +0800, nixiaoming wrote:
> >> READ_BUF(8);
> >> dummy = be32_to_cpup(p++);
> >> dummy = be32_to
On Mon, Jul 23, 2018 at 09:57:11AM +0800, nixiaoming wrote:
> READ_BUF(8);
> dummy = be32_to_cpup(p++);
> dummy = be32_to_cpup(p++);
> ...
> READ_BUF(4);
> dummy = be32_to_cpup(p++);
>
> Assigning value to "dummy" here, but that stored value
> is overwritten before it can be used.
> At the same
On Tue, Jul 31, 2018 at 09:24:10PM -0500, Gustavo A. R. Silva wrote:
> In preparation to enabling -Wimplicit-fallthrough, mark switch cases
> where we are expecting to fall through.
Thanks, applying for 4.19.
--b.
>
> Warning level 2 was used: -Wimplicit-fallthrough=2
>
> Signed-off-by:
On Fri, Jul 27, 2018 at 08:22:25AM +0800, Ye Xiaolong wrote:
> On 07/16, Ye Xiaolong wrote:
> >On 07/04, Huang, Ying wrote:
> >>"J. Bruce Fields" writes:
> >>
> >>> Thanks!
> >>>
> >>> On Wed, Jun 20, 2018 at 02:52:43PM +
Cc'ing Olga.
--b.
On Tue, Jul 17, 2018 at 10:33:59AM +0100, Colin Ian King wrote:
> On 17/07/18 10:30, Dan Carpenter wrote:
> > On Mon, Jul 16, 2018 at 01:09:54PM +0100, Colin King wrote:
> >> From: Colin Ian King
> >>
> >> In the case where async_copy is successfully allocated but
> >> the call
Please pull:
git://linux-nfs.org/~bfields/linux.git tags/nfsd-4.18
A relatively quiet cycle for nfsd. The largest piece is an RDMA update
from Chuck Lever with new trace points, miscellaneous cleanups, and
streamlining of the send and receive paths. Other than that, some
miscellaneous
On Tue, May 15, 2018 at 07:44:24PM +0100, Ben Hutchings wrote:
> On Wed, 2018-04-11 at 20:34 +0200, Greg Kroah-Hartman wrote:
> > 4.4-stable review patch. If anyone has any objections, please let me know.
> >
> > ------
> >
> > From: "J. B
On Mon, Apr 16, 2018 at 09:02:01PM -0700, syzbot wrote:
> syzbot hit the following crash on bpf-next commit
> 5d1365940a68dd57b031b6e3c07d7d451cd69daf (Thu Apr 12 18:09:05 2018 +)
> Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
> syzbot dashboard link:
>
On Sun, Apr 15, 2018 at 02:37:39PM +, Sasha Levin wrote:
> On Mon, Apr 09, 2018 at 11:27:47AM -0400, J. Bruce Fields wrote:
> >What's your default on these patches on these AUTOSEL patches if you
> >don't get an ACK or NACK? Do you apply them anyway?
>
> Right now it's
e low, I'm mainly just curious.)
--b.
On Mon, Apr 09, 2018 at 12:19:02AM +, Sasha Levin wrote:
> From: "J. Bruce Fields" <bfie...@redhat.com>
>
> [ Upstream commit 0078117c6d9160031b866cfa1853514d4f6865d2 ]
>
> A client that sends more than a hundred ops in a single compound
> cu
coders
Colin Ian King (1):
lockd: make nlm_ntf_refcnt and nlm_ntf_wq static
Eric Biggers (1):
sunrpc: remove incorrect HMAC request initialization
Fengguang Wu (1):
nfsd: fix boolreturn.cocci warnings
J. Bruce Fields (11):
nfsd: remove unsused "cp_consecutive"
On Wed, Mar 28, 2018 at 02:03:34PM -0400, J. Bruce Fields wrote:
> Thanks, got it. Do you know how to find a commit id for that change?
> It's not entirely fair to blame the crypto change for what was really a
> latent nfs bug, but it might still be worth adding a Fixes: line just so
On Wed, Mar 28, 2018 at 10:50:51AM -0700, Eric Biggers wrote:
> On Wed, Mar 28, 2018 at 11:46:28AM -0400, J. Bruce Fields wrote:
> > On Tue, Mar 27, 2018 at 03:29:50PM -0700, Eric Biggers wrote:
> > > Hi Michael,
> > >
> > > On Tue, Mar 27, 2018 at 1
On Tue, Mar 27, 2018 at 03:29:50PM -0700, Eric Biggers wrote:
> Hi Michael,
>
> On Tue, Mar 27, 2018 at 11:06:14PM +0100, Michael Young wrote:
> > NFS mounts stopped working on one of my computers after a kernel update from
> > 4.15.3 to 4.15.4. I traced the problem to the commit
> >
Applying, thanks!
--b.
On Mon, Mar 19, 2018 at 11:37:05PM +0100, Stefan Agner wrote:
> Use enum nfs_cb_opnum4 in decode_cb_op_status. This fixes warnings
> seen with clang:
> fs/nfsd/nfs4callback.c:451:36: warning: implicit conversion from
> enumeration type 'enum nfs_cb_opnum4' to
Please pull an nfsd bugfix from:
git://linux-nfs.org/~bfields/linux.git tags/nfsd-4.16-1
Just one fix for an occasional panic from Jeff Layton.
--b.
Jeff Layton (1):
nfsd: remove blocked locks on client teardown