On 04/10/2019 11:10 AM, Peter Zijlstra wrote:
> On Fri, Apr 05, 2019 at 03:21:05PM -0400, Waiman Long wrote:
>> +#define RWSEM_WAIT_TIMEOUT ((HZ - 1)/200 + 1)
> Given the choices in HZ, the above seems fairly 'optimistic'.
I can tighten it up and make it less "optimistic" :-)
Cheers,
Longman
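Peter's remark that the timeout is "optimistic" can be checked numerically: the macro is a ceiling division converting roughly 1/200 s (5 ms) into jiffies, and at low HZ it rounds up to a whole tick. A userspace sketch (the function name is mine, modeling the macro, not the kernel's code):

```c
#include <assert.h>

/* Model of the proposed macro: ceiling of HZ/200 jiffies, i.e. about
 * 5 ms, rounded up so it can never be zero. */
static long rwsem_wait_timeout(long hz)
{
	return (hz - 1) / 200 + 1;
}
```

At HZ=100 the result is 1 jiffy, which is actually a 10 ms tick, so the effective timeout is much coarser than the nominal 5 ms at the low end of the HZ choices.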
On 04/10/2019 11:07 AM, Peter Zijlstra wrote:
> On Fri, Apr 05, 2019 at 03:21:05PM -0400, Waiman Long wrote:
>> Because of writer lock stealing, it is possible that a constant
>> stream of incoming writers will cause a waiting writer or reader to
>> wait indefinitely leadi
On 04/10/2019 06:00 AM, Ingo Molnar wrote:
> * Waiman Long wrote:
>
>># of Threads Before Patch After Patch
>> ---
>>  2       1,179          9,436
>>  4       1,505          8,268
On Sun, 2019-03-10 at 19:15 +0800, Nicolas Boichat wrote:
> On Thu, Mar 7, 2019 at 9:45 AM Long Cheng wrote:
> >
> > In DMA engine framework, add 8250 uart dma to support MediaTek uart.
> > If MediaTek uart enabled(SERIAL_8250_MT6577), and want to improve
> >
From: Long Li
When packets are waiting for outbound I/O and get interrupted, return the
proper error code to the user process.
Signed-off-by: Long Li
---
fs/cifs/smbdirect.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
index 7259427
From: Long Li
Now that the upper layer handles transport shutdown and reconnect, remove
the code handling transport shutdown on RDMA disconnect.
Signed-off-by: Long Li
---
fs/cifs/cifs_debug.c | 8 ++--
fs/cifs/smbdirect.c | 120 +++
fs
From: Long Li
Failure to send a packet doesn't mean it's a permanent failure; it can't be
returned to the user process as is. This I/O should be retried or failed based
on the server packet response and transport health. This logic is handled by
the upper layer.
Give this decision to the upper layer.
Signed-off
From: Long Li
Memory registration failure doesn't mean this I/O has failed; it means the
transport is hitting an I/O error or needs a reconnect. This error is not from
the server.
Indicate this error to the upper layer, and let the upper layer decide how to
reconnect and proceed with this I/O.
Signed-off
From: Long Li
When the transport is being destroyed, it's possible that some processes may
hold memory registrations that need to be deregistered.
Deregister them first so nobody is using transport resources, and the
transport can then be destroyed.
Signed-off-by: Long Li
---
fs/cifs/connect.c | 36
became much more fair,
though there was a drop of about 26% in the mean locking operations
done which was a tradeoff of having better fairness.
Signed-off-by: Waiman Long
---
kernel/locking/lock_events_list.h | 2 +
kernel/locking/rwsem-xadd.c | 154 ++
kernel
and optimistic spinning paths whenever
this bit is set. So all those extra readers will be put to sleep in
the wait queue. Wakeup will not happen until the reader count reaches 0.
Signed-off-by: Waiman Long
---
kernel/locking/rwsem-xadd.c | 38 +++-
kernel/locking/rwsem.h
Before combining owner and count, we are adding two new helpers for
accessing the owner value in the rwsem.
1) struct task_struct *rwsem_get_owner(struct rw_semaphore *sem)
2) bool is_rwsem_reader_owned(struct rw_semaphore *sem)
Signed-off-by: Waiman Long
---
kernel/locking/rwsem-xadd.c | 15
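A minimal sketch of what such accessors might look like once flag bits share the owner word (the flag bit, names, and use of a plain integer here are my assumptions for illustration, not the patch's actual code):

```c
#include <assert.h>

/* Assumed low-order flag bit in the owner word; the real patch may use
 * a different bit or name. */
#define RWSEM_READER_OWNED	1UL

/* Strip flag bits to recover the task pointer stored in 'owner'. */
static unsigned long rwsem_owner_task(unsigned long owner)
{
	return owner & ~RWSEM_READER_OWNED;
}

/* True if the flag marks the rwsem as reader-owned. */
static int rwsem_owner_is_reader(unsigned long owner)
{
	return (owner & RWSEM_READER_OWNED) != 0;
}
```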
1,862
 32    2,388    530    3,717    359
 64    1,424    322    4,060    401
128    1,642    510    4,488    628
It is obvious that RT tasks can benefit pretty significantly with this set
of patches.
Signed-off-by: Waiman Long
---
kernel/locking
, the extra constant argument to
rwsem_try_write_lock() and rwsem_try_write_lock_unqueued() should be
optimized out by the compiler.
Signed-off-by: Waiman Long
---
kernel/locking/rwsem-xadd.c | 25 ++---
1 file changed, 14 insertions(+), 11 deletions(-)
diff --git a/kernel
rwsem_sleep_reader=308201
rwsem_sleep_writer=72281
So a lot more threads acquired the lock in the slowpath and more threads
went to sleep.
Signed-off-by: Waiman Long
---
kernel/locking/lock_events_list.h | 1 +
kernel/locking/rwsem-xadd.c | 76 +--
kernel/locking/rwsem.h
wasn't significant in this case, but this change
is required by a follow-on patch.
Signed-off-by: Waiman Long
---
kernel/locking/lock_events_list.h | 1 +
kernel/locking/rwsem-xadd.c | 88 ++-
kernel/locking/rwsem.h| 3 ++
3 files changed, 80
1,727 1,918
32 1,263 1,956
64 889 1,343
Signed-off-by: Waiman Long
---
kernel/locking/rwsem-xadd.c | 36 +---
1 file changed, 29 insertions(+), 7 deletions(-)
diff --git a/kernel/locking
to be optimized.
To make rwsem more sane, a new locking scheme similar to the one in
qrwlock is now being used. The atomic long count has the following
bit definitions:
Bit 0 - writer locked bit
Bit 1 - waiters present bit
Bits 2-7 - reserved for future extension
Bits 8-X - reader count (24/56
allowed.
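The bit layout above can be sketched with illustrative masks (macro names are mine; the reader-count width depends on whether the owner is merged into the count, hence the 24/56 split):

```c
#include <assert.h>

#define RWSEM_WRITER_LOCKED	(1UL << 0)	/* Bit 0 */
#define RWSEM_FLAG_WAITERS	(1UL << 1)	/* Bit 1 */
						/* Bits 2-7: reserved */
#define RWSEM_READER_SHIFT	8		/* Bits 8-X: reader count */
#define RWSEM_READER_BIAS	(1UL << RWSEM_READER_SHIFT)
#define RWSEM_READER_MASK	(~0UL << RWSEM_READER_SHIFT)
```

Each reader adds RWSEM_READER_BIAS, so the low flag bits and the reader count can be updated with a single atomic add, as in qrwlock.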
This is part 2 of a 3-part (0/1/2) series to rearchitect the internal
operation of rwsem.
part 0: merged into tip
part 1: https://lore.kernel.org/lkml/20190404174320.22416-1-long...@redhat.com/
This patchset revamps the current rwsem-xadd implementation to make
it saner and easier
924
32 78 300
64 38 195
240 50 149
There is no performance gain at low contention level. At high contention
level, however, this patch gives a pretty decent performance boost.
Signed-off-by: Waiman Long
()/up_write()")
will have to be reverted.
Signed-off-by: Waiman Long
---
kernel/locking/rwsem-xadd.c | 74 -
1 file changed, 74 deletions(-)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 58b3a64e6f2c..4f036bda9063 100644
--- a/ker
.
Signed-off-by: Waiman Long
---
kernel/locking/rwsem-xadd.c | 40 ++---
kernel/locking/rwsem.h | 5 +
2 files changed, 38 insertions(+), 7 deletions(-)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 4f036bda9063..35891c53338b
On 04/04/2019 05:38 AM, Peter Zijlstra wrote:
> On Thu, Apr 04, 2019 at 07:05:24AM +0200, Juergen Gross wrote:
>
>> Without PARAVIRT_SPINLOCK this would be just an alternative() then?
> That could maybe work yes. This is all early enough.
Yes, alternative() should work as it is done before SMP
On bare metal, the pvqspinlock event counts will always be 0, so there
is no point in showing their corresponding debugfs files. They are
therefore skipped in this case.
Signed-off-by: Waiman Long
Acked-by: Davidlohr Bueso
---
kernel/locking/lock_events.c | 28 +++-
1 file
directory.
Signed-off-by: Waiman Long
Acked-by: Davidlohr Bueso
---
arch/Kconfig| 10 +++
arch/x86/Kconfig| 8 --
kernel/locking/Makefile | 1 +
kernel/locking/lock_events.c| 153
kernel/locking/lock_events.h
ds.
Waiman Long (11):
locking/rwsem: Relocate rwsem_down_read_failed()
locking/rwsem: Move owner setting code from rwsem.c to rwsem.h
locking/rwsem: Move rwsem internal function declarations to
rwsem-xadd.h
locking/rwsem: Micro-optimize rwsem_try_read_lock_unqueued()
locking/rwsem: Add
rwsem_down_read_failed() returns, for instance.
Signed-off-by: Waiman Long
Acked-by: Davidlohr Bueso
---
kernel/locking/rwsem-xadd.c | 6 +++---
kernel/locking/rwsem.c | 19 ++-
kernel/locking/rwsem.h | 17 +++--
3 files changed, 20 insertions(+), 22 deletions
() calls are replaced by either lockevent_inc() or
lockevent_cond_inc() calls.
The qstat_hop() call is renamed to lockevent_pv_hop(). The "reset_counters"
debugfs file is also renamed to ".reset_counts".
Signed-off-by: Waiman Long
Acked-by: Davidlohr Bueso
---
kernel/locking/lock_e
() are also moved over to rwsem-xadd.h.
Signed-off-by: Waiman Long
Acked-by: Davidlohr Bueso
---
kernel/locking/rwsem.c | 3 ---
kernel/locking/rwsem.h | 12 ++--
2 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index
microbenchmark with one locking thread was
run to write-lock and write-unlock an array of rwsems separated 2
cachelines apart in a 1M byte memory block. The locking rates (kops/s)
of the microbenchmark when the rwsems are at various "long" (8-byte)
offsets from beginning of the cachel
The atomic_long_cmpxchg_acquire() in rwsem_try_read_lock_unqueued() is
replaced by atomic_long_try_cmpxchg_acquire() to simplify the code and
generate slightly better assembly code. There is no functional change.
Signed-off-by: Waiman Long
Acked-by: Will Deacon
Acked-by: Davidlohr Bueso
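The difference between the two idioms can be illustrated in userspace with C11 atomics, whose compare_exchange mirrors the kernel's try_cmpxchg semantics: it returns a bool and refreshes the expected value on failure, so the loop needs no separate old/new comparison (constants and names here are illustrative, not the rwsem code's):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define WRITER_LOCKED	1L
#define READER_BIAS	256L

/* try_cmpxchg-style fast path: one call both attempts the update and,
 * on failure, reloads 'cur' with the current count. */
static bool try_read_lock(atomic_long *count)
{
	long cur = atomic_load_explicit(count, memory_order_relaxed);

	while (!(cur & WRITER_LOCKED)) {
		if (atomic_compare_exchange_weak_explicit(count, &cur,
				cur + READER_BIAS,
				memory_order_acquire, memory_order_relaxed))
			return true;	/* reader count bumped */
	}
	return false;			/* writer holds the lock */
}
```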
in the slowpath were
write-locks in the optimistic spinning code path with no sleeping at
all. For this system, over 97% of the locks are acquired via optimistic
spinning. It illustrates the importance of optimistic spinning in
improving the performance of rwsem.
Signed-off-by: Waiman Long
Acked-by: Davidlohr
The rwsem_down_read_failed*() functions were relocated from above the
optimistic spinning section to below that section. This enables the
reader functions to use optimistic spinning in future patches. There
is no code change.
Signed-off-by: Waiman Long
Acked-by: Will Deacon
Acked-by: Davidlohr
We don't need to expose rwsem internal functions which are not supposed
to be called directly from other kernel code.
Signed-off-by: Waiman Long
Acked-by: Will Deacon
Acked-by: Davidlohr Bueso
---
include/linux/rwsem.h | 7 ---
kernel/locking/rwsem.h | 7 +++
2 files changed, 7
of the rwsem count and owner fields to give more information
about what is wrong with the rwsem. The debug_locks_off() function is
called as is done inside DEBUG_LOCKS_WARN_ON().
Signed-off-by: Waiman Long
Acked-by: Davidlohr Bueso
---
kernel/locking/rwsem.c | 3 ++-
kernel/locking/rwsem.h | 21
On 04/04/2019 12:44 PM, Josh Poimboeuf wrote:
> Keeping track of the number of mitigations for all the CPU speculation
> bugs has become overwhelming for many users. It's getting more and more
> complicated to decide which mitigations are needed for a given
> architecture. Complicating matters
On 04/03/2019 01:16 PM, Peter Zijlstra wrote:
> On Wed, Apr 03, 2019 at 12:33:20PM -0400, Waiman Long wrote:
>> static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32
>> val)
>> {
>> if (static_bra
On 04/02/2019 05:43 AM, Peter Zijlstra wrote:
> On Mon, Apr 01, 2019 at 10:36:19AM -0400, Waiman Long wrote:
>> On 03/29/2019 11:20 AM, Alex Kogan wrote:
>>> +config NUMA_AWARE_SPINLOCKS
>>> + bool "Numa-aware spinlocks"
>>> + depends on NUMA
>
On 04/03/2019 08:59 AM, Peter Zijlstra wrote:
> On Thu, Mar 28, 2019 at 02:10:54PM -0400, Waiman Long wrote:
>> This is part 2 of a 3-part (0/1/2) series to rearchitect the internal
>> operation of rwsem.
>>
>> part 0: https://lkml.org/lkml/2019/3/22/1662
>> part 1
On 04/03/2019 11:39 AM, Alex Kogan wrote:
> Peter, Longman, many thanks for your detailed comments!
>
> A few follow-up questions are inlined below.
>
>> On Apr 2, 2019, at 5:43 AM, Peter Zijlstra wrote:
>>
>> On Mon, Apr 01, 2019 at 10:36:19AM -0400, Waiman Long wro
On 04/03/2019 09:12 AM, Peter Zijlstra wrote:
> On Thu, Feb 28, 2019 at 02:09:41PM -0500, Waiman Long wrote:
>> For an uncontended rwsem, count and owner are the only fields a task
>> needs to touch when acquiring the rwsem. So they are put next to each
>> other t
On 04/03/2019 09:09 AM, Peter Zijlstra wrote:
> On Thu, Feb 28, 2019 at 02:09:36PM -0500, Waiman Long wrote:
>> diff --git a/kernel/locking/rwsem.h b/kernel/locking/rwsem.h
>> index 1d8f722..c8fd3f1 100644
>> --- a/kernel/locking/rwsem.h
>> +++ b/kernel/locking
On 04/02/2019 05:18 PM, Johannes Weiner wrote:
> On Tue, Apr 02, 2019 at 03:38:10PM -0400, Waiman Long wrote:
>> The output of the PSI files show a bunch of numbers with no unit.
>> The psi.txt documentation file also does not indicate what units
>> are used. One can onl
Commit-ID: ddb20d1d3aed8f130519c0a29cd5392efcc067b8
Gitweb: https://git.kernel.org/tip/ddb20d1d3aed8f130519c0a29cd5392efcc067b8
Author: Waiman Long
AuthorDate: Fri, 22 Mar 2019 10:30:08 -0400
Committer: Ingo Molnar
CommitDate: Wed, 3 Apr 2019 14:50:52 +0200
locking/rwsem: Optimize
Commit-ID: 390a0c62c23cb026cd4664a66f6f45fed3a215f6
Gitweb: https://git.kernel.org/tip/390a0c62c23cb026cd4664a66f6f45fed3a215f6
Author: Waiman Long
AuthorDate: Fri, 22 Mar 2019 10:30:07 -0400
Committer: Ingo Molnar
CommitDate: Wed, 3 Apr 2019 14:50:52 +0200
locking/rwsem: Remove rwsem
Commit-ID: 46ad0840b1584b92b5ff2cc3ed0b011dd6b8e0f1
Gitweb: https://git.kernel.org/tip/46ad0840b1584b92b5ff2cc3ed0b011dd6b8e0f1
Author: Waiman Long
AuthorDate: Fri, 22 Mar 2019 10:30:06 -0400
Committer: Ingo Molnar
CommitDate: Wed, 3 Apr 2019 14:50:50 +0200
locking/rwsem: Remove arch
Commit-ID: 0975e3df30eb5849284c01be66c2ec16d8a48114
Gitweb: https://git.kernel.org/tip/0975e3df30eb5849284c01be66c2ec16d8a48114
Author: Waiman Long
AuthorDate: Fri, 22 Mar 2019 10:30:08 -0400
Committer: Ingo Molnar
CommitDate: Wed, 3 Apr 2019 11:42:35 +0200
locking/rwsem: Optimize
Commit-ID: 701fd16f3b4e3e5f317a051b36962b8cc756c138
Gitweb: https://git.kernel.org/tip/701fd16f3b4e3e5f317a051b36962b8cc756c138
Author: Waiman Long
AuthorDate: Fri, 22 Mar 2019 10:30:06 -0400
Committer: Ingo Molnar
CommitDate: Wed, 3 Apr 2019 11:42:33 +0200
locking/rwsem: Remove arch
Commit-ID: 79407a77fe0ea11c0d38c5f4a3936bf35a994965
Gitweb: https://git.kernel.org/tip/79407a77fe0ea11c0d38c5f4a3936bf35a994965
Author: Waiman Long
AuthorDate: Fri, 22 Mar 2019 10:30:07 -0400
Committer: Ingo Molnar
CommitDate: Wed, 3 Apr 2019 11:42:34 +0200
locking/rwsem: Remove rwsem
On 04/02/2019 03:17 PM, Jan Harkes wrote:
> On Sun, Mar 31, 2019 at 03:13:47PM -0400, Jan Harkes wrote:
>> On Sun, Mar 31, 2019 at 02:14:13PM -0400, Waiman Long wrote:
>>> One possibility is that there is a previous reference to the memory
>>> currently occupied by
On 03/29/2019 11:20 AM, Alex Kogan wrote:
> In CNA, spinning threads are organized in two queues, a main queue for
> threads running on the same node as the current lock holder, and a
> secondary queue for threads running on other nodes. At the unlock time,
> the lock holder scans the main queue
On 04/01/2019 02:38 AM, Juergen Gross wrote:
> On 25/03/2019 19:03, Waiman Long wrote:
>> On 03/25/2019 12:40 PM, Juergen Gross wrote:
>>> On 25/03/2019 16:57, Waiman Long wrote:
>>>> It was found that passing an invalid cpu number to pv_vcpu_is_preempted()
>&
On 03/31/2019 12:00 AM, Jan Harkes wrote:
> On Fri, Mar 29, 2019 at 05:53:22PM +, Waiman Long wrote:
>> On 03/29/2019 12:10 PM, Jan Harkes wrote:
>>> I knew I definitely had never seen this problem with the stable kernel
>>> on Ubuntu xenial (4.4) so I bisecte
On 03/29/2019 12:10 PM, Jan Harkes wrote:
> I was testing Coda on the 5.1-rc2 kernel and noticed that when I run a
> binary out of /coda, the binary would never exit and the system would
> detect a soft lockup. I narrowed it down to a very simple reproducible
> case of running a statically linked
On 03/28/2019 04:56 PM, Linus Torvalds wrote:
> On Thu, Mar 28, 2019 at 1:47 PM Linus Torvalds
> wrote:
>> On Thu, Mar 28, 2019 at 11:12 AM Waiman Long wrote:
>>> With the merging of owner into count for x86-64, there is only 16 bits
>>> left for reader count
On 03/28/2019 04:47 PM, Linus Torvalds wrote:
> On Thu, Mar 28, 2019 at 11:12 AM Waiman Long wrote:
>> With the merging of owner into count for x86-64, there is only 16 bits
>> left for reader count. It is theoretically possible for an application to
>> cause more than 6
, the extra constant argument to
rwsem_try_write_lock() and rwsem_try_write_lock_unqueued() should be
optimized out by the compiler.
Signed-off-by: Waiman Long
---
kernel/locking/rwsem-xadd.c | 25 ++---
1 file changed, 14 insertions(+), 11 deletions(-)
diff --git a/kernel
1,727 1,918
32 1,263 1,956
64 889 1,343
Signed-off-by: Waiman Long
---
kernel/locking/rwsem-xadd.c | 38 ++---
1 file changed, 31 insertions(+), 7 deletions(-)
diff --git a/kernel
Before combining owner and count, we are adding two new helpers for
accessing the owner value in the rwsem.
1) struct task_struct *rwsem_get_owner(struct rw_semaphore *sem)
2) bool is_rwsem_reader_owned(struct rw_semaphore *sem)
Signed-off-by: Waiman Long
---
kernel/locking/rwsem-xadd.c | 15
the maximum reader count to 32k.
A limit of 256 is also imposed on the number of readers that can be woken
up in one wakeup function call. This will eliminate the possibility of
waking up more than 64k readers and overflowing the count.
Signed-off-by: Waiman Long
---
kernel/locking/lock_events_list.h
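The arithmetic behind the two limits can be checked directly: with active readers capped at 32k and at most 256 woken per wakeup call, a 16-bit reader field (64k states) cannot wrap (constant names are mine, for illustration):

```c
#include <assert.h>

#define RWSEM_READER_FIELD_MAX	(64 * 1024)	/* 16-bit reader count */
#define RWSEM_MAX_READERS	(32 * 1024)	/* self-imposed cap */
#define MAX_READERS_WAKEUP	256		/* per wakeup function call */
```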
to be optimized.
To make rwsem more sane, a new locking scheme similar to the one in
qrwlock is now being used. The atomic long count has the following
bit definitions:
Bit 0 - writer locked bit
Bit 1 - waiters present bit
Bits 2-7 - reserved for future extension
Bits 8-X - reader count (24/56
rwsem_sleep_reader=308201
rwsem_sleep_writer=72281
So a lot more threads acquired the lock in the slowpath and more threads
went to sleep.
Signed-off-by: Waiman Long
---
kernel/locking/lock_events_list.h | 1 +
kernel/locking/rwsem-xadd.c | 62 ---
kernel/locking/rwsem.h
.
Signed-off-by: Waiman Long
---
kernel/locking/rwsem-xadd.c | 40 ++---
kernel/locking/rwsem.h | 5 +
2 files changed, 38 insertions(+), 7 deletions(-)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 4f036bda9063..35891c53338b
wasn't significant in this case, but this change
is required by a follow-on patch.
Signed-off-by: Waiman Long
---
kernel/locking/lock_events_list.h | 1 +
kernel/locking/rwsem-xadd.c | 88 ++-
kernel/locking/rwsem.h| 3 ++
3 files changed, 80
()/up_write()")
will have to be reverted.
Signed-off-by: Waiman Long
---
kernel/locking/rwsem-xadd.c | 74 -
1 file changed, 74 deletions(-)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 58b3a64e6f2c..4f036bda9063 100644
--- a/ker
       2,388    530    3,717    359
 64    1,424    322    4,060    401
128    1,642    510    4,488    628
It is obvious that RT tasks can benefit pretty significantly with this set
of patches.
Signed-off-by: Waiman Long
---
kernel/locking/rwsem-xadd.c | 11
924
32 78 300
64 38 195
240 50 149
There is no performance gain at low contention level. At high contention
level, however, this patch gives a pretty decent performance boost.
Signed-off-by: Waiman Long
became much more fair,
though there was a drop of about 26% in the mean locking operations
done which was a tradeoff of having better fairness.
Signed-off-by: Waiman Long
---
kernel/locking/lock_events_list.h | 2 +
kernel/locking/rwsem-xadd.c | 154 ++
kernel
spinning enables readers to acquire the lock more
quickly. So workloads that use a mix of readers and writers should
see an increase in performance as long as the reader critical sections
are short.
Finally, storing the write-lock owner into the count will allow
optimistic spinners to get to the lock
On 03/22/2019 01:50 PM, Christopher Lameter wrote:
> On Fri, 22 Mar 2019, Waiman Long wrote:
>
>> I am looking forward to it.
> There is also already rcu being used in these paths. kfree_rcu() would not
> be enough? It is an established mechanism that is mature and well
> under
On 03/22/2019 07:16 AM, Oleg Nesterov wrote:
> On 03/21, Matthew Wilcox wrote:
>> On Thu, Mar 21, 2019 at 05:45:10PM -0400, Waiman Long wrote:
>>
>>> To avoid this dire condition and reduce lock hold time of tasklist_lock,
>>> flush_sigqueue() is modified to p
On 03/21/2019 06:00 PM, Peter Zijlstra wrote:
> On Thu, Mar 21, 2019 at 05:45:12PM -0400, Waiman Long wrote:
>> If the freeing queue has many objects, freeing all of them consecutively
>> may cause soft lockup especially on a debug kernel. So kmem_free_up_q()
>> is modified
Add a new free_uid_to_q() function to put the user structure on
freeing queue instead of freeing it directly. That new function is then
called from __sigqueue_free() with a free_q parameter.
Signed-off-by: Waiman Long
---
include/linux/sched/user.h | 3 +++
kernel/signal.c| 2
If the freeing queue has many objects, freeing all of them consecutively
may cause soft lockup especially on a debug kernel. So kmem_free_up_q()
is modified to call cond_resched() if running in the process context.
Signed-off-by: Waiman Long
---
mm/slab_common.c | 11 ++-
1 file changed
the actual freeing of memory objects can be deferred until after the
tasklist_lock is released and irq re-enabled.
Signed-off-by: Waiman Long
---
include/linux/signal.h | 4 +++-
kernel/exit.c| 12
kernel/signal.c | 27 ---
securi
on, kmem_free_up_q() can be called to free all the memory
objects in the freeing queue after releasing the lock.
Signed-off-by: Waiman Long
---
include/linux/slab.h | 28
mm/slab_common.c | 41 +
2 files changed, 69 insertions
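A userspace sketch of the freeing-queue idea in this series: objects are queued while a lock is held and bulk-freed after it is released; the embedded-link trick reuses the dead object's own memory for the list node (the names and layout are my guesses at the API's shape, not the actual patch):

```c
#include <assert.h>
#include <stdlib.h>

struct kmem_free_node {
	struct kmem_free_node *next;
};

struct kmem_free_q {
	struct kmem_free_node *head;
};

/* Queue a dead object: its memory is reused as the list link, so no
 * allocation is needed while the caller still holds its lock. */
static void kmem_queue_free(struct kmem_free_q *q, void *obj)
{
	struct kmem_free_node *n = obj;

	n->next = q->head;
	q->head = n;
}

/* Drain the queue after the lock is dropped; returns objects freed.
 * The kernel version would kfree()/kmem_cache_free() and call
 * cond_resched() periodically to avoid soft lockups. */
static int kmem_free_up_q(struct kmem_free_q *q)
{
	int freed = 0;

	while (q->head) {
		struct kmem_free_node *n = q->head;

		q->head = n->next;
		free(n);
		freed++;
	}
	return freed;
}
```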
nel.
Waiman Long (4):
mm: Implement kmem objects freeing queue
signal: Make flush_sigqueue() use free_q to release memory
signal: Add free_uid_to_q()
mm: Do periodic rescheduling when freeing objects in kmem_free_up_q()
include/linux/sched/user.h | 3 +++
include/linux/signal.h |
On Thu, Mar 21, 2019 at 12:54 AM Jon Maloy wrote:
>
>
>
> > -Original Message-
> > From: Dmitry Vyukov
> > Sent: 20-Mar-19 17:41
> > To: Jon Maloy
> > Cc: syzbot ;
> > da...@davemloft.net; kuz...@ms2.inr.ac.ru; linux-
> > ker...@vger.kernel.org; net...@vger.kernel.org; syzkaller-
> >
On 03/18/2019 04:44 AM, Zhenzhong Duan wrote:
>
> On 2019/3/15 22:17, Waiman Long wrote:
>> On 03/15/2019 05:25 AM, Peter Zijlstra wrote:
>>> On Thu, Mar 14, 2019 at 04:42:12PM +0800, Zhenzhong Duan wrote:
>>>> This reverts commit f99fd22e4d4bc84880a8a31
On 03/15/2019 05:25 AM, Peter Zijlstra wrote:
> On Thu, Mar 14, 2019 at 04:42:12PM +0800, Zhenzhong Duan wrote:
>> This reverts commit f99fd22e4d4bc84880a8a3117311bbf0e3a6a9dc.
>>
>> It's unnecessary after commit "acpi_pm: Fix bootup softlockup due to PMTMR
>> counter read contention", the simple
From: Long Li
When sending a rdata, transport may return -EAGAIN. In this case
we should re-obtain credits because the session may have been
reconnected.
Change in v2: adjust_credits before re-sending
Signed-off-by: Long Li
---
fs/cifs/file.c | 71
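The resend pattern described above can be sketched as a retry loop: on -EAGAIN from the transport, re-obtain credits before resending, since the session may have been reconnected underneath us (all names and the callback shape are hypothetical, not the cifs code's):

```c
#include <assert.h>
#include <errno.h>

static int demo_calls;

/* Stand-in transport send: fails with -EAGAIN twice, then succeeds. */
static int demo_send(void *data)
{
	(void)data;
	return ++demo_calls < 3 ? -EAGAIN : 0;
}

/* Stand-in for re-obtaining credits after a possible reconnect. */
static void demo_adjust_credits(void *data)
{
	(void)data;
}

/* Hypothetical retry loop: only -EAGAIN triggers a credit re-adjust
 * and resend; any other result is returned to the caller. */
static int send_with_retry(int (*send_fn)(void *),
			   void (*adjust_credits)(void *),
			   void *data, int max_tries)
{
	int rc = -EAGAIN;

	while (max_tries-- > 0) {
		rc = send_fn(data);
		if (rc != -EAGAIN)
			break;
		adjust_credits(data);
	}
	return rc;
}
```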
From: Long Li
When sending a wdata, transport may return -EAGAIN. In this case
we should re-obtain credits because the session may have been
reconnected.
Change in v2: adjust_credits before re-sending
Signed-off-by: Long Li
---
fs/cifs/file.c | 77
ipc_obtain_object_check()
will not incorrectly match a deleted IPC id to a new one.
Reported-by: Manfred Spraul
Signed-off-by: Waiman Long
---
ipc/util.c | 25 ++---
1 file changed, 22 insertions(+), 3 deletions(-)
diff --git a/ipc/util.c b/ipc/util.c
index 78e14acb51a7
In the DMA engine framework, add 8250 uart dma to support the MediaTek uart.
If the MediaTek uart is enabled (SERIAL_8250_MT6577) and you want to improve
performance, you can enable this function.
Signed-off-by: Long Cheng
---
drivers/dma/mediatek/Kconfig | 11 +
drivers/dma/mediatek/Makefile
The filename matches mtk-uart-apdma.c,
so using "mtk-uart-apdma.txt" should be better.
Also add some properties.
Signed-off-by: Long Cheng
Reviewed-by: Rob Herring
---
.../devicetree/bindings/dma/8250_mtk_dma.txt | 33
.../devicetree/bindings/dma/mtk-uart
Modify uart rx and complete for DMA.
Signed-off-by: Long Cheng
---
drivers/tty/serial/8250/8250_mtk.c | 53
1 file changed, 23 insertions(+), 30 deletions(-)
diff --git a/drivers/tty/serial/8250/8250_mtk.c
b/drivers/tty/serial/8250/8250_mtk.c
index
1. add uart APDMA controller device node
2. add uart 0/1/2/3/4/5 DMA function
Signed-off-by: Long Cheng
---
arch/arm64/boot/dts/mediatek/mt2712e.dtsi | 51 +
1 file changed, 51 insertions(+)
diff --git a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
b/arch/arm64/boot
:
-remove unimportant parameters
-instead of cookie, use APIs of virtual channel.
-use of_dma_xlate_by_chan_id.
Changes compared to v1:
-main revised file, 8250_mtk_dma.c
--parameters renamed for consistency
--remove atomic operation
Long Cheng (4):
dmaengine: 8250_mtk_dma: add MediaTek uart DMA support
On Wed, Mar 6, 2019 at 9:42 AM syzbot
wrote:
>
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit:63bdf4284c38 Merge branch 'linus' of git://git.kernel.org/..
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=100347cb20
> kernel config:
From: Long Li
When sending a wdata, transport may return -EAGAIN. In this case
we should re-obtain credits because the session may have been
reconnected.
Signed-off-by: Long Li
---
fs/cifs/file.c | 61 +-
1 file changed, 31 insertions(+), 30
From: Long Li
When sending a rdata, transport may return -EAGAIN. In this case
we should re-obtain credits because the session may have been
reconnected.
Signed-off-by: Long Li
---
fs/cifs/file.c | 51 +-
1 file changed, 26 insertions(+), 25
For an uncontended rwsem, count and owner are the only fields a task
needs to touch when acquiring the rwsem. So they are put next to each
other to increase the chance that they will share the same cacheline.
Suggested-by: Linus Torvalds
Signed-off-by: Waiman Long
---
include/linux/rwsem.h
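The cacheline argument can be illustrated with a simplified struct: only the adjacency of count and owner matters here (field types are stand-ins; the real rw_semaphore carries a raw spinlock, a list head, and optional debug fields):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct rw_semaphore. */
struct rwsem_sketch {
	long	count;
	void	*owner;		/* moved adjacent to count */
	void	*wait_lock;	/* stand-in for the spinlock */
	void	*wait_list;	/* stand-in for the list head */
};
```

With both fields in the first 16 bytes, an uncontended acquire touches a single 64-byte cacheline.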
of the rwsem count and owner fields to give more information
about what is wrong with the rwsem.
Signed-off-by: Waiman Long
Acked-by: Davidlohr Bueso
---
kernel/locking/rwsem.c | 3 ++-
kernel/locking/rwsem.h | 19 ---
2 files changed, 14 insertions(+), 8 deletions(-)
diff --git
On bare metal, the pvqspinlock event counts will always be 0, so there
is no point in showing their corresponding debugfs files. They are
therefore skipped in this case.
Signed-off-by: Waiman Long
Acked-by: Davidlohr Bueso
---
kernel/locking/lock_events.c | 28 +++-
1 file
in the slowpath were
write-locks in the optimistic spinning code path with no sleeping at
all. For this system, over 97% of the locks are acquired via optimistic
spinning. It illustrates the importance of optimistic spinning in
improving the performance of rwsem.
Signed-off-by: Waiman Long
Acked-by: Davidlohr
() calls are replaced by either lockevent_inc() or
lockevent_cond_inc() calls.
The qstat_hop() call is renamed to lockevent_pv_hop(). The "reset_counters"
debugfs file is also renamed to ".reset_counts".
Signed-off-by: Waiman Long
Acked-by: Davidlohr Bueso
---
kernel/locking/lock_e
The atomic_long_cmpxchg_acquire() in rwsem_try_read_lock_unqueued() is
replaced by atomic_long_try_cmpxchg_acquire() to simplify the code and
generate slightly better assembly code. There is no functional change.
Signed-off-by: Waiman Long
Acked-by: Will Deacon
Acked-by: Davidlohr Bueso
The rwsem_down_read_failed*() functions were relocated from above the
optimistic spinning section to below that section. This enables the
reader functions to use optimistic spinning in future patches. There
is no code change.
Signed-off-by: Waiman Long
Acked-by: Will Deacon
Acked-by: Davidlohr
() are also moved over to rwsem-xadd.h.
Signed-off-by: Waiman Long
Acked-by: Davidlohr Bueso
---
kernel/locking/rwsem.c | 3 ---
kernel/locking/rwsem.h | 12 ++--
2 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 59e5848
directory.
Signed-off-by: Waiman Long
Acked-by: Davidlohr Bueso
---
arch/Kconfig| 10 +++
arch/x86/Kconfig| 8 ---
kernel/locking/Makefile | 1 +
kernel/locking/lock_events.c| 153
kernel/locking