On Thu, Sep 01, 2016 at 01:51:34PM +0200, Peter Zijlstra wrote:
> On Thu, Sep 01, 2016 at 01:04:26PM +0200, Manfred Spraul wrote:
>
> > >So for both power and arm64, you can in fact model spin_unlock_wait()
> > >as LOCK+UNLOCK.
>
> > Is this consensus?
>
> Dunno, but it was done to fix your
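The exchange above concerns modeling spin_unlock_wait() as LOCK+UNLOCK on power and arm64. A minimal userspace sketch of that model, with a pthread mutex standing in for the kernel spinlock (the function name is hypothetical, not a real API):

```c
#include <pthread.h>

/* Hypothetical userspace model of spin_unlock_wait(): wait until the
 * lock is momentarily free. Implementing it as LOCK+UNLOCK gives the
 * waiter the full acquire/release ordering of a lock handover, which
 * is the modeling discussed for power and arm64. */
static void spin_unlock_wait_model(pthread_mutex_t *lock)
{
	pthread_mutex_lock(lock);   /* acquire: serializes with any holder */
	pthread_mutex_unlock(lock); /* release: the lock is never kept */
}
```

The point of the model is that the waiter gets the same ordering guarantees as a thread that briefly took the lock, rather than the weaker guarantees of a bare load loop.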
On Mon, Aug 29, 2016 at 03:16:52PM +, Mathieu Desnoyers wrote:
> - On Aug 27, 2016, at 12:22 AM, Josh Triplett j...@joshtriplett.org wrote:
>
> > On Thu, Aug 25, 2016 at 05:56:25PM +, Ben Maurer wrote:
> >> rseq opens up a whole world of algorithms to userspace – algorithms
> >> that
On Fri, Aug 12, 2016 at 08:43:55PM +0200, Manfred Spraul wrote:
> Hi Boqun,
>
> On 08/12/2016 04:47 AM, Boqun Feng wrote:
> > > We should not be doing an smp_mb() right after a spin_lock(), makes no
> > > sense. The
> > > spinlock machinery should guarant
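The objection quoted above — that an smp_mb() right after spin_lock() makes no sense because the lock machinery itself should guarantee the ordering — can be illustrated with a toy C11 spinlock (a sketch, not the kernel's implementation):

```c
#include <stdatomic.h>

typedef struct { atomic_int locked; } toy_spinlock_t;

/* The successful exchange has acquire ordering, so the critical
 * section's loads and stores cannot move before it; a full barrier
 * placed right after taking the lock would be redundant for that. */
static void toy_spin_lock(toy_spinlock_t *l)
{
	while (atomic_exchange_explicit(&l->locked, 1, memory_order_acquire))
		; /* spin until we observe the lock free */
}

static void toy_spin_unlock(toy_spinlock_t *l)
{
	atomic_store_explicit(&l->locked, 0, memory_order_release);
}
```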
ition check on whether the
id is no less than nr_cpu_ids in the sibling CPU iteration code.
Signed-off-by: Boqun Feng <boqun.f...@gmail.com>
---
arch/powerpc/kernel/smp.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 25a39052bf6b..9c6f3fd58059 100644
--- a/arch/powerpc/kernel/
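The one-line fix described above is a bounds check on the computed CPU id before it is used to index per-cpu data. A minimal sketch of the guard (names taken from the changelog's context; this is not the patch itself):

```c
#include <stdbool.h>

/* Sketch of the check the patch adds: a sibling id produced by the
 * per-core iteration is only usable if it is below nr_cpu_ids. */
static bool sibling_id_valid(unsigned int id, unsigned int nr_cpu_ids)
{
	return id < nr_cpu_ids;
}
```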
On Sun, Aug 14, 2016 at 03:02:20PM +, Mathieu Desnoyers wrote:
> - On Aug 12, 2016, at 9:28 PM, Boqun Feng boqun.f...@gmail.com wrote:
>
> > On Fri, Aug 12, 2016 at 06:11:45PM +, Mathieu Desnoyers wrote:
> >> - On Aug 12, 2016, at 12:35 PM, Boqun Feng boqun
On Fri, Aug 12, 2016 at 06:11:45PM +, Mathieu Desnoyers wrote:
> - On Aug 12, 2016, at 12:35 PM, Boqun Feng boqun.f...@gmail.com wrote:
>
> > On Fri, Aug 12, 2016 at 01:30:15PM +0800, Boqun Feng wrote:
> > [snip]
> >> > > Besides, do we allow users
On Fri, Aug 12, 2016 at 01:30:15PM +0800, Boqun Feng wrote:
[snip]
> > > Besides, do we allow userspace programs do read-only access to the
> > > memory objects modified by do_rseq(). If so, we have a problem when
> > > there are two writes in a do_rseq()(either i
On Fri, Aug 12, 2016 at 03:10:38AM +, Mathieu Desnoyers wrote:
> - On Aug 11, 2016, at 9:28 PM, Boqun Feng boqun.f...@gmail.com wrote:
>
> > On Thu, Aug 11, 2016 at 11:26:30PM +, Mathieu Desnoyers wrote:
> >> - On Jul 24, 2016, at 2:01 PM, Dave Watson dav
On Thu, Aug 11, 2016 at 11:31:06AM -0700, Davidlohr Bueso wrote:
> On Thu, 11 Aug 2016, Peter Zijlstra wrote:
>
> > On Wed, Aug 10, 2016 at 04:29:22PM -0700, Davidlohr Bueso wrote:
> >
> > > (1) As Manfred suggested, have a patch 1 that fixes the race against
> > > mainline
> > > with the
On Wed, Aug 10, 2016 at 12:17:57PM -0700, Davidlohr Bueso wrote:
> On Wed, 10 Aug 2016, Manfred Spraul wrote:
>
> > On 08/10/2016 02:05 AM, Benjamin Herrenschmidt wrote:
> > > On Tue, 2016-08-09 at 20:52 +0200, Manfred Spraul wrote:
> > > > Hi Benjamin, Hi Michael,
> > > >
> > > > regarding
On Thu, Aug 11, 2016 at 11:26:30PM +, Mathieu Desnoyers wrote:
> - On Jul 24, 2016, at 2:01 PM, Dave Watson davejwat...@fb.com wrote:
>
> >>> +static inline __attribute__((always_inline))
> >>> +bool rseq_finish(struct rseq_lock *rlock,
> >>> + intptr_t *p, intptr_t to_write,
> >>> +
On Wed, Aug 10, 2016 at 05:33:44PM +, Mathieu Desnoyers wrote:
> - On Aug 9, 2016, at 12:13 PM, Boqun Feng boqun.f...@gmail.com wrote:
>
>
>
> >
> > However, I'm thinking maybe we can use some tricks to avoid unnecessary
> > aborts-on-preemption.
>
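The rseq threads above revolve around a prepare/commit pattern: compute an update to per-cpu data, then commit it, with the kernel aborting the commit if the thread was preempted or migrated in between. A userspace model of that pattern, with the abort-and-retry modeled by a compare-and-swap (real rseq commits with a single plain store; everything here, including the function name, is a sketch):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Model: read the slot, compute the update, then "commit" only if the
 * slot is unchanged -- the failed exchange stands in for rseq's
 * abort-on-preemption, and the loop is the retry after an abort. */
static intptr_t percpu_counter_inc_model(_Atomic intptr_t *slot)
{
	intptr_t old, new;

	do {
		old = atomic_load_explicit(slot, memory_order_relaxed);
		new = old + 1; /* the "critical section" computation */
	} while (!atomic_compare_exchange_weak(slot, &old, new));
	return new;
}
```

This CAS loop is also the fallback such algorithms typically use on kernels without the rseq syscall.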
On Wed, Aug 03, 2016 at 10:03:32PM -0700, Andy Lutomirski wrote:
> On Wed, Aug 3, 2016 at 9:27 PM, Boqun Feng <boqun.f...@gmail.com> wrote:
> > On Wed, Aug 03, 2016 at 09:37:57AM -0700, Andy Lutomirski wrote:
> >> On Wed, Aug 3, 2016 at 5:27 AM, Peter Zijlstra <pet...
On Sun, Aug 07, 2016 at 03:36:24PM +, Mathieu Desnoyers wrote:
> - On Aug 3, 2016, at 11:45 AM, Boqun Feng boqun.f...@gmail.com wrote:
>
> > On Wed, Aug 03, 2016 at 03:19:40PM +0200, Peter Zijlstra wrote:
> >> On Thu, Jul 21, 2016 at 05:14:16PM -0400, Mathieu Desnoye
On Wed, Aug 03, 2016 at 09:37:57AM -0700, Andy Lutomirski wrote:
> On Wed, Aug 3, 2016 at 5:27 AM, Peter Zijlstra wrote:
> > On Tue, Jul 26, 2016 at 03:02:19AM +, Mathieu Desnoyers wrote:
> >> We really care about preemption here. Every migration implies a
> >> preemption from a user-space
On Wed, Aug 03, 2016 at 03:19:40PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 21, 2016 at 05:14:16PM -0400, Mathieu Desnoyers wrote:
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index 1209323..daef027 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -5085,6 +5085,13 @@ M: Joe Perches
As rseq syscall is enabled on PPC, implement the self-tests on PPC to
verify the implementation of the syscall.
Please note we only support 32bit userspace on BE kernel.
Signed-off-by: Boqun Feng <boqun.f...@gmail.com>
---
v1-->v2:
1. Remove branch in rseq_finish() fastpath
2. Use bne- instead of
On Thu, Jul 28, 2016 at 02:59:45AM +, Mathieu Desnoyers wrote:
> - On Jul 27, 2016, at 11:05 AM, Boqun Feng boqun.f...@gmail.com wrote:
>
> > As rseq syscall is enabled on PPC, implement the self-tests on PPC to
> > verify the implementation of the syscall.
> >
-conditional atomics.
Signed-off-by: Boqun Feng <boqun.f...@gmail.com>
---
arch/powerpc/include/asm/systbl.h | 1 +
arch/powerpc/include/asm/unistd.h | 2 +-
arch/powerpc/include/uapi/asm/unistd.h | 1 +
3 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/inclu
As rseq syscall is enabled on PPC, implement the self-tests on PPC to
verify the implementation of the syscall.
Please note we only support 32bit userspace on BE kernel.
Signed-off-by: Boqun Feng <boqun.f...@gmail.com>
---
tools/testing/selftests/rseq/param_test.c | 14
tools/t
Call the rseq_handle_notify_resume() function on return to userspace if
TIF_NOTIFY_RESUME thread flag is set.
Increment the event counter and perform fixup on the pre-signal when a
signal is delivered on top of a restartable sequence critical section.
Signed-off-by: Boqun Feng <boqun.f...@gmail.com>
---
arch/powerpc
, having test_data_entry::count as int needs more
care on endian handling.
To make things simpler and more consistent, convert
test_data_entry::count to type intptr_t, which also makes the coming
tests for ppc64le and ppc64 share the same code.
Signed-off-by: Boqun Feng <boqun.f...@gmail.com>
---
tools/testing/selftests
Hi Mathieu,
On Thu, Jul 21, 2016 at 05:14:16PM -0400, Mathieu Desnoyers wrote:
> Expose a new system call allowing each thread to register one userspace
> memory area to be used as an ABI between kernel and user-space for two
> purposes: user-space restartable sequences and quick access to read
On Fri, Jul 15, 2016 at 06:35:56PM +0200, Peter Zijlstra wrote:
> On Fri, Jul 15, 2016 at 12:07:03PM +0200, Peter Zijlstra wrote:
> > > So if we are kicked by the unlock_slowpath, and the lock is stealed by
> > > someone else, we need hash its node again and set l->locked to
> > > _Q_SLOW_VAL,
On Thu, Jul 14, 2016 at 11:46:26AM +0200, Peter Zijlstra wrote:
> On Thu, Jul 14, 2016 at 11:37:33AM +0200, Peter Zijlstra wrote:
> > static inline u8 *__qspinlock_lock_byte(struct qspinlock *lock)
> > {
> > return (u8 *)lock + 3 * IS_BUILTIN(__BIG_ENDIAN);
> > }
>
> Bugger, that doesn't
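The helper quoted above picks out the locked byte of the qspinlock word, whose offset depends on endianness. A standalone sketch of the same trick, using the compiler's byte-order macros rather than the kernel's IS_BUILTIN test:

```c
#include <stdint.h>

/* The locked byte is the least-significant byte of the 32-bit lock
 * word: offset 0 on a little-endian host, offset 3 on big-endian. */
static uint8_t *lock_byte(uint32_t *lock)
{
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
	return (uint8_t *)lock + 3;
#else
	return (uint8_t *)lock;
#endif
}
```

Writing through the returned pointer sets the word's low byte regardless of host endianness, which is exactly what the byte-granular unlock path relies on.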
On Mon, Jul 11, 2016 at 01:32:11PM -0400, Waiman Long wrote:
> The percpu APIs are extensively used in the Linux kernel to reduce
> cacheline contention and improve performance. For some use cases, the
> percpu APIs may be too fine-grain for distributed resources whereas
> a per-node based
On Mon, Jul 04, 2016 at 03:42:59PM +0900, Byungchul Park wrote:
[snip]
> > > +2. A lock has dependency with all locks in the releasing context, having
> > > + been held since the lock was held.
> >
> > But you cannot tell this. The 'since the lock was held' thing fully
> > depends on timing and
On Tue, Jun 28, 2016 at 11:39:18AM +0800, xinhui wrote:
[snip]
> > > +{
> > > + struct lppaca *lp = _of(cpu);
> > > +
> > > + if (unlikely(!(lppaca_shared_proc(lp) ||
> > > + lppaca_dedicated_proc(lp
> >
> > Do you want to detect whether we are running in a guest(ie. pseries
>
and dedicated mode.
> So add lppaca_dedicated_proc macro in lppaca.h
>
> Suggested-by: Boqun Feng <boqun.f...@gmail.com>
> Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
> ---
> arch/powerpc/include/asm/lppaca.h | 6 ++
> arch/powerpc/include/asm/spinlock.h | 15 +++
> 2 files changed, 21 insertions
On Mon, Jun 27, 2016 at 01:41:28PM -0400, Pan Xinhui wrote:
> this supports to fix lock holder preempted issue which run as a guest
>
> for kernel users, we could use bool vcpu_is_preempted(int cpu) to detech
> if one vcpu is preempted or not.
>
> The default implementation is a macrodefined by
On Mon, Jun 27, 2016 at 10:09:59AM +0200, Peter Zijlstra wrote:
[snip]
>
> No, this is entirely insane, also broken.
>
> No vectors, no actual function calls, nothing like that. You want the
> below to completely compile away and generate the exact 100% same code
> it does today.
>
Point
On Sun, Jun 26, 2016 at 02:59:26PM +0800, Boqun Feng wrote:
[snip]
>
> This should be:
>
> extern struct vcpu_preempt_ops vcpu_preempt_ops;
>
> And I tested this one along with modified version of Xinhui's patch.
>
> The test showed that even in a not over-commit
On Sun, Jun 26, 2016 at 03:08:20PM +0800, panxinhui wrote:
[snip]
> > @@ -106,6 +109,9 @@ bool osq_lock(struct optimistic_spin_queue *lock)
> > node->prev = prev;
> > WRITE_ONCE(prev->next, node);
> >
> > + old = old - 1;
> > + vpc = vcpu_preempt_count();
> > +
> > /*
> > *
> could detect whether a vcpu preemption happens between them.
> >
> > 2. vcpu_is_preempted(), used to check whether other cpu's vcpu is
> > preempted.
> >
> > This patch also implements those primitives on pseries and wire them up.
> >
> > Si
On Sun, Jun 26, 2016 at 02:10:57PM +0800, Boqun Feng wrote:
> On Sun, Jun 26, 2016 at 01:21:04PM +0800, panxinhui wrote:
> >
> > > 在 2016年6月26日,03:20,Peter Zijlstra <pet...@infradead.org> 写道:
> > >
> > > On Sun, Jun 26, 2016 at 01:27:56AM +0800, pan
between
locking functions and arch or hypervisor related code.
There are two sets of primitives:
1. vcpu_preempt_count() and vcpu_has_preempted(), they must be used
pairwisely in a same preempt disable critical section. And they
could detect whether a vcpu preemption happens between them.
2.
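A sketch of how the two primitives described above would pair up inside a spin loop. The names come from the quoted changelog; the counter here is a stand-in for whatever state the hypervisor actually exposes:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Stand-in for a hypervisor-maintained preemption counter. */
static _Atomic int fake_vcpu_preempt_count;

static int vcpu_preempt_count_model(void)
{
	return atomic_load(&fake_vcpu_preempt_count);
}

/* Paired use: snapshot the count before spinning, then bail out of
 * the spin when the count has moved -- a vcpu preemption happened
 * somewhere in between, so further spinning is likely wasted. */
static bool vcpu_has_preempted_model(int snapshot)
{
	return vcpu_preempt_count_model() != snapshot;
}
```

In the proposed usage, a spinner would take the snapshot once (with preemption disabled) and test vcpu_has_preempted_model() on each loop iteration.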
On Sat, Jun 25, 2016 at 09:20:25PM +0200, Peter Zijlstra wrote:
> On Sun, Jun 26, 2016 at 01:27:56AM +0800, panxinhui wrote:
> > >> Would that not have issues where the owner cpu is kept running but the
> > >> spinner (ie. _this_ vcpu) gets preempted? I would think that in that
> > >> case we too
On Sat, Jun 25, 2016 at 06:15:40PM +0200, Peter Zijlstra wrote:
> On Sat, Jun 25, 2016 at 11:21:30PM +0800, Boqun Feng wrote:
> > So on PPC, we have lppaca::yield_count to detect when an vcpu is
> > preempted, if the yield_count is even, the vcpu is running, otherwise it
&
On Sat, Jun 25, 2016 at 06:09:22PM +0200, Peter Zijlstra wrote:
> On Sat, Jun 25, 2016 at 11:21:30PM +0800, Boqun Feng wrote:
> > >
> > > int vpc = vcpu_preempt_count();
> > >
> > > ...
> > >
> > >
On Sat, Jun 25, 2016 at 04:24:47PM +0200, Peter Zijlstra wrote:
> On Sat, Jun 25, 2016 at 01:42:03PM -0400, Pan Xinhui wrote:
> > An over-committed guest with more vCPUs than pCPUs has a heavy overload
> > in osq_lock().
> >
> > This is because vCPU A hold the osq lock and yield out, vCPU B wait
Hi Wei Fang,
On Wed, Jun 22, 2016 at 11:01:15AM +0800, Wei Fang wrote:
> We triggered soft-lockup under stress test which
> open/access/write/close one file concurrently on more than
> five different CPUs:
>
> WARN: soft lockup - CPU#0 stuck for 11s! [who:30631]
> ...
> [] dput+0x100/0x298
> []
Hi Paul,
On Tue, Jun 21, 2016 at 11:39:46AM -0700, Paul E. McKenney wrote:
> On Mon, Jun 20, 2016 at 09:29:56PM +0200, Arnd Bergmann wrote:
> > On Monday, June 20, 2016 11:37:57 AM CEST Paul E. McKenney wrote:
> > > On Mon, Jun 20, 2016 at 08:29:48PM +0200, Arnd Bergmann wrote:
> > > > On Monday,
On Fri, Jun 17, 2016 at 02:17:27PM -0400, Waiman Long wrote:
> On 06/17/2016 11:45 AM, Will Deacon wrote:
> > On Fri, Jun 17, 2016 at 11:26:41AM -0400, Waiman Long wrote:
> > > On 06/16/2016 08:48 PM, Boqun Feng wrote:
> > > > On Thu, Jun 16, 2016 at 05:35
On Thu, Jun 16, 2016 at 05:35:54PM -0400, Waiman Long wrote:
> On 06/15/2016 10:19 PM, Boqun Feng wrote:
> > On Wed, Jun 15, 2016 at 03:01:19PM -0400, Waiman Long wrote:
> > > On 06/15/2016 04:04 AM, Boqun Feng wrote:
> > > > Hi Waiman,
> > > >
> &
On Wed, Jun 15, 2016 at 03:01:19PM -0400, Waiman Long wrote:
> On 06/15/2016 04:04 AM, Boqun Feng wrote:
> > Hi Waiman,
> >
> > On Tue, Jun 14, 2016 at 06:48:04PM -0400, Waiman Long wrote:
> > > The osq_lock() and osq_unlock() function may not provide the neces
Hi Waiman,
On Tue, Jun 14, 2016 at 06:48:04PM -0400, Waiman Long wrote:
> The osq_lock() and osq_unlock() function may not provide the necessary
> acquire and release barrier in some cases. This patch makes sure
> that the proper barriers are provided when osq_lock() is successful
> or when
On Mon, Jun 13, 2016 at 12:45:23PM -0700, Davidlohr Bueso wrote:
> On Fri, 03 Jun 2016, Pan Xinhui wrote:
>
> > The existing version uses a heavy barrier while only release semantics
> > is required. So use atomic_sub_return_release instead.
> >
> > Suggested-by: Peter Zijlstra (Intel)
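In C11 terms, the change being reviewed swaps a sequentially-consistent (full-barrier) decrement for a release-only one; release ordering is all an unlock-style decrement needs, since its job is only to publish the critical section's stores before the count drops (a sketch, not the kernel code):

```c
#include <stdatomic.h>

/* Unlock-style decrement: release ordering publishes everything done
 * before it to whoever later acquires the same variable; the seq_cst
 * variant adds a full barrier the unlock path does not need. */
static int dec_release(_Atomic int *v)
{
	/* fetch_sub returns the previous value; return the new one. */
	return atomic_fetch_sub_explicit(v, 1, memory_order_release) - 1;
}
```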
atch therefore fixes the issue and also cleans the
arch_spin_unlock_wait() a little bit by removing superfluous memory
barriers in loops and consolidating the implementations for PPC32 and
PPC64 into one.
Suggested-by: "Paul E. McKenney" <paul...@linux.vnet.ibm.com>
Signed-off-by: Boqun F
atch therefore fixes the issue and also cleans the
arch_spin_unlock_wait() a little bit by removing superfluous memory
barriers in loops and consolidating the implementations for PPC32 and
PPC64 into one.
Suggested-by: "Paul E. McKenney"
Signed-off-by: Boqun Feng
Reviewed-by:
On Fri, Jun 10, 2016 at 01:25:03AM +0800, Boqun Feng wrote:
> On Thu, Jun 09, 2016 at 10:23:28PM +1000, Michael Ellerman wrote:
> > On Wed, 2016-06-08 at 15:59 +0200, Peter Zijlstra wrote:
> > > On Wed, Jun 08, 2016 at 11:49:20PM +1000, Michael Ellerman wrote:
> > >
&
On Thu, Jun 09, 2016 at 10:23:28PM +1000, Michael Ellerman wrote:
> On Wed, 2016-06-08 at 15:59 +0200, Peter Zijlstra wrote:
> > On Wed, Jun 08, 2016 at 11:49:20PM +1000, Michael Ellerman wrote:
> >
> > > > Ok; what tree does this go in? I have this dependent series which I'd
> > > > like to get