Peter Zijlstra writes:
> On Wed, Mar 28, 2018 at 01:04:36PM +0200, Peter Zijlstra wrote:
>> On Wed, Mar 28, 2018 at 04:25:37PM +1100, Michael Ellerman wrote:
>> > Documenting it would definitely be good, but even then I'd be inclined
>> > to leave the barrier in our implementation. Matching the
Peter Zijlstra writes:
> On Wed, Mar 28, 2018 at 04:25:37PM +1100, Michael Ellerman wrote:
>> That was tempting, but it leaves unfixed all the other potential
>> callers, both in-tree and out-of-tree, and in code that's yet to be
>> written.
>
> So I myself don't care one teeny tiny bit about
On Wed, Mar 28, 2018 at 01:04:36PM +0200, Peter Zijlstra wrote:
> On Wed, Mar 28, 2018 at 04:25:37PM +1100, Michael Ellerman wrote:
> > Documenting it would definitely be good, but even then I'd be inclined
> > to leave the barrier in our implementation. Matching the documented
> > behaviour is
On Wed, Mar 28, 2018 at 04:25:37PM +1100, Michael Ellerman wrote:
> That was tempting, but it leaves unfixed all the other potential
> callers, both in-tree and out-of-tree, and in code that's yet to be
> written.
So I myself don't care one teeny tiny bit about out of tree code, they
get to
On Wed, Mar 28, 2018 at 08:51:35AM +1100, Benjamin Herrenschmidt wrote:
> On Tue, 2018-03-27 at 15:13 +0200, Andrea Parri wrote:
> > >
> > > So unless it's very performance sensitive, I'd rather have things like
> > > spin_is_locked be conservative by default and provide simpler ordering
> > >
Andrea Parri writes:
> On Tue, Mar 27, 2018 at 11:06:56AM +1100, Benjamin Herrenschmidt wrote:
>> On Mon, 2018-03-26 at 12:37 +0200, Andrea Parri wrote:
>> > Commit 51d7d5205d338 ("powerpc: Add smp_mb() to arch_spin_is_locked()")
>> > added an smp_mb() to arch_spin_is_locked(), in order to ensure
On Tue, 2018-03-27 at 15:13 +0200, Andrea Parri wrote:
> >
> > So unless it's very performance sensitive, I'd rather have things like
> > spin_is_locked be conservative by default and provide simpler ordering
> > semantics.
>
> Well, it might not be "very performance sensitive" but allow me to
On Tue, Mar 27, 2018 at 10:33:06PM +1100, Benjamin Herrenschmidt wrote:
> On Tue, 2018-03-27 at 12:25 +0200, Andrea Parri wrote:
> > > I would rather wait until it is properly documented. Debugging that IPC
> > > problem took a *LOT* of time and energy, I wouldn't want these issues
> > > to come
On Tue, 2018-03-27 at 12:25 +0200, Andrea Parri wrote:
> > I would rather wait until it is properly documented. Debugging that IPC
> > problem took a *LOT* of time and energy, I wouldn't want these issues
> > to come and bite us again.
>
> I understand. And I'm grateful for this debugging as well
On Tue, Mar 27, 2018 at 11:06:56AM +1100, Benjamin Herrenschmidt wrote:
> On Mon, 2018-03-26 at 12:37 +0200, Andrea Parri wrote:
> > Commit 51d7d5205d338 ("powerpc: Add smp_mb() to arch_spin_is_locked()")
> > added an smp_mb() to arch_spin_is_locked(), in order to ensure that
> >
> > Thread 0
On Mon, 2018-03-26 at 12:37 +0200, Andrea Parri wrote:
> Commit 51d7d5205d338 ("powerpc: Add smp_mb() to arch_spin_is_locked()")
> added an smp_mb() to arch_spin_is_locked(), in order to ensure that
>
> Thread 0                      Thread 1
>
> spin_lock(A);
Commit 51d7d5205d338 ("powerpc: Add smp_mb() to arch_spin_is_locked()")
added an smp_mb() to arch_spin_is_locked(), in order to ensure that
Thread 0                        Thread 1
spin_lock(A);                   spin_lock(B);
r0 = spin_is_locked(B)          r1 =