Michael Ellerman writes:
> Peter Zijlstra writes:
...
>
> I'll do some kernbench runs tomorrow and see if it shows up there.
Finally got some numbers.
The summary is it's in the noise, the average delta is +0.1s (total time
is ~67s, so 0.17%) but the standard deviation is ~0.2s.
So that's probably in the noise.
Linus Torvalds writes:
> On Tue, Jul 17, 2018 at 7:45 AM Michael Ellerman wrote:
>>
>>
>> Interesting. I don't see anything as high as 18%, it's more spread out:
>>
>> 7.81% context_switch [kernel.kallsyms] [k] cgroup_rstat_updated
>
> Oh, see that's the difference.
>
> You're running in a non-root cgroup, I think.
On Tue, Jul 17, 2018 at 12:37 PM Alan Stern wrote:
>
> Why not? Instructions are allowed to migrate _into_ critical sections,
> just not _out_ of them. So a store preceding the start of a spinlocked
> region can migrate in and be executed after a load that is inside the
> region.
Hmm, yes of course.
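A minimal litmus sketch of the migration Alan describes (the test name, the observer thread and its smp_mb() are illustrative additions, not from the thread): P0's store to x precedes the spin_lock(), yet it may be executed after the load of y inside the critical section, so the store-buffering outcome below stays allowed.

C store-migrates-into-cs

{}

P0(int *x, int *y, spinlock_t *s)
{
	int r0;

	WRITE_ONCE(*x, 1);	/* may migrate into the critical section */
	spin_lock(s);
	r0 = READ_ONCE(*y);	/* ... and execute after this load */
	spin_unlock(s);
}

P1(int *x, int *y)
{
	int r1;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r1 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r1=0)

Running this through herd7 with the LKMM should report the "exists" clause as satisfiable, which is exactly the point: acquire semantics forbid motion out of the locked region, not into it.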
On Tue, Jul 17, 2018 at 09:40:01PM +0200, Andrea Parri wrote:
> > > That said, I don't understand the powerpc memory ordering. I thought
> > > the rules were "isync on lock, lwsync on unlock".
> > >
> > > That's what the AIX docs imply, at least.
> > >
> > > In particular, I find:
> > >
> > >
On Tue, 17 Jul 2018, Peter Zijlstra wrote:
> Isn't ISYNC the instruction-sync pipeline flush instruction? That is
> used as an smp_rmb() here, together with the control dependency from
> the LL/SC, to form a LOAD->{LOAD,STORE} (aka LOAD-ACQUIRE) ordering?
That's right.
> Where LWSYNC provides
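For reference, a simplified sketch of the sequences being discussed, modelled loosely on the powerpc spinlock code; the opcodes are real, but the actual kernel implementation differs in detail (lock tokens, and the acquire barrier can be lwsync on some configs), so treat this as an approximation:

/* Lock acquire: LL/SC loop, control dependency on the loaded value,
 * then isync so no later load or store executes before the lock is
 * known to be held -- LOAD->{LOAD,STORE} (acquire) ordering. */
static inline void ppc_spin_lock_sketch(volatile unsigned int *lock)
{
	unsigned int tmp;

	__asm__ __volatile__(
"1:	lwarx	%0,0,%1\n"	/* LL: load-reserve the lock word */
"	cmpwi	0,%0,0\n"	/* already held? */
"	bne-	1b\n"		/* spin: control dependency */
"	stwcx.	%2,0,%1\n"	/* SC: try to claim it */
"	bne-	1b\n"		/* lost the reservation: retry */
"	isync\n"		/* acquire barrier */
	: "=&r" (tmp)
	: "r" (lock), "r" (1)
	: "cr0", "memory");
}

/* Unlock: lwsync orders all prior loads and stores before the
 * releasing store, but not prior stores against later loads. */
static inline void ppc_spin_unlock_sketch(volatile unsigned int *lock)
{
	__asm__ __volatile__("lwsync" ::: "memory");
	*lock = 0;
}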
On Tue, Jul 17, 2018 at 11:49:41AM -0700, Linus Torvalds wrote:
> On Tue, Jul 17, 2018 at 11:44 AM Linus Torvalds
> wrote:
> >
> > (a) lwsync is a memory barrier for all the "easy" cases (ie
> > load->store, load->load, and store->load).
>
> That last one should have been "store->store", of course.
> > That said, I don't understand the powerpc memory ordering. I thought
> > the rules were "isync on lock, lwsync on unlock".
> >
> > That's what the AIX docs imply, at least.
> >
> > In particular, I find:
> >
> > "isync is not a memory barrier instruction, but the
> > load-compare-conditional branch-isync sequence can provide this
On Tue, Jul 17, 2018 at 08:42:55PM +0200, Peter Zijlstra wrote:
> On Tue, Jul 17, 2018 at 11:33:41AM -0700, Paul E. McKenney wrote:
> > On Tue, Jul 17, 2018 at 09:19:15AM -0700, Linus Torvalds wrote:
> > >
> > > In particular, I find:
> > >
> > > "isync is not a memory barrier instruction, but
On Tue, 17 Jul 2018, Linus Torvalds wrote:
> On Tue, Jul 17, 2018 at 11:31 AM Paul E. McKenney
> wrote:
> >
> > The isync provides ordering roughly similar to lwsync, but nowhere near
> > as strong as sync, and it is sync that would be needed to cause lock
> > acquisition to provide full ordering.
On Tue, Jul 17, 2018 at 11:44:23AM -0700, Linus Torvalds wrote:
> On Tue, Jul 17, 2018 at 11:31 AM Paul E. McKenney
> wrote:
> >
> > The isync provides ordering roughly similar to lwsync, but nowhere near
> > as strong as sync, and it is sync that would be needed to cause lock
> > acquisition to provide full ordering.
On Tue, Jul 17, 2018 at 11:44 AM Linus Torvalds
wrote:
>
> (a) lwsync is a memory barrier for all the "easy" cases (ie
> load->store, load->load, and store->load).
That last one should have been "store->store", of course.
So 'lwsync' gives smp_rmb(), smp_wmb(), and smp_load_acquire()
semantics.
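Spelled out, assuming the powerpc mapping under discussion (simplified from arch/powerpc/include/asm/barrier.h; the my_-prefixed names are illustrative):

/* lwsync orders load->load, load->store and store->store,
 * but NOT store->load; only the full sync covers that. */
#define ppc_lwsync()	__asm__ __volatile__("lwsync" ::: "memory")
#define ppc_sync()	__asm__ __volatile__("sync"   ::: "memory")

#define my_smp_rmb()	ppc_lwsync()	/* load->load */
#define my_smp_wmb()	ppc_lwsync()	/* store->store */
#define my_smp_mb()	ppc_sync()	/* store->load needs sync */

/* acquire/release also fall out of lwsync: */
static inline int my_load_acquire(volatile int *p)
{
	int v = *p;
	ppc_lwsync();	/* no later access passes the load */
	return v;
}

static inline void my_store_release(volatile int *p, int v)
{
	ppc_lwsync();	/* no earlier access passes the store */
	*p = v;
}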
On Tue, Jul 17, 2018 at 11:31 AM Paul E. McKenney
wrote:
>
> The isync provides ordering roughly similar to lwsync, but nowhere near
> as strong as sync, and it is sync that would be needed to cause lock
> acquisition to provide full ordering.
That's only true when looking at isync in isolation.
On Tue, Jul 17, 2018 at 11:33:41AM -0700, Paul E. McKenney wrote:
> On Tue, Jul 17, 2018 at 09:19:15AM -0700, Linus Torvalds wrote:
> >
> > In particular, I find:
> >
> > "isync is not a memory barrier instruction, but the
> > load-compare-conditional branch-isync sequence can provide this
> >
On Tue, Jul 17, 2018 at 09:19:15AM -0700, Linus Torvalds wrote:
> On Tue, Jul 17, 2018 at 7:45 AM Michael Ellerman wrote:
> >
> >
> > Interesting. I don't see anything as high as 18%, it's more spread out:
> >
> > 7.81% context_switch [kernel.kallsyms] [k] cgroup_rstat_updated
>
> Oh, see that's the difference.
On Tue, Jul 17, 2018 at 7:45 AM Michael Ellerman wrote:
>
>
> Interesting. I don't see anything as high as 18%, it's more spread out:
>
> 7.81% context_switch [kernel.kallsyms] [k] cgroup_rstat_updated
Oh, see that's the difference.
You're running in a non-root cgroup, I think.
That als
Linus Torvalds writes:
> On Mon, Jul 16, 2018 at 7:40 AM Michael Ellerman wrote:
...
>> I guess arguably it's not a very macro benchmark, but we have a
>> context_switch benchmark in the tree[1] which we often use to tune
>> things, and it degrades badly. It just spins up two threads and has them
>> ping-pong using yield.
On Mon, Jul 16, 2018 at 7:40 AM Michael Ellerman wrote:
>
> If the numbers can be trusted it is actually slower to put the sync in
> lock, at least on one of the machines:
>
>              Time
> lwsync_sync  84,932,987,977
> sync_lwsync  93,185,930,333
Very funky.
> I guess arguably it's not a very macro benchmark, but we have a
> context_switch benchmark in the tree[1] which we often use to tune
> things, and it degrades badly.
On Tue, Jul 17, 2018 at 12:40:19AM +1000, Michael Ellerman wrote:
>
> I guess arguably it's not a very macro benchmark, but we have a
> context_switch benchmark in the tree[1] which we often use to tune
> things, and it degrades badly. It just spins up two threads and has them
> ping-pong using yield.
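The benchmark referenced is presumably tools/testing/selftests/powerpc/benchmarks/context_switch.c. A minimal userspace sketch of the same idea (the pinning policy and the reporting loop are my choices, not the selftest's): two threads pinned to one CPU ping-pong via sched_yield(), and the main thread prints the switch rate.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_ulong iterations;

static void *yielder(void *arg)
{
	(void)arg;
	for (;;) {
		sched_yield();	/* hand the CPU to the other thread */
		atomic_fetch_add_explicit(&iterations, 1,
					  memory_order_relaxed);
	}
	return NULL;
}

int main(void)
{
	cpu_set_t cpus;
	pthread_attr_t attr;
	pthread_t t1, t2;

	/* Pin both threads to CPU 0 so every yield is a context switch. */
	CPU_ZERO(&cpus);
	CPU_SET(0, &cpus);
	pthread_attr_init(&attr);
	pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

	pthread_create(&t1, &attr, yielder, NULL);
	pthread_create(&t2, &attr, yielder, NULL);

	for (;;) {	/* report once per second */
		unsigned long prev = atomic_load(&iterations);
		sleep(1);
		printf("%lu switches/sec\n",
		       atomic_load(&iterations) - prev);
	}
}

Build with cc -O2 -pthread; a slower switch path (say, a sync added to lock or unlock) shows up directly as a lower rate.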
Peter Zijlstra writes:
> On Fri, Jul 13, 2018 at 11:15:26PM +1000, Michael Ellerman wrote:
...
>>
>>
>> So 18-32% slower, or 23-47 cycles.
>
> Very good info. Note that another option is to put the SYNC in lock();
> it doesn't really matter which of the two primitives gets it. I don't
> suppose i
On Fri, Jul 13, 2018 at 07:58:25PM -0700, Linus Torvalds wrote:
> On Fri, Jul 13, 2018 at 6:51 PM Alan Stern wrote:
> >
> > The point being that the scenarios under discussion in this thread all
> > fall most definitely into the "Non-standard usage; you'd better know
> > exactly what you're doing" category.
On Fri, Jul 13, 2018 at 6:51 PM Alan Stern wrote:
>
> The point being that the scenarios under discussion in this thread all
> fall most definitely into the "Non-standard usage; you'd better know
> exactly what you're doing" category.
Well, yes and no.
The thing is, people expected unlock+lock t
On Fri, 13 Jul 2018, Andrea Parri wrote:
> On Fri, Jul 13, 2018 at 10:16:48AM -0700, Linus Torvalds wrote:
> > On Fri, Jul 13, 2018 at 2:34 AM Will Deacon wrote:
> > >
> > > And, since we're stating preferences, I'll reiterate my preference
> > > towards:
> > >
> > > * RCsc unlock/lock
>
On Fri, Jul 13, 2018 at 06:42:39PM +0200, Peter Zijlstra wrote:
>
> Hi Michael,
>
> On Fri, Jul 13, 2018 at 11:15:26PM +1000, Michael Ellerman wrote:
> > I reran some numbers today with some slightly updated tests.
> >
> > It varies quite a bit across machines and CPU revisions.
> >
> > On one I get:
On Fri, Jul 13, 2018 at 10:16:48AM -0700, Linus Torvalds wrote:
> On Fri, Jul 13, 2018 at 2:34 AM Will Deacon wrote:
> >
> > And, since we're stating preferences, I'll reiterate my preference towards:
> >
> > * RCsc unlock/lock
> > * RCpc release/acquire
>
> Yes, I think this would be best.
On Fri, Jul 13, 2018 at 2:34 AM Will Deacon wrote:
>
> And, since we're stating preferences, I'll reiterate my preference towards:
>
> * RCsc unlock/lock
> * RCpc release/acquire
Yes, I think this would be best. We *used* to have pretty heavy-weight
locking rules for various reasons.
Hi Michael,
On Fri, Jul 13, 2018 at 11:15:26PM +1000, Michael Ellerman wrote:
> I reran some numbers today with some slightly updated tests.
>
> It varies quite a bit across machines and CPU revisions.
>
> On one I get:
>
> Lock/Unlock    Time    Time %    Total Cycles    Cycles    Cy
Peter Zijlstra writes:
> On Thu, Jul 12, 2018 at 11:10:58AM -0700, Linus Torvalds wrote:
>> On Thu, Jul 12, 2018 at 11:05 AM Peter Zijlstra wrote:
>> >
>> > The locking pattern is fairly simple and shows where RCpc comes apart
>> > from expectation real nice.
>>
>> So who does RCpc right now for the unlock-lock sequence? Somebody
>> mentioned powerpc. Anybody else?
On Thu, Jul 12, 2018 at 11:10:58AM -0700, Linus Torvalds wrote:
> On Thu, Jul 12, 2018 at 11:05 AM Peter Zijlstra wrote:
> >
> > The locking pattern is fairly simple and shows where RCpc comes apart
> > from expectation real nice.
>
> So who does RCpc right now for the unlock-lock sequence? Somebody
> mentioned powerpc. Anybody else?
On Fri, Jul 13, 2018 at 11:07:11AM +0200, Andrea Parri wrote:
> On Thu, Jul 12, 2018 at 07:05:39PM -0700, Daniel Lustig wrote:
> > On 7/12/2018 11:10 AM, Linus Torvalds wrote:
> > > On Thu, Jul 12, 2018 at 11:05 AM Peter Zijlstra
> > > wrote:
> > >>
> > The locking pattern is fairly simple and shows where RCpc comes apart
> > from expectation real nice.
On Thu, Jul 12, 2018 at 07:05:39PM -0700, Daniel Lustig wrote:
> On 7/12/2018 11:10 AM, Linus Torvalds wrote:
> > On Thu, Jul 12, 2018 at 11:05 AM Peter Zijlstra
> > wrote:
> >>
> >> The locking pattern is fairly simple and shows where RCpc comes apart
> >> from expectation real nice.
> >
> > So who does RCpc right now for the unlock-lock sequence? Somebody
> > mentioned powerpc. Anybody else?
On Thu, Jul 12, 2018 at 07:05:39PM -0700, Daniel Lustig wrote:
> On 7/12/2018 11:10 AM, Linus Torvalds wrote:
> > On Thu, Jul 12, 2018 at 11:05 AM Peter Zijlstra
> > wrote:
> >>
> >> The locking pattern is fairly simple and shows where RCpc comes apart
> >> from expectation real nice.
> >
> > So who does RCpc right now for the unlock-lock sequence? Somebody
> > mentioned powerpc. Anybody else?
On 7/12/2018 2:45 AM, Will Deacon wrote:
> On Thu, Jul 12, 2018 at 11:34:32AM +0200, Peter Zijlstra wrote:
>> On Thu, Jul 12, 2018 at 09:40:40AM +0200, Peter Zijlstra wrote:
>>> And I think if we raise atomic*_acquire() to require TSO (but ideally
>>> raise it to RCsc) we're there.
>>
>> To clarify, just the RmW-acquire. Things like atomic_read_acquire() can
>> stay smp_load_acquire() and be RCpc.
On 7/12/2018 11:10 AM, Linus Torvalds wrote:
> On Thu, Jul 12, 2018 at 11:05 AM Peter Zijlstra wrote:
>>
>> The locking pattern is fairly simple and shows where RCpc comes apart
>> from expectation real nice.
>
> So who does RCpc right now for the unlock-lock sequence? Somebody
> mentioned powerpc. Anybody else?
On Thu, Jul 12, 2018 at 11:13:48PM +0200, Andrea Parri wrote:
> On Thu, Jul 12, 2018 at 04:43:46PM -0400, Alan Stern wrote:
> > On Thu, 12 Jul 2018, Andrea Parri wrote:
> >
> > > > It seems reasonable to ask people to learn that locks have stronger
> > > > ordering guarantees than RMW atomics do.
On Thu, Jul 12, 2018 at 04:43:46PM -0400, Alan Stern wrote:
> On Thu, 12 Jul 2018, Andrea Parri wrote:
>
> > > It seems reasonable to ask people to learn that locks have stronger
> > > ordering guarantees than RMW atomics do. Maybe not the greatest
> > situation in the world, but one I think we could live with.
On Thu, 12 Jul 2018, Andrea Parri wrote:
> > It seems reasonable to ask people to learn that locks have stronger
> > ordering guarantees than RMW atomics do. Maybe not the greatest
> > situation in the world, but one I think we could live with.
>
> Yeah, this was one of my main objections.
Does
On Thu, Jul 12, 2018 at 09:52:42PM +0200, Andrea Parri wrote:
> On Thu, Jul 12, 2018 at 11:10:58AM -0700, Linus Torvalds wrote:
> > On Thu, Jul 12, 2018 at 11:05 AM Peter Zijlstra
> > wrote:
> > >
> > > The locking pattern is fairly simple and shows where RCpc comes apart
> > > from expectation real nice.
On Thu, Jul 12, 2018 at 11:10:58AM -0700, Linus Torvalds wrote:
> On Thu, Jul 12, 2018 at 11:05 AM Peter Zijlstra wrote:
> >
> > The locking pattern is fairly simple and shows where RCpc comes apart
> > from expectation real nice.
>
> So who does RCpc right now for the unlock-lock sequence? Somebody
> mentioned powerpc. Anybody else?
On Thu, Jul 12, 2018 at 01:04:27PM -0400, Alan Stern wrote:
> Which raises the question of whether RCtso (adopting Daniel's suggested
> term) is also too subtle or counter-intuitive for spinlocks.
IMO yes, RCtso also sucks :-) But in my 'fight' for RCsc it disallows
RCpc and captures the current
On Thu, Jul 12, 2018 at 11:05 AM Peter Zijlstra wrote:
>
> The locking pattern is fairly simple and shows where RCpc comes apart
> from expectation real nice.
So who does RCpc right now for the unlock-lock sequence? Somebody
mentioned powerpc. Anybody else?
How nasty would it be to make powerpc conform?
On Thu, Jul 12, 2018 at 10:28:38AM -0700, Paul E. McKenney wrote:
> Look for the uses of raw_spin_lock_irq_rcu_node() and friends in
> kernel/rcu and include/linux/*rcu*, along with the explanation in
> Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.html
>
> I must confess that
> It seems reasonable to ask people to learn that locks have stronger
> ordering guarantees than RMW atomics do. Maybe not the greatest
> situation in the world, but one I think we could live with.
Yeah, this was one of my main objections.
> Hence my proposal to strengthen rmw-acquire, because
> Anyway, back to the problem of being able to use the memory model to
> describe locks. This is I think a useful property.
>
> My earlier reasoning was that:
>
> - smp_store_release() + smp_load_acquire() := RCpc
>
> - we use smp_store_release() as unlock()
>
> Therefore, if we want unlock+lock to provide more than RCpc, one of
> those primitives has to be strengthened.
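The distinction can be made concrete with a store-buffering test straddling the unlock+lock (my construction, following the pattern discussed in the thread). RCsc locks forbid the outcome below; RCpc, and also the intermediate RCtso, allow it, because the store before the unlock is not ordered against the load after the re-lock:

C SB+unlock-lock

{}

P0(int *x, int *y, spinlock_t *s)
{
	int r0;

	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(s);
	r0 = READ_ONCE(*y);
	spin_unlock(s);
}

P1(int *x, int *y)
{
	int r1;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r1 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r1=0)

On powerpc with lwsync-based unlock/lock this outcome is observable in principle; only a full sync in one of the two primitives rules it out, which is the cost being measured elsewhere in the thread.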
On Thu, Jul 12, 2018 at 01:04:27PM -0400, Alan Stern wrote:
> On Thu, 12 Jul 2018, Peter Zijlstra wrote:
>
> > > But again, these are subtle patterns, and my guess is that several/
> > > most kernel developers really won't care about such guarantees (and
> > > if some will do, they'll have the tools to figure out what they can
> > > actually rely on ...)
Hi Alan,
On Thu, Jul 12, 2018 at 01:04:27PM -0400, Alan Stern wrote:
> On Thu, 12 Jul 2018, Peter Zijlstra wrote:
> > But as you (and Will) point out, we don't so much care about rmw-acquire
> > semantics as much as that we care about unlock+lock behaviour. Another
> > way to look at this is to de
On Thu, 12 Jul 2018, Peter Zijlstra wrote:
> > But again, these are subtle patterns, and my guess is that several/
> > most kernel developers really won't care about such guarantees (and
> > if some will do, they'll have the tools to figure out what they can
> > actually rely on ...)
>
> Yes it i
On Thu, Jul 12, 2018 at 03:48:21PM +0200, Peter Zijlstra wrote:
[ . . . ]
> I'm still hoping we can convince the PowerPC people that they're wrong,
> and get rid of this wart and just call all locks RCsc.
As always, I must defer to the PowerPC maintainers on this point.
On Thu, Jul 12, 2018 at 01:52:49PM +0200, Andrea Parri wrote:
> On Thu, Jul 12, 2018 at 09:40:40AM +0200, Peter Zijlstra wrote:
> > On Wed, Jul 11, 2018 at 02:34:21PM +0200, Andrea Parri wrote:
> > > 2) Resolve the above mentioned controversy (the inconsistency between
> > > - locking opera
On Thu, Jul 12, 2018 at 02:01:00PM +0200, Andrea Parri wrote:
> > OTOH (as I pointed out earlier) the strengthening we're configuring
> > will prevent some arch. (riscv being just the example of today!) from
> > going "full RCsc", and this will inevitably "complicate" both the LKMM
>
> "full RCpc"
Tha
On Thu, Jul 12, 2018 at 01:52:49PM +0200, Andrea Parri wrote:
> On Thu, Jul 12, 2018 at 09:40:40AM +0200, Peter Zijlstra wrote:
> > On Wed, Jul 11, 2018 at 02:34:21PM +0200, Andrea Parri wrote:
> > > Simplicity is the eye of the beholder. From my POV (LKMM maintainer), the
> > > simplest solution would be to get rid of rfi-rel-acq and
> > > unlock-rf-lock-po (or its analogous in v3) all together:
On Thu, Jul 12, 2018 at 09:40:40AM +0200, Peter Zijlstra wrote:
> On Wed, Jul 11, 2018 at 02:34:21PM +0200, Andrea Parri wrote:
> > Simplicity is the eye of the beholder. From my POV (LKMM maintainer), the
> > simplest solution would be to get rid of rfi-rel-acq and unlock-rf-lock-po
> > (or its analogous in v3) all together:
On Thu, Jul 12, 2018 at 11:34:32AM +0200, Peter Zijlstra wrote:
> On Thu, Jul 12, 2018 at 09:40:40AM +0200, Peter Zijlstra wrote:
> > And I think if we raise atomic*_acquire() to require TSO (but ideally
> > raise it to RCsc) we're there.
>
> To clarify, just the RmW-acquire. Things like atomic_read_acquire() can
> stay smp_load_acquire() and be RCpc.
On Thu, Jul 12, 2018 at 09:40:40AM +0200, Peter Zijlstra wrote:
> And I think if we raise atomic*_acquire() to require TSO (but ideally
> raise it to RCsc) we're there.
To clarify, just the RmW-acquire. Things like atomic_read_acquire() can
stay smp_load_acquire() and be RCpc.
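Restated as a self-contained C11 sketch (the kernel primitives appear only in the comments; the C11 rendering and the variable names are mine):

#include <stdatomic.h>
#include <stdio.h>

int main(void)
{
	atomic_int lock_word = 0, flag = 1;
	int expected = 0;

	/* RmW-acquire, cf. atomic_cmpxchg_acquire(): the class the
	 * proposal would strengthen to TSO (ideally RCsc), since it
	 * is what lock() implementations are built from. */
	atomic_compare_exchange_strong_explicit(&lock_word, &expected, 1,
						memory_order_acquire,
						memory_order_relaxed);

	/* Plain load-acquire, cf. atomic_read_acquire() and
	 * smp_load_acquire(): stays RCpc under the proposal. */
	int v = atomic_load_explicit(&flag, memory_order_acquire);

	printf("lock_word=%d flag=%d\n", atomic_load(&lock_word), v);
	return 0;
}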
On Wed, Jul 11, 2018 at 10:50:56AM -0700, Daniel Lustig wrote:
> On 7/11/2018 10:00 AM, Peter Zijlstra wrote:
> > On Wed, Jul 11, 2018 at 04:57:51PM +0100, Will Deacon wrote:
> >
> >> It might be simple to model, but I worry this weakens our locking
> >> implementations to a point where they will not be understood by the
> >> average kernel developer.
> All the discussion here[1] for example is about having ordering and
> doing an smp_cond_load_acquire() on a variable which is sometimes
> protected by a CPU's rq->lock and sometimes not? Isn't that one of the
> key use cases for this whole discussion?
Not a "pure" one:
http://lkml.kernel.or
On Wed, Jul 11, 2018 at 02:34:21PM +0200, Andrea Parri wrote:
> Simplicity is the eye of the beholder. From my POV (LKMM maintainer), the
> simplest solution would be to get rid of rfi-rel-acq and unlock-rf-lock-po
> (or its analogous in v3) all together:
> Among other things, this would immediately
On 7/11/2018 10:00 AM, Peter Zijlstra wrote:
> On Wed, Jul 11, 2018 at 04:57:51PM +0100, Will Deacon wrote:
>
>> It might be simple to model, but I worry this weakens our locking
>> implementations to a point where they will not be understood by the average
>> kernel developer. As I've said before, I would prefer "full" RCsc locking,
On Wed, Jul 11, 2018 at 04:57:51PM +0100, Will Deacon wrote:
> It might be simple to model, but I worry this weakens our locking
> implementations to a point where they will not be understood by the average
> kernel developer. As I've said before, I would prefer "full" RCsc locking,
Another vote
> It might be simple to model, but I worry this weakens our locking
> implementations to a point where they will not be understood by the average
> kernel developer. As I've said before, I would prefer "full" RCsc locking,
> but that's not the case with architectures we currently support today, so
On Wed, Jul 11, 2018 at 02:34:21PM +0200, Andrea Parri wrote:
> On Wed, Jul 11, 2018 at 10:43:11AM +0100, Will Deacon wrote:
> > On Tue, Jul 10, 2018 at 11:38:21AM +0200, Andrea Parri wrote:
> > > This distinction between locking operations and "other acquires" appears
> > > to me not only unmotivated
On Wed, 11 Jul 2018, Andrea Parri wrote:
> > > Does something like "po; [UL]; rf; [LKR]; po" fit in with the rest
> > > of the model? If so, maybe that solves the asymmetry and also
> > > legalizes the approach of putting fence.tso in front?
> >
> > That would work just as well. For this version
On Wed, Jul 11, 2018 at 02:34:21PM +0200, Andrea Parri wrote:
> On Wed, Jul 11, 2018 at 10:43:11AM +0100, Will Deacon wrote:
> > On Tue, Jul 10, 2018 at 11:38:21AM +0200, Andrea Parri wrote:
> > > On Mon, Jul 09, 2018 at 04:01:57PM -0400, Alan Stern wrote:
> > > > More than one kernel developer has expressed the opinion that the LKMM
> > > > should enforce ordering of writes by locking.
On Wed, Jul 11, 2018 at 10:43:11AM +0100, Will Deacon wrote:
> On Tue, Jul 10, 2018 at 11:38:21AM +0200, Andrea Parri wrote:
> > On Mon, Jul 09, 2018 at 04:01:57PM -0400, Alan Stern wrote:
> > > More than one kernel developer has expressed the opinion that the LKMM
> > > should enforce ordering of writes by locking.
On Tue, Jul 10, 2018 at 11:38:21AM +0200, Andrea Parri wrote:
> On Mon, Jul 09, 2018 at 04:01:57PM -0400, Alan Stern wrote:
> > More than one kernel developer has expressed the opinion that the LKMM
> > should enforce ordering of writes by locking. In other words, given
>
> I'd like to step back on this point: I still don't have a strong opinion
> on this,
On Tue, Jul 10, 2018 at 01:17:50PM -0400, Alan Stern wrote:
> On Tue, 10 Jul 2018, Daniel Lustig wrote:
>
> > > --- usb-4.x.orig/tools/memory-model/linux-kernel.cat
> > > +++ usb-4.x/tools/memory-model/linux-kernel.cat
> > > @@ -38,7 +38,7 @@ let strong-fence = mb | gp
> > > (* Release Acquire *)
On Tue, Jul 10, 2018 at 11:34:45AM -0400, Alan Stern wrote:
> On Tue, 10 Jul 2018, Andrea Parri wrote:
>
> > > > ACQUIRE operations include LOCK operations and both smp_load_acquire()
> > > > and smp_cond_acquire() operations. [BTW, the latter was replaced by
> > smp_cond_load_acquire() in 1f03e8d2919270 ...]
On Tue, 10 Jul 2018, Daniel Lustig wrote:
> > --- usb-4.x.orig/tools/memory-model/linux-kernel.cat
> > +++ usb-4.x/tools/memory-model/linux-kernel.cat
> > @@ -38,7 +38,7 @@ let strong-fence = mb | gp
> > (* Release Acquire *)
> > let acq-po = [Acquire] ; po ; [M]
> > let po-rel = [M] ; po ; [Release]
On 7/9/2018 1:01 PM, Alan Stern wrote:
> More than one kernel developer has expressed the opinion that the LKMM
> should enforce ordering of writes by locking. In other words, given
> the following code:
>
> WRITE_ONCE(x, 1);
> spin_unlock(&s);
> spin_lock(&s);
> WRITE_ONCE(y, 1);
On Tue, Jul 10, 2018 at 09:57:17AM -0400, Alan Stern wrote:
> On Mon, 9 Jul 2018, Paul E. McKenney wrote:
>
> > On Mon, Jul 09, 2018 at 04:01:57PM -0400, Alan Stern wrote:
> > > More than one kernel developer has expressed the opinion that the LKMM
> > > should enforce ordering of writes by locking.
On Tue, 10 Jul 2018, Andrea Parri wrote:
> > > ACQUIRE operations include LOCK operations and both smp_load_acquire()
> > > and smp_cond_acquire() operations. [BTW, the latter was replaced by
> > > smp_cond_load_acquire() in 1f03e8d2919270 ...]
> > >
> > > RELEASE operations include UNLOCK operations and smp_store_release()
> > > operations.
> > ACQUIRE operations include LOCK operations and both smp_load_acquire()
> > and smp_cond_acquire() operations. [BTW, the latter was replaced by
> > smp_cond_load_acquire() in 1f03e8d2919270 ...]
> >
> > RELEASE operations include UNLOCK operations and smp_store_release()
> > operations.
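For contrast, what these definitions already guarantee, even at RCpc, is the classic message-passing pattern; a sketch (test name and variable names are illustrative):

C MP+release+acquire

{}

P0(int *x, int *flag)
{
	WRITE_ONCE(*x, 1);
	smp_store_release(flag, 1);
}

P1(int *x, int *flag)
{
	int r0;
	int r1;

	r0 = smp_load_acquire(flag);
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)

The "exists" outcome is forbidden: if P1's acquire reads from the release, P1 also sees the store to x. The thread's debate is about how much more than this an UNLOCK+LOCK pair should promise.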
On Tue, 10 Jul 2018, Andrea Parri wrote:
> On Mon, Jul 09, 2018 at 04:01:57PM -0400, Alan Stern wrote:
> > More than one kernel developer has expressed the opinion that the LKMM
> > should enforce ordering of writes by locking. In other words, given
>
> I'd like to step back on this point: I still don't have a strong opinion
> on this,
On Mon, 9 Jul 2018, Paul E. McKenney wrote:
> On Mon, Jul 09, 2018 at 04:01:57PM -0400, Alan Stern wrote:
> > More than one kernel developer has expressed the opinion that the LKMM
> > should enforce ordering of writes by locking. In other words, given
> > the following code:
> >
> > WRITE_ONCE(x, 1);
On Mon, Jul 09, 2018 at 04:01:57PM -0400, Alan Stern wrote:
> More than one kernel developer has expressed the opinion that the LKMM
> should enforce ordering of writes by locking. In other words, given
I'd like to step back on this point: I still don't have a strong opinion
on this, but all this
On Mon, Jul 09, 2018 at 04:01:57PM -0400, Alan Stern wrote:
> More than one kernel developer has expressed the opinion that the LKMM
> should enforce ordering of writes by locking. In other words, given
> the following code:
>
> WRITE_ONCE(x, 1);
> spin_unlock(&s);
> spin_lock(&s);
More than one kernel developer has expressed the opinion that the LKMM
should enforce ordering of writes by locking. In other words, given
the following code:
WRITE_ONCE(x, 1);
spin_unlock(&s);
spin_lock(&s);
WRITE_ONCE(y, 1);
the stores to x and y should be propagated in order to all other CPUs,
even though those other CPUs might not access the lock s.
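Completed into a self-contained litmus test (the enclosing lock/unlock pairs and the observer thread are my additions), the requirement is that the outcome below be forbidden:

C unlock-lock-write-ordering

{}

P0(int *x, int *y, spinlock_t *s)
{
	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(s);
	WRITE_ONCE(*y, 1);
	spin_unlock(s);
}

P1(int *x, int *y)
{
	int r1;
	int r2;

	r1 = READ_ONCE(*y);
	smp_rmb();
	r2 = READ_ONCE(*x);
}

exists (1:r1=1 /\ 1:r2=0)

If the stores to x and y propagate in order to all CPUs, P1 can never observe the new value of y together with the old value of x.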