On Wed, Mar 05, 2014 at 10:34:32PM +0100, Stefan Richter wrote:
> On Feb 21 Stefan Richter wrote:
> > On Feb 20 Tejun Heo wrote:
> > > PREPARE_[DELAYED_]WORK() are being phased out. They have few users
> > > and a nasty surprise in terms of reentrancy guarantee as workqueue
> > > considers work items to be different if they don't have the same work
> > > function.
On Feb 21 Stefan Richter wrote:
> On Feb 20 Tejun Heo wrote:
> > PREPARE_[DELAYED_]WORK() are being phased out. They have few users
> > and a nasty surprise in terms of reentrancy guarantee as workqueue
> > considers work items to be different if they don't have the same work
> > function.
> >
>
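A minimal sketch of the surprise being described (the names w, funcA and funcB are invented for illustration): workqueue's non-reentrancy check keys on the work function as well as the work item, so swapping the function with PREPARE_WORK() lets "the same" item run concurrently:

	#include <linux/workqueue.h>

	static struct work_struct w;

	static void funcA(struct work_struct *work) { /* ... */ }
	static void funcB(struct work_struct *work) { /* ... */ }

	static void requeue_with_new_function(void)
	{
		PREPARE_WORK(&w, funcA);
		queue_work(system_wq, &w);      /* funcA may start running */

		PREPARE_WORK(&w, funcB);        /* swap the work function  */
		queue_work(system_wq, &w);      /* funcB can now run on another
		                                 * CPU concurrently with funcA:
		                                 * the two are treated as
		                                 * different work items.      */
	}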
On Mon, Feb 24, 2014 at 01:32:54AM +0100, Stefan Richter wrote:
> On Feb 23 Paul E. McKenney wrote:
> >>> Please see below for a patch against the current version of
> >>> Documentation/memory-barriers.txt. Does this update help?
>
> Thank you, this clarifies it.
>
> [...]
> A new nit:
> > +The operations will always occur in one of the following orders: [...]
On Sun, Feb 23, 2014 at 07:09:55PM -0500, Peter Hurley wrote:
> On 02/23/2014 06:50 PM, Paul E. McKenney wrote:
> >On Sun, Feb 23, 2014 at 03:35:31PM -0500, Peter Hurley wrote:
> >>Hi Paul,
> >>
> >>On 02/23/2014 11:37 AM, Paul E. McKenney wrote:
> >>>commit aba6b0e82c9de53eb032844f1932599f148ff68d
On Feb 23 Paul E. McKenney wrote:
>>> Please see below for a patch against the current version of
>>> Documentation/memory-barriers.txt. Does this update help?
Thank you, this clarifies it.
[...]
A new nit:
> +The operations will always occur in one of the following orders:
>
> - STORE *A, [...]
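A sketch of the situation the quoted passage documents, in memory-barriers.txt style (M, N, a, b are illustrative): a RELEASE followed by an ACQUIRE does not forbid the two halves from crossing, because RELEASE only orders accesses before it and ACQUIRE only accesses after it:

	*A = a;
	RELEASE M               /* e.g. spin_unlock(&m) */
	ACQUIRE N               /* e.g. spin_lock(&n)   */
	*B = b;

	/* may be observed by another CPU as:
	 *      ACQUIRE N, STORE *B, STORE *A, RELEASE M
	 */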
On 02/23/2014 06:50 PM, Paul E. McKenney wrote:
> On Sun, Feb 23, 2014 at 03:35:31PM -0500, Peter Hurley wrote:
> > Hi Paul,
> >
> > On 02/23/2014 11:37 AM, Paul E. McKenney wrote:
> > > commit aba6b0e82c9de53eb032844f1932599f148ff68d
> > > Author: Paul E. McKenney
> > > Date: Sun Feb 23 08:34:24 2014 -0800
> > >
> > >     Documentation/memory-barriers.txt: Clarify release/acquire ordering [...]
On Sun, Feb 23, 2014 at 03:35:31PM -0500, Peter Hurley wrote:
> Hi Paul,
>
> On 02/23/2014 11:37 AM, Paul E. McKenney wrote:
> >commit aba6b0e82c9de53eb032844f1932599f148ff68d
> >Author: Paul E. McKenney
> >Date: Sun Feb 23 08:34:24 2014 -0800
> >
> > Documentation/memory-barriers.txt: Clarify release/acquire ordering [...]
Hi James,

On 02/23/2014 03:05 PM, James Bottomley wrote:
> On Sat, 2014-02-22 at 14:03 -0500, Peter Hurley wrote:
> > If it is necessary for a RELEASE-ACQUIRE pair to produce a full barrier, the
> > ACQUIRE can be followed by an smp_mb__after_unlock_lock() invocation. This
> > will produce a full barrier if either (a) the RELEASE and the ACQUIRE are
> > executed [...]
Hi Paul,

On 02/23/2014 11:37 AM, Paul E. McKenney wrote:
> commit aba6b0e82c9de53eb032844f1932599f148ff68d
> Author: Paul E. McKenney
> Date: Sun Feb 23 08:34:24 2014 -0800
>
>     Documentation/memory-barriers.txt: Clarify release/acquire ordering
>
>     This commit fixes a couple of typos and clarifies [...]
On Sat, 2014-02-22 at 14:03 -0500, Peter Hurley wrote:
> If it is necessary for a RELEASE-ACQUIRE pair to produce a full barrier, the
> ACQUIRE can be followed by an smp_mb__after_unlock_lock() invocation. This
> will produce a full barrier if either (a) the RELEASE and the ACQUIRE are
> executed by the same CPU or task, or (b) the RELEASE and ACQUIRE act on the
> same variable.
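Taking the quoted rule at face value, a minimal sketch (lock names invented); here case (a) applies because the RELEASE and the ACQUIRE run on the same CPU:

	spin_lock(&lock_a);
	/* first critical section */
	spin_unlock(&lock_a);           /* RELEASE */

	spin_lock(&lock_b);             /* ACQUIRE */
	smp_mb__after_unlock_lock();    /* upgrades the RELEASE-ACQUIRE
	                                 * pair to a full barrier       */
	/* second critical section */
	spin_unlock(&lock_b);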
On Sun, Feb 23, 2014 at 02:23:03AM +0100, Stefan Richter wrote:
> Hi Paul,
>
> in patch "Documentation/memory-barriers.txt: Downgrade UNLOCK+BLOCK" (sic),
> you wrote:
> + Memory operations issued before the LOCK may be completed after the
> + LOCK operation has completed. An smp_mb__before_spinlock(), combined
> + with a following LOCK, orders prior loads against [...]
Hi Paul,
in patch "Documentation/memory-barriers.txt: Downgrade UNLOCK+BLOCK" (sic),
you wrote:
+ Memory operations issued before the LOCK may be completed after the
+ LOCK operation has completed. An smp_mb__before_spinlock(), combined
+ with a following LOCK, orders prior loads against [...]
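A sketch of how the quoted primitive is meant to be used (variable and lock names invented); per the quoted text it orders, among other things, prior stores against stores inside and after the LOCK:

	x = 1;                          /* prior store */
	smp_mb__before_spinlock();
	spin_lock(&my_lock);            /* the following LOCK */
	y = 1;                          /* subsequent store: ordered after
	                                 * the store to x, though this is
	                                 * still weaker than smp_mb()    */
	spin_unlock(&my_lock);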
On 02/22/2014 01:52 PM, James Bottomley wrote:
> On Sat, 2014-02-22 at 13:48 -0500, Peter Hurley wrote:
> > On 02/22/2014 01:43 PM, James Bottomley wrote:
> > > On Fri, 2014-02-21 at 18:01 -0500, Peter Hurley wrote:
> > > > On 02/21/2014 11:57 AM, Tejun Heo wrote:
> > > > > Yo,
> > > > >
> > > > > On Fri, Feb 21, 2014 at 11:53:46AM -0500, Peter Hurley wrote: [...]
On Sat, 2014-02-22 at 13:48 -0500, Peter Hurley wrote:
> On 02/22/2014 01:43 PM, James Bottomley wrote:
> >
> > On Fri, 2014-02-21 at 18:01 -0500, Peter Hurley wrote:
> >> On 02/21/2014 11:57 AM, Tejun Heo wrote:
> >>> Yo,
> >>>
> >>> On Fri, Feb 21, 2014 at 11:53:46AM -0500, Peter Hurley wrote:
>
On 02/22/2014 01:43 PM, James Bottomley wrote:
> On Fri, 2014-02-21 at 18:01 -0500, Peter Hurley wrote:
> > On 02/21/2014 11:57 AM, Tejun Heo wrote:
> > > Yo,
> > >
> > > On Fri, Feb 21, 2014 at 11:53:46AM -0500, Peter Hurley wrote:
> > > > Ok, I can do that. But AFAIK it'll have to be an smp_rmb(); there is
> > > > no mb__after unlock. [...]
On Fri, 2014-02-21 at 18:01 -0500, Peter Hurley wrote:
> On 02/21/2014 11:57 AM, Tejun Heo wrote:
> > Yo,
> >
> > On Fri, Feb 21, 2014 at 11:53:46AM -0500, Peter Hurley wrote:
> >> Ok, I can do that. But AFAIK it'll have to be an smp_rmb(); there is
> >> no mb__after unlock.
> >
> > We do have smp_mb__after_unlock_lock().
On 02/22/2014 09:38 AM, Tejun Heo wrote:
> Hey,
>
> On Fri, Feb 21, 2014 at 06:46:24PM -0500, Peter Hurley wrote:
> > It's a long story but the short version is that
> > Documentation/memory-barriers.txt recently was overhauled to reflect
> > what cpus actually do and what the different archs actually
> > deliver.
Hey,
On Fri, Feb 21, 2014 at 06:46:24PM -0500, Peter Hurley wrote:
> It's a long story but the short version is that
> Documentation/memory-barriers.txt recently was overhauled to reflect
> what cpus actually do and what the different archs actually
> deliver.
>
> Turns out that unlock + lock is [...]
On 02/21/2014 06:18 PM, Tejun Heo wrote:
> On Fri, Feb 21, 2014 at 06:01:29PM -0500, Peter Hurley wrote:
> > smp_mb__after_unlock_lock() is only for ordering memory operations
> > between two spin-locked sections on either the same lock or by
> > the same task/cpu. Like:
> >
> > i = 1
> > spin_unlock(lock1) [...]
On Fri, Feb 21, 2014 at 06:01:29PM -0500, Peter Hurley wrote:
> smp_mb__after_unlock_lock() is only for ordering memory operations
> between two spin-locked sections on either the same lock or by
> the same task/cpu. Like:
>
> i = 1
> spin_unlock(lock1)
> spin_lock(lock2)
> smp_mb__after_unlock_lock() [...]
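The flip side of the quoted claim, sketched with invented names: on two different CPUs and two different locks, neither condition holds, so no ordering is gained:

	/* CPU 0 */                     /* CPU 1 */
	i = 1;
	spin_unlock(&lock1);
	                                spin_lock(&lock2);
	                                smp_mb__after_unlock_lock();
	                                r = i;  /* may still observe i == 0 */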
On 02/21/2014 11:57 AM, Tejun Heo wrote:
> Yo,
>
> On Fri, Feb 21, 2014 at 11:53:46AM -0500, Peter Hurley wrote:
> > Ok, I can do that. But AFAIK it'll have to be an smp_rmb(); there is
> > no mb__after unlock.
>
> We do have smp_mb__after_unlock_lock().
>
> > [ After thinking about it some, I don't think preventing speculative
> > writes before clearing [...]
On Feb 20 Tejun Heo wrote:
> PREPARE_[DELAYED_]WORK() are being phased out. They have few users
> and a nasty surprise in terms of reentrancy guarantee as workqueue
> considers work items to be different if they don't have the same work
> function.
>
> firewire core-device and sbp2 have been [...]
Yo,
On Fri, Feb 21, 2014 at 11:53:46AM -0500, Peter Hurley wrote:
> Ok, I can do that. But AFAIK it'll have to be an smp_rmb(); there is
> no mb__after unlock.
We do have smp_mb__after_unlock_lock().
> [ After thinking about it some, I don't think preventing speculative
> writes before clearing [...]
Hi Tejun,

On 02/21/2014 08:06 AM, Tejun Heo wrote:
> Hello,
>
> On Fri, Feb 21, 2014 at 07:51:48AM -0500, Peter Hurley wrote:
> > I think the vast majority of kernel code which uses the workqueue
> > assumes there is a memory ordering guarantee.
>
> Not really. Workqueues haven't even guaranteed non-reentrancy until
> recently, forcing everybody to lock explicitly [...]
Hello,
On Fri, Feb 21, 2014 at 07:51:48AM -0500, Peter Hurley wrote:
> I think the vast majority of kernel code which uses the workqueue
> assumes there is a memory ordering guarantee.
Not really. Workqueues haven't even guaranteed non-reentrancy until
recently, forcing everybody to lock explicitly [...]
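A sketch of the explicit locking being referred to (names invented): before non-reentrancy was guaranteed, a work function had to serialize against a second instance of itself possibly running on another CPU:

	static DEFINE_MUTEX(my_lock);
	static int my_count;

	static void my_workfn(struct work_struct *work)
	{
		mutex_lock(&my_lock);   /* the same work item may be
		                         * executing concurrently
		                         * on another CPU            */
		my_count++;
		mutex_unlock(&my_lock);
	}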
On 02/21/2014 05:03 AM, Tejun Heo wrote:
> On Fri, Feb 21, 2014 at 12:13:16AM -0500, Peter Hurley wrote:
> > CPU 0                        | CPU 1
> >                              |
> > INIT_WORK(fw_device_workfn)  |
> >                              |
> > workfn = funcA               |
> > queue_work_on() [...]
On Fri, Feb 21, 2014 at 12:13:16AM -0500, Peter Hurley wrote:
> CPU 0                        | CPU 1
>                              |
> INIT_WORK(fw_device_workfn)  |
>                              |
> workfn = funcA               |
> queue_work_on()              |
> . [...]
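The pattern behind the diagram, reassembled from the fragments quoted in this thread (details abridged): CPU 0 stores the function pointer and queues the work; the worker on CPU 1 calls through that pointer, so the question is whether queueing orders the store against the work function's execution:

	/* CPU 0 */
	device->workfn = funcA;         /* plain store */
	queue_delayed_work(fw_workqueue, &device->work, 0);

	/* CPU 1, inside the worker */
	static void fw_device_workfn(struct work_struct *work)
	{
		struct fw_device *device = container_of(to_delayed_work(work),
							struct fw_device, work);
		device->workfn(work);   /* must observe the store of funcA */
	}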
On 02/20/2014 09:13 PM, Tejun Heo wrote:
> On Thu, Feb 20, 2014 at 09:07:27PM -0500, Peter Hurley wrote:
> > On 02/20/2014 08:59 PM, Tejun Heo wrote:
> > > Hello,
> > >
> > > On Thu, Feb 20, 2014 at 08:44:46PM -0500, Peter Hurley wrote:
> > > > >+static void fw_device_workfn(struct work_struct *work)
> > > > >+{
> > > > >+	struct fw_device *device = container_of(to_delayed_work(work), [...]
On Thu, Feb 20, 2014 at 09:07:27PM -0500, Peter Hurley wrote:
> On 02/20/2014 08:59 PM, Tejun Heo wrote:
> >Hello,
> >
> >On Thu, Feb 20, 2014 at 08:44:46PM -0500, Peter Hurley wrote:
> >>>+static void fw_device_workfn(struct work_struct *work)
> >>>+{
> >>>+ struct fw_device *device = container_of(to_delayed_work(work),
> >>>+                            struct fw_device, work);
On 02/20/2014 08:59 PM, Tejun Heo wrote:
> Hello,
>
> On Thu, Feb 20, 2014 at 08:44:46PM -0500, Peter Hurley wrote:
> > >+static void fw_device_workfn(struct work_struct *work)
> > >+{
> > >+	struct fw_device *device = container_of(to_delayed_work(work),
> > >+					struct fw_device, work); [...]
Hello,
On Thu, Feb 20, 2014 at 08:44:46PM -0500, Peter Hurley wrote:
> >+static void fw_device_workfn(struct work_struct *work)
> >+{
> >+	struct fw_device *device = container_of(to_delayed_work(work),
> >+					struct fw_device, work);
>
> I think this needs [...]
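A hedged reconstruction of the replacement pattern shown in the fragments above (member layout abridged, delay value illustrative, fw_device_init named as in firewire core-device): the work item always points at one wrapper, which dispatches through a function pointer stored beside it, instead of PREPARE_DELAYED_WORK() rewriting the work function:

	struct fw_device {
		/* ... */
		struct delayed_work work;
		work_func_t workfn;     /* which function to run next */
	};

	static void fw_device_workfn(struct work_struct *work)
	{
		struct fw_device *device = container_of(to_delayed_work(work),
							struct fw_device, work);
		device->workfn(work);
	}

	/* callers set the pointer, then queue: */
	device->workfn = fw_device_init;
	queue_delayed_work(fw_workqueue, &device->work, delay);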
On 02/20/2014 03:44 PM, Tejun Heo wrote:
> PREPARE_[DELAYED_]WORK() are being phased out. They have few users
> and a nasty surprise in terms of reentrancy guarantee as workqueue
> considers work items to be different if they don't have the same work
> function.
>
> firewire core-device and sbp2 have been [...]