> > I have these ones also here in case anyone is interested:
> > https://github.com/ereshetova/linux-stable/commits/refcount_t_fs
> > https://github.com/ereshetova/linux-stable/commits/refcount_t_block
> >
> > They haven't been rebased for a while, but if there is an interest,
> > I can certainly
> On Mon, Jun 15, 2020 at 10:10:08AM +0800, Xiaoming Ni wrote:
> > On 2020/6/13 2:34, Kees Cook wrote:
> > > This series was never applied[1], and was recently pointed out as
> > > missing[2]. If someone has a tree for this, please take it. Otherwise,
> > > please Ack and I'll send it to Linus.
>
>> The in-stack randomization is really a very small change, both code-wise
>> and logic-wise.
>> It does not affect real workloads and does not require enabling other
>> features (such as GCC plugins).
>> So, I think we should really reconsider its inclusion.
>I'd agree: the code is tiny and
Ingo, Andy,
I want to summarize here the data (including the performance numbers)
and reasoning for the in-stack randomization feature. I have organized
it as a simple set of questions and answers below.
Q: Why do we need in-stack per-syscall randomization when we already have
all known attack vectors covered with
> I confess I've kind of lost the plot on the performance requirements
> at this point. Instead of measuring and evaluating potential
> solutions, can we try to approach this from the opposite direction and
> ask what the requirements are?
>
> What's the maximum number of CPU cycles that we are
> > With 5 bits there's a ~96.9% chance of crashing the system in an attempt,
> > the exploit cannot be used for a range of attacks, including spear
> > attacks and fast-spreading worms, right? A crashed and inaccessible
> > system also increases the odds of leaving around unfinished attack code
>
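The ~96.9% figure quoted above is just 1 - 1/2^5: five random bits give 32
equally likely stack placements, so a single blind guess lands correctly with
probability 1/32 and crashes the target otherwise. A minimal sketch of the
arithmetic (userspace C, helper name is mine):

```c
#include <assert.h>

/* Chance that one blind guess at an n-bit random stack offset is wrong
 * (and therefore crashes the target): 1 - 1/2^bits.
 */
double crash_probability(unsigned int bits)
{
    double positions = (double)(1u << bits);
    return 1.0 - 1.0 / positions;
}
```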
> > I find it ridiculous that even with 4K blocked get_random_bytes(), which
> > gives us 32k bits, which with 5 bits should amortize the RNG call to
> > something like "once per 6553 calls", we still see 17% overhead? It's
> > either a measurement artifact, or something doesn't compute.
>
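The "once per 6553 calls" amortization above is straightforward: a 4096-byte
buffer holds 32768 random bits, and draining it 5 bits per syscall means one
get_random_bytes() refill every 32768 / 5 = 6553 syscalls. A sketch of that
arithmetic (helper name is mine):

```c
#include <assert.h>

/* Syscalls served per bulk RNG refill: buffer bits divided by bits
 * consumed per call.  4096 bytes at 5 bits per syscall -> 6553 calls.
 */
unsigned int calls_per_refill(unsigned int buf_bytes, unsigned int bits_per_call)
{
    return (buf_bytes * 8) / bits_per_call;
}
```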
> If
> * Reshetova, Elena wrote:
>
> > > * Reshetova, Elena wrote:
> > >
> > > > CONFIG_PAGE_TABLE_ISOLATION=n:
> > > >
> > > > base: Simple syscall: 0.0510
> > > > microsecon
> * Reshetova, Elena wrote:
>
> > CONFIG_PAGE_TABLE_ISOLATION=n:
> >
> > base: Simple syscall: 0.0510 microseconds
> > get_random_bytes(4096 bytes buffer): Simple syscall: 0.0597 microseconds
> >
> > So, pure speed w
..
> > rdrand (calling every 8 syscalls): Simple syscall: 0.0795 microseconds
>
> You could try something like:
> u64 rand_val = cpu_var->syscall_rand
>
> while (unlikely(rand_val == 0))
> rand_val = rdrand64();
>
> stack_offset = rand_val & 0xff;
>
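The pseudocode above can be modeled in userspace roughly as follows: keep one
64-bit random word per CPU, hand out the low byte as the stack offset, shift
it away, and only call the expensive RNG when the word is exhausted. This is
a sketch under my own assumptions (rdrand64() stubbed with rand(), the per-CPU
variable modeled as a plain static), not the actual proposal:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-in for the RDRAND instruction; in the kernel this would be a
 * real rdrand64() or a get_random_*() call. */
static uint64_t rdrand64(void)
{
    return ((uint64_t)rand() << 32) | (uint64_t)rand();
}

/* Models the per-CPU syscall_rand variable from the pseudocode. */
static uint64_t syscall_rand;

unsigned int random_stack_offset(void)
{
    /* Refill only when the buffered word is used up. */
    while (syscall_rand == 0)
        syscall_rand = rdrand64();

    unsigned int offset = syscall_rand & 0xff;  /* low 8 bits */
    syscall_rand >>= 8;                         /* consume them */
    return offset;
}
```

This amortizes one hardware RNG call over roughly eight syscalls; masking
fewer bits per call stretches the buffer further at the cost of a smaller
offset range.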
> * Andy Lutomirski wrote:
>
> > Or we decide that calling get_random_bytes() is okay with IRQs off and
> > this all gets a bit simpler.
>
> BTW., before we go down this path any further, is the plan to bind this
> feature to a real CPU-RNG capability, i.e. to the RDRAND instruction,
> which
>> On Fri, May 3, 2019 at 9:40 AM David Laight wrote:
> >
> > That gives you 10 system calls per rdrand instruction
> > and mostly takes the latency out of line.
>
> Do we really want to do this? What is the attack scenario?
>
> With no VLA's, and the stackleak plugin, what's the upside? Are we
> * David Laight wrote:
>
> > It has already been measured - it is far too slow.
>
> I don't think proper buffering was tested, was it? Only a per syscall
> RDRAND overhead which I can imagine being not too good.
>
Well, I have some numbers, but I am struggling to understand one
aspect there.
From: Reshetova, Elena
> > Sent: 30 April 2019 18:51
> ...
> > +unsigned char random_get_byte(void)
> > +{
> > +struct rnd_buffer *buffer = _cpu_var(stack_rand_offset);
> > +unsigned char res;
> > +
> > +if (buffer->byte_counter >=
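The snippet is cut off above, but the shape of the idea survives: a per-CPU
4096-byte buffer is filled by one bulk get_random_bytes() call and then
drained one byte per syscall. A userspace model, with the per-CPU accessor
reduced to a static and get_random_bytes() stubbed with rand() (sizes and
names are assumptions, not the original patch):

```c
#include <assert.h>
#include <stdlib.h>

#define RND_BUF_SIZE 4096

struct rnd_buffer {
    unsigned char bytes[RND_BUF_SIZE];
    unsigned int byte_counter;
};

/* Stub for the kernel's bulk RNG call. */
static void get_random_bytes(unsigned char *buf, unsigned int n)
{
    for (unsigned int i = 0; i < n; i++)
        buf[i] = (unsigned char)rand();
}

/* Models the per-CPU buffer; starts "empty" so the first call refills. */
static struct rnd_buffer stack_rand_offset = { .byte_counter = RND_BUF_SIZE };

unsigned char random_get_byte(void)
{
    struct rnd_buffer *buffer = &stack_rand_offset;

    if (buffer->byte_counter >= RND_BUF_SIZE) {
        get_random_bytes(buffer->bytes, RND_BUF_SIZE);
        buffer->byte_counter = 0;
    }
    return buffer->bytes[buffer->byte_counter++];
}
```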
> From: Reshetova, Elena
> > Sent: 30 April 2019 18:51
> ...
> > I guess this is true, so I did a quick implementation now to estimate the
> > performance hits.
> > Here are the preliminary numbers (proper ones will take a bit more time):
> >
> >
>
> > On Apr 29, 2019, at 12:46 AM, Reshetova, Elena
> wrote:
> >
> >
> >>>> On Apr 26, 2019, at 7:01 AM, Theodore Ts'o wrote:
> >>>
> >
> >> It seems to me
> >> that we should be using the “fast-erasure” construction
> On Fri, Apr 26, 2019 at 10:01:02AM -0400, Theodore Ts'o wrote:
> > On Fri, Apr 26, 2019 at 11:33:09AM +, Reshetova, Elena wrote:
> > > Adding Eric and Herbert to continue discussion for the chacha part.
> > > So, as a short summary I am trying to f
> On Fri, Apr 26, 2019 at 11:33:09AM +0000, Reshetova, Elena wrote:
> > Adding Eric and Herbert to continue discussion for the chacha part.
> > So, as a short summary I am trying to find out a fast (fast enough to be
> > used per
> syscall
> > invocation) source o
> > On Apr 26, 2019, at 7:01 AM, Theodore Ts'o wrote:
> >
> >> On Fri, Apr 26, 2019 at 11:33:09AM +0000, Reshetova, Elena wrote:
> >> Adding Eric and Herbert to continue discussion for the chacha part.
> >> So, as a short summary I am trying to find out a
> Hi,
>
> Sorry for the delay - Easter holidays + I was trying to arrange my brain
> around
> proposed options.
> Here is what I think our options are with regards to the source of randomness:
>
> 1) rdtsc or variations based on it (David proposed some CRC-based variants for
> example)
> 2)
> From: Reshetova, Elena
> > Sent: 24 April 2019 12:43
> >
> > Sorry for the delay - Easter holidays + I was trying to arrange my brain
> > around
> proposed options.
> > Here is what I think our options are with regards to the source of randomness:
> >
>
Hi,
Sorry for the delay - Easter holidays + I was trying to arrange my brain around
proposed options.
Here is what I think our options are with regards to the source of randomness:
1) rdtsc or variations based on it (David proposed some CRC-based variants for
example)
2) prandom-based options
> On Tue, Apr 16, 2019 at 11:10:16AM +0000, Reshetova, Elena wrote:
> > >
> > > The kernel can execute millions of syscalls per second, I'm pretty sure
> > > there's a statistical attack against:
> > >
> > > * This is a maximally equidistribu
> So a couple of comments; I wasn't able to find the full context for
> this patch, and looking over the thread on kernel-hardening from late
> February still left me confused exactly what attacks this would help
> us protect against (since this isn't my area and I didn't take the
> time to read
Adding Theodore & Daniel since I guess they are the best positioned to comment
on
exact strengths of prandom. See my comments below.
> * Reshetova, Elena wrote:
>
> > > 4)
> > >
> > > But before you tweak the patch, a more fundamental question:
> > &
Hi Ingo,
Thank you for your feedback! See my comments below.
> * Elena Reshetova wrote:
>
> > This is an example of produced assembly code for gcc x86_64:
> >
> > ...
> > add_random_stack_offset();
> > 0x810022e9 callq 0x81459570
> > 0x810022ee movzbl %al,%eax
> >
> On Wed, Apr 10, 2019 at 3:24 AM Reshetova, Elena
> wrote:
> >
> >
> > > > > On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > > > > > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > > > > &g
> * Elena Reshetova wrote:
>
> > 2) Andy's tests, misc-tests: ./timing_test_64 10M sys_enosys
> > base:1000 loops in 1.62224s
> > = 162.22 nsec / loop
> > random_offset (prandom_u32() every syscall): 1000 loops in 1.64660s
> > =
>
> > > On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > > > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > > > index 7bc105f47d21..38ddc213a5e9 100644
> > > > --- a/arch/x86/entry/common.c
> > > > +++ b/arch/x86/entry/common.c
> > > > @@ -35,6 +35,12 @@
> >
> * Josh Poimboeuf wrote:
>
> > On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > > index 7bc105f47d21..38ddc213a5e9 100644
> > > --- a/arch/x86/entry/common.c
> > > +++ b/arch/x86/entry/common.c
> > > @@
> On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > index 7bc105f47d21..38ddc213a5e9 100644
> > --- a/arch/x86/entry/common.c
> > +++ b/arch/x86/entry/common.c
> > @@ -35,6 +35,12 @@
> > #define
> On Thu, Mar 28, 2019 at 9:29 AM Andy Lutomirski wrote:
> > Doesn’t this just leak some of the canary to user code through side
> > channels?
>
> Erf, yes, good point. Let's just use prandom and be done with it.
And here I have some numbers on this. Actually prandom turned out to be pretty
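For reference, the prandom approach amounts to sampling a fast
non-cryptographic generator once per syscall. A userspace sketch using
xorshift32 as a stand-in (the kernel's prandom is actually a Tausworthe LFSR,
and the state would live per CPU):

```c
#include <assert.h>
#include <stdint.h>

/* xorshift32: a tiny fast PRNG, used here only as a stand-in for
 * prandom_u32(). */
static uint32_t xorshift32(uint32_t *state)
{
    uint32_t x = *state;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return *state = x;
}

/* Pick a 5-bit stack offset (32 possible placements) per "syscall". */
unsigned int prandom_stack_offset(uint32_t *state)
{
    return xorshift32(state) & 0x1f;
}
```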
> On Mon, Mar 18, 2019 at 1:16 PM Andy Lutomirski wrote:
> > On Mon, Mar 18, 2019 at 2:41 AM Elena Reshetova
> > wrote:
> > > Performance:
> > >
> > > 1) lmbench: ./lat_syscall -N 100 null
> > > base: Simple syscall: 0.1774 microseconds
> > > random_offset
> On Mon, Mar 18, 2019 at 01:15:44PM -0700, Andy Lutomirski wrote:
> > On Mon, Mar 18, 2019 at 2:41 AM Elena Reshetova
> > wrote:
> > >
> > > If CONFIG_RANDOMIZE_KSTACK_OFFSET is selected,
> > > the kernel stack offset is randomized upon each
> > > entry to a system call after fixed location of
Something is really weird with my Intel mail: it only now delivered all the
messages to me in one go, and I was thinking that I wasn't getting any feedback...
> > If CONFIG_RANDOMIZE_KSTACK_OFFSET is selected,
> > the kernel stack offset is randomized upon each
> > entry to a system call after fixed location of
On Mon, 11 Feb 2019 13:49:27 -0800
> Kees Cook wrote:
>
> > On Mon, Feb 11, 2019 at 12:28 PM Steven Rostedt wrote:
> > >
> > > On Mon, 11 Feb 2019 15:27:25 -0500
> > > Steven Rostedt wrote:
> > >
> > > > On Mon, 11 Feb 2019 12:21:32 -0800
> > > > Kees Cook wrote:
> > > >
> > > > > > > Looks
> On Fri, Jan 18, 2019 at 02:27:25PM +0200, Elena Reshetova wrote:
> > Elena Reshetova (5):
> > sched: convert sighand_struct.count to refcount_t
> > sched: convert signal_struct.sigcnt to refcount_t
>
> These should really be seen by Oleg (bounced) and I'll await his reply.
>
> > sched:
> * Elena Reshetova [2019-01-16 13:20:27]:
>
> > atomic_t variables are currently used to implement reference
> > counters with the following properties:
> > - counter is initialized to 1 using atomic_set()
> > - a resource is freed upon counter reaching zero
> > - once counter reaches
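The pattern described above, and the reason a dedicated type helps, can be
modeled in userspace: the counter starts at 1, the object is freed when a put
drops it to 0, and on overflow the counter saturates instead of wrapping, so a
memory leak replaces a use-after-free. This mirrors the shape of the
refcount_t API, not the kernel implementation:

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

typedef struct { unsigned int refs; } refcount_t;

#define REFCOUNT_SATURATED UINT_MAX

static void refcount_set(refcount_t *r, unsigned int n) { r->refs = n; }

static void refcount_inc(refcount_t *r)
{
    if (r->refs == REFCOUNT_SATURATED)
        return;             /* saturated: leak, never overflow to 0 */
    r->refs++;
}

/* Returns true on the 1 -> 0 transition: the caller must free. */
static bool refcount_dec_and_test(refcount_t *r)
{
    if (r->refs == REFCOUNT_SATURATED)
        return false;       /* saturated objects are never freed */
    return --r->refs == 0;
}
```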
> On Tue, Jan 29, 2019 at 01:55:32PM +0000, Reshetova, Elena wrote:
> > > On Mon, Jan 28, 2019 at 02:27:26PM +0200, Elena Reshetova wrote:
> > > > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > > > index 3cd13a3..a1e87d2 100644
> > > >
> [ Cc'ing Masami as he maintains uprobes (we need to add uprobes to
> > the MAINTAINERS file ]
>
> Thanks Steve, I think it is maintained mainly by Srikar and Oleg.
> Srikar, Oleg, could you update MAINTAINERS file to add UPROBES entry?
> And ack this change?
Srikar, Oleg, could you please
> On Thu, Jan 31, 2019 at 11:04 AM Reshetova, Elena
> wrote:
> >
> > > Just to check, has this been tested with CONFIG_REFCOUNT_FULL and
> > > > something poking kcov?
> > > >
> > > > Given lib/refcount.c is instrumented, the refcou
> Just to check, has this been tested with CONFIG_REFCOUNT_FULL and
> > something poking kcov?
> >
> > Given lib/refcount.c is instrumented, the refcount_*() calls will
> > recurse back into the kcov code. It looks like that's fine, given these
> > are only manipulated in setup/teardown paths,
> So, you are saying that ACQUIRE does not guarantee that "po-later stores
> > on the same CPU and all propagated stores from other CPUs
> > must propagate to all other CPUs after the acquire operation "?
> > I was reading about acquire before posting this and trying to understand,
> > and this
> On Mon, Jan 28, 2019 at 1:10 PM Elena Reshetova
> wrote:
> >
> > This adds an smp_acquire__after_ctrl_dep() barrier on successful
> > decrease of refcounter value from 1 to 0 for refcount_dec(sub)_and_test
> > variants and therefore gives stronger memory ordering guarantees than
> > prior
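The ordering point in this patch, modeled with C11 atomics: the decrement
itself needs only release semantics, but when it hits zero the freeing path
must also observe every store made before other CPUs' puts, so an acquire
fence is added on the success branch. A userspace analogue of the
smp_acquire__after_ctrl_dep() mentioned above, a sketch rather than the
kernel code:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static bool refcount_dec_and_test(atomic_uint *r)
{
    /* Release: our earlier stores are visible before the count drops. */
    if (atomic_fetch_sub_explicit(r, 1, memory_order_release) == 1) {
        /* Acquire on success: the free path sees all prior stores;
         * the C11 analogue of smp_acquire__after_ctrl_dep(). */
        atomic_thread_fence(memory_order_acquire);
        return true;
    }
    return false;
}
```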
> On Mon, Jan 28, 2019 at 02:27:26PM +0200, Elena Reshetova wrote:
> > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > index 3cd13a3..a1e87d2 100644
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -1171,7 +1171,7 @@ static void perf_event_ctx_deactivate(struct
>
> On Mon, Jan 28, 2019 at 03:29:10PM +0100, Andrea Parri wrote:
>
> > > diff --git a/arch/x86/include/asm/refcount.h
> > > b/arch/x86/include/asm/refcount.h
> > > index dbaed55..ab8f584 100644
> > > --- a/arch/x86/include/asm/refcount.h
> > > +++ b/arch/x86/include/asm/refcount.h
> > > @@
> On Mon, Jan 21, 2019 at 11:05:03AM -0500, Alan Stern wrote:
> > On Mon, 21 Jan 2019, Peter Zijlstra wrote:
>
> > > Any additional ordering; like the one you have above; are not strictly
> > > required for the proper functioning of the refcount. Rather, you rely on
> > > additional ordering and
> > I suppose we can add smp_acquire__after_ctrl_dep() on the true branch.
> > Then it reall does become rel_acq.
> >
> > A wee something like so (I couldn't find an arm64 refcount, even though
> > I have distinct memories of talk about it).
>
> In the end, arm and arm64 chose to use
> On Tue, Jan 22, 2019 at 09:11:42AM +0000, Reshetova, Elena wrote:
> > Will you be able to take this and the other scheduler
> > patch to whatever tree/path it should normally go to get eventually
> > integrated?
>
> I've queued them up.
Thank you!
Best Regards,
Elena.
> On 01/18, Elena Reshetova wrote:
> >
> > For the signal_struct.sigcnt it might make a difference
> > in following places:
> > - put_signal_struct(): decrement in refcount_dec_and_test() only
> >provides RELEASE ordering and control dependency on success
> >vs. fully ordered atomic
> On Fri, Jan 18, 2019 at 02:27:25PM +0200, Elena Reshetova wrote:
> > I would really love finally to merge these old patches
> > (now rebased on top of linux-next/master as of last friday),
> > since as far as I remember none has raised any more concerns
> > on them.
> >
> > refcount_t has been
> Hi Elena,
Hi!
>
> [...]
>
> > **Important note for maintainers:
> >
> > Some functions from refcount_t API defined in lib/refcount.c
> > have different memory ordering guarantees than their atomic
> > counterparts.
> > The full comparison can be seen in
> >
==
ANNOUNCEMENT AND CALL FOR PARTICIPATION
LINUX SECURITY SUMMIT EUROPE 2018
25-26 October
> On Wed 29-11-17 13:22:20, Elena Reshetova wrote:
> > atomic_t variables are currently used to implement reference
> > counters with the following properties:
> > - counter is initialized to 1 using atomic_set()
> > - a resource is freed upon counter reaching zero
> > - once counter reaches
> On Fri, Jan 05, 2018 at 02:57:50PM +, Mark Rutland wrote:
> > Note: this patch is an *example* use of the nospec API. It is understood
> > that this is incomplete, etc.
> >
> > Under speculation, CPUs may mis-predict branches in bounds checks. Thus,
> > memory accesses under a bounds check
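The fix the nospec API applies is a branchless clamp: compute a mask from the
comparison itself so that even when the CPU mis-predicts the bounds check, the
speculative access stays in range. A simplified userspace sketch in the spirit
of the kernel's array_index_nospec() (shift-based mask, assumes the arithmetic
right shift gcc/clang provide):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* All-ones mask when index < size, zero otherwise, computed without a
 * branch the CPU could mis-predict. */
static size_t index_nospec(size_t index, size_t size)
{
    size_t mask = (size_t)((intptr_t)(index - size) >>
                           (sizeof(size_t) * 8 - 1));
    return index & mask;   /* out-of-bounds indices collapse to 0 */
}
```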
> On Thu, Jan 04, 2018 at 02:15:53AM +, Alan Cox wrote:
> >
> > > > Elena has done the work of auditing static analysis reports to a dozen
> > > > or so locations that need some 'nospec' handling.
> > >
> > > How exactly is that related (especially in longer-term support terms) to
> > > BPF
> On Fri, Dec 22, 2017 at 09:25:53AM -0500, J. Bruce Fields wrote:
> > On Fri, Dec 22, 2017 at 09:29:15AM +, Reshetova, Elena wrote:
> > >
> > > On Wed, Nov 29, 2017 at 01:15:43PM +0200, Elena Reshetova wrote:
> > > > atomic_t variables ar
On Wed, Nov 29, 2017 at 01:15:43PM +0200, Elena Reshetova wrote:
> atomic_t variables are currently used to implement reference
> counters with the following properties:
> - counter is initialized to 1 using atomic_set()
> - a resource is freed upon counter reaching zero
> - once counter
On Wed, Nov 29, 2017 at 4:36 AM, Elena Reshetova
> wrote:
> > Some functions from the refcount_t API provide different
> > memory ordering guarantees than their atomic counterparts.
> > This adds a document outlining these differences.
> >
> > Signed-off-by: Elena Reshetova
>
> Thanks for the
On 11/29/2017 04:36 AM, Elena Reshetova wrote:
> > Some functions from the refcount_t API provide different
> > memory ordering guarantees than their atomic counterparts.
> > This adds a document outlining these differences.
> >
> > Signed-off-by: Elena Reshetova
> > ---
> >
> Thanks, applying all four for 4.16.--b.
Thank you very much!
Best Regards,
Elena.
>
> On Wed, Nov 29, 2017 at 01:15:42PM +0200, Elena Reshetova wrote:
> > This series, for lockd component, replaces atomic_t reference
> > counters with the new refcount_t type and API (see
> >
> On Tue, Nov 28 2017 at 5:07am -0500,
> Reshetova, Elena wrote:
>
> >
> > > On Fri, Nov 24, 2017 at 2:36 AM, Reshetova, Elena
> > > wrote:
> > > >> On Fri, Oct 20, 2017 at 10:37:38AM +0300, Elena Reshetova wrote:
> > > >>
> On Fri, Nov 24, 2017 at 2:36 AM, Reshetova, Elena
> wrote:
> >> On Fri, Oct 20, 2017 at 10:37:38AM +0300, Elena Reshetova wrote:
> >> > } else if (dd->dm_dev->mode != (mode | dd->dm_dev->mode)) {
> >> > r = upgr
> On Fri, Nov 24, 2017 at 08:29:42AM +0000, Reshetova, Elena wrote:
> > By looking at the code, I don't see where the change in the reference
> > counting
> > could have caused this.
>
> The cause was the bug I identified in patch 3, not this patch.
Oh, ok, because
> Dne 20.10.2017 v 09:37 Elena Reshetova napsal(a):
> > atomic_t variables are currently used to implement reference
> > counters with the following properties:
> > - counter is initialized to 1 using atomic_set()
> > - a resource is freed upon counter reaching zero
> > - once counter
> On Fri, Oct 20, 2017 at 10:37:38AM +0300, Elena Reshetova wrote:
> > } else if (dd->dm_dev->mode != (mode | dd->dm_dev->mode)) {
> > r = upgrade_mode(dd, mode, t->md);
> > if (r)
> > return r;
> > + refcount_inc(&dd->count);
> > }
>
>
Hi Kees,
Thank you for the proof reading. I will fix the typos/language, but
see the comments on bigger things inside.
> On Tue, Nov 14, 2017 at 11:55 PM, Elena Reshetova
> wrote:
> > Some functions from the refcount_t API provide different
> > memory ordering guarantees than their atomic
> The middle of the merge window is the wrong time to send patches as
> maintaner attention is going to making certain the merge goes smoothly
> and nothing is missed.
Sorry Eric, please feel free to ignore the patch until you have time.
It is very difficult to figure out the correct time, since
> On Mon, Nov 13, 2017 at 04:01:11PM +0000, Reshetova, Elena wrote:
> >
> > > On Mon, Nov 13, 2017 at 09:09:57AM +, Reshetova, Elena wrote:
> > > >
> > > >
> > > > > Note that there's work done on better documents and updates to this
> On Mon, Nov 13, 2017 at 09:09:57AM +0000, Reshetova, Elena wrote:
> >
> >
> > > Note that there's work done on better documents and updates to this one.
> > > One document that might be good to read (I have not in fact had time to
> > > read it myself
> Note that there's work done on better documents and updates to this one.
> One document that might be good to read (I have not in fact had time to
> read it myself yet :-():
>
> https://github.com/aparri/memory-
> model/blob/master/Documentation/explanation.txt
>
I have just finished
Hi Randy,
Thank you for your corrections! I will fix the language-related issues in the
next version. More on content below.
> On 11/06/2017 05:32 AM, Elena Reshetova wrote:
> > Some functions from the refcount_t API provide different
> > memory ordering guarantees than their atomic counterparts.
> On Thu, Nov 02, 2017 at 11:04:53AM +0000, Reshetova, Elena wrote:
>
> > Well refcount_dec_and_test() is not the only function that has different
> > memory ordering specifics. So, the full answer then for any arbitrary case
> > according to your points
> [I missed this followup, other stuff]
>
> On Mon, Oct 23, 2017 at 03:41:49PM +0200, Peter Zijlstra wrote:
> > On Sat, Oct 21, 2017 at 10:21:11AM +1100, Dave Chinner wrote:
> > > On Fri, Oct 20, 2017 at 02:07:53PM +0300, Elena Reshetova wrote:
> > > IMO, that makes it way too hard to review
Sorry for delayed reply, but I was actually reading and trying to understand
all the involved notions, so it took a while...
> On Fri, Oct 27, 2017 at 06:49:55AM +0000, Reshetova, Elena wrote:
> > Could we possibly have a bit more elaborate discussion on this?
> >
> > O