- On Sep 25, 2017, at 8:25 AM, Peter Zijlstra pet...@infradead.org wrote:
> On Mon, Sep 25, 2017 at 08:10:54PM +0800, Boqun Feng wrote:
>> > static void membarrier_register_private_expedited(void)
>> > {
>> > struct task_struct *p = current;
>> >
>> > if (READ_ONCE(p->mm->memb
On Mon, Sep 25, 2017 at 08:10:54PM +0800, Boqun Feng wrote:
> > static void membarrier_register_private_expedited(void)
> > {
> > struct task_struct *p = current;
> >
> > if (READ_ONCE(p->mm->membarrier_private_expedited))
> > return;
> > membarrier_arch_reg
On Sun, Sep 24, 2017 at 02:23:04PM +0000, Mathieu Desnoyers wrote:
[...]
> >>
> >> copy_mm() is performed without holding current->sighand->siglock, so
> >> it appears to be racing with concurrent membarrier register cmd.
> >
> > Speak of racing, I think we currently have a problem if we do a
> >
- On Sep 24, 2017, at 9:30 AM, Boqun Feng boqun.f...@gmail.com wrote:
> On Fri, Sep 22, 2017 at 03:10:10PM +0000, Mathieu Desnoyers wrote:
>> - On Sep 22, 2017, at 4:59 AM, Boqun Feng boqun.f...@gmail.com wrote:
>>
>> > On Tue, Sep 19, 2017 at 06:13:41PM -0400, Mathieu Desnoyers wrote:
>>
On Fri, Sep 22, 2017 at 03:10:10PM +0000, Mathieu Desnoyers wrote:
> - On Sep 22, 2017, at 4:59 AM, Boqun Feng boqun.f...@gmail.com wrote:
>
> > On Tue, Sep 19, 2017 at 06:13:41PM -0400, Mathieu Desnoyers wrote:
> > [...]
> >> +static inline void membarrier_arch_sched_in(struct task_struct *pr
- On Sep 22, 2017, at 4:59 AM, Boqun Feng boqun.f...@gmail.com wrote:
> On Tue, Sep 19, 2017 at 06:13:41PM -0400, Mathieu Desnoyers wrote:
> [...]
>> +static inline void membarrier_arch_sched_in(struct task_struct *prev,
>> +struct task_struct *next)
>> +{
>> +/*
>> + * Onl
On Tue, Sep 19, 2017 at 06:13:41PM -0400, Mathieu Desnoyers wrote:
[...]
> +static inline void membarrier_arch_sched_in(struct task_struct *prev,
> + struct task_struct *next)
> +{
> + /*
> + * Only need the full barrier when switching between processes.
> + */
> + if
On Fri, Sep 22, 2017 at 10:24:41AM +0200, Peter Zijlstra wrote:
> On Fri, Sep 22, 2017 at 11:22:06AM +0800, Boqun Feng wrote:
>
> > The idea is in membarrier_private_expedited(), we go through all ->curr
> > on each CPU and
> >
> > 1) If it's a userspace task and its ->mm is matched, we send an
On Fri, Sep 22, 2017 at 11:22:06AM +0800, Boqun Feng wrote:
> The idea is in membarrier_private_expedited(), we go through all ->curr
> on each CPU and
>
> 1) If it's a userspace task and its ->mm is matched, we send an IPI
>
> 2) If it's a kernel task, we skip
>
> (Because there w
- On Sep 21, 2017, at 11:30 PM, Boqun Feng boqun.f...@gmail.com wrote:
> On Fri, Sep 22, 2017 at 11:22:06AM +0800, Boqun Feng wrote:
>> Hi Mathieu,
>>
>> On Tue, Sep 19, 2017 at 06:13:41PM -0400, Mathieu Desnoyers wrote:
>> > Provide a new command allowing processes to register their intent t
On Fri, Sep 22, 2017 at 11:22:06AM +0800, Boqun Feng wrote:
> Hi Mathieu,
>
> On Tue, Sep 19, 2017 at 06:13:41PM -0400, Mathieu Desnoyers wrote:
> > Provide a new command allowing processes to register their intent to use
> > the private expedited command.
> >
> > This allows PowerPC to skip the
Hi Mathieu,
On Tue, Sep 19, 2017 at 06:13:41PM -0400, Mathieu Desnoyers wrote:
> Provide a new command allowing processes to register their intent to use
> the private expedited command.
>
> This allows PowerPC to skip the full memory barrier in switch_mm(), and
> only issue the barrier when sche
Provide a new command allowing processes to register their intent to use
the private expedited command.
This allows PowerPC to skip the full memory barrier in switch_mm(), and
only issue the barrier when scheduling into a task belonging to a
process that has registered to use expedited private.
P
13 matches