On Thu, Aug 6, 2020 at 10:37 AM Mathieu Desnoyers
<[email protected]> wrote:
>

> >>
> >> This is an unpriv IPI the world. That's a big no-no.
> >
> > removed in v2.
>
> I don't think the feature must be removed, but its implementation needs 
> adjustment.
>
> How about we simply piggy-back on the membarrier schemes we already have, and
> implement:
>
> membarrier_register_private_expedited(MEMBARRIER_FLAG_RSEQ)
> membarrier_private_expedited(MEMBARRIER_FLAG_RSEQ)
>
> All the logic is there to prevent sending IPIs to runqueues which are not
> running threads associated with the same mm. Considering that preemption
> does an rseq abort, running a thread belonging to a different mm should mean
> that this CPU is not currently executing an rseq critical section, or if it
> was, it has already been aborted, so it is quiescent.
>
> Then you'll probably want to change membarrier_private_expedited so it takes
> an extra "cpu" argument. If cpu=-1, iterate on all runqueues like we
> currently do. If cpu >= 0, only IPI that CPU if the thread currently running
> has the same mm.
>

Thanks, Mathieu! I'll prepare something based on your and Peter's feedback.
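
To make sure I'm reading the proposed semantics correctly, here is the rough
userspace-facing shape I have in mind. The command names and values below are
placeholders of my own, for illustration only, not anything in the uapi today:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Placeholder commands mirroring the register/execute pair suggested above;
 * real names and bit values would come from the actual uapi patch. */
#define MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ  (1 << 8)
#define MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ           (1 << 7)

/* Proposed extra argument: cpu == -1 keeps the current behavior (IPI every
 * runqueue running a thread of this mm); cpu >= 0 IPIs only that CPU, and
 * only if the task running there shares our mm. */
static int membarrier(int cmd, unsigned int flags, int cpu)
{
        return syscall(__NR_membarrier, cmd, flags, cpu);
}

int main(void)
{
        /* Once per process: opt in to the rseq-aware expedited scheme. */
        if (membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ, 0, -1))
                perror("register");

        /* Later, e.g. from a per-CPU reclaim path: fence only CPU 3. */
        if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ, 0, 3))
                perror("rseq fence");

        return 0;
}

(The cpu=-1 form would simply keep the existing all-runqueues behavior as a
fallback.)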

> Also, should this belong to the membarrier or the rseq system call? It just
> looks like membarrier happens to implement very similar things for barriers,
> but arguably this is really about rseq. I wonder if we should expose this
> through rseq instead, even if we end up using membarrier code.

Yes, this is more about rseq; on the other hand, the high-level API/behavior
looks closer to that of membarrier, and a lot of code will be shared.

As you are the maintainer for both rseq and membarrier, this is for
you to decide, I guess... :)
