On 20/11/24 15:23, Frederic Weisbecker wrote:

> Ah but there is CT_STATE_GUEST and I see the last patch also applies that to
> CT_STATE_IDLE.
>
> So that could be:
>
> bool ct_set_cpu_work(unsigned int cpu, unsigned int work)
> {
>       struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
>       unsigned int old;
>       bool ret = false;
>
>       preempt_disable();
>
>       old = atomic_read(&ct->state);
>
>       /* CT_STATE_IDLE can be added to last patch here */
>       if (!(old & (CT_STATE_USER | CT_STATE_GUEST))) {
>               old &= ~CT_STATE_MASK;
>               old |= CT_STATE_USER;
>       }

Hmph, so that lets us leverage the cmpxchg for a !CT_STATE_KERNEL check,
but we get an extra loop if the target CPU exits kernelspace to something
other than userspace (e.g. vcpu or idle) in the meantime - not great, not
terrible.

At the cost of one extra bit in the CT_STATE area, with CT_STATE_KERNEL=1
we could do:

  old = atomic_read(&ct->state);
  old &= ~CT_STATE_KERNEL;
