On 17.02.20 13:17, Roger Pau Monné wrote:
On Mon, Feb 17, 2020 at 01:11:59PM +0100, Jürgen Groß wrote:
On 17.02.20 12:49, Julien Grall wrote:
Hi Juergen,

On 17/02/2020 07:20, Juergen Gross wrote:
+void rcu_barrier(void)
 {
-    atomic_t cpu_count = ATOMIC_INIT(0);
-    return stop_machine_run(rcu_barrier_action, &cpu_count, NR_CPUS);
+    if ( !atomic_cmpxchg(&cpu_count, 0, num_online_cpus()) )

What prevents cpu_online_map from changing under your feet?
Shouldn't you grab the lock via get_cpu_maps()?

Oh, indeed.

This in turn will require a modification of the logic to detect parallel
calls on multiple cpus.
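
Roughly something like the below, I guess (only a sketch, using the existing
get_cpu_maps()/put_cpu_maps() trylock helpers and the new cpu_count atomic,
with the actual barrier kick-off left out):

void rcu_barrier(void)
{
    /* Pin cpu_online_map so num_online_cpus() stays valid below. */
    if ( !get_cpu_maps() )
        return;    /* can fail during a cpu plug/unplug - see below */

    if ( !atomic_cmpxchg(&cpu_count, 0, num_online_cpus()) )
    {
        /* Initiate the barrier action on all online cpus (not shown). */
    }

    put_cpu_maps();
}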

If you pick up my patch to turn that into an rw lock you shouldn't need to
worry about parallel calls, I think, but the lock acquisition can still fail
if there's a CPU plug/unplug going on:

https://lists.xenproject.org/archives/html/xen-devel/2020-02/msg00940.html

Thanks, but letting rcu_barrier() fail is a no-go, so I still need to
handle that case (i.e. the case of failing to get the lock). And handling
parallel calls isn't needed for functional correctness, but to avoid
unnecessary cpu synchronization (each detected parallel call can just
wait until the master has finished and then return).

BTW, the recursive spinlock today would allow e.g. rcu_barrier() to be
called inside a CPU plug/unplug section. Your rwlock removes that
possibility. Any chance that could be handled?
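
To illustrate what I mean (just a sketch; today get_cpu_maps() is basically a
recursive trylock on cpu_add_remove_lock, while your series switches it to a
read lock, if I'm reading it right):

/*
 * Today: recursive, so a CPU already inside the plug/unplug path (which
 * holds cpu_add_remove_lock via cpu_hotplug_begin()) can still call
 * get_cpu_maps() successfully, e.g. from rcu_barrier().
 */
bool get_cpu_maps(void)
{
    return spin_trylock_recursive(&cpu_add_remove_lock);
}

/*
 * With the rwlock conversion the plug/unplug path holds the lock for
 * write, so a read_trylock() on the same CPU can no longer succeed -
 * that's the nesting case I'm worried about.
 */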


Juergen
