On 10.03.20 17:37, Jan Beulich wrote:
> On 10.03.2020 17:34, Jürgen Groß wrote:
>> On 10.03.20 17:29, Jan Beulich wrote:
>>> On 10.03.2020 08:28, Juergen Gross wrote:
>>>> +void rcu_barrier(void)
>>>>  {
>>>> -    atomic_t cpu_count = ATOMIC_INIT(0);
>>>> -    return stop_machine_run(rcu_barrier_action, &cpu_count, NR_CPUS);
>>>> +    unsigned
On 10.03.20 17:29, Jan Beulich wrote:
> On 10.03.2020 08:28, Juergen Gross wrote:
>> @@ -143,51 +143,75 @@ static int qhimark = 1;
>>  static int qlowmark = 100;
>>  static int rsinterval = 1000;
>>
>> -struct rcu_barrier_data {
>> -    struct rcu_head head;
>> -    atomic_t *cpu_count;
>> -};
>> +/*
>> + * rcu_barrier() handling:
>> + *
Today rcu_barrier() calls stop_machine_run() to synchronize all physical
cpus, in order to ensure all pending rcu calls have finished when it
returns.
As stop_machine_run() uses tasklets, this requires scheduling idle vcpus
on all cpus, imposing the need to call rcu_barrier() on idle
cpus