On Thu, 2016-09-15 at 19:11 -0300, Thadeu Lima de Souza Cascardo wrote:
> Instead of keeping flow stats per NUMA node, keep them per CPU. When
> using megaflows, the stats lock can be a scalability bottleneck.
>
> On an E5-2690 12-core system, typical throughput went from ~4Mpps to
> ~15Mpps when forwarding between two 40GbE ports with a single flow
> configured on the datapath.
>
> This has been tested on a system with possible CPUs 0-7,16-23. After
> module removal, there was no corruption of the slab cache.
>
> Signed-off-by: Thadeu Lima de Souza Cascardo <[email protected]>
> Cc: pravin shelar <[email protected]>
> ---
> +	/* We open code this to make sure cpu 0 is always considered */
> +	for (cpu = 0; cpu < nr_cpu_ids; cpu = cpumask_next(cpu, cpu_possible_mask))
> +		if (flow->stats[cpu])
> 			kmem_cache_free(flow_stats_cache,
> -					(struct flow_stats __force *)flow->stats[node]);
> +					(struct flow_stats __force *)flow->stats[cpu]);
> 	kmem_cache_free(flow_cache, flow);
> }
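
For readers following along: for_each_possible_cpu() visits only the
CPUs set in cpu_possible_mask, so slot 0 could be skipped on a system
where CPU 0 is not a possible CPU, while the full patch stores each
flow's initial stats in slot 0. A minimal sketch of the difference,
where free_slot() is a hypothetical helper standing in for the
kmem_cache_free() call above:

	/* Mask-driven iteration: visits only CPUs present in
	 * cpu_possible_mask, so index 0 is not guaranteed. */
	for_each_possible_cpu(cpu)
		free_slot(flow, cpu);

	/* Open-coded iteration from the patch: starts at 0 unconditionally,
	 * then follows cpu_possible_mask, so slot 0 is always visited. */
	for (cpu = 0; cpu < nr_cpu_ids; cpu = cpumask_next(cpu, cpu_possible_mask))
		free_slot(flow, cpu);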
>
> @@ -757,7 +749,7 @@ int ovs_flow_init(void)
> 	BUILD_BUG_ON(sizeof(struct sw_flow_key) % sizeof(long));
>
> 	flow_cache = kmem_cache_create("sw_flow", sizeof(struct sw_flow)
> -				       + (nr_node_ids
> +				       + (nr_cpu_ids
> 					  * sizeof(struct flow_stats *)),
> 				       0, 0, NULL);
> 	if (flow_cache == NULL)
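
To connect the diff to the throughput numbers: with one stats slot per
CPU, the datapath fast path only ever takes the spinlock of the local
CPU's slot, so concurrent updaters of a single megaflow no longer
contend on one lock. A rough, untested sketch of the update path,
assuming RCU-protected datapath context and glossing over the
fallback-to-slot-0 policy of the real ovs_flow_stats_update():

	static void stats_update_sketch(struct sw_flow *flow, unsigned int len)
	{
		int cpu = smp_processor_id();
		struct flow_stats *stats;

		stats = rcu_dereference(flow->stats[cpu]);
		if (unlikely(!stats)) {
			/* First packet seen on this CPU: allocate its slot. */
			stats = kmem_cache_alloc(flow_stats_cache, GFP_ATOMIC);
			if (!stats)
				return;
			spin_lock_init(&stats->lock);
			rcu_assign_pointer(flow->stats[cpu], stats);
		}

		/* Fast path takes only the local CPU's lock. */
		spin_lock(&stats->lock);
		stats->packet_count++;
		stats->byte_count += len;
		spin_unlock(&stats->lock);
	}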
Well, if you switch to percpu stats, better to use the normal
alloc_percpu(struct flow_stats).

The old code was dealing with per-node allocation, so it could not use
the existing helper. There is no need to keep the open-coded scheme
forever.
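
For illustration, the alloc_percpu() variant would look roughly like
the following untested sketch (it changes the sw_flow layout, replacing
the trailing array of per-CPU pointers with a single percpu pointer):

	/* in struct sw_flow, instead of struct flow_stats __rcu *stats[]: */
	struct flow_stats __percpu *stats;

	/* allocation: one struct flow_stats per possible CPU, in one call */
	flow->stats = alloc_percpu(struct flow_stats);

	/* fast path: each CPU dereferences only its own copy */
	struct flow_stats *stats = this_cpu_ptr(flow->stats);

	/* teardown */
	free_percpu(flow->stats);

The trade-off is that alloc_percpu() allocates for every possible CPU
up front, whereas the open-coded array allows slots to be allocated
lazily, on the first packet a given CPU actually handles.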