Add a cache guard after the table holding the ring elements, to avoid
false sharing-like conflicts caused by next-line hardware prefetchers
when accessing elements at the end of the ring table.

Signed-off-by: Morten Brørup <[email protected]>
Acked-by: Konstantin Ananyev <[email protected]>
---
v2:
* Added comment describing reason for padding. (Konstantin)
---
 lib/ring/rte_ring.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/lib/ring/rte_ring.c b/lib/ring/rte_ring.c
index f10050a1c4..10b52dc679 100644
--- a/lib/ring/rte_ring.c
+++ b/lib/ring/rte_ring.c
@@ -73,8 +73,15 @@ rte_ring_get_memsize_elem(unsigned int esize, unsigned int count)
                return -EINVAL;
        }
 
+       static_assert(sizeof(struct rte_ring) == RTE_CACHE_LINE_ROUNDUP(sizeof(struct rte_ring)),
+                       "Size of struct rte_ring not cache line aligned");
        sz = sizeof(struct rte_ring) + (ssize_t)count * esize;
+       /* Add padding, to guard against false sharing-like effects
+        * on systems with a next-N-lines hardware prefetcher, when
+        * accessing elements at the end of the ring table.
+        */
        sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+       sz += RTE_CACHE_GUARD_LINES * RTE_CACHE_LINE_SIZE;
        return sz;
 }
 
-- 
2.43.0