On 2023-05-16 18:03, Morten Brørup wrote:
From: Stephen Hemminger [mailto:step...@networkplumber.org]
Sent: Tuesday, 16 May 2023 17.24
On Tue, 16 May 2023 13:41:46 +0000
Yasin CANER <yasinnca...@gmail.com> wrote:
From: Yasin CANER <yasin.ca...@ulakhaberlesme.com.tr>
After running for a while, rte_mempool_avail_count() returns a value larger
than the mempool size, which causes rte_mempool_in_use_count() to be
miscalculated. This patch avoids that miscalculation.
Bugzilla ID: 1229
Signed-off-by: Yasin CANER <yasin.ca...@ulakhaberlesme.com.tr>
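(As an aside, a short illustration of the miscalculation being described, not
part of the patch: rte_mempool_in_use_count() subtracts the available count
from mp->size, so an over-reported available count makes the unsigned
subtraction wrap around.)

#include <stdio.h>

/* Illustration only: rte_mempool_in_use_count() is effectively
 * mp->size - rte_mempool_avail_count(mp). If the available count is
 * ever reported larger than the size, the unsigned subtraction wraps. */
int main(void)
{
	unsigned int size = 1024;       /* stands in for mp->size */
	unsigned int avail = 1025;      /* over-counted by one */
	unsigned int in_use = size - avail;

	/* Prints 4294967295 (with 32-bit unsigned int) instead of 0. */
	printf("in use: %u\n", in_use);
	return 0;
}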
An alternative that avoids some code duplication.
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index cf5dea2304a7..2406b112e7b0 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1010,7 +1010,7 @@ rte_mempool_avail_count(const struct rte_mempool *mp)
count = rte_mempool_ops_get_count(mp);
if (mp->cache_size == 0)
- return count;
+ goto exit;
This bug can only occur here (i.e. with cache_size==0) if
rte_mempool_ops_get_count() returns an incorrect value. The bug should be fixed
there instead.
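As a sketch of what fixing it in the driver could look like (driver_read_count()
below is a made-up stand-in for however the backend reports its free-object
count, not an existing DPDK function), the driver's get_count callback could
clamp its own result:

#include <rte_mempool.h>

/* Stand-in for the backend's own free-object counter (ring, stack, ...);
 * purely illustrative, not part of any driver. */
unsigned int driver_read_count(void *pool_data);

static unsigned int
example_get_count(const struct rte_mempool *mp)
{
	unsigned int count = driver_read_count(mp->pool_data);

	/* A transient over-count from unsynchronized readers is bounded
	 * by the pool size, so clamp here rather than in the generic
	 * rte_mempool_avail_count(). */
	return count > mp->size ? mp->size : count;
}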
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
count += mp->local_cache[lcore_id].len;
@@ -1019,6 +1019,7 @@ rte_mempool_avail_count(const struct rte_mempool *mp)
This loop may result in the same symptom (i.e., count > mp->size). For
example, the LF stack uses a relaxed atomic load to retrieve its usage
count, which may well be reordered (by the CPU and/or compiler) relative
to these non-atomic loads of the cache lengths. The same issue may well
exist on the writer side, where updates to the global count and the
per-cache counts are not ordered with respect to each other.
So while sanity-checking the count is not needed without caching, you can
argue it is needed with caching (a minimal sketch of the reordering appears
after the diff below).
* due to race condition (access to len is not locked), the
* total can be greater than size... so fix the result
*/
+exit:
if (count > mp->size)
return mp->size;
return count;
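To make the reordering argument concrete, here is a minimal sketch in plain
C11 (not DPDK code; all names are hypothetical): the backend count is loaded
with relaxed ordering and the per-core cache lengths with plain loads, so
nothing orders these reads against a concurrent producer moving objects
between its cache and the backend, and the sum can transiently exceed the
true total.

#include <stdatomic.h>

#define NUM_CORES 4

/* Hypothetical, simplified stand-in for a mempool with per-core caches. */
struct fake_pool {
	_Atomic unsigned int backend_count;  /* objects in the shared backend */
	unsigned int cache_len[NUM_CORES];   /* objects in per-core caches */
	unsigned int size;                   /* total objects in the pool */
};

static unsigned int
fake_avail_count(struct fake_pool *p)
{
	unsigned int count =
	    atomic_load_explicit(&p->backend_count, memory_order_relaxed);

	/* Plain, unsynchronized loads: a core concurrently flushing its
	 * cache into the backend may be counted twice, once in cache_len
	 * and once in backend_count. */
	for (unsigned int i = 0; i < NUM_CORES; i++)
		count += p->cache_len[i];

	/* The over-count is bounded by the pool size, hence the clamp
	 * kept in rte_mempool_avail_count(). */
	return count > p->size ? p->size : count;
}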