Hi,
On 02/09/2015 03:41 PM, Liang, Cunming wrote:
>>> #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
>>> -#define __MEMPOOL_STAT_ADD(mp, name, n) do { \
>>> - unsigned __lcore_id = rte_lcore_id(); \
>>> - mp->stats[__lcore_id].name##_objs += n; \
>>> - mp->stats[__lcore_id].name##_bulk += 1; \
>>> +#define __MEMPOOL_STAT_ADD(mp, name, n) do { \
>>> + unsigned __lcore_id = rte_lcore_id(); \
>>> + if (__lcore_id < RTE_MAX_LCORE) { \
>>> + mp->stats[__lcore_id].name##_objs += n; \
>>> + mp->stats[__lcore_id].name##_bulk += 1; \
>>> + } \
>>
>> Does it mean that we have no statistics for non-EAL threads?
>> (same question for rings and timers in the next patches)
> [LCM] Yes, in this patch set the focus is mainly on EAL threads, and on
> making sure nothing breaks when running on non-EAL threads.
> Full non-EAL functionality will come in a separate patch set, as a
> second step.
OK
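
Just to make the behaviour explicit for readers of the thread, here is a
small illustration (not part of the patch), assuming rte_lcore_id() returns
LCORE_ID_ANY, which is >= RTE_MAX_LCORE, when called from a thread that was
not launched by the EAL:

/* sketch only: shows which threads the guarded __MEMPOOL_STAT_ADD
 * macro will account for */
#include <stdio.h>
#include <rte_lcore.h>

static void
report_stats_visibility(void)
{
	unsigned lcore_id = rte_lcore_id();

	if (lcore_id < RTE_MAX_LCORE)
		printf("lcore %u: per-lcore mempool stats are updated\n",
			lcore_id);
	else
		printf("non-EAL thread (lcore_id=%u): stats update skipped\n",
			lcore_id);
}
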
>>> @@ -952,7 +955,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>>> uint32_t cache_size = mp->cache_size;
>>>
>>> /* cache is not enabled or single consumer */
>>> - if (unlikely(cache_size == 0 || is_mc == 0 || n >= cache_size))
>>> + if (unlikely(cache_size == 0 || is_mc == 0 ||
>>> + n >= cache_size || lcore_id >= RTE_MAX_LCORE))
>>> goto ring_dequeue;
>>>
>>> cache = &mp->local_cache[lcore_id];
>>>
>>
>> What is the performance impact of adding this test?
> [LCM] Measured with perf in the unit test, it's almost the same. But I
> haven't measured the case where an EAL thread and a non-EAL thread share
> the same mempool.
When you say "unit test", are you talking about mempool tests from
"make test"? Do you have some numbers to share?