On 3/3/2021 3:07 PM, Loftus, Ciara wrote:

On 2/24/2021 11:18 AM, Ciara Loftus wrote:
Prior to this the max size was 32 which was unnecessarily
small.

Can you please describe the impact? Why was it changed from 32 to 512?
I assume this is to improve performance, but can you please explicitly
document it in the commit log?

Indeed - improved performance due to bulk operations and fewer ring accesses 
and syscalls.
The value 512 was arbitrary. I will change this to the default ring size as 
defined by libbpf (2048) in v2.
Will update the commit log with this info.


Also enforce the max batch size for TX for both
copy and zero copy modes. Prior to this only copy mode
enforced the max size.


By enforcing, the PMD silently ignores the user-provided burst value if it is
more than the PMD-supported MAX. Also, there is no way to discover this MAX
value without checking the code.

Overall, why are these max values required at all? After a quick check I can
see they are used for some bulk operations, which I assume can be eliminated.
What do you think?

We need to size some arrays at compile time with this max value.
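To illustrate the constraint, a batch-processing path typically stages packet
metadata in scratch arrays whose size must be a compile-time constant. This is
a minimal sketch; the macro and struct names here are hypothetical, not the
actual PMD symbols:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical compile-time max batch size; the real PMD defines its own. */
#define AF_XDP_MAX_BATCH 2048

/* Scratch arrays sized at compile time by the max batch value.
 * Needing a fixed bound like this is why the PMD enforces a max
 * burst size in the first place. */
struct batch_scratch {
	uint64_t addrs[AF_XDP_MAX_BATCH]; /* umem addresses for the batch */
	uint32_t lens[AF_XDP_MAX_BATCH];  /* packet lengths for the batch */
};
```

Raising the max from 32 to 2048 grows these arrays, but they remain statically
sized, so the bulk operations over them are unchanged.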

Instead of removing the bulk operations, which may impact performance, how about 
taking an approach where we split bursts larger than 2048 into smaller batches 
and still handle all the packets, rather than discarding the excess? Something 
like what's done in ixgbe for example:
http://code.dpdk.org/dpdk/v21.02/source/drivers/net/ixgbe/ixgbe_rxtx.c#L318
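The ixgbe-style splitting could be sketched roughly as below. All names here
are hypothetical, not the actual PMD symbols, and the inner TX function is
mocked purely for illustration; the real one would enqueue to the AF_XDP ring:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_TX_BATCH 2048 /* assumed per-call limit, matching the proposed max */

/* Mock inner TX routine: handles at most MAX_TX_BATCH packets per call.
 * In the real driver this would do the bulk ring operations. */
static uint16_t
xmit_batch(void **pkts, uint16_t nb_pkts)
{
	(void)pkts;
	return nb_pkts <= MAX_TX_BATCH ? nb_pkts : 0;
}

/* Wrapper that splits an oversized burst into MAX_TX_BATCH-sized chunks
 * instead of discarding packets beyond the limit, in the spirit of
 * ixgbe's tx_xmit_pkts wrapper. */
static uint16_t
xmit_pkts(void **pkts, uint16_t nb_pkts)
{
	uint16_t nb_tx = 0;

	while (nb_pkts) {
		uint16_t n = nb_pkts > MAX_TX_BATCH ? MAX_TX_BATCH : nb_pkts;
		uint16_t ret = xmit_batch(&pkts[nb_tx], n);

		nb_tx += ret;
		nb_pkts -= ret;
		/* Inner call sent less than a full chunk: ring is full, stop. */
		if (ret < n)
			break;
	}
	return nb_tx;
}
```

With this shape the fixed-size arrays (and the bulk operations over them) stay
as they are; only the wrapper loop is added on top.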

If there is no reasonable way to eliminate the fixed-size arrays, the above suggestion looks good.

Thanks,
ferruh
