Hi Mark,

On Monday 12 June 2017 07:28 PM, Kavanagh, Mark B wrote:
>> From: Stokes, Ian
>> Sent: Monday, June 12, 2017 1:41 PM
>> To: Santosh Shukla <[email protected]>; [email protected];
>> [email protected]; [email protected]
>> Cc: [email protected]; [email protected];
>> [email protected]; Kavanagh, Mark B <[email protected]>
>> Subject: RE: [PATCH v2] netdev-dpdk: round up mbuf_size to cache_line_size
>>
>>> Subject: [PATCH v2] netdev-dpdk: round up mbuf_size to cache_line_size
>>>
>>> Some PMD drivers (e.g. the vNIC thunderx PMD) want mbuf_size to be a
>>> multiple of cache_line_size. Without this fix, netdev-dpdk
>>> initialization would fail for those PMDs.
>>>
>>> Signed-off-by: Santosh Shukla <[email protected]>
>>> ---
>>> v1 --> v2
>>> - Removed the mtu to dmp->mtu change in v2.
>>> - Removed the extra MTU macro definition (_MBUF_SIZE) in v2; MBUF_SIZE
>>>   now takes care of the round-up.
>>> - For details, refer to v1 [1].
>>> [1] https://patchwork.ozlabs.org/patch/769113/
>>>
>>>  lib/netdev-dpdk.c | 7 ++++---
>>>  1 file changed, 4 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>>> index 810800ed3..023880ca6 100644
>>> --- a/lib/netdev-dpdk.c
>>> +++ b/lib/netdev-dpdk.c
>>> @@ -76,9 +76,10 @@ static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 20);
>>>  #define MTU_TO_MAX_FRAME_LEN(mtu)   ((mtu) + ETHER_HDR_MAX_LEN)
>>>  #define FRAME_LEN_TO_MTU(frame_len) ((frame_len)                    \
>>>                                       - ETHER_HDR_LEN - ETHER_CRC_LEN)
>>> -#define MBUF_SIZE(mtu)              (MTU_TO_MAX_FRAME_LEN(mtu)      \
>>> -                                     + sizeof(struct dp_packet)     \
>>> -                                     + RTE_PKTMBUF_HEADROOM)
>>> +#define MBUF_SIZE(mtu)              ROUND_UP((MTU_TO_MAX_FRAME_LEN(mtu) \
>>> +                                              + sizeof(struct dp_packet) \
>>> +                                              + RTE_PKTMBUF_HEADROOM),   \
>>> +                                             RTE_CACHE_LINE_SIZE)
>>>  #define NETDEV_DPDK_MBUF_ALIGN      1024
>>>  #define NETDEV_DPDK_MAX_PKT_LEN     9728
>>
>> Hi Santosh,
>>
>> I'm OK with the approach above. My initial concern was that it would
>> increase the memory footprint of the hugepages, since the mbuf size is
>> now rounded up, but in testing I'm not seeing a big increase (a delta
>> of 56 bytes for each MTU size examined).
>>
>> It could increase the number of hugepages required, but I don't think
>> that's an issue considering what it enables.
>>
>> This was run on an Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz.
>>
>> +------+--------------------+-------------------+
>> | MTU  | MBUF Size (Before) | MBUF Size (After) |
>> +------+--------------------+-------------------+
>> | 1500 | 2760               | 2816              |
>> +------+--------------------+-------------------+
>> | 1920 | 3784               | 3840              |
>> +------+--------------------+-------------------+
>> | 9000 | 9928               | 9984              |
>> +------+--------------------+-------------------+
>>
>> It would be interesting, however, to see whether the new approach has
>> any performance impact on existing DPDK devices.
>>
>> I'd like to add Mark Kavanagh to this thread, as he has also done a lot
>> of work in this area.
>>
>> @Mark, any concerns with the approach outlined here?

> Hi Ian,
>
> Thanks for pulling me into the discussion.
>
> You've already addressed my first concern, which relates to the
> increased mempool size when rounding up the mbuf size; WRT your
> findings, I don't think that a 56B increase in mempool size is of major
> concern.
>
> In terms of performance, I tested the Phy-Phy and Phy-VM-Phy paths
> (using the DPDK vhost-user backend), and found that the forwarding rate
> increases marginally (~10000 pps) for 64B and 128B frames with the
> 'roundup' patch. Testing was conducted on an Intel(R) Xeon(R) CPU
> E5-2680 v2 @ 2.80GHz with a Fortville network controller.
>
> Overall, I'm happy with this patch. Santosh, feel free to add the
> following tags:
>
> Acked-by: Mark Kavanagh <[email protected]>
> Tested-by: Mark Kavanagh <[email protected]>
>
> Thanks,
> Mark

Thank you, Mark. Sending v3 including your tags.

>> Ian
>>
>>> --
>>> 2.13.0
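
As a sanity check on Ian's table, below is a minimal standalone C sketch
of the size arithmetic, not code from the patch itself. MBUF_SIZE_OLD,
MBUF_SIZE_NEW, and DP_PACKET_SIZE are names introduced here for
illustration; the 576-byte stand-in for sizeof(struct dp_packet) is
inferred from the "Before" column rather than stated anywhere in the
thread, and the MTU is first padded to the 1024-byte
NETDEV_DPDK_MBUF_ALIGN and converted back, mirroring the
dpdk_buf_size()-style adjustment that determines which MTU MBUF_SIZE()
actually sees.

/*
 * Sketch of the mbuf size arithmetic before and after the round-up
 * patch. Assumed constants: ETHER_HDR_MAX_LEN = 26 (header + CRC +
 * two VLAN tags), RTE_PKTMBUF_HEADROOM = 128, and a 576-byte
 * dp_packet, inferred from Ian's table.
 */
#include <stdio.h>

#define ETHER_HDR_LEN               14
#define ETHER_CRC_LEN               4
#define ETHER_HDR_MAX_LEN           26
#define RTE_PKTMBUF_HEADROOM        128
#define RTE_CACHE_LINE_SIZE         64
#define NETDEV_DPDK_MBUF_ALIGN      1024
#define DP_PACKET_SIZE              576  /* stand-in for sizeof(struct dp_packet) */

#define ROUND_UP(x, y) ((((x) + (y) - 1) / (y)) * (y))
#define MTU_TO_MAX_FRAME_LEN(mtu)   ((mtu) + ETHER_HDR_MAX_LEN)
#define FRAME_LEN_TO_MTU(frame_len) ((frame_len) - ETHER_HDR_LEN - ETHER_CRC_LEN)

/* MBUF_SIZE() before the patch, and with the cache-line round-up. */
#define MBUF_SIZE_OLD(mtu) (MTU_TO_MAX_FRAME_LEN(mtu) + DP_PACKET_SIZE \
                            + RTE_PKTMBUF_HEADROOM)
#define MBUF_SIZE_NEW(mtu) ROUND_UP(MBUF_SIZE_OLD(mtu), RTE_CACHE_LINE_SIZE)

int main(void)
{
    int mtus[] = { 1500, 1920, 9000 };

    for (int i = 0; i < 3; i++) {
        /* Pad the frame to the mbuf alignment, then convert back to
         * the adjusted MTU that the mempool is actually created for. */
        int buf_size = ROUND_UP(MTU_TO_MAX_FRAME_LEN(mtus[i])
                                + RTE_PKTMBUF_HEADROOM,
                                NETDEV_DPDK_MBUF_ALIGN);
        int adj_mtu = FRAME_LEN_TO_MTU(buf_size);

        printf("MTU %4d: before=%4d after=%4d\n", mtus[i],
               MBUF_SIZE_OLD(adj_mtu), MBUF_SIZE_NEW(adj_mtu));
    }
    return 0;
}

With these assumptions the program prints 2760/2816, 3784/3840, and
9928/9984, matching the table above. The delta is a constant 56 bytes
because each unrounded size lands 8 bytes past a 64-byte boundary, so
the round-up pads by 64 - 8 in every row.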
