Hi,

On 10/25/2016 12:22 PM, Morten Brørup wrote:
> Comments inline (at the end).
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Ananyev, Konstantin
>> Sent: Tuesday, October 25, 2016 12:03 PM
>> To: Richardson, Bruce; Morten Brørup
>> Cc: Wiles, Keith; dev at dpdk.org; Olivier Matz
>> Subject: Re: [dpdk-dev] mbuf changes
>>
>> Hi everyone,
>>
>>> -----Original Message-----
>>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Bruce Richardson
>>> Sent: Tuesday, October 25, 2016 9:54 AM
>>> To: Morten Brørup <mb at smartsharesystems.com>
>>> Cc: Wiles, Keith <keith.wiles at intel.com>; dev at dpdk.org; Olivier Matz <olivier.matz at 6wind.com>
>>> Subject: Re: [dpdk-dev] mbuf changes
>>>
>>> On Mon, Oct 24, 2016 at 11:47:16PM +0200, Morten Brørup wrote:
>>>> Comments inline.
>>>>
>>>> Med venlig hilsen / kind regards
>>>> - Morten Brørup
>>>>
>>>>> -----Original Message-----
>>>>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Bruce Richardson
>>>>> Sent: Monday, October 24, 2016 6:26 PM
>>>>> To: Wiles, Keith
>>>>> Cc: Morten Brørup; dev at dpdk.org; Olivier Matz
>>>>> Subject: Re: [dpdk-dev] mbuf changes
>>>>>
>>>>> On Mon, Oct 24, 2016 at 04:11:33PM +0000, Wiles, Keith wrote:
>>>>>>
>>>>>>> On Oct 24, 2016, at 10:49 AM, Morten Brørup <mb at smartsharesystems.com> wrote:
>>>>>>>
>>>>>>> 2.
>>>>>>>
>>>>>>> There seemed to be consensus that the size of m->refcnt should match the size of m->port, because a packet could be duplicated on all physical ports for L3 multicast and L2 flooding.
>>>>>>>
>>>>>>> Furthermore, although a single physical machine (i.e. a single server) with 255 physical ports probably doesn't exist, it might contain more than 255 virtual machines with a virtual port each, so it makes sense to extend these mbuf fields from 8 to 16 bits.
>>>>>>
>>>>>> I thought we also talked about removing m->port from the mbuf, as it is not really needed.
>>>>>>
>>>>> Yes, this was mentioned, and also the option of moving the port value to the second cacheline, but it appears that NXP are using the port value in their NIC drivers for passing in metadata, so we'd need their agreement on any move (or removal).
>>>>>
>>>> If a single driver instance services multiple ports, so that the ports are not polled individually, the m->port member will be required to identify the port. In that case, it might also be used elsewhere in the ingress path, so it is appropriate to have it in the first cache line.
>>
>> Ok, but these days most devices have multiple rx queues. So to identify the RX source properly you need not only the port, but port+queue (at least 3 bytes). But I suppose it is better to wait for NXP input here.
>>
>>> So yes, this needs further discussion with NXP.
>
> It seems that this concept might be somewhat similar to the Flow Director information, which already has plenty of bits in the first cache line. You should consider whether this can be somehow folded into the m->hash union.
I tend to agree with Morten.

First, I think that having more than 255 virtual links is possible, so increasing the port size to 16 bits is not a bad idea. I think the port information is more useful for a network stack than the queue_id, but that may not be the case for all applications. On the other hand, the queue_id (or something else providing the same level of information) could live in the flow director structure.

Now, the question is: do we really need the port to be inside the mbuf? Indeed, the application can already associate a bulk of packets with a port id. The reason why I'd prefer to keep it is that I'm pretty sure some applications use it. At least in the examples/ directory, we can find distributor, dpdk_qat, ipv4_multicast, load_balancer, and packet_ordering. I did not check these apps in detail, but it makes me think that other external applications could make use of the port field as well.

Regards,
Olivier