2016-10-25 15:00, Olivier Matz:
> On 10/25/2016 12:22 PM, Morten Brørup wrote:
> > From: Ananyev, Konstantin
> >> From: Bruce Richardson
> >>> On Mon, Oct 24, 2016 at 11:47:16PM +0200, Morten Brørup wrote:
> >>>> From: Bruce Richardson
> >>>>> On Mon, Oct 24, 2016 at 04:11:33PM +0000, Wiles, Keith wrote:
> >>>>>> I thought we also talked about removing the m->port from the
> >>>>>> mbuf as it is not really needed.
> >>>>>>
> >>>>> Yes, this was mentioned, and also the option of moving the port
> >>>>> value to the second cache line, but it appears that NXP are using
> >>>>> the port value in their NIC drivers for passing in metadata, so
> >>>>> we'd need their agreement on any move (or removal).
> >>>>>
> >>>> If a single driver instance services multiple ports, so the ports
> >>>> are not polled individually, the m->port member will be required
> >>>> to identify the port. In that case, it might also be used
> >>>> elsewhere in the ingress path, and thus it is appropriate to have
> >>>> it in the first cache line.
> >>
> >> Ok, but these days most devices have multiple RX queues. So to
> >> identify the RX source properly you need not only the port, but
> >> port+queue (at least 3 bytes). But I suppose it is better to wait
> >> for NXP's input here.
> >>
> >>> So yes, this needs further discussion with NXP.
> >
> > It seems that this concept might be somewhat similar to the Flow
> > Director information, which already has plenty of bits in the first
> > cache line. You should consider whether this can somehow be folded
> > into the m->hash union.
>
> I tend to agree with Morten.
>
> First, I think that having more than 255 virtual links is possible,
> so increasing the port size to 16 bits is not a bad idea.
>
> I think the port information is more useful for a network stack than
> the queue_id, but that may not be the case for all applications. On
> the other hand, the queue_id (or something else providing the same
> level of information) could be held in the flow director structure.
>
> Now, the question is: do we really need the port to be inside the
> mbuf? Indeed, the application can already associate a bulk of packets
> with a port id.
>
> The reason why I'd prefer to keep it is that I'm pretty sure some
> applications use it. At least in the examples/ directory, we can find
> distributor, dpdk_qat, ipv4_multicast, load_balancer and
> packet_ordering. I did not check these apps in detail, but it makes
> me feel that other external applications could make use of the port
> field.
Having some applications using it does not mean there is a good
justification to keep it. I think data belongs inside the mbuf struct
only if it significantly improves the performance of known use cases.
So the questions should be:
- How significant are the use cases?
- What is the performance drop when the data lives outside the struct?
The mbuf space must be guarded jealously, like a DPDK treasure :)