On Tue, Sep 20, 2016 at 8:30 AM, Christophe Milard <
christophe.mil...@linaro.org> wrote:

> Hi,
>
> I am trying here to summarize what is needed by the driver interface
> regarding odp packet handling. This will serve as the basis for the
> discussions at Connect. Please read and comment... possibly at Connect...
>
> /Christophe
>
> From the driver perspective, the situation is rather simple: what we need
> is:
>
> /* definition of a packet segment descriptor:
>  * A packet segment is just an area, contiguous in virtual address space,
>  * and contiguous in the physical address space -at least when no iommu is
>  * used, e.g. for virtio-. Probably we want to have physical contiguity in
>  * all cases (to avoid handling different cases to start with), but that
>  * would not take advantage of the remapping that can be done by iommus,
>  * so it can come with a small performance penalty in the iommu cases.
>

I thought we had discussed and agreed that ODP would assume it is running
on a platform with IOMMU capability? Are there any non-IOMMU platforms of
interest that we need to support? If not, then I see no need to make this
provision. In ODP we already have an odp_packet_seg_t type that represents
a portion of an odp_packet_t that can be contiguously addressed.
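
For reference, here is a minimal sketch of how that existing type is used
today to walk the contiguously addressable pieces of a packet, using the
current odp_packet_first_seg()/odp_packet_next_seg() accessors (error
handling omitted):

#include <odp_api.h>

/* Walk the existing segments of a packet: each odp_packet_seg_t names one
 * contiguously addressable piece of the packet data. */
static void walk_segments(odp_packet_t pkt)
{
        odp_packet_seg_t seg = odp_packet_first_seg(pkt);

        while (seg != ODP_PACKET_SEG_INVALID) {
                void     *data = odp_packet_seg_data(pkt, seg);
                uint32_t  len  = odp_packet_seg_data_len(pkt, seg);

                /* 'data'/'len' describe one virtually contiguous area;
                 * physical contiguity is not exposed at this level. */
                (void)data;
                (void)len;

                seg = odp_packet_next_seg(pkt, seg);
        }
}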


>  * Segments are shared among all odp threads (including linux processes),
>

Might be more precise to simply say "segments are accessible to all odp
threads". Sharing implies simultaneous access, along with some notion of
coherence, which is something that probably isn't needed.


>  * and are guaranteed to be mapped at the same virtual address space in
>  * all ODP instances (single_va flag in ishm) */
>

Why is this important? How does Thread A know how a segment is accessible
by Thread B, and does it care?


>  * Note that this definition just implies that a packet segment is
>  * reachable by the driver. A segment could actually be part of a HW IO
>  * chip on a HW-accelerated platform.
>

I think this is the key. All that is (or should be) needed is for a driver
to be able to access any segment that it is working with. How it does so
would seem to be secondary from an architectural perspective.


> /* for linux-gen:
>  * Segments are memory areas.
>  * In TX, pkt_sgmt_join() puts the pointer to the odp packet in the
>  * 'odp_private' element of the last segment of each packet, so that
>  * pkt_sgmt_free() can just do nothing when odp_private is NULL and
>  * release the complete odp packet when it is not NULL. Segments
>  * allocated with pkt_sgmt_alloc() will have their odp_private set to
>  * NULL. The name and the 'void*' type are there to make this opaque to
>  * the driver interface, which really should not care...
>  * Other ODP implementations could handle this as they wish.
>

Need to elaborate on this. Currently we have an odp_packet_alloc() API that
allocates a packet consisting of one or more segments. What seems to be new
on the driver side is the ability to allocate (and free) individual
segments and then (a) assemble them into odp_packet_t objects or (b) remove
them from odp_packet_t objects so that they become unaffiliated raw
segments not associated with any odp_packet_t.

So it seems we need a corresponding set of odp_segment_xxx() APIs that
operate on a new base type: odp_segment_t. An odp_segment_t becomes an
odp_packet_seg_t when it (and possibly other segments) are converted into
an odp_packet_t as part of a packet assembly operation. Conversely, an
odp_packet_seg_t becomes an odp_segment_t when it is disconnected from an
odp_packet_t.
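
To make that concrete, a rough sketch of what such an API could look like;
every name below (odp_segment_t, odp_segment_alloc(), odp_packet_assemble(),
odp_packet_seg_detach()) is hypothetical and only illustrates the
conversions described above:

#include <odp_api.h>

/* Hypothetical base type: a raw segment not affiliated with any packet. */
typedef struct odp_segment_s *odp_segment_t;

/* Allocate/free raw segments (pool semantics to be decided, see below). */
odp_segment_t odp_segment_alloc(odp_pool_t pool, uint32_t size);
void odp_segment_free(odp_segment_t seg);

/* Raw segments become odp_packet_seg_t's when assembled into a packet... */
odp_packet_t odp_packet_assemble(odp_segment_t seg_list[], int seg_count);

/* ...and an odp_packet_seg_t becomes a raw odp_segment_t again when it is
 * detached from its packet. */
odp_segment_t odp_packet_seg_detach(odp_packet_t pkt, odp_packet_seg_t seg);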


>  */
>
> typedef uint64_t phy_address_t;
>
> typedef struct{
>         void            *address;
>         phy_address_t   phy_addr;
>         uint32_t        len;
>         void*           odp_private;
> } pkt_sgmt_t;
>
> /* FOR RX: */
> /* segment allocation function:
>  * As it is not possible to guarantee physical memory contiguity from
>  * user space, this segment alloc function is best effort:
>  * The size passed as parameter is a hint of what the most probable
>  * received packet size could be: this alloc function will allocate a
>  * segment whose size will be greater than or equal to the requested size
>  * if the latter can fit in a single page (or huge page), hence
>  * guaranteeing the segment's physical contiguity.
>  * If there is no physical page large enough for 'size' bytes, then
>  * the largest page is returned, meaning that in that case the allocated
>  * segment will be smaller than the requested size (the received packet
>  * will be fragmented in this case).
>  * This pkt_sgmt_alloc function is called by the driver RX side to populate
>  * the NIC RX ring buffer(s).
>  * Returns the number of allocated segments (1) on success or 0 on error.
>  * Note: on unix systems with 2K and 2M pages, this means that 2M will get
>  * allocated for each large (64K?) packet... too much waste? Should we
>  * handle page fragmentation (which would really not change this
>  * interface)?
>  */
> int pkt_sgmt_alloc(uint32_t size, pkt_sgmt_t *returned_sgmt);
>

Presumably the driver is maintaining rings of odp_segment_t's that it has
allocated. Currently, allocations are from ODP pools of various pool types.
So one question is: do we define a new pool type (ODP_POOL_SEGMENT) for the
existing odp_pool_t, or do we introduce a separate odp_segment_pool_t type
to represent an aggregate area of memory from which odp_segment_t's are
drawn?

Alternatively, do we provide an extended set of semantics on the current
ODP_POOL_PACKET pools? That might be cleaner given how odp_segment_t
objects need to be convertible to and from odp_packet_seg_t objects. This
means that either (a) applications must pass an odp_pool_t to the driver as
part of odp_pktio_open() processing, or (b) the driver itself calls
odp_pool_create() and communicates that back to the application.
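
A sketch of option (a), using the existing odp_pool_create() and
odp_pktio_open() calls; the idea that the driver's pkt_sgmt_alloc() (or a
future odp_segment_alloc()) draws from the pool passed here is an
assumption, and the sizing numbers are purely illustrative:

#include <odp_api.h>

/* Option (a): the application creates a packet pool and hands it to the
 * pktio at open time; the driver would then draw its RX-ring segments
 * from that pool. Error checking omitted for brevity. */
static odp_pktio_t open_rx_pktio(const char *dev)
{
        odp_pool_param_t  pool_param;
        odp_pktio_param_t pktio_param;
        odp_pool_t        pool;

        odp_pool_param_init(&pool_param);
        pool_param.type        = ODP_POOL_PACKET;
        pool_param.pkt.num     = 1024;  /* illustrative sizing */
        pool_param.pkt.len     = 1518;
        pool_param.pkt.seg_len = 1518;

        pool = odp_pool_create("drv_rx_pool", &pool_param);

        odp_pktio_param_init(&pktio_param);
        pktio_param.in_mode = ODP_PKTIN_MODE_SCHED;

        /* The pool given here is what the driver would allocate its
         * RX segments from. */
        return odp_pktio_open(dev, pool, &pktio_param);
}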


>
> /*
>  * Another variant of the above function could be:
>  * Returns the number of allocated segments on success or 0 on error.
>  */
> int pkt_sgmt_alloc_multi(uint32_t size, pkt_sgmt_t *returned_sgmts,
>                          int* nb_sgmts);
>
> /*
>  * Creating ODP packets from the segments:
>  * Once a series of segments belonging to a single received packet is
>  * fully received (note that this series can be of length 1 if the received
>  * packet fit in a single segment), we need a function to create the
>  * ODP packet from the list of segments.
>  * We first define the "pkt_sgmt_hint" structure, which can be used by
>  * a NIC to pass information about the received packet (the HW probably
>  * knows a lot about the received packet, so the SW does not necessarily
>  * need to reparse it: the hint struct contains info which is already
>  * known by the HW). If hint is NULL when calling pkt_sgmt_join(), then
>  * the SW has to reparse the received packet from scratch.
>  * pkt_sgmt_join() returns 0 on success.
>  */
> typedef struct {
>         /* ethtype, crc_ok, L2 and L3 offset, ip_crc_ok, ... */
> } pkt_sgmt_hint;
>

I think the main point here is that the driver may have information
relating to the received packet that should populate the packet metadata as
part of creating the odp_packet_t from the individual odp_segment_t's that
contain the packet data. At a gross level we'd expect this sort of
processing:

odp_packet_t driver_rx() {
         ...receive a packet into a collection of odp_segment_t's
         return odp_packet_assemble(seg_list, seg_count, ...);
}

Where odp_packet_assemble() would have the signature:

odp_packet_t odp_packet_assemble(odp_segment_t seg_list[], int seg_count,
...);

This would be a new API that, unlike odp_packet_alloc(), takes an existing
list of odp_segment_t's and associated info to return an odp_packet_t.

The driver_rx() interface would be called by the scheduler as it polls the
pktio, and the resulting packet is then processed by issuing the call:

odp_packet_classify(pkt, 0, ODP_PARSE_L2, ODP_NO_REBUFFER);

where this API is as proposed in the pipeline design doc:

int odp_packet_classify(odp_packet_t pkt, int offset,
                        enum odp_packet_layer,
                        int rebuffer);

This call releases the packet from the scheduler by populating the packet
metadata and enqueuing it on an appropriate (application) receive queue
according to its CoS.
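
Putting those pieces together, the receive path might look roughly like
this; driver_rx(), odp_packet_classify(), ODP_PARSE_L2 and ODP_NO_REBUFFER
are the proposed names from above (the classify call comes from the
pipeline design doc), not existing API:

/* Sketch of the proposed receive path: the scheduler polls the pktio, the
 * driver assembles the received segments into a packet, and classification
 * fills in the metadata and enqueues the packet according to its CoS. */
static void poll_pktio_once(void)
{
        odp_packet_t pkt = driver_rx();  /* assembles odp_segment_t's */

        if (pkt == ODP_PACKET_INVALID)
                return;

        /* Parse from L2, no re-buffering: after this call the packet is on
         * the application receive queue selected by its CoS. */
        odp_packet_classify(pkt, 0, ODP_PARSE_L2, ODP_NO_REBUFFER);
}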


>
> int pkt_sgmt_join(pkt_sgmt_hint *hint,
>                   pkt_sgmt_t *segments, int nb_segments,
>                   odp_packet_t *returned_packet);
>
> /* another variant of the above, directly passing the packet to a given
>  * queue */
> int pkt_sgmt_join_and_send(pkt_sgmt_hint *hint,
>                            pkt_sgmt_t *segments, int nb_segments,
>                            odp_queue_t *dest_queue);
>

As noted above, we probably don't want the driver itself concerned with
destination queues since that's really the job of the classifier.


>
>
> /* FOR TX: */
> /*
>  * Function returning the list of segments making up an odp_packet:
>  * Returns the number of segments or 0 on error.
>  * The segments are returned in the segments[] array, whose length will
>  * never exceed max_nb_segments.
>  */
> int pkt_sgmt_get(odp_packet_t packet, pkt_sgmt_t *segments,
>                  int max_nb_segments);
>
> /*
>  * "free" a segment
>  */
> /*
>  * For linux-generic, that would just do nothing, unless
>  * segment->odp_private is not NULL, in which case the whole ODP packet
>  * is freed.
>  */
> int pkt_sgmt_free(pkt_sgmt_t *segment);
> int pkt_sgmt_free_multi(pkt_sgmt_t *segments, int nb_segments);
>
>
We already have APIs for getting and walking the list of odp_packet_seg_t's
associated with a packet. An API that would do a bulk "disassemble" of an
odp_packet_t into individual odp_segment_t's would be reasonable here. We
do need to be careful about freeing, however, as this is going to intersect
with whatever we do with packet references.

Something to explore more fully.
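
As a starting point, a possible shape for such a bulk disassemble call, with
hypothetical names (odp_packet_disassemble(), plus the odp_segment_t and
odp_segment_free() from the earlier sketch); the interaction with packet
references is exactly the part that needs care:

/* Hypothetical: consume a packet and return its segments as raw,
 * unaffiliated odp_segment_t's (at most max_segs of them).
 * Returns the number of segments, or <0 on error. */
int odp_packet_disassemble(odp_packet_t pkt,
                           odp_segment_t seg_list[], int max_segs);

/* Driver TX completion could then free segments individually. With packet
 * references in the picture, a "free" must only release storage when the
 * last reference is gone; that bookkeeping is the open question. */
static void driver_tx_complete(odp_segment_t seg_list[], int num_segs)
{
        for (int i = 0; i < num_segs; i++)
                odp_segment_free(seg_list[i]);  /* hypothetical, see above */
}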
