On Fri, Apr 6, 2018 at 12:12 PM, Francois Ozog <francois.o...@linaro.org>
wrote:

>
>
> On 6 April 2018 at 19:02, Bill Fischofer <bill.fischo...@linaro.org>
> wrote:
>
>>
>>
>> On Fri, Apr 6, 2018 at 11:37 AM, Francois Ozog <francois.o...@linaro.org>
>> wrote:
>>
>>> In the case of DPI, I came across this.
>>>
>>> Did you consider:
>>> - a symmetric hash option, so that uplink and downlink packets of a
>>> single flow (either TCP or UDP) give the same hash value?
>>>
>>
>> Yes, that's normally the case. However, for stateful protocols like TCP
>> there's the added wrinkle that the contexts associated with a single
>> connection are split: RX fields are R/W for RX processing but R/O for TX
>> processing, while TX fields are R/W for TX processing but R/O for RX
>> processing. We should definitely get input from OFP on this sort of need
>> as part of the overall design.
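To make the symmetry property concrete: if the implementation canonically orders the two (address, port) endpoints before mixing, uplink and downlink packets of one connection get the same value. A toy sketch (the mix function is illustrative, not any particular HW's):

```c
#include <stdint.h>

/* Toy symmetric 5-tuple hash: order the (addr, port) endpoints
 * canonically so A->B and B->A hash to the same value. The mix
 * function is illustrative only. */
static uint32_t mix(uint32_t h, uint32_t v)
{
    h ^= v;
    h *= 0x9e3779b1u;          /* Fibonacci-style odd multiplier */
    return h ^ (h >> 16);
}

uint32_t sym_flow_hash(uint32_t src_ip, uint32_t dst_ip,
                       uint16_t src_port, uint16_t dst_port,
                       uint8_t proto)
{
    uint64_t a = ((uint64_t)src_ip << 16) | src_port;
    uint64_t b = ((uint64_t)dst_ip << 16) | dst_port;

    if (a > b) {               /* canonical endpoint order */
        uint64_t t = a; a = b; b = t;
    }
    uint32_t h = mix(0x12345678u, (uint32_t)(a >> 16));
    h = mix(h, (uint32_t)(a & 0xffff));
    h = mix(h, (uint32_t)(b >> 16));
    h = mix(h, (uint32_t)(b & 0xffff));
    return mix(h, proto);
}
```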
>>
>>
>>> - an offset so that HW calculates the hash starting at a specific packet
>>> area?
>>>
>>
>> A skippable prefix, either to account for headroom or for non-standard
>> shims, makes sense.
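For illustration, hashing from a caller-supplied offset could look like the following toy sketch (FNV-1a stands in for whatever the HW actually computes):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch: compute a (toy FNV-1a) hash over the packet bytes starting
 * at 'offset', so headroom or a non-standard shim before that point
 * does not perturb the result. Returns 0 if the offset is out of range. */
uint32_t hash_from_offset(const uint8_t *pkt, size_t len, size_t offset)
{
    if (offset >= len)
        return 0;

    uint32_t h = 2166136261u;           /* FNV-1a offset basis */
    for (size_t i = offset; i < len; i++) {
        h ^= pkt[i];
        h *= 16777619u;                 /* FNV prime */
    }
    return h;
}
```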
>>
>>
>>> - an option that would calculate the hash starting at the innermost IP
>>> header (skipping as many encapsulations as possible, such as GRE,
>>> VXLAN...)?
>>>
>>
>> Presumably we have two cases: application-initiated hashes, where the
>> application specifies the tuple directly or via a set of offsets into an
>> odp_packet_t; and system-initiated hashes, which are performed as part of
>> packet parsing/classification.
>>
> When you say system, do you mean HW or ODP itself? I wanted to have the
> app specify the tuple without being aware of any encapsulation; the
> hardware just "skips" any encapsulation to do the app-defined job on the
> innermost IP packet.
>

I'm referring to system behavior as defined by ODP APIs. From an
application standpoint, it doesn't matter how that behavior gets realized.
Ideally the behavior is HW-offloaded or HW-assisted, but that just results
in a more efficient implementation. This is another example of accelerator
pipelining: the application defines how it wants flows to be managed and
leaves it up to the ODP implementation to make that happen.
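As a simplified example of what an implementation might do under the covers, walking the encapsulations down to the innermost IP header before hashing: here is a sketch that handles only IPv4-in-VXLAN and assumes the standard header sizes.

```c
#include <stdint.h>
#include <stddef.h>

#define VXLAN_PORT 4789

/* Sketch: walk outer encapsulations (here only IPv4-in-VXLAN, as an
 * example) and return the offset of the innermost IPv4 header, which
 * is where the flow hash would start. Constants are the standard
 * header sizes; error handling is minimal. Returns (size_t)-1 on
 * malformed input. */
size_t inner_ip_offset(const uint8_t *pkt, size_t len, size_t l3_off)
{
    size_t off = l3_off;

    for (;;) {
        if (off + 20 > len)
            return (size_t)-1;
        size_t ihl = (pkt[off] & 0x0f) * 4;      /* IPv4 header length */
        uint8_t proto = pkt[off + 9];

        if (proto != 17)                         /* not UDP: innermost */
            return off;
        size_t udp = off + ihl;
        if (udp + 8 > len)
            return off;
        uint16_t dport = (uint16_t)(pkt[udp + 2] << 8 | pkt[udp + 3]);
        if (dport != VXLAN_PORT)                 /* plain UDP: innermost */
            return off;
        /* skip UDP(8) + VXLAN(8) + inner Ethernet(14) headers */
        off = udp + 8 + 8 + 14;
    }
}
```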


>
>>
>>> - an option to include or exclude the VLAN in the hash?
>>>
>>> In addition to the SPI, do you envision fields like the GTP tunnel ID?
>>>
>>
>> I would think that would make sense. The idea is that a "flow" is an
>> application-defined grouping of packets based on some higher-level
>> protocols.
>>
>>
>>>
>>> FF
>>>
>>> On 6 April 2018 at 17:35, Bala Manoharan <bala.manoha...@linaro.org>
>>> wrote:
>>>
>>>> On 6 April 2018 at 20:56, Bill Fischofer <bill.fischo...@linaro.org>
>>>> wrote:
>>>>
>>>> > Thanks, Bala. I like this direction. One point to discuss is the idea
>>>> > of flow hashes vs. flow ids or labels. A hash is an
>>>> > implementation-defined value that is derived from some
>>>> > application-specified set of fields (e.g., based on tuples). A flow id
>>>> > or label is an application-chosen value that is used to "tag" related
>>>> > packets based on some application-defined criteria.
>>>> >
>>>> > Do we need to consider both?
>>>> >
>>>>
>>>> Yes, Bill. The proposed value could be considered as either of these
>>>> two. For example, the flow is usually generated for the incoming
>>>> packet by the HW, based on the tuple configured by the application.
>>>> Once the application receives the incoming packet based on the
>>>> HW-generated flow, it might switch the packet to a SW-generated flow
>>>> in the next phase, which could be based on a hash generated from a set
>>>> of fields or on a "tag" like the SPI index.
>>>>
>>>
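A minimal sketch of the hand-off Bala describes, a HW-style tuple hash in the first phase and then the application overwriting the flow value with a tag such as the SPI; all names here are hypothetical (in ODP terms this would presumably go through odp_packet_flow_hash()/its setter on the packet):

```c
#include <stdint.h>

/* Hypothetical packet carrying a flow value, for illustration only. */
typedef struct {
    uint32_t flow;      /* flow value carried with the packet */
    uint32_t spi;       /* e.g. parsed from the ESP header */
} toy_pkt;

static uint32_t tuple_hash(uint32_t src, uint32_t dst)
{
    uint32_t h = (src ^ dst) * 0x9e3779b1u;   /* toy mix */
    return h ^ (h >> 16);
}

/* Phase 1: classification by tuple hash (what the HW would produce). */
void phase1_classify(toy_pkt *p, uint32_t src, uint32_t dst)
{
    p->flow = tuple_hash(src, dst);
}

/* Phase 2: the application re-tags the packet by SPI before
 * re-enqueuing it into the schedule operation. */
void phase2_retag_by_spi(toy_pkt *p)
{
    p->flow = p->spi;
}
```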
>> OK, then I wouldn't want to stretch existing APIs too far. If we can
>> reuse/extend existing APIs, that's fine, but it shouldn't be a primary
>> design consideration.
>>
>>
>>>
>>>> Regards,
>>>> Bala
>>>>
>>>> >
>>>> > On Fri, Apr 6, 2018 at 8:35 AM, Bala Manoharan
>>>> > <bala.manoha...@linaro.org> wrote:
>>>> > > Hi,
>>>> > >
>>>> > > Based on the requirements from our customers, we have come across
>>>> > > certain limitations in the current scheduler design:
>>>> > >
>>>> > > 1) Creating a huge number of odp_queue_t is a very expensive
>>>> > > operation, since each queue contains a context, and having millions
>>>> > > of queues creates memory constraints on the platform.
>>>> > > 2) ORDERED and ATOMIC synchronization is currently handled at the
>>>> > > queue level, which is not sufficient, since there are platforms
>>>> > > that can provide synchronization for millions of different flows.
>>>> > > 3) There is a need for a lightweight queue (flow) that can be
>>>> > > configured by the application without relying on the underlying HW
>>>> > > memory constraints.
>>>> > >
>>>> > > The proposals to handle these limitations are as follows:
>>>> > >
>>>> > > 1) Create lightweight flow handling, similar to queues.
>>>> > > 2) The flow will be a lightweight entity, and the flow number can
>>>> > > be configured by the application on the fly, before enqueuing the
>>>> > > packet into the schedule operation.
>>>> > > 3) We will be reusing the odp_packet_flow_hash() API. The idea is
>>>> > > that the application will configure the packet flow value before
>>>> > > enqueuing the packet into the schedule operation, and if the
>>>> > > scheduler is configured to be flow aware, then synchronization
>>>> > > will be done at the flow level instead of at the queue level, as
>>>> > > in the existing ODP design.
>>>> > > 4) This will not change the current ODP design, as this will be a
>>>> > > capability parameter, and the current ODP design can be seen as
>>>> > > having a single flow within each queue.
>>>> > >
>>>> > > Regards,
>>>> > > Bala
>>>> >
>>>>
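For what it's worth, here's roughly how I picture the capability parameter working; the struct and field names below are hypothetical, but the intent is that max_flow_id == 0 degenerates to today's one-flow-per-queue model:

```c
#include <stdint.h>

/* Hypothetical capability sketch: if the implementation reports
 * max_flow_id == 0, every queue behaves as a single flow (the current
 * ODP model); otherwise the application may tag events with a flow id
 * up to that bound, and ATOMIC/ORDERED synchronization is done per
 * (queue, flow) pair rather than per queue. */
typedef struct {
    uint32_t max_flow_id;   /* 0: flow-unaware (one flow per queue) */
} toy_sched_capability;

/* Clamp an application-chosen flow value into the supported range. */
uint32_t effective_flow_id(const toy_sched_capability *cap, uint32_t flow)
{
    if (cap->max_flow_id == 0)
        return 0;                       /* whole queue is one flow */
    return flow % (cap->max_flow_id + 1);
}
```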
>>>
>>>
>>>
>>> --
>>> [image: Linaro] <http://www.linaro.org/>
>>> François-Frédéric Ozog | *Director Linaro Networking Group*
>>> T: +33.67221.6485
>>> francois.o...@linaro.org | Skype: ffozog
>>>
>>>
>>
>
>
>
