On Mon, Mar 27, 2017 at 5:27 AM, Savolainen, Petri (Nokia - FI/Espoo)
<petri.savolai...@nokia-bell-labs.com> wrote:
>
>> >  /**
>> > + * Configuration options for IPSEC inbound processing
>> > + */
>> > +typedef struct odp_ipsec_inbound_config_t {
>> > +       /** Default destination queue for IPSEC events
>> > +        *
>> > +        *  When inbound SA lookup fails in asynchronous or inline modes,
>> > +        *  resulting IPSEC events are enqueued into this queue.
>> > +        */
>> > +       odp_queue_t default_queue;
>> > +
>> > +       /** Constraints for SPI values of inbound SAs for which lookup is
>> > +        *  enabled. Minimal range size and unique SPI values may improve
>> > +        *  performance. */
>> > +       struct {
>> > +               /** Minimum inbound SPI value. Default value is 0. */
>> > +               uint32_t min;
>> > +
>> > +               /** Maximum inbound SPI value. Default value is UINT32_MAX. */
>> > +               uint32_t max;
>> > +
>> > +               /** Inbound SPI values may overlap. Default value is 0. */
>> > +               odp_bool_t overlap;
>>
>> It's not clear what you mean by SPI values overlapping, since an SPI is
>> a uint32_t value. Some explanation / illustration seems warranted
>> here.
>
> Application tells whether it uses unique SPI values for all (inbound) SAs it
> creates (== no two SAs have the same SPI), which results in a simpler lookup
> for the implementation. This should be the default setting, since the
> application can decide inbound SPI values. Since default values are usually
> zero, I used the term "overlap" instead of e.g. "non_unique".

As we discussed today, this is really saying whether lookups will be
done via a combination of SPI + dest addr or SPI only. In the latter
case SPI values must be unique. Either the name should make that
distinction obvious or else some additional documentation text should
be included here to avoid any possible confusion on the part of the
application writer or ODP implementer as to what the intent is here.
Perhaps "lookup_spi_and_dest" ?

>
>
>> >
>> >  /**
>> > + * Disable IPSEC SA
>> > + *
>> > + * Application must use this call to disable a SA before destroying it. The call
>> > + * marks the SA disabled, so that IPSEC implementation stops using it. For
>> > + * example, inbound SPI lookups will not match any more. Application must
>> > + * stop providing the SA as parameter to new IPSEC input/output operations
>> > + * before calling disable. Packets in progress during the call may still match
>> > + * the SA and be processed successfully.
>>
>> Does this mean that it is an application responsibility to ensure that
>> there are no "in flight" IPsec operations at the time this call is
>> made? That seems unnecessarily burdensome, as one of the purposes of
>> an API like this is to flush the request pipeline before issuing a
>> destroy call. I'd prefer the following sort of wording:
>>
>> "Operations initiated or in progress at the time this call is made
>> will complete normally, however no further operations on this SA will
>> be accepted. For example, inbound SPI lookups will not match any more,
>> and outbound operations referencing this SA will fail."
>
>
> Application must not pass the same SA as parameter simultaneously to
> disable() and e.g. ipsec_in() or ipsec_enq(). This way the implementation
> does not (may not) need to synchronize between slow path (disable) and fast
> path (ipsec_in) calls. The application should be able to synchronize itself
> so that one thread does not disable/destroy a SA that another thread is
> still actively using.
>
> Lookups for in-flight packets are still possible during disable, since that
> SA usage is under the implementation's control. So, the application can
> still continue passing packets without an SA (for lookup), but it must not
> explicitly use the SA itself in other (fast path) calls.

An IPsec operation referencing an SA can do so explicitly (via an
application odp_ipsec_in/out call) or implicitly (via an inline
lookup). As part of any such SA reference, the ODP implementation must
always determine whether the SA is valid. Reasons for an SA not being
valid include:

- The SA lookup failed (implicit only)
- The SA has expired due to elapsed time or number of bytes processed
- The SA has been administratively disabled

Since the ODP implementation has to do these checks for every
operation anyway, having it also check for administrative
disablement, which is what odp_ipsec_sa_disable() does, adds no
real additional work. The proposal that the implementation check for
administrative disablement only for implicit SA references imposes an
additional burden on every ODP application that uses IPsec, which
seems unnecessary.

I view odp_ipsec_sa_disable() to be analogous to odp_pktio_stop(),
which is used to quiesce an odp_pktio_t prior to closing it. Note the
analogous documentation in odp_pktio_stop():

 * Stop packet receive and transmit on a previously started interface. New
 * packets are not received from or transmitted to the network. Packets already
 * received from the network may be still available from interface and
 * application can receive those normally. New packets may not be accepted for
 * transmit. Packets already stored for transmit are not freed. A following
 * odp_pktio_start() call restarts packet receive and transmit.

The "new packets may not be accepted for transmit" is the way I'd like
to see odp_ipsec_out() behave following an odp_ipsec_sa_disable()
call. Disable simply marks the SA so that in-flight operations can
complete but no new operations on that SA will be accepted. Once it
returns (indicating that the operation "pipeline" for that SA has been
flushed) the application can know that it is safe to destroy the SA.

If odp_ipsec_sa_disable() does not behave this way, then the
application would have to separately track and check SA enable/disable
status before it makes any odp_ipsec_out() call. But note that we're
not asking the application to track and check time/data expiration
limits on the SA, so this is inconsistent. Either the application
using lookaside processing should be responsible for tracking
everything or the ODP implementation should do this for the
application. The ODP philosophy has been that it's better to do such
things within the implementation than to require every ODP application
to invent their own scheme for doing this sort of thing.

Hope that clarifies things.


>
> -Petri
