Summary of discussion and decision from today's ODP architecture call:

1. Status events (type ODP_EVENT_STATUS) are reserved for use in reporting
out-of-band notifications. The only such usage of this event currently
defined is soft byte/packet expiration notifications on outbound SAs
operating in inline mode.

2. All other IPsec events are reported as events of type ODP_EVENT_PACKET,
subtype ODP_EVENT_PACKET_IPSEC with appropriate error/warning bits set.
Implementations are free to use dummy packets to communicate events like
IPsec disable completion. Such dummy packets should have an
odp_packet_len() of 0. Applications should not attempt to use these as
actual packets, but should instead take the appropriate action in response
to the notification and then free them (see the sketch following this
list).

3. Dmitry will post a revision (or replacement) of PR #197 to reflect these
changes.

4. Bill will post a revision of PR #185 to reflect these changes.

5. Petri, Bala, and Nikhil will review the revised PRs posted by Dmitry and
Bill noted in points 3 and 4 above.
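
As a minimal illustration of point 2 (a sketch only; the helper names
handle_ipsec_packet() and handle_ipsec_notification() are hypothetical
placeholders, not part of the ODP API), an application's event dispatch
might look roughly like this:

    #include <odp_api.h>

    static void dispatch_event(odp_event_t ev)
    {
        odp_event_subtype_t subtype;

        if (odp_event_types(ev, &subtype) == ODP_EVENT_PACKET &&
            subtype == ODP_EVENT_PACKET_IPSEC) {
            odp_packet_t pkt = odp_packet_from_event(ev);

            if (odp_packet_len(pkt) == 0) {
                /* Dummy packet carrying a notification such as IPsec
                 * disable completion: act on it, then free it. Never
                 * treat it as a real packet. */
                handle_ipsec_notification(pkt);  /* hypothetical */
                odp_packet_free(pkt);
            } else {
                handle_ipsec_packet(pkt);        /* hypothetical */
            }
        } else {
            odp_event_free(ev);
        }
    }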


On Mon, Oct 23, 2017 at 6:42 AM, Pascal Van Leeuwen <
pvanleeu...@insidesecure.com> wrote:

> 1). I am not a huge fan of the idea of starting a timer to perform the SA
> delete since calculating this timer's timeout will be an issue and will
> depend upon the network load at the specific time.
>
> [PLE] I fully agree with you on that.  Time-out counters are an ugly
> solution. But sometimes HW/SW interfacing gets a little messy, either for
> practical reasons (performance, HW complexity) or because the HW guys
> didn't fully realise the SW implications of their decisions (and never got
> any feedback on that because the SW guys just worked around it).
>
>
> 2). I get the feeling that this discussion is moving towards the earlier
> proposal from Dmitry to introduce a dummy packet to inform the application
> of the last packet which has been processed by this SA.
> Correct me if I am wrong.
>
> [PLE] Again, I agree. If the hardware provides a mechanism to accurately
> indicate this, then that mechanism can be used to create the dummy packet.
> If the hardware does not have such a facility, then it could be emulated by
> its ODP /implementation/ (which is HW specific anyway, so it should know
> how to calculate this value) by means of some local and implementation
> specific time-out mechanism.
>
> With respect to the dummy packet implementation, the only thing to realise
> is that (for inbound with HW lookup) it /may/ not be possible (without an
> undesirable performance impact, anyway), to:
>
> a) control with 100% accuracy which packet will be the last packet
> processed for an SA once odp_ipsec_sa_disable() is called
> b) indicate with 100% accuracy which was the last packet actually
> processed for the disabled SA
>
> So you shouldn't expect an "SA destroy" dummy packet to provide 100%
> synchronization with the packet stream.
> But it would at least allow giving some /approximate/ synchronization,
> which might still be better than a fully asynchronous SA destroy event.
>
> 3). I would really appreciate if you can summarize the discussion in form
> of a PR or RFC patch so we can debate on the exact changes for this
> proposal.
>
> [PLE] It was not my intention to push for any changes; I was asked about
> the HW side of things and commented on what it would mean for our HW
> specifically.
> I think the high level concept of the application first calling
> odp_ipsec_sa_disable() and then sometime later getting some event /or/
> dummy packet (I don't care which) back that the SA can be safely destroyed
> would work just fine for us. Any hardware specifics could be hidden from
> the application in the HW-specific part of the ODP implementation.
>
>
>
>
>
> Further comments inline...
>
> Regards,
> Bala
>
>
> On 18 October 2017 at 20:29, Bill Fischofer <bill.fischo...@linaro.org>
> wrote:
> > On Wed, Oct 18, 2017 at 7:38 AM, Pascal Van Leeuwen <
> > pvanleeu...@insidesecure.com> wrote:
> >
> >> Bill,
> >>
> >>
> >>
> >> I realize the complexity of coming up with an abstraction that would
> >> suit all possible (existing) implementations. Therefore, I am
> >> completely open to making changes/improvements to our own
> >> implementation where this makes sense and can be done without
> >> completely reinventing the architecture (obviously).
> >>
> >>
> >>
> >> However, since I don’t have any knowledge regarding other
> >> implementations, I can only comment on what would work for ours, with
> >> some changes on our side.
> >>
> >>
> >>
> >> The problem with removing an SA from the lookup table immediately
> >> when odp_ipsec_sa_disable() is called, for our hardware, would be
> >> that this lookup table is managed by software, completely
> >> asynchronously to our hardware. So you still have no way of knowing
> >> which packets already in the hardware (prior to the lookup stage), or
> >> already waiting to be fetched from an external ring buffer, would
> >> still be found and which would no longer be found.
> >>
> >>
> >>
> >> That's the reason why odp_ipsec_sa_disable() exists. It's assumed
> >> that there are some number of packets in flight that are in various
> >> stages of processing at the time of the call. From the application's
> >> perspective, the only packets that matter are those packets in the RX
> >> pipeline that the application has yet to see. Once a packet has been
> >> seen by the application it becomes the application's responsibility
> >> to track/account for it in deciding when it's safe to call
> odp_ipsec_sa_destroy().
> >>
> >>
> >>
> >> The TX side is simple because the application initiates TX via the
> >> odp_ipsec_out_inline() API. So it's the application's responsibility
> >> to not make further calls to that API once it calls
> >> odp_ipsec_sa_disable(). So we only really need extra support from the
> >> ODP implementation on the RX side to account for packets that the
> >> application doesn't yet know about.
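> >>
> >> To make the application's side of this concrete, here is a minimal
> >> sketch (the helpers sa_disable_done() and process_event() are
> >> hypothetical placeholders, not ODP APIs):
> >>
> >>     #include <odp_api.h>
> >>
> >>     /* Illustrative shutdown sequence: stop feeding the SA, keep
> >>      * draining events so in-flight packets are still handled, then
> >>      * destroy the SA once the disable completion has been seen. */
> >>     static int shut_down_sa(odp_ipsec_sa_t sa)
> >>     {
> >>         if (odp_ipsec_sa_disable(sa))
> >>             return -1;
> >>
> >>         while (!sa_disable_done(sa)) {     /* hypothetical helper */
> >>             odp_event_t ev = odp_schedule(NULL, ODP_SCHED_WAIT);
> >>
> >>             process_event(ev);             /* hypothetical helper */
> >>         }
> >>
> >>         return odp_ipsec_sa_destroy(sa);
> >>     }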
> >>
> >>
> >>
> >> Yes, I was talking about the RX side only because ODP does not
> >> actually support lookups by the IPsec HW for outbound (TX) operations
> >> (our HW does, by the way).
> >>
> >
> > That's good to know. I can see that capability being added to the API
> > at some point. You might want to make some suggestions for that.
> >
> >
> >> It makes sense that an application explicitly disabling a TX SA would
> >> not send any more packets for that specific SA. I guess that’s the
> >> easy side then :-)
> >>
> >>
> >>
> >> I actually liked the idea of a **single** SA disable event being sent
> >> to the hardware, in our specific case by means of a dummy packet.
> >> This would allow us to mark the SA as being disabled and any packets
> >> behind it in the queue would see that and respond with some “SA already
> expired” error code.
> >> (actually, we could also automagically mark the SA disabled once we
> >> detect some preprogrammed packet count/octet count/timestamp
> >> threshold has been exceeded – or just provide some status code on the
> >> output for that)
> >>
> >> For our single pipe engines, that process everything strictly
> >> in-order, this would be fully deterministic. For our multi-pipe
> >> engines, there would still be some indeterminism due to packets being
> >> able to pass each other through different pipes, but at least it
> >> would be fairly limited. Either way, this indeterminism needs to be
> taken into account somehow.
>
> Since you mention multi-pipe engines, the ODP API specifies that the
> packets received on the queue should maintain packet order per SA.
> Hence you should either process packets belonging to a single SA in a
> single pipe, or implement some mechanism in your implementation to make
> sure that packets belonging to a single SA maintain packet order.
>
> Do you have any issue in your HW maintaining packet order per SA?
>
> >>
> >>
> >>
> >> If I understand this approach, odp_ipsec_sa_disable() would operate
> >> by injecting a special "disable" packet via a loopback interface or
> >> similar into the RX packet stream.
> >>
> >> I don’t know enough about the ODP API (yet) to comment on ODP specifics.
> >> For our hardware, the packet would have to be injected into the input
> >> /packet/ ring buffer (the same one used for normal processing),
> >> basically at the exact point in time where you would want to stop
> >> allowing newly arriving packets to use the SA.
> >>
> >
> > That's one of the effects of odp_ipsec_sa_disable(): after that call,
> > no further packets should be accepted for processing on this SA. They
> > are treated as an SA lookup failure.
> >
> >
> >>
> >>
> >> This would serve both to initiate the disable and to act as the
> >> status event that indicates that the RX pipeline has been
> >> flushed. That would certainly be a valid implementation of the
> >> currently-defined disable semantics.
> >>
> >>
> >>
> >> When you refer to "multi-pipe engines" presumably that means you have
> >> two or more IPsec HW blocks operating in parallel that are fed by the
> RX queues.
> >>
> >> Actually it’s one IPsec hardware block scheduling packets from
> >> potentially a single input queue, using potentially a single shared
> >> SA, over multiple processing pipelines while maintaining proper
> sequence numbering/checking.
> >>
> >>
> >>
> >> Can operations from the same SA be in two different blocks at the
> >> same time?
> >>
> >> Yes! This is one of our major selling points: we can process a single
> >> SA over multiple processing pipes, allowing us to achieve very high
> >> single SA throughput.
> >>
> >
> > ODP uses ordered queues and locks to achieve the same by allowing
> > multiple cores to work on a single flow in parallel. It's a useful
> > abstraction.
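> >
> > A rough sketch of that pattern (queue name, lock index and helper
> > names are illustrative only):
> >
> >     #include <odp_api.h>
> >
> >     /* Illustrative only: an ordered queue with one ordered lock lets
> >      * multiple cores process packets of a single flow in parallel
> >      * while completion order is restored downstream. */
> >     static odp_queue_t create_flow_queue(void)
> >     {
> >         odp_queue_param_t qp;
> >
> >         odp_queue_param_init(&qp);
> >         qp.type = ODP_QUEUE_TYPE_SCHED;
> >         qp.sched.sync = ODP_SCHED_SYNC_ORDERED;
> >         qp.sched.lock_count = 1;
> >
> >         return odp_queue_create("flow", &qp);
> >     }
> >
> >     /* Worker side: serialize only the order-sensitive work (e.g.
> >      * sequence number handling) under the ordered lock. */
> >     static void handle_ordered_event(odp_event_t ev)
> >     {
> >         odp_schedule_order_lock(0);
> >         /* ... order-sensitive work here ... */
> >         odp_schedule_order_unlock(0);
> >         odp_event_free(ev); /* or forward the packet */
> >     }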
> >
> >
> >>
> >>
> >> Since IPsec operations assume sequentiality I'd assume either there
> >> would be some sort of interlock to prevent that or else there is
> >> inter-block coordination to assure that things like sequence numbers
> >> are handled properly.
> >>
> >> We are purposely not interlocking our pipes as that would kill
> >> performance when processing a mix of packets with a large delta in
> packet size (e.g.
> >> IMIX).
> >> So we indeed have some coordination hardware between the processing
> >> pipes to handle sequence numbers properly.
> >>
> >>
> >>
> >> Then, once the status result of the SA disable event has been seen at
> >> the output of our engine, the SA could finally be removed from the
> >> lookup table. And some time after that – I guess you would need some
> >> kind of time-out for that, as our hardware still needs it until all
> >> pending packets have passed the lookup stage – it could finally be
> >> destroyed and removed from memory.
> >>
> >>
> >>
> >> The question of lookup table vs. back-end processing and result
> >> queueing is an interesting one. Can an SA be removed/deactivated from
> >> lookup without also unhooking it from the rest of the IPsec processing
> machinery?
> >>
> >> I guess so. Removing the entry from the lookup table would just
> >> prevent it from being found. Any packets already past the lookup
> >> stage would still be able to use it as long as the SA data still
> >> exists in external memory. The problem is that you don’t really know
> >> which packets live where in the pipe at the time of removing the entry
> from the table.
> >>
> >
> > That's OK, as long as lookup is done once and the results of that
> > lookup are carried with the packet as it works its way through the
> > pipeline. Any packet that makes it past lookup before the disable call
> > removes the SA from lookup processing should be handled normally.
> >
> >
> >> So you don’t really know which of the packets that have already been
> >> enqueued would end up not finding the SA because it was removed from
> >> the table while they were waiting to be processed.
> >>
> >
> > If the lookup is done multiple times during the course of packet
> > processing that would be a problem. In that case your special "disable
> > packet" marker would serve the same purpose as any packets seen after
> > it would be discarded.
> >
> >
> >> Having an explicit SA disable event travelling /together/ with the
> >> packet stream would give you better control over that. If you need
> >> that at all, that is.
> >>
> >>
> >>
> >> Our current disable semantics assume that it can. If that's not
> >> possible then I can see where that would cause issues in trying to
> >> map these semantics to HW.
> >>
> >>
> >>
> >> Another approach would be to handle this at the SA queue level rather
> >> than at the SA itself. The SA completion queue contains packets that
> >> have completed RX input processing and are ready for the application
> >> to see. However, until the application sees them it is
> >> indistinguishable where between the wire and this queue the packets
> >> are, so simply flushing the queue and pretending that all of these
> >> unseen packets were still on the wire and had not yet arrived at the
> >> time odp_ipsec_sa_disable() was called is also legitimate.
> >>
> >> Hmm … but if you flush that queue you would lose the packets? I guess
> >> that is sort of legal, since Ethernet & IP are not supposed to be
> >> reliable protocols, and I suppose SA disables should not happen all
> >> too frequently, but it still seems a bit ugly to me.
> >>
> >
> > It's perfectly fine to drop packets for good reason. Since SA disables
> > (for tunnels, at least) are relatively rare events this should have no
> > meaningful impact on overall operation or performance. For "short
> session"
> > SAs, where an SA exists to handle only a single burst of packets,
> > chances are the higher-level protocol will signal completion and the
> > disable/destroy will be happening on an empty pipeline anyway, so
> > there will be nothing to discard.
> >
> >
> >>
> >>
> >> Regards,
> >>
> >> Pascal
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> *From:* Bill Fischofer [mailto:bill.fischo...@linaro.org]
> >> *Sent:* Wednesday, October 18, 2017 3:36 AM
> >> *To:* lng-odp-forward <lng-odp@lists.linaro.org>; Pascal Van Leeuwen
> >> < pvanleeu...@insidesecure.com>
> >> *Subject:* Latest odp_ipsec_sa_disable() discussion and proposed
> >> resolution
> >>
> >>
> >>
> >> We discussed this during the ODP public call today and had the
> >> benefit of Pascal Van Leeuwen's participation. Pascal is a HW guy and
> >> explained some of the issues that HW IPsec support faces.
> >>
> >>
> >>
> >> HW recognizes an active SA by it being in a HW lookup table. From an
> >> ODP implementation perspective, odp_ipsec_sa_create() allocates an
> >> SA, initializes it, and inserts it into the lookup table to activate
> >> it. The question is how the SA is removed from the lookup table.
> >> There are two possible approaches.
> >>
> >>
> >>
> >> 1. It can be removed by odp_ipsec_sa_disable()
> >>
> >> 2. It can be removed by odp_ipsec_sa_destroy()
> >>
> >>
> >>
> >> The suggestion is that it be removed by odp_ipsec_sa_disable(). This
> >> accomplishes the goal of ensuring that no further inbound IPsec
> >> packets are recognized as belonging to this SA (they will appear on
> >> the pktio default queue and/or be passed to the classifier for
> >> processing as-is).
> >>
> >>
> >>
> >> However, at the time odp_ipsec_sa_disable() is called, there may be
> >> one or more IPsec packets from that SA in process. In general, this
> >> processing may be carried out in parallel and the order in which
> >> packets exit the IPsec HW may not be deterministic. Some
> >> implementations may be able to determine directly when the packets
> >> belonging to this SA have cleared the various IPsec HW blocks and
> >> have reached the SA queue. However, absent such ability, each
> >> implementation should be able to determine a maximum wait time after
> >> which it can be assured that any in-progress IPsec packet has
> >> completed whatever HW processing might have been in progress at the
> >> time
> >> odp_ipsec_sa_disable() was called.
> >>
> >>
> >>
> >> This leads to the following suggested implementation logic, which
> >> adheres to the original odp_ipsec_sa_disable() semantics:
> >>
> >>
> >>
> >> - Upon receiving the odp_ipsec_sa_disable() call, remove the SA from
> >> any HW lookup table(s)
> >>
> >>
> >>
> >> - Set an internal timer (implementation dependent) sufficient to
> >> allow all in-flight IPsec HW operations to complete.
> >>
> >>
> >>
> >> - Return 0 to indicate disable is in progress
> >>
> >>
> >>
> >> At this point no further inbound IPsec packets for this SA will be
> >> recognized and any that are on the queue associated with the SA will
> >> be processed by the application. This queue may or may not drain
> >> completely by the time the implementation-internal disable timer
> >> expires, but we can be sure that any in-flight IPsec packets are at
> least on this queue.
> >>
> >>
> >>
> >> When the internal timer expires, the implementation has two choices.
> >>
> >>
> >>
> >> - If it is able to add elements to the SA queue, it can add an
> >> ODP_EVENT_IPSEC_STATUS event to signal disable completion.
> >>
> >>
> >>
> >> - If it is unable to add elements to the SA queue, it can set
> >> internal metadata associated with the SA queue such that
> >> odp_queue_deq() or the scheduler will recognize that this "empty"
> >> queue needs to report a pseudo-status event. This can then be created
> >> and returned in response to this call and the associated internal queue
> metadata cleared.
> >>
> >>
> >>
> >> This combination of actions should result in the current
> >> odp_ipsec_sa_disable() semantics being realized.
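> >>
> >> As a rough implementation-side sketch of the above (every helper here
> >> is a hypothetical, implementation-internal name, not an ODP API):
> >>
> >>     /* Illustrative only. */
> >>     int odp_ipsec_sa_disable(odp_ipsec_sa_t sa)
> >>     {
> >>         /* Stop new inbound packets from matching this SA. */
> >>         hw_lookup_table_remove(sa);
> >>
> >>         /* Give any packet already inside the IPsec HW enough time to
> >>          * reach the SA queue. */
> >>         internal_timer_start(sa, max_inflight_drain_time(sa));
> >>
> >>         return 0; /* disable is in progress */
> >>     }
> >>
> >>     /* Timer expiry: all in-flight packets for the SA have reached
> >>      * its queue by now. */
> >>     static void disable_timer_expired(odp_ipsec_sa_t sa)
> >>     {
> >>         if (can_enqueue_to_sa_queue(sa))
> >>             enqueue_disable_status_event(sa);
> >>         else
> >>             mark_queue_for_pseudo_status(sa); /* deq/scheduler path */
> >>     }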
> >>
> >>
> >>
> >> The implementation has broad flexibility in how this may be done. For
> >> example, a software "shadow queue" could be created that has priority
> >> strictly lower than the main SA queue such that it will never be
> >> examined if the SA queue contains any elements. Upon timer expiration
> >> the required status event could be added to this shadow queue such
> >> that it will only be dequeued after the SA queue is empty. The shadow
> >> queue would also have the property that it would report the same queue
> identity as its SA "parent"
> >> (the shadow being invisible to application SW) and thus serves only
> >> to work around cases where SW cannot add elements directly to the HW SA
> queue.
> >>
> >>
> >>
> >> I'd appreciate it if those with direct knowledge of HW IPsec
> >> implementations could comment on whether this sort of approach is
> >> feasible on their platforms.
> >>
> >>
> >>
> >>
> >>
>
