On Mon, Feb 13, 2023 at 4:29 PM Watson Ladd <watsonbl...@gmail.com> wrote:

>
>
> On Wed, Feb 8, 2023 at 10:16 AM Boris Pismenny <borispisme...@gmail.com>
> wrote:
> >
> > Hello,
> >
> > I work on NIC hardware acceleration for NVIDIA, and we are looking into
> QUIC and DTLS1.3 acceleration. QUIC and DTLS employ packet number
> encryption (PNE) which increases security. At the same time, PNE
> significantly encumbers hardware acceleration as I’ll explain next.
> >
> > For hardware to encrypt the packet numbers, there are two options:
> >
> > Feed the header back into the encryption machine after data has been
> encrypted. This means storing and forwarding data, higher implementation
> complexity, and greater bandwidth requirements on the single encryption
> machine.
>
> Isn't the cost going to be one additional AES block + a small amount of
> routing + buffering the output packet (which you have to do anyway for
> pacing)? Yes, it's annoying that you have to go back to the start of the
> packet to fill it in, but that seems relatively easy to accommodate. I
> don't think it costs you extra bandwidth; I think it costs one additional
> encryption plus memory accesses.
>
>
For pacing, no buffering needs to happen inside the NIC: packet data resides
in host memory until it needs to go out on the wire, so extra buffering is
generally rare. It is certainly possible to design hardware with PNE support
that achieves line rate by hiding latency and adding extra bandwidth to the
relevant units, but latency is a different story. For those who care about
both low latency and encryption, I think disabling PNE is a reasonable
trade-off to make.
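For concreteness, here is a rough Python sketch of the masking step behind QUIC header protection (RFC 9001, Section 5.4). The SHA-256 call is only a stdlib stand-in for the spec's AES-ECB (or ChaCha20) mask computation; the function and variable names are mine, not from any implementation. The point it illustrates is the data dependency under discussion: the mask is derived from a sample of the *already-encrypted* payload, so the header bytes cannot be finalized until payload encryption has produced that sample.

```python
import hashlib


def mask_block(hp_key: bytes, sample: bytes) -> bytes:
    # Placeholder for AES-ECB(hp_key, sample) from RFC 9001; SHA-256 is
    # used only so this sketch runs with the standard library alone.
    return hashlib.sha256(hp_key + sample).digest()


def protect_header(header: bytearray, ciphertext: bytes,
                   hp_key: bytes, pn_offset: int, pn_len: int) -> bytearray:
    # The 16-byte sample starts 4 bytes past the start of the Packet
    # Number field, i.e. (4 - pn_len) bytes into the payload ciphertext.
    # This is the dependency that forces hardware to finish payload
    # encryption before it can touch the header.
    sample = ciphertext[4 - pn_len : 20 - pn_len]
    mask = mask_block(hp_key, sample)
    out = bytearray(header)
    if out[0] & 0x80:                       # long header form
        out[0] ^= mask[0] & 0x0F
    else:                                   # short header form
        out[0] ^= mask[0] & 0x1F
    for i in range(pn_len):                 # mask the packet number bytes
        out[pn_offset + i] ^= mask[1 + i]
    return out
```

Because the mask is applied with XOR, running the same function a second time over the same ciphertext sample restores the original header, which is how the receiver removes protection.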


> >
> > Adding an additional unique pipeline stage dedicated for header
> encryption.
>
> So I read the whole exchange downthread and I'm not quite sure what you
> mean by this. Is the idea that the hardware gets the packet in 16 byte
> lumps, encrypt each one, and separately has a dedicated encryptor for the
> encoded header that spits out the result along with the 2nd block into the
> downstream processing? So then the next thing is some sort of buffer that
> fast forwards the header when it is ready and then outputs the rest of the
> packet? This feels really similar to the first option/the optimization
> discussed below.
>

The difference is whether the payload and the header are handled by the same
AES unit or by a separate, dedicated one.


>
> >
> > As you may already know, this is not hardware friendly, and for this
> reason many vendors will likely refuse to pay the cost of supporting it.
> But even if a vendor does implement this feature, one problem still
> remains: PNE will still cause noticeable latency and performance
> degradation for high speed networks (think >400Gbps).
>
> *spins the wheel of reincarnation*
>
> >
> > Now, in certain use-cases, such as high performance computing, cloud
> computing, or data-center clusters—the security benefits of encrypting
> headers are marginal compared to the latency imposed by PNE. Would it be
> possible to consider letting these users negotiate to disable PNE and by
> doing so benefit (more) from encryption acceleration?
>
> It's only partially a security thing; it's also an anti-ossification measure.
>

Right. But I'm not sure this applies to both QUIC and DTLS.
If I understand correctly, DTLS used plaintext sequence numbers before
version 1.3 with no apparent ossification. So would it be a bad thing if
users could still do that while benefiting from all the other work done
towards DTLS 1.3?
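For reference, the DTLS 1.3 mechanism in question (record sequence number encryption, RFC 9147, Section 4.2.3) has the same shape of dependency as QUIC's header protection: the mask comes from the first block of the record's ciphertext. A minimal stdlib-only sketch, again with SHA-256 as a loudly labeled stand-in for the spec's AES-ECB computation and with names of my own choosing:

```python
import hashlib


def sn_mask(sn_key: bytes, ciphertext: bytes) -> bytes:
    # Stand-in for AES-ECB(sn_key, ciphertext[:16]) from RFC 9147;
    # SHA-256 keeps this sketch dependency-free.
    return hashlib.sha256(sn_key + ciphertext[:16]).digest()


def xor_record_number(seq: bytes, sn_key: bytes, ciphertext: bytes) -> bytes:
    # The mask depends on the encrypted record body, so hardware cannot
    # emit the sequence-number bytes until payload encryption completes.
    # XOR makes the operation its own inverse (same call decrypts).
    mask = sn_mask(sn_key, ciphertext)
    return bytes(s ^ m for s, m in zip(seq, mask))
```

Reverting to plaintext sequence numbers would remove exactly this ciphertext-before-header ordering constraint from the pipeline.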
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls