Re: [TLS] potential attack on TLS cert compression

2018-03-22 Thread Thomas Pornin
On Thu, Mar 22, 2018 at 07:10:00PM +0200, Ilari Liusvaara wrote:
> I think BearSSL processes messages chunk-by-chunk. I think it even can
> process individual X.509 certificates chunk-by-chunk.

That's correct. In fact, it can process a complete handshake, including
the X.509 certificate chain, even if each individual byte is sent in its
own record. The only elements that are reassembled in memory are public
keys and signature values, on which I can enforce strict size limits
(e.g. at most 512 bytes for a signature, which is good for up to
RSA-4096).


> The reason why chunk-by-chunk processing is so rare is how difficult it
> is to program.

BearSSL does this by running all the parsing in a dedicated coroutine,
which is itself implemented in a Forth-like language. This allows
"normal", nested parsing that can be interrupted and resumed at will, as
data bytes become available.


Certificate compression would be challenging to implement, though.
Usually, compression relies on at least a "window" over the decompressed
data (32 kB for Zlib/Deflate). Some rudimentary forms of compression
don't need that (e.g. run-length encoding) but usually offer poor
compression ratios. A 32 kB window is a lot for the kind of architecture
that BearSSL targets.


--Thomas Pornin



Re: [TLS] draft-thomson-tls-record-limit-00

2017-03-28 Thread Thomas Pornin
On Tue, Mar 28, 2017 at 06:35:24AM -0500, Martin Thomson wrote:
> I just submitted a version of the draft we've discussed a little on
> the list.
> 
> I don't think we concluded the discussion about what to do about block
> cipher padding.

I don't have strong preferences on this, but I would incline toward
using the plaintext length in the extension. In any case, adding a
longer-than-necessary padding in order to defeat traffic analysis does
not make sense if it expands the size beyond the minimal record size for
a full record (i.e. if an endpoint wants to add extra padding bytes to a
record with 16384 bytes of plaintext, it is only _revealing_ extra
information, not hiding it).

I suggest altering this paragraph:

   The size limit expressed in the "record_size_limit" extension doesn't
   account for expansion due to compression or record protection.  It is
   expected that a constrained device will disable compression and know
   - and account for - the maximum expansion possible due to record
   protection based on the cipher suites it offers or selects.  Note
   that up to 256 octets of padding and padding length can be added to
   block ciphers.

into this:

   The size limit expressed in the "record_size_limit" extension doesn't
   account for expansion due to compression or record protection.  If
   an endpoint advertises a size limit which is lower than the
   protocol-defined limit, then the peer SHALL NOT send a record whose
   final, protected size exceeds that of the minimal protected size of a
   record that contains exactly "record_size_limit" plaintext bytes and
   uses no compression.
   
   For instance, if using TLS 1.2 and a cipher suite that mandates
   AES/CBC encryption and HMAC/SHA-256 for protection, and an endpoint
   advertises a "record_size_limit" of 700 bytes, then the minimal
   protected record size for 700 bytes of plaintext contents is 757
   bytes:

 - 700 bytes of plaintext
 - 32 bytes for the HMAC/SHA-256
 - 4 bytes of padding to reach the next multiple of the AES block
   size (which is 16 bytes)
 - 16 bytes for the explicit IV
 - 5 bytes for the record header

   The padding may have length 1 to 256 bytes as per protocol rules;
   but in the presence of a "record_size_limit" of 700 bytes expressed
   by the peer, an endpoint SHALL refrain from sending records whose
   total protected size exceeds 757 bytes.

   It is expected that a constrained device will disable compression;
   moreover, the practice of adding a longer-than-minimal padding is
   done in order to defeat traffic analysis, and sending records longer
   than the minimal size for full records is counterproductive (such a
   record would reveal extra information to onlookers, and thus should
   be avoided).
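
As an illustration of that size arithmetic, here is a small C sketch (a
hypothetical helper, not part of the proposed text; it assumes TLS 1.2
with AES/CBC + HMAC/SHA-256, i.e. 16-byte blocks, a 16-byte explicit IV,
a 32-byte MAC, a 5-byte record header, and minimal padding):

    #include <stddef.h>

    /* Minimal protected record size for a given plaintext length under
       TLS 1.2 AES/CBC + HMAC/SHA-256. Padding is at least one byte
       (the padding length byte) and rounds plaintext + MAC up to a
       multiple of the 16-byte AES block size. */
    static size_t min_protected_size(size_t plaintext_len)
    {
        size_t maced = plaintext_len + 32;              /* + HMAC/SHA-256 */
        size_t padded = (maced + 1 + 15) & ~(size_t)15; /* >= 1 pad byte */
        return 5 + 16 + padded;                         /* header + IV */
    }

For plaintext_len = 700, maced is 732 and padded is 736 (four bytes of
padding), so the function returns 757, matching the breakdown above.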

--

Another unrelated comment: in section 3, there is the following:

   The "max_fragment_length" extension is also ill-suited to cases where
   the capabilities of client and server are asymmetric.  The server is
   required to select a fragment length that is as small or smaller than
   the client offers and both endpoints need to comply with this smaller
   limit.

Actually, it is worse than that: per the wording of RFC 6066, if a
client advertises a length of L bytes, the server must respond with
_exactly_ the same length L; the server is not allowed to select a
smaller length. The relevant RFC text is:

   The "extension_data" field of this extension SHALL contain a
   "MaxFragmentLength" whose value is the same as the requested maximum
   fragment length.

and it is reinforced some lines later:

   Similarly, if a client receives a maximum fragment length negotiation
   response that differs from the length it requested, it MUST also
   abort the handshake with an "illegal_parameter" alert.

The "max_fragment_length" extension is completely client-driven: it is
used only on the client's initiative, and uses the client's length. The
server's only choice is to accept the will of the client, or reject the
connection. Thus, it handles only the case of constrained clients
talking to big servers, not the other way round.


--Thomas Pornin



Re: [TLS] RFC 6066 - Max fragment length negotiation

2017-03-21 Thread Thomas Pornin
On Fri, Mar 17, 2017 at 05:24:09PM +1100, Martin Thomson wrote:
> I'd even go so far as to specify it:
> 
> https://martinthomson.github.io/tls-record-limit/
> 
> I'll submit an I-D once the blackout ends if people are interested in this.

I like this proposal. One comment, though: I think the wording in
section 4 should mandate that the value sent MUST NOT exceed the maximum
record size the sender supports -- i.e. if an implementation supports
records up to 16384 bytes, then it should put 16384 here, not a bigger
value such as 65535.

Rationale: last time this was discussed on this list, some people
expressed the wish to ultimately support records with more than 16384
bytes of plaintext. If such an extension ever comes to fruition (it is
certainly easy enough to do with CBC and GCM cipher suites), then
sending a record_size_limit with a limit of, say, 65535 bytes, would
serve as indication that the implementation indeed supports such larger
records. This holds only as long as no implementation sends a value
larger than 16384 if it does not really accept records of more than
16384 bytes.

Therefore, I propose to replace this paragraph:

An endpoint that has no limit on the size of data they receive can
set this value to any value equal to or greater than the maximum
possible record size, such as 65535. A larger value does not allow
the endpoint to send larger records than the protocol permits. An
endpoint that receives a value larger than the maximum defined in
the protocol MUST NOT exceed protocol-defined limits. For TLS 1.3
and earlier, this limit is 2^14 octets.

with the following:

An endpoint that supports all sizes that comply with the
protocol-defined limits MUST send exactly that limit (or a lower
value) as the maximum record size. For TLS 1.3 and earlier,
that limit is 2^14 octets. Higher values are currently reserved for
future versions of the protocol that may allow larger records; an
endpoint MUST NOT send a value higher than 2^14 unless explicitly
allowed by such a future version and supported by the endpoint.

When an endpoint receives a maximum record size limit larger than
the protocol-defined limit, that endpoint MUST NOT send records
larger than the protocol-defined limit, unless explicitly allowed by
a future TLS version.
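
As a minimal illustration of the receiving rule (a hedged sketch; the
helper name and constant are hypothetical, not draft text):

    #include <stdint.h>

    #define TLS_PROTOCOL_LIMIT 16384   /* 2^14, TLS 1.3 and earlier */

    /* Plaintext limit to enforce when sending, given the value the
       peer advertised. A peer value above the protocol-defined limit
       signals possible support for larger records in some future
       protocol version; it does not allow exceeding 2^14 today. */
    static uint32_t effective_send_limit(uint32_t peer_limit)
    {
        return peer_limit < TLS_PROTOCOL_LIMIT
            ? peer_limit : TLS_PROTOCOL_LIMIT;
    }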


Of course, larger-than-16384 records are not meant for constrained
systems, but for big systems. Overhead for 16384-byte records with
ChaCha20+Poly1305 is 21 bytes (for AES/GCM in TLS 1.2, this is 29
bytes), i.e. less than 0.2%, which seems small enough to me; but there
still is some demand for larger records, so it makes sense not to
prevent them from ever happening with tighter wording.
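
The breakdown behind those overhead figures, per full-size record (the
per-algorithm constants come from RFC 7905 and RFC 5288):

    ChaCha20+Poly1305 (TLS 1.2):  5 (header) + 16 (tag)                 = 21
    AES/GCM (TLS 1.2):            5 (header) + 8 (expl. nonce) + 16 (tag) = 29

    21 / 16384 ~ 0.13%,  29 / 16384 ~ 0.18%  (both under 0.2%)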

---

Another point which was made is that CBC cipher suites have a
variable-length padding, up to 256 bytes (length byte + padding), which
is not fully negligible: an endpoint with a 500-byte buffer would have
to send a "maximum record size" of only 191 bytes, in order to fully
support AES/CBC+HMAC/SHA-256 in all cases, while in practice most if not
all endpoints will stick to minimal-sized padding. Maybe there should
be some extra wording saying that when a "maximum record size" was
received, with a value less than the protocol-defined limit, then an
endpoint SHOULD strive to use minimal-sized padding in cipher suites
that have a variable-sized padding. Or maybe something more convoluted
that says:

An endpoint MUST NOT generate a protected record with plaintext
larger than the RecordSizeLimit value received from its peer. An
endpoint also MUST NOT generate a protected record such that the
encrypted record length (TLSCipherText.length) exceeds the length of
the smallest possible encrypted record that would contain exactly as
many plaintext bytes as the received RecordSizeLimit value, in the
currently active cipher suite.

This would be only for the benefit of CBC cipher suites with TLS 1.2 and
earlier, not for TLS 1.3, because recent AEAD cipher suites have
predictable (and small) overhead.
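
The buffer arithmetic above, as a hedged sketch (hypothetical helper;
it assumes the buffer must hold a complete record, including the 5-byte
header and the 16-byte explicit IV, and that the peer may use maximal
padding):

    #include <stddef.h>

    /* Largest plaintext length guaranteed to fit in a record buffer of
       'buf_len' bytes under TLS 1.2 AES/CBC + HMAC/SHA-256, when the
       peer may use up to 256 bytes of padding (255 + the length byte). */
    static size_t max_safe_plaintext(size_t buf_len)
    {
        size_t overhead = 5 + 16 + 32 + 256; /* header+IV+MAC+max padding */
        return buf_len > overhead ? buf_len - overhead : 0;
    }

With buf_len = 500 this yields 191 bytes, the figure used above; with
minimal padding the same buffer could hold records carrying over 400
bytes of plaintext, hence the incentive for minimal-sized padding.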

Arguably, pre-TLS-1.3 versions also have a problem with compression,
which should generally be avoided, just like CBC cipher suites should
also be avoided. Maybe this is not a problem after all, and constrained
systems that are recent enough to implement this new extension will also
"naturally" avoid CBC cipher suites anyway. (In any case, if an endpoint
requires small records, then it cannot really talk with peers that don't
support the proposed maximum_record_size extension, so it needs recent
implementations that _should_ already implement at least TLS 1.2 and
some AEAD cipher suites.)


--Thomas Pornin



Re: [TLS] RFC 6066 - Max fragment length negotiation

2017-03-17 Thread Thomas Pornin
On Fri, Mar 17, 2017 at 04:44:48PM +0200, Ilari Liusvaara wrote:
> The mere thought of someone implementing streaming processing in
> C scares me. I think BearSSL autogenerates that code.

Yes, actual code is in a custom Forth dialect, which is compiled to
token-threaded code executed by a C interpreter. That's because if you
want to implement streamed processing sanely in an imperative language
(like C), then you basically need coroutines, i.e. the ability to
interrupt the processing, and later on jump back to the processing. You
cannot do that in plain C if you have function calls and thus a "call
stack" to save and recover (and if you do not, then the code becomes
insanely unreadable). You _could_ make a custom stack, but this is
expensive (since the C compiler tends to create local variables at will,
a custom stack would need at least 1 or 2 kB of extra RAM) and it is
awfully non-portable.
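
For contrast, here is a minimal sketch (hypothetical code, not
BearSSL's) of streamed parsing in plain C with an explicit state
machine; each point where parsing may be suspended becomes a
hand-written state, which is tolerable for a flat structure like this
but explodes for nested formats such as X.509:

    #include <stddef.h>

    /* Stream-parse a 2-byte big-endian length followed by that many
       body bytes, fed in arbitrary chunks. */
    struct stream_parser {
        int state;      /* 0, 1: length bytes; 2: body; 3: done */
        unsigned len;   /* announced body length */
        unsigned seen;  /* body bytes processed so far */
    };

    static void parser_push(struct stream_parser *p,
                            const unsigned char *buf, size_t buf_len)
    {
        size_t i;
        for (i = 0; i < buf_len; i++) {
            switch (p->state) {
            case 0:
                p->len = (unsigned)buf[i] << 8;
                p->state = 1;
                break;
            case 1:
                p->len |= buf[i];
                p->seen = 0;
                p->state = (p->len == 0) ? 3 : 2;
                break;
            case 2:
                /* act on buf[i] here, without buffering the record */
                if (++p->seen == p->len) p->state = 3;
                break;
            default:
                return; /* done; ignore trailing bytes */
            }
        }
    }

With a coroutine, the same logic reads as straight-line code ("read two
length bytes, then process that many body bytes"), suspended and
resumed transparently as chunks arrive.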


> Also, in TLS 1.3, certificate messages are considerably more
> complicated. I don't think streaming processing of recommended-to-
> support stuff is even possible.

Streaming processing is ill-supported and on the decline. E.g. even with
TLS 1.2, EdDSA-signed certificates cannot be processed with streaming,
because the hash function computation over the to-be-signed must begin
with hashing the 'R' element (which is part of the signature, and occurs
_after_ the TBS) and the 'A' value (the signer's public key, which is
found in the signer's certificate, which comes _after_ the current
certificate in a TLS 1.2 Certificate message).
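
Concretely, Ed25519 verification (RFC 8032) computes SHA-512(R || A || M)
over the message M; the required hashing order can be sketched as
follows (the incremental hash API here is hypothetical, for
illustration only):

    #include <stddef.h>

    /* Hypothetical incremental SHA-512 API, for illustration. */
    typedef struct sha512_ctx sha512_ctx;
    void sha512_init(sha512_ctx *cc);
    void sha512_update(sha512_ctx *cc, const void *data, size_t len);
    void sha512_final(sha512_ctx *cc, unsigned char out[64]);

    /* The Ed25519 verification hash must absorb R (first half of the
       signature) and A (the signer's public key) before M (here, the
       to-be-signed certificate body). In a TLS 1.2 Certificate
       message, R and A become known only after M has gone by, so M
       must be buffered: streamed processing is impossible. */
    static void ed25519_verify_hash(sha512_ctx *cc,
                                    const unsigned char R[32],
                                    const unsigned char A[32],
                                    const unsigned char *M, size_t m_len,
                                    unsigned char h_out[64])
    {
        sha512_init(cc);
        sha512_update(cc, R, 32);      /* from the signature */
        sha512_update(cc, A, 32);      /* from the issuer's certificate */
        sha512_update(cc, M, m_len);   /* the to-be-signed bytes */
        sha512_final(cc, h_out);
    }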

Since TLS 1.3 also mandates some options that may require considerable
buffering (e.g. the cookies and the session tickets, both ranging up to
64 kB), one might say that, as an evolving standard, TLS 1.3 is moving
away from the IoT/embedded world, and more toward a Web world. This is
not necessarily _bad_, but it is likely to leave some people unsatisfied
(and, in practice, people clinging to TLS 1.2).


> TLS architecture does not allow this. Sending any extension in server
> hello that wasn't in client hello causes loads of implementations to
> just blow up (my implementation is certainly one of those). In fact,
> clients are REQUIRED to.

I know. BearSSL also rejects server extensions that do not match client
extensions. It also rejects attempts by the server to negotiate a
different maximum fragment length. It does so because the
RFC says so (even though I agree with Peter that the standard behaviour
is of questionable usefulness).


> You mean maximum handshake message size and maximum record size?

I mean a maximum record size for records sent by the client to the
server, _and_ a maximum record size for records sent by the server to
the client. Since any implementation may use distinct buffers for
sending and for receiving(*), the two sizes need not match.


(*) In particular, if you want to support HTTPS, where pipelining
requests is allowed, an HTTPS-aware server more or less needs to
have two distinct buffers for input and output.


--Thomas



Re: [TLS] RFC 6066 - Max fragment length negotiation

2017-03-17 Thread Thomas Pornin
On Fri, Mar 17, 2017 at 11:21:12AM +0000, Peter Gutmann wrote:
> However, this then leads to a problem where it doesn't actually solve
> the constrained-client/server issue, if a client asks for 2K max
> record size and the server responds with a 4K hello then it's going to
> break the client even if later application_data records are only 2K.
> So it would need to apply to every record type, not just
> application_data.

Hello,

I had tried to raise the same issues here, a few months ago. The
max_frag_length extension, as currently defined in RFC 6066, has the
following issues:

  - It is client-driven:

** The server cannot send the extension unless the client has
   sent it first.

** Even if the client sent the extension, the only option for the
   server is to respond with an extension advertising the very
   same length. The server has no option to negotiate a smaller
   maximum fragment length.

  - "Big" clients (Web browsers) don't support it and have no incentive
to do so, since they, as clients, can perfectly well use huge records,
which are negligible compared to the dozens of megabytes they
consume just to start up.

  - The extension mandates the same size constraint on both directions.
A constrained implementation may have two separate buffers for
sending and receiving, and these buffers need not have the same size.
In fact, in specific situations, records larger than the
output buffer may be sent (the sender must know in advance how many
bytes it will send, but it can encrypt and MAC "on the fly").

Fragmentation of messages is another issue, which is correlated but
still distinct. Note for instance that it is customary, in the case of
TLS 1.0 with a CBC-based cipher suite, to fragment _all_ records
(application data records, at least) as part of a protection against
BEAST-like attacks. Also, having very small buffers does not necessarily
prevent processing larger handshake messages, or even larger unencrypted
records. Here I may point at my own SSL implementation (www.bearssl.org)
that can do both: it supports unencrypted records that are larger than
its input buffer, and it supports huge handshake messages. It can
actually perform rudimentary X.509 path validation even with
multi-megabyte certificates, while keeping to a few kilobytes of RAM and
no dynamic allocation.

Now that does not mean that a "don't fragment" flag has no value.
Indeed, streamed processing of messages is not easy to implement (I
know, since I did it), and having some guarantee of non-fragmentation
may help some implementations that are very constrained in ROM size and
must stick to the simplest possible code. But it still is a distinct
thing. Moreover, the maximum handshake message length need not be the same
as the maximum record length. For instance, OpenSSL tends to enforce a
maximum 64 kB size on handshake messages. Maybe we need a "maximum
handshake message length" extension.


In order to "fix" RFC 6066, the following would be needed, in my opinion:

  - Allow the server to send the extension even if the client did not
send it.

  - Allow the server to mandate fragment lengths smaller than the
value sent by the client (a client not sending the extension would
be assumed to have implicitly sent an extension with a 16384-byte
max fragment length).

  - Preferably, change the encoding to allow for _two_ lengths, for
both directions, negotiated separately.

  - Preferably, write down in TLS 1.3 that supporting the extension is
mandatory. Otherwise, chances are that Web browsers won't
implement it anyway.

I can prototype things in BearSSL (both client and server).


--Thomas Pornin



Re: [TLS] Last call comments and WG Chair review of draft-ietf-tls-ecdhe-psk-aead

2017-03-01 Thread Thomas Pornin
On Wed, Mar 01, 2017 at 01:06:27PM +0000, Aaron Zauner wrote:
> I don't see why the IoT/embedded-world can't make use of ChaCha/Poly
> in future implementations?

If the embedded platform is "generic" (say, an ARM Cortex M0+),
then ChaCha20 is faster than anything using AES. Poly1305 is less clear
because it relies on multiplications and multiplications can be
expensive on small microcontrollers; in my own tests with my own
implementations, ChaCha20 and Poly1305 run at roughly the same speed on
a Cortex M0+ (with the 1-cycle multiplier option). Even a table-based
AES (that is, formally "not constant-time", though on a cache-less
microcontroller it might be fine nonetheless) will be about twice as
slow. Similarly, the GHASH part of GCM will be slower than Poly1305
(unless you use big key-dependent tables, which is not constant-time but
also rarely doable in small embedded systems, where RAM is a very scarce
resource).

However, there are some microcontrollers with hardware acceleration for
AES; e.g. the ESP32 (a popular microcontroller-with-WiFi) has some
circuitry that can do an AES block encryption in 11 clock cycles, which
is much faster than ChaCha20. Moreover, in the presence of such
hardware, CCM will also be much faster than GCM, the GHASH part becoming
prohibitively expensive (relative to encryption). The push for CCM
mainly comes from that kind of hardware.

(EAX mode might even be preferable on AES-able hardware, but CCM has
a stronger legacy foothold.)


--Thomas Pornin



Re: [TLS] Confirming consensus: TLS1.3->TLS*

2016-12-02 Thread Thomas Pornin
On Fri, Dec 02, 2016 at 02:17:24PM +0000, Ackermann, Michael wrote:
> In Enterprise circles TLS is an unknown acronym and as painful as it
> is,  we must usually refer to it as SSL,  before anyone knows what we
> are talking about.  Software products are guilty too.   Parameter
> fields frequently reference SSL.   :(

Actually, there is a wide variety in what I encounter (I work in a big
financial institution, and I have gone through other big organisations).

Some will just know "SSL" and talk about SSL for all protocols in the
"SSL" family (which so far includes SSL 2.0, SSL 3.0, TLS 1.0, TLS 1.1
and TLS 1.2).

Some will use "SSL" for SSL 2.0 and SSL 3.0, and "TLS" for the TLS 1.x
versions. They then ban "SSL" and want to enforce "TLS". When they
encounter regulations that say "don't use TLS 1.0, only TLS 1.1+", they
get confused.

Some people and software interfaces use "SSL vs TLS" in a completely
different way, in the context of protocols like IMAP or FTPS: they use
"SSL" to mean "SSL handshake first, then protocol inside it", and "TLS"
to mean "protocol first and a STARTTLS command". This distinction is
orthogonal to protocol versions.

Commercial CAs tend to sell "SSL certificates", not "TLS certificates"
or "SSL/TLS certificates". In a similar vein, the 'S' in 'HTTPS' does
_not_ mean "SSL", but not many people know that.

When I encounter someone who knows the differences between all versions,
then I am in front of a mirror. The taxonomy is confused and
complicated, and people who are maniacal enough to learn and remember it
are very rare.



If we look at what Microsoft did when it encountered the same kind of
terminology mess, it decided that the number following 2000 was "XP".
Lately, for server versions, Microsoft uses a year-based numbering,
and even so, they depart from it at times, e.g. when they decided that
"2009" was really "2008R2".

In practice, people don't have a problem with gaps in numbering; they
are even eager to _create_ gaps when convenient, for instance by
not acknowledging the existence of Windows Vista.


So my conclusion is that terminology is essentially fluid and chosen by
people in the field, without any form of coordination and with a trend
toward simplification: the _operational_ notion is to lump versions into
two groups, the ones that must be used and the ones that must not be
used. There is almost nothing the IETF can do about it (though a really
poorly chosen name might increase confusion even further). The only
naming scheme which is kinda coherent is the numbering scheme on the
wire (3.0, 3.1...), and even that one fails to capture SSL 2.0 (which is
in fact 0.2 on the wire).


--Thomas Pornin



Re: [TLS] Maximum Fragment Length negotiation

2016-11-29 Thread Thomas Pornin
On Thu, Nov 24, 2016 at 09:10:00PM +0000, Fossati, Thomas (Nokia - GB)
wrote:
> I like your proposal, but I'm not convinced that overloading the
> semantics of an already existing extension when used in combination
> with a specific version of the protocol is necessarily the best
> strategy.  Besides, I'd like to be able to deploy a similar mechanism
> in 1.2.

Defining a new extension is certainly possible. However, it would then
require deciding on the intended behaviour when both that new extension
and the RFC 6066 extension are present.

Tentatively, one could try this:

  - The new extension documents the maximum record length supported
by whoever sends it. Encoding is as in RFC 6066: one byte of
value x for a maximum record plaintext length of 2^(x+8) bytes.
We extend that to the whole 1..8 range so that larger records
may be used by implementations that can afford them and obtain
some performance increase by doing so (the actual maximum plaintext
length will be slightly less than 65535 bytes because the length
header is 16-bit and there must be some room for the MAC); a
conversion sketch is given after this list.

  - If a client sends both the RFC 6066 extension and the new extension,
and the server supports the new extension, then the RFC 6066
extension is ignored and only the new extension is used. A server
MUST NOT send both extensions.

  - All implementations that support the extension MUST have the
ability to apply a shorter size limit than their maximum limit
(this is for _sending_ records).

  - The length sent by the server is the one that will be applied to
subsequent records on the connection, in both directions. This
applies to the whole connection, including subsequent handshakes
(renegotiations), unless both client and server send the new
extension again in a renegotiation (in which case the new length
applies).

  - If using TLS 1.3, then the following extra rules apply:

 - All TLS 1.3 implementations MUST support the extension.

 - If the client does not send the new extension, then this is
   equivalent to the client sending the new extension with a
   value of 6 (i.e. maximum plaintext length is 2^14 = 16384 bytes).
   In particular, this allows the server to send the extension.

 - If the server does not send the new extension, then this is
   equivalent to the server sending the new extension with the
   same value as the one from the client. (So, if neither sends
   the extension, then the usual 16384-byte limit applies.)

  - If using TLS 1.2 or a previous version, then there is no implicit
usage:

 - The server MUST NOT send the new extension unless the client sent
   it.

 - The maximum plaintext limit shall be enforced only if the server
   sent the extension; that limit is the one defined by the server's
   extension.

 - If the client and/or the server does not send the extension, then
   the maximum plaintext length is the one that was in force at that
   point, i.e. 16384 bytes for a new connection, or whatever was
   used before the new handshake in case of renegotiation.
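
The length encoding from the first point converts as follows (a hedged
sketch; helper names are hypothetical):

    #include <stdint.h>

    /* RFC 6066-style code byte: x encodes a maximum plaintext length
       of 2^(x+8) bytes, here extended to the whole 1..8 range
       (x = 1 -> 512, ..., x = 6 -> 16384, x = 8 -> 65536). */
    static uint32_t code_to_len(unsigned x)
    {
        return (x >= 1 && x <= 8) ? (uint32_t)1 << (x + 8) : 0;
    }

    /* Smallest code whose capacity covers 'len'; 0 if none does. */
    static unsigned len_to_code(uint32_t len)
    {
        unsigned x;
        for (x = 1; x <= 8; x++)
            if (((uint32_t)1 << (x + 8)) >= len)
                return x;
        return 0;
    }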

Some noteworthy points:

  * TLS 1.3 has no renegotiation, so the renegotiation behaviour is
for TLS 1.2 and previous. This avoids tricky issues with the
TLS 1.3 implicit behaviour in case of renegotiation.

  * A client SHOULD send the new extension in every ClientHello if it
is ready to use TLS 1.2 or previous, so that a non-1.3-aware
server may have the possibility to negotiate a shorter maximum
plaintext length.

  * The initial ClientHello may use records larger than what the server
is willing to accept, and before the server has any chance to
advertise its own maximum record size. However, since the initial
records are unprotected, implementations may be able to process
partial records, and thus could accept un-MACed records larger
than their incoming buffer (at least BearSSL can do that).


The "implicit" behaviour (both for client and server) with TLS 1.3 is a
way to make the extension free (with regards to network usage) in the
common case. It cannot be applied unless the extension support is made
mandatory for TLS 1.3. Making it mandatory is also an important feature,
since otherwise such an extension would likely remain unimplemented
by "big" clients (e.g. Web browsers).


Any comments?
I can try to write the corresponding text for inclusion in the TLS 1.3
draft. What is the process for submitting such text?


--Thomas Pornin



Re: [TLS] Certificate compression (a la QUIC) for TLS 1.3

2016-11-29 Thread Thomas Pornin
On Tue, Nov 29, 2016 at 02:05:21PM +0100, Nikos Mavrogiannopoulos wrote:
> Well, PKIX/X.509 parsing seems to be order of magnitude more complex
> than compression :)

I have implemented both at times, so I can confirm that X.509 parsing is
a bit more complex than decompression (with Deflate). The _compression_
is tougher.

Another point which is worth pointing out is that decompression (again
with Deflate, the algorithm inside gzip) works on repeating sequences in
the past 32 kB window, so a reliable implementation requires a buffer of
up to 32 kB. This won't make RAM-constrained people happy.


--Thomas Pornin

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Maximum Fragment Length negotiation

2016-11-24 Thread Thomas Pornin
Hello,

I know that I am a bit late to the party, but I have a suggestion for
the upcoming TLS 1.3.

Context: I am interested in TLS support in constrained architectures,
specifically those which have very little RAM. I recently published a
first version of an implementation of TLS 1.0 to 1.2, that primarily
targets that kind of system ( https://www.bearssl.org/ ); a fully
functional TLS server can then run in as little as 25 kB of RAM (and
even less of ROM, for the code itself).

Out of these 25 kB, 16 kB are used for the buffer for incoming records,
because encrypted records cannot be processed until fully received (data
could be obtained from a partial record, but we must wait for the MAC
before actually acting on the data) and TLS specifies that records can
have up to 16384 bytes of plaintext. Thus, about 2/3 of the RAM usage is
directly related to that maximum fragment length.

There is a defined extension (in RFC 6066) that allows a client to
negotiate a smaller maximum fragment length. That extension is simple
to implement, but it has two problems that prevent it from being
really usable:

 1. It is optional, so any implementation is free not to implement it,
and in practice many do not (e.g. last time I checked, OpenSSL did
not support it).

 2. It is one-sided: the client may ask for a smaller fragment, but
the server has no choice but to accept the value sent by the client.
In situations where the constrained system is the server, the
extension is not useful (e.g. the embedded system runs a minimal
HTTPS server, for a Web-based configuration interface; the client is
a Web browser and won't ask for a smaller maximum fragment length).


I suggest fixing these issues in TLS 1.3. My proposal is the following:

 - Make Max Fragment Length extension support mandatory (right now,
   draft 18 makes it "recommended" only).

 - Extend the extension semantics **when used in TLS 1.3** in the following
   ways:

   * When an implementation supports a given maximum fragment length, it
 MUST also support all smaller lengths (in the list of lengths
 indicated in the extension: 512, 1024, 2048, 4096 and 16384).

   * When the server receives the extension for maximum length N, it
 may respond with the extension with any length N' <= N (in the
 list above); a possible server-side choice is sketched after
 this list.

   * If the client does not send the extension, then this is equivalent
 to sending it with a maximum length of 16384 bytes (so the server
 may still send the extension, even if the client did not).

   Semantics for the extension in TLS 1.2 and previous is unchanged.
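
As announced above, a sketch of a possible server-side choice under
these semantics (hypothetical code; lengths are plaintext byte counts
from the RFC 6066 list, plus the implicit 16384):

    #include <stddef.h>
    #include <stdint.h>

    /* Pick the largest length from the extension's list that exceeds
       neither the client's offer nor the server's own capability;
       per the proposal, any N' <= N is a legal response. */
    static uint32_t negotiate_max_fragment(uint32_t client_max,
                                           uint32_t server_max)
    {
        static const uint32_t allowed[] = { 16384, 4096, 2048, 1024, 512 };
        uint32_t cap = client_max < server_max ? client_max : server_max;
        size_t i;
        for (i = 0; i < sizeof allowed / sizeof allowed[0]; i++)
            if (allowed[i] <= cap)
                return allowed[i];
        return 0; /* no usable length: abort the handshake */
    }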

With these changes, RAM-constrained clients and servers can negotiate a
maximum length for record plaintext that they both support, and such an
implementation can use a small record buffer with the guarantee that all
TLS-1.3-aware peers will refrain from sending larger records. With, for
instance, a 2048-byte buffer, per-record overhead is still small (about
1%), and overall RAM usage is halved, which is far from negligible.


RAM-constrained full TLS 1.3 is likely to be challenging (I envision
issues with, for instance, cookies, since they can be up to 64 kB in
length), but a guaranteed flexible negotiation for maximum fragment
length would be a step in the right direction.

Any comments / suggestions?

Thanks,


    --Thomas Pornin
