Thanks, Thomas; inline responses below.

On 22 March 2017 at 00:15, Thomas Pornin <por...@bolet.org> wrote:
> Therefore, I propose to replace this paragraph:
>
>     An endpoint that has no limit on the size of data they receive can
>     set this value to any value equal to or greater than the maximum
>     possible record size, such as 65535. A larger value does not allow
>     the endpoint to send larger records than the protocol permits. An
>     endpoint that receives a value larger than the maximum defined in
>     the protocol MUST NOT exceed protocol-defined limits. For TLS 1.3
>     and earlier, this limit is 2^14 octets.
>
> with the following:
>
>     An endpoint that supports all sizes that comply with the
>     protocol-defined limits MUST send exactly that limit as value for
>     maximum record size (or a lower value). For TLS 1.3 and earlier,
>     that limit is 2^14 octets. Higher values are currently reserved for
>     future versions of the protocol that may allow larger records; an
>     endpoint MUST NOT send a value higher than 2^14 unless explicitly
>     allowed by such a future version and supported by the endpoint.
>
>     When an endpoint receives a maximum record size limit larger than
>     the protocol-defined limit, that endpoint MUST NOT send records
>     larger than the protocol-defined limit, unless explicitly allowed by
>     a future TLS version.

Added, tweaked a little:
https://github.com/martinthomson/tls-record-limit/commit/62a5ef2306c123394b4045913b81aee9f529dd90

(My original thought was that perhaps we could keep this orthogonal to
any potential extension that tweaked the maximum size, but this has
the same effect with less ugliness.)
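
To make the receiver-side rule concrete, here's a rough sketch of how I'd
expect an implementation to treat an advertised value larger than the
protocol limit (the constant and function names are mine, not from the
draft or any real stack):

    package main

    import "fmt"

    const protocolMaxRecordSize = 1 << 14 // 2^14 octets for TLS 1.3 and earlier

    // effectiveSendLimit returns the largest plaintext record this endpoint
    // may send, given the limit the peer advertised in the extension.
    func effectiveSendLimit(peerAdvertised uint32) uint32 {
        if peerAdvertised > protocolMaxRecordSize {
            // A larger advertised value never allows records beyond the
            // protocol-defined limit.
            return protocolMaxRecordSize
        }
        return peerAdvertised
    }

    func main() {
        fmt.Println(effectiveSendLimit(65535)) // 16384: never exceed the protocol limit
        fmt.Println(effectiveSendLimit(1024))  // 1024: honour a smaller peer limit
    }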

> Of course, larger-than-16384 records are not meant for constrained
> systems, but for big systems. Overhead for 16384-byte records with
> ChaCha20+Poly1305 is 21 bytes (for AES/GCM in TLS 1.2, this is 29
> bytes), i.e. less than 0.2%, which seems small enough to me; but there
> still is some demand for larger records, so it makes sense not to
> prevent them from ever happening with tighter wording.

Note that the main reason cited for having larger records is not the
size overhead (as you say, that's negligible), but the per-record
processing overhead.  Willy Tarreau gave a great presentation at a
workshop a while ago showing how moving between different layers of
his stack had a material effect on performance.  Larger records mean
doing any per-record processing less often.
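
For reference, the arithmetic behind "less than 0.2%" (the byte counts
are the ones you quoted; the per-field breakdowns in the comments are my
reading of them):

    package main

    import "fmt"

    func main() {
        const recordSize = 1 << 14 // 16384-octet plaintext records
        overheads := map[string]float64{
            "ChaCha20+Poly1305 (TLS 1.3)": 21, // 5-byte header + 16-byte tag
            "AES-GCM (TLS 1.2)":           29, // 5-byte header + 8-byte explicit nonce + 16-byte tag
        }
        for name, overhead := range overheads {
            fmt.Printf("%s: %.2f%%\n", name, 100*overhead/recordSize) // ~0.13% and ~0.18%
        }
    }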

> Another point which was made is that CBC cipher suites have a
> variable-length padding, up to 256 bytes (length byte + padding), which
> is not fully negligible: an endpoint with a 500-byte buffer would have
> to send a "maximum record size" of 223 bytes only, in order to fully
> support AES/CBC+HMAC/SHA-256 in all cases, while in practice most if not
> all endpoints will stick to minimal-sized paddings. Maybe there should
> be some extra wording saying that when a "maximum record size" was
> received, with a value less than the protocol-defined limit, then an
> endpoint SHOULD strive to use minimal-sized padding in cipher suites
> that have a variable-sized padding.

Yeah, the hazard there is that a sender doesn't pad minimally and then
the receiver has no real recourse if it hasn't reserved space for the
extra padding.  If you are going to do that, I think you need to make
it a MUST.

I didn't want to do this, because it's basically a prohibition on any
sort of padding-for-traffic-analysis-resistance (lame though it might
be).

So we have a trade-off.  I think that your suggestion is probably OK,
though that means making the limitation obvious.
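
To make the trade-off visible, a toy calculation (assuming TLS 1.2
AES-CBC with HMAC-SHA-256: 16-byte explicit IV, 32-byte MAC, 16-byte
blocks; the function names and example length are made up):

    package main

    import "fmt"

    const (
        blockSize = 16 // AES block size
        ivLen     = 16 // explicit CBC IV in TLS 1.2
        macLen    = 32 // HMAC-SHA-256
    )

    // cbcFragmentLen is the encrypted fragment size (record header excluded)
    // for a plaintext of length n with padLen bytes of padding, counting the
    // padding-length byte as part of padLen.
    func cbcFragmentLen(n, padLen int) int {
        return ivLen + n + macLen + padLen
    }

    // minimalPad is the smallest padding (length byte included) that reaches
    // a block boundary.
    func minimalPad(n int) int {
        return blockSize - (n+macLen)%blockSize
    }

    func main() {
        n := 160 // example plaintext; chosen so a full 256-byte padding is also valid
        fmt.Println("minimal padding:   ", cbcFragmentLen(n, minimalPad(n))) // 224
        // A sender may use up to 256 bytes of padding, so a receiver that
        // cannot assume minimal padding has to budget for this instead.
        fmt.Println("worst-case padding:", cbcFragmentLen(n, 256)) // 464
    }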

> This would be only for the benefit of CBC cipher suites with TLS 1.2 and
> earlier, not for TLS 1.3, because recent AEAD cipher suites have
> predictable (and small) overhead.

Well, in TLS 1.3 any padding counts toward the limit, so requiring
minimal padding at the cipher level would be possible even if the
cipher required some padding.
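
Sketching what I mean (illustrative only, not the draft text): the limit
would apply to the whole inner plaintext, padding included, so a sender
cannot pad its way past it:

    package main

    import "fmt"

    // innerPlaintextLen is what a TLS 1.3 record-size limit would constrain:
    // the content, the one-byte content type, and any zero-byte padding.
    func innerPlaintextLen(contentLen, padLen int) int {
        return contentLen + 1 + padLen
    }

    func main() {
        fmt.Println(innerPlaintextLen(100, 0))  // 101: minimal padding
        fmt.Println(innerPlaintextLen(100, 27)) // 128: the padding still counts
    }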

> Arguably, pre-TLS-1.3 versions also have problem with compression, which
> should be in all generality avoided, just like CBC cipher suites should
> also be avoided. Maybe this is not a problem after all, and constrained
> systems that are recent enough to implement this new extension will also
> "naturally" avoid CBC cipher suites anyway. (In any case, if an endpoint
> requires small records, then it cannot really talk with peers that don't
> support the proposed maximum_record_size extension, so it needs recent
> implementations that _should_ already implement at least TLS 1.2 and
> some AEAD cipher suites.)

Yes, I think that I ultimately decided that I didn't care enough to
solve this problem for block ciphers and I would let someone who cared
about them propose the solution that best suits them.

