The two mechanisms address different targets, but overall I prefer the
design of the new proposal.
Yours,
Daniel

On Wed, Jun 13, 2018 at 4:29 PM, David Benjamin <david...@chromium.org>
wrote:

> Are you asking about this new proposal (which still needs an amusing
> name), or the original GREASE mechanism?
>
> The original GREASE mechanism was only targeting ClientHello intolerance
> in servers. It's true that it uses specific values, and indeed there is
> nothing stopping buggy implementations from treating them differently.
> The thought then was that ClientHello intolerance in servers is usually
> just accidental. It takes a certain willful ignorance to forget the
> default in your switch-case, and then go out of your way to special-case
> things, rather than recheck the spec as to what you're supposed to do. It
> was also meant to be lightweight (a one-time implementation cost and a
> one-time allocation). It's imperfect, but it does seem to help with the
> problem.
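
For readers unfamiliar with the mechanism, here is a rough sketch of picking
one of those specific reserved values. It is purely illustrative, following
the 0x?A?A pattern from the GREASE draft; it is not BoringSSL's actual code.

    // Illustrative sketch only: pick one of the sixteen reserved GREASE
    // code points (0x0A0A, 0x1A1A, ..., 0xFAFA) at random, so peers cannot
    // latch onto any single value.
    package main

    import (
        "fmt"
        "math/rand"
    )

    func greaseValue(r *rand.Rand) uint16 {
        n := uint16(r.Intn(16)) // random high nibble, 0x0 through 0xF
        return n<<12 | 0x0A00 | n<<4 | 0x000A
    }

    func main() {
        r := rand.New(rand.NewSource(1)) // fixed seed, just for the example
        fmt.Printf("GREASE value to offer: 0x%04X\n", greaseValue(r))
    }
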
>
> This new proposal is targeting ServerHello intolerance problems. Rather
> than fixing a set of values initially, it regularly rerolls random values
> over time, with no fixed pattern. It should hopefully be more resilient
> to this sort of misbehavior. On the flip side, it is more work to
> maintain, and only implementations that update sufficiently frequently
> can participate, whereas, in theory, anyone could deploy the original
> GREASE.
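
To make that trade-off concrete, a hypothetical sketch of the server side:
it accepts the stable TLS 1.3 code point plus whatever rolling values it
still knows about, so only deployments that update frequently recognize the
current one. All names and values below are invented for illustration.

    // Hypothetical sketch; the rolling values below are placeholders I made
    // up, not real allocations.
    package main

    import "fmt"

    const stableTLS13 uint16 = 0x0304 // the RFC-allocated TLS 1.3 version

    // rollingWindow would be refreshed with each release; older entries age
    // out, so deployments that stop updating stop recognizing new values.
    var rollingWindow = []uint16{0x9E4B, 0x5ACF} // placeholder values

    // acceptableVersion reports whether an offered version is the stable
    // TLS 1.3 code point or one of the recent rolling ones.
    func acceptableVersion(v uint16) bool {
        if v == stableTLS13 {
            return true
        }
        for _, rv := range rollingWindow {
            if v == rv {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(acceptableVersion(0x0304)) // true
        fmt.Println(acceptableVersion(0x1234)) // false: unknown rolling value
    }
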
>
> On Wed, Jun 13, 2018 at 3:15 PM Daniel Migault <
> daniel.miga...@ericsson.com> wrote:
>
>> I also support something being done in this direction. I like the idea
>> of taking ephemeral, non-allocated code points.
>>
>> What is not so clear to me is how GREASE prevents a buggy implementation
>> from behaving correctly for the GREASE-allocated code points while
>> remaining buggy for the other (unallocated) code points.
>> Yours,
>> Daniel
>>
>> On Wed, Jun 13, 2018 at 2:06 PM, Alessandro Ghedini <
>> alessan...@ghedini.me> wrote:
>>
>>> On Tue, Jun 12, 2018 at 12:27:39PM -0400, David Benjamin wrote:
>>> > Hi all,
>>> >
>>> > Now that TLS 1.3 is about done, perhaps it is time to reflect on the
>>> > ossification problems.
>>> >
>>> > TLS is an extensible protocol. TLS 1.3 is backwards-compatible and may
>>> > be incrementally rolled out in an existing compliant TLS 1.2
>>> > deployment. Yet we had problems. Widespread non-compliant servers
>>> > broke on the TLS 1.3 ClientHello, so versioning moved to
>>> > supported_versions. Widespread non-compliant middleboxes attempted to
>>> > parse someone else’s ServerHellos, so the protocol was further hacked
>>> > to weave through their many defects.
>>> >
>>> > I think I can speak for the working group that we do not want to repeat
>>> > this adventure again. In general, I think the response to ossification
>>> > is two-fold:
>>> >
>>> > 1. It’s already happened, so how do we progress today?
>>> > 2. How do we avoid more of this tomorrow?
>>> >
>>> > The workarounds only answer the first question. For the second, TLS 1.3
>>> > has a section which spells out a few protocol invariants
>>> > <https://tlswg.github.io/tls13-spec/draft-ietf-tls-tls13.html#rfc.section.9.3>.
>>> > It is all corollaries of existing TLS specification text, but hopefully
>>> > documenting it explicitly will help. But experience has shown
>>> > specification text is only necessary, not sufficient.
>>> >
>>> > For extensibility problems in servers, we have GREASE
>>> > <https://tools.ietf.org/html/draft-ietf-tls-grease-01>. This enforces
>>> > the key rule in ClientHello processing: ignore unrecognized parameters.
>>> > GREASE enforces this by filling the ecosystem with them. TLS 1.3’s
>>> > middlebox woes were different. The key rule is: if you did not produce
>>> > a ClientHello, you cannot assume that you can parse the response.
>>> > Analogously, we should fill the ecosystem with such responses. We have
>>> > an idea, but it is more involved than GREASE, so we are very interested
>>> > in the TLS community’s feedback.
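
That invariant is simple to express in code: an intermediary that did not
generate the ClientHello should relay bytes untouched rather than interpret
the ServerHello. A minimal sketch, with made-up addresses and no claim to
match any real middlebox:

    // Minimal illustrative relay: forward bytes in both directions without
    // ever parsing the TLS records, since we did not produce the ClientHello.
    package main

    import (
        "io"
        "log"
        "net"
    )

    func relay(client net.Conn, backendAddr string) {
        defer client.Close()
        server, err := net.Dial("tcp", backendAddr)
        if err != nil {
            return
        }
        defer server.Close()
        go io.Copy(server, client) // client -> server, untouched
        io.Copy(client, server)    // server -> client, no ServerHello parsing
    }

    func main() {
        ln, err := net.Listen("tcp", "127.0.0.1:8443") // made-up local port
        if err != nil {
            log.Fatal(err)
        }
        for {
            c, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            go relay(c, "backend.example.net:443") // made-up backend address
        }
    }
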
>>> >
>>> > In short, we plan to regularly mint new TLS versions (and likely other
>>> > sensitive parameters such as extensions), roughly every six weeks,
>>> > matching Chrome’s release cycle. Chrome, Google servers, and any other
>>> > deployment that wishes to participate would support two (or more)
>>> > versions of TLS 1.3: the standard stable 0x0304, and a rolling
>>> > alternate version. Every six weeks, we would randomly pick a new code
>>> > point. These versions will otherwise be identical to TLS 1.3, save
>>> > maybe minor details to separate keys and exercise allowed syntax
>>> > changes. The goal is to pave the way for future versions of TLS by
>>> > simulating them (“draft negative one”).
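
For concreteness, here is a rough sketch of what a participating client's
supported_versions extension body might carry, with the rolling value
invented for the example; the encoding is the usual one-byte list length
followed by two-byte version code points.

    // Illustrative only: build the supported_versions extension body with a
    // rolling code point offered alongside the standard versions.
    package main

    import "fmt"

    // encodeSupportedVersions serializes a one-byte list length followed by
    // two-byte version code points, most preferred first.
    func encodeSupportedVersions(versions []uint16) []byte {
        body := []byte{byte(2 * len(versions))}
        for _, v := range versions {
            body = append(body, byte(v>>8), byte(v))
        }
        return body
    }

    func main() {
        rolling := uint16(0x8A13) // placeholder for this cycle's random value
        versions := []uint16{rolling, 0x0304, 0x0303} // rolling, TLS 1.3, TLS 1.2
        fmt.Printf("% X\n", encodeSupportedVersions(versions))
    }
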
>>> >
>>> > Of course, this scheme has some risk. It grabs code points everywhere.
>>> > Code points are plentiful, but we do sometimes have collisions (e.g. 26
>>> > and 40). The entire point is to serve and maintain TLS’s extensibility,
>>> > so we certainly do not wish to hamper it! Thus we have some safeguards
>>> > in mind:
>>> >
>>> > * We will document every code point we use and what it refers to. (If
>>> > the volume is fine, we can email them to the list each time.) New
>>> > allocations can always avoid the lost numbers. At a rate of one every 6
>>> > weeks, it will take over 7,000 years to exhaust everything.
>>> >
>>> > * We will avoid picking numbers that the IETF is likely to allocate,
>>> > to reduce the chance of collision. Rolling versions will not start with
>>> > 0x03, rolling cipher suites or extensions will not be contiguous with
>>> > existing blocks, etc. (a sketch of this follows the list)
>>> >
>>> > * BoringSSL will not enable this by default. We will only enable it
>>> > where we can shut it back off. On our servers, we of course regularly
>>> > deploy changes. Chrome is also regularly updated and, moreover, we will
>>> > gate it on our server-controlled field trials
>>> > <https://textslashplain.com/2017/10/18/chrome-field-trials/> mechanism.
>>> > We hope that, in practice, only the last several code points will be in
>>> > use at a time.
>>> >
>>> > * Our clients would only support the most recent set of rolling
>>> > parameters, and our servers the last handful. As each value will be
>>> > short-lived, the ecosystem is unlikely to rely on them as de facto
>>> > standards. Conversely, like other extensions, implementations without
>>> > them will still interoperate fine. We would never offer a rolling
>>> > parameter without the corresponding stable one.
>>> >
>>> > * If this ultimately does not work, we can stop at any time and only
>>> > have wasted a small portion of code points.
>>> >
>>> > * Finally, if the working group is open to it, these values could be
>>> > summarized in regular documents to reserve them, so that they are
>>> > ultimately reflected in the registries. A new document every six weeks
>>> > is probably impractical, but we can batch them up.
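
On the safeguard above about avoiding numbers the IETF is likely to
allocate, here is one illustrative way a rolling version could be picked.
The exact excluded ranges are my own guesses, not a statement of what
Chrome or BoringSSL would actually do.

    // Illustrative sketch: reroll until the value avoids ranges the IETF
    // has used or is likely to use (the exclusions below are assumptions).
    package main

    import (
        "fmt"
        "math/rand"
    )

    func pickRollingVersion(r *rand.Rand) uint16 {
        for {
            v := uint16(r.Intn(0x10000))
            switch {
            case v>>8 == 0x03: // real SSL/TLS versions live in 0x03xx
                continue
            case v>>8 == 0x7F: // TLS 1.3 draft versions used 0x7Fxx
                continue
            case v&0x0F0F == 0x0A0A: // avoid the 0x?A?A shape used by GREASE
                continue
            }
            return v
        }
    }

    func main() {
        r := rand.New(rand.NewSource(2018)) // seed chosen arbitrarily
        fmt.Printf("rolling version for this cycle: 0x%04X\n", pickRollingVersion(r))
    }
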
>>> >
>>> > We are interested in the community’s feedback on this proposal: anyone
>>> > who might participate, better safeguards, or thoughts on the mechanism
>>> > as a whole. We hope it will help the working group evolve its protocols
>>> > more smoothly in the future.
>>>
>>> This looks interesting and I very much agree that we should do
>>> *something* to try to avoid the pain we've seen with deploying TLS 1.3
>>> for future versions.
>>>
>>> We (Cloudflare) would be happy to help with developing and deploying it,
>>> and see how the experiment goes (and maybe even help put a draft together
>>> if needed, if that is the form this proposal will take).
>>>
>>> Cheers
>>>
>>
>
