Hi David,

Thanks for the response.

I’ll try to give a deeper explanation of my thinking on points #2 and #4 
(“a-bit” and the draft citations), and see whether it leads to any further 
clarity or easier consensus.

Sorry for the length, and please don’t feel a need to respond to each 
individual part of this message. This is meant to give a coherent picture of 
the thinking and assumptions behind my objections, in hopes that you can more 
easily home in on examples that will best help to clear up any mismatches.


2. a-bit use cases/flag day avoidance:

I’m struggling to think of an example protocol that would need the a-bit to 
implement this kind of incremental increase in security.

The examples I’ve considered that run over TCP all do a feature (or version) 
negotiation in a handshake of some sort, then settle on a set of options that 
both sides understand, specifically so they can avoid flag days when making 
future upgrades.

In protocols that follow that practice, I’d expect that if they upgraded to add 
functionality for using the ENO session ID during authentication, the new 
protocol extension would add a new field type with a very specific meaning, 
probably with more than 1 bit of information (including for example which 
specific hash to use).
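
For instance (purely hypothetical syntax, just to show the shape I have in 
mind), the negotiation might look like:

    C: HELLO version=2 extensions=[..., eno-session-auth(sha256, sha512)]
    S: HELLO version=2 extensions=[..., eno-session-auth(sha256)]

After that exchange, both sides know to authenticate using HMAC-SHA256 over 
the ENO session ID; a peer that doesn’t understand the extension simply omits 
it, and the legacy authentication is used instead, with no flag day and no 
a-bit required.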

I’m also trying to think about the consequences if the protocol X in your 
example made such an extension by relying on the a-bit, and I keep running into 
reasons for concern.

2.a. A scenario for illustration:

For instance, maybe next year somebody reads about ENO and decides to upgrade 
protocol X, their proprietary gaming application protocol, so that Xv2 will be 
identical except that, if and only if the remote end sets the a-bit, the client 
sends HMAC-MD5 of passphrase+sessionID instead of just the cleartext 
passphrase. Your example exactly, I think.
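
To make sure I’m picturing the same thing you are, here is roughly the 
client-side change I imagine Xv2 making (a sketch of my reading of your 
example; all names are hypothetical):

    import hmac, hashlib

    def xv2_auth_token(passphrase, eno_session_id, remote_a_bit):
        # Hypothetical Xv2 rule: mix the ENO session ID into the credential
        # only when the remote end has set the a-bit; otherwise fall back to
        # the legacy cleartext passphrase. (passphrase and eno_session_id
        # are bytes.)
        if remote_a_bit and eno_session_id is not None:
            return hmac.new(passphrase, eno_session_id, hashlib.md5).digest()
        return passphrase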

They roll out the upgrade and tell their boss that even though many kinds of 
upgrades are impossible for them without a flag day, this one is safe and will 
secure their communications, so now they can finally transmit credit card info 
to their servers from clients that are running on an encryption-capable network 
stack. They run it that way for a few months, then a new attack on MD5 is 
discovered, and they realize it really should have been SHA-256 these days. 
Under the same logic that argues protocol X needs the a-bit, would the z1 bit 
in ENO be assigned for a second level of backward compatibility so that 
protocol Xv3 can distinguish between the SHA-256 and the MD5 hashes? I assume 
not, but this is part of why I’m struggling.

This example didn’t seem very compelling to me, so I was hoping that a 
real-life example that needs the a-bit for this kind of backward compatibility 
would make it easier for me to follow the intended usage.

2.b. A different example with a different problem:

Suppose I write an app that talks to servers I don’t own, and my OS has the 
recommended API feature from 4.7 (“implementations MAY provide a per-connection 
mandatory encryption mode that automatically resets a connection if ENO 
fails”). My app tries one connection with mandatory encryption; then, on 
failure with the right error code, it retries without encryption and with a 
reduced feature set of things I’m willing to send to the server. However, I 
don’t have time to implement session ID authentication, and the server wouldn’t 
accept it anyway.
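
To make the flow concrete, this is roughly what I picture my app doing (the 
socket option, its number, and the error code are all invented for 
illustration; whatever real knob exists would come from the informational API 
draft, not from ENO itself):

    import errno, socket

    TCP_ENO_MANDATORY = 0x42  # hypothetical option number, illustration only

    def connect_with_fallback(host, port):
        # First attempt: ask for the (hypothetical) per-connection
        # mandatory-encryption mode, which resets the connection if ENO fails.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.setsockopt(socket.IPPROTO_TCP, TCP_ENO_MANDATORY, 1)
            s.connect((host, port))
            return s, True    # encrypted: full feature set
        except OSError as e:
            s.close()
            if e.errno != errno.ECONNRESET:  # guessing at "the right error code"
                raise
        # Fallback: plain TCP, with a reduced set of things I'm willing to
        # send to the server over the unencrypted connection.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((host, port))
        return s, False       # cleartext: reduced feature set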

Should my app set the a-bit? I think this version of the ENO draft says yes, 
because I have altered my behavior in the presence of encrypted TCP (and it 
wasn’t practical for me to authenticate, so I qualify as an exception for the 
first SHOULD from 5.1). I publish my app this way, and it’s downloaded by a few 
hundred folks with accolades about my security-consciousness.

However, a few years later the owners of the server side read section 9’s 
“applications MAY use the application-aware bit to negotiate the inclusion of 
session IDs in authentication”, and they roll out a new version that does 
exactly that, folding new semantics into the authentication field. They also 
publish a blog post explaining the new meaning of application-awareness, what 
authentication using session IDs now looks like, and how to upgrade to a new 
version of someone else’s more-popular app that already incorporates these 
authentication semantics.

Now my app’s authentication has a different meaning from the server’s, and I 
start getting rejected on authentication until I can roll out an emergency 
upgrade, because I’m setting the a-bit but my upgrade wasn’t coordinated with 
the server’s.

In this scenario, I think the a-bit hasn’t avoided the flag day. The reason is 
semantic confusion over the proper use of the a-bit, yet everybody followed the 
rules as laid out in this draft.

Therefore, there must be some problem either with my understanding of what’s 
allowed or with the rules in the draft.

2.c. a-bit summary

I think it’s likely I’m missing something, and I’m hoping that a real-life 
example and a walk-through of a use case where the a-bit is expected to help 
(and maybe also an explanation of how scenarios like the two above are avoided 
under a correct reading of the draft) will help clear things up.

But if it turns out this is a real issue, I think this problem goes away 
without having to change any implementations if you cut out section 9’s “To 
preserve backwards compatibility, applications MAY use the application-aware 
bit to negotiate the inclusion of session IDs in authentication.”

Instead, by leaving the authentication negotiation to the higher level 
protocol, where they’ll most likely want to do it anyway, the responsibility 
for negotiating it backwards-compatibly falls entirely to the higher level 
(even though the SHOULD from 5.1 remains in place and recommends that the 
session ID be used as part of the authentication where practical).

If you think it’s important to state that the a-bit might be involved in the 
decision, another alternative is perhaps to change to a negative requirement, 
with something like: “applications SHOULD NOT try to use the session ID in the 
authentication unless the remote host sets the a-bit” instead of the backwards 
compatibility sentence from section 9.

The negative formulation to me does not imply that it’s safe to assume that the 
a-bit can serve as the sole form of negotiation about the authentication format 
at the higher level. However, I’m not sure how useful it is, since presumably a 
non-aware application won’t accept a session-ID-based authentication anyway, 
unless the application simply never changed its code to set the a-bit. (In 
which case, if the higher level negotiates a session-ID-based authentication 
method and both sides agree, they could use it regardless of whether the a-bit 
is set, and it would be pointless to prevent them from doing so by adding this 
negative clause. What am I missing here?)

I don’t see a way the a-bit is harmful in itself, but I haven’t yet understood 
a case where it can safely be used for the backward-compatibility purpose 
highlighted in section 9 of the current ENO draft.

So can you please point me to a real-life example protocol that needs the a-bit 
for backward compatibility? Thanks.


4. citing drafts in support of future large SYN options:
“Is there harm in doing this?  E.g., is it bad practice to cite internet drafts 
(non-normatively, of course) in an RFC?”

4.a. Citing drafts does go against the current BCP, as I understand it.

From https://tools.ietf.org/html/rfc2026#section-2.2, in a big star-box:
“Under no circumstances should an Internet-Draft be referenced by any paper, 
report, or Request-for-Proposal, nor should a vendor claim compliance with an 
Internet-Draft.”

There’s a partial exception right afterward, which I’m not sure how well it 
applies in this case:
“
   Note: It is acceptable to reference a standards-track specification
   that may reasonably be expected to be published as an RFC using the
   phrase "Work in Progress" without referencing an Internet-Draft.
   This may also be done in a standards track document itself as long
   as the specification in which the reference is made would stand as a
   complete and understandable document with or without the reference to
   the "Work in Progress".
“

4.b. The moral case for truth in advertising:

That said, I do think it’s reasonable to make the point that extending SYN 
option space would benefit ENO, and to point to evidence of ongoing work in 
that direction, even if it’s a long shot.

I also agree that one of the cited drafts is legitimately attempting something 
that would help ENO in this way if it continues to move forward 
(https://tools.ietf.org/html/draft-touch-tcpm-tcp-syn-ext-opt-06). In my 
original comment, one of the 2 alternatives I suggested as an edit was to 
continue to cite that draft, but to point out that it’s experimental.

However, I think 2 of the citations currently used as evidence are misleading, 
one of them because it shows no signs of moving forward toward any form of 
adoption 
(https://tools.ietf.org/html/draft-briscoe-tcpm-inspace-mode-tcpbis-00), and 
the other because it does not apply to SYN or SYN-ACK 
(https://tools.ietf.org/html/draft-ietf-tcpm-tcp-edo-07), and therefore 
wouldn’t help ENO’s use case. That is why I suggested cutting them out. (Or 
alternatively, if removing them weakens the point so much that it’s no longer 
worth making, to cut out the paragraphs that rely on them.)

My underlying concern here is that someone might take hope from this section 
and try to push their luck by putting keys into the SYN option with a length 
near the lower edge of what’s OK for security, in hopes that by the time it’s a 
real problem they’ll get an extension on the option space, rather than 
wrestling with the likely-harder problem of sending the keys in the payload.

But in practice, the usable option space in a SYN is likely to be even smaller 
than what this draft points out, because if you leave window-scale, timestamps, 
and sack-permitted out of your SYN, you’re likely to lose more on performance 
than any gains you might have made by getting a key into the SYN.
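
As a rough back-of-the-envelope (my own arithmetic using the standard option 
sizes, not something from the draft):

    Total TCP option space in a SYN:      40 bytes
      MSS (kind 2):                       -4
      Window scale (kind 3):              -3
      SACK permitted (kind 4):            -2
      Timestamps (kind 8):                -10
                                          --------
      Left for ENO plus key material:     ~21 bytes

That is already short of, say, a 32-byte public key, before counting the ENO 
option’s own header bytes and before anything else you might want in the SYN 
(a TFO cookie, for example).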

In my experience, this is exactly the kind of subtle early misunderstanding 
that can lead a team to spend 6+ months developing something, then abort the 
project once they discover it cannot be made both performant enough and secure 
enough under the constraints of an early design decision.

Therefore, I would prefer that this doc not present a rosier-than-reality 
picture of the likelihood that future developments will make large SYN options 
available.

To take that a little further, I’d almost rather see an explicit warning that 
provides clear support for a claim like “You cannot fit a cryptographically 
secure key into the SYN option unless and until further standards work makes it 
possible to have more space there. Don’t try, you’ll regret it.”

It’s kind of a minor point to be getting this much discussion, but that’s a 
more complete explanation of my objection.

I hope that helps.

-Jake


On 2/2/17, 9:14 PM, "David Mazieres" <dm-list-tcpcr...@scs.stanford.edu> wrote:

"Holland, Jake" <jholl...@akamai.com> writes:

> A few suggestions that I think might improve the doc:

Thanks for going through the document.

> 1. There should be a MUST for an API that an application can use to
> discover whether a connection ended up encrypted, unless it’s there
> and I missed it. I couldn’t find one in the doc, but it seems a likely
> vital point for anything that satisfies the application-aware
> definition.

This is a good catch.  There used to be a requirement that
implementations MUST provide an API for getting the session ID that MUST
give an error if the connection is not TEP-encrypted, but I think this
got moved into the informational API draft.  The natural place to
mention this would be section 9 (security considerations).

> 2. I’d like to see a section that lists a use case or 2 that can be
> solved by knowing the remote host’s a-bit (or with the mandatory
> application-aware mode), and how the a-bit solves them. I assume I’m
> missing something obvious, but I haven’t been able to come up with a
> use case that does anything useful with the remote a-bit.

This is mentioned in the second paragraph of section 9 ("applications
MAY use the application-aware bit to negotiate the inclusion of session
IDs in authentication.")  However, if you think the point is worth
making more explicitly, maybe a place to do this would be in a new
subsection of 7 (design rationale).

> (An example guess: is the whole point so that you can avoid sending
> sensitive data if the remote app itself hasn’t done anything to become
> secure? And if so, is there some reason the application-layer protocol
> shouldn’t be in charge of determining that?)

Actually the point is to enable incremental steps towards securing
legacy protocols.  Let's say that today some protocol X sends a plaintext
authentication cookie.  With TCP-ENO, at least the cookie isn't
available to a passive eavesdropper, but it's still not very secure.  So
you use the application-aware bit to signal that instead of a cookie,
you will send a MAC of the session ID under the cookie, thereby keeping
the cookie secret.  Then in a few years, once everyone is always setting
the application-aware bit, you disable the old authentication behavior
or issue some warning, or provide some sort of application-aware pinning
mechanism to prevent rollback attacks.

The application-aware bit is what allows this incremental improvement in
security to happen without a flag day, because having a flag day would
be a show stopper for a lot of legacy protocols.

Does that make sense?  And does it merit its own section 7.6 or
something?

> 3. All 3 instances of “manual(ly)” in the doc seem better if changed
> to “explicit(ly)” (sections 4.2 and 7.4)

That's an easy change.

> 4. In section 7.1, the hopes of increasing TCP’s SYN option space seem
> exaggerated. EDO does not apply to SYN, and of the other 2 cited
> drafts, one is expired over a year ago and the other looks, I guess
> I’d call it "tricky", in addition to being experimental. It might be
> better to remove the second and third paragraphs of section 7.1, or at
> least reduce to just the one example of a live and applicable draft
> (and maybe noting that it’s experimental).

I agree that large SYN options seem like kind of a long shot, but if
they ever happen, ENO stands to benefit.  So part of the point here is
to make it unambiguous that ENO would benefit from large SYN options and
is ready to take advantage of them, because A) it's true, and B) it
could potentially strengthen the case for current or future large option
designs.

Is there harm in doing this?  E.g., is it bad practice to cite internet
drafts (non-normatively, of course) in an RFC?

David



