I’ve read this and don’t entirely understand the use case.

If I am running a service that uses an in-the-clear transport and then 
experimentally add an encrypted transport, I can see the desire to let the 
clients know that the latter is experimental and subject to accidental 
unavailability.

But once I decide that I want my service to only be available over an encrypted 
transport, why would I make it available over the in-the-clear transport any 
longer?  To prevent fallback, the latter must be disabled entirely.

Taking the perspective of the client, assume the client discovers the service 
and the service is available via a few transport options, with the testing flag 
clear (0, not in testing).  The client then chooses an encrypted transport but 
suffers a connection failure.  The client has the flag in hand, but for other 
reasons, presses on to connect via the in-the-clear transport.  As the service 
operator has indicated that this fallback is not desirable, how does the 
operator react?
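To make the dilemma concrete, here is a rough sketch (purely illustrative; the `Transport` record and `testing` flag names are my assumptions about the discovered metadata, not anything from the draft) of how a client might order the transports it discovers, and why nothing stops it from walking down the list to cleartext anyway:

```python
# Illustrative sketch only: the Transport record and "testing" flag are my
# assumptions about the shape of the discovered metadata, not the draft's format.
from dataclasses import dataclass

@dataclass
class Transport:
    name: str
    encrypted: bool
    testing: bool  # operator's "experimental, may be unavailable" signal

def connection_order(transports):
    # Prefer encrypted, non-testing endpoints; cleartext sorts last.
    # Note what this does NOT do: nothing here prevents the client from
    # continuing down the list to cleartext after an encrypted failure.
    return sorted(transports, key=lambda t: (not t.encrypted, t.testing))

offers = [
    Transport("cleartext", encrypted=False, testing=False),
    Transport("encrypted", encrypted=True, testing=False),
]
print([t.name for t in connection_order(offers)])  # ['encrypted', 'cleartext']
```

The ordering is entirely the client's choice; the flag only informs it.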

It would seem to me that the best way forward in this use case is for the 
operator not to offer the service over the in-the-clear transport at all.  That 
is the only way to enforce the operator’s so-called “don’t fall back” rule.

Why would the service operator leave the undesirable option open?  Is it for 
clients that are not able to use the encrypted transport option?  How can the 
service operator distinguish between clients that can (and should) and those 
that can’t? And what if a client sometimes can and other times can’t, like in a 
nomadic client (nomadic: changes LAN connections from time to time, as opposed 
to mobile, constantly moving)?

I don’t understand the reason for any kind of “negotiating” in this case.  If 
the service operator does not want fallback to occur, remove the option for it 
to occur.

If it is a matter that I want a new service offering to be preferred over an 
old offering, in the sense that I want to test the new offering with live 
traffic for clients willing to take a risk, then offer both the old and new 
side-by-side and encourage, in any way you can, risk-takers to try the new.  (I 
feel compelled to add this cynical retort: this strategy worked so well with 
IPv6!  But let’s move on…)

The protocol design concept involved here is that one side of a communication 
**cannot** enforce any required reaction by the remote side.  The two sides are 
independent, and the medium in between is unreliable.

The server side can’t prevent the client side from attempting anything, in the 
same sense that you can never prevent an attack.  Neither side can demand that 
the other react in a certain way; it’s all requests (“please do”) and responses 
(“here it is”/“nope”).

A server doesn’t know the client’s context nearly as well as the client does, 
which means any assumptions it can make are limited.

Adding the testing flag is an interesting piece of metadata to offer for 
consideration by the remote side when connecting, but it isn’t enforceable, 
hence just a complication in the configuration of the server and the 
communication path.  (Misanthropically speaking: …as annoying as the dozens of 
happy-birthday messages in the intra-office chat channel that appear weekly, 
interrupting any chain of thought one might have had!)
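As advertised metadata, the flag would presumably ride along as one more SvcParam.  A hypothetical presentation-format example (the parameter name `testing` is my placeholder, not the draft’s registered name, and the owner name and target are invented):

```
; Hypothetical zone snippet: "testing" as a flag-type SvcParam is an
; assumption, not the registered name from the draft.
_dns.example.net. 3600 IN SVCB 1 svc.example.net. alpn=dot testing
```

A client that ignores the flag behaves exactly as before, which is the point: it is advisory metadata, not an enforcement mechanism.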

From: Ben Schwartz <bem...@meta.com>
Date: Monday, February 12, 2024 at 16:39
To: Manu Bretelle <chan...@gmail.com>, Peter Thomassen <pe...@desec.io>
Cc: Edward Lewis <edward.le...@icann.org>, "dnsop@ietf.org" <dnsop@ietf.org>
Subject: Re: [DNSOP] [Ext] Re: General comment about downgrades vs. setting 
expectations in protocol definitions

Manu and I have now published a draft describing this "testing" flag: 
https://datatracker.ietf.org/doc/draft-manuben-svcb-testing-flag/ 

While we think this is relevant to DELEG, it is entirely independent and could 
be used in any SVCB setting (although it doesn't have any obvious utility for 
HTTPS records at present).

--Ben Schwartz
________________________________
From: Manu Bretelle <chan...@gmail.com>
Sent: Wednesday, February 7, 2024 2:19 PM
To: Peter Thomassen <pe...@desec.io>
Cc: Edward Lewis <edward.le...@icann.org>; Ben Schwartz <bem...@meta.com>; 
dnsop@ietf.org <dnsop@ietf.org>
Subject: Re: [DNSOP] [Ext] Re: General comment about downgrades vs. setting 
expectations in protocol definitions

On Thu, Feb 1, 2024 at 4:49 AM Peter Thomassen 
<pe...@desec.io<mailto:pe...@desec.io>> wrote:


On 2/1/24 13:34, Edward Lewis wrote:
> The proper response will depend on the reason - more accurately the presumed 
> (lacking any out-of-band signals) reason - why the record is absent.

Barring any other information, the proper response should IMHO not depend on 
the presumed reason, but assume the worst case. Anything else would break 
expected security guarantees.

Agreed, I don't think the protocol should prescribe what to do in case of 
"operational error".  Differentiating an "operational error" from actual 
malicious interference is very likely going to be a slippery slope.
That being said, I think it will be useful for adoption that resolvers provide 
a feature to use DELEG and fall back to NS when things are not correct.  This 
is not something that should be part of the protocol, though.

What I see could be useful is if we could signal something akin to the 
qualifiers in SPF [0].  This way an operator could onboard their zone into 
DELEG in "testing mode", allowing them to enable DELEG with the comfort of 
falling back to NS, build confidence, and then flip the switch.  This could 
have the side effect of some DELEG delegations remaining in "testing mode" 
forever, though.


[0] https://www.spf-record.com/syntax 

Manu



> From observations of the deployment of DNSSEC, [...]
> It’s very important that a secured protocol be able to thwart or limit damage 
> due to malicious behavior, but it also needs to tolerate benign operational 
> mistakes.  If mistakes are frequent and addressed by dropping the guard, then 
> the security system is a wasted investment.

That latter sentence seems right to me, but it doesn't follow that the protocol 
needs to tolerate "benign operational mistakes".

Another approach would be to accompany protocol deployment with a suitable set 
of automation tools, so that the chance of operational mistakes goes down. That 
would be my main take-away from DNSSEC observations.

In other words, perhaps we should consider a protocol incomplete if the spec 
doesn't easily accommodate automation and deploying it without automation would 
yield significant operational risk.

Let's try to include automation aspects from the beginning.

Peter

--
https://desec.io/ 

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org<mailto:DNSOP@ietf.org>
https://www.ietf.org/mailman/listinfo/dnsop 
