On Tue, Nov 28, 2023 at 1:00 PM James Addison <ja...@reciperadar.com> wrote:
>
> On Tue, Nov 28, 2023 at 12:25 PM Ben Schwartz <bem...@meta.com> wrote:
[snip]
> > I think DNS is simply the wrong tool for this job.  The most direct 
> > solution I've thought of involves a new X.509 OID for "HTTP content 
> > auditor" and a signature from the auditor on every returned resource, but 
> > that's off-topic for this working group.  You might also want to review the 
> > Mirror Protocol, which solves a different variant of this problem: 
> > https://datatracker.ietf.org/doc/draft-group-privacypass-consistency-mirror/.
>
> Thanks again - I'll read some more into those alternatives.  If
> possible I would like a solution that is backwards-compatible (despite
> the lack of integrity guarantees for legacy web clients), however I'm
> open to opting-in to forward-looking solutions too.

Along the X.509 path: I understand this suggests a mechanism that
does not currently exist.  Perhaps I'm conflating two ideas, but it
reads as potentially similar to some in-progress W3C specification
work, currently termed Source Code Transparency[1], that provides an
integrity attestation based on a transparency log.  In the interest
of transparency: the author of that spec and I have been in contact
to discuss[2] it in draft form.

As a non-confidential service, I'd like to (continue to) provide
RecipeRadar with integrity guarantees over plaintext.  To attempt to
opt in to an X.509-based approach in a plaintext over-the-wire
context, I've considered configuring null TLS ciphers, although I
don't believe those are widely supported.  To that end, use of HTTP
without TLS -- and therefore the absence of X.509 -- seems to be a
design constraint I'm working within.  This may appear off-topic,
and to some extent it is, but I'd like to explain the use case and
constraints.

The Mirror Protocol and the notion of double- or multiple-checking
could be useful, although I am not keen on the anticipated increase
in bandwidth usage for clients.  Having independent auditors run
checks on the resources from diverse locations would also be useful,
although given that the intent is to mitigate compromise of a very
small number of hosts within potentially large server fleets, that
auditing might be limited in its detection ability.
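
To make that detection concern concrete, here is a back-of-envelope
model of my own (not from any draft): the probability that an auditor
making m uniformly load-balanced requests against a fleet of N hosts,
of which k are compromised, hits at least one compromised host.

```python
def detection_probability(fleet_size: int, compromised: int,
                          samples: int) -> float:
    """Probability that at least one of `samples` uniformly
    load-balanced requests lands on a compromised host."""
    p_clean = (fleet_size - compromised) / fleet_size
    return 1.0 - p_clean ** samples

# One compromised host in a 1000-host fleet: even 100 audit
# requests detect it less than 10% of the time.
print(detection_probability(1000, 1, 100))
```

With a single compromised host in a large fleet, an auditor needs on
the order of the fleet size in sampled requests before detection
becomes likely, which is the limitation I had in mind.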

Some additional problems I've been mulling over:

  * Providing an end-user with a warning/notice that they cannot
meaningfully act upon can be counterproductive, and multiplicatively
more so in the presence of false positives.

  * Because this proposal -- whether implemented using TXT records, an
additional dedicated record type, or a ServiceParamKey option --
requires additional records at the zone apex, it could increase query
traffic to authoritative DNS servers.  This _might_ be tempered by the
homepage-only nature of the proposal (but refer to the next point
also).

  * Catering for situations where rapid updates are applied to
integrity records seems necessary, at least over short durations.  One
typo fix is frequently followed by others, for example, and some
application deployments require hotfixes after production metrics
arrive.  Those moments are cache-disruptive, and frustratingly so: it
shouldn't really be necessary for the authority to hold multiple
integrity records.  What might be preferable is for a resolver to
remember (or query) a few temporally-stale entries, and only to
request an updated integrity record when content cannot be validated
(suggesting that the workflow is in fact: a DNS A/AAAA lookup,
followed by an HTTP(S) request, followed by an optional DNS integrity
request).
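
As a sketch of that validate-then-requery workflow (assumptions on my
part: an SRI-style "sha384-<base64>" digest format as in my current
TXT-record deployment, and hypothetical `fetch` and
`lookup_integrity_record` callables standing in for a real HTTP client
and DNS resolver):

```python
import base64
import hashlib

def sri_digest(content: bytes) -> str:
    """Compute an SRI-style digest string for a response body."""
    raw = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(raw).decode()

def validate(content: bytes, integrity_record: str) -> bool:
    """Compare fetched content against a published integrity record."""
    return sri_digest(content) == integrity_record

def fetch_and_check(fetch, lookup_integrity_record, cached_records):
    """Fetch content, accept it if it matches any cached (possibly
    temporally-stale) integrity record, and only fall back to a fresh
    DNS integrity query when no cached record validates."""
    body = fetch()
    if any(validate(body, rec) for rec in cached_records):
        return body  # matched a known record; no extra DNS query
    fresh = lookup_integrity_record()  # optional follow-up DNS query
    if validate(body, fresh):
        return body
    raise ValueError("content matches no published integrity record")
```

The point of the sketch is the ordering: the integrity lookup happens
after the HTTP(S) response arrives, and only when cached records fail.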

In terms of a pathological propagation-failure scenario, I'm not there
yet, but I am constructing a case where a cache chain of perhaps ten
resolvers (with three authoritative servers) exists on the lookup side
-- where we want to prove a failure -- while integrity-record updates
are applied from a nameserver outside that chain.  I do think there
are important relationships here between the rate of application
deployments, the storage requirements for integrity records,
integrity-record TTL, client freshness, and the possibility of false
positives.  Those factors have all influenced the problems noted
above.
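
One crude way to express the TTL/deployment-rate relationship (again
my own rough model, with a single cache layer and no stale-record
fallback assumed): if the integrity record has TTL T seconds and
content is redeployed every D seconds on average, a resolver serving
a cached pre-deployment record can cause false positives for up to T
seconds after each deployment, so the fraction of time at risk is
bounded by T/D.

```python
def false_positive_window_fraction(record_ttl_s: float,
                                   deploy_interval_s: float) -> float:
    """Upper bound on the fraction of time a cached integrity record
    may disagree with the deployed content, assuming one cache layer
    and no fallback to temporally-stale records."""
    return min(1.0, record_ttl_s / deploy_interval_s)

# A 300s TTL against hourly deployments leaves ~8.3% of time at risk.
print(false_positive_window_fraction(300, 3600))
```

Chained caches and rapid hotfix bursts would only widen that window,
which is why the stale-record-fallback idea above appeals to me.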

[1] - 
https://www.w3.org/2023/03/secure-the-web-forward/talks/source-code-transparency.html

[2] - https://github.com/twiss/source-code-transparency/pull/2

> >
> > ________________________________
> > From: James Addison <ja...@reciperadar.com>
> > Sent: Tuesday, November 28, 2023 6:51 AM
> > To: Ben Schwartz <bem...@meta.com>
> > Cc: dnsop@ietf.org <dnsop@ietf.org>
> > Subject: Re: [DNSOP] RFC 9460: ServiceParamKey for web integrity
> >
> >
> > Hi Ben,
> >
> > Thanks for your response.  Please find some comments inline, with one
> > intra-line edit from your message, annotated with pipe symbols.
> >
> > On Tue, Nov 28, 2023 at 3:35 AM Ben Schwartz <bem...@meta.com> wrote:
> > >
> > > Hi James,
> > >
> > > RFC 9460 is quite flexible, and its IANA registration procedures are 
> > > relatively open, so there are few barriers to attempting a specification 
> > > like you describe.|
> >
> > Thanks - I'd like to be able to participate.  The intended goal is to
> > find existing mechanisms to provide high integrity assurance for
> > delivery of a static single-page HTML web application to clients -- or
> > to explore what those mechanisms could be if they do not yet exist.
> >
> > >  |However, I do not think it would be a wise approach, for several 
> > > reasons:
> > >
> > > HTTP is not normally used to serve a single resource per origin.
> >
> > Acknowledged - I don't have an on-topic response for this, although I
> > do believe that integrity within websites (and I admit that is not all
> > HTTP services) could be enhanced, and am aware of one[1] such request
> > for the W3C SRI spec.
> >
> > > HTTP resources admit a variety of representations, resulting in distinct 
> > > digest values.
> >
> > This is certainly true in a number of situations - dynamic websites
> > and differing character set encodings spring to mind.  Despite that I
> > think that there are cases where it is valuable to deliver static
> > content with high integrity.  Doing so can align well with client
> > implementation simplicity and cache hit rates.
> >
> > > The security offered by this feature would be extremely limited in the 
> > > common case where DNSSEC is not applied end-to-end.
> >
> > The proposal should allow some limited tampering of webserver
> > responses to be detected in the absence of DNSSEC, but I agree that
> > adding DNSSEC provides stronger guarantees.
> >
> > > Deploying this feature would be operationally challenging if the content 
> > > can ever change, because of the need to perform coordinated updates to 
> > > HTTP content and DNS records.
> >
> > The deployment mechanism that I use currently -- a TXT record where
> > the character string is prefixed with an uppercase B -- involves two
> > invocations of the openssl command (one dgst, one base64) and then
> > entry of the resulting hash into DNS.  For software development
> > lifecycles that include automation, I do not believe that it should be
> > onerous to optionally publish one or two digest values into DNS,
> > although I also do not think that the specification should constrain
> > the record update procedure.
> >
> > > Resource integrity is most valuable when the resource digest is held by a 
> > > party who is not the resource publisher, in order to prevent the 
> > > publisher from substituting a malicious resource.  However, in this 
> > > design, the resource publisher (i.e. the origin) also controls the DNS 
> > > records on its own zone.
> >
> > Agreed, although I think it should be acceptable (despite perhaps
> > appearing less trustworthy) for both entities to be the same.
> >
> > To further improve integrity (outside of the scope of either the DNS
> > or SRI specifications) it could make sense to allow independent
> > parties to rebuild the deployed web resource content from its source
> > code (perhaps retrieved from yet another entity) -- to verify that all
> > three of the DNS-published digest, the digest calculated from the
> > fetched resource, and the digest calculated after building the web
> > application entrypoint page from source have the same value (this is
> > similar to the idea of reproducible builds).
> >
> > [1] - https://github.com/w3c/webappsec/issues/497
> >
> > [snip]
> > > ________________________________
> > > From: DNSOP <dnsop-boun...@ietf.org> on behalf of James Addison 
> > > <ja...@reciperadar.com>
> > > Sent: Wednesday, November 22, 2023 12:52 PM
> > > To: dnsop@ietf.org <dnsop@ietf.org>
> > > Subject: [DNSOP] RFC 9460: ServiceParamKey for web integrity
> > >
> > >
> > > Hello,
> > >
> > > This is a follow-up / redirection from a discussion thread[1] on the
> > > dnsext mailing list regarding a proposal for an additional DNS RR
> > > type.  Feedback received there indicates that instead of a distinct
> > > record type, a ServiceParamKey for use with the RFC 9460 HTTPS record
> > > type could potentially cater to the requirements.
> > >
> > > In short summary of the previous thread: the request is for addition
> > > of an integrity record, in a similar or identical format to that
> > > specified by W3C HTML SubResource Integrity specification[2], to be
> > > available alongside existing A/AAAA records for domains containing
> > > webservers.  The contents of the record would be used by web browser
> > > clients to validate whether the response they receive from an initial
> > > request to the root URI path from any of the hosts in the domain
> > > matches an expected hash value.
> > >
> > > The motivation of the request is to provide an optional
> > > out-of-HTTP-band integrity check for web clients that download a
> > > single-page web application from a fixed  URI path on a domain name.
> > > The risk that it intends to mitigate is that one or more hosts within
> > > the domain could have become compromised to respond with web content
> > > that does not match that intended by the domain owner, regardless of
> > > the presence of TLS during the web requests.
> > >
> > > I have two questions about this in relation to RFC 9460:
> > >
> > > * Would it seem valid to suggest an HTTPS ServiceParamKey to contain
> > > an integrity record of this kind?
> > >
> > > * Given a desire to deliver content using _either_ plaintext HTTP _or_
> > > TLS-enabled HTTPS (traditionally TCP ports 80, 443 respectively) -
> > > would Section 9.5 of RFC 9460 (footnote three) conflict with the
> > > plaintext HTTP delivery mechanism?
> > >
> > > Thank you,
> > > James
> > >
> > > [1] - 
> > > https://mailarchive.ietf.org/arch/msg/dnsext/vtbGXqBKSKzBqYAAE1VMhATiuw4/
> > >
> > > [2] - https://www.w3.org/TR/2016/REC-SRI-20160623/
> > >
> > > [3] - https://www.rfc-editor.org/rfc/rfc9460.html#section-9.5
> > >
> > > _______________________________________________
> > > DNSOP mailing list
> > > DNSOP@ietf.org
> > > https://www.ietf.org/mailman/listinfo/dnsop
