Hi everyone,

I have just submitted a new individual Internet-Draft, draft-dns-content-delivery-00, titled "DNS-Based Content Delivery & Fallback Mechanism".

This document proposes a new protocol (DNSC) that allows User Agents to retrieve content, such as HTML or JSON, directly via DNS TXT records.
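As a rough illustration only (the `_dnsc` owner name and payload layout here are placeholders I made up, not the syntax the draft specifies), publishing content could look like an ordinary TXT record in a zone file:

```
; hypothetical zone snippet -- record name and payload format are illustrative
_dnsc.example.com. 3600 IN TXT "PGgxPlNpdGUgcGFya2VkPC9oMT4="  ; base64 of "<h1>Site parked</h1>"
```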
The primary goals of this mechanism are:

- To provide a fallback mechanism for cases where connections to a service's A/AAAA addresses fail or time out.

- To offer a lightweight hosting solution for parked domains or placeholder sites.

Key technical aspects of the draft include:

- It delivers content using TXT entries in the DNS.

- It establishes trust and can provide a secure context by relying on DNSSEC validation.

- It implements a chunking mechanism to support content that exceeds the safe size limits of a single DNS message by splitting it across multiple records.

- It supports content compression, specifically gzip and Brotli.
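To make the chunking and compression points above concrete, here is a minimal client-side sketch in Python. The per-chunk header format (`c=<index>/<total>;` followed by a base64 payload) and the reassembly logic are my own illustration, not the draft's wire format; the 16-chunk cap comes from my Security Considerations recommendation, and Brotli is omitted because it needs a third-party library:

```python
import base64
import gzip

MAX_CHUNKS = 16  # cap recommended in the draft's Security Considerations


def reassemble(txt_records):
    """Rebuild content that was split across multiple TXT records.

    Each record is assumed (illustratively) to look like
    'c=<index>/<total>;<base64 payload>'.  Records may arrive in any order.
    """
    chunks = {}
    total = None
    for rec in txt_records:
        header, payload = rec.split(";", 1)
        idx_s, total_s = header.removeprefix("c=").split("/")
        idx, n = int(idx_s), int(total_s)
        if n > MAX_CHUNKS:
            raise ValueError(f"chunk count {n} exceeds limit of {MAX_CHUNKS}")
        total = n
        chunks[idx] = base64.b64decode(payload)
    if total is None or len(chunks) != total:
        raise ValueError("missing chunks")
    return b"".join(chunks[i] for i in range(1, total + 1))


def decode_body(raw, encoding="gzip"):
    """Undo content compression; only gzip is shown here."""
    return gzip.decompress(raw) if encoding == "gzip" else raw
```

A sender would do the reverse: compress, split the compressed bytes, and base64-encode each piece into its own TXT record.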

The full draft is available here:
https://datatracker.ietf.org/doc/draft-dns-content-delivery/

I would greatly appreciate any feedback, reviews, or thoughts from the working group regarding the architecture, security considerations, and general interest in this approach.

Best regards,

April Faye John

---

PS: Here are some possible questions that I thought y'all might have, and my thoughts on them:

Q: DNS is designed for low-latency name resolution, not for hosting content like HTML or JSON. Can't large TXT records lead to packet fragmentation or increased load on recursive resolvers?

A: This is meant as a fallback mechanism or for parked domains only; it is not intended to replace high-traffic web servers. See the Security Considerations section for my recommendations on caching and chunk limits (max 16 chunks) as ways to protect the infrastructure.

--

Q: Aren't large TXT records a classic tool for DNS amplification DDoS attacks? An attacker can send a small spoofed query and cause a large response to be sent to a victim.

A: Since these are TXT records, they are no more dangerous than existing large TXT records (like those used for DKIM or SPF) if managed correctly.

--

Q: Your security model relies entirely on DNSSEC for trust. If a zone isn't signed, the content is "insecure," which many modern browsers are moving away from.

A: In my opinion this creates an incentive for DNSSEC adoption: tying "Secure Context" status to DNSSEC validation gives domain owners a tangible reason to sign their zones, and many registrars nowadays enable DNSSEC for their customers automatically.

--

Q: The IETF recently standardized SVCB and HTTPS records (RFC 9460) to provide instructions on how to reach a service. Why not just use those?

A: SVCB/HTTPS records tell a client how to connect to an HTTP server, whereas DNSC provides the actual content when that server is dead or non-existent. It’s a solution for the "Connection Refused" scenario.
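The fallback policy I have in mind can be sketched in a few lines. Both function names below are hypothetical, not from the draft; the fetchers are injected as callables so the policy itself is testable, where a real client would wire in an HTTP library and a DNS resolver:

```python
def fetch_with_dnsc_fallback(fetch_http, fetch_dnsc, url):
    """Try the ordinary HTTP path first; fall back to DNSC only on a
    connection-level failure -- the "Connection Refused" scenario.

    fetch_http and fetch_dnsc are caller-supplied callables (hypothetical,
    for illustration).  Returns the body and which path served it.
    """
    try:
        return fetch_http(url), "http"
    except (ConnectionError, TimeoutError):
        return fetch_dnsc(url), "dnsc"
```

Note that an HTTP error status (e.g. 500) would still count as "the server answered" and would not trigger the fallback here; only transport-level failures do.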

--

Q: DNS queries are often unencrypted (unless using DoH/DoT). Serving content via DNS might expose a user's browsing habits even more than standard SNI does.

A: There are two answers to that:

1. This method is intended to serve static content, not dynamic content. The pages a user could browse this way serve the same content to everyone, so they are A) already crawled and backed up by many spiders on the internet (like the Wayback Machine) and B) not personalized per visitor anyway, so a MITM such as an ISP could simply curl those pages itself and get the same result.

2. As DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT) become standard, the privacy of a DNSC request becomes better than that of a standard unencrypted HTTP request.

_______________________________________________
DNSOP mailing list -- [email protected]
To unsubscribe send an email to [email protected]