Paul Hoffman <[email protected]> writes:

> >> During the development of the DoH standard, people from many DNS
> >> vendors (including the one you work for) contributed to the spec
> >> without objection in the WG.

[snip other comments]

One issue with the IETF specifications is that we allow for, and should
allow for, publication of specifications that enable interoperability.
What we fail to do (well) is provide guidance on when a specification
is applicable to a problem space, and when it should and SHOULD NOT be
used.  Sometimes an "Operational Considerations" section helps out in
this regard, but frequently not fully, in part because people,
companies, etc. find new and unique ways of using new protocols that
nobody had thought of.  But I digress from the real problem...

So the question is: yes, it's possible to do DNS over HTTPS, but should
everyone use it?  This is where the discussion seems to be
concentrating, and it is what we should be discussing.  However, I argue
that this has nothing to do with whether or not it should have been
published as an RFC.

Personally speaking, I don't think it's the right solution for "most
uses".  Much of the time I may trust my local, small ISP more than the
large corporations that are offering some of the global DoH or even
generic DNS resolution services.  And as I switch places, my trust may
change (my local, independent coffee shop is a "maybe"; a large chain,
"probably not").  I really need a "which do you trust more, A or B?"
choice when switching networks (with my preference saved).

But, a few wise large entities are making the decisions for us instead.
A few large companies are standing up expensive infrastructure and
advertising it using easy-to-remember addresses, saying "use us, use
us".  Other organizations are hard(ish)-coding what you should use in
their software, often pointing toward these large DoH resolver farms.
So now we are trending toward a lot of software sending all DNS queries
to a single company, or a small set of companies, implementing global
infrastructure.

Now, nowhere above am I saying "this is bad" or "that is bad".  I'm not
sure which is preferable.  Which is better: a lightweight protocol with
local caching that can be manipulated or sniffed by local on-path
attackers (who may be someone you have no choice but to use), or a more
complex stack of layered protocols that points toward a small number of
service providers?
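To make the "layering" concrete: a DoH query is just an ordinary DNS
wire-format message carried inside HTTPS; for a GET, RFC 8484 has the
client base64url-encode the message (padding stripped) into a "dns="
query parameter.  A minimal sketch, with the caveat that the resolver
URL below is a placeholder, not a real service:

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    # Header: ID=0 (RFC 8484 suggests 0 for HTTP cache friendliness),
    # flags=0x0100 (recursion desired), QDCOUNT=1, other counts 0.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, root byte, then QTYPE, QCLASS=IN.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

def doh_get_url(resolver: str, query: bytes) -> str:
    # RFC 8484 GET: base64url-encode the wire message, strip padding.
    encoded = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={encoded}"

url = doh_get_url("https://dns.example/dns-query",
                  build_dns_query("example.com"))
print(url)
```

The resulting URL is then fetched over TLS like any other HTTPS
resource, which is exactly where the extra layers (and the extra trust
in whoever runs the resolver endpoint) come in.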

The answer is likely different per person, per organization, etc.  What
I want where I live is likely very different than what I might want
behind a border with significant DNS rewriting.

So, DoH is hardly "bad" in itself.  It's the wrong decision for me in
some locations at some times.  But the standardization of an
interoperable specification in an RFC isn't what ramps up use (as Paul
has been saying).

The use was ramping up because a few smart companies realized they could
follow Google's model of standing up a public resolver that everyone
would want to use, then negotiating the use of those resolvers with some
software companies to get it deployed.

I don't object to DoH.  I think it's a critically important protocol for
protecting the privacy and usability of DNS in certain situations.  That
doesn't mean I want to use it everywhere, even though that's what we're
trending toward.

[that was a bit rambly; sorry]

-- 
Wes Hardaker                                     
My Pictures:       http://capturedonearth.com/
My Thoughts:       http://blog.capturedonearth.com/

_______________________________________________
dns-privacy mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dns-privacy
