Re: [DNSOP] Call for Adoption: draft-huston-kskroll-sentinel

2017-11-27 Thread Richard Barnes
Well, that's what I get for providing drive-by feedback.  Someone pointed
me off-list to RFC 8145 and the operational issues with that.  I still
think that that calls for a better authoritative/resolver telemetry
interface, not some client-side thing.

On Mon, Nov 27, 2017 at 1:10 PM, Richard Barnes <r...@ipv.sx> wrote:

> George, you should know better than to claim that a mechanism that
> requires resolver updates will have "immediate benefit" :)
>
> I do not find this mechanism terribly compelling.  It is not useful in the
> short run, as noted above.  And it has the wrong architecture for the long
> run.
>
> What zone operators need, for KSK roll-overs and other evolution
> decisions, is telemetry about the capabilities of the resolvers they
> serve.  In order for an approach like this to provide that telemetry, one
> would need a broad-scale client-side measurement system.  While such
> systems exist (Geoff and George being expert practitioners), they have a
> lot of problems -- they're expensive to operate at scale; they're extremely
> limited in terms of what they can measure and how reliably; and they impose
> much more overhead than is needed here.  We shouldn't be building a
> telemetry system for the DNS that has hard-coded assumptions about web ads
> or dedicated probes.
>
> It would be far better to build a telemetry mechanism that operated
> directly between resolvers and authoritative servers.  There are a variety
> of ways you could do this.  In today's world, you could have some record by
> which an authoritative server could advertise a telemetry submission
> point.  In a DOH world, you could have the resolver provide a Link header
> telling the authoritative server where it could pick up information about
> resolver capabilities.  None of these are hard to build (and they don't
> interfere with the "fast path" of the resolver) and they provide much
> higher quality information.
>
> If you need data for the KSK roll that we're already a decade late for,
> gather it in a way that doesn't require a resolver upgrade.  (Deploy a
> dedicated temporary TLD if you need to.)  If you're trying to solve the
> long-run telemetry problem, then build it properly.
>
> --Richard
>
>
> On Thu, Nov 16, 2017 at 3:34 AM, George Michaelson <g...@algebras.org>
> wrote:
>
>> I support adoption of this work. It's a sensible, simple proposal which
>> has immediate benefit, and can be used by anyone to test the ability
>> of their nominated resolver to recognise specific keys, and their
>> trust state.
>>
>> I believe as a community, at large,  we need code deployed into the
>> resolvers in the wild, and we need a document specifying the behaviour
>> we need deployed into those resolvers. We can use this to inform
>> ourselves of operational risk under keychange. We can know as
>> individuals, as organizations what we will see, if keys change. I
>> think this is quite powerful compared to measurement of what
>> resolvers see, or what authoritatives or roots see, going back to
>> these service-providers themselves. This method informs the client
>> side of the transaction. That's big.
>>
>> I'm not saying we shouldn't do other things, or measure. I'm saying
>> that this proposal has a qualitative aspect which I think is
>> different, and good.
>>
>> -George
>>
>> On Thu, Nov 16, 2017 at 4:23 PM, tjw ietf <tjw.i...@gmail.com> wrote:
>> > All
>> >
>> > The author has rolled out a new version addressing comments from the
>> meeting
>> > on Monday, and we feel it’s ready to move this along.
>> >
>> > This starts a Call for Adoption for draft-huston-kskroll-sentinel
>> >
>> > The draft is available here:
>> > https://datatracker.ietf.org/doc/draft-huston-kskroll-sentinel/
>> >
>> > Please review this draft to see if you think it is suitable for
>> > adoption by DNSOP, and send comments to the list, clearly stating your
>> > view.
>> >
>> > Please also indicate if you are willing to contribute text, review, etc.
>> >
>> > This call for adoption ends: 30 November 2017 23:59
>> >
>> > Thanks,
>> > tim wicinski
>> > DNSOP co-chair
>> >
>> > ___
>> > DNSOP mailing list
>> > DNSOP@ietf.org
>> > https://www.ietf.org/mailman/listinfo/dnsop
>> >
>>
>> ___
>> DNSOP mailing list
>> DNSOP@ietf.org
>> https://www.ietf.org/mailman/listinfo/dnsop
>>
>
>
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Call for Adoption: draft-huston-kskroll-sentinel

2017-11-27 Thread Richard Barnes
George, you should know better than to claim that a mechanism that requires
resolver updates will have "immediate benefit" :)

I do not find this mechanism terribly compelling.  It is not useful in the
short run, as noted above.  And it has the wrong architecture for the long
run.

What zone operators need, for KSK roll-overs and other evolution decisions,
is telemetry about the capabilities of the resolvers they serve.  In order
for an approach like this to provide that telemetry, one would need a
broad-scale client-side measurement system.  While such systems exist
(Geoff and George being expert practitioners), they have a lot of problems
-- they're expensive to operate at scale; they're extremely limited in
terms of what they can measure and how reliably; and they impose much more
overhead than is needed here.  We shouldn't be building a telemetry system
for the DNS that has hard-coded assumptions about web ads or dedicated
probes.
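
(For context, the kind of client-side probe this mechanism enables looks
roughly like the following.  This is my own sketch, not text from the draft:
the sentinel query-name format and the zone used are defined there, so the
labels and the key tag below are purely illustrative.)

    import socket

    # Illustrative sentinel names; the real label format and measurement zone
    # are specified by the draft, not here.
    IS_TA  = "kskroll-sentinel-is-ta-20326.test.example.com"
    NOT_TA = "kskroll-sentinel-not-ta-20326.test.example.com"

    def resolves(name):
        try:
            socket.getaddrinfo(name, 80)
            return True
        except socket.gaierror:
            return False

    a, b = resolves(IS_TA), resolves(NOT_TA)
    if a and b:
        print("resolver is not sentinel-aware (or not validating)")
    elif a:
        print("sentinel-aware; key tag 20326 is in its trust store")
    elif b:
        print("sentinel-aware; key tag 20326 is NOT in its trust store")
    else:
        print("both queries failed; no conclusion")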

It would be far better to build a telemetry mechanism that operated
directly between resolvers and authoritative servers.  There are a variety
of ways you could do this.  In today's world, you could have some record by
which an authoritative server could advertise a telemetry submission
point.  In a DOH world, you could have the resolver provide a Link header
telling the authoritative server where it could pick up information about
resolver capabilities.  None of these are hard to build (and they don't
interfere with the "fast path" of the resolver) and they provide much
higher quality information.
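
To make the first option concrete, here is a rough sketch; every name in it
(the record owner, the URL, the report fields) is invented for illustration
and not specified anywhere:

    import json
    import urllib.request

    # Hypothetical: the zone operator advertises a submission point, e.g. a
    # TXT record at _telemetry.example pointing at an HTTPS endpoint, and the
    # resolver periodically POSTs a small capability report to it.
    report = {
        "software": "example-resolver/1.2",
        "dnssec_validation": True,
        "root_trust_anchors": [20326],   # key tags currently configured
    }
    req = urllib.request.Request(
        "https://telemetry.example/report",   # taken from the advertised record
        data=json.dumps(report).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

The DOH/Link-header variant would carry the same sort of report in the other
direction; the point is that the exchange stays between the resolver and the
authoritative side, with no end-user client in the loop.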

If you need data for the KSK roll that we're already a decade late for,
gather it in a way that doesn't require a resolver upgrade.  (Deploy a
dedicated temporary TLD if you need to.)  If you're trying to solve the
long-run telemetry problem, then build it properly.

--Richard


On Thu, Nov 16, 2017 at 3:34 AM, George Michaelson  wrote:

> I support adoption of this work. It's a sensible, simple proposal which
> has immediate benefit, and can be used by anyone to test the ability
> of their nominated resolver to recognise specific keys, and their
> trust state.
>
> I believe as a community, at large,  we need code deployed into the
> resolvers in the wild, and we need a document specifying the behaviour
> we need deployed into those resolvers. We can use this to inform
> ourselves of operational risk under keychange. We can know as
> individuals, as organizations what we will see, if keys change. I
> think this is quite powerful compared to measurement of what
> resolvers see, or what authoritatives or roots see, going back to
> these service-providers themselves. This method informs the client
> side of the transaction. That's big.
>
> I'm not saying we shouldn't do other things, or measure. I'm saying
> that this proposal has a qualitative aspect which I think is
> different, and good.
>
> -George
>
> On Thu, Nov 16, 2017 at 4:23 PM, tjw ietf  wrote:
> > All
> >
> > The author has rolled out a new version addressing comments from the
> meeting
> > on Monday, and we feel it’s ready to move this along.
> >
> > This starts a Call for Adoption for draft-huston-kskroll-sentinel
> >
> > The draft is available here:
> > https://datatracker.ietf.org/doc/draft-huston-kskroll-sentinel/
> >
> > Please review this draft to see if you think it is suitable for adoption
> > by DNSOP, and send comments to the list, clearly stating your view.
> >
> > Please also indicate if you are willing to contribute text, review, etc.
> >
> > This call for adoption ends: 30 November 2017 23:59
> >
> > Thanks,
> > tim wicinski
> > DNSOP co-chair
> >
> > ___
> > DNSOP mailing list
> > DNSOP@ietf.org
> > https://www.ietf.org/mailman/listinfo/dnsop
> >
>
> ___
> DNSOP mailing list
> DNSOP@ietf.org
> https://www.ietf.org/mailman/listinfo/dnsop
>
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] `localhost` and DNS.

2017-11-15 Thread Richard Barnes
On Thu, Nov 16, 2017 at 5:05 AM, Ted Lemon  wrote:

> On Nov 15, 2017, at 10:51 PM, Mike West  wrote:
>
> Skimming through the recording of Monday's meeting (starting at around
> 53:56), it sounds to me as though there's at least loose agreement that
> signing a response for `localhost` is not what we'd like to recommend: all
> the folks who commented explicitly took that position for similar reasons.
> The current text in
> https://tools.ietf.org/html/draft-ietf-dnsop-let-localhost-be-localhost-01#section-4.2
> reflects this position, and IMO it's what we should run with.
>
>
> Yes, the current text appears to me to be correct.
>

+1



>
>
> ___
> DNSOP mailing list
> DNSOP@ietf.org
> https://www.ietf.org/mailman/listinfo/dnsop
>
>
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] DNSOP Call for Adoption - draft-west-let-localhost-be-localhost

2017-09-12 Thread Richard Barnes
On Tue, Sep 12, 2017 at 8:54 AM, Tony Finch  wrote:

> Paul Vixie  wrote:
> >
> > while i've generally included a localhost.$ORIGIN A RR in zones that
> appear in
> > my stub resolver search lists, in order that "localhost" be found,
>
> I agree with the rest of your message but I want to highlight this bit
> because it is directly related to the main reason this draft exists.
>
> Your localhost records (like the ones I deleted from cam.ac.uk last week)
> are troublesome for the web browser same origin security policy: they can
> lead to vulnerabilities when your websites are accessed from multi-user
> machines and in other more obscure circumstances - for details, see
> http://seclists.org/bugtraq/2008/Jan/270


Cf.
https://tools.ietf.org/html/draft-thomson-postel-was-wrong-00#section-4.1

When something shouldn't work, it shouldn't work.

--Richard



>
>
> Tony.
> --
> f.anthony.n.finch    http://dotat.at/  -  I xn--zr8h
> punycode
> Tyne, Dogger: Westerly backing southeastrly 4 or 5, occasionally 6 at
> first,
> then becoming cyclonic, mainly northwesterly later, 6 to gale 8,
> occasionally
> severe gale 9 later in south. Moderate or rough, occasionally very rough
> later
> in south. Rain. Good occasionally poor.
>
> ___
> DNSOP mailing list
> DNSOP@ietf.org
> https://www.ietf.org/mailman/listinfo/dnsop
>
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] DNSOP Call for Adoption - draft-west-let-localhost-be-localhost

2017-09-06 Thread Richard Barnes
I am strongly in support of the WG adopting this draft.  It will allow
applications to deliver a better developer experience and higher security.

As Ted notes, there is a possibility of breakage if something on a host
is relying on an external resolver to provide localhost resolution in
accordance with RFC 6761.  However, that behavior is almost certainly not
secure to start with, so this breakage is of the good, "increasing
security" kind.

--Richard

On Wed, Sep 6, 2017 at 10:35 AM, Ted Lemon  wrote:

> On Sep 6, 2017, at 10:33 AM, tjw ietf  wrote:
>
> Thanks.  The document still waffles, but it 'waffles less' than it did
> initially.  But Mike is committed to working that and any other issue which
> may arise.
>
>
> The question I really have is not whether Mike is willing—he's stated that
> he is.   It's whether the working group is willing, since returning
> NXDOMAIN is an actual change in behavior from the original specification in
> RFC 6761, and will likely result in some breakage, since it can safely be
> assumed that some stacks are currently following the RFC6761 advice.
>
>
>
>
> ___
> DNSOP mailing list
> DNSOP@ietf.org
> https://www.ietf.org/mailman/listinfo/dnsop
>
>
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Status of "let localhost be localhost"?

2017-08-12 Thread Richard Barnes
On Sat, Aug 12, 2017 at 2:36 PM, Paul Hoffman  wrote:

> On 12 Aug 2017, at 10:14, Ted Lemon wrote:
>
> On 12 Aug 2017, at 13:09, John Levine  wrote:
>>
>>> Right.  That's why it's long past time that we make it clear that
>>> non-broken resolvers at any level will treat localhost as a special
>>> case.  As you may have heard, we are not the Network Police, but we do
>>> publish the occasional document telling people what to do if they want
>>> to interoperate with the rest of the Internet.
>>>
>>
>> With respect, John, the issue I raised here isn't interop.  It's security.
>>
>
> It's security through interop. It's causing systems that want to hope that
> "localhost" has a particular meaning that has security implications to have
> a better chance that their hope is fulfilled.


And it gives systems that want to ensure that they never mistake "localhost"
for something other than loopback a better chance that they won't
break things.

--Richard
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Status of "let localhost be localhost"?

2017-08-02 Thread Richard Barnes
On Wed, Aug 2, 2017 at 4:27 PM, Ted Lemon  wrote:

> On Aug 2, 2017, at 2:02 PM, Robert Edmonds  wrote:
>
> draft-west-let-localhost-be-localhost-03 upgrades the requirements in
> RFC 6761 §6.3 to make them much stricter, for all applications,
> converting SHOULDs to MUSTs, etc. So we're not arguing about whether
> localhost "should" be treated specially, but whether it MUST be treated
> specially, by all applications. Can the W3C not impose stricter
> requirements on browser developers even if 6761 doesn't impose mandatory
> treatment for "localhost"?
>
>
> It should be MUST in both cases.   But writing that in an RFC doesn't make
> it so.   Bear in mind when you look at the W3C document that it is talking
> about what would be ideal, not what is actually present in browsers.
>
> As an app developer worried about security footprint, I would be wiser to
> be cautious and use ::1 or 127.0.0.1, rather than using localhost and
> relying on the name resolution infrastructure.   But the use case that I
> would be most skeptical about is using localhost in a URL.   I think that
> should be MUST NOT.   Apparently there is not wholehearted agreement on
> this topic, however... :)
>

You have this backwards.  Browsers today do take the more cautious, IP-based
approach.  It sucks for developers.  They want to be able to use
"localhost", but in order to do it safely, they will need to hard-wire it
internally (since as you say, writing an RFC doesn't make resolvers
change).  And they don't want to hard-wire unless that's the clear semantic
because standards are what make the web work.

--Richard
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Status of "let localhost be localhost"?

2017-08-02 Thread Richard Barnes
On Wed, Aug 2, 2017 at 9:18 AM, Richard Barnes <r...@ipv.sx> wrote:

>
>
> On Wed, Aug 2, 2017 at 9:10 AM, Ted Lemon <mel...@fugue.com> wrote:
>
>> On Aug 2, 2017, at 9:02 AM, Richard Barnes <r...@ipv.sx> wrote:
>>
>> But of course having IP addresses in URLs is both a PITA for developers
>> and an anti-pattern more generally.
>>
>>
>> While true, I would argue that this is actually a problem.   E.g., I
>> actually literally cannot surf to a link-local URL without having a DNS
>> record for it, because http://[fe80::1806:ec37:3d5f:9580%en0]/ has an
>> interface identifier in it, and modern browsers consider this an
>> anti-pattern, I guess.   And you don't want to put link-local addresses in
>> DNS, even if it made sense to do so, so what is one to do?   I'm not
>> convinced that this anti-pattern is the wrong anti-pattern, but here we
>> have two examples of it being problematic, in the least.
>>
>> If "localhost" were properly defined to be loopback, then applications
>> could just hard-wire resolution, and not depend on the good graces of the
>> platform resolver.  As, for example, Firefox does with ".onion" today:
>>
>>
>> Right.   But there was actually a long discussion on why that's
>> problematic when we were doing the .onion RFC.   The reason is that one
>> can't count on any particular piece of application software correctly
>> interpreting the rightmost label.   We can write RFCs encouraging it, but
>> if I am writing a URL into a piece of HTML, I have no idea whether the
>> thing that interprets the HTML will or will not do the right thing.
>>
>
> The point you're missing here is that the application is both the thing
> relying on the definition of "localhost" and the thing empowered to enforce
> the RFC.  If the application doesn't care whether "localhost" resolves to
> anything special, then it can pass it to the platform and take its
> chances.  If it does, it can hard-wire it to loopback.
>

To address the concerns of HTML authors here:

As with any change to web semantics, this introduces a challenge for web
developers because different versions of browsers will interpret things
differently.  For example, if the W3C Secure Contexts spec changes to treat
"localhost" URLs as secure, and browsers implement that, then if you load "
http://localhost; in a new browser, it will be get access to certain APIs
that it wouldn't on earlier versions.

Two points here:

1. It's up to the browsers to make this transition fail safe, in the sense
that if you write code that depends on "localhost" being secure, then your
code will break if the browser is not going to ensure that "localhost" is
loopback.  This is what the Secure Contexts spec is for, and the gist of my
comments above.

2. Web developers have to deal with this sort of incompatibility all the
time anyway, because their sites are accessed by many different browsers
with different capabilities.

In other words, there's only breakage risk here (not security risk), only
for new things, and not any worse than web developers already have to deal
with.

And based on the feedback from web developers so far, the risk of breakage
is strongly preferred to the pain of hard-coding IP addresses.
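
(To make point 1 concrete for the non-web folks: the check the Secure
Contexts spec describes is roughly the following.  This is a loose sketch of
mine, not the spec's algorithm verbatim, and the "localhost" branch only
holds if the browser guarantees loopback resolution.)

    import ipaddress
    from urllib.parse import urlsplit

    def potentially_trustworthy(url):
        u = urlsplit(url)
        if u.scheme in ("https", "wss", "file"):
            return True
        host = u.hostname or ""
        try:
            # Literal loopback addresses are always trustworthy.
            return ipaddress.ip_address(host).is_loopback
        except ValueError:
            pass
        # Only safe if the browser hard-wires these names to loopback.
        return host == "localhost" or host.endswith(".localhost")

    print(potentially_trustworthy("http://localhost:8080/"))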

--Richard




>
>> We just accept that as a risk with .onion because we don't have a better
>> option, but for localhost we definitely do have a better option.   That's
>> all I'm saying.
>>
>
> Using IP addresses is not a better option.
>
> --Richard
>
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Status of "let localhost be localhost"?

2017-08-02 Thread Richard Barnes
On Wed, Aug 2, 2017 at 9:10 AM, Ted Lemon <mel...@fugue.com> wrote:

> On Aug 2, 2017, at 9:02 AM, Richard Barnes <r...@ipv.sx> wrote:
>
> But of course having IP addresses in URLs is both a PITA for developers
> and an anti-pattern more generally.
>
>
> While true, I would argue that this is actually a problem.   E.g., I
> actually literally cannot surf to a link-local URL without having a DNS
> record for it, because http://[fe80::1806:ec37:3d5f:9580%en0]/ has an
> interface identifier in it, and modern browsers consider this an
> anti-pattern, I guess.   And you don't want to put link-local addresses in
> DNS, even if it made sense to do so, so what is one to do?   I'm not
> convinced that this anti-pattern is the wrong anti-pattern, but here we
> have two examples of it being problematic, in the least.
>
> If "localhost" were properly defined to be loopback, then applications
> could just hard-wire resolution, and not depend on the good graces of the
> platform resolver.  As, for example, Firefox does with ".onion" today:
>
>
> Right.   But there was actually a long discussion on why that's
> problematic when we were doing the .onion RFC.   The reason is that one
> can't count on any particular piece of application software correctly
> interpreting the rightmost label.   We can write RFCs encouraging it, but
> if I am writing a URL into a piece of HTML, I have no idea whether the
> thing that interprets the HTML will or will not do the right thing.
>

The point you're missing here is that the application is both the thing
relying on the definition of "localhost" and the thing empowered to enforce
the RFC.  If the application doesn't care whether "localhost" resolves to
anything special, then it can pass it to the platform and take its
chances.  If it does, it can hard-wire it to loopback.


> We just accept that as a risk with .onion because we don't have a better
> option, but for localhost we definitely do have a better option.   That's
> all I'm saying.
>

Using IP addresses is not a better option.

--Richard
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Status of "let localhost be localhost"?

2017-08-02 Thread Richard Barnes
On Wed, Aug 2, 2017 at 8:48 AM, Ted Lemon <mel...@fugue.com> wrote:

> On Aug 2, 2017, at 8:40 AM, Richard Barnes <r...@ipv.sx> wrote:
>
> The underlying need here is that application software would like to make
> use of the fact that it is connecting to "localhost" (vs. other domain
> names) to make security decisions based on whether traffic is going to
> leave the host.  So if the network layer remaps localhost to something
> other than a loopback interface without telling the applications, then
> you're going to have security problems.
>
> The point of this document is to avoid this disconnect by discouraging the
> sorts of remappings you're talking about.
>
>
> Of course, arguably this is the wrong approach.   Perhaps the right
> approach is to understand that the security characteristics of "localhost"
> are not the ones that we want when our goal is to be sure we are connecting
> to the local host.   Apps don't control the name resolution software that's
> running on the local host.   If they want to be sure they are connecting
> locally, perhaps they should be using ::1 instead of localhost as their
> explicit destination identifier.
>

This is indeed what happens today.

https://w3c.github.io/webappsec-secure-contexts/#is-origin-trustworthy

But of course having IP addresses in URLs is both a PITA for developers and
an anti-pattern more generally.

If "localhost" were properly defined to be loopback, then applications
could just hard-wire resolution, and not depend on the good graces of the
platform resolver.  As, for example, Firefox does with ".onion" today:

http://searchfox.org/mozilla-central/source/netwerk/dns/nsDNSService2.cpp#708

(The "localhost" stuff in that method is unrelated to this discussion BTW;
it relates to a Firefox-internal mapping of other domains to localhost.)
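
To be concrete about what "hard-wire resolution" could look like in an
application, here is a minimal sketch (mine, not Firefox's code), in Python
purely for illustration:

    import socket

    def resolve(host, port):
        # Names under "localhost" are answered locally and never handed to
        # the platform resolver, so they never reach the wire.
        if host == "localhost" or host.endswith(".localhost"):
            return [("127.0.0.1", port), ("::1", port)]
        return [info[4][:2] for info in socket.getaddrinfo(host, port)]

    print(resolve("dev.localhost", 8080))
    # [('127.0.0.1', 8080), ('::1', 8080)] -- regardless of what the
    # platform resolver would have said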
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Status of "let localhost be localhost"?

2017-08-02 Thread Richard Barnes
On Wed, Aug 2, 2017 at 6:39 AM, william manning  wrote:

> localhost is just a string, like www or mail or supralingua.  A DNS
> operator may choose to map any given string to any given IP address.
> restricting ::1 so that it never leaves the host is pretty
> straightforward.  if I map localhost to 3ffe::53:dead:beef and NOT ::1 in
> my systems, why should you care?
>

The underlying need here is that application software would like to make
use of the fact that it is connecting to "localhost" (vs. other domain
names) to make security decisions based on whether traffic is going to
leave the host.  So if the network layer remaps localhost to something
other than a loopback interface without telling the applications, then
you're going to have security problems.

The point of this document is to avoid this disconnect by discouraging the
sorts of remappings you're talking about.
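
Put differently, an application that wants the "traffic never leaves the
host" property can also check the address it actually connected to, rather
than trusting whatever the name resolved to.  A tiny sketch of mine, for
illustration only:

    import ipaddress
    import socket

    def connected_to_loopback(sock):
        # Inspect the peer address of the established connection, not the
        # hostname we asked the resolver about.
        addr = sock.getpeername()[0]
        return ipaddress.ip_address(addr.split("%")[0]).is_loopback

    s = socket.create_connection(("localhost", 8080))
    assert connected_to_loopback(s), "localhost was remapped off-host"

That check works whatever the resolver does, but it is exactly the sort of
per-application belt-and-braces that a clear definition of "localhost" would
make unnecessary.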

--Richard



> if you are concerned that completion logic is broken in resolvers and the
> string "localhost" is not appended to the domain, then you really are
> asking for the root servers to backstop the query with an entry for
> localhost.  and for the first 20 years of the DNS, there was an entry for
> localhost in many of the root servers.  it was phased out for several
> reasons, two key ones were DNSSEC and the fact that most resolvers had
> corrected their broken completion logic.
> There is no good reason to bring it back for special processing.  It's
> just a string.
>
> /Wm
>
> On Tue, Aug 1, 2017 at 11:59 AM, Jacob Hoffman-Andrews 
> wrote:
>
>> On 08/01/2017 03:48 AM, Mike West wrote:
>> > The only open issue I know of is some discussion in the thread at
>> > https://www.ietf.org/mail-archive/web/dnsop/current/msg18690.html that
>> I
>> > need help synthesizing into the draft. I don't know enough about the
>> > subtleties here to have a strong opinion, and I'm happy to accept the
>> > consensus of the group.
>>
>> Reading back through this thread, it seems like the concerns were about
>> how to represent the  ".localhost" TLD in the root zone, or how to use
>> DNSSEC to express that the root zone will not speak for ".localhost".
>> However, I think we don't need either. This draft attempts to codify the
>> idea that queries for "localhost" or "foo.localhost" should never leave
>> the local system, and so it doesn't matter what the root zone says about
>> ".localhost".
>>
>> I would even take it a step further: It would be a mistake to add any
>> records for ".localhost" to the root zone, because it would mask
>> implementation errors. If a local resolver accidentally allows a query
>> for "foo.localhost" to hit the wire, it should result in an error.
>>
>> IMHO, the document is good as it stands.
>>
>> ___
>> DNSOP mailing list
>> DNSOP@ietf.org
>> https://www.ietf.org/mailman/listinfo/dnsop
>>
>
>
> ___
> DNSOP mailing list
> DNSOP@ietf.org
> https://www.ietf.org/mailman/listinfo/dnsop
>
>
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Last Call: draft-ietf-dnsop-onion-tld-00.txt (The .onion Special-Use Domain Name) to Proposed Standard

2015-07-17 Thread Richard Barnes
On Fri, Jul 17, 2015 at 4:20 PM, Eliot Lear l...@cisco.com wrote:
 I have no particular objection to the concept here, but I do have a
 question about one sentence in the draft.  Section 1 states:
Like Top-Level Domain Names, .onion addresses can have an arbitrary
number of subdomain components.  This information is not meaningful
to the Tor protocol, but can be used in application protocols like
HTTP [RFC7230].

 I honestly don't understand what is being stated here, or why a claim is
 made about HTTP at all in this document.  Are we talking about the
 common practice of www.example.com == example.com?  And what
 significance does that last phrase have to the document?

I made a comment on this to the authors earlier, and they decided to
leave it as-is :)

The idea is that TOR routing will only use the first label after
.onion, but if you're using the .onion name in an application, that
application might use the whole name.  For example, if you put
"http://mail.example.onion/", TOR will route on "example.onion", but
the HTTP Host header might be "mail.example.onion".
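
To make that concrete with an invented example, the request that ends up
inside the Tor circuit would look like:

    GET / HTTP/1.1
    Host: mail.example.onion

so a single onion service ("example.onion") can still do name-based virtual
hosting on the Host header, even though the routing layer ignores the extra
label.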

--Richard



 Eliot



 ___
 DNSOP mailing list
 DNSOP@ietf.org
 https://www.ietf.org/mailman/listinfo/dnsop


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Last Call: draft-ietf-dnsop-onion-tld-00.txt (The .onion Special-Use Domain Name) to Proposed Standard

2015-07-16 Thread Richard Barnes
On Thu, Jul 16, 2015 at 12:44 AM, Joe Hildebrand hil...@cursive.net wrote:
 On 15 Jul 2015, at 5:37, David Conrad wrote:

 I try to be pragmatic. Given I do not believe that refusing to put ONION
 in the special names registry will stop the use of .ONION, the size of the
 installed base of TOR implementations, and the implications of the use of
  that string in certificates, I support moving ONION to the special names
 registry.  I really (really) wish there was more concrete, objective metrics
 (e.g., size of installed base or some such), but my gut feeling is that TOR
 is pretty well deployed and given the CAB Forum stuff, I see no particular
 reason to delay (after all, it's not like the deployed base of TOR is likely
 to get smaller).


 I don't see any mention of the CAB Forum stuff in the draft.  Has anyone
 done the analysis to see if CAB Forum members really will issue certs to
 .onion addresses if we do this?  Do they issue certs for .example or .local
 today?

There are at least a few CAs issuing for .onion right now, under the
exceptions that are going to expire in a few months.  So I assume that
these CAs will be interested in issuing if policy allows.

My understanding is that the basic requirement that CABF has is that a
name either be clearly a valid DNS name or clearly *not* a valid DNS
name.  (And in either case, that the applicant be able to demonstrate
control.)  Right now, that's ambiguous.  Adding .onion to the RFC 6761
registry would remove the ambiguity, since it would officially mark
names under .onion as not DNS names.

--Richard



 If certificate issuance is one of the key drivers for this work, there needs
 to be information in the draft that shows that this approach will work.

 --
 Joe Hildebrand


 ___
 DNSOP mailing list
 DNSOP@ietf.org
 https://www.ietf.org/mailman/listinfo/dnsop

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Last Call: draft-ietf-dnsop-onion-tld-00.txt (The .onion Special-Use Domain Name) to Proposed Standard

2015-07-15 Thread Richard Barnes
On Wed, Jul 15, 2015 at 5:52 PM, Hugo Maxwell Connery h...@env.dtu.dk wrote:
 Or to re-quote Paul Vixie:

 "what the internet should be doing is defining escape mechanisms for
 non-internet systems, rather than saying we are the only thing you can
 use"

 RFC 6761 is that mechanism for DNS.

Nice summary.

I have read this document, and sent comments on earlier drafts.  I
think the current version clearly expresses the requirements on DNS
actors to make .onion labels safe to use in DNS-like slots (e.g.,
URLs).  Especially given that there are a good number of sites already
using URLs with .onion names, and the PKI requirement for the status
of these names to be clarified, I strongly support the publication of
this document.

--Richard



 /Hugo
 
 From: DNSOP [dnsop-boun...@ietf.org] on behalf of hellekin [helle...@gnu.org]
 Sent: Wednesday, 15 July 2015 17:02
 To: dnsop@ietf.org
 Subject: Re: [DNSOP] Last Call: draft-ietf-dnsop-onion-tld-00.txt (The 
 .onion Special-Use Domain Name) to Proposed Standard

 On 07/14/2015 11:37 PM, David Conrad wrote:

 To put it bluntly, from a certain perspective, 6762 and
 dnsop-onion are essentially about the same thing: they are
 formalizing squatting on namespace (by Apple in the first
 instance and by TOR in the second).


 This is blunt in more than one aspect. That you consider squatting as a
 negative is insulting for those people who actually need to rely on
 squatting not to be excluded from society.

 But the argument that this is about, correct my paraphrase if I'm wrong,
 taking over by force part of the namespace is in my opinion misguided.

 The Domain Name System is *one way* of managing *a* global namespace.
 That it is the canonical way of naming things chosen for the Internet
 does not make it the only way. Special-Use Domain Names
 exemplify this point, and particularly P2PNames such as .onion
 demonstrate the viability of other techniques than the hierarchical tree
 of DNS to manage global namespaces.

 The objective of this registration is convergent with the idea that the
 DNS is the canonical global namespace of the Internet. Indeed .onion can
 do without caring about the DNS, but this is not the point. The point is
 to recognize the variety of techniques within the scope of DNS so that
 future implementations can rely on the DNS as a correct source for
 global information about namespaces.

 I regret not to have mentioned this before, and hope that it frames the
 problematic beyond territorial claims, operational issues, and security
 issues.

 ==
 hk


 ___
 DNSOP mailing list
 DNSOP@ietf.org
 https://www.ietf.org/mailman/listinfo/dnsop

 ___
 DNSOP mailing list
 DNSOP@ietf.org
 https://www.ietf.org/mailman/listinfo/dnsop

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] More after onion? was Re: Some distinctions and a request

2015-07-01 Thread Richard Barnes
On Wed, Jul 1, 2015 at 2:23 PM, Warren Kumari war...@kumari.net wrote:
 On Wed, Jul 1, 2015 at 10:08 AM, Suzanne Woolf suzworldw...@gmail.com wrote:
 Ed,

 First-- apologies for the misunderstanding.

 On Jul 1, 2015, at 9:53 AM, Edward Lewis edward.le...@icann.org wrote:

 Trying to be more clear, I have in the past imagined that today someone is
 inventing a new communications technology, in 6 months will need to cobble
 an identifier space and in 2 years the IETF-connected crowd detects
 significant deployment of this and needs to decide whether to register a
 TLD to avoid name collisions.  I've been told that this wouldn't happen
 because the IETF will have rules - which I am skeptical would prevent
 the situation from happening.

 I don't think we have rules or even guidelines now that have any chance of 
 preventing it.

 I agree we'll never prevent it completely; it's the nature of the DNS and 
 the internet that people can do things with names and they don't have to ask 
 the IETF first.

 But I don't think it's impossible that we'll be able to provide guidance, 
 such that developers who follow it are reasonably sure of avoiding the 
 various types of collisions and ambiguities we're concerned about-- and such 
  that there's a clear basis for saying "You're doing something outside of the
  guidance we can provide about how names work in the internet, you're on your
  own."


 Warren points at ALT-TLD

 Yup, we will not be able to prevent people from using an identifier
 space that looks like a DNS name not rooted in the DNS, but we *can*
 provide them with guidance and a safe place to do this sort of thing,
 namely under the .alt TLD.



 To underscore - I am not against the innovation.  I am urging that the
 processes put in place are future proof by being reactionary - reacting
 to the new names, not trying to fend off the situation.  I.e., in
 agreement with the words below trying to apply RFC 6761 and finding that
 it remains subjective.

 This supports the initial suggestion that we need to get serious about a 
 6761bis, am I correct?

 I believe so, but instead of simply raising the bar to get a special
 use name (which will simply result in people squatting more), I think
 we need to provide solid, usable advice and an option for people...

+many to what Warren says.

We do our best work when we do engineering, not rule-making.  Let's
engineer a solution here that's more appealing than squatting.  For my
money, alt-TLD looks about right.

--Richard


 W




 thanks,
 Suzanne


 On 7/1/15, 9:05, Suzanne Woolf suzworldw...@gmail.com wrote:

 (no hats, for the moment)

 Ed,

 It seems to me that this is exactly the issue: we've already had multiple
 drafts requesting new entries in the special use names registry, and
 expect more. Your note sounds as if you're fairly sanguine about a
 stream of unpredictable requests; however, based on what we've seen so
 far, I admit I'm not.

 I'm still re-immersing in DNSOP after being entirely absorbed in other
 work the last couple of weeks, but I want to support us continuing this
 discussion, because it seems to me that the point Andrew started the
 thread to make is valid: we don't have a coherent view of how the
 relevant namespaces (based on DNS protocol, compatible with DNS protocol
 but intended for different protocol use, or otherwise) interact.

 The painful immediate consequence is that we're trying to apply RFC 6761
 and finding that it remains subjective to do so, with an element of
 beauty contest in the deliberations that means outcomes are
 unpredictable. There's no meaningful guidance we can give developers on
 what names it's safe for them to use in new protocols, or even for
 specific uses in-protocol, and as Andrew and others have pointed out,
 there may even be ambiguity about what our own registries mean in
 protocol or operational terms.

 Longer term, this lack of clarity has implications for both architecture
 and policy for the DNS, including our ability to support innovation and
 to coordinate with other groups in the IETF and beyond.


 best,
 Suzanne


 On Jul 1, 2015, at 8:26 AM, Edward Lewis edward.le...@icann.org wrote:

 On 7/1/15, 1:47, DNSOP on behalf of str4d dnsop-boun...@ietf.org on
 behalf of st...@i2pmail.org wrote:
 .onion and .i2p (and to my knowledge, the other proposed P2P-Names
 TLDs too) have to conform to DNS rules in order to be usable in legacy
 applications that expect domain names.

 I'd been told that onion. was a one-time thing, that in the future
 conflicts wouldn't happen.  What I read in the quoted message is that
 onion.'s request isn't a one-time thing but a sign of things to come.

 I'm sympathetic to taking the path of least resistance - e.g., use names
 that syntactically are DNS names - instead of building a separate
 application base.  I expect innovation to be free-form and thus a stream
 of unpredictable requests to reserve names for special purposes, including
 DNS-like names.

 What DNSOP 

Re: [DNSOP] More after onion? was Re: Some distinctions and a request

2015-07-01 Thread Richard Barnes
On Wed, Jul 1, 2015 at 2:54 PM, Edward Lewis edward.le...@icann.org wrote:
 On 7/1/15, 14:26, Richard Barnes r...@ipv.sx wrote:

We do our best work when we do engineering, not rule-making.  Let's
engineer a solution here that's more appealing than squatting.  For my
money, alt-TLD looks about right.

 How does that help this:

On 7/1/15, 1:47, st...@i2pmail.org wrote:
 .onion and .i2p (and to my knowledge, the other proposed P2P-Names
  TLDs too) have to conform to DNS rules in order to be usable in legacy
  applications that expect domain names.

  Having an alt-TLD is fine.  But what if names are proposed, experimented
 and deployed outside the sphere of influence of the IETF and/or working
 group?  Writing this as someone who is unfamiliar with other proposed
 P2P-Names efforts and whether they want to engage with standards bodies
 before deploying.  I've gotten the impression that members of those
 efforts dislike standards processes - I may be wrong but that's the
 impression I've gotten from the discussion on this list.

The thing that got .onion to the IETF is that they needed to be
official.  (So that they could get certificates for .onion names.)
Until they get an RFC 6761 registration, they're in a grey zone of
being neither officially DNS names nor officially not DNS names.

ISTM that the benefit of .alt is that it creates a
clearly-not-normal-DNS zone.  We would have to check with the CAs, but
I think that that would do a lot to prevent issues like what .onion
ran into.  My hope would be that that benefit would make it appealing
enough for at least some of these other pseudo-TLDs to tolerate the
switching cost.

--Richard


 ___
 DNSOP mailing list
 DNSOP@ietf.org
 https://www.ietf.org/mailman/listinfo/dnsop


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Adoption and Working Group Last Call for draft-appelbaum-dnsop-onion-tld

2015-05-23 Thread Richard Barnes
On Thu, May 21, 2015 at 3:20 PM, John R Levine jo...@taugh.com wrote:

 It would be a shame for them to nitpick the rules because special purpose
 namespace != TLD?


 Is the CAB really likely to waste its time on that?  I don't know them, I
 have no idea.  I'd hope they had better things to worry about if it's
 abundantly clear whether we've declared .onion to be special.


Speaking with my CAB Forum member hat on, I would be happy to make the
argument there that they should allow CAs to issue for special purpose
names, as long as they follow validation procedures appropriate to each
special-purpose namespace.  (Though clearly, I can't guarantee an outcome.)

The critical thing is having a clear designation, rather than the ambiguity
we have now.

--Richard



 Regards,
 John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY
 Please consider the environment before reading this e-mail.


 ___
 DNSOP mailing list
 DNSOP@ietf.org
 https://www.ietf.org/mailman/listinfo/dnsop

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Adoption and Working Group Last Call for draft-appelbaum-dnsop-onion-tld

2015-05-23 Thread Richard Barnes
tl;dr: Ship it.

On adoption: I agree that we should adopt this document.

On WGLC: I have reviewed this document, and I think it's generally in fine
shape to send to the IESG.  I have included a few comments below, but
they're mostly editorial.  The only issue of any substance is that I would
prefer some of the SHOULDs be MUSTs, for extra clarity.

Thanks to the WG for the good discussion, and to the chairs for acting with
lightning speed in IETF terms.

--Richard



   This information is not meaningful to the Tor
   protocol, but can be used in application protocols like HTTP
   [RFC7230].


It took me a second to process what this meant.  Would the following
phrasing be correct?


   Labels beyond the first label under .onion are not used by
   the Tor routing, so for example, foo.example.onion will route
   to (and authenticate) the same Tor service as example.onion.
   However, additional labels might be used by application services
   to distinguish different sub-services accessible via the same Tor
   service.  In the case of HTTP, for example, the full name, with
   all labels, will be included in the Host header, and can be used
   to identify HTTP virtual hosts on a common server.


Might not be necessary to clarify this much, but like I said, it wasn't
obvious to me what the sub-label handling would be.


--


Note that this draft was preceded by
[I-D.grothoff-iesg-special-use-p2p-names] ...

This paragraph can probably be deleted in the final version.


--


"The .onion Special-Use TLD" - "The .onion Special-Use Domain Name"

(For consistency with RFC 6761)


--



   ... or using a proxy (e.g., SOCKS [RFC1928])
   to do so.  Applications that do not implement the Tor protocol
   SHOULD generate an error upon the use of .onion, and SHOULD NOT
   perform a DNS lookup.


It might be worth noting that in the scope of the last sentence,
"Applications" includes proxies.  That is, your proxy shouldn't generate a
DNS request if it gets a .onion request either.  I would just add
"(including proxies)" between "protocol" and "SHOULD".


--



   3.  Name Resolution APIs and Libraries: Resolvers that implement the
   Tor protocol MUST either respond to requests for .onion names by
   resolving them (see [tor-rendezvous]) or by responding with
   NXDOMAIN.  Other resolvers SHOULD respond with NXDOMAIN.


This seems a little backward.  It seems like the general requirement is
that resolvers MUST either resolve over Tor or return NXDOMAIN.  If you
don't support Tor, you just fall in the latter bucket.  Don't be afraid to
MUST DNS servers, here or in the subsequent points.
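
(In stub-resolver terms, the behaviour I read this as requiring is roughly
the following -- a sketch of mine, using Python's standard library purely for
illustration:)

    import socket

    def lookup(name, port=0):
        n = name.rstrip(".").lower()
        if n == "onion" or n.endswith(".onion"):
            # Not Tor-aware: act as if the name does not exist, and never
            # send the query out to the DNS.
            raise socket.gaierror(socket.EAI_NONAME, "NXDOMAIN: .onion is special")
        return socket.getaddrinfo(name, port)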


--



On Wed, May 20, 2015 at 1:12 PM, Tim Wicinski tjw.i...@gmail.com wrote:


 Greetings,

 From the outcome of the Interim meeting, and discussion on the list, this
 draft appears to both have strong support and address the problem space of
 RFC 6761.  The authors have requested a Call for Adoption. The chairs want
 to move forward with this draft if it has consensus support.

 It also seems that the document is relatively mature in terms of what
 people need to know in order to decide whether to support advancing it. As
 we have done with other drafts where a lengthy revision process didn’t seem
 necessary to reach a draft we could advance further, and in consideration
 of the timeliness constraint raised by the authors, the chairs are going to
 combine the adopting of the document with the Working Group Last Call.

 The draft can be found here:

 https://datatracker.ietf.org/doc/draft-appelbaum-dnsop-onion-tld/

 https://tools.ietf.org/html/draft-appelbaum-dnsop-onion-tld-01

 Please review the draft and offer relevant comments. In particular, we’ve
 heard reservations expressed about the precedent that might be set by
 advancing this document, and about the level of specification of the TOR
 protocols that we might like to see included in the descriptions of the
 expected “special” treatment of .onion names in the field. So if people
 feel strongly about possible changes, we need to know.

 Because of the compression of adoption and WGLC, we're making this a three
 week window.  The working group last call will end on Wednesday June 10th,
 2015.

 thanks
 tim

 ___
 DNSOP mailing list
 DNSOP@ietf.org
 https://www.ietf.org/mailman/listinfo/dnsop

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] A comparison of IANA Considerations for .onion

2015-05-12 Thread Richard Barnes
On Tue, May 12, 2015 at 9:34 AM, Tom Ritter t...@ritter.vg wrote:

 On 12 May 2015 at 07:23, Andrew Sullivan a...@anvilwalrusden.com wrote:
  If the Tor Browser has its own resolver that is used just by it and
  that is not a separate service installed with the expectation that
  other clients will use it, then it seems to me the built-in Tor
  resolver is part of the application, even if it happens to be built
  out of components that _could_ be a name resolution API or library in
  the general case.  It is definitely my impression that (for instance)
  the Onion Browser installed on my iphone doesn't provide services to
  other applications, and has its very own resolution system as a
  result.  That suggests to me that there's more than one way to do
  this, and one of those ways is for the application to be special.
  It's not the only way, though, I agree.

 Like you say there are a multitude of ways to do it, and there are
 examples of most of them:

 The tor daemon (often called little-t tor or just tor) is a daemon
 running on the OS that exposes a SOCKS service for anyone who speaks
 SOCKS to connect to. You can point an unmodified browser at it, and
 access .onion services. [0]  This is also how OrBot works on Android!

 You can configure little-t tor to act as a DNS resolver, point
  /etc/resolv.conf at it, and have all your DNS queries go through tor,
 but not any of your actual traffic.[1][2]

 You can use iptables and transparently proxy non-SOCKS traffic through
 tor as either the main mechanism for internet access or as a backup to
 prevent anything from not going through tor. TAILS and other anonymous
 LiveDVD systems do this, and OrBot on Android supports this mechanism
 also, if you have root access.

 You can use TorBrowser, which bundles little-t tor, uses the SOCKS
 access method, and requires no configuration to access .onion
 services.

 You can use a SOCKS aware program to access .onion services (or the
 Internet) using TorBrowser's bundled tor, which is how Pond works.
 Shutting down TorBrowser closes the connection to .onion services, and
 Pond is stranded.

 You can create a bundle, like Onion Browser on iPhone, which does
 _not_ allow other applications to make use of the bundled daemon.


Thanks for enumerating the possibilities :)  I think those are all
consistent with the guidance in draft-appelbaum-dnsop-onion-tld, yes?  Most
of them handle .onion names properly, and the DNS resolution
fails (correctly).
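
(As an aside, the SOCKS path in that list is the one most applications end
up using.  A minimal sketch, assuming the third-party PySocks module and a
local tor daemon with its SOCKS listener on the default port 9050:)

    import socks  # PySocks, third-party

    s = socks.socksocket()
    # rdns=True hands the hostname to the proxy, so the .onion name is
    # resolved inside Tor and never touches the local DNS.
    s.set_proxy(socks.SOCKS5, "127.0.0.1", 9050, rdns=True)
    s.connect(("facebookcorewwwi.onion", 80))
    s.sendall(b"GET / HTTP/1.1\r\nHost: facebookcorewwwi.onion\r\n"
              b"Connection: close\r\n\r\n")
    print(s.recv(200))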

--Richard




 -tom


 [0] As mentioned, this is a wholly insecure way to access sites
 anonymously, as there are ways to a) get your real IP address b) link
 you between TLDs c) correlate your browsing sessions and d)
 fingerprint you uniquely.

 [1] This is kind of a nifty way to get DNS privacy.

 [2] If you attempt to resolve a .onion this way (as opposed to letting
 SOCKS resolve it), this is the response:
 dig @127.0.0.1 -p 5353 facebookcorewwwi.onion

 ; <<>> DiG 9.10.1-P1 <<>> @127.0.0.1 -p 5353 facebookcorewwwi.onion
 ; (1 server found)
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 41248
 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

 ;; QUESTION SECTION:
 ;facebookcorewwwi.onion. IN A

 ;; Query time: 0 msec
 ;; SERVER: 127.0.0.1#5353(127.0.0.1)
 ;; WHEN: Tue May 12 09:31:31 EDT 2015
 ;; MSG SIZE  rcvd: 40

 ___
 DNSOP mailing list
 DNSOP@ietf.org
 https://www.ietf.org/mailman/listinfo/dnsop

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] A comparison of IANA Considerations for .onion

2015-05-12 Thread Richard Barnes
On Tue, May 12, 2015 at 9:17 AM, hellekin helle...@gnu.org wrote:

 On 05/12/2015 09:23 AM, Andrew Sullivan wrote:
 
  Is your complaint that appelbaum-dnsop-onion reads to you as though
  such special applications are the only way to do this?  If so, then
  you're right that it needs adjustment.
 
 *** Yes, my concern is that we can get consensus on how to interpret
  what an application means, and what name resolution APIs and
 libraries mean in a consistent manner in the context of RFC6761, as it
 can lead to wide differences in the resulting rules for the readers.


Could you clarify what differences you see arising?

The difference between application and name resolution seems pretty
clear to me.  A name resolution library looks up information related to a
name; an application does more.  It's a functional distinction more than a
distinction between two pieces of software.  (As Andrew notes, they can be
combined.)

In any case, this doesn't seem like a hugely critical distinction, since
the requirements on both are pretty much the same -- do the special Tor
stuff with .onion domains or fail.

--Richard




 ==
 hk

 P.S.: your previous response was instrumental in my understanding of the
 difference of views on MUST and SHOULD, and on the point developed in
 this message, and I thank you for that.

 ___
 DNSOP mailing list
 DNSOP@ietf.org
 https://www.ietf.org/mailman/listinfo/dnsop

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] A comparison of IANA Considerations for .onion

2015-05-11 Thread Richard Barnes
On Mon, May 11, 2015 at 7:21 PM, Alec Muffett al...@fb.com wrote:

 Hi Hellekin!

  Since Alec Muffett seems to have better things to do

 I'm sorry if you've been waiting for my input - I am not the primary
 author of the document; Jacob Appelbaum's name is in the document's
 title for a good reason, and my involvement has been one of tuning a
 few paragraphs, providing some wordsmithing, and cheerleading loudly.

 Jake is - as I am sure you are aware - working for Tor, and is a busy
 guy (last I heard was off in China doing something amazing) hence I
  mailed out the latest draft when there was an extended (and ongoing)
 lull in the desire for anyone on our editorial maillist to tweak it.

 I'll do my best to respond to your points, albeit Jake and other
 (wiser?) heads may have additional insights that I may miss by dint of
 this being dropped on me at 10pm the night before the big phone call
 to discuss such matters.

 On that basis, you'll also please forgive me excising brevity.


   1. the users considerations pretend that users must use onion-aware
  software in order to access Onionspace, but I assert that you and I
  can use an ordinary Web browser, type in a .onion address, and
  access the requested service.


 If you are consciously running TAILS, I suppose so [ED: TAILS is a
 Linux distribution which funnels almost all communication through Tor]
 albeit that Tor would likely recommend against using a vanilla browser
 in default configuration to access any part of Tor, let alone .onion
 addresses, because risk of deanonymisation is too high with normal
 browsers.  Hence the imprecations in favour of informed users,
 reflecting Tor user-policy.

 If you are not talking about running TAILS (or similar) then I must be
  misapprehending what you mean by "can use an ordinary Web browser,
  type in a .onion address, and access the requested service" because
 your average browser - say Chrome - cannot access .onion without Tor
 software help and some fiddly configuration.


   2. more importantly, where P2PNames imposes NXDOMAIN response to
  authoritative name servers, OnionTLD makes it a soft requirement,
  thus leaving the possibility for name servers to hijack Onionspace
  without user consent nor awareness.


 Yeah, we tossed that one back and forth a bit, and eventually if
 slightly grudgingly went with the SHOULD on the basis that we wanted
 the draft to be adopted more than we wanted to be thinking wishfully.


   3. this error is confirmed for DNS server operators, where OnionTLD
  makes it a soft requirement not to override responses.


 This might be an issue so long as your threat model includes blindly
 unaware users who are typing .onion addresses into non-Tor-capable
 browsers in the (presumably first-time) expectation that it will work,
 and using resolver libraries which don't honour the imprecation that:

 [draft-appelbaum-dnsop-onion-tld-01]
  Resolvers that implement the Tor protocol MUST either respond to
 requests for .onion names by resolving them (see [tor-rendezvous]
 [ED: A TOR-INTERNAL THING]) or by responding with NXDOMAIN.

 ...on a network infrastructure which is thoroughly pwned by a capable
 bad actor.  Not totally impossible, I'll grant you, but threat models
 which start from the assumption of a wholly ignorant userbase are
 (joking aside) pretty flawed.

 Continuing...

 [DELETIA]


  Since there is no central authority necessary or possible for
  assigning .onion names, and those names correspond to cryptographic
  keys, users need to be aware that they do not belong to regular DNS,
  but are still global in their scope.
  
  OnionTLD contradicts this: Users: human users are expected to
  recognize .onion names as having different security properties, and
  also being only available through software that is aware of onion
  addresses.

 Please explain the contradiction, I fail to see it?

 [DELETIA]


  This is the main conflicting point: OnionTLD does not recognize
  .onion as special and allows Authoritative DNS servers to respond
  for .onion (SHOULD).  From the P2PNames perspective, this is
  unacceptable, and a complete failure to address the privacy concerns
  set forth by the draft.  If OnionTLD would be accepted in that form,
  it would allow the root servers to capture leaked onion requests AND
  RESPOND POSITIVELY FOR THEM !  *

 There are entire papers about that.  Thank you for raising that point,
 I wanted an excuse to post this URL to the DNSOP list:

 https://petsymposium.org/2014/papers/Thomas.pdf

 Measuring the Leakage of Onion at the Root - A measurement of
 Tor’s .onion pseudo-top-level domain in the global domain name
 system

 ...to help drive home the need for making .onion special.


To save some time, the headline number is: The rate of .onion requests
hitting the A and J roots is ~200k requests per day and growing.

--Richard





 As before, ignoring the potential for privacy-leakage of which site
 you are