Re: [DNSOP] Registry of non-service _prefix names?
On 11/16/2015 12:39 AM, Ray Bellis wrote:
> From my previous recollection of this, ISTR there was a suggestion that
> your draft only directly register "single-label" names, but with "_tcp",
> "_udp" et al listed in the registry as a link to RFC 6335?

(Oops. Missed the need to respond to this.)

It's taken a while, but I've finally come around to thinking that, yes, the public registry needs to cover only the 'global' part of any of these namespaces. I originally attempted to cover a complete hierarchy of underscore names, but the term "tar baby" isn't sufficient to impart how intractable that became.

So in DNS terms, that's the 'highest' underscore name. Anything below it is scoped to be invisible to the public concern.

d/

-- 
Dave Crocker
Brandenburg InternetWorking
bbiw.net

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
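A small illustration of the "highest underscore name" distinction Dave describes, as a sketch: in a name like `_sip._tcp.example.com`, the underscore label closest to the zone apex (`_tcp` here) would be the globally registered part, while labels below it (`_sip`) are scoped to that context. The function below is purely illustrative, not from any draft.

```python
def highest_underscore_label(name):
    """Return the right-most (closest to the root) label that begins
    with '_', or None if the name contains no underscore labels."""
    labels = name.rstrip(".").split(".")
    for label in reversed(labels):   # walk from the root end downward
        if label.startswith("_"):
            return label
    return None

print(highest_underscore_label("_sip._tcp.example.com"))  # _tcp
print(highest_underscore_label("www.example.com"))        # None
```

Under this reading, only `_tcp`, `_udp`, and similar top-level underscore labels would appear in the public registry; `_sip` and friends stay scoped beneath them.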
[DNSOP] a new draft?? Bounding of authentication and authorization
Dear All,

Before writing a draft (since I have produced some unused drafts so far and do not want to repeat the same mistake), I would like to ask the WG's opinion on an idea for extending DANE so that it can be used for use cases beyond email and the web, especially for certificate-based authentication (TLS-based authentication) of single devices in a network, where DANE fully replaces the PKI model and the first A of the AAA model. In other words, the extension focuses on returning to the verifier node not only the TLSA record but also references to policy templates held in a resource policy store.

Let me give an example to be clearer. (I did not use a real example, to keep things simple and avoid network-specific concepts.)

In Alice's enterprise, instead of the AAA method, assume TLS-based authentication, where authentication is based on device certificates rather than on username and password. DANE is therefore used to authenticate Alice's laptop. Alice needs to access business servers B and C in her enterprise network after her laptop has been authenticated. For authentication, Alice generates a public/private key pair herself and self-signs a certificate. She then goes to Bob, the administrator of her enterprise network, who generates the TLSA record for her certificate and stores it in the DNS server, so that business server C can later verify Alice by querying the DNS server. Bob also creates two new templates in the resource policy server, called AliceB and AliceC. AliceB grants authorized access to business server B, and AliceC does the same for business server C.

To bind Alice's templates to her authentication information (her certificate), Bob stores the reference numbers of Alice's policy templates in a new resource record, called the Parent Policy Indices (PP RR), on the DNS server. (This is the DNS extension that binds authentication to authorization.)

Now Alice tries to access business server C. Server C asks for Alice's certificate to establish a secure TLS channel, and Alice's laptop submits it. To verify the certificate, server C queries the DNS server for Alice's TLSA record. The DNS server looks up the FQDN of Alice's laptop, included in server C's query, and returns the laptop's TLSA record. Server C also needs Alice's authorization information. The extension is that server C retrieves, along with the TLSA record, the reference numbers that Bob previously stored in the PP RR. It can then query the resource policy store using those reference numbers, which point to Alice's templates, and retrieve Alice's authorization information to confirm that she is authorized to use business server C. With this simple resource record, we add a binding between authentication and authorization.

Some likely questions:

1. Why not store the whole authorization information on the DNS server instead of only a reference number?

Answer: because most network infrastructures already have resource policy servers that store the authorization information, and it would be costly to migrate it all to a new place unless there were an easy conversion path. Further, since DNS is usually available to its whole local network, or even globally, for security reasons not all nodes should be able to read the authorization information. Exposing it might leak critical information about the infrastructure and resources in the network, which is both a security and a privacy issue. This is why only a reference number and a small piece of human-readable text are enough.

2. Why not store the TLSA record in the resource policy store as well, so that business server C (as in my example) queries the resource policy store directly?

Doing so would limit the DANE use case and rule out some other use cases we have in mind for multi-tenancy, where each tenant can create subdomains in its own zone and re-assign these policies to its subdomains. A resource policy server usually cannot be shared with a tenant, but DNS delegation allows a tenant to access only its own zone and update its own records. A tenant can therefore create a subdomain in its own zone and assign a part of its resources to third parties. In my example, Alice can allow David, her employee, to access business server B but not business server C, by updating the DNS server with the TLSA record of David's laptop's certificate and adding the business server B template (AliceB) to the PP RR. Alice can thus authorize someone else to access a part of her authorized resources in the network without asking Bob again. The
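The two-step lookup the proposal describes can be sketched as a toy simulation. Everything here is an assumption drawn from the message above: the "PP" record type does not exist, the record contents, names (AliceB, AliceC), digest, and policy-store layout are all invented for illustration.

```python
# Toy stand-in for the DNS server: FQDN -> resource records.
# The "PP" (Parent Policy Indices) record type is the proposal's
# hypothetical extension; the TLSA digest below is a placeholder.
DNS_ZONE = {
    "laptop.alice.example.com": {
        "TLSA": "3 1 1 8fd0a2b9",      # certificate-association data
        "PP":   ["AliceB", "AliceC"],  # references into the policy store
    },
}

# Toy stand-in for the resource policy store: reference -> granted resource.
POLICY_STORE = {
    "AliceB": {"allow": "business-server-B"},
    "AliceC": {"allow": "business-server-C"},
}

def authorize(client_fqdn, server_name):
    """Return True if the client both has a TLSA record (authentication
    data exists) and one of its PP references grants server_name."""
    rrset = DNS_ZONE.get(client_fqdn)
    if rrset is None or "TLSA" not in rrset:
        return False                   # unknown device: cannot authenticate
    for ref in rrset.get("PP", []):
        if POLICY_STORE.get(ref, {}).get("allow") == server_name:
            return True                # authorization bound via the PP RR
    return False

print(authorize("laptop.alice.example.com", "business-server-C"))  # True
print(authorize("laptop.alice.example.com", "business-server-D"))  # False
```

Note how the DNS side carries only opaque references, matching the answer to question 1: the policy content itself never leaves the policy store.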
Re: [DNSOP] Some thoughts on special-use names, from an application standpoint
Mark,

> What is the actual harm, discounting aesthetics?

For one thing, names not supported by the underlying infrastructure will _always_ leak. In the bad old days, when an application got a string ending in .UUCP, .BITNET, .CSNET, etc., it had to know that those strings had to be treated differently. Various hacked libraries did different things to deal with those endings, and usually imperfectly. Worse, the universe of endings was local-policy specific but the use of those names was global in scope, so there was a never-ending series of issues where a string would work in one locale but not in another, resulting in user complaints, general confusion, and much gnashing of teeth.

After a number of years, we (re)learned that maybe using the name of something to distinguish its underlying infrastructure requirements wasn't the best idea. .LOCAL, .ONION, and 6761 in general allow us to repeat history yet again, since we seem doomed to be unable to remember earlier lessons.

Regards,
-drc
Re: [DNSOP] comments on draft-bortzmeyer-dnsop-nxdomain-cut-00
On Tue, Nov 24, 2015 at 05:39:04AM -0500, Shumon Huque wrote a message of 234 lines which said:

>>> That was exactly my point, and in that sense I'd say "SHOULD
>>> delete" is redundant (and possibly imposes unnecessary
>>> restrictions on implementations).
>
> Yes, I agree.

The current description is a bit too implementation-specific. My concern is that some implementations may have a cache composed of a tree structure *plus* a hashed index for speed. When receiving a query whose answer is in the cache, such an implementation may not perform a "downward search" in the cache.

Maybe something like: "After the reception of an NXDOMAIN answer for a given name, the resolver SHOULD/MUST? reply NXDOMAIN for every name under the denied name." (There are details, such as TTL and such as RFC 6604; see the draft for these.) That way, we just specify a behaviour, with zero implementation detail.
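The behaviour being specified can be sketched without committing to any cache structure. This is a minimal illustration of the NXDOMAIN-cut idea, with an invented cache class; it ignores TTLs, RFC 6604 details, and the tree-vs-hash question the message raises.

```python
class NxdomainCache:
    """Toy negative cache: remembers denied names and answers NXDOMAIN
    for every name at or below a denied name (the 'NXDOMAIN cut')."""

    def __init__(self):
        self.denied = set()

    def note_nxdomain(self, qname):
        # Normalize: lowercase, single trailing dot.
        self.denied.add(qname.lower().rstrip(".") + ".")

    def is_nxdomain(self, qname):
        name = qname.lower().rstrip(".") + "."
        for denied in self.denied:
            # Equal to the denied name, or strictly below it.
            if name == denied or name.endswith("." + denied):
                return True
        return False

cache = NxdomainCache()
cache.note_nxdomain("foobar.example.com")
print(cache.is_nxdomain("www.foobar.example.com"))  # True: under the cut
print(cache.is_nxdomain("example.com"))             # False: above the cut
```

A real resolver would fold this into its existing cache lookup rather than scan a set, but the observable behaviour is exactly the one-sentence rule proposed above.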
[DNSOP] I-D Action: draft-ietf-dnsop-qname-minimisation-08.txt
A New Internet-Draft is available from the on-line Internet-Drafts directories. This draft is a work item of the Domain Name System Operations Working Group of the IETF.

Title: DNS query name minimisation to improve privacy
Author: Stephane Bortzmeyer
Filename: draft-ietf-dnsop-qname-minimisation-08.txt
Pages: 11
Date: 2015-11-29

Abstract:
This document describes a technique to improve DNS privacy, a technique called "QNAME minimisation", where the DNS resolver no longer sends the full original QNAME to the upstream name server.

The IETF datatracker status page for this draft is:
https://datatracker.ietf.org/doc/draft-ietf-dnsop-qname-minimisation/

There's also an htmlized version available at:
https://tools.ietf.org/html/draft-ietf-dnsop-qname-minimisation-08

A diff from the previous version is available at:
https://www.ietf.org/rfcdiff?url2=draft-ietf-dnsop-qname-minimisation-08

Please note that it may take a couple of minutes from the time of submission until the htmlized version and diff are available at tools.ietf.org.

Internet-Drafts are also available by anonymous FTP at:
ftp://ftp.ietf.org/internet-drafts/
Re: [DNSOP] I-D Action: draft-ietf-dnsop-qname-minimisation-08.txt
On Sun, Nov 29, 2015 at 06:52:34AM -0800, internet-dra...@ietf.org wrote a message of 36 lines which said:

> Title : DNS query name minimisation to improve privacy
> Filename: draft-ietf-dnsop-qname-minimisation-08.txt
...
> A diff from the previous version is available at:
> https://www.ietf.org/rfcdiff?url2=draft-ietf-dnsop-qname-minimisation-08

Post-IETF-LC version, now waiting for a writeup before going to the IESG.

Note a terminology change (suggested by Ralph Droms): in the suggested algorithm, "parent" has been replaced by "ancestor". (The node carrying the authoritative name servers to query is not always an immediate parent, because 1) zone cuts are not at every label and 2) the resolver may not know the real closest node while its cache is still cold.)
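For readers new to the thread, the core of QNAME minimisation can be sketched in a few lines: instead of sending the full QNAME to every server in the delegation chain, the resolver sends only one label more than the zone the queried server is authoritative for. The function below is an illustrative simplification (it assumes the ancestor zone is already known, whereas a real resolver discovers zone cuts iteratively, which is exactly why "ancestor" is the right word).

```python
def minimised_qname(full_qname, ancestor_zone):
    """Return the name to send to the servers authoritative for
    ancestor_zone: the zone's labels plus exactly one more label
    of the original query name."""
    full = full_qname.rstrip(".").split(".")
    stripped = ancestor_zone.rstrip(".")
    zone = stripped.split(".") if stripped else []   # "" means the root
    keep = len(zone) + 1
    return ".".join(full[-keep:]) + "."

# Resolving www.foo.example.com, step by step down the tree:
print(minimised_qname("www.foo.example.com", ""))             # com.
print(minimised_qname("www.foo.example.com", "com"))          # example.com.
print(minimised_qname("www.foo.example.com", "example.com"))  # foo.example.com.
```

Each upstream server thus learns only the labels it needs to delegate, which is the privacy improvement the draft is after.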
Re: [DNSOP] Some thoughts on special-use names, from an application standpoint
Hi George,

> I have a different perspective on this question Mark.
>
> Firstly, I find use of .magic as the extreme RHS of a name, to force
> special behaviour, architecturally disquieting.
>
> I really do worry about what we think we're building when we encode this
> behaviour into name strings. It leads to all kinds of bad places. Some of
> them, like the homoglyph problems John Klensin has raised, simply don't
> have good answers (the assumption the string .onion is the literal ASCII
> 'o' 'n' 'i' 'o' 'n' is not well founded)

On the face of it, it sounds like that's a problem shared by any application of DNS names.

> We were here a long time ago, when we had pre-Internet mail and used things
> like .UUCP as magic break-out signals in email. This rapidly becomes the
> problem: it's bound to applications-level decisions about when to honour
> magic, and when not, and it certainly doesn't avoid lower level
> gethostbyname() calls everywhere. So the .magic label winds up being
> half-true, depending.

.onion was the chosen approach precisely because nothing else but lookup and subsequent routing has to change; there are no other application-level decisions about .onion, and that's a feature. HTTP still works, TLS still works (once you can get a cert), links still work, HTML still works. Same-origin policy still works.

The one difference is that .onion asks that applications and resolvers not leak requests, to avoid privacy issues with misconfigured or misused clients. This is defence in depth, not a hard requirement (after all, .onion has been running for several years without the benefit of that requirement). This doesn't seem like "magic" to me; it's just fencing off part of the name space and asking others not to play there.

> Secondly, while I think I now understand some of the problems you have in
> the web/apps layer (from talking to Wendy Seltzer), and I have sympathies
> about the syntactic constructs welded into code around URL forms, I think
> these problems are different to the architectural/layer violation explicit
> in forcing .magic names into the namespace.

"Architectural/layer violation" is not in and of itself a knock-down blow; what's important is whether the harm that's caused is greater than that of any other alternative approach available. I haven't seen much detail on that yet. What is the actual harm, discounting aesthetics?

I make that qualification because I very much acknowledge that this is messy. I don't yet see how it's creating technical problems, though. What you bring up is more akin to limitations of the approach -- limitations which the folks defining .onion chose to work within. Presumably, future approaches to distributed naming will make the same tradeoffs.

> What really got me floored was the qualities of cryptographic protection
> which a project like Tor needs, and the implication that a public/commercial
> CA service embedded in the browser TA set is the right path. I'm frankly
> horrified, even under certificate pinning, that we've gone to a space where
> any TA can claim to sign over .onion and, excluding the pinned
> applications, lead people into paths where their assumptions of TLS-backed
> security are simply not true.

Again, that's not unique to Tor and .onion; it's a problem shared by the whole Web. This is not new; it's unfortunately the result of many choices over the years, and there are many efforts to improve it (e.g., CT/TRANS, pinning, etc.). Considering what's happening on the "normal" Web (e.g., banking), that's just as bad, I'd say.

> As I understand things, Tor *wanted* .onion to get X.509 PKI over the label
> in a browser, and the CA community refused unless its TLD status was
> confirmed. Is this the kind of rigour in technical process we expect, to
> make technical calls to pre-empt the namespace? (which btw, we passed
> otherwise to another body, reserving an RFC-backed process to get names,
> but I think that was a hugely unwise decision)

Some would call this pragmatism.

> To protect .onion certs, the Tor developers are going to have to code in
> cert pinning behaviour, all kinds of things, which frankly sound to me a
> lot like the cost of not having the name, or having a name buried under a
> 2LD instead.

Not necessarily. CT, pinning, and similar approaches can help as well, and these are already getting deployed on the Web overall. Regardless, putting .onion under a 2LD doesn't help avoid these problems.

> So I come to a different place. I come to a place where requests for magic
> names look like violations of any spirit of an architectural view of the
> network, and where retaining some technical basis to reserve them looks
> like violations of the separation of functional roles between ICANN and
> IETF, absent very very clear, strong reasons to have the name held back.
>
> I don't entirely see these reasons emerging. I see the opposite. I see
> expediency from apps communities, seeking to use .magic tricks
Re: [DNSOP] Some thoughts on special-use names, from an application standpoint
>> The purpose of the domain name system is to name things. We have IP
>> addresses and we want to refer to them using names. We do the same thing
>> with mail domains, etc.
>
> That is not the sole purpose - we use DNS for keys, for time stamps,
> for data of all kinds.

In a well-designed system, names are only used to name things.

From the good old days, telnet and ftp are clear examples. After looking up the name in DNS, you don't need the name anymore. All you need are the addresses that were returned in the DNS lookup.

SSH with SSHFP also has that property. Look up the SSH fingerprint in the SSHFP record, look up an address, connect, and verify the fingerprint. No need for the name after the lookup.

(Storing other stuff in a naming system is not a problem; it is just a big distributed database.)

From day one, however, this principle was violated. SMTP does use the domain name after the name lookup. With the introduction of the HTTP Host header, HTTP also violates this principle. With the introduction of SNI, TLS violates it.

However, SMTP, HTTP, and TLS have one thing in common: from the network point of view, the name is not used for routing. It is only later, at the application layer, that the name is used again.

It is here that .onion goes one step further. Onion 'names' are derived from public keys. So instead of a name being independent of an address, the .onion name is the address. Unlike TLS or SSH, where a network connection is set up and then the crypto runs on top of it, in Tor this is all integrated. For good reason; however, that makes the .onion 'name' an address.

>> In goes a name, out comes some lower level entity.
>>
>> In this context an onion address should have been an 'IN ONION', i.e.,
>> www.example.com might have an 'IN ONION' address for use with Tor.
>
> And that would also require special handling...

Yes, but in a way that doesn't abuse the model. If an 'IN ONION' record were stored on the host as a 'struct sockaddr_onion', then all code could be cleanly adapted to support Tor hidden services, instead of adding pattern-matching hacks to name resolution.
Re: [DNSOP] Some thoughts on special-use names, from an application standpoint
Hi,

On 11/29/15, Philip Homburg wrote:
>>> The purpose of the domain name system is to name things. We have IP
>>> addresses and we want to refer to them using names. We do the same thing
>>> with mail domains, etc.
>>
>> That is not the sole purpose - we use DNS for keys, for time stamps,
>> for data of all kinds.
>
> In a well-designed system, names are only used to name things.

I disagree, I think. In a "well designed" system, a name is meaningful to different parties differently. We can use it as a person's easy-to-remember name, or we can use it to express a relationship, or we can use it to ensure that related things are seen as related in a mathematical sense, or other things, or all of these things and then more.

I'm in favor of the DNS serving as a global petname system. When we use DNS to exchange keys as a kind of hack, we're using it badly as a petname system. We also do so without query privacy in most cases, so we find ourselves having a bad petname system with terrible privacy properties.

> From the good old days, telnet and ftp are clear examples.
> After looking up the name in DNS, you don't need the name anymore.
> All you need are the addresses that were returned in the DNS lookup.

Those are protocols that do not provide confidentiality, integrity, or authenticity. They do not solve hard problems. Once they start to solve harder problems, we start to see the problem with treating a name as *just* an easy-to-remember string, which it isn't.

> SSH with SSHFP also has that property. Look up the SSH fingerprint in
> the SSHFP record, look up an address, connect, and verify the fingerprint.
> No need for the name after the lookup.
>
> (Storing other stuff in a naming system is not a problem; it is just a
> big distributed database.)

I think the idea that it is considered primarily a naming system is a bit strange once you make the above concession. It is effectively a government and corporate distributed database with almost no privacy or security protections against those operating or monitoring the DNS.

> From day one, however, this principle was violated. SMTP does use the
> domain name after the name lookup.
>
> With the introduction of the HTTP Host header, HTTP also violates this
> principle. With the introduction of SNI, TLS violates it.
>
> However, SMTP, HTTP, and TLS have one thing in common: from the
> network point of view, the name is not used for routing.

The names in HTTP and SMTP are most certainly used for routing. It is just a very weak, non-cryptographic routing. With TLS, it is a stronger kind of cryptographic routing, but still weak to an adversary performing surveillance.

> It is only later, at the application layer, that the name is used again.
>
> It is here that .onion goes one step further. Onion 'names' are derived
> from public keys. So instead of a name being independent of an address,
> the .onion name is the address.

The name is not an address per se. It is a self-authenticating URL - this means you can prove that you're talking to who you intended to speak with all along. It says nothing of the routing, except that you are routed by public key and IP addresses are not as relevant. They're still used in some cases, but the system is designed to fail closed.

This is not exactly the same as the .onion name being an address. The *public key* is also not an address. In the Tor hidden service protocol, we have descriptors and other things that are more akin to the address. They change often and are authenticated by the public key that you would expect to authenticate them. Those authenticated bits are the address in my view.

> Unlike TLS or SSH, where a network connection is set up and then the
> crypto runs on top of it, in Tor this is all integrated. For good reason;
> however, that makes the .onion 'name' an address.

It can be done in stages - transparent setups work differently from a local Tor client, for example. A resolver can be Tor-enabled, as a network, as a browser, etc.

I guess I have a hard time calling the name an address. It is a name, and it is a kind of name with security properties. See also Zooko's triangle for what I mean - the normal DNS and self-authenticating names are not the same mapping on Zooko's triangle.

>>> In goes a name, out comes some lower level entity.
>>>
>>> In this context an onion address should have been an 'IN ONION', i.e.,
>>> www.example.com might have an 'IN ONION' address for use with Tor.
>>
>> And that would also require special handling...
>
> Yes, but in a way that doesn't abuse the model.

Perhaps.

> If an 'IN ONION' were stored on the host as a 'struct sockaddr_onion',
> then all code could be cleanly adapted to support Tor hidden services,
> instead of adding pattern-matching hacks to name resolution.

I'm skeptical, but I do look forward to you expanding on how this would work with *less* work than our current approach. I'd also be curious what work would need to be done and how it could be done in a way that fails closed and does not harm the user's privacy.
Re: [DNSOP] Some thoughts on special-use names, from an application standpoint
> .onion was the chosen approach precisely because nothing else but lookup
> and subsequent routing has to change; there are no other application-level
> decisions about .onion, and that's a feature. HTTP still works, TLS still
> works (once you can get a cert), links still work, HTML still works.
> Same-origin policy still works.

Call me old-fashioned, but I think this is silly.

The purpose of the domain name system is to name things. We have IP addresses and we want to refer to them using names. We do the same thing with mail domains, etc. In goes a name, out comes some lower level entity.

In this context an onion address should have been an 'IN ONION', i.e., www.example.com might have an 'IN ONION' address for use with Tor.

Now instead, .onion doesn't map to anything. In goes an onion address (and not a name), out comes nothing. All .onion does is signal a particular transport protocol. So it is a clear abuse of the domain name system.

It might be that it is the best option. But my guess is that it was just the easiest hack to get it working.
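The "pattern-matching hack" both sides of this thread are arguing about amounts to a suffix check in the resolution path. A toy sketch, with invented function names and return values (real implementations live in stub resolvers, proxies, or applications, per RFC 7686 for .onion and RFC 6762 for .local):

```python
SPECIAL_USE = {".onion", ".local"}  # special-use suffixes to divert

def classify(name):
    """Decide how a name should be resolved, based on its rightmost
    label: special-use names must never be sent to the public DNS."""
    n = name.lower().rstrip(".")
    for suffix in SPECIAL_USE:
        if n == suffix.lstrip(".") or n.endswith(suffix):
            return "special"        # hand off to Tor, mDNS, ...
    return "dns"                    # ordinary DNS resolution

print(classify("abcdefghijklmnop.onion"))  # special
print(classify("www.example.com"))         # dns
```

This is exactly the kind of name-string dispatch that an 'IN ONION' record type (or a new address family) would avoid, at the cost of changing resolver and socket APIs instead.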
Re: [DNSOP] Call for Adoption for draft-wessels-edns-key-tag
Some feedback with respect to installed trust anchors is needed. Whether this is the correct solution I'm not sure. It requires updating all resolvers in the resolution path to both cache and relay tags. The same can be achieved by encoding the tags into qnames/qtypes, without needing the entire ecosystem to be upgraded as this proposal requires, e.g. _ta_./NULL

Mark

In message <5659a1db.5090...@gmail.com>, Tim Wicinski writes:
>
> This starts a Call for Adoption for draft-wessels-edns-key-tag
>
> The draft is available here:
> https://datatracker.ietf.org/doc/draft-wessels-edns-key-tag/
>
> There was unanimous support for this during the meeting in Yokohama, so
> this is more of a formality, unless we hear strong negative reaction.
>
> However, please indicate if you are willing to contribute text, review, etc.
>
> Since there was unanimous support for this draft, I am going with a one
> week Call for Adoption. Please feel free to protest if anyone feels this
> is out of line.
>
> This call for adoption ends 7 December 2015.
>
> Thanks,
> tim wicinski
> DNSOP co-chair

-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742  INTERNET: ma...@isc.org
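The qname-encoding alternative Mark alludes to can be sketched as follows. The exact label format is an assumption for illustration (the message only hints at "_ta_./NULL"); the idea is that a validator signals its installed trust-anchor key tags by embedding them in a query name, which any unmodified resolver will happily forward.

```python
def key_tag_qname(tags):
    """Build a hypothetical signalling QNAME from a list of DNSSEC key
    tags, encoded as sorted 4-digit hex labels joined by hyphens."""
    labels = "-".join(f"{t:04x}" for t in sorted(tags))
    return f"_ta-{labels}."

# 19036 and 20326 are the key tags of the 2010 and 2017 root KSKs:
print(key_tag_qname([19036]))          # _ta-4a5c.
print(key_tag_qname([20326, 19036]))   # _ta-4a5c-4f66.
```

Because the signal rides inside an ordinary query, only the validator and the authoritative server collecting the telemetry need to understand it; the resolvers in between need no upgrade, which is the advantage claimed over the EDNS-option approach.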
Re: [DNSOP] Some thoughts on special-use names, from an application standpoint
Jacob Appelbaum wrote:
> Hi,
>
> On 11/29/15, Philip Homburg wrote:
>> It is only later, at the application layer, that the name is used
>> again.
>>
>> It is here that .onion goes one step further. Onion 'names' are
>> derived from public keys. So instead of a name being independent of
>> an address, the .onion name is the address.
>
> The name is not an address per se. It is a self-authenticating URL
> - this means you can prove that you're talking to who you intended
> to speak with all along. It says nothing of the routing, except
> that you are routed by public key and IP addresses are not as
> relevant. They're still used in some cases but the system is
> designed to fail closed.
>
> This is not exactly the same as the .onion name being an address.
> The *public key* is also not an address. In the Tor hidden service
> protocol, we have descriptors and other things that are more akin
> to the address. They change often and are authenticated by the
> public key that you would expect to authenticate them. Those
> authenticated bits are the address in my view.
>
>> Unlike TLS or SSH, where a network connection is set up and
>> then the crypto runs on top of it, in Tor this is all integrated.
>> For good reason; however, that makes the .onion 'name' an address.
>
> It can be done in stages - transparent setups work differently
> from a local Tor client, for example. A resolver can be Tor-enabled,
> as a network, as a browser, etc.
>
> I guess I have a hard time calling the name an address. It is a
> name, and it is a kind of name with security properties. See also
> Zooko's triangle for what I mean - the normal DNS and
> self-authenticating names are not the same mapping on Zooko's
> triangle.

I concur with Jacob. An I2P Destination (the public keys defining a particular client or service) is essentially equivalent to an IP address, and .i2p domain names do essentially "resolve" to Destinations. Furthermore, a .b32.i2p is functionally equivalent to an .onion - it is a self-authenticating name that has a one-to-one correspondence with an address-like network object (the Destination). But just because the .b32.i2p or .onion is derived from the address, IMHO, does not mean it is not a name. For one thing, a .b32.i2p cannot be used in isolation to connect to an I2P HS - it must be looked up in-net to obtain the Destination.

>>>> In goes a name, out comes some lower level entity.
>>>>
>>>> In this context an onion address should have been an 'IN ONION', i.e.,
>>>> www.example.com might have an 'IN ONION' address for use with Tor.
>>>
>>> And that would also require special handling...
>>
>> Yes, but in a way that doesn't abuse the model.
>
> Perhaps.
>
>> If an 'IN ONION' were stored on the host as a 'struct
>> sockaddr_onion', then all code could be cleanly adapted to support
>> Tor hidden services, instead of adding pattern-matching hacks to
>> name resolution.
>
> I'm skeptical, but I do look forward to you expanding on how this
> would work with *less* work than our current approach. I'd also be
> curious what work would need to be done and how it could be done
> in a way that fails closed and does not harm the user's privacy.

I actually looked into doing this for I2P several months ago (long after .i2p and .onion were proposed in the P2PNames draft, but before .onion was separately proposed and accepted), because the I2P network model would map well onto the existing gethostbyname() mechanisms.

I concluded that defining AF_I2P (necessary to enable all code to be "cleanly adapted") would require patching the Linux kernel, the Windows kernel, and probably the kernels of any other OS you wanted to run it on, because none of them provide any way to define additional AF_* or PF_* either in userspace or via kernel modules. You *can* define the 'struct sockaddr_i2p' in a module, but it would have to co-opt some pre-existing and maybe-or-maybe-not-unused AF_*, which is a laughable approach.

Now wind back ten years, to when I2P's decision to use .i2p was made. The network was far smaller then (a few hundred nodes, cf. ~40k today). Am I really expected to believe that the small group of pseudonymous developers at the time could have got AF_I2P supported across every OS they wanted I2P to run on (which was every OS that supported Java), in a reasonable space of time (e.g. shorter than this ML has been debating the P2PNames drafts)? When actually using it would require a per-OS kernel module that talked to a user-space, usually-not-installed Java process? And when the alternative was to simply use an HTTP proxy that listens for .i2p addresses, which required very little work, was compatible with anything that supported HTTP proxies, and was functional for users immediately?

I believe that without the backing of a major business or organization it would not happen now, and it