Re: draft-ietf-nat-protocol-complications-02.txt
Greg Skinner wrote: > The IETF might perhaps take an advocacy position for traditional Internet > service. An RFC on the order of "Full Internet Access is Good" might > sway a few people who are unaware of the wealth of services a full > provider offers. On the other hand, a provider that actually offers > such services is much more likely (imho) to have success among a > potential customer base for them, and arguably has more resources to do so. > By this I mean the money for a PR campaign, advertising, etc. On the other other hand, such a PR campaign might be able to gain some weight by quoting the RFC. -- John Stracke, Chief Scientist, eCal Corp. | http://www.ecal.com | My opinions are my own. | "All I ask is a chance to prove that money can't make me happy."
Re: draft-ietf-nat-protocol-complications-02.txt
Keith Moore <[EMAIL PROTECTED]> wrote: > the reason I say that your statement is content-free is that it offers > no specific criticism of IETF that can be used in a constructive fashion. With respect to this particular thread, the only criticism I'd make is I don't see how the draft in question will alter the business practices of AOL or any other large Internet access provider that does not provide full Internet service. I think the draft is useful for protocol developers who may require interoperability across NAT boundaries, and network managers who may need to explain why certain architectures may cause certain protocols to break. Beyond that, I don't see that it will cause any significant change in the business practices of companies who have decided (for whatever reasons) that it is not necessary to give any (or all) of their customers full Internet service. The IETF might perhaps take an advocacy position for traditional Internet service. An RFC on the order of "Full Internet Access is Good" might sway a few people who are unaware of the wealth of services a full provider offers. On the other hand, a provider that actually offers such services is much more likely (imho) to have success among a potential customer base for them, and arguably has more resources to do so. By this I mean the money for a PR campaign, advertising, etc. > I find that the work of IETF varies in quality - much of it is quite > good, some of it mediocre, a small fraction is highly dubious. > Most IETF WGs I've worked with do not operate with an attitude like > "people will have to do what we say", but rather "how do we solve > this problem". Most of them seem to understand that they have not > only to solve the problem, but also to make the solution technically > sound, attractive to those who would use it, and easy to deploy. OK, in this case, what is the problem that needs to be solved, from the standpoint of AOL? 
Their customers, for the most part, are either unaware that there is a problem, or the problem does not currently affect them. If enough of their customers feel it is a problem, no doubt AOL will change their business practices (because if they fail to do so they will lose business to access providers who will solve their problems). So, what is the problem we are trying to solve here, and is the IETF the organization that can provide the most effective solution? --gregbo
Re: draft-ietf-nat-protocol-complications-02.txt
> The bottom line is, the world isn't waiting for us to tell them the > right way to do what they want and the clever solutions we came up with > as solutions to the networking problems of 1970, or 1980, or 1990 don't > demand that they adopt our proposals for solving their problems of 2000. > We're in serious danger of surrendering to the same elitist posturing > for which we used to vilify the mainframe community. Pity, but we'll > have only ourselves to blame if and when the users pass us by... not that I disagree with you, but this is a fairly content-free statement. yes, we are seriously deluded if we think that the industry will do whatever we say just because we are IETF. but a similar statement could be made of any organization that had been around awhile - including, for instance, (but by no means specifically) your employer, a few other large gorillas, telephants, other standards bodies, etc. in a free market, *anyone* is in danger of becoming irrelevant. the reason I say that your statement is content-free is that it offers no specific criticism of IETF that can be used in a constructive fashion. I find that the work of IETF varies in quality - much of it is quite good, some of it mediocre, a small fraction is highly dubious. Most IETF WGs I've worked with do not operate with an attitude like "people will have to do what we say", but rather "how do we solve this problem". Most of them seem to understand that they have not only to solve the problem, but also to make the solution technically sound, attractive to those who would use it, and easy to deploy. So while I'm not familiar with every area of IETF, I don't see that IETF as a whole is suffering from the problem you describe. If there are specific areas or groups you think are suffering from this problem, you would do better to direct your criticism to those areas or groups, to defend your argument with specific examples, and to provide constructive suggestions for improvement. Keith
Re: draft-ietf-nat-protocol-complications-02.txt
g'day, Masataka Ohta wrote: . . . > If IETF makes it clear that AOL is not an ISP, it will commercially > motivate AOL to be an ISP. Not to be unkind, since the IETF has done some good work, but the above statement is incorrect. If you'd written "If AOL perceives that the market would punish them if the IETF makes it clear that AOL is not an ISP, it will commercially motivate AOL to be an ISP" you might be closer to the mark. The bottom line is, the world isn't waiting for us to tell them the right way to do what they want and the clever solutions we came up with as solutions to the networking problems of 1970, or 1980, or 1990 don't demand that they adopt our proposals for solving their problems of 2000. We're in serious danger of surrendering to the same elitist posturing for which we used to vilify the mainframe community. Pity, but we'll have only ourselves to blame if and when the users pass us by... - peterd (feeling testy this evening) -- Peter Deutsch work email: [EMAIL PROTECTED] Engineering Manager Caching & Content Routing Content Services Business Unit private: [EMAIL PROTECTED] Cisco Systems "I want to die quietly and peacefully in my sleep like my grandfather, not screaming in terror like his passengers..." - Emo Phillips
RE: draft-ietf-nat-protocol-complications-02.txt
I think we are moving off subject here, why not drop the thread and concentrate on the job in hand!! Regards Mark Paton CEO/DIR. Internet Network Eng Mercury Network Systems Limited +44 585 649051 +44 1256 761925 http://www.mnsl.org "Mercury Network Systems - The Unstoppable Force" This e-mail is intended only for the addressee named above. As this e-mail may contain confidential or privileged information if you are not, or suspect that you are not, the named addressee or the person responsible for delivering the message to the named addressee, please telephone us immediately. Please note that we cannot guarantee that this message or any attachment is virus free or has not been intercepted and amended. The views of the author may not necessarily reflect those of the Company. -Original Message- From: Eli Sanchez [mailto:[EMAIL PROTECTED]] Sent: 18 July 2000 16:02 To: Greg Skinner; [EMAIL PROTECTED] Subject: Re: draft-ietf-nat-protocol-complications-02.txt Wrong I'm reading it :P Leave AOL alone. There are many FREE connections to select one can have AOL and many other "raw" connections. Maybe we like AOL, and that is why we pick it. AOlers also know that AOL isnt top class but we like easy listening once in a while :P We're not all idiots and although there are some exceptions we have brains to! -Eli --- Greg Skinner <[EMAIL PROTECTED]> wrote: > Masataka Ohta wrote: > > > If IETF makes it clear that AOL is not an ISP, it > will commercially > > motivate AOL to be an ISP. > > Why? Certainly, they are aware that they are not an > ISP by your > definition. It hasn't changed their business > practices. Why would > an IETF RFC change their business practices? The > business practices > of AOL are determined, for the most part, by what > Wall Street and > their customers think is important, not what the > IETF thinks. Most > of their customers are unlikely to read such an RFC > anyway. > > --gregbo > __ Do You Yahoo!? Get Yahoo! 
Mail Free email you can access from anywhere! http://mail.yahoo.com/
Re: draft-ietf-nat-protocol-complications-02.txt
>Wrong I'm reading it :P Leave AOL alone. >AOlers also know that AOL >isnt top class but we like easy listening once in a while Um, I guess this isn't one of those 'whiles.' >Received: from [4.33.131.234] by web4601.mail.yahoo.com ;-) RGF Robert G. Ferrell, CISSP Who goeth without humor goeth unarmed.
Re: draft-ietf-nat-protocol-complications-02.txt
Wrong I'm reading it :P Leave AOL alone. There are many FREE connections to select from; one can have AOL and many other "raw" connections. Maybe we like AOL, and that is why we pick it. AOlers also know that AOL isnt top class but we like easy listening once in a while :P We're not all idiots and although there are some exceptions we have brains too! -Eli --- Greg Skinner <[EMAIL PROTECTED]> wrote: > Masataka Ohta wrote: > > > If IETF makes it clear that AOL is not an ISP, it > will commercially > > motivate AOL to be an ISP. > > Why? Certainly, they are aware that they are not an > ISP by your > definition. It hasn't changed their business > practices. Why would > an IETF RFC change their business practices? The > business practices > of AOL are determined, for the most part, by what > Wall Street and > their customers think is important, not what the > IETF thinks. Most > of their customers are unlikely to read such an RFC > anyway. > > --gregbo
Re: draft-ietf-nat-protocol-complications-02.txt
Masataka Ohta wrote: > If IETF makes it clear that AOL is not an ISP, it will commercially > motivate AOL to be an ISP. Why? Certainly, they are aware that they are not an ISP by your definition. It hasn't changed their business practices. Why would an IETF RFC change their business practices? The business practices of AOL are determined, for the most part, by what Wall Street and their customers think is important, not what the IETF thinks. Most of their customers are unlikely to read such an RFC anyway. --gregbo
Re: draft-ietf-nat-protocol-complications-02.txt
Masataka: > If IETF makes it clear that AOL is not an ISP, it will commercially > motivate AOL to be an ISP. Keith: > probably not. folks who subscribe to AOL aren't likely to be > reading IETF documents. > face it, it's not the superior quality of AOL's service that keeps > AOLers from moving - it's their susceptibility to marketing BS and > their addiction to chat rooms. it's hard to help those people. Assuming they need help. The impression I have gotten from people who regularly use AOL is that they are generally satisfied with the nature of the service (as opposed to the quality of the service). As they discover more about the Internet, they may or may not switch to "real" ISPs, depending on whether they have needs for what "real" ISPs provide. --gregbo
Re: draft-ietf-nat-protocol-complications-02.txt
Keith; > > If IETF makes it clear that AOL is not an ISP, it will commercially > > motivate AOL to be an ISP. > > probably not. folks who subscribe to AOL aren't likely to be > reading IETF documents. AOL will be motivated but, considering other factors, may not change the current practice. That's fine. > face it, it's not the superior quality of AOL's service that keeps > AOLers from moving - it's their susceptibility to marketing BS and > their addiction to chat rooms. it's hard to help those people. I'm not an AOL user, so I don't have to face it. It is arrogant of you to think you must help AOL users. Just make the IETF definition of "ISP" clear and let AOL's ISP competitors give reasons to say "AOL is not an ISP". Masataka Ohta
Re: draft-ietf-nat-protocol-complications-02.txt
> From: Keith Moore <[EMAIL PROTECTED]> > ... > > Other than latency, message size, message rate, and the number of > > participants, what is the difference between an AOL chat room and this > > mailing list as exemplified by this thread? > > why, the participants, of course. > > once people move into chat rooms and make acquaintances there, they > become reluctant to leave. if changing service providers means they > must leave, they're less likely to change service providers even > though they get lousy service. It sounded as if you were making the standard swipe at people who use AOL. If you had been, my rejoinder would have been to quote the first vacation message I got in response to my note. However, I've suddenly realized that the fault for some of the vacation messages rests with the people running this list. Notice that they are not including a "Precedence: bulk" line, which tells at least some `vacation` programs to do the obvious. They do include "X-Loop: [EMAIL PROTECTED]" In other words, the IETF itself is not above some standards bending. > ... > still, my point was that the AOL community is more-or-less disjoint > from the community of folks who read IETF documents, and is likely > to remain so. that's not necessarily a bad thing overall, but it > does imply that IETF will have little effect on the decision-making > habits of AOL subscribers. yes. > if AOL subscribers cared about standards > compliance they would have left AOL long ago. You're wrong about that. Remember when AOL didn't have anything to do with the Internet? That the standards for some of the services that AOL sells are written and published by AOL instead of the IETF, ITU, ANSI, or IEEE doesn't make them any less interesting to those buying those services.
In other words, if the IETF were to suddenly and not upwardly compatibly change the spelling of "Rcpt To" to "To Rcpt" in SMTP and if AOL were to make a similar change in their services, which change would generate more enraged phone calls? Note that I think AOL ought to be sued for false advertising because of the SMTP redirection proxies it interposes in the services that it does claim have something to do with the Internet. Vernon Schryver [EMAIL PROTECTED]
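[Editor's illustration] Vernon's complaint about the list's headers is concrete: `vacation`-style autoresponders conventionally suppress automatic replies to mail marked "Precedence: bulk" (or "junk"), and to mail that already carries a loop marker they inserted themselves. A minimal sketch of that suppression logic, assuming a hypothetical helper function (this is not any particular `vacation` implementation):

```python
from email.message import Message

def should_autoreply(msg: Message, my_loop_marker: str = "ietf@ietf.org") -> bool:
    """Return True only if an automatic reply is appropriate.

    Mirrors the convention Vernon describes: bulk/junk mail and mail
    already tagged with our loop marker must never get an auto-reply.
    """
    # Classic `vacation` behavior: never answer bulk or junk mail.
    if msg.get("Precedence", "").lower() in ("bulk", "junk", "list"):
        return False
    # Never answer mail that already carries our X-Loop marker,
    # otherwise two responders can mail-loop each other forever.
    if my_loop_marker in msg.get("X-Loop", ""):
        return False
    return True
```

Had the list software added "Precedence: bulk" to its outgoing copies, the vacation messages Vernon received would have been suppressed by the first check alone.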
RE: draft-ietf-nat-protocol-complications-02.txt
> -Original Message- > From: Masataka Ohta [mailto:[EMAIL PROTECTED]] > > > If IETF makes it clear that AOL is not an ISP, it will commercially > motivate AOL to be an ISP. Oh really - it is that simple! Guess who is better known among the masses - IETF or AOL :-). Cheers, --brijesh Ennovate Networks Inc.
Re: draft-ietf-nat-protocol-complications-02.txt
> Other than latency, message size, message rate, and the number of > participants, what is the difference between an AOL chat room and this > mailing list as exemplified by this thread? why, the participants, of course. once people move into chat rooms and make acquaintances there, they become reluctant to leave. if changing service providers means they must leave, they're less likely to change service providers even though they get lousy service. I suspect the "reluctance to leave" is no less true for mailing lists. but IETF mailing lists don't require you to use any particular ISP. still, my point was that the AOL community is more-or-less disjoint from the community of folks who read IETF documents, and is likely to remain so. that's not necessarily a bad thing overall, but it does imply that IETF will have little effect on the decision-making habits of AOL subscribers. if AOL subscribers cared about standards compliance they would have left AOL long ago. Keith
Re: draft-ietf-nat-protocol-complications-02.txt
> From: Keith Moore <[EMAIL PROTECTED]> > ... > face it, it's not the superior quality of AOL's service that keeps > AOLers from moving - it's their susceptibility to marketing BS and > their addiction to chat rooms. it's hard to help those people. Other than latency, message size, message rate, and the number of participants, what is the difference between an AOL chat room and this mailing list as exemplified by this thread? I don't intend to imply anything bad about this thread or suggest any of the participants do or stop doing anything. I've never subscribed to AOL, never used an AOL chat room, and the closest I've come to something like IRC is the BSD UNIX `talk` and `write` commands and SDS-940 predecessors in the 1960's. However, I don't think that confers any virtue, and I hope it doesn't blind me to all of the similarities between pots and kettles. Vernon Schryver [EMAIL PROTECTED]
Re: draft-ietf-nat-protocol-complications-02.txt
> If IETF makes it clear that AOL is not an ISP, it will commercially > motivate AOL to be an ISP. probably not. folks who subscribe to AOL aren't likely to be reading IETF documents. face it, it's not the superior quality of AOL's service that keeps AOLers from moving - it's their susceptibility to marketing BS and their addiction to chat rooms. it's hard to help those people. Keith
Re: draft-ietf-nat-protocol-complications-02.txt
Aboba; > >I don't see any problems people making money > >on weird NAT-munging-weirdo-webonly-wap things > >which they sell to customers > > "Making money" implies that for every seller > there is a willing buyer. For NAT to have > progressed from a twinkle-in-the-eye to the > near ubiquity that it will have in a few > years, there need to be a *lot* of willing > buyers. The marketplace rewards those who > satisfy a perceived need. > > If we would prefer that those customers > choose another solution (IPv6), then we > will need to make it every bit as easy > to install and use as the alternative. See draft-ohta-address-allocation-00.txt on how to commercially motivate ISPs (and private IP network providers with NAT, too) to deploy IPv6 service. It also makes NAT unnecessary. > I'm not sure that in practice this is a > distinction that will ever be universally > understood in the marketplace. AOL isn't > Internet access either, but it serves > more than 25 million users. As with > NAT, AOL thrives because it fills a > perceived need better than the alternative. If IETF makes it clear that AOL is not an ISP, it will commercially motivate AOL to be an ISP. Masataka Ohta
Re: draft-ietf-nat-protocol-complications-02.txt
Greg; > I could make the argument that they provide Internet access, in the > sense that one can use these providers to gain access to a subset of > content and services that is "traditionally" called Internet service. > I would support them being classified as Internet Access Providers > (IAPs). In some circles, that's what they're called. Your point is taken that you could call them WASPs (Web Access Service Providers). Masataka Ohta
Re: draft-ietf-nat-protocol-complications-02.txt
Jon: > personal comment > Other classes of organisation may simply be providing a subset of > internet services - I don't see a market or technical case for these > and in fact would encourage regulatory bodies to see if these types of > organisations are trying to achieve lock out or are engaged in > other types of monopolistic or anti-competitive behaviour. :-) If I'm understanding you correctly, there is clearly a market for such organizations, otherwise they would not exist. Whether or not there is technical justification for what they do is a matter of opinion. For reasons that have been beaten to death here and elsewhere, they provide some function that is not met with the existing IPv4 service. I could make the argument that they provide Internet access, in the sense that one can use these providers to gain access to a subset of content and services that is "traditionally" called Internet service. I would support them being classified as Internet Access Providers (IAPs). In some circles, that's what they're called. Masataka: > I just want to make it illegal for these types of organisations to call > their service "Internet" or "internet". > It's something like "Olympic". How would you go about doing that? What judicial organization is likely to make an issue of this, in light of all the other (arguably more serious) issues on their plates? --gregbo
Re: draft-ietf-nat-protocol-complications-02.txt
Randy; > > My intention is to provide a semi permanent definition as an Informational > > RFC. > > > > It is important to make the definition protected by bogus opinions > > of various bodies including IETF. > > of course you will exuse the providers if we continue to be perverse and > find new business models. Exuse? If you mean execution or decapitalization, yes, I will. Masataka Ohta
Re: draft-ietf-nat-protocol-complications-02.txt
Bob; > *> but yes, likely some things in this world are not acceptable to some > *> segment of the population. so don't accept them. but life goes on and > *> things change. > *> > *> randy Changes are already implied by RFC 1958, to which I refer. As things change, new RFCs can be issued. > Resist entropy. You can't. Entropy and the number of RFCs monotonically increase. Masataka Ohta
Re: draft-ietf-nat-protocol-complications-02.txt
> masataka was saying that he could classify providers given a rather fixed > model. i was saying that the world changes and that providers will find > new business models and bend masataka's rigid classification. yes, but the desire to have classification of providers is significantly motivated by providers that keep coming up with "business models" that involve deliberately corrupting the data that they carry. of course such providers would rather act as if they weren't doing anything harmful... which only further illustrates the need to have such classifications. Keith
RE: draft-ietf-nat-protocol-complications-02.txt
*> *> but yes, likely some things in this world are not acceptable to some *> segment of the population. so don't accept them. but life goes on and *> things change. *> *> randy *> *> Resist entropy. Bob Braden
RE: draft-ietf-nat-protocol-complications-02.txt
>I don't see any problems people making money >on weird NAT-munging-weirdo-webonly-wap things >which they sell to customers "Making money" implies that for every seller there is a willing buyer. For NAT to have progressed from a twinkle-in-the-eye to the near ubiquity that it will have in a few years, there need to be a *lot* of willing buyers. The marketplace rewards those who satisfy a perceived need. If we would prefer that those customers choose another solution (IPv6), then we will need to make it every bit as easy to install and use as the alternative. However, even that may not be enough -- because history tells us that displacing a deeply entrenched competitor (IPv4 + NAT) can generally only be accomplished by exploiting points of inflection. Perhaps Wireless and Infiniband will provide the required inflection points; we will see. >BUT, it is NOT Internet access. I'm not sure that in practice this is a distinction that will ever be universally understood in the marketplace. AOL isn't Internet access either, but it serves more than 25 million users. As with NAT, AOL thrives because it fills a perceived need better than the alternative. >I would not buy it, because I want Internet access. Perhaps it is more instructive to turn the tables and complete the sentence: "I would prefer {insert abomination here} to Internet Access because {insert reason for preferring abomination}" Then, try to complete an alternative sentence: "I would prefer IPv6 to {insert abomination here} because in addition to providing {insert reason for preferring abomination} it can also enable my business to grow faster in ways that IPv4 and {insert abomination here} cannot provide: {insert tangible business benefits of IPv6 here}" The goal is to repeat the exercise until the above sentence convinces large numbers of customers (who may not know what Internet Access is) to part with their money.
RE: draft-ietf-nat-protocol-complications-02.txt
> Joking aside, I agree with Keith Moore, some things are totally > unacceptable and this falls into that category. so we try to stay somewhere within the solar system, let's review what "this" is. it was a discussion with masataka and jon about defining classes of providers. From: Randy Bush <[EMAIL PROTECTED]> To: Masataka Ohta <[EMAIL PROTECTED]> Cc: Jon Crowcroft <[EMAIL PROTECTED]>, [EMAIL PROTECTED] Subject: Re: draft-ietf-nat-protocol-complications-02.txt Date: Mon, 10 Jul 2000 06:35:47 -0700 >> I would go further - first to define by exclusion, secondly to define >> a new class of providers (according tro common uisage) so that >> discussion can proceed > > My intention is to provide a semi permanent definition as an > Informational RFC. > > It is important to make the definition protected by bogus opinions > of various bodies including IETF. of course you will exuse the providers if we continue to be perverse and find new business models. masataka was saying that he could classify providers given a rather fixed model. i was saying that the world changes and that providers will find new business models and bend masataka's rigid classification. and then keith came out of left field without bothering to actually read the thread and went off on his usual jihad against whatever perfidy drives him to wild accusations and libel this week, usually nats, proxies, and whatever. boring. but yes, likely some things in this world are not acceptable to some segment of the population. so don't accept them. but life goes on and things change. randy
Re: draft-ietf-nat-protocol-complications-02.txt
> What I oppose strongly, is that people sell weird stuff and call it Internet. I've never seen a marketing person that wouldn't lie and do exactly that. If folks want to buy weird stuff, and they know it's weird stuff and are aware of its limitations, I don't have much problem with that. But I've yet to see a NAT product that was advertised honestly.
RE: draft-ietf-nat-protocol-complications-02.txt
I thought the real purpose of life was to make money!! Joking aside, I agree with Keith Moore, some things are totally unacceptable and this falls into that category. Data integrity should be of the utmost importance in any network. The Internet is no exception. Regards Mark Paton CEO/DIR. Internet Network Eng Mercury Network Systems Limited +44 585 649051 +44 1256 761925 http://www.mnsl.org "Mercury Network Systems - The Unstoppable Force" -Original Message- From: Randy Bush [mailto:[EMAIL PROTECTED]] Sent: 11 July 2000 03:26 To: Keith Moore Cc: Masataka Ohta; Jon Crowcroft; [EMAIL PROTECTED] Subject: Re: draft-ietf-nat-protocol-complications-02.txt >> of course you will exuse the providers if we continue to be perverse and >> find new business models. > > not bloody likely. some things are inexcusable. munging data in > transit is one of them. the fact that you may have a business > model that says you can make money doing something that is inexcusable > is not a justification for doing that thing. > > I'm sick and tired of folks justifying all manner of brain damage > merely because they think they can make money at it. you'd think > that they believe that the only purpose in life is to make money. > > Keith > > p.s. sorry to single you out, there are far worse culprits. have you tried valerian root tea?
Re: draft-ietf-nat-protocol-complications-02.txt
>> of course you will exuse the providers if we continue to be perverse and >> find new business models. > > not bloody likely. some things are inexcusable. munging data in > transit is one of them. the fact that you may have a business > model that says you can make money doing something that is inexcusable > is not a justification for doing that thing. > > I'm sick and tired of folks justifying all manner of brain damage > merely because they think they can make money at it. you'd think > that they believe that the only purpose in life is to make money. > > Keith > > p.s. sorry to single you out, there are far worse culprits. have you tried valerian root tea?
Re: draft-ietf-nat-protocol-complications-02.txt
At 21.43 -0400 00-07-10, Keith Moore wrote: >not bloody likely. some things are inexcusable. munging data in >transit is one of them. the fact that you may have a business >model that says you can make money doing something that is inexcusable >is not a justification for doing that thing. I don't see any problems people making money on weird NAT-munging-weirdo-webonly-wap things which they sell to customers, BUT, it is NOT Internet access. I would not buy it, because I want Internet access. What I oppose strongly, is that people sell weird stuff and call it Internet. paf
Re: draft-ietf-nat-protocol-complications-02.txt
> of course you will exuse the providers if we continue to be perverse and > find new business models. not bloody likely. some things are inexcusable. munging data in transit is one of them. the fact that you may have a business model that says you can make money doing something that is inexcusable is not a justification for doing that thing. I'm sick and tired of folks justifying all manner of brain damage merely because they think they can make money at it. you'd think that they believe that the only purpose in life is to make money. Keith p.s. sorry to single you out, there are far worse culprits.
Re: draft-ietf-nat-protocol-complications-02.txt
>> I would go further - first to define by exclusion, secondly to define >> a new class of providers (according tro common uisage) so that >> discussion can proceed > > My intention is to provide a semi permanent definition as an Informational > RFC. > > It is important to make the definition protected by bogus opinions > of various bodies including IETF. of course you will exuse the providers if we continue to be perverse and find new business models. randy
Re: draft-ietf-nat-protocol-complications-02.txt
Jon; > >>Any comments on the content of the draft? > > I would go further - first to define by exclusion, secondly to define > a new class of providers (according to common usage) so that > discussion can proceed My intention is to provide a semi permanent definition as an Informational RFC. It is important to make the definition protected by bogus opinions of various bodies including IETF. > An ISP _hosts_ its own and customer's hosts. Hosts follow the > host requirements RFC, at least. > > An ISP uses routers to interconnect its, its customers', and other ISPs' > networks. Routers follow the router requirements RFC, at least. They are IETF requirements. Worse, even in IETF, there is no Internet Standard of router requirements yet and the newest revision to the Proposed Standard is BCP. So, please don't attempt to rely on it. > Service Organisations that don't allow a host or router that follows the above > definition to exercise capabilities defined are what we now know as > Content Service Providers, and must provide application level gateways, > Application Service Providers, and offer portals or ALGs. In each case there > may be good performance or security reasons for this mode of service, but > there will usually be lack of flexibility or ease of introduction to new > services, content and applications in general. I think my draft covers the case to make such network providers not ISPs. > personal comment > Other classes of organisation may simply be providing a subset of > internet services - I don't see a market or technical case for these > and in fact would encourage regulatory bodies to see if these types of > organisations are trying to achieve lock out or are engaged in > other types of monopolistic or anti-competitive behaviour. :-) I just want to make it illegal for these types of organisations to call their service "Internet" or "internet". It's something like "Olympic". Masataka Ohta
Re: draft-ietf-nat-protocol-complications-02.txt
>>Any comments on the content of the draft? I would go further - first to define by exclusion, secondly to define a new class of providers (according to common usage) so that discussion can proceed An ISP _hosts_ its own and customers' hosts. Hosts follow the host requirements RFC, at least. An ISP uses routers to interconnect its own, its customers', and other ISPs' networks. Routers follow the router requirements RFC, at least. Service Organisations that don't allow a host or router that follows the above definition to exercise the capabilities defined are what we now know as Content Service Providers, and must provide application level gateways, Application Service Providers, and offer portals or ALGs. In each case there may be good performance or security reasons for this mode of service, but there will usually be a lack of flexibility or ease of introduction of new services, content and applications in general. personal comment Other classes of organisation may simply be providing a subset of internet services - I don't see a market or technical case for these and in fact would encourage regulatory bodies to see if these types of organisations are trying to achieve lock-out or are engaged in other types of monopolistic or anti-competitive behaviour. :-) cheers j.
Re: draft-ietf-nat-protocol-complications-02.txt
Dear all; Based on the previous discussion, Jon> In message <[EMAIL PROTECTED]>, Masataka Ohta typed: Jon> Jon> >>Is it fair if providers using iMODE or WAP are advertised Jon> >>to be ISPs? Jon> >> Jon> >>Is it fair if providers using NAT are advertised to be ISPs? Jon> >> Jon> >>My answer to both questions is Jon> >> Jon> >>No, while they may be Internet Service Access Providers and Jon> >>NAT users may be IP Service Providers, they don't provide Jon> >>Internet service and are no ISPs. Jon> Jon> i agree: Jon> in the UK, i would say that someone claiming internet access via WAP Jon> would be in breach of the trades description act. Jon> Jon> >>Any oppositions? Jon> Jon> not from here (for wap - i dont know enough about iMODE to comment) and the lack of opposition in the thread, I have drafted the attached ID. However, IESG is blocking the publication of the ID (it is just an Internet Draft, NOT an Informational RFC)! IESG says they have not even seen the content, which means a lot of time will be wasted further. So, I post the draft to the Mailing List. Any comments on the content of the draft? Comments on the blockage, if any, should be given under a separate subject. I hope IESG has not taken the obvious next step of moderating the Mailing List. Masataka Ohta --- INTERNET DRAFT M. Ohta draft-ohta-isps-00.txt Tokyo Institute of Technology July 2000 The Internet and ISPs Status of this Memo This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC2026. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." 
The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html. Copyright (C) The Internet Society (May/1/2000). All Rights Reserved. Abstract This memo gives definitions of the Internet and ISPs (Internet Service Providers). 1. The Internet The Internet is a public IP [1, 2] network globally connected end to end [3] at the Internetworking layer. 2. ISPs A network provider is an ISP, if and only if its network, including access parts of the network to its subscribers, is a part of the Internet. As such, ISPs must preserve the end to end and globally connected principles of the Internet at the Internetworking layer. M. Ohta Expires on January 1, 2001 [Page 1] A network provider of a private IP or non-IP network, which is connected to the Internet through an application and/or transport gateway, is not an ISP. Despite the requirement of "global connectivity", a network provider may use transparent firewalls to the Internet with no translation to filter out a limited number of problematic well-known ports of TCP and/or UDP and can still be an ISP. However, if filtering out is the default and only a limited number of protocols are allowed to pass the firewalls (which means snooping of transport/application layer protocols), it cannot be regarded as full connectivity to the Internet and the provider is not an ISP. 3. Security Considerations While some people may think that filtering by application/transport gateways offers some sort of security, they should recognize that macro viruses in e-mail can pass and are passing through all such gateways. 4. References [1] J. Postel, "Internet Protocol", RFC791, September 1981. [2] S. Deering, R. Hinden, "Internet Protocol, Version 6 (IPv6) Specification", RFC2460, December 1998. [3] B. Carpenter, "Architectural Principles of the Internet", RFC1958, June 1996. 5. 
Author's Address Masataka Ohta Computer Center, Tokyo Institute of Technology 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550, JAPAN Phone: +81-3-5734-3299 Fax: +81-3-5734-3415 EMail: [EMAIL PROTECTED] M. Ohta Expires on January 1, 2001 [Page 2] 6. Full Copyright Statement "Copyright (C) The Internet Society (July/1/2000). All Rights Re
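The draft's section 2 test, in which a default-open network filtering a short list of problem ports still counts as Internet service while a default-closed allowlist does not, can be sketched as a toy classifier. The function name, the port set, and the numeric threshold below are all illustrative assumptions; the draft gives no concrete limit for "a limited number of problematic well-known ports":

```python
# Toy restatement of the draft's ISP test (section 2). All names and the
# max_filtered threshold are hypothetical, not from the draft itself.

def is_isp(default_allow, filtered_ports, max_filtered=8):
    """Return True if the provider's filtering still counts as full Internet service."""
    if not default_allow:
        # Default-closed: only allowlisted protocols pass the firewall,
        # which the draft says is not full connectivity.
        return False
    # Default-open with transparent filtering of a few well-known ports
    # is still permitted under the draft's definition.
    return len(filtered_ports) <= max_filtered

print(is_isp(True, {135, 137, 138, 139}))  # blocklist provider -> True
print(is_isp(False, set()))                # walled garden -> False
```

Note that the decisive factor in the draft's wording is the default policy, not just the port count: a default-deny firewall disqualifies a provider even before the number of allowed protocols is considered.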
Re: draft-ietf-nat-protocol-complications-02.txt
Steve Deering; > >Unfortunately, IPv6's current addressing architecture makes it very > >difficult to do this sort of traditional multihoming if one is not > >IPv6's larger address space is merely a necessary piece of an > >Internet which will not run out of numbers. > > Wow, we actually agree on something! (Though I could quibble over the > "merely".) As you two seemingly have agreed, IPv6, as is, is not so useful for scalable multihoming. However, transition to IPv6 is important to solve multihoming issues, because IPv6 routing space is not yet polluted by unaggregated addresses. Important pieces are documented in "The Architecture of End to End Multihoming", which has just become available. Masataka Ohta
Re: draft-ietf-nat-protocol-complications-02.txt
Masataka Ohta wrote: > > i-mode uses native > > http servers with some relatively transparent html > > extensions for handsets (such as -- /\ |John Stracke| http://www.ecal.com |My opinions are my own.| |Chief Scientist |=| |eCal Corp. |"If nobody believes what I say, I feel | |[EMAIL PROTECTED]|ineffective." "Oh, I don't believe that."| \==/
Re: draft-ietf-nat-protocol-complications-02.txt
Robert; > WAP and i-mode are *very* different. FTP and SMTP are *very* different, because SMTP is a lot easier to pass through application/transport gateways. However, the question of whether it is IP or not is enough to dismiss iMODE and WAP. The battle has been and still is fought between the end to end Internet and the intelligent telephone network. NAT was merely an interim solution for telephone network people until they were ready with the non-IP protocols of iMODE or WAP. Now, the option is between the stupid Internet with global IP connectivity and the telephony-based intelligent non-IP network with a lot of application/transport gateways. There is no ecological niche for an intelligent IP network with a lot of application/transport gateways (NAT) anymore. > i-mode uses native > http servers with some relatively transparent html > extensions for handsets (such as
Re: draft-ietf-nat-protocol-complications-02.txt
In message <[EMAIL PROTECTED]>, Masataka Ohta typed: >> Is it fair if providers using iMODE or WAP are advertised >> to be ISPs? >> >> Is it fair if providers using NAT are advertised to be ISPs? >> >>My answer to both questions is >> >> No, while they may be Internet Service Access Providers and >> NAT users may be IP Service Providers, they don't provide >> Internet service and are no ISPs. i agree: in the UK, i would say that someone claiming internet access via WAP would be in breach of the trades description act. >>Any oppositions? not from here (for wap - i dont know enough about iMODE to comment) >> Masataka Ohta >> cheers jon
Re: draft-ietf-nat-protocol-complications-02.txt
Vint; > that's right - they use iMODE on the DOCOMO mobiles. iMODE and > WAP seem to have that in common: a non-IP radio link protocol > and an application gateway. Of course, this limits the applications > to those that can be "translated" in the gateway, while an end to > end system (such as the Ricochet from Metricom) would allow > essentially any application on an Internet server to interact > directly with the mobile device because the gateway would merely > be an IP level device, possibly with NAT functionality. > With a JAVA interpreter or other similar capability in the > mobile, one could imagine considerable competition for development > of new applications. As it stands, only the applications NTT > chooses to implement in the translating gateway are accessible. An interesting thing is that iMODE is so successful that DOCOMO is suffering from the usual problems (lack of scalability and robustness) caused by violating the end to end principle. iMODE is now infamous for its frequent service interruption. DOCOMO users are refunded for the interruption. > Since HTTP is one of the "applications" served, there is still > a lot of room for competition, I suppose. To make the competition fair, the important questions are: Is it fair if providers using iMODE or WAP are advertised to be ISPs? Is it fair if providers using NAT are advertised to be ISPs? My answer to both questions is No, while they may be Internet Service Access Providers and NAT users may be IP Service Providers, they don't provide Internet service and are no ISPs. Any oppositions? Masataka Ohta
Re: draft-ietf-nat-protocol-complications-02.txt
that's right - they use iMODE on the DOCOMO mobiles. iMODE and WAP seem to have that in common: a non-IP radio link protocol and an application gateway. Of course, this limits the applications to those that can be "translated" in the gateway, while an end to end system (such as the Ricochet from Metricom) would allow essentially any application on an Internet server to interact directly with the mobile device because the gateway would merely be an IP level device, possibly with NAT functionality. With a JAVA interpreter or other similar capability in the mobile, one could imagine considerable competition for development of new applications. As it stands, only the applications NTT chooses to implement in the translating gateway are accessible. Since HTTP is one of the "applications" served, there is still a lot of room for competition, I suppose. vint At 02:53 PM 4/30/2000 +0859, Masataka Ohta wrote: >In Japan, there are more than 5 million non-IP mobile WWW browsers >served by a single application gateway. = I moved to a new MCI WorldCom facility on Nov 11, 1999 MCI WorldCom 22001 Loudoun County Parkway Building F2, Room 4115, ATTN: Vint Cerf Ashburn, VA 20147 Telephone (703) 886-1690 FAX (703) 886-0047 "INTERNET IS FOR EVERYONE!" See you at INET2000, Yokohama, Japan July 18-21, 2000 http://www.isoc.org/inet2000
Re: draft-ietf-nat-protocol-complications-02.txt
Matt; > >I don't know about you, but it scares me to read the various forecasts > >about how wireless will transform the landscape over the next few > >years. E.g., more wireless phones with internet connectivity than > >PCs. The numbers are just staggering and the associated demand for > >addresses will be astonishing. We ain't seen nothing yet. > > It seems to be a given now that 3G phones will be IPv6 (at least outside > the U.S.) when they roll out over the next few years. But we should make > people clearly aware of what to expect when they NAT their way back to the > IPv4 Internet or IPv4 intranets. That is one of the purposes of this draft. > (oh yeah, this thread is supposed to be about contributing to the draft in > the subject line) The reality is that mobile phones are using non-IP protocols. If you use NAT, you have no reason to use IP. In Japan, there are more than 5 million non-IP mobile WWW browsers served by a single application gateway. Masataka Ohta
Re: draft-ietf-nat-protocol-complications-02.txt
> > >> You appear to be saying that because historically people screwed up > >> configuring their DNS that it is impossible to rely on the DNS for > >> critical infrastructure. > > > I wouldn't say 'impossible'. My point is that it is more difficult to > > get this to work well than it might seem at first glance. ... and note > > that even if you fix the reliability problems associated with using DNS > > to do mapping from global endpoint IDs to local routing information, > > you still have the performance problems to deal with. > > Are you making the assumption that we can grow the network in size (let's not > even get into functionality) *without* adding extra architecture/mechanism? no, it's fairly clear to me that we either constrain how people connect their networks together, or we have to change the mechanism by which we propagate and/or compute routes (what kind of changes, or how to deploy them, are not clear), if we want to make the network considerably larger. my general point is not that additional complexity is always bad - but if you're going to add a lot of complexity you should get a better system as a result. and a system that doesn't meet the needs of applications is not clearly better. and the specific point is that folks who are thinking "just use DNS or something like it" probably haven't looked at this in enough detail, and/or they have an oversimplified view of applications' needs. Keith
Re: draft-ietf-nat-protocol-complications-02.txt
> From: Keith Moore <[EMAIL PROTECTED]> >> You appear to be saying that because historically people screwed up >> configuring their DNS that it is impossible to rely on the DNS for >> critical infrastructure. > I wouldn't say 'impossible'. My point is that it is more difficult to > get this to work well than it might seem at first glance. ... and note > that even if you fix the reliability problems associated with using DNS > to do mapping from global endpoint IDs to local routing information, > you still have the performance problems to deal with. Are you making the assumption that we can grow the network in size (let's not even get into functionality) *without* adding extra architecture/mechanism? In other words, is your problem with DNS as currently designed/implemented/maintained - or is it more (as I seem to recall from previous messages from you) with the general notion that more complex things are fundamentally bad (since any extra mechanism is also a place for something to go wrong, or a place to incur overhead)? If so, I'd say that's false economy - to paraphrase Lincoln on leg length, a network of a certain size and functionality needs a certain amount of complexity, and if you fail to architect it in (i.e. cleanly), it will get added around the edges in all sorts of ugly warts (i.e. the kind the Internet stack is currently infested with). Noel
RE: draft-ietf-nat-protocol-complications-02.txt
> From: "BookIII, Robert" <[EMAIL PROTECTED]> > ... > save for a couple of auto-responses from NTMail in the name of > ... > but have started up again. Does anyone know how I could go about addressing > this? Thanks for your time and consideration. You can expect at least 3 and usually several more "vacation" notices, delayed delivery warnings, and non-delivery bounces for each message you send to the main IETF list over the course of the week or 10 days after sending. By my recollection, even 10 years ago such garbage was unusual and cause for complaints and apologies. That nothing happens today, not even the automatic removal of persistently broken addresses, is a commentary on the changing nature of the IETF. If you're an optimist, you might infer only something about growth. I've thought of suggesting that the IESG should maintain an address to which such bounces could be sent to remove the offending address, but then I've also thought of the reasonable (e.g. authentication) and silly (e.g. censorship) controversies such a suggestion would trigger. I've also thought of mentioning that I really don't need to see a dozen announcements for the same junket of some other organization's boondoggle, but then I've remembered we've gone through that many times. Vernon Schryver [EMAIL PROTECTED]
RE: draft-ietf-nat-protocol-complications-02.txt
This may be a divergence from the topic, so I'll apologize in advance, but Vernon's point about bounced emails struck a chord with me. I made the mistake of leaving a couple of options on in MS Outlook which cause a receipt to be sent back to me when an email is delivered and when it's read. At this point, the flurry has calmed down from my first email to the list, save for a couple of auto-responses from NTMail in the name of [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]> . I've been getting repetitive delivery and read acknowledgements, which seemed to have abated but have started up again. Does anyone know how I could go about addressing this? Thanks for your time and consideration.
Re: draft-ietf-nat-protocol-complications-02.txt
In message <[EMAIL PROTECTED]>, "J. Noel Chiappa" typed: >>> right, noels wrong. >>Noel is happy to wait, and see who's right. (I've been through this exact >>same experience before, with CLNP, so I understand the life-cycle.) So far, >>I've been waiting for quite a few years with IPv6, and so far I'm right. >>Let's see, how many years have these standards been out, and how much >>deployment has there been? Hmm, RFC-1883 was in December 1995. Can you point >>me to *any* other IETF product that, 5 years after the Proposed Standard came >>out, still hadn't been significantly deployed - and then went on to be a >>success? >>No? wrong - multicast. >>I didn't think so. read again - LOTS of things have seen almost no deployment since becoming standards, and lots of things have seen deployment (e.g. napster hit around 15% of college traffic) without even a breath of an i-d >>> NATs are not only bad e2e karma, they are bad tech >>I'm not denying that - and I've said as much. All address-sharing devices are >>problematic, and some (e.g. NAT boxes) are downright disgusting kludges. >> >>However, history shows that bad tech doesn't magically replace itself, it has >>to be replaced by an economically viable alternative. (For an example of this >>principle in action, note that the vast majority of cars are still powered by >>reciprocating internal-combustion engines... talk about poor basic concept! >>But I digress) i agree... >>Judging from the real world out there, it appears that IPv6 isn't a viable >>alternative. i agree its not worth holding one's breath... cheers jon
Re: draft-ietf-nat-protocol-complications-02.txt
> From: "Steven M. Bellovin" <[EMAIL PROTECTED]> > ... > There is some data indicating that Keith is right, that there are problems in > the DNS. See, for example, http://www.research.att.com/~edith/Papers/infocom2000.ps I don't think I understand the connection between that paper about "Prefetching the Means for Document Transfer: A New Approach for Reducing Web Latency" and Keith's statement that DNS email errors are usually on the receiver's side: ] (email errors are usually detected by the sender of a message, since ] that's who gets the bounced message. but the party who has responsibility ] for fixing the error is usually not on the sender's end of things) My perhaps irrelevant, boring, or even wrong claim was that I'm seeing more sender-side than receiver-side SMTP+DNS problems. If the relevance of that paper is that people are having fun and games with DNS to help HTTP, and that causes DNS errors that in turn cause DNS problems seen by HTTP clients, then that's consistent with my personal experience and my claim. I see many crazy DNS failures in my personal web surfing. 
On the extreme, every venture capital fund seems to still be shoveling money at anyone who wants to try anything you'd care to mention (and lots more besides) to make HTTP go faster, and many of those schemes seem to involve DNS creativity. Vernon Schryver[EMAIL PROTECTED]
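Vernon's description of "non-fancy" SMTP load balancing (MX preferences plus round-robin among equal-preference records) can be sketched directly. The hostnames and preference values below are invented for illustration:

```python
# Sketch of plain MX-based load balancing: pick the lowest MX preference,
# then rotate among the equal-preference hosts the way a simple
# round-robin DNS answer would. All records here are made-up examples.
from itertools import cycle

mx_records = [
    (10, "mx1.example.net"),
    (10, "mx2.example.net"),
    (20, "backup.example.net"),  # tried only if the preference-10 hosts fail
]

best = min(pref for pref, _ in mx_records)
primaries = [host for pref, host in mx_records if pref == best]
rotation = cycle(primaries)

# Four successive "deliveries" alternate between the two primaries.
picks = [next(rotation) for _ in range(4)]
print(picks)
```

The point of the sketch is how little machinery this takes compared to the DNS creativity Vernon describes for HTTP: preferences and round-robin fall out of ordinary record handling, with no load-aware name server required.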
Re: draft-ietf-nat-protocol-complications-02.txt
In message <[EMAIL PROTECTED]>, Vernon Schryver writes >It may be irrelevant, and my personal sample size is trivially tiny. ... > >In recent months very little email I've sent was bounced due to DNS errors >and only a little more has been delayed. My logs say the delivery of much >more from others to me was delayed or failed because the sender's domain >did not resolve for hours or ever. (familiar anti-spam hack, with my >guesses used to distinguish spam from legitimate mail suffering from bad >sender DNS) > There is some data indicating that Keith is right, that there are problems in the DNS. See, for example, http://www.research.att.com/~edith/Papers/infocom2000.ps --Steve Bellovin
Re: draft-ietf-nat-protocol-complications-02.txt
At 2:42 PM -0700 4/26/00, David R. Conrad wrote: >Perhaps it is obvious to you, however it has been implied that one of the >advantages of v6 is that it has a limited number of TLAs which would be found >in the DFZ of the v6 Internet. The truth is subtly different than what was implied or thought to be implied, as I have tried to explain (with limited success, obviously), but that's beside the point of your message: >My point was that this is not an advantage of v6 but rather an advantage >of starting fresh and that the limitations on TLAs is not an artifact of >the v6 protocol, but rather an administrative limit established by policy Yes, that's right. If I understand correctly, you are upset by the imprecise shorthand of saying "this is an advantage of IPv6 over IPv4" and would prefer that we were careful always to say "this is an advantage of the IPv6 addressing plan over the addressing plan of the existing IPv4 Internet, an advantage of starting fresh". Fair enough. Or are you asking us not to mention it at all, perhaps because there is a plan to undertake a major renumbering of the IPv4 Internet so that there will be a similarly small limit on the number of globally-advertised IPv4 prefixes required to ensure reachability of all IPv4 customers? >-- something quite malleable over time (perhaps more so >now given the creation of ICANN). In what way do you think the creation of ICANN might have made it easy or easier to impose address changes on IPv4 customers? > > IPv4 could in theory have 2^32 TLAs and IPv6 could in theory have > > 2^128 TLAs. Are you saying that 2^32 TLAs would be OK? > >No more than I would assume you'd say 2^128 TLAs would be OK. So you agree there is a need to limit the number of TLAs to a number less than that permitted by the address size, even for IPv4. 
The longer address of IPv6 does not introduce a new problem in this regard, contrary to what you seemed to imply -- whatever method or policy you would use to limit the number of TLAs in IPv4 could just as well be used in IPv6. Or do you, perhaps, think we really ought to be moving to a version of IP with 16-bit addresses, to avoid the risk of creating too many TLAs? :-) Steve
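The arithmetic behind this exchange is worth making explicit. The theoretical maxima follow from the address sizes alone, while the actual IPv6 cap comes from the 13-bit TLA ID field of the aggregatable global unicast format (RFC 2374), an administrative choice rather than a protocol limit:

```python
# Theoretical vs. administrative limits on the number of TLAs.
ipv4_theoretical = 2 ** 32    # every IPv4 address as its own "TLA"
ipv6_theoretical = 2 ** 128   # likewise for IPv6
tla_id_bits = 13              # TLA ID field width in RFC 2374
tla_cap = 2 ** tla_id_bits    # the policy cap actually imposed

print(ipv4_theoretical)  # 4294967296
print(tla_cap)           # 8192
```

The gap between 2^128 possible addresses and 8,192 permitted TLAs illustrates the thread's point: the small DFZ is a property of the addressing plan and its policy, not of the protocol.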
Re: draft-ietf-nat-protocol-complications-02.txt
> From: Keith Moore <[EMAIL PROTECTED]> > ... > (email errors are usually detected by the sender of a message, since > that's who gets the bounced message. but the party who has responsibility > for fixing the error is usually not on the sender's end of things) > ... It may be irrelevant, and my personal sample size is trivially tiny. ... In recent months very little email I've sent was bounced due to DNS errors and only a little more has been delayed. My logs say the delivery of much more from others to me was delayed or failed because the sender's domain did not resolve for hours or ever. (familiar anti-spam hack, with my guesses used to distinguish spam from legitimate mail suffering from bad sender DNS) Vernon Schryver[EMAIL PROTECTED]
Re: multihoming (was Re: draft-ietf-nat-protocol-complications-02.txt)
> So one of IPv6's multihoming approaches is no worse than IPv4, > while another appears to be significantly better. ...in terms of its impact on the routing system. it's not clear that having multiple addresses per host is significantly better for applications in general. my guess is that some applications will do well with multiple-address style multihoming, while others will not. but even if multiple-address style multihoming isn't a suitable replacement for all applications of single-address multihoming, it's still nice to have as an option. Keith
Re: draft-ietf-nat-protocol-complications-02.txt
> > even the DNS names for major services may not be well maintained. > > at one time I did a survey of the reasons for mail bounces > > for one of my larger mailing lists. > > You appear to be saying that because historically people screwed up > configuring their DNS that it is impossible to rely on the DNS for critical > infrastructure. This seems wrong to me. I wouldn't say 'impossible'. My point is that it is more difficult to get this to work well than it might seem at first glance. one reason I cited the DNS-related problems in email is that many people would consider email a critical service, and also one that is employed on a daily basis by a large portion of one's network users. so if people won't do what's necessary to make their email work, will they take the necessary steps to make other less critical services work? > If a properly configured DNS was a fundamental requirement of a working > network connection as is assumed by something like 8+8, I think it fairly > certain that any misconfigurations would be fixed as quickly as (say) a BGP > misconfiguration. it depends ... on the size of the user population affected by a DNS record (probably much smaller than the typical BGP misconfiguration, and therefore less important) and also on where the errors are detected (email errors are usually detected by the sender of a message, since that's who gets the bounced message. but the party who has responsibility for fixing the error is usually not on the sender's end of things) and note that even if you fix the reliability problems associated with using DNS to do mapping from global endpoint IDs to local routing information, you still have the performance problems to deal with. Keith
Re: draft-ietf-nat-protocol-complications-02.txt
Steve, > I interpreted Bill's question as how would this be different > than limiting IPv4 prefix advertisements to only /8s *today*, i.e., without > renumbering the IPv4 Internet, so I answered accordingly. I didn't think > to interpret the question the way you did, because the answer is so obvious > it wouldn't have made sense to ask. Perhaps it is obvious to you, however it has been implied that one of the advantages of v6 is that it has a limited number of TLAs which would be found in the DFZ of the v6 Internet. My point was that this is not an advantage of v6 but rather an advantage of starting fresh and that the limitation on TLAs is not an artifact of the v6 protocol, but rather an administrative limit established by policy -- something quite malleable over time (perhaps more so now given the creation of ICANN). > IPv4 could in theory have 2^32 TLAs and IPv6 could in theory have > 2^128 TLAs. Are you saying that 2^32 TLAs would be OK? No more than I would assume you'd say 2^128 TLAs would be OK. Rgds, -drc
Re: draft-ietf-nat-protocol-complications-02.txt
Keith, > even the DNS names for major services may not be well maintained. > at one time I did a survey of the reasons for mail bounces > for one of my larger mailing lists. You appear to be saying that because historically people screwed up configuring their DNS that it is impossible to rely on the DNS for critical infrastructure. This seems wrong to me. If a properly configured DNS was a fundamental requirement of a working network connection as is assumed by something like 8+8, I think it fairly certain that any misconfigurations would be fixed as quickly as (say) a BGP misconfiguration. Rgds, -drc
Re: draft-ietf-nat-protocol-complications-02.txt
> Date: Wed, 26 Apr 2000 14:31:23 -0400 > From: "J. Noel Chiappa" <[EMAIL PROTECTED]> > To: [EMAIL PROTECTED] > Subject: Re: draft-ietf-nat-protocol-complications-02.txt > > Noel is happy to wait, and see who's right. (I've been through this exact > same experience before, with CLNP, so I understand the life-cycle.) So far, > I've been waiting for quite a few years with IPv6, and so far I'm right. > [...] Perhaps, we can start a pool to guess the date of the first IPv6-only IETF meeting (laptop connections, wireless network, desktops all running only IPv6). Or, perhaps a true believer would like to propose that the network at an upcoming IETF meeting support _only_ IPv6. Perhaps the IPv6 working group could staff a help desk. I would make this proposal myself, but I am concerned that I might be accused of trying to create a media spectacle that would set back (?) IPv6 adoption by several years. (If there is an IPv6-only IETF meeting soon, I hope I can get one of those IETF phone cards from Phil Gross -- I might want to go to my room and get some work done.) ((Actually, I think an IPv6-only IETF meeting would be sort of neat. I would at least try to get IPv6 working on my laptop.)) (((If we think an IPv6-only IETF meeting is impractical at this time, we should consider not asking anyone else to migrate their network to IPv6 until it is mature enough to support an IETF meeting.))) -tjs
Re: draft-ietf-nat-protocol-complications-02.txt
> right, Noel's wrong. Noel is happy to wait, and see who's right. (I've been through this exact same experience before, with CLNP, so I understand the life-cycle.) So far, I've been waiting for quite a few years with IPv6, and so far I'm right. Let's see, how many years have these standards been out, and how much deployment has there been? Hmm, RFC-1883 was in December 1995. Can you point me to *any* other IETF product that, 5 years after the Proposed Standard came out, still hadn't been significantly deployed - and then went on to be a success? No? I didn't think so. > NATs are not only bad e2e karma, they are bad tech I'm not denying that - and I've said as much. All address-sharing devices are problematic, and some (e.g. NAT boxes) are downright disgusting kludges. However, history shows that bad tech doesn't magically replace itself; it has to be replaced by an economically viable alternative. (For an example of this principle in action, note that the vast majority of cars are still powered by reciprocating internal-combustion engines... talk about poor basic concept! But I digress.) Judging from the real world out there, it appears that IPv6 isn't a viable alternative. Noel
Re: multihoming (was Re: draft-ietf-nat-protocol-complications-02.txt)
> architecture. That accusation is false, and nothing in IPv6 prevents > the use of the same, lousy multihoming solution we have today for IPv4. Just for the record, I was *not* suggesting that IPv4 solves the multihoming problem (I said nothing about it one way or the other). I understand that internet-wide advertising of multiple prefixes for multihomed sites does not scale, and so does not constitute a solution. (I would not use the word "solution" to describe IPv4's de facto approach to multihoming.) > hosts. This is work-in-progress, and is likely to produce solutions > with a different (but, we hope, acceptable in many contexts) set of > shortcomings. Ok, so I think we may be in factual agreement if not in agreement on emphasis or tone. So, let me repeat my representation here and ask you if the (hopefully acceptable) shortcomings would likely have one or more of the following characteristics: >1. discouraging the use of multihoming, primarily by making >multihomed customers pay more for it. >2. forcing paths to multihomed sites to be less efficient (at >least for all but one of the ISP connection points) and/or, >3. limiting the regions of the internet for which multihoming >is effective for a given customer. Keep in mind that these three characteristics are gleaned (and then negatively spun) directly from what Thomas wrote. The negative spin was a reaction to what I perceived as a misrepresentation of the difficulties by Thomas. Alternatively, maybe you could just point me to the work-in-progress you refer to (even if it is in the form of IPv6 mailing list archives). (I'm aware of the IDs on router renumbering, DNS suffix changing, and the one about using mobility mechanisms for dealing with multihoming.) PF
Re: draft-ietf-nat-protocol-complications-02.txt
Thomas, I think you're actually both right. Noel's comment was meant as a general case for even a very large enterprise. You provided a counterexample of some number of providers that will need huge amounts of address space. Much as I dislike NAT, I believe it's an unproven assertion that those providers won't find a way to molest the architecture further to make it work. Really all that is required is some small percentage of public addresses, based on usage patterns. Any Bell Heads want to take a stab at that percentage? My hope is, however, that a NAT deployment would be v6-to-v4, so that they could at least have the pipe dream that those NATs might some day become unnecessary. Also, many people have made claims of grand visions when it comes to usage of address space, and these grand visions have been rebutted with "it hasn't happened yet." In some cases it hasn't happened because it couldn't happen. Fred tells the story of China. I can tell the story of a cable provider, back in 1993, who were planning a large-scale Cable-IP deployment. They were ahead of their time, but not by that much. Eliot
Re: draft-ietf-nat-protocol-complications-02.txt
>I don't know about you, but it scares me to read the various forecasts >about how wireless will transform the landscape over the next few >years. E.g., more wireless phones with internet connectivity than >PCs. The numbers are just staggering and the associated demand for >addresses will be astonishing. We ain't seen nothing yet. It seems to be a given now that 3G phones will be IPv6 (at least outside the U.S.) when they roll out over the next few years. But we should make people clearly aware of what to expect when they NAT their way back to the IPv4 Internet or IPv4 intranets. That is one of the purposes of this draft. (oh yeah, this thread is supposed to be about contributing to the draft in the subject line)
RE: draft-ietf-nat-protocol-complications-02.txt
In his previous mail, Thomas Narten writes: > > > Now, consider someone in the process of deploying massive numbers of > devices (100's of millions) together with the infrastructure to > support them (e.g., wireless). With IPv4, they face not only the > necessity of using NAT to get to outside destinations, but also the > use of NAT _internally_ because there isn't enough private address > space to properly number the internal part of the infrastructure. > > > I don't know about you, but it scares me to read the various forecasts > about how wireless will transform the landscape over the next few > years. E.g., more wireless phones with internet connectivity than > PCs. The numbers are just staggering and the associated demand for > addresses will be astonishing. We ain't seen nothing yet. > The basic assumptions in your answer are: 1) wireless devices will need an IP address. 2) Wireless devices will need to run TCP over IP for doing file transfer, web browsing etc. These are neither necessary nor desirable solutions for wireless data or voice devices providing data. Most end users don't care whether their wireless email comes using an IP address or using a GSM ID or a Re-FLEX capcode. Wireless standards folks, if they want, can continue to keep NAT and IPv6 addresses away from end wireless devices. Cheers, --brijesh Ennovate Networks Inc.
Re: multihoming (was Re: draft-ietf-nat-protocol-complications-02.txt)
Thomas Narten writes: | The point of the IPv6 addressing architecture is to make that | sort of multihoming a _possibility_ and an _optimization_ rather | than a _requirement_. In a purely technical sense, redundancy of any sort is an _optimization_ rather than a _requirement_. There is absolutely nothing in IPv4 that _requires_ any entity to be multiply homed or multiply connected at all. It seems out of touch with reality to rest on the argument that multihoming by entities too small to qualify for a (scarce) TLA needn't be considered from first principles because such a multihoming is an _optimization_ and isn't really required. | In contrast, today's | IPv4 has lots of long prefixes in the DFZ with no clear way of placing | an upper bound on the number of prefixes that must be maintained in | the DFZ to provide reachability to all sites. In IPv6, the small | number (8K's worth) of TLAs should do the trick. This sounds like virtue without sacrifice - ecologically correct routing at zero cost, except to "polluters" who are in it only for themselves, out to _optimize_ their perfectly sound single connectivity. Great! In this polluter-pays world, if you have a TLA assignment and you change your topology so that your TLA prefix is announced to my network from two directions instead of one, you are able to influence my routing decision in how traffic I generate will return to you. If you do not have a TLA assignment and you change your topology, I cannot see that, because I implement the standardized /19 (oops, TLA) filter. Unless you pay me. Cool. Obviously, I cannot be too critical of this approach, because it is precisely what I tried (and failed) to do with the /19 (and earlier /18) filters in Sprintlink. The horrendous failure of those filters was the inability on my part to add economics to the mix, and to allow organizations to offer some consideration in exchange for a relaxing of the filtering policy. 
This failure turns out not to be simply local -- there is no reasonable scheme available to settle with one, two or three filtering networks, let alone tens or hundreds or thousands. So when someone has a reason to want to pay for an _optimization_, there is no practical means to do so, and therefore there are technical reasons for not imposing them, as well as merely really bad P.R. ones. In the absence of a market, it is very hard to argue that "the market" will sort things out. Engineers shouldn't resort to belief in the divine Invisible Hand when the mechanisms and rules of a market do not exist yet. There was also a backing-away from the original filtering policy. The step back from /18 to /19 happened because the place where economics was working best -- the RIPE registry -- was allocating nothing smaller than /19s. /19s were chosen because they best fit the size of an initially-multihoming entity, and /18s seemed to be much too big an allocation. The initial allocation of a /19 was based on a simple market principle: if you were willing to pay $x to the registry, you get a /19. Come back when you want more, we'll talk about it. As the registries converged on the model of charging for registering standard-or-shorter prefixes, the /19 filters merely became a self-defensive measure to avoid hearing accidentally-announced long prefixes. The TLAs are much too big for most initially-multihoming entities, and thus the TLAs themselves are essentially irrelevant and ultimately meaningless, in the same way that the /18s were. Today's TLAs are tomorrow's /8s, as observed by Bill Manning. | As others have pointed out, IPv6 is also developing a multiple | addresses per end-node approach to multihoming. This is pushing the NAT function of multihoming-using-NATs-and-PA-space into the end hosts.
The problem is that each time a new TLA is connected to, a multihomed entity that does not qualify for a large enough allocation will have to convince all the devices covered by the original address space to now adopt a 2nd, 3rd or nth address. In a sizable NLA, where the devices are not all under the control of the NLA's administrators, this seems pretty challenging. (It's really cool for a sizable dialup provider!) Worse, the NLA's administrators are STILL bereft of a way to influence the routing decision made by a distant TLA towards the multiply-addressed end hosts. That is, if I want traffic from AboveNet's TLA to come in via Sprintlink and traffic from Exodus's TLA to come in via GTEI, how do I get the multiply-addressed hosts that I do not control, and the various TLAs and NLAs, to cooperate in that? (For example, how could I get a host owned by some customer's dialup customer to use an address with Sprintlink's TLA when talking to www.above.net, and to use an address with GTEI's TLA when talking to www.exodus.net?) Section 5 of draft-ietf-ipngwg-default-addr-select-00.txt doesn't have very much meat on this topic... | > IPv6 does not solve the multihoming problem. In
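The coordination problem described above is roughly what the default address selection work attempts to handle with a policy table: the host picks a source address based on which destination it is talking to. A minimal sketch of that idea, with all prefixes being made-up documentation placeholders rather than real allocations:

```python
import ipaddress

# Hypothetical policy table: destination prefix -> preferred source prefix.
# In the scenario above, each TLA's destination space would map to the
# source address delegated from that same TLA.
POLICY = [
    (ipaddress.ip_network("2001:db8:a::/48"), ipaddress.ip_network("2001:db8:1::/48")),
    (ipaddress.ip_network("2001:db8:b::/48"), ipaddress.ip_network("2001:db8:2::/48")),
]

def select_source(dest, sources):
    """Return the configured source address whose prefix the policy
    table prefers for this destination; fall back to the first source."""
    dest = ipaddress.ip_address(dest)
    for dest_net, src_net in POLICY:
        if dest in dest_net:
            for s in sources:
                if ipaddress.ip_address(s) in src_net:
                    return s
    return sources[0]
```

The sketch also shows why the objection bites: the policy table has to live on, and be kept current on, every multiply-addressed host, including ones the NLA's administrators do not control.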
Re: draft-ietf-nat-protocol-complications-02.txt
> | I don't know about you, but it scares me to read the various forecasts > | about how wireless will transform the landscape over the next few > | years. > It scares me how much people buy into hype. If only 10% of the hype is true, I'm still scared. Will it be cellular phones? Maybe, maybe not. If not, there are still a bunch of other wanna-be next-big technologies behind it. Don't underestimate the potential for consumer-oriented per-person technologies to create really astonishing demands. Thomas
Re: draft-ietf-nat-protocol-complications-02.txt
In message <[EMAIL PROTECTED]>, Thomas Narten typed: >>> IPv6's claimed big advantage - a bigger address space - turns out >>> not to be an advantage at all - at least in any stage much short of >>> complete deployment. >>Not surprisingly, I disagree. right, Noel's wrong. the amount of address translation state you have to keep (and synchronise between failover NATs etc) per active session decreases as the percentage of hosts that are native IPv6 increases (and obviously also decreases as the absolute number of hosts increases but new hosts are all v6) - in your scenario (a likely wireless 3G deployment) this could happen pretty fast. the amount of disconnect the v4 legacy machines will see, because the state maintenance will fail (as any large system does partially, ALL THE TIME), will increase, and possibly quite fast... routing state is already in a bad enough state without adding address translation state to it... NATs are not only bad e2e karma, they are bad tech, just like x.25 and atm. >>> Here's why: >> >>> >> if you have a site which has more hosts than it can get external IPv4 >>> >> addresses for, then as long as there are considerable numbers of IPv4 >>> >> hosts a site needs to interoperate with, *deploying IPv6 internally to >>> >> the site does the site basically no good at all*. >> >>Actually, in the above scenario, NAT is already a requirement for IPv4 >>communication with the outside world. So, if you switch to IPv6 >>internally, use IPv6-IPv4 NATPT (i.e., combination of NAT and IPv6 to >>IPv4 translation) you have pretty much the same >>functionality/limitation as with IPv4 NAT. >> >>Now, consider someone in the process of deploying massive numbers of >>devices (100's of millions) together with the infrastructure to >>support them (e.g., wireless).
With IPv4, they face not only the >>necessity of using NAT to get to outside destinations, but also the >>use of NAT _internally_ because there isn't enough private address >>space to properly number the internal part of the infrastructure. >> >>In this scenario, IPv6 internally at least gives them end-to-end-ness >>internally (plus scalability, more robustness, etc., etc.), something >>they can't get with IPv4. And it gives them the same set of >>issues/headaches when talking to the outside world that they would >>have if just using IPv4. >> >>I don't know about you, but it scares me to read the various forecasts >>about how wireless will transform the landscape over the next few >>years. E.g., more wireless phones with internet connectivity than >>PCs. The numbers are just staggering and the associated demand for >>addresses will be astonishing. We ain't seen nothing yet. >> >>Thomas >> cheers jon
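Jon's point about shrinking translation state can be put in a back-of-the-envelope model. The independence assumption below is mine and purely illustrative: if only sessions with at least one legacy-v4 endpoint need a NAT binding, the state a NAT (and its failover peer) must carry falls off quadratically as the native-v6 share of hosts grows.

```python
def nat_bindings(total_sessions, v6_fraction):
    """Toy model: only sessions with a legacy-v4 endpoint need a NAT
    binding; native v6-to-v6 sessions need none. v6_fraction is the
    share of hosts that are native IPv6; assuming the two endpoints of
    a session are chosen independently, both are v6 with probability
    v6_fraction ** 2, and those sessions bypass the NAT entirely."""
    return total_sessions * (1 - v6_fraction ** 2)
```

Under this sketch, at 50% native-v6 deployment only a quarter of sessions have escaped the NAT, which is consistent with the "this could happen pretty fast" claim being about the later stages of deployment.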
Re: draft-ietf-nat-protocol-complications-02.txt
Thomas Narten writes: | I don't know about you, but it scares me to read the various forecasts | about how wireless will transform the landscape over the next few | years. It scares me how much people buy into hype. I seem to recall reading forecasts about how many bright shiny trendy things would transform the landscape over the next few years, if only you invested in them. What you are asking for is investment in IPv6 based on your fear and uncertainty about an explosion in the number of end-systems that will need to communicate with the Internet as a whole, rather than within an individual addressing domain (or set thereof), with gateways if and as necessary between these addressing domains. | E.g., more wireless phones with internet connectivity than | PCs. The numbers are just staggering and the associated demand for | addresses will be astonishing. We ain't seen nothing yet. Wireless phones? Dead technology. My prediction: zero phones, but rather a lot of pocket-sized computers which run a variety of local and network applications, with voice simply being one of those. IETF-conference-like wireless LANs instead of cellular base-stations, DHCP, connections which can survive renumbering when moving from place to place (or which are short/stateless enough not to care), something IMPP-like for registering at a rendezvous point to accept incoming connections, and whatever evolves out of these technologies will simply kill GSM, UMTS and the like, all without forcing a dubious fundamental change upon the rest of the Internet. IPv6ification is one such dubious fundamental change, driven in part by people who have succumbed to the fear that the hypesters behind many of the claims of these soon-to-be-dead bellhead systems really would like everyone to believe in. It inflates their share prices, and deflates critical analysis of various approaches to Internet scalability on ALL metrics (not just size of header fields). Sean.
Re: multihoming (was Re: draft-ietf-nat-protocol-complications-02.txt)
> Sean said that traditional multihoming would be "very difficult". Actually, Sean's statement was that "IPv6's current addressing architecture" makes multihoming very difficult, and that is the point that is untrue. > You replied that "This is not true" (which I take to mean > that multihoming is not very difficult), and then go on to describe > something that sounds very difficult to me (maintain longer prefixes, > make multihomed customers pay for it, get the ISPs to agree to > handle such longer prefixes). This is exactly what we have in IPv4 today, except that multihomed customers aren't really paying. And maybe they won't pay in IPv6 either. The market can sort that out. The point of the IPv6 addressing architecture is to make that sort of multihoming a _possibility_ and an _optimization_ rather than a _requirement_. In contrast, today's IPv4 has lots of long prefixes in the DFZ with no clear way of placing an upper bound on the number of prefixes that must be maintained in the DFZ to provide reachability to all sites. In IPv6, the small number (8K's worth) of TLAs should do the trick. That still leaves room for many more "value-add" routes when one considers that today's IPv4 already can/must handle 75K worth of prefixes in the DFZ. The difference is that in IPv4 it is a requirement to maintain all 75K+ routes in order to just get reachability. > You say that "multihoming is still quite possible", but nobody said > that it was impossible, just difficult. For me, your statement > certainly reinforces the idea that multihoming in IPv6 is indeed > very difficult. As others have pointed out, IPv6 is also developing a multiple addresses per end-node approach to multihoming. This approach appears to be more scalable than the current IPv4 solution. Indeed, people seem to point to multihoming as one of the biggest threats to continued scaling of the IPv4 routing infrastructure.
So one of IPv6's multihoming approaches is no worse than IPv4, while another appears to be significantly better. So once again (in contrast to assertions made by some), IPv6 *does* have some features that should make IPv6 routing scale better than it does in IPv4. > I read your statement as follows: > IPv6 does not solve the multihoming problem. Instead, it tries > to minimize the damage by: > > 1. discouraging the use of multihoming, primarily by making > multihomed customers pay more for it. > 2. forcing paths to multihomed sites to be less efficient (at > least for all but one of the ISP connection points) and/or, > 3. limiting the regions of the internet for which multihoming > is effective for a given customer. > Is this an accurate representation? Absolutely not, as I hope has now been made clear. Thomas
Re: draft-ietf-nat-protocol-complications-02.txt
> IPv6's claimed big advantage - a bigger address space - turns out > not to be an advantage at all - at least in any stage much short of > complete deployment. Not surprisingly, I disagree. > Here's why: > >> if you have a site which has more hosts than it can get external IPv4 > >> addresses for, then as long as there are considerable numbers of IPv4 > >> hosts a site needs to interoperate with, *deploying IPv6 internally to > >> the site does the site basically no good at all*. Actually, in the above scenario, NAT is already a requirement for IPv4 communication with the outside world. So, if you switch to IPv6 internally and use IPv6-IPv4 NATPT (i.e., a combination of NAT and IPv6-to-IPv4 translation), you have pretty much the same functionality/limitation as with IPv4 NAT. Now, consider someone in the process of deploying massive numbers of devices (100's of millions) together with the infrastructure to support them (e.g., wireless). With IPv4, they face not only the necessity of using NAT to get to outside destinations, but also the use of NAT _internally_ because there isn't enough private address space to properly number the internal part of the infrastructure. In this scenario, IPv6 internally at least gives them end-to-end-ness internally (plus scalability, more robustness, etc., etc.), something they can't get with IPv4. And it gives them the same set of issues/headaches when talking to the outside world that they would have if just using IPv4. I don't know about you, but it scares me to read the various forecasts about how wireless will transform the landscape over the next few years. E.g., more wireless phones with internet connectivity than PCs. The numbers are just staggering and the associated demand for addresses will be astonishing. We ain't seen nothing yet. Thomas
Re: draft-ietf-nat-protocol-complications-02.txt
Steve Deering writes: | Sheesh -- we get flamed for trying to impose a limit on the number of TLAs | and we get flamed for the possibility that the number of TLAs might not be | limited... The salient point here is that the current inter-domain routing architecture is not robust to growth in the number of entities which are multihomed. This is a failing of both IPv4 and IPv6. There is no known dynamic IDR system which will do a substantially better job with CIDR-style addressing, which both IPv4 and IPv6 are "cursed" with. However, there are known dynamic IDR systems which are robust to increasing amounts of multihoming; however, these generally require an addressing architecture that is fundamentally different from CIDR, and/or a scheme in which the host-router and router-router packet formats differ significantly. Sean.
Re: draft-ietf-nat-protocol-complications-02.txt
From: Bill Manning <[EMAIL PROTECTED]> > So, of the 7763 visible servers, 45 are improperly configured in the > visible US. tree. That's 4.53% of those servers being "not well > maintained." > > Keith, These two data points seem to bear your assertion out. It is always possible to do something poorly. You can take the best engineered product and misconfigure it, or otherwise not maintain it. Sorta like not changing the oil. On the other hand, how many times do you see a name service failure for someone in the Keynote 40? How often do you get a failure for a site that uses a commercial service like Akamai? If you still get lots of failures when people are using the mechanism as recommended, then the mechanism itself is poorly designed. It also strikes me that DNS is continuing to mature, and that things are getting better, as more products come to the market, but this is more gut feel. There are some open questions out there: does DNS provide sufficient granularity? Are its semantics rich enough for a mobile world? People are currently playing tricks with DNS that PVM had not envisioned, and Paul Vixie loves to say something like, "DNS is not the droid you're looking for" (forgive me, Paul, if I didn't get that quite right). Just some thoughts...
Re: draft-ietf-nat-protocol-complications-02.txt
On Tue, 25 Apr 2000 08:18:20 PDT, Bill Manning said: > The 2q2000 data for the in-addr tree shows 77402 unique > servers answering for 693,337 zones. > 19515 servers blocked/refused data. Of the 57887 that > answered, these are the numbers for improper configuration: > > BAD_SERVER: 4278 > FORMERR: 8 > NXDOMAIN: 28 > > So, of the 57,887 visible servers, 4314 are improperly configured > in the visible in-addr.arpa. tree. That's 7.45% of the > servers being "not well maintained." I know of no similar data > (... for forward data ...) Rather than continue to whine about lack of data, this is the .US forward zone data: 9928 unique servers answering for 33299 zones. 2165 servers blocked or refused data. Of the 7763 that answered, these are the numbers for improper configuration: BAD_SERVER: 14 FORMERR: 2 NXDOMAIN: 29 So, of the 7763 visible servers, 45 are improperly configured in the visible US. tree. That's 4.53% of those servers being "not well maintained." Keith, These two data points seem to bear your assertion out. in-addr.arpa: 7.45% - poorly managed us.: 4.53% - poorly managed --bill
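For reference, the percentages follow directly from the raw counts in the survey. A quick recomputation (note that 45 improperly configured out of 7763 answering servers works out to roughly 0.6%, so the 4.53% figure quoted for the .US tree appears to be an arithmetic slip; the 7.45% in-addr.arpa figure checks out):

```python
def misconfig_rate(bad_server, formerr, nxdomain, answered):
    """Percentage of answering servers that showed an improper
    configuration (sum of the three error categories in the survey)."""
    return 100.0 * (bad_server + formerr + nxdomain) / answered

in_addr = misconfig_rate(4278, 8, 28, 57887)  # in-addr.arpa tree: ~7.45%
us_zone = misconfig_rate(14, 2, 29, 7763)     # .US forward tree
```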
Re: draft-ietf-nat-protocol-complications-02.txt
At 11:00 AM -0700 4/25/00, David R. Conrad wrote: > > At 8:48 AM -0700 4/25/00, Bill Manning wrote: > > >and this is different from only carrying the 253 usable /8 prefixes in > > >IPv4 how? > > > > The set of customers who have addresses under a given IPv4 /8 prefix greater > > than 127 do not all aggregate into a single topological subregion (e.g., a > > single ISP), and therefore more granular routes must be widely disseminated > > to make those customers reachable. That's the difference. > >No. This is a historical feature that IPv6 alleviates by being able to start >over with a clean slate. You could (at least theoretically) emulate this in >v4. Of course. I interpreted Bill's question as asking how this would be different from limiting IPv4 prefix advertisements to only /8s *today*, i.e., without renumbering the IPv4 Internet, so I answered accordingly. I didn't think to interpret the question the way you did, because the answer is so obvious it wouldn't have made sense to ask. >The difference is that v6 gives you the option of significantly more TLAs than >v4 can ever have. Of course, this isn't really a feature. Right. IPv4 could in theory have 2^32 TLAs and IPv6 could in theory have 2^128 TLAs. Are you saying that 2^32 TLAs would be OK? Sheesh -- we get flamed for trying to impose a limit on the number of TLAs and we get flamed for the possibility that the number of TLAs might not be limited... Steve
Re: draft-ietf-nat-protocol-complications-02.txt
>From: Keith Moore <[EMAIL PROTECTED]> > >> If people's livelihood depends on something, they're more likely to insure >> it actually works. > >that's a good point. but it's one thing to make sure that DNS mappings >for "major" services are correct, and quite another to make sure that >the DNS mappings are correct in both directions for every single host. > >even the DNS names for major services may not be well maintained. >at one time I did a survey of the reasons for mail bounces >for one of my larger mailing lists. about half of the mail bounces >seemed to be due to configuration errors. about half of those >seemed to be due to DNS configuration errors - e.g. MX records pointing >to the wrong host, zone replicas not being kept in sync, zone >replicas which were different but with the same serial number. > >> In your view, what is it in the DNS protocol(s) that results in a lack of >> reliability? > >the reliability problems are mostly not the protocol...though the protocol >does have limitations if you want to use it (as some have proposed) to >support host or process mobility. and in the face of even moderate >packet losses DNS queries can take a very long time. > >mainly it's the fact that DNS is maintained as a separate entity. >if you really want it to be in sync with reality, you need some >mechanisms to ensure that updates happen automagically, and/or that >configuration errors are automatically and quickly detected and >the information about the error gets to the person who can fix it. Problems with DNS maintenance arise from the fact that DNS must be maintained manually. If we return to routing-address resolution, it can be deployed on a maintenance-free basis, because all the needed information is already concentrated in routers. There are no human-like names there, and allocation of route addresses can be done automatically (via a tree-based strategy from some root point in the network). - Leonid Yegoshin, LY22
Re: draft-ietf-nat-protocol-complications-02.txt
I think there are interesting things happening in DNS. I wrote a not very good paper for AUUG a few years back noting an error rate in DNS above 10% for the mirror site I do stats on. Reviewing the figures for yesterday I get 9.75% unresolvable, which is pretty close to Bill Manning's figure. But then I checked over the last 116 days since the start of the year. I find that for a deployed site, logging IP and DNS name into CLF format (so I can use analog) for ftp, rsync and www I get: avg=15.288431, lo=4.213000, hi=33.265000 I think Bill is saying what really exists in DNS. I'm saying what a box deployed in the field can expect to see. It's pretty damn variable, and it's a lot worse than the DNS records would themselves suggest. Remember that DNS is time-bound with timeouts in client code, uses UDP and is subject to the same kind of loss issues as the general datapath. (this is on a client base of around 3000 hosts, weighted for Australia/NZ) cheers -George -- George Michaelson | DSTC Pty Ltd Email: [EMAIL PROTECTED]| University of Qld 4072 Phone: +61 7 3365 4310| Australia Fax: +61 7 3365 4311| http://www.dstc.edu.au
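The field measurement George describes can be reproduced from any CLF-style log. A minimal sketch of the heuristic (assumed, not George's actual tooling): if the client field of a log line is still an IP literal, the server could not, or did not, resolve it to a name.

```python
import ipaddress

def unresolved_share(log_lines):
    """Percentage of CLF log entries whose client field is still an
    IP literal, i.e. the reverse lookup never produced a name."""
    total = unresolved = 0
    for line in log_lines:
        host = line.split()[0]  # first CLF field: client host or address
        total += 1
        try:
            ipaddress.ip_address(host)
            unresolved += 1
        except ValueError:
            pass  # field parsed as a hostname, so it resolved
    return 100.0 * unresolved / total if total else 0.0
```

Because each log line reflects a live lookup made under timeouts and UDP loss, a figure computed this way will naturally run higher, and vary more, than what a static audit of the DNS records themselves would suggest.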
Re: multihoming (was Re: draft-ietf-nat-protocol-complications-02.txt)
Paul Francis wrote: > Sean said that traditional multihoming would be "very difficult". > > You replied that "This is not true" (which I take to mean > that multihoming is not very difficult), and then go on to describe > something that sounds very difficult to me (maintain longer prefixes, > make multihomed customers pay for it, get the ISPs to agree to > handle such longer prefixes). But these are the same problems that you have with multihoming in IPv4. So the assertion is that IPv6 doesn't make it any worse, and tries to make it less necessary. -- /===\ |John Stracke| http://www.ecal.com |My opinions are my own. | |Chief Scientist |==| |eCal Corp. |"Fate just isn't what it used to be." --Hobbes| |[EMAIL PROTECTED]| | \===/
Re: draft-ietf-nat-protocol-complications-02.txt
On Tue, 25 Apr 2000, David R. Conrad wrote: > Keith, > > > a 92.55% reliability rate is not exactly impressive, at least not in > > a favorable sense. > > > > it might be tolerable if a failure of the PTR lookup doesn't cause > > the application to fail. > > If people's livelihood depends on something, they're more likely to insure it > actually works. Very little depends on PTR records doing anything (with a > relatively few exceptions of sites that configure it otherwise). The fact > that Bill's getting a 92.55% reliability figure for something that the vast > majority of people use to get something other than IP addresses in logfiles is > actually surprisingly good. Except that figure is just for the delegation of the reverse to somewhere that answers and knows something. If you look at it on a host by host basis, you will find a lot more hosts with incorrect or no reverse DNS. I think that the separation between reverse and forward DNS is a big problem in keeping things in synch. Sure, this is a UI issue in some ways. But I can assure you that if, by default, people just had to edit a single file and it would automatically do both forward and reverse mappings, reverse DNS would be a lot more reliable. Sure, there are cases when you need to manually setup what you really want anyway. It is even worse when forward and reverse DNS are controlled by different groups. But, as you say, if it has to work period then it will work. If only 5% of the things require it, however, those 5% will always be getting broken. In some ways, I think DNS is too reliable, strictly on the protocol level. I would be interested to see numbers from a sample of randomly selected domains regarding what percentage of their listed nameservers were working properly, and by that I mean returning a SOA for the zone or something equally mundane. I can start off with a good example netscape.com: 75%. 
1 of their listed nameservers hasn't been responding (well, to DNS requests; handles HTTP requests just fine...) for months. I am completely unable to get such a major domain to fix their broken DNS setup, despite the fact that it causes me (not to mention the other couple of people who ever resolve netscape.com like, say, anyone who ever starts up Navigator, regardless of what their home page is set to) significantly noticeable delays at times... Also of interest would be looking at the difference in nameservers between what a zone itself lists for the zone and what the parent lists for it. That is always the problem when designing a protocol, of course. If you make it robust, you end up with a net loss when things are "working normally" compared to if it were less robust and forced people to fix their brokenness. But that is getting off topic, whatever the topic really is...
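The forward/reverse synchronization problem described above can be sketched as a simple offline audit. This is a hypothetical illustration only; the `audit_zones` helper and all zone data are made up, and a real tool would query live nameservers rather than dicts:

```python
# Sketch of a forward/reverse DNS consistency audit. Zone data is
# supplied as plain dicts standing in for real A and PTR record sets.

def audit_zones(forward, reverse):
    """Return hostnames whose A record has no matching PTR record.

    forward: dict mapping hostname -> IP address (A records)
    reverse: dict mapping IP address -> hostname (PTR records)
    """
    broken = []
    for host, addr in forward.items():
        if reverse.get(addr) != host:
            broken.append(host)
    return sorted(broken)

# Example zone (invented names) with one host missing its PTR entry:
a_records = {"www.example.com": "192.0.2.10",
             "mail.example.com": "192.0.2.25"}
ptr_records = {"192.0.2.10": "www.example.com"}

print(audit_zones(a_records, ptr_records))  # ['mail.example.com']
```

If editing a single file really did drive both maps, the `reverse` dict here would be derived from `forward` and this check could never fail; the audit only exists because the two are maintained separately.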
Re: draft-ietf-nat-protocol-complications-02.txt
On Tue, 25 Apr 2000 12:19:49 PDT, "David R. Conrad" said: > If it "isn't very good", try using the Internet without it for a bit. > > In your view, what is it in the DNS protocol(s) that results in a lack of > reliability? Actually, in my experience, the protocol isn't the biggest problem. To paraphrase an old joke - "95% of all automobile accidents are caused by a mechanical failure - a loose nut behind the wheel". It's like open mail relays - the majority of sites *won't* fix the problem until the cost of fixing it is less than the pain of not fixing it. We had a number of hosts at our site listed in ORBS and similar databases for MONTHS that cleaned up their act quickly once they couldn't post to Listserv. -- Valdis Kletnieks Operating Systems Analyst Virginia Tech
Re: draft-ietf-nat-protocol-complications-02.txt
> > I don't see what you're getting at. the outside sites may be running v4 > > with a limited number of external addresses ... if they are running v6 > > they will have plenty of external addresses. > > Not external *IPv4* addresses, they won't - which is what kind of addresses > they need to communicate with other IPv4 sites. IPv4 vs. IPv6 isn't an either/or - it's quite reasonable for a site to run IPv4 (with NAT if necessary) alongside IPv6. I expect that most sites already on the net will do exactly this - use IPv4 with NAT for traditional applications that NAT supports and use IPv6 for the other ones. Keith
Re: draft-ietf-nat-protocol-complications-02.txt
> From: Keith Moore <[EMAIL PROTECTED]> > I don't see what you're getting at. the outside sites may be running v4 > with a limited number of external addresses ... if they are running v6 > they will have plenty of external addresses. Not external *IPv4* addresses, they won't - which is what kind of addresses they need to communicate with other IPv4 sites. > if the remote sites are *only* running IPv4, having v6 locally won't > help much. ... it's generally true that if you're talking between a > v6-only and v4-only host you have to accept the limitations of NAT. That's the point I've been trying to make. Noel
Re: draft-ietf-nat-protocol-complications-02.txt
On Tue, 25 Apr 2000, J. Noel Chiappa wrote: > > From: Brian Lloyd <[EMAIL PROTECTED]> > > I was thinking about your message, and something from my exchanges > with Keith Moore suddenly popped into my head with great clarity. I > think it's the answer to your question immediately below - and it has > some very grave consequences. > > ... > > > whatever happened to IPv6? 128 bit addresses would certainly allow us > > to continue using IP addresses as endpoint identifiers thus eliminating > > the need for NAT. It seems that this is a more reasonable solution than > > trying to make NAT work under all circumstances. > > The basic key *architectural* problem with NAT (as opposed to all the > mechanical problems like encrypted checksums, etc, some of which can > be solved with variant mechanisms like RSIP), as made clear by Keith's > comments, is that when you have a small number of external addresses > being shared by a larger number of hosts behind some sort of > "address-sharing" device, there's no permanent association between an > address and a host. It's *that* that causes many of the worst problems > - problems for which there *is* no good work-around (because the > problem is fundamental in nature). Right. NAT, or any other technology that breaks the static relationship between identifiers and the hosts they identify, is basically flawed. You can never again count on the address/identifier as a means to identify a given host. I saw that one coming back in '92 and managed to scrounge a class-B for my little grassroots ISP so that I wouldn't have to deal with that problem nor would my customers. I am so glad I did that. It is still paying dividends to me today. > Now, if you have a site which has more hosts than it can get external > IPv4 addresses for, then as long as there are considerable numbers of > IPv4 hosts a site needs to interoperate with, *deploying IPv6 > internally to the site does the site basically no good at all*. Why? That seems obvious to me. 
IPv6 over IPv4 is NAT all over again. Once again you are trying to cram 100 kg of routing drek into a 5 kg bag. > Because for interactions with those external IPv4 hosts (who will be > the vast majority of the hosts one wants to talk to, in the initial > stages of deployment), *you have exactly the same architectural > problem*. No matter what IPv6<->IPv4 interoperability mechanism you > use, you still have that same *fundamental* problem - no permanent > association between a host and an address (in this case, the IPv4 > address that it *has* to use to communicate with an IPv4-only host). Right. But it is safe to go the other way ... oops, no it's not. I still have only 32 bits locally to identify all those remote hosts. Sure I can give every local host a virtual IPv6 address by making the 'v4 address the low order 32 bits of the 'v6 address but that doesn't help my local host find a remote host across the 'v6 network. But it does seem that we could begin deploying 'v6 in the backbone piecemeal while leaving the high order 96 bits zeroed out. Sun, Linux, MacOS, and Win-whatever add IPv6 to their stacks as part of their ongoing upgrade and you have the means for a relatively peaceful transition. > When one looks at the overall business/economic case for deploying > IPv6, in the light of this, the results are fairly devastating - and > explain perfectly what we've been seeing for the last couple of years > (rapid increase in the number of NAT boxes, and basically no traction > for IPv6). There are hard decisions to be made. You can begin to do it now for $$$ or you can do it later for $$. What is the future cost and what is the future value of money? It seems to me that the business folk can understand that equation. > A site considering deploying IPv6 is in one of two cases: it already > has enough IPv4 addresses, or it doesn't. In the former case, what's > the upside to deploying IPv6? 
Autoconfiguration, etc aren't enough to > outweigh all the costs of switching (to software which is less > available, less tested, less tuned, etc). It may be cheaper to deploy IPv6 now because I don't have as many hosts now as I will have to convert in the future. > In the latter case, it's equally as bad: they are going to have to > struggle with the problems inherent in IPv4-address-sharing technology > whether they go with IPv6 or not, and again, the remaining advantages > of IPv6 (autoconfig, etc) are outweighed by the costs. > > I'm still sorting through the implications from this, trying to put > them all with equal clarity, but one thing that does seem clear is > that this kind of upgrade model is economically unworkable in the > current large-scale Internet. Exactly what will work is something that > needs to be pondered for a while. What works is a function of the motivation of the people involved. What I am about to say will undoubtedly violate the libertarian and anarchistic bent of most of the pe
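The "low order 32 bits" embedding mentioned in the message above has a standard form: IPv4-mapped IPv6 addresses (::ffff:a.b.c.d). A small sketch using Python's ipaddress module; the helper name and example address are illustrative only, and (as the message itself points out) the embedding identifies a v4 host but does not by itself make it reachable across a v6 network:

```python
import ipaddress

def v4_mapped(ipv4: str) -> ipaddress.IPv6Address:
    # Embed an IPv4 address in the low-order 32 bits of an IPv6
    # address, under the well-known ::ffff:0:0/96 mapped prefix.
    return ipaddress.IPv6Address(f"::ffff:{ipv4}")

addr = v4_mapped("198.51.100.42")
# The original IPv4 address is recoverable from the low 32 bits:
print(addr.ipv4_mapped)  # 198.51.100.42
```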
Re: draft-ietf-nat-protocol-complications-02.txt
> IPv6's claimed big advantage - a bigger address space - turns out not to be an > advantage at all - at least in any stage much short of complete deployment. that's an exaggeration. if you have an app that needs IPv6, you don't need complete deployment of IPv6 throughout the whole network to use that app. you just need for the hosts that want to support that app to have IPv6 (on many hosts the IPv6 stack can be installed at the same time as the app itself), and you need a way to route IPv6 packets between those hosts. and as long as a site has one external public IPv4 address, 6to4 and 6over4 go a long way toward providing the ability for all of the internal hosts at that site to run IPv6 if their local stack supports it. of course, if the backbone supports v6 natively, that's a nice optimization. > >> if you have a site which has more hosts than it can get external IPv4 > >> addresses for, then as long as there are considerable numbers of IPv4 > >> hosts a site needs to interoperate with, *deploying IPv6 internally to > >> the site does the site basically no good at all*. > > > I do think that the main incentive to deploy v6 will come from the need > > to communicate with global addresses to points *outside* of folks' > > internal networks. > > Huh? > > If those outside sites are running IPv4, deployment of IPv6 does the people > who deployed it basically no good at all over IPv4 NAT - because the > fundamental problem (of not having enough external addresses) is the same, > regardless of whether the internal protocol is IPv4 or IPv6. I don't see what you're getting at. the outside sites may be running v4 with a limited number of external addresses - not enough addresses for each host. but if they are running v6 they will have plenty of external addresses. 
> Thus, the problems caused by that limitation (many of which you so well > articulated in a previous message, such as the need to go through a > rendezvous to set up translation state) will also be the same, regardless of > whether the internal protocol is IPv4 or IPv6. with 6to4 the mapping is algorithmic so there's no need to go through a rendezvous, or do a network query, to set up translation state. > > deploying IPv6 internally .. will of course do some good if the site > > has applications on internal hosts that need to communicate with > > external hosts using global addresses. if you're .. point [is] that > > there's little purpose in having your own IPv6 island > > Deployment won't do any good if the people it's trying to communicate with > externally are running IPv4 if the remote sites are *only* running IPv4, having v6 locally won't help much. but the fact that they're running v4 makes it easy for them to bootstrap to v6 by using 6to4. but it's generally true that if you're talking between a v6-only and v4-only host you have to accept the limitations of NAT. > So, if i) a company has an acceptable mechanism to interoperate, ii) they > won't see any big improvement from IPv6<->IPv6 operation, and iii) there's no > advantage in the IPv4 interoperation mechanism to be gained by deployment of > IPv6 internally - then where's the incentive to deploy? as I see it the incentives to deploy v6 include the following: a) ability to run applications that won't work over IPv4 in practice due to a shortage of addresses and the resulting inability to use addresses as connection endpoint identifiers; ability to write distributed apps that scale better than hub-and-spoke apps. 
(distributed games, conferencing systems, instant messaging, high performance distributed computations) b) ability to deploy new networks which can directly address large numbers of devices, where this would be infeasible using IPv4 (might include traditional Internet in emerging areas of the globe, IP-enabled cellphones and mobile devices, and new networks intended to support instrumentation/telemetry - e.g. monitoring of power meters) c) ability to individually and remotely address every device in a network which is currently accessible only via a NAT or not at all - i.e. change a NAT-based client-only LAN into a LAN which can potentially support client and/or server operation from every host. doing this gives you the ability to support new applications or support existing applications more efficiently (internet phone, fax to the desktop, instant messaging, etc.); it also allows you to do new things with your network (e.g. for home networks; be able to program or diagnose your vcr remotely) d) ability to communicate with the new networks in (b) and (c) from the rest of the Internet. e) ability to deploy new applications without installing a new ALG in the NAT. 6to4 becomes the universal ALG. if the Internet were going to stay just like it is today - email and web - there would be no need to deploy v6. but the Internet won't sta
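The algorithmic mapping Keith refers to is the 6to4 scheme (then an Internet-Draft, later published as RFC 3056): a site's entire /48 IPv6 prefix is derived directly from its single public IPv4 address under 2002::/16, so no rendezvous and no network query are needed to set up state. A sketch of the derivation; the function name is illustrative, not from any real implementation:

```python
import ipaddress

def sixtofour_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Derive a site's 6to4 /48 prefix (2002:V4ADDR::/48) from its
    public IPv4 address -- purely algorithmic, no lookup required."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    # 2002::/16 in the top 16 bits, the IPv4 address in bits 16..47:
    prefix = (0x2002 << 112) | (v4 << 80)
    return ipaddress.IPv6Network((prefix, 48))

print(sixtofour_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

The reverse direction is just as mechanical: given a 2002::/16 destination, a 6to4 router extracts bits 16..47 to find the IPv4 tunnel endpoint, which is exactly why no per-connection translation state is needed.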
multihoming (was Re: draft-ietf-nat-protocol-complications-02.txt)
> > Sean Doran <[EMAIL PROTECTED]> writes: > > > Unfortunately, IPv6's current addressing architecture makes it very > > difficult to do this sort of traditional multihoming if one is not > > a TLA. > > This is not true. IPv6's TLA scheme has as its primary goal placing an > upper bound on the number of routing prefixes that are needed in the > > That doesn't mean that all or even any of these routes will be > optimal. An individual provider may maintain longer prefixes for > selected destinations in order to provide better connectivity, say, to > those willing to pay. Thus, traditional multihoming is still quite > possible, assuming the various ISPs that need to handle the routes > agree to do so. But the goal is to make such routes exceptions that > are value-add rather than a fundamental requirement in order to > insure global reachability for all. > Sean said that traditional multihoming would be "very difficult". You replied that "This is not true" (which I take to mean that multihoming is not very difficult), and then go on to describe something that sounds very difficult to me (maintain longer prefixes, make multihomed customers pay for it, get the ISPs to agree to handle such longer prefixes). You say that "multihoming is still quite possible", but nobody said that it was impossible, just difficult. For me, your statement certainly reinforces the idea that multihoming in IPv6 is indeed very difficult. I read your statement as follows: IPv6 does not solve the multihoming problem. Instead, it tries to minimize the damage by: 1. discouraging the use of multihoming, primarily by making multihomed customers pay more for it. 2. forcing paths to multihomed sites to be less efficient (at least for all but one of the ISP connection points) and/or, 3. limiting the regions of the internet for which multihoming is effective for a given customer. Is this an accurate representation? PF
Re: draft-ietf-nat-protocol-complications-02.txt
> From: Keith Moore <[EMAIL PROTECTED]> IPv6's claimed big advantage - a bigger address space - turns out not to be an advantage at all - at least in any stage much short of complete deployment. IPv6 deployment is going to have to be driven by IPv6's *other* features, and when you take bigger addresses out of the cost/benefit ratio, I'm even more dubious that the features that are left (autoconfiguration, etc) outweigh all the costs and risks of IPv6 conversion. It seems that you can postulate whatever level of IPv6 deployment you like (a long stretch in itself, but just for the sake of argument, let's make it) - 5%, 10%, whatever - and there's still no mechanism to drive further deployment. Here's why: >> if you have a site which has more hosts than it can get external IPv4 >> addresses for, then as long as there are considerable numbers of IPv4 >> hosts a site needs to interoperate with, *deploying IPv6 internally to >> the site does the site basically no good at all*. > I do think that the main incentive to deploy v6 will come from the need > to communicate with global addresses to points *outside* of folks' > internal networks. Huh? If those outside sites are running IPv4, deployment of IPv6 does the people who deployed it basically no good at all over IPv4 NAT - because the fundamental problem (of not having enough external addresses) is the same, regardless of whether the internal protocol is IPv4 or IPv6. Thus, the problems caused by that limitation (many of which you so well articulated in a previous message, such as the need to go through a rendezvous to set up translation state) will also be the same, regardless of whether the internal protocol is IPv4 or IPv6. > deploying IPv6 internally .. will of course do some good if the site > has applications on internal hosts that need to communicate with > external hosts using global addresses. if you're .. 
point [is] that > there's little purpose in having your own IPv6 island Deployment won't do any good if the people it's trying to communicate with externally are running IPv4 - and I *don't* include only the cases where there's a local island of IPv6. Here's the logic: As long as a substantial portion of the Internet is running IPv4 only, any site is going to have to have some mechanism to communicate with the IPv4 portion of the Internet. It may not be quite as good as native IPv6-IPv6 - *but it has to be OK, or the company is toast*. As an important corollary, there may be an incremental improvement in functionality with a pure IPv6<->IPv6 mode, but that increment is inevitably going to be minimal. And my new point is that whether the site is running IPv4 or IPv6 internally, it won't make any big difference to how well that IPv4 interoperation mechanism operates, since the fundamental problem is the same. So, if i) a company has an acceptable mechanism to interoperate, ii) they won't see any big improvement from IPv6<->IPv6 operation, and iii) there's no advantage in the IPv4 interoperation mechanism to be gained by deployment of IPv6 internally - then where's the incentive to deploy? Noel
Re: draft-ietf-nat-protocol-complications-02.txt
> If people's livelihood depends on something, they're more likely to insure > it actually works. that's a good point. but it's one thing to make sure that DNS mappings for "major" services are correct, and quite another to make sure that the DNS mappings are correct in both directions for every single host. even the DNS names for major services may not be well maintained. at one time I did a survey of the reasons for mail bounces for one of my larger mailing lists. about half of the mail bounces seemed to be due to configuration errors. about half of those seemed to be due to DNS configuration errors - e.g. MX records pointing to the wrong host, zone replicas not being kept in sync, zone replicas which were different but with the same serial number. > In your view, what is it in the DNS protocol(s) that results in a lack of > reliability? the reliability problems are mostly not the protocol...though the protocol does have limitations if you want to use it (as some have proposed) to support host or process mobility. and in the face of even moderate packet losses DNS queries can take a very long time. mainly it's the fact that DNS is maintained as a separate entity. if you really want it to be in sync with reality, you need some mechanisms to ensure that updates happen automagically, and/or that configuration errors are automatically and quickly detected and the information about the error gets to the person who can fix it. Keith
Re: draft-ietf-nat-protocol-complications-02.txt
Keith, > a 92.55% reliability rate is not exactly impressive, at least not in > a favorable sense. > > it might be tolerable if a failure of the PTR lookup doesn't cause > the application to fail. If people's livelihood depends on something, they're more likely to insure it actually works. Very little depends on PTR records doing anything (with a relatively few exceptions of sites that configure it otherwise). The fact that Bill's getting a 92.55% reliability figure for something that the vast majority of people use to get something other than IP addresses in logfiles is actually surprisingly good. > if applications were > written to depend on DNS reverse lookups in order to get endpoint > identifiers of their peers, they would only work as reliably as DNS, > which isn't very good. If it "isn't very good", try using the Internet without it for a bit. In your view, what is it in the DNS protocol(s) that results in a lack of reliability? Rgds, -drc
Re: draft-ietf-nat-protocol-complications-02.txt
> Now, if you have a site which has more hosts than it can get external IPv4 > addresses for, then as long as there are considerable numbers of IPv4 hosts a > site needs to interoperate with, *deploying IPv6 internally to the site does > the site basically no good at all*. Why? this sounds like a slight exaggeration, but I do think that the main incentive to deploy v6 will come from the need to communicate with global addresses to points *outside* of folks' internal networks. it doesn't follow that deploying IPv6 internally to a site does no good at all - it will of course do some good if the site has applications on internal hosts that need to communicate with external hosts using global addresses. and of course having v6 deployed internally would allow the site to run those same applications internally without any configuration changes. but if you're trying to make the point that there's little purpose in having your own IPv6 island - I think that's basically right. Keith
Re: draft-ietf-nat-protocol-complications-02.txt
At 11:16 AM -0700 4/25/00, Bill Manning wrote: > And why do you think that the ISP community and others will not > meet the IPv6 routing proposal with anything less than the > "hostility and derision" that came from the previous attempts > to impose "topological constraints and interchange requirements" > on them? I was talking about the requirements of geographic addressing and routing. The IPv6 addressing and routing plan does not impose those requirements. Rather, the IPv6 addressing and routing plan is just the CIDR plan that is being used today by that community for IPv4, with some policy proposals to attempt to limit the number of prefixes that must be globally advertised, something that many in that community have said over and over again is of vital importance. (Of course, some will respond with hostility and derision to any plan, even if it's exactly what they would argue for, if it comes in IPv6 clothing.) > >% - Are there not a large number of Class B addresses (and Class C >%addresses, but maybe those have all been filtered out by now) >%that were assigned before the registries were established, and >%thus not aggregatable under the registry allocation prefixes? > > Yup, a bunch. OK, so because of that, just changing the IPv4 Internet today to advertise only /8s globally would not be functionally equivalent to just advertising IPv6 TLAs, which was your original question. >% - Why are we talking about this? Yes, you could adopt the same >%or a similar address allocation/aggregation policy in IPv4 as >%has been specified for IPv6, if you were starting all over again >%with IPv4. But so what? > > Well, for two reasons: a) IPv4 address delegation policy, since > about 1996 has been done at a gross level, on continental bounds > (e.g. the RIR model) so there is a rough alignment with the proposed > IPv6 plan, at least as far as "modern" delegations are concerned. 
> This is something that could be exploited for testing to see if the > IPv6 delegation/aggregation plan is actually going to fly. Huh? We have been "testing" that for some years in the IPv4 world, and we have adopted it for IPv6. What further testing do you have in mind, and in what salient way do you think the IPv6 delegation/aggregation plan is different? > b) We have IPv4 addresses as legacy environments that -RIGHT NOW- > are showing problems with computing/maintaining state in a dynamic > world. If we can "prove" the solution in the IPv4 world, then that > would remove much of the "hostility & derision" when moving on to > IPv6. Huh? You want to stop routing to IPv4 prefixes that don't aggregate under /8s in order to "prove" that we shouldn't allocate such prefixes in IPv6??? Steve
Re: draft-ietf-nat-protocol-complications-02.txt
> I've not backed your assertion. I've provided some data > on the relative stability of the in-addr space. You've provided > zero data on the efficacy of the forward delegations. > > Can you, with a straight face, claim the servers for the > forward zones have a better reliability rate than the 92.55% > that I have seen in the inverse tree? actually, no. I don't have hard data to back the assertion that the reverse tree is less reliable than the forward tree... nor do I really need to do that to make the point that DNS PTR lookup isn't reliable enough to allow DNS to be used to lookup connection endpoint identifiers. Keith
Re: draft-ietf-nat-protocol-complications-02.txt
It is a problem of the lack of a well-designed user interface for DNS. From the beginning, DNS has been presented as a tool rather than a product. Most of my friends who handle DNS write their own Perl scripts, or try to use something from the public domain, which is sometimes inadequate. Miscommunication between the IP provider and the customer also takes a big toll: one friend of mine could not get the right NS address records for a long time because they were "illegally" cached somewhere at the provider, and nobody knew where. - Leonid Yegoshin. --- >From: [EMAIL PROTECTED] > >On Tue, 25 Apr 2000 08:18:20 PDT, Bill Manning said: >> The 2q2000 data for the in-addr tree shows 77402 unique >> servers answering for 693,337 zones. >> 19515 servers blocked/refused data. Of the 57887 that >> answered, these are the numbers for improper configuration: >> >> BAD_SERVER: 4278 >> FORMERR: 8 >> NXDOMAIN: 28 >> >> So, of the 57,887 visible servers, 4314 are improperly configured >> in the visible in-addr.arpa. tree. That's 7.45% of the >> servers being "not well maintained". I know of no similar data > >Does "not well maintained" include the following: > >1) DNS server for the zone is originally configured correctly, and the >first 20-30 hosts are entered with a proper A record and a PTR that matches. > >2) Clueful guy leaves, new DNS "goo-roo" takes over, and adds the next 300 >machines with just an A record, and no PTR matching. The checks you make >would show this as "well maintained", even though 90% of the hosts are broken >with respect to PTR entries. > >Given that 7% of the sites can't get past step (1), I'm willing to bet that >a lot MORE of the sites are accumulating cruft under step (2). > >From: Jeffrey Altman <[EMAIL PROTECTED]> >> % DNS reverse lookup tables (PTR) are not as well maintained as forward >> % lookup tables (A) so they're even less reliable. 
>> >> This is an assertion that I've heard over the years >> and I've come to believe (based on regular audits of >> the in-addr space) that this is an Internet equivalent >> of an urban legend. I'd really like to see your backing >> data on this. > >This is hardly an urban legend. Columbia University requires the >use of tcpwrappers in Paranoid mode which requires that the forward >and reverse lookups for an IP address in DNS match. The Kermit >Project is based at Columbia University and uses its systems for >our FTP and HTTP access. A week does not go by when we do not >get complaints about people being unable to access our FTP server >due to a failure of the forward and reverse to match. > >Just from the first 8 hours of logs today: > > proxauth3-bb2.globalintranet.net != 212.234.59.254 > hide193.nhs.uk != 195.107.47.193 > marta-c-gw.caravan.ru != 212.24.53.234 > su9127.eclipse.co.uk != 212.104.136.138 > >Granted this is hardly a scientific study. But we see this from >approximately a dozen new addresses every day.
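The Columbia check described above is tcpwrappers' PARANOID mode: the hostname obtained from the client's PTR record must resolve forward to the same IP address, or the connection is rejected. A minimal sketch of the logic with stand-in data; the dict replaces a real forward lookup (gethostbyname) and all names and addresses here are invented:

```python
def paranoid_check(client_ip, ptr_name, forward_lookup):
    """Emulate tcpwrappers PARANOID mode: the name from the client's
    PTR record must map forward to the client's own IP address.
    forward_lookup: dict mapping hostname -> list of IPs (stand-in
    for a real A-record query)."""
    return client_ip in forward_lookup.get(ptr_name, [])

# Illustrative data: one host with matching records, one without.
fwd = {"good.example.net": ["203.0.113.7"]}
print(paranoid_check("203.0.113.7", "good.example.net", fwd))    # True
print(paranoid_check("198.51.100.9", "stale.example.org", fwd))  # False
```

Every mismatched pair in the log excerpt above is a case where this check returns False, which is why a stale or missing PTR entry locks users out even though their forward DNS works.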
Re: draft-ietf-nat-protocol-complications-02.txt
% % At 10:22 AM -0700 4/25/00, Bill Manning wrote: % >Given the nature of trans-oceanic b/w vs. local b/w arguments I've heard % >over the years, I'd say that these delegations are essentially constrained % >to topological subregions and that for the most part, having the largest % >incumbent ISPs in each region announce the respective /8 would roughly % >meet the IPv6, hierarchical aggregation argument. % % A few counter-points: % % - As you know, I am a fan of geographic addressing (to provide a % real, scalable, user-friendly solution to most of the multihoming % and renumbering problems), but as you also know, proposals to do % any sort of geographic routing have usually been met with % hostility and derision from the ISP community and others, because % of the topological constraints and interchange requirements it % would impose on the ISPs. I welcome you to try, but don't get % your hopes up. And why do you think that the ISP community and others will not meet the IPv6 routing proposal with anything less than the "hostility and derision" that came from the previous attempts to impose "topological constraints and interchange requirements" on them? % - Are there not a large number of Class B addresses (and Class C % addresses, but maybe those have all been filtered out by now) % that were assigned before the registries were established, and % thus not aggregatable under the registry allocation prefixes? Yup, a bunch. % - Why are we talking about this? Yes, you could adopt the same % or a similar address allocation/aggregation policy in IPv4 as % has been specified for IPv6, if you were starting all over again % with IPv4. But so what? Well, for two reasons: a) IPv4 address delegation policy, since about 1996 has been done at a gross level, on continental bounds (e.g. the RIR model) so there is a rough alignment with the proposed IPv6 plan, at least as far as "modern" delegations are concerned. 
This is something that could be exploited for testing to see if the IPv6 delegation/aggregation plan is actually going to fly. b) We have IPv4 addresses as legacy environments that -RIGHT NOW- are showing problems with computing/maintaining state in a dynamic world. If we can "prove" the solution in the IPv4 world, then that would remove much of the "hostility & derision" when moving on to IPv6. Two thoughts from here... :) % % Steve % % -- --bill
Re: draft-ietf-nat-protocol-complications-02.txt
% % So, of the 57,887 visible servers, 4314 are improperly configured % in the visible in-addr.arpa. tree. That's 7.45% of the % servers being "not well maintained". % % a 92.55% reliability rate is not exactly impressive, at least not in % a favorable sense. % % it might be tolerable if a failure of the PTR lookup doesn't cause % the application to fail. but after all, people are saying that DNS % is good enough to serve as a means to map endpoint identifiers % to realm-local addresses or routing goop. if applications were % written to depend on DNS reverse lookups in order to get endpoint % identifiers of their peers, they would only work as reliably as DNS, % which isn't very good. % % thanks for backing up my assertion with hard data. % % Keith I've not backed your assertion. I've provided some data on the relative stability of the in-addr space. You've provided zero data on the efficacy of the forward delegations. Can you, with a straight face, claim the servers for the forward zones have a better reliability rate than the 92.55% that I have seen in the inverse tree? If so, where is the data to back your assertion? Or would the community like me to do for the forward tree what I have been doing for the inverse tree? --bill
Re: draft-ietf-nat-protocol-complications-02.txt
> At 8:48 AM -0700 4/25/00, Bill Manning wrote: > >and this is different from only carrying the 253 usable /8 prefixes in > >IPv4 how? > > The set of customers who have addresses under a given IPv4 /8 prefix greater > than 127 do not all aggregate into a single topological subregion (e.g., a > single ISP), and therefore more granular routes must be widely disseminated > to make those customers reachable. That's the difference. No. This is a historical feature that IPv6 alleviates by being able to start over with a clean slate. You could (at least theoretically) emulate this in v4. The difference is that v6 gives you the option of significantly more TLAs than v4 can ever have. Of course, this isn't really a feature. Rgds, -drc
Re: draft-ietf-nat-protocol-complications-02.txt
At 10:22 AM -0700 4/25/00, Bill Manning wrote:
>Given the nature of trans-oceanic b/w vs. local b/w arguments I've heard
>over the years, I'd say that these delegations are essentially constrained
>to topological subregions and that for the most part, having the largest
>incumbent ISPs in each region announce the respective /8 would roughly
>meet the IPv6, hierarchical aggregation argument.

A few counter-points:

- As you know, I am a fan of geographic addressing (to provide a real,
  scalable, user-friendly solution to most of the multihoming and
  renumbering problems), but as you also know, proposals to do any sort
  of geographic routing have usually been met with hostility and derision
  from the ISP community and others, because of the topological
  constraints and interchange requirements it would impose on the ISPs.
  I welcome you to try, but don't get your hopes up.

- Are there not a large number of Class B addresses (and Class C
  addresses, but maybe those have all been filtered out by now) that were
  assigned before the registries were established, and thus are not
  aggregatable under the registry allocation prefixes?

- Why are we talking about this? Yes, you could adopt the same or a
  similar address allocation/aggregation policy in IPv4 as has been
  specified for IPv6, if you were starting all over again with IPv4.
  But so what?

Steve
Re: draft-ietf-nat-protocol-complications-02.txt
> So, of the 57,887 visible servers, 4314 are improperly configured
> in the visible in-addr.arpa. tree. That's 7.45% of the
> servers being "not well maintained".

a 92.55% reliability rate is not exactly impressive, at least not in a
favorable sense.

it might be tolerable if a failure of the PTR lookup doesn't cause the
application to fail. but after all, people are saying that DNS is good
enough to serve as a means to map endpoint identifiers to realm-local
addresses or routing goop. if applications were written to depend on DNS
reverse lookups in order to get endpoint identifiers of their peers, they
would only work as reliably as DNS, which isn't very good.

thanks for backing up my assertion with hard data.

Keith
Re: draft-ietf-nat-protocol-complications-02.txt
>From: Keith Moore <[EMAIL PROTECTED]>
>
>> >even if you do this the end system identifier needs to be globally
>> >scoped, and you need to be able to use the end system identifier
>> >from anywhere in the net, as a means to reach that end system.
>>
>> DNS is a bright and successful example of such a deal.
>
>actually, DNS is slow, unreliable, and often out of sync with reality.
>
>DNS reverse lookup tables (PTR) are not as well maintained as forward
>lookup tables (A) so they're even less reliable.

(Bill Manning has another opinion here - read it, please)

>hosts often don't know their own DNS names, so they wouldn't know
>their connection endpoint names either.
>
>DNS names are often ambiguous - because a single DNS name corresponds
>to multiple hosts (all implementing the same service) or because a
>single host supports multiple DNS domains (a different name for each
>service) or both.

Yes, that is a reason why we can't use domain names as system IDs today.
But it means that we should shift consideration to the appropriate level
- IP addresses. A second level of indirection arises, but it is not bad
as long as we do not decrease setup speed.

>the binding between a DNS name and an address is not the same thing
>as the binding between an address and a connection endpoint. the
>two have different purposes and different lifetimes.

I agree here.

>when people say "DNS can do the job" they may be saying different
>things, e.g.
>(a) they are thinking in terms of using existing DNS servers,
>(b) they are thinking in terms of using the DNS protocol, having
>translation occur at the boundaries between routing goop realms
>(similar to NAT's DNS ALG), or
>(c) they are thinking in terms of a DNS-like system, but not DNS

I speak about (c). Routing addresses may be handled by providers but not
by end-users.
It should be (1) more accurate, (2) faster, and (3) independent in terms
of the "rules of the game" (which may be changed later without rewriting
everything).

>with separate servers (whether they are existing DNS servers or not)
>there is the problem of keeping the servers updated as to the
>current condition of the network.
>
>with DNS at translation boundaries there is the problem of
>"call setup overhead" (having queries propagate through
>multiple layers of translation until they reach their destination
>network) also in this case DNS becomes a routing protocol of
>sorts, since the thing you advertise from one realm to another
>becomes a DNS suffix rather than an address prefix.
>it's not at all clear that this scales.

Keith, it depends on the design. I can propose:

(1) caching,

(2) implicit route resolution on each DNS A? query (each TCP setup is
preceded by a DNS A? query or uses cached values) - a "router DNS" may
track each DNS query/response and append a routing prefix to the
response. In this case there isn't any time lag... at least up to TTL
expiration.

(3) separating network datagram service addresses from
connection-oriented ones. We have two real network datagram services now
- DNS and maybe NTP (voice RTP traffic and the like over UDP is really
a connection-oriented service).

But I'm not afraid of "call setup overhead" (see above) because I fear
only a decrease in call setup speed, and support issues. As long as call
setup speed is unchanged and support is simple, I like it.

>in either case having to do a DNS-like query before you can transmit
>is slow compared to just sending a packet to an IP address.

We can implement techniques which do both at the same time. We do not
need to replicate DNS again. We should admit that DNS is a network
service, in contrast with end-customer services, and handle it
differently. It means that different service rules, and maybe a different
set of addresses, may be used for this.
The same is true in BGP - there is a practice of hiding the inter-router
addresses of a provider backbone, for example to increase security and to
give additional flexibility in backbone configuration.

>Keith
>
>p.s. if there's ever going to be a split between endpoint names and
>routing goop, I'm convinced that endpoint names have to be usable
>by themselves

Yes.

> (perhaps with some speed penalty), that the
>mapping between endpoint names and routing goop needs to be maintained
>by the routing infrastructure rather than in some separate database,

Yes! Yes!

>and that the lookup needs to be able to be done implicitly (as a side
>effect of sending packets without routing goop to their destination)
>rather than explicitly. I think such a separation might be a good idea,
>because our current means of propagating reachability information
>and computing routes does have limitations. but I don't see any need
>to change the packet format seen by hosts (from IPv6), or any need to
>change end hosts at all, in order to do this.

Keith, sorry, I didn't read this p.s. before I wrote my answer to the
main mail body. You understand me absolutely; thank you.

- Leonid Yegoshin, LY22
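Leonid's point (1) above - caching that remains valid "up to TTL
expiration" - can be sketched in a few lines of Python. This is only an
illustration of the mechanism; the class name, interface, and the names
and TTL values in the example are mine, not part of any proposal:

```python
import time

class TTLCache:
    """Minimal DNS-style cache: each entry expires after its TTL."""

    def __init__(self):
        self._store = {}  # name -> (value, expires_at)

    def put(self, name, value, ttl, now=None):
        now = time.time() if now is None else now
        self._store[name] = (value, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry is None:
            return None
        value, expires_at = entry
        if now >= expires_at:
            # Expired: drop it, forcing a fresh lookup by the caller.
            del self._store[name]
            return None
        return value

# Usage: answers are served from the cache until the TTL runs out.
cache = TTLCache()
cache.put("www.example.com", "192.0.2.10", ttl=300, now=0.0)
print(cache.get("www.example.com", now=100.0))  # within TTL: cached value
print(cache.get("www.example.com", now=400.0))  # past TTL: None
```

The `now` parameter is only there to make the expiry behavior easy to
demonstrate deterministically; a real resolver cache would use the clock.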
Re: draft-ietf-nat-protocol-complications-02.txt
% At 8:48 AM -0700 4/25/00, Bill Manning wrote:
% >and this is different from only carrying the 253 usable /8 prefixes in
% >IPv4 how?
%
% The set of customers who have addresses under a given IPv4 /8 prefix
% greater than 127 do not all aggregate into a single topological
% subregion (e.g., a single ISP), and therefore more granular routes must
% be widely disseminated to make those customers reachable. That's the
% difference.
%
% Steve

Given the nature of trans-oceanic b/w vs. local b/w arguments I've heard
over the years, I'd say that these delegations are essentially constrained
to topological subregions and that for the most part, having the largest
incumbent ISPs in each region announce the respective /8 would roughly
meet the IPv6, hierarchical aggregation argument.

133.0.0.0/8 - Japan
193/8   RIPE NCC - Europe                      May 93
194/8   RIPE NCC - Europe                      May 93
195/8   RIPE NCC - Europe                      May 93
199/8   ARIN - North America                   May 93
200/8   ARIN - Central and South America       May 93
201/8   Reserved - Central and South America   May 93
202/8   APNIC - Pacific Rim                    May 93
203/8   APNIC - Pacific Rim                    May 93
204/8   ARIN - North America                   Mar 94
205/8   ARIN - North America                   Mar 94
206/8   ARIN - North America                   Apr 95
207/8   ARIN - North America                   Nov 95
208/8   ARIN - North America                   Apr 96
209/8   ARIN - North America                   Jun 96
210/8   APNIC - Pacific Rim                    Jun 96
211/8   APNIC - Pacific Rim                    Jun 96
212/8   RIPE NCC - Europe                      Oct 97
213/8   RIPE NCC - Europe                      Mar 99
214/8   US-DOD                                 Mar 98
215/8   US-DOD                                 Mar 98
216/8   ARIN - North America                   Apr 98

--
--bill
Re: draft-ietf-nat-protocol-complications-02.txt
> From: Brian Lloyd <[EMAIL PROTECTED]>

I was thinking about your message, and something from my exchanges with
Keith Moore suddenly popped into my head with great clarity. I think it's
the answer to your question immediately below - and it has some very
grave consequences. Although it's something which has basically been said
before, I find this new formulation makes the problem - and its
implications - especially clear (and so I hope everyone will take the
time to read it - it's not long).

> whatever happened to IPv6? 128 bit addresses would certainly allow us
> to continue using IP addresses as endpoint identifiers thus eliminating
> the need for NAT. It seems that this is a more reasonable solution than
> trying to make NAT work under all circumstances.

The basic key *architectural* problem with NAT (as opposed to all the
mechanical problems like encrypted checksums, etc, some of which can be
solved with variant mechanisms like RSIP), as made clear by Keith's
comments, is that when you have a small number of external addresses
being shared by a larger number of hosts behind some sort of
"address-sharing" device, there's no permanent association between an
address and a host. It's *that* that causes many of the worst problems -
problems for which there *is* no good work-around (because the problem is
fundamental in nature).

Now, if you have a site which has more hosts than it can get external
IPv4 addresses for, then as long as there are considerable numbers of
IPv4 hosts a site needs to interoperate with, *deploying IPv6 internally
to the site does the site basically no good at all*. Why? Because for
interactions with those external IPv4 hosts (who will be the vast
majority of the hosts one wants to talk to, in the initial stages of
deployment), *you have exactly the same architectural problem*.
No matter what IPv6<->IPv4 interoperability mechanism you use, you still
have that same *fundamental* problem - no permanent association between a
host and an address (in this case, the IPv4 address that it *has* to use
to communicate with an IPv4-only host).

When one looks at the overall business/economic case for deploying IPv6
in the light of this, the results are fairly devastating - and explain
perfectly what we've been seeing for the last couple of years (rapid
increase in the number of NAT boxes, and basically no traction for IPv6).

A site considering deploying IPv6 is in one of two cases: it already has
enough IPv4 addresses, or it doesn't. In the former case, what's the
upside to deploying IPv6? Autoconfiguration, etc aren't enough to
outweigh all the costs of switching (to software which is less available,
less tested, less tuned, etc). In the latter case, it's equally as bad:
they are going to have to struggle with the problems inherent in
IPv4-address-sharing technology whether they go with IPv6 or not, and
again, the remaining advantages of IPv6 (autoconfig, etc) are outweighed
by the costs.

I'm still sorting through the implications of this, trying to put them
all with equal clarity, but one thing that does seem clear is that this
kind of upgrade model is economically unworkable in the current
large-scale Internet. Exactly what will work is something that needs to
be pondered for a while. One possible lesson is that we need to think
about how any new stuff is going to make people's lives significantly
easier overall as soon as they start to deploy it, because without that,
probably very little is going to get done.

Noel
Re: draft-ietf-nat-protocol-complications-02.txt
> % > >even if you do this the end system identifier needs to be globally
> % > >scoped, and you need to be able to use the end system identifier
> % > >from anywhere in the net, as a means to reach that end system.
> % >
> % > DNS is a bright and successful example of such a deal.
> %
> % actually, DNS is slow, unreliable, and often out of sync with reality.
> %
> % DNS reverse lookup tables (PTR) are not as well maintained as forward
> % lookup tables (A) so they're even less reliable.
>
> This is an assertion that I've heard over the years
> and I've come to believe (based on regular audits of
> the in-addr space) that this is an Internet equivalent
> of an urban legend. I'd really like to see your backing
> data on this.

This is hardly an urban legend. Columbia University requires the use of
tcpwrappers in Paranoid mode, which requires that the forward and reverse
lookups for an IP address in DNS match. The Kermit Project is based at
Columbia University and uses its systems for our FTP and HTTP access. A
week does not go by when we do not get complaints about people being
unable to access our FTP server due to a failure of the forward and
reverse to match.

Just from the first 8 hours of logs today:

  proxauth3-bb2.globalintranet.net != 212.234.59.254
  hide193.nhs.uk != 195.107.47.193
  marta-c-gw.caravan.ru != 212.24.53.234
  su9127.eclipse.co.uk != 212.104.136.138

Granted this is hardly a scientific study. But we see this from
approximately a dozen new addresses every day.

Jeffrey Altman * Sr.Software Designer * Kermit-95 for Win32 and OS/2
The Kermit Project * Columbia University
612 West 115th St #716 * New York, NY * 10025
http://www.kermit-project.org/k95.html * [EMAIL PROTECTED]
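The forward-confirmed reverse-lookup check that tcpwrappers' Paranoid
mode performs can be sketched in Python. This is an illustration of the
check, not tcpwrappers' actual code: the function names are mine, and the
pure matching logic is separated from the DNS lookups so it can be
exercised without a network:

```python
import socket

def forward_confirmed(ip, forward_ips):
    """Pure matching logic: the reverse-mapped name must resolve
    forward to a set of addresses that includes the original IP."""
    return ip in forward_ips

def paranoid_check(ip):
    """Look up the PTR name for `ip`, resolve that name forward, and
    require the original address to appear in the answers.  Needs
    working DNS; any lookup failure counts as a mismatch - exactly the
    failure mode the FTP users in the message above run into."""
    try:
        ptr_name, _, _ = socket.gethostbyaddr(ip)
        forward_ips = {info[4][0]
                       for info in socket.getaddrinfo(ptr_name, None,
                                                      socket.AF_INET)}
    except OSError:
        return False
    return forward_confirmed(ip, forward_ips)
```

A client whose PTR points at a name with no matching A record (or no PTR
at all) fails `paranoid_check` and is refused, even though its forward
DNS may be perfectly healthy.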
Re: draft-ietf-nat-protocol-complications-02.txt
On Tue, 25 Apr 2000 08:18:20 PDT, Bill Manning said:
> The 2q2000 data for the in-addr tree shows 77402 unique
> servers answering for 693,337 zones.
> 19515 servers blocked/refused data. Of the 57887 that
> answered, these are the numbers for improper configuration:
>
> BAD_SERVER: 4278
> FORMERR:    8
> NXDOMAIN:   28
>
> So, of the 57,887 visible servers, 4314 are improperly configured
> in the visible in-addr.arpa. tree. That's 7.45% of the
> servers being "not well maintained". I know of no similar data

Does "not well maintained" include the following:

1) The DNS server for the zone is originally configured correctly, and
the first 20-30 hosts are entered with a proper A record and a PTR that
matches.

2) The clueful guy leaves, a new DNS "goo-roo" takes over and adds the
next 300 machines with just an A record and no matching PTR.

The checks you make would show this as "well maintained", even though 90%
of the hosts are broken with respect to PTR entries. Given that 7% of the
sites can't get past step (1), I'm willing to bet that a lot MORE of the
sites are accumulating cruft under step (2).

--
Valdis Kletnieks
Operating Systems Analyst
Virginia Tech
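The distinction Valdis draws - a server-level audit passing while most
host records lack PTRs - is easy to make concrete. A sketch, using
entirely hypothetical zone data (the hostnames and addresses are made
up; real audit tooling would pull the records via zone transfer or
query):

```python
def ptr_coverage(a_records, ptr_records):
    """Fraction of forward entries whose address has a matching PTR.
    a_records: hostname -> IP; ptr_records: IP -> hostname."""
    if not a_records:
        return 1.0
    matched = sum(1 for name, ip in a_records.items()
                  if ptr_records.get(ip) == name)
    return matched / len(a_records)

# Hypothetical zone history from the scenario above: the first two
# hosts were entered with matching PTRs, the next eight with A records
# only.
a = {"host%d.example.com" % i: "192.0.2.%d" % i for i in range(1, 11)}
ptr = {"192.0.2.1": "host1.example.com",
       "192.0.2.2": "host2.example.com"}
print("PTR coverage: %d%%" % (100 * ptr_coverage(a, ptr)))
```

The zone's server answers correctly throughout, so a server-level audit
counts it as "well maintained", yet 80% of its hosts would fail any
forward/reverse matching check.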
Re: draft-ietf-nat-protocol-complications-02.txt
At 8:48 AM -0700 4/25/00, Bill Manning wrote:
>and this is different from only carrying the 253 usable /8 prefixes in
>IPv4 how?

The set of customers who have addresses under a given IPv4 /8 prefix
greater than 127 do not all aggregate into a single topological subregion
(e.g., a single ISP), and therefore more granular routes must be widely
disseminated to make those customers reachable. That's the difference.

Steve
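The aggregation property Steve describes can be demonstrated with
Python's `ipaddress` module (the specific prefixes below are
illustrative, not actual allocations):

```python
import ipaddress

# Adjacent allocations under one registry block aggregate into a single
# covering prefix, so one route suffices for all the customers...
contiguous = [ipaddress.ip_network("204.0.0.0/16"),
              ipaddress.ip_network("204.1.0.0/16")]
print(list(ipaddress.collapse_addresses(contiguous)))   # one /15

# ...while the same number of customers scattered across unrelated
# blocks cannot be merged, so both routes must be carried everywhere.
scattered = [ipaddress.ip_network("204.0.0.0/16"),
             ipaddress.ip_network("207.5.0.0/16")]
print(list(ipaddress.collapse_addresses(scattered)))    # still two routes
```

When customers under a /8 are spread across many ISPs, it is the second
case that holds, and the /8 cannot be announced as one aggregate by
anyone.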
Re: draft-ietf-nat-protocol-complications-02.txt
Thomas,

> This is not true. IPv6's TLA scheme has as its primary goal placing an
> upper bound on the number of routing prefixes that are needed in the
> core. ...
> Contrast that with today's IPv4 where the number of
> prefixes that need to be maintained in the DFZ in order to have global
> reachability is more-or-less unbounded, so some prefixes are not
> reachable from everywhere.

As you know, this is not IPv6 magic. The underlying routing technology
between v4 and v6, namely CIDR, is identical. The only difference is that
_by convention_, the number of routing prefixes in v6 is limited. If we
were to create an IPv4 Internet' without the historical baggage of the
existing IPv4 Internet, the same conventions could be applied.

> Thus, traditional multihoming is still quite
> possible, assuming the various ISPs that need to handle the routes
> agree to do so.

I find it a bit strange that people seem to think the logic / incentives
/ disincentives that drive multihoming in the v4 Internet will not apply
in the v6 Internet.

Rgds,
-drc
Re: draft-ietf-nat-protocol-complications-02.txt
% Sean Doran <[EMAIL PROTECTED]> writes:
%
% > Unfortunately, IPv6's current addressing architecture makes it very
% > difficult to do this sort of traditional multihoming if one is not
% > a TLA.
%
% This is not true. IPv6's TLA scheme has as its primary goal placing an
% upper bound on the number of routing prefixes that are needed in the
% core. I.e., a backbone provider is only required to maintain routes
% for each individual TLA in order to have reachability to ALL
% destinations. Contrast that with today's IPv4 where the number of
% prefixes that need to be maintained in the DFZ in order to have global
% reachability is more-or-less unbounded, so some prefixes are not
% reachable from everywhere.
%
% Thomas

and this is different from only carrying the 253 usable /8 prefixes in
IPv4 how?

--bill
Re: draft-ietf-nat-protocol-complications-02.txt
% > >even if you do this the end system identifier needs to be globally
% > >scoped, and you need to be able to use the end system identifier
% > >from anywhere in the net, as a means to reach that end system.
% >
% > DNS is a bright and successful example of such a deal.
%
% actually, DNS is slow, unreliable, and often out of sync with reality.
%
% DNS reverse lookup tables (PTR) are not as well maintained as forward
% lookup tables (A) so they're even less reliable.

This is an assertion that I've heard over the years and I've come to
believe (based on regular audits of the in-addr space) that this is an
Internet equivalent of an urban legend. I'd really like to see your
backing data on this.

The 2q2000 data for the in-addr tree shows 77402 unique servers answering
for 693,337 zones. 19515 servers blocked/refused data. Of the 57887 that
answered, these are the numbers for improper configuration:

  BAD_SERVER: 4278
  FORMERR:    8
  NXDOMAIN:   28

So, of the 57,887 visible servers, 4314 are improperly configured in the
visible in-addr.arpa. tree. That's 7.45% of the servers being "not well
maintained". I know of no similar data collected on the forward tree as a
whole. I'm currently checking the data in a few TLDs to see if spot data
may indicate a trend.

Now as to the accuracy of the data in the zones, that depends on the
owner of the data believing the data correct, and that will be very hard
to check.

--bill
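For the record, the 7.45% / 92.55% figures quoted throughout this thread
follow directly from the three error counts in the audit. A quick
reproduction of the arithmetic:

```python
# Error counts and answering-server total from the 2q2000 in-addr audit.
bad_server, formerr, nxdomain = 4278, 8, 28
answering = 57887

misconfigured = bad_server + formerr + nxdomain   # 4314
failure_rate = 100.0 * misconfigured / answering
print("%d of %d servers misconfigured: %.2f%% bad, %.2f%% ok"
      % (misconfigured, answering, failure_rate, 100 - failure_rate))
# -> 4314 of 57887 servers misconfigured: 7.45% bad, 92.55% ok
```

Note this measures server configuration only; as noted above in the
thread, it says nothing about the accuracy or completeness of the PTR
records those servers hold.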