Re: How reliable does the Internet need to be? (Was: Re: Converged Network Threat)
If the Internet core is going to carry traffic that traditionally was delivered via switched TDM networks, I think we can expect significantly more regulation in the coming years. The FCC and state PUCs will want to see VoIP reliability and call-completion statistics that are on par with existing TDM networks. That means ISPs of moderate size and larger will have to have buildouts comparable to the existing phone networks: e.g., hardened physical facilities and triply redundant network paths into every service area. That's a very expensive undertaking and may lead the Internet business back into the regulated, guaranteed-margin business models of the ILECs. E911 and FBI surveillance are just the tip of the iceberg... Joe
Re: How reliable does the Internet need to be? (Was: Re: Converged Network Threat)
E911 and FBI surveillance are just the tip of the iceberg... Joe Does anyone have documented instances in which misconfigured or failed residential VoIP services have resulted in deaths or major injury? I can see how it would be easy for the typical end user to choose the wrong regional 911 call centre when configuring their service. Outside of those of us with well-equipped home offices, I don't think many typical residential broadband users keep a UPS connected to their cable modem/DSL modem and NAT box. Vonage and their competitors provide thorough warnings about this, but I think they may be going unheeded. As a side note, are *all* cable modem operators and DSL operators deploying UPSes as standard policy, in manholes or on poles with their digital loop / hybrid fibre-coax converters? Will the FCC desire that they do so in the future? That said, most cordless 2.4GHz POTS phones don't operate in a power failure situation either, as the base station requires 110V AC from the wall
Re: Converged Networks Threat (Was: Level3 Outage)
Wouldn't it be great if routers had the equivalent of User Mode Linux: each process handling a service, isolated and protected from the others. The physical router would be nothing more than a generic kernel handling resource allocation. Each virtual router would have access to x amount of resources and would either halt, sleep, or crash when it exhausts those resources for a given time slice. This is possible today. Build your own routers using the right microkernel, OSKit and the Click Modular Router software and you can have this. When we restrict ourselves only to router packages from major vendors, we are doomed to using outdated technology at inflated prices. --Michael Dillon
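The virtual-router idea in the post above can be sketched in a few lines: a generic scheduler hands each virtual router a fixed resource budget per time slice and halts any instance that overruns it, leaving the others untouched. Everything below (class names, budget numbers) is a hypothetical illustration of the isolation model, not actual Click or OSKit code.

```python
# Sketch of the "virtual routers on a generic kernel" model: each virtual
# router gets a fixed budget per time slice, and the scheduler halts any
# instance that exhausts it. All names and numbers here are illustrative.

class VirtualRouter:
    def __init__(self, name, cpu_budget):
        self.name = name
        self.cpu_budget = cpu_budget   # allowed work units per time slice
        self.state = "running"

    def run_slice(self, work_demanded):
        """Do up to cpu_budget units of work; halt if demand exceeds it."""
        if work_demanded > self.cpu_budget:
            self.state = "halted"      # isolated failure: only this one stops
            return self.cpu_budget
        return work_demanded

def schedule(routers, demands):
    """One scheduling round over all running virtual routers."""
    return {r.name: r.run_slice(demands[r.name])
            for r in routers if r.state == "running"}

routers = [VirtualRouter("vr-bgp", 100), VirtualRouter("vr-ospf", 50)]
done = schedule(routers, {"vr-bgp": 500, "vr-ospf": 30})
# vr-bgp overruns its budget and is halted; vr-ospf is unaffected.
```

The point of the model is the last line: a runaway service is stopped by the kernel's accounting rather than taking the whole box down with it.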
How reliable does the Internet need to be? (Was: Re: Converged Network Threat)
In article [EMAIL PROTECTED] net, Pendergrass, Greg [EMAIL PROTECTED] writes: if you want to call an ambulance you DON'T use the internet And you also need a way to persuade the Ambulance Service not to terminate their calls via VoIP, or send dispatch instructions via public IP over GSM (or whatever) to their vehicles. Or the IP bits need to be assured as good enough that it doesn't matter. It's perhaps three years since I heard that there was a real possibility of some of the above. That stable door may be more open than you think. -- Roland Perry
Re: How reliable does the Internet need to be? (Was: Re: Converged Network Threat)
I think the Internet is doing pretty well save some IOS code problems from time to time, and the typical root server hiccups. I'm interested to know what you mean by typical root server hiccups. I'm trying to think of an incident which left the Internet generally unable to receive answers to queries on the root zone, but I can't think of one. There have been several incidents in which some root servers have hiccuped, sometimes being down for several days. But since the service they provide has N+12 resiliency, the service itself has never been unavailable. Similarly, the Internet has always had N+1 or better vendor resiliency, so IOS can have problems while the non-IOS vendor (or vendors) keep on running. In fact, I would argue that N+1 vendor resiliency is a good thing for you to implement in your network and N+2 vendor resiliency is a good thing for the Internet as a whole. Let's hope that vendor P manages to get some critical mass in the market along with J and C. --Michael Dillon
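The "N+12 resiliency" point above has simple arithmetic behind it: with 13 root server identities, the root service is unavailable only if every one of them is down at once. The per-server availability figure below is an illustrative assumption (and independence of failures is assumed), not measured data.

```python
# Back-of-the-envelope on why N+12 resiliency masks individual root server
# hiccups: service failure requires ALL servers down simultaneously.
# Per-server availability is an assumed, illustrative number.

per_server_availability = 0.99          # assume each root server is up 99%
servers = 13                            # 13 root server identities

p_down_each = 1 - per_server_availability
p_all_down = p_down_each ** servers     # independent failures assumed
service_availability = 1 - p_all_down   # astronomically close to 1

print(p_all_down)                       # on the order of 1e-26
```

Even with a pessimistic 99% per server, the whole-service outage probability is around 10^-26, which is why individual servers can be down for days without the service ever being unavailable.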
Re: How reliable does the Internet need to be? (Was: Re: Converged Network Threat)
On Thu, Feb 26, 2004 at 11:48:17AM +, [EMAIL PROTECTED] wrote: Similarly, the Internet has always had N+1 or better vendor resiliency so IOS can have problems while the non-IOS vendor (or vendors) keep on running. In fact, I would argue that N+1 vendor resiliency is a good thing for you to implement in your network and N+2 vendor resiliency is a good thing for the Internet as a whole. Let's hope that vendor P manages to get some critical mass in the market along with J and C. Unfortunately, while this sounds excellent in theory, what really happens is that you have a large chunk of equipment in the network belonging to vendor X, and then you introduce vendor Y. Most people I know don't suddenly throw out vendor X (assuming that this was a somewhat competent choice in the first place; jumped-up L2 boxes with slow-first-path-to-set-up-TCAMs-for-subsequent-flows don't count as somewhat competent). People don't do that because it costs a lot of capital and opex. So now we have a partial X and partial Y network; X goes down, and chances are your network got hammered like an ice cube in a blender set to Frappe. You could theoretically have a multiplane network with each plane comprising a different vendor (and we do that on some of our DWDM rings), but that is a luxury ill afforded to most people. /vijay
Re: How reliable does the Internet need to be? (Was: Re: Converged Network Threat)
So now we have a partial X and partial Y network; X goes down, and chances are your network got hammered like an ice cube in a blender set to Frappe. If IP networks become the single layer 2/3 telecommunications technology in the world, then we can never let that Frappe happen. We will have to find ways to deliberately build networks using vendor X and vendor Y in such a way that the sum total is more reliable than a pure X or a pure Y network. --Michael Dillon
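The disagreement between the two posts above comes down to topology, and the arithmetic is worth making explicit. If X and Y boxes sit in series on every path (the typical mixed network vijay describes), a vendor-wide bug in either vendor breaks the path; if they form independent per-vendor planes (the structure Michael wants), both vendors must fail at once. The probabilities below are purely illustrative assumptions.

```python
# Why topology matters more than merely owning two vendors' gear.
# p_x, p_y: assumed probability per year of a network-wide vendor event.
# These numbers are illustrative, not measurements.

p_x = 0.02
p_y = 0.02

# Mixed network, both vendors in series on every path: either bug hurts you.
p_outage_series = 1 - (1 - p_x) * (1 - p_y)     # ~0.0396, worse than either alone

# Separate per-vendor planes: outage only if both vendors fail together.
p_outage_parallel = p_x * p_y                   # 0.0004, two orders better

print(p_outage_series, p_outage_parallel)
```

Under these assumptions the naive mixed network is almost twice as failure-prone as a single-vendor network, while the two-plane design is roughly 100x better, which is exactly the "sum total more reliable than pure X or pure Y" goal.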
Re: Converged Networks Threat (Was: Level3 Outage)
On Thu, 26 Feb 2004 14:48:55 GMT, [EMAIL PROTECTED] said: History shows that if you can build a mousetrap that is technically better than anything on the market, your best route for success is to sell it into niche markets where the customer appreciates the technical advances that you can provide and is willing to pay for those technical advances. I don't think that describes the larger Internet provider networks. So your target market is those mom-and-pop ISPs that *don't* buy their Ciscos from eBay? :)
RE: How reliable does the Internet need to be? (Was: Re: Converged Network Threat)
I think how reliable the internet needs to be depends on what you want to use it for: if you want to call an ambulance you DON'T use the internet; if you want to transfer money from one account to another you DO use the internet. In other words, right now it's good for things that are important but not critical from an immediate-action standpoint. If it can wait until tomorrow, use the internet; otherwise pick up the phone and dial. I can count on one hand the number of times I've had problems with my landline in my entire life, but I can count on two hands the number of problems I've had with my internet connection in one year. If we ever want the internet to grow from being a handy medium for exchanging data into the converged, all-encompassing communications medium, then it needs to go from "Mom, the internet's down again!" to "Dude, my internet connection went down yesterday, that ever happen to you before?". For that to happen there has to be more accountability in the industry. -GP -Original Message- From: Steve Gibbard [mailto:[EMAIL PROTECTED] Sent: 26 February 2004 00:30 To: [EMAIL PROTECTED] Subject: How reliable does the Internet need to be? (Was: Re: Converged Network Threat) Having woken up this morning and realized it was raining in my bedroom (last night was the biggest storm the Bay Area has had since my house got its new roof last summer), and then having moved from cleaning up that mess to vacuuming water out of the basement after the city's storm sewer overflowed (which seems to happen to everybody in my neighborhood a couple of times a year), I've spent lots of time today thinking about general expectations of reliability. In the telecommunications industry, where we tend to treat reliability as very important and any outage as a disaster, hopefully the questions I've been coming up with aren't career ending. ;) With that in mind, how much in the way of reliability problems is it reasonable to expect our users to accept? 
If the Internet is a utility, or more generally infrastructure our society depends on, it seems there are a bunch of different systems to compare it to. In general, if I pick up my landline phone, I expect to get a dialtone, and I expect to be able to make a call. If somebody calls my landline, I expect the phone to ring, and if I'm near the phone I expect to be able to answer. Yet, if I want somebody to actually get through to me reliably, I'll probably give them my cell phone number instead. If it rings, I'm far more likely to be able to answer it easily than I am my landline, since the landline phone is in a fixed location. Yet some significant portion of calls to or from my cell phone come in when I'm in areas with bad reception, and the conversation becomes barely understandable. In many cases, the signal is too weak to make a call at all, and those who call me get sent straight to voicemail. Most of us put up with this, because we judge mobility to be more important than reliability. I don't think I've ever had a natural gas outage that I've noticed, but most of my gas appliances won't work without electric power. I seem to lose electric power at home for a few hours once a year or so, and after the interruption life tends to resume as it was before. When power outages were significantly more frequent, and due to rationing rather than to accidents, it caused major political problems for the California government. There must be some threshold for what people are willing to accept in terms of residential power outages, that's somewhere above 2-3 hours per year. In Ann Arbor, Michigan, where I grew up, the whole town tended to pretty much grind to a halt two or three days a year, when more snow fell than the city had the resources to deal with. The quantity of snow necessary to cause that was probably four or five inches. 
My understanding is that Minneapolis and Washington DC both grind to a halt due to snow with somewhat similar frequency, but the amount of snow required is significantly more in Minneapolis and significantly less in DC. Again, there must be some threshold of interruptions due to exceptionally bad weather that are tolerated, which nobody wants to do worse than and nobody wants to spend the money to do better than. So, it appears that among general infrastructure we depend on, there are probably the following reliability thresholds: Employees not being able to get to work due to snow: two to three days per year. Berkeley storm sewers: overflow two to three days per year. Residential electricity: out two to three hours per year. Cell phone service: somewhat better than nine fives of reliability ;) Landline phone service: I haven't noticed an outage on my home lines in a few years. Natural gas: I've never noticed an outage. How Internet service fits into that of course depends on how you're accessing the Net. The T-Mobile GPRS card I got recently seems significantly less reliable than my cell phone. My SBC DSL line is almost to
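The thresholds listed above can be put on a common availability scale (the scale behind the "nine fives" joke) with a one-line conversion from outage hours per year. The outage figures plugged in are the ones from the post; the conversion itself is just arithmetic.

```python
# Convert "hours of outage per year" into an availability percentage,
# using the thresholds from the post. 8,760 hours in a non-leap year.

HOURS_PER_YEAR = 24 * 365

def availability(outage_hours_per_year):
    return 100 * (1 - outage_hours_per_year / HOURS_PER_YEAR)

# Residential electricity: out two to three hours a year -> roughly 99.97%.
elec = availability(2.5)

# Snow days / Berkeley storm sewers: two to three days a year -> roughly 99.3%.
snow = availability(2.5 * 24)

print(round(elec, 2), round(snow, 2))
```

So the infrastructure people already tolerate sits around "three nines" to "three and a half nines", well short of the five nines traditionally claimed for the phone network.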
Re: How reliable does the Internet need to be? (Was: Re: Converged Network Threat)
On Thu, 26 Feb 2004 15:58:47 GMT, Roland Perry [EMAIL PROTECTED] said: And you also need a way to persuade the Ambulance Service not to terminate their calls via VoIP, or send dispatch instructions via public-IP over GSM (or whatever) to their vehicles. We often can't get the owners of the fiber to 'fess up to the actual physical path, when we're trying to build out diversity. What makes you think the Ambulance Service will have the competency to have any *clue* where their dial tone actually comes from and goes to?
Re: How reliable does the Internet need to be? (Was: Re: Converged Network Threat)
In article [EMAIL PROTECTED], [EMAIL PROTECTED] writes We often can't get the owners of the fiber to 'fess up to the actual physical path, when we're trying to build out diversity. What makes you think the Ambulance Service will have the competency to have any *clue* where their dial tone actually comes from and goes to? You need a Regulator[tm] which insists that the Ambulance Service demonstrates that they understand these issues, or revoke their licence. A bit like you do for the wetware behind the steering wheel (or the life support system in the back). -- Roland Perry
Microsoft on security holes
I just saw this on Slashdot, so for those of you who don't read Slashdot, enjoy. http://news.bbc.co.uk/1/hi/technology/3485972.stm Yeah, it's a little bit off topic, but with the recent amount of viruses, worms, trojans, etc. going around the Internet that are causing havoc with the general day-to-day operations of ISPs, this is quite an interesting read. Basically, Microsoft is claiming that security exploits only come out after patches. Uh huh, yeah right. (waiting for his list AUP violation notice, again) -- Brian Bruns The Summit Open Source Development Group Open Solutions For A Closed World / Anti-Spam Resources http://www.sosdg.org The Abusive Hosts Blocking List http://www.ahbl.org
Re: How reliable does the Internet need to be? (Was: Re: Converged Network Threat)
So does this mean we also need a regulator to make sure fiber providers fess up to the actual diversity of their physical paths? The R word is not one to be tossed around lightly. When does it apply and when does it not, or does it never apply? The more critical you get, the more R creeps in, but who defines critical, and when does that rise above a threshold to induce R? - Original Message - From: Roland Perry [EMAIL PROTECTED] Date: Thursday, February 26, 2004 12:20 pm Subject: Re: How reliable does the Internet need to be? (Was: Re: Converged Network Threat) In article [EMAIL PROTECTED], [EMAIL PROTECTED] writes We often can't get the owners of the fiber to 'fess up to the actual physical path, when we're trying to build out diversity. What makes you think the Ambulance Service will have the competency to have any *clue* where their dial tone actually comes from and goes to? You need a Regulator[tm] which insists that the Ambulance Service demonstrates that they understand these issues, or revoke their licence. A bit like you do for the wetware behind the steering wheel (or the life support system in the back). -- Roland Perry
Re: Converged Networks Threat (Was: Level3 Outage)
On Thu, Feb 26, 2004 at 02:48:55PM +0000, [EMAIL PROTECTED] wrote: This is possible today. Build your own routers using the right microkernel, OSKIT and the Click Modular Router software and you can have this. When we restrict ourselves only to router packages from major vendors then we are doomed to using outdated technology at inflated prices. Tell you what Michael, build me some of those, have it pass my labs and I'll give you millions in business. Deal? The problem with your lab is that you have too many millions to give. In order to win those millions people would have to prove that their box is at least as good as C and J in the core of the largest Internet backbones in the world. That is an awfully big Let me try this one more time. From the top. You said: begin quote software and you can have this. When we restrict ourselves only to router packages from major vendors then we are doomed to using outdated technology at inflated prices. end quote So now we have to give. In order to win those millions people would have to prove that their box is at least as good as C and J in the core of the So the outdated technology at inflated prices is too high of a hurdle to pass for the magic Click Modular Software router, the one that is allegedly NOT antiquated and not using outdated technology? But somehow still cannot function in a core? History shows that if you can build a mousetrap that is technically better than anything on the market, your best route for success is Thought it went "build a better mousetrap and the world will beat a path to your door", etc. etc. etc. to sell it into niche markets where the customer appreciates the technical advances that you can provide and is willing to pay for those technical advances. I don't think that describes the larger Internet provider networks. How would you know this? 
Historically, the cutting edge technology has always gone into the large cores first because they are the ones pushing the bleeding edge in terms of capacity, power, and routing. /vijay
Re: Converged Networks Threat (Was: Level3 Outage)
On Thu, Feb 26, 2004 at 10:05:03AM -0800, David Barak wrote: --- vijay gill [EMAIL PROTECTED] wrote: How would you know this? Historically, the cutting edge technology has always gone into the large cores first because they are the ones pushing the bleeding edge in terms of capacity, power, and routing. /vijay I'm not sure that I'd agree with that statement: most of the large providers with whom I'm familiar tend to be relatively conservative with regard to new technology deployments, for a couple of reasons: 1) their backbones currently work - changing them into something which may or may not work better is a non-trivial operation, and risks the network. This is perhaps current. Check back to see large deployments: GSR - Sprint/UUNET GRF - UUNET Juniper - UUNET/CWUSA In all of the above cases, those were the large ISPs that forced development of the boxes. Most of the smaller cutting edge networks are still running 7513s. The GSR was invented because the 7513s were running out of PPS. CEF was designed to support offloading the RP. 2) they have an installed base of customers who are living with existing functionality - this goes back to reason 1 - unless there is money to be made, nobody wants to deploy anything. 3) It makes more sense to deploy a new box at the edge, and eventually permit it to migrate to the core after it's been thoroughly proven - the IP model has features living on the edges of the network, while capacity lives in the core. If you have 3 high-cap boxes in the core, it's probably easier to add a fourth than it is to rip the three out and replace them with two higher-cap boxes. The core has expanded to the edge, not the other way around. The aggregate backplane bandwidth requirements tend to drive core box evolution first, while the edge box normally has to deal with high-touch features and port multiplexing. These of course are becoming more and more specialized over time. 
4) existing management infrastructure permits the management of existing boxes - it's easier to deploy an all-new network than it is to upgrade from one technology/platform to another. Only if you are willing to write off your entire capital investment. No one is willing to do that today. -David Barak -Fully RFC 1925 Compliant /vijay
Re: Converged Networks Threat (Was: Level3 Outage)
1) their backbones currently work - changing them into something which may or may not work better is a non-trivial operation, and risks the network. i would disagree. their backbones tend to reach scaling problems, hence the need for bleeding/leading edge technologies. that's been my experience in three past large networks. This is perhaps current. Check back to see large deployments GSR - sprint/UUNEt GRF - uunet Juniper - UUNET/CWUSA indeed, and going back even further: IS-IS, 7000 and the original SSE - mci/sprint; vip and netflow - genuity (the original)/probably many others -b
RE: How reliable does the Internet need to be? (Was: Re: Converged Network Threat)
On Thu, 26 Feb 2004, Pendergrass, Greg wrote: I think how reliable the internet needs to be depends on what you want to use it for: if you want to call an ambulance you DON'T use the internet, if you want to transfer money from one account to another you DO use the internet. In other words right now it's good for things that are important but not critical from an immediate action standpoint. If it can wait until tomorrow use the internet otherwise pick up the phone and dial. This seems to me to have very little to do with network reliability, and far more to do with feedback. When sending somebody e-mail you assume they'll probably check their e-mail and receive the message eventually, but you have no idea if they'll get it right away, or if they'll notice it along with all the other e-mail they get. When phoning somebody, you know right away whether they answer, and you know right away how they respond to whatever you have to say. If you really need to get in touch with somebody right now, do you call their presumably more reliable land line, or their presumably less reliable cell phone? -Steve
Re: Converged Networks Threat (Was: Level3 Outage)
vijay gill wrote: CEF was designed to support offloading the RP. Not really. There existed distributed fast switching before dCEF came along. It might still exist. CEF was developed to address the issue of route cache insertion and purging. The unnecessarily painful 60-second-interval new-destination stall was widely documented before CEF got widespread use. The fast switching approach was also particularly painful when DDoS attacks occurred. Pete
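The distinction Petri draws can be made concrete with a toy model (much simplified; real fast switching and CEF involve hardware and data structures not shown here). Fast switching populates a per-destination cache on demand, with the first packet to a new destination punted to the slow path, so random-destination traffic such as a DDoS causes constant insertion and purging. A CEF-style FIB is precomputed from the routing table, so lookup cost is flat regardless of traffic pattern.

```python
# Toy model of demand-populated route cache (fast switching) vs a
# precomputed FIB (CEF-style). Sizes and traffic are illustrative.

import random

class DemandCache:
    """Fast-switching-style cache: filled on first use, bounded size."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}
        self.slow_path_hits = 0

    def forward(self, dest):
        if dest not in self.cache:
            self.slow_path_hits += 1           # punt: full lookup + insert
            if len(self.cache) >= self.capacity:
                self.cache.pop(next(iter(self.cache)))   # crude purge
            self.cache[dest] = "nexthop"
        return self.cache[dest]

class PrecomputedFib:
    """CEF-style: every known destination resolved ahead of time."""
    def __init__(self, routes):
        self.fib = {dest: "nexthop" for dest in routes}
        self.slow_path_hits = 0                # never punts for known routes

    def forward(self, dest):
        return self.fib[dest]

random.seed(1)
routes = range(10_000)
cache, fib = DemandCache(capacity=100), PrecomputedFib(routes)
attack = [random.randrange(10_000) for _ in range(5_000)]  # random dests
for d in attack:
    cache.forward(d)
    fib.forward(d)
print(cache.slow_path_hits, fib.slow_path_hits)  # cache thrashes; FIB doesn't
```

With random destinations spread over far more prefixes than the cache holds, nearly every packet punts to the slow path, which is the DDoS pain the post describes; the precomputed table never punts for a known route.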
Re: Converged Networks Threat (Was: Level3 Outage)
On Thu, Feb 26, 2004 at 09:32:07PM +0200, Petri Helenius wrote: along. It might still exist. CEF was developed to address the issue of route cache insertion and purging. The unneccessarily painful 60 second interval new destination stall was widely documented before CEF got widespread use. The fast switching approach was also particularly painful when DDOS attacks occurred. Thanks for the correction. I clearly was not paying enough attention when composing. /vijay
Re: Converged Networks Threat (Was: Level3 Outage)
History shows that if you can build a mousetrap that is technically better than anything on the market, your best route for success is to sell it into niche markets where the customer appreciates the technical advances that you can provide and is willing to pay for those technical advances. I don't think that describes the larger Internet provider networks. and this has been so well shown by the blazing successes of bay networks, avici, what-its-name that burst into flames in everyone's labs, ... watch out for flying pigs randy
Re: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
http://biz.yahoo.com/rc/040226/tech_verisign_2.html can't say I'm surprised. Another nail in the Verisign coffin.
Re: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
On Thu, 26 Feb 2004, Deepak Jain wrote: Since no one else has mentioned this: http://biz.yahoo.com/rc/040226/tech_verisign_2.html Looks like I need to stock up on popcorn. -- Jay Hennigan - CCIE #7880 - Network Administration - [EMAIL PROTECTED] WestNet: Connecting you to the planet. 805 884-6323 WB6RDV NetLojix Communications, Inc. - http://www.netlojix.com/
Re: Converged Networks Threat (Was: Level3 Outage)
and this has been so well shown by the blazing successes of bay networks, avici, what-its-name that burst into flames in everyone's labs, ... That's a very good point. Building a router that works (at least learning from J's example) means hiring away the most important talent from your competition. Though it could also be said that the companies that hired that same talent away from J have not met with the same success, yet. Deepak
Re: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
On Thu, 26 Feb 2004, Roman Volf wrote: When are they up for renewal exactly? November 10, 2007, according to http://www.icann.org/tlds/agreements/verisign/registry-agmt-com-25may01.htm -S
Re: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
in response to... http://biz.yahoo.com/rc/040226/tech_verisign_2.html X-Mailer: MH-E 7.4; nmh 1.0.4; GNU Emacs 21.3.1 [EMAIL PROTECTED] (Neil J. McRae) writes: can't say I'm surprised. Another nail in the Verisign coffin. it's not nearly that simple. [EMAIL PROTECTED] (John Neiberger) added: They must have taken a page from the recently-released book How to Shoot Your Company in the Foot, by SCO. there's a certain inevitability to these things. sco believed that it had no choice except closing its doors or suing. verisign may feel likewise. the palatable choices were all discarded much earlier, and not nec'ily in ways whose outcomes were knowable. [EMAIL PROTECTED] (William Leibzon) writes: And I'm sure ICANN will remember it for long time - right up to the point when Verisign's contracts for .com/.net management are up for renewal. IANAL, but upon rereading the contract a few months ago they looked self-perpetuating and there appears to be no circumstance no matter how unreasonable under which icann could select a different operator for the .com or .net registries. but don't take my word for it -- pay a lawyer to read http://www.icann.org/registries/agreements.htm and then let us all know what she tells you. the paper at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=475281 entitled Site Finder and Internet Governance by Jonathan Weinberg is also quite instructive. -- Paul Vixie
RE: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
By the way, do we even know what we're talking about? Specifically, has VeriSign produced a set of specifications for exactly what SiteFinder is and does? For example, is it guaranteed to return the same A record for all unregistered domains? Is it guaranteed that that A record will not change? Until VeriSign produces a technical specification for what it is they intend to do, they cannot expect other people to opine about what effects their changes will have. VeriSign has not yet even started the notification and analysis period. Isn't VeriSign's lawsuit premature? I mean, ICANN has not yet said no to any specific technical proposal from VeriSign, at least as far as I know. Is VeriSign arguing that they should be able to do whatever they want with the root DNS, with no advance notice to anyone? DS
RE: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
The lawsuit is not premature to the extent that:
1. VRSN were told (however justly) to cease and desist Site Finder 1.0 or else face consequences.
2. VRSN were told they couldn't implement the Consolidate service without making other concessions [according to the complaint, the service allowed registrants to buy fractions of a year of registration to top up existing ones so that a whole portfolio would come due on the same day -- a useful feature].
3. ICANN hasn't implemented the parts of the contracts that call for review panels in cases of disputes.
4. VRSN are looking for leverage to force a favorable outcome in Rome on WLS or on the forthcoming Sitefinder 2.0 as part of settlement negotiations, if any.
Not, I hasten to add, that I support Sitefinder or WLS (although I think I like Consolidate). But what I like isn't the issue. Even if having ICANN win some of these is a short-run gain for usability of the Internet, making ICANN's approval required for every ancillary service or change in business model of every registry is a serious long-term drag on the evolution of the Internet. Although, like all regulatory compliance work, it would generate serious lawyers' fees. On Thu, 26 Feb 2004, David Schwartz wrote: By the way, do we even know what we're talking about? Specifically, has VeriSign produced a set of specifications for exactly what SiteFinder is and does? For example, is it guaranteed to return the same A record for all unregistered domains? Is it guaranteed that that A record will not change? Until VeriSign produces a technical specification for what it is they intend to do, they cannot expect other people to opine about what effects their changes will have. VeriSign has not yet even started the notification and analysis period. Isn't VeriSign's lawsuit premature? I mean, ICANN has not yet said no to any specific technical proposal from VeriSign, at least as far as I know. 
Is VeriSign arguing that they should be able to do whatever they want with the root DNS, with no advance notice to anyone? DS -- http://www.icannwatch.org Personal Blog: http://www.discourse.net A. Michael Froomkin |Professor of Law| [EMAIL PROTECTED] U. Miami School of Law, P.O. Box 248087, Coral Gables, FL 33124 USA +1 (305) 284-4285 | +1 (305) 284-6506 (fax) | http://www.law.tm --It's warm here.--
ICANN/Registry Agreement:
Doesn't SiteFinder give one registrar superior access to the registry's resources compared to the others, etc., etc.? --- http://www.icann.org/tlds/agreements/verisign/registry-agmt-apph-16apr01.htm VeriSign Equivalent Access Certification VeriSign, as Registry Operator (VGRS), makes the following certification:
1. All registrars (including any registrar affiliated with VGRS) connect to the Shared Registration System Gateway via the Internet by utilizing the same maximum number of IP addresses and SSL certificate authentication.
2. VGRS has made the current version of the registrar toolkit software accessible to all registrars and has made any updates available to all registrars on the same schedule.
3. All registrars have the same level of access to VGRS customer support personnel via telephone, e-mail and the VGRS website.
4. All registrars have the same level of access to the VGRS registry resources to resolve registry/registrar or registrar/registrar disputes and technical and/or administrative customer service issues.
5. All registrars have the same level of access to VGRS-generated data to reconcile their registration activities from VGRS Web and ftp servers.
6. All registrars may perform basic automated registrar account management functions using the same registrar tool made available to all registrars by VGRS.
7. The Shared Registration System does not include any algorithms or protocols that differentiate among registrars with respect to functionality, including database access, system priorities and overall performance.
8. All VGRS-assigned personnel have been directed not to give preferential treatment to any particular registrar.
9. I have taken reasonable steps to verify that the foregoing representations are being complied with.
This Certification is dated this the __ day of __, _.
Re: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
Scott Call wrote: On Thu, 26 Feb 2004, Roman Volf wrote: When are they up for renewal exactly? November 10, 2007, according to http://www.icann.org/tlds/agreements/verisign/registry-agmt-com-25may01.htm -S I think as far as Verisign is concerned, they might not be a going concern in 2007, so why worry? They need to do something to get their revenues up or risk the wrath of Wall Street: http://biz.yahoo.com/rc/040129/tech_verisign_earns_4.html At $6/year per domain registered, VGRS makes the lion's share of the money in the domain registry business for .com and .net. Yet they lost $20MM last quarter (or more; they lost over $200MM in 2003), only have about $300MM in cash, and their revenues are falling. Deepak Jain AiNET
Re: ICANN/Registry Agreement:
On Thursday, February 26, 2004 8:21 PM [EST], Deepak Jain [EMAIL PROTECTED] wrote: Doesn't sitefinder give one registry superior access to the registry's resources than the others, etc, etc? Rather than clutter up NANOG with this stuff, since it's apparent that we will be having more issues about SiteFinder, I've gone ahead and set up a discussion list on my server for general talk about SiteFinder. It's unmoderated; everyone is welcome to sign up and post your views. http://wwwapps.2mbit.com/mailman/listinfo/sitefinder-discuss -- Brian Bruns The Summit Open Source Development Group Open Solutions For A Closed World / Anti-Spam Resources http://www.sosdg.org The Abusive Hosts Blocking List http://www.ahbl.org
Re: ICANN/Registry Agreement:
[It isn't important who] wrote: It gives Verisign/NetSol the ability to generate exclusive profit from the hijacking of every non-existant domain name in existance. No other registar could do something like this without paying for every last domain they take, or could they ever do anything like this due to the fact that Verisign/NetSol controls ALL of the TLD servers for .com and .net. ...hijacking of every non-existent domain name in existence. ...non-existent ... in existence. Several people have said things like that in recent times. Including me, I'll bet. What exactly does it mean? (Yes, I know. We are talking about the fact that strings submitted for lookup that have not been registered as names would not cause an error to be returned. And that is clearly a lot more words, if not a clearer description of the problem. We need a wordsmith to give us a short string that can be converted into a useful TLA.)
RE: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
By the way, do we even know what we're talking about? that is not needed to flame folk such as verisign. lynch mobs look pretty good until you are the one on guantanamo. randy
Best Common Practice - Listening to local routes from peers?
Hello: We have a customer of a customer who is attempting to send traffic from IP space we control, through the Internet and back into us via one of our transit connections. I have filters in place that block all inbound traffic from the blocks I announce coming in over my transit and peering connections. This is breaking the downstream customer's ability to route from them, through UUNet, and back to me. I'm curious what the Best Common Practice is for this type of scenario. I have always used this type of filtering as a way to bury source-spoofed traffic in a DDOS situation but I'm not sure if it's appropriate, generally speaking. If other operators would like to reply directly to me I would be more than happy to summarize to the list. Thank you for any assistance you can provide. Michael Smith [EMAIL PROTECTED]
Re: Best Common Practice - Listening to local routes from peers?
On Feb 26, 2004, at 11:22 PM, Michael Smith wrote: We have a customer of a customer who is attempting to send traffic from IP space we control, through the Internet and back into us via one of our transit connections. I have filters in place that block all inbound traffic from the blocks I announce coming in over my transit and peering connections. This is breaking the downstream customer's ability to route from them, through UUNet, and back to me. I'm curious what the Best Common Practice is for this type of scenario. I have always used this type of filtering as a way to bury source-spoofed traffic in a DDOS situation but I'm not sure if it's appropriate, generally speaking. It is a good idea to filter source IP on the edge. Since your customer has more than one upstream, filtering their IP space at your border is not the edge. Filter their source IP where your network meets their network. Filter your source IP at your upstream borders. My $0.003411284. :) -- TTFN, patrick
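Patrick's placement rule can be sketched in a few lines of Python. This is a toy model, not a router config: all prefixes and link roles are hypothetical, and a real deployment would express the same policy as ACLs or uRPF at each edge. The point it shows is that a multihomed customer's space must NOT be dropped at the transit border, only spoofed copies of your own space.

```python
# Toy model of edge vs. border source-address filtering.
# Prefixes below are documentation ranges, chosen for illustration.
from ipaddress import ip_address, ip_network

OUR_SPACE = [ip_network("198.51.100.0/24")]       # space we announce
CUSTOMER_SPACE = [ip_network("203.0.113.0/25")]   # multihomed downstream customer

def permit_inbound(src, link):
    """Decide whether a packet with source address src is accepted on link."""
    src = ip_address(src)
    if link == "customer":
        # True edge: accept only the customer's own source addresses.
        return any(src in net for net in CUSTOMER_SPACE)
    if link == "transit":
        # Border: drop packets spoofing our own space, but leave the
        # customer's space alone -- it may legitimately arrive via UUNet.
        return not any(src in net for net in OUR_SPACE)
    return False

assert permit_inbound("203.0.113.5", "customer")      # customer's own space: ok
assert not permit_inbound("198.51.100.9", "transit")  # spoofing our space: drop
assert permit_inbound("203.0.113.5", "transit")       # customer via transit: ok
```

The original poster's breakage came from treating the transit border like the edge: his filter there also matched the customer's multihomed return path.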
Re: ICANN/Registry Agreement:
--- Laurence F. Sheldon, Jr. [EMAIL PROTECTED] wrote: ...hijacking of every non-existent domain name in existence. ...non-existent ... in existence. Several people have said things like that in recent times. Including me, I'll bet. What exactly does it mean? (Yes, I know. We are talking about the fact that strings submitted for lookup that have not been registered as names would not cause an error to be returned. And that is clearly a lot more words, if not a clearer description of the problem. We need a wordsmith to give us a short string that can be converted into a useful TLA.) How about this: Sitefinder gives Verisign revenue from every non-existent, well-formed domain name. -David Barak -Fully RFC 1925 Compliant- __ Do you Yahoo!? Get better spam protection with Yahoo! Mail. http://antispam.yahoo.com/tools
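What the wildcard changes can be shown with a toy resolver in Python. This is a sketch of the behavior being debated, not of VeriSign's actual implementation; the zone contents and addresses are made up.

```python
# Toy model of a registry zone, illustrating what a wildcard record
# changes for names that were never registered.

NXDOMAIN = "NXDOMAIN"

def resolve(zone, name, wildcard=None):
    """Return the address for name, the wildcard target if one is
    configured, or NXDOMAIN otherwise."""
    if name in zone:
        return zone[name]
    if wildcard is not None:
        return wildcard  # every unregistered name now resolves
    return NXDOMAIN

zone = {"example.com": "192.0.2.10"}

# Without a wildcard, a typo returns an error the application can act on.
assert resolve(zone, "examp1e.com") == NXDOMAIN

# With a wildcard, the same typo resolves to the registry's server.
assert resolve(zone, "examp1e.com", wildcard="192.0.2.53") == "192.0.2.53"
```

Every piece of software that branched on the NXDOMAIN error (spam filters, typo checking, monitoring) instead receives a positive answer pointing at one party's server, which is the "revenue from every non-existent, well-formed domain name" in one line.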
Re: ICANN/Registry Agreement:
On Thursday, February 26, 2004 8:21 PM [EST], Deepak Jain [EMAIL PROTECTED] wrote: Doesn't sitefinder give one registry superior access to the registry's resources than the others, etc, etc? It gives Verisign/NetSol the ability to generate exclusive profit from the hijacking of every non-existant domain name in existance. No other registar could do something like this without paying for every last domain they take, or could they ever do anything like this due to the fact that Verisign/NetSol controls ALL of the TLD servers for .com and .net. -- Brian Bruns The Summit Open Source Development Group Open Solutions For A Closed World / Anti-Spam Resources http://www.sosdg.org The Abusive Hosts Blocking List http://www.ahbl.org
Re: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
[EMAIL PROTECTED] (william(at)elan.net) writes: ... And based on that Verisign rule over these tlds ends in November 2007 no. See page 19 of: http://papers.ssrn.com/sol3/delivery.cfm/SSRN_ID475281_code70168.pdf?abstractid=475281 i think that verisign and icann are stuck with each other, in perpetuity. -- Paul Vixie
Re: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
On Thu, 26 Feb 2004, John Kinsella wrote: When are they up for renewal exactly? November 10, 2007, according Any way to speed that up? ;)

http://www.icann.org/tlds/agreements/verisign/registry-agmt-com-25may01.htm

16. Termination ...

B. In the event of termination by DOC of its Cooperative Agreement with Registry Operator pursuant to Section 1.B.8 of Amendment ___ to that Agreement, ICANN shall, after receiving express notification of that fact from DOC and a request from DOC to terminate Registry Operator as the operator of the Registry TLD, terminate Registry Operator's rights under this Agreement, and shall cooperate with DOC to facilitate the transfer of the operation of the Registry Database to a successor registry

C. This Agreement may also be terminated by ICANN on written notice given at least forty days after the final and nonappealable occurrence of either of the following events:

(i) Registry Operator: (a) is convicted by a court of competent jurisdiction of a felony or other serious offense related to financial activities, or is the subject of a determination by a court of competent jurisdiction that ICANN reasonably deems as the substantive equivalent of those offenses; or (b) is disciplined by the government of its domicile for conduct involving dishonesty or misuse of funds of others

(ii) Any officer or director of Registry Operator is convicted of a felony or of a misdemeanor related to financial activities, or is judged by a court to have committed fraud or breach of fiduciary duty, or is the subject of a judicial determination that ICANN deems as the substantive equivalent of any of these

So all we need to do is either lobby the US government (get to your senator or congressman, and before Verisign starts lobbying him directly) or get federal courts to convict the people at Verisign responsible for all this mess. -- William Leibzon Elan Networks [EMAIL PROTECTED]
Re: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
Any way to speed that up? ;) John On Thu, Feb 26, 2004 at 03:57:12PM -0800, Scott Call wrote: On Thu, 26 Feb 2004, Roman Volf wrote: When are they up for renewal exactly? November 10, 2007, according to http://www.icann.org/tlds/agreements/verisign/registry-agmt-com-25may01.htm
Re: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
For ICANN/Registry agreements see here: http://www.icann.org/registries/agreements.htm Specific agreements all technical specs Verisign agreed to follow: http://www.icann.org/tlds/agreements/verisign/com-index.htm http://www.icann.org/tlds/agreements/verisign/net-index.htm And based on that Verisign rule over these tlds ends in November 2007 On Thu, 26 Feb 2004, Roman Volf wrote: When are they up for renewal exactly? william(at)elan.net wrote: On Thu, 26 Feb 2004, Deepak Jain wrote: Since no one else has mentioned this: http://biz.yahoo.com/rc/040226/tech_verisign_2.html And I'm sure ICANN will remember it for long time - right up to the point when Verisign's contracts for .com/.net management are up for renewal.
Re: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
When are they up for renewal exactly? william(at)elan.net wrote: On Thu, 26 Feb 2004, Deepak Jain wrote: Since no one else has mentioned this: http://biz.yahoo.com/rc/040226/tech_verisign_2.html And I'm sure ICANN will remember it for long time - right up to the point when Verisign's contracts for .com/.net management are up for renewal.
Re: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
On Thu, 26 Feb 2004, Deepak Jain wrote: Since no one else has mentioned this: http://biz.yahoo.com/rc/040226/tech_verisign_2.html And I'm sure ICANN will remember it for long time - right up to the point when Verisign's contracts for .com/.net management are up for renewal. -- William Leibzon Elan Networks [EMAIL PROTECTED]
Re: Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
Neil J. McRae [EMAIL PROTECTED] 2/26/04 3:03:52 PM http://biz.yahoo.com/rc/040226/tech_verisign_2.html can't say I'm surprised. Another nail in the Verisign coffin. They must have taken a page from the recently-released book How to Shoot Your Company in the Foot, by SCO. * John
Lawsuit on ICANN (was: Re: A few words on VeriSign's sitefinder)
Since no one else has mentioned this: http://biz.yahoo.com/rc/040226/tech_verisign_2.html
RE: How relable does the Internet need to be? (Was: Re: Converged Network Threat)
I don't post much as I'm mostly on here to learn and have little I can contribute, but... While following all the discussions, I wonder if there are too many people here who work at large, highly redundant facilities and live in expensive areas with new circuits. I don't believe the rest of the world has such high expectations. I live in a typical USA '70s-era neighborhood and have (this year) had nearly 2 full days without power (not counting that nationwide blackout thing, and not even guessing how many 1-2 hour power losses), 4 or 5 days without dialtone (multiple episodes lasting over a day each, also suffering static on the line every time it rains), and had the cable modem down for 3 days straight (it was up MOST of the time the power was out; as a side note, I tried a BRI, but cancelled after the phone company couldn't keep it up more than 50% of the time). We're used to it; that's just life in this city. Cell phone coverage is good in the cities, but in the stretches in between, the cell phone is just a paperweight. Just last night, we had 2 T-1s down for 5.5 hours here at work (I must say, though, reliability at work has GREATLY improved the last couple of years!)... I can go on and on about this, but won't, as this whole thing is really stretching the limits of network-related now ;-)
Re: Converged Networks Threat (Was: Level3 Outage)
--- vijay gill [EMAIL PROTECTED] wrote: In all of the above cases, those were the large isps that forced development of the boxes. Most of the smaller cutting edge networks are still running 7513s. Hmm - what I was getting at was that the big ISPs for the most part still have a whole lot of 7513s running around (figuratively), while if I were building a new network from the ground up, I'd be unlikely to use them. GSR was invented because the 7513s were running out of PPS. CEF was designed to support offloading the RP. 2) they have an installed base of customers who are living with existing functionality - this goes back to reason 1 - unless there is money to be made, nobody wants to deploy anything. 3) It makes more sense to deploy a new box at the edge, and eventually permit it to migrate to the core after it's been thoroughly proven - the IP model has features living on the edges of the network, while capacity lives in the core. If you have 3 high-cap boxes in the core, it's probably easier to add a fourth than it is to rip the three out and replace them with two higher-cap boxes. The core has expanded to the edge, not the other way around. The aggregate backplane bandwidth requirements tend to drive core box evolution first while the edge box normally has to deal with high touch features and port multiplexing. These of course are becoming more and more specialized over time. I agree, from a capacity perspective: the GSR began life as a core router because it supported big pipes. It's only recently that it's had anywhere near the number of features which the 7500 has (and there are still a whole lot of specialized features which it doesn't have). 
From a feature deployment approach, new boxes come in at the edge (think of the deployment of the 7500 itself: it was an IP front-end for ATM networks) 4) existing management infrastructure permits the management of existing boxes - it's easier to deploy an all-new network than it is to upgrade from one technology/platform to another. Only if you are willing to write off your entire capital investment. No one is willing to do that today. That is EXACTLY my point: as new companies are unwilling to write off an investment, they MUST keep supporting the old stuff. Once they're supporting the old stuff of vendor X, that provides an incentive to get more new stuff from vendor X, if the management platform is the same. For instance, if I've got a Marconi ATM network, I'm unlikely to buy new Cisco ATM gear, unless I'm either building a parallel network, or am looking for an edge front-end to offer new features. However, if I were building a new ATM network today, I would do a bake-off between the vendors and see which one met my needs best. -David Barak -Fully RFC 1925 Compliant-
Re: Converged Networks Threat (Was: Level3 Outage)
--- vijay gill [EMAIL PROTECTED] wrote: How would you know this? Historically, the cutting edge technology has always gone into the large cores first because they are the ones pushing the bleeding edge in terms of capacity, power, and routing. /vijay I'm not sure that I'd agree with that statement: most of the large providers with whom I'm familiar tend to be relatively conservative with regard to new technology deployments, for a couple of reasons: 1) their backbones currently work - changing them into something which may or may not work better is a non-trivial operation, and risks the network. 2) they have an installed base of customers who are living with existing functionality - this goes back to reason 1 - unless there is money to be made, nobody wants to deploy anything. 3) It makes more sense to deploy a new box at the edge, and eventually permit it to migrate to the core after it's been thoroughly proven - the IP model has features living on the edges of the network, while capacity lives in the core. If you have 3 high-cap boxes in the core, it's probably easier to add a fourth than it is to rip the three out and replace them with two higher-cap boxes. 4) existing management infrastructure permits the management of existing boxes - it's easier to deploy an all-new network than it is to upgrade from one technology/platform to another. -David Barak -Fully RFC 1925 Compliant-
Re: How relable does the Internet need to be? (Was: Re: Converged Network Threat)
In article [EMAIL PROTECTED], Laurence F. Sheldon, Jr. [EMAIL PROTECTED] writes I think we will need also to make it illegal (to control the liability issues) to need emergency assistance in a place whose only link is via public-IP. This is an interesting issue, and one which is currently being debated in the UK (where a newly reformed regulator is taking a fresh look at VoIP)[1]. Most end users that I've discussed it with (geeks to a man) say it's not society's problem if they (the geeks) choose to limit their availability of emergency assistance[2], when buying a new toy like VoIP (and throwing away their POTS). I'm not sure that I entirely agree. Less well informed users probably need someone making that decision for them. (Just call me Nanny.) [1] Should VoIP include 911/999 service, and how does one resolve the various geographic location issues associated with this. [2] By, for example, having no 911/999 service available *at all* from their chosen provider, and relying on a mobile phone or a neighbour with POTS. -- Roland Perry
Re: How relable does the Internet need to be? (Was: Re: Converged Network Threat)
Roland Perry wrote: In article [EMAIL PROTECTED] net, Pendergrass, Greg [EMAIL PROTECTED] writes if you want to call an ambulance you DON'T use the internet And you also need a way to persuade the Ambulance Service not to terminate their calls via VoIP, or send dispatch instructions via public-IP over GSM (or whatever) to their vehicles. I think we will need also to make it illegal (to control the liability issues) to need emergency assistance in a place whose only link is via public-IP. (I hear that there are places in Papua New Guinea that are being brought on-line where everything (EVERYthing) else is stone-age-standard.) Or the IP bits need to be assured as good enough that it doesn't matter. It's perhaps three years since I heard that there was real possibility of some of the above. That stable door may be more open than you think.
Re: Converged Networks Threat (Was: Level3 Outage)
On Thu, Feb 26, 2004 at 11:28:09AM +, [EMAIL PROTECTED] wrote: Wouldn't it be great if routers had the equivalent of 'User mode Linux' each process handling a service, isolated and protected from each other. The physical router would be nothing more than a generic kernel handling resource allocation. Each virtual router would have access to x amount of resources and will either halt, sleep, crash when it exhausts those resources for a given time slice. This is possible today. Build your own routers using the right microkernel, OSKIT and the Click Modular Router software and you can have this. When we restrict ourselves only to router packages from major vendors then we are doomed to using outdated technology at inflated prices. Tell you what Michael, build me some of those, have it pass my labs and I'll give you millions in business. Deal? Let me draw it out here:
Step 1: Buy box
Step 2: Install Click Modular Router Software
Step 3: Profit
/vijay
Expectations or It can't happen to me (was Re: How Reliable)
On Wed, 25 Feb 2004, Bora Akyol wrote: It needs to be as reliable as the services that depend on it. E.g. if bank A is using the Internet exclusively without leased line back up to run its ATMs, or to interface with its customers, then it needs to be VERY reliable. That's not very reliable. On a normal day, 95% of the cash machines are working nationwide. Telephones, E911, hospitals, nuclear power plants have a variety of normal failures all the time. Humans are traditionally very bad at understanding risk. As more and more critical services/infrastructure moves to the IP/MPLS, the expectations in terms of reliability go up every year. The real questions are: * How much are customers willing to pay for it? * What kind of reporting/management infrastructure do we have to enforce/monitor the reliability commitment in the SLA? Unfortunately, both of those are marketing issues and have very little to do with actual reliability. One very well-known ISP had a premium Internet service that only cost 30% more than its standard Internet service with a 100% SLA. What you received was the same service with an insurance policy. If the service met the SLA you paid 30% more, if it didn't you only paid the standard price. Does buying travel insurance change the risk of the plane crashing?
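The "SLA as insurance" observation is just expected-value arithmetic, and a short sketch makes it concrete. The prices and the miss probability below are hypothetical; the 30% markup and refund-to-standard structure are taken from the post above.

```python
# Expected monthly price of a "premium" tier that is the same service
# plus an SLA: pay standard * 1.30 when the SLA is met, standard when missed.

def expected_price(standard, premium_markup, p_sla_met):
    premium = standard * (1 + premium_markup)
    # Weighted average of the two outcomes.
    return p_sla_met * premium + (1 - p_sla_met) * standard

std = 1000.0
# Even if the SLA is missed one month in ten, the provider still
# collects 27% more on average for identical service.
avg = expected_price(std, 0.30, 0.9)
assert round(avg) == 1270
```

Which is the point: the premium changes what the provider collects, not the probability that the service fails, exactly like travel insurance and the plane.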