Re: DNS was Re: Internet Vulnerabilities
--On Friday, July 05, 2002 17:50:24 +0100 Simon Waters [EMAIL PROTECTED] wrote:

> I would guess the . zone probably isn't that large in absolute terms, so
> large ISPs (NANOG members?) could arrange for their recursive servers to
> act as private secondaries of ".", thus eliminating the dependence on the
> root servers entirely for a large chunk of the Internet user base.

-rw-r--r-- 1 9998 213 14102 Jul 14 19:56 root.zone.gz
-rw-r--r-- 1 9998 21375 Jul 14 20:41 root.zone.gz.md5
-rw-r--r-- 1 9998 21372 Jul 14 20:42 root.zone.gz.sig

> I think the kinds of zones being handled by the gtld-servers would be
> harder to relocate, if only due to size; although the average NANOG
> reader probably has rather more bandwidth available than I do, they may
> not have the right kind of spare capacity on their DNS servers to
> secondary .com at short notice.

Exactly. The .com zone is large. I doubt that the average NANOG reader has a 16GB RAM machine idling just in case some kiddie wants to DoS Verisign.

> All I think root server protection requires is someone with access to the
> relevant zone to make it available through other channels to large ISPs.
> There is no technical reason why key DNS infrastructure providers could
> not implement such a scheme on their own recursive DNS servers now, and
> it would reduce load on both their own and the root DNS servers and
> networks.

Network load is hardly the problem, except in very starved cases; a big, well-used server will perhaps fill a T-1 or two.

> The single limiting factor on implementing such an approach would be DNS
> know-how: whilst it is probably a two-line change for most DNS servers to
> forward to their ISP's DNS server (or zone transfer "."), many sites
> probably lack the in-house skills to make that change at short notice.

This is the problem with clever tricks; they can be implemented by people who are in the loop, but most others will not make it work.
> In practical terms I'd be more worried about smaller attacks against
> specific CC domains. I could imagine some people seeing disruption of .il
> as a more potent (and perhaps less globally unpopular) political
> statement than disrupting the whole Internet. Similarly, an attack on a
> commercial subdomain in a specific country could be used to make a
> political statement, but might have significant economic consequences for
> some companies. Attacking 3 or 4 servers is far easier than attacking 13
> geographically diverse, well networked, and well protected servers.
> Similarly, I think many CC domains and country-based SLDs are far more
> hackable than many people realised, due to the extensive use of
> out-of-bailiwick data, as described by DJB. At some point the script
> kiddies will realise they can own a country or two instead of one
> website, by hacking one DNS server, and the less well secured DNS servers
> will all go in a week or two.

I definitely agree. ccTLDs are in very varying states of security awareness, and while I believe .il is aware and prepared, other conflict-zone domains might not be...

-- 
Måns Nilsson          Systems Specialist
+46 70 681 7204       KTHNOC          MN1334-RIPE

We're sysadmins. To us, data is a protocol-overhead.
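Simon's "two-line change" is roughly literal: slaving the root zone in BIND is a single zone stanza. A minimal sketch (the master address is illustrative only; in practice you need a server that actually permits AXFR of ".", such as the ones Brad lists elsewhere in this thread):

```
// named.conf fragment: act as a stealth secondary of the root zone
// instead of relying purely on the hints file. The master address
// below is illustrative -- substitute a transfer-permitting source.
zone "." {
        type slave;
        masters { 192.0.2.1; };
        file "root.zone";
};
```

The server keeps answering from its local copy even if the root servers become unreachable, which is the whole point of the scheme being discussed.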
Re: No one behind the wheel at WorldCom
On Mon, Jul 15, 2002 at 01:58:44AM -0400, Frank Scalzo wrote:

> See now we are back to the catch-22 that is IRR. No one will use it
> because the data isn't there, and no one will put the data into it
> because no one uses it.

[CC: list trimmed]

Actually, I think you'll find that bad data is only a small part of the problem; even with good data, there isn't enough support from the various router vendors to make it worthwhile. It's effectively impossible to prefix-filter a large peer due to router software restrictions. We need support for very large (256k+ to be safe) prefix filters, and the routing-process performance to actually handle a prefix list this large -- and not just one list, but many. IRR support for automagically building these prefix lists would be a real plus too; building and then pushing out filters from another machine can be quite time consuming, especially for a large network.

> I think the way to get IRR into the real world production realm is to
> really drive home the issue w/IPv6.

This still doesn't solve the scaling issue. This is no different from running your own RR, which many ISPs already do -- and they still have to exempt many of their peers. Typically, RR-derived prefix filtering is something reserved for only their transit customers. If it were that easy, everyone (well, some people) would be doing it.

--msa
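The "automagic" prefix-list building Majdi asks for is mechanically simple once you have the IRR route objects in hand; the hard part is scale and freshness, not the transformation. A toy sketch (the route objects below are invented for illustration -- real toolchains such as RtConfig query the registries directly rather than working from hand-built lists):

```python
# Toy sketch: turn IRR-style route objects into Cisco-style
# "ip prefix-list" configuration lines. The input data is invented
# for illustration; real IRR tools query whois servers directly.

def irr_to_prefix_list(route_objects, list_name):
    """Emit one permit line per route object, plus a trailing deny-all."""
    lines = []
    for seq, obj in enumerate(route_objects, start=1):
        # Space sequence numbers by 5 so entries can be inserted later.
        lines.append(
            "ip prefix-list %s seq %d permit %s"
            % (list_name, seq * 5, obj["route"])
        )
    lines.append("ip prefix-list %s deny 0.0.0.0/0 le 32" % list_name)
    return lines

routes = [
    {"route": "192.0.2.0/24", "origin": "AS64500"},
    {"route": "198.51.100.0/24", "origin": "AS64500"},
]
for line in irr_to_prefix_list(routes, "AS64500-IN"):
    print(line)
```

At hundreds of thousands of entries, generating the list is still cheap; it is loading and evaluating it in the router's routing process that hits the limits Majdi describes.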
Re: No one behind the wheel at WorldCom
There are different types of filter, though, and I'd suggest they are suitable in different circumstances, e.g.:

  small peer (<100 prefixes)  - build prefix filter list, as-path list
  middle peer                 - either, depending on requirement (eg cust, peer)
  large peer (>1000 prefixes) - as-path filter plus max-prefix

I'm not implementing the above, so the numbers and suggestions are a little arbitrary, but I'm making the point that you can filter smaller peers, who are less experienced and more likely to give an error, and for larger peers you have to be less granular but can still impose failsafes without increasing CPU.

Steve

On Mon, 15 Jul 2002, Majdi S. Abbas wrote: [...]
RE: No one behind the wheel at WorldCom
The problem with doing that is you do not get fine-grained control. You can only say "only send me 50k routes, and things not from other peers." The really nifty part about prefix-limiting your peers is when they deaggregate toward you, you drop all your BGP sessions to them at once, completely depeering. You still cannot prevent announcement of weird routes like 63/8. Do not be fooled into believing that just because a network is big they know what they are doing. Some of the 63/8 announcements I have seen came by way of Sprint.

Let's step back and think about this on a security front: anyone with access to a tier 1 ISP's router can DoS anyone on the Internet, just by throwing in a null route for a block that is more specific than the one they have advertised. Granted, not easily done, but just the same I like to be the only one who can break my network. Unfortunately Majdi is correct: we do not have sufficient functionality in today's routing software to fix the problem. Oh well, I guess it has worked for this long.

-----Original Message-----
From: Stephen J. Wilcox [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 15, 2002 8:39 AM
To: Majdi S. Abbas
Cc: Frank Scalzo; [EMAIL PROTECTED]
Subject: Re: No one behind the wheel at WorldCom

[...]
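The max-prefix failsafe being debated here is a one-liner on most platforms. A hedged IOS-style sketch (neighbor address and limit are illustrative, not from the thread):

```
! IOS-style sketch, values illustrative. Tear down the BGP session
! if the peer ever sends more than 50,000 prefixes; log a warning
! once 80% of the limit is reached.
router bgp 64500
 neighbor 192.0.2.1 remote-as 64501
 neighbor 192.0.2.1 maximum-prefix 50000 80
```

As Frank notes, this is a blunt instrument: it catches a mass deaggregation but says nothing about *which* prefixes are announced.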
Re: QoS/CoS in the real world?
--On Sunday, July 14, 2002 9:26 PM -0400 Art Houle [EMAIL PROTECTED] wrote:

> On Sun, 14 Jul 2002, Marshall Eubanks wrote:
>
> > On Sun, 14 Jul 2002 21:13:13 -0400 (EDT) Art Houle [EMAIL PROTECTED]
> > wrote:
> >
> > Or, to put it another way, how are the packets marked? And why not just
> > drop them then and there, instead of later?
>
> If we are not using our WAN connections to capacity, then p2p traffic can
> expand and fill the pipe, but if business packets are filling the pipes,
> then the p2p stuff is throttled back. This makes 100% use of an expensive
> resource.

So, you are doing straight TCP port filtering. Are there any clients that use dynamic ports? Things will get trickier for you. Other than Packeteer, are there any other products that can look into the data of a packet at any usable rate to do filtering/marking?
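The scheme Art describes -- let p2p expand into idle capacity but yield to business traffic -- maps onto class-based queueing. An IOS-style sketch (ports and percentages are illustrative, and as noted above, port matching misses clients on dynamic ports):

```
! IOS-style sketch, values illustrative. Classify well-known p2p
! ports; give them only a small guaranteed share, so they can use
! idle bandwidth but yield when business traffic fills the pipe.
access-list 101 permit tcp any any eq 6346    ! Gnutella
access-list 101 permit tcp any any eq 1214    ! KaZaA/FastTrack
!
class-map match-any P2P
 match access-group 101
!
policy-map WAN-OUT
 class P2P
  bandwidth percent 5
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output WAN-OUT
```

Note that `bandwidth` sets a guaranteed floor, not a cap: p2p still fills an otherwise empty pipe, which is the 100%-utilization property Art wants.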
Re: No one behind the wheel at WorldCom
On Mon, Jul 15, 2002 at 01:39:01PM +0100, Stephen J. Wilcox wrote:

> eg small peer 100 prefixes - build prefix filter list, as path list

maxprefix makes sense here too on a Juniper router, since it applies maxprefix to the _received_ routes (not to the routes after filtering, as Cisco does it).

tschuess
Stefan

-- 
Stefan Mink, Schlund + Partner AG            Tel: +49 721 91374 0
Netzwerkabteilung (AS8560)                   Fax: +49 721 91374 212
Key fingerprint = 389E 5DC9 751F A6EB B974 DC3F 7A1B CF62 F0D4 D2BA
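The Juniper behaviour Stefan mentions is configured per address family. A Junos-style sketch (group name, neighbor, and limit are illustrative):

```
/* Junos-style sketch, values illustrative. The prefix-limit is
 * evaluated against received routes, before any import policy. */
protocols {
    bgp {
        group small-peers {
            neighbor 192.0.2.1 {
                family inet {
                    unicast {
                        prefix-limit {
                            maximum 1000;
                            teardown;
                        }
                    }
                }
            }
        }
    }
}
```

Counting received rather than post-filter routes means a misbehaving peer trips the limit even when the import policy would have discarded the excess.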
Re: All-optical networking Was: [Re: Notes on the Internet for BellHeads]
> Add in the fact that optical sniffing, while not impossible by any means
> today, will increasingly become non-trivial as bandwidth increases. Which
> is exactly one of the 'problems' they expect optical networks to solve.

You mean just expensive, right? i.e. a couple of transponders and an OC48 or OC192 switch. Depending on what you are trying to gather, it will also become more difficult at higher speeds due to the data volume. But you are right in that it's more about money than effort in the end.

- kurtis -
Re: QoS/CoS in the real world?
A number of people think QoS was interesting for a while, but that it never either found its true use or is dead. There are unresolved questions from a customer point of view as to what they are actually going to get, what difference it will make, and how they can measure their performance and the improvements from QoS.

Having worked for a pretty large, now bankrupt, Netherlands-based operator where we were looking at QoS, what we concluded was that:

a) QoS mechanisms are for the local tail. Backbones should have enough bandwidth (and bandwidth is cheap).

b) QoS was for customers with services like VoIP and VPN -- and in most cases it was needed because the end users refused to buy the bandwidth they actually needed.

c) The QoS implementations in the vendor boxes at best leave a lot to wish for and in most cases simply do not work (but to their credit they were really helpful in working with us on this).

- kurtis -

PS. Notice that I left out the M... word. :)
AT&T
Anyone seeing any problems with AT&T in LAX? I have AT&T colo in Mesa, AZ with some routing problems... Best regards, Bryan Heitman Interland, Inc.
AS4006
Hi,

Does anybody have a contact for AS4006? (peering issue)

Thanks,
André

--------------------
Andre Chapuis
IP+ Engineering
Swisscom Ltd
Genfergasse 14
3050 Bern
+41 31 893 89 61
[EMAIL PROTECTED]
CCIE #6023
--------------------
Re: Stop it with putting your e-mail body in my MUA OT
At 2:48 PM -0400 2002/07/10, Martin Hannigan wrote:

> To the people who so arrogantly pgp sign every email they send: Learn how
> to consider the importance of your words.

In the wise words of Brian Hatch (author of _Hacking Linux Exposed_ and _Building Linux VPNs_): "If it ain't signed, it ain't me."

-- 
Brad Knowles, [EMAIL PROTECTED]

"They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety."
    -Benjamin Franklin, Historical Review of Pennsylvania.
Re: Evil PGP sigs thread must die. was Re: Stop it with putting your e-mail body in my MUA OT
At 3:01 PM -0400 2002/07/10, Andy Dills wrote:

> The passive assumption is that your words are important enough that
> somebody might want to verify them.

Correct. This statement will be true for just about everyone, at some point in their life.

> So, does EVERY email need to be pgp signed?

Do you need to use ssh every time you access a server remotely? Surely you know when your line is being tapped or when your packets are being sniffed, and you choose only those times to use ssh, and otherwise you use telnet? Same goes for actually using passwords to login -- surely you know when it's a legitimate user that is trying to login and when it's someone trying to gain illicit access to your system, and you require them to use passwords accordingly?

> When was the last time somebody on this list bothered to check the
> validity of a pgp signed message which they received via nanog?

When was the last time anyone on this list bothered to check the validity of any message they received via any channel? I mean, if you're going to use probability to support your argument, you might as well widen the discussion to a much broader sample group.

> I mean, if John Sidgmore posted that from now on, Worldcom's official
> pricing is $100/meg with a 3 meg commit, I wouldn't believe it for a
> second unless it was signed and I verified it.

Not everything is black and white. At what level would you choose to validate a message like this?

-- 
Brad Knowles, [EMAIL PROTECTED]

"They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety."
    -Benjamin Franklin, Historical Review of Pennsylvania.
Re: Evil PGP sigs thread must die. was Re: Stop it with putting your e-mail body in my MUA OT
At 3:45 PM -0400 2002/07/10, Andy Dills wrote:

> Lest anybody confuse my argument, I think PGP signatures are a good
> thing. I just don't think people need to sign everything they send. And
> I'm talking about posts to Nanog here, not private communication. In
> private communication, it's reasonable to sign most everything sent with
> official business purpose.

No. It is precisely the public e-mail messages which should always be signed, since they are the ones likely to reach the largest audience, and the ones that are likely to have the biggest negative impact if they are successfully spoofed. You should sign all private e-mail, too, but the public e-mail messages are the ones that need it the most.

-- 
Brad Knowles, [EMAIL PROTECTED]

"They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety."
    -Benjamin Franklin, Historical Review of Pennsylvania.
Re: DNS was Re: Internet Vulnerabilities
At 9:07 AM +0200 2002/07/15, Måns Nilsson quoted Simon Waters [EMAIL PROTECTED] as saying:

> I would guess the . zone probably isn't that large in absolute terms, so
> large ISPs (NANOG members?) could arrange for their recursive servers to
> act as private secondaries of ".", thus eliminating the dependence on the
> root servers entirely for a large chunk of the Internet user base.

	1266 A records
	1243 NS records
	   1 SOA record
	   1 TXT record

Currently, B, C, and F are open to zone transfers.

> I think the kinds of zones being handled by the gtld-servers would be
> harder to relocate, if only due to size, although the average NANOG
> reader probably has rather more bandwidth available than I do, they may
> not have the right kind of spare capacity on their DNS servers to
> secondary .com at short notice.

Edu is pretty good size:

	17188 NS records
	 5514 A records
	    1 SOA record
	    1 TXT record

A complete zone transfer comprises some 1016491 bytes.

> All I think root server protection requires is someone with access to the
> relevant zone to make it available through other channels to large ISPs.
> There is no technical reason why key DNS infrastructure providers could
> not implement such a scheme on their own recursive DNS servers now, and
> it would reduce load on both their own and the root DNS servers and
> networks.

I disagree. This is only going to help those ISPs that are clued-in enough to act as a stealth secondary of the zone, and then only for those customers that will be using their nameservers as caching/recursive servers, or have their own caching/recursive servers forward all unknown queries to their ISP's. I'm sorry, but that's a vanishingly small group of people, and it will have little or no measurable impact.

Better would be for the root nameservers to do per-IP-address throttling. If you send them too many queries in a given period of time, they can throw away any excess queries. This prevents people running tools like queryperf on a constant basis from excessively abusing the server. Indeed, some root nameservers are already doing per-IP-address throttling.

> In practical terms I'd be more worried about smaller attacks against
> specific CC domains. I could imagine some people seeing disruption of .il
> as a more potent (and perhaps less globally unpopular) political
> statement than disrupting the whole Internet.

Keep in mind that some ccTLDs are pretty good size themselves. The largest domain I've been able to get a zone transfer of is .tv, comprising some 20919120 bytes of data -- 381812 NSes, 72694 A RRs, 5754 CNAMEs, and 3 MXes. Any zone that is served by a system that is both authoritative and public caching/recursive is wide open for cache-poisoning attacks -- such as any zone served by nic.lth.se [130.235.20.3].

> Similarly an attack on a commercial subdomain in a specific country could
> be used to make a political statement, but might have significant
> economic consequences for some companies. Attacking 3 or 4 servers is far
> easier than attacking 13 geographically diverse, well networked, and well
> protected servers.

Who said that the root nameservers were geographically diverse? I don't think the situation has changed much since the list at http://www.icann.org/committees/dns-root/y2k-statement.htm was created. I don't call this geographically diverse.

> I definitely agree. ccTLDs are in very varying states of security
> awareness, and while I believe .il is aware and prepared, other
> conflict-zone domains might not be...

Except for the performance issues, IMO ccTLDs should be held to the same standards of operation as the root nameservers, and thus subject to RFC 2010 ("Operational Criteria for Root Name Servers", B. Manning and P. Vixie) and RFC 2870 ("Root Name Server Operational Requirements", R. Bush, D. Karrenberg, M. Kosters, and R. Plzak).
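The per-IP throttling Brad describes is essentially a token bucket keyed on source address. A minimal illustrative sketch -- the rates and the pluggable clock are assumptions for demonstration, not any real root server's implementation:

```python
import time

# Toy per-source-IP token bucket: each source may query at a sustained
# rate with a small burst allowance; excess queries are simply dropped.
# Rates are illustrative, not taken from any real root-server deployment.

class QueryThrottle:
    def __init__(self, rate=20.0, burst=40.0, clock=time.monotonic):
        self.rate = rate          # tokens (queries) refilled per second
        self.burst = burst        # bucket capacity
        self.clock = clock        # injectable for testing
        self.buckets = {}         # src ip -> (tokens, last timestamp)

    def allow(self, src_ip):
        """Return True if the query should be answered, False to drop it."""
        now = self.clock()
        tokens, last = self.buckets.get(src_ip, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[src_ip] = (tokens - 1.0, now)
            return True
        self.buckets[src_ip] = (tokens, now)
        return False
```

Because the state is per source address, a queryperf run from one host exhausts only its own bucket; well-behaved resolvers elsewhere are unaffected.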
Those of you who are interested in this topic may want to drop in on my invited talk, "Domain Name Server Comparison: BIND 8 vs. BIND 9 vs. djbdns vs. ???", at LISA 2002. Root & TLD server issues will figure heavily in the comparison. ;-)

-- 
Brad Knowles, [EMAIL PROTECTED]

"They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety."
    -Benjamin Franklin, Historical Review of Pennsylvania.
Re: Evil PGP sigs thread must die. was Re: Stop it with putting your e-mail body in my MUA OT
oh. my. god. just when i thought that the subject line could not get any longer in this thread. it's just one of my pointless pet peeves... deeann m.m. mikula director of operations telerama public access internet http://www.telerama.com * 412.688.3200
Re: AT&T
No problems from our line in PHX at all today; once in a while they've been known to block ICMP to some routers in LAX, but that's just cosmetic.

Regards,
Matt
--
Matt Levine
@Home: [EMAIL PROTECTED]
@Work: [EMAIL PROTECTED]
ICQ  : 17080004
AIM  : exile
GPG  : http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x6C0D04CF
"The trouble with doing anything right the first time is that nobody
appreciates how difficult it was." -BIX

----- Original Message -----
From: Bryan Heitman [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, July 15, 2002 11:24 AM
Subject: AT&T

> Anyone seeing any problems with AT&T in LAX? I have AT&T colo in Mesa, AZ
> with some routing problems...
RE: No one behind the wheel at WorldCom
On Mon, 15 Jul 2002, Frank Scalzo wrote:

> The problem with doing that is you do not get fine grained control. You
> can only say only send me 50k routes, and things not from other peers.
> The really nifty part about prefix-limiting your peers is when they
> deaggregate toward you, you drop all your bgp sessions to them at once,

Your max-prefixes should give a wide enough margin, and larger peers should be responsible enough to let you know of any large percentage increase in prefixes happening in one go.

> completely depeering. You still cannot prevent announcement of weird
> routes like 63/8. Do not be fooled into believing that just because a

63/8 I don't like but I can live with... multiple 63.x.x.0/24s I can't.

> network is big they know what they are doing. Some of the 63/8
> announcements I have seen came by way of sprint. Let's step back and
> think about this on a security front, anyone with access to a tier 1 ISPs
> router can dos anyone in the internet, just by throwing in a null route
> for the block that is more specific than the one they have advertised.
> Granted not easily done, but just the same I like to be the only one who
> can break my network.

I thought someone would mention that... the post before mine suggested there was no method of filtering; I suggested there was a way to greatly improve the restrictions without killing CPU. I still acknowledge that it's possible to break it by hacking BGP routes, but something is better than nothing.

> Unfortunately Majdi is correct, we do not have sufficient functionality
> in today's routing software to fix the problem. Oh well I guess it has
> worked for this long.

I agree also; can't fix, but can offer improvement.

Steve

-----Original Message-----
From: Stephen J. Wilcox [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 15, 2002 8:39 AM
To: Majdi S. Abbas
Cc: Frank Scalzo; [EMAIL PROTECTED]
Subject: Re: No one behind the wheel at WorldCom

[...]
Re: Evil PGP sigs thread must die. was Re: Stop it with putting your e-mail body in my MUA OT
On Mon, 15 Jul 2002, Brad Knowles wrote:

> > So, does EVERY email need to be pgp signed?
>
> Do you need to use ssh every time you access a server remotely?

Every time the device runs ssh and I have to type a password, yes.

> Surely you know when your line is being tapped or when your packets are
> being sniffed, and you choose only those times to use ssh, and otherwise
> you use telnet?

There's some degree of truth to this. For instance, most of my routers do not run ssh. However, I control the network between here and there, so I am comfortable that nobody is capable of sniffing the session, so I am comfortable using telnet and not going through an OOB connection.

> Same goes for actually using passwords to login -- surely you know when
> it's a legitimate user that is trying to login and when it's someone
> trying to gain illicit access to your system, and you require them to use
> passwords accordingly?

Of course not. In the previous two situations, a human is making decisions -- judgement calls. In this situation, you're asking a computer to do so. Bad analogy.

> > When was the last time somebody on this list bothered to check the
> > validity of a pgp signed message which they received via nanog?
>
> When was the last time anyone on this list bothered to check the validity
> of any message they received via any channel? I mean, if you're going to
> use probability to support your argument, you might as well widen the
> discussion to a much broader sample group.

So why is it that people are bothering to sign their posts to nanog if nobody cares whether the people are who they say they are?

> > I mean, if John Sidgmore posted that from now on, Worldcom's official
> > pricing is $100/meg with a 3 meg commit, I wouldn't believe it for a
> > second unless it was signed and I verified it.
>
> Not everything is black and white. At what level would you choose to
> validate a message like this?

Not everything is black and white. Does that mean you agree with me that not everything needs to be signed? Or does that mean you agree with me in that a judgement call must be made?

Andy

---
Andy Dills                              301-682-9972
Xecunet, LLC                            www.xecu.net
Dialup * Webhosting * E-Commerce * High-Speed Access
PGP: learn it, use it, love it
On Mon, Jul 15, 2002 at 02:50:48PM -0400, [EMAIL PROTECTED] said:
[snip]
> Not everything is black and white. Does that mean you agree with me that
> not everything needs to be signed? Or does that mean you agree with me in
> that a judgement call must be made?

*sigh* Sign your mail, or at least stop protesting about those that make the effort to do so. There are a great many good reasons to do so, and no good reasons not to. Broken software and laziness don't count.

-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui
Re: PGP: learn it, use it, love it
Scott Francis wrote:

> There are a great many good reasons to do so, and no good reasons not to.
> Broken software and laziness don't count.

Sure there are. Non-repudiation is not always a good thing. Do you get every physical document you write notarized? If you are sued and email is submitted as evidence by the plaintiff, would you rather the mail be signed or unsigned?

Bradley
Re: PGP: learn it, use it, love it
On Mon, Jul 15, 2002 at 03:43:12PM -0400, [EMAIL PROTECTED] said:
> Scott Francis wrote:
> > There are a great many good reasons to do so, and no good reasons not
> > to. Broken software and laziness don't count.
>
> Sure there are. Non-repudiation is not always a good thing. Do you get
> every physical document you write notarized? If you are sued and email is

No, but I use an envelope and a signature on every piece of snail mail I send that I author myself. (Not that there are that many nowadays.)

> submitted as evidence by the plaintiff would you rather the mail be
> signed or unsigned?

I stand behind what I write. If I am sued, I doubt that anything I wrote in email would be to blame. In such a scenario (which, I might add, is entirely hypothetical), the existence or lack of a PGP signature would hardly be the problem. The actions that prompted the lawsuit would, and that is a whole other kettle of fish altogether.

This is now so far off-topic I can't even _see_ the NANOG charter. Final post by me. I did enjoy reading the various opinions submitted, but I hold little hope that any arguments given, no matter their merit, will prompt any change in the same.

-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui
Train Derailment near Milwaukee (Washington County)
Anybody know if there are any fiber runs affected?

Regards,
Matt
--
Matt Levine
@Home: [EMAIL PROTECTED]
@Work: [EMAIL PROTECTED]
ICQ  : 17080004
AIM  : exile
GPG  : http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x6C0D04CF
"The trouble with doing anything right the first time is that nobody
appreciates how difficult it was." -BIX
Re: verio arrogance
On Mon, Jul 15, 2002 at 05:10:28PM -0400, Ralph Doncaster wrote:

> http://info.us.bb.verio.net/routing.html#PeerFilter
>
> It seems if I were one of their customers they would accept my
> 66.11.168/23 announcement and re-announce it to their peers, but they
> won't accept it from any of their peers.

As a customer you pay them to announce your /23; as a peer you don't. Their line of logic is that if you are a peer of theirs, you don't have to accept that /23 either.

> Announcing a covering /20 along with the regional more specifics I have
> will only serve to increase the size of the routing table for most
> backbones, and lead to sub optimal routing in some cases since I'm
> announcing the more specifics due to geographical diversity.

Announce the /20 to your transit providers, and the more specifics with no-export. Verio's position is that they don't want to or need to hear your /23s unless you are a customer, and for the most part they are right.

-- 
Richard A Steenbergen [EMAIL PROTECTED]   http://www.e-gerbil.net/ras
PGP Key ID: 0x138EA177  (67 29 D7 BC E8 18 3E DA  B2 46 B3 D8 14 36 FE B6)
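Richard's suggestion -- announce the covering aggregate normally and tag the more specifics no-export so they stop at the adjacent AS -- can be sketched in IOS-style config. The /23 comes from the thread; the covering /20, AS numbers, and neighbor address are invented for illustration:

```
! IOS-style sketch. The covering /20 propagates everywhere; the
! regional /23 more-specific is tagged no-export, so the transit
! provider uses it for traffic engineering but does not pass it on.
router bgp 64500
 network 66.11.160.0 mask 255.255.240.0      ! hypothetical covering /20
 network 66.11.168.0 mask 255.255.254.0
 neighbor 192.0.2.1 remote-as 64501          ! a transit provider
 neighbor 192.0.2.1 send-community
 neighbor 192.0.2.1 route-map TAG-SPECIFICS out
!
ip prefix-list SPECIFICS permit 66.11.168.0/23
!
route-map TAG-SPECIFICS permit 10
 match ip address prefix-list SPECIFICS
 set community no-export
route-map TAG-SPECIFICS permit 20
```

This keeps the global table clean while preserving the regional traffic split one AS-hop out, which addresses Ralph's sub-optimal-routing concern for traffic arriving via his own transits.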
Re: Train Derailment near Milwaukee (Washington County)
People are a lot less expensive than fiber!! And more quickly replaced. :) On Mon, 15 Jul 2002, Clarke wrote: The fiber is okay, thank goodness! There may have been a few casualties, but at least the fiber is okay - Original Message - From: Matt Levine [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Monday, July 15, 2002 1:31 PM Subject: Train Derailment near Milwaukee (Washingon County) Anybody know if there's any fiber runs affected? Regards, Matt -- Matt Levine @Home: [EMAIL PROTECTED] @Work: [EMAIL PROTECTED] ICQ : 17080004 AIM : exile GPG : http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0x6C0D04CF The Trouble with doing anything right the first time is that nobody appreciates how difficult it was. -BIX
Re: verio arrogance
This is really old news...actually, I seem to recall that they would only accept /19 or shorter prefixes from former Class A B space...I pressed Sprint for a /21 from the swamp (instead of the former Class A space /21 they initially assigned) because of Verio's policy, in fact. They must have softened the policy within the past year or so to /21 or shorter. On Mon, 15 Jul 2002, Ralph Doncaster wrote: http://info.us.bb.verio.net/routing.html#PeerFilter It seems if I were one of their customers they would accept my 66.11.168/23 announcement and re-announce it to their peers, but they won't accept it from any of their peers. Announcing a covering /20 along with the regional more specifics I have will only serve to increase the size of the routing table for most backbones, and lead to sub optimal routing in some cases since I'm announcing the more specifics due to geographical diversity. Ralph Doncaster principal, IStop.com James Smallacombe PlantageNet, Inc. CEO and Janitor [EMAIL PROTECTED] http://3.am =
Re: Train Derailment near Milwaukee (Washington County)
http://www.jsonline.com/news/ozwash/jul02/59094.asp Associated Press Last Updated: July 15, 2002 Allenton - A 70-car freight train carrying hazardous materials derailed Monday afternoon, causing a fire and sending 16 cars off the track in Washington County. The Canadian National freight derailed about 2:30 p.m. on Wildlife Road near County Trunk K about a mile west of U.S. 41, said Washington County Sgt. Jill Raffay. They are having a hard time getting up to evaluate what happened because of the fire that is going on, Raffay said. There were no reported injuries. Some of the 16 cars were carrying hazardous materials, she said, but the type of materials was not immediately known. The ones on fire were not the ones with hazardous material. At least four fire departments were on the scene and a hazardous materials team from Milwaukee was on the way, Raffay said. Some side roads were closed due to the accident, but no major highways were shut down, she said. There are some houses in the area but no one has been evacuated, Raffay said. Raffay did not know the train's destination or origin. It was probably going northbound because the engine stopped in Allenton, she said. A more complete version of this story will appear online later tonight and in the Milwaukee Journal Sentinel in the morning. --- Matt Levine [EMAIL PROTECTED] wrote: Anybody know if there's any fiber runs affected? Regards, Matt -- Matt Levine @Home: [EMAIL PROTECTED] @Work: [EMAIL PROTECTED] ICQ : 17080004 AIM : exile GPG : http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0x6C0D04CF The Trouble with doing anything right the first time is that nobody appreciates how difficult it was. -BIX
Re: verio arrogance
Unless you are in the swamp - the old Class C, where I believe that they do accept /24's. Regards Marshall Eubanks Richard A Steenbergen wrote: On Mon, Jul 15, 2002 at 05:10:28PM -0400, Ralph Doncaster wrote: http://info.us.bb.verio.net/routing.html#PeerFilter It seems if I were one of their customers they would accept my 66.11.168/23 announcement and re-announce it to their peers, but they won't accept it from any of their peers. As a customer you pay them to announce your /23, as a peer you don't. Their line of logic is that if you are a peer of theirs you don't have to accept that /23 either. Announcing a covering /20 along with the regional more specifics I have will only serve to increase the size of the routing table for most backbones, and lead to sub optimal routing in some cases since I'm announcing the more specifics due to geographical diversity. Announce the /20 to your transit providers, and the more specifics with no-export. Verio's position is that they don't want to or need to hear your /23s unless you are a customer, and for the most part they are right. -- T.M. Eubanks Multicast Technologies, Inc 10301 Democracy Lane, Suite 410 Fairfax, Virginia 22030 Phone : 703-293-9624 Fax : 703-293-9609 e-mail : [EMAIL PROTECTED] http://www.multicasttech.com Test your network for multicast : http://www.multicasttech.com/mt/ Status of Multicast on the Web : http://www.multicasttech.com/status/index.html
Re: QoS/CoS in the real world?
a) QoS mechanisms are for the local-tail. Backbones should have enough bandwidth (and bandwidth is cheap). b) QoS was for customers with services like VoIP and VPN - and in most cases they were needed because the end users refused to buy the bandwidth they actually needed. c) The QoS implementations in the vendor boxes at best leave a lot to wish for and in most cases simply do not work (but to their credit they were really helpful in working with us on this). the ietf ieprep (emergency preparedness) wg is going to force you to put qos in your backbone or not sell to the government(s) etc. it is very hard to push simplicity to those making money by inflating fear. you might be concerned. randy
Re: verio arrogance
On Mon, 15 Jul 2002, Richard A Steenbergen wrote: On Mon, Jul 15, 2002 at 05:10:28PM -0400, Ralph Doncaster wrote: Announcing a covering /20 along with the regional more specifics I have will only serve to increase the size of the routing table for most backbones, and lead to sub optimal routing in some cases since I'm announcing the more specifics due to geographical diversity. Announce the /20 to your transit providers, and the more specifics with no-export. Verio's position is that they don't want to or need to hear your /23s unless you are a customer, and for the most part they are right. But I've broken my /20 into a /21 for Ottawa, a /22 for Toronto, a /23 for Montreal, and a /23 for expansion. I'm currently only getting transit in Toronto, but will have a second transit provider restored in Ottawa (I was using GT for a short while). While announcing the /20 will keep my network reachable for single-homed Verio customers, it won't provide the true best path that simply accepting the regional more specifics would. -Ralph
RE: AS4006
Andre, Using ARIN's Whois service I obtained the following: as4006 OrgName: NetRail, Inc. OrgID: RAIL ASNumber: 4006 - 4006 ASName: NETRAIL ASHandle: AS4006 Comment: RegDate: 1994-11-10 12:00:00 Updated: TechHandle: NRAIL-NOC-ARIN TechName: NetRail, Inc., TechPhone: +1-404-522-5400 TechEmail: [EMAIL PROTECTED] Looking at the age of this AS number, it may not be totally current, but hope this helps. Don Wilder Senior Systems Administrator American Registry for Internet Numbers -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On Behalf Of Andre Chapuis Sent: Tuesday, July 16, 2002 12:34 AM To: [EMAIL PROTECTED] Subject: AS4006 Hi, Anybody has a contact for AS4006 ? (peering issue) Thanks André - Andre Chapuis IP+ Engineering Swisscom Ltd Genfergasse 14 3050 Bern +41 31 893 89 61 [EMAIL PROTECTED] CCIE #6023 --
Re: AS4006
Hi, actually Netrail is Cogent now. I think it's like 18774cogent or something like that; you can find their number on their front page at http://www.cogentco.com. Their NOC has always been very responsive and helpful to me, actually really above average, so they should be able to help. Scott - Original Message - From: Don Wilder [EMAIL PROTECTED] To: Andre Chapuis [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Monday, July 15, 2002 5:37 PM Subject: RE: AS4006 Andre, Using ARIN's Whois service I obtained the following: as4006 OrgName: NetRail, Inc. OrgID: RAIL ASNumber: 4006 - 4006 ASName: NETRAIL ASHandle: AS4006 Comment: RegDate: 1994-11-10 12:00:00 Updated: TechHandle: NRAIL-NOC-ARIN TechName: NetRail, Inc., TechPhone: +1-404-522-5400 TechEmail: [EMAIL PROTECTED] Looking at the age of this AS number, it may not be totally current, but hope this helps. Don Wilder Senior Systems Administrator American Registry for Internet Numbers -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On Behalf Of Andre Chapuis Sent: Tuesday, July 16, 2002 12:34 AM To: [EMAIL PROTECTED] Subject: AS4006 Hi, Anybody has a contact for AS4006 ? (peering issue) Thanks André - Andre Chapuis IP+ Engineering Swisscom Ltd Genfergasse 14 3050 Bern +41 31 893 89 61 [EMAIL PROTECTED] CCIE #6023 --
Re: AS4006
On Tue, 16 Jul 2002 09:37:35 +0900 Don Wilder [EMAIL PROTECTED] wrote: Andre, Netrail was bought by Cogent after it went belly up. Try their NOC. Regards Marshall Eubanks Using ARIN's Whois service I obtained the following: as4006 OrgName: NetRail, Inc. OrgID: RAIL ASNumber: 4006 - 4006 ASName: NETRAIL ASHandle: AS4006 Comment: RegDate: 1994-11-10 12:00:00 Updated: TechHandle: NRAIL-NOC-ARIN TechName: NetRail, Inc., TechPhone: +1-404-522-5400 TechEmail: [EMAIL PROTECTED] Looking at the age of this AS number, it may not be totally current, but hope this helps. Don Wilder Senior Systems Administrator American Registry for Internet Numbers -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On Behalf Of Andre Chapuis Sent: Tuesday, July 16, 2002 12:34 AM To: [EMAIL PROTECTED] Subject: AS4006 Hi, Anybody has a contact for AS4006 ? (peering issue) Thanks André - Andre Chapuis IP+ Engineering Swisscom Ltd Genfergasse 14 3050 Bern +41 31 893 89 61 [EMAIL PROTECTED] CCIE #6023 --
fractional gigabit ethernet links?
Hello, I'm trying to troubleshoot a problem with a fractional (311 mbit/second) gigabit-ethernet line provided to me by a metro access provider. Specifically, it is riding a gig-e port of a 15454. The behavior we are seeing is an occasional loss of packets, adding up to a few percent. When doing a cisco-type ping across the link, we were seeing a consistent 3 to 4 percent loss. For fun, the provider brought it up to 622 mbit/second, and loss dropped considerably, but still hangs at about 1 to 2 percent. There is no question in my mind the issue is with the line, as we've done a wide variety of tests to rule out the local equipment (MSFC2s, FYI). Any clues would be exceptional. -- Alex Rubenstein, AR97, K2AHR, [EMAIL PROTECTED], latency, Al Reuben -- --Net Access Corporation, 800-NET-ME-36, http://www.nac.net --
RE: fractional gigabit ethernet links?
On Mon, 15 Jul 2002, Phil Rosenthal wrote: Hello Alex, I'd say this sounds obvious, but may be deceptively so... If you are taking a pipe capable of 1000 mbit, and rate-limiting it to 311 mbit, the logic used may be: In the last 1000 msec have there been more than 311mbits? If yes: drop. Except, we're at the levels of 100 kbit/second in our tests. I did just find CSCdr94172, which might be related. -- Alex Rubenstein, AR97, K2AHR, [EMAIL PROTECTED], latency, Al Reuben -- --Net Access Corporation, 800-NET-ME-36, http://www.nac.net --
RE: fractional gigabit ethernet links?
This may sound a bit ridiculous, but say the timer is every 0.25ms. 100 kbit per 0.25 ms = 400,000 kbit/s, or 400 Mbit/s. It is remotely possible to hit a 300 mbit limit with only 100kbits of traffic, if the timer is sufficiently short, and your traffic is sufficiently bursty. Unless your traffic is Mcast, I doubt that issue is related. Can you ask your provider how exactly they are limiting the pipe? When dealing with 300 or so megs, I doubt they will be shaping with a policy friendly to you, as the logistics of doing so are a bit difficult. --Phil -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Alex Rubenstein Sent: Monday, July 15, 2002 11:06 PM To: Phil Rosenthal Cc: [EMAIL PROTECTED] Subject: RE: fractional gigabit ethernet links? On Mon, 15 Jul 2002, Phil Rosenthal wrote: Hello Alex, I'd say this sounds obvious, but may be deceptively so... If you are taking a pipe capable of 1000 mbit, and rate-limiting it to 311 mbit, the logic used may be: In the last 1000 msec have there been more than 311mbits? If yes: drop. Except, we're at the levels of 100 kbit/second in our tests. I did just find CSCdr94172, which might be related. -- Alex Rubenstein, AR97, K2AHR, [EMAIL PROTECTED], latency, Al Reuben -- --Net Access Corporation, 800-NET-ME-36, http://www.nac.net --
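[Editorial note: Phil's timer arithmetic can be illustrated with a toy policer. This is a hedged sketch, not any vendor's actual algorithm: it simply counts bits per fixed measurement window, showing how a burst whose one-second average is only 100 kbit/s can still blow through a 311 Mbit/s budget inside a sufficiently short window.]

```python
# Toy interval-based policer (illustrative only, not a vendor algorithm).
# Counts bits per fixed window; drops packets once the window budget is spent.

def police(packets, rate_bps, interval_s):
    """packets: list of (timestamp_s, size_bits). Returns the packets passed."""
    passed = []
    window_start = 0.0
    window_bits = 0
    budget = rate_bps * interval_s  # bits allowed per measurement window
    for ts, bits in packets:
        # advance to the window containing this packet, resetting the counter
        while ts >= window_start + interval_s:
            window_start += interval_s
            window_bits = 0
        if window_bits + bits <= budget:
            window_bits += bits
            passed.append((ts, bits))
    return passed

# A 100 kbit burst arriving within 0.1 ms: averaged over a second it is
# only 100 kbit/s, but inside a 0.25 ms window it looks like ~1 Gbit/s.
burst = [(i * 0.00001, 10_000) for i in range(10)]  # 10 packets x 10 kbit
kept_short = police(burst, 311_000_000, 0.00025)    # 0.25 ms windows: drops
kept_long = police(burst, 311_000_000, 1.0)         # 1 s window: all pass
```

With the 0.25 ms window the budget is only ~78 kbit, so the tail of the burst is dropped even though the link is nearly idle on average.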
Colocation Enclosures
Greetings, I'm trying to find alternative sources for a 2 or 3 section locked colocation cabinet cosmetically similar to the following: http://www.budind.com/images/big/DC-8125bg.jpg It appears that Encoreusa is no longer in business so I would appreciate any pointers as to where I may locate such an enclosure. Thank you! Chris
RE: Colocation Enclosures
I may be wrong, but I believe Chatsworth makes some 2 section cabs. I remember verio having some 2-sections and I was pretty sure they only had Chatsworth in NYC... --Phil -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Christopher J. Wolff Sent: Monday, July 15, 2002 11:49 PM To: [EMAIL PROTECTED] Subject: Colocation Enclosures Greetings, I'm trying to find alternative sources for a 2 or 3 section locked colocation cabinet cosmetically similar to the following: http://www.budind.com/images/big/DC-8125bg.jpg It appears that Encoreusa is no longer in business so I would appreciate any pointers as to where I may locate such an enclosure. Thank you! Chris
Re: fractional gigabit ethernet links?
On Mon, 15 Jul 2002 22:48:12 -0400 (Eastern Daylight Time) Alex Rubenstein [EMAIL PROTECTED] wrote: Hello, I'm trying to troubleshoot a problem with a fractional (311 mbit/second) gigabit-ethernet line provided to me by a metro access provider. Specifically, it is riding a gig-e port of a 15454. The behavior we are seeing is an occasional loss of packets, adding up to a few percent. When doing a cisco-type ping across the link, we were seeing a consistent 3 to 4 percent loss. Over what averaging time ? Could this be an example of the 65 second problem, where the router stops dead for one or two seconds out of every 65 seconds ? Regards Marshall Eubanks For fun, the provider brought it up to 622 mbit/second, and loss dropped considerably, but still hangs at about 1 to 2 percent. There is no question in my mind the issue is with the line, as we've done a wide variety of tests to rule out the local equipment (MSFC2s, FYI). Any clues would be exceptional. -- Alex Rubenstein, AR97, K2AHR, [EMAIL PROTECTED], latency, Al Reuben -- --Net Access Corporation, 800-NET-ME-36, http://www.nac.net --
Re: Colocation Enclosures
Try SharkRack. They'll make custom racks if need be. Very nice sales people. http://www.sharkrack.com/ -Mike On Mon, 15 Jul 2002, Christopher J. Wolff wrote: Greetings, I'm trying to find alternative sources for a 2 or 3 section locked colocation cabinet cosmetically similar to the following: http://www.budind.com/images/big/DC-8125bg.jpg It appears that Encoreusa is no longer in business so I would appreciate any pointers as to where I may locate such an enclosure. Thank you! Chris
Re: fractional gigabit ethernet links?
Might want to query your provider as to where the rate limiting is being done. In some cases, if rate limiting is being done egress from the layer 3 infrastructure towards the MAN layer 2 equipment, there might be a lack of processing power on that device, causing the drops. Of course this will depend on the type of device and whether the rate limiting is being done in hardware or not too. Sush Hello, I'm trying to troubleshoot a problem with a fractional (311 mbit/second) gigabit-ethernet line provided to me by a metro access provider. Specifically, it is riding a gig-e port of a 15454. The behavior we are seeing is an occasional loss of packets, adding up to a few percent. When doing a cisco-type ping across the link, we were seeing a consistent 3 to 4 percent loss. For fun, the provider brought it up to 622 mbit/second, and loss dropped considerably, but still hangs at about 1 to 2 percent. There is no question in my mind the issue is with the line, as we've done a wide variety of tests to rule out the local equipment (MSFC2s, FYI). Any clues would be exceptional. -- Alex Rubenstein, AR97, K2AHR, [EMAIL PROTECTED], latency, Al Reuben -- --Net Access Corporation, 800-NET-ME-36, http://www.nac.net --
Ok, I give up - Pipeline 75
Is there a Pipeline 75 guru out there tonight that might be willing to help me offline? This is my first experience with the Pipeline and I am feeling really stupid about now! Please excuse the minor operation content, but I am stumped and really need a hand. I am replacing my fading (but still functional) Flowpoint 128 ISDN router with a used Pipeline 75 that I got off eBay. It appears to be fine. I have upgraded the firmware and have spent all day trying to get this !**~!! thing to dial a call to the ISDN WAN. Both POTS ports work, inbound and outbound. Show if stat indicates that all the WAN ports are down except 'wanidle0', which is up. The system options status window says dynamic bandwidth allocation is not installed and I can't figure out how to turn it on or install it. Is this my problem? In diagnostics, bridisplay occasionally reports --- parameter 1,128.1 could not be loaded --- parameter 1,128.2 could not be loaded but I am unable to locate any cross reference between these parameter numbers and the associated configuration item. Your kind assistance is greatly appreciated. Dennis ...
Re: fractional gigabit ethernet links?
Since this is being done with the 15454s this is not true rate limiting; rather it's a matter of STS channels being made available for use on the Vlan assigned to the two GigE ports. We have experienced this problem when sending traffic in excess of 250mbps through either: a) 4 or more 15454 hops that tried to achieve timing off of each other rather than a stratum one source: timing===15454===15454===15454===15454===15454 OR b) a 15454 that uplinked its OC48 backhaul through an active DWDM device (in our case a Sycamore SN8000) when both the SN8000 and 15454 were again not on the same stratum one source: 15454===sn8000===sn8000===15454 By not allowing a timing source to be more than 4 hops from any chassis and by ensuring that all DWDM devices in between are also timed the same, we have pushed up to 605mbps over a 15454 GigE link with no recorded packet loss. Hope this helps Alex, Vin PS As another aside we have seen problems when simultaneously using both ports on the 2 x GigE card for more than 200mbps a piece, but not with any regularity. -vb - Original Message - From: Sush Bhattarai [EMAIL PROTECTED] To: Alex Rubenstein [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Monday, July 15, 2002 11:57 PM Subject: Re: fractional gigabit ethernet links? Might want to query your provider as to where the rate limitting is being done. In some cases, if rate limit is being done egress from the layer 3 infracture towards the MAN layer 2 equipment, there might be a lack of processing power on that device, causing the drops. Of course this will depend on the type of device and whether the rate limiting is being done on hardware or not too. Sush Hello, I'm trying to troubleshoot a problem with a fractional (311 mbit/second) gigabit-ethernet line provided to me by a metro access provider. Specifically, it is riding a gig-e port of a 15454. The behavior we are seeing is an occasional loss of packets, adding up to a few percent.
When doing a cisco-type ping across the link, we were seeing a consistent 3 to 4 percent loss. For fun, the provider brought it up to 622 mbit/second, and loss dropped considerably, but still hangs at about 1 to 2 percent. There is no question in my mind the issue is with the line, as we've done a wide variety of tests to rule out the local equipment (MSFC2s, FYI). Any clues would be exceptional. -- Alex Rubenstein, AR97, K2AHR, [EMAIL PROTECTED], latency, Al Reuben -- --Net Access Corporation, 800-NET-ME-36, http://www.nac.net --
Re: Colocation Enclosures
Southwest Data Products - distributed through Graybar AMCO DATACOM www.amcoengineering.com On 7/15/02 8:48 PM, Christopher J. Wolff [EMAIL PROTECTED] wrote: Greetings, I'm trying to find alternative sources for a 2 or 3 section locked colocation cabinet cosmetically similar to the following: http://www.budind.com/images/big/DC-8125bg.jpg It appears that Encoreusa is no longer in business so I would appreciate any pointers as to where I may locate such an enclosure. Thank you! Chris
Re: No one behind the wheel at WorldCom
[EMAIL PROTECTED] (Majdi S. Abbas) writes: Actually, I think you'll find that bad data is only a small part of the problem; even with good data, there isn't enough support from various router vendors to make it worthwhile; it's effectively impossible to prefix filter a large peer due to router software restrictions. We need support for very large (256k+ to be safe) prefix filters, and the routing process performance to actually handle a prefix list this large, and not just one list, but many. IRR support for automagically building these prefix lists would be a real plus too. Building and then pushing out filters on another machine can be quite time consuming, especially for a large network. From a point of view of routing software the major challenge of handling a 256k prefix list is not actually applying it to the received prefixes. The most popular BGP implementations all, to my knowledge, have prefix filtering algorithms that are O(log2(N)) and which probably scale ok... while it would not be very hard to make this an O(1) algorithm, that is probably not the issue. Implementations do always have to do an O(log2(N)) lookup on the routing table with a received prefix, and afaik that is not a performance problem for anyone. What all implementations that I'm familiar with do have a problem with is to actually accept the configuration of 256k lines of text to use as a filter. Configuration parsing is typically not designed for such numbers... it tends to work with major vendors albeit a bit slowly. If the above disagrees with your experience please let me know. Assuming that the bottleneck is in fact being able to parse configuration, the question is what to do about it... I'm sure all vendors would be able to, given enough incentive, optimize their text parsing code to do this faster... but it begs the question, would you actually fix anything by doing that ?
My inclination would be to think that you would just tend to move the bottleneck to the backend systems managing the configuration of such lists, if it isn't there already. Of course I'm completely ignorant of the backends that most of you use to manage your systems and the above is just uneducated guessing, although I would appreciate further education. I would be inclined to agree with your statement that the major blame should lie on router vendors if you see your router vendor as someone that sells you the network elements + the NMS to manage it. But in my guesstimate the focal point of our search for a culprit should be the NMS or the NMS-to-router management mechanism. Ideally the latter should be more computer-friendly than text parsing. Just an attempt to equally and democratically distribute blame around :-) regards, Pedro.
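[Editorial note: Pedro's distinction between lookup cost and parsing cost can be sketched in a few lines. This is a hedged illustration, not any router's implementation — real BGP speakers typically use radix tries — but a sorted list plus binary search exhibits the O(log2 N) per-prefix check he describes, while the one-time parse of the list models the slow config-ingestion step.]

```python
# Illustrative only: exact-match prefix filtering via binary search.
# The per-lookup cost is the O(log2 N) discussed in the thread; the
# expensive step in practice is parsing the huge list, modeled here
# by build_filter().
import bisect
import ipaddress

def build_filter(prefixes):
    """Parse once (the slow, config-time step) into sorted lookup keys."""
    return sorted(
        (int(ipaddress.ip_network(p).network_address),
         ipaddress.ip_network(p).prefixlen)
        for p in prefixes
    )

def permitted(keys, prefix):
    """O(log2 N) exact-match check of a received prefix."""
    net = ipaddress.ip_network(prefix)
    key = (int(net.network_address), net.prefixlen)
    i = bisect.bisect_left(keys, key)
    return i < len(keys) and keys[i] == key

flt = build_filter(["66.11.168.0/23", "192.0.2.0/24"])
ok = permitted(flt, "66.11.168.0/23")    # exact entry: accepted
bad = permitted(flt, "66.11.168.0/24")   # more-specific not listed: rejected
```

Even with 256k entries the lookup touches only ~18 keys; the pain point Pedro identifies is getting those 256k lines into the box at all.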
RE: No one behind the wheel at WorldCom
I've found that a regex that is longer than about 200 characters with the format of ^1_(2|3|4|5)$ (say 20 different as numbers in the parenthesis) can easily crash a Bigiron running the latest code. If you were to set up a filter that only accepted updates with ^customer_(d1|d2|d3)$ d1=downstream of customer 1, it will choke with a fairly large peer... Don't know how the other vendors handle it. I reported this to foundry a few weeks ago, no fix as of yet (and I doubt there will be). --Phil -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Pedro R Marques Sent: Tuesday, July 16, 2002 2:44 AM To: [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Subject: Re: No one behind the wheel at WorldCom [EMAIL PROTECTED] (Majdi S. Abbas) writes: Actually, I think you'll find that bad data is only a small part of the problem; even with good data, there isn't enough support from various router vendors to make it worthwhile; it's effectively impossible to prefix filter a large peer due to router software restrictions. We need support for very large (256k+ to be safe) prefix filters, and the routing process performance to actually handle a prefix list this large, and not just one list, but many. IRR support for automagically building these prefix lists would be a real plus too. Building and then pushing out filters on another machine can be quite time consuming, especially for a large network. From a point of view of routing software the major challenge of handling a 256k prefix list is not actually applying it to the received prefixes. The most popular BGP implementations all, to my knowledge, have prefix filtering algorithms that are O(log2(N)) and which probably scale ok... while it would be not very hard to make this a O(4) algorithm that is probably not the issue. Implementations do always have to do a O(log2(N)) lookup on the routing table with a received prefix, and to afaik that is not a performance problem for anyone. 
What all implementations that i'm familiar with do have a problem with is to actually accept the configuration of 256k lines of text to use as a filter. Configuration parsing is typically not designed for such numbers... it tends to work with major vendors albeith a bit slowly. If the above disagrees with your experience please let me know. Assuming that the bottleneck is in fact being able to parse configuration, it begs the question what to do about it... I'm sure all vendors will be able to, given enought incentive, optimize their text parsing code in order to do this faster... but it begs the question, would you actually fix anything by doing that ? My inclination would be to think that you would just tend to move the bottleneck to the backend systems managing the configuration of such lists, if it isn't there already, presently. Of course i'm completly ignorant of the backends that most of you use to manage your systems and the above is just uneducated guessing, although i would apreciate further education. I would be inclined to agree with your statement that the major blame should lie on router vendors if you see your router vendor as someone that sells you the network elements + the NMS to manage it. But in my guestimate the focal point of our search for a culprit should be the NMS or the NMS - router management mechanism. Idealy the latter should be more computer friendly than text parsing. Just an attempt to equally and democratically distribute blame around :-) regards, Pedro.
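[Editorial note: Phil's AS-path filter form can be reproduced outside the router for sanity-checking. This is a hedged sketch — the AS numbers are placeholders, not from the thread, and the Cisco-style `_` separator is rendered as a space over a textual AS path. It also shows how the alternation grows linearly with the number of downstream ASes, which is what large customer cones push against on boxes with small regex limits.]

```python
# Illustrative only: the "^customer_(d1|d2|d3)$" AS-path filter form,
# evaluated with Python's re module over space-separated AS paths.
# AS numbers are placeholders.
import re

customer = 64500
downstreams = [64501, 64502, 64503]

# Equivalent of "^64500_(64501|64502|64503)$": accept only updates heard
# via the customer AS and originated by one of its direct downstreams.
pattern = re.compile(r"^%d (%s)$" % (customer, "|".join(map(str, downstreams))))

accept = bool(pattern.match("64500 64502"))  # downstream of customer
reject = bool(pattern.match("64500 64999"))  # unknown origin AS
```

With 20 downstream ASes the alternation is already a couple of hundred characters, which matches the size at which Phil reports the BigIron falling over.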